Restart Nextcloud Docker containers

After using the Nextbox only via SSH for remote backups for a while, today I actually went to the browser and tried to access the Nextcloud web interface, but only got “couldn’t connect”. So I SSHed into the box and found that all the Nextcloud-related Docker containers were not running; in fact, they did not exist at all:

nextuser@nextbox:~ $ sudo docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
vladgh/minidlna     latest              707c28c79d0e        8 weeks ago         70.9MB
nextcloud           21.0.0-apache       3d06f2d17193        8 months ago        780MB
mariadb             10.5.9              939d05495a90        9 months ago        387MB
redis               5.0.11-alpine       f84d97cdcb59        9 months ago        29.3MB
nextuser@nextbox:~ $ sudo docker container ls -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                  PORTS               NAMES
bb903919a72e        vladgh/minidlna     "/sbin/tini -- /entr…"   2 weeks ago         Up 18 hours (healthy)                       vladgh_minidlna

I ran sudo systemctl restart docker and Docker restarted, but this did not bring back the Nextcloud containers.

Since the volumes are called nextbox-compose_… I searched for a directory with that name and found it at /usr/lib/nextbox-compose, and, no surprise, there is the docker-compose.yml. So I tried to bring the stack up:

nextuser@nextbox:/usr/lib/nextbox-compose $ sudo docker-compose up -d && sudo docker-compose logs -f
Creating network "nextbox-compose_default" with the default driver
Creating volume "nextbox-compose_db" with local driver
Creating nextbox-compose_db_1    ... done
Creating nextbox-compose_redis_1 ... done
Creating nextbox-compose_app_1   ... done
Creating nextbox-compose_cron_1  ... done
Attaching to nextbox-compose_app_1, nextbox-compose_cron_1, nextbox-compose_db_1, nextbox-compose_redis_1
app_1    | start delayed by 60secs...
cron_1   | crond: crond (busybox 1.30.1) started, log level 0
cron_1   | crond: user:www-data entry:(null)
cron_1   | 100001000010000100001000010000100001000010000100001000010000
cron_1   | 111111111111111111111111
cron_1   | 11111111111111111111111111111111
cron_1   | 111111111111
cron_1   | 1111111
db_1     | 2021-11-30 14:18:04+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 1:10.5.9+maria~focal started.
db_1     | 2021-11-30 14:18:05+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
db_1     | 2021-11-30 14:18:05+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 1:10.5.9+maria~focal started.
db_1     | 2021-11-30 14:18:06 0 [Note] mysqld (mysqld 10.5.9-MariaDB-1:10.5.9+maria~focal) starting as process 1 ...
db_1     | 2021-11-30 14:18:06 0 [Warning] You need to use --log-bin to make --binlog-format work.
db_1     | 2021-11-30 14:18:06 0 [Note] InnoDB: Uses event mutexes
db_1     | 2021-11-30 14:18:06 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
db_1     | 2021-11-30 14:18:06 0 [Note] InnoDB: Number of pools: 1
db_1     | 2021-11-30 14:18:06 0 [Note] InnoDB: Using ARMv8 crc32 instructions
db_1     | 2021-11-30 14:18:06 0 [Note] mysqld: O_TMPFILE is not supported on /tmp (disabling future attempts)
db_1     | 2021-11-30 14:18:06 0 [Note] InnoDB: Using Linux native AIO
db_1     | 2021-11-30 14:18:06 0 [Note] InnoDB: Initializing buffer pool, total size = 134217728, chunk size = 134217728
db_1     | 2021-11-30 14:18:06 0 [Note] InnoDB: Completed initialization of buffer pool
db_1     | 2021-11-30 14:18:06 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
db_1     | 2021-11-30 14:18:06 0 [Note] InnoDB: 128 rollback segments are active.
db_1     | 2021-11-30 14:18:06 0 [Note] InnoDB: Creating shared tablespace for temporary tables
db_1     | 2021-11-30 14:18:06 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
db_1     | 2021-11-30 14:18:06 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
db_1     | 2021-11-30 14:18:06 0 [Note] InnoDB: 10.5.9 started; log sequence number 1870513443; transaction id 9227558
db_1     | 2021-11-30 14:18:06 0 [Note] Plugin 'FEEDBACK' is disabled.
db_1     | 2021-11-30 14:18:06 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
db_1     | 2021-11-30 14:18:06 0 [Note] Server socket created on IP: '::'.
db_1     | 2021-11-30 14:18:06 0 [Warning] 'proxies_priv' entry '@% root@1d093b02d446' ignored in --skip-name-resolve mode.
db_1     | 2021-11-30 14:18:06 0 [Note] Reading of all Master_info entries succeeded
db_1     | 2021-11-30 14:18:06 0 [Note] Added new Master_info '' to hash table
db_1     | 2021-11-30 14:18:06 0 [Note] mysqld: ready for connections.
db_1     | Version: '10.5.9-MariaDB-1:10.5.9+maria~focal'  socket: '/run/mysqld/mysqld.sock'  port: 3306  mariadb.org binary distribution
db_1     | 2021-11-30 14:18:06 0 [Note] InnoDB: Buffer pool(s) load completed at 211130 14:18:06
redis_1  | 1:C 30 Nov 2021 14:18:04.419 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1  | 1:C 30 Nov 2021 14:18:04.420 # Redis version=5.0.11, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1  | 1:C 30 Nov 2021 14:18:04.420 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1  | 1:M 30 Nov 2021 14:18:04.431 * Running mode=standalone, port=6379.
redis_1  | 1:M 30 Nov 2021 14:18:04.431 # Server initialized
redis_1  | 1:M 30 Nov 2021 14:18:04.431 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1  | 1:M 30 Nov 2021 14:18:04.437 * Ready to accept connections
redis_1  | 1:signal-handler (1638281885) Received SIGTERM scheduling shutdown...
redis_1  | 1:M 30 Nov 2021 14:18:05.241 # User requested shutdown...
redis_1  | 1:M 30 Nov 2021 14:18:05.241 * Saving the final RDB snapshot before exiting.
redis_1  | 1:M 30 Nov 2021 14:18:05.244 * DB saved on disk
redis_1  | 1:M 30 Nov 2021 14:18:05.244 # Redis is now ready to exit, bye bye...
nextbox-compose_redis_1 exited with code 0
nextbox-compose_db_1 exited with code 137
nextbox-compose_cron_1 exited with code 137
nextbox-compose_app_1 exited with code 137
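Side note: exit code 137 means the process was killed with SIGKILL rather than crashing on its own (exit codes above 128 encode 128 + signal number), so something forcibly terminated the containers. A quick check of the arithmetic:

```shell
# Exit codes above 128 mean the process died from a signal:
# code = 128 + signal number, so 137 corresponds to signal 9 (SIGKILL).
code=137
sig=$((code - 128))
echo "exited with code $code: killed by signal $sig"
```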

That did not work. What happened here, and what went wrong? Is this really the correct way to start Nextcloud on the Nextbox? Did I break something? How can I fix this?

As you might have seen, I run another Docker container on the Nextbox to make it serve media via MiniDLNA. While I am pretty sure this does not influence the Nextcloud Docker setup, here is its configuration:

nextuser@nextbox:~ $ cat ~/opt/minidlna/docker-compose.yml 
version: "3"

services:
  minidlna:
    image: vladgh/minidlna
    container_name: vladgh_minidlna
    restart: unless-stopped
    network_mode: "host"
    volumes:
      - nextbox-compose_nextcloud:/nextcloud
    environment:
      - MINIDLNA_MEDIA_DIR=/nextcloud/data/frank/files
      - MINIDLNA_FRIENDLY_NAME=NextboxDLNA
      - PUID=33
      - PGID=33

volumes:
  nextbox-compose_nextcloud:
    external: true

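A note on the PUID/PGID values, in case anyone wonders: file ownership on the shared volume is stored as numeric uid/gid, not by name, and 33 is the uid of the www-data user in the Debian-based nextcloud image, so MiniDLNA runs under the same numbers to be able to read the files. A quick illustration of numeric ownership (no Docker needed; stat -c is GNU coreutils):

```shell
# Ownership is recorded as numbers, not names; two containers sharing a
# volume only agree on file access if their uids/gids match numerically.
f=$(mktemp)
stat -c 'owner uid=%u gid=%g' "$f"
rm -f "$f"
```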
There is a systemd service that is meant to restart the Nextbox-related containers: nextbox-compose. Please try systemctl restart nextbox-compose
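A minimal sketch of that, with a status check afterwards (the unit name comes from the reply above; the status/journal commands are standard systemd and only for inspection):

```shell
# Restart the Nextcloud stack via its systemd unit instead of running
# docker-compose by hand:
sudo systemctl restart nextbox-compose
# Verify it came up and look at its recent log output:
systemctl status nextbox-compose --no-pager
journalctl -u nextbox-compose -n 50 --no-pager
```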

Thanks for the quick reply, this helped.

I checked what the service actually does and found that it first tries to stop the containers using /usr/bin/nextbox-stop-compose.sh, so I ran that script manually. And there it was: the script died with the following error:

ERROR: remove nextbox-compose_nextcloud: volume is in use - [bb903919a72e4b5210262da3d7e4e93c2db934be7b377beea12094208d8ddf61]

LOL, so my assumption from above is proven wrong: my MiniDLNA container really does influence Nextcloud, because it keeps the volume in use.
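The container ID in the error is the full 64-character ID, while docker container ls only shows the first 12 characters, but they match my MiniDLNA container (a quick check, IDs copied from the output above; the bash substring syntax does the truncation):

```shell
# The volume-in-use error names the blocking container by its full ID.
full_id=bb903919a72e4b5210262da3d7e4e93c2db934be7b377beea12094208d8ddf61
short_id=${full_id:0:12}
echo "$short_id"    # bb903919a72e, i.e. vladgh_minidlna
# To find holders of a volume directly (standard docker ps filter):
#   sudo docker ps -a --filter volume=nextbox-compose_nextcloud
```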

I guess it is expected behavior for the stop script to fail in this case. Yes, I did read the warnings not to fiddle around down there, so no complaints, and a big thank you for your help. :slight_smile:
