change -e to long syntax --env

Signed-off-by: Simon L <szaimen@e.mail.de>
Simon L 2023-04-16 21:32:50 +02:00
parent 30883d0d61
commit 4b1ed4e227
4 changed files with 55 additions and 55 deletions


@@ -8,7 +8,7 @@ You can run AIO with docker rootless by following the steps below.
1. Do not forget to set the mentioned environment variables and, ideally, add them to your `~/.bashrc` file as shown!
1. Also do not forget to run `loginctl enable-linger USERNAME` (and substitute USERNAME with the correct one) in order to make sure that user services are automatically started after every reboot.
1. Expose the privileged ports by following https://docs.docker.com/engine/security/rootless/#exposing-privileged-ports. (`sudo setcap cap_net_bind_service=ep $(which rootlesskit); systemctl --user restart docker`)
1. Use the official AIO startup command but use `--volume $XDG_RUNTIME_DIR/docker.sock:/var/run/docker.sock:ro` instead of `--volume /var/run/docker.sock:/var/run/docker.sock:ro` and also add `-e DOCKER_SOCKET_PATH=$XDG_RUNTIME_DIR/docker.sock` to the initial container startup (which is needed for mastercontainer updates to work correctly).
1. Use the official AIO startup command but use `--volume $XDG_RUNTIME_DIR/docker.sock:/var/run/docker.sock:ro` instead of `--volume /var/run/docker.sock:/var/run/docker.sock:ro` and also add `--env DOCKER_SOCKET_PATH=$XDG_RUNTIME_DIR/docker.sock` to the initial container startup (which is needed for mastercontainer updates to work correctly).
1. Now everything should work just like it does without docker rootless. You can consider using docker-compose for this or running it behind a reverse proxy. Basically, the only things that always need to be adjusted in the startup command or docker-compose file (after installing docker rootless) are the ones mentioned in point 3.
**Please note:** All files outside the containers get created, written to and accessed as the user that is running the docker daemon or a subuid of it. So for the built-in backup to work, you need to allow this user to write to the target directory, e.g. with `sudo chown -R USERNAME:GROUPNAME /mnt/backup`. The same applies when changing Nextcloud's datadir, e.g. `sudo chown -R USERNAME:GROUPNAME /mnt/ncdata`. When you want to use the NEXTCLOUD_MOUNT option for local external storage, you need to adjust the permissions of the chosen folders to be accessible/writable by the userid `100032:100032` (if running `grep ^$(whoami): /etc/subuid` as the user that is running the docker daemon returns 100000 as the first value).
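For orientation, the adjusted startup command under docker rootless could then look roughly like the following sketch. The published ports and volume names are assumed to match the official startup command, and whether `sudo` is needed depends on your rootless setup, so always start from the official command and only apply the two changes described above.
```
# Rough sketch only: official command with the two rootless adjustments applied
docker run \
--name nextcloud-aio-mastercontainer \
--restart always \
--publish 80:80 \
--publish 8080:8080 \
--publish 8443:8443 \
--env DOCKER_SOCKET_PATH=$XDG_RUNTIME_DIR/docker.sock \
--volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
--volume $XDG_RUNTIME_DIR/docker.sock:/var/run/docker.sock:ro \
nextcloud/all-in-one:latest
```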


@@ -45,7 +45,7 @@ The following instructions are meant for installations without a web server or r
- `--volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config` This means that the files that are created by the mastercontainer will be stored in a docker volume that is called `nextcloud_aio_mastercontainer`. This line must not be changed, since built-in backups would otherwise fail later on.
- `--volume /var/run/docker.sock:/var/run/docker.sock:ro` The docker socket is mounted into the container; it is used for spinning up all the other containers and for further features. It needs to be adjusted on Windows/macOS and on docker rootless. See the applicable documentation on this. If adjusting, don't forget to also set `DOCKER_SOCKET_PATH`! If you dislike this, see https://github.com/nextcloud/all-in-one/discussions/500#discussioncomment-2740767 and the whole thread for options.
- `nextcloud/all-in-one:latest` This is the docker container image that is used.
- Further options can be set using environment variables, for example `-e NEXTCLOUD_DATADIR="/mnt/ncdata"` (This is an example for Linux. See [this](https://github.com/nextcloud/all-in-one#how-to-change-the-default-location-of-nextclouds-datadir) for other OS' and for an explanation of what this value does. This specific one needs to be specified upon the first startup if you want to change it to a specific path instead of the default Docker volume). To see explanations and examples for further variables (like changing the location of Nextcloud's datadir or mounting some locations as external storage into the Nextcloud container), read through this readme and look at the docker-compose file: https://github.com/nextcloud/all-in-one/blob/main/docker-compose.yml
- Further options can be set using environment variables, for example `--env NEXTCLOUD_DATADIR="/mnt/ncdata"` (This is an example for Linux. See [this](https://github.com/nextcloud/all-in-one#how-to-change-the-default-location-of-nextclouds-datadir) for other OS' and for an explanation of what this value does. This specific one needs to be specified upon the first startup if you want to change it to a specific path instead of the default Docker volume). To see explanations and examples for further variables (like changing the location of Nextcloud's datadir or mounting some locations as external storage into the Nextcloud container), read through this readme and look at the docker-compose file: https://github.com/nextcloud/all-in-one/blob/main/docker-compose.yml
</details>
Note: You may be interested in adjusting Nextcloud's datadir to store the files in a different location than the default docker volume. See [this documentation](https://github.com/nextcloud/all-in-one#how-to-change-the-default-location-of-nextclouds-datadir) on how to do it.
@@ -106,7 +106,7 @@ Also, you may be interested in adjusting Nextcloud's Datadir to store the files
⚠️ **Please note:** Almost all commands in this project's documentation use `sudo docker ...`. Since `sudo` is not available on Windows, you simply remove `sudo` from the commands and they should work.
### How to run AIO on Synology DSM
On Synology, two things are different in comparison to Linux: instead of using `--volume /var/run/docker.sock:/var/run/docker.sock:ro`, you need to use `--volume /volume1/docker/docker.sock:/var/run/docker.sock:ro` to run it. You also need to add `-e DOCKER_SOCKET_PATH="/volume1/docker/docker.sock"` to the startup command. Apart from that it should work and behave the same as on Linux. Obviously the Synology Docker GUI will not work with that, so you will need to either use SSH or create a user-defined script task in the task scheduler as the user 'root' in order to run the command.
On Synology, two things are different in comparison to Linux: instead of using `--volume /var/run/docker.sock:/var/run/docker.sock:ro`, you need to use `--volume /volume1/docker/docker.sock:/var/run/docker.sock:ro` to run it. You also need to add `--env DOCKER_SOCKET_PATH="/volume1/docker/docker.sock"` to the startup command. Apart from that it should work and behave the same as on Linux. Obviously the Synology Docker GUI will not work with that, so you will need to either use SSH or create a user-defined script task in the task scheduler as the user 'root' in order to run the command.
Also, you may be interested in adjusting Nextcloud's Datadir to store the files on the host system. See [this documentation](https://github.com/nextcloud/all-in-one#how-to-change-the-default-location-of-nextclouds-datadir) on how to do it.
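As a rough illustration, the Synology startup command might then look like the sketch below. The published ports and volume names are assumptions that mirror the standard command, so adjust them to your setup.
```
# Sketch for Synology DSM (run via SSH or a root task in the task scheduler)
sudo docker run \
--name nextcloud-aio-mastercontainer \
--restart always \
--publish 80:80 \
--publish 8080:8080 \
--publish 8443:8443 \
--env DOCKER_SOCKET_PATH="/volume1/docker/docker.sock" \
--volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
--volume /volume1/docker/docker.sock:/var/run/docker.sock:ro \
nextcloud/all-in-one:latest
```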
@@ -178,7 +178,7 @@ The recommended way is to set up a local dns-server like a pi-hole and set up a
- https://dockerlabs.collabnix.com/intermediate/networking/Configuring_DNS.html
### How to skip the domain validation?
If you are completely sure that you've configured everything correctly and are not able to pass the domain validation, you may skip the domain validation by adding `-e SKIP_DOMAIN_VALIDATION=true` to the docker run command of the mastercontainer.
If you are completely sure that you've configured everything correctly and are not able to pass the domain validation, you may skip the domain validation by adding `--env SKIP_DOMAIN_VALIDATION=true` to the docker run command of the mastercontainer.
### How to resolve firewall problems with Fedora Linux, RHEL OS, CentOS, SUSE Linux and others?
It is known that Linux distros that use [firewalld](https://firewalld.org) as their firewall daemon have problems with docker networks. In case the containers are not able to communicate with each other, you may change your firewalld to use the iptables backend by running:
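The exact command is not part of this hunk. For orientation, switching firewalld to the iptables backend usually boils down to editing `/etc/firewalld/firewalld.conf` and restarting the affected services; treat the following as an illustrative sketch and check the full documentation for the authoritative steps.
```
# Sketch only: switch firewalld from the nftables backend to the iptables backend
sudo sed -i 's/FirewallBackend=nftables/FirewallBackend=iptables/' /etc/firewalld/firewalld.conf
sudo systemctl restart firewalld docker
```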
@@ -442,23 +442,23 @@ You can do so by running the `/daily-backup.sh` script that is stored in the mas
- `STOP_CONTAINERS` if set to `1`, it will automatically stop the containers.
- `CHECK_BACKUP` if set to `1`, it will start the backup check. This must not be enabled at the same time as `DAILY_BACKUP`. Please be aware that this option is non-blocking, which means that the backup check is not necessarily done when the process finishes, since it only starts the borgbackup container with the correct configuration.
One example for this would be `sudo docker exec -it -e DAILY_BACKUP=1 nextcloud-aio-mastercontainer /daily-backup.sh`, which you can run via a cronjob or put in a script.
One example for this would be `sudo docker exec -it --env DAILY_BACKUP=1 nextcloud-aio-mastercontainer /daily-backup.sh`, which you can run via a cronjob or put in a script.
⚠️ Please note that none of the options return error codes, so you need to check for the correct result yourself.
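For example, such a call could be scheduled from root's crontab. The chosen time and the dropped `-it` flags (cron provides no TTY) in the sketch below are only illustrative.
```
# Hypothetical crontab entry (edit with `sudo crontab -e`): run the daily backup at 04:00
0 4 * * * docker exec --env DAILY_BACKUP=1 nextcloud-aio-mastercontainer /daily-backup.sh
```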
### How to disable the backup section?
If you already have a backup solution in place, you may want to hide the backup section. You can do so by adding `-e DISABLE_BACKUP_SECTION=true` to the initial startup of the mastercontainer.
If you already have a backup solution in place, you may want to hide the backup section. You can do so by adding `--env DISABLE_BACKUP_SECTION=true` to the initial startup of the mastercontainer.
### How to change the default location of Nextcloud's Datadir?
⚠️ **Attention:** It is very important to change the datadir **before** Nextcloud is installed/started the first time and not to change it afterwards! If you still want to do it afterwards, see [this](https://github.com/nextcloud/all-in-one/discussions/890#discussioncomment-3089903) on how to do it.
You can configure the Nextcloud container to use a specific directory on your host as its data directory. You can do so by adding the environment variable `NEXTCLOUD_DATADIR` to the initial startup of the mastercontainer. Allowed values for that variable are strings that start with `/` and are not equal to `/`.
- An example for Linux is `-e NEXTCLOUD_DATADIR="/mnt/ncdata"`.
- On macOS it might be `-e NEXTCLOUD_DATADIR="/var/nextcloud-data"`
- For Synology it may be `-e NEXTCLOUD_DATADIR="/volume1/docker/nextcloud/data"`.
- On Windows it might be `-e NEXTCLOUD_DATADIR="/run/desktop/mnt/host/c/ncdata"`. (This path is equivalent to `C:\ncdata` on your Windows host, so you need to translate the path accordingly. Hint: the path that you enter needs to start with `/run/desktop/mnt/host/`. Append to that the exact location on your Windows host, e.g. `c/ncdata`, which is equivalent to `C:\ncdata`.)
- Another option is to provide a specific volume name here with: `-e NEXTCLOUD_DATADIR="nextcloud_aio_nextcloud_datadir"`. This volume needs to be created manually by you beforehand in order to be able to use it, e.g. with:
- An example for Linux is `--env NEXTCLOUD_DATADIR="/mnt/ncdata"`.
- On macOS it might be `--env NEXTCLOUD_DATADIR="/var/nextcloud-data"`
- For Synology it may be `--env NEXTCLOUD_DATADIR="/volume1/docker/nextcloud/data"`.
- On Windows it might be `--env NEXTCLOUD_DATADIR="/run/desktop/mnt/host/c/ncdata"`. (This path is equivalent to `C:\ncdata` on your Windows host, so you need to translate the path accordingly. Hint: the path that you enter needs to start with `/run/desktop/mnt/host/`. Append to that the exact location on your Windows host, e.g. `c/ncdata`, which is equivalent to `C:\ncdata`.)
- Another option is to provide a specific volume name here with: `--env NEXTCLOUD_DATADIR="nextcloud_aio_nextcloud_datadir"`. This volume needs to be created manually by you beforehand in order to be able to use it, e.g. with:
```
docker volume create ^
--driver local ^
@@ -488,12 +488,12 @@ Now you can use `/mnt/storagebox` as Nextcloud's datadir like described in the s
### How to allow the Nextcloud container to access directories on the host?
By default, the Nextcloud container is confined and cannot access directories on the host OS. You might want to change this when you are planning to use local external storage in Nextcloud to store some files outside the data directory. You can do so by adding the environment variable `NEXTCLOUD_MOUNT` to the initial startup of the mastercontainer. Allowed values for that variable are strings that start with `/` and are not equal to `/`.
- Two examples for Linux are `-e NEXTCLOUD_MOUNT="/mnt/"` and `-e NEXTCLOUD_MOUNT="/media/"`.
- On macOS it might be `-e NEXTCLOUD_MOUNT="/Volumes/your_drive/"`
- For Synology it may be `-e NEXTCLOUD_MOUNT="/volume1/"`.
- On Windows it might be `-e NEXTCLOUD_MOUNT="/run/desktop/mnt/host/d/your-folder/"`. (This path is equivalent to `D:\your-folder` on your Windows host, so you need to translate the path accordingly. Hint: the path that you enter needs to start with `/run/desktop/mnt/host/`. Append to that the exact location on your Windows host, e.g. `d/your-folder/`, which is equivalent to `D:\your-folder`.)
- Two examples for Linux are `--env NEXTCLOUD_MOUNT="/mnt/"` and `--env NEXTCLOUD_MOUNT="/media/"`.
- On macOS it might be `--env NEXTCLOUD_MOUNT="/Volumes/your_drive/"`
- For Synology it may be `--env NEXTCLOUD_MOUNT="/volume1/"`.
- On Windows it might be `--env NEXTCLOUD_MOUNT="/run/desktop/mnt/host/d/your-folder/"`. (This path is equivalent to `D:\your-folder` on your Windows host, so you need to translate the path accordingly. Hint: the path that you enter needs to start with `/run/desktop/mnt/host/`. Append to that the exact location on your Windows host, e.g. `d/your-folder/`, which is equivalent to `D:\your-folder`.)
After using this option, please make sure to apply the correct permissions to the directories that you want to use in Nextcloud. E.g. `sudo chown -R 33:0 /mnt/your-drive-mountpoint` and `sudo chmod -R 750 /mnt/your-drive-mountpoint` should make it work on Linux when you have used `-e NEXTCLOUD_MOUNT="/mnt/"`. On Windows you could do this e.g. with `docker exec -it nextcloud-aio-nextcloud chown -R 33:0 /run/desktop/mnt/host/d/your-folder/` and `docker exec -it nextcloud-aio-nextcloud chmod -R 750 /run/desktop/mnt/host/d/your-folder/`.
After using this option, please make sure to apply the correct permissions to the directories that you want to use in Nextcloud. E.g. `sudo chown -R 33:0 /mnt/your-drive-mountpoint` and `sudo chmod -R 750 /mnt/your-drive-mountpoint` should make it work on Linux when you have used `--env NEXTCLOUD_MOUNT="/mnt/"`. On Windows you could do this e.g. with `docker exec -it nextcloud-aio-nextcloud chown -R 33:0 /run/desktop/mnt/host/d/your-folder/` and `docker exec -it nextcloud-aio-nextcloud chmod -R 750 /run/desktop/mnt/host/d/your-folder/`.
You can then navigate to the apps management page, activate the external storage app, navigate to `https://your-nc-domain.com/settings/admin/externalstorages` and add a local external storage directory that will be accessible inside the container at the same place that you've entered. E.g. `/mnt/your-drive-mountpoint` will be mounted to `/mnt/your-drive-mountpoint` inside the container, etc.
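If you prefer the command line over the web UI for that last step, a rough sketch using `occ` inside the Nextcloud container could look like the following. The mount name and path are placeholders, and the web UI route described above remains the documented way.
```
# Sketch with placeholders: enable the external storage app and add a local mount via occ
sudo docker exec --user www-data -it nextcloud-aio-nextcloud php occ app:enable files_external
sudo docker exec --user www-data -it nextcloud-aio-nextcloud php occ files_external:create \
  /your-drive local null::null -c datadir=/mnt/your-drive-mountpoint
```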
@@ -502,16 +502,16 @@ Be aware though that these locations will not be covered by the built-in backup
**Please note:** If you can't see the type "local storage" in the external storage admin options, a restart of the containers from the AIO interface may be required.
### How to adjust the Talk port?
By default, the Talk container uses port `3478/UDP` and `3478/TCP` for connections. You can adjust the port by adding e.g. `-e TALK_PORT=3478` to the initial docker run command and changing the port to your desired value.
By default, the Talk container uses port `3478/UDP` and `3478/TCP` for connections. You can adjust the port by adding e.g. `--env TALK_PORT=3478` to the initial docker run command and changing the port to your desired value.
### How to adjust the upload limit for Nextcloud?
By default, uploads to Nextcloud are limited to a max of 10G. You can adjust the upload limit by providing `-e NEXTCLOUD_UPLOAD_LIMIT=10G` to the docker run command of the mastercontainer and customizing the value to your liking. It must start with a number and end with `G`, e.g. `10G`.
By default, uploads to Nextcloud are limited to a max of 10G. You can adjust the upload limit by providing `--env NEXTCLOUD_UPLOAD_LIMIT=10G` to the docker run command of the mastercontainer and customizing the value to your liking. It must start with a number and end with `G`, e.g. `10G`.
### How to adjust the max execution time for Nextcloud?
By default, uploads to Nextcloud are limited to a max of 3600s. You can adjust the upload time limit by providing `-e NEXTCLOUD_MAX_TIME=3600` to the docker run command of the mastercontainer and customizing the value to your liking. It must be a number, e.g. `3600`.
By default, uploads to Nextcloud are limited to a max of 3600s. You can adjust the upload time limit by providing `--env NEXTCLOUD_MAX_TIME=3600` to the docker run command of the mastercontainer and customizing the value to your liking. It must be a number, e.g. `3600`.
### How to adjust the PHP memory limit for Nextcloud?
By default, each PHP process in the Nextcloud container is limited to a max of 512 MB. You can adjust the memory limit by providing `-e NEXTCLOUD_MEMORY_LIMIT=512M` to the docker run command of the mastercontainer and customizing the value to your liking. It must start with a number and end with `M`, e.g. `1024M`.
By default, each PHP process in the Nextcloud container is limited to a max of 512 MB. You can adjust the memory limit by providing `--env NEXTCLOUD_MEMORY_LIMIT=512M` to the docker run command of the mastercontainer and customizing the value to your liking. It must start with a number and end with `M`, e.g. `1024M`.
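To combine several of these tuning variables, simply add multiple `--env` lines to the initial startup command, as in the following sketch. The surrounding flags, ports and values are only examples; keep the rest of your official startup command as it is.
```
# Sketch only: several tuning variables combined in one initial startup command
sudo docker run \
--name nextcloud-aio-mastercontainer \
--restart always \
--publish 80:80 \
--publish 8080:8080 \
--publish 8443:8443 \
--env TALK_PORT=3478 \
--env NEXTCLOUD_UPLOAD_LIMIT=16G \
--env NEXTCLOUD_MAX_TIME=3600 \
--env NEXTCLOUD_MEMORY_LIMIT=1024M \
--volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
--volume /var/run/docker.sock:/var/run/docker.sock:ro \
nextcloud/all-in-one:latest
```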
### What can I do to fix the internal or reserved ip-address error?
If you get an error during the domain validation which states that your ip-address is an internal or reserved ip-address, you can fix this by first making sure that your domain indeed points to the correct public ip-address of the server, and then adding `--add-host yourdomain.com:<public-ip-address>` to the initial docker run command, which will allow the domain validation to work correctly. For your information: even if the `A` record of your domain should change over time, this is no problem, since the mastercontainer will not make any attempt to access the chosen domain after the initial domain validation.
@@ -526,23 +526,23 @@ You can run AIO also with docker rootless. How to do this is documented here: [d
No. Since Podman is not 100% compatible with the Docker API, you cannot use Podman instead of Docker (since that would add yet another platform that the maintainer would need to test on). However, you can use and follow the [manual-install documentation](./manual-install/) to get AIO's containers running with Podman, or use Docker rootless as described in the section above.
### How to change the Nextcloud apps that are installed on the first startup?
You might want to adjust the Nextcloud apps that are installed upon the first startup of the Nextcloud container. You can do so by adding `-e NEXTCLOUD_STARTUP_APPS="deck twofactor_totp tasks calendar contacts"` to the docker run command of the mastercontainer and customizing the value to your liking. It must be a string with lowercase letters a-z, digits 0-9, spaces and hyphens or '_'. You can disable apps that are shipped and enabled by default by adding a hyphen in front of the appid, e.g. `-contactsinteraction`.
You might want to adjust the Nextcloud apps that are installed upon the first startup of the Nextcloud container. You can do so by adding `--env NEXTCLOUD_STARTUP_APPS="deck twofactor_totp tasks calendar contacts"` to the docker run command of the mastercontainer and customizing the value to your liking. It must be a string with lowercase letters a-z, digits 0-9, spaces and hyphens or '_'. You can disable apps that are shipped and enabled by default by adding a hyphen in front of the appid, e.g. `-contactsinteraction`.
### How to add OS packages permanently to the Nextcloud container?
Some Nextcloud apps require additional external dependencies that must be bundled within the Nextcloud container in order to work correctly. As we cannot put each and every dependency for all apps into the container - this would quickly make the project unmaintainable - there is an official way to add additional dependencies into the Nextcloud container. However, note that doing this is not recommended, since we do not test Nextcloud apps that require external dependencies.
You can do so by adding `-e NEXTCLOUD_ADDITIONAL_APKS="imagemagick dependency2 dependency3"` to the docker run command of the mastercontainer and customizing the value to your liking. It must be a string with lowercase letters a-z, digits 0-9, spaces, dots and hyphens or '_'. You can find available packages here: https://pkgs.alpinelinux.org/packages?name=&branch=v3.16&repo=&arch=&maintainer=. `imagemagick` is added by default; if you want to keep it, you need to specify it as well.
You can do so by adding `--env NEXTCLOUD_ADDITIONAL_APKS="imagemagick dependency2 dependency3"` to the docker run command of the mastercontainer and customizing the value to your liking. It must be a string with lowercase letters a-z, digits 0-9, spaces, dots and hyphens or '_'. You can find available packages here: https://pkgs.alpinelinux.org/packages?name=&branch=v3.16&repo=&arch=&maintainer=. `imagemagick` is added by default; if you want to keep it, you need to specify it as well.
### How to add PHP extensions permanently to the Nextcloud container?
Some Nextcloud apps require additional PHP extensions that must be bundled within the Nextcloud container in order to work correctly. As we cannot put each and every dependency for all apps into the container - this would quickly make the project unmaintainable - there is an official way to add additional PHP extensions into the Nextcloud container. However, note that doing this is not recommended, since we do not test Nextcloud apps that require additional PHP extensions.
You can do so by adding `-e NEXTCLOUD_ADDITIONAL_PHP_EXTENSIONS="imagick extension1 extension2"` to the docker run command of the mastercontainer and customizing the value to your liking. It must be a string with lowercase letters a-z, digits 0-9, spaces, dots and hyphens or '_'. You can find available extensions here: https://pecl.php.net/packages.php. `imagick` is added by default; if you want to keep it, you need to specify it as well.
You can do so by adding `--env NEXTCLOUD_ADDITIONAL_PHP_EXTENSIONS="imagick extension1 extension2"` to the docker run command of the mastercontainer and customizing the value to your liking. It must be a string with lowercase letters a-z, digits 0-9, spaces, dots and hyphens or '_'. You can find available extensions here: https://pecl.php.net/packages.php. `imagick` is added by default; if you want to keep it, you need to specify it as well.
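Once the Nextcloud container is running, you can check whether the requested packages and extensions actually ended up inside it. The following is only a quick verification sketch and assumes the container is Alpine-based, as the APK wording above suggests.
```
# Sketch: verify additional OS packages and PHP extensions inside the Nextcloud container
sudo docker exec -it nextcloud-aio-nextcloud apk info | grep imagemagick
sudo docker exec -it nextcloud-aio-nextcloud php -m | grep -i imagick
```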
### What about the pdlib PHP extension for the facerecognition app?
The [facerecognition app](https://apps.nextcloud.com/apps/facerecognition) requires the pdlib PHP extension to be installed. Unfortunately, it is available neither on PECL nor via PHP core, so there is currently no way to add it to AIO. However, you can vote up [this issue](https://github.com/goodspb/pdlib/issues/56) to bring it to PECL, and there is the [recognize app](https://apps.nextcloud.com/apps/recognize) that also offers face recognition.
### How to enable hardware-transcoding for Nextcloud?
The [memories app](https://apps.nextcloud.com/apps/memories) allows enabling hardware transcoding for videos. In order to use that, you need to add `-e NEXTCLOUD_ENABLE_DRI_DEVICE=true` to the docker run command of the mastercontainer, which will mount the `/dev/dri` device into the container (⚠️ Attention: this only works if the device is present on the host!). Additionally, you need to add the required packages to the Nextcloud container by using [this feature](https://github.com/nextcloud/all-in-one#how-to-add-os-packages-permanently-to-the-nextcloud-container) and adding the required Alpine packages that are documented [here](https://github.com/pulsejet/memories/wiki/QSV-Transcoding).
The [memories app](https://apps.nextcloud.com/apps/memories) allows enabling hardware transcoding for videos. In order to use that, you need to add `--env NEXTCLOUD_ENABLE_DRI_DEVICE=true` to the docker run command of the mastercontainer, which will mount the `/dev/dri` device into the container (⚠️ Attention: this only works if the device is present on the host!). Additionally, you need to add the required packages to the Nextcloud container by using [this feature](https://github.com/nextcloud/all-in-one#how-to-add-os-packages-permanently-to-the-nextcloud-container) and adding the required Alpine packages that are documented [here](https://github.com/pulsejet/memories/wiki/QSV-Transcoding).
### Huge docker logs
When your containers run for a few days without a restart, the container logs that you can view from the AIO interface can get really huge. You can limit the log sizes by enabling logrotate for docker container logs. Feel free to enable this by following these instructions: https://sandro-keil.de/blog/logrotate-for-docker-container/
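Besides logrotate, Docker itself can cap the log size of newly created containers via `/etc/docker/daemon.json`. The values in the sketch below are assumptions, and such a change only applies to containers that are created after the daemon restart.
```
# Sketch with assumed values: limit json-file logs for newly created containers
# Careful: this overwrites an existing /etc/docker/daemon.json
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}
EOF
sudo systemctl restart docker
```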
@@ -598,12 +598,12 @@ For some applications it might be necessary to enstablish a secured connection t
You can make the Nextcloud container trust any Certification Authority by providing the environment variable `NEXTCLOUD_TRUSTED_CACERTS_DIR` when starting the AIO-mastercontainer. The value of the variable should be set to the absolute path to a directory on the host which contains one or more Certification Authority certificates. You should use X.509 certificates, Base64 encoded. (Other formats may work but have not been tested!) All the certificates in the directory will be trusted.
When using `docker run`, the environment variable can be set with `-e NEXTCLOUD_TRUSTED_CACERTS_DIR=/path/to/my/cacerts`.
When using `docker run`, the environment variable can be set with `--env NEXTCLOUD_TRUSTED_CACERTS_DIR=/path/to/my/cacerts`.
In order for the value to be valid, the path should start with `/`, not end with `/`, and point to an existing **directory**. Pointing the variable directly to a certificate **file** will not work and may also break things.
### How to disable Collabora's Seccomp feature?
The Collabora container enables Seccomp by default, which is a security feature of the Linux kernel. On systems without this kernel feature enabled, you need to provide `-e COLLABORA_SECCOMP_DISABLED=true` to the initial docker run command in order to make it work.
The Collabora container enables Seccomp by default, which is a security feature of the Linux kernel. On systems without this kernel feature enabled, you need to provide `--env COLLABORA_SECCOMP_DISABLED=true` to the initial docker run command in order to make it work.
### How to enable automatic updates without creating a backup beforehand?
If you have an external backup solution, you might want to enable automatic updates without creating a backup first. However, note that doing this is not recommended, since you will not be able to easily create and restore a backup from the AIO interface anymore, and you need to make sure to shut down all the containers properly before creating the backup, e.g. by stopping them from the AIO interface first.
@@ -617,7 +617,7 @@ But anyhow, is here a guide that helps you automate the whole procedure:
#!/bin/bash
# Stop the containers
docker exec -e STOP_CONTAINERS=1 nextcloud-aio-mastercontainer /daily-backup.sh
docker exec --env STOP_CONTAINERS=1 nextcloud-aio-mastercontainer /daily-backup.sh
# The line below is optional and only applies if you run AIO in a VM; it will shut down the VM afterwards
# poweroff
@@ -645,7 +645,7 @@ Afterwards apply the correct permissions with `sudo chown root:root /root/shutdo
#!/bin/bash
# Run container update once
if ! docker exec -e AUTOMATIC_UPDATES=1 nextcloud-aio-mastercontainer /daily-backup.sh; then
if ! docker exec --env AUTOMATIC_UPDATES=1 nextcloud-aio-mastercontainer /daily-backup.sh; then
while docker ps --format "{{.Names}}" | grep -q "^nextcloud-aio-watchtower$"; do
echo "Waiting for watchtower to stop"
sleep 30
@@ -657,7 +657,7 @@ if ! docker exec -e AUTOMATIC_UPDATES=1 nextcloud-aio-mastercontainer /daily-bac
done
# Run container update another time to make sure that all containers are updated correctly.
docker exec -e AUTOMATIC_UPDATES=1 nextcloud-aio-mastercontainer /daily-backup.sh
docker exec --env AUTOMATIC_UPDATES=1 nextcloud-aio-mastercontainer /daily-backup.sh
fi
```
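You could then run the script above on a schedule, e.g. from root's crontab. The time and the script path in the sketch below are placeholders for whatever you chose when saving the script.
```
# Hypothetical crontab entry (edit with `sudo crontab -e`): run container updates on Saturdays at 05:00
0 5 * * 6 /bin/bash /path/to/your/update-script.sh
```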


@@ -126,7 +126,7 @@ You can get AIO running using the ACME DNS-challenge. Here is how to do it.
}
```
Of course you need to modify `<your-nc-domain>` to the domain on which you want to run Nextcloud. You also need to adjust `<provider>` and `<key>` to match your case. Also make sure to adjust the port 11000 to match the chosen `APACHE_PORT`. **Please note:** The above configuration will only work if your reverse proxy is running directly on the host that is running the docker daemon. If the reverse proxy is running in a docker container, you can use the `--network host` option (or `network_mode: host` for docker-compose) when starting the reverse proxy container in order to connect the reverse proxy container to the host network. If that is not an option or not possible for you (e.g. on Windows or if the reverse proxy is running on a different host), you can alternatively use the private ip-address of the host that is running the docker daemon instead of `localhost`. If you are not sure how to retrieve that, you can run: `ip a | grep "scope global" | head -1 | awk '{print $2}' | sed 's|/.*||'` (the command only works on Linux)
1. Now continue with [point 2](#2-use-this-startup-command) but additionally add `-e SKIP_DOMAIN_VALIDATION=true` to the docker run command, which will disable the domain validation (because it is known that the domain validation will not work when using the DNS-challenge since no port is publicly opened).
1. Now continue with [point 2](#2-use-this-startup-command) but additionally add `--env SKIP_DOMAIN_VALIDATION=true` to the docker run command, which will disable the domain validation (because it is known that the domain validation will not work when using the DNS-challenge since no port is publicly opened).
**Advice:** In order to make it work in your home network, you may add the internal ipv4-address of your reverse proxy as an A DNS-record to your domain and disable the dns-rebind-protection in your router. Another way is to set up a local dns-server like a pi-hole and set up a custom dns-record for that domain that points to the internal ip-address of your reverse proxy (see https://github.com/nextcloud/all-in-one#how-can-i-access-nextcloud-locally). If neither is possible, you may add the domain to the hosts file, which is then needed on any device that shall use the server.
@@ -141,7 +141,7 @@ You can get AIO running using the ACME DNS-challenge. Here is how to do it.
Although it may not seem like it, from AIO's perspective a Cloudflare Tunnel works like a reverse proxy. Here is how to make it work:
1. Install the Cloudflare Tunnel on the same machine that AIO will be running on and point the Tunnel with the domain that you want to use for AIO to `http://localhost:11000`. If the Tunnel is running on a different machine, you can alternatively use the private ip-address of the host that is running the docker daemon instead of `localhost`. If you are not sure how to retrieve that, you can run: `ip a | grep "scope global" | head -1 | awk '{print $2}' | sed 's|/.*||'` (the command only works on Linux)
1. Now continue with [point 2](#2-use-this-startup-command) but additionally add `-e SKIP_DOMAIN_VALIDATION=true` to the docker run command, which will disable the domain validation (because it is known that the domain validation will not work behind a Cloudflare Tunnel). So you need to make sure yourself that you've configured everything correctly.
1. Now continue with [point 2](#2-use-this-startup-command) but additionally add `--env SKIP_DOMAIN_VALIDATION=true` to the docker run command, which will disable the domain validation (because it is known that the domain validation will not work behind a Cloudflare Tunnel). So you need to make sure yourself that you've configured everything correctly.
**Advice:** Make sure to [disable Cloudflare's Rocket Loader feature](https://help.nextcloud.com/t/login-page-not-working-solved/149417/8), as otherwise Nextcloud's login prompt will not be shown.
@@ -478,8 +478,8 @@ sudo docker run \
--name nextcloud-aio-mastercontainer \
--restart always \
--publish 8080:8080 \
-e APACHE_PORT=11000 \
-e APACHE_IP_BINDING=0.0.0.0 \
--env APACHE_PORT=11000 \
--env APACHE_IP_BINDING=0.0.0.0 \
--volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
--volume /var/run/docker.sock:/var/run/docker.sock:ro \
nextcloud/all-in-one:latest
@@ -501,8 +501,8 @@ docker run ^
--name nextcloud-aio-mastercontainer ^
--restart always ^
--publish 8080:8080 ^
-e APACHE_PORT=11000 ^
-e APACHE_IP_BINDING=0.0.0.0 ^
--env APACHE_PORT=11000 ^
--env APACHE_IP_BINDING=0.0.0.0 ^
--volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config ^
--volume //var/run/docker.sock:/var/run/docker.sock:ro ^
nextcloud/all-in-one:latest
@@ -520,7 +520,7 @@ Simply translate the docker run command into a docker-compose file. You can have
## 3. Limit the access to the apache container
Use this environment variable during the initial startup of the mastercontainer to make the apache container only listen on localhost: `-e APACHE_IP_BINDING=127.0.0.1`. **Attention:** Setting this is only recommended if you use `localhost` in your reverse proxy config to connect to your AIO instance. If you use an ip-address instead of localhost, you should set it to `0.0.0.0`.
Use this environment variable during the initial startup of the mastercontainer to make the apache container only listen on localhost: `--env APACHE_IP_BINDING=127.0.0.1`. **Attention:** Setting this is only recommended if you use `localhost` in your reverse proxy config to connect to your AIO instance. If you use an ip-address instead of localhost, you should set it to `0.0.0.0`.
## 4. Open the AIO interface.
After starting AIO, you should be able to access the AIO interface via `https://ip.address.of.the.host:8080`. Enter the domain that you've configured in the reverse proxy config and you should be done. Please do not forget to open port `3478/TCP` and `3478/UDP` in your firewall/router for the Talk container!
@@ -547,11 +547,11 @@ Afterwards should the AIO interface be accessible via `https://ip.address.of.the
If something does not work, follow the steps below:
1. Make sure to exactly follow the whole reverse proxy documentation step-for-step from top to bottom!
1. Make sure that you used the docker run command that is described in this reverse proxy documentation.
1. Make sure to set the `APACHE_IP_BINDING` variable correctly. If in doubt, set it to `-e APACHE_IP_BINDING=0.0.0.0`
1. Make sure to set the `APACHE_IP_BINDING` variable correctly. If in doubt, set it to `--env APACHE_IP_BINDING=0.0.0.0`
1. Make sure that all ports match the chosen `APACHE_PORT`.
1. Make sure that the reverse proxy is running on the host OS or, if running in a container, that it is connected to the host network. If that is not possible (e.g. on Windows or if the reverse proxy is running on a different host), substitute `localhost` or `127.0.0.1` in the default configurations with the private ip-address of the host that is running the docker daemon. If you are not sure how to retrieve that, you can run: `ip a | grep "scope global" | head -1 | awk '{print $2}' | sed 's|/.*||'` (The command only works on Linux)
1. Make sure that the mastercontainer is able to spawn other containers. You can do so by checking that the mastercontainer indeed has access to the Docker socket, which might not be located at one of the suggested paths like `/var/run/docker.sock` but in a different directory, depending on your OS and the way you installed Docker. The mastercontainer logs should help figure this out. You can have a look at them by running `sudo docker logs nextcloud-aio-mastercontainer` after the container is started for the first time.
1. Check whether the reverse proxy, if running inside a container, can reach the provided apache port after the mastercontainer was started. You can test this by running `nc -z localhost 11000; echo $?` from inside the reverse proxy container. If the output is `0`, everything works. Alternatively, you can of course use the ip-address of the host instead of `localhost` for this test.
1. Try to configure everything from scratch if it still does not work by following https://github.com/nextcloud/all-in-one#how-to-properly-reset-the-instance.
1. As a last resort, you may disable the domain validation by adding `-e SKIP_DOMAIN_VALIDATION=true` to the docker run command. But only use this if you are completely sure that you've correctly configured everything!
1. As a last resort, you may disable the domain validation by adding `--env SKIP_DOMAIN_VALIDATION=true` to the docker run command. But only use this if you are completely sure that you've correctly configured everything!


@@ -1,23 +1,23 @@
# Environment variables
- [ ] When starting the mastercontainer with `-e APACHE_PORT=11000` on a clean instance, the domaincheck container should be started with that same port published. That makes sure that the Apache container will also use that port later on. Using a value here that is not a port will not allow the mastercontainer to start correctly.
- [ ] When starting the mastercontainer with `-e APACHE_IP_BINDING=127.0.0.1` on a clean instance, the domaincheck container's apache port should only listen on localhost on the host. Using a value here that is not a number or dot will not allow the mastercontainer to start correctly.
- [ ] When starting the mastercontainer with `-e TALK_PORT=3479` on a clean instance, the talk container should use this port later on. Using a value here that is not a port will not allow the mastercontainer to start correctly. Also, it should stop if `APACHE_PORT` and `TALK_PORT` are set to the same value.
- [ ] When starting the mastercontainer with `--env APACHE_PORT=11000` on a clean instance, the domaincheck container should be started with that same port published. That makes sure that the Apache container will also use that port later on. Using a value here that is not a port will not allow the mastercontainer to start correctly.
- [ ] When starting the mastercontainer with `--env APACHE_IP_BINDING=127.0.0.1` on a clean instance, the domaincheck container's apache port should only listen on localhost on the host. Using a value here that is not a number or dot will not allow the mastercontainer to start correctly.
- [ ] When starting the mastercontainer with `--env TALK_PORT=3479` on a clean instance, the talk container should use this port later on. Using a value here that is not a port will not allow the mastercontainer to start correctly. Also, it should stop if `APACHE_PORT` and `TALK_PORT` are set to the same value.
- [ ] Also make sure that reverse proxies work by following https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md#reverse-proxy-documentation and then following [001-initial-setup.md](./001-initial-setup.md) and [002-new-instance.md](./002-new-instance.md)
- [ ] When starting the mastercontainer with `-e SKIP_DOMAIN_VALIDATION=true` on a clean instance, it should skip the domain verification and thus accept any domain that you type in.
- [ ] When starting the mastercontainer with `-e NEXTCLOUD_DATADIR="/mnt/testdata"` it should map that location from `/mnt/testdata` to `/mnt/ncdata` inside the Nextcloud container. Not having adjusted the permissions correctly before starting the Nextcloud container the first time will not allow the Nextcloud container to start correctly. See https://github.com/nextcloud/all-in-one#how-to-change-the-default-location-of-nextclouds-datadir for allowed values.
- [ ] When starting the mastercontainer with `-e NEXTCLOUD_MOUNT="/mnt/"` it should map `/mnt/` to `/mnt/` inside the Nextcloud container. See https://github.com/nextcloud/all-in-one#how-to-allow-the-nextcloud-container-to-access-directories-on-the-host for allowed values.
- [ ] When starting the mastercontainer with `-e NEXTCLOUD_UPLOAD_LIMIT=11G` it should change Nextcloud's upload limit to 11G. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-upload-limit-for-nextcloud for allowed values.
- [ ] When starting the mastercontainer with `-e NEXTCLOUD_MEMORY_LIMIT=1024M` it should change Nextcloud's PHP memory limit to 1024M. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-php-memory-limit-for-nextcloud for allowed values.
- [ ] When starting the mastercontainer with `-e NEXTCLOUD_MAX_TIME=4000` it should change Nextcloud's upload max time to 4000s. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-max-execution-time-for-nextcloud for allowed values.
- [ ] When starting the mastercontainer with `-e DOCKER_SOCKET_PATH="$XDG_RUNTIME_DIR/docker.sock"` it should map `$XDG_RUNTIME_DIR/docker.sock` to `/var/run/docker.sock` inside the watchtower container, which allows updating the mastercontainer on docker rootless.
- [ ] When starting the mastercontainer with `-e DISABLE_BACKUP_SECTION=true` it should hide the backup section that gets shown after AIO is set up (everything of [020-backup-and-restore](./020-backup-and-restore.md)) and simply show that the backup section is disabled.
- [ ] When starting the mastercontainer with `-e NEXTCLOUD_TRUSTED_CACERTS_DIR=/path/to/my/cacerts`, the resulting Nextcloud container should trust all the Certification Authorities whose certificates are included in the directory `/path/to/my/cacerts` on the host.
- [ ] When starting the mastercontainer with `--env SKIP_DOMAIN_VALIDATION=true` on a clean instance, it should skip the domain verification and thus accept any domain that you type in.
- [ ] When starting the mastercontainer with `--env NEXTCLOUD_DATADIR="/mnt/testdata"` it should map that location from `/mnt/testdata` to `/mnt/ncdata` inside the Nextcloud container. Not having adjusted the permissions correctly before starting the Nextcloud container the first time will not allow the Nextcloud container to start correctly. See https://github.com/nextcloud/all-in-one#how-to-change-the-default-location-of-nextclouds-datadir for allowed values.
- [ ] When starting the mastercontainer with `--env NEXTCLOUD_MOUNT="/mnt/"` it should map `/mnt/` to `/mnt/` inside the Nextcloud container. See https://github.com/nextcloud/all-in-one#how-to-allow-the-nextcloud-container-to-access-directories-on-the-host for allowed values.
- [ ] When starting the mastercontainer with `--env NEXTCLOUD_UPLOAD_LIMIT=11G` it should change Nextcloud's upload limit to 11G. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-upload-limit-for-nextcloud for allowed values.
- [ ] When starting the mastercontainer with `--env NEXTCLOUD_MEMORY_LIMIT=1024M` it should change Nextcloud's PHP memory limit to 1024M. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-php-memory-limit-for-nextcloud for allowed values.
- [ ] When starting the mastercontainer with `--env NEXTCLOUD_MAX_TIME=4000` it should change Nextcloud's upload max time to 4000s. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-max-execution-time-for-nextcloud for allowed values.
- [ ] When starting the mastercontainer with `--env DOCKER_SOCKET_PATH="$XDG_RUNTIME_DIR/docker.sock"` it should map `$XDG_RUNTIME_DIR/docker.sock` to `/var/run/docker.sock` inside the watchtower container, which allows updating the mastercontainer on docker rootless.
- [ ] When starting the mastercontainer with `--env DISABLE_BACKUP_SECTION=true` it should hide the backup section that gets shown after AIO is set up (everything of [020-backup-and-restore](./020-backup-and-restore.md)) and simply show that the backup section is disabled.
- [ ] When starting the mastercontainer with `--env NEXTCLOUD_TRUSTED_CACERTS_DIR=/path/to/my/cacerts`, the resulting Nextcloud container should trust all the Certification Authorities whose certificates are included in the directory `/path/to/my/cacerts` on the host.
See https://github.com/nextcloud/all-in-one#how-to-trust-user-defiend-certification-authorities-ca
- [ ] When starting the mastercontainer with `-e COLLABORA_SECCOMP_DISABLED=true`, the resulting collabora container should have `--o:security.seccomp=false` applied to it.
- [ ] When starting the mastercontainer with `-e NEXTCLOUD_STARTUP_APPS=deck`, the resulting Nextcloud should have only installed the deck app and not the other apps that get installed by default. Default are `deck twofactor_totp tasks calendar contacts`.
- [ ] When starting the mastercontainer with `-e NEXTCLOUD_ADDITIONAL_APKS=zip`, the resulting Nextcloud container should have the zip package installed and not imagemagick.
- [ ] When starting the mastercontainer with `-e NEXTCLOUD_ADDITIONAL_PHP_EXTENSIONS=inotify`, the resulting Nextcloud container should have the inotify extension installed and not the imagick extension.
- [ ] When starting the mastercontainer with `-e NEXTCLOUD_ENABLE_DRI_DEVICE=true`, the resulting Nextcloud container should have the /dev/dri device mounted into the container. (Only works if a `/dev/dri` device is present on the host)
- [ ] When starting the mastercontainer with `--env COLLABORA_SECCOMP_DISABLED=true`, the resulting collabora container should have `--o:security.seccomp=false` applied to it.
- [ ] When starting the mastercontainer with `--env NEXTCLOUD_STARTUP_APPS=deck`, the resulting Nextcloud should have only installed the deck app and not the other apps that get installed by default. Default are `deck twofactor_totp tasks calendar contacts`.
- [ ] When starting the mastercontainer with `--env NEXTCLOUD_ADDITIONAL_APKS=zip`, the resulting Nextcloud container should have the zip package installed and not imagemagick.
- [ ] When starting the mastercontainer with `--env NEXTCLOUD_ADDITIONAL_PHP_EXTENSIONS=inotify`, the resulting Nextcloud container should have the inotify extension installed and not the imagick extension.
- [ ] When starting the mastercontainer with `--env NEXTCLOUD_ENABLE_DRI_DEVICE=true`, the resulting Nextcloud container should have the /dev/dri device mounted into the container. (Only works if a `/dev/dri` device is present on the host)
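For spot-checking some of the items above, a few one-liners can help. The container names `nextcloud-aio-domaincheck` and `nextcloud-aio-talk` used in this sketch are assumptions based on the usual AIO naming scheme.
```
# Sketch: quick spot checks for a few of the variables above (container names are assumptions)
sudo docker ps --filter name=nextcloud-aio-domaincheck --format '{{.Ports}}'          # APACHE_PORT / APACHE_IP_BINDING
sudo docker ps --filter name=nextcloud-aio-talk --format '{{.Ports}}'                 # TALK_PORT
sudo docker inspect nextcloud-aio-nextcloud --format '{{json .HostConfig.Devices}}'   # NEXTCLOUD_ENABLE_DRI_DEVICE
```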
You can now continue with [070-timezone-change.md](./070-timezone-change.md)