As your apps grow more complex, you may find significant benefit in running some services in separate containers. Splitting your app into multiple containers allows you to better isolate and maintain key services, providing a more modular and secure approach to fleet management. Each service can be packaged with the operating environment and tools it specifically needs to run, and each service can be limited to the minimum system resources necessary to perform its task. The benefits of multicontainer fleets compound as the complexity of the app grows. With multicontainer, each service is updated independently. Hence, larger apps can be developed and maintained by separate teams, each free to work in a way that best supports their service.
Note: For additional information on working with multiple containers with balena, see the services masterclass.
This guide will cover the considerations you need to take into account when running multiple containers, including
docker-compose.yml configuration and some important balena specific settings.
The multicontainer functionality provided by balena is built around the Docker Compose file format. The balena device supervisor implements a subset of the Compose v2.1 feature set. You can find a full list of supported and known unsupported features in our device supervisor reference docs.
At the root of your multicontainer release, you'll use a
docker-compose.yml file to specify the configuration of your containers. The
docker-compose.yml defines the services you'll be building, as well as how the services interact with each other and the host OS.
Here's an example
docker-compose.yml for a simple multicontainer release, composed of a static site server, a websocket server, and a proxy:
```yaml
version: '2'
services:
  frontend:
    build: ./frontend
    restart: always
    expose:
      - "80"
  proxy:
    build: ./haproxy
    depends_on:
      - frontend
      - data
    ports:
      - "80:80"
  data:
    build: ./data
    expose:
      - "8080"
```
Each service can either be built from a directory containing a
Dockerfile, as shown here, or can use a Docker image that has already been built, by specifying
image: instead of build:. If your containers need to be started in a specific order, make sure to use the depends_on setting.
Note:
depends_on only controls the startup order and will not restart services if a dependency restarts. Also, if a service is expected to stop after performing some actions, do not include it as a dependency, or the service that depends on it may not start.
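As a sketch of the pattern the note above warns against, consider a hypothetical one-shot migrate service (both service names here are illustrative) that exits after running some setup task:

```yaml
# Anti-pattern sketch: `migrate` runs once and exits, so listing it under
# depends_on may prevent `api` from ever starting.
services:
  migrate:
    build: ./migrate   # performs a one-time task, then stops
    restart: "no"
  api:
    build: ./api
    depends_on:
      - migrate        # avoid: the dependency is expected to stop
```

A safer approach is to run one-shot work inside the long-running service's own entrypoint, so no service depends on a container that exits.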
Unlike single container fleets, multicontainer fleets do not run containers in privileged mode by default. If you want to make use of hardware, you will either have to set some services to privileged, using
privileged: true, or use the
devices setting to map the correct hardware access into the container.
Also, on a balena device, unlike with regular docker compose, containers restart by default if their process crashes or the device is restarted. This behavior can be changed by setting the restart policy to one of the values supported by the engine (the default is always).
As an example, here the
gpio service is set up to use I2C and serial UART sensors, with a restart policy of on-failure:
```yaml
gpio:
  build: ./gpio
  devices:
    - "/dev/i2c-1:/dev/i2c-1"
    - "/dev/mem:/dev/mem"
    - "/dev/ttyACM0:/dev/ttyACM0"
  cap_add:
    - SYS_RAWIO
  restart: on-failure
```
There are a few settings and considerations specific to balena that need to be taken into account when building multicontainer fleets.
Setting network_mode to host allows the container to share the same network namespace as the host OS. When this is set, any ports exposed on the container will be exposed locally on the device. This is necessary for features such as bluetooth.
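A minimal sketch of a service using the host network namespace (the bluetooth service name and build directory are illustrative):

```yaml
version: '2'
services:
  bluetooth:
    build: ./bluetooth
    network_mode: host   # shares the host OS network namespace; ports bind directly on the device
```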
With multicontainer fleets, balena supports the use of named volumes, a feature that expands on the persistent storage functionality used by older versions of balenaOS. Named volumes can be given arbitrary names and can be linked to a directory in one or more containers. As long as every release of the fleet includes a
docker-compose.yml and the volume name does not change, the data in the volume will persist across updates.
Use the volumes field of the service to link a directory in your container to your named volume. The named volume should also be specified at the top level of the
docker-compose.yml:
```yaml
version: '2'
volumes:
  resin-data:
services:
  example:
    build: ./example
    volumes:
      - 'resin-data:/data'
```
For devices upgraded from older versions of balenaOS to v2.12.0 or higher, a link will automatically be created from the
/data directory of the container to the
resin-data named volume (similar to above). This ensures fleet behavior will remain consistent across host OS versions. One notable difference is that accessing this data via the host OS is done at
/var/lib/docker/volumes/<FLEET ID>_resin-data/_data, rather than the
/mnt/data/resin-data/<FLEET ID> location used with earlier host OS versions.
balena does not support the use of bind mounts at this time, aside from the ones which are provided by feature labels.
Core dumps for containers
From balenaOS v2.113.31 onwards, core dump files are not generated by default when a container crashes. This prevents a crash-looping container from filling up all available storage space on a device.
Core dumps are files with the contents of the memory used by a process at the time it crashes. They are a relatively advanced troubleshooting tool, particularly useful for low-level debugging of programs written in languages that compile to native code. For example, a tool like the
gdb debugger can read a core dump and provide a stack trace showing the sequence of function calls that led to the crash.
If you need core dumps, you can easily enable them by editing your
docker-compose.yml and setting
ulimits.core: -1 for the desired services. For example:
```yaml
version: '2'
services:
  example:
    ulimits:
      core: -1
```
In addition to the settings above, there are some balena specific labels that can be defined in the
docker-compose.yml file. These provide access to certain bind mounts and environment variables without requiring you to run the container as privileged:
| Label | Default | Description | Supervisor version | balenaOS version* |
| --- | --- | --- | --- | --- |
| io.balena.features.balena-socket | false | Bind mounts the balena container engine socket into the container and sets the DOCKER_HOST environment variable to the socket location | | |
| io.balena.features.dbus | false | Bind mounts the host OS dbus into the container | | |
| io.balena.features.sysfs | false | Bind mounts the host OS /sys directory into the container | | |
| io.balena.features.procfs | false | Bind mounts the host OS /proc directory into the container | | |
| io.balena.features.kernel-modules | false | Bind mounts the host OS /lib/modules directory into the container | | |
| io.balena.features.firmware | false | Bind mounts the host OS /lib/firmware directory into the container | | |
| io.balena.features.journal-logs | false | Bind mounts journal log directories into the container | | |
| io.balena.features.balena-api | false | When enabled, makes the BALENA_API_KEY environment variable available to the service | | |
| io.balena.update.strategy | download-then-kill | Sets the fleet update strategy | v7.23.0 | v2.21.0 |
| io.balena.update.handover-timeout | 60000 | Time, in milliseconds, before an old container is automatically killed. Only used with the hand-over update strategy | | |
* balenaOS versions that ship with a compatible device supervisor version as per balenaOS Changelog.
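As a sketch of how the two update-strategy labels combine, this hypothetical service uses the hand-over strategy and allows the old container to keep running for up to two minutes while the new one takes over (the service name and timeout value are illustrative):

```yaml
version: '2'
services:
  example:
    build: ./example
    labels:
      io.balena.update.strategy: hand-over
      io.balena.update.handover-timeout: '120000'   # milliseconds before the old container is killed
```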
These labels are applied to a specific service with the labels setting in its service definition. For example:
```yaml
labels:
  io.balena.features.balena-socket: '1'
  io.balena.features.kernel-modules: '1'
  io.balena.features.firmware: '1'
  io.balena.features.dbus: '1'
  io.balena.features.sysfs: '1'
  io.balena.features.procfs: '1'
  io.balena.features.journal-logs: '1'
  io.balena.features.supervisor-api: '1'
  io.balena.features.balena-api: '1'
  io.balena.update.strategy: download-then-kill
  io.balena.update.handover-timeout: ''
```