Multiple containers

As your apps grow more complex, you may find significant benefit in running some services in separate containers. Splitting your app into multiple containers allows you to better isolate and maintain key services, providing a more modular and secure approach to fleet management. Each service can be packaged with the operating environment and tools it specifically needs to run, and each service can be limited to the minimum system resources necessary to perform its task. The benefits of multicontainer fleets compound as the complexity of the app grows. With multicontainer fleets, each service is updated independently, so larger apps can be developed and maintained by separate teams, each free to work in a way that best supports their service.


This guide covers the considerations you need to take into account when running multiple containers, including docker-compose.yml configuration and some important balena-specific settings.

docker-compose.yml file

The multicontainer functionality provided by balena is built around the Docker Compose file format. The balena device Supervisor implements a subset of the Compose v2.1 feature set. You can find a full list of supported and known unsupported features in our device Supervisor reference docs.

At the root of your multicontainer release, you'll use a docker-compose.yml file to specify the configuration of your containers. The docker-compose.yml defines the services you'll be building, as well as how the services interact with each other and the host OS.

Here's an example docker-compose.yml for a simple multicontainer release, composed of a static site server, a websocket server, and a proxy:

version: '2'
services:
  frontend:
    build: ./frontend
    restart: always
    expose:
      - '80'
  proxy:
    build: ./haproxy
    depends_on:
      - frontend
      - data
    ports:
      - '80:80'
  data:
    build: ./data
    expose:
      - '8080'

Each service can either be built from a directory containing a Dockerfile, as shown here, or can use a Docker image that has already been built, by replacing build: with image:. If your containers need to be started in a specific order, make sure to use the depends_on: setting.


Unlike single container fleets, multicontainer fleets do not run containers in privileged mode by default. If you want to make use of hardware, you will either have to set some services to privileged, using privileged: true, or use the cap_add and devices settings to map in the correct hardware access to the container.

Also, on a balena device, unlike with standard Docker Compose, containers restart by default if their process crashes or the device is restarted. This behavior can be changed by setting the restart policy to one of the values supported by the engine (the default is restart: always).

As an example, here the gpio service is set up to use I2C and serial UART sensors with a restart policy of on-failure.
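A minimal sketch of such a service, assuming hypothetical device paths (/dev/i2c-1 and /dev/ttyUSB0) and a build directory named ./gpio:

```yaml
version: '2'
services:
  gpio:
    build: ./gpio
    # Restart only if the process exits with a non-zero code
    restart: on-failure
    # Map in only the device nodes the service needs, rather than
    # running the whole container as privileged
    devices:
      - '/dev/i2c-1:/dev/i2c-1'
      - '/dev/ttyUSB0:/dev/ttyUSB0'
```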

Balena settings

There are a few settings and considerations specific to balena that need to be taken into account when building multicontainer fleets.

Network mode

Setting network_mode to host allows the container to share the same network namespace as the host OS. When this is set, any ports exposed on the container will be exposed locally on the device. This is necessary for features such as Bluetooth.
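A minimal sketch, assuming a hypothetical bluetooth service built from ./bluetooth:

```yaml
version: '2'
services:
  bluetooth:
    build: ./bluetooth
    # Share the host's network namespace; ports this container
    # listens on are reachable directly on the device
    network_mode: host
```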

Named volumes

With multicontainer fleets, balena supports the use of named volumes, a feature that expands on the persistent storage functionality used by older versions of balenaOS. Named volumes can be given arbitrary names and can be linked to a directory in one or more containers. As long as every release of the fleet includes a docker-compose.yml and the volume name does not change, the data in the volume will persist across updates.

Use the volumes field of the service to link a directory in your container to your named volume. The named volume should also be specified at the top level of the docker-compose.yml:
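A minimal sketch, assuming a service named example and a named volume called resin-data mounted at /data inside the container:

```yaml
version: '2'
volumes:
  resin-data:
services:
  example:
    build: ./example
    volumes:
      # Link the named volume to a directory in the container;
      # data here persists across updates as long as the volume
      # name stays the same
      - 'resin-data:/data'
```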

For devices upgraded from older versions of balenaOS to v2.12.0 or higher, a link will automatically be created from the /data directory of the container to the resin-data named volume, using the same volume-linking mechanism described above. This ensures fleet behavior will remain consistent across host OS versions. One notable difference is that accessing this data via the host OS is done at /var/lib/docker/volumes/<FLEET ID>_resin-data/_data, rather than the /mnt/data/resin-data/<FLEET ID> location used with earlier host OS versions.

balena does not support the use of bind mounts at this time, aside from the ones which are provided by feature labels.

Core dumps for containers

By default, balenaOS v2.113.31 and later will not generate core dump files when a container crashes. This prevents a buggy container that is crash looping from filling up all available storage space on a device.

Core dumps are files with the contents of the memory used by a process at the time it crashes. They are a relatively advanced troubleshooting tool, particularly useful for low-level debugging of programs written in languages that compile to native code. For example, a tool like the gdb debugger can read a core dump and provide a stack trace showing the sequence of function calls that led to the crash.

If you need core dumps, you can easily enable them by editing your docker-compose.yml and setting ulimits.core: -1 for the desired services. For example:
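A minimal sketch, assuming a service named main:

```yaml
version: '2'
services:
  main:
    build: ./main
    ulimits:
      # -1 removes the size limit on core dump files,
      # re-enabling core dumps for this service
      core: -1
```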

Labels

In addition to the settings above, there are some balena-specific labels that can be defined in the docker-compose.yml file. These provide access to certain bind mounts and environment variables without requiring you to run the container as privileged. You can find a detailed list of these labels on the docker-compose.yml fields page.

Container requirements


An additional set of labels ensures device compatibility for running a service. For example, before updating to a new release, it may be desirable to ensure that the device is running a specific version of the Supervisor or has a specific version of the NVIDIA Tegra Linux Driver Package (L4T).

The following set of requirement labels is enforced by the Supervisor. Each service may define one or more requirements; if any requirement is not met for a non-optional service, the release is rejected and the device makes no changes for that release.

| Label | Description | Valid from Supervisor |
| --- | --- | --- |
| io.balena.features.requires.sw.supervisor | Device Supervisor version (specified as a version range) | 10.16.17 |
| io.balena.features.requires.sw.l4t | L4T version (specified as a version range) | 10.16.17 |
| io.balena.features.requires.hw.device-type | The device type as given by BALENA_MACHINE_NAME | 11.1.0 |
| io.balena.features.requires.arch.sw | The architecture as given by BALENA_ARCH | 14.10.11 |
| io.balena.features.requires.sw.balena-os | balenaOS version (specified as a version range) | 17.4.0 |
| io.balena.features.requires.sw.linux | Linux kernel version (specified as a version range) | 17.4.0 |

For example, the following composition defines requirements on the Supervisor and L4T version for the first service, and on the device type and architecture for the second service.
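A hedged sketch of such a composition; the version ranges, device type, and architecture values below are illustrative assumptions:

```yaml
version: '2'
services:
  first-service:
    build: ./first
    labels:
      # Require a minimum Supervisor version and a specific L4T release
      io.balena.features.requires.sw.supervisor: '>=14.0.0'
      io.balena.features.requires.sw.l4t: '32.2'
  second-service:
    build: ./second
    labels:
      # Require a particular device type and architecture
      io.balena.features.requires.hw.device-type: jetson-nano
      io.balena.features.requires.arch.sw: aarch64
```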

Here's an example of how to specify an OS and kernel requirement:
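A hedged sketch; the version ranges below are illustrative assumptions:

```yaml
version: '2'
services:
  main:
    build: ./main
    labels:
      # Require minimum balenaOS and Linux kernel versions
      io.balena.features.requires.sw.balena-os: '>=4.0.0'
      io.balena.features.requires.sw.linux: '>=5.10'
```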

Optional containers

By default, when a container requirement is not met, none of the services are deployed to the device. However, in a multicontainer release, it is possible to skip only the services whose requirements are not met, while the other services are deployed as normal. To do so, use the io.balena.features.optional: 1 label to indicate which services should be considered optional.

In the docker-compose.yml file, add the io.balena.features.optional: 1 label to the labels list for each service you wish to mark as optional. In the following example, even if the first-service requirements fail, the second-service service will still be deployed.
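A minimal sketch; the L4T requirement on first-service is an illustrative assumption:

```yaml
version: '2'
services:
  first-service:
    build: ./first
    labels:
      # If the requirement below is not met, only this optional
      # service is skipped; the rest of the release still deploys
      io.balena.features.optional: '1'
      io.balena.features.requires.sw.l4t: '32.2'
  second-service:
    build: ./second
```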
