Handling of multiple containers in TorizonOS

Hello,

I have some questions about handling multiple Docker containers in a self-made Torizon OS image built with TorizonCore Builder (TCB).

For my project I have multiple containers for the different lifecycle stages of the device.
Some of the containers should be enabled during production for testing purposes; the rest should be activated after testing is finished, at which point the test containers will be disabled.

I have bundled all my containers with TCB and added them to my image. The problem is that all containers now start on device power-on, and killing a container seems to have no effect after a reboot.

My question is: how can I select which containers are active on a newly installed image, and how do I make changes to running containers persist across reboots?

Best regards

Hi, I’m the Toradex AI Assistant.

I might make mistakes. Our support team will step in if needed. If I answered your question, please tick the “Solution” checkbox. For follow-up questions to me, please use @ToradexAI in your post.



Hello!

Thank you for your detailed question about managing multiple Docker containers with distinct lifecycles in a custom Torizon OS image built with TorizonCore Builder (TCB).

1. Selecting Which Containers Are Active at Boot

When you pre-provision containers as part of your TorizonCore image, they are configured to start automatically with the Docker daemon at boot (Torizon OS Architecture). This is why all bundled containers launch on power-on.

To control which containers start by default, you can use the following approaches:

  • Multiple Docker Compose Files:
    Define separate Docker Compose files for the different lifecycle stages (e.g., docker-compose.production.yml for the production/testing stage and docker-compose.runtime.yml for normal operation). Include only the relevant containers in each file.

  • Selective Deployment via TorizonCore Builder:
    When building your Torizon OS image with TorizonCore Builder, provision only the set of containers (and their Compose file) you want enabled for the initial install. After testing, you can deploy an updated Compose file (or update container configuration) to change which ones start on boot (Pre-Provisioning Docker Containers).

  • Disabling Containers Post-Deployment:
    Containers are controlled by docker-compose files stored in /etc/docker/compose (which is a persistent, writable directory). To stop specific containers from starting on reboot, you can:

    1. SSH into the device.
    2. Run docker-compose down to stop the currently running containers.
    3. Edit or replace the active docker-compose YAML file in /etc/docker/compose, removing the entries of the test containers.
    4. Run docker-compose up -d to start the set you now want active (a sketch of this workflow follows below).
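
A minimal sketch of that workflow, assuming the compose directory named above (the compose file names are placeholders for your own files):

```sh
# On the device, over SSH:
cd /etc/docker/compose

# Stop the currently running set of containers:
docker-compose down

# Swap in the compose file for the stage you want active:
sudo cp docker-compose.production.yml docker-compose.yml

# Start the newly selected set in the background:
docker-compose up -d
```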

2. Persistence of Container States & Configurations

In Torizon OS, the root filesystem is read-only, but /etc and /var are writable and preserved during OTA or manual upgrades (Torizon OS Architecture).

  • Any changes you make to Docker Compose files in /etc/docker/compose will persist across reboots.
  • Simply stopping or killing a running container does not persist after a reboot, since the Compose file is re-applied at boot and every container it lists is started again.
  • To persistently disable a container:
    Remove or comment out its entry in the active Compose file, then restart Docker or reboot (see the sketch below).
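
For example, a sketch with a made-up test-runner service (service and image names are placeholders):

```sh
# The active compose file might look like this before the change:
#
#   services:
#     app:
#       image: myregistry/app:1.0
#     test-runner:              # <- remove or comment out this block
#       image: myregistry/test-runner:1.0
#
# After editing the file, apply the change so it survives reboots:
docker-compose down
docker-compose up -d   # "test-runner" is no longer started
```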

3. Best Practices

  • For production vs. test:
    Preload only production containers in your default image. Add test containers only via a temporary Compose file during test procedures, then remove them from /etc/docker/compose before shipping or entering runtime (see the TCB sketch after this list).

  • For update flexibility:
    Consider using Torizon OTA or remote updates to manage Compose file changes post-deployment, ensuring you can enable/disable containers in the field.
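
On the TCB side, a sketch of preloading only the production set (file and directory names are placeholders, and the exact combine arguments can vary between TCB versions, so check torizoncore-builder combine --help):

```sh
# Bundle the container images referenced by the production compose file:
torizoncore-builder bundle docker-compose.production.yml \
    --bundle-directory bundle

# Combine the bundle with an unpacked base image into a new output image
# (argument layout may differ by TCB version; see --help):
torizoncore-builder combine --bundle-directory bundle \
    torizon-core-image/ output-image/
```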


Feel free to ask for a sample workflow or further details on managing lifecycle stages!
Best regards,
Toradex AI Assistant.


Hello,

After some further tests I found the docker-compose file in /var/sota/storage/docker-compose/, not in /etc/docker/compose. Is this a change between Torizon OS 6 (the one I am currently running) and Torizon OS 7, or just a mistake of the AI?

Switching between compose files after testing via SSH is a workable solution for me, but is it possible to add multiple bundles, for example a production and a testing bundle, to the image via TCB, or do I need to add the different docker-compose files manually?

Best regards

Greetings @aigF,

First of all:

After some further tests I found the docker-compose file in /var/sota/storage/docker-compose/, not in /etc/docker/compose. Is this a change between Torizon OS 6 (the one I am currently running) and Torizon OS 7, or just a mistake of the AI?

The AI is mistaken; on Torizon OS we have always stored the docker-compose file in /var/sota/storage/docker-compose/.

Now, as for your use case: if I understand correctly, you have several “sets” of containers, and the set that should be run on a device depends on where in its lifecycle that device is. Is that correct?

If so, let me start by stating the facts.

  • Torizon OS currently manages only one docker-compose file at a time, located at /var/sota/storage/docker-compose/.
    • When you add your docker-compose file with TorizonCore Builder, this is the location it gets added to.
  • This docker-compose file gets started automatically on boot via the systemd docker-compose.service.
    • If you look at this service, it just executes a generic docker-compose -p torizon up -d --remove-orphans, which basically attempts to start every container specified by the single compose file (you can verify this on your device; see the snippet below).
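
You can check both of these directly on the device with standard tooling:

```sh
# Show the systemd unit that starts the containers at boot:
systemctl cat docker-compose.service

# The single compose file that Torizon OS manages:
ls -l /var/sota/storage/docker-compose/
```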

Now you’ve already listed some potential solutions:

  • Changing/updating the docker-compose file as the device changes lifecycle states, so that the correct set of containers is always used.
  • Changing the docker-compose.service to either point to a different compose file or start only a subset of the containers listed in your compose file (a sketch of this follows below).
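
For the second option, one possible approach is a standard systemd drop-in that overrides the ExecStart line. This is only a sketch: the alternate compose file name below is a placeholder, and you should check the unit’s actual ExecStart and binary path with systemctl cat docker-compose.service first.

```sh
# Override which compose file the service starts at boot (illustrative):
sudo mkdir -p /etc/systemd/system/docker-compose.service.d
sudo tee /etc/systemd/system/docker-compose.service.d/override.conf <<'EOF'
[Service]
ExecStart=
# Adjust the docker-compose path to match the original unit's ExecStart:
ExecStart=/usr/bin/docker-compose -p torizon -f /var/sota/storage/docker-compose/docker-compose.testing.yml up -d --remove-orphans
EOF
sudo systemctl daemon-reload
```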

Now my question here is: how do you plan to make these changes on the device once it’s deployed in production?

Are you using, or do you plan to use, our Torizon Cloud updates? With these you could easily update the device’s docker-compose file when appropriate.

Or do you have some other way to access and change things on these devices at scale?

Best Regards,
Jeremias

Hi @jeremias.tx

This is pretty much correct, yes.

After my tests yesterday, I figured as much.

My solution for this problem will most likely be to ship the production image with all containers and to rewrite the docker-compose to testing spec right afterwards.
After a successful test, the docker-compose will be shifted to production, either through SSH or through the test containers themselves.
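
Roughly, the swap will look something like this (file names are placeholders for my own compose files):

```sh
# In-house switch from testing to production spec:
cd /var/sota/storage/docker-compose
docker-compose -p torizon down
sudo cp /path/to/docker-compose.production.yml docker-compose.yml
docker-compose -p torizon up -d --remove-orphans
```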

Torizon Cloud should not be needed for this problem, since the switch from testing to production happens entirely in-house.

Thank you for your help.
Best regards

since the switch from testing to production happens entirely in-house.

Okay, I see. I thought you were planning to change the set of containers while these devices were deployed out in the field. But if they’re still in-house, that’s more manageable.

Glad I could help clarify.

Best Regards,
Jeremias