How to control pre-provisioned containers in TorizonCore

I am pre-provisioning Docker containers in my base image build, and I have noticed that these containers start automatically on boot. I want to control these containers with systemd so I can decide when to start and stop them.
How can this be done?

Hi @hexagun

Welcome to Toradex community.

Please refer to the attached links.
1. Managing Docker containers
2. Working with multi-containers

Just as an additional bit of information: the reason the containers auto-start after provisioning is a systemd service that we ship in TorizonCore by default. If you wish to modify this service, its name is docker-compose.

Best Regards,

I looked for the service and found it under /usr/lib/systemd/.

apalis-imx8-06852506:/etc# systemctl status docker-compose
● docker-compose.service - Docker Compose service with docker compose
     Loaded: loaded (/usr/lib/systemd/system/docker-compose.service; enabled; vendor preset: enabled)
     Active: active (exited) since Tue 2022-02-15 04:45:32 UTC; 4min 0s ago
    Process: 1086 ExecStart=/usr/bin/docker-compose -p torizon up -d --remove-orphans (code=exited, status=0/SUCCESS)
    Process: 1087 ExecStartPost=/usr/bin/rm -f /tmp/recovery-attempt.txt (code=exited, status=0/SUCCESS)
   Main PID: 1086 (code=exited, status=0/SUCCESS)

This service file is not located in /etc/, and I wonder if the isolate command will capture any changes made to it. According to The Isolate command, /etc changes are captured and some files/directories are ignored.

Will changes in /usr be captured, given that /usr is not mentioned in the ignored files/directories section of the isolate command?

Let me give you an overview of the system. I have a custom device provisioning container, which starts first on boot and provisions the device after receiving a response from a remote user. After my device is provisioned, my other containers should come up and start performing operations.

This service file is not located in /etc/, and I wonder if the isolate command will capture any changes made to it. According to The Isolate command, /etc changes are captured and some files/directories are ignored.

Your intuition here is correct. However, you can simply disable our default service and create your own in /etc, which will be picked up by isolate. I only mentioned our docker-compose.service as a reference for you to use.

Since you have a rather unique case where one container needs to run before the others, I would suggest creating and defining your own systemd services so you have better control over this process.
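As a minimal sketch of what that could look like: two units in /etc/systemd/system, where the application unit only starts after provisioning has completed. All unit names, compose-file paths, and the --exit-code-from service name below are hypothetical and just for illustration.

```ini
# /etc/systemd/system/provisioning.service (hypothetical)
[Unit]
Description=Device provisioning container
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Runs in the foreground and only succeeds once the provisioning
# container exits successfully
ExecStart=/usr/bin/docker-compose -f /home/torizon/provisioning.yml up --exit-code-from provisioner

[Install]
WantedBy=multi-user.target
```

```ini
# /etc/systemd/system/app-containers.service (hypothetical)
[Unit]
Description=Application containers
After=provisioning.service
Requires=provisioning.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/docker-compose -f /home/torizon/app.yml up -d

[Install]
WantedBy=multi-user.target
```

With Type=oneshot, systemd waits for ExecStart to finish before considering the unit started, so the After=/Requires= ordering on the second unit guarantees the application containers are only brought up once provisioning has succeeded. Both units would be enabled with systemctl enable so they start on boot.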

Best Regards,

I am just starting out with development and am new to the space. I have created systemd services for the other containers in /etc, and I can disable the default docker-compose.service in development. But I am not sure how to do this on production systems while also using Torizon OTA.

Simply put: is there any way to disable the docker-compose.service through TorizonCore Builder and the isolate command?

Actually wait, this detail changes things. To be clear, are you planning to use our OTA update system to update the containers on your devices?

If this is the case, there are some extra complications you must consider. The way our OTA updates currently work is that they manage and update the single docker-compose.yml file at /var/sota/storage/docker-compose/docker-compose.yml. This is also the location TorizonCore Builder pre-provisions the file to, and where docker-compose.service looks when it starts containers on boot.

If you have multiple docker-compose.yml files, then our OTA will not update them all, though the ability to manage/update multiple compose files is something that is being discussed internally.

With all that said, maybe let's back up and think about how best to handle this. Could you describe at a high level what you want to do? Previously you said:

  • You have a device provisioning container that will start on boot.
  • When this provisioning container receives a remote response, it will provision the device.
  • After the device is provisioned successfully this will trigger the rest of your containers to start.

Are there two sets of docker-compose files here, one for the provisioning and one for the other containers?

Do you want to be able to update both sets of containers with OTA?

Best Regards,



Yes, I have referred to the design of AI at the Edge, Pasta Detection Demo with AWS. That demo has multiple service files. Although all the containers start together there, I am looking at the same architecture with OTA and different trigger times.


This is one of the architectures we are considering, the other being to start all containers together and use scripts inside them to communicate and trigger actions.

I think with the current implementation of Torizon OTA, multiple compose-file updates are not yet possible.

Alright, I believe I have the full picture now. Thank you for clarifying your use-case here.

As I said previously, multiple compose files will not be update-able with the current implementation of our OTA. This might be something you'll have to work around for the time being, at least until a more proper solution is in place on our side for managing multiple compose files.
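Under that single-file constraint, one possible workaround (a sketch only; the image and service names below are hypothetical) is the alternative you mentioned yourself: keep all containers in the one compose file that OTA manages, and gate the application container on the provisioning container's health. The depends_on condition form shown here is supported in compose file format 2.x, which the docker-compose v1 binary on the device understands:

```yaml
# Hypothetical single docker-compose.yml managed by Torizon OTA.
# The app container only starts once the provisioner reports healthy.
version: "2.4"
services:
  provisioner:
    image: myregistry/provisioner:latest   # hypothetical image
    healthcheck:
      # Assumes the provisioning logic touches this file when done
      test: ["CMD", "test", "-f", "/var/run/provisioned"]
      interval: 10s
      retries: 30
  app:
    image: myregistry/app:latest           # hypothetical image
    depends_on:
      provisioner:
        condition: service_healthy
```

Since this keeps everything in the single file at /var/sota/storage/docker-compose/docker-compose.yml, both sets of containers would remain updatable through OTA.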

But back to your previous question of disabling docker-compose.service. While the service file is in /usr/lib/systemd, when a systemd service is enabled a symlink is created in /etc. Therefore, if you use systemctl disable, that symlink is removed. And since the symlink is in /etc, the change will get picked up by the isolate command.
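On the device, the actual command would simply be `sudo systemctl disable docker-compose.service`. To illustrate the symlink mechanics described above, the snippet below simulates them in a scratch directory (the exact wants directory under /etc is an assumption; verify the real symlink path on your device, e.g. via the systemctl status output):

```shell
# Simulate systemctl enable/disable symlink handling in a temp directory.
# On TorizonCore the real paths would be /usr/lib/systemd/system and
# (assumed) /etc/systemd/system/multi-user.target.wants.
tmp=$(mktemp -d)
mkdir -p "$tmp/usr/lib/systemd/system" "$tmp/etc/systemd/system/multi-user.target.wants"
touch "$tmp/usr/lib/systemd/system/docker-compose.service"

# "systemctl enable" creates a symlink under /etc pointing at the unit in /usr:
ln -s "$tmp/usr/lib/systemd/system/docker-compose.service" \
      "$tmp/etc/systemd/system/multi-user.target.wants/docker-compose.service"

# "systemctl disable" removes only that symlink; the unit in /usr is untouched:
rm "$tmp/etc/systemd/system/multi-user.target.wants/docker-compose.service"
```

Because only the /etc side changes, the disable operation is exactly the kind of modification isolate is designed to capture.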

Best Regards,

Thank you for that.

Glad I could help. In the meantime keep a lookout for future feature additions that may make multiple docker-compose file management possible with our OTA update platform.

Best Regards,
