Torizon OTA and project structure

In our project we are using Torizon and Docker containers. Our code is in Python. Currently I have a bind-mounted directory, available to all the containers, which contains the code; the containers execute the code from there.
This is really helpful, as the code can be updated once via scp without having to rebuild the images.
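
For context, our current setup looks roughly like the following docker-compose fragment (service names and paths here are made up for illustration):

```yaml
services:
  service-a:
    image: service-a:latest
    volumes:
      - /home/torizon/app-code:/opt/app   # shared bind-mounted code directory
    command: python /opt/app/service_a/main.py
  service-b:
    image: service-b:latest
    volumes:
      - /home/torizon/app-code:/opt/app   # same host directory, mounted into every container
    command: python /opt/app/service_b/main.py
```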

I have been evaluating the Torizon OTA platform, and there the recommended way to deploy application updates is via a new docker-compose file. As in our scenario the code does not reside in the containers, I was wondering if there is a way to do this. One option would be to keep a copy of the code in each of the containers, but this would mean multiple copies of the code. Is there a better way to achieve this?

Greetings @nkj,

Before I provide more in-depth advice perhaps you can tell me a bit more about your setup/architecture decisions.

  • What’s the reason/use-case for having multiple separate containers that bind-mount the same code?
  • If the code really needs to reside outside of the container, have you considered OS updates? An OS update would update the filesystem, including your Python code that lives outside of the containers.

Without any further details/knowledge about your setup here’s what I would initially recommend.

I’d still go for keeping the code inside the containers if possible. A smarter way to do this would be to create a “base” container image that contains your code. Your end-application containers can then base themselves on this image, inheriting the code from it. When the code updates, you can simply script the rebuild of all the containers in sequence.
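
As a rough sketch of that base-image approach (the image names, paths, and Python version here are all illustrative, not a prescribed layout):

```dockerfile
# Base image holding the shared Python code.
# Built once per code release, e.g.: docker build -t myregistry/app-code:1.0 .
FROM python:3.11-slim
COPY ./code /opt/app

# --- Then, in each end-application's own Dockerfile ---
# FROM myregistry/app-code:1.0
# RUN pip install --no-cache-dir redis
# CMD ["python", "/opt/app/service_a/main.py"]
```

When the code changes, you rebuild the base image, rebuild the dependent images, and bump the tags in your docker-compose file, which gives the OTA system a new compose file to deploy.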

The reason I advocate for keeping the containers and code together is that it’s easier to manage in terms of updates. If the code lives outside of the containers then this gets harder to manage. For example, imagine you start with container version A which bind-mounts in code version A. Say some time goes by and now you have container version X and code version H. Suddenly container X no longer works with code H for some reason. You’d need a compatibility table somewhere just to keep track of which versions of the code are compatible with which versions of the container. Whereas if you keep them bundled, you can guarantee that container X ships with a compatible version of the code inside that specific container version.

Another reason is that if the code is outside of the container, the only way to update it with OTA would be an OS update. An OS update requires a full reboot to apply, while container updates do not. Depending on your use-case this may not be desirable. Furthermore, if you need to do actual OS updates, you would need to track/differentiate the versions of the OS that just change the code from the versions of the OS that actually change something about the OS.

I want to be clear that I’m not saying your current method doesn’t/can’t work. I just see it complicating things further down the road in terms of versioning and maintenance.

Best Regards,

Hi Jeremias,

The different containers need to talk to each other, and for this we have an interface library using Redis. The applications running in the different containers also reference the same libraries for different functions.
The code doesn’t necessarily need to reside outside of the containers. We did this initially for testing, as it let us change things without having to rebuild the containers; just a restart of the containers was enough.
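
To illustrate, our shared interface library is roughly of this shape (the module, class, and channel names here are hypothetical; it assumes the redis-py package, with the Redis service reachable under the hostname `redis` on the container network):

```python
# Sketch of the shared Redis interface library used by all application
# containers. Names and envelope format are illustrative, not our exact code.
import json


def encode_message(sender: str, payload: dict) -> str:
    """Serialize a message envelope to JSON for publishing."""
    return json.dumps({"sender": sender, "payload": payload})


def decode_message(raw: str) -> dict:
    """Deserialize a received message envelope back into a dict."""
    return json.loads(raw)


class RedisInterface:
    """Thin wrapper each application container imports from the shared library."""

    def __init__(self, host: str = "redis", port: int = 6379):
        # Lazy import so the pure helpers above work without redis-py installed.
        import redis
        self._client = redis.Redis(host=host, port=port, decode_responses=True)

    def publish(self, channel: str, sender: str, payload: dict) -> None:
        self._client.publish(channel, encode_message(sender, payload))

    def subscribe(self, channel: str):
        pubsub = self._client.pubsub()
        pubsub.subscribe(channel)
        return pubsub  # iterate pubsub.listen() to receive messages
```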

Your idea of using a base image which contains the code, with the other container images based on it, should work for our use case, I think.

Nitish Jha

Glad I could help with a suggestion.

The idea of a common base container is a good one: it cuts down not just on duplication, but on memory and storage overhead as well. This is because Docker is generally smart enough to re-use resources (shared image layers) if it sees, for example, that image B and image C are both built on a common base image A.

If you need a reference, you can also see the graphic in this article: Debian Container for Torizon, which shows how we built up our Debian containers step by step, using common base containers to isolate features that are shared among multiple end containers.

Best Regards,