I am currently analyzing the update sizes for the application update and for the base OS, and I have some observations I would like the community to comment on.
As far as I understand, the base OS changes are transmitted to the device via the OTA server, and I see they amount to around 30-40 KB for changes in 16 files (4 to 5 line changes per file). I assume the complete file is part of the OTA in this case, not just the change, so the size seems okay.
For the application container, the OTA server only transmits the compose file, and `docker pull` is used on the device to pull the image changes. This is where I see a problem and would like suggestions on how to improve:
- A one-line change in each of 5 Python files results in an image layer of 9.6 MB.
- I also tried a single-line change in one file, and this likewise produced a 9.6 MB layer.
~10 MB is quite large for a single-line change. An update of this size over the device's modem takes 10 to 30 minutes, which is not ideal.
If somebody has had a similar experience and has suggestions for reducing the size, please let me know. In particular, are there any parameters I can use to reduce the size of the image update?
This is something that might be mitigated through the design of your container image.
The flow for a container/compose update goes something like this:
- Download the new compose file
- `docker-compose pull` to get the new images
- `docker-compose down` to stop the old containers
- `docker-compose up` to start the new containers
- `docker system prune -a --force` to remove unused containers and container images
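The steps above can be sketched as a small script. The compose file path here is a hypothetical placeholder for wherever your OTA client writes the new file:

```shell
#!/bin/sh
# Sketch of the container update flow described above (requires a Docker daemon
# and docker-compose on the device; the compose file path is a placeholder).
update_containers() {
    compose_file="${1:?usage: update_containers <compose-file>}"
    docker-compose -f "$compose_file" pull      # downloads only the changed layers
    docker-compose -f "$compose_file" down      # stop the old containers
    docker-compose -f "$compose_file" up -d     # start the new containers
    docker system prune -a --force              # remove unused containers/images
}
```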
During the pull, Docker only downloads what's needed. Meaning if your old container image is very similar to the new container image, then it only needs to download the layers that differ. Now here's how you might be able to use this.
Let’s say you have a simple container image built from a Dockerfile like this (the base image and package name here are just placeholders):

```dockerfile
FROM debian:bookworm
RUN apt-get install -y some-package
COPY src/ src/
```
Let’s say the `COPY` step is where you copy your entire Python source code into the container image. This `COPY` step constitutes its own layer in the image. However, as you observed, making any change to a layer causes that layer to be “different”. It doesn’t matter whether it’s a change to 5 files or to 1 line in one file: if the content changes even slightly, the layer is different. This of course means you need to pull down the entire layer despite a small change. Furthermore, this invalidates every layer after it as well.
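To illustrate why any change invalidates a layer: layers are content-addressed, i.e. identified by a SHA-256 digest of their bytes, so even a one-character difference produces a completely different digest. A rough sketch of the idea with `sha256sum`:

```shell
# Hash two near-identical "source files"; a single extra character
# changes the digest entirely, just as it would for an image layer.
a=$(printf 'print("hello")\n' | sha256sum | cut -d' ' -f1)
b=$(printf 'print("hello!")\n' | sha256sum | cut -d' ' -f1)
echo "old layer digest: $a"
echo "new layer digest: $b"
[ "$a" != "$b" ] && echo "digests differ: the whole layer must be re-downloaded"
```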
Something we might be able to do here is break up the steps like this:

```dockerfile
RUN apt-get install -y some-package
COPY src/ui/ src/ui/
COPY src/backend/ src/backend/
```
So now in my example, instead of using a single `COPY` for the entire source code, I broke it up into two `COPY` instructions. If you only change a file in the backend, you only invalidate that layer, and the download will of course be smaller than before with a single `COPY`.
To summarize, I guess the lesson here is: in your Dockerfile, put the steps that are most likely to change towards the end, and try to separate them from the steps that aren’t likely to change. By doing this you’ll keep the changed layers smaller.
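As a concrete sketch of that ordering (base image, package names, and paths here are hypothetical), rarely-changing dependencies go first and frequently-changing application code goes last:

```dockerfile
# Stable layers first: base image and system/Python dependencies.
FROM python:3.11-slim
RUN apt-get update && apt-get install -y --no-install-recommends some-package \
    && rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Frequently-changing layers last, split so that a backend change
# doesn't invalidate the UI layer (and vice versa).
COPY src/ui/ src/ui/
COPY src/backend/ src/backend/
```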
Though do keep in mind there is a downside to this approach. Each layer in your image carries some overhead, so overall the single `COPY` will produce a smaller container image than the two `COPY` instructions. That means the split image needs more flash storage, and the initial download of the entire container takes longer. This is something you will have to weigh.
So this is just one possible approach off the top of my head. Let me also run this by my team internally and see if there are any other ideas or thoughts.
Thanks a lot for the response.
Yeah, this makes sense. I was already looking at the layers and noticed that the Torizon plugin regenerates all the layers every time, which I think is needed for now.
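One way to check which layers actually change between builds is to compare the per-layer sizes and content digests of two consecutive images. A helper along these lines could show them (the `myapp:v1`-style image names are placeholders, and it requires a local Docker daemon):

```shell
# Print per-layer sizes and the content digests of an image.
# Image names used below are hypothetical placeholders.
show_layers() {
    image="${1:?usage: show_layers <image>}"
    docker history --format '{{.ID}}\t{{.Size}}\t{{.CreatedBy}}' "$image"
    docker image inspect --format '{{json .RootFS.Layers}}' "$image"
}
# Usage: run against two builds and diff the results:
#   show_layers myapp:v1 > v1.txt
#   show_layers myapp:v2 > v2.txt
#   diff v1.txt v2.txt
```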
Is it possible to update the `Dockerfile.release` file so that I can try to include the application in a more granular way, as you suggested?
I tried modifying the `Dockerfile.release`, but it is overwritten by the plugin. Maybe there is a way to avoid this?
Hmm, now here’s the tricky part. The IDE extensions weren’t exactly designed with this kind of OTA optimization in mind. I imagine this is something we’d need to work on.
Let me bring this up internally for discussion.