We are almost finished with our Qt application, which runs on the Verdin AM62 on custom hardware. The last time I built a production image (on Mar 11, 2025 at 10:25 am) the image was 293.73 MB compressed and 927 MB uncompressed. Today the same build is 453.53 MB compressed and 1.45 GB uncompressed.
Nothing has changed in the application sources, in the Dockerfiles (Dockerfile, Dockerfile.sdk, Dockerfile.debug), in the torizonPackage.json or in any file in the .vscode folder.
1.45 GB is far too much; the image needs to be smaller. What could have caused this massive increase in size? Is some update responsible for it?
This is urgent: a customer is waiting for the updated application, which is now too big.
Hi, I’m the Toradex AI Assistant. The Toradex Support team will follow up with you shortly!
Hello,
Thank you for reaching out with your question. While I don’t have specific documentation on troubleshooting image size increases, I can offer some general guidance on how to investigate and manage container image sizes, which might be helpful in your situation.
Analyze Image Layers: Use tools like docker history to inspect the layers of your container image. This can help identify which layers contribute most to the size (for example, see the commands after this list).
Optimize Dockerfile: Review your Dockerfile for opportunities to reduce size. This might include:
Combining RUN commands to minimize the number of layers.
Using multi-stage builds to keep only necessary artifacts in the final image.
Choosing a smaller base image if possible.
Remove Unnecessary Files: Ensure that temporary files and build dependencies are removed after they are no longer needed.
Regularly Update and Clean: Keep your images updated and remove unused images and containers with commands like docker system prune.
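As a rough illustration (the image name and tag below are placeholders, not your actual image), the inspection and cleanup could look like this:

# Show each layer's size and the command that created it
docker history --no-trunc --format "{{.Size}}\t{{.CreatedBy}}" my-app:latest
# Remove stopped containers, dangling images and unused build cache
docker system prune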
These steps can help you manage and potentially reduce the size of your container images. If you have any more questions or need further assistance, feel free to ask.
So I inspected the layers with the docker history tool, as the AI suggested, and diffed the output of the old build (2025-03-11) against the new one (today, 2025-04-09).
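Roughly, the comparison can be done like this (the image names and tags here are placeholders for the real build tags):

# Save the layer listing of each build and compare them
docker history --no-trunc my-app:old > history-old.txt
docker history --no-trunc my-app:new > history-new.txt
diff history-old.txt history-new.txt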
I noticed that in the new build the following layers were added:
<missing> 11 months ago ENV DISPLAY=:0 0B buildkit.dockerfile.v0
<missing> 11 months ago ENV WAYLAND_DISPLAY=wayland-0 0B buildkit.dockerfile.v0
<missing> 11 months ago ENV XDG_RUNTIME_DIR=/tmp/1000-runtime-dir 0B buildkit.dockerfile.v0
<missing> 11 months ago ENV WAYLAND_USER=torizon 0B buildkit.dockerfile.v0
<missing> 11 months ago RUN /bin/sh -c usermod -a -G video,render to… 0B buildkit.dockerfile.v0
<missing> 11 months ago RUN /bin/sh -c apt-get -y update && apt-get … 712MB buildkit.dockerfile.v0
This is probably why the new image is so much bigger, but why was it added?
It seems to be related to the variable torizon_gpu in settings.json, which is set to -am62. I can't remember whether this variable was set a month ago or not. Could it be that this variable is now set due to an update of the VS Code Torizon extension, and that it causes the installation of all these packages?
I cleared that variable ("torizon_gpu": "") and built a production image again. The size is now the same as a month ago.
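For reference, the relevant entry in .vscode/settings.json looks roughly like this (VS Code's settings.json tolerates comments; the rest of the file is omitted):

// Set by the extension for AM62 GPU acceleration, which pulls in the large
// AM62-specific packages:
//   "torizon_gpu": "-am62"
// Cleared, so the generic CPU containers are used:
"torizon_gpu": ""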
Question: How can I prevent this variable from being set, or can it always safely be cleared?
The difference you see is because the extension now uses packages that are specific to the AM62 SoC, including hardware acceleration with the GPU.
However, if you are using a variant of the Verdin AM62 without a GPU, or you don't need GPU acceleration, you can safely use the CPU implementation as before.
To ensure that you are not using the AM62-specific containers as a base, you can edit your Dockerfiles.
The following changes would be important:
Remove the ${GPU} entries:
In all FROM statements you will see <container>${GPU}:<tag>.
To use the version that is not GPU-specific, change this to <container>:<tag> (a short sketch follows after these steps).
Remove logic that is specific to the AM62:
If you have any [ "${GPU}" = "-am62" ] checks in your Dockerfiles, remove them and keep the behavior that would result from an empty GPU string.
By default this is not present in our Qt template, so if you did not add it yourself, no action is needed here.
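A minimal sketch of what the change could look like, assuming a hypothetical base image torizon/qt6-wayland and placeholder build arguments; use whatever base image and tag your Dockerfile already references:

# Build arguments; the values here are illustrative placeholders
ARG BASE_VERSION=latest
ARG GPU=

# Before: with GPU set to "-am62" this resolves to the AM62/GPU-specific base image
# FROM torizon/qt6-wayland${GPU}:${BASE_VERSION} AS deploy

# After: no ${GPU} suffix, so the generic CPU-only base image is used
FROM torizon/qt6-wayland:${BASE_VERSION} AS deploy

# Any AM62-specific logic such as
#   RUN if [ "${GPU}" = "-am62" ]; then apt-get update && apt-get install -y <gpu packages>; fi
# can be removed entirely, keeping whatever would run with an empty GPU string.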
I have seen builds get stuck at seemingly random points in the past, and I think that is what is happening here; it is likely not directly related to these changes.
Can you try to stop the build with Ctrl+C in the terminal and start it again?