Building Container with torizon/debian:3-bookworm failed

Hello

We built an application a while back and wanted to update it, but now the build fails. I did not change anything on my side (though I may have deleted some local Docker images).

I build with: “Run Task” → “create-production-image” in VSCode

Error log:

=> [climateprogrammerapp internal] load .dockerignore                                                                                                           0.0s
 => => transferring context: 117B                                                                                                                                0.0s
 => [climateprogrammerapp internal] load build definition from Dockerfile                                                                                        0.0s
 => => transferring dockerfile: 1.61kB                                                                                                                           0.0s
 => [climateprogrammerapp internal] load metadata for docker.io/torizon/debian:3.3-bookworm                                                                      0.4s
 => [climateprogrammerapp 1/9] FROM docker.io/torizon/debian:3.3-bookworm@sha256:0ce675b0a48960560e2add9299f60b025271caba22a371e05c1f28029b1f6b28                0.0s
 => [climateprogrammerapp internal] load build context                                                                                                           0.1s
 => => transferring context: 148.51kB                                                                                                                            0.1s
 => CACHED [climateprogrammerapp 2/9] RUN apt-get -q -y update &&     apt-get -q -y install     python3-minimal     python3-pip     python3-venv  libgpiod2:arm  0.0s
 => CACHED [climateprogrammerapp 3/9] RUN python3 -m venv /home/torizon/.venv --system-site-packages                                                             0.0s
 => CACHED [climateprogrammerapp 4/9] COPY requirements-release.txt /requirements-release.txt                                                                    0.0s
 => ERROR [climateprogrammerapp 5/9] RUN . /home/torizon/.venv/bin/activate &&     pip3 install --upgrade pip && pip3 install -r requirements-release.txt &&    58.2s
------                                                                                                                                                                
 > [climateprogrammerapp 5/9] RUN . /home/torizon/.venv/bin/activate &&     pip3 install --upgrade pip && pip3 install -r requirements-release.txt &&     rm requirements-release.txt:                                                                                                                                                      
2.222 Requirement already satisfied: pip in /home/torizon/.venv/lib/python3.11/site-packages (23.0.1)                                                                 
3.218 Collecting pip                                                                                                                                                  
3.605   Downloading pip-24.0-py3-none-any.whl (2.1 MB)                                                                                                                
3.891      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 8.1 MB/s eta 0:00:00
4.328 Installing collected packages: pip
4.332   Attempting uninstall: pip
4.374     Found existing installation: pip 23.0.1
5.799     Uninstalling pip-23.0.1:
6.349       Successfully uninstalled pip-23.0.1
13.35 Successfully installed pip-24.0
16.17 Collecting gpiod (from -r requirements-release.txt (line 1))
16.56   Downloading gpiod-2.1.3.tar.gz (53 kB)
16.65      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 53.1/53.1 kB 698.1 kB/s eta 0:00:00
16.75   Installing build dependencies: started
27.40   Installing build dependencies: finished with status 'done'
27.42   Getting requirements to build wheel: started
28.58   Getting requirements to build wheel: finished with status 'done'
28.60   Preparing metadata (pyproject.toml): started
29.83   Preparing metadata (pyproject.toml): finished with status 'done'
30.07 Collecting future (from -r requirements-release.txt (line 2))
30.10   Downloading future-1.0.0-py3-none-any.whl.metadata (4.0 kB)
30.24 Collecting iso8601 (from -r requirements-release.txt (line 3))
30.26   Downloading iso8601-2.1.0-py3-none-any.whl.metadata (3.7 kB)
30.36 Collecting pexpect (from -r requirements-release.txt (line 4))
30.38   Downloading pexpect-4.9.0-py2.py3-none-any.whl.metadata (2.5 kB)
30.47 Collecting ptyprocess (from -r requirements-release.txt (line 5))
30.49   Downloading ptyprocess-0.7.0-py2.py3-none-any.whl.metadata (1.3 kB)
30.68 Collecting pyserial (from -r requirements-release.txt (line 6))
30.70   Downloading pyserial-3.5-py2.py3-none-any.whl.metadata (1.6 kB)
31.00 Collecting PyYAML (from -r requirements-release.txt (line 7))
31.02   Downloading PyYAML-6.0.1.tar.gz (125 kB)
31.06      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 125.2/125.2 kB 6.1 MB/s eta 0:00:00
31.71   Installing build dependencies: started
46.20   Installing build dependencies: finished with status 'done'
46.20   Getting requirements to build wheel: started
52.66   Getting requirements to build wheel: finished with status 'done'
52.67   Preparing metadata (pyproject.toml): started
54.21   Preparing metadata (pyproject.toml): finished with status 'done'
54.32 Downloading future-1.0.0-py3-none-any.whl (491 kB)
54.38    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 491.3/491.3 kB 12.5 MB/s eta 0:00:00
54.40 Downloading iso8601-2.1.0-py3-none-any.whl (7.5 kB)
54.43 Downloading pexpect-4.9.0-py2.py3-none-any.whl (63 kB)
54.46    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 63.8/63.8 kB 4.8 MB/s eta 0:00:00
54.48 Downloading ptyprocess-0.7.0-py2.py3-none-any.whl (13 kB)
54.51 Downloading pyserial-3.5-py2.py3-none-any.whl (90 kB)
54.54    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 90.6/90.6 kB 6.5 MB/s eta 0:00:00
54.60 Building wheels for collected packages: gpiod, PyYAML
54.61   Building wheel for gpiod (pyproject.toml): started
55.83   Building wheel for gpiod (pyproject.toml): finished with status 'error'
55.87   error: subprocess-exited-with-error
55.87   
55.87   × Building wheel for gpiod (pyproject.toml) did not run successfully.
55.87   │ exit code: 1
55.87   ╰─> [26 lines of output]
55.87       running bdist_wheel
55.87       running build
55.87       running build_py
55.87       creating build
55.87       creating build/lib.linux-armv7l-cpython-311
55.87       creating build/lib.linux-armv7l-cpython-311/gpiod
55.87       copying gpiod/internal.py -> build/lib.linux-armv7l-cpython-311/gpiod
55.87       copying gpiod/__init__.py -> build/lib.linux-armv7l-cpython-311/gpiod
55.87       copying gpiod/line_settings.py -> build/lib.linux-armv7l-cpython-311/gpiod
55.87       copying gpiod/edge_event.py -> build/lib.linux-armv7l-cpython-311/gpiod
55.87       copying gpiod/line_request.py -> build/lib.linux-armv7l-cpython-311/gpiod
55.87       copying gpiod/line_info.py -> build/lib.linux-armv7l-cpython-311/gpiod
55.87       copying gpiod/version.py -> build/lib.linux-armv7l-cpython-311/gpiod
55.87       copying gpiod/info_event.py -> build/lib.linux-armv7l-cpython-311/gpiod
55.87       copying gpiod/exception.py -> build/lib.linux-armv7l-cpython-311/gpiod
55.87       copying gpiod/line.py -> build/lib.linux-armv7l-cpython-311/gpiod
55.87       copying gpiod/chip.py -> build/lib.linux-armv7l-cpython-311/gpiod
55.87       copying gpiod/chip_info.py -> build/lib.linux-armv7l-cpython-311/gpiod
55.87       running build_ext
55.87       building 'gpiod._ext' extension
55.87       creating build/temp.linux-armv7l-cpython-311
55.87       creating build/temp.linux-armv7l-cpython-311/gpiod
55.87       creating build/temp.linux-armv7l-cpython-311/gpiod/ext
55.87       creating build/temp.linux-armv7l-cpython-311/lib
55.87       arm-linux-gnueabihf-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -D_GNU_SOURCE=1 -Iinclude -Ilib -Igpiod/ext -I/home/torizon/.venv/include -I/usr/include/python3.11 -c gpiod/ext/chip.c -o build/temp.linux-armv7l-cpython-311/gpiod/ext/chip.o -Wall -Wextra -DGPIOD_VERSION_STR=\"2.1\"
55.87       error: command 'arm-linux-gnueabihf-gcc' failed: No such file or directory
55.87       [end of output]
55.87   
55.87   note: This error originates from a subprocess, and is likely not a problem with pip.
55.87   ERROR: Failed building wheel for gpiod
55.88   Building wheel for PyYAML (pyproject.toml): started
57.71   Building wheel for PyYAML (pyproject.toml): finished with status 'done'
57.72   Created wheel for PyYAML: filename=PyYAML-6.0.1-cp311-cp311-linux_armv7l.whl size=45361 sha256=f6b46df31ad2646c7985c404e9fcf77c02dd8df47d54e93aa547d8b2ba562105
57.72   Stored in directory: /root/.cache/pip/wheels/20/40/04/9edd5f1052f28aff139c0b315b3d5ad7ba893c93ccde03f1b4
57.73 Successfully built PyYAML
57.73 Failed to build gpiod
57.73 ERROR: Could not build wheels for gpiod, which is required to install pyproject.toml-based projects
------
failed to solve: process "/bin/sh -c . ${APP_ROOT}/.venv/bin/activate &&     pip3 install --upgrade pip && pip3 install -r requirements-release.txt &&     rm requirements-release.txt" did not complete successfully: exit code: 1
NativeCommandExitException: /home/pschenker/builds/ClimateProgrammerApp/.conf/createDockerComposeProduction.ps1:124:1
Line |
 124 |  docker compose build --build-arg IMAGE_ARCH=$imageArch $imageName
     |  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     | Program "docker" ended with non-zero exit code: 17.

It seems that arm-linux-gnueabihf-gcc is not installed in the container. Has it been in there at some point?

Can we check the Dockerfile of torizon/debian somewhere?

Thanks and best regards,
Philippe


Greetings @philschenker,

It looks like your base container image here is torizon/debian. The Dockerfile for this should be this one: torizon-containers/debian-docker-images/base/Dockerfile at bookworm · toradex/torizon-containers · GitHub

It’s a pretty bare-bones container image with minimal packages installed, so I’m not surprised it doesn’t have the compiler by default. That said, I’m not sure why this worked for you previously; something must have changed, but it’s not obvious to me what. I don’t think this specific container image ever had the compiler installed by default.

Maybe something changed on the Python wheel side, such that this wheel now needs to be built from source? Hard to say, though I assume just adding arm-linux-gnueabihf-gcc to your Dockerfile fixes things, right?
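For reference, a minimal sketch of what that Dockerfile change could look like (the package names here are assumptions, not taken from your actual Dockerfile; on an armhf Debian base the plain `gcc` package should provide the `arm-linux-gnueabihf-gcc` binary, and `python3-dev`/`libgpiod-dev` supply the headers the gpiod sdist compiles against):

```dockerfile
# Sketch only: package names are assumptions and should be verified
# against the actual torizon/debian image before use.
RUN apt-get -q -y update && \
    apt-get -q -y install \
        gcc \
        python3-dev \
        libgpiod-dev && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
```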

Best Regards,
Jeremias

Hi Jeremias!

Our application is a firmware flasher that uses JLink, driven by a Python script for automation. All we intended to do was update the firmware; to do that we needed to rebuild the container, and that blew up terribly. Even JLink can no longer be run as the torizon user. Since we didn’t change anything on our end, we suspect that something changed in the torizon/debian:3-bookworm container. I’m aware that Python libraries change constantly, but the fact that we can also no longer run JLink as user torizon is kind of sad.

I primarily wanted to let you know in the hope of improvement, as having this simple change break roughly 6 months after the initial development is a very bad Torizon experience.

We used the VSCode Python examples (Dockerfile, docker-compose, etc.) and just adapted them to our needs.

Best Regards,
Philippe

Another question just popped up that I mentioned in my first post.

What is really bad from our point of view is that the build is not reproducible at all: our repo is still the same, yet things seem to have changed on the Python side. But for those changes to land in our container, they would have to come in through the torizon/debian image we are using, correct?

https://hub.docker.com/layers/torizon/debian/3-bookworm/images/sha256-8b0fc0344322b359e1284f1d994864956b7000edd3de23afb0bf112f7512086d?context=explore

Can I check somewhere the git-history of the Dockerfile so I can see what changed and when?

Thanks in advance and best regards,
Philippe

I double-checked with one of the developers behind our container images. We can confirm that arm-linux-gnueabihf-gcc was never present in the torizon/debian container image.

I still believe the likely explanation here is that something changed on the Python/pip side of things, such that one of the packages you’re fetching with pip now needs to be compiled with arm-linux-gnueabihf-gcc.

But for those changes to land in our container they need to come in through torizon/debian that we are using, correct?

Not really. You’re installing packages here with pip, which is a third-party package manager that we have no control over. If something changes on the pip/PyPI side and your requirements aren’t pinned to specific versions, then yes, you’d pull in those latest changes.
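As a general pip practice (not anything Torizon-specific), pinning exact versions in requirements-release.txt makes this kind of breakage much less likely. The version numbers below are purely hypothetical placeholders for illustration:

```text
# Hypothetical pins for illustration only; generate real ones with
# `pip3 freeze > requirements-release.txt` inside a known-good build.
gpiod==1.5.4
future==0.18.3
iso8601==1.1.0
pexpect==4.8.0
ptyprocess==0.7.0
pyserial==3.5
PyYAML==6.0
```

Pinning does not by itself guarantee a prebuilt wheel exists for armv7, but it does prevent a new package release from silently changing how (or whether) a dependency installs.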

Can I check somewhere the git-history of the Dockerfile so I can see what changed and when?

You can check the git history via the link I previously shared: torizon-containers/debian-docker-images/base at bookworm · torizon/torizon-containers · GitHub

As you can see, there have hardly been any changes to the torizon/debian source in the past ~6 months, and definitely nothing substantial that would affect your use case here.

Even JLink can no longer be run as user torizon.

Not sure how to explain this, but as you can see from the git history, we haven’t made any changes to the torizon user inside the container in over 6 months. Maybe your JLink peripheral now needs different permissions than it did before due to changes in the Python packages/libraries you’re using. Again, hard to say exactly, but you can see all the changes we’ve made to our container, and I don’t see any of them affecting this.

Best Regards,
Jeremias

Thank you very much for your time and checks, much appreciated! And also thanks for the link to the repo containing all Dockerfiles.

So then this is a bad experience with Python and not Torizon.

Best Regards,
Philippe

Hi @philschenker !

If this topic is solved, please mark the most suitable message as the solution.

If it is not, let us know how we can further help you 🙂

Best regards,

Hey Henrique,

Yes, sorry. This may be helpful if anyone stumbles on the same issue. Our solution is based solely on workarounds:

  • We now install the Python packages from the Debian feed instead of pip.
  • We run the container privileged and as root.
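For anyone landing here with the same problem, the first workaround can be sketched roughly like this in the Dockerfile (the Debian package names are my assumptions mapped from the pip requirements in the log above; verify them with `apt-cache search` before relying on this):

```dockerfile
# Sketch: install Python dependencies from the Debian feed instead of pip.
# Package names are assumptions; python3-libgpiod stands in for the pip
# gpiod package and ships prebuilt, so no compiler is needed in the image.
RUN apt-get -q -y update && \
    apt-get -q -y install \
        python3-libgpiod \
        python3-pexpect \
        python3-serial \
        python3-yaml && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
```

Note that the Debian python3-libgpiod bindings expose a different API than the pip gpiod 2.x package, so some code changes may be needed. The second workaround corresponds to setting `privileged: true` (and a root user) on the service in docker-compose.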

Best Regards,
Philippe