VS Code deploy error code 18, port issue?

Hi :slight_smile:

After solving the kernel module issue with your help, I started working on application development.
My Setup:
Mallow board with AM62.
Windows PC, but running an Ubuntu 22.04 LTS virtual machine via VMware.
Mallow board connected to the same network switch as my PC.

I followed (as usual :wink: ) the tutorial to deploy a simple hello world Python script.
I installed VS Code on my Linux VM and added the Toradex extension v2 (I'm using Torizon 6).
Created my project based on the simple Python template.

First thing: I was not able to automatically discover the board in the network devices.

I added it manually; I can see it and connect.
But when I try to run and debug, I get an error 18:

βœ” Pushing localhost:5002/container-ftest1-debug:arm64: 5eda8e9db600 Layer already exists    0.0s 
 βœ” Pushing localhost:5002/container-ftest1-debug:arm64: 9c76f5bd7f07 Layer already exists    0.0s 
 βœ” Pushing localhost:5002/container-ftest1-debug:arm64: 39940505a96f Layer already exists    0.0s 
 βœ” Pushing localhost:5002/container-ftest1-debug:arm64: f4e4d9391e13 Layer already exists    0.0s 
 *  Terminal will be reused by tasks, press any key to close it. 

 *  Executing task: sshpass -p flo ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no torizon@192.168.0.132 LOCAL_REGISTRY=192.168.20.134 TAG=arm64 docker compose pull container-ftest1-debug 

Warning: Permanently added '192.168.0.132' (ED25519) to the list of known hosts.
time="2024-02-09T09:50:18Z" level=warning msg="The \"DOCKER_LOGIN\" variable is not set. Defaulting to a blank string."
 container-ftest1-debug Pulling 
 container-ftest1-debug Warning 
WARNING: Some service image(s) must be built from source by running:
    docker compose build container-ftest1-debug
1 error occurred:
        * Error response from daemon: Get "http://192.168.20.134:5002/v2/": dial tcp 192.168.20.134:5002: connect: no route to host



 *  The terminal process "sshpass '-p', 'flo', 'ssh', '-o', 'UserKnownHostsFile=/dev/null', '-o', 'StrictHostKeyChecking=no', 'torizon@192.168.0.132', 'LOCAL_REGISTRY=192.168.20.134 TAG=arm64 docker compose pull container-ftest1-debug'" terminated with exit code: 18. 
 *  Terminal will be reused by tasks, press any key to close it.

So it seems to point to a network/port issue, maybe because of the VM?

Thanks for your help :slight_smile:

Hi @FloSolio ,

Thanks for reaching out again. :slight_smile:

All right, so the situation is as follows.

You can see here that VS Code set the board IP to 192.168.0.132, but then later it's trying to access the host on a different IP:

Please check what your host IP is. If they don't match, you can do the following.

Press CTRL+SHIFT+P, search for "user settings", and select the one with (JSON).

Then open this file and add the following line, as described in the documentation here:

apollox.overwriteHostIp: "<your-host-ip>"

This setting will force the overwrite of your project's settings.json file with the correct IP of your host.
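For example, the user settings.json would then contain an entry like this (the IP below is just a placeholder for your actual host IP):

```json
{
    "apollox.overwriteHostIp": "192.168.0.100"
}
```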

Let me know if that works.

Best Regards
Kevin

Hi Kevin, thank you.

I'm working on a second PC today and I don't have the error 18, so I will try your solution later on the PC at home that fails.

But here, on my company machine, I'm having an issue in the same spirit:

 *  The terminal process "sshpass '-p', 'flo', 'ssh', '-o', 'UserKnownHostsFile=/dev/null', '-o', 'StrictHostKeyChecking=no', 'torizon@192.168.0.132', 'LOCAL_REGISTRY=192.168.79.128 TAG=arm64 docker compose pull test-debug'" terminated with exit code: 18. 
 *  Terminal will be reused by tasks, press any key to close it. 

 *  Executing task: pwsh -nop .conf/validateDepsRunning.ps1 


⚠️ VALIDATING ENVIRONMENT


βœ… Environment is valid!

 *  Terminal will be reused by tasks, press any key to close it. 

 *  Executing task: bash -c [[ ! -z "192.168.0.132" ]] && true || false 

 *  Terminal will be reused by tasks, press any key to close it. 

 *  Executing task: bash -c [[ "aarch64" == "aarch64" ]] && true || false 

 *  Terminal will be reused by tasks, press any key to close it. 

 *  Executing task: [ $(/bin/python3 -m pip --version | awk '{print $2}' | cut -d'.' -f1) -lt 23 ] && /bin/python3 -m pip install --upgrade pip || true 

 *  Terminal will be reused by tasks, press any key to close it. 

 *  Executing task: /bin/python3 -m pip install --break-system-packages -r requirements-debug.txt 

Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: debugpy in /home/flo/.local/lib/python3.10/site-packages (from -r requirements-debug.txt (line 1)) (1.8.1)
 *  Terminal will be reused by tasks, press any key to close it. 

 *  Executing task: sleep 10 

 *  Terminal will be reused by tasks, press any key to close it. 

 *  Executing task: sshpass -p flo scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no /home/flo/test1/docker-compose.yml torizon@192.168.0.132:~/ 

Warning: Permanently added '192.168.0.132' (ED25519) to the list of known hosts.
 *  Terminal will be reused by tasks, press any key to close it. 

 *  Executing task: DOCKER_HOST=192.168.0.132:2375 docker image prune -f --filter=dangling=true 

Total reclaimed space: 0B
 *  Terminal will be reused by tasks, press any key to close it. 

 *  Executing task: if [ false == false ]; then DOCKER_HOST=192.168.0.132:2375 docker compose -p torizon down --remove-orphans ; fi 

Warning: No resource found to remove for project "torizon".
 *  Terminal will be reused by tasks, press any key to close it. 

 *  Executing task: DOCKER_HOST= docker compose build --pull --build-arg SSHUSERNAME= --build-arg APP_ROOT= --build-arg IMAGE_ARCH=arm64 --build-arg SSH_DEBUG_PORT= --build-arg GPU= test-debug 

WARN[0000] The "DOCKER_LOGIN" variable is not set. Defaulting to a blank string. 
[+] Building 1.1s (15/15) FINISHED                                                                                           docker:default
 => [test-debug internal] load build definition from Dockerfile.debug                                                                  0.0s
 => => transferring dockerfile: 3.02kB                                                                                                 0.0s
 => [test-debug internal] load metadata for docker.io/torizon/debian:3.2.1-bookworm                                                    0.9s
 => [test-debug internal] load .dockerignore                                                                                           0.0s
 => => transferring context: 117B                                                                                                      0.0s
 => [test-debug  1/10] FROM docker.io/torizon/debian:3.2.1-bookworm@sha256:c645d6bc14f7d419340df0be25dbbe115ada029fa2e502a1c9149f335c  0.0s
 => [test-debug internal] load build context                                                                                           0.0s
 => => transferring context: 157B                                                                                                      0.0s
 => CACHED [test-debug  2/10] RUN apt-get -q -y update &&     apt-get -q -y install     openssl     openssh-server     rsync     file  0.0s
 => CACHED [test-debug  3/10] RUN apt-get -q -y update &&     apt-get -q -y install     &&     apt-get clean && apt-get autoremove &&  0.0s
 => CACHED [test-debug  4/10] RUN python3 -m venv /.venv --system-site-packages                                                        0.0s
 => CACHED [test-debug  5/10] COPY requirements-debug.txt /requirements-debug.txt                                                      0.0s
 => CACHED [test-debug  6/10] RUN . /.venv/bin/activate &&     pip3 install --upgrade pip && pip3 install --break-system-packages -r   0.0s
 => CACHED [test-debug  7/10] COPY .conf/id_rsa.pub /id_rsa.pub                                                                        0.0s
 => CACHED [test-debug  8/10] RUN mkdir /var/run/sshd &&     sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginui  0.0s
 => CACHED [test-debug  9/10] RUN rm -r /etc/ssh/ssh*key &&     dpkg-reconfigure openssh-server                                        0.0s
 => CACHED [test-debug 10/10] COPY --chown=: ./src /src                                                                                0.0s
 => [test-debug] exporting to image                                                                                                    0.0s
 => => exporting layers                                                                                                                0.0s
 => => writing image sha256:0d8b3fc72a821b5baccc481970f6446ed2675194ee03c03659e074f2a5d143a4                                           0.0s
 => => naming to localhost:5002/test-debug:arm64                                                                                       0.0s
 *  Terminal will be reused by tasks, press any key to close it. 

 *  Executing task: DOCKER_HOST= docker compose push test-debug 

WARN[0000] The "DOCKER_LOGIN" variable is not set. Defaulting to a blank string. 
[+] Pushing 19/19
 βœ” Pushing localhost:5002/test-debug:arm64: c1e892f438c8 Layer already exists                                                          0.0s 
 βœ” Pushing localhost:5002/test-debug:arm64: 7c21bbe332f8 Layer already exists                                                          0.0s 
 βœ” Pushing localhost:5002/test-debug:arm64: 994ad98e0fc4 Layer already exists                                                          0.0s 
 βœ” Pushing localhost:5002/test-debug:arm64: 3540ca7728a9 Layer already exists                                                          0.0s 
 βœ” Pushing localhost:5002/test-debug:arm64: d93d8860ee31 Layer already exists                                                          0.1s 
 βœ” Pushing localhost:5002/test-debug:arm64: d83df379f505 Layer already exists                                                          0.0s 
 βœ” Pushing localhost:5002/test-debug:arm64: 15b537025ed8 Layer already exists                                                          0.0s 
 βœ” Pushing localhost:5002/test-debug:arm64: 70db2c3b175a Layer already exists                                                          0.1s 
 βœ” Pushing localhost:5002/test-debug:arm64: 7f54ebc5f46c Layer already exists                                                          0.1s 
 βœ” Pushing localhost:5002/test-debug:arm64: 4d43de620b2b Layer already exists                                                          0.1s 
 βœ” Pushing localhost:5002/test-debug:arm64: 2b1b9a443581 Layer already exists                                                          0.1s 
 βœ” Pushing localhost:5002/test-debug:arm64: 78b4e0bbebbf Layer already exists                                                          0.1s 
 βœ” Pushing localhost:5002/test-debug:arm64: af2e04463901 Layer already exists                                                          0.1s 
 βœ” Pushing localhost:5002/test-debug:arm64: fa80bad6bfa4 Layer already exists                                                          0.1s 
 βœ” Pushing localhost:5002/test-debug:arm64: 99556bba1730 Layer already exists                                                          0.1s 
 βœ” Pushing localhost:5002/test-debug:arm64: 5eda8e9db600 Layer already exists                                                          0.1s 
 βœ” Pushing localhost:5002/test-debug:arm64: 9c76f5bd7f07 Layer already exists                                                          0.1s 
 βœ” Pushing localhost:5002/test-debug:arm64: 39940505a96f Layer already exists                                                          0.1s 
 βœ” Pushing localhost:5002/test-debug:arm64: f4e4d9391e13 Layer already exists                                                          0.1s 
 *  Terminal will be reused by tasks, press any key to close it. 

 *  Executing task: sshpass -p flo ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no torizon@192.168.0.132 LOCAL_REGISTRY=192.168.79.128 TAG=arm64 docker compose pull test-debug 

Warning: Permanently added '192.168.0.132' (ED25519) to the list of known hosts.
time="2024-02-12T12:31:05Z" level=warning msg="The \"DOCKER_LOGIN\" variable is not set. Defaulting to a blank string."
 test-debug Pulling 
 test-debug Warning 
WARNING: Some service image(s) must be built from source by running:
    docker compose build test-debug
1 error occurred:
        * Error response from daemon: Get "http://192.168.79.128:5002/v2/": dial tcp 192.168.79.128:5002: connect: network is unreachable



 *  The terminal process "sshpass '-p', 'flo', 'ssh', '-o', 'UserKnownHostsFile=/dev/null', '-o', 'StrictHostKeyChecking=no', 'torizon@192.168.0.132', 'LOCAL_REGISTRY=192.168.79.128 TAG=arm64 docker compose pull test-debug'" terminated with exit code: 18. 
 *  Terminal will be reused by tasks, press any key to close it. 

I'm a bit confused by the several IPs: the remote AM62 board where Docker runs, my Windows host machine running my Linux VM, etc.

I can also see port 5002, but in my settings.json file port 2375 is used.

NOTE: I had to add wait_sync in settings.json with a value of 10 s (arbitrary value); otherwise I had a build error. Maybe this is a bug known by Toradex?

I'm using the simple container Python project.

The error is :
Error response from daemon: Get β€œhttp://192.168.79.128:5002/v2/”: dial tcp 192.168.79.128:5002: connect: network is unreachable

But some lines point out that the DOCKER_LOGIN variable is not set. Is there normally something to do about that?

Thanks a lot for your help. I think I'm almost at the point of seeing 'Hello' in my terminal, I can't wait for it. Embedded is not a straightforward adventure :smiley:

Hi @FloSolio

The way the container registry works here is that a container running the Docker registry is spun up on your development machine. The container images we build with the extension are stored there, and the board then tries to connect to that registry to download the images. This requires that the development PC is directly routable from the target board. Basically, can you ping from the target board to the development PC? It's not clear if that works in your case, but the fact that one of the IPs is 192.168.0.x and the other is 192.168.79.x may indicate a different subnet and thus a not directly reachable IP.
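As a quick sketch of that reachability test (a hypothetical helper, not part of the extension), a few lines of Python can check whether the registry port answers from a given machine:

```python
import socket


def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Example (IP/port taken from the logs above): run this on the board
# against the development PC's local registry.
# can_reach("192.168.79.128", 5002)
```

If this returns False from the board, the "no route to host" / "network is unreachable" errors in the logs are expected, and the fix is on the network side (same subnet, VM in bridged mode, firewall), not in the extension.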

Drew

Hi @drew.tx !

I finally decided to install a real Ubuntu on my two host PCs.

I think that you pointed to the right problem. We also wondered about this possible cause, but after trying some configurations with network adapters etc. in Windows without success, I decided to switch to a full Linux machine, and it works now :slight_smile: .

I'm able to deploy the Python app packaged in the container to the target board.

Just a question: when I modify even one line of my Python script, is it mandatory to rebuild the whole container and deploy it all again, which takes around 15 s each time? Or maybe there is a faster way to work?

Thanks a lot !

Hi @FloSolio

Creating the containers and deploying them over the local area network is about as fast as it can get without doing something that bypasses containers completely.

One possibility would be to map the python sources into the container as a volume and then just use β€œscp” to transfer the updated files. All the VSCode tasks are in the .vscode directory so you could certainly implement something like this in your own project. My guess is that stopping and restarting the containers will still take some time so it’s not clear how much time this approach will save.
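A sketch of what that volume mapping could look like in docker-compose.yml (the service name and paths here are placeholders inferred from the logs above; the real file contains more settings):

```yaml
services:
  test-debug:
    # ...existing image/build configuration stays as-is...
    volumes:
      # Bind-mount the source tree so updated .py files copied over with
      # scp/rsync become visible in the container without a rebuild.
      - ./src:/home/torizon/app/src
```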

Drew

That could be a good tip :).

I met an issue today with the Python gpiod package in container deployment. I can't figure out why; maybe you have an idea?

I started with the basics: modifying the simple Python container app to control one GPIO.

I can deploy the hello Toradex Python app on my AM62 board successfully.
I added the gpiod Python package in requirements-debug.txt and libgpiod in torizonPackages.json.
I modified main.py with the simple Toradex Python GPIO example.

But when I try to deploy, there is a pip install issue. gpiod needs python3-dev, so I also added this required dependency in torizonPackages.json, but I always get an error.

So, to be sure it is possible to install gpiod with pip, I ran a clean new Debian container directly on the AM62, and pip successfully installed the "deprecated" version of gpiod by default: 1.5.4.

So it seems that something goes wrong in the full Toradex extension v2 process, but at the moment I'm not able to figure out what.

Do you have an idea about it, or should I create a public ticket?

Thanks a lot. Starting embedded development today is not an easy task, even to blink an LED :smiley:

*  Executing task: pwsh -nop .conf/validateDepsRunning.ps1 


⚠️ VALIDATING ENVIRONMENT


βœ… Environment is valid!

 *  Terminal will be reused by tasks, press any key to close it. 

 *  Executing task: bash -c [[ ! -z "192.168.0.132" ]] && true || false 

 *  Terminal will be reused by tasks, press any key to close it. 

 *  Executing task: bash -c [[ "aarch64" == "aarch64" ]] && true || false 

 *  Terminal will be reused by tasks, press any key to close it. 

 *  Executing task: [ $(/bin/python3 -m pip --version | awk '{print $2}' | cut -d'.' -f1) -lt 23 ] && /bin/python3 -m pip install --upgrade pip || true 

 *  Terminal will be reused by tasks, press any key to close it. 

 *  Executing task: /bin/python3 -m pip install --break-system-packages -r requirements-debug.txt 

Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: debugpy in /home/flo/.local/lib/python3.10/site-packages (from -r requirements-debug.txt (line 1)) (1.8.1)
Requirement already satisfied: gpiod in /home/flo/.local/lib/python3.10/site-packages (from -r requirements-debug.txt (line 2)) (2.1.3)
 *  Terminal will be reused by tasks, press any key to close it. 

 *  Executing task: sleep 1 

 *  Terminal will be reused by tasks, press any key to close it. 

 *  Executing task: sshpass -p flo scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no /home/flo/ProjectVSCode/test1/test1/docker-compose.yml torizon@192.168.0.132:~/ 

Warning: Permanently added '192.168.0.132' (ED25519) to the list of known hosts.
 *  Terminal will be reused by tasks, press any key to close it. 

 *  Executing task: DOCKER_HOST=192.168.0.132:2375 docker image prune -f --filter=dangling=true 

Total reclaimed space: 0B
 *  Terminal will be reused by tasks, press any key to close it. 

 *  Executing task: if [ false == false ]; then DOCKER_HOST=192.168.0.132:2375 docker compose -p torizon down --remove-orphans ; fi 

Warning: No resource found to remove for project "torizon".
 *  Terminal will be reused by tasks, press any key to close it. 

 *  Executing task: DOCKER_HOST= docker compose build --pull --build-arg SSHUSERNAME=torizon --build-arg APP_ROOT=/home/torizon/app --build-arg IMAGE_ARCH=arm64 --build-arg SSH_DEBUG_PORT=6502 --build-arg GPU= container-test1-debug 

WARN[0000] The "DOCKER_LOGIN" variable is not set. Defaulting to a blank string. 
[+] Building 22.5s (10/14)                                                                                                                        docker:default
 => [container-test1-debug internal] load build definition from Dockerfile.debug                                                                            0.0s
 => => transferring dockerfile: 3.06kB                                                                                                                      0.0s
 => [container-test1-debug internal] load metadata for docker.io/torizon/debian:3.2.1-bookworm                                                              0.3s
 => [container-test1-debug internal] load .dockerignore                                                                                                     0.0s
 => => transferring context: 117B                                                                                                                           0.0s
 => [container-test1-debug  1/10] FROM docker.io/torizon/debian:3.2.1-bookworm@sha256:c645d6bc14f7d419340df0be25dbbe115ada029fa2e502a1c9149f335c59fc08      0.0s
 => [container-test1-debug internal] load build context                                                                                                     0.0s
 => => transferring context: 158B                                                                                                                           0.0s
 => CACHED [container-test1-debug  2/10] RUN apt-get -q -y update &&     apt-get -q -y install     openssl     openssh-server     rsync     file     scree  0.0s
 => CACHED [container-test1-debug  3/10] RUN apt-get -q -y update &&     apt-get -q -y install  python3-dev:arm64  libgpiod-dev:arm64     &&     apt-get c  0.0s
 => CACHED [container-test1-debug  4/10] RUN python3 -m venv /home/torizon/app/.venv --system-site-packages                                                 0.0s
 => CACHED [container-test1-debug  5/10] COPY requirements-debug.txt /requirements-debug.txt                                                                0.0s
 => ERROR [container-test1-debug  6/10] RUN . /home/torizon/app/.venv/bin/activate &&     pip3 install --upgrade pip && pip3 install --break-system-packa  22.1s
------                                                                                                                                                           
 > [container-test1-debug  6/10] RUN . /home/torizon/app/.venv/bin/activate &&     pip3 install --upgrade pip && pip3 install --break-system-packages -r requirements-debug.txt &&     rm requirements-debug.txt:                                                                                                                 
1.694 Requirement already satisfied: pip in /home/torizon/app/.venv/lib/python3.11/site-packages (23.0.1)                                                        
2.358 Collecting pip                                                                                                                                             
2.570   Downloading pip-24.0-py3-none-any.whl (2.1 MB)                                                                                                           
2.869      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 8.7 MB/s eta 0:00:00
3.185 Installing collected packages: pip
3.186   Attempting uninstall: pip
3.197     Found existing installation: pip 23.0.1
4.331     Uninstalling pip-23.0.1:
4.766       Successfully uninstalled pip-23.0.1
9.085 Successfully installed pip-24.0
10.80 Requirement already satisfied: debugpy in /usr/lib/python3/dist-packages (from -r requirements-debug.txt (line 1)) (1.6.3+git20221103.a2a3328)
11.17 Collecting gpiod (from -r requirements-debug.txt (line 2))
11.38   Downloading gpiod-2.1.3.tar.gz (53 kB)
11.50      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 53.1/53.1 kB 694.6 kB/s eta 0:00:00
11.59   Installing build dependencies: started
19.05   Installing build dependencies: finished with status 'done'
19.07   Getting requirements to build wheel: started
19.88   Getting requirements to build wheel: finished with status 'done'
19.89   Preparing metadata (pyproject.toml): started
20.79   Preparing metadata (pyproject.toml): finished with status 'done'
20.81 Building wheels for collected packages: gpiod
20.81   Building wheel for gpiod (pyproject.toml): started
21.71   Building wheel for gpiod (pyproject.toml): finished with status 'error'
21.76   error: subprocess-exited-with-error
21.76   
21.76   Γ— Building wheel for gpiod (pyproject.toml) did not run successfully.
21.76   β”‚ exit code: 1
21.76   ╰─> [26 lines of output]
21.76       running bdist_wheel
21.76       running build
21.76       running build_py
21.76       creating build
21.76       creating build/lib.linux-aarch64-cpython-311
21.76       creating build/lib.linux-aarch64-cpython-311/gpiod
21.76       copying gpiod/version.py -> build/lib.linux-aarch64-cpython-311/gpiod
21.76       copying gpiod/line_settings.py -> build/lib.linux-aarch64-cpython-311/gpiod
21.76       copying gpiod/chip.py -> build/lib.linux-aarch64-cpython-311/gpiod
21.76       copying gpiod/line.py -> build/lib.linux-aarch64-cpython-311/gpiod
21.76       copying gpiod/line_info.py -> build/lib.linux-aarch64-cpython-311/gpiod
21.76       copying gpiod/internal.py -> build/lib.linux-aarch64-cpython-311/gpiod
21.76       copying gpiod/info_event.py -> build/lib.linux-aarch64-cpython-311/gpiod
21.76       copying gpiod/exception.py -> build/lib.linux-aarch64-cpython-311/gpiod
21.76       copying gpiod/__init__.py -> build/lib.linux-aarch64-cpython-311/gpiod
21.76       copying gpiod/line_request.py -> build/lib.linux-aarch64-cpython-311/gpiod
21.76       copying gpiod/chip_info.py -> build/lib.linux-aarch64-cpython-311/gpiod
21.76       copying gpiod/edge_event.py -> build/lib.linux-aarch64-cpython-311/gpiod
21.76       running build_ext
21.76       building 'gpiod._ext' extension
21.76       creating build/temp.linux-aarch64-cpython-311
21.76       creating build/temp.linux-aarch64-cpython-311/gpiod
21.76       creating build/temp.linux-aarch64-cpython-311/gpiod/ext
21.76       creating build/temp.linux-aarch64-cpython-311/lib
21.76       aarch64-linux-gnu-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -D_GNU_SOURCE=1 -Iinclude -Ilib -Igpiod/ext -I/home/torizon/app/.venv/include -I/usr/include/python3.11 -c gpiod/ext/chip.c -o build/temp.linux-aarch64-cpython-311/gpiod/ext/chip.o -Wall -Wextra -DGPIOD_VERSION_STR=\"2.1\"
21.76       error: command 'aarch64-linux-gnu-gcc' failed: No such file or directory
21.76       [end of output]
21.76   
21.76   note: This error originates from a subprocess, and is likely not a problem with pip.
21.76   ERROR: Failed building wheel for gpiod
21.76 Failed to build gpiod
21.76 ERROR: Could not build wheels for gpiod, which is required to install pyproject.toml-based projects
------
failed to solve: process "/bin/sh -c . ${APP_ROOT}/.venv/bin/activate &&     pip3 install --upgrade pip && pip3 install --break-system-packages -r requirements-debug.txt &&     rm requirements-debug.txt" did not complete successfully: exit code: 1

 *  The terminal process "/usr/bin/bash '-c', 'DOCKER_HOST= docker compose build --pull --build-arg SSHUSERNAME=torizon --build-arg APP_ROOT=/home/torizon/app --build-arg IMAGE_ARCH=arm64 --build-arg SSH_DEBUG_PORT=6502 --build-arg GPU= container-test1-debug'" terminated with exit code: 17. 
 *  Terminal will be reused by tasks, press any key to close it.

@FloSolio

When you installed python3-dev, how exactly did you do that? In Debian, I believe the APT package name is libpython3-dev and it has a lot of extra dependencies that it will add automatically when installing via APT. If you are trying to install it via PIP, I’m not sure that will work.

And yes, if the above does not get you sorted, it would be best to open a new ticket on the community. Feel free to @ mention me so I’ll get an immediate email notification.

Drew

Hi Drew,

First thanks a lot for your help.

I installed python3-dev by adding it to the torizonPackages.json file.

I replaced python3-dev with libpython3-dev, but I get the same issue.

I think it is really related to the gpiod build, as the error mentions:

22.13       error: command 'aarch64-linux-gnu-gcc' failed: No such file or directory
22.13       [end of output]
22.13   
22.13   note: This error originates from a subprocess, and is likely not a problem with pip.
22.13   ERROR: Failed building wheel for gpiod
22.13 Failed to build gpiod
22.13 ERROR: Could not build wheels for gpiod, which is required to install pyproject.toml-based projects

Indeed, it seems that the gcc cross compiler is missing. It's not normally installed as part of a Python project, but it seems that building gpiod in your case requires it. Does adding "gcc-aarch64-linux-gnu" to devDeps help?

Drew

I finally got it!

I had to add: python3-dev build-essential gcc

directly into Dockerfile.debug. With that it was able to build and start Python debugging.
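For reference, a minimal sketch of what that addition to Dockerfile.debug could look like (the exact surrounding lines depend on the template version):

```dockerfile
# Toolchain needed by pip to compile the gpiod C extension inside the container
RUN apt-get -q -y update && \
    apt-get -q -y install python3-dev build-essential gcc && \
    apt-get clean && apt-get autoremove -y && \
    rm -rf /var/lib/apt/lists/*
```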

Now, the Toradex example code seems not to work because some functions are not in the gpiod module, like gpiod.iter.

When we look into the gpiod functions, only a few are available:

The gpiod module is really poor, unfortunately :frowning:, and seems quite complex to use, but I will learn.
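For what it's worth, gpiod 2.x replaced the old iterator-style API; a minimal blink loop looks roughly like the sketch below, based on the libgpiod v2 Python bindings. This is untested here since it needs the board; the chip path and line offset are placeholders you must adapt to your pin.

```python
import time

import gpiod
from gpiod.line import Direction, Value

CHIP = "/dev/gpiochip0"  # placeholder: the chip exposing your pin
LINE = 7                 # placeholder: line offset within that chip

# In gpiod 2.x, lines are requested up front with their settings
with gpiod.request_lines(
    CHIP,
    consumer="blink",
    config={LINE: gpiod.LineSettings(direction=Direction.OUTPUT)},
) as request:
    for _ in range(5):
        request.set_value(LINE, Value.ACTIVE)
        time.sleep(0.5)
        request.set_value(LINE, Value.INACTIVE)
        time.sleep(0.5)
```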

When I deploy my container, it seems that the gpiochip devices are not mounted:

So my Python program cannot list them.
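If the program inside the container cannot see any /dev/gpiochip* entries, one common cause is that the GPIO character devices are not passed through in docker-compose.yml. A sketch (the service name is a placeholder and the chip numbers depend on the SoC):

```yaml
services:
  test-debug:
    # Pass the GPIO character devices through to the container
    devices:
      - /dev/gpiochip0:/dev/gpiochip0
      - /dev/gpiochip1:/dev/gpiochip1
```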

I will have a quick break to eat and continue this afternoon :slight_smile: