@eric.tx
This last project was created on May 23rd. Did something change since then?
Well, only that there was a problem trying to debug (in regard to the .zip file issue).
Since I went to 24.04 (installed xonsh, etc.) and downloaded the files as of the 23rd, things are showing up and mostly working as they should, with the exception of actually running the program. Here's the interesting thing.
You are supposed to be able to view the logs of a container, even if it was only up for a short while. However, there are no logs for the container. Yet I can look at the container in the container extension, right-click on it, and run it just fine; of course there is no rsync run, so my files aren't there. So maybe it's a problem with how the container is launched? I can see it when it's started, and get the ID, during the time it is trying to do all the other things it does after launching the container.
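In case it helps to reproduce what I am seeing, this is roughly how I look for logs from the short-lived container (the container name here is just an example):

# list all containers, including ones that have already exited
docker ps -a
# a stopped container's logs should still be retrievable by ID or name
docker logs <container-id-or-name>
# the exit code and error state should also survive the stop
docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' <container-id-or-name>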
Hey @EvetsMostel1,
The largest change is the Microsoft Docker container topic, but a fresh project eliminates any settings changes, etc. So it gives a baseline: if the issue doesn't happen in a new project, it's most likely related to something in the modified one.
-Eric
@eric.tx ,
Yes, but I started in 20.04, then 22.04 when I realized that 20.04 was going EOL. When I used the templates over a year ago to start my new projects, it always worked. I just created the Hello World project (C++ in my case), and it just worked. I was up and running in minutes. I then just copied my files from where the code was started, modified the makefile a bit, and away I went. I have been trying to get this to work for more than three weeks. First was the Python version problem, which didn't make sense (the zip issue); then I couldn't debug on the Toradex board, with it going from the issue I am having now, to not being able to get past some obvious things (this was the changing of the templates), back to not being able to run to a breakpoint without it stopping.
But now I"m am convinced that the container just isn’t staying up which is causing the issue.
BTW, as of this morning I get this warning in the PowerShell launch:
debugpy
0.00s - Debugger warning: It seems that frozen modules are being used, which may
0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off
0.00s - to python to disable frozen modules.
0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.
debugpy go
Which is weird because you’ve seen my code. There is nothing special here and I am not importing anything.
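For what it's worth, the warning text itself suggests two workarounds. This is only a sketch of how I would apply them, assuming the template launches main.py through debugpy (the port variable is the one from the compose file):

# option 1: turn frozen modules off for the interpreter running debugpy
python3 -Xfrozen_modules=off -m debugpy --listen 0.0.0.0:${DEBUG_PORT1} --wait-for-client main.py
# option 2: keep frozen modules but silence the validation warning
PYDEVD_DISABLE_FILE_VALIDATION=1 python3 -m debugpy --listen 0.0.0.0:${DEBUG_PORT1} --wait-for-client main.py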
Steve
@eric.tx
OK, I have executed the run task that starts the container. It won't stay up with the values given. Not sure why, when I launch it via the container extension, it doesn't have an issue, so it must be something in the command.
Hey @EvetsMostel1
Can you explain a little more? When you run the debug from the VSCode Debug tab, it works as intended for debugging? And when you run manual commands it does not? Can you share the command you tried?
-Eric
@eric.tx
OK, it has been a while since I have delved into containers, but they won't stay running if there is no running process. I think that this is the problem, or it doesn't see debugpy as a running process. You start the container, then rsync the files over, and by the time it loads Python to run, maybe it is too late? But I have had it run to the point where it has counted and output over 1000, and it still dies. And there are no logs to say why, which also leads me to believe that the container is just shutting down. I wonder if we can have two processes running: just debug the Python and leave the other process running to keep the container up.
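Something like this is what I have in mind. It is only a sketch, and the debugpy invocation and port are my assumptions about what the template runs inside the container:

# run the debuggee in the background and keep a long-lived process in the
# foreground, so the container does not exit when the Python side stops
bash -c "python3 -m debugpy --listen 0.0.0.0:${DEBUG_PORT1} --wait-for-client main.py & tail -f /dev/null"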
Hi @eric.tx ,
I changed this line in the tasks file, under the deploy-torizon-arm64 task:
"cT=0; while ! grep -r ": wait_for_client()" ${config:torizon_app_root}/src/log/debugpy.server*.log 2> /dev/null; do sleep 0.0001; if [ "$cT" -eq 10000 ]; then echo "Problem debugging main.py file"; break; fi; cT=$(expr $cT + 1); done"
I can now consistently go into debugging. However, the problem still remains that it exits before the debugpy app is done.
Hey @EvetsMostel1,
Can you do a sanity check and see if this issue persists with a newly created template project?
-Eric
@eric.tx ,
Yes, I have the issue: I can't get into debug consistently without reducing the sleep and adding two zeros to the 100. And I also have the same issue once I get into debugging, where without a breakpoint the program is terminated prematurely, and with a breakpoint it will stop and then, a few seconds later, the container disappears and the connection is broken. I have also confirmed that I can step after the breakpoint, until the container disappears.
If you come up with any ideas to keep the container up, I am all ears. Are you running Torizon OS 7.2 or later?
Steve
Hi @eric.tx ,
So I did an experiment. Normally Docker containers are supposed to stay up as long as the main process is running, and there are many examples where people use tail -f /dev/null or sleep infinity as the main process. These are all supposed to keep the container up.
So I tried running docker compose -p torizon up -d test10-debug with the following docker-compose.yml file:
services:
  test10-debug:
    build:
      context: .
      dockerfile: Dockerfile.debug
    image: ${LOCAL_REGISTRY}:5002/test10-debug:${TAG}
    command: bash -c "sleep 3600"
    ports:
      - ${DEBUG_SSH_PORT}:${DEBUG_SSH_PORT}
      - ${DEBUG_PORT1}:${DEBUG_PORT1}
  test10:
    build:
      context: .
      dockerfile: Dockerfile
    image: ${DOCKER_LOGIN}/test10:${TAG}
I exported all the values that we use in the file, and it starts up correctly, but instead of staying up for 3600 seconds, it doesn't last longer than 20 seconds. The time it stays up also seems to vary a bit, but nevertheless, it doesn't stay up on the Ivy. Again, I am running Torizon OS 7.2.
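For reference, this is roughly the sequence I ran; the exported values here are placeholders, not my real ones:

export LOCAL_REGISTRY=localhost
export TAG=arm64
export DEBUG_SSH_PORT=2222
export DEBUG_PORT1=6512
export DOCKER_LOGIN=mydockerhubuser
docker compose -p torizon up -d test10-debug
sleep 30
# by this point the container has usually already exited
docker compose -p torizon ps -a
docker compose -p torizon logs test10-debug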
Is there something faulty with my logic here, or something faulty elsewhere? It should stay up, right? Also, when I try to see the logs, there are none; it is like it never ran. I know it starts, and if I am quick I can see it with docker ps -a. I think it doesn't actually come up completely.
Let me know what you think.
@eric.tx
So, the problem was a legacy Docker image that was on the device. It shouldn't have caused a problem, but for some reason it did. After I removed it, just to clean everything up, it worked fine.
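In case anyone else runs into this, the cleanup was essentially just removing the stale image from the device and redeploying; the image name here is only an example:

# list the images on the device and remove the leftover one
docker image ls
docker image rm <old-image:tag>
# or remove everything not used by a container
docker image prune -a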
Steve