I have an app running in a container on a Verdin iMX8M Plus dev board, with access to the devices I need. I want to make a self-starting build that runs when the board is powered up. I am using Bookworm and the Apollo-X extension.
When I have VS Code up and click on the Docker extension, I see images with my app name in them. For example, one of the images listed is :5002/appnamecontainer-debug.
I want to use my local registry, which is running on Docker. The guide shows how to create one, but mine is already there, so I don’t have to do that.
The next step it says to do is this:
docker build -t localhost:5000/ .
I have tried this in the WSL window, and it doesn’t work, but the image is already there, so I’m not sure I need to do this step either.
The next step is to push it to the registry, which I try from the WSL command line:
docker push localhost:5000/
This command fails and says it can’t find the image.
In the docker-compose.yml it says the image is:
image: ${LOCAL_REGISTRY}:5002/myappnamecontainer-debug:${TAG}
I have no idea what the ${TAG} value is.
Can you help?
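For what it’s worth, the usual reason `docker push` says it can’t find the image is that the local image hasn’t been re-tagged with the registry address first. A sketch of the tag-and-push step, with assumed names (adjust `IMAGE` to whatever `docker images` lists on your machine, and `REGISTRY`/`TAG` to match the compose file and its `.env` — a plain `latest` tag is a common default when `${TAG}` isn’t defined anywhere):

```shell
#!/bin/sh
# Assumed names -- substitute what `docker images` actually shows.
IMAGE=appnamecontainer-debug
REGISTRY=localhost:5002
TAG=latest
FULL="$REGISTRY/$IMAGE:$TAG"

# `docker push` only works once the image name starts with the registry
# address, so re-tag the locally built image first, then push:
docker tag "$IMAGE" "$FULL" && docker push "$FULL" || echo "tag/push failed"
```

Note that the bare `docker push localhost:5000/` from the guide fails because it names no image at all, and the registry port in your compose file is 5002, not 5000.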
Well, I have tried to make a new docker-compose.yml file, but I can’t, as I don’t have root privileges. I was told that there is no password for root, but it won’t take an empty string. I can’t even create a file in this directory (/var/sota/storage/docker-compose/).
You can use sudo to create the file and folder (if needed).
From the article linked:
…
For development, you can manually create a docker-compose.yml file at /var/sota/storage/docker-compose/ to start the containers at boot. Before moving the docker-compose.yml file, you will need to create that folder, which can be done with sudo mkdir -p /var/sota/storage/docker-compose.
Also, you can create your own Systemd service and start your container through the docker run command, but using the docker-compose.yml method as mentioned above is the preferred way of starting a Container on Torizon OS.
So, there is already a folder there with a docker-compose.yml for the Weston and other containers. In version 6.4 there is a target_name file that contains the name of the .yml file. Is this something new?
I have the docker-compose.yml in my local project directory. Do I just use that, or do I need to add other commands there? They don’t specify what needs to be in there to run your app.
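For context, a minimal sketch of what a compose file at that location could look like. The service name, image name, device node, and capabilities below are all placeholders/assumptions for a CAN + serial app, not something mandated by Torizon — trim to what your app actually uses:

```yaml
# Minimal sketch of /var/sota/storage/docker-compose/docker-compose.yml
# (names and devices are assumptions; adjust to your project)
version: "3.9"
services:
  myappcontainer:
    image: localhost:5002/myappnamecontainer-debug:latest
    restart: always          # needed so the container is brought back up at boot
    network_mode: host
    cap_add:
      - NET_ADMIN            # lets the container bring CAN interfaces up
    devices:
      - /dev/ttymxc1:/dev/ttymxc1   # example UART; use your hardware's nodes
```

The key detail for self-starting is the `restart` policy; without it the container is not restarted when the board boots.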
The docker-compose file that you see already located at /var/sota/storage/docker-compose/ is from the evaluation containers of the “Torizon OS with Evaluation Containers” image. If you bring up this docker-compose.yml you will see the evaluation containers boot/run.
This file can be replaced with the docker-compose.yml of your program. This location + docker-compose.yml file is configured to start at boot via a systemd service (every time).
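If the containers don’t come up after replacing the file, the systemd unit that runs it is worth checking. On Torizon OS the unit is named `docker-compose.service`, but verify that on your image with `systemctl list-unit-files` before relying on it:

```shell
#!/bin/sh
# Quick health check of the boot-time compose service (unit name assumed
# to be docker-compose.service; confirm with `systemctl list-unit-files`).
if command -v systemctl >/dev/null 2>&1; then
    systemctl status docker-compose.service --no-pager || true
    journalctl -u docker-compose.service --no-pager | tail -n 20 || true
else
    echo "systemctl not available on this machine"
fi
```

The journal output usually says exactly why the compose file was rejected at boot.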
@eric.tx ,
I have put my docker-compose in that folder after renaming the Torizon one. Rebooting doesn’t start my container. I have tried using the run command with my PC’s IP address to pull the container, but it doesn’t work.
So, right now, I can log into the board, but I can’t run anything as root, and I have to run as root due to the use of the CAN bus. sudo doesn’t work. Would it work if I just download Torizon OS without the containers?
Have you adjusted the restart flag for the containers you are expecting? Information on this is in the linked article.
When you SSH into the device, can you run sudo docker-compose -f /var/sota/storage/docker-compose/docker-compose.yml up --build? This should run your docker-compose file at this location; if it fails, it’s most likely something wrong with the docker-compose file itself.
I’m confused about what this means. Torizon OS by default doesn’t let you log in as root; when you run sudo you are acting as the default non-privileged “torizon” user running a command with elevated root privileges, for that command. A nuanced, but technically different, thing than being root.
Can you elaborate on this issue?
No. I’m not sure what the exact issue is. But this should not matter. The only difference between Torizon OS and Torizon OS with Evaluation Containers is the presence of the evaluation containers.
OK, I kind of got it to work by changing my .yml file to put in my PC’s IP address as the registry source. At least the container is there, but some of the files are missing. One of the missing files is copied to the bin directory after everything is built (as part of the makefile).
@eric.tx wrote:
“I’m confused on what this means. Torizon OS by default doesn’t let you log in as root”
So, in order to enable the CAN bus, you have to be root; that is what I’ve been told, and indeed I can’t turn on the bus(es) without being root. I need to run a batch file as root, then maybe it would work after that. I’ve looked for a solution, but I’ve not found one yet. And I can’t do a sudo command in a batch file.
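For reference, sudo can run a whole script, not just a single command, so a CAN bring-up script doesn’t need sudo inside it. A minimal sketch (interface names `can0`/`can1` and the 500 kbit/s bitrate are assumptions — use your hardware’s values):

```shell
#!/bin/sh
# can-up.sh -- bring up CAN interfaces; run the whole file as root:
#   sudo sh can-up.sh
# Interface names and bitrate below are assumptions; adjust to your board.
for bus in can0 can1; do
    if [ -d "/sys/class/net/$bus" ]; then
        ip link set "$bus" up type can bitrate 500000
    else
        echo "skipping $bus: interface not present"
    fi
done
```

`ip link set … type can` needs root (or the NET_ADMIN capability inside a container), which is why it fails as the plain torizon user.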
@eric.tx
So when I build/run my container, I get warnings that I don’t understand. For example:
#0 15.17 time="2023-12-15T22:02:13Z" level=warning msg="skipping containerd worker, as \"/run/containerd/containerd.sock\" does not exist"
Yes, it’s clear that it doesn’t exist, but I am not sure why it doesn’t? I do need sockets so I can talk to the can bus, but not sure if this is related?
This isn’t a warning, but it doesn’t appear that the build completed successfully, judging from the wording.
#0 15.17 time="2023-12-16T00:25:12Z" level=info msg="running server on /run/buildkit/buildkitd.sock"
#0 15.17 buildkitd: context canceled
#0 15.17 time="2023-12-18T16:04:08Z" buildkitd: context canceled
And then I get a bunch of warnings that all seem to be related to the containerd.sock issue:
#0 15.17 time="2023-12-18T16:04:08Z" level=warning msg="skipping containerd worker, as \"/run/containerd/containerd.sock\" does not exist"
#0 15.17 time="2023-12-18T16:04:08Z" level=info msg="found 1 workers, default=\"so2c50rrkg05kkzfiin6r5rwk\""
#0 15.17 time="2023-12-18T16:04:08Z" level=warning msg="currently, only the default worker can be used."
#0 15.17 time="2023-12-18T16:04:08Z" level=info msg="running server on /run/buildkit/buildkitd.sock"
#0 15.17 time="2023-12-18T16:04:08Z" level=warning msg="skipping containerd worker, as \"/run/containerd/containerd.sock\" does not exist"
#0 15.18 time="2023-12-18T16:04:08Z" level=warning msg="currently, only the default worker can be used."
#0 15.18 time="2023-12-18T16:04:08Z" level=warning msg="currently, only the default worker can be used."
#0 15.18 ------ http: invalid Host header
I don’t think I’m following the string of information. Are these errors related to getting the docker-compose file to run on start? Are we onto a different issue?
Can you summarize what goal you want to achieve and the method/systems/tools you are using? And then post the errors related to that action.
hi @eric.tx ,
Yes, I am still trying to build a Docker container that will run on start and run my app inside it. I am able to pull the debug image from my local repository using the modified docker-compose.yml file from my debug C++ project. The warnings above are from that build process.
The first build I tried gave me the directory I was expecting, but it didn’t run my app, and I hadn’t added a needed file to it. But when I tried to do it again after adding the file in my Dockerfile, it no longer builds much of anything, and I get the warnings above. Additionally, I was using this as a guide from Docker: Try Docker Compose | Docker Docs
And then I get this error when I try to build it: services.gimbal3bcontainer-debug.build Additional property volumes is not allowed
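That error message pinpoints the problem itself: Compose found a `volumes` key nested under `build:`, where it isn’t allowed. `volumes` belongs at the service level, as a sibling of `build:`. A sketch using the service name from your error (the dockerfile name and volume mapping are assumptions):

```yaml
# "Additional property volumes is not allowed" under
# services.<name>.build means volumes was indented under build:.
# It belongs one level up, on the service itself:
services:
  gimbal3bcontainer-debug:
    build:
      context: .
      dockerfile: Dockerfile.debug   # assumed file name
    volumes:
      - ./bin:/app/bin               # assumed mapping; use your paths
```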
Can you share the output from sudo tdx-info while shelled into the device?
If you search our community boards, you’ll find a few different examples of “http: invalid Host header”.
This sometimes gets solved by updating to the newest quarterly release of Torizon OS, and/or updating Docker. Can you share your Docker version?
And you are running windows w/ WSL correct?
The last error looks like a Docker Compose error where you have a volumes property somewhere it isn’t allowed. Can you check/post the docker-compose properties?
@eric.tx .
Here is what we ended up doing and it is working for the most part.
We debug our program. This creates a container which we can shell into. Then we commit that container to an image. This image has the files needed for my program copied over. Then we use basically the same .yml, with the name of the image changed, to compose the container. We can then docker exec -it container bash
to get a bash shell going and cd into our directory and run our program. All is well!! But there is still too much manual work to get it going.
I want an image that will run my program as soon as the container starts. I’ve tried putting in a CMD ["appname"] in the --change option to do that, but that hasn’t worked.
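One common pitfall with `docker commit --change`: the CMD must be in JSON (exec) form with double quotes and, usually, an absolute path, and if the base image defines an ENTRYPOINT (as debug images often do for their SSH/debug launcher), the CMD is passed to it as arguments instead of being run. A sketch with hypothetical names (`debugcontainer`, `myapp-release`, and the app path are placeholders):

```shell
#!/bin/sh
# Hypothetical names: "debugcontainer" is the stopped container being
# committed, "myapp-release" the image to create. Clearing ENTRYPOINT is
# only needed if the base image defines one that would swallow the CMD.
docker commit \
  --change 'ENTRYPOINT []' \
  --change 'CMD ["/home/torizon/app/appname"]' \
  debugcontainer myapp-release:latest || echo "commit failed"
```

You can verify what the resulting image will run with `docker inspect --format '{{.Config.Cmd}}' myapp-release:latest`.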
This sounds like you are using the debug container in the final deployment? I would think this is atypical. Is there a reason for this? The debug container has its own CMD that generates the SSH session for debugging. There may be some conflict here.
Hi @eric.tx,
Well, if I try to run without debugging, it still compiles and uses the debug container. So, no, that is not the final thing, but one thing at a time. lol I would like to know how to make a release version and how to set the proper parameters for that.
The intention would be to build a release version, and use the same trick to create the image. It is auto starting now with the correct yml commands inside, from a cold start or a reset.
However, it takes almost 45 seconds from a reset before my program runs, and that’s just running Torizon as the normal Linux startup. Is there a way to speed that up? We don’t need any graphics, just certain devices (CAN, UARTs and SPI), plus Ethernet and USB for debugging.
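Before disabling anything, it helps to measure where the 45 seconds actually goes. systemd’s own profiling tools ship with Torizon OS; run something like this over SSH on the module (the `docker.service` unit name is the standard one, but confirm it exists on your image):

```shell
#!/bin/sh
# Profile boot time on the module before trimming services.
if command -v systemd-analyze >/dev/null 2>&1; then
    systemd-analyze time || true                  # firmware/kernel/userspace split
    systemd-analyze blame | head -n 10            # slowest units first
    systemd-analyze critical-chain docker.service || true
else
    echo "systemd-analyze not available on this machine"
fi
```

Typically the Docker daemon start plus container creation dominate userspace time; dropping the graphics/Weston container from the compose file (since you need no graphics) is usually the first easy win.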