I’m not sure I understand your question. We already use systemd on TorizonCore by default. Or are you saying you want to use systemd inside a container on TorizonCore?
My real need is not necessarily to use systemd but to start different services as root while my application runs as user torizon.
I need to start /lib/systemd/systemd-udevd -d and /etc/init.d/ueyeusbdrc start (IDS-imaging uEye camera) as root.
This works if I run the container as root (-u 0) and with --privileged (I would like to avoid this, but I have not found another way) and execute the services in the CMD of the Dockerfile. But then my application also runs as root, which is not ideal.
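For reference, a minimal sketch of the kind of invocation described above; the image name is a placeholder and the exact flags may differ from the real setup:

# Hypothetical example: run the application container as root and privileged
# (the setup described above, which the poster would like to avoid).
docker run -it --rm -u 0 --privileged my-ueye-app:latest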
Let me see if I can understand your use-case here. Running systemd in a container is “possible”, but honestly it probably makes things more complex than you really need.
So your ultimate objective is to start your camera and use it inside a container. To do this you’re executing /lib/systemd/systemd-udevd -d and /etc/init.d/ueyeusbdrc start. I assume these are what get your camera detected and then initialized?
Is it the case then that you just need to do this once on system start and that’s it? Or will you need to execute these multiple times while the system is running?
If it’s the case that you only need to execute this once, then why not just have a systemd service that executes this on system start-up? Then, when you start your application container, your camera should be up and ready. Or is there a detail here that I’m missing?
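A minimal sketch of what such a unit could look like, assuming the uEye driver and its init script were available on the host; the file name and ordering are my assumptions, not something from the uEye documentation:

# /etc/systemd/system/ueye-camera.service (hypothetical)
[Unit]
Description=Start the IDS uEye USB camera daemon at boot
After=systemd-udevd.service

[Service]
Type=oneshot
ExecStart=/etc/init.d/ueyeusbdrc start
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

It would be enabled once with systemctl enable ueye-camera.service so that it runs on every boot.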
Indeed, I could probably use systemd directly on the target (and not inside the container), but then I lose the possibility to have all my infrastructure stored as code. How would I deploy the same thing on many targets then? Would I need to make a new TorizonCore image?
To summarize, the option of customizing TorizonCore does not seem feasible, since the drivers have to be installed in /usr and only /etc changes can be captured with torizoncore-builder isolate. A new TorizonCore image must be built with Yocto, according to what was discussed in this thread: Install deb packages on Torizon target (NOT on container).
So I continue with the first option, i.e. installing the driver and starting the services in the application container:
NB: it is a bit weird that systemd-udevd has to be launched in the container while it is already running on the target, but if it is not executed, the camera does not get recognized. Can you explain this mechanism?
In the release configuration, this can be automated using %#application.targetcommands%# = CMD /home/torizon/entrypoint-torizon.sh
#! /bin/sh
# This is /home/torizon/entrypoint-torizon.sh
/lib/systemd/systemd-udevd -d    # start udevd inside the container (-d: detach and run as a daemon)
sleep 3                          # give udevd a moment to create the camera device nodes
/etc/init.d/ueyeusbdrc start     # start the IDS uEye USB camera daemon
However, in the debug configuration, %#application.targetcommands%# is not used in the moses platform template (the last CMD concerns the gdbserver), so I cannot automate the process.
So my question is: is there a way to execute entrypoint-torizon.sh in the debug configuration? I was thinking of editing the moses platform template, but changing the template does not have any effect on the generation of the Dockerfiles (it is probably only read when the project gets created).
To summarize, the option of customizing TorizonCore does not seem feasible since the drivers have to be installed in /usr and only /etc changes can be captured with torizoncore-builder isolate.
You’re right about the driver installation, but you can still use torizoncore-builder isolate to deploy scripts that run systemd commands on the SoM.
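As a rough sketch of that approach, assuming a unit like the ueye-camera.service above: the /etc changes (the unit file and its enable symlink) can be captured from a reference module and then reapplied to other devices. The IP address and password are placeholders, and the flag names should be checked against your torizoncore-builder version:

# On the module (as root): install and enable the unit under /etc
cp ueye-camera.service /etc/systemd/system/
systemctl enable ueye-camera.service

# On the development host: capture the /etc changes from the module
torizoncore-builder isolate --remote-host 192.168.0.100 --remote-username torizon --remote-password <password> --changes-directory changes/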
NB: it is a bit weird that systemd-udevd has to be launched in the container while it is already running on the target, but if it is not executed, the camera does not get recognized. Can you explain this mechanism?
It looks like you’re running systemd commands inside a container, and I’m not sure how systemd behaves in this scenario.
So my question is: is there a way to execute entrypoint-torizon.sh in the debug configuration?
The team here has found a method that should execute your script in debug mode. Put this line in the targetfiles field:
Sorry for the late answer. I am only now testing setting an ENTRYPOINT in application.targetfiles.
Unfortunately, the command you propose gives the following VSCode warning (popup):
Unable to start debugging. Unexpected GDB output from command "-target-select remote 172.16.15.19:-1". 172.16.15.19:-1: No such file or directory
I don’t understand this message, but I can guess why it does not work: in the debug.dockerfile template, application.targetfiles appears before USER and WORKDIR. This will probably have an impact on how my application runs (permissions, for instance), won’t it?
FYI, here is the template file that I use: C:\Users\dortu2\.vscode\extensions\toradex.torizon-early-access-1.6.4\moses-linux\platforms\arm64v8-qt5-vivante-no-ssh_bullseye\debug.dockerfile
I would be tempted to edit the template by moving the targetfiles line below the WORKDIR line, but I know from experience that the template file is not re-read when modified. Or maybe there is a way to do that which I don’t know about?
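For clarity, the ordering being described is roughly the following; this is a hypothetical excerpt, not the literal template contents, reusing the placeholder syntax from the earlier post:

# ... earlier layers of debug.dockerfile ...
%#application.targetfiles%#
USER torizon            # likely a placeholder in the real template
WORKDIR /home/torizon
# ... the template’s last CMD starts the gdbserver ...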
I did some tests here and I was able to reproduce this issue. This can happen if your script file isn’t executable.
On your host machine, set your .sh file as executable by running the following in a terminal:
chmod +x entrypoint-torizon.sh
Then, in VSCode, rebuild the debug container by pressing F1 → Torizon: Build debug container before trying to debug with F5. Keep targetfiles unchanged.
Yes, this was the issue: after changing the permissions to executable, there is no more error and entrypoint-torizon.sh gets executed.
However, there is still an issue: even though systemd-udevd and ueyeusbdrc are running in my debug container (ps -A), the camera does not get recognized.
In the meantime, I solved the issue by calling the commands from within my C code.
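For reference, a minimal sketch of that workaround, assuming the same two commands as entrypoint-torizon.sh; the function name is mine, and in real code the return values of system() should be checked:

#include <stdlib.h>   /* system() */
#include <unistd.h>   /* sleep()  */

/* Hypothetical helper: start udevd and the uEye daemon from the application
   itself, mirroring entrypoint-torizon.sh. Call it before opening the camera. */
void init_ueye_camera(void)
{
    system("/lib/systemd/systemd-udevd -d"); /* start udevd inside the container */
    sleep(3);                                /* let udevd create the device nodes */
    system("/etc/init.d/ueyeusbdrc start");  /* start the IDS uEye USB daemon */
}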
However, this solution is not 100% satisfactory, because targetfiles is also used in the Dockerfile for building the release container, which breaks running the release container because it then tries to run gdbserver.
So I have to remove or re-add the content of targetfiles depending on whether the debug or release container is being built, which is a bit annoying.
You can apply configuration properties such as targetfiles to a specific type of build. By default the config options on the extension are applied to a common configuration for both debug and release.
To apply a property exclusively to debug builds, right-click on appconfig_N, where N is an integer (usually 0), then select Configuration: Debug.
After that, just add targetfiles as a new property and put its value as before.
However, when I select a configuration, all configurations switch back to common, and I need to reselect debug or release. Is this the expected behaviour?
However, when I select a configuration, all configurations switch back to common, and I need to reselect debug or release. Is this the expected behaviour?
I did some tests and this also happens on my side, so I believe this is the expected behavior, although not ideal. I’ll ask the extension team for more details.
This works, but I have the impression that the image needs to be fully rebuilt each time, even though nothing changed, which takes a long time.
That’s strange. If no configuration was changed since the last build everything should use cached layers, which doesn’t take long. This is what I encountered when testing. Does this happen on a new template project?