Dear Jeremias,
Yes indeed, I mean inside a container.
Regards,
Fabian
Dear Jeremias,
My real need is not necessarily to use systemd, but to start different services as root while my application runs as user torizon.
I need to start /lib/systemd/systemd-udevd -d and /etc/init.d/ueyeusbdrc start (for an IDS Imaging uEye camera) as root.
This works if I run the container as root (-u 0) and with --privileged (I would like to avoid this, but I did not find another way) and execute the services in the CMD of the Dockerfile. But then my application runs as root, which is not ideal.
What is the right way to do that?
Fabian
Let me see if I can understand your use-case here. Running systemd in a container is “possible”, but honestly it probably makes things more complex than you really need.
So your ultimate objective is to start your camera and use it inside a container. To do this you're executing /lib/systemd/systemd-udevd -d and /etc/init.d/ueyeusbdrc start. I assume these are what get your camera detected and then initialized?
Is it the case then that you just need to do this once on system start and that’s it? Or will you need to execute these multiple times while the system is running?
If it’s the case where you only need to execute this once, then why not just have a systemd service that executes this on system start-up. Then when you start your application container, your camera should be up and ready. Or is there a detail here that I’m missing?
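Such a host-side unit could look roughly like this (a sketch only: the unit name and ordering are assumptions, and the systemd-udevd step should not be needed on the host since udev is already running there):

```ini
# /etc/systemd/system/ueye-camera.service  (file name is an assumption)
[Unit]
Description=Initialize IDS uEye USB camera driver
After=multi-user.target

[Service]
Type=oneshot
ExecStart=/etc/init.d/ueyeusbdrc start
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

It would then be enabled once with `systemctl enable ueye-camera.service`, so the camera is initialized before the application container starts.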
Best Regards,
Jeremias
Dear Jeremias,
Indeed, I could probably use systemd directly on the target (and not inside the container), but then I lose the possibility to have all my infrastructure stored as code. How would I deploy the same thing on many targets then? Would I need to make a new image with TorizonCore?
Best regards,
Fabian
How would I deploy the same thing on many targets then? Would I need to make a new image with TorizonCore?
You could use our TorizonCore Builder tool to produce a customized TorizonCore image that can then be installed on many devices: TorizonCore Builder Tool - Customizing TorizonCore Images | Toradex Developer Center
Best Regards,
Jeremias
To summarize, the option of customizing a TorizonCore image does not seem feasible, since the drivers have to be installed in /usr and only /etc changes can be captured with torizoncore-builder isolate. A new TorizonCore must be built with Yocto, according to what was discussed in this thread: Install deb packages on Torizon target (NOT on container).
So I continue with the first option, i.e. installing and loading the drivers in the application container:
#%application.buildfiles%# = COPY ueye_4.94.0.1220_arm64.run /home/torizon/
#%application.buildcommands%# = RUN yes y | sh /home/torizon/ueye_4.94.0.1220_arm64.run
The container also gets the volumes /dev, /run, /tmp and /var/run, and device_cgroup_rules: '["c 4:0 rmw", "c 4:7 rmw", "c 13:* rmw", "c 189:* rmw", "c 81:* rmw", "c 245:* rmw"]' for hotplugging USB devices.
Then I have to load the drivers manually as follows (## means inside the container):
# docker exec -it -u 0 --privileged <containerID> /bin/bash
## /lib/systemd/systemd-udevd -d
## sleep 3
## /etc/init.d/ueyeusbdrc start
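For reference, the volume and cgroup settings described above would look roughly like this in a docker-compose file (a sketch; the service and image names are placeholders, the rules are the ones listed in this thread):

```yaml
services:
  camera-app:            # placeholder service name
    image: my-app-image  # placeholder image
    volumes:
      - /dev:/dev
      - /run:/run
      - /tmp:/tmp
      - /var/run:/var/run
    device_cgroup_rules:
      - "c 4:0 rmw"
      - "c 4:7 rmw"
      - "c 13:* rmw"
      - "c 189:* rmw"   # 189 = USB devices, needed for hotplug
      - "c 81:* rmw"
      - "c 245:* rmw"
```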
NB: it is a bit weird that systemd-udevd has to be launched in the container while it is already running on the target, but if it is not executed, the camera does not get recognized. Can you explain this mechanism?
In the release configuration, this can be automated using #%application.targetcommands%# = CMD /home/torizon/entrypoint-torizon.sh
#! /bin/sh
# This is /home/torizon/entrypoint-torizon.sh
/lib/systemd/systemd-udevd -d
sleep 3
/etc/init.d/ueyeusbdrc start
However, in the debug configuration, #%application.targetcommands%# is not used in the moses platform template (the last CMD concerns the gdbserver), so I don't manage to automate the process.
So my question is: is there a way to execute entrypoint-torizon.sh in the debug configuration? I was thinking of editing the moses platform template, but changing the template does not have any effect on the generation of the Dockerfiles (maybe it is only read when the project gets created).
Can you suggest a way ?
Hi @fdortu ,
To summarize, the option of customizing a TorizonCore does not seem feasible since the drivers have to be installed in /usr and only /etc changes can be captured with torizoncore-builder isolate.
You're right about the driver installation, but you can still use torizoncore-builder isolate to deploy scripts that run systemd commands on the SoM.
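For example, something along these lines (a sketch only; the flags are as I recall them from the TorizonCore Builder documentation, and the host address, credentials and directory name are placeholders):

```
# Capture /etc changes (e.g. a new systemd unit) from a live board
torizoncore-builder isolate \
    --remote-host 192.168.0.100 \
    --remote-username torizon \
    --remote-password torizon \
    --changes-directory changes1
```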
NB: it is a bit weird that systemd-udevd has to be launched in the container while it is already running on the target, but if it is not executed, the camera does not get recognized. Can you explain this mechanism?
It looks like you’re running systemd commands inside a container, and I’m not sure how systemd behaves in this scenario.
So my question is: is there a way to execute entrypoint-torizon.sh in the debug configuration?
The team here has found a method that should execute your script in debug mode. Put this line in the targetfiles field:
ENTRYPOINT /home/torizon/entrypoint-torizon.sh && stdbuf -oL -eL gdbserver 0.0.0.0:6502 /#%application.appname%#/#%application.exename%# #%application.appargs%#
We tested this by creating a directory before starting the debug.
Best regards,
Lucas Akira
Dear Lucas,
Thanks for the feedback regarding the ENTRYPOINT.
Regards,
Fabian
Hi Kevin,
Sorry to answer so late. I am only now testing setting an ENTRYPOINT in application.targetfiles.
Unfortunately, the command you propose gives the following VSCode warning (popup):
Unable to start debugging. Unexpected GDB output from command "-target-select remote 172.16.15.19:-1". 172.16.15.19:-1: No such file or directory
I don't understand this message, but I can guess why it does not work: in the debug.dockerfile template, application.targetfiles appears before USER and WORKDIR. This will probably have an impact on how my application runs (permissions, for instance), won't it?
FYI, here is the template file that I use : FILE C:\Users\dortu2\.vscode\extensions\toradex.torizon-early-access-1.6.4\moses-linux\platforms\arm64v8-qt5-vivante-no-ssh_bullseye\debug.dockerfile
...
#%application.targetfiles%#
USER #%application.username%#
WORKDIR /#%application.appname%#
CMD stdbuf -oL -eL gdbserver 0.0.0.0:6502 /#%application.appname%#/#%application.exename%# #%application.appargs%#
I would be tempted to edit the template by moving the targetfiles line below the WORKDIR line, but I know from experience that the template file is not re-read if modified. Or maybe there is a way to do it that I don't know about?
Best regards,
Fabian
Hi @fdortu ,
I did some tests here and I was able to reproduce this issue. This can happen if your script file isn’t executable.
On your host machine set your .sh file as executable by running on a terminal
chmod +x entrypoint-torizon.sh
Then on VSCode you should rebuild the debug container by pressing F1 → Torizon: Build debug container before trying to debug with F5. Keep targetfiles unchanged:
ENTRYPOINT /home/torizon/entrypoint-torizon.sh && stdbuf -oL -eL gdbserver 0.0.0.0:6502 /#%application.appname%#/#%application.exename%# #%application.appargs%#
Let me know if this works.
Best regards,
Lucas Akira
Yes, this was the issue: after changing the permission to executable, there is no error any more and entrypoint-torizon.sh gets executed.
However, there is still an issue: even though systemd-udevd and ueyeusbdrc are running in my debug container (checked with ps -A), the camera does not get recognized.
In the meantime I solved the issue by calling the commands from within my C code:
system("sudo sh -c \"/lib/systemd/systemd-udevd -d && sleep 3 && /etc/init.d/ueyeusbdrc start\"");
The key point was to call the command with sudo sh -c; otherwise it does not work.
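A plausible explanation (my assumption, not confirmed in this thread): with `sudo cmd1 && cmd2`, only cmd1 is elevated, because the && chain is evaluated by the calling, unprivileged shell; wrapping the whole chain in `sh -c "..."` hands it as a single command line to one root shell. The grouping behaviour can be seen with plain sh standing in for sudo:

```shell
# sh -c receives the quoted string as ONE command line,
# so the whole && chain runs inside the inner shell:
sh -c "echo first && sleep 1 && echo second"
```

This prints `first`, waits, then prints `second`, all from the inner shell, which is why the sleep between systemd-udevd and ueyeusbdrc also runs under the elevated shell.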
Translating that to targetfiles, I guess the following should work (I have not tried it yet):
ENTRYPOINT sudo sh -c "/home/torizon/entrypoint-torizon.sh" && stdbuf -oL -eL gdbserver 0.0.0.0:6502 /#%application.appname%#/#%application.exename%# #%application.appargs%#
Hi @fdortu !
Thanks for the feedback!
From your last message, I understood that your problem was solved, right?
If so, can you please mark the related message as the solution?
Thanks!
Best regards,
Henrique
I marked the last message as the solution!
However, this solution is not 100% satisfactory, because targetfiles is also used in the Dockerfile for building the release container, which breaks the release container at run time because it tries to run gdbserver.
So I have to remove/add the content of targetfiles depending on whether the release or debug container is built, which is a bit annoying.
Hi @fdortu ,
You can apply configuration properties such as targetfiles to a specific type of build. By default the config options on the extension are applied to a common configuration for both debug and release.
To apply a property exclusively on debug builds, right-click on appconfig_N, where N is an integer (usually 0), then select Configuration: Debug.
After that, just add targetfiles as a new property and put its value as before.
Do the same with the release build, but change the value of targetfiles to something without gdbserver, similar to this:
ENTRYPOINT sudo sh -c "/home/torizon/entrypoint-torizon.sh" && /#%application.appname%#/#%application.exename%# #%application.appargs%#
Don't forget to rebuild each container afterwards.
Hope this helps.
Best regards,
Lucas Akira
Great, I did not know about this feature!
However, when I select a configuration, all configurations switch back to common, and I need to reselect debug or release. Is this the expected behaviour?
I also tried to use the same configuration (appconfig_0), specifying common, debug and release within the same config.yaml file:
...
common:
appargs: '`cat appargs.txt`'
appname: phosddvr
arg: ''
buildcommands:
....[lot of things]...
debug:
arg: 'ARG SSHUSERNAME=#%application.username%#
'
targetfiles: 'ENTRYPOINT sudo sh -c "/home/torizon/entrypoint-torizon.sh"
&& stdbuf -oL -eL gdbserver 0.0.0.0:6502 /#%application.appname%#/#%application.exename%#
#%application.appargs%#'
release:
targetcommands: CMD sudo sh -c "/home/torizon/entrypoint-torizon.sh"
...
This works, but I have the impression that the image needs to be fully rebuilt each time even though nothing changed, which takes a long time.
Best regards,
Fabian
Hi @fdortu ,
However, when I select a configuration, all configurations switch back to common, and I need to reselect debug or release. Is this the expected behaviour?
I did some tests and this also happens on my side, so I believe this is the expected behavior, although not ideal. I’ll ask the extension team for more details.
This works, but I have the impression that the image needs to be fully rebuilt each time even though nothing changed, which takes a long time.
That’s strange. If no configuration was changed since the last build everything should use cached layers, which doesn’t take long. This is what I encountered when testing. Does this happen on a new template project?
Best regards,
Lucas Akira
Quick update on the config options switching to common: The team has confirmed this is a bug, and it’s currently being investigated. Thank you for the report!
I don't know whether it happens on a new template project. How can I use a new template with my existing project?
To illustrate what happens, see the following output after invoking F5 on VSCode.
...
[10-04 13:26:23.766] Selecting device...
[10-04 13:26:23.788] Device 06852340 selected.
[10-04 13:26:23.789] Updating app configuration...
[10-04 13:26:23.834] Image is not up to date, building it (this may take some time)...
[10-04 13:29:45.362] Step 1/12 : FROM --platform=linux/arm64 torizon/qt5-wayland-vivante:2
[10-04 13:29:56.874] ---> ac9480d4076a
[10-04 13:29:56.903] Step 2/12 : EXPOSE 6502
[10-04 13:29:56.904] ---> Using cache
[10-04 13:29:56.905] ---> f0706834af12
[10-04 13:29:56.907] Step 3/12 : ARG SSHUSERNAME=torizon
[10-04 13:29:56.908] ---> Using cache
[10-04 13:29:56.909] ---> 54fa11141b11
[10-04 13:29:56.910] Step 4/12 : ENV DEBIAN_FRONTEND="noninteractive"
[10-04 13:29:56.912] ---> Using cache
[10-04 13:29:56.915] ---> 7c59382c9f9e
[10-04 13:29:56.917] Step 5/12 : RUN apt-get -q -y update && apt-get -q -y install gdbserver procps && rm -rf /var/lib/apt/lists/*
[10-04 13:29:56.920] ---> Using cache
[10-04 13:29:56.921] ---> 86baac177b83
[10-04 13:29:56.922] Step 6/12 : RUN if [ ! -z "libopencv-core4.4 libgomp1 libomp5 libcap2 libqt5core5a libqt5serialport5 libopencv-core4.4 net-tools openssh-client apt libopencv-imgcodecs4.4 libopencv-imgproc4.4 file libglfw3-wayland libatomic1 udev procps usbutils libusb-1.0-0 kmod pciutils libpng16-16 libjpeg62-turbo libglapi-mesa libglu1-mesa systemd vim fxload x11-apps strace feh libimlib2" ]; then apt-get -q -y update && apt-get -q -y install libopencv-core4.4 libgomp1 libomp5 libcap2 libqt5core5a libqt5serialport5 libopencv-core4.4 net-tools openssh-client apt libopencv-imgcodecs4.4 libopencv-imgproc4.4 file libglfw3-wayland libatomic1 udev procps usbutils libusb-1.0-0 kmod pciutils libpng16-16 libjpeg62-turbo libglapi-mesa libglu1-mesa systemd vim fxload x11-apps strace feh libimlib2 && rm -rf /var/lib/apt/lists/* ; fi
[10-04 13:29:56.924] ---> Using cache
[10-04 13:29:56.925] ---> b740ee64fdfa
[10-04 13:29:56.926] Step 7/12 : COPY ueye_4.94.0.1220_arm64.run ids-software-suite-linux-arm64-4.95.2-debian.tgz NITLibrary-bin-3.0.1.tar.gz libffmpeg-bin-3.2.18.tar.gz libboost-bin-1.62.0.tar.gz AlliedVision_Apalis_iMX8_Torizon-4.14.126-00001-ga39899f.tar.gz NUC.tar.gz entrypoint-torizon.sh /home/torizon/
[10-04 13:29:56.927] ---> Using cache
[10-04 13:29:56.928] ---> 10dbc86bedbb
[10-04 13:29:56.929] Step 8/12 : RUN yes y | sh /home/torizon/ueye_4.94.0.1220_arm64.run && tar -C / -xvf /home/torizon/NITLibrary-bin-3.0.1.tar.gz && ln -sf /lib/libNITLibrary.so.3.0.1 /lib/libNITLibrary.so && tar -C /usr -xvf /home/torizon/libffmpeg-bin-3.2.18.tar.gz && tar -C /usr -xvf /home/torizon/libboost-bin-1.62.0.tar.gz && tar -C /home/torizon -xvf /home/torizon/NUC.tar.gz && usermod -a -G plugdev torizon && usermod -a -G sudo torizon
[10-04 13:29:56.931] ---> Using cache
[10-04 13:29:56.932] ---> c4c3101b8f98
[10-04 13:29:56.934] Step 9/12 : ENTRYPOINT sudo sh -c "/home/torizon/entrypoint-torizon.sh" && stdbuf -oL -eL gdbserver 0.0.0.0:6502 /phosddvr/phosddvr_core_exe `cat appargs.txt`
[10-04 13:29:56.936] ---> Using cache
[10-04 13:29:56.937] ---> 1c5c61dcd717
[10-04 13:29:56.938] Step 10/12 : USER torizon
[10-04 13:29:56.938] ---> Using cache
[10-04 13:29:56.940] ---> 3b071c900958
[10-04 13:29:56.941] Step 11/12 : WORKDIR /phosddvr
[10-04 13:29:56.941] ---> Using cache
[10-04 13:29:56.942] ---> 79418dd2eeea
[10-04 13:29:56.943] Step 12/12 : CMD stdbuf -oL -eL gdbserver 0.0.0.0:6502 /phosddvr/phosddvr_core_exe `cat appargs.txt`
[10-04 13:29:56.944] ---> Using cache
[10-04 13:29:56.944] ---> 74dc21decaef
[10-04 13:29:57.701] Successfully built 74dc21decaef
[10-04 13:29:57.738] Successfully tagged phosddvr_arm64v8-qt5-vivante-no-ssh_bullseye_debug_513050ae-f541-4a8a-baef-1599a9e43911:latest
[10-04 13:29:57.832] Deploying image to device (may take a few minutes)...
[10-04 13:29:58.053] Deploying application to device...
[10-04 13:29:58.487] Image on target is already up to date.
[10-04 13:30:00.422] sending incremental file list
...
So it first says
[10-04 13:26:23.834] Image is not up to date, building it (this may take some time)...
(even though it uses the cache at every step) and then
[10-04 13:29:58.487] Image on target is already up to date.
which is a bit contradictory, isn't it?
I also wonder how ENTRYPOINT (step 9) and CMD (step 12) are dealt with when executing the container. It seems only ENTRYPOINT is executed and CMD is ignored (which is what I want).
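NB: from what I understand of the Dockerfile reference, this matches the documented behaviour: a shell-form ENTRYPOINT (written without a JSON array, as in step 9) prevents CMD and any `docker run` arguments from being used at all. A minimal sketch of the two instructions as generated here:

```dockerfile
# Shell-form ENTRYPOINT: Docker wraps it in `/bin/sh -c` and
# ignores CMD as well as any arguments given to `docker run`.
ENTRYPOINT echo "entrypoint runs"
# Never executed while the shell-form ENTRYPOINT above is present:
CMD echo "cmd is ignored"
```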
Best regards,
Fabian