Systemd service management on Torizon

I would like to use systemd to automatically manage services on Torizon.

I have installed the systemd package on the target, but I cannot invoke any systemd commands; I get the following error, for instance:

/phosddvr# systemctl 
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down

How can I boot with systemd on Torizon?

Fabian

Greetings @fdortu,

I’m not sure I understand your question. We already use systemd on TorizonCore by default. Or are you saying you want to use systemd inside a container on TorizonCore?

Best Regards,
Jeremias

Dear Jeremias,

Yes indeed, I mean inside a container.

Regards,
Fabian

Dear Jeremias,

My real need is not necessarily to use systemd but to start different services as root while my application runs as user torizon.

I need to start /lib/systemd/systemd-udevd -d and /etc/init.d/ueyeusbdrc start (IDS-imaging uEye camera) as root.

This works if I run the container as root (-u 0) and with --privileged (I would like to avoid this, but I have not found a way) and execute the services in the CMD of the Dockerfile. But then my application runs as root, which is not ideal.
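For reference, this is roughly what the current workaround looks like (the image name is a placeholder):

# Current workaround (not ideal): run the container as root and privileged;
# the driver services are started from the Dockerfile CMD.
docker run -d -u 0 --privileged my-ueye-app-image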

What is the right way to do that?

Fabian

Let me see if I can understand your use-case here. Running systemd in a container is “possible”, but honestly it probably makes things more complex than you really need.

So your ultimate objective is to start your camera here and use it inside a container. To do this you’re executing /lib/systemd/systemd-udevd -d and /etc/init.d/ueyeusbdrc start. I assume these are what gets your camera detected and then initialized?

Is it the case then that you just need to do this once on system start and that’s it? Or will you need to execute these multiple times while the system is running?

If it’s the case that you only need to execute this once, then why not just have a systemd service that executes this at system start-up? Then when you start your application container, your camera should be up and ready. Or is there a detail here that I’m missing?
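For illustration, such a host-side unit could look roughly like the following (the unit name is made up, and it assumes the uEye init script is actually present on the host, which may not be the case on TorizonCore):

# /etc/systemd/system/ueye-camera.service (hypothetical unit name)
# Assumes /etc/init.d/ueyeusbdrc exists on the host; systemd-udevd already runs there.
[Unit]
Description=Initialize IDS uEye USB camera
After=systemd-udevd.service

[Service]
Type=oneshot
ExecStart=/etc/init.d/ueyeusbdrc start
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

It would then be enabled once with systemctl enable ueye-camera.service, and your application container would only need access to the resulting device nodes.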

Best Regards,
Jeremias

Dear Jeremias,

Indeed, I could probably use systemd directly on the target (and not inside the container), but then I lose the possibility to have all my infrastructure stored as code. How would I deploy the same thing on many targets then? Would I need to make a new image with TorizonCore?

Best regards,
Fabian

How would I deploy the same thing on many targets then? Would I need to make a new image with TorizonCore?

You could use our TorizonCore Builder tool to produce a customized TorizonCore image that can then be installed on many devices: TorizonCore Builder Tool - Customizing Torizon OS Images | Toradex Developer Center
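As a rough idea of what that involves, TorizonCore Builder is driven by a tcbuild.yaml file along these lines (the input/output names below are placeholders; see the linked article for the exact options):

# tcbuild.yaml (sketch; input and output names are placeholders)
input:
  easy-installer:
    local: torizon-core-docker-Tezi_5.x.tar
customization:
  filesystem:
    - changes/          # e.g. captured /etc changes such as extra systemd units
output:
  easy-installer:
    local: torizon-core-custom

Running torizoncore-builder build then produces a customized image that can be installed on every device with the Toradex Easy Installer.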

Best Regards,
Jeremias

To summarize, the option of customizing TorizonCore does not seem feasible, since the drivers have to be installed in /usr and only /etc changes can be captured with torizoncore-builder isolate. A new TorizonCore image must be built with Yocto according to what was discussed in this thread: Install deb packages on Torizon target (NOT on container).

So I continue with the first option, i.e. installing and loading the driver in the application container:

  • installing the driver in the container
    • #%application.buildfiles%# = COPY ueye_4.94.0.1220_arm64.run /home/torizon/
    • #%application.buildcommands%# = RUN yes y | sh /home/torizon/ueye_4.94.0.1220_arm64.run
  • running the container with the correct bind mounts (/dev, /run, /tmp, /var/run) and device_cgroup_rules: '["c 4:0 rmw", "c 4:7 rmw", "c 13:* rmw", "c 189:* rmw", "c 81:* rmw", "c 245:* rmw"]' for hotplugging USB devices (see the compose sketch below).
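For reference, the runtime part corresponds to a docker-compose service roughly like this (service and image names are placeholders):

# docker-compose.yml (sketch; service and image names are placeholders)
services:
  camera-app:
    image: my-ueye-app-image
    volumes:
      - /dev:/dev
      - /run:/run
      - /tmp:/tmp
      - /var/run:/var/run
    device_cgroup_rules:
      - 'c 4:0 rmw'
      - 'c 4:7 rmw'
      - 'c 13:* rmw'
      - 'c 189:* rmw'
      - 'c 81:* rmw'
      - 'c 245:* rmw'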

Then I have to load the drivers manually as follows (## denotes the container shell prompt):

# docker exec -it -u 0 --privileged <containerID> /bin/bash
## /lib/systemd/systemd-udevd -d
## sleep 3
## /etc/init.d/ueyeusbdrc start

NB: it is a bit weird that systemd-udevd has to be launched in the container while it is already running on the target, but if it is not executed, the camera does not get recognized. Can you explain this mechanism?

In the release configuration, this can be automated using #%application.targetcommands%# = CMD /home/torizon/entrypoint-torizon.sh

#! /bin/sh
# This is /home/torizon/entrypoint-torizon.sh
/lib/systemd/systemd-udevd -d
sleep 3
/etc/init.d/ueyeusbdrc start

However, in the debug configuration, #%application.targetcommands%# is not used in the moses platform template (the last CMD concerns the gdbserver), so I cannot automate the process.

So my question is: is there a way to execute entrypoint-torizon.sh in the debug configuration? I was thinking of editing the moses platform template, but changing the template does not have any effect on the generation of the Dockerfiles (maybe it is only read when the project gets created).

Can you suggest a way?

Hi @fdortu ,

To summarize, the option of customizing TorizonCore does not seem feasible, since the drivers have to be installed in /usr and only /etc changes can be captured with torizoncore-builder isolate.

You’re right about the driver installation, but you can still use torizoncore-builder isolate to deploy scripts that run systemd commands on the SoM.

NB: it is a bit weird that systemd-udevd has to be launched in the container while it is already running on the target, but if it is not executed, the camera does not get recognized. Can you explain this mechanism?

It looks like you’re running systemd commands inside a container, and I’m not sure how systemd behaves in this scenario.

So my question is: is there a way to execute entrypoint-torizon.sh in the debug configuration?

The team here has found a method that should execute your script in debug mode. Put this line in the targetfiles field:

ENTRYPOINT /home/torizon/entrypoint-torizon.sh && stdbuf -oL -eL gdbserver 0.0.0.0:6502 /#%application.appname%#/#%application.exename%# #%application.appargs%#

We tested this here by creating a directory before starting the debug session.

Best regards,
Lucas Akira

Dear Lucas,

Thanks for the feedback regarding the ENTRYPOINT.

Regards,
Fabian

Hi @fdortu ,

Did this solve your issue? 🙂

Best Regards
Kevin

Hi Kevin,

Sorry to answer so late. I am only now testing setting an ENTRYPOINT in application.targetfiles.

Unfortunately, the command you propose gives the following VSCode warning (popup):

Unable to start debugging. Unexpected GDB output from command "-target-select remote 172.16.15.19:-1". 172.16.15.19:-1: No such file or directory

I don’t understand this message, but I can guess why it does not work: in the debug.dockerfile template, application.targetfiles appears before USER and WORKDIR. This will probably have an impact on how my application runs (permissions, for instance), won’t it?

FYI, here is the template file that I use: C:\Users\dortu2\.vscode\extensions\toradex.torizon-early-access-1.6.4\moses-linux\platforms\arm64v8-qt5-vivante-no-ssh_bullseye\debug.dockerfile

...
#%application.targetfiles%#

USER #%application.username%#

WORKDIR /#%application.appname%#

CMD stdbuf -oL -eL gdbserver 0.0.0.0:6502 /#%application.appname%#/#%application.exename%# #%application.appargs%#

I would be tempted to edit the template by moving the targetfiles line below the WORKDIR line, but I know from experience that the template file is not re-read if modified. Or maybe there is a way to do it that I don’t know about?

Best regards,
Fabian

Hi @fdortu ,

I did some tests here and I was able to reproduce this issue. This can happen if your script file isn’t executable.

On your host machine, set your .sh file as executable by running the following in a terminal:

chmod +x entrypoint-torizon.sh

Then on VSCode you should rebuild the debug container by pressing F1 → Torizon: Build debug container before trying to debug with F5. Keep targetfiles unchanged:

ENTRYPOINT /home/torizon/entrypoint-torizon.sh && stdbuf -oL -eL gdbserver 0.0.0.0:6502 /#%application.appname%#/#%application.exename%# #%application.appargs%#

Let me know if this works.

Best regards,
Lucas Akira


Yes, this was the issue: after changing the permission to executable, there is no error anymore and entrypoint-torizon.sh gets executed.

However, there is still an issue: even though systemd-udevd and ueyeusbdrc are running in my debug container (ps -A), the camera does not get recognized.

In the meantime I solved the issue by calling the commands from within my C code:

system("sudo sh -c \"/lib/systemd/systemd-udevd -d && sleep 3 && /etc/init.d/ueyeusbdrc start\"");

The key point was to call the command with sudo sh -c. Otherwise it does not work.

Translating that to targetfiles, I guess the following should work (I have not tried it yet):

ENTRYPOINT sudo sh -c "/home/torizon/entrypoint-torizon.sh" && stdbuf -oL -eL gdbserver 0.0.0.0:6502 /#%application.appname%#/#%application.exename%# #%application.appargs%#

Hi @fdortu !

Thanks for the feedback!

From your last message, I understood that your problem was solved, right?

If I understood correctly, can you please mark the related message as the solution?

Thanks!

Best regards,
Henrique

I marked the last message as the solution!

However, this solution is not 100% satisfactory, because targetfiles is also used in the Dockerfile for building the release container, which breaks running the release container since it also tries to start gdbserver.

So I have to remove/add the content of targetfiles depending on which release/debug container is built, which is a bit annoying.

Hi @fdortu ,

You can apply configuration properties such as targetfiles to a specific type of build. By default the config options on the extension are applied to a common configuration for both debug and release.

To apply a property exclusively on debug builds, right-click on appconfig_N, where N is an integer (usually 0), then select Configuration: Debug.

[Screenshot: selecting Configuration: Debug for appconfig_0 in VS Code]

After that, just add targetfiles as a new property and put its value as before.

Do the same with the release build, but change the value of targetfiles to something without gdbserver, similar to this:

ENTRYPOINT sudo sh -c "/home/torizon/entrypoint-torizon.sh" &&  /#%application.appname%#/#%application.exename%# #%application.appargs%#

Don’t forget to rebuild each container with:

  • F1 → Torizon: Build debug container
  • F1 → Torizon: Build release container for the application

Hope this helps.

Best regards,
Lucas Akira

Great, I did not know about this feature!

However, when I select a configuration, all configurations switch back to common, and I need to reselect debug or release. Is this the expected behaviour?

I also tried to use the same configuration (appconfig_0), specifying common, debug and release within the same config.yaml file:

...
    common:
        appargs: '`cat appargs.txt`'
        appname: phosddvr
        arg: ''
        buildcommands: 

        ....[lot of things]...

    debug:
        arg: 'ARG SSHUSERNAME=#%application.username%#

            '
        targetfiles: 'ENTRYPOINT sudo sh -c "/home/torizon/entrypoint-torizon.sh"
            && stdbuf -oL -eL gdbserver 0.0.0.0:6502 /#%application.appname%#/#%application.exename%#
            #%application.appargs%#'
    release:
        targetcommands: CMD sudo sh -c "/home/torizon/entrypoint-torizon.sh"

...

This works, but I have the impression that the image needs to be fully rebuilt each time, even though nothing changed, which takes a long time.

Best regards,
Fabian

Hi @fdortu ,

However, when I select a configuration, all configurations switch back to common, and I need to reselect debug or release. Is this the expected behaviour?

I did some tests and this also happens on my side, so I believe this is the expected behavior, although not ideal. I’ll ask the extension team for more details.

This works, but I have the impression that the image needs to be fully rebuilt each time, even though nothing changed, which takes a long time.

That’s strange. If no configuration was changed since the last build everything should use cached layers, which doesn’t take long. This is what I encountered when testing. Does this happen on a new template project?

Best regards,
Lucas Akira