Systemd service and OSTree

Hi all,

I’m using Torizon Minimal and have an application running as a systemd service with ExecStart=/usr/bin/my-application.

When developing, I want to copy and run my debug build instead of the release binary at /usr/bin/. I now wonder if and how that is possible. I want the debug build to persist over reboots, but after an OS update, I want my release build (the one at /usr/bin/) to run again.
Has anyone ever tried something like this? Any input is welcome.

Cheers
Tim

Hi, I’m the Toradex AI Assistant.

I might make mistakes or occasionally misunderstand you. Our support team will step in if needed.


Running Debug Builds with Systemd on Torizon OS

For your use case on Torizon OS, there are several approaches to handle debug vs release builds with systemd services that persist through reboots but revert on OS updates.

Recommended Approach: Container-Based Development

Since you’re using Torizon OS, the recommended approach is to use containerized applications rather than installing directly to /usr/bin/ (1). On Torizon OS, applications are packaged in containers and run by Docker, which is already set to autorun (1).

For development workflows, you can:

  1. Use separate container images for debug and release builds (see the example after this list)
  2. Deploy debug containers that persist until you update the OS image
  3. Leverage the Torizon IDE Extension for seamless debug/release switching
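
As a minimal sketch of that workflow (the registry and image names below are placeholders, not something Torizon provides):

# During development, run a hypothetical debug image
docker run -d --name my-application-debug --restart unless-stopped \
    myregistry.example.com/my-application:debug

# When done, remove it and go back to the release image
docker stop my-application-debug && docker rm my-application-debug
docker run -d --name my-application --restart unless-stopped \
    myregistry.example.com/my-application:release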

Systemd Service Configuration

If you need to continue with the systemd approach, you can create a service configuration that checks for a debug build first (1). Here’s an example service file structure:

[Unit]
Description=My Application Service
After=multi-user.target

[Service]
Type=simple
ExecStart=/usr/bin/my-application

[Install]
WantedBy=multi-user.target

(1)

Managing Debug vs Release Builds

For your specific requirement, consider these strategies:

  1. Conditional ExecStart: Modify your service to check for debug builds in a persistent location (like /var/ or /home/)
  2. Overlay approach: Place debug builds in a location that survives reboots but gets reset on OS updates
  3. Script wrapper: Create a wrapper script that determines which binary to execute (sketched below)
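
As a rough sketch of options 1 and 3 combined (the wrapper path, debug location, and binary names are only examples):

#!/bin/sh
# Example wrapper, e.g. installed as /usr/bin/my-application-launcher.
# Prefer a debug binary from a writable, persistent location if one exists;
# otherwise fall back to the release binary shipped in the OS image.
DEBUG_BIN=/var/debug-applications/my-application
if [ -x "$DEBUG_BIN" ]; then
    exec "$DEBUG_BIN"
else
    exec /usr/bin/my-application
fi

The service would then use ExecStart=/usr/bin/my-application-launcher instead of pointing ExecStart directly at the release binary.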

Using Systemctl Commands

You can manage your service using these systemctl commands (1):

# systemctl --system daemon-reload
# systemctl status <service_name>.service
# systemctl start <service_name>.service
# systemctl stop <service_name>.service
# systemctl enable <service_name>.service
# systemctl disable <service_name>.service

(1)

Development Workflow Recommendation

For optimal development experience on Torizon OS, consider using the Torizon IDE Extension which provides:

  • Debug container images that can be easily deployed (2)
  • Release container builds for production (3)
  • Automated container management that handles the debug/release switching

This approach aligns better with Torizon OS’s container-first architecture and provides the persistence and update behavior you’re looking for.

Best regards, Toradex AI Assistant.


Did you like my answer? :+1: :-1:

If I answered your question, please tick the “Solution” checkbox.

For follow-up questions to me, please use @ToradexAI in your post.

Greetings @TimSiegrist,

Not sure I fully understand your use-case. I don’t really know how you have your debug and deployment process set up.

But could you not just have a flag file or something: if it’s present, run the debug application; if not, run the release one?

It sounds like that’s all you really need, right? Just a flag or some mechanism that tells the system to run debug vs. release. Then you just need to make sure that your OS update either doesn’t contain the debug flag or clears it, so the release build runs again.

Just brainstorming random ideas now, but you could have a script or service that is part of your OS update. After the OS update, if the debug flag is present on the system, it removes the flag, stops the debug application if it’s already running, and then starts the release application. Something like this.
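
Roughly, as a sketch (the flag path and service name here are made up for illustration):

#!/bin/sh
# Hypothetical cleanup run once after an OS update
FLAG=/var/my-application.debug
if [ -f "$FLAG" ]; then
    rm -f "$FLAG"
    # Restart so the service goes back to the release binary
    systemctl restart my-application.service
fi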

Best Regards,
Jeremias

Hi @jeremias.tx
So to clarify my use-case:
I have a released application built in my Yocto build, stored at /usr/bin/. This is the one that is also deployed to our devices in the field as part of an OS update. We do not want to have it as a separate subsystem.

But while developing, we improve and expand this application. To test it on the device (where we usually have the whole released image), we want to stop the released application and run a debug version with the changes.

We will have to stop the released application, copy the debug build to somewhere like /var/debug-applications/ (/usr/bin/ is read-only), and then start it.

Now with the idea of a flag file, I guess the systemd service can check if it exists and then start the debug version. But where should this flag file be stored? If it is in /var/, an OS update won’t change it, and I can’t put it in /usr/bin/ because it’s read-only (as far as I understand).
So the file needs to be removed during an OS update, but how can I do this? Is there some sort of post-deploy script I can execute?
(I guess we could also check directly whether the application exists at /var/debug-applications/, but the issue is the same.)

I also thought of editing the service with an override (when deploying the debug version), but these changes are put into /etc/, right? And there they would persist over an OS update because of the 3-way merge?
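
Just to illustrate what I mean by an override (assuming the service is called my-application.service), something like:

# /etc/systemd/system/my-application.service.d/debug.conf
[Service]
ExecStart=
ExecStart=/var/debug-applications/my-application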

Best regards,
Tim

Wait, there’s one crucial detail I’m missing.

How exactly is it decided when the debug application will be deployed and started? Is this a manual process, or is it also triggered by an OS update?

If it’s triggered by an OS update, can the flag just be changed via the OS update? For example:

  • Base OS with no debug flag and the release app running
  • OS update containing the debug flag and the debug app running
  • Another OS update without the debug flag and the release app running again

The flag could be in /etc, and if the flag is part of the OSTree image itself, it should be overwritten on each update instead of being 3-way merged.

Now if you’re not deploying the debug build via an OS update then the story is different, but this depends on how exactly you’re deciding on and deploying the debug app.

Otherwise you would need something like this:

  • Script/service that runs on boot before the systemd service that triggers your application.
  • This script/service will only execute if the current boot was caused by a reboot from an OS update
    • In theory this can be detected by checking the upgrade_available variable in U-Boot, as detailed here: Torizon Updates Technical Overview | Toradex Developer Center
    • Make sure not to trigger if there was a rollback (i.e., the rollback U-Boot variable is set). Or maybe you still want to execute even on a rollback; I guess that depends on you.
  • When this script/service triggers, it removes the debug flag if it exists on the system.
    • If this runs before the systemd service that starts your application, then that service can correctly choose the right application version to start.
    • Make sure this service/script also runs before the systemd service for aktualizr-torizon, as aktualizr-torizon will clear the upgrade_available variable on a successful update, which would make it hard for the script to determine whether the current boot was triggered by an update. A rough sketch of such a service follows below the list.
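
A rough, untested sketch of such a service (the unit name, script path, flag location, and exact ordering are assumptions on my part):

# /etc/systemd/system/clear-debug-flag.service
[Unit]
Description=Clear the debug flag on the first boot after an OS update
Before=my-application.service aktualizr-torizon.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/clear-debug-flag.sh

[Install]
WantedBy=multi-user.target

And the script it calls could check the U-Boot variable, for example:

#!/bin/sh
# /usr/bin/clear-debug-flag.sh
# Only act when this boot follows an OS update: upgrade_available is still
# set because this unit is ordered before aktualizr-torizon, which clears it.
# You may also want to check the rollback variable here, as mentioned above.
if fw_printenv upgrade_available 2>/dev/null | grep -q '=1$'; then
    rm -f /var/my-application.debug
fi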

I think something like this would work. Again, I have not rigorously tested it, but fundamentally something like this could be a path forward.

Best Regards,
Jeremias

Hi @jeremias.tx

The idea is to do this manually (or, to be precise, with a dedicated VS Code task), so not via an OS update (we only want to change the executable of one application at a time).

While the idea with the flag probably works, I’m still wondering if there is a cleaner way to do this. By “cleaner” I mainly mean that I don’t want to have development-related things in my release image (e.g. the check for the debug flag, which will always return false in the field). Maybe as background: we used to use an A/B system, just replaced the executable, and updated the image again when done developing.

Maybe OSTree has something useful too? Maybe a local commit, or ostree admin unlock --hotfix? Although I’m not sure that’s a cleaner way.
Or maybe systemd has some service magic for debugging which we can apply?

Best Regards,
Tim

Hi @TimSiegrist

I would definitely not recommend you use ostree admin unlock unless you are planning to wipe the entire system and reinstall using the Toradex Easy Installer. It’s just too easy to get the board into an unmanageable state.

The suggestion from @jeremias.tx is likely the best solution. Any other solution I can think of has complications due to not knowing whether you are on the first boot after a new OTA upgrade. Jeremias’s discussion of those complications is pretty thorough.

Drew

By “cleaner” I mainly mean that I don’t want to have development-related things in my release image (e.g. the check for the debug flag, which will always return false in the field).

One question I have is: why are you trying to do this in a release image in the first place? As you said, you don’t want any debugging artifacts/settings left over when the device is sent to the field.

After the debugging is done, wouldn’t it just be simpler to re-flash the device with the production image to “reset” it for release? You said this was for development purposes, right? So this would only be a handful of devices, I imagine.

Or, since the deployment of the debug application is manual as you said, couldn’t there be a script/VS Code task that can be run manually when development is done to put the device back into a release state? This way there would be no development-related things on the device’s system; it would just be managed externally.
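
For example, the “back to release” task could be as small as this (the device address, flag path, and service name are placeholders):

#!/bin/sh
# Hypothetical reset task, run from the development machine
DEVICE=torizon@my-device

ssh "$DEVICE" "sudo rm -f /var/my-application.debug"
ssh "$DEVICE" "sudo rm -rf /var/debug-applications"
ssh "$DEVICE" "sudo systemctl restart my-application.service"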

I guess the question is: why the specific requirement that it has to be an OS update that resets the device back into a release state? Especially when the action that puts the device into the debug state is manual and outside of the normal OTA update mechanisms. You’re trying to synchronize two processes that have no knowledge of one another.

The suggestion of using a flag in the filesystem would be one way to bridge the two processes, but if you don’t want to use this method then it gets more complicated, which I assume you don’t want either. Just thinking it through: since the OS update process knows nothing about what you did manually with VS Code, there has to be something on the system to act upon (i.e., a flag and scripts that act on that flag). Otherwise, how can the OS update “undo” something it doesn’t know about?

Maybe as background: we used to use an A/B system, just replaced the executable, and updated the image again when done developing.

You’re saying you used an A/B method in the past where one of the partitions was strictly for debug/development? Isn’t that just a more extreme version of what we’re suggesting here?

You say you don’t want any development-related things in the system, which would only amount to a flag and some small scripts. But it’s okay to sacrifice an entire partition just for development-related things?

Best Regards,
Jeremias

@jeremias.tx @drew.tx
For the first point, the main concern I had is that flashing the production image via Tezi is quite hard outside of production, because our electronics need to be accessed inside our devices. So an OS update to get back to the release state would be way easier.

But good point regarding the release image. I think we will need to go with separate release and development images anyway, so we can implement this flag-checking service only in the development version.
That way, we can probably keep the whole “debug” logic (e.g. the service to check/delete flags) in the development image. In the release image we just start our services as normal.

As for the A/B method, yes, you are right. If we don’t flash an image twice, we still have one partition with debug stuff on it. So not really clean.

So to sum it up, I’ll try having two separate images (release vs. dev), where the dev image contains all the debug-application handling. In the release build, we will just have our services in the OS, started without any knowledge of the debug versions.
I’ll have to try it out and work with it for a bit, but it sounds quite promising.

Thanks a lot for the suggestions and your patience!

Best regards,
Tim

Hi @TimSiegrist

Great to hear. I do believe having different images is the right approach, and you can easily switch between them using an OTA update, so hopefully that meets your needs. Please do let us know if you have any further thoughts on this.

Drew