Several questions about torizon-minimal and offline updates

Hello,

I have several general questions regarding the Yocto build of Torizon and offline updates:

  1. In OSTree | Toradex Developer Center it says

“Whenever Torizon OS is built using Yocto/OpenEmbedded’s bitbake, an OSTree repository is automatically initialized and a new OSTree commit is created. For every following build, a new commit is created where only changed files are added to the repository.”

What happens to the OSTree repo when the build environment is reset?
How is an installation and rollback possible on the device if the repo of the new package is completely reinitialized and has only one commit?
Do I have to save the repo before resetting the build environment? If yes: how do I import an existing OSTree repo when the build environment is freshly set up?
Or is the OSTree repo restored from the layer revision information (Custom meta layers, recipes and images in Yocto Project (hello-world examples) | Toradex Developer Center)?

  2. After uploading a new package to the Torizon Cloud, a new lockbox with this new update package must be defined manually for offline updates (How to Use Secure Offline Updates with Torizon OS | Toradex Developer Center).
    Is it possible to use TorizonCore-Builder to define the lockbox instead, so that a Yocto build and an offline update image can be created fully automatically using a script?

  3. It is possible that during the Yocto build the new package is automatically uploaded to the Torizon Cloud (Build Torizon OS from Source With Yocto Project/OpenEmbedded | Toradex Developer Center).
    Is it also possible to set the name and version number of the package, as is supported by the TorizonCore builder (Signing and Pushing Torizon OS Packages to Torizon Cloud | Toradex Developer Center)?

Best regards,
Markus

Hi, I’m the Toradex AI Assistant. I’m in early beta, and I would appreciate your feedback.
The Toradex Support team will follow up with you shortly!

Hello Markus,

  1. When the Yocto build environment is reset, the OSTree repository initialized during the build process does not automatically persist. If you reset your build environment without backing up the OSTree repository, you will lose the history of commits, making it impossible to rollback to those previous states from the device. To preserve the ability for installation and rollback, you should indeed save the OSTree repository before resetting the build environment. To import an existing OSTree repo when setting up a new build environment, you would typically copy the saved repository back into the appropriate location within your new build environment. This process is not directly related to the layer revision information in Yocto but rather involves manually handling the OSTree repository files.

  2. Using TorizonCore Builder to define the lockbox for offline updates is not directly supported as described in the context you’ve provided. The TorizonCore Builder tool is primarily designed for tasks such as customizing the TorizonCore image, including adding or removing software, configuring the device tree, and handling container images. For integrating the creation of a lockbox into an automated process, you might need to develop a custom script or process that utilizes the TorizonCore Builder and other tools to package the update and create the necessary lockbox configuration as part of your CI/CD pipeline.

  3. When pushing a new package to the Torizon Cloud as part of a Yocto build, setting the name and version number of the package explicitly through automation is a feature that would typically be handled by the CI/CD integration tools and scripts you use. While TorizonCore Builder provides capabilities for signing and managing Torizon OS packages, including specifying version information, integrating this process with Yocto builds and Torizon Cloud uploads would require a custom automation script. This script would use TorizonCore Builder commands in conjunction with your Yocto build process to set the package name and version, sign the package, and then push it to the Torizon Cloud.

Best regards, Toradex AI Assistant

Please also check if these links can help you solve your problem:

OSTree | Toradex Developer Center.

Greetings @Mowlwurf,

Let me try and explain what I can.

What happens to the OSTree repo when the build environment is reset?
How is an installation and rollback possible on the device if the repo of the new package is completely reinitialized and has only one commit?
Do I have to save the repo before resetting the build environment? If yes: how do I import an existing OSTree repo when the build environment is freshly set up?
Or is the OSTree repo restored from the layer revision information (Custom meta layers, recipes and images in Yocto Project (hello-world examples) | Toradex Developer Center)?

When you do a build with OSTree enabled, there should be an ostree repo in your deploy directory, named ostree_repo by default. When you do a build from scratch, it will initialize this repo for the first time and create a ref/branch and commit in this repo corresponding to the Yocto build you just did. If you delete the deploy directory, this repo will be remade from scratch. If you do another build without deleting it, it will re-use this repo as much as possible, meaning subsequent branches and commits will be created in this repo. If your second build uses the same branch name as the first build, some contents may get overwritten, depending on the case.
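As a quick illustration, you can inspect this repo with the ostree CLI. The deploy path and ref name below are just examples of a typical meta-updater layout; adjust them to your own build directory and MACHINE:

```sh
# Example path; adjust to your build directory and MACHINE
REPO=build/tmp/deploy/images/verdin-imx8mp/ostree_repo

# List the refs/branches created by your builds
ostree --repo="$REPO" refs

# Show the commit history behind one ref (ref name is an example)
ostree --repo="$REPO" log torizon/torizon-core-docker
```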

All that said, you shouldn’t consider this ostree repo as “stable” if you want content to be retained long-term. What is proper is to instead transfer the content of the ostree repo from your build into a “master” ostree repo of sorts. This is actually what we do ourselves with our own CI/CD. When a build is first done we run some tests on the produced ostree content from that build to make sure it passes some basic sanity checks. If all is well, then we promote the ostree content from that build to a “master” ostree repo that contains all the ostree information accumulated from all our builds. We actually publish this master ostree repo here: Index of /ostree
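To make that promotion step concrete, here is a minimal sketch of what such a step could look like; the paths and ref name are assumptions, and `ostree pull-local` copies only the objects the destination repo does not already have:

```sh
# Hypothetical paths; adjust to your setup
BUILD_REPO=build/tmp/deploy/images/verdin-imx8mp/ostree_repo
MASTER_REPO=/srv/ostree/master-repo
REF=torizon/torizon-core-docker

# One-time initialization of the long-lived "master" repo
[ -d "$MASTER_REPO" ] || ostree --repo="$MASTER_REPO" init --mode=archive

# Promote the build's commit(s); only new objects get copied over
ostree --repo="$MASTER_REPO" pull-local "$BUILD_REPO" "$REF"

# Regenerate the summary file so clients see the updated refs
ostree --repo="$MASTER_REPO" summary -u
```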

For a more structured approach to ostree repo management in your buildsystem, I would recommend this article from the ostree documentation: Writing a buildsystem and managing repositories | ostreedev/ostree

The best practices in this article are what we and many other distributions that use ostree follow.

After uploading a new package to the Torizon Cloud, a new lockbox with this new update package must be defined manually for offline updates (How to Use Secure Offline Updates with Torizon OS | Toradex Developer Center).
Is it possible to use TorizonCore-Builder to define the lockbox instead, so that a Yocto build and an offline update image can be created fully automatically using a script?

TorizonCore Builder can only download an already defined Lockbox. Defining Lockboxes happens server-side, either via the Web UI or via our API. If you want scriptable behavior, I would suggest taking a look at the API for our cloud services: Torizon Cloud API (Beta) | Toradex Developer Center

This will give you more flexibility in your CI/CD scripting/pipeline. A decent number of our other customers actually do just this in their own CI/CD pipelines, using the API to create/update lockboxes based on their builds.
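As a very rough sketch, such a pipeline step usually boils down to an authenticated HTTP call. Note that both URLs, the endpoint path, and the JSON fields below are made-up placeholders, not real routes; please take the exact endpoints and payloads from the API reference linked above:

```sh
# Placeholder sketch only: the token URL, endpoint path, and JSON fields
# must all be taken from the official Torizon Cloud API documentation.
TOKEN=$(curl -s -X POST "https://example.torizon.io/oauth/token" \
  -d "grant_type=client_credentials" \
  -u "$CLIENT_ID:$CLIENT_SECRET" | jq -r '.access_token')

curl -s -X POST "https://example.torizon.io/api/lockboxes" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "my-lockbox", "packages": ["my-os-package"]}'
```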

It is possible that during the Yocto build the new package is automatically uploaded to the Torizon Cloud (Build Torizon OS from Source With Yocto Project/OpenEmbedded | Toradex Developer Center).
Is it also possible to set the name and version number of the package, as is supported by the TorizonCore builder (Signing and Pushing Torizon OS Packages to Torizon Cloud | Toradex Developer Center)?

Yes, it actually is. The functionality for pushing the ostree from a Yocto build doesn’t come from us; it comes from this bbclass in meta-updater: meta-updater/classes/image_types_ostree.bbclass at kirkstone · uptane/meta-updater · GitHub

Notice that in the specific function I linked in that bbclass there are two variables controlling the package name and version used during the package upload: GARAGE_TARGET_VERSION and GARAGE_TARGET_NAME, for the package version and package name respectively.

By default, GARAGE_TARGET_VERSION is set to the hash of the OSTree commit generated in that build, and GARAGE_TARGET_NAME is set to the OSTree ref/branch name used during that build. These variables can of course be overridden by the user. One word of caution, though: if a build reuses an existing name/version pair, it will overwrite the matching package on the server side. For example, say you do one build with package name “foo” and package version “bar”. If you do another build with the same name and the same version, the previous foo-bar package you uploaded will be overwritten by this new upload. If you didn’t save the previous build, this can be problematic, as you can imagine. This is the same behavior as when pushing packages with TorizonCore Builder; I just wanted to mention it for caution’s sake.
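For example, a minimal local.conf override could look like this (the values here are placeholders, of course):

```
# conf/local.conf -- example values only
GARAGE_TARGET_NAME = "my-custom-torizon-os"
GARAGE_TARGET_VERSION = "1.2.3"
```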

I know I just typed out quite a long response, but I hope it was helpful for your understanding of all this.

Best Regards,
Jeremias

Hello @jeremias.tx,

thank you very much for your detailed answer.

So do I understand it correctly: If I create an updated image from scratch or with a reset build directory, I have to restore the ostree_repo directory first to have the possibility of a rollback on my device, right?

I have a few other questions for the updates:

  1. As far as I understand it, the check for updates and their installation run in the background without user interaction. Are there any hooks in this process where user-defined scripts can run? For example, to get a user confirmation before starting an update?

  2. Is it possible to get progress information about an ongoing update installation?

Best regards,
Markus

So do I understand it correctly: If I create an updated image from scratch or with a reset build directory, I have to restore the ostree_repo directory first to have the possibility of a rollback on my device, right?

Okay wait a minute. It sounds like you’re conflating two separate things now. The ostree_repo from your build has no effect on the rollback of your device. When you do an update on your device it downloads the ostree for that version from the server. The device will retain the previous version’s ostree deployment as a rollback, locally on that device. The ostree_repo on your local Yocto build does not matter at all for this process.
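You can see this for yourself on the device: ostree can list the local deployments, where typically the currently booted one is marked with an asterisk and the previous one is kept around as the rollback candidate:

```sh
# Run on the device (as root); shows the booted and rollback deployments
sudo ostree admin status
```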

As far as I understand it, the check for updates and their installation run in the background without user interaction. Are there any hooks in this process where user-defined scripts can run? For example, to get a user confirmation before starting an update?

The update process itself does not have any hooks so to speak. We do have something called “Greenboot” on the system: Aktualizr - Modifying the Settings of Torizon Update Client | Toradex Developer Center

This allows you to add user-defined update checks to determine whether your update is successful, or whether to initiate a rollback. Are you looking for something like this? Or maybe you could give me an example of what you would want to do here and why.
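For reference, a greenboot health check is just a script that exits non-zero on failure. This is a minimal sketch assuming the upstream greenboot directory layout and a hypothetical service name; please verify the exact paths for Torizon OS in the article linked above:

```sh
#!/bin/sh
# Example health check. The directory /etc/greenboot/check/required.d/
# is the upstream greenboot convention and may differ on Torizon OS.
# my-application.service is a hypothetical unit name.
# Exit 0 = check passed; non-zero = update considered failed (rollback).

if ! systemctl is-active --quiet my-application.service; then
    echo "my-application.service is not running" >&2
    exit 1
fi
exit 0
```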

Is it possible to get progress information about an ongoing update installation?

Progress in what sense? You could look at the logs of the update client on the device, Aktualizr, to see where in the update process it is. Or you could look at the web UI, or even query the server API to see whether an update is still ongoing or not.
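For example, on the device you can follow the client’s journal while an update is running. The exact systemd unit name is an assumption here and may vary between Torizon OS versions, so check with `systemctl list-units` first:

```sh
# Follow the update client's logs live; the unit name glob is an assumption
sudo journalctl -f -u 'aktualizr*'
```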

Again, if you could describe what your exact use-case/goal is here then I could have a better understanding of what your expectations are.

Best Regards,
Jeremias

Okay wait a minute. It sounds like you’re conflating two separate things now. The ostree_repo from your build has no effect on the rollback of your device. When you do an update on your device it downloads the ostree for that version from the server. The device will retain the previous version’s ostree deployment as a rollback, locally on that device. The ostree_repo on your local Yocto build does not matter at all for this process.

I’m quite new to OSTree and aktualizr so I got that mixed up.

So the OSTree repo of a Yocto build is independent of the update and rollback process on the device. For every update package I want to deploy, it is generally possible to use a new build environment and therefore a new repo.
One disadvantage is that the device would then have to download the entire repo from the server each time, and not just the differences.
But when I use offline updates, this disadvantage doesn’t matter, because the lockbox always contains the complete repo.
Another disadvantage could be that a completely new rootfs is written during an update due to the new repo, rather than just the differences, which would significantly increase the storage requirements on the device.

Are my thoughts correct? Are there other disadvantages or problems when using a new repo for every build?

Again, if you could describe what your exact use-case/goal is here then I could have a better understanding of what your expectations are.

Perhaps I still have the wrong idea of how an update works in detail.
In the meantime, I found this thread, which essentially answers the question about the hooks :slight_smile: : Selectively trigger aktualizr for offline updates - check and install subcommands
And for my question about the update progress, I’ll have to take a closer look at the logs from aktualizr.

Best regards,
Markus

One disadvantage is that the device would then have to download the entire repo from the server each time, and not just the differences.

Not necessarily, what makes you think that? When you upload your ostree from your build to our platform it gets put into a kind of “master” ostree repo in the cloud. This master ostree repo will contain all your ostree commits that get pushed. When an update on the device occurs ostree should only download the objects that differ between what it has and what is in the cloud. Now if the filesystems between two versions are nearly completely different, then okay you’re probably downloading a lot of objects due to the difference. But this is unlikely if you’re doing more incremental updates.
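You can check this locally as a sanity test: given two commits in the same repo, ostree can list exactly which files were added, modified, or deleted between them, which roughly corresponds to the objects a device would actually fetch. The repo path and ref below are examples, and $OLD_COMMIT/$NEW_COMMIT stand for two hashes picked from the log output:

```sh
# Example path and ref; adjust to your build
REPO=build/tmp/deploy/images/verdin-imx8mp/ostree_repo

# List commits on the ref, then pick two hashes to compare
ostree --repo="$REPO" log torizon/torizon-core-docker

# Prints one line per changed file: A (added), M (modified), D (deleted)
ostree --repo="$REPO" diff "$OLD_COMMIT" "$NEW_COMMIT"
```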

But when I use offline updates, this disadvantage doesn’t matter, because the lockbox always contains the complete repo.

This is correct, because in an offline update you don’t know what objects are needed on the device until the update actually happens. Therefore the entire ostree repo is needed ahead of time just in case.

Another disadvantage could be that a completely new rootfs is written during an update due to the new repo, rather than just the differences, which would significantly increase the storage requirements on the device.

As I said, this shouldn’t happen. The only files that would get changed in an update are the files that actually were changed.

Are my thoughts correct? Are there other disadvantages or problems when using a new repo for every build?

Where exactly are you getting these concerns and ideas from? Any OSTree documentation or article would say otherwise. To be specific: in our own nightly builds we literally generate a new ostree repo per build, every night. Those commits then just get promoted to our master ostree repo via the process I described earlier.

Perhaps I still have the wrong idea of how an update works in detail.

I would still like to understand what exactly you’re trying to achieve here. Given the questions you’re asking here I have to wonder what your use-case is here.

Best Regards,
Jeremias

Where exactly are you getting these concerns and ideas from?

These are my own thoughts and conclusions, but now I’m confused.
You wrote earlier that the ostree from my build isn’t considered to be “stable” and I have to transfer it into a master ostree repo by myself. Now you say:

When you upload your ostree from your build to our platform it gets put into a kind of “master” ostree repo in the cloud.

So do I have to worry about the ostree repo when building an image for existing devices but from scratch? Or is it all covered by the Torizon Cloud?
Perhaps we are talking at cross purposes.

I would still like to understand what exactly you’re trying to achieve here. Given the questions you’re asking here I have to wonder what your use-case is here.

We have a Yocto image based on toradex-reference-multimedia with a working offline update process using Mender. For several reasons we want to migrate to Torizon.
Porting our image to Torizon was successful but the implementation of the offline update process is pending.
So on the one hand, I’m trying to find out what I have to consider when (re)building the image and the ostree repo.
On the other hand, I am also thinking about how the start and the process of an offline update can be implemented according to our ideas.
We don’t want aktualizr to install an update as a background service and restart the device without confirmation from the user.
What we want is for the customer to start an offline update manually via the GUI, see the installation progress (if necessary and possible), and confirm the successful installation after a reboot.
I now have a rough idea of how I can implement this. Only the necessary handling of the ostree repo is not quite clear yet.

Thank you for your patience with me :slight_smile:
Markus

So do I have to worry about the ostree repo when building an image for existing devices but from scratch? Or is it all covered by the Torizon Cloud?

Well, I said what I said originally because your use-case wasn’t very clear to me at the time. That’s why I’ve been asking you a lot of questions to try and understand what you want to achieve. Keep in mind, we’re not mind readers here; sometimes it can be hard to tell what someone is trying to do on first impression. At first I thought you wanted to manage your own ostree repo outside of our cloud services.

Now that you’ve further explained your use-case to me I think I understand what is going on here. In summary:

  • You’re using Torizon OS and want to make use of offline updates.
    • By the way, I’ve been meaning to ask: why are you building Torizon OS with Yocto? This is fine, but many of our customers see the benefit of Torizon OS precisely in the fact that they are not forced to use Yocto in many cases, so I’m curious why you’re using it. If it’s because you’re comfortable with Yocto and want to use it, then that’s fine.
  • You want more manual control over the update process rather than just letting it go on in the background automatically.
    • This is actually a somewhat common request from other customers as well. We have it in our backlog to try and improve this use-case, though that will of course take time. In the meantime, other customers have created alternative solutions, like the one in that other thread you linked previously.

Now that I understand this about your use-case, I can say confidently that you do not have to worry about the ostree repo at all. If you are using our cloud update services (including offline updates), the only thing you need to worry about when it comes to ostree management is pushing it to our server as an update package that you can use later. Once you’ve pushed it, it doesn’t really matter what you do with it afterwards. This is how 99% of our customers use our systems: they push their OS update packages to the server, then perform updates; that’s it. There are very few cases where you ever have to worry about managing your own ostree repo and the nuances that come with it.
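For completeness, a scripted push with an explicit name and version typically looks something like the sketch below; the flag names follow the signing-and-pushing article linked earlier, and the ref, package name, and version here are placeholders:

```sh
# Example invocation; ref, package name, and version are placeholders
torizoncore-builder platform push \
  --credentials credentials.zip \
  --package-name "my-custom-torizon-os" \
  --package-version "1.2.3" \
  torizon/torizon-core-docker
```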

Best Regards,
Jeremias

I apologize for the confusion. I’ve already spoken to several other colleagues at Toradex about our use case recently, so I forgot to explain it in more detail here too.

  • You’re using Torizon OS and want to make use of offline updates.
    […]
  • You want more manual control over the update process rather than just letting it go on in the background automatically.
    […]

Yes and yes.

We have been working with the tdxref-image and Yocto for several years now and are on the home straight with our product development.
However, we did not necessarily make the move from Mender to Torizon as an update solution voluntarily, but because Mender’s long-term support for the Toradex BSP is too uncertain for us.
Since Docker and container application development are completely new territory for us, we don’t have the time to familiarize ourselves with this topic so shortly before the market launch and to change the way the entire team develops the application.
That’s why we thought it would be better to port our Yocto image and update procedure to Torizon (which is working quite well so far). This way, our colleagues can keep the familiar development environment for the application for the time being.
In the medium term, however, we will certainly look into Docker and containers and, if necessary, port our application as well. Perhaps we can then save ourselves the effort of using a custom Torizon OS. We’ll see …

Thank you again for patiently answering my questions :+1:

Best regards,
Markus

Thank you for explaining your use-case in detail. It makes more sense now where you’re coming from, and your process makes sense to me. It’s understandable that you want to ease into this new solution while using a familiar process (Yocto). As I said, I have no issue with whether you choose to use Yocto or not; either method can get you to your goal.

In any case, glad I was able to help clear up some of your misunderstandings.

Best Regards,
Jeremias