NOTE: Tasks Summary: Attempted 5897 tasks of which 5875 didn't need to be rerun and all succeeded.
After an update and reboot initiated from Torizon Cloud, and likewise after the TorizonCore Builder deploy method, I get some errors:
[ 4.827157] systemd[1]: Failed to start Create System Users.
[FAILED] Failed to start Create System Users.
[FAILED] Failed to start RPC Bind.
[FAILED] Failed to start Network Time Synchronization.
[ 6.755448] debugfs: File 'Capture' in directory 'dapm' already present!
[FAILED] Failed to start Network Time Synchronization.
[FAILED] Failed to start Network Time Synchronization.
[FAILED] Failed to start Network Time Synchronization.
[FAILED] Failed to start Network Time Synchronization.
[FAILED] Failed to start Network Time Synchronization.
[ 7.171964] imx-audmix imx-audmix.0: failed to find SAI platform device
[FAILED] Failed to start D-Bus System Message Bus.
[FAILED] Failed to start Network Configuration.
[FAILED] Failed to start D-Bus System Message Bus.
[FAILED] Failed to start Network Configuration.
[FAILED] Failed to start D-Bus System Message Bus.
[FAILED] Failed to start Network Configuration.
[FAILED] Failed to start D-Bus System Message Bus.
[FAILED] Failed to start Network Configuration.
[FAILED] Failed to start D-Bus System Message Bus.
[FAILED] Failed to start D-Bus System Message Bus.
[FAILED] Failed to start Network Configuration.
[FAILED] Failed to start Network Configuration.
[FAILED] Failed to start Network Manager.
[DEPEND] Dependency failed for Network Manager Wait Online.
[FAILED] Failed to start Network Name Resolution.
[FAILED] Failed to start Network Name Resolution.
[FAILED] Failed to start Disk device monitor based on udisks2.
[FAILED] Failed to start Network Name Resolution.
[FAILED] Failed to start D-Bus System Message Bus.
[FAILED] Failed to start Network Name Resolution.
[FAILED] Failed to start Network Manager.
[FAILED] Failed to start Network Name Resolution.
[FAILED] Failed to start Network Name Resolution.
[FAILED] Failed to start Avahi mDNS/DNS-SD Stack.
[FAILED] Failed to start D-Bus System Message Bus.
[FAILED] Failed to start Network Manager.
[FAILED] Failed to start D-Bus System Message Bus.
[FAILED] Failed to start Network Manager.
[FAILED] Failed to start D-Bus System Message Bus.
[FAILED] Failed to start Network Manager.
[FAILED] Failed to listen on D-Bus System Message Bus Socket.
[DEPEND] Dependency failed for Network Manager.
[DEPEND] Dependency failed for User Login Management.
[ 11.109824] Bluetooth: hci0: unexpected event for opcode 0x0000
[FAILED] Failed to listen on D-Bus System Message Bus Socket.
[DEPEND] Dependency failed for Bluetooth service.
...
After the bitbake run there were some warnings:
WARNING: torizon-core-docker-1.0-r0 do_image_ostree: Data in 'var/lib' directory is not preserved by OSTree. Consider moving it under '/usr'
WARNING: torizon-core-docker-1.0-r0 do_image_ostree: Data in 'var' directory is not preserved by OSTree. Consider moving it under '/usr'
WARNING: torizon-core-docker-1.0-r0 do_image_ostree: Data in 'home' directory is not preserved by OSTree. Consider moving it under '/usr'
Hi, I’m the Toradex AI Assistant. I’m in early beta, and I would appreciate your feedback.
The Toradex Support team will follow up with you shortly!
Hello Gerald,
The errors you’re encountering after deploying the TorizonCore image and rebooting, such as failures to start system services, point to a system initialization issue. Possible causes include improper system configuration, missing or corrupt system files, or problems with the OSTree deployment.
Given the warnings about data in the var/lib, var, and home directories not being preserved by OSTree, there may be a misconfiguration in how the filesystem and data directories are set up in your build. OSTree keeps system content immutable between updates, with /usr as the primary location for system software, while writable state lives in /var and configuration in /etc. The warnings mean that any content your image ships in /var or /home will not survive OSTree updates; default content should instead be shipped under /usr and copied or linked into the writable locations at runtime.
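One common way to satisfy the OSTree warnings is to ship default content under /usr and have systemd-tmpfiles populate /var at boot. A minimal sketch (the file and directory names here are hypothetical, not from the original build):

```
# /usr/lib/tmpfiles.d/myapp.conf  (hypothetical example)
# C = copy the source into place at boot if the destination does not yet exist,
# so factory defaults from the immutable /usr tree end up in writable /var.
C /var/lib/myapp - - - - /usr/share/myapp-defaults
```

With this pattern the image itself keeps /var empty, and each deployment recreates the writable state from the versioned defaults in /usr.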
Here are some steps to troubleshoot and potentially resolve the issue:
1. Review OSTree Configuration: Ensure that your Yocto Project configuration for OSTree is correctly set up. The warnings suggest that dynamic data may be stored in locations that OSTree expects to be immutable. Review the OSTree and filesystem layout configuration in your Yocto recipes and configuration files.
2. System Services and D-Bus: The failure to start critical services like D-Bus and Network Manager indicates that the system’s basic IPC (Inter-Process Communication) mechanisms are not functioning correctly. This could be due to missing components or incorrect permissions. Ensure that all necessary system components are included in your build and that their configuration files are correctly placed and formatted.
3. Validate OSTree Deployment: After building the image, ensure that the OSTree deployment to the device completes without errors. An incomplete or corrupted OSTree deployment could cause the issues you’re experiencing. You might want to redeploy the image to rule out a one-time error during the deployment process.
4. Check for Missing Dependencies: The errors and warnings could also result from missing runtime dependencies. Ensure that all required packages and their dependencies are correctly included in your Yocto build, including any updates or changes to packages that your current build configuration may not account for.
5. Review System Journal: For more detailed diagnostics, review the system journal on the device.
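For that last step, a few standard systemd commands run on the device usually narrow things down quickly (a sketch; the unit name in the last command is just an example):

```
# List every unit that failed during this boot
systemctl --failed

# Show only error-level messages from the current boot
journalctl -b -p err

# Full log for one failing unit, e.g. D-Bus
journalctl -b -u dbus.service
```

The per-unit output in particular will usually say why the service could not start (missing binary, bad permissions, failed dependency), which is more actionable than the bare [FAILED] lines on the console.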
Please, also check if these links can help you solve your problem:
Regarding point 1:
I intentionally haven’t made any changes to the image myself.
Regarding points 2 to 5:
These don’t help me.
In Yocto, my first step was to bitbake tdx-reference-minimal-image.
The second was to change the repo init to torizoncore/default.xml
…
source setup-environment …
~/oe-core/build-torizon$ nano conf/local.conf (EULA & credentials)
WARNING: /home/<myusername>/oe-core/build-torizon/conf/../../layers/meta-updater/recipes-sota/aktualizr/aktualizr-shared-prov-creds.bb: Getting checksum for aktualizr-shared-prov-creds SRC_URI entry credentials.zip: file not found except in DL_DIR
Without automated deployment to Torizon Cloud, everything works perfectly.
Now my question is: what is the correct configuration if I want the image to be deployed to Torizon Cloud after it has been built?
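For context, the variables that meta-updater uses for this are set in local.conf. A hedged sketch (the path and hardware ID values below are placeholders, and credentials.zip is the provisioning file downloaded from your Torizon Cloud account):

```
# local.conf additions for pushing build results to Torizon Cloud
# (values are examples, adjust to your setup)
SOTA_PACKED_CREDENTIALS = "/home/<myusername>/credentials.zip"
SOTA_HARDWARE_ID = "apalis-imx8"
```

The aktualizr-shared-prov-creds warning quoted earlier indicates that the build could not find the credentials.zip file at the configured location, so double-checking this path is a good first step.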
Let me summarize to make sure I understand your situation so far.
You used Easy Installer to flash 6.6.1 to your module. You then did a Yocto build and are trying to push the results of the build to our cloud services. When you use our cloud services to update your device to this newly built image, it then fails with all the errors you listed. Is that more or less right?
After an update and reboot initiated from Torizon Cloud, and likewise after the TorizonCore Builder deploy method, I get some errors:
Instead of deploying the new image as an update what happens if you just flash the image with Easy Installer? Do you see the same issues on the device? This would help tell us if it’s the image build itself that is strange or the update/deploy process.
After the bitbake run there were some warnings:
These warnings are normal.
In Yocto, my first step was to bitbake tdx-reference-minimal-image.
The second was to change the repo init to torizoncore/default.xml
Is there a reason you built the reference image then moved to building Torizon OS?
and now I notice the following warning:
I don’t believe I’ve seen this warning before, but let me check with a fresh build.
I have to admit that I always got a bit confused by the instructions for ‘Build a Reference Image with Yocto Project’ and ‘Build Torizon OS With Yocto’, and never quite understood which instruction from which manual to use.
I was happy that the bitbake process worked.
Maybe I should start over and only start a bitbake from the torizoncore/default.xml repo. Is that even possible?
I would say/guess that the problems come from me making a mistake somewhere that I can’t reproduce.
If the same error occurs in the new attempt, I will follow this tip.
Okay, an update on the test I was running. I did the following:
1. Flashed an Apalis i.MX8 with the latest 6.6.1 Torizon OS release using Easy Installer.
2. Performed a Yocto build for Torizon OS using the latest default manifest. I also set SOTA_PACKED_CREDENTIALS and SOTA_HARDWARE_ID in my local.conf to push the results of the build to my OTA account. The build finished without issue.
3. Provisioned my device to our cloud services.
4. Pushed an update using the results of the Yocto build that were uploaded to my account.
Everything worked as expected without any issues. I didn’t see any of the errors on the device that you observed.
Given these results, I can only suspect that something in your build wasn’t configured right or went wrong somehow. Maybe when you started with the tdx-reference-minimal-image build and then switched to Torizon, something went wrong during the transition: not enough to make the build fail, but enough to affect the resulting image in some way. It’s hard to say exactly, though. I would recommend starting the build from scratch, doing only Torizon and not the reference image, to be sure. Just follow this article as documented: Build Torizon OS from Source With Yocto Project/OpenEmbedded | Toradex Developer Center
The reference image is only if you want to build our reference BSP which is distinct from Torizon OS.
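For reference, a clean Torizon OS build roughly follows these commands (a sketch only; the exact manifest URL and branch/tag come from the linked article and may differ for your release):

```
# Fetch the Torizon OS manifest (branch name here is an example)
mkdir torizon && cd torizon
repo init -u git://git.toradex.com/toradex-manifest.git \
          -b kirkstone-6.x.y -m torizoncore/default.xml
repo sync

# Set up the build environment (creates build-torizon/) and build
source setup-environment build-torizon
bitbake torizon-core-docker
```

Starting in a fresh directory like this avoids any leftover state from the earlier tdx-reference-minimal-image setup.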
Also if you don’t mind me asking, what is your goal here with building Torizon OS with Yocto?
We would like to use Torizon OS with fleet management.
However, we have developed two versions of carrier boards that do not have a number of hardware components like the Ixora board that we used as a template.
For example: no CAN, no Bluetooth, no WLAN, and different pin configurations.
And we need adjustments to an in-tree kernel module; in particular, we need to increase the buffer size of the RPMSG driver, which has been tested for us and works well.
In the future, we want to have the skills to fully understand the system and be able to intervene if necessary.
I’ve now started a new attempt from scratch… and I’m almost sure that it works now.
However, the instructions refer to a reference image.
In the Torizon OS project I don’t have a build/conf/local.conf
but a build-torizon/conf/local.conf, which is a slightly different path.
I started a new bitbake with modifications via a new layer (adding 2 patches).
There are a lot of warnings coming now…we’ll see
I guess the only complexity beyond creating the patch in the first place, would be the maintenance of this patch for you going forward. For example it may be necessary to rebase your patch once in a while as the Linux kernel source code changes.
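As a rough sketch of that patch workflow (the layer, recipe version wildcard, and patch file names below are hypothetical, not the poster’s actual files), a kernel patch is usually carried in a bbappend inside your own layer:

```
# meta-mylayer/recipes-kernel/linux/linux-toradex_%.bbappend  (hypothetical)

# Let BitBake find patch files stored next to this bbappend
FILESEXTRAPATHS:prepend := "${THISDIR}/${PN}:"

# Applied on top of the in-tree kernel sources during do_patch
SRC_URI += "file://0001-rpmsg-increase-buffer-size.patch"
```

When the kernel version moves, the patch may stop applying cleanly and needs to be rebased against the new sources, which is exactly the maintenance cost mentioned above.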
However, the instructions refer to a reference image.
In the Torizon OS project I don’t have a build/conf/local.conf
but a build-torizon/conf/local.conf, which is a slightly different path.
Those are the same file. The local.conf file is a universal concept in Yocto builds. The only difference here is the file path.
I started a new bitbake with modifications via a new layer (adding 2 patches).
There are a lot of warnings coming now…we’ll see
Well, all I can say is welcome to being a Yocto developer. It has a high learning curve for a reason. But I’m sure with more time and experience you can get the hang of it. If you have any future questions feel free to ask us. Though, keep in mind it can be a bit difficult for us to debug a custom Yocto build on your side, but we’ll always do what we can within reason.