I’m using a Verdin iMX8M Plus module together with a custom carrier board. For building the image I’m using TorizonCore 6 and following the online tutorials to build and deploy it. The image is torizon-core-docker-rt-verdin-imx8mp-Tezi_6.2.0-devel-202303+build.6 with PREEMPT_RT.
For the custom board I need device tree customization, and during the development phase I would like to use the SSH deploy command to deploy the image remotely.
So after unpacking the base image, modifying the YAML build configuration file to include the custom device tree, and running the build command, I call the remote deploy command.
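The exact invocation wasn’t quoted above; assuming the standard workflow from the TorizonCore Builder documentation, the remote deploy looks roughly like this (the IP address and password are placeholders, not values from this thread):

```shell
# Deploy whatever image is currently unpacked in TorizonCore Builder's
# storage to a board reachable over the network, then reboot it.
torizoncore-builder deploy \
    --remote-host 192.168.0.100 \
    --remote-username torizon \
    --remote-password <password> \
    --reboot
```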
The result is that the image gets only partially updated. For instance, I can see that the kernel version is correct and corresponds to the image, but the device tree is not applied and the default device tree file is set instead for some reason. The device tree file declared in tcbuild.yaml does not take effect; it is not even copied into the boot/ostree/torizon-xxxx/dtb directory.
I’m not using dt checkout on TorizonCore 6; I’m manually cloning and setting up the device trees as instructed in the tutorial and passing those paths into tcbuild.yaml.
On the other hand, if I use a USB stick, copy the output directory onto it, and then use recovery mode to flash the image via Toradex Easy Installer, everything gets updated correctly. So when flashing the same image from USB, the device tree is properly set.
My question is: why doesn’t the remote deploy fully update the image with the custom device tree files? I would prefer to use this method during the development phase, and I’m wondering how to make it work.
I think we had the same problem. While you wait for an official answer from Toradex, you can try my solution to see if it solves your problem as well.
I think that when you execute the torizoncore-builder deploy command, you are deploying the last image you unpacked, not the one you just built.
In order to deploy the custom image you just built you have to unpack it first.
I don’t know how you have it set up in your tcbuild.yaml, but we have it configured to output the custom image to a directory named custom_image.
So the order of operations for us is:

1. Modify the source code, be it device tree, overlays, drivers, etc.
2. Delete the custom_image directory containing the previously built image with rm -r custom_image, or else torizoncore-builder build will throw an error.
3. Build the modified custom image with the torizoncore-builder build command.
4. Unpack the newly built custom image with torizoncore-builder images unpack custom_image (note that custom_image in this command is the name of the output directory specified in our tcbuild.yaml).
5. Lastly, execute torizoncore-builder deploy --remote-host 192.168.0.xxx --remote-username torizon --remote-password xxxx --reboot to deploy the custom image to the target.
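The steps above can be sketched as a small rebuild-and-redeploy script (the directory name, host, and credentials are examples from this thread, not fixed values):

```shell
#!/bin/sh
set -e

# Step 2: remove the previous build output, otherwise the build
# command will refuse to overwrite the existing directory.
rm -rf custom_image

# Step 3: build the customized image as described by tcbuild.yaml.
torizoncore-builder build

# Step 4: unpack the freshly built image so that deploy operates on
# it, and not on whatever image was unpacked previously.
torizoncore-builder images unpack custom_image

# Step 5: push the unpacked image to the board over SSH and reboot.
torizoncore-builder deploy \
    --remote-host 192.168.0.100 \
    --remote-username torizon \
    --remote-password <password> \
    --reboot
```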
With this you can have a known branch name that you can then use with the deploy command. That branch will contain the actual changes performed by the build command. Keep in mind this branch name will exist only for as long as the storage volume of TorizonCore Builder is kept intact; whenever you run the build command, this volume is wiped by default for a clean build.
Hi @mmarcos.sensor and @jeremias.tx,
First of all, thank you both for your help! In the end I opted for the second approach with branches. This was the critical step missing from my workflow. After calling the union command to create a branch and then passing that branch as an argument to the deploy command, the image gets properly updated. I was able to verify that the correct device tree file is indeed selected on the target board.
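For anyone finding this later, the branch-based flow that worked can be sketched like this (the branch name is arbitrary, the host and password are placeholders, and the exact union syntax may differ slightly between TorizonCore Builder versions):

```shell
# Combine the changes held in TorizonCore Builder's storage into a
# named OSTree branch...
torizoncore-builder union my-custom-branch

# ...then deploy that branch explicitly to the target over SSH.
torizoncore-builder deploy my-custom-branch \
    --remote-host 192.168.0.100 \
    --remote-username torizon \
    --remote-password <password> \
    --reboot
```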