Board: Toradex Verdin iMX8M Plus.
In the tcbuild.yml configuration file, I see we can pull containers and install them in our output TorizonCore image by simply passing a docker-compose.yml file and, optionally, a username/password.
bundle:
  # >> Choose one of the options:
  # >> (1) Specify a docker-compose file whose referenced images will be downloaded.
  # >>     Properties platform, username, password and registry are optional.
  compose-file: files/docker-compose.yml
  # platform: linux/arm/v7
  # username: ""
  # password: "${DOCKER_REGISTRY_PULL_SA}"
  # registry: https://europe-docker.pkg.dev/v2/xxxxxxxxxxx/docker-registry-xxxxxxxxxxxxxxx
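For context, the referenced files/docker-compose.yml is just an ordinary compose file. A minimal placeholder example (service name and image path below are illustrative only, not our actual setup):
version: "2.4"
services:
  my-app:                                                          # placeholder service name
    image: europe-docker.pkg.dev/my-project/my-repo/my-app:latest  # placeholder image path
    restart: always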
In our case, we are pulling our containers from Google Cloud and we use a token to grant access.
We run this command to authenticate to GCP (Google Cloud Platform).
I don’t think TorizonCore Builder would be able to easily access this container registry with this authentication method. Currently TorizonCore Builder can authenticate to container registries via the following methods:
- Username/password login
- Authentication via a provided CA certificate
In this case it looks like you need to authenticate using a specific Google Cloud tool along with a JSON file that also looks specific to Google Cloud. Currently TorizonCore Builder doesn’t handle anything Google Cloud specific, so I would think this is currently not possible, unless there’s a more generic way to authenticate against the Google Cloud registry; I’m not familiar enough with Google Cloud to know.
A possible workaround would be to first pull the container image from your Google Cloud registry to your development machine. Then you could run a local registry on your machine and have TorizonCore Builder target that instead. Basically something like what is described here: Try to make a customized TorizonCore with Pre-provisioned Docker-Images - #12 by jeremias.tx
This should work in theory, though it is a bit clunky. But until proper Google Cloud authentication support can be added to TorizonCore Builder, this might be your only practical option.
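To make that a bit more concrete, here is a rough sketch of the steps on your development machine. The registry host, project, and image names below are placeholders, not anything from your actual setup:
# Authenticate Docker against the Google Cloud registry on the dev machine
gcloud auth activate-service-account --key-file=service_account.json
gcloud auth configure-docker europe-docker.pkg.dev

# Pull the image locally (placeholder image path)
docker pull europe-docker.pkg.dev/my-project/my-repo/my-app:latest

# Start a local registry and push the image into it
docker run -d -p 5000:5000 --name local-registry registry:2
docker tag europe-docker.pkg.dev/my-project/my-repo/my-app:latest localhost:5000/my-app:latest
docker push localhost:5000/my-app:latest
The docker-compose.yml referenced in your tcbuild.yml would then point at the localhost:5000/... image instead of the Google Cloud one.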
Hello,
The idea is to copy the docker-compose.yml and install.sh files to /home/torizon and run this command from wrapup.sh:
bash /home/torizon/install.sh
install.sh looks like:
#!/bin/bash

# Pick the Google Cloud CLI package matching the machine architecture
package_architecture=$(arch)
if [[ "$package_architecture" == "aarch64" ]]; then
    package=https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-419.0.0-linux-arm.tar.gz
else
    package=https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-419.0.0-linux-x86_64.tar.gz
fi

wget "$package" -O gcloud.tar.gz
tar xf gcloud.tar.gz
./google-cloud-sdk/install.sh -q
source google-cloud-sdk/path.bash.inc  # make the gcloud command available in this shell

# Authenticate to GCP
gcloud auth activate-service-account --key-file="/tmp/service_account.json" -q

# Pull and start the containers
docker-compose up -d
Now that I understand what you are trying to do here, I see that the wrapup.sh method may not be ideal.
First of all, what you need to realize is that wrapup.sh is executed in Easy Installer, not in TorizonCore. Easy Installer runs from RAM, not from the eMMC like TorizonCore does. So you can’t just write to the filesystem of TorizonCore, since Easy Installer is separate, in RAM. In theory you could try to mount the eMMC device to get access to the newly flashed TorizonCore filesystem, but even then I don’t know whether you’d be able to modify it freely.
Secondly, the install.sh script you have isn’t going to work in Easy Installer. Your script relies on various binaries and utilities, like docker-compose, which aren’t in Easy Installer to begin with. And again, this would be executing from the Easy Installer filesystem, not the TorizonCore one.
Due to these complications, I would recommend you create a systemd service on TorizonCore that executes your script upon first boot. That way everything gets installed and executed from the perspective of TorizonCore, which is where you want these things to end up.
Please note that everything should be in /etc in the filesystem for it to be captured properly. This includes the script that your systemd service will call.
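As a rough sketch of what that could look like (the unit name, script path, and marker file below are placeholders I picked, not something required by TorizonCore):
# /etc/systemd/system/install-containers.service (placeholder name)
[Unit]
Description=Install and start containers on first boot
Wants=network-online.target
After=network-online.target docker.service
# Only run if the marker file does not exist yet
ConditionPathExists=!/etc/install-containers.done

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/etc/install-containers.sh
ExecStartPost=/bin/touch /etc/install-containers.done

[Install]
WantedBy=multi-user.target
Enabling the unit creates a symlink under /etc/systemd/system/multi-user.target.wants/, so the unit, its enablement, and your /etc/install-containers.sh script all live in /etc and should get captured together.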
Traceback (most recent call last):
  File "/builder/torizoncore-builder", line 221, in <module>
    mainargs.func(mainargs)
  File "/builder/tcbuilder/cli/isolate.py", line 55, in isolate_subcommand
    ret = isolate.isolate_user_changes(changes_dir,
  File "/builder/tcbuilder/backend/isolate.py", line 163, in isolate_user_changes
    indx = output.index("Password: ")
ValueError: 'Password: ' is not in list
Do you know why I get this error? How can I solve it?
Hello,
I was able to fix the problem by running the command:
sudo usermod -aG sudo torizon
Now I wonder why, when I make modifications again and execute the isolate command and then the union command, I do not see my modifications in the .tar.zst?
First time: isolate + build + union … working fine. Second time (after making changes in /etc): isolate + union … is not working; isolate + build + union applies only the last isolated changes in /etc.
The issue here comes down to understanding how the isolate command works. Before you started, you ran the unpack command on a base image, correct? When you do this, the TorizonCore Builder tool has a fresh image as a reference. Then, when you run the isolate command, it compares the filesystem on the target device against the reference from unpack, finds the differences, and captures them.
However, once this is done the reference is no longer the base image; it’s the base image plus your initial changes from the first isolate. So if you run isolate a second time, it will only capture the differences between the first and second isolate, not the differences between the base image and the second isolate.
Therefore, what you need to do is run unpack again on the base image. This resets TorizonCore Builder’s reference to a fresh filesystem. That way when you run isolate it will capture the differences between the fresh filesystem and whatever you have on your target device.
Or, better yet, use the method with the build command instead, since the build command always runs unpack before doing anything, ensuring a clean starting reference.
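For reference, the second capture would then look roughly like this (the device IP, image path, directory, and branch names are placeholders; double-check the exact flags against the TorizonCore Builder documentation):
# Reset the reference to the fresh base image
torizoncore-builder images unpack <path-to-your-base-tezi-image>

# Capture ALL /etc changes on the device relative to that fresh base
torizoncore-builder isolate --remote-host 192.168.1.100 \
    --remote-username torizon --remote-password <password> \
    --changes-directory changes/

# Merge the captured changes into a new OSTree branch
torizoncore-builder union my-custom-branch --changes-directory changes/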
One last note before closing this post: do you have any idea why I can’t capture the changes in /etc/sudoers.d/ when I add torizon to sudoers with this command?
echo "torizon ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/torizon
The issue here is that you’ve configured sudoers so that the torizon user doesn’t require a password for sudo. When TorizonCore Builder runs certain commands, like isolate, it needs to execute commands remotely on the target device via SSH, and some of these commands are executed with sudo. When TorizonCore Builder processes the output from the target device, it filters out the Password: prompt that appears when you execute commands with sudo. But since you’ve disabled that prompt, there is no Password: string to filter, which breaks the output handling and parsing.
Basically, don’t make this change; it’s also ill-advised to remove the password requirement for sudo commands in the first place.
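If you’ve already made this change on your device, reverting it should just be a matter of removing the drop-in file you created (path taken from your command above):
sudo rm /etc/sudoers.d/torizon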