When I try to create an image with pre-provisioned containers I get:
$ torizoncore-builder bundle docker-compose.yml --bundle-directory bundle
An unexpected Exception occured. Please provide the following stack trace to
the Toradex TorizonCore support team:
Traceback (most recent call last):
  File "/builder/torizoncore-builder", line 215, in <module>
    mainargs.func(mainargs)
  File "/builder/tcbuilder/cli/bundle.py", line 94, in do_bundle
    bundle(bundle_dir=args.bundle_directory,
  File "/builder/tcbuilder/cli/bundle.py", line 46, in bundle
    host_workdir = common.get_host_workdir()
  File "/builder/tcbuilder/backend/common.py", line 426, in get_host_workdir
    container = docker_client.containers.get(container_id)
  File "/usr/local/lib/python3.9/dist-packages/docker/models/containers.py", line 889, in get
    resp = self.client.api.inspect_container(container_id)
  File "/usr/local/lib/python3.9/dist-packages/docker/utils/decorators.py", line 16, in wrapped
    raise errors.NullResource(
docker.errors.NullResource: Resource ID was not provided
This happens with my docker-compose.yml file and also with this example.
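For context, even a compose file as minimal as this (the image name is just a placeholder, not my actual services) is enough to hit the same code path:

$ cat docker-compose.yml
version: "2.4"
services:
  portainer:
    # placeholder service; my real containers fail the same way
    image: portainer/portainer-ce
$ torizoncore-builder bundle docker-compose.yml --bundle-directory bundle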
torizoncore-builder seems up to date:
$ source tcb-env-setup.sh
You may have an outdated version installed. Would you like to check for updates online? [y/n] y
Setting up TorizonCore Builder with version 3.
Pulling TorizonCore Builder...
3: Pulling from torizon/torizoncore-builder
Digest: sha256:3d796d6ede4dde94ccd8ada7594d569be6e2bf765465260ce1163da997993b76
Status: Image is up to date for torizon/torizoncore-builder:3
docker.io/torizon/torizoncore-builder:3
Done!
Two things come to mind. I don’t have the machine I used for this connected to a board or powered up, so I cannot look.
- You didn’t create a cheat file in your $HOME/bin directory named “enable-arm” that you always run before attempting any builds.
- Most likely, you forgot to “docker login” before attempting your build and your containers are behind a password-protected account.
You “can” log in, but did you docker login before you ran the command in that terminal session?
Usually those errors are due to stupid access violations. Trust me, I’ve hit them often enough to know.
Forget to run my enable-arm command: weird, stupid errors.
Forget to run docker login right before a build that needs private containers: weird, stupid errors.
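For the record, that cheat script is nothing magic; it just registers the ARM emulation handlers and reminds me to log in. Roughly something like this, assuming the usual torizon/binfmt approach (illustrative only, not your setup):

#!/bin/sh
# enable-arm (sketch): register the qemu binfmt handlers so ARM containers can run on an x86 build host
docker run --rm -it --privileged torizon/binfmt
# for private images, also log in before building in the same session
docker login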
Yes, I did log into Docker.
And just in case: yes, from the command line. I can easily create images, push them to Docker Hub and pull them back before running torizoncore-builder.
Then I’m all out of the simple suggestions.
Sorry.
Well, no, I’m not. I have two more stupid things.
Your example is for 64-bit. You wouldn’t perchance be using a 32-bit target/platform, would you?
In the example, take the version numbers off the images. I remember something weird once with that. Just see if it builds pulling the very latest of everything.
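In other words, drop the tags in the compose file so Docker pulls the latest of each image; the service and image names below are only placeholders:

services:
  weston:
    # was: image: torizon/weston:2
    image: torizon/weston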
Everything is 64-bit, and I can pull the containers on the device and run them without issues.
I’ll try that last thing, thanks for your time!!
Oh, my images do not have version numbers, and I get the same error.
Greetings @lisandropm,
I believe what you’re seeing is this known issue here: TorizonCore Builder Issue Tracker
We do have a fix for this bug internally, but it hasn’t been released yet in our TorizonCore Builder container image on Dockerhub. However, the fix is available on the open source code repo for TorizonCore Builder: GitHub - toradex/torizoncore-builder: TorizonCore Builder is a tool that allows the customization of TorizonCore images.
So what you can do for the time being is build the container image from source and then have access to the fix that way.
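Roughly, the steps would be as follows; this is only a sketch, and the exact build instructions (including whether a specific Dockerfile or build target is needed) are in the repo’s README:

$ git clone https://github.com/toradex/torizoncore-builder.git
$ cd torizoncore-builder
# build a local image of the tool; the tag name here is just an example
$ docker build -t torizoncore-builder:local .
# then point the torizoncore-builder alias from tcb-env-setup.sh at torizoncore-builder:local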
Best Regards,
Jeremias
Well, that sounds good, but when building the tool:
Unpacking garage-deploy (2020.10-37-g2cb76c46e) ...
dpkg: dependency problems prevent configuration of garage-deploy:
garage-deploy depends on openjdk-11-jre-headless; however:
Package openjdk-11-jre-headless is not installed.
garage-deploy depends on libboost-filesystem1.74.0 (>= 1.74.0); however:
Package libboost-filesystem1.74.0 is not installed.
garage-deploy depends on libboost-log1.74.0 (>= 1.74.0); however:
Package libboost-log1.74.0 is not installed.
garage-deploy depends on libboost-program-options1.74.0 (>= 1.74.0); however:
Package libboost-program-options1.74.0 is not installed.
garage-deploy depends on libboost-thread1.74.0 (>= 1.74.0); however:
Package libboost-thread1.74.0 is not installed.
dpkg: error processing package garage-deploy (--install):
dependency problems - leaving unconfigured
Errors were encountered while processing:
garage-deploy
I get the same errors when building too. However, these are non-fatal errors and the tool successfully finishes building. Does this not happen for you, or does the build stop at this error for you?
Best Regards,
Jeremias
The docker build seems to work, but mind that you are telling me to provide a client with a development build that has errors in it.
Of course I’ll test it nonetheless, but please understand that this is far from ideal.
And at least the -dev branch does not work:
$ torizoncore-builder build
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "build": executable file not found in $PATH: unknown.
lisandro@gryffindor:~/foo/repos/torizoncore(build_rv3028)$ git diff
diff --git a/tcb-env-setup.sh b/tcb-env-setup.sh
index f75b8c1..0fec85a 100644
--- a/tcb-env-setup.sh
+++ b/tcb-env-setup.sh
@@ -210,7 +210,8 @@ then
 fi
 fi
-alias torizoncore-builder='docker run --rm -it'"$volumes"'-v $(pwd):/workdir -v '"$storage"':/storage --net=host -v /var/run/docker.sock:/var/run/docker.sock torizon/torizoncore-builder:'"$chosen_tag"
+#alias torizoncore-builder='docker run --rm -it'"$volumes"'-v $(pwd):/workdir -v '"$storage"':/storage --net=host -v /var/run/docker.sock:/var/run/docker.sock torizon/torizoncore-builder:'"$chosen_tag"
+alias torizoncore-builder='docker run --rm -it -v /deploy -v $(pwd):/workdir -v storage:/storage -v /var/run/docker.sock:/var/run/docker.sock --net=host torizoncore-builder-dev:local'
echo "Setup complete! TorizonCore Builder is now ready to use."
The non-dev version seems to work. Now I do have some login issues, but I guess that’s a different story.
The docker build seems to work, but mind that you are telling me to provide a client with a development build that has errors in it.
The same non-fatal build error happens in the build that’s published on Dockerhub as well. It’s really not a huge issue.
Of course I’ll test it nonetheless, but please understand that this is far from ideal.
Well, I’m sure you understand that you’re asking for access to a fix that’s not yet available in the “stable” Dockerhub build of TorizonCore Builder. Therefore we have to resort to less-than-ideal approaches in order for you to have early access to this fix.
And at least the -dev branch does not work:
The build from the GitHub repository seems to work okay for me. Looking at the error you got, it seems to be an error from Docker rather than from the TorizonCore Builder tool itself.
Actually, looking at your alias, you have torizoncore-builder-dev:local. Did you build the development image that is used for testing the tool rather than the tool itself? It explains the difference in the README of the repo fairly clearly.
The non-dev version seems to work. Now I do have some login issues, but I guess that’s a different story.
Are we talking about the non-dev version as in the version that’s on DockerHub? Or did you build the correct image this time from the GitHub repo?
Best Regards,
Jeremias
Hi!
The same non-fatal build error happens in the build that’s published on Dockerhub as well. It’s really not a huge issue.
OK!
Did you build the development image that is used for testing the tool rather than the tool itself? It explains the difference in the README of the repo fairly clearly.
My bad! Yes, I missed that part. Sorry for that.
Are we talking about the non-dev version as in the version that’s on DockerHub? Or did you build the correct image this time from the GitHub repo?
I’m talking about torizoncore-builder/local, built with the GitHub repo. By using this image I managed to build the final OS image with the previously generated bundle in it, at least according to the log. But when the image is pushed into the device we are still getting the Torizon app instead of the ones in our docker-compose.yml. I did not have the time to look further into this yet. It might or might not be an error on my side.
But when the image is pushed into the device we are still getting the Torizon app instead of the ones in our docker-compose.yml.
How are you pushing the image to the device? Using the bundle and combine commands you should have an image that can then be installed with Toradex Easy Installer. If you’re pushing the image with the deploy command then this won’t work.
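For reference, the flow looks roughly like this; the directory names are placeholders and the exact combine arguments have changed between TorizonCore Builder releases, so check torizoncore-builder combine --help for your version:

# bundle the containers referenced by the compose file
$ torizoncore-builder bundle docker-compose.yml --bundle-directory bundle
# combine the bundle with an unpacked Toradex Easy Installer image
$ torizoncore-builder combine --bundle-directory bundle --image-directory tezi-image --output-directory tezi-image-with-containers
# flash the resulting directory to the module with Toradex Easy Installer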
I’m pushing the image via deploy, so that would explain it.
I’m pushing the image via deploy, so that would explain it.
Okay, then yes, that won’t work. Currently the only way to deploy a TorizonCore image that has been pre-provisioned with containers is by producing an Easy Installer image and flashing the custom image that way.
Due to technical reasons you can’t use the deploy command to deploy pre-provisioned container customization. This is described in further detail in the article here: Pre-provisioning Docker Containers onto a Torizon OS image | Toradex Developer Center
Best Regards,
Jeremias