Hardware: Verdin iMX8MP WB V1.1A
Torizon OS 6.4.0-build.5
Torizoncore-builder v3.8.1
Hi all,
I am trying to integrate our Torizon Easy Installer image creation into our existing Bitbucket Pipelines workflow and am getting the following error:
2024-03-29 13:17:28,662 - torizon.tcbuilder.cli.build - DEBUG - Removing temporary bundle directory bundle_20240329131728_641433.tmp
2024-03-29 13:17:28,663 - torizon.tcbuilder.cli.build - INFO - Removing output directory 'output_directory' due to build errors
2024-03-29 13:17:28,716 - root - ERROR - Error: Can't determine current container ID.
2024-03-29 13:17:28,717 - root - DEBUG - Traceback (most recent call last):
File "/builder/torizoncore-builder", line 221, in <module>
mainargs.func(mainargs)
File "/builder/tcbuilder/cli/build.py", line 506, in do_build
raise exc
File "/builder/tcbuilder/cli/build.py", line 479, in do_build
build(args.config_fname, args.storage_directory,
File "/builder/tcbuilder/cli/build.py", line 465, in build
raise exc
File "/builder/tcbuilder/cli/build.py", line 454, in build
handle_output_section(
File "/builder/tcbuilder/cli/build.py", line 303, in handle_output_section
handle_bundle_output(
File "/builder/tcbuilder/cli/build.py", line 359, in handle_bundle_output
"host_workdir": common.get_host_workdir()[0],
File "/builder/tcbuilder/backend/common.py", line 409, in get_host_workdir
container_id = get_own_container_id(docker_client)
File "/builder/tcbuilder/backend/common.py", line 402, in get_own_container_id
raise OperationFailureError("Can't determine current container ID.")
tcbuilder.errors.OperationFailureError: Can't determine current container ID.
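
From the traceback, get_own_container_id() presumably asks the Docker daemon about the container the builder itself is running in. My working theory is that in Bitbucket Pipelines the step container is managed by Bitbucket's own infrastructure rather than by the docker service daemon the step talks to, so that lookup comes up empty. A rough diagnostic I plan to run inside the failing step (assuming a cgroup v1 layout and that Docker left the short container ID in /etc/hostname; neither may hold on Bitbucket's runners):

cat /etc/hostname                                  # short container ID on a plain Docker host
grep -o -m1 '[0-9a-f]\{64\}' /proc/self/cgroup     # full container ID, if the cgroup paths carry it
docker ps --no-trunc                               # which containers the DinD service daemon can actually see

If the ID from the first two commands never shows up in docker ps, that would explain the error.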
The Bitbucket Pipelines step is similar to this:
- step:
    name: Build EasyInstaller Image
    image: torizon/torizoncore-builder:3.8.1
    services:
      - docker
    script:
      - mkdir /deploy
      - apt update
      - apt install -y zip
      - export $(grep -v '^#' tmp.env | xargs)
      - torizoncore-builder --log-level debug build --file ./Scripts/tcbuild.yaml
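
One workaround I am considering is to stop using the builder image as the step image and instead launch torizoncore-builder as a container on the DinD daemon, the way Toradex's tcb-env-setup.sh does on a local machine. An untested sketch (the base image is just a placeholder for anything with a docker CLI, and the /var/run/docker.sock mount assumes the Bitbucket docker service exposes a Unix socket rather than only DOCKER_HOST=tcp://localhost:2375):

- step:
    name: Build EasyInstaller Image
    image: atlassian/default-image:4
    services:
      - docker
    script:
      - docker run --rm
          -v "$PWD:/workdir"
          -v storage:/storage
          -v /var/run/docker.sock:/var/run/docker.sock
          torizon/torizoncore-builder:3.8.1
          --log-level debug build --file ./Scripts/tcbuild.yaml

That way the builder container is created by the same daemon it later queries, so get_own_container_id() should at least have a chance of finding it. Environment variables consumed by tcbuild.yaml (VERSION_TAG, USERNAME, PASSWORD) would need to be passed through with -e.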
The tcbuild.yaml is similar to this:
# >> The input section specifies the image to be taken as the base for the
# >> customization.
input:
  easy-installer:
    remote: "https://artifacts.toradex.com/artifactory/torizoncore-oe-prod-frankfurt/kirkstone-6.x.y/release/5/verdin-imx8mp/torizon/torizon-core-docker/oedeploy/torizon-core-docker-verdin-imx8mp-Tezi_6.4.0+build.5.tar"

# >> The customization section defines the modifications to be applied to get
# >> the desired output image.
customization:
  device-tree:
    overlays:
      remove:
        - verdin-imx8mp_dsi-to-hdmi_overlay.dtbo
  kernel:
    arguments:
      - ipv6.disable=1
  filesystem:
    - Scripts/FileSystemChanges/

# >> The output section defines properties of the output image.
output:
  # >> Parameters for deploying to an Easy Installer image.
  easy-installer:
    # >> Output directory of the customized image (REQUIRED):
    local: output_directory
    # >> Information used by Toradex Easy Installer:
    name: "Build ${VERSION_TAG}"
    description: "Easy Installer Image Build for device"
    # licence: files/custom-licence.html
    # release-notes: files/custom-release-notes.html
    # accept-licence: true
    # autoinstall: true
    # autoreboot: true
    bundle:
      compose-file: docker-compose-production.yml
      platform: linux/arm64
      username: "${USERNAME}"
      password: "${PASSWORD}"
      registry: <ACR registry>
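
To isolate whether the standalone bundle command trips over the same container-ID lookup, I may also try producing the bundle in a separate command and then pointing the build step at the pre-made directory. A sketch, assuming the --bundle-directory, --platform and --login-to options and the bundle dir: property behave as described in the torizoncore-builder docs:

torizoncore-builder bundle ./docker-compose-production.yml \
    --bundle-directory bundle \
    --platform linux/arm64 \
    --login-to <ACR registry> "${USERNAME}" "${PASSWORD}"

and then in tcbuild.yaml:

    bundle:
      dir: bundle/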
The issue seems to involve the bundle step of the build command, and possibly its Docker-in-Docker requirement. Has anyone gotten the bundle step working in Bitbucket Pipelines? Am I right that this is a Docker-in-Docker problem? I also found a list of restricted Docker commands in Pipelines; will those be an issue during the bundle step and the downstream processes?
Thank you so much!
-Adam