TCB-builder Getting exceptions when trying to bundle my image

This is the output:

Starting DIND container
Connecting to Docker Daemon at "tcp://172.17.0.4:22376"
Stopping DIND container
An unexpected Exception occurred. Please provide the following stack trace to
the Toradex TorizonCore support team:

Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/urllib3/connectionpool.py", line 787, in urlopen
response = self._make_request(
File "/usr/local/lib/python3.9/dist-packages/urllib3/connectionpool.py", line 488, in _make_request
raise new_e
File "/usr/local/lib/python3.9/dist-packages/urllib3/connectionpool.py", line 464, in _make_request
self._validate_conn(conn)
File "/usr/local/lib/python3.9/dist-packages/urllib3/connectionpool.py", line 1093, in _validate_conn
conn.connect()
File "/usr/local/lib/python3.9/dist-packages/urllib3/connection.py", line 790, in connect
sock_and_verified = _ssl_wrap_socket_and_match_hostname(
File "/usr/local/lib/python3.9/dist-packages/urllib3/connection.py", line 969, in _ssl_wrap_socket_and_match_hostname
ssl_sock = ssl_wrap_socket(
File "/usr/local/lib/python3.9/dist-packages/urllib3/util/ssl_.py", line 480, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname)
File "/usr/local/lib/python3.9/dist-packages/urllib3/util/ssl_.py", line 524, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "/usr/lib/python3.9/ssl.py", line 500, in wrap_socket
return self.sslsocket_class._create(
File "/usr/lib/python3.9/ssl.py", line 1073, in _create
self.do_handshake()
File "/usr/lib/python3.9/ssl.py", line 1342, in do_handshake
self._sslobj.do_handshake()
ConnectionResetError: [Errno 104] Connection reset by peer

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/requests/adapters.py", line 589, in send
resp = conn.urlopen(
File "/usr/local/lib/python3.9/dist-packages/urllib3/connectionpool.py", line 841, in urlopen
retries = retries.increment(
File "/usr/local/lib/python3.9/dist-packages/urllib3/util/retry.py", line 474, in increment
raise reraise(type(error), error, _stacktrace)
File "/usr/local/lib/python3.9/dist-packages/urllib3/util/util.py", line 38, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.9/dist-packages/urllib3/connectionpool.py", line 787, in urlopen
response = self._make_request(
File "/usr/local/lib/python3.9/dist-packages/urllib3/connectionpool.py", line 488, in _make_request
raise new_e
File "/usr/local/lib/python3.9/dist-packages/urllib3/connectionpool.py", line 464, in _make_request
self._validate_conn(conn)
File "/usr/local/lib/python3.9/dist-packages/urllib3/connectionpool.py", line 1093, in _validate_conn
conn.connect()
File "/usr/local/lib/python3.9/dist-packages/urllib3/connection.py", line 790, in connect
sock_and_verified = _ssl_wrap_socket_and_match_hostname(
File "/usr/local/lib/python3.9/dist-packages/urllib3/connection.py", line 969, in _ssl_wrap_socket_and_match_hostname
ssl_sock = ssl_wrap_socket(
File "/usr/local/lib/python3.9/dist-packages/urllib3/util/ssl_.py", line 480, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname)
File "/usr/local/lib/python3.9/dist-packages/urllib3/util/ssl_.py", line 524, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "/usr/lib/python3.9/ssl.py", line 500, in wrap_socket
return self.sslsocket_class._create(
File "/usr/lib/python3.9/ssl.py", line 1073, in _create
self.do_handshake()
File "/usr/lib/python3.9/ssl.py", line 1342, in do_handshake
self._sslobj.do_handshake()
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/docker/api/client.py", line 223, in _retrieve_server_version
return self.version(api_version=False)["ApiVersion"]
File "/usr/local/lib/python3.9/dist-packages/docker/api/daemon.py", line 181, in version
return self._result(self._get(url), json=True)
File "/usr/local/lib/python3.9/dist-packages/docker/utils/decorators.py", line 44, in inner
return f(self, *args, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/docker/api/client.py", line 246, in _get
return self.get(url, **self._set_request_timeout(kwargs))
File "/usr/local/lib/python3.9/dist-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.9/dist-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/requests/adapters.py", line 604, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/builder/torizoncore-builder", line 232, in <module>
mainargs.func(mainargs)
File "/builder/tcbuilder/cli/bundle.py", line 132, in do_bundle
bundle(bundle_dir=args.bundle_directory,
File "/builder/tcbuilder/cli/bundle.py", line 83, in bundle
bundle_be.download_containers_by_compose_file(
File "/builder/tcbuilder/backend/bundle.py", line 663, in download_containers_by_compose_file
dind_client = manager.get_client()
File "/builder/tcbuilder/backend/bundle.py", line 379, in get_client
dind_client = docker.DockerClient(base_url=self.docker_host, tls=tls_config)
File "/usr/local/lib/python3.9/dist-packages/docker/client.py", line 45, in __init__
self.api = APIClient(*args, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/docker/api/client.py", line 207, in __init__
self._version = self._retrieve_server_version()
File "/usr/local/lib/python3.9/dist-packages/docker/api/client.py", line 230, in _retrieve_server_version
raise DockerException(
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

Any help would be appreciated.
Steve

Hi, I’m the Toradex AI Assistant.

I might make mistakes or occasionally misunderstand you. Our support team will step in if needed.


I’d be glad to help troubleshoot this TorizonCore Builder connection error! To provide the most accurate guidance, could you let me know which hardware platform and OS version you’re using? You can run the tdx-info command to get this information - details are available at this link.

Troubleshooting Steps

Based on the error stack trace showing SSL connection issues with the Docker-in-Docker (DIND) container, here are several potential solutions:

1. Check for Proxy Environment Issues

If you’re working in a corporate environment with proxy settings, this could be causing the SSL handshake failures (1). The DIND container may not be inheriting your proxy configuration properly.

2. Restart Docker Service

There’s a known issue where date/time synchronization problems can cause SSL certificate validation failures (2). Try restarting your Docker daemon:

# On Linux/WSL:
sudo systemctl restart docker

# On Windows:
# Restart Docker Desktop


3. Check System Date/Time

Ensure your system clock is correctly synchronized, as SSL certificate validation depends on accurate time (2).

4. Try Alternative TorizonCore Builder Parameters

You can try using additional parameters with the bundle command to work around network issues (3):

$ torizoncore-builder bundle docker-compose.yml --dind-param="--insecure-registry=your-registry" --platform linux/arm/v7


5. Check Docker Configuration

The error suggests the DIND container cannot establish a secure connection. Try checking if there are any Docker configuration issues or firewall restrictions blocking the SSL handshake.
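To narrow this down yourself, it can help to distinguish a plain "connection refused" from a reset that happens mid TLS handshake, which is what the trace above shows. The following is only a diagnostic sketch (not part of TorizonCore Builder); the host and port are taken from the log above and may differ on your machine:

```python
import socket
import ssl

def probe_tls(host: str, port: int, timeout: float = 3.0) -> str:
    """Classify what happens when we attempt a TLS handshake with host:port."""
    try:
        raw = socket.create_connection((host, port), timeout=timeout)
    except ConnectionRefusedError:
        return "refused (nothing listening on that port)"
    except OSError as exc:
        return f"tcp error: {exc}"
    # Diagnostic only: skip certificate validation so we reach the handshake.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with ctx.wrap_socket(raw, server_hostname=host):
            return "handshake ok"
    except ConnectionResetError:
        return "reset during handshake (the [Errno 104] seen in the trace)"
    except ssl.SSLError as exc:
        return f"ssl error: {exc}"
    except OSError as exc:
        return f"socket error: {exc}"
    finally:
        raw.close()

if __name__ == "__main__":
    # DIND address from the log above; adjust to your setup.
    print(probe_tls("172.17.0.4", 22376))
```

"refused" means nothing is listening at that address; a reset or SSL error during the handshake points at a proxy, firewall, or TLS configuration problem instead.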

Once you provide your platform details, I can offer more specific troubleshooting steps tailored to your setup.

Best regards, Toradex AI Assistant.


Did you like my answer? :+1: :-1:

If I answered your question, please tick the “Solution” checkbox.

For follow-up questions to me, please use @ToradexAI in your post.

Software summary

Bootloader: U-Boot

Kernel version: 6.6.94-7.3.0-g5a62539de8f9 #1-Torizon SMP PREEMPT Wed Jun 25 09:14:55 UTC 2025
Kernel command line: root=LABEL=otaroot rootfstype=ext4 quiet logo.nologo vt.global_cursor_default=0 plymouth.ignore-serial-consoles splash fbcon=map:3 ostree=/ostree/boot.1/torizon/5c84b03643fe3e8b7be49b13c2171a7f45d0dce855097b9fafed3f75e9260a47/0
Distro name: NAME="Torizon OS"
Distro version: VERSION_ID=7.3.0-build.18
Distro variant: VARIANT="Docker"
Hostname: verdin-imx8mp-15132709

Hardware info

HW model: Toradex Verdin iMX8M Plus WB on Verdin Development Board

Toradex version: 0058 V1.1A
Serial number: 15132709
Processor arch: aarch64

Thank you for providing the system information! I can see you’re using Torizon OS 7.3.0-build.18 on a Verdin iMX8M Plus WB with the Development Board. This helps narrow down the troubleshooting steps for your TorizonCore Builder SSL connection error.

Specific Solutions for Your Platform

1. Platform-Specific Bundle Command

For your Verdin iMX8M Plus (ARM64 architecture), you need to specify the correct platform when using the bundle command (1)(2):

$ torizoncore-builder bundle --platform=linux/arm64 docker-compose.yml --bundle-directory bundle


2. Docker-in-Docker Parameters for SSL Issues

The connection reset error suggests SSL handshake problems with the Docker-in-Docker container (2). Try using additional DIND parameters to work around network connectivity issues (2):

$ torizoncore-builder bundle docker-compose.yml --platform linux/arm64 --dind-param="--insecure-registry=your-registry-if-applicable"


3. Environment Variable Forwarding

If you’re in a corporate environment with proxy settings, you can forward environment variables to the DIND container (2):

$ torizoncore-builder bundle docker-compose.yml --platform linux/arm64 --dind-env VAR1='value1' --dind-env VAR2='value2'


4. Alternative Approach Using Build Command

Instead of using standalone commands, you could try the integrated build approach with a configuration file (1). Create a tcbuild.yaml file (1):

input:
  easy-installer:
    local: images/torizon-core-docker-verdin-imx8mp-Tezi_7.3.0+build.18.tar
output:
  easy-installer:
    local: torizon-core-docker-verdin-imx8mp-Tezi_7.3.0.CUSTOM
    bundle:
      compose-file: docker-compose.yml


Then run (1):

$ torizoncore-builder build


This approach downloads container images implicitly and may work around the DIND connection issues you’re experiencing (1).

5. Check Docker Service Status

Since you’re on Torizon OS 7.3.0, ensure your Docker service is running properly. The SSL connection reset could indicate Docker daemon issues on your host system.

Try these solutions in order, starting with specifying the correct platform parameter for your ARM64 architecture. The build command approach might be particularly effective as it handles the container bundling differently than the standalone bundle command.

Best regards, Toradex AI Assistant.



@eric.tx

Unfortunately, nothing seems to work. It errors out whether I use the full build method or not. Here is the debug output, which might be helpful; it doesn't seem to use the right address when trying to make the connection:
DinD> time="2026-02-10T14:59:51.935489828Z" level=info msg="Docker daemon" commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
DinD> time="2026-02-10T14:59:51.936378448Z" level=info msg="Daemon has completed initialization"
DinD> time="2026-02-10T14:59:52.006934960Z" level=info msg="API listen on /var/run/docker.sock"
DinD> time="2026-02-10T14:59:52.006952128Z" level=info msg="API listen on [::]:22376"
DinD> time="2026-02-10T14:59:52.006949036Z" level=info msg="API listen on [::]:2376"
DinD> time="2026-02-10T14:59:52.042815362Z" level=warning msg="Error getting v2 registry: Get \"https://localhost:5002/v2/\": dial tcp [::1]:5002: connect: connection refused"
DinD> time="2026-02-10T14:59:52.042842333Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://localhost:5002/v2/\": dial tcp [::1]:5002: connect: connection refused"
DinD> time="2026-02-10T14:59:52.043126573Z" level=warning msg="Error getting v2 registry: Get \"http://localhost:5002/v2/\": dial tcp [::1]:5002: connect: connection refused"
DinD> time="2026-02-10T14:59:52.043185422Z" level=info msg="Attempting next endpoint for pull after error: Get \"http://localhost:5002/v2/\": dial tcp [::1]:5002: connect: connection refused"
DinD> time="2026-02-10T14:59:52.050738308Z" level=error msg="Handler for POST /v1.44/images/create returned error: Get \"http://localhost:5002/v2/\": dial tcp [::1]:5002: connect: connection refused"
DinD>
2026-02-10 14:59:53,360 - urllib3.connectionpool - DEBUG - http://localhost:None "POST /v1.52/containers/ea052c1c834bb34dc6180b6219ec0b2cf51505a5fba790f129e05904fc5bf2a4/stop HTTP/1.1" 204 0
2026-02-10 14:59:53,373 - urllib3.connectionpool - DEBUG - http://localhost:None "DELETE /v1.52/containers/ea052c1c834bb34dc6180b6219ec0b2cf51505a5fba790f129e05904fc5bf2a4?v=False&link=False&force=False HTTP/1.1" 204 0
2026-02-10 14:59:54,392 - urllib3.connectionpool - DEBUG - http://localhost:None "DELETE /v1.52/volumes/dind-volume HTTP/1.1" 204 0
2026-02-10 14:59:54,392 - torizon.tcbuilder.cli.build - INFO - Removing output directory 'images/torizon-docker-verdin-imx8mp-Tezi_7.3.0+build.18.DZYNE-ASP-S' due to build errors
2026-02-10 14:59:54,410 - root - ERROR - Error: Error trying to bundle Docker containers

I am running this in WSL on a windows 11 machine.

Steve

@ToradexAI
@eric.tx
I tried to connect using a secured connection, checking with "docker login", which I can do. But it still fails with "connection refused". Here is the output:

Creating Docker Container bundle…
NOTE: TCB no longer expands environment variables present in the compose file.

Starting DIND container
Connecting to Docker Daemon at "tcp://172.17.0.4:22376"
Attempting to log in to registry 'default' with username=
Fetching container image localhost:5002/gmbal3:runme in service gimbals-latest
Stopping DIND container
Error: container images download failed: 500 Server Error for https://172.17.0.4:22376/v1.44/images/create?tag=runme&fromImage=localhost%3A5002%2Fgmbal3&platform=linux%2Farm64: Internal Server Error ("Get \"http://localhost:5002/v2/\": dial tcp [::1]:5002: connect: connection refused")

Please tell me what is incorrect. Here is the command I am using with my username and pw changed:
torizoncore-builder bundle docker-compose.yml --login username pw --platform linux/arm64
Above is the output from that command.

I can see the issue now! The error shows that TorizonCore Builder is trying to connect to localhost:5002 from within the Docker-in-Docker (DIND) container, but localhost inside the container doesn’t resolve to your WSL host where your registry is running (1).

WSL Network Configuration Issue

In WSL, the DIND container can’t reach localhost:5002 because it’s isolated from your WSL host network. Here are the solutions:

Solution 1: Use Host Network Access from WSL

You need to use the correct registry parameter for your local registry. Instead of localhost:5002, use the WSL host IP or configure the registry to be accessible from the DIND container (1).
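One way to find an address that actually works is to probe the candidates before putting them in the compose file. This hypothetical helper (not part of TorizonCore Builder) simply tests TCP reachability; the candidate list and port are assumptions matching this thread's local registry on port 5002:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Candidate addresses the DIND container might need instead of "localhost";
    # 172.17.0.1 is the usual docker0 gateway, but yours may differ.
    for candidate in ("127.0.0.1", "172.17.0.1", "host.docker.internal"):
        status = "reachable" if can_connect(candidate, 5002) else "unreachable"
        print(f"{candidate}:5002 -> {status}")
```

Keep in mind that an address reachable from the WSL shell is not necessarily reachable from inside the DIND container, so the decisive test is the one run in the container's network context.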

Solution 2: Correct Login Syntax

The --login parameter in your command logs in to the default registry (Docker Hub), which is why the output shows "Attempting to log in to registry 'default'". According to the documentation, its syntax is (1):

--login USERNAME PASSWORD


For registry-specific login, use (1):

--login-to REGISTRY USERNAME PASSWORD


So your command should be:

torizoncore-builder bundle docker-compose.yml --login-to localhost:5002 username password --platform linux/arm64


Solution 3: Configure Insecure Registry

Since you’re using a local registry, you may need to configure it as an insecure registry (1):

torizoncore-builder bundle docker-compose.yml --platform linux/arm64 --dind-param="--insecure-registry=host.docker.internal:5002" --login-to host.docker.internal:5002 username password

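As background for the insecure-registry settings: Docker treats only the 127.0.0.0/8 loopback range as insecure by default, so a registry on any other address must be listed explicitly (in the daemon's insecure-registries configuration, or via --dind-param for the DIND daemon). An illustrative sketch of that check, with example addresses from this thread:

```python
import ipaddress

# Docker's built-in insecure range; registries outside it must appear in
# an insecure-registries list unless they serve valid TLS certificates.
DEFAULT_INSECURE = ipaddress.ip_network("127.0.0.0/8")

def needs_insecure_entry(addr: str) -> bool:
    """True if addr is NOT covered by Docker's default 127.0.0.0/8 range."""
    return ipaddress.ip_address(addr) not in DEFAULT_INSECURE

if __name__ == "__main__":
    for addr in ("127.0.0.1", "172.17.0.1", "10.40.100.5"):
        if needs_insecure_entry(addr):
            print(addr, "needs an insecure-registries entry")
        else:
            print(addr, "is insecure by default")
```

This is why a registry that works as localhost:5002 from the host can suddenly require extra configuration once it is addressed by any non-loopback IP.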

Solution 4: Use host.docker.internal

In WSL with Docker Desktop, replace localhost:5002 in your docker-compose.yml with host.docker.internal:5002 to allow the DIND container to reach your WSL host registry.
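Note that host.docker.internal is injected by Docker Desktop; on a plain Docker daemon inside WSL it typically does not resolve at all. Before pointing the compose file at a name, you can check what it resolves to with a short diagnostic sketch:

```python
import socket

def resolve(name: str) -> str:
    """Return the IPv4 address a name resolves to, or a note if it doesn't."""
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return "does not resolve here"

if __name__ == "__main__":
    for name in ("localhost", "host.docker.internal"):
        print(f"{name} -> {resolve(name)}")
```

If host.docker.internal does not resolve, fall back to a concrete IP address that the DIND container can reach.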

Solution 5: Alternative Approach

If the registry issues persist, you could try the integrated build approach instead (2). Create a tcbuild.yaml file that references your docker-compose.yml directly (2):

input:
  easy-installer:
    local: images/your-base-image.tar
output:
  easy-installer:
    local: images/torizon-docker-verdin-imx8mp-Tezi_7.3.0+build.18.DZYNE-ASP-S
    bundle:
      compose-file: docker-compose.yml


Then run:

torizoncore-builder build


The key issue is that localhost:5002 from within the DIND container doesn’t resolve to your WSL registry. Try using host.docker.internal:5002 or the appropriate WSL host IP address instead.

Best regards, Toradex AI Assistant.



@ToradexAI
@eric.tx
I tried host.docker.internal, but it doesn't know what that is. I have tried all of my IP addresses and they are all refused. I even made a new image by tagging my existing one without the "localhost:5002" prefix. The torizoncore-builder bundle command WILL NOT CONNECT.
PLEASE help. I need to ship a product and I can't make an installer image for production.

Steve

I got an update for the latest extension and builder. I am still getting an exception when trying to bundle my image. Here is some of the output, with the first exception:
Starting DIND container
2026-02-16 17:45:19,218 - urllib3.connectionpool - DEBUG - http://localhost:None "POST /v1.52/volumes/create HTTP/1.1" 201 182
2026-02-16 17:45:19,219 - torizon.tcbuilder.backend.bundle - DEBUG - Volume mapping for DinD: [{'Target': '/workdir/', 'Source': '/home/steve/tcbdir3', 'Type': 'bind', 'ReadOnly': False}, {'Target': '/var/lib/docker/', 'Source': 'dind-volume', 'Type': 'volume', 'ReadOnly': False}]
2026-02-16 17:45:19,219 - torizon.tcbuilder.backend.bundle - DEBUG - Environment variables for DinD: {'DOCKER_TLS_CERTDIR': '/workdir/certs_20260216174519_200065.tmp'}
2026-02-16 17:45:19,219 - torizon.tcbuilder.backend.bundle - DEBUG - Running DinD container: ports={'22376/tcp': 22376}, network=bridge
2026-02-16 17:45:19,220 - urllib3.connectionpool - DEBUG - http://localhost:None "POST /v1.52/containers/create?name=tcb-fetch-dind HTTP/1.1" 409 241
2026-02-16 17:45:19,220 - torizon.tcbuilder.backend.bundle - INFO - Stopping DIND container
2026-02-16 17:45:20,240 - urllib3.connectionpool - DEBUG - http://localhost:None "DELETE /v1.52/volumes/dind-volume HTTP/1.1" 409 120
2026-02-16 17:45:20,241 - root - CRITICAL - An unexpected Exception occurred. Please provide the following stack trace to
the Toradex TorizonCore support team:

2026-02-16 17:45:20,243 - root - ERROR - Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/docker/api/client.py", line 275, in _raise_for_status
response.raise_for_status()
File "/usr/local/lib/python3.9/dist-packages/requests/models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 409 Client Error: Conflict for url: http+docker://localhost/v1.52/containers/create?name=tcb-fetch-dind

Steve

Hey @EvetsMostel1 ,

Can you try running the task runner containers-cleanup? Based on the last HTTPError, it might be a duplicate-container conflict.

-Eric

@eric.tx
Well, when I run that task, I get timeouts:
Executing task: /home/steve/Gmbal3/.conf/remove-dangling-images.xsh 10.40.100.5:2375

Waiting for prune lock on host 10.40.100.5:2375
Pruning dangling images on 10.40.100.5:2375
Post "http://10.40.100.5:2375/v1.52/images/prune?filters={"dangling"%3A{"true"%3Atrue}}": dial tcp 10.40.100.5:2375: i/o timeout

  • Terminal will be reused by tasks, press any key to close it.

  • Executing task: sleep 1

  • Terminal will be reused by tasks, press any key to close it.

  • Executing task: if [ true == false ]; then DOCKER_HOST=10.40.100.5:2375 docker compose -p torizon down --remove-orphans ; fi

  • Terminal will be reused by tasks, press any key to close it.

  • Executing task: echo 'Containers cleanup finished.'

Containers cleanup finished.

  • Terminal will be reused by tasks, press any key to close it.

It does not say it was successful.
I am running the bundle command on Ubuntu 20.04 under WSL. This used to work using the PowerShell version, but we have had some network changes for security; however, I am not seeing anything blocking connections. This is the command that used to work:
torizoncore-builder bundle --force --platform arm64 --dind-param="--insecure-registry=:5002" docker-compose.yml
where my IP address is listed in Docker as an insecure registry. But this no longer works under xonsh calls.
I have tried all the insecure and secure combinations and looked at the debug output, but it looks like it just can't communicate to get past that.
Is there something specific about the IP address you use, i.e. the WSL IP address or the Windows IP address? By default, the localhost address (127.0.0.1) is always insecure, as it is NOT in the daemon.json file but shows up in docker info as 127.0.0.0/8.

Steve

@eric.tx @alex.tx
OK, here is the full debug of trying to bundle. I have tried various IP addresses, etc. This one seems the most consistent, but still no joy. Originally my image was just gimbals:runme, and that is what was in the docker-compose.yml file, but I changed it to 127.0.0.1:5002/gimbals:runme because it didn't like, or was confused by, the others. This is on my WSL Ubuntu 24.04 PC, and that is where the image is.

torizoncore-builder --verbose --log-level debug bundle --force --platform arm64 --dind-param="--insecure-registry=127.0.0.1:5002" docker-compose.yml --login-to 127.0.0.1:5002 myusername mypassword
2026-02-19 18:04:14,772 - torizon.tcbuilder.cli.bundle - DEBUG - Removing existing bundle directory 'bundle'
2026-02-19 18:04:14,775 - docker.utils.config - DEBUG - Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
2026-02-19 18:04:14,775 - docker.utils.config - DEBUG - No config file found
2026-02-19 18:04:14,775 - docker.utils.config - DEBUG - Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
2026-02-19 18:04:14,775 - docker.utils.config - DEBUG - No config file found
2026-02-19 18:04:14,785 - urllib3.connectionpool - DEBUG - http://localhost:None "GET /version HTTP/1.1" 200 853
2026-02-19 18:04:14,786 - urllib3.connectionpool - DEBUG - http://localhost:None "GET /v1.53/containers/json?limit=-1&all=0&size=0&trunc_cmd=0&filters=%7B%22name%22%3A+%5B%22tcb_1771524253%22%5D%7D HTTP/1.1" 200 1865
2026-02-19 18:04:14,787 - urllib3.connectionpool - DEBUG - http://localhost:None "GET /v1.53/containers/a3c93d390ab6d720fb71023ed7757302274b377fe8a8557f84ae270a7debc800/json HTTP/1.1" 200 None
2026-02-19 18:04:14,787 - torizon.tcbuilder.backend.common - DEBUG - Current container ID (found by container name): a3c93d390ab6d720fb71023ed7757302274b377fe8a8557f84ae270a7debc800
2026-02-19 18:04:14,788 - urllib3.connectionpool - DEBUG - http://localhost:None "GET /v1.53/containers/a3c93d390ab6d720fb71023ed7757302274b377fe8a8557f84ae270a7debc800/json HTTP/1.1" 200 None
2026-02-19 18:04:14,788 - torizon.tcbuilder.cli.bundle - INFO - Creating Docker Container bundle…
2026-02-19 18:04:14,788 - torizon.tcbuilder.backend.bundle - INFO - NOTE: TCB no longer expands environment variables present in the compose file.
2026-02-19 18:04:14,790 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'dict'>. Checking config values in ''
2026-02-19 18:04:14,790 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'dict'>. Checking config values in '/services'
2026-02-19 18:04:14,790 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'dict'>. Checking config values in '/services/gimbals-latest'
2026-02-19 18:04:14,790 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'dict'>. Checking config values in '/services/gimbals-latest/build'
2026-02-19 18:04:14,790 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/build/context'
2026-02-19 18:04:14,790 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/build/dockerfile'
2026-02-19 18:04:14,790 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/image'
2026-02-19 18:04:14,790 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'list'>. Checking config values in '/services/gimbals-latest/volumes'
2026-02-19 18:04:14,790 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/volumes'
2026-02-19 18:04:14,790 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/volumes'
2026-02-19 18:04:14,790 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/volumes'
2026-02-19 18:04:14,790 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'list'>. Checking config values in '/services/gimbals-latest/environment'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/environment'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/environment'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/environment'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/environment'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/environment'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/working_dir'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/command'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/network_mode'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'list'>. Checking config values in '/services/gimbals-latest/cap_add'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/cap_add'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/cap_add'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'list'>. Checking config values in '/services/gimbals-latest/devices'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/devices'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/devices'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/devices'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/devices'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/devices'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/devices'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/devices'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/devices'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/devices'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/devices'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/devices'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/devices'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/devices'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'list'>. Checking config values in '/services/gimbals-latest/ports'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/ports'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/gimbals-latest/restart'
2026-02-19 18:04:14,791 - torizon.tcbuilder.backend.bundle - DEBUG - Using DindManager
2026-02-19 18:04:14,792 - docker.utils.config - DEBUG - Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
2026-02-19 18:04:14,792 - docker.utils.config - DEBUG - No config file found
2026-02-19 18:04:14,792 - docker.utils.config - DEBUG - Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
2026-02-19 18:04:14,792 - docker.utils.config - DEBUG - No config file found
2026-02-19 18:04:14,796 - urllib3.connectionpool - DEBUG - http://localhost:None "GET /version HTTP/1.1" 200 853
2026-02-19 18:04:14,801 - urllib3.connectionpool - DEBUG - http://localhost:None "GET /v1.53/info HTTP/1.1" 200 None
2026-02-19 18:04:14,802 - docker.utils.config - DEBUG - Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
2026-02-19 18:04:14,802 - docker.utils.config - DEBUG - No config file found
2026-02-19 18:04:14,802 - docker.utils.config - DEBUG - Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
2026-02-19 18:04:14,802 - docker.utils.config - DEBUG - No config file found
2026-02-19 18:04:14,806 - urllib3.connectionpool - DEBUG - http://localhost:None "GET /version HTTP/1.1" 200 853
2026-02-19 18:04:14,807 - urllib3.connectionpool - DEBUG - http://localhost:None "GET /v1.53/containers/json?limit=-1&all=0&size=0&trunc_cmd=0&filters=%7B%22name%22%3A+%5B%22tcb_1771524253%22%5D%7D HTTP/1.1" 200 1865
2026-02-19 18:04:14,808 - urllib3.connectionpool - DEBUG - http://localhost:None "GET /v1.53/containers/a3c93d390ab6d720fb71023ed7757302274b377fe8a8557f84ae270a7debc800/json HTTP/1.1" 200 None
2026-02-19 18:04:14,808 - torizon.tcbuilder.backend.common - DEBUG - Current container ID (found by container name): a3c93d390ab6d720fb71023ed7757302274b377fe8a8557f84ae270a7debc800
2026-02-19 18:04:14,809 - urllib3.connectionpool - DEBUG - http://localhost:None "GET /v1.53/containers/a3c93d390ab6d720fb71023ed7757302274b377fe8a8557f84ae270a7debc800/json HTTP/1.1" 200 None
2026-02-19 18:04:14,809 - torizon.tcbuilder.backend.bundle - INFO -
Starting DIND container
2026-02-19 18:04:14,814 - urllib3.connectionpool - DEBUG - http://localhost:None "POST /v1.53/volumes/create HTTP/1.1" 201 182
2026-02-19 18:04:14,814 - torizon.tcbuilder.backend.bundle - DEBUG - Default platform: arm64
2026-02-19 18:04:14,814 - torizon.tcbuilder.backend.bundle - DEBUG - Volume mapping for DinD: [{‘Target’: ‘/workdir/’, ‘Source’: ‘/home/steve/tcbdir3’, ‘Type’: ‘bind’, ‘ReadOnly’: False}, {‘Target’: ‘/var/lib/docker/’, ‘Source’: ‘dind-volume’, ‘Type’: ‘volume’, ‘ReadOnly’: False}]
2026-02-19 18:04:14,814 - torizon.tcbuilder.backend.bundle - DEBUG - Environment variables for DinD: {‘DOCKER_TLS_CERTDIR’: ‘/workdir/certs_20260219180414_792050.tmp’, ‘DOCKER_DEFAULT_PLATFORM’: ‘arm64’}
2026-02-19 18:04:14,814 - torizon.tcbuilder.backend.bundle - DEBUG - Running DinD container: ports={‘22376/tcp’: 22376}, network=bridge
2026-02-19 18:04:14,865 - urllib3.connectionpool - DEBUG - http://localhost:None “POST /v1.53/containers/create?name=tcb-fetch-dind HTTP/1.1” 201 88
2026-02-19 18:04:14,867 - urllib3.connectionpool - DEBUG - http://localhost:None “GET /v1.53/containers/0596874ccab60e156cce1b60dfaf43789a20f7332eb1566e4d4e79e704b30561/json HTTP/1.1” 200 None
2026-02-19 18:04:15,051 - urllib3.connectionpool - DEBUG - http://localhost:None “POST /v1.53/containers/0596874ccab60e156cce1b60dfaf43789a20f7332eb1566e4d4e79e704b30561/start HTTP/1.1” 204 0
2026-02-19 18:04:15,052 - urllib3.connectionpool - DEBUG - http://localhost:None “GET /v1.53/containers/0596874ccab60e156cce1b60dfaf43789a20f7332eb1566e4d4e79e704b30561/json HTTP/1.1” 200 None
2026-02-19 18:04:18,056 - torizon.tcbuilder.backend.bundle - INFO - Connecting to Docker Daemon at “tcp://172.17.0.4:22376”
2026-02-19 18:04:18,056 - docker.utils.config - DEBUG - Trying paths: [‘/root/.docker/config.json’, ‘/root/.dockercfg’]
2026-02-19 18:04:18,056 - docker.utils.config - DEBUG - No config file found
2026-02-19 18:04:18,056 - docker.utils.config - DEBUG - Trying paths: [‘/root/.docker/config.json’, ‘/root/.dockercfg’]
2026-02-19 18:04:18,056 - docker.utils.config - DEBUG - No config file found
2026-02-19 18:04:18,057 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): 172.17.0.4:22376
2026-02-19 18:04:18,223 - urllib3.connectionpool - DEBUG - https://172.17.0.4:22376 “GET /version HTTP/1.1” 200 855
2026-02-19 18:04:18,224 - torizon.tcbuilder.backend.bundle - INFO - Attempting to log in to registry ‘127.0.0.1:5002’ with username=myusername
2026-02-19 18:04:18,224 - docker.utils.config - DEBUG - Trying paths: [‘/root/.docker/config.json’, ‘/root/.dockercfg’]
2026-02-19 18:04:18,224 - docker.utils.config - DEBUG - No config file found
2026-02-19 18:04:18,224 - docker.auth - DEBUG - Looking for auth entry for ‘127.0.0.1:5002’
2026-02-19 18:04:18,224 - docker.auth - DEBUG - No entry found
2026-02-19 18:04:18,226 - urllib3.connectionpool - DEBUG - https://172.17.0.4:22376 “POST /v1.44/auth HTTP/1.1” 500 102
2026-02-19 18:04:18,226 - torizon.tcbuilder.backend.bundle - INFO - Stopping DIND container
2026-02-19 18:04:18,228 - urllib3.connectionpool - DEBUG - http://localhost:None “GET /v1.53/containers/0596874ccab60e156cce1b60dfaf43789a20f7332eb1566e4d4e79e704b30561/logs?stderr=1&stdout=1&timestamps=0&follow=0&tail=all HTTP/1.1” 200 None
2026-02-19 18:04:18,229 - urllib3.connectionpool - DEBUG - http://localhost:None “GET /v1.53/containers/0596874ccab60e156cce1b60dfaf43789a20f7332eb1566e4d4e79e704b30561/json HTTP/1.1” 200 None
2026-02-19 18:04:18,229 - torizon.tcbuilder.backend.bundle - DEBUG - DinD container logs: Certificate request self-signature ok
DinD> subject=CN = docker:dind server
DinD> /workdir/certs_20260219180414_792050.tmp/server/cert.pem: OK
DinD> Certificate request self-signature ok
DinD> subject=CN = docker:dind client
DinD> /workdir/certs_20260219180414_792050.tmp/client/cert.pem: OK
DinD> cat: can’t open ‘/proc/net/ip6_tables_names’: No such file or directory
DinD> cat: can’t open ‘/proc/net/arp_tables_names’: No such file or directory
DinD> iptables v1.8.10 (nf_tables)
DinD> time=“2026-02-19T18:04:16.713065996Z” level=info msg=“Starting up”
DinD> time=“2026-02-19T18:04:16.714722744Z” level=info msg=“containerd not running, starting managed containerd”
DinD> time=“2026-02-19T18:04:16.716888158Z” level=info msg=“started new containerd process” address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=66
DinD> time=“2026-02-19T18:04:16.866649373Z” level=info msg=“starting containerd” revision=7c3aca7a610df76212171d200ca3811ff6096eb8 version=v1.7.13
DinD> time=“2026-02-19T18:04:16.878151318Z” level=info msg=“loading plugin "io.containerd.event.v1.exchange"…” type=io.containerd.event.v1
DinD> time=“2026-02-19T18:04:16.878187501Z” level=info msg=“loading plugin "io.containerd.internal.v1.opt"…” type=io.containerd.internal.v1
DinD> time=“2026-02-19T18:04:16.878739415Z” level=info msg=“loading plugin "io.containerd.warning.v1.deprecations"…” type=io.containerd.warning.v1
DinD> time=“2026-02-19T18:04:16.878762459Z” level=info msg=“loading plugin "io.containerd.snapshotter.v1.blockfile"…” type=io.containerd.snapshotter.v1
DinD> time=“2026-02-19T18:04:16.878875880Z” level=info msg=“skip loading plugin "io.containerd.snapshotter.v1.blockfile"…” error=“no scratch file generator: skip plugin” type=io.containerd.snapshotter.v1
DinD> time=“2026-02-19T18:04:16.878893984Z” level=info msg=“loading plugin "io.containerd.snapshotter.v1.devmapper"…” type=io.containerd.snapshotter.v1
DinD> time=“2026-02-19T18:04:16.878900410Z” level=warning msg=“failed to load plugin io.containerd.snapshotter.v1.devmapper” error=“devmapper not configured”
DinD> time=“2026-02-19T18:04:16.878906969Z” level=info msg=“loading plugin "io.containerd.snapshotter.v1.native"…” type=io.containerd.snapshotter.v1
DinD> time=“2026-02-19T18:04:16.879104522Z” level=info msg=“loading plugin "io.containerd.snapshotter.v1.overlayfs"…” type=io.containerd.snapshotter.v1
DinD> time=“2026-02-19T18:04:16.880224488Z” level=info msg=“loading plugin "io.containerd.snapshotter.v1.aufs"…” type=io.containerd.snapshotter.v1
DinD> time=“2026-02-19T18:04:16.886879912Z” level=info msg=“skip loading plugin "io.containerd.snapshotter.v1.aufs"…” error=“aufs is not supported (modprobe aufs failed: exit status 1 "ip: can’t find device ‘aufs’\nmodprobe: can’t change directory to ‘/lib/modules’: No such file or directory\n"): skip plugin” type=io.containerd.snapshotter.v1
DinD> time=“2026-02-19T18:04:16.886907838Z” level=info msg=“loading plugin "io.containerd.snapshotter.v1.zfs"…” type=io.containerd.snapshotter.v1
DinD> time=“2026-02-19T18:04:16.886979376Z” level=info msg=“skip loading plugin "io.containerd.snapshotter.v1.zfs"…” error=“path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin” type=io.containerd.snapshotter.v1
DinD> time=“2026-02-19T18:04:16.886994689Z” level=info msg=“loading plugin "io.containerd.content.v1.content"…” type=io.containerd.content.v1
DinD> time=“2026-02-19T18:04:16.887219038Z” level=info msg=“loading plugin "io.containerd.metadata.v1.bolt"…” type=io.containerd.metadata.v1
DinD> time=“2026-02-19T18:04:16.887331617Z” level=warning msg=“could not use snapshotter devmapper in metadata plugin” error=“devmapper not configured”
DinD> time=“2026-02-19T18:04:16.887346151Z” level=info msg=“metadata content store policy set” policy=shared
DinD> time=“2026-02-19T18:04:16.902666654Z” level=info msg=“loading plugin "io.containerd.gc.v1.scheduler"…” type=io.containerd.gc.v1
DinD> time=“2026-02-19T18:04:16.902713669Z” level=info msg=“loading plugin "io.containerd.differ.v1.walking"…” type=io.containerd.differ.v1
DinD> time=“2026-02-19T18:04:16.902723974Z” level=info msg=“loading plugin "io.containerd.lease.v1.manager"…” type=io.containerd.lease.v1
DinD> time=“2026-02-19T18:04:16.902735129Z” level=info msg=“loading plugin "io.containerd.streaming.v1.manager"…” type=io.containerd.streaming.v1
DinD> time=“2026-02-19T18:04:16.902752898Z” level=info msg=“loading plugin "io.containerd.runtime.v1.linux"…” type=io.containerd.runtime.v1
DinD> time=“2026-02-19T18:04:16.904435062Z” level=info msg=“loading plugin "io.containerd.monitor.v1.cgroups"…” type=io.containerd.monitor.v1
DinD> time=“2026-02-19T18:04:16.904975474Z” level=info msg=“loading plugin "io.containerd.runtime.v2.task"…” type=io.containerd.runtime.v2
DinD> time=“2026-02-19T18:04:16.906627647Z” level=info msg=“loading plugin "io.containerd.runtime.v2.shim"…” type=io.containerd.runtime.v2
DinD> time=“2026-02-19T18:04:16.906666979Z” level=info msg=“loading plugin "io.containerd.sandbox.store.v1.local"…” type=io.containerd.sandbox.store.v1
DinD> time=“2026-02-19T18:04:16.906676864Z” level=info msg=“loading plugin "io.containerd.sandbox.controller.v1.local"…” type=io.containerd.sandbox.controller.v1
DinD> time=“2026-02-19T18:04:16.906684999Z” level=info msg=“loading plugin "io.containerd.service.v1.containers-service"…” type=io.containerd.service.v1
DinD> time=“2026-02-19T18:04:16.906692368Z” level=info msg=“loading plugin "io.containerd.service.v1.content-service"…” type=io.containerd.service.v1
DinD> time=“2026-02-19T18:04:16.906699225Z” level=info msg=“loading plugin "io.containerd.service.v1.diff-service"…” type=io.containerd.service.v1
DinD> time=“2026-02-19T18:04:16.906707220Z” level=info msg=“loading plugin "io.containerd.service.v1.images-service"…” type=io.containerd.service.v1
DinD> time=“2026-02-19T18:04:16.906727442Z” level=info msg=“loading plugin "io.containerd.service.v1.introspection-service"…” type=io.containerd.service.v1
DinD> time=“2026-02-19T18:04:16.906736236Z” level=info msg=“loading plugin "io.containerd.service.v1.namespaces-service"…” type=io.containerd.service.v1
DinD> time=“2026-02-19T18:04:16.906744910Z” level=info msg=“loading plugin "io.containerd.service.v1.snapshots-service"…” type=io.containerd.service.v1
DinD> time=“2026-02-19T18:04:16.906763167Z” level=info msg=“loading plugin "io.containerd.service.v1.tasks-service"…” type=io.containerd.service.v1
DinD> time=“2026-02-19T18:04:16.906775337Z” level=info msg=“loading plugin "io.containerd.grpc.v1.containers"…” type=io.containerd.grpc.v1
DinD> time=“2026-02-19T18:04:16.906782733Z” level=info msg=“loading plugin "io.containerd.grpc.v1.content"…” type=io.containerd.grpc.v1
DinD> time=“2026-02-19T18:04:16.906789440Z” level=info msg=“loading plugin "io.containerd.grpc.v1.diff"…” type=io.containerd.grpc.v1
DinD> time=“2026-02-19T18:04:16.906799890Z” level=info msg=“loading plugin "io.containerd.grpc.v1.events"…” type=io.containerd.grpc.v1
DinD> time=“2026-02-19T18:04:16.906807569Z” level=info msg=“loading plugin "io.containerd.grpc.v1.images"…” type=io.containerd.grpc.v1
DinD> time=“2026-02-19T18:04:16.906826901Z” level=info msg=“loading plugin "io.containerd.grpc.v1.introspection"…” type=io.containerd.grpc.v1
DinD> time=“2026-02-19T18:04:16.906845868Z” level=info msg=“loading plugin "io.containerd.grpc.v1.leases"…” type=io.containerd.grpc.v1
DinD> time=“2026-02-19T18:04:16.906854888Z” level=info msg=“loading plugin "io.containerd.grpc.v1.namespaces"…” type=io.containerd.grpc.v1
DinD> time=“2026-02-19T18:04:16.906861758Z” level=info msg=“loading plugin "io.containerd.grpc.v1.sandbox-controllers"…” type=io.containerd.grpc.v1
DinD> time=“2026-02-19T18:04:16.906876693Z” level=info msg=“loading plugin "io.containerd.grpc.v1.sandboxes"…” type=io.containerd.grpc.v1
DinD> time=“2026-02-19T18:04:16.906908935Z” level=info msg=“loading plugin "io.containerd.grpc.v1.snapshots"…” type=io.containerd.grpc.v1
DinD> time=“2026-02-19T18:04:16.906917685Z” level=info msg=“loading plugin "io.containerd.grpc.v1.streaming"…” type=io.containerd.grpc.v1
DinD> time=“2026-02-19T18:04:16.906924272Z” level=info msg=“loading plugin "io.containerd.grpc.v1.tasks"…” type=io.containerd.grpc.v1
DinD> time=“2026-02-19T18:04:16.906945505Z” level=info msg=“loading plugin "io.containerd.transfer.v1.local"…” type=io.containerd.transfer.v1
DinD> time=“2026-02-19T18:04:16.906958601Z” level=info msg=“loading plugin "io.containerd.grpc.v1.transfer"…” type=io.containerd.grpc.v1
DinD> time=“2026-02-19T18:04:16.906976718Z” level=info msg=“loading plugin "io.containerd.grpc.v1.version"…” type=io.containerd.grpc.v1
DinD> time=“2026-02-19T18:04:16.906983141Z” level=info msg=“loading plugin "io.containerd.internal.v1.restart"…” type=io.containerd.internal.v1
DinD> time=“2026-02-19T18:04:16.908020748Z” level=info msg=“loading plugin "io.containerd.tracing.processor.v1.otlp"…” type=io.containerd.tracing.processor.v1
DinD> time=“2026-02-19T18:04:16.908059023Z” level=info msg=“skip loading plugin "io.containerd.tracing.processor.v1.otlp"…” error=“no OpenTelemetry endpoint: skip plugin” type=io.containerd.tracing.processor.v1
DinD> time=“2026-02-19T18:04:16.908068407Z” level=info msg=“loading plugin "io.containerd.internal.v1.tracing"…” type=io.containerd.internal.v1
DinD> time=“2026-02-19T18:04:16.908075465Z” level=info msg=“skipping tracing processor initialization (no tracing plugin)” error=“no OpenTelemetry endpoint: skip plugin”
DinD> time=“2026-02-19T18:04:16.908820497Z” level=info msg=“loading plugin "io.containerd.grpc.v1.healthcheck"…” type=io.containerd.grpc.v1
DinD> time=“2026-02-19T18:04:16.908850914Z” level=info msg=“loading plugin "io.containerd.nri.v1.nri"…” type=io.containerd.nri.v1
DinD> time=“2026-02-19T18:04:16.908858585Z” level=info msg=“NRI interface is disabled by configuration.”
DinD> time=“2026-02-19T18:04:16.910015005Z” level=info msg=serving… address=/var/run/docker/containerd/containerd-debug.sock
DinD> time=“2026-02-19T18:04:16.911097272Z” level=info msg=serving… address=/var/run/docker/containerd/containerd.sock.ttrpc
DinD> time=“2026-02-19T18:04:16.911184494Z” level=info msg=serving… address=/var/run/docker/containerd/containerd.sock
DinD> time=“2026-02-19T18:04:16.911209702Z” level=info msg=“containerd successfully booted in 0.046725s”
DinD> time=“2026-02-19T18:04:17.806568237Z” level=info msg=“[graphdriver] trying configured driver: overlay2”
DinD> time=“2026-02-19T18:04:17.840463702Z” level=info msg=“Loading containers: start.”
DinD> time=“2026-02-19T18:04:18.030276700Z” level=info msg=“Loading containers: done.”
DinD> time=“2026-02-19T18:04:18.055450995Z” level=info msg=“Docker daemon” commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
DinD> time=“2026-02-19T18:04:18.056558573Z” level=info msg=“Daemon has completed initialization”
DinD> time=“2026-02-19T18:04:18.207110971Z” level=info msg=“API listen on [::]:22376”
DinD> time=“2026-02-19T18:04:18.207122032Z” level=info msg=“API listen on /var/run/docker.sock”
DinD> time=“2026-02-19T18:04:18.207135112Z” level=info msg=“API listen on [::]:2376”
DinD> time=“2026-02-19T18:04:18.225670929Z” level=info msg=“Error logging in to endpoint, trying next endpoint” error=“Get "https://127.0.0.1:5002/v2/\”: dial tcp 127.0.0.1:5002: connect: connection refused"
DinD> time=“2026-02-19T18:04:18.225922640Z” level=info msg=“Error logging in to endpoint, trying next endpoint” error=“Get "http://127.0.0.1:5002/v2/\”: dial tcp 127.0.0.1:5002: connect: connection refused"
DinD> time=“2026-02-19T18:04:18.225985605Z” level=error msg=“Handler for POST /v1.44/auth returned error: Get "http://127.0.0.1:5002/v2/\”: dial tcp 127.0.0.1:5002: connect: connection refused"
DinD>
2026-02-19 18:04:19,523 - urllib3.connectionpool - DEBUG - http://localhost:None “POST /v1.53/containers/0596874ccab60e156cce1b60dfaf43789a20f7332eb1566e4d4e79e704b30561/stop HTTP/1.1” 204 0
2026-02-19 18:04:19,540 - urllib3.connectionpool - DEBUG - http://localhost:None “DELETE /v1.53/containers/0596874ccab60e156cce1b60dfaf43789a20f7332eb1566e4d4e79e704b30561?v=False&link=False&force=False HTTP/1.1” 204 0
2026-02-19 18:04:20,558 - urllib3.connectionpool - DEBUG - http://localhost:None “DELETE /v1.53/volumes/dind-volume HTTP/1.1” 204 0
2026-02-19 18:04:20,559 - root - ERROR - Error: container images download failed: 500 Server Error for https://172.17.0.4:22376/v1.44/auth: Internal Server Error (“Get “http://127.0.0.1:5002/v2/”: dial tcp 127.0.0.1:5002: connect: connection refused”)
2026-02-19 18:04:20,561 - root - DEBUG - Traceback (most recent call last):
File “/usr/local/lib/python3.9/dist-packages/docker/api/client.py”, line 275, in _raise_for_status
response.raise_for_status()
File “/usr/local/lib/python3.9/dist-packages/requests/models.py”, line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://172.17.0.4:22376/v1.44/auth

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File “/builder/tcbuilder/backend/bundle.py”, line 669, in download_containers_by_compose_file
login_to_registries(dind_client, logins)
File “/builder/tcbuilder/backend/bundle.py”, line 558, in login_to_registries
client.login(username, password, registry=registry)
File “/usr/local/lib/python3.9/dist-packages/docker/client.py”, line 197, in login
return self.api.login(*args, **kwargs)
File “/usr/local/lib/python3.9/dist-packages/docker/api/daemon.py”, line 152, in login
return self._result(response, json=True)
File “/usr/local/lib/python3.9/dist-packages/docker/api/client.py”, line 281, in _result
self._raise_for_status(response)
File “/usr/local/lib/python3.9/dist-packages/docker/api/client.py”, line 277, in _raise_for_status
raise create_api_error_from_http_exception(e) from e
File “/usr/local/lib/python3.9/dist-packages/docker/errors.py”, line 39, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation) from e
docker.errors.APIError: 500 Server Error for https://172.17.0.4:22376/v1.44/auth: Internal Server Error (“Get “http://127.0.0.1:5002/v2/”: dial tcp 127.0.0.1:5002: connect: connection refused”)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File “/builder/torizoncore-builder”, line 232, in <module>
mainargs.func(mainargs)
File “/builder/tcbuilder/cli/bundle.py”, line 132, in do_bundle
bundle(bundle_dir=args.bundle_directory,
File “/builder/tcbuilder/cli/bundle.py”, line 83, in bundle
bundle_be.download_containers_by_compose_file(
File “/builder/tcbuilder/backend/bundle.py”, line 698, in download_containers_by_compose_file
raise OperationFailureError(
tcbuilder.errors.OperationFailureError: Error: container images download failed: 500 Server Error for https://172.17.0.4:22376/v1.44/auth: Internal Server Error (“Get “http://127.0.0.1:5002/v2/”: dial tcp 127.0.0.1:5002: connect: connection refused”)

@eric.tx @alex.tx @ToradexAI
There is something odd about how the container is trying to access Docker. I can log in just fine using http://127.0.0.1:5002 and the same credentials I pass to the bundle command. I can also log in using exactly what is written up there: http://127.0.0.1:5002/v2/. Can you give me some idea of why this is happening? Usually, dial tcp means port 443; Docker won't answer on that port if that is the case.

Steve

I can see the core issue from your debug logs! The DIND container is trying to connect to 127.0.0.1:5002 from within its own network namespace, but 127.0.0.1 inside the container does not refer to your WSL host, where the registry is running.

WSL Network Isolation Problem

The key error in your logs is this line (1):

DinD> time="2026-02-19T18:04:18.225670929Z" level=info msg="Error logging in to endpoint, trying next endpoint" error="Get "https://127.0.0.1:5002/v2/\": dial tcp 127.0.0.1:5002: connect: connection refused"

This confirms that the DIND container cannot reach your WSL registry at 127.0.0.1:5002.

Solutions for WSL Environment

Solution 1: Use WSL Host IP Address

Instead of 127.0.0.1:5002, you need to find your WSL host’s actual IP address. In WSL, run:

ip route show | grep default
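A small sketch of capturing that address in a variable. The sample route line below mimics typical WSL output and is only illustrative; on a real system you would feed in the live output of ip route show default instead:

```shell
# Parse the gateway address out of a default-route line.
# On a real WSL system, replace the sample with:
#   ROUTE=$(ip route show default)
ROUTE="default via 172.26.48.1 dev eth0"
HOST_IP=$(printf '%s\n' "$ROUTE" | awk '{print $3; exit}')
echo "$HOST_IP"
```

The third whitespace-separated field of a `default via <ip> dev <iface>` line is the gateway address, which in a typical WSL setup is the host side of the virtual network.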

Then use that IP address in your command (1):

torizoncore-builder bundle docker-compose.yml --platform linux/arm64 --login-to <WSL_HOST_IP>:5002 myusername mypassword --dind-param="--insecure-registry=<WSL_HOST_IP>:5002"

Solution 2: Configure Registry for Docker Bridge Network

The DIND container runs on Docker’s bridge network. You need to make your registry accessible from that network. Check your registry configuration to ensure it binds to 0.0.0.0:5002 instead of just 127.0.0.1:5002 (2).

Solution 3: Use Docker Host Gateway

In some WSL configurations, you can use the Docker host gateway (2):

torizoncore-builder bundle docker-compose.yml --platform linux/arm64 --login-to host.docker.internal:5002 myusername mypassword --dind-param="--insecure-registry=host.docker.internal:5002"

Solution 4: Network Mode Configuration

Try running TorizonCore Builder with host network mode by setting the appropriate DIND parameters (1):

torizoncore-builder bundle docker-compose.yml --platform linux/arm64 --dind-param="--network=host" --dind-param="--insecure-registry=127.0.0.1:5002" --login-to 127.0.0.1:5002 myusername mypassword

Solution 5: Alternative Registry Setup

Consider setting up your registry to be accessible from the Docker bridge network. In your registry configuration, ensure it binds to all interfaces (2):

# In your registry docker-compose or configuration
ports:
  - "0.0.0.0:5002:5000"  # Instead of "127.0.0.1:5002:5000"
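A minimal sketch of such a registry service as a complete compose file (illustrative only; registry:2 is the standard Docker registry image, and the mapping publishes container port 5000 on host port 5002 on all interfaces):

```yaml
# docker-compose.yml (sketch): a local registry reachable from other
# containers on the bridge network, not only from 127.0.0.1 on the host
services:
  registry:
    image: registry:2
    restart: always
    ports:
      - "0.0.0.0:5002:5000"
```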

Recommended Approach

  1. First, find your WSL host IP address using ip route show | grep default
  2. Update your docker-compose.yml to use <WSL_HOST_IP>:5002/gimbals:runme instead of 127.0.0.1:5002/gimbals:runme
  3. Use the corrected command with the WSL host IP

The fundamental issue is network isolation between the DIND container and your WSL host. The 127.0.0.1 address inside the container refers to the container itself, not your WSL host where the registry is running (3).

Best regards, Toradex AI Assistant.


Did you like my answer? :+1: :-1:

If I answered your question, please tick the “Solution” checkbox.

For follow-up questions to me, please use @ToradexAI in your post.

I tried this, but I also had to create a tag in Docker so that the image name would show up. However, Docker didn't like it; it says there is no manifest for it.

Not Found (“manifest for 172.26.48.1:5002/gimbals:runme not found: manifest unknown: manifest unknown”)

If I try to leave the compose file as the original name of gimbals:runme, it tries to go out to docker.io.
I don’t know how to do solution 5.
Solution 4 got an exception: Max retries exceeded with url
Solution 2: I don’t know how to do that.
Solution 1 is the closest, but still not working: it is able to log in, but it can't get to the image.

@ToradexAI @eric.tx @alex.tx
Is there a way that the bundle command could just take an export tar file of an image? Then there would be no need to login to docker and all this fiddling would go away. That is how I create the image in the first place. I create it with the build process, then run it through VSC, and then commit that container to an image, then I export that to a .tar file.

Steve

Unfortunately, TorizonCore Builder's bundle command doesn't directly support importing tar files; it's designed to work with Docker registries or Docker Compose files that reference registry images (1). However, there are several alternative approaches that can solve your WSL networking issues.

Alternative Solutions

Solution 1: Use Docker Save/Load Workflow

Since you already export your images as tar files, you can use the Docker save/load approach documented by Toradex (2):

  1. Load your tar file into the local Docker daemon (2):
docker load -i my-dockerfile.tar

  2. Tag the image appropriately:
docker tag gimbals:runme localhost:5000/gimbals:runme

  3. Start a simple local registry (2):
docker run -d -p 5000:5000 --name registry registry:2

  4. Push to the local registry:
docker push localhost:5000/gimbals:runme

  5. Update your docker-compose.yml to reference localhost:5000/gimbals:runme

  6. Run the bundle command (1):

torizoncore-builder bundle docker-compose.yml --platform linux/arm64 --dind-param="--insecure-registry=localhost:5000"
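The compose-file update ("Update your docker-compose.yml to reference localhost:5000/gimbals:runme") is a one-line change to the image reference. A sketch, with the service name gimbals-latest taken from the debug logs above and everything else illustrative:

```yaml
# docker-compose.yml (sketch): only the image line changes,
# pointing the service at the local registry instead of the bare tag
services:
  gimbals-latest:
    image: localhost:5000/gimbals:runme   # was: gimbals:runme
    restart: always
```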

Solution 2: Use the Build Command Approach

The most reliable solution for your production needs is to use the integrated build command approach (3)(4). This bypasses many of the networking issues:

  1. Create a tcbuild.yaml file (3):
input:
  easy-installer:
    local: images/your-base-torizon-image.tar
output:
  easy-installer:
    local: images/torizon-docker-verdin-imx8mp-custom
    bundle:
      compose-file: docker-compose.yml


  2. Run the build command (3):
torizoncore-builder build

This approach downloads container images implicitly and may work around the DIND connection issues you’re experiencing (3).

Solution 3: Pre-bundled Directory Approach

You can also use the pre-bundled directory method (4):

  1. First, bundle the containers separately (4):
torizoncore-builder bundle docker-compose.yml --bundle-directory bundle --platform linux/arm64

  2. Then reference the bundle in your tcbuild.yaml (4):
output:
  easy-installer:
    local: path/to/your/custom-image-folder
    bundle:
      dir: bundle/


Registry Configuration Fix

For your current registry setup, the issue is that your registry needs to be accessible from the Docker bridge network (2). Configure your registry to bind to all interfaces (2):

docker run -d -p 0.0.0.0:5002:5000 --name registry registry:2

Instead of binding only to 127.0.0.1:5002.

Recommended Production Approach

For production shipping, I recommend using Solution 2 (the build command approach) as it’s specifically designed for production programming scenarios (3)(5). This method:

  • Handles container bundling automatically (3)
  • Avoids complex networking issues (3)
  • Is the recommended approach for production programming (5)
  • Works reliably in CI/CD environments (3)

The build command approach should resolve your networking issues while providing the production-ready installer image you need for shipping.

Best regards, Toradex AI Assistant.



@ToradexAI
docker run -d -p 0.0.0.0:5002:5000 --name registry registry:2
This command fails even though the container and image have been removed. After the newer registry image is downloaded, I get this error:
docker: Error response from daemon: Conflict. The container name “/registry” is already in use by container “046b40a84a504d93ad9850acaf3cfafd2e24e9ae7e13b32ffa99748da4046b35”. You have to remove (or rename) that container to be able to reuse that name.