Error about container already in use while bundling

I am getting an error during my TorizonCore build in VS Code. I am executing the tcb-build task, which was working as of yesterday. Today, after rebooting my PC, I am no longer able to build: I get an error about a DinD container name that is already in use.

The error says:

    raise cls(e, response=response, explanation=explanation) from e
docker.errors.APIError: 409 Client Error for http+docker://localhost/v1.48/containers/create?name=tcb-fetch-dind: Conflict ("Conflict. The container name "/tcb-fetch-dind" is already in use by container "a37f51d17af720790c9ba5a98f673d114532d5bcd83134a34b65b73e48b301d3". You have to remove (or rename) that container to be able to reuse that name.")

However, when I run docker container ls or docker ps -a, I cannot see any container with that name or that hash.

Executing task: DOCKER_HOST= source ./.conf/tcb-env-setup.sh -s /home/jzylkin/repos/chipotle_dbpt/storage -t 3.13.0 

Warning: If you intend to use torizoncore-builder as a server (listening to ports), then you should pass extra parameters to "docker run" (via the -- switch).
Setting up TorizonCore Builder with version 3.13.0.

2025-07-08 16:45:52,448 - torizon.tcbuilder.cli.build - INFO - Building image as per configuration file 'tcbuild.yaml'...
2025-07-08 16:45:52,472 - torizon.tcbuilder.cli.build - INFO - 
=>> Handling input section
2025-07-08 16:45:53,099 - torizon.tcbuilder.backend.images - INFO - Unpacking Toradex Easy Installer image.
2025-07-08 16:45:53,099 - torizon.tcbuilder.backend.images - DEBUG - Running tar command: tar -xf /workdir/torizon-docker-verdin-imx8mp-Tezi_7.3.0-devel-202506+build.9.tar -C /storage/tmpicu8xh5i
2025-07-08 16:45:53,243 - torizon.tcbuilder.backend.images - INFO - Unpacking TorizonCore Toradex Easy Installer image.
2025-07-08 16:45:53,243 - torizon.tcbuilder.backend.images - DEBUG - Running tar command: tar --xattrs '--xattrs-include=*' -xhf /storage/tezi/torizon-docker-verdin-imx8mp.ota.tar.zst -C /storage/sysroot --use-compress-program 'zstd -dc'
2025-07-08 16:45:55,923 - torizon.tcbuilder.backend.images - INFO - Importing OSTree revision ca92207f703e96336e78f889a461f124b80a2d09fa9819097b4e129eb1e2bcff from local repository...
2025-07-08 16:45:55,925 - torizon.tcbuilder.backend.ostree - DEBUG - Pulling from local repository /storage/sysroot/ostree/repo commit checksum ca92207f703e96336e78f889a461f124b80a2d09fa9819097b4e129eb1e2bcff
1283 metadata, 9748 content objects imported; 648.3 MB content written
2025-07-08 16:45:59,833 - torizon.tcbuilder.backend.ostree - DEBUG - Pulling from local repository /storage/sysroot/ostree/repo commit checksum ca92207f703e96336e78f889a461f124b80a2d09fa9819097b4e129eb1e2bcff
0 metadata, 0 content objects imported; 0 bytes content written
2025-07-08 16:45:59,876 - torizon.tcbuilder.backend.images - INFO - Unpacked OSTree from Toradex Easy Installer image:
2025-07-08 16:45:59,876 - torizon.tcbuilder.backend.images - INFO -   Commit checksum: ca92207f703e96336e78f889a461f124b80a2d09fa9819097b4e129eb1e2bcff
2025-07-08 16:45:59,876 - torizon.tcbuilder.backend.images - INFO -   TorizonCore Version: 7.3.0-devel-202506+build.9
2025-07-08 16:45:59,876 - torizon.tcbuilder.cli.build - INFO - 
=>> Handling customization section
2025-07-08 16:45:59,876 - torizon.tcbuilder.cli.build - INFO - 
=> Handling device-tree subsection
2025-07-08 16:45:59,876 - torizon.tcbuilder.cli.build - INFO - 
=> Selecting custom device-tree 'imx8mp-verdin-non-wifi-ivy-flattened.dts'
2025-07-08 16:45:59,876 - torizon.tcbuilder.cli.dt - DEBUG - dt_apply: include directories: 
2025-07-08 16:45:59,901 - torizon.tcbuilder.backend.dt - INFO - 'imx8mp-verdin-non-wifi-ivy-flattened.dts' compiles successfully.
2025-07-08 16:45:59,921 - torizon.tcbuilder.cli.dt - INFO - warning: removing currently applied device tree overlays
2025-07-08 16:45:59,921 - torizon.tcbuilder.cli.dt - INFO - Device tree imx8mp-verdin-non-wifi-ivy-flattened.dtb successfully applied.
2025-07-08 16:45:59,921 - torizon.tcbuilder.cli.dto - DEBUG - Removing all overlays
2025-07-08 16:45:59,947 - torizon.tcbuilder.cli.build - INFO - 
=>> Handling output section
2025-07-08 16:45:59,957 - torizon.tcbuilder.cli.union - DEBUG - union: subject='first-attempt' body='None'
2025-07-08 16:45:59,959 - torizon.tcbuilder.cli.union - INFO - Applying changes from STORAGE/dt.
2025-07-08 16:45:59,963 - torizon.tcbuilder.backend.union - DEBUG - Processing whiteouts.
2025-07-08 16:46:00,011 - torizon.tcbuilder.cli.union - INFO - Applying changes from WORKDIR/isolated-changes.
2025-07-08 16:46:00,014 - torizon.tcbuilder.backend.union - DEBUG - Processing whiteouts.
2025-07-08 16:46:00,015 - torizon.tcbuilder.backend.union - DEBUG - Removing all contents from /usr/etc/rpm-postinsts.
2025-07-08 16:46:00,062 - torizon.tcbuilder.backend.union - DEBUG - Transaction committed. 102521 bytes 21 objects written.
2025-07-08 16:46:00,065 - torizon.tcbuilder.cli.union - INFO - Commit 19f33dd03f655bd8d71b4941b809309352e0ffc68fcdfecbce41b1903badf373 has been generated for changes and is ready to be deployed.
2025-07-08 16:46:00,070 - torizon.tcbuilder.backend.deploy - INFO - Pulling OSTree with ref default from local archive repository...
2025-07-08 16:46:00,070 - torizon.tcbuilder.backend.deploy - INFO -   Commit checksum: 19f33dd03f655bd8d71b4941b809309352e0ffc68fcdfecbce41b1903badf373
2025-07-08 16:46:00,070 - torizon.tcbuilder.backend.deploy - INFO -   TorizonCore Version: 7.3.0-devel-202506+build.9-tcbuilder.20250708164600
2025-07-08 16:46:00,070 - torizon.tcbuilder.backend.deploy - INFO -   Default kernel arguments: quiet logo.nologo vt.global_cursor_default=0 plymouth.ignore-serial-consoles splash fbcon=map:3

2025-07-08 16:46:00,070 - torizon.tcbuilder.backend.ostree - DEBUG - Pulling from local repository /storage/ostree-archive commit checksum 19f33dd03f655bd8d71b4941b809309352e0ffc68fcdfecbce41b1903badf373
1289 metadata, 9761 content objects imported; 648.4 MB content written
2025-07-08 16:46:01,654 - torizon.tcbuilder.backend.deploy - INFO - Pulling done.
2025-07-08 16:46:01,654 - torizon.tcbuilder.backend.deploy - INFO - Deploying OSTree with checksum 19f33dd03f655bd8d71b4941b809309352e0ffc68fcdfecbce41b1903badf373
2025-07-08 16:46:01,655 - torizon.tcbuilder.backend.deploy - DEBUG - Deploying revision 19f33dd03f655bd8d71b4941b809309352e0ffc68fcdfecbce41b1903badf373
2025-07-08 16:46:02,150 - torizon.tcbuilder.backend.deploy - INFO - Bootloader found in unpacked image: U-Boot
2025-07-08 16:46:02,150 - torizon.tcbuilder.backend.deploy - DEBUG - Write deployment for revision 19f33dd03f655bd8d71b4941b809309352e0ffc68fcdfecbce41b1903badf373
2025-07-08 16:46:02,274 - torizon.tcbuilder.backend.deploy - INFO - Deploying done.
2025-07-08 16:46:02,274 - torizon.tcbuilder.backend.deploy - INFO - Copy files not under OSTree control from original deployment.
2025-07-08 16:46:02,280 - torizon.tcbuilder.backend.deploy - INFO - Packing rootfs...
2025-07-08 16:46:02,283 - torizon.tcbuilder.backend.deploy - DEBUG - Running tar command: tar --xattrs '--xattrs-include=*' -cf /workdir/chipotle-TCB-CUSTOM/torizon-docker-verdin-imx8mp.ota.tar -S -C /deploy -p .
2025-07-08 16:46:03,128 - torizon.tcbuilder.backend.deploy - DEBUG - Running compress command: zstd --rm /workdir/chipotle-TCB-CUSTOM/torizon-docker-verdin-imx8mp.ota.tar
2025-07-08 16:46:05,137 - torizon.tcbuilder.backend.deploy - INFO - Packing rootfs done.
2025-07-08 16:46:05,137 - torizon.tezi.image - DEBUG - No 'filelist' present in image configuration.
2025-07-08 16:46:05,138 - torizon.tezi.image - DEBUG - Reading json_data[key], value 'Torizon OS'
2025-07-08 16:46:05,138 - torizon.tezi.image - DEBUG - Writing json_data[key] = 'Torizon OS'
2025-07-08 16:46:05,138 - torizon.tezi.image - DEBUG - Reading json_data[key], value '7.3.0-devel-202506+build.9'
2025-07-08 16:46:05,138 - torizon.tezi.image - DEBUG - Writing json_data[key] = '7.3.0-devel-202506+build.9.modified'
2025-07-08 16:46:05,138 - torizon.tezi.image - DEBUG - Writing json_data[key] = '2025-07-08'
2025-07-08 16:46:05,138 - torizon.tezi.image - DEBUG - Writing json_data[key] = 'True'
2025-07-08 16:46:05,138 - torizon.tezi.image - DEBUG - Reading json_data[key], value '7.3.0-devel-202506+build.9.modified'
2025-07-08 16:46:05,138 - torizon.tcbuilder.cli.build - INFO - Bundling images to directory bundle_20250708164605_138633.tmp
2025-07-08 16:46:05,139 - docker.utils.config - DEBUG - Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
2025-07-08 16:46:05,139 - docker.utils.config - DEBUG - No config file found
2025-07-08 16:46:05,139 - docker.utils.config - DEBUG - Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
2025-07-08 16:46:05,139 - docker.utils.config - DEBUG - No config file found
2025-07-08 16:46:05,146 - urllib3.connectionpool - DEBUG - http://localhost:None "GET /version HTTP/1.1" 200 852
2025-07-08 16:46:05,149 - urllib3.connectionpool - DEBUG - http://localhost:None "GET /v1.48/containers/json?limit=-1&all=0&size=0&trunc_cmd=0&filters=%7B%22name%22%3A+%5B%22tcb_1751993152%22%5D%7D HTTP/1.1" 200 1697
2025-07-08 16:46:05,150 - urllib3.connectionpool - DEBUG - http://localhost:None "GET /v1.48/containers/da29f806c759fe78e0676c6ce06a96d0f3ee6539f6b49aed43b21ac816caef82/json HTTP/1.1" 200 None
2025-07-08 16:46:05,151 - torizon.tcbuilder.backend.common - DEBUG - Current container ID (found by container name): da29f806c759fe78e0676c6ce06a96d0f3ee6539f6b49aed43b21ac816caef82
2025-07-08 16:46:05,152 - urllib3.connectionpool - DEBUG - http://localhost:None "GET /v1.48/containers/da29f806c759fe78e0676c6ce06a96d0f3ee6539f6b49aed43b21ac816caef82/json HTTP/1.1" 200 None
2025-07-08 16:46:05,152 - torizon.tcbuilder.backend.bundle - INFO - NOTE: TCB no longer expands environment variables present in the compose file.
2025-07-08 16:46:05,153 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'dict'>. Checking config values in ''
2025-07-08 16:46:05,153 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'dict'>. Checking config values in '/services'
2025-07-08 16:46:05,153 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'dict'>. Checking config values in '/services/chipotle'
2025-07-08 16:46:05,153 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/chipotle/image'
2025-07-08 16:46:05,153 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/chipotle/network_mode'
2025-07-08 16:46:05,153 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/chipotle/restart'
2025-07-08 16:46:05,153 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'list'>. Checking config values in '/services/chipotle/volumes'
2025-07-08 16:46:05,154 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/chipotle/volumes'
2025-07-08 16:46:05,154 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'dict'>. Checking config values in '/services/faad-wes'
2025-07-08 16:46:05,154 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/faad-wes/image'
2025-07-08 16:46:05,154 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/faad-wes/network_mode'
2025-07-08 16:46:05,154 - torizon.tcbuilder.backend.bundle - DEBUG - recursive_yaml_value_check: Call with obj <class 'str'>. Checking config values in '/services/faad-wes/restart'
2025-07-08 16:46:05,154 - torizon.tcbuilder.backend.bundle - DEBUG - Using DindManager
2025-07-08 16:46:05,154 - docker.utils.config - DEBUG - Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
2025-07-08 16:46:05,155 - docker.utils.config - DEBUG - No config file found
2025-07-08 16:46:05,155 - docker.utils.config - DEBUG - Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
2025-07-08 16:46:05,155 - docker.utils.config - DEBUG - No config file found
2025-07-08 16:46:05,162 - urllib3.connectionpool - DEBUG - http://localhost:None "GET /version HTTP/1.1" 200 852
2025-07-08 16:46:05,169 - urllib3.connectionpool - DEBUG - http://localhost:None "GET /v1.48/info HTTP/1.1" 200 None
2025-07-08 16:46:05,170 - docker.utils.config - DEBUG - Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
2025-07-08 16:46:05,170 - docker.utils.config - DEBUG - No config file found
2025-07-08 16:46:05,170 - docker.utils.config - DEBUG - Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
2025-07-08 16:46:05,170 - docker.utils.config - DEBUG - No config file found
2025-07-08 16:46:05,177 - urllib3.connectionpool - DEBUG - http://localhost:None "GET /version HTTP/1.1" 200 852
2025-07-08 16:46:05,179 - urllib3.connectionpool - DEBUG - http://localhost:None "GET /v1.48/containers/json?limit=-1&all=0&size=0&trunc_cmd=0&filters=%7B%22name%22%3A+%5B%22tcb_1751993152%22%5D%7D HTTP/1.1" 200 1697
2025-07-08 16:46:05,180 - urllib3.connectionpool - DEBUG - http://localhost:None "GET /v1.48/containers/da29f806c759fe78e0676c6ce06a96d0f3ee6539f6b49aed43b21ac816caef82/json HTTP/1.1" 200 None
2025-07-08 16:46:05,180 - torizon.tcbuilder.backend.common - DEBUG - Current container ID (found by container name): da29f806c759fe78e0676c6ce06a96d0f3ee6539f6b49aed43b21ac816caef82
2025-07-08 16:46:05,181 - urllib3.connectionpool - DEBUG - http://localhost:None "GET /v1.48/containers/da29f806c759fe78e0676c6ce06a96d0f3ee6539f6b49aed43b21ac816caef82/json HTTP/1.1" 200 None
2025-07-08 16:46:05,182 - torizon.tcbuilder.backend.bundle - INFO - 
Starting DIND container
2025-07-08 16:46:05,183 - urllib3.connectionpool - DEBUG - http://localhost:None "POST /v1.48/volumes/create HTTP/1.1" 201 182
2025-07-08 16:46:05,183 - torizon.tcbuilder.backend.bundle - DEBUG - Default platform: linux/arm/v7
2025-07-08 16:46:05,183 - torizon.tcbuilder.backend.bundle - DEBUG - Volume mapping for DinD: [{'Target': '/workdir/', 'Source': '/home/jzylkin/repos/chipotle_dbpt', 'Type': 'bind', 'ReadOnly': False}, {'Target': '/var/lib/docker/', 'Source': 'dind-volume', 'Type': 'volume', 'ReadOnly': False}]
2025-07-08 16:46:05,183 - torizon.tcbuilder.backend.bundle - DEBUG - Environment variables for DinD: {'DOCKER_TLS_CERTDIR': '/workdir/certs_20250708164605_154605.tmp', 'DOCKER_DEFAULT_PLATFORM': 'linux/arm/v7'}
2025-07-08 16:46:05,183 - torizon.tcbuilder.backend.bundle - DEBUG - Running DinD container: ports={'22376/tcp': 22376}, network=bridge
2025-07-08 16:46:05,185 - urllib3.connectionpool - DEBUG - http://localhost:None "POST /v1.48/containers/create?name=tcb-fetch-dind HTTP/1.1" 409 241
2025-07-08 16:46:05,185 - torizon.tcbuilder.backend.bundle - INFO - Stopping DIND container
2025-07-08 16:46:06,188 - urllib3.connectionpool - DEBUG - http://localhost:None "DELETE /v1.48/volumes/dind-volume HTTP/1.1" 409 120
2025-07-08 16:46:06,189 - torizon.tcbuilder.cli.build - INFO - Removing output directory 'chipotle-TCB-CUSTOM' due to build errors
2025-07-08 16:46:06,210 - root - ERROR - Error: Error trying to bundle Docker containers
2025-07-08 16:46:06,211 - root - DEBUG - Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/docker/api/client.py", line 275, in _raise_for_status
    response.raise_for_status()
  File "/usr/local/lib/python3.9/dist-packages/requests/models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 409 Client Error: Conflict for url: http+docker://localhost/v1.48/containers/create?name=tcb-fetch-dind

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/builder/tcbuilder/backend/bundle.py", line 636, in download_containers_by_compose_file
    manager.start(network, default_platform=platform,
  File "/builder/tcbuilder/backend/bundle.py", line 282, in start
    self.dind_container = self.host_client.containers.run(
  File "/usr/local/lib/python3.9/dist-packages/docker/models/containers.py", line 876, in run
    container = self.create(image=image, command=command,
  File "/usr/local/lib/python3.9/dist-packages/docker/models/containers.py", line 935, in create
    resp = self.client.api.create_container(**create_kwargs)
  File "/usr/local/lib/python3.9/dist-packages/docker/api/container.py", line 440, in create_container
    return self.create_container_from_config(config, name, platform)
  File "/usr/local/lib/python3.9/dist-packages/docker/api/container.py", line 457, in create_container_from_config
    return self._result(res, True)
  File "/usr/local/lib/python3.9/dist-packages/docker/api/client.py", line 281, in _result
    self._raise_for_status(response)
  File "/usr/local/lib/python3.9/dist-packages/docker/api/client.py", line 277, in _raise_for_status
    raise create_api_error_from_http_exception(e) from e
  File "/usr/local/lib/python3.9/dist-packages/docker/errors.py", line 39, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation) from e
docker.errors.APIError: 409 Client Error for http+docker://localhost/v1.48/containers/create?name=tcb-fetch-dind: Conflict ("Conflict. The container name "/tcb-fetch-dind" is already in use by container "a37f51d17af720790c9ba5a98f673d114532d5bcd83134a34b65b73e48b301d3". You have to remove (or rename) that container to be able to reuse that name.")

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/builder/tcbuilder/backend/bundle.py", line 675, in download_containers_by_compose_file
    raise OperationFailureError(
tcbuilder.errors.OperationFailureError: Error: container images download failed: 409 Client Error for http+docker://localhost/v1.48/containers/create?name=tcb-fetch-dind: Conflict ("Conflict. The container name "/tcb-fetch-dind" is already in use by container "a37f51d17af720790c9ba5a98f673d114532d5bcd83134a34b65b73e48b301d3". You have to remove (or rename) that container to be able to reuse that name.")

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/docker/api/client.py", line 275, in _raise_for_status
    response.raise_for_status()
  File "/usr/local/lib/python3.9/dist-packages/requests/models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 409 Client Error: Conflict for url: http+docker://localhost/v1.48/volumes/dind-volume

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/builder/tcbuilder/cli/build.py", line 459, in handle_bundle_common
    download_containers_by_compose_file(**download_params)
  File "/builder/tcbuilder/backend/bundle.py", line 679, in download_containers_by_compose_file
    manager.stop()
  File "/builder/tcbuilder/backend/bundle.py", line 346, in stop
    self.dind_volume.remove()
  File "/usr/local/lib/python3.9/dist-packages/docker/models/volumes.py", line 25, in remove
    return self.client.api.remove_volume(self.id, force=force)
  File "/usr/local/lib/python3.9/dist-packages/docker/api/volume.py", line 163, in remove_volume
    self._raise_for_status(resp)
  File "/usr/local/lib/python3.9/dist-packages/docker/api/client.py", line 277, in _raise_for_status
    raise create_api_error_from_http_exception(e) from e
  File "/usr/local/lib/python3.9/dist-packages/docker/errors.py", line 39, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation) from e
docker.errors.APIError: 409 Client Error for http+docker://localhost/v1.48/volumes/dind-volume: Conflict ("remove dind-volume: volume is in use - [a37f51d17af720790c9ba5a98f673d114532d5bcd83134a34b65b73e48b301d3]")

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/builder/torizoncore-builder", line 231, in <module>
    mainargs.func(mainargs)
  File "/builder/tcbuilder/cli/build.py", line 695, in do_build
    raise exc
  File "/builder/tcbuilder/cli/build.py", line 668, in do_build
    build(args.config_fname, args.storage_directory,
  File "/builder/tcbuilder/cli/build.py", line 654, in build
    raise exc
  File "/builder/tcbuilder/cli/build.py", line 639, in build
    handle_output_section(
  File "/builder/tcbuilder/cli/build.py", line 309, in handle_output_section
    handle_easy_installer_output(tezi_props, storage_dir, union_params)
  File "/builder/tcbuilder/cli/build.py", line 387, in handle_easy_installer_output
    handle_bundle_output(
  File "/builder/tcbuilder/cli/build.py", line 476, in handle_bundle_output
    bundle_dir, is_tmp_dir = handle_bundle_common(
  File "/builder/tcbuilder/cli/build.py", line 464, in handle_bundle_common
    raise TorizonCoreBuilderError(
tcbuilder.errors.TorizonCoreBuilderError: Error trying to bundle Docker containers

Deploying commit ref: default

 *  The terminal process "/usr/bin/bash '-c', 'DOCKER_HOST= source ./.conf/tcb-env-setup.sh -s /home/jzylkin/repos/chipotle_dbpt/storage -t 3.13.0'" terminated with exit code: 255. 
 *  Terminal will be reused by tasks, press any key to close it. 

Hi, I’m the Toradex AI Assistant.

I might make mistakes. Our support team will step in if needed. If I answered your question, please tick the “Solution” checkbox. For follow-up questions to me, please use @ToradexAI in your post.



Hello! Thank you for providing detailed output regarding your TorizonCore Builder issue.

Your error message indicates a conflict with a Docker container named /tcb-fetch-dind, which TorizonCore Builder uses for Docker-in-Docker (DinD) operations. This conflict can appear even when the container does not show up in docker ps -a. It typically happens because of orphaned containers or inconsistent Docker contexts, especially after restarting your PC or switching between VS Code and a terminal.

Here are clear steps to resolve this issue:

1. Force Remove the Stuck Container

Even if it is not listed, try to force-remove it by name:

docker rm -f tcb-fetch-dind

Or, if you have the container ID (from the error: a37f51d17af720790c9ba5a98f673d114532d5bcd83134a34b65b73e48b301d3):

docker rm -f a37f51d17af720790c9ba5a98f673d114532d5bcd83134a34b65b73e48b301d3

2. Remove the DinD Volume

There may also be a lingering volume (dind-volume) in use. Try:

docker volume rm dind-volume

If you get a “volume is in use” error, first make sure no container is using it; removing the container in step 1 releases it.
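The two cleanup steps above can be combined into one small script. This is only a sketch based on the names shown in the log (tcb-fetch-dind and dind-volume); the order matters, because the volume cannot be deleted while the container still references it.

```shell
#!/usr/bin/env bash
# Sketch: remove the leftover TorizonCore Builder DinD container and its volume.
# Names are taken from the error log above; adjust them if yours differ.
cleanup_tcb_dind() {
    # Force-remove the DinD container by name; ignore the error if it is already gone.
    docker rm -f tcb-fetch-dind 2>/dev/null || echo "no tcb-fetch-dind container found"
    # The volume can only be removed once no container references it.
    docker volume rm dind-volume 2>/dev/null || echo "dind-volume not found (or still in use)"
}
cleanup_tcb_dind
```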

3. Verify Docker Context in VS Code

VS Code can use its own Docker context, which might differ from your terminal:

  • In the VS Code command palette, search for “Docker: Contexts” and ensure it is set to “default” (unless you know you’re using a specific context).

  • You can also check the DOCKER_HOST variable. In your VS Code terminal, run:

    echo $DOCKER_HOST
    

    If it is set, try unsetting it:

    unset DOCKER_HOST
    

Or, simply close and reopen VS Code after cleaning up containers above.
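To see which daemon a given terminal is actually talking to, you can compare the active context and the resolved endpoint directly (docker context requires a reasonably recent Docker CLI):

```shell
# List configured Docker contexts; the active one is marked with '*'.
docker context ls || true
# Print the endpoint the current context resolves to.
docker context inspect --format '{{.Endpoints.docker.Host}}' || true
# DOCKER_HOST, if set, overrides the selected context entirely.
echo "DOCKER_HOST=${DOCKER_HOST:-<unset>}"
```

Running this in both the VS Code integrated terminal and a plain system terminal quickly shows whether the two are pointed at different daemons.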

4. Restart Docker Daemon

After cleanup, restart Docker for good measure:

For Linux:

sudo systemctl restart docker

For Windows/Mac, restart Docker Desktop.

5. Prune Docker System (Optional and Destructive)

If the problem persists, prune all unused containers, volumes, networks, and images. Note that this removes everything unused on the system, not just TorizonCore Builder artifacts, so use it with care:

docker system prune -a --volumes

6. Alternative: Run From Command Line

If VS Code continues to have problems, try running your TorizonCore Builder operations from your system terminal or command prompt instead of the VS Code integrated terminal. Sometimes context mismatches only occur in the VS Code terminal.


Once you have completed these steps, you should be able to build again; they address the typical situations where a Docker-context mismatch or an orphaned container causes this kind of conflict.

Please let me know if you need further help or if this resolves the issue.
Best regards,
Toradex AI Assistant.

Reference: Docker container name conflict in TorizonCore Builder and VS Code context

Hey @jzylkin2,

Were you able to solve this? It sounds like it may have been a one-off issue. Maybe duplicate VS Code instances?

-Eric


Eric Singer
Toradex

I have tried all these steps and I still get the error. There is no volume or container with this name on my system.

Yes, I resolved the problem. The key was to prefix the docker commands with DOCKER_HOST= so that they talked to the local Docker daemon and I could actually see the local volumes. There was a stale volume that needed deleting.
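For anyone who hits the same thing later: the stale objects lived on the local daemon, so prefixing each command with an empty DOCKER_HOST= makes the CLI fall back to the local endpoint instead of any remote one set in the environment. A sketch of that cleanup, using the names from the log above:

```shell
# An empty DOCKER_HOST= makes the CLI treat the variable as unset for this
# one command, so it uses the local daemon regardless of the environment.
DOCKER_HOST= docker ps -a --filter name=tcb-fetch-dind || true   # find the stale container
DOCKER_HOST= docker volume ls --filter name=dind-volume || true  # find the stale volume
DOCKER_HOST= docker rm -f tcb-fetch-dind || true                 # remove the container first...
DOCKER_HOST= docker volume rm dind-volume || true                # ...then the volume it held open
```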