Force docker-compose to rebuild a container

Thanks for the conversation! I’ve reached my limit of replies in this thread.

If you still have questions, feel free to start a new one. I’m happy to help!


Good effort @ToradexAI, I think we are close. I need a break too, so let’s pick this up tomorrow.

This topic on Stack Overflow describes the same problem that I have, although the suggested solution of using 192.168.5.2 does not work for me.
go - Docker Push to Private Repo instantly returns “Connection Refused” - Stack Overflow

One last comment for tonight. If I run docker info then I can see that 127.0.0.0/8 is already listed as an insecure registry (which covers 127.0.0.1), so I’m not sure why I can’t push to it.

 containerd version: bcc810d6b9066471b0b6fa75f557a15a1cbf31bb
 runc version: v1.2.4-0-g6c52b3f
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
 Kernel Version: 5.15.167.4-microsoft-standard-WSL2
 Operating System: Torizon Development Environment for WSL 2
 OSType: linux
 Architecture: x86_64
 CPUs: 16
 Total Memory: 23.08GiB
 Name: USSCZW0218
 ID: 75535dd8-c058-4e26-a4d5-e3bcd60e2197
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  ::1/128
  127.0.0.0/8
 Live Restore Enabled: false

Hi @jeremias.tx, the AI chatbot was very close to helping me find a solution here before it hit its maximum number of replies. Can you or one of your colleagues have a look at this?

I am using a few secondary containers in my project. I have successfully added a task to the build process in Visual Studio Code that detects whether the source code for a secondary container has changed and then rebuilds the container image. The step that fails is pushing the container image to the local registry; it produces this error message:

Get "http://127.0.0.1:5002/v2/": dial tcp 127.0.0.1:5002: connect: connection refused
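A "connection refused" on 127.0.0.1 generally means nothing is listening at that address in the environment where the command runs. A quick probe (a sketch; /v2/ is the standard Docker Registry HTTP API root, and the port comes from the error above) run from the same environment as the failing push can confirm this:

```shell
# Probe the registry endpoint from the same environment that runs the
# failing push. /v2/ is the Docker Registry HTTP API base endpoint.
if curl -fsS --max-time 3 http://127.0.0.1:5002/v2/ >/dev/null 2>&1; then
  probe_result="registry answering on 127.0.0.1:5002"
else
  probe_result="no listener on 127.0.0.1:5002 in this environment"
fi
echo "$probe_result"
```

If the probe succeeds in the WSL terminal but fails from the task, the task is running somewhere the registry is not reachable.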

The step that fails is pushing the container image to the local registry; it produces this error message:

Which container image is failing to be pushed? The primary one or the secondary ones?

What does your full compose file look like?

If you manually try to run the same push command that the VSCode task is attempting to do, do you get the same error?

Did this push error not happen before you added the task for your secondary container images?

Best Regards,
Jeremias

Hi @jeremias.tx, thanks for replying. I think I discovered something important while collecting data to answer your questions. But first I’ll answer some of them:

It’s the secondary containers that fail to push from the VSCode task. I’ve been using a set of secondary containers for quite a long time now, and everything works well as long as the images for the secondary containers have not yet been deployed to the Colibri module. When I build my project in VSCode, docker-compose successfully builds and deploys the secondary containers as specified in my docker-compose.yaml, along with the primary container.

The problem that I’m trying to solve is that if I edit the source files for a secondary container, docker-compose ignores the change and does not rebuild the image; it only rebuilds the image if I manually delete it from the Colibri module first. The task that I added to VSCode is intended to solve that by detecting that the image for the secondary container is out of date and rebuilding it.
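For reference, Compose itself can be told to force the rebuild; a minimal sketch, assuming the service has a build: section in docker-compose.yaml (the service name web is a placeholder, and older installs use the hyphenated docker-compose command instead of the docker compose plugin):

```shell
# Rebuild a single service image even if one already exists:
docker compose build web

# Or rebuild all images as part of bringing the stack up:
docker compose up --build -d

# Add --no-cache to ignore cached layers entirely:
docker compose build --no-cache web
```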

If I take the new task that I added to VSCode and simply execute it in a WSL terminal window, then it does work: it builds the image and pushes it into the local registry. That’s new information today, which came from responding to your questions.

This made me realize that the VSCode task is executing in a different environment than the WSL terminal; perhaps the VSCode task is executing inside a container, like the cross-toolchain-arm container. For example, if I run docker image ls in the WSL terminal, I see a different set of images than when I add that command to the VSCode task. In the WSL terminal the list of images includes my project, the cross compiler, hello-world, the registry, torizoncore-builder and so on. When the VSCode task is executing, the list includes only my project’s images.

More telling is the response to docker ps. In the WSL terminal I can see that the registry is running:
CONTAINER ID   IMAGE        COMMAND                  CREATED        STATUS      PORTS                                         NAMES
93d68e122a8a   registry:2   "/entrypoint.sh /etc…"   4 months ago   Up 6 days   0.0.0.0:5002->5000/tcp, [::]:5002->5000/tcp   registry

But when I add docker ps to the VSCode task then I see this, showing that no containers are running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

This tells me that the VSCode task is running in an environment where it cannot see the registry, and that’s why the push fails. Is there a way to expose the local registry to the VSCode task? There must be a way to do this, as the standard Torizon VSCode tasks do interact with the registry.

The way my VSCode task is implemented might be important. What I have done is add a simple task which just executes a bash script. That way all the complexity can be added to the script file rather than to tasks.json. Is my script executing in a shell which can’t see the registry?

{
    "label": "configure-web",
    "type": "shell",
    "command": "source ./web/configure-web.sh"
}

@jeremias.tx I just realized why the responses to the docker commands look different in my VSCode task compared to the WSL terminal. My VSCode task is using docker on the Colibri module, so commands like docker ps return the containers running on the Colibri module and not the containers running on my development PC. The registry is running in a container on my development PC. The WSL terminal is using the local docker on the development PC, which is the correct one, and so the script works there.

This might be related to the DOCKER_HOST environment variable. I’m looking at the other VSCode tasks provided by Toradex to see if I can figure out how they work in this regard.

My VSCode task is using docker on the Colibri module, so commands like docker ps return the containers running on the Colibri module and not the containers running on my development PC

That would make sense based on your observations.

This might be related to the DOCKER_HOST environment variable. I’m looking at the other VSCode tasks provided by Toradex to see if I can figure out how they work in this regard.

DOCKER_HOST should be the culprit here. This variable is the only thing I can think of that would affect which environment your docker commands are executed on. Somewhere in your task flow DOCKER_HOST must be getting set and not cleared by the time it gets to the push task.
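To illustrate the mechanism (a minimal sketch; the remote address below is made up for the example): the docker CLI sends every command to whatever daemon DOCKER_HOST points at, and unsetting the variable falls back to the local daemon's default socket.

```shell
# Simulate a task that inherited a remote DOCKER_HOST
# (the address is purely illustrative):
DOCKER_HOST="tcp://colibri-imx8.local:2375"
before="${DOCKER_HOST:-<unset>}"

# Clearing it makes subsequent docker commands use the local daemon
# (the default unix:///var/run/docker.sock):
unset DOCKER_HOST
after="${DOCKER_HOST:-<unset>}"

echo "before: $before"
echo "after:  $after"
```

Running `echo "${DOCKER_HOST:-<unset>}"` inside the failing task would show which daemon it is actually talking to.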

Now the confusing part is that, from what I can see, all the push-container-torizon* tasks should set DOCKER_HOST=: vscode-torizon-templates/assets/tasks/common.json at dev · torizon/vscode-torizon-templates · GitHub

This would lead all docker commands to be executed on the local machine (i.e. your development PC). But that doesn’t seem to be the case in your setup.

It’s the secondary containers that fail to push from the VSCode task

So you said it’s specifically the secondary containers that fail to push? Are these being pushed differently than the primary container? Maybe the difference is the key here.

Best Regards,
Jeremias

@jeremias.tx I have changed my strategy quite significantly. Instead of building the container from a bash script and trying to push the result into the local registry, what I am doing now is adding three new VSCode tasks, which I made by copying and then modifying these existing tasks:

“build-container-torizon-debug-arm”
“push-container-torizon-debug-arm”
“pull-container-torizon-debug-arm”

That does work. It means I need 3 VSCode tasks for each of my secondary containers instead of one script to do it all, but it does work.

That does work. It means I need 3 VSCode tasks for each of my secondary containers instead of one script to do it all, but it does work.

Is everything regarding this topic resolved then?

Best Regards,
Jeremias

@jeremias.tx I am still testing this but I think I have what I need for now.

I was also able to build and push the image from my script by clearing the DOCKER_HOST setting before running the script, like this:

{
    "label": "configure-web",
    "type": "shell",
    "command": "DOCKER_HOST=",
    "args": [
        "source",
        "./web/configure-web.sh"
    ],
},
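An equivalent approach (an untested sketch; the internals of configure-web.sh are not shown in this thread) is to clear the variable at the top of the script itself, which keeps the tasks.json entry unchanged:

```shell
#!/bin/sh
# Hypothetical first lines of configure-web.sh: make sure docker talks
# to the local daemon on the development PC, not the Colibri module.
unset DOCKER_HOST
marker="DOCKER_HOST cleared, using local daemon"
echo "$marker"
# ... the rest of the script can now build and push to 127.0.0.1:5002
```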

Now that this is working both ways (using a script and using a set of VSCode tasks), I need to figure out which way is better for me.

Thanks for your help, I think I can take it from here.

That’s good to hear. Do let us know if you have further questions related to this topic.

Best Regards,
Jeremias