An external SSH connection to a Debian container in Torizon?

How do I open an external SSH connection to a Debian container? I am trying to use VS Code in remote connection mode; however, it requires libraries for its remote server component that are not included in the base Torizon image.

Is there not a way to direct SSH on the Toradex module/Docker to a specific container?

Yes, I know about docker exec -it /bin/bash, but that only works after you’ve already established an SSH connection to the base image.

Greetings @mmccullotn,

First of all are you using our VSCode extension for Torizon? (Visual Studio Code Extension for Torizon)

Or is this something separate/different?

As for SSH-ing directly into a container: there are various methods, but in general this isn’t a recommended thing to do as far as I can tell. You’d need to install an SSH server in the container you want to access, run the container with its SSH port mapped to the host network so that it’s accessible from the outside, and then target that port, which should map into the container.

But I would like to understand what your goal/objective here is. In general, isn’t it simpler to just SSH into the TorizonCore host OS and then docker exec into whatever container you want? Or is there something about your use case that makes this difficult?

Best Regards,
Jeremias

Our embedded model on a competitor’s module is based on Debian with GCC compilation against several libraries including, but not limited to, cairo, freetype, flex, bison, jansson, etc, and some Python.

We’ve been using VS Code’s remote connection mode to code and compile directly on the competitor’s module. With a dual or quad-core it works just fine. But, we’ve run into supply chain problems and rising prices with the competitor. Therefore, we are looking to support Colibri as an alternative.

I’ve ported our Makefile-based compilation system to the Colibri iMX6D (and completed a full compile, though the I/O code needs modification); however, I can’t get VS Code to connect in remote mode to your module. I need to be able to deploy the VS Code remote server, and it fails on TorizonCore because the required libraries are missing.

So… my Debian container is named “model17xx”. How do I configure SSH on the module to allow VS Code to connect to that specific container? Can you provide any more specific hints? Do I just install/configure an SSH server in the container and use a different port than TorizonCore?

The VS Code extension for Torizon is rather overkill and a lot of extra work at this point, as the firmware development described above only uses 55% of the filesystem and compiles from a clean state to a release archive in only one minute on the Colibri iMX6D.

Thanks

I tried the SSH server install in the Debian container. I can connect to the server only from within the container (I set the port to 2222). When I try to connect externally, it appears TorizonCore is blocking the port, as I get “Connection refused”. How do I get TorizonCore to pass port 2222 through?

Debian container:

root@f2224752ae44:/# netstat -lpnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:2222            0.0.0.0:*               LISTEN      24/sshd: /usr/sbin/
tcp6       0      0 :::2222                 :::*                    LISTEN      24/sshd: /usr/sbin/

TorizonCore:

colibri-imx6-10784400:~$ netstat -lpnt
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:5355            0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      -
tcp6       0      0 :::8840                 :::*                    LISTEN      -
tcp6       0      0 :::5355                 :::*                    LISTEN      -
tcp6       0      0 :::111                  :::*                    LISTEN      -
tcp6       0      0 :::22                   :::*                    LISTEN      -

Again, I don’t recommend this workflow you’re going for. But if you insist, here’s a simple example of SSH in a container. First of all here’s the Dockerfile for the container:

FROM ubuntu:18.04

RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:root' | chpasswd
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config

ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

You can build it with docker build -t eg_sshd . (note the trailing dot).

Then run it with docker run -d -P --name test_sshd eg_sshd. Please note that the -P flag maps the container’s internal port 22 to a random available port on the host. You can find the port mapping with docker port test_sshd 22. In my case I got 0.0.0.0:32771, so internal port 22 is mapped to external port 32771.

Once the container is up and running you can go on another system and just run ssh -p 32771 root@<ip address of module>. As you can see in the Dockerfile the password is “root”. After that you should be dropped into a shell inside the container via SSH.
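Put together, the whole sequence looks like this (the host port 32771 is just what Docker happened to pick in my run; yours will differ):

```shell
# Build the image from the Dockerfile above
docker build -t eg_sshd .

# Run it detached; -P maps the EXPOSEd port 22 to a random free host port
docker run -d -P --name test_sshd eg_sshd

# Ask Docker which host port was chosen
docker port test_sshd 22
# e.g. 0.0.0.0:32771

# From another machine, SSH to that host port (password "root", per the Dockerfile)
ssh -p 32771 root@<ip address of module>
```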

For a more involved example you can see the reference I used here: GitHub - linuxserver/docker-openssh-server

Best Regards,
Jeremias

Actually, you can use the -p arg of docker run to map a fixed host port to the container port so that it is not random, e.g.

-p 2222:22/tcp

Then the SSH to the container is via port 2222.

Thanks. It is working for me now. I’ve got VS Code working in remote connection mode to the container. It is working ok, but it is a little RAM-starved with only 512M. 1G would be quite comfortable.

EDIT: After I stopped the demo containers for portainer and kiosk, the RAM usage was fine and VS Code is working well.

One of the confusing aspects of using these Docker containers is that I don’t know which is best to work out of to get ALL the features I need to support my app. For example, I need access to the framebuffer device for my code that uses cairo. Which Debian container is therefore best suited for this?

Do I start with a container and then just try to load everything my app needs into it alone? Can I add a container with Weston support and then talk to it from my customized Debian container? Can I add Qt support later as a container and talk to it from my customized Debian container? I don’t understand the cross-container scope when working on a fully-featured embedded app. It seems like I have to know exactly what features my app will need to exploit before I select a base container, or I end up at a dead end.

Here’s a specific example I just ran into…

I used your instructions with the torizon/debian:2-bullseye container to get the SSH capability setup. I installed a bunch of stuff needed to support my development. Here is the Dockerfile and I tagged it base-model17xx-dev:

FROM --platform=linux/arm torizon/debian:2-bullseye

RUN apt-get update && apt-get install -y apt-utils

RUN apt-get install -y openssh-server

RUN mkdir /var/run/sshd

RUN echo 'root:root' | chpasswd

RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config

ENV NOTVISIBLE "in users profile"

RUN echo "export VISIBLE=now" >> /etc/profile

RUN apt-get install -y libc6 libstdc++6 python2-minimal ca-certificates tar

RUN apt-get install -y curl wget

RUN apt-get install -y bc bison build-essential flex git

RUN apt-get install -y libcairo2-dev libfl-dev libfreetype-dev libjansson-dev libts-dev

RUN apt-get install -y nano neofetch netbase sudo

RUN apt-get install -y lighttpd

RUN apt-get install -y python net-tools

EXPOSE 22
EXPOSE 80

CMD ["/usr/sbin/sshd", "-D"]

Then I ran this container using:

docker run -d -p 2222:22/tcp -p 8080:80/tcp --name model17xx-dev-2 base-model17xx-dev

That works fine. I cloned my codebase and got VS Code remote running. Now I find out that none of the hardware devices I need are mapped into the container. Ugh. Apparently I needed to run the container with:

docker run -d -p 2222:22/tcp -p 8080:80/tcp --privileged -v /dev:/dev --name model17xx-dev-2 base-model17xx-dev

There is no way once a container has been run to “restart” it with a different parameter? So all my edits in the first container are now a dead end?

Am I missing something? This workflow seems ridiculous… I have to know every detail about what my container needs before I do any work in it, or I risk having to go back and recreate it all again.

EDIT: I found this hack. :face_vomiting: [tried it and it doesn’t seem to work for the /dev mount anyway]

One of the confusing aspects of using these Docker containers is that I don’t know which is best to work out of to get ALL the features I need to support my app. For example, I need access to the framebuffer device for my code that uses cairo. Which Debian container is therefore best suited for this?

Here’s an overview of all the Debian containers we create and provide: Debian Container for Torizon

As well as the source for these containers so you know what’s being put in them: GitHub - toradex/debian-docker-images: Official supported Debian based Docker images.

If you’re looking for framebuffer/graphical stuff, then the respective Weston container would make the most sense as a starting place. Of course, you are not limited to just our Debian containers. You can use any starting container you want, or define your own containers from scratch, if that makes sense for your case.

Do I start with a container and then just try to load all the stuff my app needs in it alone?

More or less, yes. You start with a base container and find out what else you need to run and build your app. Then you define it all via a Dockerfile so that it can be reproduced reliably.

Can I add a container with Weston support and then talk to it from my customized Debian container? Can I add Qt support later as a container and talk to it from my customized Debian container?

Well, there are multiple methods for cross-container interaction/communication; it depends what you mean by “talk to”. You could share volumes or bind mounts between containers so that files/sockets are shared, or you could set up networks between containers so they can talk that way.
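As a sketch of the shared-volume idea, two compose services can mount the same named volume. The image names and socket path here are placeholders, not a working Weston setup (check the container docs for the actual socket location):

```yaml
version: "2.1"
services:
  weston:
    image: torizon/weston:2          # placeholder graphics container
    volumes:
      - shared-sock:/tmp/shared      # writes its socket here
  model17xx:
    image: base-model17xx-dev
    volumes:
      - shared-sock:/tmp/shared      # sees the same socket
volumes:
  shared-sock:
```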

There is no way once a container has been run to “restart” it with a different parameter? So all my edits in the first container are now a dead end?

There’s no “nice” way to do this, as you discovered. A container is meant to be a fixed, sandboxed instance; changing the parameters of that instance on the fly, without starting a new instance, rather defeats the point.

If you don’t know your full requirements/needs, what you typically do is the following:

  • Start with a base container that you think is close.
  • Run the container with basic arguments.
  • Figure out what extra packages you need in the container. Obtain these packages via apt or whatever package manager is available in the container.
  • As you add/configure things inside the container, keep a Dockerfile on the side where you reproduce these steps. This way, if you ever need to create a new container, you can just build your Dockerfile to reproduce everything you’ve done up to that point.
  • Once you have every package you need in the container, start figuring out run-time parameters. Do you need port/network access? Access to a peripheral? Figure out what you need for your docker run command.

Once all that’s done, you have a Dockerfile and a docker run command that should be able to reproduce everything your app/system needs. Of course this is generalizing a bit, but that’s the general workflow.

A Dockerfile is a fixed blueprint that guarantees what will be in a container image built from it. If you configure it right, you can even guarantee that specific library versions will be the same every time.
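For example, a Dockerfile can pin exact package versions. The version strings below are placeholders, not real bullseye versions; check apt-cache policy <package> for what your feed actually provides:

```dockerfile
FROM --platform=linux/arm torizon/debian:2-bullseye

# Pinning versions means every rebuild gets exactly these libraries
# (version strings are placeholders for illustration)
RUN apt-get update && apt-get install -y \
    libcairo2-dev=1.16.0-5 \
    libjansson-dev=2.13.1-1.1
```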

Best Regards,
Jeremias

@mmccullotn, for your use case I would recommend using docker-compose as a wrapper around the custom container image. You use the container image (i.e. the Dockerfile-generated image) to specify what goes into the container (e.g. openssh-server), and then use docker-compose to tell it how to run the container. The parameters that you specified in your docker command:

docker run -d -p 2222:22/tcp -p 8080:80/tcp --privileged -v /dev:/dev --name model17xx-dev-2 base-model17xx-dev

can be stored in a file called docker-compose.yml that looks like:

version: "2.1"
services:
  base-model17xx-dev:
    image: base-model17xx-dev
    container_name: model17xx-dev-2
    volumes:
      - /dev:/dev
    ports:
      - 2222:22/tcp
      - 8080:80/tcp
    privileged: true
    restart: unless-stopped

Then you can simply run docker-compose up -d and docker-compose down to start and stop the service.

When using docker-compose up -d, nothing appears to be persistent after a reboot? Is this correct?

I need the container changes to be persistent…

Is the container running after reboot? I would expect that it is, but yes, any changes you make inside the container will be lost when the container exits. This is by design with containers; you need to use a volume to store persistent data. Depending on the nature of your changes, you may be better off updating your Dockerfile and building a new version of your image. Basically, if something needs to be in all instances of the container, it should go in the image, i.e. things like package installs and global configuration items. Use a volume to store the runtime aspects of the container.
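A minimal sketch of the volume approach (the volume name and mount point are only illustrative):

```shell
# Create a named volume once; it survives container removal and reboots
docker volume create model17xx-data

# Mount it into the container; anything written under /data persists
docker run -d -v model17xx-data:/data --name model17xx-dev-2 base-model17xx-dev
```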

Drew

I think this is trying to force every use case into the same round hole, whether or not it is square… Our device is not an IoT box where all state is stored on some remote server.

We have a complex instrument device that has hooks into the operating system that must change service configuration files with a user preference. We also have user preferences and instrument history files that have to be preserved across reboots.

This abstraction of everything into its own container, and now a “volume”, does not seem to offer any tangible advantage; it just seems to make everything more complex to manage. Docker is also riddled with changes from version to version; I ran into problems with docker-compose config examples that use syntax that doesn’t work with the version running on the Colibri. More complexity…

Do you offer a straight up Debian-based operating system kernel/drivers for the Colibri iMX6D/7D compatible with standard armhf Debian distribution such as Stretch/Buster/Bullseye?

Thanks.

I’m not aware of anyone using straight-up Debian but I see no reason it would not work. We normally use Yocto (Linux BSPs - Toradex Colibri and Apalis System on Modules) when Torizon does not work. But you should be able to build our kernel and bootloader using this page (https://developer.toradex.com/knowledge-base/build-u-boot-and-linux-kernel-from-source-code)

Drew

Drew, I appreciate your help. I’ve reviewed your last comment, and I’ve made it this far – maybe I can get there with Torizon and containers…

So, may I ask for some further assistance with mounting persistent volumes for our app’s data and code? I want to do the following:

Volume      -> mount point
------------------------------
17xx-webgui -> /webgui
17xx-update -> /update
17xx-code   -> /root/Clients/AMI

My present docker-compose.yml looks like:

colibri-imx6-10784400:~$ more docker-compose.yml
version: "2.1"
services:
  base-model17xx-dev:
    image: base-model17xx-dev
    container_name: model17xx-dev
    volumes:
      - /tmp:/tmp
      - /dev:/dev
      - /run/udev:/run/udev
    cap_add:
      - CAP_SYS_TTY_CONFIG
    # Add device access rights through cgroup...
    #device_cgroup_rules:
      # ... for tty0
      #- 'c 4:0 rmw'
      # ... for tty7
      #- 'c 4:7 rmw'
      # ... for /dev/input devices
      #- 'c 13:* rmw'
      # ... for /dev/dri devices
      #- 'c 226:* rmw'
    ports:
      - 2222:22/tcp
      - 8080:80/tcp
    privileged: true
    restart: unless-stopped
colibri-imx6-10784400:~$ docker-compose -v
docker-compose version 1.26.0, build unknown

What do I need to add to my docker-compose.yml to get the three desired persistent volumes mounted on execution?

Also, what is the latest “version” in the yml file that can be used with compose 1.26.0? It gets confusing trying to track which compose syntax is acceptable with each “version”, especially when browsing examples on the web and even the Docker docs.

Thanks!

Also, perhaps you can clarify how this lack of persistence is supposed to be “properly” handled for a simple example as follows:

Say, for instance, I want to allow my end user to set the network parameters for my instrument through my app’s interface (see example screen below) – not via CLI into the operating system. So… I have a configuration screen that allows network interface entries, and these are saved into a persistent volume location that I define, correct? My app will also write these changes into the appropriate system locations (e.g. in a Debian-based container) and trigger a refresh of the network state.

Network-Edit

Now, when the system reboots, my operating system container is not persistent, so the network configuration is not restored during subsequent reboots – that is, until the Debian container starts, reads my persistent volume, rewrites the system configuration files associated with networking, and refreshes the network.

So… in this Docker/Compose world, there is no way I can get my device to boot into a stored network configuration with user-defined parameters? It will most likely always boot into a default DHCP configuration, even if the user wants a static network address, and then some time after reboot it will be refreshed to what the customer actually wants.

Am I missing something, or is this just how system services configured by the end user will have to work? They come up in some generic default state on boot and then have to be reconfigured and refreshed (maybe even restarted) to work as desired?

Thanks for any clarification.

Hi @mmccullotn

Regarding the volume syntax, I think just adding the following lines after - /run/udev:/run/udev should work:

 - 17xx-webgui:/webgui
 - 17xx-update:/update
 - 17xx-code:/root/Clients/AMI
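One thing to note: named volumes referenced by a service also have to be declared in a top-level volumes: section, or compose will complain about undefined volumes. The relevant parts of the file would then look roughly like:

```yaml
services:
  base-model17xx-dev:
    volumes:
      - /tmp:/tmp
      - /dev:/dev
      - /run/udev:/run/udev
      - 17xx-webgui:/webgui
      - 17xx-update:/update
      - 17xx-code:/root/Clients/AMI
volumes:
  17xx-webgui:
  17xx-update:
  17xx-code:
```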

Regarding docker-compose versions: according to this link, compose file format 3.0 support was added in docker-compose v1.10.0, and format 3.8 requires Docker Engine 19.03, which is what we have in Torizon, so any version up to 3.8 should work.

Regarding the question of network configuration and storage: with Torizon specifically, network management is handled outside of the container using NetworkManager. It’s an interesting question how to update that configuration from within a container. Certainly you can map the /etc/NetworkManager files into the container and manipulate them directly, but that’s likely a bad approach. NetworkManager has a D-Bus API, and that’s probably the best approach. You can use D-Bus from within a container to connect to the host OS as documented in our samples here. I’ll ask around in case anyone has more specific advice related to NetworkManager manipulation.

Drew

I just did an experiment with a Debian container. I added the network-manager package in the Dockerfile and added a volume as /var/run/dbus:/var/run/dbus. When I then invoked the nmcli command, it manipulated the NetworkManager instance running on the host. So you can either write your own code using the NetworkManager D-Bus API or just install NetworkManager in the container and script it with the nmcli utility. Details in our docs here:
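As a sketch, scripting a static address with nmcli from inside the container might look like this (“Wired connection 1” is just an example name; list yours with nmcli con show):

```shell
# Show the connections managed by the host's NetworkManager
nmcli con show

# Switch an existing connection to a static address
nmcli con mod "Wired connection 1" \
    ipv4.method manual \
    ipv4.addresses 192.168.1.50/24 \
    ipv4.gateway 192.168.1.1 \
    ipv4.dns 192.168.1.1

# Re-activate the connection so the change takes effect
nmcli con up "Wired connection 1"
```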

HTH,
Drew