Container limit?

Hi

For days I’ve been looking at a weird problem with my application. I have a docker-compose.yml with 8 containers defined in it. One of them was using a lot of resources at startup and we got stuck in a bootloop. So, for testing, I temporarily removed that container and all was well. Another problem I’m trying to fix is the boot order, along with the possibility of showing something on screen while our webserver is still booting. I added a small nginx container to serve a loading page. It only uses about 6 MB of RAM, so I didn’t think that would be a problem. But now I’m in that same bootloop as before. It looks as if 7 is the maximum number of containers that can be started from docker-compose. Is this normal and expected behaviour?

Merijn

So, after a whole day of trying to figure out what is going on, I think I can say 8 containers is the limit, no matter how big they are.

I created a docker-compose.yml that shows this limit on a Verdin iMX8MM DL 1GB W/B v1.1B mounted on a Verdin Development board v1.1A. It has a weston container, a kiosk container, and 6 lightweight nginx webservers. If all goes well, you should see a green webpage with white “Loading…” text, but it never gets there. Each nginx webserver needs only about 6.5 MB, so I wouldn’t think the module runs out of resources, but it does. What can be done to fix this?

docker-compose-fix-8c.yml (2.4 KB)

Greetings @mvandenabeele,

That sounds very odd. Of course there is eventually a container limit due to limited hardware resources, but 8 sounds a bit lower than I’d expect. Though I don’t have the exact same variant of the Verdin i.MX8MM that you do.

Some questions about your setup:

  • What version of TorizonCore are you running?
  • Can you describe this “bootloop” in more detail? What is the device doing exactly? Do you see any logs from the kernel while this is occurring?
  • How much memory is being used on the system when you are at your “max” amount of 7 containers?

Best Regards,
Jeremias

All,

8 sounds right. I have not looked into this, I’m just older than dirt. When it comes to switching environments like docker and the plethora that came before, especially those developed using AGILE, someone always gets the bright idea to use a bitmap. If they bitmapped a byte, that would be the exact number.

You need to rule the Web Server out though. Web servers and Web browsers tend to launch about a thousand threads at startup to use as part of their “thread pool.”

You may simply be smashing into a thread limit or something like that.

https://hub.docker.com/repository/docker/seasonedgeek/xclock-demo

Try and get 9 of those running instead of your Web Server. It’s public so you should be able to pull and run it. It’s just the X11 clock demo program running in a container. It needs a Weston container already running.

docker pull seasonedgeek/xclock-demo

docker run --rm -d -v /tmp:/tmp -v /var/run/dbus:/var/run/dbus -v /dev/galcore:/dev/galcore --device-cgroup-rule='c 199:* rmw' seasonedgeek/xclock-demo

If you see a bunch of clocks and your splash screen, your problem is your Web server is exhausting some other resource like threads.
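If it is easier to launch them from compose like the rest of your stack, a rough compose-service equivalent of that docker run would look something like this. Just a sketch: the service name and the version line are my guesses, and device_cgroup_rules needs compose file format 2.3 or newer.

version: "2.4"
services:
  xclock-1:
    image: seasonedgeek/xclock-demo
    volumes:                        # same bind mounts as the docker run above
      - /tmp:/tmp
      - /var/run/dbus:/var/run/dbus
      - /dev/galcore:/dev/galcore
    device_cgroup_rules:            # same as --device-cgroup-rule
      - 'c 199:* rmw'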

Thanks for your reply Jeremias

I used a brand new module and installed the latest TorizonCore with demo containers through EasyInstaller. I replaced /var/sota/storage/docker-compose/docker-compose.yml with the file I attached in a previous post.

The bootloop goes as follows:

  • I see the torizon logo with a spinner
  • The spinner stops
  • The display stays black. At that point I can’t connect over SSH, so I can’t give many more details.
  • After a while, the device restarts and I’m back to step one.

I’ll try one more time as soon as I have the hardware available.

Hi @seasoned_geek

I ran the tests with your clock and got the same results. Up to six clocks can be started; adding a seventh causes trouble.


What commands should I run to further investigate this issue?

docker-compose-1-clock.yml (972 Bytes)
docker-compose-6-clocks.yml (2.9 KB)
docker-compose-7-clocks.yml (3.3 KB)

Okay,

I was honestly hoping that worked for you so we could turn this into a “bad web server” conversation.

Here are my next guesses. SSH into your board.

ulimit -Sa   ## Show soft limits ##
ulimit -Ha   ## Show hard limits ##

Look for an unusually low “Max User processes” or some other suspiciously small number.

Odds are the above has nothing to do with your issue, but it needs to be ruled out. If something seems really low, especially compared to an Ubuntu desktop, you might want to research it.

Your next thing to look at is

docker info

In particular I would be paying attention to

Architecture: x86_64
CPUs: 48
Total Memory: 125.8 GiB

If your virtual CPU count is 8 or smaller, or for some stupid reason you have a hideously small amount of RAM, that’s where you need to start digging. While you can’t do squat about the CPU count, as that is a combination of virtualization support in both the hardware and the OS, you can tunnel into the Docker documentation universe and find out how it allocates physical resources to containers.

When you are running Oracle VirtualBox on a rather capable machine with hardware virtualization you can have more CPUs allocated across multiple machines that virtually exist on the box. The virtual environment is smart enough to realize they aren’t all actually “in use” so they can be shared.

This might not be the case with Docker. You may get one, period. In that case, if you have a quad-core machine and the virtualization can magically double that to eight, you are going to cap out at eight.

Sorry, but you’re screwed.

Re-architect your solution. Maybe trim it down to two Web servers that do a lot more?

Hi seasoned_geek

Thanks for your reply. I was thinking exactly the same thing about that bitmapped byte, but I don’t feel we’re old because of that :wink:

I stumbled on this issue because my real application containers got stuck in a bootloop. They use only a few threads each, so I don’t think that’s the problem. I used the webserver as a demonstration because it is publicly available and seemed lightweight when running on my laptop. From memory, nginx starts 10 worker threads and uses just 6.5 MB of memory. That would add up to 60 threads and around 40 MB of memory in total, plus whatever is needed for one weston and one cog container. That doesn’t sound like a lot to me. Just to be sure, I’ll do some testing with your suggested containers as soon as I have my hardware at hand. It doesn’t hurt to rule things out.

Hi @seasoned_geek

Thanks for following up on this. I ran your commands and here are the results:

verdin-imx8mm-06944471:~$ ulimit -Sa
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 2835
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 2835
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Stack size might be a bit low, but it wouldn’t explain why the container type doesn’t matter: heavy, light, or mixed, I always hit the limit at 8.

verdin-imx8mm-06944471:~$ ulimit -Ha
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 2835
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 524288
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) unlimited
max user processes              (-u) 2835
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Pipe size: 8. Could that be the issue?

verdin-imx8mm-06944471:~$ docker info
Client:
 Debug Mode: false

Server:
 Containers: 3
  Running: 3
  Paused: 0
  Stopped: 0
 Images: 3
 Server Version: 19.03.14-ce
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: journald
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 3b3e9d5f62a114153829f9fbe2781d27b0a2ddac.m
 runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f-dirty
 init version: fec3683-dirty (expected: fec3683b971d9)
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.4.161-5.6.0+git.0f0011824921
 Operating System: TorizonCore 5.6.0+build.13 (dunfell)
 OSType: linux
 Architecture: aarch64
 CPUs: 2
 Total Memory: 977.4MiB
 Name: verdin-imx8mm-06944471
 ID: 6KGF:FJUR:7TJ6:5QNO:HP75:GBVB:7UFX:O6UJ:R5IT:RCVP:7YN6:WTQR
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

verdin-imx8mm-06944471:~$

2 CPUs, but I don’t think that is as relevant to Docker as it is to VMs. Docker is not a full hypervisor, and I think containers can share CPU time.

Just to be clear, I’m not running 8 webservers :slight_smile: That was just an example to illustrate the issue. My application consists of 8 completely different containers, most of which are really lightweight. I’m afraid you’re right though, and I’ll need to merge some to get the number under 8 unless this thread rings a bell at Toradex support.

Understood, thanks for clarifying. I like my X11 clock thing better because it is just a clock: it doesn’t do or use much, so we can rule out other resource restrictions.

A pipe size of 8 is a non-issue. It’s actually eight 512-byte buffers for the FIFO. The clock application isn’t squeezing that orange too hard.

Here is some randomly gleaned information.

Maximum number of containers per network bridge is 1023

docker stats

Total up the CPU% column.

Somewhere I remember briefly seeing something that lets you assign a minimal CPU % to each container; I don’t remember where. Assuming stats conforms to DEC math (100% == 1 CPU), a 2-CPU system should have 200% available. The PC world may not have thought that far ahead though.
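If you want to experiment with that, CPU caps can be written per service in the compose file. A sketch only, with made-up numbers and my clock image as the example; cpus and cpu_shares are compose file format 2.x keys (the version 3 format handles resources differently, under deploy.resources).

  xclock-1:
    image: seasonedgeek/xclock-demo
    cpus: 0.25        # hard cap: at most a quarter of one CPU
    cpu_shares: 512   # relative weight under contention (default is 1024)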

Someone else hit this on Windows with a 64-container limit. It turned out to be a .NET thing.

You could also dig through the documentation for the default CFS scheduler.

Personally, I believe the 2 CPUs are massively relevant. You need a 4-core SoC to test that theory though. If you can get 8 running on 2 CPUs and it magically gags at 16 (or just under) on the 4-core SoC, that would point to a minimum CPU allocation per container of 0.25: when you’ve used up all your cores, that’s it. That should be a software setting Toradex could either fix or tell you how to fix, though. As one of the above links points out, some dude got 64 to run on a desktop. I didn’t read that post in depth, but I will assume a quad-core desktop.

If your quad-core SoC also gags at 8 . . . Someone bit mapped a byte.

Hi @mvandenabeele ,

I did some tests with a Colibri iMX8QXP and I was able to reproduce your issue using your web server image, mainly the bootloop, although I had to start around 18 containers for it to happen instead of 8.

Given that this SoM is quad core and has double the amount of RAM (2 GB) compared to your Verdin, this difference makes sense.

By making a second connection to the module and using htop I saw that there’s a spike in CPU and RAM usage when creating/initializing the containers. The higher the number of containers, the higher the peak RAM usage.

When this spike makes memory usage reach 100%, the terminal freezes or runs very slowly for a brief time before a system restart occurs. I suspect this is a watchdog reset, as CPU usage on all cores is almost maxed out during this time.

A possible workaround, if your use case allows for it, is starting the containers in batches instead of all at once. This way you have various smaller RAM peaks that hopefully won’t be high enough to reboot your system.

Using your docker compose YAML as an example, you can alter it so that loader-4 through loader-6 only start after the first 3 have initialized:

  loader-4:
    container_name: loader-4
    image: jabbla/allora3-loader:latest
    restart: unless-stopped
    environment:
    - TARGET_PING=http://loader-5
    - TARGET_URL=http://loader-5
    - BACKGROUND_COLOR=orange
    depends_on:
      loader-1:
        condition: service_started
      loader-2:
        condition: service_started
      loader-3:
        condition: service_started
      
  loader-5:
    container_name: loader-5
    image: jabbla/allora3-loader:latest
    restart: unless-stopped
    environment:
    - TARGET_PING=http://loader-1
    - TARGET_URL=http://loader-1
    - BACKGROUND_COLOR=black
    depends_on:
      loader-1:
        condition: service_started
      loader-2:
        condition: service_started
      loader-3:
        condition: service_started
      
  loader-6:
    container_name: loader-6
    image: jabbla/allora3-loader:latest
    restart: unless-stopped
    environment:
    - TARGET_PING=http://loader-1
    - TARGET_URL=http://loader-1
    - BACKGROUND_COLOR=black
    depends_on:
      loader-1:
        condition: service_started
      loader-2:
        condition: service_started
      loader-3:
        condition: service_started

Using batches of 8 I was able to successfully initialize and run 34 containers (weston, kiosk, and 32 web servers) on the Colibri iMX8QXP, almost twice the previous amount.

Keep in mind that depends_on does not wait for services to be “ready”, only until they have been started, as stated in the Docker documentation.
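If a dependent container really needs to wait until its dependency is answering requests, one option is to combine a healthcheck with condition: service_healthy. This is only a sketch, assuming compose file format 2.4 and that the loader image ships a wget binary (as BusyBox/nginx-based images usually do):

  loader-1:
    container_name: loader-1
    image: jabbla/allora3-loader:latest
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost"]
      interval: 5s
      timeout: 3s
      retries: 5

  loader-4:
    container_name: loader-4
    image: jabbla/allora3-loader:latest
    restart: unless-stopped
    depends_on:
      loader-1:
        condition: service_healthy   # waits for the healthcheck to pass, not just for the container to start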

Let me know if this helps you in any way.

Best regards,
Lucas Akira

Hi @lucas_a.tx

Thanks for your answer. The strange thing is that even with the clock @seasoned_geek provided, I get exactly the same behavior, and I don’t think the clock needs a lot of resources. Based on the CPU and memory usage of the webservers, I can’t imagine 8 of them maxing out the module, even if it’s a lighter Verdin. If Docker reserved a fixed amount of resources when a container starts, no matter how heavy the container is, that could be an explanation, but it would be a strange design decision in Docker.

I did some tests earlier with spreading out the container starts by adding dependencies. The problem is that if I turn the module off and then back on, Docker thinks the containers have crashed, so it restarts them all at once because of the requested restart policy (unless-stopped). And then the device is bricked again.

Hi @mvandenabeele

I have no technical proof, but I think you suffer from “Splash & Dribble.”

I would definitely not chain stuff together with depends_on because that is just going to cause a crash later on when something has to restart.

Didn’t read all of this, but it has a nice intro per your problem.

By default each container gets everything (-1) and has a big startup splash in RAM. After it is running, Docker figures out what it really needs and dribbles the memory back to the pool.

Play around with limiting the memory via the YAML. See if that reduces startup memory requirements.
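For example, a per-service memory cap could look like the sketch below; the numbers are placeholders to experiment with, and mem_limit / mem_reservation are compose file format 2.x keys (the version 3 format puts resource limits under deploy.resources). The loader service from the compose file posted earlier is used here just as an example.

  loader-1:
    container_name: loader-1
    image: jabbla/allora3-loader:latest
    restart: unless-stopped
    mem_limit: 64m        # hard ceiling on the container's RAM
    mem_reservation: 16m  # soft reservation the kernel tries to honor under memory pressure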

The problem here is that tiny x86 minds write something for Linux, assuming they have either a desktop or a server with 128 GB or more of RAM; then someone says “Ahck! You should use that for your embedded systems,” completely ignoring the fact that you have 1 GB (or less) of RAM and that the code you are trying to run has an 800 lb gorilla of a resource requirement at startup.

If you have a method of determining how much RAM one clock application is using (I know there is one, I’m just not hitting on all cylinders right now), then adjust the YAML to limit the container to that amount plus, say, 20 MB? See if you can get the number you need running.

I suspect what you are going to have to do is combine lots of functions into one or two containers that run multiple threads.

One thing I’ve noticed in the Docker world is a complete lack of engineering design. Instead of writing one application with many threads that communicate with each other, people write one container for each thread function (possibly far more containers than previous thread count.) They don’t realize that there is significant overhead placed on the little bitty embedded system to launch a container. Watching a client do that now. They haven’t yet hit this problem, but they will if they keep going down that road.

Hi @mvandenabeele,

We ran a few tests around this topic to see how one could best proceed. The example @lucas_a.tx posted here will indeed show this behavior on a shutdown because of the unless-stopped restart policy. We therefore created a systemd service that stops every running container on the module when a shutdown or a reboot is requested, for instance via sudo reboot or sudo shutdown.

This way, the containers will not all try to restart at once on startup and will instead follow the exact startup order defined in the docker-compose file through the depends_on conditions.

The structure could be something as follows:

  1. Create a service such as the one below in /etc/systemd/system and enable it with sudo systemctl enable my_service.service:
[Unit]
Description=Forcing Docker Stop on shutdown
DefaultDependencies=no
Before=shutdown.target poweroff.target halt.target reboot.target
Requires=poweroff.target

[Service]
Type=oneshot
ExecStart=/home/torizon/compose_kill.sh
RemainAfterExit=yes

[Install]
WantedBy=shutdown.target
  2. Create a shell script that stops all containers and save it at the path referenced in the service, such as:
#!/bin/sh

echo "Killing docker processes"

docker stop $(docker ps -a -q)

echo "Killed"

Please note, however, that if power is cut this sample service may not run, and you may end up in an out-of-memory condition again. Some adaptations may be needed depending on your use case.

Another possibility would be to simply remove the restart condition from the docker-compose file and set up another service that monitors the health of your containers.

In the meantime, we are trying other things to check for alternative solutions, such as limiting the memory available while starting the containers.

Hi @mvandenabeele !

Were you able to test @gclaudino.tx’s approach? Does it help you in any way?

Other than that, do you have any more concerns regarding this topic?

Best regards,

Hi Henrique

We decided to use a Verdin with more processing power, both to overcome this limit and because it makes our UI a lot more responsive. Thanks for the followup!

Merijn

Hi @mvandenabeele !

Thanks for the information!

Best regards,