Torizon Builder w/ Containers: Container does not start and Out of Memory

I completed my first successful TorizonCore Build using my patched kernel, custom DTS and custom kernel module. Hooray!

Now I was able to add a docker-compose file to my TCB build and generate a TEZI image. I copied the files to a USB drive and installed the image on my iMX7-emmc. I booted the module and connected to the board.

I ran docker ps, expecting to see my container, but it was not up. While investigating what was going on, I started to get these messages:

[  101.783057] Out of memory: Killed process 6815 (python3) total-vm:19544kB, anon-rss:7820kB, file-rss:4020kB, shmem-rss:0kB, UID:0 pgtables:24kB oom_score_adj:0
[  103.897828] Out of memory: Killed process 431 (udisksd) total-vm:62240kB, anon-rss:1808kB, file-rss:824kB, shmem-rss:0kB, UID:0 pgtables:38kB oom_score_adj:0
[  103.964835] Out of memory: Killed process 435 (NetworkManager) total-vm:44620kB, anon-rss:1580kB, file-rss:748kB, shmem-rss:0kB, UID:0 pgtables:34kB oom_score_adj:0
[  104.009262] Out of memory: Killed process 6987 (docker-integrit) total-vm:2724kB, anon-rss:172kB, file-rss:1780kB, shmem-rss:0kB, UID:0 pgtables:12kB oom_score_adj:0
[  104.111045] Out of memory: Killed process 459 (systemd-logind) total-vm:5196kB, anon-rss:368kB, file-rss:1444kB, shmem-rss:0kB, UID:0 pgtables:12kB oom_score_adj:0
[  104.255928] Out of memory: Killed process 425 (ModemManager) total-vm:50276kB, anon-rss:1116kB, file-rss:536kB, shmem-rss:0kB, UID:0 pgtables:34kB oom_score_adj:0
[  104.314980] Out of memory: Killed process 406 (systemd-timesyn) total-vm:14816kB, anon-rss:384kB, file-rss:712kB, shmem-rss:0kB, UID:989 pgtables:16kB oom_score_adj:0
[  104.353158] Out of memory: Killed process 460 (systemd-resolve) total-vm:5444kB, anon-rss:364kB, file-rss:700kB, shmem-rss:0kB, UID:990 pgtables:16kB oom_score_adj:0
[  104.958520] Out of memory: Killed process 7088 ((imesyncd)) total-vm:43244kB, anon-rss:1504kB, file-rss:3176kB, shmem-rss:0kB, UID:0 pgtables:28kB oom_score_adj:0
[  104.981969] Out of memory: Killed process 7083 ((resolved)) total-vm:43244kB, anon-rss:1504kB, file-rss:3124kB, shmem-rss:0kB, UID:0 pgtables:28kB oom_score_adj:0
[  105.032200] Out of memory: Killed process 7082 ((modprobe)) total-vm:43080kB, anon-rss:1488kB, file-rss:2372kB, shmem-rss:0kB, UID:0 pgtables:24kB oom_score_adj:0
[  105.063385] Out of memory: Killed process 464 (usermount) total-vm:24588kB, anon-rss:704kB, file-rss:932kB, shmem-rss:0kB, UID:0 pgtables:16kB oom_score_adj:0
[  105.095187] Out of memory: Killed process 485 (avahi-daemon) total-vm:3892kB, anon-rss:296kB, file-rss:588kB, shmem-rss:0kB, UID:987 pgtables:12kB oom_score_adj:0
[  105.111711] Out of memory: Killed process 399 (rpcbind) total-vm:2816kB, anon-rss:184kB, file-rss:508kB, shmem-rss:0kB, UID:997 pgtables:10kB oom_score_adj:0
[  105.129711] Out of memory: Killed process 7087 (NetworkManager) total-vm:5908kB, anon-rss:104kB, file-rss:476kB, shmem-rss:0kB, UID:0 pgtables:12kB oom_score_adj:0
[  105.155673] Out of memory: Killed process 765 (agetty) total-vm:1736kB, anon-rss:96kB, file-rss:400kB, shmem-rss:0kB, UID:0 pgtables:6kB oom_score_adj:0
[  105.171422] Out of memory: Killed process 764 (agetty) total-vm:3784kB, anon-rss:96kB, file-rss:336kB, shmem-rss:0kB, UID:0 pgtables:14kB oom_score_adj:0
[  105.190711] Out of memory: Killed process 436 (systemd-network) total-vm:5800kB, anon-rss:324kB, file-rss:0kB, shmem-rss:0kB, UID:991 pgtables:12kB oom_score_adj:0
[  105.220546] Out of memory: Killed process 487 (avahi-daemon) total-vm:3760kB, anon-rss:220kB, file-rss:0kB, shmem-rss:0kB, UID:987 pgtables:10kB oom_score_adj:0
[  105.238168] Out of memory: Killed process 7086 (ModemManager) total-vm:1332kB, anon-rss:4kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:8kB oom_score_adj:0
[  105.263221] Out of memory: Killed process 381 (systemd-journal) total-vm:25756kB, anon-rss:512kB, file-rss:1392kB, shmem-rss:968kB, UID:0 pgtables:20kB oom_score_adj:-250
[  105.290957] Out of memory: Killed process 537 (dockerd) total-vm:925620kB, anon-rss:35884kB, file-rss:2084kB, shmem-rss:0kB, UID:0 pgtables:102kB oom_score_adj:-500
[  105.355387] Out of memory: Killed process 7032 (docker-proxy) total-vm:850812kB, anon-rss:1504kB, file-rss:1424kB, shmem-rss:0kB, UID:0 pgtables:34kB oom_score_adj:-500
[  105.375448] Out of memory: Killed process 7045 (docker-proxy) total-vm:834420kB, anon-rss:1508kB, file-rss:1424kB, shmem-rss:0kB, UID:0 pgtables:24kB oom_score_adj:-500
[  105.403794] Out of memory: Killed process 7070 (docker-proxy) total-vm:834420kB, anon-rss:1508kB, file-rss:1424kB, shmem-rss:0kB, UID:0 pgtables:34kB oom_score_adj:-500
[  105.432706] Out of memory: Killed process 7056 (docker-proxy) total-vm:859712kB, anon-rss:1504kB, file-rss:1436kB, shmem-rss:0kB, UID:0 pgtables:36kB oom_score_adj:-500
[  105.464224] Out of memory: Killed process 7006 (docker-proxy) total-vm:850812kB, anon-rss:1500kB, file-rss:1356kB, shmem-rss:0kB, UID:0 pgtables:36kB oom_score_adj:-500
[  105.496687] Out of memory: Killed process 2355 (docker-proxy) total-vm:850812kB, anon-rss:1528kB, file-rss:1336kB, shmem-rss:0kB, UID:0 pgtables:34kB oom_score_adj:-500
[  106.676237] systemd[1]: Failed to start Network Name Resolution.
[  106.689982] systemd[1]: Failed to start Modem Manager.
[  106.701282] systemd[1]: Failed to start Network Manager.

Any ideas where I need to start looking?

Greetings @kdubious,

This is odd. Do these messages only happen when you attempt to start your container?

Or do they also occur if the system is just left idle?

Looking at the messages, I see the docker-proxy processes consuming an unusual amount of memory. For reference, docker-proxy processes handle port forwarding for containers; however, they shouldn't need anywhere near the amount of memory shown in your logs.
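As a rough illustration (the image name and port below are just placeholders, not your setup), Docker starts one docker-proxy process on the host for every published port, which you can observe like this:

    # placeholder container: publish a single port, then look for its proxy
    docker run -d --name web -p 8080:80 nginx
    ps aux | grep [d]ocker-proxy   # one docker-proxy entry per published host port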

For additional debugging, you can look in /var/log/* to see whether the kernel captured any further logging related to these issues. You might also want to check the amount of free memory on the system as the issue happens, to confirm whether it is truly running out of memory.
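For example, some generic Linux commands (run over SSH or the serial console while you reproduce the issue) should show whether memory is actually exhausted:

    free -m                                  # overall RAM and swap usage
    head /proc/meminfo                       # more detailed memory breakdown
    dmesg | grep -i 'out of memory'          # OOM killer entries in the kernel log
    journalctl -k --no-pager | tail -n 50    # recent kernel messages from journald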

Best Regards,
Jeremias

For now, I’m going to call this an issue with the image itself, and I’ll close this. I’ll come back to it if it continues to be an issue.

Wait, did you uncover anything further about the issue, or are you just leaving it for now?

I had an error in the container's code, so the app was stuck in a loop. I was also trying to publish 1000 ports (the app selects a random port between 40000 and 41000):

    ports:
      - 40000-41000:40000-41000

Fixing the error and changing to use:

    docker run -d --network host --device /dev/snd

seems to have fixed things. I’ve not published via Torizon OTA yet, but I can deploy the container directly to the device and it runs as expected.
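For reference, once I move this back into a docker-compose file for TorizonCore Builder, a minimal sketch of the equivalent service (the service and image names below are placeholders) would look something like:

    version: "2.4"
    services:
      app:                               # placeholder service name
        image: myregistry/myapp:latest   # placeholder image
        network_mode: host               # host networking: no published ports, so no docker-proxy
        devices:
          - /dev/snd:/dev/snd            # same as --device /dev/snd on the CLI
        restart: unless-stopped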

Oh, that makes sense then. Trying to publish 1000 ports requires as many instances of docker-proxy, which would explain the enormous memory usage.