Shutdown from C++ application container using D-Bus

Hello!

We have a C++ application running on a device. We intend to shut down the system (an i.MX8M module) through a shutdown command generated by the application container. We believe we have the correct code, but apparently the application container does not have the proper permissions to execute this.

Sample code is provided below:

// Currently fails with:
//   GDBus.Error:org.freedesktop.DBus.Error.AccessDenied: Permission denied
#include <gio/gio.h>

static int shutdownCommand() {
  GDBusConnection* conn;
  GError* error = NULL;

  // Connect to the system bus
  conn = g_bus_get_sync(G_BUS_TYPE_SYSTEM, NULL, &error);
  if (error != NULL) {
    g_printerr("Connection Error: %s\n", error->message);
    g_error_free(error);
    return 1;
  }

  // Boolean argument of PowerOff ("interactive"); logind may ask for
  // interactive authorization when this is TRUE
  GVariant* param = g_variant_new("(b)", TRUE);

  // Call the PowerOff method on systemd-logind
  GVariant* result;
  result = g_dbus_connection_call_sync(conn,
                                       "org.freedesktop.login1",          // Destination service
                                       "/org/freedesktop/login1",         // Object path
                                       "org.freedesktop.login1.Manager",  // Interface
                                       "PowerOff",                        // Method
                                       param,                             // Parameters
                                       NULL,                              // Expected reply type (unchecked)
                                       G_DBUS_CALL_FLAGS_NONE,            // Flags
                                       -1,                                // Default timeout
                                       NULL,                              // Cancellable
                                       &error);

  if (error != NULL) {
    g_printerr("Shutdown failed: %s\n", error->message);
    g_error_free(error);
    g_object_unref(conn);
    return 1;
  }

  g_variant_unref(result);
  g_print("Shutdown command sent successfully\n");

  // Clean up
  g_object_unref(conn);

  return 0;
}

How can we manage this?

Typing before I’ve finished my first glass of iced tea for the day

Outside of all the containers, in the HOST environment, create a cron job that wakes up every N seconds, looks for a file with a specific name in a specific directory and, when the file is found, deletes said file and shuts down the computer. Expose that directory to each container you want to have shutdown capability.
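
A minimal sketch of that flag-file approach, run on the host; the directory, file name, and script path are hypothetical placeholders:

#!/bin/sh
# Hypothetical host-side watcher, e.g. /usr/local/bin/shutdown-watcher.sh,
# run periodically from cron or looped from a systemd service on the HOST.
FLAG_DIR=/var/volatile/shutdown-flags    # expose this directory to the containers
FLAG_FILE="$FLAG_DIR/request-shutdown"

if [ -f "$FLAG_FILE" ]; then
    rm -f "$FLAG_FILE"     # consume the request so it only fires once
    systemctl poweroff     # runs on the host, so no D-Bus policy problem
fi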

-or-

Create a systemd service that listens to a message queue or monitors a region of shared memory for a specific message and, when it is received, shuts down the computer.

You basically need the cross container communications answer I provided a while back.

Even if you could open up all of DBus for a container and give it the ability to shut down the computer directly, I wouldn’t do it. You would be opening up a serious can of worms. Docker was primarily created to block that.

If you are really old school you could have a shared relational database with a trigger on a table that kicks off the job on the host.


@seasoned_geek Thank you for your suggestions.

I will explore these and get back. Would you be able to provide a link to the cross-container communications answer you mentioned?

I am very new to this, very new to Linux and Docker. I've somehow managed to do the programming part, but this interfacing is very difficult.

I am also facing difficulties creating files in the shared volumes.

For example, I have created shared volumes like:

  - "/appdata/config:/appdata/config"
  - "/appdata/log:/appdata/log"
  - "/appdata/data:/appdata/data"

in the Docker Compose file. However, the generated directories are owned by root, and I am unable to write files to them either from the container or by SSH-ing into the host as the torizon user.

In order to implement your first suggestion I would also need to be able to write to a file in one of these locations.

Ask the boys and girls from Toradex to point you to their tutorials on file sharing between host and docker container.

If you go with a message queue or shared memory approach, you should not need that, though.

I’m guessing you either didn’t give your containers proper access -OR- your Yocto build didn’t change file ownership and protection.
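
As a rough illustration of the ownership fix, assuming the bind-mount sources from the compose snippet above and that the process inside the container runs as UID 1000 (the default torizon user), the directories can be chowned on the host:

$ sudo chown -R 1000:1000 /appdata/config /appdata/log /appdata/data    # host side; adjust paths/UID to your setup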

I don’t use Microsoft products, so if you are using Visual Studio Code, you are on your own.


@seasoned_geek Still, thank you for your input! The resources are lengthy; I will try to get up to speed.

Thanks again!

We do have some documentation on using D-Bus in a container that you can follow at your own discretion: Torizon Best Practices Guide | Toradex Developer Center

There is even a link to a code example in there, though it's in Python, not C++. In any case, it might help as a loose reference.

Best Regards,
Jeremias

@jeremias.tx Hi! Thanks for the link. I have seen it and used the same approach to send the D-Bus command. I have even mounted /var/run/dbus to give the container appropriate access, and D-Bus works for other parts of the application, but it won't work for the shutdown command.

Is there a way to shut down the application container from within the C++ application, and then have docker-compose or the Dockerfile execute a shutdown script on the host when the container shuts down?

but it won't work for the shutdown command.

I was able to shut down my module from the command line using D-Bus. In the container I ran:

$ apt update &&  apt-get install -y dbus
$ dbus-send --system --print-reply --dest=org.freedesktop.login1 /org/freedesktop/login1 "org.freedesktop.login1.Manager.PowerOff" boolean:true 

After that my module does appear to shut down, so it “should” work. I'm not sure if there's something special needed to do this from within a C++ application.

Is there a way to shut down the application container from within the C++ application, and then have docker-compose or the Dockerfile execute a shutdown script on the host when the container shuts down?

Well, a container continues to run until its main process exits. So if the main process is your application, then once your application exits, the container should exit. You would then need another process monitoring the state of your container that initiates your script when the container is no longer running.
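
A rough sketch of such a monitor on the host, assuming a hypothetical container name and using systemctl poweroff as a stand-in for "your script":

#!/bin/sh
# Hypothetical host-side monitor; replace the container name with yours (see 'docker ps').
CONTAINER=my-app-container

# 'docker wait' blocks until the container stops and prints its exit code
EXIT_CODE=$(docker wait "$CONTAINER")
echo "Container exited with code $EXIT_CODE, powering off"
systemctl poweroff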

Best Regards,
Jeremias

Please check the output of the command executed on the module via SSH.

torizon@verdin-imx8mm-14756428:~$ dbus-send --system --print-reply --dest=org.freedesktop.login1 /org/freedesktop/login1 "org.freedesktop.login1.Manager.PowerOff" boolean:true
Error org.freedesktop.DBus.Error.AccessDenied: Permission denied

Hello @geopaxpvtltd ,

You are executing the command outside of a container. If you execute it from inside a properly configured container, as indicated in the link provided by @jeremias.tx, it will work.
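
For reference, a minimal way to try this; the important part is mounting the system bus socket into the container, and the image tag is only an example:

$ docker run --rm -it -v /var/run/dbus:/var/run/dbus torizon/debian:3-bookworm bash
# then, inside the container:
$ apt update && apt-get install -y dbus
$ dbus-send --system --print-reply --dest=org.freedesktop.login1 /org/freedesktop/login1 "org.freedesktop.login1.Manager.PowerOff" boolean:true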

Best regards,
Josep

You are right, it works. I am so confused. Then why won't it work from the code?

The following is my torizonPackages.json:

{
    "deps": [
        "libgpiod2",
        "libnm0",
        "libglib2.0-0",
        "libserial1",
        "libevent-2.1-7",
        "libfmt9",
        "libspdlog1.10-fmt9",
        "libboost-system-dev",
        "libboost-dev",
        "libpoco-dev",
        "libespeak-dev",
        "libdbus-c++-dev"
    ],
    "devDeps": [
        "libgpiod-dev",
        "libnm-dev",
        "libglib2.0-dev",
        "libserial-dev",
        "libevent-dev",
        "libfmt-dev",
        "libspdlog-dev",
        "libboost-system-dev",
        "libboost-dev",
        "libpoco-dev",
        "libespeak-dev",
        "libdbus-c++-dev"
    ]
}

Hello @geopaxpvtltd ,
What is the configuration of your container?

Best regards,
Josep

Docker Compose:

version: "3.9"
services:
  geopaxapp-svc-debug:
    build:
      context: .
      dockerfile: Dockerfile.debug
    image: ${LOCAL_REGISTRY}:5002/geopaxapp-svc-debug:${TAG}
    user: root
    ports:
      - 2230:2230
      - 8000:8000
      - 8443:8443
      - 2101:2101
    devices:
      - "/dev/gpiochip4:/dev/gpiochip4"
      - "/dev/ttyACM0:/dev/ttyACM0"
      - "/dev/ttyACM1:/dev/ttyACM1"
      - "/dev/verdin-uart1:/dev/verdin-uart1"
      - "/dev/verdin-uart2:/dev/verdin-uart2"
    volumes:
      - "/var/run/dbus:/var/run/dbus"
      - "/var/run/sdp:/var/run/sdp"
      - "/sys/block:/sys/block"
      - "/dev:/dev"
      - "/mnt:/mnt"
      - "~/appdata/config:/appdata/config"
      - "~/appdata/log:/appdata/log"
      - "~/appdata/data:/appdata/data"

  geopaxapp-svc:
    build:
      context: .
      dockerfile: Dockerfile
    image: ${DOCKER_LOGIN}/geopaxapp-svc:${TAG}
    user: root
    restart: unless-stopped
    ports:
      - 8000:8000
      - 8443:8443
      - 2101:2101
    devices:
      - "/dev/gpiochip4:/dev/gpiochip4"
      - "/dev/ttyACM0:/dev/ttyACM0"
      - "/dev/ttyACM1:/dev/ttyACM1"
      - "/dev/verdin-uart1:/dev/verdin-uart1"
      - "/dev/verdin-uart2:/dev/verdin-uart2"
    volumes:
      - "/var/run/dbus:/var/run/dbus"
      - "/var/run/sdp:/var/run/sdp"
      - "/sys/block:/sys/block"
      - "/dev:/dev"
      - "/mnt:/mnt"
      - "~/appdata/config:/appdata/config"
      - "~/appdata/log:/appdata/log"
      - "~/appdata/data:/appdata/data"

I've never tried this from a C++ application before, so I'm not sure what the issue might be. But perhaps there's some kind of extra limitation or extra configuration needed for this to work in an application as opposed to the command line? Maybe there's something in the documentation for the D-Bus library/API you're using?

Best Regards,
Jeremias

Hello @jeremias.tx,

I believe I have found the issue. The application runs with a user ID of 1000, whereas commands sent from the bash shell run with an ID of 0.

The calls sent from the application are denied because they carry UID 1000:

method call time=1690373850.807092 sender=:1.44 -> destination=org.freedesktop.login1 serial=78 path=/org/freedesktop/login1; interface=org.freedesktop.login1.Manager; member=CanPowerOff
method call time=1690373850.809950 sender=:1.6 -> destination=org.freedesktop.DBus serial=417 path=/org/freedesktop/DBus; interface=org.freedesktop.DBus; member=GetConnectionUnixUser
   string ":1.44"
method return time=1690373850.810029 sender=org.freedesktop.DBus -> destination=:1.6 serial=77 reply_serial=417
   uint32 1000
method call time=1690373850.812131 sender=:1.6 -> destination=org.freedesktop.DBus serial=418 path=/org/freedesktop/DBus; interface=org.freedesktop.DBus; member=GetConnectionUnixUser
   string ":1.44"
method return time=1690373850.812369 sender=org.freedesktop.DBus -> destination=:1.6 serial=78 reply_serial=418
   uint32 1000
error time=1690373850.814309 sender=:1.6 -> destination=:1.44 error_name=org.freedesktop.DBus.Error.AccessDenied reply_serial=78
   string "Permission denied"

The D-Bus policy in org.freedesktop.login1.conf does not allow any user other than UID 0 to call these methods.

My question hence becomes: even though the app container is run as root via the user flag in Docker Compose, why does the application send messages with a user ID of 1000?

Furthermore, if I were to modify the policy in org.freedesktop.login1.conf by adding something like:

<policy user="1000">
    <allow send_destination="org.freedesktop.login1" 
           send_interface="org.freedesktop.login1.Manager" 
           send_member="PowerOff"/>
    <allow send_destination="org.freedesktop.login1" 
           send_interface="org.freedesktop.login1.Manager" 
           send_member="CanPowerOff"/>
</policy>

How could I do it? Maybe with a TorizonCore Builder (tcb) project?

This is what I have tried thus far:

  1. Create a script that appends the above policy to the conf file, and copy it into the container.
  2. Share the config file/directory with the container (mounted with the :rw flag) so the script can modify it.

But the script does not work, stating:

torizon@verdin-imx8mm-14756428:~$ docker exec -it torizon-geopaxapp-svc-debug-1 /custom-scripts/modify-conf-restart-dbus.sh
WARNING: Error loading config file: /etc/docker/config.json: open /etc/docker/config.json: permission denied
sed: couldn't open temporary file /dbus-conf/sedIRVmn9: Read-only file system

So it looks like the main problem here is that your application isn't being executed with root permissions.

I can see you have user: root in your compose file, but perhaps this isn't having the effect we think it is. Are you launching/executing this container via the extension?

Perhaps try manually launching the container, getting a shell inside it as root, and then, as root inside the container, manually executing your application to see if that works. If it does, then the issue all along was likely that your application wasn't actually being executed with root permissions.
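
As a quick sanity check (a sketch; the container name is the one from the error output above, adjust as needed), you can also look at which user things actually run as:

$ docker exec -it torizon-geopaxapp-svc-debug-1 id -u   # UID of a shell opened via docker exec (0 = root)
$ docker top torizon-geopaxapp-svc-debug-1              # lists the container's processes and the user they run as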

Best Regards,
Jeremias

Yes.

I am unsure of how to run the application manually.

Can you guide me, please?

Is it possible to modify the login1.conf with TorizonCore Builder?

I am unsure of how to run the application manually.

Manually docker run your application container to start it, including any needed flags or options, or just run the docker-compose file. Then get a shell inside your container; you can do this from docker run, or with docker exec after the initial docker run. Then, inside your container, manually execute your application from the command line, making sure you are root inside the container when you do this.
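
Roughly something like the following sketch; the service and container names come from the compose file above, and the application path is a hypothetical placeholder:

# Start the debug service from the compose file in the background
$ docker-compose up -d geopaxapp-svc-debug

# Open a root shell inside the running container (name from 'docker ps')
$ docker exec -it -u root torizon-geopaxapp-svc-debug-1 bash

# Inside the container: confirm the UID, then run the application by hand
$ id -u          # should print 0
$ /path/to/app   # hypothetical location of your APP_EXECUTABLE binary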

It's essentially what you did previously in this thread when you executed dbus-send inside a container, except now you're executing your application.

Is it possible to modify the login1.conf with TorizonCore Builder?

I believe this file is normally located in /etc in the filesystem, correct? If so, in theory this should be customizable with TorizonCore Builder, though I've never modified this specific file myself.

Best Regards,
Jeremias

No, it is located in the /usr/share/dbus-1/system.d folder.

Yes, I can get inside the container, but I am unable to find the application files that the VS Code extension copies into it. I am using the default Dockerfile.debug from the ApolloX extension, and I cannot find where the application files are being copied into the container.

P.S.: this is the Dockerfile.debug:

# ARGUMENTS --------------------------------------------------------------------
##
# Board architecture
##
ARG IMAGE_ARCH=
# For armv7 use:
#ARG IMAGE_ARCH=arm

##
# Base container version
##
ARG BASE_VERSION=3-bookworm

##
# Application Name
##
ARG APP_EXECUTABLE=app

##
# Debug port
##
ARG SSH_DEBUG_PORT=

##
# Run as
##
ARG SSHUSERNAME=

# BUILD ------------------------------------------------------------------------
##
# Deploy Step
##
FROM --platform=linux/${IMAGE_ARCH} \
    torizon/debian:${BASE_VERSION} AS Debug

ARG IMAGE_ARCH
ARG SSH_DEBUG_PORT
ARG APP_EXECUTABLE
ARG SSHUSERNAME
ENV APP_EXECUTABLE ${APP_EXECUTABLE}

# SSH for remote debug
EXPOSE ${SSH_DEBUG_PORT}

# Make sure we don't get notifications we can't answer during building.
ENV DEBIAN_FRONTEND="noninteractive"

# your regular RUN statements here
# Install required packages
RUN apt-get -q -y update && \
    apt-get -q -y install \
    openssl \
    openssh-server \
    rsync \
    file \
    curl \
    gdb && \
    apt-get clean && apt-get autoremove && \
    rm -rf /var/lib/apt/lists/*

# automate for torizonPackages.json
RUN apt-get -q -y update && \
    apt-get -q -y install \
    # DOES NOT REMOVE THIS LABEL: this is used for VS Code automation
    # __torizon_packages_dev_start__
	dbus:arm64 \
	libgpiod-dev:arm64 \
	libnm-dev:arm64 \
	libglib2.0-dev:arm64 \
	libserial-dev:arm64 \
	libevent-dev:arm64 \
	libfmt-dev:arm64 \
	libspdlog-dev:arm64 \
	libboost-system-dev:arm64 \
	libboost-dev:arm64 \
	libpoco-dev:arm64 \
	libespeak-dev:arm64 \
	libdbus-c++-dev:arm64 \
    # __torizon_packages_dev_end__
    # DOES NOT REMOVE THIS LABEL: this is used for VS Code automation
    && \
    apt-get clean && apt-get autoremove && \
    rm -rf /var/lib/apt/lists/*

# ⚠️ DEBUG PURPOSES ONLY!!
# copies RSA key to enable SSH login for user
COPY .conf/id_rsa.pub /id_rsa.pub

# create folders needed for the different components
# configures SSH access to the container and sets environment by default
RUN mkdir /var/run/sshd && \
    sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' \
    -i /etc/pam.d/sshd && \
    if test $SSHUSERNAME != root ; \
    then mkdir -p /home/$SSHUSERNAME/.ssh ; \
    else mkdir -p /root/.ssh ; fi && \
    if test $SSHUSERNAME != root ; \
    then cp /id_rsa.pub /home/$SSHUSERNAME/.ssh/authorized_keys ; \
    else cp /id_rsa.pub /root/.ssh/authorized_keys ; fi && \
    echo "PermitUserEnvironment yes" >> /etc/ssh/sshd_config && \
    echo "Port ${SSH_DEBUG_PORT}" >> /etc/ssh/sshd_config && \
    su -c "env" $SSHUSERNAME > /etc/environment

RUN rm -r /etc/ssh/ssh*key && \
    dpkg-reconfigure openssh-server

# Copy files from context to container
COPY src/webserver/web_root/ /web_root

COPY custom-scripts/ /custom-scripts
# RUN ls -l /custom-scripts
# RUN chmod +x /custom-scripts/modify-conf-and-restart-dbus.sh && \
#     /custom-scripts/modify-conf-and-restart-dbus.sh

CMD [ "/usr/sbin/sshd", "-D" ]

I wanted to find a solution using D-Bus from the application, but apparently I do not have the expertise to work with TorizonCore Builder to modify the policy files. I tried for about a week.

Instead, I took the following approach:

  1. Create a script that runs in the background and checks for a shutdown signal file; when the file is found, the script executes
dbus-send --system --print-reply --dest=org.freedesktop.login1 /org/freedesktop/login1 "org.freedesktop.login1.Manager.PowerOff" boolean:true 

from a bash shell. Apparently, if this message is sent from a bash shell it is sent as root; if it is sent from the application, it is sent as user ID 1000.

  2. Modify the Dockerfile.debug so that a startup script executes both the shutdown script and the CMD [ "/usr/sbin/sshd", "-D" ] (a sketch of both scripts follows below).
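
A rough sketch of those two pieces; the script names and the flag-file path are hypothetical placeholders, and the dbus-send line is the one quoted above:

#!/bin/bash
# /custom-scripts/shutdown-watcher.sh (hypothetical name): poll for a signal file
FLAG=/appdata/data/shutdown-request      # written by the C++ application
while true; do
    if [ -f "$FLAG" ]; then
        rm -f "$FLAG"
        dbus-send --system --print-reply --dest=org.freedesktop.login1 /org/freedesktop/login1 "org.freedesktop.login1.Manager.PowerOff" boolean:true
    fi
    sleep 1
done

#!/bin/bash
# /custom-scripts/startup.sh (hypothetical name): used as the container CMD;
# starts the watcher in the background and keeps sshd in the foreground
/custom-scripts/shutdown-watcher.sh &
exec /usr/sbin/sshd -D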

Everything works as expected.

Further Questions:

  1. I am sharing the ~/appdata folder from the host with the container via the compose file. I have to manually set the permissions of the folder using chmod -R 777 /home/torizon/appdata to be able to write to it from the app, even if I use the :rw flag in the compose file (writing from the container's bash works fine without modifying permissions). My question is: would I have to deal with TorizonCore Builder to create this folder and set its permissions when it is to be production ready?

  2. So far, I understand I need to deal with TorizonCore Builder to set up OTA-related Docker settings for a private repository and auto-starting of the container on power-up. Are there any video/animated tutorials for a novice like me along the lines of the post by @matheus.tx: ApolloX TorizonCore Builder project template support for 'tcb bundle' - #3 by matheus.castello