We have a C++ application running on a device. We intend to shut down the system (an i.MX8M module) through a shutdown command issued by the application container. We believe the code is correct, but apparently the application container does not have the permissions required to execute it.
A sample code is provided below:
// Shutdown failed: GDBus.Error:org.freedesktop.DBus.Error.AccessDenied:
// Permission denied
static int shutdownCommand() {
    GDBusConnection* conn;
    GError* error = NULL;

    // Connect to the system bus
    conn = g_bus_get_sync(G_BUS_TYPE_SYSTEM, NULL, &error);
    if (error != NULL) {
        g_printerr("Connection Error: %s\n", error->message);
        g_error_free(error);
        return 1;
    }

    // PowerOff takes a single boolean, "interactive" (whether polkit may ask
    // for authorization interactively). The floating reference is consumed
    // by g_dbus_connection_call_sync(), so no separate unref is needed.
    GVariant* param = g_variant_new("(b)", TRUE);

    // Call the PowerOff method on logind
    GVariant* result;
    result = g_dbus_connection_call_sync(conn,
        "org.freedesktop.login1",          // Destination service
        "/org/freedesktop/login1",         // Object path
        "org.freedesktop.login1.Manager",  // Interface
        "PowerOff",                        // Method
        param,                             // Parameters
        NULL,                              // Expected reply type (not checked)
        G_DBUS_CALL_FLAGS_NONE,            // Flags
        -1,                                // Default timeout
        NULL,                              // Cancellable
        &error);
    if (error != NULL) {
        g_printerr("Shutdown failed: %s\n", error->message);
        g_error_free(error);
        g_object_unref(conn);              // also release the connection on failure
        return 1;
    }
    g_variant_unref(result);
    g_print("Shutdown command sent successfully\n");

    // Clean up
    g_object_unref(conn);
    return 0;
}
Typing before I’ve finished my first glass of iced tea for the day
Outside of all the containers, in the HOST environment, create a cron job that wakes up every N seconds and looks for a file with a specific name in a specific directory. When the file is found, delete it and shut down the computer. Expose that directory to each container that should have shutdown capability.
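A minimal sketch of that host-side job (the directory, file name, and the use of `systemctl poweroff` are all assumptions, not from this thread):

```shell
#!/bin/sh
# Host-side sentinel watcher, run periodically from cron (sketch).
# The container requests a shutdown by creating $TRIGGER_FILE inside a
# directory that is bind-mounted into it.
TRIGGER_FILE="${TRIGGER_FILE:-/home/torizon/shutdown-trigger/do-shutdown}"
POWEROFF="${POWEROFF:-systemctl poweroff}"

check_and_shutdown() {
    if [ -f "$TRIGGER_FILE" ]; then
        rm -f "$TRIGGER_FILE"   # consume the sentinel so we fire only once
        $POWEROFF
    fi
}

check_and_shutdown
```

A crontab entry runs it at most once a minute; for sub-minute granularity, a small sleep loop in a systemd service works just as well.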
-or-
Create a systemd service that listens on a message queue or monitors a region of shared memory for a specific message, and shuts down the computer when it is received.
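A lighter-weight variation on the same idea: instead of a queue or shared memory, a systemd path unit on the host can watch the shared directory and trigger a oneshot service. Unit names and paths below are illustrative, not from the thread:

```ini
# /etc/systemd/system/shutdown-trigger.path  (illustrative)
[Path]
PathExists=/home/torizon/shutdown-trigger/do-shutdown

[Install]
WantedBy=multi-user.target
```

```ini
# /etc/systemd/system/shutdown-trigger.service  (illustrative)
[Service]
Type=oneshot
ExecStart=/bin/rm -f /home/torizon/shutdown-trigger/do-shutdown
ExecStart=/usr/bin/systemctl poweroff
```

By default a path unit activates the service with the matching name when the watched path appears.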
You basically need the cross-container communications answer I provided a while back.
Even if you could open up all of DBus for a container and give it the ability to shut down the computer directly, I wouldn’t do it. You would be opening up a serious can of worms. Docker was primarily created to block that.
If you are really old school you could have a shared relational database with a trigger on a table that kicks off the job on the host.
I tried sharing a directory with the container in the docker-compose file. But apparently the generated volumes are owned by root, and I am unable to write files to them either from the container or by SSH-ing into the host as the torizon user.
In order to implement your first suggestion I would also need to be able to write to a file in one of these locations.
@jeremias.tx Hi! Thanks for the link. I have seen it and used the same approach to send the D-Bus command. I have even mounted /var/run/dbus to give the container the appropriate access, and D-Bus works for other parts of the application, but it won't work for the shutdown command.
Is there a way to shutdown the application container from within the C++ application and then execute a shutdown script on the host using the docker-compose or dockerfile that executes the script upon shutdown of the container?
After that my module does appear to shut down, so it “should” work. Not sure if there’s something special needed to do this from within a C++ application.
Is there a way to shutdown the application container from within the C++ application and then execute a shutdown script on the host using the docker-compose or dockerfile that executes the script upon shutdown of the container?
Well, a container continues to run until its main process exits. So if the main process is your application, the container should exit once your application exits. You would then need another process monitoring the state of your container that initiates your script when the container is no longer running.
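One way to implement that monitor on the host is with `docker wait`, which blocks until the named container exits. A sketch; the container name is taken from a log later in this thread, and the script path is an assumption:

```shell
#!/bin/sh
# Blocks until the container stops, then runs a host-side script (sketch).
CONTAINER="${CONTAINER:-torizon-geopaxapp-svc-debug-1}"
ON_EXIT="${ON_EXIT:-/home/torizon/on-container-exit.sh}"

monitor() {
    docker wait "$CONTAINER" >/dev/null 2>&1   # returns when the container exits
    "$ON_EXIT"
}

# On the host, start it in the background: monitor &
```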
You are executing the command outside of a container. If you execute it from inside a properly configured container as indicated in the link provided by @jeremias.tx it will work.
I never tried this before from a C++ application, so I’m not sure what the issue might be. But perhaps there’s some extra limitation or extra configuration needed for this to work from an application as opposed to the command line? Maybe there’s something in the documentation for the D-Bus library/API you’re using?
The D-Bus policy in org.freedesktop.login1.conf does not allow any user except UID 0 (root) to execute these methods.
My question hence becomes: even when the app container is run as root via the user flag in docker-compose, why does the application send messages with a user ID of 1000?
Furthermore, if I were to modify the policy in org.freedesktop.login1.conf by adding something like a stanza that allows my user:
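The exact stanza I experimented with isn't reproduced here; for illustration, a typical D-Bus policy of this shape looks roughly like the following (the user name and the breadth of the allow rule are assumptions, and a permissive rule like this is exactly the "can of worms" warned about earlier in the thread):

```xml
<!-- illustrative only: grant a specific non-root user access to logind -->
<policy user="torizon">
  <allow send_destination="org.freedesktop.login1"
         send_interface="org.freedesktop.login1.Manager"
         send_member="PowerOff"/>
</policy>
```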
To apply the change I tried to:
- create a script that appends said policy to the conf file, and copy the script into the container;
- share the config file/directory with the container using the :rw flag so the config can be modified.
But the script doesn’t execute, failing with:
torizon@verdin-imx8mm-14756428:~$ docker exec -it torizon-geopaxapp-svc-debug-1 /custom-scripts/modify-conf-restart-dbus.sh
WARNING: Error loading config file: /etc/docker/config.json: open /etc/docker/config.json: permission denied
sed: couldn't open temporary file /dbus-conf/sedIRVmn9: Read-only file system
So it looks like the main problem here is that your application isn’t being executed with root user/permissions.
I can see you have user: root in your compose file, but perhaps this isn’t having the effect we think it is. Are you launching/executing this container via the extension?
Perhaps try manually launching the container and getting a shell inside it as root, then manually executing your application as root inside the container, and see if that works. If it does, then perhaps the issue all along was that your application wasn’t actually being executed with root permissions.
I am unsure of how to run the application manually.
Manually docker run your application container to start it, including any needed flags or options, or just run the docker-compose file. Then get a shell inside your container; you can do this from docker run, or with docker exec after the container is running. Then, inside the container, manually execute your application from the command line, making sure you are root when you do so.
It’s essentially what you did previously in this thread when you executed dbus-send inside a container except now you’re executing your application.
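Once inside the container, it is worth confirming the identity the shell (and therefore the hand-launched application) actually has, since the AccessDenied error suggests the call is going out as UID 1000 rather than root. For example (the container name is taken from the log above):

```shell
# On the host (illustrative):
#   docker exec -it -u root torizon-geopaxapp-svc-debug-1 /bin/bash
# Then, inside the container, check the effective identity:
uid="$(id -u)"
user="$(id -un)"
echo "running as $user (uid $uid)"   # expect root / uid 0 before launching the app
```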
Is it possible to modify the login1.conf with torizoncore builder?
I believe this file is normally located in /etc in the filesystem, correct? If so, in theory it should be customizable with TorizonCore Builder, though I’ve never modified this specific file myself.
No, it is located in the /usr/share/dbus-1/system.d folder.
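I have not verified this for files under /usr, but TorizonCore Builder's filesystem customization is nominally the mechanism for shipping such a change: you mirror the target path under a local changes directory and list that directory in tcbuild.yaml. A sketch, where the local directory name is an assumption:

```yaml
# tcbuild.yaml fragment (sketch): ship a modified policy file with the image.
# The local layout mirrors the target rootfs, e.g.:
#   changes/usr/share/dbus-1/system.d/org.freedesktop.login1.conf
customization:
  filesystem:
    - changes/
```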
Yes, I can get inside the container, but I am unable to find the application files copied over by the VS Code extension. I am using the default Dockerfile.debug from the ApolloX extension, but I cannot find a location where the application files are copied into the container.
P.S.: this is the Dockerfile.debug:
# ARGUMENTS --------------------------------------------------------------------
##
# Board architecture
##
ARG IMAGE_ARCH=
# For armv7 use:
#ARG IMAGE_ARCH=arm
##
# Base container version
##
ARG BASE_VERSION=3-bookworm
##
# Application Name
##
ARG APP_EXECUTABLE=app
##
# Debug port
##
ARG SSH_DEBUG_PORT=
##
# Run as
##
ARG SSHUSERNAME=
# BUILD ------------------------------------------------------------------------
##
# Deploy Step
##
FROM --platform=linux/${IMAGE_ARCH} \
torizon/debian:${BASE_VERSION} AS Debug
ARG IMAGE_ARCH
ARG SSH_DEBUG_PORT
ARG APP_EXECUTABLE
ARG SSHUSERNAME
ENV APP_EXECUTABLE ${APP_EXECUTABLE}
# SSH for remote debug
EXPOSE ${SSH_DEBUG_PORT}
# Make sure we don't get notifications we can't answer during building.
ENV DEBIAN_FRONTEND="noninteractive"
# your regular RUN statements here
# Install required packages
RUN apt-get -q -y update && \
apt-get -q -y install \
openssl \
openssh-server \
rsync \
file \
curl \
gdb && \
apt-get clean && apt-get autoremove && \
rm -rf /var/lib/apt/lists/*
# automate for torizonPackages.json
RUN apt-get -q -y update && \
apt-get -q -y install \
# DOES NOT REMOVE THIS LABEL: this is used for VS Code automation
# __torizon_packages_dev_start__
dbus:arm64 \
libgpiod-dev:arm64 \
libnm-dev:arm64 \
libglib2.0-dev:arm64 \
libserial-dev:arm64 \
libevent-dev:arm64 \
libfmt-dev:arm64 \
libspdlog-dev:arm64 \
libboost-system-dev:arm64 \
libboost-dev:arm64 \
libpoco-dev:arm64 \
libespeak-dev:arm64 \
libdbus-c++-dev:arm64 \
# __torizon_packages_dev_end__
# DOES NOT REMOVE THIS LABEL: this is used for VS Code automation
&& \
apt-get clean && apt-get autoremove && \
rm -rf /var/lib/apt/lists/*
# ⚠️ DEBUG PURPOSES ONLY!!
# copies RSA key to enable SSH login for user
COPY .conf/id_rsa.pub /id_rsa.pub
# create folders needed for the different components
# configures SSH access to the container and sets environment by default
RUN mkdir /var/run/sshd && \
sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' \
-i /etc/pam.d/sshd && \
if test $SSHUSERNAME != root ; \
then mkdir -p /home/$SSHUSERNAME/.ssh ; \
else mkdir -p /root/.ssh ; fi && \
if test $SSHUSERNAME != root ; \
then cp /id_rsa.pub /home/$SSHUSERNAME/.ssh/authorized_keys ; \
else cp /id_rsa.pub /root/.ssh/authorized_keys ; fi && \
echo "PermitUserEnvironment yes" >> /etc/ssh/sshd_config && \
echo "Port ${SSH_DEBUG_PORT}" >> /etc/ssh/sshd_config && \
su -c "env" $SSHUSERNAME > /etc/environment
RUN rm -r /etc/ssh/ssh*key && \
dpkg-reconfigure openssh-server
# Copy files from context to container
COPY src/webserver/web_root/ /web_root
COPY custom-scripts/ /custom-scripts
# RUN ls -l /custom-scripts
# RUN chmod +x /custom-scripts/modify-conf-and-restart-dbus.sh && \
# /custom-scripts/modify-conf-and-restart-dbus.sh
CMD [ "/usr/sbin/sshd", "-D" ]
I wanted to find a solution using D-Bus from the application, but apparently I do not have the expertise to work with TorizonCore Builder to modify the policy files. I tried for about a week.
Instead I took the following approach:
create a script that runs in the background checking for a shutdown-signal file; when the file is found, the script executes the shutdown command from a bash shell. Apparently, if this message is sent from a bash shell, it is sent as root; if sent from the application, it is sent as user ID 1000.
modify the Dockerfile.debug to execute the shutdown script and the CMD [ "/usr/sbin/sshd", "-D" ] from a startup script.
Everything works as expected.
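For reference, the watcher half of this approach can be sketched as follows. The signal-file path is an assumption; the dbus-send call is the standard logind PowerOff invocation discussed earlier in the thread:

```shell
#!/bin/sh
# Runs in the background inside the container, started from the startup script.
SIGNAL_FILE="${SIGNAL_FILE:-/appdata/shutdown.signal}"

request_poweroff() {
    # Standard logind call; from a root bash shell this goes out as UID 0.
    dbus-send --system --print-reply \
        --dest=org.freedesktop.login1 \
        /org/freedesktop/login1 \
        org.freedesktop.login1.Manager.PowerOff \
        boolean:true
}

watch_for_signal() {
    while [ ! -f "$SIGNAL_FILE" ]; do
        sleep 1
    done
    rm -f "$SIGNAL_FILE"    # consume the signal
    request_poweroff
}

# Started from the startup script as: watch_for_signal &
```

The application then only needs to create `$SIGNAL_FILE`; it never talks to D-Bus itself.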
Further Questions:
I am sharing the ~/appdata folder from the host with the container via the compose file. I have to manually set the permissions of the folder using chmod -R 777 /home/torizon/appdata to be able to write to it from the app (the container's bash works fine without modifying permissions), even if I use the :rw flag in the compose file. My question is: will I have to deal with TorizonCore Builder to create this folder and set its permissions when this is to be production-ready?
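As an alternative to chmod 777, one common pattern is to have the container itself take ownership of the bind mount at startup, while it is still root, before the application writes to it. A sketch, where the path and UID are assumptions:

```shell
#!/bin/sh
# Entrypoint fragment (sketch): normalize ownership of the shared bind mount.
APPDATA="${APPDATA:-/appdata}"
APP_UID="${APP_UID:-1000}"   # UID the application runs as (assumption)

fix_appdata_perms() {
    mkdir -p "$APPDATA"
    # Must run while the entrypoint is still root; adjust group ownership
    # here too if the application relies on it.
    chown -R "$APP_UID" "$APPDATA"
}
```

This keeps the host-side directory at normal permissions instead of world-writable.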