I’ve been trying to use the GPU on the iMX8MP to show a live feed on screen using wxWidgets. By default wxWidgets won’t use OpenGL, so I used wxGLCanvas to get the job done, but I’m getting the error mentioned below:
runtime directory ‘/tmp/1000-runtime-dir’ is not owned by UID 0, but a directory permissions 0700 owned by UID 1000 GID 0
libEGL warning: MESA-LOADER: failed to open vivante: /usr/lib/dri/vivante_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/aarch64-linux-gnu/dri:$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
I am running a Weston container for the GUI. My application resides in a wayland-vivante based container. How can I make sure the Vivante drivers work?
Hi, I’m the Toradex AI Assistant. The Toradex Support team will follow up with you shortly!
Hello Samarth,
Thank you for reaching out with your question about GPU acceleration on the iMX8MP using wxWidgets. It sounds like you’re encountering an issue with the Vivante drivers. Let’s go through some steps to help resolve this:
Verify Driver Installation: First, ensure that the Vivante GPU drivers are installed in your container. These drivers are crucial for enabling hardware acceleration on the iMX8MP. Check if the package libgal-vivante1 or a similar package is installed in your container.
Container Configuration: Make sure your Wayland-Vivante container is configured to access the GPU. This typically involves ensuring that the /dev/galcore device is accessible within the container. You can achieve this by adding the following to your Docker run command:
--device=/dev/galcore
Weston Configuration: Since you are using a Weston container for the GUI, ensure it is correctly set up to communicate with your application container. This involves sharing the Wayland socket between the containers. You can do this by bind-mounting the /tmp directory:
-v /tmp:/tmp
This allows the Wayland client (your application) to communicate with the Weston compositor.
Check Permissions: The error message about the runtime directory suggests a permissions issue. Ensure that the UID and GID used in your container match those expected by the Weston container. You might need to adjust the user running the application to match the UID 1000.
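Putting those pieces together, the application container’s run command might look like the following sketch. The image name `my-wx-app` is a placeholder, and whether you need `--user 1000` depends on which UID your Weston setup expects:

```shell
# Placeholder image name; flags follow the steps above:
# expose the GPU node, share the Wayland socket, run with a matching UID
docker run --rm -it \
    --device=/dev/galcore \
    -v /tmp:/tmp \
    --user 1000 \
    my-wx-app
```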
Troubleshooting: If the Vivante drivers are still not found, verify the library paths in your container. The error indicates that the library is being searched in /usr/lib/dri. Ensure that the Vivante driver files are present in the expected directories or adjust the LD_LIBRARY_PATH to include the correct path.
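To see where the GPU libraries actually ended up inside the container, a quick check like the following can help. The `export` path is purely an example, not a known location on Torizon images:

```shell
# List candidate driver libraries (the directory layout varies per image;
# this is a diagnostic, not a fix)
find /usr/lib -name '*vivante*' -o -name 'libEGL*' 2>/dev/null || true

# If the libraries live outside the default search path, extend it.
# "/usr/lib/gpu-libs" below is an example path only:
export LD_LIBRARY_PATH="/usr/lib/gpu-libs:${LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```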
By following these steps, you should be able to resolve the issue with the Vivante drivers and ensure that your application can utilize GPU acceleration effectively on the iMX8MP with Torizon.
If you have any further questions, feel free to ask.
Best regards,
Toradex AI Assistant
Please, also check if these links can help you solve your problem:
Here is my application container based on wayland-vivante, with a little customization in my Dockerfile:
FROM torizon/wayland-base-vivante:3
# Set the Wayland display variable
ENV WAYLAND_DISPLAY=wayland-0
ENV XDG_RUNTIME_DIR=/tmp/1000-runtime-dir
# Create the runtime directory with appropriate permissions
RUN mkdir -p /tmp/1000-runtime-dir && chmod 0700 /tmp/1000-runtime-dir
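For reference, one way to avoid the ownership warning is to make the directory’s owner match the UID the application runs as. The sketch below creates the runtime directory owned by UID 1000 and runs the application as that user; it assumes the base image provides a user with UID 1000, as Torizon containers typically do:

```dockerfile
FROM torizon/wayland-base-vivante:3

ENV WAYLAND_DISPLAY=wayland-0
ENV XDG_RUNTIME_DIR=/tmp/1000-runtime-dir

# Create the runtime directory owned by UID 1000 so its ownership
# matches the user that will access the Wayland socket
RUN mkdir -p /tmp/1000-runtime-dir \
    && chown 1000:1000 /tmp/1000-runtime-dir \
    && chmod 0700 /tmp/1000-runtime-dir

# Run the application as UID 1000 rather than root
USER 1000
```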
After looking into this I have the following findings. Your application is complaining about:
libEGL warning: MESA-LOADER: failed to open vivante: /usr/lib/dri/vivante_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/aarch64-linux-gnu/dri:$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
The issue is that Vivante is a vendor-specific GPU interface from NXP. What this means is that Vivante doesn’t provide any DRI shared library, which is what your framework is looking for here.
We don’t typically see this with other graphics frameworks, because most of them use higher-level abstractions that then call into the proprietary mechanisms. In your case, the graphics framework you are using seems to be trying to use the lower software layers directly, which Vivante doesn’t provide.
I don’t have any further suggestions since this seems specific to how the wx framework operates.
I see. But then how can I make hardware accelerated GUI applications?
I’m trying with torizon/qt6-wayland-examples-vivante:3 container.
So Weston + qt-vivante containers, and I ran the openglcube and other example apps mentioned here. On checking with the gputop tool (I created the qt container with --privileged), I don’t see any application attached to the GPU.
1) Is the Vivante GPU really being utilized by the sample OpenGL apps that Toradex provides? If yes, how can I develop similar applications? I tried building a simple Qt6 app with OpenGL ES support in the qt container itself, but ran into build issues because the qt6 apt package doesn’t support OpenGL; I would probably have to build Qt from source. I also tried using the OpenGL libs directly, but no luck. Which framework is preferred for using graphics acceleration in the Torizon stack?
2) In continuation of point 1: I ultimately want to showcase the image-processing capability of the iMX8MP by leveraging its GPU, because I see almost 80% CPU utilization when running OpenCV Haar-cascade face detection on an RTSP stream, built by modifying the example given by Toradex here. That was with a native OpenCV application; when I integrate it with wxWidgets it is basically unusable because of my application’s very high CPU usage.
3) I also tried building the tflite-rtsp container to check, but I’m running into multiple build errors. The sample was updated 2 years ago. Can you confirm that the bookworm version is tested?
I don’t want to go with a custom OS build, as I’m not familiar with that and my timeline doesn’t allow it.
How can I use the Toradex Torizon stack (dev environment, SDK, libraries, packages) to create an application that fully utilizes the hardware on the iMX8MP? Since my main application uses wxWidgets, that’s my first choice of framework. I thought of using the wxGLCanvas class, which supports OpenGL, in my framework, and probably using OpenCV with OpenCL/Vulkan/GStreamer so that my application doesn’t run only on the CPU, but no luck so far.
Is the vivante GPU really being utilized with the sample openGL apps that toradex is providing?
I’m confused: what you shared clearly shows the processes for the cube example in gputop. I can see the same on my side, and top shows these processes barely use any CPU. It seems fairly clear they are utilizing the GPU, then.
I tried building a simple QT6 app with openGL ES support on the qt container itself but ran into build issue because of the qt6 apt package not supporting openGL.
Not sure what you did, but basic Qt6 apps should be using the GPU, as shown.
Which is the most preferred framework for using graphics acceleration in torizon stack?
Any of these should work. For any other graphical framework we can’t make guarantees, since it’s not feasible for us to test every framework out there.
The sample was updated 2 years ago. Can you confirm that the bookworm version is tested?
Are you sure you’re using the bookworm version? From what I see, this was last updated 6 months ago. These samples are not formally tested and are provided as-is; any issues are examined as reported.
Okay, I was referring to the “clients associated to GPU” in the gputop output. Will any Qt application use the GPU by default? I believe it has to be built with GL support, and the libs for that cannot be found in the Toradex Debian repos.
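One quick check, assuming the examples are Qt Quick apps: Qt’s scene graph prints its render backend at startup when `QSG_INFO` is set, which shows whether OpenGL ES was actually picked up. The binary name below is a placeholder:

```shell
# Logs the scene-graph backend (e.g. OpenGL / OpenGL ES) at startup
QSG_INFO=1 ./my-qt-app
```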
I came across a few Toradex pages about using GStreamer/OpenCV in this context.
From this page:
OpenCV provides modules for Machine Learning and Neural Networks. These modules target Arm Cortex-A cores and utilize Arm Neon to process models. However, at this time, these modules are not accelerated by the GPU. You can use other inference engines in conjunction with OpenCV to increase AI processing performance. See other AI solutions that Toradex offers through its network of partners.
Which other inference engines can be used? I don’t want to approach toradex partners at the moment.
And so basically I cannot use GPU acceleration as of now? I was going through a few articles on integrating OpenCV + OpenGL, but from what I’ve read so far, Haar-cascade models will only use the CPU. Has anyone from Toradex tried using OpenCV with OpenGL?
Of course, not all of these are necessarily hardware accelerated, and some are just demos. Anything more advanced than what we have documented is not really within Toradex’s expertise.
And basically I cannot use GPU acceleration as of now?
Last I checked our tensorflow-lite example should be capable of using the GPU/NPU.
Has anyone from toradex tried using opencv with openGL?
I checked around but the only thing we have is what we have documented on our website.
This sample was updated 19 days ago on the repo. I was expecting it to run without any issues. Please correct me if I’m missing something.
Could you try modifying the Dockerfile to have ARG BASE_VERSION=3.3? This seems similar to another known issue.
Firstly, I tried building OpenCV with the OpenGL/OpenCL headers provided by NXP in the imx-wayland-dev package, but CMake refuses to accept the OpenCL headers path provided by NXP. I’m able to use OpenCL in an independent terminal-based program, but not when trying to use it with OpenCV.
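For what it’s worth, OpenCV’s build can usually be pointed at vendor OpenCL headers and libraries explicitly at configure time. The sketch below is an assumption, not verified against the NXP packages: the cache variable names are OpenCV’s traditional OpenCL hints, and the paths are examples only (newer OpenCV versions load OpenCL dynamically at runtime, so your version may not need the library at build time at all):

```shell
# Sketch only: variable names and paths are assumptions, adjust to your container
cmake -D WITH_OPENCL=ON \
      -D OPENCL_INCLUDE_DIR=/usr/include \
      -D OPENCL_LIBRARY=/usr/lib/libOpenCL.so \
      ..
```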
Secondly, I tried modifying the tflite/rtsp source to accept an RTSP feed as the source rather than a USB camera. I’m able to read the RTSP feed in an independent program, but when I try to read the same feed in the tflite/rtsp Python file, GStreamer fails to start the stream. Also, tflite/rtsp takes around an hour and a half to build the image after a source modification. How can I speed up this process? I don’t want to rebuild the inference model every time, since I don’t need any changes there; just the src object_detection.py file needs modification.
Then I ran the container and used Python to confirm the existence of OpenCL support:
docker run --privileged -v /dev:/dev --rm -it test /bin/bash
root@6916ea895137:/home/torizon# python3
Python 3.11.2 (main, Nov 30 2024, 21:22:50) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> print(cv2.ocl.haveOpenCL())
True
Also, tflite/rtsp takes around an hour and a half to build the image after a source modification. How can I speed up this process? I don’t want to rebuild the inference model every time, since I don’t need any changes there; just the src object_detection.py file needs modification.
The Docker build will rebuild every layer from the modified line onwards. So if you move the step that copies the Python file to the end of the Dockerfile, it shouldn’t need to rerun the prior steps. If your docker build isn’t caching the steps properly, some configuration issue might be preventing the build cache from working.
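As a sketch of that ordering (the base image, package names, and paths below are placeholders, not the sample’s actual contents):

```dockerfile
# Heavy, rarely-changing steps first so their layers stay cached
FROM base-image-placeholder
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*
COPY models/ /app/models/

# Frequently edited file last: changing it only invalidates this layer onwards
COPY src/object_detection.py /app/src/object_detection.py
CMD ["python3", "/app/src/object_detection.py"]
```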