TensorFlow Lite GPU acceleration not working

Hi,

I’m trying to evaluate AI performance on an Apalis iMX8QM board by following the official TFLite tutorial. Even though I’m using the pre-built Docker image, I’m encountering issues with GPU acceleration.

On my Toradex board, I cloned the linked GitHub example repository and started the Docker image with the provided `docker-compose.yml`, using `${CT_TAG_DEBIAN}=3.0-bookworm`. When I attach to the container and run the program, I only get the message `INFO: Created TensorFlow Lite XNNPACK delegate for CPU.` According to the tutorial’s compatibility table, GPU acceleration should be supported on my board. Setting `USE_GPU_INFERENCE=1` results in the same behavior, and I also verified that the device cgroup rules are correct.
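For reference, the GPU-related parts of my compose file look roughly like this (a sketch, not the exact file; service name, image, device paths, and cgroup major numbers follow the Toradex example and my setup, so treat them as assumptions):

```yaml
# Sketch of the GPU-related pieces of docker-compose.yml
# (names and numbers are illustrative, adjust to your example):
services:
  tflite-example:
    image: torizon/sample-tflite:${CT_TAG_DEBIAN}
    environment:
      - USE_GPU_INFERENCE=1
    devices:
      - /dev/galcore        # Vivante GPU node on i.MX8 (assumption)
      - /dev/dri            # DRM render nodes
    device_cgroup_rules:
      - 'c 199:* rmw'       # galcore major number (typical)
      - 'c 226:* rmw'       # DRM major number
```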

System configuration:

  • Apalis iMX8QM V1.1 on an Ixora Carrier Board
  • TorizonCore version 6.8.0-build.22

Any help would be appreciated.

Hi, I’m the Toradex AI Assistant. The Toradex Support team will follow up with you shortly!



Hello,

Thank you for reaching out with your question about enabling GPU acceleration for TensorFlow Lite on your Apalis iMX8QM board. Let’s go through some steps to help resolve this issue:

  1. Verify GPU Support: Ensure that the TensorFlow Lite version you are using is compatible with GPU acceleration on the i.MX8QM platform. Typically, the tensorflow-lite-vx-delegate is required for hardware acceleration on this platform.

  2. Check Docker Image: Confirm that the Docker image includes the necessary libraries and configurations for GPU support. The image should have the tensorflow-lite-vx-delegate or a similar delegate configured for GPU usage.

  3. Environment Variables: Double-check that the environment variable USE_GPU_INFERENCE=1 is correctly set in your Docker environment. This should be passed when starting the container to ensure the application attempts to use the GPU.

  4. Device Access: Ensure that the Docker container has access to the GPU device. This involves setting the correct device permissions and ensuring that the GPU is exposed to the container. Verify the docker-compose.yml file for the correct device mappings.

  5. Driver and Kernel Support: Make sure that the necessary GPU drivers are installed and that the kernel supports GPU operations. Check the TorizonCore version and its release notes for any known issues or updates related to GPU support.

  6. Logs and Debugging: Review the logs for any error messages or warnings related to GPU initialization. This can provide clues if there are issues with the delegate or other components.
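For steps 4–6, a quick sanity check from inside the container can be sketched like this (a sketch; `/dev/galcore` and `/dev/dri` are the nodes typically used for the Vivante GPU on i.MX8 and may differ on your image):

```shell
#!/bin/sh
# check_gpu_devices: report which of the given device nodes are visible
# from inside the container. Missing nodes usually mean the compose file
# does not map them in, so the delegate silently falls back to the CPU.
check_gpu_devices() {
  for dev in "$@"; do
    if [ -e "$dev" ]; then
      echo "found: $dev"
    else
      echo "missing: $dev"
    fi
  done
}

# Typical GPU nodes on i.MX8 (assumption; adjust for your image):
check_gpu_devices /dev/galcore /dev/dri
```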

If these steps do not resolve the issue, you might consider testing with a different version of TensorFlow Lite or a different Docker image known to work with GPU acceleration on the i.MX8QM.

I hope this helps you get GPU acceleration working on your board. If you have further questions, feel free to ask.

Best regards,
Toradex AI Assistant

Please also check whether these links can help you solve your problem:

eIQ machine learning framework performance comparison: i.MX8M Plus vs. i.MX8QM.

Regarding steps 1-5: I’ve verified that the official Toradex TFLite example correctly configures the Docker container for GPU inference. Setting USE_GPU_INFERENCE=1 didn’t resolve the issue.

The board still defaults to the CPU without any additional warning or error messages that could help diagnose the problem.

Hello @glad,

I apologize for the delay here.

Seeing that INFO message is normal with TensorFlow Lite: the XNNPACK CPU delegate is created regardless of which delegate handles inference, so the message by itself does not mean the GPU is unused.
The most reliable way to tell whether the GPU is being used is performance.

I would recommend comparing against CPU-only inference, which you can force with `USE_HW_ACCELERATED_INFERENCE=0`.
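A simple way to compare the two runs is to time the same inference command under both settings. A minimal sketch (here `run_inference.sh` is a hypothetical placeholder for however you invoke the example):

```shell
#!/bin/sh
# time_ms CMD...: run a command and print its wall-clock duration in ms.
time_ms() {
  start=$(date +%s%N)
  "$@" >/dev/null 2>&1
  end=$(date +%s%N)
  echo $(( (end - start) / 1000000 ))
}

# Hypothetical usage inside the container, one run per mode:
#   export USE_HW_ACCELERATED_INFERENCE=0; time_ms ./run_inference.sh  # CPU only
#   export USE_GPU_INFERENCE=1;            time_ms ./run_inference.sh  # GPU path
time_ms sleep 0.1
```

If both runs take roughly the same time, the GPU path is most likely not being exercised.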

Best Regards,
Bruno