Greetings. We tried building a custom Yocto image for machine learning, but the bitbake command ran for hours and failed every time due to resource constraints on our side.
This sample supposedly provides that setup: when we run it, it launches a Debian container with OpenVX/libvx_delegate configured. However, we are having problems running that as well. We get the following error:
Vx delegate: allowed_cache_mode set to 0.
Vx delegate: allowed_builtin_code set to 0.
Vx delegate: error_during_init set to 0.
Vx delegate: error_during_prepare set to 0.
Vx delegate: error_during_invoke set to 0.
WARNING: Fallback unsupported op 32 to TfLite
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
So we were wondering if it is possible to run a container (similar to the Debian container that Toradex provides) that has OpenVX and the libvx delegate file configured.
@ allan.tx
Hi, I’m the Toradex AI Assistant.
I might make mistakes. Our support team will step in if needed. If I answered your question, please tick the “Solution” checkbox. For follow-up questions to me, please use @ToradexAI in your post.

Hello! Thank you for your detailed question.
From your description, it seems you are facing two related issues:
- Resource constraints while building a custom Yocto image for ML:
Building Toradex Yocto images, especially with machine learning components, does require significant system resources, commonly at least 100 GB of free disk space and substantial memory. If that's problematic, you may want to use the pre-built reference images, which already include components for machine learning. This saves time and also ensures the necessary libraries (OpenVX, TFLite, etc.) are set up and tested.
You’ll find more info here:
Machine Learning Documentation Overview | Toradex Developer Center
- Problems running the TFLite/RTSP sample container (OpenVX configuration):
The error messages you shared indicate that the VX delegate is not being used for hardware acceleration and the workload is falling back to TensorFlow Lite's XNNPACK CPU delegate. This often happens if the necessary hardware drivers or permissions are not exposed from the host into the container.
Here are some steps you can take:
- Ensure hardware acceleration is available on the host and accessible from the container. For OpenVX and hardware-accelerated delegates, you may need to expose specific devices and permissions using the Docker --device and --privileged flags, e.g.:
docker run --privileged --device=/dev/vchiq ... <your other args>
- Base your container on the Toradex reference images or pull containers built from those images. The torizonextras/arm64v8-sample-tflite-rtsp sample provides a good starting point for working with ML and includes the relevant libraries.
- Verify installed libraries/drivers:
Check that your container has both OpenVX and the required delegate libraries. If you customize your Dockerfile, you can reference the dependencies found in the repositories linked by Toradex.
- Test with camera/UVC:
If you use video input, be sure to configure the container for the relevant devices (/dev/video*).
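Before launching inference, a quick script inside the container can confirm that the expected device nodes were actually passed through. This is a minimal sketch; /dev/galcore (the Vivante GPU/NPU node on i.MX 8-based modules) is an assumption, so adjust the list for your module:

```python
import glob
import os

# Device nodes the container typically needs for accelerated inference.
# /dev/galcore is an assumption (Vivante GPU/NPU on i.MX 8 modules);
# adjust this list for your hardware.
REQUIRED_DEVICES = ["/dev/galcore"]

def missing_devices(required=REQUIRED_DEVICES):
    """Return the required device nodes NOT visible in this container."""
    return [dev for dev in required if not os.path.exists(dev)]

def video_devices():
    """List V4L2 capture nodes passed through to the container."""
    return sorted(glob.glob("/dev/video*"))

if __name__ == "__main__":
    print("missing accelerator devices:", missing_devices())
    print("available video devices:", video_devices())
```

If missing_devices() reports anything, add the corresponding --device flags (or --privileged) to your docker run command and re-test.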
Reference: Machine Learning Application Development
Summary/Recommendation:
The easiest way is to leverage the provided Torizon reference images for containers. Try running the pre-built TFLite RTSP sample with the appropriate device/peripheral access flags. If you still get CPU fallback, check that your device supports hardware acceleration for the specific operations you use and that the required permissions are granted to the container.
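One way to make the CPU fallback explicit in your own application is to attempt loading the VX delegate and only drop to CPU (XNNPACK) execution when that fails. A hedged sketch, assuming tflite_runtime is installed in the container and that the delegate library lives at /usr/lib/libvx_delegate.so (verify the actual path in your image):

```python
import os

def load_vx_delegate(path="/usr/lib/libvx_delegate.so"):
    """Try to load the VX delegate; return None so the caller can fall
    back to CPU (XNNPACK) execution. The default path is an assumption;
    check where libvx_delegate.so is installed in your container."""
    if not os.path.exists(path):
        return None
    try:
        # tflite_runtime is assumed to ship in the ML container image.
        from tflite_runtime.interpreter import load_delegate
        return load_delegate(path)
    except Exception:
        return None

delegate = load_vx_delegate()
experimental_delegates = [delegate] if delegate is not None else []
# Pass the list to the interpreter, e.g.:
# Interpreter(model_path="model.tflite",
#             experimental_delegates=experimental_delegates)
```

Logging which branch was taken makes it obvious whether the NPU path is active, instead of inferring it from the "Fallback unsupported op ... to TfLite" warning.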
If you need a container with OpenVX/libvx_delegate already set up, torizonextras/arm64v8-sample-tflite-rtsp is the recommended starting point. If you continue to see the fallback, please share more details about your device model and Docker command so I can provide more focused guidance!
Best regards,
Toradex AI Assistant.