Same TFLite example works on i.MX8M Plus (VX) but fails on i.MX95 (Neutron error 442)

Hello Toradex team,

We are currently evaluating Neural Network acceleration on Verdin i.MX95 using TensorFlow Lite + Neutron delegate, and we are facing a blocking issue:

Even official TensorFlow Lite examples provided by Toradex fail with Neutron error 442.


:white_check_mark: Context

Hardware / Software:


:white_check_mark: Test performed (official Toradex example)

Command used:

python3 label_image.py \
  -i grace_hopper.bmp \
  -m mobilenet_v1_1.0_224_quant.tflite \
  -l labels.txt \
  -e /usr/lib/libneutron_delegate.so

Result on Verdin i.MX95:

root@verdin-imx95-12594079:/usr/bin/tensorflow-lite-2.16.2/examples# python3 label_image.py -i grace_hopper.bmp -m mobilenet_v1_1.0_224_quant.tflite -i grace_hopper.bmp -l labels.txt -e /usr/lib/libneutron_delegate.so
Loading external delegate from /usr/lib/libneutron_delegate.so with args: {}
INFO: NeutronDelegate delegate: 29 nodes delegated out of 31 nodes with 1 partitions.
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
fail to create neutron inference job
Error: component='Neutron Driver', category='internal fault', code=442
Traceback (most recent call last):
  File "/usr/bin/tensorflow-lite-2.16.2/examples/label_image.py", line 120, in
    interpreter.invoke()
  File "/usr/lib/python3.12/site-packages/tflite_runtime/interpreter.py", line 941, in invoke
    self._interpreter.Invoke()
RuntimeError: /usr/src/debug/tensorflow-lite-neutron-delegate/2.16.2/neutron_delegate.cc:261 neutronRC != ENONE (113203 != 0)
Node number 31 (NeutronDelegate) failed to invoke.

:white_check_mark: Same test on Verdin i.MX8M Plus (comparison)

With the same model and same command structure:

python3 label_image.py \
  -i grace_hopper.bmp \
  -m mobilenet_v1_1.0_224_quant.tflite \
  -l labels.txt \
  -e /usr/lib/libvx_delegate.so

Result on i.MX8M Plus:

root@verdin-imx8mp-15676722:/usr/bin/tensorflow-lite-2.16.2/examples# python3 label_image.py -i grace_hopper.bmp -m mobilenet_v1_1.0_224_quant.tflite -l labels.txt  -e /usr/lib/libvx_delegate.so
Loading external delegate from /usr/lib/libvx_delegate.so with args: {}
INFO: Vx delegate: allowed_cache_mode set to 0.
INFO: Vx delegate: device num set to 0.
INFO: Vx delegate: allowed_builtin_code set to 0.
INFO: Vx delegate: error_during_init set to 0.
INFO: Vx delegate: error_during_prepare set to 0.
INFO: Vx delegate: error_during_invoke set to 0.
W [HandleLayoutInfer:332]Op 162: default layout inference pass.
Warm-up time: 3368.6 ms

Inference time: 3.1 ms

0.870588: military uniform
0.031373: Windsor tie
0.011765: mortarboard
0.007843: bow tie
0.007843: bulletproof vest

:white_check_mark: NPU acceleration is fully functional on i.MX8M Plus.


:white_check_mark: Conclusion

This demonstrates that:

  • The model is valid.
  • The application works correctly.
  • The same example and model run successfully on i.MX8M Plus.
  • The failure happens only on i.MX95 when using Neutron delegate.

:white_check_mark: Questions

Could you please confirm:

  1. Is Neutron officially supported today on Verdin i.MX95?
  2. Is error 442 a known issue?
  3. Is a fix or BSP update planned?
  4. Is Neutron expected to work similarly to i.MX8M Plus (VX delegate)?
  5. Have you been able to reproduce this internally?

Hi, I’m the Toradex AI Assistant.

I may make mistakes or occasionally misunderstand you. Our support team will step in if needed.


I’d be glad to help troubleshoot this Neutron delegate issue on your Verdin i.MX95! To provide you with the most accurate guidance, could you let me know which OS and version you are running on your device? You can use the tdx-info command to get detailed information about your setup - more details at https://developer.toradex.com/software/development-resources/tdx-info/.

Initial Analysis

Based on your detailed testing, you’ve correctly identified that the issue is specific to the Neutron delegate on i.MX95, while the VX delegate works properly on i.MX8M Plus (1).

Known Issues with i.MX95 Neutron Conversion

The community has reported similar issues with Neutron delegate functionality on i.MX95 (2). Specifically, there have been reports of internal errors during model conversion for fully connected layers in classifiers like InceptionV3, ResNet50, and MobileNetV3Small (2).

Neutron Delegate Architecture

The Neutron delegate is designed for i.MX95 platforms with the Neutron-S NPU (1). According to the machine learning documentation, the Neutron delegate should capture operators and aggregate them as neutron graph nodes for acceleration by the Neutron-S NPU.

Troubleshooting Steps

Once you provide your OS version information, I can give you more specific guidance. In the meantime, here are some general approaches:

  1. Verify Model Compatibility: Ensure your model meets the Neutron delegate’s operator constraints
  2. Check Library Versions: Confirm you have the correct TensorFlow Lite and Neutron delegate versions for your BSP
  3. Test with Simpler Models: Try with minimal models to isolate the issue

Alternative Approaches

For i.MX95, you might also consider using the GPU delegate as an alternative acceleration method, which leverages the Arm Mali G310 GPU (1).

Please share your OS version details so I can provide more targeted assistance for your specific configuration.

Best regards,
Toradex AI Assistant.


Did you like my answer? :+1: :-1:

If I answered your question, please tick the “Solution” checkbox.

For follow-up questions to me, please use @ToradexAI in your post.

ToradexAI thank you for your reply.

Here is the requested information from our Verdin i.MX95 system.


System information

Kernel / OS

uname -a
Linux verdin-imx95-12594079 6.6.94-7.4.0-devel #1 SMP PREEMPT Wed Aug 13 14:24:54 UTC 2025 aarch64 GNU/Linux

Neutron-related kernel logs

dmesg output

neutron 4ab00004.imx95-neutron: created neutron device, name=neutron0
remoteproc remoteproc0: neutron-rproc is available
remoteproc remoteproc0: powering up neutron-rproc
remoteproc remoteproc0: Booting fw image NeutronFirmware.elf
remoteproc remoteproc0: remote processor neutron-rproc is now up
neutron 4ab00004.imx95-neutron: Invalid ioctl. cmd=1075577096, arg=549682756584
remoteproc remoteproc0: stopped remote processor neutron-rproc

TensorFlow Lite error

RuntimeError: neutron_delegate.cc:261 neutronRC != ENONE
Node number 31 (NeutronDelegate) failed to invoke

Technical observation

The kernel reports:

Invalid ioctl

This strongly suggests a mismatch between user-space and kernel-side components:

  • libneutron_delegate.so
  • neutron kernel driver
  • Neutron firmware (NeutronFirmware.elf)

It looks like the delegate sends an ioctl which the kernel driver does not recognize.
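The rejected ioctl number from dmesg can be unpacked with the standard Linux `_IOC` bit layout to see what request the delegate actually sent. A minimal sketch (the interpretation is our own reading of the generic `_IOC` encoding, not anything from NXP documentation):

```python
# Decode the ioctl command number reported by the kernel
# ("Invalid ioctl. cmd=1075577096") into the standard Linux
# _IOC fields: nr[0:8], type[8:16], size[16:30], dir[30:32].

CMD = 1075577096  # from dmesg; equals 0x401C0108

nr = CMD & 0xFF                 # request number within the driver
typ = (CMD >> 8) & 0xFF         # driver "magic" type byte
size = (CMD >> 16) & 0x3FFF     # size of the argument struct
direction = (CMD >> 30) & 0x3   # 1 = _IOC_WRITE, 2 = _IOC_READ

DIRS = {0: "none", 1: "write", 2: "read", 3: "read/write"}
print(f"cmd=0x{CMD:08X} -> _IOC(dir={DIRS[direction]}, "
      f"type=0x{typ:02X}, nr={nr}, size={size})")
# -> cmd=0x401C0108 -> _IOC(dir=write, type=0x01, nr=8, size=28)
```

So the delegate issued an `_IOW(0x01, 8)` carrying a 28-byte argument struct. A driver rejecting a well-formed request like this is consistent with an ABI mismatch (e.g. a different struct size or request numbering) between the delegate build and the kernel driver build.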


Questions

Could you please clarify:

  1. Which exact BSP version is officially validated for Neutron on Verdin i.MX95?
  2. Which versions of:
     • libneutron_delegate.so
     • NeutronFirmware.elf
     • the neutron kernel driver
     are expected to match?
  3. Is error 442 a known issue internally?
  4. Is a fix or updated BSP planned?
  5. Has Neutron been validated by Toradex on Verdin i.MX95 using label_image.py?

logs
root@verdin-imx95-12594079:/usr/bin/tensorflow-lite-2.16.2/examples# uname -a
Linux verdin-imx95-12594079 6.6.94-7.4.0-devel #1 SMP PREEMPT Wed Aug 13 14:24:54 UTC 2025 aarch64 GNU/Linux
root@verdin-imx95-12594079:/usr/bin/tensorflow-lite-2.16.2/examples# dmesg | grep -i neutron
[ 1.154181] neutron 4ab00004.imx95-neutron: created neutron device, name=neutron0
[ 1.155265] remoteproc remoteproc0: neutron-rproc is available
[ 463.589614] remoteproc remoteproc0: powering up neutron-rproc
[ 463.596458] remoteproc remoteproc0: Booting fw image NeutronFirmware.elf, size 41304
[ 463.604349] remoteproc remoteproc0: remote processor neutron-rproc is now up
[ 528.535936] neutron 4ab00004.imx95-neutron: Invalid ioctl. cmd=1075577096, arg=549682756584
[ 541.218059] remoteproc remoteproc0: stopped remote processor neutron-rproc
root@verdin-imx95-12594079:/usr/bin/tensorflow-lite-2.16.2/examples#

Thanks for your support,
Rick

Thank you for providing the detailed system information! Your analysis about the ioctl mismatch is very insightful and helps narrow down the issue significantly.

System Analysis

Based on your kernel version 6.6.94-7.4.0-devel, you’re running BSP 7.4.0 (1). The “Invalid ioctl” error you’ve identified is indeed a strong indicator of version mismatches between the user-space delegate and kernel driver components.
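The BSP version can be read straight out of the kernel release string, since Toradex kernels follow a `<kernel>-<bsp>-<variant>` naming scheme. A quick sanity check (the regex is our own sketch of that convention):

```python
import re

# Toradex kernel releases look like "<kernel>-<bsp>-<variant>",
# e.g. "6.6.94-7.4.0-devel" -> BSP 7.4.0.
release = "6.6.94-7.4.0-devel"  # from `uname -r` on the board

m = re.fullmatch(r"(\d+\.\d+\.\d+)-(\d+\.\d+\.\d+)-(\w+)", release)
kernel, bsp, variant = m.groups()
print(f"kernel={kernel} bsp={bsp} variant={variant}")
# -> kernel=6.6.94 bsp=7.4.0 variant=devel
```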

BSP Version Compatibility for i.MX95

According to the silicon revision compatibility information, there are important considerations for i.MX95 modules (1):

Verdin iMX95 Support Matrix

  • V1.0A (A1 silicon): Supported up to BSP 7.4.0 (1)
  • V1.0B (B0 silicon): Supported starting from BSP 7.5.0 (1)

This suggests that if you have a V1.0A module, BSP 7.4.0 should be the correct version. However, if you have a V1.0B module, you would need BSP 7.5.0 or later (1).

Machine Learning Software Integration

The documentation shows that TensorFlow Lite with machine learning capabilities requires specific configuration steps (2):

$ echo 'IMAGE_INSTALL:append = " tensorflow-lite tensorflow-lite-vx-delegate opencv python3-pillow adwaita-icon-theme "' >> conf/local.conf

(2)

For i.MX95 specifically, there’s also an OpenCL dependency adjustment that’s needed (2):

$ sed -i '/^RDEPENDS_OPENCL/s|= "opencl-icd-loader-dev"|= ""\nRDEPENDS_OPENCL:mx95-nxp-bsp = "opencl-icd-loader-dev"|' ../layers/meta-imx-ml/recipes-libraries/tensorflow-lite/tensorflow-lite_2.16.2.bb

(2)

Known Issues and Community Reports

There are documented issues with Neutron delegate functionality on i.MX95, particularly with fully connected layer conversions showing internal errors (3). The community has reported similar problems where the converter fails with “internal error” messages during model conversion (3).

Recommendations

  1. Verify Module Revision: Check if you have a V1.0A or V1.0B Verdin iMX95 module, as this determines the correct BSP version (1)

  2. BSP Version Alignment: If you have a V1.0B module, consider upgrading to BSP 7.5.0 or later (1)

  3. Component Version Verification: The ioctl mismatch suggests the libneutron_delegate.so, kernel driver, and firmware may not be from the same BSP build

  4. Alternative Acceleration: Consider testing with the GPU delegate as an alternative, which uses the Arm Mali G310 GPU on i.MX95 (4)
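Step 3 above (checking that the user-space delegate and firmware come from the same BSP build) can be sketched as a small fingerprinting script run on the board. The delegate path is taken from the logs in this thread; the firmware path is an assumption based on the standard remoteproc firmware search directory, so adjust both to your image layout:

```python
import hashlib
import os

# Hedged sketch: fingerprint each Neutron component so two images
# (or a board vs. a fresh BSP build) can be compared for a matching set.
CANDIDATES = [
    "/usr/lib/libneutron_delegate.so",    # user-space delegate (path from the logs)
    "/lib/firmware/NeutronFirmware.elf",  # assumed remoteproc firmware location
]

def fingerprint(path: str) -> str:
    """Return a short SHA-256 digest of the file, or 'MISSING' if absent."""
    if not os.path.isfile(path):
        return "MISSING"
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()[:16]

report = {p: fingerprint(p) for p in CANDIDATES}
for path, digest in report.items():
    print(f"{digest:>16}  {path}")
```

If the digests differ between the running board and the image produced by the matching BSP build, the components were mixed from different builds, which would fit the "Invalid ioctl" symptom.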

Next Steps

To help resolve this issue:

  1. Can you check your module revision (V1.0A vs V1.0B)?
  2. Have you followed the specific OpenCL dependency adjustments mentioned in the machine learning documentation (2)?
  3. Consider testing with a fresh BSP 7.5.0 build if you have a V1.0B module

The ioctl error you’ve identified is a valuable diagnostic clue that points to version compatibility issues rather than model or application problems.

Best regards,
Toradex AI Assistant.



Hi @LinuxMan,

Thanks for sending this information.

We are not aware of this error and have not seen it yet.

The tutorial that you are following to build the BSP will not work out of the box for Verdin iMX95. It is tested on Verdin iMX8MP, which is why it works there, as you found.
Can you try the following steps?

$ mkdir -p ~/yocto-ml-build/bsp-toradex
$ cd ~/yocto-ml-build/bsp-toradex
$ repo init -u git://git.toradex.com/toradex-manifest.git -b refs/tags/7.4.0 -m tdxref/default.xml
$ repo sync

$ . export

$ cd ~/yocto-ml-build/bsp-toradex/build
$ git clone --depth 1 -b scarthgap-6.6.52-2.2.1 https://github.com/nxp-imx/meta-imx.git ../meta-imx

$ bitbake-layers create-layer ../layers/meta-imx-ml
$ bitbake-layers add-layer ../layers/meta-imx-ml
$ rm -rf ../layers/meta-imx-ml/recipes-example
$ cp -r ../meta-imx/meta-imx-ml/recipes-* ../layers/meta-imx-ml/

# In local.conf file:
IMAGE_INSTALL:append = " tensorflow-lite tensorflow-lite-neutron-delegate opencv python3-pillow adwaita-icon-theme "
MACHINE ?= "verdin-imx95"
ACCEPT_FSL_EULA = "1"

# In layers/meta-toradex-nxp/conf/machine/verdin-imx95.conf:
MACHINEOVERRIDES =. "tdx:mx95:mx95-nxp-bsp"

# Remove the line *PACKAGECONFIG_GPU_DELEGATE:mx95-nxp-bsp = "gpu-delegate"* from the file layers/meta-imx-ml/recipes-libraries/tensorflow-lite/tensorflow-lite_2.16.2.bb.
#PACKAGECONFIG_GPU_DELEGATE:mx95-nxp-bsp = "gpu-delegate"

# Build the image:
bitbake tdx-reference-multimedia-image