NPU Inference returns static/flat output on Verdin iMX95 V1.0B with libneutron_delegate.so

We are currently migrating our liquid level detection application to the newest hardware revision, but we’ve run into a critical issue with the Machine Learning stack.

On the new Verdin iMX95 H 8GB WB IT V1.0B, our TFLite model inference fails silently.

We are using the exact same Python inference script, the exact same .tflite model, and the exact same input .jpg as we did on the V1.0A hardware. The script uses tflite_runtime and successfully loads the /usr/lib/libneutron_delegate.so external delegate.

However, on the V1.0B board, the NPU returns a completely static, incorrect output (a flat probability distribution) for every single inference, regardless of the input image provided.
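For reference, the failing script is roughly of this shape (the model and image variable names are hypothetical; the delegate path is the one quoted above), together with a small helper that flags the flat-output symptom. This is a sketch of the setup described, not our exact script:

```python
import statistics


def looks_flat(probs, tol=1e-3):
    """Heuristic for the symptom above: a near-uniform probability vector."""
    return statistics.pstdev(probs) < tol


def run_npu_inference(model_path, image):
    # tflite_runtime is only available on the target board.
    from tflite_runtime.interpreter import Interpreter, load_delegate

    interpreter = Interpreter(
        model_path=model_path,
        experimental_delegates=[load_delegate("/usr/lib/libneutron_delegate.so")],
    )
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], image)
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])
```

On V1.0B, `looks_flat()` is true for every input image; on V1.0A the distribution varies with the input as expected.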

Here is the tdx-info for the failing V1.0B setup:

```
Software summary
Bootloader:          U-Boot
Kernel version:      6.6.119-7.5.0-devel #1 SMP PREEMPT Mon Jan 5 09:23:13 UTC 2026
Kernel command line: root=PARTUUID=cb6ffc87-02 ro rootwait console=tty1 console=ttyLP2,115200
Distro name:         NAME="TDX Wayland with XWayland"
Distro version:      VERSION_ID=7.5.0-devel-20251222135345-build.0
Hostname:            verdin-imx95-12594956
```

For comparison, here is the exact same pipeline running perfectly on our older V1.0A board:

Here is the tdx-info for the working V1.0A setup:

```
Software summary
Bootloader:          U-Boot
Kernel version:      6.6.101-7.4.0-devel #1 SMP PREEMPT Thu Sep 25 07:49:28 UTC 2025
Kernel command line: root=PARTUUID=3cb8eaf4-02 ro rootwait console=tty1 console=ttyLP2,115200
Distro name:         NAME="TDX Wayland with XWayland"
Distro version:      VERSION_ID=7.4.0-devel-20251222121713-build.0
Hostname:            verdin-imx95-12594073
```

Questions:

  1. Are there known regressions or architectural changes to the Neutron NPU driver/delegate between the two versions that would cause an unconverted .tflite model to fail silently?
  2. Could you provide a link to a known-good Reference OS Image for the V1.0B hardware that includes a working eIQ/Neutron stack? We would like to re-flash the board to completely rule out an issue with our specific Yocto build.

Hi, I’m the Toradex AI Assistant.

I might make mistakes or occasionally misunderstand you. Our support team will step in if needed.


I understand you’re experiencing a critical issue with TensorFlow Lite model inference between V1.0A and V1.0B hardware revisions of the Verdin iMX95. Based on the information available, I can provide some insights about the Neutron NPU architecture and potential troubleshooting steps.

Neutron NPU Architecture Overview

The Verdin iMX95 features the eIQ Neutron NPU, which is designed to accelerate machine learning inference (1). The Neutron-S NPU comprises three main blocks: the Neutron computation core, Neutron controller (RISC-V core), and Data Mover for exchanging data between host DDR and Neutron dedicated TCM.

Potential Issues and Troubleshooting

1. Model Conversion Requirements

One important consideration is that the Neutron NPU typically requires models to be converted using the eIQ Toolkit’s Neutron converter for optimal performance. The fact that you’re using an “unconverted .tflite model” may be relevant here. The Neutron delegate expects models that have been processed through the conversion pipeline to replace supported operators with custom neutronGraph nodes.
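One quick, unofficial sanity check follows from this: a converted model should contain the custom operator name somewhere in its flatbuffer, so a simple byte search can hint at whether a given .tflite file was ever run through the converter. This is a heuristic sketch, not an eIQ API:

```python
def mentions_neutron_graph(model_path):
    """Heuristic: a model processed by the Neutron converter should embed
    the custom NeutronGraph operator name in the .tflite flatbuffer."""
    with open(model_path, "rb") as f:
        data = f.read()
    return b"neutrongraph" in data.lower()
```

If this returns False for your model, the delegate is being handed a file that was never converted for the NPU.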

2. Hardware Revision Differences

The difference between V1.0A and V1.0B hardware revisions could involve changes to the NPU firmware, driver stack, or hardware implementation that affect model compatibility. Your observation of “flat probability distribution” outputs suggests the NPU may be returning default/uninitialized values rather than actual inference results.

3. Software Stack Differences

I notice you’re running different BSP versions:

  • V1.0A: 7.4.0-devel with kernel 6.6.101
  • V1.0B: 7.5.0-devel with kernel 6.6.119

This suggests potential driver or delegate library changes between versions that could affect NPU behavior.

Recommended Troubleshooting Steps

1. Model Conversion

Consider converting your TensorFlow Lite model using the eIQ Toolkit’s Neutron converter specifically for the iMX95 platform. This ensures the model is properly optimized for the Neutron NPU architecture.

2. Delegate Verification

Verify that the libneutron_delegate.so library version matches the expected version for your BSP. You might want to check if there are any version mismatches or compatibility issues.
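A simple way to do that comparison (the delegate path is the one from the thread; run the same commands on both boards and diff the output) might look like:

```shell
# Compare the delegate binary actually installed on each board.
# Run on both V1.0A and V1.0B, then compare the results.
DELEGATE=/usr/lib/libneutron_delegate.so
if [ -e "$DELEGATE" ]; then
    ls -l "$DELEGATE"       # size and modification date
    sha256sum "$DELEGATE"   # exact binary identity across boards
else
    echo "delegate not found: $DELEGATE"
fi
```

Differing checksums would confirm the delegate library itself changed between the two BSP versions.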

3. Fallback Testing

Test your model with CPU-only inference (without the Neutron delegate) to confirm the model itself works correctly on the V1.0B hardware.
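A minimal way to run that check (hypothetical helper names; same `tflite_runtime` API as the script in the question) is to build the interpreter with no delegates and compare the top-1 result against the NPU run:

```python
def top1(probs):
    """Index and value of the most likely class, for comparing
    CPU-only output against delegate output."""
    i = max(range(len(probs)), key=probs.__getitem__)
    return i, probs[i]


def run_cpu_inference(model_path, image):
    # No experimental_delegates: forces the XNNPACK/CPU path.
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], image)
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])
```

If the CPU path produces a sensible, input-dependent `top1()` while the delegate path stays flat, the model itself is fine and the problem is isolated to the NPU stack.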

Reference Images

For BSP 7.x.y reference images, you can build them yourself with the Yocto Project by following the instructions in our documentation (2). The build involves setting up the environment, configuring for the verdin-imx95 machine, and building with bitbake.
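In rough outline, that flow looks like the following. The manifest tag is deliberately left as a placeholder, and the image and setup-script names are assumptions based on the standard Toradex reference images, so take the exact values from the documentation linked above (2):

```shell
# Fetch the BSP layers (substitute the tag for your BSP version from the docs)
mkdir -p ~/tdx-bsp && cd ~/tdx-bsp
repo init -u git://git.toradex.com/toradex-manifest.git \
    -b refs/tags/<bsp-version> -m tdxref/default.xml
repo sync

# Set up the build environment (script name is an assumption),
# then select the machine in conf/local.conf or on the command line
source export

# Build the reference multimedia image for the Verdin iMX95
MACHINE=verdin-imx95 bitbake tdx-reference-multimedia-image
```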

However, I don’t have specific information about pre-built reference images with confirmed working eIQ/Neutron stacks for V1.0B hardware in the available documentation.

Next Steps

Given the complexity of this issue and the specific hardware revision differences, I’d recommend:

  1. Testing with a properly converted Neutron model
  2. Checking for any known issues or updates in the BSP release notes
  3. Comparing the exact versions of the Neutron delegate libraries between your working and non-working setups

The community has reported similar issues with Neutron converter functionality (3), which suggests this is an area where additional support may be beneficial.

Best regards, Toradex AI Assistant.



Hi @TobiasAtAGI,

Thanks for reporting this. At the moment, we have a Toradex Walnascar BSP for the Verdin iMX95; however, you need to rebuild it with the necessary NPU libraries. I am working on providing you with the right build steps, and I think the Walnascar image could solve your issues. In the meantime, we have a working Walnascar image binary that we built some time ago. Could you please try your model on this image? You can download it here: Download - Toradex File Sharing Platform

Please note that, unlike on the Linux 6.6 version, you need to convert your quantized model with the eIQ Neutron SDK here for it to work. You would then run the converted model with the Neutron delegate on the i.MX95. Please use eIQ Neutron SDK version 2.2.3 or 3, which is compatible with Linux 6.12.

Hi @rudhi.tx,

Thank you for the image and the specific neutron converter version.

I converted the model and ran our test script with it, and everything works perfectly!

The only issue I have now is that the board is not connecting to the network via Ethernet. We suspect this might be because it is an older image.

To help us plan the timeline for shipping these AI models with our product, do you know when this Walnascar BSP version is expected to be production-ready for the i.MX95? As far as we can tell from the Embedded Linux Release Matrix, this version isn’t officially released yet. Is that correct?

Thanks again for your help!

Hi @TobiasAtAGI,

The Walnascar image for the iMX95 is not production-ready because Walnascar itself has been EOL since November 2025. We built it as an engineering release to support our customers, specifically for NPU evaluation. From the Toradex side, the next production-ready release of our BSPs and Torizon OS is planned for Q1 2027. That being said, we might be able to help you if you need a production-ready release before that. Could you please tell me what your expected timeline is for going into production?

Hi @rudhi.tx,

Thank you for the response.

Since we could not get the NPU working in the first version, which we will ship in the coming weeks, we plan to ship the second version to our customers around October 2026.

The integrated NPU was a key reason why we selected the i.MX95 in the first place, so it is vital that we can use it in the next release. To ensure we can properly test and integrate everything, we would need a production-ready BSP and Torizon OS with a working NPU stack by the end of August 2026. Would this be a realistic timeline to achieve on your end?

Hi @TobiasAtAGI,

I am discussing with our software product manager to find the right solution for you regarding the production release. As of today, the situation is what I explained earlier. Just like you, we are also affected by the EOL of Walnascar. I will update you soon with some suggestions on how to move forward.

Hi @rudhi.tx,

That is perfect. I understand that this is not an issue you can resolve by yourself.

Looking forward to your suggestions.

Hi @TobiasAtAGI,

I believe my answer in this other thread should be the solution for you: Verdin iMX95 Neutron Converter "No more TCM space for data" - #15 by rudhi.tx