TensorFlow Lite model loading error

I previously inquired about a TensorFlow Lite model loading error.

I want to do object detection using the Object Detection API.
I was unable to convert a saved_model from TensorFlow 2 to TFLite and load it using BSP 5.4 and TensorFlow Lite 2.3.1.
Model: SSD MobileNet v2 320x320

TensorFlow 1 models (e.g. ssd_mobilenet_v1_quantized_coco) could be converted to TFLite and loaded.

My previous query is below.

This post covers the same content again.
Building Machine Learning Software with Reference Images for Yocto Project
Based on the article above, I have now tried TensorFlow Lite 2.8 with BSP 5.7. The conversion also uses TensorFlow 2.8.
TensorFlow 2 models can be loaded when running on the CPU, but an error occurs when using the NPU, that is, when /usr/lib/libvx_delegate.so is specified.

Why is this?
I am attaching the source code of the conversion, the source code of the execution, and the log.
convert.py (1.9 KB)
log.txt (642 Bytes)
test.py (864 Bytes)
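For context, the CPU-vs-NPU switch described above can be sketched as follows. This is a hypothetical test.py-style runner, not the attached code; the model path and the delegate path `/usr/lib/libvx_delegate.so` are assumptions taken from this thread.

```python
# Hedged sketch: run a .tflite model either on the CPU or on the NPU
# via the external VX delegate. Paths are assumptions from this thread.
VX_DELEGATE_PATH = "/usr/lib/libvx_delegate.so"

def delegate_paths(use_npu):
    # An empty list means plain CPU execution; otherwise the external
    # VX delegate offloads supported ops to the NPU.
    return [VX_DELEGATE_PATH] if use_npu else []

def run_once(model_path="model_int8.tflite", use_npu=True):
    import numpy as np
    import tflite_runtime.interpreter as tflite  # present on the target image

    delegates = [tflite.load_delegate(p) for p in delegate_paths(use_npu)]
    interpreter = tflite.Interpreter(model_path=model_path,
                                     experimental_delegates=delegates)
    interpreter.allocate_tensors()

    # Feed a zero tensor shaped like the model input and run one inference.
    inp = interpreter.get_input_details()[0]
    interpreter.set_tensor(inp["index"],
                           np.zeros(inp["shape"], dtype=inp["dtype"]))
    interpreter.invoke()
    return [interpreter.get_tensor(o["index"])
            for o in interpreter.get_output_details()]
```

Calling `run_once(use_npu=False)` and `run_once(use_npu=True)` on the target reproduces the two cases compared above.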

Best regards.

Dear @developer0916, you are using a Verdin iMX8M Plus, right? I ask because the initial post you mentioned used an Apalis iMX8, which doesn't have an NPU.

EDIT: Never mind, I just read the log. We are checking, but from what I'm reading here and there I'm not 100% sure whether model conversion from TensorFlow 2 is supported the same way it is from TensorFlow 1. Do you know?


I am using a Verdin iMX8M.
Please close this matter.
It appears that my model conversion was incorrect.
After fixing it, the TensorFlow 2 model also worked.
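For future reference, a common pitfall here is that the i.MX 8M Plus NPU expects a fully integer-quantized model. Below is a minimal sketch of such a TF2 conversion; the saved_model path, output name, and the random calibration generator are assumptions for illustration, not the actual convert.py.

```python
# Hedged sketch: full-integer post-training quantization of a TF2
# saved_model, as typically required before running on the NPU.
import numpy as np

# Hypothetical input size for SSD MobileNet v2 320x320.
INPUT_SHAPE = (1, 320, 320, 3)

def representative_dataset():
    # Calibration samples; real images from the target domain work better
    # than the random data used here to keep the sketch short.
    for _ in range(100):
        yield [np.random.rand(*INPUT_SHAPE).astype(np.float32)]

def convert(saved_model_dir="saved_model", out_path="model_int8.tflite"):
    import tensorflow as tf  # imported lazily; needs TensorFlow 2.x

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    # Force full-integer quantization so ops can run on the NPU.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8

    tflite_model = converter.convert()
    with open(out_path, "wb") as f:
        f.write(tflite_model)
```

Run `convert()` on a host machine with TensorFlow 2.x installed, then copy the resulting `.tflite` file to the module.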

Best regards.

Thanks for the update. That is great to hear. Can you give a bit more information about what you changed to make it work, for future reference?


The following information was helpful:

the Armadillo page.


Their manual is very well done.
I think we, too, need to create a manual for AI; there is demand for it.
I am currently doing some research toward that.

Best regards.


Thanks for the info! Having a detailed manual would be great… Once we have a Japanese manual, we will publish it ASAP.

i.e. the Linux OS custom manual for BSP 5 (Linux OS カスタムマニュアル BSP5用).