About loading a TFLite model converted with TensorFlow ops enabled on i.MX8MP

Hi, I’m having trouble loading a TFLite model.
The model was converted with Select ops enabled.
The conversion completed without error.

Reference site

Code

import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_spec.supported_ops = [
  tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
  tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.
]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

When I copied the converted TFLite model to the i.MX8MP and ran it, I got the following error:
“Op builtin_code out of range: 127. Are you using old TFLite binary with newer model? Registration failed.”

Is there a solution?

Other info:
I have confirmed that the label_image demo runs.
I have also confirmed that the original model, converted without Select ops, runs.

Hi,

a few days have passed.
Could someone please respond?

Hi @akito ,

Can you please let me know which TFLite model you are converting? Is it a customized model or an example model from TensorFlow? I will try to reproduce this on my side to investigate.

Hello,
thank you for your reply.

I converted a TensorFlow model (*.pb) to a TFLite model (*.tflite). The conversion completes without error.
The model is a customized model that uses Select ops (Conv3D etc.).
I’m only using the tflite_runtime package at inference time.

Run-time code

import tflite_runtime.interpreter as tflite
interpreter = tflite.Interpreter(model_path=args.model_file)  # <--- error occurs here
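
For context, the intended tflite_runtime flow looks roughly like this (a sketch, assuming args.model_file as above; note that a Select-TF-Ops model additionally needs a runtime with the Flex delegate built in):

import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path=args.model_file)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input just to exercise the graph; shape/dtype come from the model
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])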

No error occurred when checking with label_image.py

Dear @akito , sorry for the late reply; this flew under the radar. I’m also looking into this as the main FAE for the Japan region. This is the first time I’m working with TF, so I need to set up the environment and run some quick tests. I’ll get back to you ASAP.

Could you give us as many details as possible? What kind of models did you run?

The model is a customized model that uses Select ops (Conv3D etc.).

Did you convert other models, and did those work without error? Could this be an issue from the Conv3D dependencies? (cc @denis.tx)

Alvaro.


Dear @alvaro.tx , thank you for your reply.

Q. Could you give us as many details as possible? What kind of models did you run?
I will send an outline of the model as a PNG file (preDNN.png).

Q. Did you convert other models and did those work without error?
Yes. I tested it with a simple original cosine-function model (cos_model.png).

Q. Could this be an issue from the Conv3D dependencies?
I suspect so.

Hi @akito , since you have used the eIQ tool, could you please convert your Select-Ops-enabled TFLite model to a TFLite model again with the eIQ tool? Then try the model converted by eIQ on the i.MX8M Plus.

Hi @benjamin.tx , thank you for your reply.

I tried converting from TFLite to TFLite with eIQ, but I got an error (see attached screenshot). Another model (the cos model) gives the same error.

Next, I tried converting from TFLite to ONNX, but I got an error again (see attached screenshot).

The other model (the cos model) could be converted without error.

@akito , the error “Op builtin_code out of range: 127” can be caused by a mismatch between the TensorFlow and TensorFlow Lite versions. Can you let me know which version of TensorFlow you are using? You can check it with:

import tensorflow as tf
print(tf.__version__)

NXP BSP L5.4.70-2.3.0 uses TensorFlow Lite v2.3.1
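
For comparison, the runtime version on the board can be checked directly (a sketch, assuming the BSP’s tflite_runtime package exposes __version__ like the pip wheel does):

import tflite_runtime
print(tflite_runtime.__version__)  # should match the TF version used for conversion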

@denis.tx
Hi, thank you for your reply.

The version is 2.6.0 (see attached screenshot).

The model construction environment is Google Colab.

I had the same error and I solved it in the same way! I got my answer here!

Hi @alexrentier , thanks for the feedback! Can you clarify the solution? Basically, did the suggestion of using the same version for TF and TFLite do the trick?


Hi, @alvaro.tx

I tried downgrading the TF version to 2.4.0, converting the model, and running it on the board.

I got the following error:
“Regular tensorflow ops are not supported by this interpreter.
Make sure you apply/link the Flex delegate before inference.”

Model conversion failed in TF 2.3.1.
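
For reference, the TensorFlow docs state that a model converted with SELECT_TF_OPS needs an interpreter with the Flex delegate linked in; the full tensorflow pip package bundles it, while a plain tflite_runtime build may not. A minimal sketch using the full package:

import tensorflow as tf

# The full TF package links the Flex (Select TF Ops) delegate automatically
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()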

@akito It is likely an issue with that version of TF. Unfortunately, until the TF version is updated by NXP, I don’t think there is much we can do… What do you think, @denis.tx?

Yes, it was the use of the same version for TF and TFLite that solved the problem!


@alvaro.tx
I see. Where should I watch for updates from NXP, if that hypothesis is correct?

@alexrentier
Thank you kindly for your feedback!
Could you share a list of the Python library versions in the development environment where you converted the model?

On your host PC:
$ pip list
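
(Alternatively, a small Python sketch that prints only the relevant versions; the package names here are the usual pip names and may differ in your environment:)

import importlib.metadata as md  # Python 3.8+

for pkg in ("tensorflow", "tflite-runtime", "numpy"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed")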

Sorry @akito , what do you mean? You should have the same issue even on an i.MX8MP EVK (let us know if that is not the case).

Hi @akito ,

As stated in the TensorFlow documentation (Select TensorFlow operators | TensorFlow Lite), not every model is directly convertible to TensorFlow Lite, because some TF ops do not have a corresponding TFLite op. However, in some situations (not all) you can use a mix of TensorFlow and TensorFlow Lite ops. There is a list of TensorFlow ops that can be used with TensorFlow Lite by enabling the Select TensorFlow Ops feature (Select supported TensorFlow operators | TensorFlow Lite). Please see the TensorFlow documentation at the mentioned links for more information about this feature and how to enable it.
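
To illustrate the fallback described above, here is a minimal conversion sketch (assuming saved_model_dir as in the earlier post): try builtins only first, then allow Select TF Ops for graphs that contain ops like Conv3D with no TFLite kernel.

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
try:
    tflite_model = converter.convert()  # builtins only
except Exception as e:
    print("Builtins-only conversion failed:", e)
    # Retry, letting unsupported ops remain as (Flex) TF ops
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,
        tf.lite.OpsSet.SELECT_TF_OPS,
    ]
    tflite_model = converter.convert()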

@denis.tx @alvaro.tx
Hi, thank you for your reply.

I understand that the cause is the mismatch between the TF and TFLite versions.

So, to unify the versions, I tried to upgrade the TFLite version in the image written to the i.MX8MP from 2.3.1 to 2.4.0.

The method is described in the manual URL below. I changed the git clone version from zeus-5.4.70-2.3.1 to zeus-5.4.70-2.3.3 and ran the build again with the same procedure. We can see that zeus-5.4.70-2.3.3 has TFLite 2.4.0.
https://source.codeaurora.org/external/imx/meta-imx/

However, an error occurred and the build could not complete. Is there a way to upgrade TFLite?

manual

Hi Akito-san, unless @denis.tx proves me wrong, I don’t think there is an easy way to update this until we internally update our BSP to support 2.4.0 with the newer iteration of 5.4-2.3. I’ll wait for Denis’s confirmation, but in the worst-case scenario, we will create an internal ticket to evaluate updating this in our BSP5…

Sorry for the inconvenience.
Alvaro.