Hi, I’m having trouble loading the tflite model.
The tflite model was converted with the following option:
Select TF ops enabled
The conversion completed without any errors.
When I copied the converted tflite model to the i.MX8M Plus and ran it, I got the following error: “Op builtin_code out of range: 127. Are you using old TFLite binary with newer model? Registration failed.”
Is there a solution?
Other info:
I have confirmed that the label_image demo runs.
I also confirmed that the original model, without Select TF ops, runs correctly.
Could you please let me know which tflite model you are converting? Is it a customized model or an example model from TensorFlow? I will try to reproduce this on my side to investigate.
I converted a TensorFlow model (*.pb) to a tflite model (*.tflite). The conversion completes without error.
The model is a customized model that uses Select TF ops (Conv3D, etc.).
I’m only using the tflite_runtime package at inference time.
Run-time code (note: the original snippet called `tf.lite.Interpreter`, which doesn’t match the import; corrected to `tflite.Interpreter` below):

```python
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path=args.model_file)  # <-- error occurs here
```
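For reference, here is a fuller sketch of the intended runtime flow; the model path is a placeholder standing in for `args.model_file`, and the dummy input is only for illustration:

```python
import numpy as np
import tflite_runtime.interpreter as tflite

model_path = "model.tflite"  # placeholder for args.model_file

interpreter = tflite.Interpreter(model_path=model_path)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input matching the model's expected shape and dtype
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])
```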
No error occurred when checking with label_image.py
Dear @akito, sorry for the late reply, this slipped under the radar. I’m also checking this as the main FAE for the Japan region. It is the first time I’m working with TF, so I need to set up the environment and make some quick tests. I’ll get back to you ASAP.
Could you give us as many details as possible? What kind of models did you run?
> The model is a customized model that uses Select TF ops (Conv3D, etc.).
Did you convert other models, and did those work without error? Could this be an issue with the conv3d dependencies? (cc @denis.tx)
Hi @akito, since you have used the eIQ tool, could you please convert your Select-Ops-enabled TFLite model again with the eIQ tool, and try the eIQ-converted model on the i.MX8M Plus?
@akito, the error “Op builtin_code out of range: 127” can be caused by a mismatch between the TensorFlow and TensorFlow Lite versions. Can you let me know which version of TensorFlow you are using? You can check it with:
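(The original snippet was not preserved in the thread; the following is one common way to check, with the target-side tflite_runtime check added for completeness:)

```python
# On the host used for conversion:
import tensorflow as tf
print(tf.__version__)

# On the i.MX8M Plus, check the runtime package version similarly:
import tflite_runtime
print(tflite_runtime.__version__)
```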
Hi @alexrentier, thanks for the feedback! Can you clarify the solution? Basically, did the suggestion of using the same version for TF and TFLite do the trick?
@akito It is likely an issue with that version of TF. Unfortunately, until the TF version is updated by NXP, I don’t think there is much we can do… What do you think, @denis.tx?
@alvaro.tx
I see. If that hypothesis is correct, where should I watch for updates from NXP?
@alexrentier
Thank you kindly for your feedback!
Could you share a list of the Python library versions in the development environment where the model was converted?
As stated in the TensorFlow documentation (Select TensorFlow operators | TensorFlow Lite), not every model is directly convertible to TensorFlow Lite, because some TF ops do not have a corresponding TFLite op. However, in some situations (not all) you can use a mix of TensorFlow and TensorFlow Lite ops. There is a list of TensorFlow ops that can be used with TensorFlow Lite by enabling the Select TensorFlow Ops feature (Select TensorFlow operators | TensorFlow Lite). Please see the TensorFlow documentation at the mentioned link for more information about this feature and how to enable it.
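For illustration, enabling the feature during conversion typically looks like the following sketch (“saved_model_dir” and “model.tflite” are placeholder paths):

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # use native TFLite ops where possible
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TF ops (e.g. Conv3D)
]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```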
I understand that the cause is a mismatch between the TF and TFLite versions.
So I tried to upgrade the TFLite version installed on the i.MX8M Plus image from 2.3.1 to 2.4.0 to unify the versions.
The method is described in the manual at the URL below; I changed the git clone branch from zeus-5.4.70-2.3.1 to zeus-5.4.70-2.3.3 and ran the build again with the same procedure.
We can see that zeus-5.4.70-2.3.3 has TFLite 2.4.0. https://source.codeaurora.org/external/imx/meta-imx/
However, the build failed with an error. Is there a way to upgrade TFLite?
Hi Akito-san, unless @denis.tx proves me wrong, I don’t think there is an easy way to update this until we internally update our BSP to support 2.4.0 with the newer iteration of 5.4-2.3. I’ll wait for Denis’s confirmation, but in the worst-case scenario, we will create an internal ticket to evaluate updating this in our BSP 5…