How to implement ML Kit in a microflow after converting an ML model to ONNX?

Good evening all,

I created an ML chatbot model in Python (reusing existing code), converted it to ONNX, and am trying to import it into Mendix with the ML Kit. The model is based on predefined intents: depending on the user input, it looks up a matching predefined response and returns it. Before importing the model into Mendix, I checked that the ONNX model loads successfully, using this code:

    import onnx

    model = onnx.load("your_model.onnx")
    onnx.checker.check_model(model)
    print("ONNX model loaded and checked successfully.")

I have several questions about implementing this model with the ML Kit:

1. When I import the model into Mendix, the ML Kit creates a few non-persistable input and output entities but throws errors about tensor shapes (static and dynamic). Which number should I enter — i.e., is the tensor two-dimensional or multi-dimensional, and how can I find out? When I inspect the input and output nodes in Python, I get:

    Input Nodes: Name: dense_input, Shape: ['unk__6', 88]
    Output Nodes: Name: dense_2, Shape: ['unk__7', 9]

2. I entered 1 for the tensor shape and am now writing a microflow that calls the ML Kit. Do I need to create a persistable entity (with two attributes, e.g. UserInput and Response), or should I use the entities the ML Kit already generated?

3. I tried using the generated entities. On the home page I added a Create button that automatically creates an object for the input entity, calls the ML Kit with the assigned values, and displays the output on another page, but it shows errors. How can I confirm that the input data provided to the ONNX model is in the expected format and size?

    [Screenshot: My Domain Model]
    [Screenshot: Call ML Model connector error]

4. [Screenshot: Client error]
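On question 1: in an ONNX shape, integer dimensions are static, while named dimensions (like 'unk__6') or None are dynamic, usually the batch size. A minimal sketch (plain Python, no Mendix-specific API assumed; `describe_shape` is a hypothetical helper) that classifies the dimensions reported above:

```python
def describe_shape(shape):
    """Split an ONNX tensor shape into (kind, value) pairs.

    Integer dimensions are static; strings (e.g. 'unk__6') or None are
    dynamic placeholders, typically the batch dimension.
    """
    return [("static", d) if isinstance(d, int) else ("dynamic", d) for d in shape]

# Shapes reported by the model in the question:
print(describe_shape(["unk__6", 88]))  # input: dynamic batch, 88 features
print(describe_shape(["unk__7", 9]))   # output: dynamic batch, 9 intent scores
```

So both tensors are two-dimensional: a dynamic batch dimension followed by a static one (88 features in, 9 intent scores out). When a tool asks for a concrete value for the dynamic dimension, entering 1 (one request at a time) is the usual choice, while the static 88 and 9 stay as reported.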
5. How should I write the inference code — do I need to create preprocessing and post-processing steps?

6. My ONNX model version is 1.16.0 with IR version 8, and I am on Mendix 10, but the Mendix documentation says the ML Kit was released with Mendix 9.23 and uses ONNX Runtime 1.11.0. Do I need to convert the ONNX model or change Mendix versions to implement this in Mendix? And how can I ensure that the ONNX model is compatible with the ONNX Runtime version used in Mendix?

Can anyone help me with this? Thank you in advance.
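On question 5: intent-classification models normally need both steps — the user's text must be turned into the 88-element float vector the model expects, and the 9 output scores must be mapped back to an intent. A hedged sketch (the vocabulary, intent labels, and bag-of-words encoding here are assumptions; substitute whatever your original Python training code used):

```python
# Hypothetical vocabulary and intent labels -- replace these with the ones
# from your training script; the sizes must match the model (88 in, 9 out).
VOCAB = ["hello", "hi", "bye", "thanks", "help"]          # ... 88 words in total
INTENTS = ["greeting", "goodbye", "thanks", "fallback"]   # ... 9 intents in total

def preprocess(text, vocab):
    """Bag-of-words encoding: 1.0 if the vocabulary word occurs in the input."""
    tokens = text.lower().split()
    return [1.0 if word in tokens else 0.0 for word in vocab]

def postprocess(scores, intents):
    """Pick the intent with the highest model score (argmax)."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return intents[best]

print(preprocess("Hello there", VOCAB))              # [1.0, 0.0, 0.0, 0.0, 0.0]
print(postprocess([0.1, 0.7, 0.15, 0.05], INTENTS))  # goodbye
```

In Mendix these two steps would live in the microflow (or a Java action) immediately before and after the Call ML Model activity: preprocessing fills the input entity's tensor attribute, post-processing turns the output tensor back into a response string.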
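On question 6: compatibility depends on the model's IR version and operator-set (opset) version, not on the version of the `onnx` pip package used to export it. ONNX Runtime 1.11 (the version the Mendix docs cite) supports IR versions up to 8 and the ai.onnx opset up to 16, so an IR-version-8 model should load as long as its opset is 16 or lower — but treat those ceilings as assumptions and confirm them against the official ONNX Runtime compatibility table. A small sketch of the check:

```python
# Assumed ceilings for ONNX Runtime 1.11 -- verify against the official
# ONNX Runtime versioning/compatibility table before relying on them.
MAX_IR_VERSION = 8
MAX_OPSET = 16

def is_compatible(ir_version, opset_version,
                  max_ir=MAX_IR_VERSION, max_opset=MAX_OPSET):
    """True if the model's IR version and opset fit the runtime's ceilings."""
    return ir_version <= max_ir and opset_version <= max_opset

# The model in the question reports IR version 8:
print(is_compatible(8, 15))  # True
print(is_compatible(9, 18))  # False -- would need re-export or conversion
```

You can read the real values with `model.ir_version` and `model.opset_import[0].version` after `onnx.load(...)`. If the opset is too new, `onnx.version_converter.convert_version(model, target)` can downgrade it, though re-exporting from the original training code with an older opset is usually the more reliable route.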
0 answers