21.08
Execute models from different machine learning platforms efficiently with our parsers.
Simply choose a parser according to the model you want to run. For example, if you have a model in ONNX format (<model_name>.onnx), use our ONNX parser.
If you would like to run a TensorFlow Lite (TfLite) model, you may also want to take a look at our TfLite Delegate.
All parsers are written in C++, but they can also be used from Python. For more information on our Python bindings, see the PyArmNN section.
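The parser choice described above follows directly from the model's file extension. As a minimal sketch, it could look like the following helper; `PickParser` is a hypothetical function for illustration and is not part of the Arm NN API:

```cpp
#include <string>

// Hypothetical helper: pick an Arm NN parser based on the model file extension.
std::string PickParser(const std::string& modelPath)
{
    auto dot = modelPath.rfind('.');
    std::string ext = (dot == std::string::npos) ? "" : modelPath.substr(dot);
    if (ext == ".onnx")   { return "armnnOnnxParser";   } // <model_name>.onnx
    if (ext == ".tflite") { return "armnnTfLiteParser"; } // <model_name>.tflite
    return "unsupported";
}
```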
armnnOnnxParser
is a library for loading neural networks defined in ONNX protobuf files into the Arm NN runtime.
This reference guide provides a list of ONNX operators the Arm NN SDK currently supports.
The Arm NN SDK ONNX parser currently only supports fp32 operators.
Arm tested these operators with the following ONNX fp32 neural networks:
More machine learning operators will be supported in future releases.
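As a sketch of how the ONNX parser is typically used, the snippet below parses a model file and loads it into the Arm NN runtime. The model path `"model.onnx"` is a placeholder, and the `CpuRef` backend is an assumption for illustration; it requires the Arm NN SDK headers and libraries to build:

```cpp
#include <armnn/ArmNN.hpp>
#include <armnnOnnxParser/IOnnxParser.hpp>

int main()
{
    // Parse an fp32 ONNX model into an Arm NN network ("model.onnx" is a placeholder path).
    armnnOnnxParser::IOnnxParserPtr parser = armnnOnnxParser::IOnnxParser::Create();
    armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile("model.onnx");

    // Create a runtime and optimize the network for a backend (CpuRef assumed here).
    armnn::IRuntime::CreationOptions options;
    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);
    armnn::IOptimizedNetworkPtr optNet =
        armnn::Optimize(*network, {armnn::Compute::CpuRef}, runtime->GetDeviceSpec());

    // Load the optimized network; networkId identifies it for later inference calls.
    armnn::NetworkId networkId = 0;
    runtime->LoadNetwork(networkId, std::move(optNet));
    return 0;
}
```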
armnnTfLiteParser
is a library for loading neural networks defined in TensorFlow Lite FlatBuffer files into the Arm NN runtime.
This reference guide provides a list of TensorFlow Lite operators the Arm NN SDK currently supports.
The Arm NN SDK TensorFlow Lite parser currently supports the following operators:
Arm tested these operators with the following TensorFlow Lite neural network:
More machine learning operators will be supported in future releases.
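The TfLite parser follows the same pattern as the ONNX parser. The sketch below is illustrative only: `"model.tflite"` and the input tensor name `"input"` are placeholders, and the `CpuRef` backend is an assumption:

```cpp
#include <armnn/ArmNN.hpp>
#include <armnnTfLiteParser/ITfLiteParser.hpp>

int main()
{
    // Parse a TensorFlow Lite FlatBuffer model ("model.tflite" is a placeholder path).
    armnnTfLiteParser::ITfLiteParserPtr parser = armnnTfLiteParser::ITfLiteParser::Create();
    armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile("model.tflite");

    // Look up binding info for a named input tensor in subgraph 0
    // ("input" is a placeholder tensor name).
    armnnTfLiteParser::BindingPointInfo inputInfo =
        parser->GetNetworkInputBindingInfo(0, "input");
    (void)inputInfo; // unused in this sketch; needed when feeding input data

    // Optimize for a backend (CpuRef assumed) and load into the runtime.
    armnn::IRuntime::CreationOptions options;
    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);
    armnn::IOptimizedNetworkPtr optNet =
        armnn::Optimize(*network, {armnn::Compute::CpuRef}, runtime->GetDeviceSpec());
    armnn::NetworkId networkId = 0;
    runtime->LoadNetwork(networkId, std::move(optNet));
    return 0;
}
```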