About the delegate
'armnnDelegate' is a library for accelerating certain TensorFlow Lite (TfLite) operators on Arm hardware. It can be integrated into TfLite using its delegation mechanism: TfLite then delegates the execution of operators supported by Arm NN to Arm NN.
The main difference from our Arm NN TfLite Parser is the number of operators you can run with it. If any operation in your model is not supported by the active backends, you won't be able to execute the model with our parser at all. In contrast, TfLite only delegates operations to the armnnDelegate if the delegate supports them, and executes the rest itself. In other words, every TfLite model can be executed, and every operation in your model that we can accelerate will be accelerated. That is why the armnnDelegate is our recommended way to accelerate TfLite models.
If you need help building the armnnDelegate, please take a look at our build guide. An example of how to set up TfLite to integrate the armnnDelegate can be found in this guide: Integrate the delegate into python
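As a quick illustration of the Python integration described above, the sketch below loads the delegate as a TfLite external delegate. The library path, backend names, and option keys are assumptions for illustration; adjust them to your build and target hardware, and see the guides linked above for the authoritative setup.

```python
def load_armnn_delegate(lib_path="libarmnnDelegate.so",
                        backends="CpuAcc,CpuRef",
                        log_level="info"):
    """Load the armnnDelegate for use with the TfLite Python interpreter.

    lib_path, backend names, and option keys are illustrative assumptions;
    adjust them to your build and hardware (e.g. GpuAcc for Mali GPUs).
    """
    try:
        import tflite_runtime.interpreter as tflite  # lightweight runtime
    except ImportError:
        import tensorflow.lite as tflite             # fall back to full TensorFlow
    return tflite.load_delegate(
        lib_path,
        options={"backends": backends, "logging-severity": log_level})

# Usage (requires a built libarmnnDelegate.so and a .tflite model):
#   delegate = load_armnn_delegate()
#   interpreter = tflite.Interpreter(model_path="model.tflite",
#                                    experimental_delegates=[delegate])
#   interpreter.allocate_tensors()
```

Operators the delegate supports then run through Arm NN; everything else falls back to the default TfLite kernels automatically.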
Supported Operators
This reference guide provides a list of TensorFlow Lite operators the Arm NN SDK currently supports.
Fully supported
The Arm NN SDK TensorFlow Lite delegate currently supports the following operators:
- ABS
- ADD
- ARG_MAX
- ARG_MIN
- AVERAGE_POOL_2D, Supported Fused Activation: RELU, RELU6, RELU_N1_TO_1, SIGMOID, TANH, NONE
- AVERAGE_POOL_3D, Supported Fused Activation: RELU, RELU6, RELU_N1_TO_1, SIGMOID, SIGN_BIT, TANH, NONE
- BATCH_MATMUL
- BATCH_TO_SPACE_ND
- CAST
- CONCATENATION, Supported Fused Activation: RELU, RELU6, RELU_N1_TO_1, SIGMOID, TANH, NONE
- CONV_2D, Supported Fused Activation: RELU, RELU6, RELU_N1_TO_1, SIGMOID, TANH, NONE
- CONV_3D, Supported Fused Activation: RELU, RELU6, RELU_N1_TO_1, SIGMOID, TANH, NONE
- DEPTH_TO_SPACE
- DEPTHWISE_CONV_2D, Supported Fused Activation: RELU, RELU6, RELU_N1_TO_1, SIGMOID, TANH, NONE
- DEQUANTIZE
- DIV
- EQUAL
- ELU
- EXP
- EXPAND_DIMS
- FILL
- FLOOR
- FLOOR_DIV
- FULLY_CONNECTED, Supported Fused Activation: RELU, RELU6, RELU_N1_TO_1, SIGMOID, TANH, NONE
- GATHER
- GATHER_ND
- GREATER
- GREATER_EQUAL
- HARD_SWISH
- L2_NORMALIZATION
- L2_POOL_2D
- LESS
- LESS_EQUAL
- LOCAL_RESPONSE_NORMALIZATION
- LOG
- LOGICAL_AND
- LOGICAL_NOT
- LOGICAL_OR
- LOGISTIC
- LOG_SOFTMAX
- LSTM
- MAXIMUM
- MAX_POOL_2D, Supported Fused Activation: RELU, RELU6, RELU_N1_TO_1, SIGMOID, TANH, NONE
- MAX_POOL_3D, Supported Fused Activation: RELU, RELU6, RELU_N1_TO_1, SIGMOID, SIGN_BIT, TANH, NONE
- MEAN
- MINIMUM
- MIRROR_PAD
- MUL
- NEG
- NOT_EQUAL
- PACK
- PAD
- PADV2
- PRELU
- QUANTIZE
- RANK
- REDUCE_MAX
- REDUCE_MIN
- REDUCE_PROD
- RELU
- RELU6
- RELU_N1_TO_1
- RESHAPE
- RESIZE_BILINEAR
- RESIZE_NEAREST_NEIGHBOR
- RSQRT
- SHAPE
- SIN
- SOFTMAX
- SPACE_TO_BATCH_ND
- SPACE_TO_DEPTH
- SPLIT
- SPLIT_V
- SQRT
- SQUEEZE
- STRIDED_SLICE
- SUB
- SUM
- TANH
- TRANSPOSE
- TRANSPOSE_CONV
- UNIDIRECTIONAL_SEQUENCE_LSTM
- UNPACK
More machine learning operators will be supported in future releases.
Delegate Options
The general list of runtime options is described in Runtime options.
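Options are passed to the delegate as string key/value pairs at load time. A minimal sketch of building such an option dictionary is shown below; the specific keys ("backends", "logging-severity", "enable-fast-math") are assumptions based on common Arm NN external-delegate options, so consult the Runtime options list for the authoritative names and values.

```python
def armnn_delegate_options(backends=("CpuAcc", "CpuRef"),
                           log_level="warning",
                           enable_fast_math=False):
    """Build the options dict passed to tflite.load_delegate().

    The option keys used here are assumptions for illustration;
    check the Runtime options documentation for the full list.
    """
    opts = {"backends": ",".join(backends),   # preference-ordered backend list
            "logging-severity": log_level}
    if enable_fast_math:
        opts["enable-fast-math"] = "true"     # option values are strings
    return opts
```

A usage example, continuing the assumption that the delegate library is available: `tflite.load_delegate("libarmnnDelegate.so", options=armnn_delegate_options())`.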