CMSIS-NN  
CMSIS NN Software Library
Activation Functions

Functions

void arm_nn_activation_s16 (const int16_t *input, int16_t *output, const uint16_t size, const uint16_t left_shift, const arm_nn_activation_type type)
 s16 neural network activation function using direct table look-up.
 
void arm_relu6_s8 (int8_t *data, uint16_t size)
 s8 ReLU6 function.
 
void arm_relu_q15 (int16_t *data, uint16_t size)
 Q15 ReLU function.
 
void arm_relu_q7 (int8_t *data, uint16_t size)
 Q7 ReLU function.
 

Description

Perform activation layers, including ReLU (Rectified Linear Unit), sigmoid, and tanh.

Function Documentation

◆ arm_nn_activation_s16()

void arm_nn_activation_s16 ( const int16_t *  input,
int16_t *  output,
const uint16_t  size,
const uint16_t  left_shift,
const arm_nn_activation_type  type 
)
Parameters
[in]   input       pointer to input data
[out]  output      pointer to output
[in]   size        number of elements
[in]   left_shift  bit-width of the integer part, assumed to be smaller than 3
[in]   type        type of activation function

Supported framework: TensorFlow Lite for Microcontrollers. This activation function must be bit-exact with the corresponding TFLM tanh and sigmoid activation functions.
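A minimal usage sketch of this function is shown below. It assumes the `ARM_SIGMOID`/`ARM_TANH` selectors of `arm_nn_activation_type` (as declared in the CMSIS-NN headers); the buffer names and the chosen `left_shift` value are illustrative, not prescribed by the API.

```c
#include "arm_nnfunctions.h"

/* Apply a table-based sigmoid to an int16 buffer.
 * Buffer names and left_shift value are illustrative assumptions. */
void run_sigmoid(const int16_t *in, int16_t *out, uint16_t n)
{
    /* left_shift is the bit-width of the integer part of the input
     * Q-format; per the documentation above it is assumed to be
     * smaller than 3. */
    const uint16_t left_shift = 2;

    arm_nn_activation_s16(in, out, n, left_shift, ARM_SIGMOID);
    /* Use ARM_TANH instead to select the tanh table. */
}
```

Note that this fragment links against the CMSIS-NN library and targets an embedded build, so it is not runnable standalone.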

◆ arm_relu6_s8()

void arm_relu6_s8 ( int8_t *  data,
uint16_t  size 
)
Parameters
[in,out]  data  pointer to input
[in]      size  number of elements

◆ arm_relu_q15()

void arm_relu_q15 ( int16_t *  data,
uint16_t  size 
)
Parameters
[in,out]  data  pointer to input
[in]      size  number of elements

◆ arm_relu_q7()

void arm_relu_q7 ( int8_t *  data,
uint16_t  size 
)
Parameters
[in,out]  data  pointer to input
[in]      size  number of elements