CMSIS-NN Version 3.0.0
CMSIS NN Software Library
Softmax Functions

Functions

void arm_softmax_q15 (const q15_t *vec_in, const uint16_t dim_vec, q15_t *p_out)
 Q15 softmax function.
 
void arm_softmax_q7 (const q7_t *vec_in, const uint16_t dim_vec, q7_t *p_out)
 Q7 softmax function.
 
void arm_softmax_s8 (const int8_t *input, const int32_t num_rows, const int32_t row_size, const int32_t mult, const int32_t shift, const int32_t diff_min, int8_t *output)
 S8 softmax function.
 
void arm_softmax_u8 (const uint8_t *input, const int32_t num_rows, const int32_t row_size, const int32_t mult, const int32_t shift, const int32_t diff_min, uint8_t *output)
 U8 softmax function.
 
void arm_softmax_with_batch_q7 (const q7_t *vec_in, const uint16_t nb_batches, const uint16_t dim_vec, q7_t *p_out)
 Q7 softmax function with batch parameter.
 

Description

Softmax functions based on a base-2 exponential, i.e. 2^x instead of e^x.

Function Documentation

void arm_softmax_q15 (const q15_t *vec_in, const uint16_t dim_vec, q15_t *p_out)
Parameters
    [in]  vec_in   pointer to input vector
    [in]  dim_vec  input vector dimension
    [out] p_out    pointer to output vector

Here, instead of the typical e-based softmax, a base-2 softmax is used, i.e.:

y_i = 2^(x_i) / sum_j(2^(x_j))

The relative outputs differ from the e-based version, but mathematically the gradient is the same up to a log(2) scaling factor.
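
A minimal usage sketch; the logit values below are illustrative placeholders, not taken from the library:

#include "arm_nnfunctions.h"

void softmax_q15_example(void)
{
    /* Four Q15 logits; values are arbitrary placeholders. */
    q15_t vec_in[4] = { 8192, 4096, -2048, 0 };
    q15_t p_out[4];

    arm_softmax_q15(vec_in, 4, p_out);
    /* p_out now holds the base-2 softmax of vec_in; the largest
     * input maps to the largest output. */
}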

void arm_softmax_q7 (const q7_t *vec_in, const uint16_t dim_vec, q7_t *p_out)
Parameters
    [in]  vec_in   pointer to input vector
    [in]  dim_vec  input vector dimension
    [out] p_out    pointer to output vector

Here, instead of the typical e-based softmax, a base-2 softmax is used, i.e.:

y_i = 2^(x_i) / sum_j(2^(x_j))

The relative outputs differ from the e-based version, but mathematically the gradient is the same up to a log(2) scaling factor.

Referenced by arm_softmax_with_batch_q7().
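A minimal usage sketch, e.g. normalizing the output of a final fully-connected layer; the ten class scores are hypothetical:

#include "arm_nnfunctions.h"

void softmax_q7_example(void)
{
    /* Ten hypothetical Q7 class scores. */
    q7_t scores[10] = { 20, -5, 60, 3, -12, 0, 7, -30, 15, 9 };
    q7_t probs[10];

    arm_softmax_q7(scores, 10, probs);
    /* probs[2] is the largest entry, since scores[2] is the
     * largest input. */
}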

void arm_softmax_s8 (const int8_t *input, const int32_t num_rows, const int32_t row_size, const int32_t mult, const int32_t shift, const int32_t diff_min, int8_t *output)
Parameters
    [in]  input     Pointer to the input tensor
    [in]  num_rows  Number of rows in the input tensor
    [in]  row_size  Number of elements in each input row
    [in]  mult      Input quantization multiplier
    [in]  shift     Input quantization shift within the range [0, 31]
    [in]  diff_min  Minimum difference with max in row. Used to check if the quantized exponential operation can be performed
    [out] output    Pointer to the output tensor
Note
Supported framework: TensorFlow Lite micro (bit-accurate)

References ACCUM_BITS, CLAMP, DIV_POW2, DIV_POW2_MVE, EXP_ON_NEG, MAX, MUL_SAT, MUL_SAT_MVE, and ONE_OVER1.
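
A usage sketch. The mult, shift, and diff_min values must come from the model's softmax quantization parameters as computed by the framework (TensorFlow Lite Micro); the numbers below are placeholders only:

#include "arm_nnfunctions.h"

#define NUM_ROWS 2
#define ROW_SIZE 4

void softmax_s8_example(void)
{
    /* Two rows of four hypothetical quantized logits each. */
    const int8_t input[NUM_ROWS * ROW_SIZE] = {
        10, 120, -40, 0,
        -3, 25, 99, -128
    };
    int8_t output[NUM_ROWS * ROW_SIZE];

    arm_softmax_s8(input, NUM_ROWS, ROW_SIZE,
                   1077952640, /* placeholder multiplier */
                   23,         /* placeholder shift in [0, 31] */
                   -248,       /* placeholder diff_min */
                   output);
    /* Each row of output is an independent softmax over ROW_SIZE
     * elements. */
}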

void arm_softmax_u8 (const uint8_t *input, const int32_t num_rows, const int32_t row_size, const int32_t mult, const int32_t shift, const int32_t diff_min, uint8_t *output)
Parameters
    [in]  input     Pointer to the input tensor
    [in]  num_rows  Number of rows in the input tensor
    [in]  row_size  Number of elements in each input row
    [in]  mult      Input quantization multiplier
    [in]  shift     Input quantization shift within the range [0, 31]
    [in]  diff_min  Minimum difference with max in row. Used to check if the quantized exponential operation can be performed
    [out] output    Pointer to the output tensor
Note
Supported framework: TensorFlow Lite micro (bit-accurate)

References ACCUM_BITS, CLAMP, DIV_POW2, EXP_ON_NEG, MAX, MUL_SAT, and ONE_OVER1.
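
The calling pattern mirrors arm_softmax_s8(), but with unsigned data; as above, the quantization parameters are placeholders supplied here only for illustration:

#include "arm_nnfunctions.h"

void softmax_u8_example(void)
{
    /* One row of four hypothetical quantized logits. */
    const uint8_t input[4] = { 12, 200, 5, 90 };
    uint8_t output[4];

    arm_softmax_u8(input, 1 /* num_rows */, 4 /* row_size */,
                   1077952640, /* placeholder multiplier */
                   23,         /* placeholder shift */
                   -248,       /* placeholder diff_min */
                   output);
}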

void arm_softmax_with_batch_q7 (const q7_t *vec_in, const uint16_t nb_batches, const uint16_t dim_vec, q7_t *p_out)
Parameters
    [in]  vec_in      pointer to input vector
    [in]  nb_batches  number of batches
    [in]  dim_vec     input vector dimension
    [out] p_out       pointer to output vector

Here, instead of the typical e-based softmax, a base-2 softmax is used, i.e.:

y_i = 2^(x_i) / sum_j(2^(x_j))

The relative outputs differ from the e-based version, but mathematically the gradient is the same up to a log(2) scaling factor.

References arm_softmax_q7().
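
A usage sketch; the function applies arm_softmax_q7() to each batch in turn. The three batches of four scores are hypothetical:

#include "arm_nnfunctions.h"

#define NB_BATCHES 3
#define DIM_VEC    4

void softmax_batch_q7_example(void)
{
    /* Three batches of four hypothetical Q7 scores,
     * stored contiguously. */
    q7_t vec_in[NB_BATCHES * DIM_VEC] = {
        10, 50, -20, 0,
        -5, -5, -5, -5,
        127, -128, 0, 64
    };
    q7_t p_out[NB_BATCHES * DIM_VEC];

    arm_softmax_with_batch_q7(vec_in, NB_BATCHES, DIM_VEC, p_out);
    /* Each group of DIM_VEC outputs is an independent softmax. */
}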