CMSIS-NN  
CMSIS NN Software Library
 
Fully-connected Layer Functions

Content

 GetBufferSizeFC
 

Functions

arm_cmsis_nn_status arm_fully_connected_s16 (const cmsis_nn_context *ctx, const cmsis_nn_fc_params *fc_params, const cmsis_nn_per_tensor_quant_params *quant_params, const cmsis_nn_dims *input_dims, const int16_t *input, const cmsis_nn_dims *filter_dims, const int8_t *kernel, const cmsis_nn_dims *bias_dims, const int64_t *bias, const cmsis_nn_dims *output_dims, int16_t *output)
 Basic s16 Fully Connected function.
 
arm_cmsis_nn_status arm_fully_connected_s4 (const cmsis_nn_context *ctx, const cmsis_nn_fc_params *fc_params, const cmsis_nn_per_tensor_quant_params *quant_params, const cmsis_nn_dims *input_dims, const int8_t *input, const cmsis_nn_dims *filter_dims, const int8_t *kernel, const cmsis_nn_dims *bias_dims, const int32_t *bias, const cmsis_nn_dims *output_dims, int8_t *output)
 Basic s4 Fully Connected function.
 
arm_cmsis_nn_status arm_fully_connected_s8 (const cmsis_nn_context *ctx, const cmsis_nn_fc_params *fc_params, const cmsis_nn_per_tensor_quant_params *quant_params, const cmsis_nn_dims *input_dims, const int8_t *input, const cmsis_nn_dims *filter_dims, const int8_t *kernel, const cmsis_nn_dims *bias_dims, const int32_t *bias, const cmsis_nn_dims *output_dims, int8_t *output)
 Basic s8 Fully Connected function.
 
arm_cmsis_nn_status arm_vector_sum_s8 (int32_t *vector_sum_buf, const int32_t vector_cols, const int32_t vector_rows, const int8_t *vector_data, const int32_t lhs_offset, const int32_t *bias_data)
 Calculate the sum of each row in vector_data, multiply by lhs_offset and optionally add s32 bias_data.
 
arm_cmsis_nn_status arm_vector_sum_s8_s64 (int64_t *vector_sum_buf, const int32_t vector_cols, const int32_t vector_rows, const int8_t *vector_data, const int32_t lhs_offset, const int64_t *bias_data)
 Calculate the sum of each row in vector_data, multiply by lhs_offset and optionally add s64 bias_data.
 

Description

Collection of fully-connected and matrix multiplication functions.

A fully-connected layer is essentially a matrix-vector multiplication with a bias: the matrix holds the weights and the input/output vectors hold the activation values. Supported {weight, activation} precisions include {8-bit, 8-bit}, {8-bit, 16-bit} and, via the packed s4 kernel below, {4-bit, 8-bit}.
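
The sketch below restates that arithmetic as a plain C reference for the s8 case. It is an illustration only, not the optimized CMSIS-NN kernel: the per-tensor requantization is simplified and omits the exact rounding and saturation the library applies using the multiplier and shift from quant_params.

#include <stddef.h>
#include <stdint.h>

/* Conceptual reference for one batch of an s8 fully-connected layer.
 * Not the CMSIS-NN implementation; the requantization below is a simplified
 * stand-in for the library's rounding/saturating multiplier-and-shift step. */
static void fc_ref_s8(const int8_t *input, const int8_t *weights,
                      const int32_t *bias, int8_t *output,
                      int32_t accum_depth, int32_t output_depth,
                      int32_t input_offset, int32_t output_offset,
                      int32_t multiplier, int32_t shift)
{
    for (int32_t row = 0; row < output_depth; row++)
    {
        int32_t acc = (bias != NULL) ? bias[row] : 0;
        for (int32_t col = 0; col < accum_depth; col++)
        {
            /* filter_offset is 0 for these kernels, so it is omitted here */
            acc += (input[col] + input_offset) * (int32_t)weights[row * accum_depth + col];
        }
        /* simplified requantization: acc * multiplier * 2^shift / 2^31 */
        int32_t out = (int32_t)(((int64_t)acc * multiplier) >> (31 - shift)) + output_offset;
        out = (out > 127) ? 127 : (out < -128) ? -128 : out;
        output[row] = (int8_t)out;
    }
}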

Function Documentation

◆ arm_fully_connected_s16()

arm_cmsis_nn_status arm_fully_connected_s16 ( const cmsis_nn_context *ctx,
const cmsis_nn_fc_params *fc_params,
const cmsis_nn_per_tensor_quant_params *quant_params,
const cmsis_nn_dims *input_dims,
const int16_t *input_data,
const cmsis_nn_dims *filter_dims,
const int8_t *filter_data,
const cmsis_nn_dims *bias_dims,
const int64_t *bias_data,
const cmsis_nn_dims *output_dims,
int16_t *output_data )

Basic s16 Fully Connected function.

Parameters
[in,out] ctx          Function context (e.g. temporary buffer). Check the function definition file to see if an additional buffer is required. Optional function {API}_get_buffer_size() provides the buffer size if an additional buffer is required. The caller is expected to clear the buffer, if applicable, for security reasons.
[in]     fc_params    Fully Connected layer parameters. fc_params->input_offset : 0, fc_params->filter_offset : 0, fc_params->output_offset : 0
[in]     quant_params Per-tensor quantization info. It contains the multiplier and shift values to be applied to the output tensor.
[in]     input_dims   Input (activation) tensor dimensions. Format: [N, H, W, C_IN]. The input dimension is taken as N x (H * W * C_IN).
[in]     input_data   Input (activation) data pointer. Data type: int16
[in]     filter_dims  Two-dimensional filter dimensions. Format: [N, C]. N: accumulation depth, equal to (H * W * C_IN) from input_dims. C: output depth, equal to C_OUT in output_dims. H & W: not used.
[in]     filter_data  Filter data pointer. Data type: int8
[in]     bias_dims    Bias tensor dimensions. Format: [C_OUT]. N, H, W: not used.
[in]     bias_data    Bias data pointer. Data type: int64
[in]     output_dims  Output tensor dimensions. Format: [N, C_OUT]. N: batches, C_OUT: output depth. H & W: not used.
[in,out] output_data  Output data pointer. Data type: int16
Returns
The function returns ARM_CMSIS_NN_SUCCESS
  • Supported framework: TensorFlow Lite
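
A minimal calling sketch for arm_fully_connected_s16, assuming a single batch with a 1x1x128 input and 10 outputs; all sizes and quantization values below are placeholders, and no scratch buffer is assumed (check the {API}_get_buffer_size() helper mentioned above on your target):

#include <stddef.h>
#include <stdint.h>
#include "arm_nnfunctions.h"

#define ACCUM_DEPTH 128 /* H * W * C_IN */
#define OUT_DEPTH   10  /* C_OUT */

/* Placeholder tensors; in practice these come from the converted model. */
static const int8_t  fc_weights[OUT_DEPTH * ACCUM_DEPTH];
static const int64_t fc_bias[OUT_DEPTH];
static int16_t fc_input[ACCUM_DEPTH];
static int16_t fc_output[OUT_DEPTH];

arm_cmsis_nn_status run_fc_s16(void)
{
    const cmsis_nn_dims input_dims  = {.n = 1, .h = 1, .w = 1, .c = ACCUM_DEPTH};
    const cmsis_nn_dims filter_dims = {.n = ACCUM_DEPTH, .c = OUT_DEPTH}; /* H & W not used */
    const cmsis_nn_dims bias_dims   = {.c = OUT_DEPTH};
    const cmsis_nn_dims output_dims = {.n = 1, .c = OUT_DEPTH};

    /* For s16 the input, filter and output offsets must be 0, as stated above. */
    const cmsis_nn_fc_params fc_params = {
        .input_offset = 0, .filter_offset = 0, .output_offset = 0,
        .activation = {.min = INT16_MIN, .max = INT16_MAX}};
    /* Placeholder per-tensor requantization parameters. */
    const cmsis_nn_per_tensor_quant_params quant_params = {.multiplier = 1073741824, .shift = 0};

    cmsis_nn_context ctx = {.buf = NULL, .size = 0}; /* assumed: no extra buffer needed */

    return arm_fully_connected_s16(&ctx, &fc_params, &quant_params,
                                   &input_dims, fc_input,
                                   &filter_dims, fc_weights,
                                   &bias_dims, fc_bias,
                                   &output_dims, fc_output);
}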

◆ arm_fully_connected_s4()

arm_cmsis_nn_status arm_fully_connected_s4 ( const cmsis_nn_context *ctx,
const cmsis_nn_fc_params *fc_params,
const cmsis_nn_per_tensor_quant_params *quant_params,
const cmsis_nn_dims *input_dims,
const int8_t *input_data,
const cmsis_nn_dims *filter_dims,
const int8_t *filter_data,
const cmsis_nn_dims *bias_dims,
const int32_t *bias_data,
const cmsis_nn_dims *output_dims,
int8_t *output_data )

Basic s4 Fully Connected function.

Parameters
[in,out] ctx          Function context (e.g. temporary buffer). Check the function definition file to see if an additional buffer is required. Optional function {API}_get_buffer_size() provides the buffer size if an additional buffer is required. The caller is expected to clear the buffer, if applicable, for security reasons.
[in]     fc_params    Fully Connected layer parameters. Range of fc_params->input_offset : [-127, 128], fc_params->filter_offset : 0, range of fc_params->output_offset : [-128, 127]
[in]     quant_params Per-tensor quantization info. It contains the multiplier and shift values to be applied to the output tensor.
[in]     input_dims   Input (activation) tensor dimensions. Format: [N, H, W, C_IN]. The input dimension is taken as N x (H * W * C_IN).
[in]     input_data   Input (activation) data pointer. Data type: int8
[in]     filter_dims  Two-dimensional filter dimensions. Format: [N, C]. N: accumulation depth, equal to (H * W * C_IN) from input_dims. C: output depth, equal to C_OUT in output_dims. H & W: not used.
[in]     filter_data  Filter data pointer. Data type: int8_t packed 4-bit weights, e.g. four sequential weights [0x1, 0x2, 0x3, 0x4] packed as [0x21, 0x43].
[in]     bias_dims    Bias tensor dimensions. Format: [C_OUT]. N, H, W: not used.
[in]     bias_data    Bias data pointer. Data type: int32
[in]     output_dims  Output tensor dimensions. Format: [N, C_OUT]. N: batches, C_OUT: output depth. H & W: not used.
[in,out] output_data  Output data pointer. Data type: int8
Returns
The function returns ARM_CMSIS_NN_SUCCESS
  • Supported framework: TensorFlow Lite
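
An illustrative packing helper (not part of the CMSIS-NN API) that reproduces the nibble order described for filter_data above, assuming an even number of weights:

#include <stddef.h>
#include <stdint.h>

/* Pack signed 4-bit weights two per byte: first weight in the low nibble,
 * next weight in the high nibble, so [0x1, 0x2, 0x3, 0x4] becomes [0x21, 0x43].
 * Hypothetical helper for illustration; assumes 'count' is even. */
static void pack_s4_weights(const int8_t *unpacked, int8_t *packed, size_t count)
{
    for (size_t i = 0; i < count; i += 2)
    {
        packed[i / 2] = (int8_t)((unpacked[i] & 0x0F) | ((unpacked[i + 1] & 0x0F) << 4));
    }
}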

◆ arm_fully_connected_s8()

arm_cmsis_nn_status arm_fully_connected_s8 ( const cmsis_nn_context *ctx,
const cmsis_nn_fc_params *fc_params,
const cmsis_nn_per_tensor_quant_params *quant_params,
const cmsis_nn_dims *input_dims,
const int8_t *input_data,
const cmsis_nn_dims *filter_dims,
const int8_t *filter_data,
const cmsis_nn_dims *bias_dims,
const int32_t *bias_data,
const cmsis_nn_dims *output_dims,
int8_t *output_data )

Basic s8 Fully Connected function.

Parameters
[in,out] ctx          Function context (e.g. temporary buffer). Check the function definition file to see if an additional buffer is required. Optional function {API}_get_buffer_size() provides the buffer size if an additional buffer is required. The caller is expected to clear the buffer, if applicable, for security reasons.
[in]     fc_params    Fully Connected layer parameters. Range of fc_params->input_offset : [-127, 128], fc_params->filter_offset : 0, range of fc_params->output_offset : [-128, 127]
[in]     quant_params Per-tensor quantization info. It contains the multiplier and shift values to be applied to the output tensor.
[in]     input_dims   Input (activation) tensor dimensions. Format: [N, H, W, C_IN]. The input dimension is taken as N x (H * W * C_IN).
[in]     input_data   Input (activation) data pointer. Data type: int8
[in]     filter_dims  Two-dimensional filter dimensions. Format: [N, C]. N: accumulation depth, equal to (H * W * C_IN) from input_dims. C: output depth, equal to C_OUT in output_dims. H & W: not used.
[in]     filter_data  Filter data pointer. Data type: int8
[in]     bias_dims    Bias tensor dimensions. Format: [C_OUT]. N, H, W: not used.
[in]     bias_data    Bias data pointer. Data type: int32
[in]     output_dims  Output tensor dimensions. Format: [N, C_OUT]. N: batches, C_OUT: output depth. H & W: not used.
[in,out] output_data  Output data pointer. Data type: int8
Returns
The function returns ARM_CMSIS_NN_ARG_ERROR if argument constraints fail, or ARM_CMSIS_NN_SUCCESS on successful completion.
  • Supported framework: TensorFlow Lite
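
A usage sketch with a scratch buffer. The buffer-size helper below is assumed to follow the {API}_get_buffer_size() naming convention described for ctx (verify the exact name in arm_nnfunctions.h for your target), and all sizes, offsets and quantization values are placeholders:

#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include "arm_nnfunctions.h"

#define ACCUM_DEPTH 64 /* H * W * C_IN */
#define OUT_DEPTH   16 /* C_OUT */

arm_cmsis_nn_status run_fc_s8(const int8_t *input_data, const int8_t *weights,
                              const int32_t *bias, int8_t *output_data)
{
    const cmsis_nn_dims input_dims  = {.n = 1, .h = 1, .w = 1, .c = ACCUM_DEPTH};
    const cmsis_nn_dims filter_dims = {.n = ACCUM_DEPTH, .c = OUT_DEPTH};
    const cmsis_nn_dims bias_dims   = {.c = OUT_DEPTH};
    const cmsis_nn_dims output_dims = {.n = 1, .c = OUT_DEPTH};

    const cmsis_nn_fc_params fc_params = {
        .input_offset = 5,   /* within [-127, 128], see above */
        .filter_offset = 0,
        .output_offset = -3, /* within [-128, 127] */
        .activation = {.min = -128, .max = 127}};
    const cmsis_nn_per_tensor_quant_params quant_params = {.multiplier = 1518500250, .shift = -7};

    /* Ask whether a scratch buffer is required and clear it before use, as
     * recommended for ctx; helper name assumed per the naming convention. */
    cmsis_nn_context ctx;
    ctx.size = arm_fully_connected_s8_get_buffer_size(&filter_dims);
    ctx.buf = (ctx.size > 0) ? calloc(1, (size_t)ctx.size) : NULL;

    const arm_cmsis_nn_status status =
        arm_fully_connected_s8(&ctx, &fc_params, &quant_params,
                               &input_dims, input_data,
                               &filter_dims, weights,
                               &bias_dims, bias,
                               &output_dims, output_data);

    free(ctx.buf); /* status is ARM_CMSIS_NN_SUCCESS or ARM_CMSIS_NN_ARG_ERROR */
    return status;
}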

◆ arm_vector_sum_s8()

arm_cmsis_nn_status arm_vector_sum_s8 ( int32_t *  vector_sum_buf,
const int32_t  vector_cols,
const int32_t  vector_rows,
const int8_t *  vector_data,
const int32_t  lhs_offset,
const int32_t *  bias_data 
)

Calculate the sum of each row in vector_data, multiply by lhs_offset and optionally add s32 bias_data.

Parameters
[in,out] vector_sum_buf Buffer for the vector sums
[in]     vector_cols    Number of vector columns
[in]     vector_rows    Number of vector rows
[in]     vector_data    Vector of weights data
[in]     lhs_offset     Constant multiplied with each sum
[in]     bias_data      Vector of bias data, optionally added to each sum.
Returns
The function returns ARM_CMSIS_NN_SUCCESS - Successful operation
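
A conceptual reference of what this helper computes; the library routine is optimized, and the s64 variant below is identical except for the wider accumulator and bias type:

#include <stddef.h>
#include <stdint.h>

/* Reference only: for each row of vector_data, sum the columns, multiply the
 * sum by lhs_offset and, if bias_data is non-NULL, add the corresponding bias. */
static void vector_sum_s8_ref(int32_t *vector_sum_buf, int32_t vector_cols,
                              int32_t vector_rows, const int8_t *vector_data,
                              int32_t lhs_offset, const int32_t *bias_data)
{
    for (int32_t row = 0; row < vector_rows; row++)
    {
        int32_t sum = 0;
        for (int32_t col = 0; col < vector_cols; col++)
        {
            sum += vector_data[row * vector_cols + col];
        }
        sum *= lhs_offset;
        if (bias_data != NULL)
        {
            sum += bias_data[row];
        }
        vector_sum_buf[row] = sum;
    }
}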

◆ arm_vector_sum_s8_s64()

arm_cmsis_nn_status arm_vector_sum_s8_s64 ( int64_t *  vector_sum_buf,
const int32_t  vector_cols,
const int32_t  vector_rows,
const int8_t *  vector_data,
const int32_t  lhs_offset,
const int64_t *  bias_data 
)

Calculate the sum of each row in vector_data, multiply by lhs_offset and optionally add s64 bias_data.

Parameters
[in,out] vector_sum_buf Buffer for the vector sums
[in]     vector_cols    Number of vector columns
[in]     vector_rows    Number of vector rows
[in]     vector_data    Vector of weights data
[in]     lhs_offset     Constant multiplied with each sum
[in]     bias_data      Vector of bias data, optionally added to each sum.
Returns
The function returns ARM_CMSIS_NN_SUCCESS - Successful operation