Compute Library 19.08
NEGEMMLowpQuantizeDownInt32ToUint8Scale Class Reference

Basic function to execute NEGEMMLowpQuantizeDownInt32ToUint8Scale on NEON.

#include <NEGEMMLowpOutputStage.h>

Collaboration diagram for NEGEMMLowpQuantizeDownInt32ToUint8Scale (diagram omitted)

Public Member Functions

void configure (const ITensor *input, const ITensor *bias, ITensor *output, int result_offset, int result_mult_int, int result_shift, int min=0, int max=0)
 Initialise the kernel's inputs and output.
 
- Public Member Functions inherited from INESimpleFunctionNoBorder
 INESimpleFunctionNoBorder ()
 Constructor.
 
void run () override final
 Run the kernels contained in the function.
 
- Public Member Functions inherited from IFunction
virtual ~IFunction ()=default
 Destructor.
 
virtual void prepare ()
 Prepare the function for execution.
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *bias, const ITensorInfo *output, int min=0, int max=0)
 Static function to check whether the given info will lead to a valid configuration of NEGEMMLowpQuantizeDownInt32ToUint8Scale.
 

Detailed Description

Basic function to execute NEGEMMLowpQuantizeDownInt32ToUint8Scale on NEON.

NEGEMMLowpQuantizeDownInt32ToUint8Scale depends on three parameters: result_offset, result_mult_int and result_shift. The final result is:

((input[i][k] + result_offset) * result_mult_int) >> result_shift

If a bias tensor is provided, the final result is:

((input[i][k] + bias[k] + result_offset) * result_mult_int) >> result_shift

This function calls the following NEON kernel:

  1. NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel

Note
The function also accepts two optional input arguments (min and max) which can be used to implement "rectified linear unit" activation functions after the result is shifted right by result_shift.

Definition at line 59 of file NEGEMMLowpOutputStage.h.
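As an illustration, here is a minimal scalar sketch of the arithmetic above. This is our own model, not the library's kernel (which is a vectorized NEON implementation), and the convention that min == max disables the extra clamping is an assumption:

#include <algorithm>
#include <cstdint>

// Scalar model of the quantize-down stage: add the offset (and optional bias),
// multiply, shift right, then saturate to the unsigned 8-bit output range.
// Assumption: min == max (the defaults) means no extra [min, max] clamping.
uint8_t quantize_down(int32_t input, int32_t bias, int32_t result_offset,
                      int32_t result_mult_int, int32_t result_shift,
                      int32_t min = 0, int32_t max = 0)
{
    int32_t result = ((input + bias + result_offset) * result_mult_int) >> result_shift;
    if (min != max)
    {
        result = std::min(std::max(result, min), max); // optional ReLU-style bounds
    }
    return static_cast<uint8_t>(std::min<int32_t>(std::max<int32_t>(result, 0), 255)); // saturate to QASYMM8
}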

Member Function Documentation

◆ configure()

void configure(const ITensor *input,
               const ITensor *bias,
               ITensor       *output,
               int            result_offset,
               int            result_mult_int,
               int            result_shift,
               int            min = 0,
               int            max = 0)

Initialise the kernel's inputs and output.

Parameters
[in]   input            Input tensor. It is the output of the NEGEMMLowpMatrixMultiplyCore function. Data type supported: S32.
[in]   bias             Biases tensor. Only shared biases are supported; it can be nullptr if the addition of biases is not required. Biases are a 1D tensor with dimensions [OFM]. Data type supported: same as input.
[out]  output           Output tensor. Data type supported: QASYMM8.
[in]   result_offset    Offset to be added to each element of the input matrix.
[in]   result_mult_int  Value by which each element of the input matrix is multiplied once result_offset has been added.
[in]   result_shift     Number of bits by which the result is shifted right before converting back to QASYMM8.
[in]   min              (Optional) Minimum value used to saturate down the output result before converting back to QASYMM8.
[in]   max              (Optional) Maximum value used to saturate up the output result before converting back to QASYMM8. Along with min, this value can be used to implement "rectified linear unit" activation functions.

Definition at line 34 of file NEGEMMLowpOutputStage.cpp.

void NEGEMMLowpQuantizeDownInt32ToUint8Scale::configure(const ITensor *input, const ITensor *bias, ITensor *output,
                                                        int result_offset, int result_mult_int, int result_shift,
                                                        int min, int max)
{
    auto k = arm_compute::support::cpp14::make_unique<NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel>();
    k->configure(input, bias, output, result_offset, result_mult_int, result_shift, min, max);
    _kernel = std::move(k);
}

References arm_compute::test::validation::bias.
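For context, a hypothetical end-to-end usage sketch follows. The tensor shapes, quantization parameters and surrounding setup are illustrative assumptions, not taken from the library documentation:

#include "arm_compute/runtime/NEON/NEFunctions.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

int main()
{
    // Illustrative shapes: gemm_out is assumed to hold the S32 output of
    // NEGEMMLowpMatrixMultiplyCore; bias is a 1D [OFM] tensor of the same type.
    Tensor gemm_out, bias, output;
    gemm_out.allocator()->init(TensorInfo(TensorShape(16U, 4U), 1, DataType::S32));
    bias.allocator()->init(TensorInfo(TensorShape(16U), 1, DataType::S32));
    output.allocator()->init(TensorInfo(TensorShape(16U, 4U), 1, DataType::QASYMM8));

    // Placeholder quantization parameters; real values come from the model.
    NEGEMMLowpQuantizeDownInt32ToUint8Scale output_stage;
    output_stage.configure(&gemm_out, &bias, &output,
                           /* result_offset   */ -100,
                           /* result_mult_int */ 2,
                           /* result_shift    */ 8);

    gemm_out.allocator()->allocate();
    bias.allocator()->allocate();
    output.allocator()->allocate();

    // ... fill gemm_out and bias with the GEMM result and biases ...

    output_stage.run();
    return 0;
}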

◆ validate()

static Status validate(const ITensorInfo *input,
                       const ITensorInfo *bias,
                       const ITensorInfo *output,
                       int                min = 0,
                       int                max = 0)

Static function to check whether the given info will lead to a valid configuration of NEGEMMLowpQuantizeDownInt32ToUint8Scale.

Parameters
[in]  input   Input tensor. It is the output of the NEGEMMLowpMatrixMultiplyCore function. Data type supported: S32.
[in]  bias    Biases tensor. Only shared biases are supported; it can be nullptr if the addition of biases is not required. Biases are a 1D tensor with dimensions [OFM]. Data type supported: same as input.
[in]  output  Output tensor. Data type supported: QASYMM8.
[in]  min     (Optional) Minimum value used to saturate down the output result before converting back to QASYMM8.
[in]  max     (Optional) Maximum value used to saturate up the output result before converting back to QASYMM8. Along with min, this value can be used to implement "rectified linear unit" activation functions.
Returns
a status

Definition at line 41 of file NEGEMMLowpOutputStage.cpp.

Status NEGEMMLowpQuantizeDownInt32ToUint8Scale::validate(const ITensorInfo *input, const ITensorInfo *bias, const ITensorInfo *output, int min, int max)
{
    return NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel::validate(input, bias, output, min, max);
}

References arm_compute::test::validation::bias, and NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel::validate().
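A short sketch of the validate-before-configure pattern; the TensorInfo shapes are illustrative, and error inspection uses the Status accessors error_code() and error_description():

#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/runtime/NEON/NEFunctions.h"

#include <iostream>

using namespace arm_compute;

int main()
{
    // Describe the tensors without allocating them (shapes are illustrative).
    TensorInfo in_info(TensorShape(16U, 4U), 1, DataType::S32);
    TensorInfo bias_info(TensorShape(16U), 1, DataType::S32);
    TensorInfo out_info(TensorShape(16U, 4U), 1, DataType::QASYMM8);

    // Ask up front whether this configuration would be valid.
    Status status = NEGEMMLowpQuantizeDownInt32ToUint8Scale::validate(&in_info, &bias_info, &out_info);
    if (status.error_code() != ErrorCode::OK)
    {
        std::cerr << "Invalid configuration: " << status.error_description() << std::endl;
        return 1;
    }
    return 0;
}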


The documentation for this class was generated from the following files:

NEGEMMLowpOutputStage.h
NEGEMMLowpOutputStage.cpp