Compute Library
 21.02
NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint Class Reference

Basic function to execute NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint on Neon. More...

#include <NEGEMMLowpOutputStage.h>

Collaboration diagram for NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint:

Public Member Functions

 NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint ()=default
 Constructor. More...
 
 NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint (const NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint & operator= (const NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint (NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint &&)=delete
 Prevent instances of this class from being moved (As this class contains non movable objects) More...
 
NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint & operator= (NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint &&)=delete
 Prevent instances of this class from being moved (As this class contains non movable objects) More...
 
 ~NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint ()
 Default destructor. More...
 
void configure (const ITensor *input, const ITensor *bias, ITensor *output, int result_fixedpoint_multiplier, int result_shift, int result_offset_after_shift, int min=std::numeric_limits< int32_t >::lowest(), int max=std::numeric_limits< int32_t >::max())
 Initialise the kernel's inputs and output. More...
 
- Public Member Functions inherited from INESimpleFunctionNoBorder
 INESimpleFunctionNoBorder (IRuntimeContext *ctx=nullptr)
 Constructor. More...
 
 INESimpleFunctionNoBorder (const INESimpleFunctionNoBorder &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 INESimpleFunctionNoBorder (INESimpleFunctionNoBorder &&)=default
 Default move constructor. More...
 
INESimpleFunctionNoBorder & operator= (const INESimpleFunctionNoBorder &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
INESimpleFunctionNoBorder & operator= (INESimpleFunctionNoBorder &&)=default
 Default move assignment operator. More...
 
 ~INESimpleFunctionNoBorder ()
 Default destructor. More...
 
void run () override final
 Run the kernels contained in the function. More...
 
- Public Member Functions inherited from IFunction
virtual ~IFunction ()=default
 Destructor. More...
 
virtual void prepare ()
 Prepare the function for executing. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *bias, const ITensorInfo *output, int min=std::numeric_limits< int32_t >::lowest(), int max=std::numeric_limits< int32_t >::max())
 Static function to check if given info will lead to a valid configuration of NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint. More...
 

Detailed Description

Basic function to execute NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint on Neon.

NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint depends on three parameters:

result_fixedpoint_multiplier, result_shift, result_offset_after_shift

The final result is:

(FixedPointMul(input[i][k], result_fixedpoint_multiplier) >> result_shift) + result_offset_after_shift

where FixedPointMul(x, y) is the nearest integer to the following mathematical expression, evaluated without overflow or intermediate rounding:

(x * y) / 2^31

For more information: https://github.com/google/gemmlowp/blob/master/public/output_stages.h#L68

In case the bias tensor is provided, the final result is:

((FixedPointMul(input[i][k] + bias[k], result_fixedpoint_multiplier)) >> result_shift) + result_offset_after_shift

This function calls the following Neon kernels:

  1. NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel
Note
The function also accepts two optional input arguments (min and max) which can be used to implement a "rectified linear unit" activation function after the result is shifted right by result_shift
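The per-element computation described above can be sketched as a self-contained scalar reference model. This is an illustration of the documented formula, not the library's Neon implementation: `fixed_point_mul` follows gemmlowp's rounding-doubling-high-multiply definition (ignoring the single saturating corner case `x == y == INT32_MIN`), and a plain arithmetic shift is used for `result_shift` to match the formula as written, whereas the actual kernel performs a rounding shift.

```cpp
#include <algorithm>
#include <cstdint>

// Nearest integer to (x * y) / 2^31, computed in 64 bits to avoid overflow.
// The nudge implements round-to-nearest with ties away from zero.
int32_t fixed_point_mul(int32_t x, int32_t y)
{
    const int64_t prod  = static_cast<int64_t>(x) * static_cast<int64_t>(y);
    const int64_t nudge = (prod >= 0) ? (1LL << 30) : (1 - (1LL << 30));
    return static_cast<int32_t>((prod + nudge) / (1LL << 31));
}

// Scalar model of the quantize-down output stage:
// clamp((FixedPointMul(acc + bias, multiplier) >> shift) + offset, min, max),
// then saturate into the uint8 range.
uint8_t quantize_down(int32_t acc, int32_t bias,
                      int32_t result_fixedpoint_multiplier, int result_shift,
                      int32_t result_offset_after_shift,
                      int32_t min, int32_t max)
{
    int32_t v = fixed_point_mul(acc + bias, result_fixedpoint_multiplier);
    v >>= result_shift;                 // the Neon kernel uses a rounding shift here
    v += result_offset_after_shift;
    v = std::max(min, std::min(max, v));
    return static_cast<uint8_t>(std::max(0, std::min(255, v)));
}
```

Setting min to 0 (with max at 255) makes the clamp act as a ReLU, which is the use case the note above refers to.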

Definition at line 71 of file NEGEMMLowpOutputStage.h.

Constructor & Destructor Documentation

◆ NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint() [1/3]

◆ NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint() [2/3]

Prevent instances of this class from being copied (As this class contains pointers)

◆ NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint() [3/3]

Prevent instances of this class from being moved (As this class contains non movable objects)

◆ ~NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint()

Default destructor.

Member Function Documentation

◆ configure()

void configure ( const ITensor * input,
const ITensor * bias,
ITensor * output,
int  result_fixedpoint_multiplier,
int  result_shift,
int  result_offset_after_shift,
int  min = std::numeric_limits<int32_t>::lowest(),
int  max = std::numeric_limits<int32_t>::max() 
)

Initialise the kernel's inputs and output.

Parameters
[in]  input  Input tensor. Data type supported: S32
[in]  bias  Biases tensor. Only shared biases are supported; it can be nullptr if the bias addition is not required. Biases are a 1D tensor with dimensions [OFM]. Data type supported: Same as input.
[out]  output  Output tensor. Data type supported: QASYMM8
[in]  result_fixedpoint_multiplier  Fixed point value to be multiplied with each element of the input matrix once the bias (if any) has been added
[in]  result_shift  Number of bits to shift right the result after the fixed point multiplication
[in]  result_offset_after_shift  Offset to be applied to the result before converting it back to QASYMM8
[in]  min  (Optional) Min value used to saturate down the output result before converting back to QASYMM8. Defaults to the minimum possible 32-bit signed integer.
[in]  max  (Optional) Max value used to saturate up the output result before converting back to QASYMM8. Along with min, this value can be used to implement a "rectified linear unit" activation function. Defaults to the maximum possible 32-bit signed integer.

Definition at line 37 of file NEGEMMLowpOutputStage.cpp.

Referenced by main().

{
    auto k = std::make_unique<NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel>();
    k->configure(input, bias, output, result_fixedpoint_multiplier, result_shift, result_offset_after_shift, min, max);
    _kernel = std::move(k);
}
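In practice, result_fixedpoint_multiplier and result_shift are not chosen by hand: they are derived from the real-valued requantization scale M = (input_scale * weight_scale) / output_scale, which lies in (0, 1) for typical networks. The sketch below shows the standard gemmlowp/TFLite-style decomposition of M into a Q0.31 multiplier and a right shift. The helper name `quantize_multiplier` is illustrative, not a Compute Library API (the library ships its own quantization utilities for this purpose).

```cpp
#include <cmath>
#include <cstdint>

// Decompose a real multiplier m in (0, 1) into a Q0.31 fixed-point multiplier
// and a right shift such that m ≈ (quantized_multiplier / 2^31) * 2^-right_shift.
// Illustrative helper, not part of the Compute Library API.
void quantize_multiplier(double m, int32_t &quantized_multiplier, int &right_shift)
{
    int exponent = 0;
    const double q = std::frexp(m, &exponent);   // m = q * 2^exponent, q in [0.5, 1)
    int64_t q_fixed = static_cast<int64_t>(std::llround(q * (1LL << 31)));
    if(q_fixed == (1LL << 31))                   // q rounded up to exactly 1.0
    {
        q_fixed /= 2;
        ++exponent;
    }
    quantized_multiplier = static_cast<int32_t>(q_fixed);   // result_fixedpoint_multiplier
    right_shift          = -exponent;                       // result_shift
}
```

For example, m = 0.25 yields a multiplier of 2^30 (i.e. 0.5 in Q0.31) and a right shift of 1, so the quantize-down stage computes 0.5 * x >> 1 = 0.25 * x.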

◆ operator=() [1/2]

Prevent instances of this class from being copied (As this class contains pointers)

◆ operator=() [2/2]

Prevent instances of this class from being moved (As this class contains non movable objects)

◆ validate()

Status validate ( const ITensorInfo * input,
const ITensorInfo * bias,
const ITensorInfo * output,
int  min = std::numeric_limits<int32_t>::lowest(),
int  max = std::numeric_limits<int32_t>::max() 
)
static

Static function to check if given info will lead to a valid configuration of NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint.

Parameters
[in]  input  Input tensor. It is the output of the NEGEMMLowpMatrixMultiplyCore function. Data type supported: S32
[in]  bias  Biases tensor. Only shared biases are supported; it can be nullptr if the bias addition is not required. Biases are a 1D tensor with dimensions [OFM]. Data type supported: Same as input.
[in]  output  Output tensor. Data type supported: QASYMM8
[in]  min  (Optional) Min value used to saturate down the output result before converting back to QASYMM8. Defaults to the minimum possible 32-bit signed integer.
[in]  max  (Optional) Max value used to saturate up the output result before converting back to QASYMM8. Along with min, this value can be used to implement a "rectified linear unit" activation function. Defaults to the maximum possible 32-bit signed integer.
Returns
a status

Definition at line 45 of file NEGEMMLowpOutputStage.cpp.

References NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel::validate().

{
    return NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel::validate(input, bias, output, min, max);
}

The documentation for this class was generated from the following files:

NEGEMMLowpOutputStage.h
NEGEMMLowpOutputStage.cpp