Compute Library
 21.02
NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint Class Reference

Basic function to execute NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint on Neon. More...

#include <NEGEMMLowpOutputStage.h>

Collaboration diagram for NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint:

Public Member Functions

 NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint ()=default
 Constructor. More...
 
 NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint (const NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint & operator= (const NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint (NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint &&)=delete
 Prevent instances of this class from being moved (As this class contains non movable objects) More...
 
NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint & operator= (NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint &&)=delete
 Prevent instances of this class from being moved (As this class contains non movable objects) More...
 
 ~NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint ()
 Default destructor. More...
 
void configure (const ITensor *input, const ITensor *bias, ITensor *output, int result_fixedpoint_multiplier, int result_shift, int result_offset_after_shift, int min=std::numeric_limits< int32_t >::lowest(), int max=std::numeric_limits< int32_t >::max())
 Initialise the kernel's inputs and output. More...
 
- Public Member Functions inherited from INESimpleFunctionNoBorder
 INESimpleFunctionNoBorder (IRuntimeContext *ctx=nullptr)
 Constructor. More...
 
 INESimpleFunctionNoBorder (const INESimpleFunctionNoBorder &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 INESimpleFunctionNoBorder (INESimpleFunctionNoBorder &&)=default
 Default move constructor. More...
 
INESimpleFunctionNoBorder & operator= (const INESimpleFunctionNoBorder &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
INESimpleFunctionNoBorder & operator= (INESimpleFunctionNoBorder &&)=default
 Default move assignment operator. More...
 
 ~INESimpleFunctionNoBorder ()
 Default destructor. More...
 
void run () override final
 Run the kernels contained in the function. More...
 
- Public Member Functions inherited from IFunction
virtual ~IFunction ()=default
 Destructor. More...
 
virtual void prepare ()
 Prepare the function for executing. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *bias, const ITensorInfo *output, int min=std::numeric_limits< int32_t >::lowest(), int max=std::numeric_limits< int32_t >::max())
 Static function to check if given info will lead to a valid configuration of NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint. More...
 

Detailed Description

Basic function to execute NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint on Neon.

NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint depends on 3 parameters:

result_fixedpoint_multiplier, result_shift, result_offset_after_shift

The final result is:

(FixedPointMul(input[i][k], result_fixedpoint_multiplier) >> result_shift) + result_offset_after_shift

where FixedPointMul(x, y) is the nearest integer to the following mathematical expression, evaluated without overflow or intermediate rounding:

(x * y) / 2^31

For more information: https://github.com/google/gemmlowp/blob/master/public/output_stages.h#L68

In case the bias tensor is provided, the final result is:

((FixedPointMul(input[i][k] + bias[k], result_fixedpoint_multiplier)) >> result_shift) + result_offset_after_shift

This function calls the following Neon kernels:

  1. NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPointKernel
Note
The function also accepts two optional input arguments (min and max) which can be used to implement a "rectified linear unit" activation function after the result is shifted right by result_shift
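The arithmetic above can be sketched as a scalar model in plain C++. This is for illustration only: the library applies it per element with Neon vector instructions, and the helper names fixed_point_mul and quantize_down are hypothetical, not part of the library API.

```cpp
#include <algorithm>
#include <cstdint>

// fixed_point_mul(x, y): nearest integer to (x * y) / 2^31, computed in
// 64 bits so there is no overflow or intermediate rounding. The nudge
// implements round-to-nearest before the truncating division.
int32_t fixed_point_mul(int32_t x, int32_t y)
{
    int64_t prod  = static_cast<int64_t>(x) * static_cast<int64_t>(y);
    int64_t nudge = prod >= 0 ? (1LL << 30) : (1 - (1LL << 30));
    return static_cast<int32_t>((prod + nudge) / (1LL << 31));
}

// One element of the output stage: bias (if any) is added before the
// fixed-point multiplication, then the result is shifted, offset,
// clamped to [min_bound, max_bound] and saturated to the int8 range.
int8_t quantize_down(int32_t input, int32_t bias,
                     int32_t result_fixedpoint_multiplier, int result_shift,
                     int32_t result_offset_after_shift,
                     int32_t min_bound, int32_t max_bound)
{
    int64_t v = fixed_point_mul(input + bias, result_fixedpoint_multiplier);
    v = (v >> result_shift) + result_offset_after_shift;
    v = std::clamp<int64_t>(v, min_bound, max_bound); // e.g. min_bound = 0 gives a ReLU
    return static_cast<int8_t>(std::clamp<int64_t>(v, -128, 127));
}
```

Setting min_bound to 0 (with max_bound left at its default) is the "rectified linear unit" use of the optional min/max arguments mentioned in the note above.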

Definition at line 143 of file NEGEMMLowpOutputStage.h.

Constructor & Destructor Documentation

◆ NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint() [1/3]

◆ NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint() [2/3]

Prevent instances of this class from being copied (As this class contains pointers)

◆ NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint() [3/3]

Prevent instances of this class from being moved (As this class contains non movable objects)

◆ ~NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint()

Member Function Documentation

◆ configure()

void configure ( const ITensor * input,
const ITensor * bias,
ITensor * output,
int  result_fixedpoint_multiplier,
int  result_shift,
int  result_offset_after_shift,
int  min = std::numeric_limits<int32_t>::lowest(),
int  max = std::numeric_limits<int32_t>::max() 
)

Initialise the kernel's inputs and output.

Parameters
[in] input Input tensor. Data type supported: S32
[in] bias Biases tensor. Only shared biases are supported, and it can be a nullptr if the bias addition is not required. Biases are a 1D tensor with dimensions [OFM]. Data type supported: Same as input.
[out] output Output tensor. Data type supported: QASYMM8_SIGNED
[in] result_fixedpoint_multiplier Fixed point value multiplied with each element of the input matrix once the bias has been added
[in] result_shift Number of bits to shift right the result after the fixed point multiplication
[in] result_offset_after_shift Offset applied to the result before converting it back to QASYMM8_SIGNED
[in] min (Optional) Min value used to saturate down the output result before converting back to QASYMM8_SIGNED. Defaults to the minimum possible 32-bit signed integer.
[in] max (Optional) Max value used to saturate up the output result before converting back to QASYMM8_SIGNED. Along with min, this value can be used to implement a "rectified linear unit" activation function. Defaults to the maximum possible 32-bit signed integer.

Definition at line 52 of file NEGEMMLowpOutputStage.cpp.

{
    auto k = std::make_unique<NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPointKernel>();
    k->configure(input, bias, output, result_fixedpoint_multiplier, result_shift, result_offset_after_shift, min, max);
    _kernel = std::move(k);
}
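A minimal usage sketch of this function follows. It assumes the library headers are available; the tensor shapes and requantization parameters are made up for illustration and would normally come from the quantization scheme of the preceding NEGEMMLowpMatrixMultiplyCore.

```cpp
#include "arm_compute/runtime/NEON/functions/NEGEMMLowpOutputStage.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

int main()
{
    Tensor src, dst;
    // S32 accumulators from the GEMMLowp core; the shape is illustrative.
    src.allocator()->init(TensorInfo(TensorShape(32U, 16U), 1, DataType::S32));
    dst.allocator()->init(TensorInfo(TensorShape(32U, 16U), 1, DataType::QASYMM8_SIGNED));

    // Example requantization parameters (application-specific).
    const int result_fixedpoint_multiplier = 1073741824; // ~0.5 in Q0.31
    const int result_shift                 = 1;
    const int result_offset_after_shift    = 0;

    // Optionally validate before configuring; bias may be nullptr.
    Status s = NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint::validate(
        src.info(), nullptr, dst.info());
    if(s.error_code() != ErrorCode::OK)
    {
        return 1;
    }

    NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint output_stage;
    output_stage.configure(&src, nullptr, &dst,
                           result_fixedpoint_multiplier, result_shift,
                           result_offset_after_shift, -128, 127);

    src.allocator()->allocate();
    dst.allocator()->allocate();
    // ... fill src with S32 accumulators ...
    output_stage.run();
    return 0;
}
```

Passing min = -128 and max = 127 here leaves the full QASYMM8_SIGNED range available; narrowing min to the zero point would instead fuse a ReLU into the output stage.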

◆ operator=() [1/2]

Prevent instances of this class from being copied (As this class contains pointers)

◆ operator=() [2/2]

Prevent instances of this class from being moved (As this class contains non movable objects)

◆ validate()

static Status validate ( const ITensorInfo * input,
const ITensorInfo * bias,
const ITensorInfo * output,
int  min = std::numeric_limits<int32_t>::lowest(),
int  max = std::numeric_limits<int32_t>::max() 
)

Static function to check if given info will lead to a valid configuration of NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint.

Parameters
[in] input Input tensor. It is the output of the NEGEMMLowpMatrixMultiplyCore function. Data type supported: S32
[in] bias Biases tensor. Only shared biases are supported, and it can be a nullptr if the bias addition is not required. Biases are a 1D tensor with dimensions [OFM]. Data type supported: Same as input.
[in] output Output tensor. Data type supported: QASYMM8_SIGNED
[in] min (Optional) Min value used to saturate down the output result before converting back to QASYMM8_SIGNED. Defaults to the minimum possible 32-bit signed integer.
[in] max (Optional) Max value used to saturate up the output result before converting back to QASYMM8_SIGNED. Along with min, this value can be used to implement a "rectified linear unit" activation function. Defaults to the maximum possible 32-bit signed integer.
Returns
a status

Definition at line 60 of file NEGEMMLowpOutputStage.cpp.

References NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPointKernel::validate(), and NEGEMMLowpQuantizeDownInt32ToInt16ScaleByFixedPoint::~NEGEMMLowpQuantizeDownInt32ToInt16ScaleByFixedPoint().

Referenced by arm_compute::test::validation::DATA_TEST_CASE().

{
    return NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPointKernel::validate(input, bias, output, min, max);
}

The documentation for this class was generated from the following files:
NEGEMMLowpOutputStage.h
NEGEMMLowpOutputStage.cpp