Neon kernel used to add the offset contribution and perform the output stage after NEGEMMLowpMatrixMultiplyKernel.
#include <NEGEMMLowpOffsetContributionOutputStageKernel.h>
Public Member Functions

const char * name () const override
    Name of the kernel.
NEGEMMLowpOffsetContributionOutputStageKernel ()
    Constructor.
NEGEMMLowpOffsetContributionOutputStageKernel (const NEGEMMLowpOffsetContributionOutputStageKernel &) = delete
    Prevent instances of this class from being copied (as this class contains pointers).
NEGEMMLowpOffsetContributionOutputStageKernel & operator= (const NEGEMMLowpOffsetContributionOutputStageKernel &) = delete
    Prevent instances of this class from being copied (as this class contains pointers).
NEGEMMLowpOffsetContributionOutputStageKernel (NEGEMMLowpOffsetContributionOutputStageKernel &&) = default
    Allow instances of this class to be moved.
NEGEMMLowpOffsetContributionOutputStageKernel & operator= (NEGEMMLowpOffsetContributionOutputStageKernel &&) = default
    Allow instances of this class to be moved.
~NEGEMMLowpOffsetContributionOutputStageKernel () = default
    Default destructor.
void configure (const ITensor *mm_result, const ITensor *vector_sum_col, const ITensor *vector_sum_row, const ITensor *bias, ITensor *output, int32_t k, int32_t a_offset, int32_t b_offset, GEMMLowpOutputStageInfo output_stage)
    Initialise the kernel's inputs and output.
void run (const Window &window, const ThreadInfo &info) override
    Execute the kernel on the passed window.

Public Member Functions inherited from ICPPKernel

virtual ~ICPPKernel () = default
    Default destructor.
virtual void run_nd (const Window &window, const ThreadInfo &info, const Window &thread_locator)
    Legacy compatibility layer for implementations which do not support thread_locator; in these cases we simply narrow the interface down to the legacy version.
virtual void run_op (ITensorPack &tensors, const Window &window, const ThreadInfo &info)
    Execute the kernel on the passed window.

Public Member Functions inherited from IKernel

IKernel ()
    Constructor.
virtual ~IKernel () = default
    Destructor.
virtual bool is_parallelisable () const
    Indicates whether or not the kernel is parallelisable.
virtual BorderSize border_size () const
    The size of the border for that kernel.
const Window & window () const
    The maximum window the kernel can be executed on.

Static Public Member Functions

static Status validate (const ITensorInfo *mm_result, const ITensorInfo *vector_sum_col, const ITensorInfo *vector_sum_row, const ITensorInfo *bias, const ITensorInfo *output, int32_t a_offset, int32_t b_offset, GEMMLowpOutputStageInfo output_stage)
    Static function to check if the given info will lead to a valid configuration of NEGEMMLowpOffsetContributionOutputStageKernel.
Neon kernel used to add the offset contribution and perform the output stage after NEGEMMLowpMatrixMultiplyKernel.
The computation is performed in-place.
This kernel takes a final int32 accumulator value (the output of NEGEMMLowpMatrixMultiplyKernel) and adds to it the offset contribution of matrix A and matrix B in-place.
The output stage can perform either QuantizeDownInt32ToUint8Scale or QuantizeDownInt32ToUint8ScaleByFixedPoint for UINT8, and either QuantizeDownInt32ToInt8Scale or QuantizeDownInt32ToInt8ScaleByFixedPoint for INT8.
For QuantizeDownInt32ToUint8Scale/QuantizeDownInt32ToInt8Scale the final result is:
((mm_result'[i][k] + result_offset) * result_mult_int) >> result_shift
For QuantizeDownInt32ToUint8ScaleByFixedPoint/QuantizeDownInt32ToInt8ScaleByFixedPoint the final result is:
(FixedPointMul(mm_result'[i][k], result_fixedpoint_multiplier) >> result_shift) + result_offset_after_shift
where FixedPointMul(x, y) is the nearest integer to the following mathematical expression, evaluated without overflow or intermediate rounding:
(x * y) / 2^31
and mm_result'[i][k] = mm_result[i][k] + (vector_sum_col[k] * a_offset) + (vector_sum_row[i] * b_offset) + (a_offset * b_offset * k)
Definition at line 62 of file NEGEMMLowpOffsetContributionOutputStageKernel.h.
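The arithmetic above can be illustrated with a small scalar reference. This is a minimal sketch of the fixed-point path only, processing one element at a time; the helper names and the QASYMM8 saturation bounds are illustrative assumptions, not the kernel's internal implementation (which is vectorized with Neon).

#include <algorithm>
#include <cstdint>

// Nearest integer to (x * y) / 2^31, evaluated in 64-bit to avoid overflow.
int32_t fixed_point_mul(int32_t x, int32_t y)
{
    const int64_t prod = static_cast<int64_t>(x) * static_cast<int64_t>(y);
    return static_cast<int32_t>((prod + (int64_t(1) << 30)) >> 31); // round to nearest
}

// Offset contribution followed by the fixed-point output stage, per element.
uint8_t offset_contribution_output_stage(int32_t mm_result, int32_t vector_sum_col_k, int32_t vector_sum_row_i,
                                         int32_t a_offset, int32_t b_offset, int32_t k,
                                         int32_t result_fixedpoint_multiplier, int32_t result_shift,
                                         int32_t result_offset_after_shift)
{
    // mm_result'[i][k] = mm_result[i][k] + vector_sum_col[k] * a_offset
    //                  + vector_sum_row[i] * b_offset + a_offset * b_offset * k
    const int32_t acc = mm_result + vector_sum_col_k * a_offset + vector_sum_row_i * b_offset + a_offset * b_offset * k;
    const int32_t down = (fixed_point_mul(acc, result_fixedpoint_multiplier) >> result_shift) + result_offset_after_shift;
    return static_cast<uint8_t>(std::min(std::max(down, 0), 255)); // saturate to the QASYMM8 range
}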
Constructor.
Definition at line 862 of file NEGEMMLowpOffsetContributionOutputStageKernel.cpp.
Referenced by NEGEMMLowpOffsetContributionOutputStageKernel::name().
NEGEMMLowpOffsetContributionOutputStageKernel (const NEGEMMLowpOffsetContributionOutputStageKernel &) [delete]
Prevent instances of this class from being copied (as this class contains pointers).
NEGEMMLowpOffsetContributionOutputStageKernel (NEGEMMLowpOffsetContributionOutputStageKernel &&) [default]
Allow instances of this class to be moved.
~NEGEMMLowpOffsetContributionOutputStageKernel () [default]
Default destructor.
Referenced by NEGEMMLowpOffsetContributionOutputStageKernel::name().
void configure (const ITensor *mm_result, const ITensor *vector_sum_col, const ITensor *vector_sum_row, const ITensor *bias, ITensor *output, int32_t k, int32_t a_offset, int32_t b_offset, GEMMLowpOutputStageInfo output_stage)
Initialise the kernel's inputs and output.
Parameters
[in]  mm_result       Input tensor containing the result of NEGEMMLowpMatrixMultiplyKernel. Data type supported: S32
[in]  vector_sum_col  Input row-vector of sums of all the entries in each column of matrix B. Note: vector_sum_col can be a nullptr in case a_offset = 0. Data type supported: same as mm_result
[in]  vector_sum_row  Input row-vector of sums of all the entries in each row of matrix A. Note: vector_sum_row can be a nullptr in case b_offset = 0. Data type supported: same as mm_result
[in]  bias            Biases tensor. Only shared biases are supported and it can be a nullptr if the addition of biases is not required. Biases are a 1D tensor with dimensions [OFM]. Data type supported: same as mm_result
[out] output          Output tensor containing the final quantized result. Data type supported: QASYMM8/QASYMM8_SIGNED
[in]  k               Number of matrix A columns (or matrix B rows)
[in]  a_offset        Offset to be added to each element of matrix A.
[in]  b_offset        Offset to be added to each element of matrix B.
[in]  output_stage    GEMMLowp output stage info, providing the type of quantization and the necessary parameters.
Definition at line 869 of file NEGEMMLowpOffsetContributionOutputStageKernel.cpp.
References ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, ITensor::info(), Dimensions< T >::num_dimensions(), ITensorInfo::tensor_shape(), and arm_compute::validate_arguments().
Referenced by NEGEMMLowpOffsetContributionOutputStageKernel::name().
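As a usage illustration, the following hypothetical setup configures the kernel for a 4x8 result with accumulation depth k = 16. All shapes, offsets, and output-stage values are assumed for the example, and the kernel's include path may differ between releases.

#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/Tensor.h"
#include "NEGEMMLowpOffsetContributionOutputStageKernel.h" // include path varies across releases

using namespace arm_compute;

void configure_example()
{
    // Assumed problem size: M = 4 rows of A, N = 8 columns of B, depth k = 16.
    Tensor mm_result, vector_sum_col, vector_sum_row, output;
    mm_result.allocator()->init(TensorInfo(TensorShape(8U, 4U), 1, DataType::S32));
    vector_sum_col.allocator()->init(TensorInfo(TensorShape(8U), 1, DataType::S32)); // one sum per column of B
    vector_sum_row.allocator()->init(TensorInfo(TensorShape(4U), 1, DataType::S32)); // one sum per row of A
    output.allocator()->init(TensorInfo(TensorShape(8U, 4U), 1, DataType::QASYMM8));

    GEMMLowpOutputStageInfo output_stage{};
    output_stage.type                = GEMMLowpOutputStageType::QUANTIZE_DOWN_FIXEDPOINT;
    output_stage.gemmlowp_multiplier = 1395864371; // assumed requantization parameters
    output_stage.gemmlowp_shift      = 8;
    output_stage.gemmlowp_offset     = 10;
    output_stage.gemmlowp_min_bound  = 0;
    output_stage.gemmlowp_max_bound  = 255;

    NEGEMMLowpOffsetContributionOutputStageKernel kernel;
    kernel.configure(&mm_result, &vector_sum_col, &vector_sum_row, nullptr /* no bias */, &output,
                     16 /* k */, -3 /* a_offset */, 5 /* b_offset */, output_stage);
    // allocate() the tensors and fill them before scheduling the kernel.
}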
const char * name () const [inline, override, virtual]
Name of the kernel.
Implements ICPPKernel.
Definition at line 65 of file NEGEMMLowpOffsetContributionOutputStageKernel.h.
References NEGEMMLowpOffsetContributionOutputStageKernel::configure(), arm_compute::test::validation::info, NEGEMMLowpOffsetContributionOutputStageKernel::NEGEMMLowpOffsetContributionOutputStageKernel(), NEGEMMLowpOffsetContributionOutputStageKernel::operator=(), NEGEMMLowpOffsetContributionOutputStageKernel::run(), NEGEMMLowpOffsetContributionOutputStageKernel::validate(), IKernel::window(), and NEGEMMLowpOffsetContributionOutputStageKernel::~NEGEMMLowpOffsetContributionOutputStageKernel().
NEGEMMLowpOffsetContributionOutputStageKernel & operator= (const NEGEMMLowpOffsetContributionOutputStageKernel &) [delete]
Prevent instances of this class from being copied (as this class contains pointers).
Referenced by NEGEMMLowpOffsetContributionOutputStageKernel::name().
NEGEMMLowpOffsetContributionOutputStageKernel & operator= (NEGEMMLowpOffsetContributionOutputStageKernel &&) [default]
Allow instances of this class to be moved.
void run (const Window &window, const ThreadInfo &info) [override, virtual]
Execute the kernel on the passed window.
Parameters
[in] window  Region on which to execute the kernel. (Must be a region of the window returned by window())
[in] info    Info about the executing thread and CPU.
Reimplemented from ICPPKernel.
Definition at line 918 of file NEGEMMLowpOffsetContributionOutputStageKernel.cpp.
References ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, ARM_COMPUTE_UNUSED, ITensorInfo::data_type(), GEMMLowpOutputStageInfo::gemmlowp_max_bound, GEMMLowpOutputStageInfo::gemmlowp_min_bound, arm_compute::get_min_max(), ITensor::info(), GEMMLowpOutputStageInfo::is_quantized_per_channel, ITensorInfo::num_dimensions(), arm_compute::QASYMM8_SIGNED, arm_compute::QUANTIZE_DOWN, ITensorInfo::tensor_shape(), GEMMLowpOutputStageInfo::type, type_max, type_min, IKernel::window(), Dimensions< T >::x(), and Dimensions< T >::y().
Referenced by NEGEMMLowpOffsetContributionOutputStageKernel::name().
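In practice run() is rarely called directly: a configured kernel is normally dispatched through the Neon scheduler, which splits the kernel's maximum window across worker threads and supplies each one's ThreadInfo. A minimal sketch, assuming a kernel configured as in the configure() example above:

#include "arm_compute/runtime/NEON/NEScheduler.h"
#include "NEGEMMLowpOffsetContributionOutputStageKernel.h" // include path varies across releases

using namespace arm_compute;

void run_example(NEGEMMLowpOffsetContributionOutputStageKernel &kernel)
{
    // Split the execution window along the Y dimension; each worker thread
    // receives a sub-window of kernel.window() together with its ThreadInfo.
    NEScheduler::get().schedule(&kernel, Window::DimY);
}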
static Status validate (const ITensorInfo *mm_result, const ITensorInfo *vector_sum_col, const ITensorInfo *vector_sum_row, const ITensorInfo *bias, const ITensorInfo *output, int32_t a_offset, int32_t b_offset, GEMMLowpOutputStageInfo output_stage) [static]
Static function to check if given info will lead to a valid configuration of NEGEMMLowpOffsetContributionOutputStageKernel.
Parameters
[in] mm_result       Input tensor info containing the result of NEGEMMLowpMatrixMultiplyKernel. Data type supported: S32
[in] vector_sum_col  Tensor info for the input row-vector of sums of all the entries in each column of matrix B. Note: vector_sum_col can be a nullptr in case a_offset = 0. Data type supported: same as mm_result
[in] vector_sum_row  Tensor info for the input row-vector of sums of all the entries in each row of matrix A. Note: vector_sum_row can be a nullptr in case b_offset = 0. Data type supported: same as mm_result
[in] bias            Biases tensor info. Only shared biases are supported and it can be a nullptr if the addition of biases is not required. Biases are a 1D tensor with dimensions [OFM]. Data type supported: same as mm_result
[in] output          Output tensor info containing the final quantized result. Data type supported: QASYMM8/QASYMM8_SIGNED
[in] a_offset        Offset to be added to each element of matrix A.
[in] b_offset        Offset to be added to each element of matrix B.
[in] output_stage    GEMMLowp output stage info, providing the type of quantization and the necessary parameters.
Definition at line 908 of file NEGEMMLowpOffsetContributionOutputStageKernel.cpp.
References ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_RETURN_ON_ERROR, ICloneable< T >::clone(), and arm_compute::validate_arguments().
Referenced by NEGEMMLowpOffsetContributionOutputStageKernel::name(), and NEGEMMLowpMatrixMultiplyCore::validate().
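Because validate() is static and operates on ITensorInfo objects only, a configuration can be checked before any memory is allocated. A hedged sketch with assumed shapes, offsets, and output-stage parameters:

#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/Types.h"
#include "NEGEMMLowpOffsetContributionOutputStageKernel.h" // include path varies across releases

using namespace arm_compute;

Status validate_example()
{
    const TensorInfo mm_result(TensorShape(8U, 4U), 1, DataType::S32);
    const TensorInfo vector_sum_col(TensorShape(8U), 1, DataType::S32);
    const TensorInfo vector_sum_row(TensorShape(4U), 1, DataType::S32);
    const TensorInfo output(TensorShape(8U, 4U), 1, DataType::QASYMM8);

    GEMMLowpOutputStageInfo output_stage{};
    output_stage.type                = GEMMLowpOutputStageType::QUANTIZE_DOWN_FIXEDPOINT;
    output_stage.gemmlowp_multiplier = 1395864371; // assumed requantization parameters
    output_stage.gemmlowp_shift      = 8;
    output_stage.gemmlowp_min_bound  = 0;
    output_stage.gemmlowp_max_bound  = 255;

    // Returns an error status (inspect with error_description()) on an invalid configuration.
    return NEGEMMLowpOffsetContributionOutputStageKernel::validate(
        &mm_result, &vector_sum_col, &vector_sum_row, nullptr /* bias */, &output,
        -3 /* a_offset */, 5 /* b_offset */, output_stage);
}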