24.02.1
Kernel used to add the offset contribution and perform the output stage after CpuGemmLowpMatrixMultiplyKernel. More...
#include <CpuGemmLowpOffsetContributionOutputStageKernel.h>
Public Member Functions | |
CpuGemmLowpOffsetContributionOutputStageKernel ()=default | |
Default constructor. More... | |
ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE (CpuGemmLowpOffsetContributionOutputStageKernel) | |
void | configure (const ITensorInfo *mm_result, const ITensorInfo *vector_sum_col, const ITensorInfo *vector_sum_row, const ITensorInfo *bias, ITensorInfo *dst, int32_t k, int32_t a_offset, int32_t b_offset, GEMMLowpOutputStageInfo output_stage) |
Initialise the kernel inputs and output. More... | |
void | run_op (ITensorPack &tensors, const Window &window, const ThreadInfo &info) override |
Execute the kernel on the passed window. More... | |
const char * | name () const override |
Name of the kernel. More... | |
Public Member Functions inherited from ICPPKernel | |
virtual | ~ICPPKernel ()=default |
Default destructor. More... | |
virtual void | run (const Window &window, const ThreadInfo &info) |
Execute the kernel on the passed window. More... | |
virtual void | run_nd (const Window &window, const ThreadInfo &info, const Window &thread_locator) |
Legacy compatibility layer for implementations which do not support thread_locator. In these cases we simply narrow the interface down to the legacy version. More... | |
virtual size_t | get_mws (const CPUInfo &platform, size_t thread_count) const |
Return minimum workload size of the relevant kernel. More... | |
Public Member Functions inherited from IKernel | |
IKernel () | |
Constructor. More... | |
virtual | ~IKernel ()=default |
Destructor. More... | |
virtual bool | is_parallelisable () const |
Indicates whether or not the kernel is parallelisable. More... | |
virtual BorderSize | border_size () const |
The size of the border for that kernel. More... | |
const Window & | window () const |
The maximum window the kernel can be executed on. More... | |
bool | is_window_configured () const |
Function to check if the embedded window of this kernel has been configured. More... | |
Static Public Member Functions | |
static Status | validate (const ITensorInfo *mm_result, const ITensorInfo *vector_sum_col, const ITensorInfo *vector_sum_row, const ITensorInfo *bias, const ITensorInfo *dst, int32_t a_offset, int32_t b_offset, GEMMLowpOutputStageInfo output_stage) |
Static function to check if given info will lead to a valid configuration. More... | |
Static Public Member Functions inherited from ICpuKernel< CpuGemmLowpOffsetContributionOutputStageKernel > | |
static const auto * | get_implementation (const SelectorType &selector, KernelSelectionType selection_type=KernelSelectionType::Supported) |
Micro-kernel selector. More... | |
Additional Inherited Members | |
Static Public Attributes inherited from ICPPKernel | |
static constexpr size_t | default_mws = 1 |
Kernel used to add the offset contribution and perform the output stage after CpuGemmLowpMatrixMultiplyKernel.
The computation is performed in-place.
This kernel takes a final int32 accumulator value (the output of CpuGemmLowpMatrixMultiplyKernel) and adds to it the offset contribution of matrix A and matrix B in-place.
For UINT8, the output stage can perform either QuantizeDownInt32ToUint8Scale or QuantizeDownInt32ToUint8ScaleByFixedPoint; for INT8, it can perform either QuantizeDownInt32ToInt8Scale or QuantizeDownInt32ToInt8ScaleByFixedPoint.
For QuantizeDownInt32ToUint8Scale/QuantizeDownInt32ToInt8Scale the final result is:
((mm_result'[i][k] + result_offset) * result_mult_int) >> result_shift
For QuantizeDownInt32ToUint8ScaleByFixedPoint/QuantizeDownInt32ToInt8ScaleByFixedPoint the final result is:
(FixedPointMul(mm_result'[i][k], result_fixedpoint_multiplier) >> result_shift) + result_offset_after_shift
where FixedPointMul(x, y) is the nearest integer to the following mathematical expression, evaluated without overflow or intermediate rounding:
(x * y) / 2^31
and mm_result'[i][k] = mm_result[i][k] + (vector_sum_col[k] * a_offset) + (vector_sum_row[i] * b_offset) + (a_offset * b_offset * k)
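The arithmetic above can be sketched as plain C++. This is a minimal, scalar model of the two UINT8 output-stage paths, assuming round-to-nearest for FixedPointMul; the helper names are illustrative and do not correspond to the ACL implementation.

```cpp
#include <algorithm>
#include <cstdint>

// Nearest integer to (x * y) / 2^31, evaluated in 64-bit to avoid overflow.
int32_t fixed_point_mul(int32_t x, int32_t y)
{
    const int64_t prod = static_cast<int64_t>(x) * static_cast<int64_t>(y);
    return static_cast<int32_t>((prod + (int64_t{1} << 30)) >> 31);
}

// mm_result'[i][k]: the accumulator plus the offset contributions of A and B.
int32_t offset_contribution(int32_t mm_result, int32_t sum_col, int32_t sum_row,
                            int32_t a_offset, int32_t b_offset, int32_t k)
{
    return mm_result + sum_col * a_offset + sum_row * b_offset + a_offset * b_offset * k;
}

// QuantizeDownInt32ToUint8ScaleByFixedPoint path, clamped to the UINT8 range.
uint8_t quantize_down_fixed_point(int32_t acc, int32_t result_fixedpoint_multiplier,
                                  int32_t result_shift, int32_t result_offset_after_shift)
{
    const int32_t out =
        (fixed_point_mul(acc, result_fixedpoint_multiplier) >> result_shift) +
        result_offset_after_shift;
    return static_cast<uint8_t>(std::min(255, std::max(0, out)));
}

// QuantizeDownInt32ToUint8Scale path, clamped to the UINT8 range.
uint8_t quantize_down_scale(int32_t acc, int32_t result_offset,
                            int32_t result_mult_int, int32_t result_shift)
{
    const int32_t out = ((acc + result_offset) * result_mult_int) >> result_shift;
    return static_cast<uint8_t>(std::min(255, std::max(0, out)));
}
```

For example, with a multiplier of 1 << 30 (0.5 in Q0.31), fixed_point_mul halves the accumulator with rounding; the production kernel performs the same computation vectorized over the window.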
Definition at line 67 of file CpuGemmLowpOffsetContributionOutputStageKernel.h.
default
Default constructor.
ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE | ( | CpuGemmLowpOffsetContributionOutputStageKernel | ) |
void configure | ( | const ITensorInfo * | mm_result, |
const ITensorInfo * | vector_sum_col, | ||
const ITensorInfo * | vector_sum_row, | ||
const ITensorInfo * | bias, | ||
ITensorInfo * | dst, | ||
int32_t | k, | ||
int32_t | a_offset, | ||
int32_t | b_offset, | ||
GEMMLowpOutputStageInfo | output_stage | ||
) |
Initialise the kernel inputs and output.
[in] | mm_result | Input tensor info containing the result of CpuGemmLowpMatrixMultiplyKernel. Data type supported: S32 |
[in] | vector_sum_col | Input row-vector tensor info of sums of all the entries in each column of matrix B. Can be a 1D or 2D tensor, in case of 2D, y dim is the batch dimension Note: vector_sum_col can be a nullptr in case a_offset = 0. Data type supported: same as mm_result |
[in] | vector_sum_row | Input row-vector tensor info of sums of all the entries in each row of matrix A. Can be a 1D or 2D tensor, in case of 2D, y dim is the batch dimension Note: vector_sum_row can be a nullptr in case b_offset = 0. Data type supported: same as mm_result |
[in] | bias | Biases tensor info. Only shared biases supported and it can be a nullptr if the addition of biases is not required. Biases are 1D tensor with dimensions [OFM]. Data type supported: Same as mm_result . |
[out] | dst | Output tensor info containing the final quantized result. Data type supported: QASYMM8/QASYMM8_SIGNED |
[in] | k | Number of matrix A columns or matrix B rows. |
[in] | a_offset | Offset to be added to each element of the matrix A. |
[in] | b_offset | Offset to be added to each element of the matrix B. |
[in] | output_stage | GEMMLowp output stage info, providing the type of quantization and the necessary parameters. |
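The nullptr rules in the parameter table above (the sum vectors are only required when the corresponding offset is non-zero) can be sketched as a small predicate. This is an illustrative model of the contract, not the ACL validate_arguments() implementation; all names are hypothetical.

```cpp
#include <cstdint>

// Hypothetical argument bundle for illustrating the configure() contract.
struct KernelArgs
{
    bool    has_vector_sum_col; // sums over the columns of matrix B
    bool    has_vector_sum_row; // sums over the rows of matrix A
    int32_t a_offset;
    int32_t b_offset;
};

// vector_sum_col may be nullptr only when a_offset == 0, and
// vector_sum_row may be nullptr only when b_offset == 0.
bool args_are_valid(const KernelArgs &args)
{
    const bool col_ok = (args.a_offset == 0) || args.has_vector_sum_col;
    const bool row_ok = (args.b_offset == 0) || args.has_vector_sum_row;
    return col_ok && row_ok;
}
```

Skipping a sum vector when its offset is zero avoids computing and reducing a whole matrix dimension whose contribution would multiply out to zero anyway.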
Definition at line 904 of file CpuGemmLowpOffsetContributionOutputStageKernel.cpp.
References ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, ARM_COMPUTE_UNUSED, arm_compute::auto_init_if_empty(), bias, arm_compute::calculate_max_window(), ICloneable< T >::clone(), arm_compute::test::validation::dst, Dimensions< T >::num_dimensions(), output_stage, arm_compute::QASYMM8, ITensorInfo::tensor_shape(), and arm_compute::cpu::kernels::validate_arguments().
override virtual
Name of the kernel.
Implements ICPPKernel.
Definition at line 1019 of file CpuGemmLowpOffsetContributionOutputStageKernel.cpp.
override virtual
Execute the kernel on the passed window.
[in] | tensors | A vector containing the tensors to operate on. |
[in] | window | Region on which to execute the kernel. (Must be a region of the window returned by window()) |
[in] | info | Info about executing thread and CPU. |
Reimplemented from ICPPKernel.
Definition at line 961 of file CpuGemmLowpOffsetContributionOutputStageKernel.cpp.
References arm_compute::ACL_DST, arm_compute::ACL_SRC_0, arm_compute::ACL_SRC_1, arm_compute::ACL_SRC_2, arm_compute::ACL_SRC_3, ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, ARM_COMPUTE_UNUSED, bias, arm_compute::test::validation::dst, GEMMLowpOutputStageInfo::gemmlowp_max_bound, GEMMLowpOutputStageInfo::gemmlowp_min_bound, ITensorPack::get_const_tensor(), arm_compute::get_min_max(), ITensorPack::get_tensor(), arm_compute::test::validation::info, GEMMLowpOutputStageInfo::is_quantized_per_channel, arm_compute::QASYMM8_SIGNED, arm_compute::QUANTIZE_DOWN, GEMMLowpOutputStageInfo::type, and IKernel::window().
static
Static function to check if given info will lead to a valid configuration.
Similar to CpuGemmLowpOffsetContributionOutputStageKernel::configure()
Definition at line 946 of file CpuGemmLowpOffsetContributionOutputStageKernel.cpp.
References ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_RETURN_ON_ERROR, bias, output_stage, and arm_compute::cpu::kernels::validate_arguments().
Referenced by CpuGemmLowpMatrixMultiplyCore::validate().