Compute Library
 21.08
CpuGemmLowpOffsetContributionOutputStageKernel Class Reference

Kernel used to add the offset contribution and perform the output stage after CpuGemmLowpMatrixMultiplyKernel. More...

#include <CpuGemmLowpOffsetContributionOutputStageKernel.h>

Collaboration diagram for CpuGemmLowpOffsetContributionOutputStageKernel:

Public Member Functions

 CpuGemmLowpOffsetContributionOutputStageKernel ()=default
 Default constructor. More...
 
 ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE (CpuGemmLowpOffsetContributionOutputStageKernel)
 
void configure (const ITensorInfo *mm_result, const ITensorInfo *vector_sum_col, const ITensorInfo *vector_sum_row, const ITensorInfo *bias, ITensorInfo *dst, int32_t k, int32_t a_offset, int32_t b_offset, GEMMLowpOutputStageInfo output_stage)
 Initialise the kernel inputs and output. More...
 
void run_op (ITensorPack &tensors, const Window &window, const ThreadInfo &info) override
 Execute the kernel on the passed window. More...
 
const char * name () const override
 Name of the kernel. More...
 
- Public Member Functions inherited from ICPPKernel
virtual ~ICPPKernel ()=default
 Default destructor. More...
 
virtual void run (const Window &window, const ThreadInfo &info)
 Execute the kernel on the passed window. More...
 
virtual void run_nd (const Window &window, const ThreadInfo &info, const Window &thread_locator)
 Legacy compatibility layer for implementations which do not support thread_locator. In these cases we simply narrow the interface down to the legacy version. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 
bool is_window_configured () const
 Function to check if the embedded window of this kernel has been configured. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *mm_result, const ITensorInfo *vector_sum_col, const ITensorInfo *vector_sum_row, const ITensorInfo *bias, const ITensorInfo *dst, int32_t a_offset, int32_t b_offset, GEMMLowpOutputStageInfo output_stage)
 Static function to check if given info will lead to a valid configuration. More...
 

Detailed Description

Kernel used to add the offset contribution and perform the output stage after CpuGemmLowpMatrixMultiplyKernel.

The computation is performed in-place.

This kernel takes a final int32 accumulator value (the output of CpuGemmLowpMatrixMultiplyKernel), and adds to it the offset contribution of matrix A and matrix B in-place.

The output stage can perform either QuantizeDownInt32ToUint8Scale or QuantizeDownInt32ToUint8ScaleByFixedPoint for Uint8. The output stage can perform either QuantizeDownInt32ToInt8Scale or QuantizeDownInt32ToInt8ScaleByFixedPoint for Int8.

For QuantizeDownInt32ToUint8Scale/QuantizeDownInt32ToInt8Scale the final result is:

((mm_result'[i][k] + result_offset) * result_mult_int) >> result_shift

For QuantizeDownInt32ToUint8ScaleByFixedPoint/QuantizeDownInt32ToInt8ScaleByFixedPoint the final result is:

(FixedPointMul(mm_result'[i][k], result_fixedpoint_multiplier) >> result_shift) + result_offset_after_shift

where FixedPointMul(x, y) is the nearest integer to the following mathematical expression, evaluated without overflow or intermediate rounding:

(x * y) / 2^31

and mm_result'[i][k] = mm_result[i][k] + (vector_sum_col[k] * a_offset) + (vector_sum_row[i] * b_offset) + (a_offset * b_offset * k)
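
A scalar sketch can make these formulas concrete. The following is illustrative only: the helper names are hypothetical, the kernel's real implementation is vectorised, and it follows the fixed-point (ScaleByFixedPoint) path for Uint8 as written above (the production kernel additionally uses a rounding right shift where this sketch uses a plain one):

#include <algorithm>
#include <cstdint>

// Hypothetical scalar reference for FixedPointMul(x, y): the nearest integer to
// (x * y) / 2^31. The single saturating case (x == y == INT32_MIN) is ignored here.
int32_t fixed_point_mul(int32_t x, int32_t y)
{
    const int64_t prod  = static_cast<int64_t>(x) * static_cast<int64_t>(y);
    const int64_t nudge = (prod >= 0) ? (INT64_C(1) << 30) : (1 - (INT64_C(1) << 30));
    return static_cast<int32_t>((prod + nudge) >> 31);
}

// mm_result'[i][k] as defined above: accumulator plus the offset contributions.
int32_t offset_contribution(int32_t mm_result, int32_t vector_sum_col_k, int32_t vector_sum_row_i,
                            int32_t a_offset, int32_t b_offset, int32_t k)
{
    return mm_result + vector_sum_col_k * a_offset + vector_sum_row_i * b_offset + a_offset * b_offset * k;
}

// Fixed-point output stage for Uint8, following the formula above.
uint8_t quantize_down_fixedpoint(int32_t mm_result_prime, int32_t result_fixedpoint_multiplier,
                                 int32_t result_shift, int32_t result_offset_after_shift)
{
    int32_t v = fixed_point_mul(mm_result_prime, result_fixedpoint_multiplier) >> result_shift;
    v += result_offset_after_shift;
    return static_cast<uint8_t>(std::min(255, std::max(0, v))); // clamp to the Uint8 range
}

For instance, with a_offset = 2, b_offset = 3 and k = 64, an accumulator of 1000 with vector_sum_col[k] = 50 and vector_sum_row[i] = 40 gives mm_result' = 1000 + 100 + 120 + 384 = 1604.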

Definition at line 66 of file CpuGemmLowpOffsetContributionOutputStageKernel.h.

Constructor & Destructor Documentation

◆ CpuGemmLowpOffsetContributionOutputStageKernel()

Default constructor.

Member Function Documentation

◆ ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE()

ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE ( CpuGemmLowpOffsetContributionOutputStageKernel  )

◆ configure()

void configure ( const ITensorInfo *  mm_result,
const ITensorInfo *  vector_sum_col,
const ITensorInfo *  vector_sum_row,
const ITensorInfo *  bias,
ITensorInfo *  dst,
int32_t  k,
int32_t  a_offset,
int32_t  b_offset,
GEMMLowpOutputStageInfo  output_stage 
)

Initialise the kernel inputs and output.

Parameters
[in]  mm_result       Input tensor info containing the result of CpuGemmLowpMatrixMultiplyKernel. Data type supported: S32
[in]  vector_sum_col  Input row-vector tensor info of sums of all the entries in each column of matrix B. Note: vector_sum_col can be a nullptr in case a_offset = 0. Data type supported: same as mm_result
[in]  vector_sum_row  Input row-vector tensor info of sums of all the entries in each row of matrix A. Note: vector_sum_row can be a nullptr in case b_offset = 0. Data type supported: same as mm_result
[in]  bias            Biases tensor info. Only shared biases supported and it can be a nullptr if the addition of biases is not required. Biases are a 1D tensor with dimensions [OFM]. Data type supported: same as mm_result.
[out] dst             Output tensor info containing the final quantized result. Data type supported: QASYMM8/QASYMM8_SIGNED
[in]  k               Number of matrix A columns or matrix B rows
[in]  a_offset        Offset to be added to each element of the matrix A.
[in]  b_offset        Offset to be added to each element of the matrix B.
[in]  output_stage    GEMMLowp output stage info, providing the type of quantization and the necessary parameters.

Definition at line 842 of file CpuGemmLowpOffsetContributionOutputStageKernel.cpp.

References ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, ARM_COMPUTE_UNUSED, arm_compute::auto_init_if_empty(), arm_compute::calculate_max_window(), ICloneable< T >::clone(), Dimensions< T >::num_dimensions(), arm_compute::QASYMM8, and ITensorInfo::tensor_shape().

{
    ARM_COMPUTE_UNUSED(vector_sum_row, bias);
    // Perform validate step
    ARM_COMPUTE_ERROR_ON_NULLPTR(mm_result, dst);
    ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(mm_result, vector_sum_col, vector_sum_row, bias, dst, a_offset, b_offset, output_stage));

    _a_offset     = a_offset;
    _b_offset     = b_offset;
    _k_offset     = a_offset * b_offset * k;
    _output_stage = output_stage;

    // If a_offset == 0, vector_sum_col can be a nullptr
    if(a_offset != 0)
    {
        // Check if vector_sum_col_shape should be slid or not
        // Don't slide vector_sum_col_shape along the y dimension if vector_sum_col_shape has just 1 dimension and vector_sum_row_shape more than 1
        // This scenario can happen when the matrix multiplication is used to perform a convolution operation
        _slide_vector_sum_col = vector_sum_col->tensor_shape().num_dimensions() > 1;
    }

    // Output auto initialization if not yet initialized
    auto_init_if_empty(*dst, mm_result->clone()->set_data_type(DataType::QASYMM8));

    // Configure kernel window
    Window win = calculate_max_window(*mm_result, Steps());

    // Note: This kernel performs 16 elements per iteration.
    // However, since we use a left-over for loop, we cannot have any out-of-bounds reads or writes.
    // For this reason num_elems_processed_per_iteration is 1 and so update_window_and_padding() can be skipped
    ICpuKernel::configure(win);
}
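
The following is a minimal configuration sketch, not taken from the library's examples. The shapes, offsets, quantization parameters and include path are illustrative assumptions; only configure() and the types it takes come from the interface above:

#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/Types.h"
#include "src/core/cpu/kernels/CpuGemmLowpOffsetContributionOutputStageKernel.h" // assumed path

using namespace arm_compute;

void configure_example()
{
    // Accumulators from CpuGemmLowpMatrixMultiplyKernel: M = 16 rows, N = 32 columns, K = 64.
    TensorInfo mm_result(TensorShape(32U, 16U), 1, DataType::S32);
    TensorInfo vector_sum_col(TensorShape(32U), 1, DataType::S32); // sums of each column of B
    TensorInfo vector_sum_row(TensorShape(16U), 1, DataType::S32); // sums of each row of A
    TensorInfo dst(TensorShape(32U, 16U), 1, DataType::QASYMM8);

    GEMMLowpOutputStageInfo output_stage{};
    output_stage.type                = GEMMLowpOutputStageType::QUANTIZE_DOWN_FIXEDPOINT;
    output_stage.gemmlowp_multiplier = 1073741824; // illustrative result_fixedpoint_multiplier
    output_stage.gemmlowp_shift      = 1;          // illustrative result_shift
    output_stage.gemmlowp_offset     = 10;         // illustrative result_offset_after_shift
    output_stage.gemmlowp_min_bound  = 0;
    output_stage.gemmlowp_max_bound  = 255;
    output_stage.output_data_type    = DataType::QASYMM8;

    cpu::kernels::CpuGemmLowpOffsetContributionOutputStageKernel kernel;
    kernel.configure(&mm_result, &vector_sum_col, &vector_sum_row, nullptr /* bias */,
                     &dst, 64 /* k */, 2 /* a_offset */, 3 /* b_offset */, output_stage);
}

Passing nullptr for bias is allowed because the addition of biases is optional; vector_sum_col and vector_sum_row may likewise be nullptr when a_offset or b_offset, respectively, is zero.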

◆ name()

const char * name ( ) const
override virtual

Name of the kernel.

Returns
Kernel name

Implements ICPPKernel.

Definition at line 940 of file CpuGemmLowpOffsetContributionOutputStageKernel.cpp.

{
    return "CpuGemmLowpOffsetContributionOutputStageKernel";
}

◆ run_op()

void run_op ( ITensorPack &  tensors,
const Window &  window,
const ThreadInfo &  info 
)
override virtual

Execute the kernel on the passed window.

Warning
If is_parallelisable() returns false then the passed window must be equal to window()
Note
The window has to be a region within the window returned by the window() method
The width of the window has to be a multiple of num_elems_processed_per_iteration().
Parameters
[in]tensorsA vector containing the tensors to operate on.
[in]windowRegion on which to execute the kernel. (Must be a region of the window returned by window())
[in]infoInfo about executing thread and CPU.

Reimplemented from ICPPKernel.

Definition at line 887 of file CpuGemmLowpOffsetContributionOutputStageKernel.cpp.

References arm_compute::ACL_DST, arm_compute::ACL_SRC_0, arm_compute::ACL_SRC_1, arm_compute::ACL_SRC_2, arm_compute::ACL_SRC_3, ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, ARM_COMPUTE_UNUSED, arm_compute::test::validation::dst, ITensorPack::get_const_tensor(), arm_compute::get_min_max(), ITensorPack::get_tensor(), arm_compute::QASYMM8_SIGNED, arm_compute::QUANTIZE_DOWN, and IKernel::window().

{
    ARM_COMPUTE_UNUSED(info);
    ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
    ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(IKernel::window(), window);

    auto mm_result      = tensors.get_const_tensor(TensorType::ACL_SRC_0);
    auto vector_sum_col = tensors.get_const_tensor(TensorType::ACL_SRC_1);
    auto vector_sum_row = tensors.get_const_tensor(TensorType::ACL_SRC_2);
    auto bias           = tensors.get_const_tensor(TensorType::ACL_SRC_3);
    auto dst            = tensors.get_tensor(TensorType::ACL_DST);

    PixelValue type_min{};
    PixelValue type_max{};
    std::tie(type_min, type_max) = get_min_max(dst->info()->data_type());
    int32_t type_min_int = type_min.get<int32_t>();
    int32_t type_max_int = type_max.get<int32_t>();

    const bool reinterpret_as_3d = vector_sum_row != nullptr
                                   && mm_result->info()->num_dimensions() > 1
                                   && mm_result->info()->tensor_shape().y() != vector_sum_row->info()->tensor_shape().x();

    const bool is_bounded_relu = !(_output_stage.gemmlowp_min_bound <= type_min_int && _output_stage.gemmlowp_max_bound >= type_max_int);

    // Check if we need to perform fixed point requantization
    const bool is_fixed_point = _output_stage.type != GEMMLowpOutputStageType::QUANTIZE_DOWN;

    // Check if the output is signed
    const bool is_signed = dst->info()->data_type() == DataType::QASYMM8_SIGNED;

    // Check if symmetric per-channel execution
    const bool is_symm = _output_stage.is_quantized_per_channel;

    if(is_symm)
    {
        run_offset_contribution_output_stage_symm(window, mm_result, vector_sum_col, vector_sum_row, bias, dst, _a_offset, _b_offset, _k_offset, _slide_vector_sum_col, _output_stage,
                                                  reinterpret_as_3d, is_bounded_relu, is_fixed_point);
    }
    else
    {
        if(is_signed)
        {
            run_offset_contribution_output_stage<int8_t>(window, mm_result, vector_sum_col, vector_sum_row, bias, dst, _a_offset, _b_offset, _k_offset, _slide_vector_sum_col, _output_stage,
                                                         reinterpret_as_3d, is_bounded_relu, is_fixed_point);
        }
        else
        {
            run_offset_contribution_output_stage<uint8_t>(window, mm_result, vector_sum_col, vector_sum_row, bias, dst, _a_offset, _b_offset, _k_offset, _slide_vector_sum_col, _output_stage,
                                                          reinterpret_as_3d, is_bounded_relu, is_fixed_point);
        }
    }
}
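
As a usage illustration (not from the library's documentation), a caller packs the tensors under the ACL_SRC_0..ACL_SRC_3 and ACL_DST slots shown above and runs the kernel over its maximum window. The tensor pointers below are assumed to be allocated tensors whose infos match what was passed to configure():

#include "arm_compute/core/ITensorPack.h"
#include "arm_compute/core/CPP/CPPTypes.h" // ThreadInfo
#include "src/core/cpu/kernels/CpuGemmLowpOffsetContributionOutputStageKernel.h" // assumed path

using namespace arm_compute;

void run_example(cpu::kernels::CpuGemmLowpOffsetContributionOutputStageKernel &kernel,
                 const ITensor *mm_result, const ITensor *vector_sum_col,
                 const ITensor *vector_sum_row, ITensor *dst)
{
    ITensorPack pack;
    pack.add_const_tensor(TensorType::ACL_SRC_0, mm_result);
    pack.add_const_tensor(TensorType::ACL_SRC_1, vector_sum_col);
    pack.add_const_tensor(TensorType::ACL_SRC_2, vector_sum_row);
    pack.add_const_tensor(TensorType::ACL_SRC_3, nullptr); // no bias in this sketch
    pack.add_tensor(TensorType::ACL_DST, dst);

    // Single-threaded call over the kernel's full window; a scheduler would
    // normally split the window and fill in ThreadInfo per thread.
    ThreadInfo info{};
    kernel.run_op(pack, kernel.window(), info);
}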

◆ validate()

Status validate ( const ITensorInfo *  mm_result,
const ITensorInfo *  vector_sum_col,
const ITensorInfo *  vector_sum_row,
const ITensorInfo *  bias,
const ITensorInfo *  dst,
int32_t  a_offset,
int32_t  b_offset,
GEMMLowpOutputStageInfo  output_stage 
)
static

Static function to check if given info will lead to a valid configuration.

Similar to CpuGemmLowpOffsetContributionOutputStageKernel::configure()

Returns
a status

Definition at line 878 of file CpuGemmLowpOffsetContributionOutputStageKernel.cpp.

References ARM_COMPUTE_ERROR_ON_NULLPTR, and ARM_COMPUTE_RETURN_ON_ERROR.

Referenced by CpuGemmLowpMatrixMultiplyCore::validate().

{
    ARM_COMPUTE_ERROR_ON_NULLPTR(mm_result, output);
    ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(mm_result, vector_sum_col, vector_sum_row, bias, output, a_offset, b_offset, output_stage));
    return Status{};
}
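
A hedged pre-flight check might look like the following; the tensor infos mirror the configure() sketch earlier, the offsets are illustrative, and the conversion of Status to bool (true on success) is the library's own:

#include "src/core/cpu/kernels/CpuGemmLowpOffsetContributionOutputStageKernel.h" // assumed path

using namespace arm_compute;

bool is_valid_config(const ITensorInfo &mm_result, const ITensorInfo &vector_sum_col,
                     const ITensorInfo &vector_sum_row, const ITensorInfo &dst,
                     const GEMMLowpOutputStageInfo &output_stage)
{
    const Status st = cpu::kernels::CpuGemmLowpOffsetContributionOutputStageKernel::validate(
        &mm_result, &vector_sum_col, &vector_sum_row, nullptr /* bias */, &dst,
        2 /* a_offset */, 3 /* b_offset */, output_stage);
    return bool(st); // true when no error was raised
}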

The documentation for this class was generated from the following files:

CpuGemmLowpOffsetContributionOutputStageKernel.h
CpuGemmLowpOffsetContributionOutputStageKernel.cpp