Compute Library
 21.02
NEGEMMLowpOffsetContributionOutputStageKernel Class Reference

Neon kernel used to add the offset contribution and perform the output stage after NEGEMMLowpMatrixMultiplyKernel.

#include <NEGEMMLowpOffsetContributionOutputStageKernel.h>


Public Member Functions

const char * name () const override
 Name of the kernel.

 NEGEMMLowpOffsetContributionOutputStageKernel ()
 Constructor.

 NEGEMMLowpOffsetContributionOutputStageKernel (const NEGEMMLowpOffsetContributionOutputStageKernel &)=delete
 Prevent instances of this class from being copied (as this class contains pointers).

NEGEMMLowpOffsetContributionOutputStageKernel & operator= (const NEGEMMLowpOffsetContributionOutputStageKernel &)=delete
 Prevent instances of this class from being copied (as this class contains pointers).

 NEGEMMLowpOffsetContributionOutputStageKernel (NEGEMMLowpOffsetContributionOutputStageKernel &&)=default
 Allow instances of this class to be moved.

NEGEMMLowpOffsetContributionOutputStageKernel & operator= (NEGEMMLowpOffsetContributionOutputStageKernel &&)=default
 Allow instances of this class to be moved.

 ~NEGEMMLowpOffsetContributionOutputStageKernel ()=default
 Default destructor.

void configure (const ITensor *mm_result, const ITensor *vector_sum_col, const ITensor *vector_sum_row, const ITensor *bias, ITensor *output, int32_t k, int32_t a_offset, int32_t b_offset, GEMMLowpOutputStageInfo output_stage)
 Initialise the kernel's input and output.

void run (const Window &window, const ThreadInfo &info) override
 Execute the kernel on the passed window.

- Public Member Functions inherited from ICPPKernel

virtual ~ICPPKernel ()=default
 Default destructor.

virtual void run_nd (const Window &window, const ThreadInfo &info, const Window &thread_locator)
 Legacy compatibility layer for implementations which do not support thread_locator; in these cases we simply narrow the interface down to the legacy version.

virtual void run_op (ITensorPack &tensors, const Window &window, const ThreadInfo &info)
 Execute the kernel on the passed window.

- Public Member Functions inherited from IKernel

 IKernel ()
 Constructor.

virtual ~IKernel ()=default
 Destructor.

virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable.

virtual BorderSize border_size () const
 The size of the border for that kernel.

const Window & window () const
 The maximum window the kernel can be executed on.

Static Public Member Functions

static Status validate (const ITensorInfo *mm_result, const ITensorInfo *vector_sum_col, const ITensorInfo *vector_sum_row, const ITensorInfo *bias, const ITensorInfo *output, int32_t a_offset, int32_t b_offset, GEMMLowpOutputStageInfo output_stage)
 Static function to check if given info will lead to a valid configuration of NEGEMMLowpOffsetContributionOutputStageKernel.
 

Detailed Description

Neon kernel used to add the offset contribution and perform the output stage after NEGEMMLowpMatrixMultiplyKernel.

The computation is performed in-place.

This kernel takes a final int32 accumulator value (the output of NEGEMMLowpMatrixMultiplyKernel), and adds to it the offset contribution of matrix A and matrix B in-place.

For UINT8 output, the output stage can perform either QuantizeDownInt32ToUint8Scale or QuantizeDownInt32ToUint8ScaleByFixedPoint; for INT8 output, either QuantizeDownInt32ToInt8Scale or QuantizeDownInt32ToInt8ScaleByFixedPoint.

For QuantizeDownInt32ToUint8Scale/QuantizeDownInt32ToInt8Scale the final result is:

((mm_result'[i][k] + result_offset) * result_mult_int) >> result_shift

For QuantizeDownInt32ToUint8ScaleByFixedPoint/QuantizeDownInt32ToInt8ScaleByFixedPoint the final result is:

(FixedPointMul(mm_result'[i][k], result_fixedpoint_multiplier) >> result_shift) + result_offset_after_shift

where FixedPointMul(x, y) is the nearest integer to the following mathematical expression, evaluated without overflow or intermediate rounding:

(x * y) / 2^31

and mm_result'[i][k] = mm_result[i][k] + (vector_sum_col[k] * a_offset) + (vector_sum_row[i] * b_offset) + (a_offset * b_offset * k)
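The formulas above can be modelled with plain scalar C++ (a minimal sketch for one accumulator element; the kernel itself uses Neon vector code, and the function names below are hypothetical):

```cpp
#include <algorithm>
#include <cstdint>

// mm_result'[i][k]: add the offset contributions of matrix A and matrix B.
int32_t offset_contribution(int32_t mm_result, int32_t vector_sum_col_k,
                            int32_t vector_sum_row_i, int32_t a_offset,
                            int32_t b_offset, int32_t k)
{
    return mm_result + vector_sum_col_k * a_offset + vector_sum_row_i * b_offset + a_offset * b_offset * k;
}

// QuantizeDownInt32ToUint8Scale: add offset, integer multiply, shift, saturate to U8.
uint8_t quantize_down_scale(int32_t mm_result_prime, int32_t result_offset,
                            int32_t result_mult_int, int32_t result_shift)
{
    const int32_t v = ((mm_result_prime + result_offset) * result_mult_int) >> result_shift;
    return static_cast<uint8_t>(std::min(255, std::max(0, v)));
}

// FixedPointMul: nearest integer to (x * y) / 2^31, computed in 64 bits so the
// product cannot overflow. Truncating division (not a shift) gives round-to-nearest
// once the sign-dependent nudge is added.
int32_t fixed_point_mul(int32_t x, int32_t y)
{
    const int64_t prod  = static_cast<int64_t>(x) * static_cast<int64_t>(y);
    const int64_t nudge = prod >= 0 ? (1LL << 30) : (1 - (1LL << 30));
    return static_cast<int32_t>((prod + nudge) / (1LL << 31));
}
```

For example, with mm_result = 100, column sum 10, row sum 20, a_offset = 2, b_offset = 3 and k = 4, the contributed value is 100 + 10*2 + 20*3 + 2*3*4 = 204.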

Definition at line 62 of file NEGEMMLowpOffsetContributionOutputStageKernel.h.

Constructor & Destructor Documentation

◆ NEGEMMLowpOffsetContributionOutputStageKernel() [1/3]

Constructor.

Definition at line 862 of file NEGEMMLowpOffsetContributionOutputStageKernel.cpp.

Referenced by NEGEMMLowpOffsetContributionOutputStageKernel::name().

863  : _vector_sum_col(nullptr), _vector_sum_row(nullptr), _bias(nullptr), _mm_result(nullptr), _output(nullptr), _a_offset(0), _b_offset(0), _k_offset(0), _slide_vector_sum_col(true),
864  _output_stage(GEMMLowpOutputStageInfo())
865 
866 {
867 }

◆ NEGEMMLowpOffsetContributionOutputStageKernel() [2/3]

Prevent instances of this class from being copied (As this class contains pointers)

◆ NEGEMMLowpOffsetContributionOutputStageKernel() [3/3]

Allow instances of this class to be moved.

◆ ~NEGEMMLowpOffsetContributionOutputStageKernel()

Member Function Documentation

◆ configure()

void configure ( const ITensor *  mm_result,
 const ITensor *  vector_sum_col,
 const ITensor *  vector_sum_row,
 const ITensor *  bias,
 ITensor *  output,
 int32_t  k,
 int32_t  a_offset,
 int32_t  b_offset,
 GEMMLowpOutputStageInfo  output_stage 
)

Initialise the kernel's input and output.

Parameters

[in] mm_result Input tensor containing the result of NEGEMMLowpMatrixMultiplyKernel. Data type supported: S32
[in] vector_sum_col Input row-vector of sums of all the entries in each column of matrix B. Note: vector_sum_col can be a nullptr in case a_offset = 0. Data type supported: same as mm_result
[in] vector_sum_row Input row-vector of sums of all the entries in each row of matrix A. Note: vector_sum_row can be a nullptr in case b_offset = 0. Data type supported: same as mm_result
[in] bias Biases tensor. Only shared biases supported and it can be a nullptr if the addition of biases is not required. Biases are 1D tensors with dimensions [OFM]. Data type supported: same as mm_result
[out] output Output tensor containing the final quantized result. Data type supported: QASYMM8/QASYMM8_SIGNED
[in] k Number of matrix A columns or matrix B rows
[in] a_offset Offset to be added to each element of the matrix A.
[in] b_offset Offset to be added to each element of the matrix B.
[in] output_stage GEMMLowp output stage info, providing the type of quantization and the necessary parameters.

Definition at line 869 of file NEGEMMLowpOffsetContributionOutputStageKernel.cpp.

References ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, ITensor::info(), Dimensions< T >::num_dimensions(), ITensorInfo::tensor_shape(), and arm_compute::validate_arguments().

Referenced by NEGEMMLowpOffsetContributionOutputStageKernel::name().

873 {
874  // Perform validate step
875  ARM_COMPUTE_ERROR_ON_NULLPTR(mm_result, output);
876 
877  ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(mm_result->info(),
878  vector_sum_col != nullptr ? vector_sum_col->info() : nullptr, // NOLINT
879  vector_sum_row != nullptr ? vector_sum_row->info() : nullptr, // NOLINT
880  bias != nullptr ? bias->info() : nullptr, // NOLINT
881  output->info(), a_offset, b_offset, output_stage)); // NOLINT
882 
883  _vector_sum_col = vector_sum_col;
884  _vector_sum_row = vector_sum_row;
885  _bias = bias;
886  _mm_result = mm_result;
887  _output = output;
888  _a_offset = a_offset;
889  _b_offset = b_offset;
890  _k_offset = a_offset * b_offset * k;
891  _output_stage = output_stage;
892 
893  // If a_offset == 0, vector_sum_col can be a nullptr
894  if(a_offset != 0)
895  {
896  // Check if vector_sum_col_shape should be slidden or not
897  // Don't slide vector_sum_col_shape along the y dimension if vector_sum_col_shape has just 1 dimension and vector_sum_row_shape more than 1
898  // This scenario can happen when the the matrix multiplication is used to perform a convolution operation
899  _slide_vector_sum_col = vector_sum_col->info()->tensor_shape().num_dimensions() > 1;
900  }
901 
902  // Configure kernel window
903  auto win_config = validate_and_configure_window(mm_result->info(), output->info());
904  ARM_COMPUTE_ERROR_THROW_ON(win_config.first);
905  INEKernel::configure(win_config.second);
906 }
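The vector_sum_col and vector_sum_row inputs that configure() expects are plain per-column sums of B and per-row sums of A. A scalar sketch of how they could be produced (helper names are hypothetical; the library computes them with its own reduction kernels):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Per-column sums of matrix B, one entry per column (assumes a non-empty matrix).
std::vector<int32_t> column_sums(const std::vector<std::vector<int8_t>> &b)
{
    std::vector<int32_t> sums(b[0].size(), 0);
    for (const auto &row : b)
        for (std::size_t j = 0; j < row.size(); ++j)
            sums[j] += row[j];
    return sums;
}

// Per-row sums of matrix A, one entry per row.
std::vector<int32_t> row_sums(const std::vector<std::vector<int8_t>> &a)
{
    std::vector<int32_t> sums;
    for (const auto &row : a)
    {
        int32_t s = 0;
        for (int8_t v : row)
            s += v;
        sums.push_back(s);
    }
    return sums;
}
```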

◆ name()

◆ operator=() [1/2]

Prevent instances of this class from being copied (As this class contains pointers)

Referenced by NEGEMMLowpOffsetContributionOutputStageKernel::name().

◆ operator=() [2/2]

Allow instances of this class to be moved.

◆ run()

void run ( const Window &  window,
 const ThreadInfo &  info 
)
override virtual

Execute the kernel on the passed window.

Warning
 If is_parallelisable() returns false then the passed window must be equal to window()
Note
 The window has to be a region within the window returned by the window() method.
 The width of the window has to be a multiple of num_elems_processed_per_iteration().
Parameters

[in] window Region on which to execute the kernel. (Must be a region of the window returned by window())
[in] info Info about executing thread and CPU.

Reimplemented from ICPPKernel.

Definition at line 918 of file NEGEMMLowpOffsetContributionOutputStageKernel.cpp.

References ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, ARM_COMPUTE_UNUSED, ITensorInfo::data_type(), GEMMLowpOutputStageInfo::gemmlowp_max_bound, GEMMLowpOutputStageInfo::gemmlowp_min_bound, arm_compute::get_min_max(), ITensor::info(), GEMMLowpOutputStageInfo::is_quantized_per_channel, ITensorInfo::num_dimensions(), arm_compute::QASYMM8_SIGNED, arm_compute::QUANTIZE_DOWN, ITensorInfo::tensor_shape(), GEMMLowpOutputStageInfo::type, type_max, type_min, IKernel::window(), Dimensions< T >::x(), and Dimensions< T >::y().

Referenced by NEGEMMLowpOffsetContributionOutputStageKernel::name().

919 {
920  ARM_COMPUTE_UNUSED(info);
921  ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
922  ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(IKernel::window(), window);
923 
924  PixelValue type_min{};
925  PixelValue type_max{};
926  std::tie(type_min, type_max) = get_min_max(_output->info()->data_type());
927  int32_t type_min_int = type_min.get<int32_t>();
928  int32_t type_max_int = type_max.get<int32_t>();
929 
930  const bool reinterpret_as_3d = _vector_sum_row != nullptr
931  && _mm_result->info()->num_dimensions() > 1
932  && _mm_result->info()->tensor_shape().y() != _vector_sum_row->info()->tensor_shape().x();
933 
934  const bool is_bounded_relu = !(_output_stage.gemmlowp_min_bound <= type_min_int && _output_stage.gemmlowp_max_bound >= type_max_int);
935 
936  // Check if we need to perform fixed point requantization
937  const bool is_fixed_point = _output_stage.type != GEMMLowpOutputStageType::QUANTIZE_DOWN;
938 
939  // Check if the output is signed
940  const bool is_signed = _output->info()->data_type() == DataType::QASYMM8_SIGNED;
941 
942  // Check if symmetric per-channel execution
943  const bool is_symm = _output_stage.is_quantized_per_channel;
944 
945  if(is_symm)
946  {
947  run_offset_contribution_output_stage_symm(window, _mm_result, _vector_sum_col, _vector_sum_row, _bias, _output, _a_offset, _b_offset, _k_offset, _slide_vector_sum_col, _output_stage,
948  reinterpret_as_3d, is_bounded_relu, is_fixed_point);
949  }
950  else
951  {
952  if(is_signed)
953  {
954  run_offset_contribution_output_stage<int8_t>(window, _mm_result, _vector_sum_col, _vector_sum_row, _bias, _output, _a_offset, _b_offset, _k_offset, _slide_vector_sum_col, _output_stage,
955  reinterpret_as_3d, is_bounded_relu, is_fixed_point);
956  }
957  else
958  {
959  run_offset_contribution_output_stage<uint8_t>(window, _mm_result, _vector_sum_col, _vector_sum_row, _bias, _output, _a_offset, _b_offset, _k_offset, _slide_vector_sum_col, _output_stage,
960  reinterpret_as_3d, is_bounded_relu, is_fixed_point);
961  }
962  }
963 }
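The dispatch flags computed at the top of run() can be modelled in isolation (an illustrative sketch; the enum and function names below are hypothetical stand-ins for GEMMLowpOutputStageType and the kernel's internal logic):

```cpp
#include <cstdint>
#include <tuple>

// Stand-in for the relevant GEMMLowpOutputStageType values.
enum class StageType { QUANTIZE_DOWN, QUANTIZE_DOWN_FIXEDPOINT };

// Returns (is_bounded_relu, is_fixed_point). Clamping is only required when the
// requested [min_bound, max_bound] is strictly tighter than the output type's
// natural [type_min, type_max] range; fixed-point requantization is used for
// every stage type other than the plain integer-multiply QUANTIZE_DOWN.
std::tuple<bool, bool> compute_flags(int32_t min_bound, int32_t max_bound,
                                     int32_t type_min, int32_t type_max,
                                     StageType type)
{
    const bool is_bounded_relu = !(min_bound <= type_min && max_bound >= type_max);
    const bool is_fixed_point  = type != StageType::QUANTIZE_DOWN;
    return {is_bounded_relu, is_fixed_point};
}
```

For a QASYMM8 output (natural range [0, 255]) with bounds [0, 255], no clamping is needed; with bounds [0, 100], the bounded-ReLU path is taken.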

◆ validate()

Status validate ( const ITensorInfo *  mm_result,
 const ITensorInfo *  vector_sum_col,
 const ITensorInfo *  vector_sum_row,
 const ITensorInfo *  bias,
 const ITensorInfo *  output,
 int32_t  a_offset,
 int32_t  b_offset,
 GEMMLowpOutputStageInfo  output_stage 
)
static

Static function to check if given info will lead to a valid configuration of NEGEMMLowpOffsetContributionOutputStageKernel.

Parameters

[in] mm_result Input tensor info containing the result of NEGEMMLowpMatrixMultiplyKernel. Data type supported: S32
[in] vector_sum_col Tensor info for the input row-vector of sums of all the entries in each column of matrix B. Note: vector_sum_col can be a nullptr in case a_offset = 0. Data type supported: same as mm_result
[in] vector_sum_row Tensor info for the input row-vector of sums of all the entries in each row of matrix A. Note: vector_sum_row can be a nullptr in case b_offset = 0. Data type supported: same as mm_result
[in] bias Biases tensor info. Only shared biases supported and it can be a nullptr if the addition of biases is not required. Biases are 1D tensors with dimensions [OFM]. Data type supported: same as mm_result
[in] output Output tensor info containing the final quantized result. Data type supported: QASYMM8/QASYMM8_SIGNED
[in] a_offset Offset to be added to each element of the matrix A.
[in] b_offset Offset to be added to each element of the matrix B.
[in] output_stage GEMMLowp output stage info, providing the type of quantization and the necessary parameters.
Returns
a status

Definition at line 908 of file NEGEMMLowpOffsetContributionOutputStageKernel.cpp.

References ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_RETURN_ON_ERROR, ICloneable< T >::clone(), and arm_compute::validate_arguments().

Referenced by NEGEMMLowpOffsetContributionOutputStageKernel::name(), and NEGEMMLowpMatrixMultiplyCore::validate().

911 {
912  ARM_COMPUTE_ERROR_ON_NULLPTR(mm_result, output);
913  ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(mm_result, vector_sum_col, vector_sum_row, bias, output, a_offset, b_offset, output_stage));
914  ARM_COMPUTE_RETURN_ON_ERROR(validate_and_configure_window(mm_result->clone().get(), output->clone().get()).first);
915  return Status{};
916 }

The documentation for this class was generated from the following files:

NEGEMMLowpOffsetContributionOutputStageKernel.h
NEGEMMLowpOffsetContributionOutputStageKernel.cpp