Compute Library
 19.08
NEGEMMLowpOffsetContributionOutputStageKernel Class Reference

NEON kernel used to add the offset contribution and perform the output stage after NEGEMMLowpMatrixMultiplyKernel. More...

#include <NEGEMMLowpOffsetContributionOutputStageKernel.h>


Public Types

using NEGEMMLowpOffsetContributionOutputStageFunction = std::function< void(const Window, const ITensor *, const ITensor *, const ITensor *, const ITensor *, ITensor *, int32_t, int32_t, int32_t, bool, GEMMLowpOutputStageInfo)>
 

Public Member Functions

const char * name () const override
 Name of the kernel. More...
 
 NEGEMMLowpOffsetContributionOutputStageKernel ()
 Constructor. More...
 
 NEGEMMLowpOffsetContributionOutputStageKernel (const NEGEMMLowpOffsetContributionOutputStageKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
NEGEMMLowpOffsetContributionOutputStageKernel & operator= (const NEGEMMLowpOffsetContributionOutputStageKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 NEGEMMLowpOffsetContributionOutputStageKernel (NEGEMMLowpOffsetContributionOutputStageKernel &&)=default
 Allow instances of this class to be moved. More...
 
NEGEMMLowpOffsetContributionOutputStageKernel & operator= (NEGEMMLowpOffsetContributionOutputStageKernel &&)=default
 Allow instances of this class to be moved. More...
 
void configure (const ITensor *mm_result, const ITensor *vector_sum_col, const ITensor *vector_sum_row, const ITensor *bias, ITensor *output, int32_t k, int32_t a_offset, int32_t b_offset, GEMMLowpOutputStageInfo output_stage)
 Initialise the kernel's input and output. More...
 
void run (const Window &window, const ThreadInfo &info) override
 Execute the kernel on the passed window. More...
 
- Public Member Functions inherited from ICPPKernel
virtual ~ICPPKernel ()=default
 Default destructor. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *mm_result, const ITensorInfo *vector_sum_col, const ITensorInfo *vector_sum_row, const ITensorInfo *bias, const ITensorInfo *output, int32_t a_offset, int32_t b_offset, GEMMLowpOutputStageInfo output_stage)
 Static function to check if given info will lead to a valid configuration of NEGEMMLowpOffsetContributionOutputStageKernel. More...
 

Detailed Description

NEON kernel used to add the offset contribution and perform the output stage after NEGEMMLowpMatrixMultiplyKernel.

The computation is performed in-place.

This kernel takes a final int32 accumulator value (the output of NEGEMMLowpMatrixMultiplyKernel), and adds to it the offset contribution of matrix A and matrix B in-place.

The output stage can perform either QuantizeDownInt32ToUint8Scale or QuantizeDownInt32ToUint8ScaleByFixedPoint.

For QuantizeDownInt32ToUint8Scale the final result is:

((mm_result'[i][k] + result_offset) * result_mult_int) >> result_shift

For QuantizeDownInt32ToUint8ScaleByFixedPoint the final result is:

(FixedPointMul(mm_result'[i][k], result_fixedpoint_multiplier) >> result_shift) + result_offset_after_shift

where FixedPointMul(x, y) is the nearest integer to the following mathematical expression, evaluated without overflow or intermediate rounding:

(x * y) / 2^31

and mm_result'[i][k] = mm_result[i][k] + (vector_sum_col[k] * a_offset) + (vector_sum_row[i] * b_offset) + (a_offset * b_offset * k)
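The formulas above can be checked with a plain scalar sketch. This is a reference illustration only, not the NEON implementation; in particular, fixed_point_mul below uses a simple round-to-nearest, which approximates (but is not identical to) the saturating rounding doubling high multiply used by fixed-point requantization in practice.

```cpp
#include <cstdint>

// mm_result'[i][k] = mm_result[i][k] + vector_sum_col[k] * a_offset
//                  + vector_sum_row[i] * b_offset + a_offset * b_offset * k
int32_t offset_contribution(int32_t mm_result, int32_t vector_sum_col,
                            int32_t vector_sum_row, int32_t a_offset,
                            int32_t b_offset, int32_t k)
{
    return mm_result + vector_sum_col * a_offset + vector_sum_row * b_offset + a_offset * b_offset * k;
}

// QuantizeDownInt32ToUint8Scale:
// ((v + result_offset) * result_mult_int) >> result_shift, clamped to [0, 255]
uint8_t quantize_down_scale(int32_t v, int32_t result_offset,
                            int32_t result_mult_int, int32_t result_shift)
{
    int64_t r = (static_cast<int64_t>(v) + result_offset) * result_mult_int;
    r >>= result_shift;
    return static_cast<uint8_t>(r < 0 ? 0 : (r > 255 ? 255 : r));
}

// FixedPointMul(x, y): nearest integer to (x * y) / 2^31,
// evaluated in 64-bit so the product cannot overflow
int32_t fixed_point_mul(int32_t x, int32_t y)
{
    const int64_t p = static_cast<int64_t>(x) * y;
    return static_cast<int32_t>((p + (INT64_C(1) << 30)) >> 31);
}
```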

Definition at line 61 of file NEGEMMLowpOffsetContributionOutputStageKernel.h.

Member Typedef Documentation

◆ NEGEMMLowpOffsetContributionOutputStageFunction

using NEGEMMLowpOffsetContributionOutputStageFunction = std::function<void(const Window, const ITensor *, const ITensor *, const ITensor *, const ITensor *, ITensor *, int32_t, int32_t, int32_t, bool, GEMMLowpOutputStageInfo)>

Constructor & Destructor Documentation

◆ NEGEMMLowpOffsetContributionOutputStageKernel() [1/3]

Constructor.

Definition at line 586 of file NEGEMMLowpOffsetContributionOutputStageKernel.cpp.

    : _function(nullptr), _vector_sum_col(nullptr), _vector_sum_row(nullptr), _bias(nullptr), _mm_result(nullptr), _output(nullptr), _a_offset(0), _b_offset(0), _k_offset(0), _slide_vector_sum_col(true),
      _output_stage(GEMMLowpOutputStageInfo())
{
}

◆ NEGEMMLowpOffsetContributionOutputStageKernel() [2/3]

Prevent instances of this class from being copied (As this class contains pointers)

◆ NEGEMMLowpOffsetContributionOutputStageKernel() [3/3]

Allow instances of this class to be moved.

Member Function Documentation

◆ configure()

void configure ( const ITensor * mm_result,
const ITensor * vector_sum_col,
const ITensor * vector_sum_row,
const ITensor * bias,
ITensor * output,
int32_t  k,
int32_t  a_offset,
int32_t  b_offset,
GEMMLowpOutputStageInfo  output_stage
)

Initialise the kernel's input and output.

Parameters
[in]  mm_result       Input tensor containing the result of NEGEMMLowpMatrixMultiplyKernel. Data type supported: S32
[in]  vector_sum_col  Input row-vector of sums of all the entries in each column of matrix B. Note: vector_sum_col can be a nullptr in case a_offset = 0. Data type supported: same as mm_result
[in]  vector_sum_row  Input row-vector of sums of all the entries in each row of matrix A. Note: vector_sum_row can be a nullptr in case b_offset = 0. Data type supported: same as mm_result
[in]  bias            Biases tensor. Only shared biases supported and it can be a nullptr if the addition of biases is not required. Biases are 1D tensors with dimensions [OFM]. Data type supported: same as mm_result
[out] output          Output tensor containing the final quantized result. Data type supported: QASYMM8
[in]  k               Number of matrix A columns or matrix B rows
[in]  a_offset        Offset to be added to each element of the matrix A
[in]  b_offset        Offset to be added to each element of the matrix B
[in]  output_stage    GEMMLowp output stage info, providing the type of quantization and the necessary parameters
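As a rough illustration of the shortcuts described above (hypothetical scalar code, not the library's API): when a_offset == 0 the vector_sum_col term vanishes, and when b_offset == 0 the vector_sum_row term vanishes, so the corresponding pointer may be null; the constant term a_offset * b_offset * k can be precomputed once, as the kernel does in _k_offset.

```cpp
#include <cstdint>

// Hypothetical scalar helper: tolerates null sum vectors when the matching
// offset is zero, and takes the precomputed k_offset = a_offset * b_offset * k.
int32_t contribute(int32_t acc, const int32_t *vector_sum_col, const int32_t *vector_sum_row,
                   int32_t col, int32_t row, int32_t a_offset, int32_t b_offset, int32_t k_offset)
{
    if (a_offset != 0) { acc += vector_sum_col[col] * a_offset; } // needs vector_sum_col != nullptr
    if (b_offset != 0) { acc += vector_sum_row[row] * b_offset; } // needs vector_sum_row != nullptr
    return acc + k_offset;
}
```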

Definition at line 593 of file NEGEMMLowpOffsetContributionOutputStageKernel.cpp.

{
    // Perform validate step
    ARM_COMPUTE_ERROR_ON_NULLPTR(mm_result, output);

    ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(mm_result->info(),
                                                  vector_sum_col != nullptr ? vector_sum_col->info() : nullptr, // NOLINT
                                                  vector_sum_row != nullptr ? vector_sum_row->info() : nullptr, // NOLINT
                                                  bias != nullptr ? bias->info() : nullptr,                     // NOLINT
                                                  output->info(), a_offset, b_offset, output_stage));           // NOLINT

    _vector_sum_col = vector_sum_col;
    _vector_sum_row = vector_sum_row;
    _bias           = bias;
    _mm_result      = mm_result;
    _output         = output;
    _a_offset       = a_offset;
    _b_offset       = b_offset;
    _k_offset       = a_offset * b_offset * k;
    _output_stage   = output_stage;

    // If a_offset == 0, vector_sum_col can be a nullptr
    if(a_offset != 0)
    {
        // Check whether vector_sum_col should be slid or not
        // Don't slide vector_sum_col along the y dimension if it has just 1 dimension while vector_sum_row has more than 1
        // This scenario can happen when the matrix multiplication is used to perform a convolution operation
        _slide_vector_sum_col = vector_sum_col->info()->tensor_shape().num_dimensions() > 1;
    }

    // Configure kernel window
    auto win_config = validate_and_configure_window(mm_result->info(), output->info());
    ARM_COMPUTE_ERROR_THROW_ON(win_config.first);
    INEKernel::configure(win_config.second);

    _function = get_configured_function(mm_result, vector_sum_row, output_stage);
}

References ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::test::validation::bias, ITensor::info(), CLTensor::info(), Dimensions< T >::num_dimensions(), ITensorInfo::tensor_shape(), and arm_compute::validate_and_configure_window().

Referenced by NEGEMMLowpMatrixMultiplyCore::configure().

◆ name()

const char * name ( ) const
inline override virtual

Name of the kernel.

Returns
Kernel name

Implements ICPPKernel.

Definition at line 64 of file NEGEMMLowpOffsetContributionOutputStageKernel.h.

{
    return "NEGEMMLowpOffsetContributionOutputStageKernel";
}

◆ operator=() [1/2]

Prevent instances of this class from being copied (As this class contains pointers)

◆ operator=() [2/2]

Allow instances of this class to be moved.

◆ run()

void run ( const Window & window,
const ThreadInfo & info
)
override virtual

Execute the kernel on the passed window.

Warning
If is_parallelisable() returns false then the passed window must be equal to window()
Note
The window has to be a region within the window returned by the window() method
The width of the window has to be a multiple of num_elems_processed_per_iteration().
Parameters
[in]windowRegion on which to execute the kernel. (Must be a region of the window returned by window())
[in]infoInfo about executing thread and CPU.
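The sub-window contract above (each thread executes the kernel on a region of window(), whose width must be a multiple of the processing step) can be sketched with a plain range splitter. This is an illustration of the contract only, not the library's scheduler.

```cpp
#include <utility>
#include <vector>

// Split [0, total) into up to `threads` contiguous sub-ranges whose widths are
// multiples of `step` (assumes total itself is a multiple of step).
std::vector<std::pair<int, int>> split_range(int total, int step, int threads)
{
    std::vector<std::pair<int, int>> ranges;
    const int units = total / step;    // number of step-sized chunks
    const int base  = units / threads; // chunks per thread
    int rem   = units % threads;       // extra chunks for the first threads
    int start = 0;
    for (int t = 0; t < threads && start < total; ++t)
    {
        const int extra = (rem > 0) ? 1 : 0;
        const int width = (base + extra) * step;
        if (width == 0) { break; }
        rem -= extra;
        ranges.emplace_back(start, start + width);
        start += width;
    }
    return ranges;
}
```

Every sub-range produced this way is a valid "region of the window returned by window()" in the sense of the note above.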

Implements ICPPKernel.

Definition at line 643 of file NEGEMMLowpOffsetContributionOutputStageKernel.cpp.

{
    ARM_COMPUTE_UNUSED(info);
    ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
    ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(INEKernel::window(), window);

    _function(window, _mm_result, _vector_sum_col, _vector_sum_row, _bias, _output, _a_offset, _b_offset, _k_offset, _slide_vector_sum_col, _output_stage);
}

References ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, ARM_COMPUTE_UNUSED, arm_compute::test::validation::info, and IKernel::window().

◆ validate()

Status validate ( const ITensorInfo * mm_result,
const ITensorInfo * vector_sum_col,
const ITensorInfo * vector_sum_row,
const ITensorInfo * bias,
const ITensorInfo * output,
int32_t  a_offset,
int32_t  b_offset,
GEMMLowpOutputStageInfo  output_stage
)
static

Static function to check if given info will lead to a valid configuration of NEGEMMLowpOffsetContributionOutputStageKernel.

Parameters
[in] mm_result       Input tensor info containing the result of NEGEMMLowpMatrixMultiplyKernel. Data type supported: S32
[in] vector_sum_col  Tensor info for the input row-vector of sums of all the entries in each column of matrix B. Note: vector_sum_col can be a nullptr in case a_offset = 0. Data type supported: same as mm_result
[in] vector_sum_row  Tensor info for the input row-vector of sums of all the entries in each row of matrix A. Note: vector_sum_row can be a nullptr in case b_offset = 0. Data type supported: same as mm_result
[in] bias            Biases tensor info. Only shared biases supported and it can be a nullptr if the addition of biases is not required. Biases are 1D tensors with dimensions [OFM]. Data type supported: same as mm_result
[in] output          Output tensor info containing the final quantized result. Data type supported: QASYMM8
[in] a_offset        Offset to be added to each element of the matrix A
[in] b_offset        Offset to be added to each element of the matrix B
[in] output_stage    GEMMLowp output stage info, providing the type of quantization and the necessary parameters
Returns
a status

Definition at line 633 of file NEGEMMLowpOffsetContributionOutputStageKernel.cpp.

{
    ARM_COMPUTE_ERROR_ON_NULLPTR(mm_result, output);
    ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(mm_result, vector_sum_col, vector_sum_row, bias, output, a_offset, b_offset, output_stage));
    ARM_COMPUTE_RETURN_ON_ERROR(validate_and_configure_window(mm_result->clone().get(), output->clone().get()).first);
    return Status{};
}

References ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_RETURN_ON_ERROR, arm_compute::test::validation::bias, ICloneable< T >::clone(), and arm_compute::validate_and_configure_window().

Referenced by NEGEMMLowpMatrixMultiplyCore::validate().


The documentation for this class was generated from the following files:

NEGEMMLowpOffsetContributionOutputStageKernel.h
NEGEMMLowpOffsetContributionOutputStageKernel.cpp