Compute Library
 21.02
NEGEMMLowpQuantizeDownInt32ScaleKernel Class Reference

Neon kernel used to quantize down the int32 accumulator values of GEMMLowp to QASYMM8/QASYMM8_SIGNED. More...

#include <NEGEMMLowpQuantizeDownInt32ScaleKernel.h>

Collaboration diagram for NEGEMMLowpQuantizeDownInt32ScaleKernel: (diagram omitted)

Public Member Functions

const char * name () const override
 Name of the kernel.
 
 NEGEMMLowpQuantizeDownInt32ScaleKernel ()
 Constructor.
 
 NEGEMMLowpQuantizeDownInt32ScaleKernel (const NEGEMMLowpQuantizeDownInt32ScaleKernel &)=delete
 Prevent instances of this class from being copied (as this class contains pointers).
 
NEGEMMLowpQuantizeDownInt32ScaleKernel & operator= (const NEGEMMLowpQuantizeDownInt32ScaleKernel &)=delete
 Prevent instances of this class from being copied (as this class contains pointers).
 
 NEGEMMLowpQuantizeDownInt32ScaleKernel (NEGEMMLowpQuantizeDownInt32ScaleKernel &&)=default
 Allow instances of this class to be moved.
 
NEGEMMLowpQuantizeDownInt32ScaleKernel & operator= (NEGEMMLowpQuantizeDownInt32ScaleKernel &&)=default
 Allow instances of this class to be moved.
 
 ~NEGEMMLowpQuantizeDownInt32ScaleKernel ()=default
 Default destructor.
 
void configure (const ITensor *input, const ITensor *bias, ITensor *output, const GEMMLowpOutputStageInfo *output_stage)
 Initialise the kernel's input and output.
 
void run (const Window &window, const ThreadInfo &info) override
 Execute the kernel on the passed window.
 
- Public Member Functions inherited from ICPPKernel
virtual ~ICPPKernel ()=default
 Default destructor.
 
virtual void run_nd (const Window &window, const ThreadInfo &info, const Window &thread_locator)
 Legacy compatibility layer for implementations which do not support thread_locator. In these cases we simply narrow the interface down to the legacy version.
 
virtual void run_op (ITensorPack &tensors, const Window &window, const ThreadInfo &info)
 Execute the kernel on the passed window.
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor.
 
virtual ~IKernel ()=default
 Destructor.
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable.
 
virtual BorderSize border_size () const
 The size of the border for that kernel.
 
const Window & window () const
 The maximum window the kernel can be executed on.
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *bias, const ITensorInfo *output, const GEMMLowpOutputStageInfo *output_stage)
 Static function to check if given info will lead to a valid configuration of NEGEMMLowpQuantizeDownInt32ScaleKernel.
 

Detailed Description

Neon kernel used to quantize down the int32 accumulator values of GEMMLowp to QASYMM8/QASYMM8_SIGNED.

This kernel takes a final int32 accumulator value (the output of NEGEMMLowpMatrixMultiplyKernel) and processes it to obtain the final QASYMM8/QASYMM8_SIGNED value. The following computations are performed by the kernel (a scalar sketch follows the list):

  1. Add offset terms to the final result
  2. Multiply each entry of the result by result_mult_int
  3. Add bias to the final result if the bias tensor is not a nullptr
  4. Shift the int32 accumulator by result_shift
  5. Clamp the value between the specified min and max bounds
  6. Clamp the resulting int32 values:
     - to the [0..255] range and cast to QASYMM8, or
     - to the [-128..127] range and cast to QASYMM8_SIGNED.
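A minimal scalar sketch of the per-element computation, following the numbered steps above. The function and parameter names are illustrative only; the real kernel is vectorized and may fuse or reorder these steps:

#include <algorithm>
#include <cstdint>

// Illustrative scalar reference: one int32 accumulator in, one QASYMM8 value out.
uint8_t quantize_down_scale(int32_t acc, int32_t result_offset, int32_t result_mult_int,
                            int32_t bias, int32_t result_shift,
                            int32_t min_bound, int32_t max_bound)
{
    int32_t v = acc + result_offset;                  // 1. add offset term
    v *= result_mult_int;                             // 2. multiply by result_mult_int
    v += bias;                                        // 3. add bias (skipped when there is no bias tensor)
    v >>= result_shift;                               // 4. shift by result_shift
    v = std::max(min_bound, std::min(max_bound, v));  // 5. clamp to the requested bounds
    v = std::max(0, std::min(255, v));                // 6. clamp to [0..255] and cast for QASYMM8
    return static_cast<uint8_t>(v);
}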

Definition at line 48 of file NEGEMMLowpQuantizeDownInt32ScaleKernel.h.

Constructor & Destructor Documentation

◆ NEGEMMLowpQuantizeDownInt32ScaleKernel() [1/3]

Constructor.

Definition at line 258 of file NEGEMMLowpQuantizeDownInt32ScaleKernel.cpp.

Referenced by NEGEMMLowpQuantizeDownInt32ScaleKernel::name().

NEGEMMLowpQuantizeDownInt32ScaleKernel::NEGEMMLowpQuantizeDownInt32ScaleKernel()
    : _func(nullptr), _input(nullptr), _bias(nullptr), _output(nullptr), _output_stage(nullptr), _is_bounded_relu(false)
{
}

◆ NEGEMMLowpQuantizeDownInt32ScaleKernel() [2/3]

Prevent instances of this class from being copied (as this class contains pointers).

◆ NEGEMMLowpQuantizeDownInt32ScaleKernel() [3/3]

Allow instances of this class to be moved.

◆ ~NEGEMMLowpQuantizeDownInt32ScaleKernel()

Default destructor.

Member Function Documentation

◆ configure()

void configure ( const ITensor *  input,
                 const ITensor *  bias,
                 ITensor *  output,
                 const GEMMLowpOutputStageInfo *  output_stage 
               )

Initialise the kernel's input and output.

Parameters
    [in]  input         Input tensor. Data type supported: S32
    [in]  bias          Biases tensor. Only shared biases are supported and it can be a nullptr if the biases addition is not required. Biases are a 1D tensor with dimensions [OFM]. Data type supported: same as input.
    [out] output        Output tensor. Data type supported: QASYMM8/QASYMM8_SIGNED
    [in]  output_stage  GEMMLowp output stage metadata.

Definition at line 263 of file NEGEMMLowpQuantizeDownInt32ScaleKernel.cpp.

References ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), ICloneable< T >::clone(), ITensor::info(), arm_compute::test::validation::input, GEMMLowpOutputStageInfo::output_data_type, and arm_compute::validate_arguments().

Referenced by NEGEMMLowpQuantizeDownInt32ScaleKernel::name().

{
    // Perform validate step
    ARM_COMPUTE_ERROR_ON_NULLPTR(input, output, output_stage);

    // Output auto initialization if not yet initialized
    auto_init_if_empty(*output->info(), input->info()->clone()->set_data_type(output_stage->output_data_type));

    ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(input->info(),
                                                  (bias != nullptr) ? bias->info() : nullptr,
                                                  output->info(),
                                                  output_stage));

    _input        = input;
    _bias         = bias;
    _output       = output;
    _output_stage = output_stage;

    // Configure kernel window
    Window win = calculate_max_window(*input->info(), Steps());

    Coordinates coord;
    coord.set_num_dimensions(output->info()->num_dimensions());
    output->info()->set_valid_region(ValidRegion(coord, output->info()->tensor_shape()));

    INEKernel::configure(win);

    // Check if we need to clamp the result using min and max
    _is_bounded_relu = ((_output_stage->gemmlowp_min_bound != _output_stage->gemmlowp_max_bound)
                        && !(_output_stage->gemmlowp_min_bound == std::get<0>(quantization::get_min_max_values_from_quantized_data_type(output_stage->output_data_type))
                             && _output_stage->gemmlowp_max_bound == std::get<1>(quantization::get_min_max_values_from_quantized_data_type(output_stage->output_data_type))));

    if(_output_stage->output_data_type == DataType::QASYMM8)
    {
        _func = &NEGEMMLowpQuantizeDownInt32ScaleKernel::run<uint8_t>;
    }
    else if(_output_stage->output_data_type == DataType::QASYMM8_SIGNED)
    {
        _func = &NEGEMMLowpQuantizeDownInt32ScaleKernel::run<int8_t>;
    }
    else
    {
        ARM_COMPUTE_ERROR("Data type not supported");
    }
}
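A hedged usage sketch, not taken from the library docs: the tensor shapes and output-stage values below are made-up assumptions, and since the kernel header lives under src/ in 21.02 the include path depends on your build setup.

using namespace arm_compute;

Tensor acc{}, dst{};
acc.allocator()->init(TensorInfo(TensorShape(32U, 16U), 1, DataType::S32));
dst.allocator()->init(TensorInfo(TensorShape(32U, 16U), 1, DataType::QASYMM8));

GEMMLowpOutputStageInfo stage{};
stage.type                = GEMMLowpOutputStageType::QUANTIZE_DOWN;
stage.gemmlowp_offset     = 2;   // assumed offset term
stage.gemmlowp_multiplier = 3;   // result_mult_int (assumed)
stage.gemmlowp_shift      = 4;   // result_shift (assumed)
stage.gemmlowp_min_bound  = 0;
stage.gemmlowp_max_bound  = 255;
stage.output_data_type    = DataType::QASYMM8;

NEGEMMLowpQuantizeDownInt32ScaleKernel kernel{};
kernel.configure(&acc, nullptr /* no bias */, &dst, &stage);

acc.allocator()->allocate();
dst.allocator()->allocate();
// ... fill acc with int32 accumulators, then dispatch the kernel (see run() below).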

◆ name()

Name of the kernel.

◆ operator=() [1/2]

Prevent instances of this class from being copied (as this class contains pointers).

Referenced by NEGEMMLowpQuantizeDownInt32ScaleKernel::name().

◆ operator=() [2/2]

Allow instances of this class to be moved.

◆ run()

void run ( const Window &  window,
           const ThreadInfo &  info 
         )
override virtual
Execute the kernel on the passed window.

Warning
If is_parallelisable() returns false then the passed window must be equal to window()
Note
The window has to be a region within the window returned by the window() method
The width of the window has to be a multiple of num_elems_processed_per_iteration().
Parameters
    [in]  window  Region on which to execute the kernel. (Must be a region of the window returned by window())
    [in]  info    Info about the executing thread and CPU.

Reimplemented from ICPPKernel.

Definition at line 315 of file NEGEMMLowpQuantizeDownInt32ScaleKernel.cpp.

References ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, ARM_COMPUTE_UNUSED, and IKernel::window().

Referenced by arm_compute::finalize_quantization(), and NEGEMMLowpQuantizeDownInt32ScaleKernel::name().

{
    ARM_COMPUTE_UNUSED(info);
    ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
    ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(INEKernel::window(), window);

    (this->*_func)(window);
}
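A hedged dispatch sketch for the kernel configured above; real code normally goes through the scheduler, while the single-threaded call illustrates the window contract documented in the note:

using namespace arm_compute;

// Multi-threaded dispatch: the NEON scheduler splits the kernel's window along Y.
NEScheduler::get().schedule(&kernel, Window::DimY);

// Or execute on the calling thread over the kernel's full maximum window,
// which trivially satisfies the subwindow requirement above.
ThreadInfo ti{};
ti.cpu_info = &NEScheduler::get().cpu_info();
kernel.run(kernel.window(), ti);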

◆ validate()

static Status validate ( const ITensorInfo *  input,
                         const ITensorInfo *  bias,
                         const ITensorInfo *  output,
                         const GEMMLowpOutputStageInfo *  output_stage 
                       )

Static function to check if given info will lead to a valid configuration of NEGEMMLowpQuantizeDownInt32ScaleKernel.

Parameters
    [in]  input         Input tensor. Data type supported: S32
    [in]  bias          Biases tensor. Only shared biases are supported and it can be a nullptr if the biases addition is not required. Biases are a 1D tensor with dimensions [OFM]. Data type supported: same as input.
    [in]  output        Output tensor. Data type supported: QASYMM8/QASYMM8_SIGNED
    [in]  output_stage  GEMMLowp output stage metadata.

Returns
    a status

Definition at line 307 of file NEGEMMLowpQuantizeDownInt32ScaleKernel.cpp.

References ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_RETURN_ON_ERROR, and arm_compute::validate_arguments().

Referenced by NEGEMMLowpQuantizeDownInt32ScaleKernel::name(), and NEGEMMLowpOutputStage::validate().

{
    ARM_COMPUTE_ERROR_ON_NULLPTR(input, output, output_stage);
    ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(input, bias, output, output_stage));

    return Status{};
}
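A hedged pre-flight sketch using the static check before any configuration; the tensor shapes and stage values are assumptions:

#include <iostream>

using namespace arm_compute;

TensorInfo src(TensorShape(32U, 16U), 1, DataType::S32);
TensorInfo dst(TensorShape(32U, 16U), 1, DataType::QASYMM8);

GEMMLowpOutputStageInfo stage{};
stage.type             = GEMMLowpOutputStageType::QUANTIZE_DOWN;
stage.output_data_type = DataType::QASYMM8;

const Status st = NEGEMMLowpQuantizeDownInt32ScaleKernel::validate(&src, nullptr, &dst, &stage);
if(st.error_code() != ErrorCode::OK)
{
    std::cerr << st.error_description() << std::endl;
}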

The documentation for this class was generated from the following files:
    NEGEMMLowpQuantizeDownInt32ScaleKernel.h
    NEGEMMLowpQuantizeDownInt32ScaleKernel.cpp