Compute Library
 19.08
NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel Class Reference

NEON kernel used to quantize down the int32 accumulator values of GEMMLowp to QASYMM8. More...

#include <NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel.h>

Collaboration diagram for NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel:

Public Member Functions

const char * name () const override
 Name of the kernel. More...
 
 NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel ()
 Constructor. More...
 
 NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel (const NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel & operator= (const NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel (NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel &&)=default
 Allow instances of this class to be moved. More...
 
NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel & operator= (NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel &&)=default
 Allow instances of this class to be moved. More...
 
void configure (const ITensor *input, const ITensor *bias, ITensor *output, int result_offset, int result_mult_int, int result_shift, int min=0, int max=0)
 Initialise the kernel's input and output. More...
 
void run (const Window &window, const ThreadInfo &info) override
 Execute the kernel on the passed window. More...
 
- Public Member Functions inherited from ICPPKernel
virtual ~ICPPKernel ()=default
 Default destructor. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *bias, const ITensorInfo *output, int min=0, int max=0)
 Static function to check if given info will lead to a valid configuration of NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel. More...
 

Detailed Description

NEON kernel used to quantize down the int32 accumulator values of GEMMLowp to QASYMM8.

This kernel takes a final int32 accumulator value (the output of NEGEMMLowpMatrixMultiplyKernel), and processes it to obtain the final QASYMM8 value. The following computations will be performed by the kernel:

  1. Add offset terms to final result
  2. Multiply each entry of result by result_mult_int
  3. Add bias to final result if bias tensor is not a nullptr
  4. Shift the int32 accumulator by result_shift
  5. Clamp the value between the specified min and max bounds
  6. Clamp the resulting int32 values to the [0..255] range and cast to QASYMM8.
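The six steps above can be sketched as a plain scalar model. This is a hypothetical reference implementation written for illustration, not the library's NEON code; the parameter names simply mirror those of configure():

```cpp
#include <algorithm>
#include <cstdint>

// Scalar model of the per-element quantize-down computation (illustrative only).
uint8_t quantize_down_scale(int32_t acc, int32_t bias, bool has_bias,
                            int32_t result_offset, int32_t result_mult_int,
                            int32_t result_shift, int32_t min, int32_t max)
{
    int32_t v = acc + result_offset;          // 1. add offset term
    v *= result_mult_int;                     // 2. multiply by result_mult_int
    if (has_bias)
        v += bias;                            // 3. add bias if present
    v >>= result_shift;                       // 4. shift right by result_shift
    if (min != max)                           // 5. optional clamp to [min, max]
        v = std::max(min, std::min(max, v));
    v = std::max(0, std::min(255, v));        // 6. clamp to [0, 255]
    return static_cast<uint8_t>(v);           //    and cast to QASYMM8
}
```

For example, with acc = 100, result_offset = 2, result_mult_int = 3 and result_shift = 1, the value becomes ((100 + 2) * 3) >> 1 = 153 before the final range clamp.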

Definition at line 46 of file NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel.h.

Constructor & Destructor Documentation

◆ NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel() [1/3]

Constructor.

Definition at line 293 of file NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel.cpp.

NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel()
    : _func(nullptr), _input(nullptr), _bias(nullptr), _output(nullptr), _result_offset(0), _result_mult_int(0), _result_shift(0), _min(0), _max(0)
{
}

◆ NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel() [2/3]

Prevent instances of this class from being copied (As this class contains pointers)

◆ NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel() [3/3]

Allow instances of this class to be moved.

Member Function Documentation

◆ configure()

void configure ( const ITensor * input,
const ITensor * bias,
ITensor * output,
int result_offset,
int result_mult_int,
int result_shift,
int min = 0,
int max = 0
)

Initialise the kernel's input and output.

Parameters
[in]  input            Input tensor. Data type supported: S32
[in]  bias             Biases tensor. Only shared biases are supported and it can be a nullptr if the bias addition is not required. Biases are a 1D tensor with dimensions [OFM]. Data type supported: Same as input.
[out] output           Output tensor. Data type supported: QASYMM8
[in]  result_offset    Offset to be added to each element of the input matrix
[in]  result_mult_int  Value by which each element of the input matrix is multiplied once result_offset has been added
[in]  result_shift     Number of bits to shift right the result before converting back to QASYMM8
[in]  min              (Optional) Min value used to saturate down the output result before converting back to QASYMM8
[in]  max              (Optional) Max value used to saturate up the output result before converting back to QASYMM8. Along with min, this value can be used to implement a "rectified linear unit" activation function

Definition at line 298 of file NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel.cpp.

{
    // Perform validate step
    ARM_COMPUTE_ERROR_ON_NULLPTR(input, output);

    // Output auto initialization if not yet initialized
    auto_init_if_empty(*output->info(), input->info()->clone()->set_data_type(DataType::QASYMM8));

    ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(input->info(),
                                                  (bias != nullptr) ? bias->info() : nullptr,
                                                  output->info(),
                                                  min,
                                                  max));

    _input           = input;
    _bias            = bias;
    _output          = output;
    _result_offset   = result_offset;
    _result_mult_int = result_mult_int;
    _result_shift    = result_shift;
    _min             = min;
    _max             = max;

    // Configure kernel window
    auto win_config = validate_and_configure_window(input->info(), (bias != nullptr) ? bias->info() : nullptr, output->info());
    ARM_COMPUTE_ERROR_THROW_ON(win_config.first);
    INEKernel::configure(win_config.second);

    // Check if we need to clamp the result using min and max
    const bool is_bounded_relu = ((min != max) && !(min == 0 && max == 255));
    _func = is_bounded_relu ? &NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel::run<true> : &NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel::run<false>;
}

References ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), arm_compute::test::validation::bias, ICloneable< T >::clone(), ITensor::info(), CLTensor::info(), arm_compute::QASYMM8, and arm_compute::validate_and_configure_window().
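The last two lines of configure() select a templated member function once, so run() pays no per-call branch for the optional clamp. A toy sketch of this pointer-to-member dispatch pattern (class and member names here are illustrative, not the library's):

```cpp
#include <cassert>

// Minimal sketch of configure-time dispatch via a pointer to a templated
// member function, mirroring the is_bounded_relu selection in configure().
class QuantizeKernel
{
public:
    void configure(int min, int max)
    {
        // Same predicate as the kernel: clamping is a no-op when min == max
        // or when the bounds already span the full uint8 range.
        const bool is_bounded_relu = (min != max) && !(min == 0 && max == 255);
        _func = is_bounded_relu ? &QuantizeKernel::run_impl<true>
                                : &QuantizeKernel::run_impl<false>;
        _min = min;
        _max = max;
    }

    int run(int value) { return (this->*_func)(value); }

private:
    template <bool bounded>
    int run_impl(int value)
    {
        if (bounded)
            value = value < _min ? _min : (value > _max ? _max : value);
        return value;
    }

    int (QuantizeKernel::*_func)(int) = nullptr;
    int _min = 0, _max = 0;
};
```

The template parameter lets the compiler generate a clamping and a non-clamping variant, and the chosen one is invoked through `(this->*_func)(...)` exactly as in the kernel's run().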

◆ name()

const char* name ( ) const
inline override virtual

Name of the kernel.

Returns
Kernel name

Implements ICPPKernel.

Definition at line 49 of file NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel.h.

{
    return "NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel";
}

◆ operator=() [1/2]

Prevent instances of this class from being copied (As this class contains pointers)

◆ operator=() [2/2]

Allow instances of this class to be moved.

◆ run()

void run ( const Window & window,
const ThreadInfo & info
)
override virtual

Execute the kernel on the passed window.

Warning
If is_parallelisable() returns false then the passed window must be equal to window()
Note
The window has to be a region within the window returned by the window() method
The width of the window has to be a multiple of num_elems_processed_per_iteration().
Parameters
[in] window Region on which to execute the kernel. (Must be a region of the window returned by window())
[in] info   Info about executing thread and CPU.

Implements ICPPKernel.

Definition at line 343 of file NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel.cpp.

{
    ARM_COMPUTE_UNUSED(info);
    ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
    ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(INEKernel::window(), window);

    (this->*_func)(window);
}

References ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, ARM_COMPUTE_UNUSED, arm_compute::test::validation::info, and IKernel::window().

◆ validate()

Status validate ( const ITensorInfo * input,
const ITensorInfo * bias,
const ITensorInfo * output,
int min = 0,
int max = 0
)
static

Static function to check if given info will lead to a valid configuration of NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel.

Parameters
[in] input  Input tensor. Data type supported: S32
[in] bias   Biases tensor. Only shared biases are supported and it can be a nullptr if the bias addition is not required. Biases are a 1D tensor with dimensions [OFM]. Data type supported: Same as input.
[in] output Output tensor. Data type supported: QASYMM8
[in] min    (Optional) Min value used to saturate down the output result before converting back to QASYMM8
[in] max    (Optional) Max value used to saturate up the output result before converting back to QASYMM8. Along with min, this value can be used to implement a "rectified linear unit" activation function
Returns
a status

Definition at line 331 of file NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel.cpp.

{
    ARM_COMPUTE_ERROR_ON_NULLPTR(input, output);
    ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(input, bias, output, min, max));
    ARM_COMPUTE_RETURN_ON_ERROR(validate_and_configure_window(input->clone().get(),
                                                              (bias != nullptr) ? bias->clone().get() : nullptr,
                                                              output->clone().get())
                                .first);

    return Status{};
}

References ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_RETURN_ON_ERROR, arm_compute::test::validation::bias, ICloneable< T >::clone(), and arm_compute::validate_and_configure_window().

Referenced by NEGEMMLowpQuantizeDownInt32ToUint8Scale::validate().


The documentation for this class was generated from the following files:

NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel.h
NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel.cpp