Compute Library
 21.08
CpuGemmLowpQuantizeDownInt32ScaleKernel Class Reference

Kernel used to quantize down the int32 accumulator values of GEMMLowp to QASYMM8/QASYMM8_SIGNED. More...

#include <CpuGemmLowpQuantizeDownInt32ScaleKernel.h>

Collaboration diagram for CpuGemmLowpQuantizeDownInt32ScaleKernel:

Public Member Functions

 CpuGemmLowpQuantizeDownInt32ScaleKernel ()=default
 
 ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE (CpuGemmLowpQuantizeDownInt32ScaleKernel)
 
void configure (ITensorInfo *src, ITensorInfo *bias, ITensorInfo *dst, const GEMMLowpOutputStageInfo *output_stage)
 Initialise the kernel's input and output. More...
 
void run_op (ITensorPack &tensors, const Window &window, const ThreadInfo &info) override
 Execute the kernel on the passed window. More...
 
const char * name () const override
 Name of the kernel. More...
 
- Public Member Functions inherited from ICPPKernel
virtual ~ICPPKernel ()=default
 Default destructor. More...
 
virtual void run (const Window &window, const ThreadInfo &info)
 Execute the kernel on the passed window. More...
 
virtual void run_nd (const Window &window, const ThreadInfo &info, const Window &thread_locator)
 Legacy compatibility layer for implementations that do not support thread_locator; in these cases we simply narrow the interface down to the legacy version. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 
bool is_window_configured () const
 Function to check if the embedded window of this kernel has been configured. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *src, const ITensorInfo *bias, const ITensorInfo *dst, const GEMMLowpOutputStageInfo *output_stage)
 Static function to check if given info will lead to a valid configuration. More...
 

Detailed Description

Kernel used to quantize down the int32 accumulator values of GEMMLowp to QASYMM8/QASYMM8_SIGNED.

This kernel takes a final int32 accumulator value (the output of CpuGemmLowpMatrixMultiplyKernel), and processes it to obtain the final QASYMM8/QASYMM8_SIGNED value. The following computations will be performed by the kernel:

  1. Add offset terms to final result
  2. Multiply each entry of result by result_mult_int
  3. Add bias to final result if bias tensor is not a nullptr
  4. Shift the int32 accumulator by result_shift
  5. Clamp the value between the specified min and max bounds
  6. Clamp the resulting int32 values:
     - to the [0..255] range and cast to QASYMM8, or
     - to the [-128..127] range and cast to QASYMM8_SIGNED.
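The steps above can be sketched as a scalar reference function. This is a minimal illustration following the listed order, not the kernel's actual (vectorized) implementation; the parameter names are illustrative:

```cpp
#include <algorithm>
#include <cstdint>

// Scalar sketch of the quantize-down steps listed above (illustrative only;
// the real kernel applies this element-wise over the execution window).
template <typename OutT>
OutT quantize_down_scale(int32_t acc, int32_t offset, int32_t result_mult_int,
                         int32_t bias, int32_t result_shift,
                         int32_t min_bound, int32_t max_bound)
{
    int32_t v = acc + offset;                        // 1. add offset term
    v *= result_mult_int;                            // 2. multiply by result_mult_int
    v += bias;                                       // 3. add bias (pass 0 when the bias tensor is nullptr)
    v >>= result_shift;                              // 4. shift by result_shift
    v = std::max(min_bound, std::min(max_bound, v)); // 5. clamp between min and max bounds
    return static_cast<OutT>(v);                     // 6. cast to QASYMM8 / QASYMM8_SIGNED
}
```

For QASYMM8 the final bounds collapse to [0..255]; for QASYMM8_SIGNED to [-128..127].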

Definition at line 54 of file CpuGemmLowpQuantizeDownInt32ScaleKernel.h.

Constructor & Destructor Documentation

◆ CpuGemmLowpQuantizeDownInt32ScaleKernel()

Member Function Documentation

◆ ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE()

ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE ( CpuGemmLowpQuantizeDownInt32ScaleKernel  )

◆ configure()

void configure ( ITensorInfo * src,
ITensorInfo * bias,
ITensorInfo * dst,
const GEMMLowpOutputStageInfo * output_stage 
)

Initialise the kernel's input and output.

Parameters
[in]  src  Input tensor info. Data type supported: S32
[in]  bias  Biases tensor info. Only shared biases supported and it can be a nullptr if the biases addition is not required. Biases are a 1D tensor with dimensions [OFM]. Data type supported: Same as input.
[out]  dst  Output tensor info. Data type supported: QASYMM8/QASYMM8_SIGNED
[in]  output_stage  GEMMLowp output stage metadata.

Definition at line 262 of file CpuGemmLowpQuantizeDownInt32ScaleKernel.cpp.

References ARM_COMPUTE_ERROR, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, ARM_COMPUTE_UNUSED, arm_compute::auto_init_if_empty(), arm_compute::calculate_max_window(), ICloneable< T >::clone(), GEMMLowpOutputStageInfo::gemmlowp_max_bound, GEMMLowpOutputStageInfo::gemmlowp_min_bound, arm_compute::quantization::get_min_max_values_from_quantized_data_type(), GEMMLowpOutputStageInfo::output_data_type, arm_compute::QASYMM8, and arm_compute::QASYMM8_SIGNED.

263 {
264  ARM_COMPUTE_UNUSED(bias);
265  // Perform validate step
266  ARM_COMPUTE_ERROR_ON_NULLPTR(src, dst, output_stage);
267 
268  // Output auto initialization if not yet initialized
269  auto_init_if_empty(*dst, src->clone()->set_data_type(output_stage->output_data_type));
270 
271  ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(src,
272  bias,
273  dst,
274  output_stage));
275 
276  _output_stage = output_stage;
277 
278  // Configure kernel window
279  Window win = calculate_max_window(*src, Steps());
280 
281  ICpuKernel::configure(win);
282 
283  // Check if we need to clamp the result using min and max
284  _is_bounded_relu = ((_output_stage->gemmlowp_min_bound != _output_stage->gemmlowp_max_bound)
285  && !(_output_stage->gemmlowp_min_bound == std::get<0>(quantization::get_min_max_values_from_quantized_data_type(output_stage->output_data_type))
286  && _output_stage->gemmlowp_max_bound == std::get<1>(quantization::get_min_max_values_from_quantized_data_type(output_stage->output_data_type))));
287  if(_output_stage->output_data_type == DataType::QASYMM8)
288  {
289  _func = &CpuGemmLowpQuantizeDownInt32ScaleKernel::run_internal<uint8_t>;
290  }
291  else if(_output_stage->output_data_type == DataType::QASYMM8_SIGNED)
292  {
293  _func = &CpuGemmLowpQuantizeDownInt32ScaleKernel::run_internal<int8_t>;
294  }
295  else
296  {
297  ARM_COMPUTE_ERROR("Data type not supported");
298  }
299 }
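The clamp check near the end of configure() (the `_is_bounded_relu` flag) reduces to a small predicate: clamping is only needed when the [min, max] bounds are actually set and are narrower than the full range of the output data type. A self-contained restatement of that logic (hypothetical function name, not ACL API):

```cpp
#include <cstdint>
#include <utility>

// Restatement of the _is_bounded_relu condition in configure():
// bounds must be set (min != max) and must not span the output
// data type's entire representable range.
bool needs_bounded_relu(int32_t min_bound, int32_t max_bound,
                        std::pair<int32_t, int32_t> type_range)
{
    const bool bounds_set      = (min_bound != max_bound);
    const bool bounds_are_full = (min_bound == type_range.first &&
                                  max_bound == type_range.second);
    return bounds_set && !bounds_are_full;
}
```

For QASYMM8 the type range is {0, 255}, so a requested bound of [0, 255] disables the clamp, while e.g. [0, 100] enables it.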

◆ name()

const char * name ( ) const
override virtual

Name of the kernel.

Returns
Kernel name

Implements ICPPKernel.

Definition at line 320 of file CpuGemmLowpQuantizeDownInt32ScaleKernel.cpp.

321 {
322  return "CpuGemmLowpQuantizeDownInt32ScaleKernel";
323 }

◆ run_op()

void run_op ( ITensorPack & tensors,
const Window & window,
const ThreadInfo & info 
)
override virtual

Execute the kernel on the passed window.

Warning
If is_parallelisable() returns false then the passed window must be equal to window()
Note
The window has to be a region within the window returned by the window() method
The width of the window has to be a multiple of num_elems_processed_per_iteration().
Parameters
[in]  tensors  A vector containing the tensors to operate on.
[in]  window  Region on which to execute the kernel. (Must be a region of the window returned by window())
[in]  info  Info about executing thread and CPU.

Reimplemented from ICPPKernel.

Definition at line 307 of file CpuGemmLowpQuantizeDownInt32ScaleKernel.cpp.

References arm_compute::ACL_BIAS, arm_compute::ACL_DST, arm_compute::ACL_SRC, ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_MSG, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, ARM_COMPUTE_UNUSED, ITensorPack::empty(), ITensorPack::get_const_tensor(), ITensorPack::get_tensor(), and IKernel::window().

308 {
312  ARM_COMPUTE_ERROR_ON_MSG(tensors.empty(), "No inputs provided");
313 
314  auto src = tensors.get_const_tensor(TensorType::ACL_SRC);
315  auto bias = tensors.get_const_tensor(TensorType::ACL_BIAS);
316  auto dst = tensors.get_tensor(TensorType::ACL_DST);
317  (this->*_func)(src, bias, dst, window);
318 }
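run_op() fetches its operands from the ITensorPack by slot id (ACL_SRC, ACL_BIAS, ACL_DST) at execution time, so the kernel holds no tensor pointers between configure() and execution. A minimal, self-contained sketch of that lookup pattern (hypothetical types, not ACL's real ITensorPack):

```cpp
#include <unordered_map>

// Minimal sketch of the tensor-pack lookup pattern used by run_op():
// operands are resolved by slot id per call, keeping the kernel stateless
// with respect to tensor storage. Not ACL's real ITensorPack class.
enum class Slot { Src, Bias, Dst };

struct Pack
{
    std::unordered_map<Slot, void *> slots;

    void add(Slot id, void *tensor) { slots[id] = tensor; }

    // Missing slots resolve to nullptr, mirroring the optional bias operand.
    void *get(Slot id) const
    {
        auto it = slots.find(id);
        return it == slots.end() ? nullptr : it->second;
    }

    bool empty() const { return slots.empty(); }
};
```

The nullptr fallback is what lets the kernel accept packs without a bias tensor, matching the "bias can be a nullptr" contract in configure().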

◆ validate()

Status validate ( const ITensorInfo * src,
const ITensorInfo * bias,
const ITensorInfo * dst,
const GEMMLowpOutputStageInfo * output_stage 
)
static

Static function to check if given info will lead to a valid configuration.

Similar to CpuGemmLowpQuantizeDownInt32ScaleKernel::configure()

Returns
a status

Definition at line 301 of file CpuGemmLowpQuantizeDownInt32ScaleKernel.cpp.

References ARM_COMPUTE_RETURN_ON_ERROR.

Referenced by CpuGemmLowpOutputStage::validate().

302 {
303  ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(src, bias, dst, output_stage));
304  return Status{};
305 }

The documentation for this class was generated from the following files:

  * CpuGemmLowpQuantizeDownInt32ScaleKernel.h
  * CpuGemmLowpQuantizeDownInt32ScaleKernel.cpp