Compute Library
 21.02
CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel Class Reference

OpenCL kernel used to quantize down the int32 accumulator values of GEMMLowp to QASYMM8/QASYMM8_SIGNED/QSYMM16.

#include <CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel.h>

Collaboration diagram for CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel:

Public Member Functions

 CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel ()
 Constructor.
 
 CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel (const CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel &)=delete
 Prevent instances of this class from being copied (as this class contains pointers).
 
CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel & operator= (const CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel &)=delete
 Prevent instances of this class from being copied (as this class contains pointers).
 
 CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel (CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel &&)=default
 Allow instances of this class to be moved.
 
CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel & operator= (CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel &&)=default
 Allow instances of this class to be moved.
 
void configure (const CLCompileContext &compile_context, const ICLTensor *input, const ICLTensor *bias, ICLTensor *output, const GEMMLowpOutputStageInfo *info)
 Initialise the kernel's input and output.
 
void run (const Window &window, cl::CommandQueue &queue) override
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.
 
- Public Member Functions inherited from ICLKernel
 ICLKernel ()
 Constructor.
 
cl::Kernel & kernel ()
 Returns a reference to the OpenCL kernel of this object.
 
template<typename T >
void add_1D_array_argument (unsigned int &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed 1D array's parameters to the object's kernel's arguments starting from the index idx.
 
void add_1D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx.
 
void add_1D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true.
 
void add_2D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx.
 
void add_2D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true.
 
void add_3D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 3D tensor's parameters to the object's kernel's arguments starting from the index idx.
 
void add_4D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 4D tensor's parameters to the object's kernel's arguments starting from the index idx.
 
virtual void run_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue)
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.
 
template<typename T >
void add_argument (unsigned int &idx, T value)
 Add the passed parameters to the object's kernel's arguments starting from the index idx.
 
void set_lws_hint (const cl::NDRange &lws_hint)
 Set the Local-Workgroup-Size hint.
 
cl::NDRange lws_hint () const
 Return the Local-Workgroup-Size hint.
 
void set_wbsm_hint (const cl_int &wbsm_hint)
 Set the workgroup batch size modifier hint.
 
cl_int wbsm_hint () const
 Return the workgroup batch size modifier hint.
 
const std::string & config_id () const
 Get the configuration ID.
 
void set_target (GPUTarget target)
 Set the targeted GPU architecture.
 
void set_target (cl::Device &device)
 Set the targeted GPU architecture according to the CL device.
 
GPUTarget get_target () const
 Get the targeted GPU architecture.
 
size_t get_max_workgroup_size ()
 Get the maximum workgroup size for the device the CLKernelLibrary uses.
 
template<unsigned int dimension_size>
void add_tensor_argument (unsigned &idx, const ICLTensor *tensor, const Window &window)
 
template<typename T , unsigned int dimension_size>
void add_array_argument (unsigned &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed array's parameters to the object's kernel's arguments starting from the index idx.
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor.
 
virtual ~IKernel ()=default
 Destructor.
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable.
 
virtual BorderSize border_size () const
 The size of the border for that kernel.
 
const Window & window () const
 The maximum window the kernel can be executed on.
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *bias, const ITensorInfo *output, const GEMMLowpOutputStageInfo *info)
 Static function to check if given info will lead to a valid configuration of CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel.
 
- Static Public Member Functions inherited from ICLKernel
static constexpr unsigned int num_arguments_per_1D_array ()
 Returns the number of arguments enqueued per 1D array object.
 
static constexpr unsigned int num_arguments_per_1D_tensor ()
 Returns the number of arguments enqueued per 1D tensor object.
 
static constexpr unsigned int num_arguments_per_2D_tensor ()
 Returns the number of arguments enqueued per 2D tensor object.
 
static constexpr unsigned int num_arguments_per_3D_tensor ()
 Returns the number of arguments enqueued per 3D tensor object.
 
static constexpr unsigned int num_arguments_per_4D_tensor ()
 Returns the number of arguments enqueued per 4D tensor object.
 
static cl::NDRange gws_from_window (const Window &window)
 Get the global work size given an execution window.
 

Detailed Description

OpenCL kernel used to quantize down the int32 accumulator values of GEMMLowp to QASYMM8/QASYMM8_SIGNED/QSYMM16.

This kernel takes a final int32 accumulator value (the output of the matrix multiplication) and processes it to obtain the final quantized value. The kernel performs the following computations:

  1. Compute the fixed-point multiplication of each entry of the input by gemmlowp_multiplier.
  2. Add the bias to the result if the bias tensor is not a nullptr.
  3. Perform a round-to-nearest division by a power of two, using result_shift.
  4. Add the offset to each result.
  5. Clamp the value between the specified min and max bounds.
  6. Clamp the resulting int32 values to the range of the quantized output type and cast to QASYMM8/QASYMM8_SIGNED/QSYMM16.

Definition at line 45 of file CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel.h.
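The requantization steps above can be modelled in plain scalar C++. The sketch below is hypothetical and is not the OpenCL kernel: it covers the QASYMM8_SIGNED case, adds the bias to the raw accumulator before the multiplication (the usual convention), and uses the standard gemmlowp doubling-high-multiply and rounding-shift formulation, which is an assumption here.

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical scalar model of the documented output stage (QASYMM8_SIGNED).
// Parameter names mirror the GEMMLowpOutputStageInfo fields.
int8_t quantize_down(int32_t acc, int32_t bias, int32_t multiplier, int shift,
                     int32_t offset, int32_t min_bound, int32_t max_bound)
{
    acc += bias; // step 2: optional bias addition

    // step 1: fixed-point multiply, taking the rounded high 32 bits of the
    // doubled 64-bit product (multiplier is a Q31 fixed-point value)
    const int64_t prod = static_cast<int64_t>(acc) * multiplier;
    int32_t result = static_cast<int32_t>((prod + (int64_t{1} << 30)) >> 31);

    // step 3: round-to-nearest division by 2^shift
    if(shift > 0)
    {
        result = (result + (1 << (shift - 1))) >> shift;
    }

    result += offset;                                          // step 4
    result = std::max(min_bound, std::min(max_bound, result)); // step 5
    result = std::max(-128, std::min(127, result));            // step 6
    return static_cast<int8_t>(result);
}
```

For example, with a multiplier of 1 << 30 (representing 0.5 in Q31) and a shift of 1, an accumulator of 100 requantizes to 25.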

Constructor & Destructor Documentation

◆ CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel() [1/3]

Constructor.

Definition at line 66 of file CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel.cpp.

    : _input(nullptr), _bias(nullptr), _output(nullptr)
{
}

◆ CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel() [2/3]

Prevent instances of this class from being copied (as this class contains pointers).

◆ CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel() [3/3]

Allow instances of this class to be moved.

Member Function Documentation

◆ configure()

void configure ( const CLCompileContext & compile_context,
		const ICLTensor * input,
		const ICLTensor * bias,
		ICLTensor * output,
		const GEMMLowpOutputStageInfo * info 
	)

Initialise the kernel's input and output.

Parameters
    [in]   compile_context  The compile context to be used.
    [in]   input            Input tensor. Data type supported: S32
    [in]   bias             Biases tensor. Only shared biases supported and it can be a nullptr if the biases addition is not required. Biases are 1D tensor with dimensions [OFM]. Data type supported: Same as input.
    [out]  output           Output tensor. Data type supported: QASYMM8/QASYMM8_SIGNED/QSYMM16.
    [in]   info             Output stage info. Used to pass the quantized output data type.

Definition at line 80 of file CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel.cpp.

References CLBuildOptions::add_option(), CLBuildOptions::add_option_if(), arm_compute::adjust_vec_size(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), arm_compute::calculate_max_window(), ICloneable< T >::clone(), arm_compute::create_kernel(), ITensorInfo::data_type(), ITensorInfo::dimension(), GEMMLowpOutputStageInfo::gemmlowp_max_bound, GEMMLowpOutputStageInfo::gemmlowp_min_bound, GEMMLowpOutputStageInfo::gemmlowp_multiplier, GEMMLowpOutputStageInfo::gemmlowp_offset, GEMMLowpOutputStageInfo::gemmlowp_shift, arm_compute::get_cl_type_from_data_type(), arm_compute::quantization::get_min_max_values_from_quantized_data_type(), arm_compute::get_padding_info(), arm_compute::has_padding_changed(), ITensor::info(), arm_compute::test::validation::info, arm_compute::test::validation::input, kernel_name, num_elems_processed_per_iteration, CLBuildOptions::options(), GEMMLowpOutputStageInfo::output_data_type, arm_compute::QSYMM16, arm_compute::support::cpp11::to_string(), and arm_compute::validate_arguments().

{
    // Perform validate step
    ARM_COMPUTE_ERROR_ON_NULLPTR(input, output);
    ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(input->info(), (bias != nullptr) ? bias->info() : nullptr, output->info(), info));

    auto padding_info = get_padding_info({ input, bias, output });

    // Output auto initialization if not yet initialized
    auto_init_if_empty(*output->info(), input->info()->clone()->set_data_type(info->output_data_type));

    _input  = input;
    _bias   = bias;
    _output = output;

    const unsigned int num_elems_processed_per_iteration = adjust_vec_size(4, input->info()->dimension(0));

    // Set the arguments to pass at compile time
    auto           min = info->gemmlowp_min_bound;
    auto           max = info->gemmlowp_max_bound;
    CLBuildOptions build_opts;
    build_opts.add_option("-DVEC_SIZE=" + support::cpp11::to_string(num_elems_processed_per_iteration));
    build_opts.add_option("-DVEC_SIZE_LEFTOVER=" + support::cpp11::to_string(input->info()->dimension(0) % num_elems_processed_per_iteration));
    build_opts.add_option("-DRESULT_OFFSET_AFTER_SHIFT=" + support::cpp11::to_string(info->gemmlowp_offset));
    build_opts.add_option("-DRESULT_FIXEDPOINT_MULTIPLIER=" + support::cpp11::to_string(info->gemmlowp_multiplier));
    build_opts.add_option("-DRESULT_SHIFT=" + support::cpp11::to_string(info->gemmlowp_shift));
    build_opts.add_option("-DOUTPUT_DATA_TYPE=" + get_cl_type_from_data_type(output->info()->data_type()));
    build_opts.add_option_if((min > std::get<0>(quantization::get_min_max_values_from_quantized_data_type(info->output_data_type))) && (min != max),
                             "-DMIN_BOUND=" + support::cpp11::to_string(min));
    build_opts.add_option_if((max < std::get<1>(quantization::get_min_max_values_from_quantized_data_type(info->output_data_type))) && (min != max),
                             "-DMAX_BOUND=" + support::cpp11::to_string(max));
    build_opts.add_option_if(bias != nullptr, "-DADD_BIAS");

    // Create kernel
    const std::string kernel_name = (info->output_data_type == DataType::QSYMM16) ? "gemmlowp_output_stage_quantize_down_fixedpoint_qsymm16" : "gemmlowp_output_stage_quantize_down_fixedpoint";
    _kernel = create_kernel(compile_context, kernel_name, build_opts.options());

    // Configure kernel window
    auto win = calculate_max_window(*output->info(), Steps(num_elems_processed_per_iteration));
    ICLKernel::configure_internal(win);

    ARM_COMPUTE_ERROR_ON(has_padding_changed(padding_info));
}
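Note how the add_option_if calls only emit -DMIN_BOUND / -DMAX_BOUND when the requested bounds are tighter than the natural range of the output type, so the OpenCL kernel skips redundant clamps. A minimal standalone sketch of that decision follows; the helper names and string-based type tags here are illustrative, not the library's API (the real code queries DataType enums):

```cpp
#include <string>
#include <utility>
#include <vector>

// Illustrative stand-in for get_min_max_values_from_quantized_data_type():
// natural ranges of the supported quantized output types.
std::pair<int, int> type_min_max(const std::string &data_type)
{
    if(data_type == "QASYMM8")        return { 0, 255 };
    if(data_type == "QASYMM8_SIGNED") return { -128, 127 };
    return { -32768, 32767 };         // QSYMM16
}

// Emit clamp defines only when they actually restrict the output range,
// mirroring the add_option_if conditions in configure() above.
std::vector<std::string> clamp_defines(int min, int max, const std::string &data_type)
{
    std::vector<std::string> opts;
    const std::pair<int, int> bounds = type_min_max(data_type);
    if((min > bounds.first) && (min != max))
        opts.push_back("-DMIN_BOUND=" + std::to_string(min));
    if((max < bounds.second) && (min != max))
        opts.push_back("-DMAX_BOUND=" + std::to_string(max));
    return opts;
}
```

With a full-range request such as [0, 255] for QASYMM8 no defines are emitted; a tighter request such as [10, 200] emits both.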

◆ operator=() [1/2]

Prevent instances of this class from being copied (as this class contains pointers).

◆ operator=() [2/2]

Allow instances of this class to be moved.

◆ run()

void run ( const Window & window,
		cl::CommandQueue & queue 
	)
overridevirtual

Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.

Note
The queue is not flushed by this method, and therefore the kernel will not have been executed by the time this method returns.
Parameters
    [in]      window  Region on which to execute the kernel. (Must be a valid region of the window returned by window()).
    [in,out]  queue   Command queue on which to enqueue the kernel.

Reimplemented from ICLKernel.

Definition at line 125 of file CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel.cpp.

References ICLKernel::add_1D_tensor_argument(), ICLKernel::add_3D_tensor_argument(), ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, Window::collapse_if_possible(), Window::DimY, Window::DimZ, arm_compute::enqueue(), Window::first_slice_window_3D(), ICLKernel::lws_hint(), ICLKernel::num_arguments_per_3D_tensor(), Window::set(), arm_compute::test::validation::reference::slice(), Window::slide_window_slice_3D(), and IKernel::window().

{
    ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
    ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(IKernel::window(), window);

    // Create input window
    Window collapsed = window.collapse_if_possible(ICLKernel::window(), Window::DimZ);
    Window slice     = collapsed.first_slice_window_3D();

    // Setup bias slice
    unsigned int idx1 = num_arguments_per_3D_tensor();
    if(_bias != nullptr)
    {
        Window biases_slice(slice);
        biases_slice.set(Window::DimY, Window::Dimension(0, 1, 1));
        biases_slice.set(Window::DimZ, Window::Dimension(0, 1, 1));
        add_1D_tensor_argument(idx1, _bias, biases_slice);
    }

    do
    {
        unsigned int idx = 0;
        add_3D_tensor_argument(idx, _input, slice);
        add_3D_tensor_argument(idx1, _output, slice);
        enqueue(queue, *this, slice, lws_hint());
    }
    while(collapsed.slide_window_slice_3D(slice));
}
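The do/while loop in run() walks the collapsed execution window one 3D slice at a time and issues one enqueue per slice. A toy model of that control flow is sketched below; Window3D is a hypothetical stand-in for arm_compute::Window, reduced to the single dimension that slide_window_slice_3D() advances.

```cpp
// Toy model of run()'s slice loop: one kernel enqueue per 3D slice of the
// collapsed window. Window3D is a hypothetical simplification.
struct Window3D
{
    int start, end, step; // the dimension the 3D slices slide over
};

int count_enqueues(const Window3D &collapsed)
{
    int enqueues = 0;
    int slice = collapsed.start; // first_slice_window_3D()
    do
    {
        ++enqueues;              // enqueue(queue, *this, slice, lws_hint())
        slice += collapsed.step; // slide_window_slice_3D()
    }
    while(slice < collapsed.end);
    return enqueues;
}
```

Because the window is collapsed above dimension Z before slicing, the loop body typically runs exactly once; the do/while form still handles windows that cannot be fully collapsed.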

◆ validate()

Status validate ( const ITensorInfo * input,
		const ITensorInfo * bias,
		const ITensorInfo * output,
		const GEMMLowpOutputStageInfo * info 
	)
static

Static function to check if given info will lead to a valid configuration of CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel.

Parameters
    [in]  input   Input tensor. Data type supported: S32
    [in]  bias    Biases tensor. Only shared biases supported and it can be a nullptr if the biases addition is not required. Biases are 1D tensor with dimensions [OFM]. Data type supported: Same as input.
    [in]  output  Output tensor. Data type supported: QASYMM8/QASYMM8_SIGNED/QSYMM16.
    [in]  info    Output stage info. Used to pass the quantized output data type.
Returns
a status

Definition at line 71 of file CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel.cpp.

References ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_RETURN_ON_ERROR, and arm_compute::validate_arguments().

Referenced by CLGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint::validate(), CLGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint::validate(), CLGEMMLowpQuantizeDownInt32ToInt16ScaleByFixedPoint::validate(), and CLGEMMLowpOutputStage::validate().

{
    ARM_COMPUTE_ERROR_ON_NULLPTR(input, output);
    ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(input, bias, output, info));

    return Status{};
}

The documentation for this class was generated from the following files:

    CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel.h
    CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel.cpp