Compute Library
 19.08
CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel Class Reference

OpenCL kernel used to quantize down the int32 accumulator values of GEMMLowp to QASYMM8. More...

#include <CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel.h>

Collaboration diagram for CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel:

Public Member Functions

 CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel ()
 Constructor. More...
 
 CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel (const CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel & operator= (const CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel (CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel &&)=default
 Allow instances of this class to be moved. More...
 
CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel & operator= (CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel &&)=default
 Allow instances of this class to be moved. More...
 
void configure (const ICLTensor *input, const ICLTensor *bias, ICLTensor *output, int result_offset, int result_mult_int, int result_shift, int min=0, int max=0)
 Initialise the kernel's input and output. More...
 
void run (const Window &window, cl::CommandQueue &queue) override
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
- Public Member Functions inherited from ICLKernel
 ICLKernel ()
 Constructor. More...
 
cl::Kernel & kernel ()
 Returns a reference to the OpenCL kernel of this object. More...
 
template<typename T >
void add_1D_array_argument (unsigned int &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed 1D array's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_2D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_2D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_3D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 3D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_4D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 4D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
template<typename T >
void add_argument (unsigned int &idx, T value)
 Add the passed parameters to the object's kernel's arguments starting from the index idx. More...
 
void set_lws_hint (const cl::NDRange &lws_hint)
 Set the Local-Workgroup-Size hint. More...
 
cl::NDRange lws_hint () const
 Return the Local-Workgroup-Size hint. More...
 
const std::string & config_id () const
 Get the configuration ID. More...
 
void set_target (GPUTarget target)
 Set the targeted GPU architecture. More...
 
void set_target (cl::Device &device)
 Set the targeted GPU architecture according to the CL device. More...
 
GPUTarget get_target () const
 Get the targeted GPU architecture. More...
 
size_t get_max_workgroup_size ()
 Get the maximum workgroup size for the device the CLKernelLibrary uses. More...
 
template<typename T , unsigned int dimension_size>
void add_array_argument (unsigned &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed array's parameters to the object's kernel's arguments starting from the index idx. More...
 
template<unsigned int dimension_size>
void add_tensor_argument (unsigned &idx, const ICLTensor *tensor, const Window &window)
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *bias, const ITensorInfo *output, int min=0, int max=0)
 Static function to check if given info will lead to a valid configuration of CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel. More...
 
- Static Public Member Functions inherited from ICLKernel
static constexpr unsigned int num_arguments_per_1D_array ()
 Returns the number of arguments enqueued per 1D array object. More...
 
static constexpr unsigned int num_arguments_per_1D_tensor ()
 Returns the number of arguments enqueued per 1D tensor object. More...
 
static constexpr unsigned int num_arguments_per_2D_tensor ()
 Returns the number of arguments enqueued per 2D tensor object. More...
 
static constexpr unsigned int num_arguments_per_3D_tensor ()
 Returns the number of arguments enqueued per 3D tensor object. More...
 
static constexpr unsigned int num_arguments_per_4D_tensor ()
 Returns the number of arguments enqueued per 4D tensor object. More...
 
static cl::NDRange gws_from_window (const Window &window)
 Get the global work size given an execution window. More...
 

Detailed Description

OpenCL kernel used to quantize down the int32 accumulator values of GEMMLowp to QASYMM8.

This kernel takes a final int32 accumulator value (the output of CLGEMMLowpMatrixMultiplyKernel), and processes it to obtain the final QASYMM8 value. The following computations will be performed by the kernel:

  1. Add offset terms to final result
  2. Multiply each entry of result by result_mult_int
  3. Add bias to final result if bias tensor is not a nullptr
  4. Shift the int32 accumulator by result_shift
  5. Clamp the value between the specified min and max bounds
  6. Clamp the resulting int32 values to the [0..255] range and cast to QASYMM8.
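
For reference, a minimal scalar sketch of the pipeline above, following the documented step order (function and parameter names are illustrative; the actual processing is performed by the vectorised OpenCL kernel):

    #include <algorithm>
    #include <cstdint>

    // One int32 accumulator value -> one QASYMM8 value, following steps 1-6 above.
    inline uint8_t quantize_down_scale(int32_t acc, int32_t bias, bool has_bias,
                                       int32_t result_offset, int32_t result_mult_int,
                                       int32_t result_shift, int32_t min_bound, int32_t max_bound)
    {
        int32_t v = acc + result_offset;                     // 1. add the offset term
        v *= result_mult_int;                                // 2. multiply by result_mult_int
        if(has_bias)
        {
            v += bias;                                       // 3. add the bias, if a bias tensor was provided
        }
        v >>= result_shift;                                  // 4. shift right by result_shift
        if(min_bound != max_bound)                           // 5. clamp to [min, max] when bounds are set
        {
            v = std::max(min_bound, std::min(v, max_bound));
        }
        v = std::max<int32_t>(0, std::min<int32_t>(v, 255)); // 6. clamp to [0, 255] and cast to QASYMM8
        return static_cast<uint8_t>(v);
    }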

Definition at line 46 of file CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel.h.

Constructor & Destructor Documentation

◆ CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel() [1/3]

Constructor.

Definition at line 98 of file CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel.cpp.

99  : _input(nullptr), _bias(nullptr), _output(nullptr)
100 {
101 }

◆ CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel() [2/3]

Prevent instances of this class from being copied (As this class contains pointers)

◆ CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel() [3/3]

Allow instances of this class to be moved.

Member Function Documentation

◆ configure()

void configure ( const ICLTensor *  input,
const ICLTensor *  bias,
ICLTensor *  output,
int  result_offset,
int  result_mult_int,
int  result_shift,
int  min = 0,
int  max = 0 
)

Initialise the kernel's input and output.

Parameters
[in]  input            Input tensor. Data type supported: S32
[in]  bias             Biases tensor. Only shared biases are supported and it can be a nullptr if the addition of biases is not required. Biases are a 1D tensor with dimensions [OFM]. Data type supported: Same as input.
[out] output           Output tensor. Data type supported: QASYMM8
[in]  result_offset    Offset to be added to each element of the input matrix
[in]  result_mult_int  Value to be multiplied with each element of the input matrix once the result_offset has been added
[in]  result_shift     Number of bits to shift right the result before converting back to QASYMM8
[in]  min              (Optional) Min value used to saturate down the output result before converting back to QASYMM8
[in]  max              (Optional) Max value used to saturate up the output result before converting back to QASYMM8. Along with min, this value can be used to implement "rectified linear unit" activation functions
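
For illustration, a minimal usage sketch (assumes a working OpenCL runtime; shapes, requantisation values and variable names are placeholders, and the include paths follow the 19.08 source tree):

    #include "arm_compute/core/CL/kernels/CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel.h"
    #include "arm_compute/core/TensorInfo.h"
    #include "arm_compute/core/Types.h"
    #include "arm_compute/runtime/CL/CLScheduler.h"
    #include "arm_compute/runtime/CL/CLTensor.h"

    using namespace arm_compute;

    int main()
    {
        CLScheduler::get().default_init();

        // S32 accumulators in, QASYMM8 out; shapes are placeholders.
        CLTensor acc, dst;
        acc.allocator()->init(TensorInfo(TensorShape(64U, 32U), 1, DataType::S32));
        dst.allocator()->init(TensorInfo(TensorShape(64U, 32U), 1, DataType::QASYMM8));
        acc.allocator()->allocate();
        dst.allocator()->allocate();

        CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel quantize_down;
        // No bias; leaving min == max == 0 disables the optional clamping.
        quantize_down.configure(&acc, nullptr, &dst, /*result_offset=*/-100,
                                /*result_mult_int=*/2, /*result_shift=*/8);

        CLScheduler::get().enqueue(quantize_down);
        CLScheduler::get().sync();
        return 0;
    }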

Definition at line 114 of file CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel.cpp.

116 {
117  // Perform validate step
118  ARM_COMPUTE_ERROR_ON_NULLPTR(input, output);
119 
120  // Output auto inizialitation if not yet initialized
121  auto_init_if_empty(*output->info(), input->info()->clone()->set_data_type(DataType::QASYMM8));
122 
123  ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(input->info(),
124  (bias != nullptr) ? bias->info() : nullptr,
125  output->info(),
126  min,
127  max));
128 
129  _input = input;
130  _bias = bias;
131  _output = output;
132 
133  // Set the arguments to pass at compile time
134  CLBuildOptions build_opts;
135  build_opts.add_option("-DRESULT_OFFSET=" + support::cpp11::to_string(result_offset));
136  build_opts.add_option("-DRESULT_MULT_INT=" + support::cpp11::to_string(result_mult_int));
137  build_opts.add_option("-DRESULT_SHIFT=" + support::cpp11::to_string(result_shift));
138  build_opts.add_option_if((min != 0) && (min != max), "-DMIN_BOUND=" + support::cpp11::to_string(min));
139  build_opts.add_option_if((max != 255) && (min != max), "-DMAX_BOUND=" + support::cpp11::to_string(max));
140  build_opts.add_option_if(bias != nullptr, "-DADD_BIAS");
141 
142  // Create kernel
143  _kernel = static_cast<cl::Kernel>(CLKernelLibrary::get().create_kernel("gemmlowp_output_stage_quantize_down", build_opts.options()));
144 
145  // Configure kernel window
146  auto win_config = validate_and_configure_window(input->info(), (bias != nullptr) ? bias->info() : nullptr, output->info());
147  ARM_COMPUTE_ERROR_THROW_ON(win_config.first);
148  ICLKernel::configure_internal(win_config.second);
149 }

References CLBuildOptions::add_option(), CLBuildOptions::add_option_if(), ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), arm_compute::test::validation::bias, ICloneable< T >::clone(), arm_compute::create_kernel(), CLKernelLibrary::get(), ITensor::info(), CLTensor::info(), CLBuildOptions::options(), arm_compute::QASYMM8, arm_compute::support::cpp11::to_string(), and arm_compute::validate_and_configure_window().
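
As the definition above shows, the requantisation parameters are baked into the OpenCL program as compile-time -D defines. The following plain C++ fragment (standard library only, values are hypothetical) mirrors that option selection for one example call:

    #include <iostream>
    #include <string>

    int main()
    {
        // Example values: configure(&acc, &bias, &dst, -100, 2, 8, /*min=*/0, /*max=*/200)
        const int  result_offset = -100, result_mult_int = 2, result_shift = 8;
        const int  min = 0, max = 200;
        const bool has_bias = true;

        std::string opts = "-DRESULT_OFFSET=" + std::to_string(result_offset) +
                           " -DRESULT_MULT_INT=" + std::to_string(result_mult_int) +
                           " -DRESULT_SHIFT=" + std::to_string(result_shift);
        if((min != 0) && (min != max))   opts += " -DMIN_BOUND=" + std::to_string(min); // skipped here: min == 0
        if((max != 255) && (min != max)) opts += " -DMAX_BOUND=" + std::to_string(max); // added: max != 255
        if(has_bias)                     opts += " -DADD_BIAS";

        // Prints: -DRESULT_OFFSET=-100 -DRESULT_MULT_INT=2 -DRESULT_SHIFT=8 -DMAX_BOUND=200 -DADD_BIAS
        std::cout << opts << std::endl;
        return 0;
    }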

◆ operator=() [1/2]

Prevent instances of this class from being copied (As this class contains pointers)

◆ operator=() [2/2]

Allow instances of this class to be moved.

◆ run()

void run ( const Window &  window,
cl::CommandQueue &  queue 
)
override virtual

Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.

Note
The queue is not flushed by this method, and therefore the kernel will not have been executed by the time this method returns.
Parameters
[in]     window  Region on which to execute the kernel. (Must be a valid region of the window returned by window()).
[in,out] queue   Command queue on which to enqueue the kernel.

Implements ICLKernel.
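
In typical user code the kernel is driven by CLScheduler::get().enqueue(kernel), which selects the execution window and command queue for you. A minimal sketch of the equivalent direct call (hypothetical helper; the kernel is assumed to be configured as in the configure() example above):

    #include "arm_compute/core/CL/kernels/CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel.h"
    #include "arm_compute/runtime/CL/CLScheduler.h"

    using namespace arm_compute;

    void run_quantize_down(CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel &quantize_down)
    {
        // Process the kernel's full execution window on the scheduler's command queue.
        quantize_down.run(quantize_down.window(), CLScheduler::get().queue());

        // run() does not flush the queue, so wait explicitly before reading back results.
        CLScheduler::get().queue().finish();
    }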

Definition at line 151 of file CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel.cpp.

152 {
153  ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
154  ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(ICLKernel::window(), window);
155 
156  Window collapsed = window.collapse_if_possible(ICLKernel::window(), Window::DimZ);
157  Window slice = collapsed.first_slice_window_3D();
158 
159  unsigned int idx1 = num_arguments_per_3D_tensor();
160  if(_bias != nullptr)
161  {
162  Window biases_slice(slice);
163  biases_slice.set(Window::DimY, Window::Dimension(0, 1, 1));
164  biases_slice.set(Window::DimZ, Window::Dimension(0, 1, 1));
165  add_1D_tensor_argument(idx1, _bias, biases_slice);
166  }
167 
168  do
169  {
170  unsigned int idx = 0;
171  add_3D_tensor_argument(idx, _input, slice);
172  add_3D_tensor_argument(idx1, _output, slice);
173  enqueue(queue, *this, slice);
174  }
175  while(collapsed.slide_window_slice_3D(slice));
176 }

References ICLKernel::add_1D_tensor_argument(), ICLKernel::add_3D_tensor_argument(), ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, Window::collapse_if_possible(), Window::DimY, Window::DimZ, arm_compute::enqueue(), Window::first_slice_window_3D(), ICLKernel::num_arguments_per_3D_tensor(), Window::set(), arm_compute::test::validation::reference::slice(), Window::slide_window_slice_3D(), and IKernel::window().

◆ validate()

Status validate ( const ITensorInfo *  input,
const ITensorInfo *  bias,
const ITensorInfo *  output,
int  min = 0,
int  max = 0 
)
static

Static function to check if given info will lead to a valid configuration of CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel.

Parameters
[in]  input   Input tensor. Data type supported: S32
[in]  bias    Biases tensor. Only shared biases are supported and it can be a nullptr if the addition of biases is not required. Biases are a 1D tensor with dimensions [OFM]. Data type supported: Same as input.
[in]  output  Output tensor. Data type supported: QASYMM8
[in]  min     (Optional) Min value used to saturate down the output result before converting back to QASYMM8
[in]  max     (Optional) Max value used to saturate up the output result before converting back to QASYMM8. Along with min, this value can be used to implement "rectified linear unit" activation functions
Returns
a status
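
A minimal pre-configure check might look like the following sketch (shapes are placeholders; validate() only inspects ITensorInfo metadata):

    #include "arm_compute/core/CL/kernels/CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel.h"
    #include "arm_compute/core/TensorInfo.h"
    #include <iostream>

    using namespace arm_compute;

    int main()
    {
        const TensorInfo acc_info(TensorShape(64U, 32U), 1, DataType::S32);
        const TensorInfo dst_info(TensorShape(64U, 32U), 1, DataType::QASYMM8);

        // No bias, default [0, 255] output range.
        const Status status = CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel::validate(&acc_info, nullptr, &dst_info);
        if(status.error_code() != ErrorCode::OK)
        {
            std::cout << "Invalid configuration: " << status.error_description() << std::endl;
            return 1;
        }
        return 0;
    }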

Definition at line 102 of file CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel.cpp.

103 {
104  ARM_COMPUTE_ERROR_ON_NULLPTR(input, output);
105  ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(input, bias, output, min, max));
106  ARM_COMPUTE_RETURN_ON_ERROR(validate_and_configure_window(input->clone().get(),
107  (bias != nullptr) ? bias->clone().get() : nullptr,
108  output->clone().get())
109  .first);
110 
111  return Status{};
112 }

References ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_RETURN_ON_ERROR, arm_compute::test::validation::bias, ICloneable< T >::clone(), and arm_compute::validate_and_configure_window().

Referenced by CLGEMMLowpQuantizeDownInt32ToUint8Scale::validate().


The documentation for this class was generated from the following files:

CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel.h
CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel.cpp