Compute Library
 21.02
CLQuantizationLayerKernel Class Reference

Interface for the quantization layer kernel. More...

#include <CLQuantizationLayerKernel.h>


Public Member Functions

 CLQuantizationLayerKernel ()
 Default constructor. More...
 
 CLQuantizationLayerKernel (const CLQuantizationLayerKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
CLQuantizationLayerKernel & operator= (const CLQuantizationLayerKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 CLQuantizationLayerKernel (CLQuantizationLayerKernel &&)=default
 Default Move Constructor. More...
 
CLQuantizationLayerKernel & operator= (CLQuantizationLayerKernel &&)=default
 Default move assignment operator. More...
 
 ~CLQuantizationLayerKernel ()=default
 Default destructor. More...
 
void configure (const ICLTensor *input, ICLTensor *output)
 Set the input and output. More...
 
void configure (const CLCompileContext &compile_context, const ICLTensor *input, ICLTensor *output)
 Set the input and output. More...
 
void run (const Window &window, cl::CommandQueue &queue) override
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
- Public Member Functions inherited from ICLKernel
 ICLKernel ()
 Constructor. More...
 
cl::Kernel & kernel ()
 Returns a reference to the OpenCL kernel of this object. More...
 
template<typename T >
void add_1D_array_argument (unsigned int &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed 1D array's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_2D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_2D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_3D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 3D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_4D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 4D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
virtual void run_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue)
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
template<typename T >
void add_argument (unsigned int &idx, T value)
 Add the passed parameters to the object's kernel's arguments starting from the index idx. More...
 
void set_lws_hint (const cl::NDRange &lws_hint)
 Set the Local-Workgroup-Size hint. More...
 
cl::NDRange lws_hint () const
 Return the Local-Workgroup-Size hint. More...
 
void set_wbsm_hint (const cl_int &wbsm_hint)
 Set the workgroup batch size modifier hint. More...
 
cl_int wbsm_hint () const
 Return the workgroup batch size modifier hint. More...
 
const std::string & config_id () const
 Get the configuration ID. More...
 
void set_target (GPUTarget target)
 Set the targeted GPU architecture. More...
 
void set_target (cl::Device &device)
 Set the targeted GPU architecture according to the CL device. More...
 
GPUTarget get_target () const
 Get the targeted GPU architecture. More...
 
size_t get_max_workgroup_size ()
 Get the maximum workgroup size for the device the CLKernelLibrary uses. More...
 
template<unsigned int dimension_size>
void add_tensor_argument (unsigned &idx, const ICLTensor *tensor, const Window &window)
 
template<typename T , unsigned int dimension_size>
void add_array_argument (unsigned &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed array's parameters to the object's kernel's arguments starting from the index idx. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *output)
 Static function to check if given info will lead to a valid configuration of CLQuantizationLayerKernel. More...
 
- Static Public Member Functions inherited from ICLKernel
static constexpr unsigned int num_arguments_per_1D_array ()
 Returns the number of arguments enqueued per 1D array object. More...
 
static constexpr unsigned int num_arguments_per_1D_tensor ()
 Returns the number of arguments enqueued per 1D tensor object. More...
 
static constexpr unsigned int num_arguments_per_2D_tensor ()
 Returns the number of arguments enqueued per 2D tensor object. More...
 
static constexpr unsigned int num_arguments_per_3D_tensor ()
 Returns the number of arguments enqueued per 3D tensor object. More...
 
static constexpr unsigned int num_arguments_per_4D_tensor ()
 Returns the number of arguments enqueued per 4D tensor object. More...
 
static cl::NDRange gws_from_window (const Window &window)
 Get the global work size given an execution window. More...
 

Detailed Description

Interface for the quantization layer kernel.

Note
The implementation supports only 3D input tensors.

Definition at line 37 of file CLQuantizationLayerKernel.h.

Constructor & Destructor Documentation

◆ CLQuantizationLayerKernel() [1/3]

Default constructor.

Definition at line 57 of file CLQuantizationLayerKernel.cpp.

58  : _input(nullptr), _output(nullptr)
59 {
60 }

◆ CLQuantizationLayerKernel() [2/3]

Prevent instances of this class from being copied (As this class contains pointers)

◆ CLQuantizationLayerKernel() [3/3]

Default Move Constructor.

◆ ~CLQuantizationLayerKernel()

Default destructor.

Member Function Documentation

◆ configure() [1/2]

void configure ( const ICLTensor * input,
 ICLTensor * output 
 )

Set the input and output.

Parameters
[in]inputSource tensor. Data types supported: QASYMM8/QASYMM8_SIGNED/F32/F16.
[out]outputDestination tensor with the same dimensions of input. Data types supported: QASYMM8/QASYMM8_SIGNED/QASYMM16.
Note
Output auto initialization is not supported by this kernel

Definition at line 62 of file CLQuantizationLayerKernel.cpp.

References CLKernelLibrary::get().

63 {
64  configure(CLKernelLibrary::get().get_compile_context(), input, output);
65 }

◆ configure() [2/2]

void configure ( const CLCompileContext & compile_context,
 const ICLTensor * input,
 ICLTensor * output 
 )

Set the input and output.

Parameters
[in]compile_contextThe compile context to be used.
[in]inputSource tensor. Data types supported: QASYMM8/QASYMM8_SIGNED/F32/F16.
[out]outputDestination tensor with the same dimensions of input. Data types supported: QASYMM8/QASYMM8_SIGNED/QASYMM16.
Note
Output auto initialization is not supported by this kernel

Definition at line 67 of file CLQuantizationLayerKernel.cpp.

References CLBuildOptions::add_option(), CLBuildOptions::add_option_if(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::calculate_max_window(), arm_compute::ceil_to_multiple(), arm_compute::create_kernel(), ITensorInfo::data_type(), Window::DimX, ITensorInfo::element_size(), arm_compute::float_to_string_with_full_precision(), arm_compute::get_cl_type_from_data_type(), arm_compute::quantization::get_min_max_values_from_quantized_data_type(), arm_compute::get_padding_info(), arm_compute::has_padding_changed(), ITensor::info(), arm_compute::test::validation::input, arm_compute::is_data_type_float(), arm_compute::is_data_type_quantized_asymmetric(), UniformQuantizationInfo::offset, CLBuildOptions::options(), arm_compute::test::validation::qinfo, ITensorInfo::quantization_info(), UniformQuantizationInfo::scale, Window::set(), ITensorInfo::set_valid_region(), ITensorInfo::tensor_shape(), arm_compute::support::cpp11::to_string(), QuantizationInfo::uniform(), arm_compute::validate_arguments(), and Dimensions< T >::x().

 68 {
 69     ARM_COMPUTE_ERROR_ON_NULLPTR(input, output);
 70 
 71     auto padding_info = get_padding_info({ input, output });
 72 
 73     ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(input->info(), output->info()));
 74 
 75     _input  = input;
 76     _output = output;
 77 
 78     const int  vec_size_x     = 16 / input->info()->element_size();
 79     const int  input_width_x  = input->info()->tensor_shape().x();
 80     const bool multi_access_x = (input_width_x / vec_size_x > 0);
 81 
 82     const UniformQuantizationInfo qinfo            = output->info()->quantization_info().uniform();
 83     const DataType                output_data_type = output->info()->data_type();
 84 
 85     float   scale_to_apply  = qinfo.scale;
 86     int32_t offset_to_apply = qinfo.offset;
 87     if(is_data_type_quantized_asymmetric(input->info()->data_type()))
 88     {
 89         /*
 90          * In case of requantization of a quantized input tensor to an output tensor with another quantization,
 91          * instead of applying a dequantization and then a quantization function, we just compute a new scale
 92          * and a new offset to apply.
 93          *
 94          * Assuming:
 95          *   - q_i as input quantized value
 96          *   - q_o as output quantized value
 97          *   - z_i as input quantization offset value
 98          *   - z_o as output quantization offset value
 99          *   - s_i as input quantization scale value
100          *   - s_o as output quantization scale value
101          *   - z_n as new quantization offset value
102          *   - s_n as new quantization scale value
103          *
104          *   q_o = ( q_i - z_i ) * s_i / s_o + z_o
105          *
106          * We can rewrite the formula as:
107          *
108          *   q_o = ( q_i * s_i / s_o ) - z_i * s_i / s_o + z_o
109          *
110          *   q_o = q_i / s_n + z_n
111          *
112          * Where:
113          *
114          *   s_n = s_o / s_i
115          *
116          *   z_n = - z_i * s_i / s_o + z_o
117          *
118          */
119         const UniformQuantizationInfo qinfo_in = _input->info()->quantization_info().uniform();
120         scale_to_apply /= qinfo_in.scale;
121         // In order to minimize flooring we convert the offset to a float,
122         // then compute the new offset in the float domain,
123         // finally we convert it back as int32_t
124         offset_to_apply -= static_cast<int32_t>(static_cast<float>(qinfo_in.offset) * qinfo_in.scale / qinfo.scale);
125     }
126 
127     // Create kernel
128     CLBuildOptions build_opts;
129     build_opts.add_option_if(is_data_type_float(_input->info()->data_type()), "-DIS_FLOAT");
130     build_opts.add_option("-DSCALE=" + float_to_string_with_full_precision(scale_to_apply));
131     build_opts.add_option("-DOFFSET=" + support::cpp11::to_string(offset_to_apply));
132     build_opts.add_option("-DVEC_SIZE=" + support::cpp11::to_string(vec_size_x));
133     build_opts.add_option("-DDATA_TYPE_IN=" + get_cl_type_from_data_type(input->info()->data_type()));
134     build_opts.add_option("-DDATA_TYPE_OUT=" + get_cl_type_from_data_type(output_data_type));
135     build_opts.add_option_if(multi_access_x, "-DLAST_ACCESSED_X=" + support::cpp11::to_string(std::max<int>(input_width_x - vec_size_x, 0)));
136     std::pair<int, int> min_max_quant_values = quantization::get_min_max_values_from_quantized_data_type(output_data_type);
137     build_opts.add_option("-DMIN_QUANT_VAL=" + support::cpp11::to_string(min_max_quant_values.first));
138     build_opts.add_option("-DMAX_QUANT_VAL=" + support::cpp11::to_string(min_max_quant_values.second));
139 
140     _kernel = create_kernel(compile_context, "quantization_layer", build_opts.options());
141 
142     // Configure kernel window
143     Window win = calculate_max_window(*input->info(), Steps());
144     if(multi_access_x)
145     {
146         win.set(Window::DimX, Window::Dimension(win.x().start(), ceil_to_multiple(win.x().end(), vec_size_x), vec_size_x));
147     }
148     ICLKernel::configure_internal(win);
149 
150     output->info()->set_valid_region(ValidRegion(Coordinates(), output->info()->tensor_shape()));
151 
152     ARM_COMPUTE_ERROR_ON(has_padding_changed(padding_info));
153 }

◆ operator=() [1/2]

CLQuantizationLayerKernel & operator= ( const CLQuantizationLayerKernel & )
delete

Prevent instances of this class from being copied (As this class contains pointers)

◆ operator=() [2/2]

Default move assignment operator.

◆ run()

void run ( const Window & window,
cl::CommandQueue & queue 
)
override virtual

Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.

Note
The queue is not flushed by this method, and therefore the kernel will not have been executed by the time this method returns.
Parameters
[in]windowRegion on which to execute the kernel. (Must be a valid region of the window returned by window()).
[in,out]queueCommand queue on which to enqueue the kernel.

Reimplemented from ICLKernel.

Definition at line 161 of file CLQuantizationLayerKernel.cpp.

References ICLKernel::add_3D_tensor_argument(), ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, Window::collapse_if_possible(), arm_compute::enqueue(), Window::first_slice_window_3D(), ICLKernel::lws_hint(), arm_compute::test::validation::reference::slice(), Window::slide_window_slice_3D(), and IKernel::window().

162 {
163     ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
164     ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(ICLKernel::window(), window);
165 
166     Window window_collapsed = window.collapse_if_possible(ICLKernel::window(), 3);
167     Window slice            = window_collapsed.first_slice_window_3D();
168 
169     do
170     {
171         unsigned int idx = 0;
172         add_3D_tensor_argument(idx, _input, slice);
173         add_3D_tensor_argument(idx, _output, slice);
174         enqueue(queue, *this, slice, lws_hint());
175     }
176     while(window_collapsed.slide_window_slice_3D(slice));
177 }

◆ validate()

Status validate ( const ITensorInfo * input,
 const ITensorInfo * output 
 )
static

Static function to check if given info will lead to a valid configuration of CLQuantizationLayerKernel.

Parameters
[in]inputInput tensor info. Data types supported: QASYMM8/QASYMM8_SIGNED/F32/F16.
[in]outputDestination tensor info with the same dimensions of input. Data types supported: QASYMM8/QASYMM8_SIGNED/QASYMM16.
Returns
a status

Definition at line 155 of file CLQuantizationLayerKernel.cpp.

References ARM_COMPUTE_RETURN_ON_ERROR, and arm_compute::validate_arguments().

Referenced by CLQuantizationLayer::validate(), and CLGenerateProposalsLayer::validate().

156 {
157     ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(input, output));
158     return Status{};
159 }

The documentation for this class was generated from the following files:

CLQuantizationLayerKernel.h
CLQuantizationLayerKernel.cpp