Compute Library
 21.02
CLDepthwiseConvolutionLayer3x3NHWCKernel Class Reference

Interface for the kernel to run a 3x3 depthwise convolution on a tensor when the data layout is NHWC. More...

#include <CLDepthwiseConvolutionLayer3x3NHWCKernel.h>

Collaboration diagram for CLDepthwiseConvolutionLayer3x3NHWCKernel:

Public Member Functions

 CLDepthwiseConvolutionLayer3x3NHWCKernel ()
 Default constructor. More...
 
void configure (const ICLTensor *input, const ICLTensor *weights, const ICLTensor *biases, ICLTensor *output, const PadStrideInfo &conv_info, unsigned int depth_multiplier=1, ActivationLayerInfo act_info=ActivationLayerInfo(), const Size2D &dilation=Size2D(1U, 1U), const ICLTensor *output_multipliers=nullptr, const ICLTensor *output_shifts=nullptr) override
 Initialize the function's source, destination, conv and border_size. More...
 
void configure (const CLCompileContext &compile_context, const ICLTensor *input, const ICLTensor *weights, const ICLTensor *biases, ICLTensor *output, const PadStrideInfo &conv_info, unsigned int depth_multiplier=1, ActivationLayerInfo act_info=ActivationLayerInfo(), const Size2D &dilation=Size2D(1U, 1U), const ICLTensor *output_multipliers=nullptr, const ICLTensor *output_shifts=nullptr) override
 Initialize the function's source, destination, conv and border_size. More...
 
void run (const Window &window, cl::CommandQueue &queue) override
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
BorderSize border_size () const override
 The size of the border for that kernel. More...
 
- Public Member Functions inherited from ICLDepthwiseConvolutionLayer3x3Kernel
 ICLDepthwiseConvolutionLayer3x3Kernel ()
 Default constructor. More...
 
 ICLDepthwiseConvolutionLayer3x3Kernel (const ICLDepthwiseConvolutionLayer3x3Kernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
ICLDepthwiseConvolutionLayer3x3Kernel & operator= (const ICLDepthwiseConvolutionLayer3x3Kernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 ICLDepthwiseConvolutionLayer3x3Kernel (ICLDepthwiseConvolutionLayer3x3Kernel &&)=default
 Default Move Constructor. More...
 
ICLDepthwiseConvolutionLayer3x3Kernel & operator= (ICLDepthwiseConvolutionLayer3x3Kernel &&)=default
 Default move assignment operator. More...
 
- Public Member Functions inherited from ICLKernel
 ICLKernel ()
 Constructor. More...
 
cl::Kernel & kernel ()
 Returns a reference to the OpenCL kernel of this object. More...
 
template<typename T >
void add_1D_array_argument (unsigned int &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed 1D array's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_2D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_2D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_3D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 3D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_4D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 4D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
virtual void run_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue)
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
template<typename T >
void add_argument (unsigned int &idx, T value)
 Add the passed parameters to the object's kernel's arguments starting from the index idx. More...
 
void set_lws_hint (const cl::NDRange &lws_hint)
 Set the Local-Workgroup-Size hint. More...
 
cl::NDRange lws_hint () const
 Return the Local-Workgroup-Size hint. More...
 
void set_wbsm_hint (const cl_int &wbsm_hint)
 Set the workgroup batch size modifier hint. More...
 
cl_int wbsm_hint () const
 Return the workgroup batch size modifier hint. More...
 
const std::string & config_id () const
 Get the configuration ID. More...
 
void set_target (GPUTarget target)
 Set the targeted GPU architecture. More...
 
void set_target (cl::Device &device)
 Set the targeted GPU architecture according to the CL device. More...
 
GPUTarget get_target () const
 Get the targeted GPU architecture. More...
 
size_t get_max_workgroup_size ()
 Get the maximum workgroup size for the device the CLKernelLibrary uses. More...
 
template<unsigned int dimension_size>
void add_tensor_argument (unsigned &idx, const ICLTensor *tensor, const Window &window)
 
template<typename T , unsigned int dimension_size>
void add_array_argument (unsigned &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed array's parameters to the object's kernel's arguments starting from the index idx. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *weights, const ITensorInfo *biases, const ITensorInfo *output, const PadStrideInfo &conv_info, unsigned int depth_multiplier=1, ActivationLayerInfo act_info=ActivationLayerInfo(), const Size2D &dilation=Size2D(1U, 1U), const ITensorInfo *output_multipliers=nullptr, const ITensorInfo *output_shifts=nullptr)
 Static function to check if given info will lead to a valid configuration of CLDepthwiseConvolutionLayer3x3NHWCKernel. More...
 
- Static Public Member Functions inherited from ICLKernel
static constexpr unsigned int num_arguments_per_1D_array ()
 Returns the number of arguments enqueued per 1D array object. More...
 
static constexpr unsigned int num_arguments_per_1D_tensor ()
 Returns the number of arguments enqueued per 1D tensor object. More...
 
static constexpr unsigned int num_arguments_per_2D_tensor ()
 Returns the number of arguments enqueued per 2D tensor object. More...
 
static constexpr unsigned int num_arguments_per_3D_tensor ()
 Returns the number of arguments enqueued per 3D tensor object. More...
 
static constexpr unsigned int num_arguments_per_4D_tensor ()
 Returns the number of arguments enqueued per 4D tensor object. More...
 
static cl::NDRange gws_from_window (const Window &window)
 Get the global work size given an execution window. More...
 

Detailed Description

Interface for the kernel to run a 3x3 depthwise convolution on a tensor when the data layout is NHWC.

Definition at line 35 of file CLDepthwiseConvolutionLayer3x3NHWCKernel.h.
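
In practice this kernel is normally created and driven by the CLDepthwiseConvolutionLayer runtime function, but it can also be configured and enqueued directly. The following is a minimal sketch of that flow for an FP32 NHWC case; the tensor shapes, variable names and use of CLScheduler are illustrative assumptions, not part of this interface.

 #include "CLDepthwiseConvolutionLayer3x3NHWCKernel.h" // header path as listed above
 #include "arm_compute/core/TensorInfo.h"
 #include "arm_compute/runtime/CL/CLScheduler.h"
 #include "arm_compute/runtime/CL/CLTensor.h"
 
 using namespace arm_compute;
 
 void example_dwc_3x3_nhwc_fp32()
 {
     CLScheduler::get().default_init();
 
     // NHWC layout: TensorShape is ordered (C, W, H, N)
     TensorInfo src_info(TensorShape(32U, 56U, 56U, 1U), 1, DataType::F32);
     TensorInfo wei_info(TensorShape(32U, 3U, 3U), 1, DataType::F32);       // [IFM, 3, 3]
     TensorInfo bia_info(TensorShape(32U), 1, DataType::F32);               // [IFM]
     TensorInfo dst_info(TensorShape(32U, 56U, 56U, 1U), 1, DataType::F32);
     src_info.set_data_layout(DataLayout::NHWC);
     wei_info.set_data_layout(DataLayout::NHWC);
     dst_info.set_data_layout(DataLayout::NHWC);
 
     CLTensor src, weights, biases, dst;
     src.allocator()->init(src_info);
     weights.allocator()->init(wei_info);
     biases.allocator()->init(bia_info);
     dst.allocator()->init(dst_info);
 
     const PadStrideInfo conv_info(1, 1, 1, 1); // stride 1, pad 1 -> same spatial size
 
     CLDepthwiseConvolutionLayer3x3NHWCKernel kernel;
     kernel.configure(&src, &weights, &biases, &dst, conv_info);
 
     // Allocate after configure() so any padding requested by the kernel is applied.
     src.allocator()->allocate();
     weights.allocator()->allocate();
     biases.allocator()->allocate();
     dst.allocator()->allocate();
 
     // ... fill src/weights/biases ...
 
     CLScheduler::get().enqueue(kernel); // enqueues over the kernel's configured window
     CLScheduler::get().sync();
 }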

Constructor & Destructor Documentation

◆ CLDepthwiseConvolutionLayer3x3NHWCKernel()

Default constructor.

Definition at line 185 of file CLDepthwiseConvolutionLayer3x3NHWCKernel.cpp.

186  : _num_planes_processed_per_iteration(1)
187 {
188 }

Member Function Documentation

◆ border_size()

BorderSize border_size ( ) const
override virtual

The size of the border for that kernel.

Returns
The width in number of elements of the border.

Reimplemented from IKernel.

Definition at line 190 of file CLDepthwiseConvolutionLayer3x3NHWCKernel.cpp.

191 {
192  return _border_size;
193 }

◆ configure() [1/2]

void configure ( const ICLTensor *  input,
const ICLTensor *  weights,
const ICLTensor *  biases,
ICLTensor *  output,
const PadStrideInfo &  conv_info,
unsigned int  depth_multiplier = 1,
ActivationLayerInfo  act_info = ActivationLayerInfo(),
const Size2D &  dilation = Size2D(1U, 1U),
const ICLTensor *  output_multipliers = nullptr,
const ICLTensor *  output_shifts = nullptr 
)
override virtual

Initialize the function's source, destination, conv and border_size.

Parameters
	[in]	input	Source tensor. Data type supported: QASYMM8/QASYMM8_SIGNED/F16/F32.
	[in]	weights	Weights tensor. A 3D tensor with dimensions [IFM, 3, 3]. Data type supported: Same as input or QASYMM8/QASYMM8_SIGNED/QSYMM8_PER_CHANNEL when input is QASYMM8/QASYMM8_SIGNED.
	[in]	biases	Biases tensor. A 1D tensor with dimensions [IFM]. Must be nullptr if not needed. Data type supported: Same as input, S32 when input is QASYMM8/QASYMM8_SIGNED.
	[out]	output	Destination tensor. Data type supported: Same as input.
	[in]	conv_info	Padding and stride information to use for the convolution.
	[in]	depth_multiplier	(Optional) Multiplier to apply to the input's depth in order to retrieve the output's depth. Defaults to 1.
	[in]	act_info	(Optional) Activation layer information in case of a fused activation. Only RELU, BOUNDED_RELU and LU_BOUNDED_RELU are supported.
	[in]	dilation	(Optional) Dilation, in elements, across x and y. Defaults to (1, 1).
	[in]	output_multipliers	(Optional) Output multipliers tensor for quantized computations. In case of per-channel quantization, the number of multipliers must be equal to the number of filters (IFM). Supported data types: S32
	[in]	output_shifts	(Optional) Output shifts tensor for quantized computations. In case of per-channel quantization, the number of shifts must be equal to the number of filters (IFM). Supported data types: S32

Implements ICLDepthwiseConvolutionLayer3x3Kernel.

Definition at line 195 of file CLDepthwiseConvolutionLayer3x3NHWCKernel.cpp.

References CLKernelLibrary::get().

198 {
199  configure(CLKernelLibrary::get().get_compile_context(), input, weights, biases, output, conv_info, depth_multiplier, act_info, dilation, output_multipliers, output_shifts);
200 }

◆ configure() [2/2]

void configure ( const CLCompileContext &  compile_context,
const ICLTensor *  input,
const ICLTensor *  weights,
const ICLTensor *  biases,
ICLTensor *  output,
const PadStrideInfo &  conv_info,
unsigned int  depth_multiplier = 1,
ActivationLayerInfo  act_info = ActivationLayerInfo(),
const Size2D &  dilation = Size2D(1U, 1U),
const ICLTensor *  output_multipliers = nullptr,
const ICLTensor *  output_shifts = nullptr 
)
override virtual

Initialize the function's source, destination, conv and border_size.

Parameters
	[in]	compile_context	The compile context to be used.
	[in]	input	Source tensor. Data type supported: QASYMM8/QASYMM8_SIGNED/F16/F32.
	[in]	weights	Weights tensor. A 3D tensor with dimensions [IFM, 3, 3]. Data type supported: Same as input or QASYMM8/QASYMM8_SIGNED/QSYMM8_PER_CHANNEL when input is QASYMM8/QASYMM8_SIGNED.
	[in]	biases	Biases tensor. A 1D tensor with dimensions [IFM]. Must be nullptr if not needed. Data type supported: Same as input, S32 when input is QASYMM8/QASYMM8_SIGNED.
	[out]	output	Destination tensor. Data type supported: Same as input.
	[in]	conv_info	Padding and stride information to use for the convolution.
	[in]	depth_multiplier	(Optional) Multiplier to apply to the input's depth in order to retrieve the output's depth. Defaults to 1.
	[in]	act_info	(Optional) Activation layer information in case of a fused activation. Only RELU, BOUNDED_RELU and LU_BOUNDED_RELU are supported.
	[in]	dilation	(Optional) Dilation, in elements, across x and y. Defaults to (1, 1).
	[in]	output_multipliers	(Optional) Output multipliers tensor for quantized computations. In case of per-channel quantization, the number of multipliers must be equal to the number of filters (IFM). Supported data types: S32
	[in]	output_shifts	(Optional) Output shifts tensor for quantized computations. In case of per-channel quantization, the number of shifts must be equal to the number of filters (IFM). Supported data types: S32
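
When the weights are quantized per channel, the kernel reads one requantization multiplier and one shift per filter from the two optional S32 tensors. The sketch below shows one way of preparing and passing them; the helper name configure_quantized and the surrounding tensors are illustrative assumptions, and the values written into the tensors must be derived by the caller from the per-channel weight scales.

 #include "CLDepthwiseConvolutionLayer3x3NHWCKernel.h"
 #include "arm_compute/core/CL/CLKernelLibrary.h"
 #include "arm_compute/core/TensorInfo.h"
 #include "arm_compute/runtime/CL/CLTensor.h"
 
 using namespace arm_compute;
 
 void configure_quantized(CLDepthwiseConvolutionLayer3x3NHWCKernel &kernel,
                          ICLTensor *src, ICLTensor *weights, ICLTensor *biases, ICLTensor *dst,
                          const PadStrideInfo &conv_info,
                          CLTensor &output_multipliers, CLTensor &output_shifts)
 {
     // One S32 multiplier and one S32 shift per filter (IFM = channel dimension 0 in NHWC)
     const unsigned int ifm = src->info()->dimension(0);
     output_multipliers.allocator()->init(TensorInfo(TensorShape(ifm), 1, DataType::S32));
     output_shifts.allocator()->init(TensorInfo(TensorShape(ifm), 1, DataType::S32));
 
     kernel.configure(CLKernelLibrary::get().get_compile_context(),
                      src, weights, biases, dst, conv_info,
                      1 /* depth_multiplier */, ActivationLayerInfo(), Size2D(1U, 1U),
                      &output_multipliers, &output_shifts);
 
     output_multipliers.allocator()->allocate();
     output_shifts.allocator()->allocate();
     // Fill output_multipliers / output_shifts with the per-channel requantization
     // parameters before running the kernel.
 }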

Implements ICLDepthwiseConvolutionLayer3x3Kernel.

Definition at line 202 of file CLDepthwiseConvolutionLayer3x3NHWCKernel.cpp.

References CLBuildOptions::add_option(), CLBuildOptions::add_option_if(), arm_compute::adjust_vec_size(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::quantization::calculate_quantized_multiplier(), arm_compute::test::validation::conv_info, arm_compute::create_kernel(), ITensorInfo::data_type(), ITensorInfo::dimension(), arm_compute::dot8_supported(), ITensorInfo::element_size(), arm_compute::float_to_string_with_full_precision(), CLKernelLibrary::get(), arm_compute::get_cl_promoted_type_from_data_type(), arm_compute::get_cl_type_from_data_type(), arm_compute::get_padding_info(), arm_compute::get_quantized_activation_min_max(), arm_compute::has_padding_changed(), ITensor::info(), arm_compute::test::validation::input, arm_compute::is_data_type_quantized_asymmetric(), arm_compute::is_data_type_quantized_per_channel(), kernel_name, arm_compute::lower_string(), UniformQuantizationInfo::offset, CLBuildOptions::options(), PadStrideInfo::pad_left(), PadStrideInfo::pad_right(), PadStrideInfo::pad_top(), ITensorInfo::padding(), UniformQuantizationInfo::scale, PadStrideInfo::stride(), arm_compute::string_from_activation_func(), arm_compute::string_from_data_type(), arm_compute::support::cpp11::to_string(), and arm_compute::validate_arguments().

205 {
206  ARM_COMPUTE_ERROR_ON_NULLPTR(input, weights, output);
207  ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(input->info(), weights->info(), (biases != nullptr) ? biases->info() : nullptr, output->info(),
208  conv_info, depth_multiplier, act_info, dilation,
209  (output_multipliers != nullptr) ? output_multipliers->info() : nullptr,
210  (output_shifts != nullptr) ? output_shifts->info() : nullptr));
211 
212  auto padding_info = get_padding_info({ input, weights, biases, output });
213 
214  auto win_config = validate_and_configure_window(input->info(), weights->info(), biases != nullptr ? biases->info() : nullptr, output->info(),
215  conv_info, depth_multiplier, dilation,
216  (output_multipliers != nullptr) ? output_multipliers->info() : nullptr,
217  (output_shifts != nullptr) ? output_shifts->info() : nullptr);
218 
219  const bool is_stride_1 = ((conv_info.stride().first == conv_info.stride().second) && (conv_info.stride().first == 1));
220  const bool is_stride_1_dilation_1 = (is_stride_1 && dilation.x() == 1 && dilation.y() == 1);
221  const bool is_quantized_per_channel = is_data_type_quantized_per_channel(weights->info()->data_type());
222  const bool is_dot8_supported = dot8_supported(CLKernelLibrary::get().get_device()) && !is_quantized_per_channel;
223 
224  _input = input;
225  _output = output;
226  _weights = weights;
227  _biases = biases;
228  _conv_stride_y = conv_info.stride().second;
229  _num_planes_processed_per_iteration = is_stride_1_dilation_1 ? 2 : 1;
230  _output_multipliers = output_multipliers;
231  _output_shifts = output_shifts;
232  _is_quantized = is_data_type_quantized_asymmetric(input->info()->data_type());
233 
234  if(_is_quantized)
235  {
236  _border_size = BorderSize(input->info()->padding());
237 
238  // If QASYMM8 and the 8 bit dot product is available, force _num_planes_processed_per_iteration to 1
239  if(is_dot8_supported)
240  {
241  _num_planes_processed_per_iteration = 1;
242  }
243  }
244 
245  unsigned int num_elems_accessed_per_iteration = _is_quantized ? 4 : adjust_vec_size(4 / input->info()->element_size(), input->info()->dimension(0));
246  unsigned int num_rows_processed_per_iteration = is_stride_1_dilation_1 ? 2 : 1;
247 
248  CLBuildOptions build_opts;
249  build_opts.add_option("-DDATA_TYPE=" + get_cl_type_from_data_type(_input->info()->data_type()));
250  build_opts.add_option("-DACTIVATION_TYPE=" + lower_string(string_from_activation_func(act_info.activation())));
251  build_opts.add_option("-DVEC_SIZE=" + support::cpp11::to_string(num_elems_accessed_per_iteration));
252  build_opts.add_option("-DSRC_DIM_1=" + support::cpp11::to_string(_input->info()->dimension(1)));
253  build_opts.add_option("-DSRC_DIM_2=" + support::cpp11::to_string(_input->info()->dimension(2)));
254  build_opts.add_option("-DCONV_PAD_TOP=" + support::cpp11::to_string(conv_info.pad_top()));
255  build_opts.add_option("-DCONV_PAD_LEFT=" + support::cpp11::to_string(conv_info.pad_left()));
256  build_opts.add_option("-DPARTIAL_STORE_N0=" + support::cpp11::to_string(input->info()->dimension(0) % num_elems_accessed_per_iteration));
257  build_opts.add_option_if(_biases != nullptr, "-DHAS_BIAS");
258  build_opts.add_option_if(_input->info()->tensor_shape().total_size_upper(3) > 1,
259  "-DDST_DEPTH=" + support::cpp11::to_string(static_cast<int>(std::ceil(_output->info()->dimension(2) / static_cast<float>(_num_planes_processed_per_iteration)))));
260 
261  if(_is_quantized)
262  {
263  const UniformQuantizationInfo iq_info = _input->info()->quantization_info().uniform();
264  const UniformQuantizationInfo wq_info = _weights->info()->quantization_info().uniform();
265  const UniformQuantizationInfo oq_info = _output->info()->quantization_info().uniform();
266 
267  build_opts.add_option("-DSRC_DIM_1=" + support::cpp11::to_string(_input->info()->dimension(1)));
268  build_opts.add_option("-DINPUT_OFFSET=" + support::cpp11::to_string(-iq_info.offset));
269  build_opts.add_option("-DWEIGHTS_OFFSET=" + support::cpp11::to_string(-wq_info.offset));
270  build_opts.add_option("-DOUTPUT_OFFSET=" + support::cpp11::to_string(oq_info.offset));
271  build_opts.add_option("-DK_OFFSET=" + support::cpp11::to_string(9 * iq_info.offset * wq_info.offset));
272  build_opts.add_option_if(is_quantized_per_channel, "-DPER_CHANNEL_QUANTIZATION");
273  build_opts.add_option_if(is_dot8_supported, "-DIS_DOT8");
274 
275  // Compute non-per-channel multiplier and shift anyway to make OpenCL kernel simpler
276  float multiplier = iq_info.scale * wq_info.scale / oq_info.scale;
277  int output_multiplier = 0;
278  int output_shift = 0;
279  quantization::calculate_quantized_multiplier(multiplier, &output_multiplier, &output_shift);
280  build_opts.add_option("-DOUTPUT_MULTIPLIER=" + support::cpp11::to_string(output_multiplier));
281  build_opts.add_option("-DOUTPUT_SHIFT=" + support::cpp11::to_string(output_shift));
282 
283  if(act_info.enabled())
284  {
285  int a_val{};
286  int b_val{};
287  std::tie(b_val, a_val) = get_quantized_activation_min_max(act_info, input->info()->data_type(), oq_info);
288 
289  const int o1 = oq_info.offset;
290 
291  build_opts.add_option("-DA_VAL=" + support::cpp11::to_string(a_val));
292  build_opts.add_option("-DB_VAL=" + support::cpp11::to_string(b_val));
293  build_opts.add_option("-DCONST_0=" + support::cpp11::to_string(o1));
294 
295  const float s1 = iq_info.scale;
296  build_opts.add_option("-DS1_VAL=" + float_to_string_with_full_precision(s1));
297  build_opts.add_option("-DO1_VAL=" + support::cpp11::to_string(o1));
298  }
299 
300  build_opts.add_option("-DWEIGHTS_TYPE=" + get_cl_type_from_data_type(weights->info()->data_type()));
301  build_opts.add_option("-DWEIGHTS_PROMOTED_TYPE=" + get_cl_promoted_type_from_data_type(weights->info()->data_type()));
302  }
303  else
304  {
305  build_opts.add_option_if(act_info.enabled(), "-DA_VAL=" + float_to_string_with_full_precision(act_info.a()));
306  build_opts.add_option_if(act_info.enabled(), "-DB_VAL=" + float_to_string_with_full_precision(act_info.b()));
307  }
308 
309  if(is_stride_1_dilation_1)
310  {
311  build_opts.add_option("-DNUM_ROWS_PROCESSED=" + support::cpp11::to_string(num_rows_processed_per_iteration));
312  build_opts.add_option("-DNUM_PLANES_PROCESSED=" + support::cpp11::to_string(_num_planes_processed_per_iteration));
313  build_opts.add_option("-DDST_DIM_1=" + support::cpp11::to_string(_output->info()->dimension(1)));
314  build_opts.add_option("-DDST_DIM_2=" + support::cpp11::to_string(_output->info()->dimension(2)));
315  build_opts.add_option("-DPARTIAL_STORE_M0=" + support::cpp11::to_string((input->info()->dimension(1) + conv_info.pad_left() + conv_info.pad_right()) % num_rows_processed_per_iteration));
316  }
317  else
318  {
319  build_opts.add_option("-DCONV_STRIDE_X=" + support::cpp11::to_string(conv_info.stride().first));
320  build_opts.add_option("-DCONV_STRIDE_Y=" + support::cpp11::to_string(_conv_stride_y));
321  build_opts.add_option("-DDILATION_X=" + support::cpp11::to_string(dilation.x()));
322  build_opts.add_option("-DDILATION_Y=" + support::cpp11::to_string(dilation.y()));
323  }
324 
325  std::string kernel_name;
326  // Create kernel
327  if(_is_quantized)
328  {
329  kernel_name = std::string("dwc_3x3_reshaped_quantized8");
330  kernel_name += (is_dot8_supported && is_stride_1_dilation_1 ? "_dot8" : "");
331  kernel_name += (is_stride_1_dilation_1 ? "_stride1" : "");
332  kernel_name += "_nhwc";
333  }
334  else
335  {
336  kernel_name = std::string("depthwise_convolution_3x3_nhwc");
337  kernel_name += (is_stride_1_dilation_1 ? "_stride1" : "");
338  }
339 
340  ICLKernel::configure_internal(win_config.second);
341  _kernel = create_kernel(compile_context, kernel_name, build_opts.options());
342 
343  ARM_COMPUTE_ERROR_ON(!_is_quantized && has_padding_changed(padding_info));
344 
345  // Set config_id for enabling LWS tuning
346  _config_id = kernel_name;
347  _config_id += "_";
348  _config_id += support::cpp11::to_string(input->info()->dimension(0));
349  _config_id += "_";
350  _config_id += support::cpp11::to_string(input->info()->dimension(1));
351  _config_id += "_";
352  _config_id += support::cpp11::to_string(input->info()->dimension(2));
353  _config_id += "_";
354  _config_id += support::cpp11::to_string(output->info()->dimension(0));
355  _config_id += "_";
356  _config_id += support::cpp11::to_string(output->info()->dimension(1));
357  _config_id += "_";
358  _config_id += string_from_data_type(input->info()->data_type());
359 }
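The quantized branch above folds the input, weights and output scales into a single real multiplier and converts it to a fixed-point multiplier/shift pair via calculate_quantized_multiplier(); the result feeds the -DOUTPUT_MULTIPLIER and -DOUTPUT_SHIFT build options. A minimal sketch of that conversion reusing the same helper (the function name and scale arguments are illustrative):

 #include "arm_compute/core/utils/quantization/AsymmHelpers.h"
 
 #include <cstdint>
 #include <utility>
 
 using namespace arm_compute;
 
 // Returns the fixed-point multiplier/shift pair encoding s_input * s_weights / s_output
 // (see AsymmHelpers.h for the exact fixed-point convention used by the library).
 std::pair<int32_t, int32_t> requantization_params(float input_scale, float weights_scale, float output_scale)
 {
     const float multiplier        = input_scale * weights_scale / output_scale;
     int32_t     output_multiplier = 0;
     int32_t     output_shift      = 0;
     // Same helper the kernel's configure() uses for the non-per-channel path.
     quantization::calculate_quantized_multiplier(multiplier, &output_multiplier, &output_shift);
     return { output_multiplier, output_shift };
 }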

◆ run()

void run ( const Window &  window,
cl::CommandQueue &  queue 
)
override virtual

Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.

Note
The queue is not flushed by this method, and therefore the kernel will not have been executed by the time this method returns.
Parameters
	[in]	window	Region on which to execute the kernel. (Must be a valid region of the window returned by window()).
	[in,out]	queue	Command queue on which to enqueue the kernel.

Reimplemented from ICLKernel.
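
Because run() only enqueues work, callers that read results back immediately must flush or finish the queue themselves. A minimal sketch of doing so on the default scheduler queue (queue ownership and names are illustrative assumptions):

 #include "CLDepthwiseConvolutionLayer3x3NHWCKernel.h"
 #include "arm_compute/runtime/CL/CLScheduler.h"
 
 using namespace arm_compute;
 
 void run_and_wait(CLDepthwiseConvolutionLayer3x3NHWCKernel &kernel)
 {
     cl::CommandQueue &queue = CLScheduler::get().queue();
 
     // Execute over the maximum window computed during configure().
     kernel.run(kernel.window(), queue);
 
     // run() does not flush the queue; block until the kernel has actually executed.
     queue.finish();
 }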

Definition at line 375 of file CLDepthwiseConvolutionLayer3x3NHWCKernel.cpp.

References ICLKernel::add_1D_tensor_argument(), ICLKernel::add_2D_tensor_argument(), ICLKernel::add_3D_tensor_argument(), ICLKernel::add_4D_tensor_argument(), ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, Window::collapse_if_possible(), Window::DimX, Window::DimZ, arm_compute::enqueue(), Window::first_slice_window_4D(), ICLKernel::lws_hint(), ICLKernel::num_arguments_per_2D_tensor(), ICLKernel::num_arguments_per_3D_tensor(), ICLKernel::num_arguments_per_4D_tensor(), Window::set(), Window::set_dimension_step(), arm_compute::test::validation::reference::slice(), Window::slide_window_slice_4D(), Window::Dimension::step(), Window::use_tensor_dimensions(), IKernel::window(), and Window::x().

376 {
379 
380  const size_t total_batches = _input->info()->tensor_shape().total_size_upper(3);
381 
383  win.set(Window::DimZ, Window::Dimension(0, std::ceil(_output->info()->dimension(2) / static_cast<float>(_num_planes_processed_per_iteration)) * total_batches, 1));
384 
385  unsigned int idx = 2 * num_arguments_per_4D_tensor() + (_is_quantized ? num_arguments_per_2D_tensor() : num_arguments_per_3D_tensor());
386 
387  if(_is_quantized)
388  {
389  Window slice;
390  slice.use_tensor_dimensions(_output_multipliers->info()->tensor_shape());
391  slice.set_dimension_step(Window::DimX, window.x().step());
392  add_1D_tensor_argument(idx, _output_multipliers, slice);
393  add_1D_tensor_argument(idx, _output_shifts, slice);
394  }
395 
396  if(_biases != nullptr)
397  {
398  Window win_biases;
399  win_biases.use_tensor_dimensions(_biases->info()->tensor_shape());
400  win_biases.set_dimension_step(Window::DimX, window.x().step());
401  add_1D_tensor_argument(idx, _biases, win_biases);
402  }
403 
404  if(_is_quantized)
405  {
406  // Calculate the max_offset.
407  // max_offset is the offset for the last NOT valid value in the Z dimension (spatial dimension Y for NHWC)
408  // |******************|
409  // | pad_top |
410  // |******************|
411  // | |
412  // | plane0 |
413  // | batch0 |
414  // |__________________|
415  // |******************| Batch 0
416  // | pad_bottom |
417  // | pad_top |
418  // |******************|
419  // | |
420  // | plane1 |
421  // | batch0 |
422  // |__________________|-----> max_offset
423  // |******************|
424  // | pad_bottom |
425  // | pad_top |
426  // |******************|
427  // | |
428  // | plane0 |
429  // | batch1 |
430  // |__________________|
431  // |******************| Batch 1
432  // | pad_bottom |
433  // | pad_top |
434  // |******************|
435  // | |
436  // | plane1 |
437  // | batch1 |
438  // |__________________|
439  // | pad_bottom |
440  // |******************|
441  const int max_offset = ((_input->info()->dimension(1) * _input->info()->dimension(2)) + (_input->info()->padding().bottom + _input->info()->padding().top) * (_input->info()->dimension(
442  2) - 1)) * _input->info()->strides_in_bytes().y();
443  _kernel.setArg(idx, max_offset);
444  }
445 
446  Window slice = win.first_slice_window_4D();
447  do
448  {
449  unsigned int idx = 0;
450  add_4D_tensor_argument(idx, _input, slice);
451  add_4D_tensor_argument(idx, _output, slice);
452  if(_is_quantized)
453  {
454  add_2D_tensor_argument(idx, _weights, slice);
455  }
456  else
457  {
458  add_3D_tensor_argument(idx, _weights, slice);
459  }
460  enqueue(queue, *this, slice, lws_hint());
461  }
462  while(win.slide_window_slice_4D(slice));
463 }

◆ validate()

Status validate ( const ITensorInfo *  input,
const ITensorInfo *  weights,
const ITensorInfo *  biases,
const ITensorInfo *  output,
const PadStrideInfo &  conv_info,
unsigned int  depth_multiplier = 1,
ActivationLayerInfo  act_info = ActivationLayerInfo(),
const Size2D &  dilation = Size2D(1U, 1U),
const ITensorInfo *  output_multipliers = nullptr,
const ITensorInfo *  output_shifts = nullptr 
)
static

Static function to check if given info will lead to a valid configuration of CLDepthwiseConvolutionLayer3x3NHWCKernel.

Parameters
	[in]	input	Source tensor info. Data type supported: QASYMM8/QASYMM8_SIGNED/F16/F32.
	[in]	weights	Weights tensor info. A 3D tensor with dimensions [IFM, 3, 3]. Data type supported: Same as input or QASYMM8/QASYMM8_SIGNED/QSYMM8_PER_CHANNEL when input is QASYMM8/QASYMM8_SIGNED.
	[in]	biases	Biases tensor info. A 1D tensor with dimensions [IFM]. Must be nullptr if not needed. Data type supported: Same as input, S32 when input is QASYMM8/QASYMM8_SIGNED.
	[in]	output	Destination tensor info. Data type supported: Same as input.
	[in]	conv_info	Padding and stride information to use for the convolution.
	[in]	depth_multiplier	(Optional) Multiplier to apply to the input's depth in order to retrieve the output's depth. Defaults to 1.
	[in]	act_info	(Optional) Activation layer information in case of a fused activation. Only RELU, BOUNDED_RELU and LU_BOUNDED_RELU are supported.
	[in]	dilation	(Optional) Dilation, in elements, across x and y. Defaults to (1, 1).
	[in]	output_multipliers	(Optional) Output multipliers tensor info for quantized computations. In case of per-channel quantization, the number of multipliers must be equal to the number of filters (IFM). Supported data types: S32
	[in]	output_shifts	(Optional) Output shifts tensor info for quantized computations. In case of per-channel quantization, the number of shifts must be equal to the number of filters (IFM). Supported data types: S32
Returns
a status

Definition at line 361 of file CLDepthwiseConvolutionLayer3x3NHWCKernel.cpp.

References ARM_COMPUTE_RETURN_ON_ERROR, ICloneable< T >::clone(), arm_compute::test::validation::conv_info, and arm_compute::validate_arguments().

364 {
365  ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(input, weights, biases, output, conv_info, depth_multiplier, act_info, dilation, output_multipliers, output_shifts));
366  ARM_COMPUTE_RETURN_ON_ERROR(validate_and_configure_window(input->clone().get(), weights->clone().get(),
367  biases != nullptr ? biases->clone().get() : nullptr,
368  output->clone().get(), conv_info, depth_multiplier, dilation,
369  (output_multipliers != nullptr) ? output_multipliers->clone().get() : nullptr,
370  (output_shifts != nullptr) ? output_shifts->clone().get() : nullptr)
371  .first);
372  return Status{};
373 }
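
Since validate() only needs ITensorInfo descriptors, a configuration can be checked before any OpenCL tensors are created. A minimal sketch (shapes and names are illustrative assumptions):

 #include "CLDepthwiseConvolutionLayer3x3NHWCKernel.h"
 #include "arm_compute/core/Error.h"
 #include "arm_compute/core/TensorInfo.h"
 
 using namespace arm_compute;
 
 bool is_dwc_3x3_nhwc_supported()
 {
     TensorInfo src(TensorShape(32U, 56U, 56U, 1U), 1, DataType::F32);
     TensorInfo weights(TensorShape(32U, 3U, 3U), 1, DataType::F32); // [IFM, 3, 3]
     TensorInfo biases(TensorShape(32U), 1, DataType::F32);          // [IFM]
     TensorInfo dst(TensorShape(32U, 56U, 56U, 1U), 1, DataType::F32);
     src.set_data_layout(DataLayout::NHWC);
     weights.set_data_layout(DataLayout::NHWC);
     dst.set_data_layout(DataLayout::NHWC);
 
     const PadStrideInfo conv_info(1, 1, 1, 1);
 
     const Status status = CLDepthwiseConvolutionLayer3x3NHWCKernel::validate(&src, &weights, &biases, &dst, conv_info);
     return status.error_code() == ErrorCode::OK;
 }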

The documentation for this class was generated from the following files: