Compute Library
 22.11
CLDepthwiseConvolutionLayerNativeKernel Class Reference

Interface for the kernel to run a MxN depthwise convolution. More...

#include <CLDepthwiseConvolutionLayerNativeKernel.h>


Public Member Functions

 CLDepthwiseConvolutionLayerNativeKernel ()
 Default Constructor. More...
 
 CLDepthwiseConvolutionLayerNativeKernel (const CLDepthwiseConvolutionLayerNativeKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
CLDepthwiseConvolutionLayerNativeKernel & operator= (const CLDepthwiseConvolutionLayerNativeKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 CLDepthwiseConvolutionLayerNativeKernel (CLDepthwiseConvolutionLayerNativeKernel &&)=default
 Allow instances of this class to be moved. More...
 
CLDepthwiseConvolutionLayerNativeKernel & operator= (CLDepthwiseConvolutionLayerNativeKernel &&)=default
 Allow instances of this class to be moved. More...
 
void configure (const CLCompileContext &compile_context, ICLTensor *input, const ICLTensor *weights, const ICLTensor *biases, ICLTensor *output, const DWCComputeKernelInfo &dwc_info, const ConvolutionInfo &conv_info, const ICLTensor *output_multipliers=nullptr, const ICLTensor *output_shifts=nullptr)
 Initialize the function's source, destination and parameters. More...
 
void configure (ICLTensor *input, const ICLTensor *weights, const ICLTensor *biases, ICLTensor *output, const DWCComputeKernelInfo &dwc_info, const ConvolutionInfo &conv_info, const ICLTensor *output_multipliers=nullptr, const ICLTensor *output_shifts=nullptr)
 Initialize the function's source, destination and parameters. More...
 
void run (const Window &window, cl::CommandQueue &queue) override
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
- Public Member Functions inherited from ICLKernel
 ICLKernel ()
 Constructor. More...
 
cl::Kernel & kernel ()
 Returns a reference to the OpenCL kernel of this object. More...
 
CLKernelType type () const
 Returns the CL kernel type. More...
 
template<typename T >
void add_1D_array_argument (unsigned int &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed 1D array's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_2D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_2D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_3D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 3D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_4D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 4D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_5D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 5D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_3d_tensor_nhw_argument (unsigned int &idx, const ICLTensor *tensor)
 Add the passed NHW 3D tensor's parameters to the object's kernel's arguments by passing strides, dimensions and the offset to the first valid element in bytes. More...
 
void add_4d_tensor_nhwc_argument (unsigned int &idx, const ICLTensor *tensor)
 Add the passed NHWC 4D tensor's parameters to the object's kernel's arguments by passing strides, dimensions and the offset to the first valid element in bytes. More...
 
virtual void run_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue)
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
virtual void run_composite_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue, const experimental::dynamic_fusion::ClExecutionDescriptor &exec_desc)
 The execution is carried out through the run_op method, but run_op needs to be extended to include ClExecutionDescriptor, as LWS/GWS tuning will now be separated from the IKernel. More...
 
template<typename T >
void add_argument (unsigned int &idx, T value)
 Add the passed parameters to the object's kernel's arguments starting from the index idx. More...
 
void set_lws_hint (const cl::NDRange &lws_hint)
 Set the Local-Workgroup-Size hint. More...
 
cl::NDRange lws_hint () const
 Return the Local-Workgroup-Size hint. More...
 
void set_wbsm_hint (const cl_int &wbsm_hint)
 Set the workgroup batch size modifier hint. More...
 
cl_int wbsm_hint () const
 Return the workgroup batch size modifier hint. More...
 
const std::string & config_id () const
 Get the configuration ID. More...
 
void set_target (GPUTarget target)
 Set the targeted GPU architecture. More...
 
void set_target (cl::Device &device)
 Set the targeted GPU architecture according to the CL device. More...
 
GPUTarget get_target () const
 Get the targeted GPU architecture. More...
 
size_t get_max_workgroup_size ()
 Get the maximum workgroup size for the device the CLKernelLibrary uses. More...
 
template<unsigned int dimension_size>
void add_tensor_argument (unsigned &idx, const ICLTensor *tensor, const Window &window)
 
template<typename T , unsigned int dimension_size>
void add_array_argument (unsigned &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed array's parameters to the object's kernel's arguments starting from the index idx. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 
bool is_window_configured () const
 Function to check if the embedded window of this kernel has been configured. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *weights, const ITensorInfo *biases, const ITensorInfo *output, const DWCComputeKernelInfo &dwc_info, const ConvolutionInfo &conv_info, const ITensorInfo *output_multipliers=nullptr, const ITensorInfo *output_shifts=nullptr)
 Static function to check if given info will lead to a valid configuration of CLDepthwiseConvolutionLayerNativeKernel. More...
 
- Static Public Member Functions inherited from ICLKernel
static constexpr unsigned int num_arguments_per_3d_tensor_nhw ()
 Returns the number of arguments enqueued per NHW 3D Tensor object. More...
 
static constexpr unsigned int num_arguments_per_4d_tensor_nhwc ()
 Returns the number of arguments enqueued per NHWC 4D Tensor object. More...
 
static constexpr unsigned int num_arguments_per_1D_array ()
 Returns the number of arguments enqueued per 1D array object. More...
 
static constexpr unsigned int num_arguments_per_1D_tensor ()
 Returns the number of arguments enqueued per 1D tensor object. More...
 
static constexpr unsigned int num_arguments_per_2D_tensor ()
 Returns the number of arguments enqueued per 2D tensor object. More...
 
static constexpr unsigned int num_arguments_per_3D_tensor ()
 Returns the number of arguments enqueued per 3D tensor object. More...
 
static constexpr unsigned int num_arguments_per_4D_tensor ()
 Returns the number of arguments enqueued per 4D tensor object. More...
 
static cl::NDRange gws_from_window (const Window &window)
 Get the global work size given an execution window. More...
 

Detailed Description

Interface for the kernel to run a MxN depthwise convolution.

M and N are respectively the rows and columns of the filter. This kernel assumes that the tensor holding the weights is NOT reshaped (Native version).

Definition at line 37 of file CLDepthwiseConvolutionLayerNativeKernel.h.

Constructor & Destructor Documentation

◆ CLDepthwiseConvolutionLayerNativeKernel() [1/3]

Default Constructor.

Definition at line 157 of file CLDepthwiseConvolutionLayerNativeKernel.cpp.

References arm_compute::DEPTHWISE.

  : _input(nullptr),
    _weights(nullptr),
    _biases(nullptr),
    _output(nullptr),
    _depth_multiplier(1),
    _output_multipliers(nullptr),
    _output_shifts(nullptr),
    _export_input_to_cl_image(false),
    _export_weights_to_cl_image(false),
    _is_quantized(false)
{
    _type = CLKernelType::DEPTHWISE;
}

◆ CLDepthwiseConvolutionLayerNativeKernel() [2/3]

Prevent instances of this class from being copied (As this class contains pointers)

◆ CLDepthwiseConvolutionLayerNativeKernel() [3/3]

Allow instances of this class to be moved.

Member Function Documentation

◆ configure() [1/2]

void configure ( const CLCompileContext & compile_context,
                 ICLTensor *              input,
                 const ICLTensor *        weights,
                 const ICLTensor *        biases,
                 ICLTensor *              output,
                 const DWCComputeKernelInfo & dwc_info,
                 const ConvolutionInfo &  conv_info,
                 const ICLTensor *        output_multipliers = nullptr,
                 const ICLTensor *        output_shifts = nullptr 
               )

Initialize the function's source, destination and parameters.

Parameters
[in] compile_context The compile context to be used.
[in] input Source tensor. Data type supported: QASYMM8/QASYMM8_SIGNED/FP32/FP16. Data layout supported: NHWC
[in] weights Weights tensor. A 3D tensor with dimensions [IFM, N, M]. Data type supported: same as input, or QASYMM8/QASYMM8_SIGNED/QSYMM8_PER_CHANNEL when input is QASYMM8.
[in] biases Biases tensor. A 1D tensor with dimensions [IFM]. Must be nullptr if not needed. Data type supported: same as input, or S32 when input is QASYMM8/QASYMM8_SIGNED.
[out] output Destination tensor. Pass in nullptr or input for in-place operation. Data type supported: same as input.
[in] dwc_info Depthwise convolution layer info
[in] conv_info Convolution info (padding, stride, dilation, ...)
[in] output_multipliers (Optional) Output multipliers tensor for quantized computations. In case of per-channel quantization, the number of multipliers must be equal to the number of filters (IFM). Supported data types: S32
[in] output_shifts (Optional) Output shifts tensor for quantized computations. In case of per-channel quantization, the number of shifts must be equal to the number of filters (IFM). Supported data types: S32
Note
In-place execution is only supported when all of the following hold:
  • data layout: NHWC
  • filter: 1x1
  • depth_multiplier: 1
  • strides: 1
  • dilation: 1
  • no padding
  • no change of data layout after configure

Definition at line 179 of file CLDepthwiseConvolutionLayerNativeKernel.cpp.

References CLBuildOptions::add_option(), CLBuildOptions::add_option_if(), CLBuildOptions::add_option_if_else(), arm_compute::adjust_vec_size(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), arm_compute::BIFROST, ActivationLayerInfo::BOUNDED_RELU, arm_compute::calculate_max_window(), arm_compute::quantization::calculate_quantized_multiplier(), ICloneable< T >::clone(), arm_compute::misc::shape_calculator::compute_depthwise_convolution_shape(), arm_compute::test::validation::conv_info, arm_compute::create_kernel(), ITensorInfo::data_type(), ITensorInfo::dimension(), DWCComputeKernelInfo::export_input_to_cl_image, DWCComputeKernelInfo::export_weights_to_cl_image, arm_compute::F16, arm_compute::F32, arm_compute::float_to_string_with_full_precision(), arm_compute::G71, PixelValue::get(), arm_compute::get_cl_type_from_data_type(), arm_compute::get_padding_info(), arm_compute::get_quantized_activation_min_max(), ICLKernel::get_target(), arm_compute::GPU_ARCH_MASK, arm_compute::has_padding_changed(), ITensor::info(), arm_compute::test::validation::input, arm_compute::is_data_type_quantized(), kernel_name, arm_compute::lower_string(), ActivationLayerInfo::LU_BOUNDED_RELU, DWCComputeKernelInfo::m0, DWCComputeKernelInfo::n0, ITensorInfo::num_dimensions(), UniformQuantizationInfo::offset, CLBuildOptions::options(), arm_compute::QSYMM8_PER_CHANNEL, ITensorInfo::quantization_info(), arm_compute::S32, UniformQuantizationInfo::scale, arm_compute::set_unroll_with_pragma(), arm_compute::string_from_activation_func(), arm_compute::string_from_data_type(), arm_compute::support::cpp11::to_string(), QuantizationInfo::uniform(), arm_compute::opencl::kernels::gemm::update_padding_for_cl_image(), and arm_compute::cpu::kernels::validate_arguments().

Referenced by CLDepthwiseConvolutionLayerNativeKernel::configure().

{
    ARM_COMPUTE_ERROR_ON_NULLPTR(input, weights);
    if(output == nullptr)
    {
        // In-place
        output = input;
    }
    ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(input->info(), weights->info(), (biases != nullptr) ? biases->info() : nullptr, output->info(),
                                                  dwc_info, conv_info, (output_multipliers != nullptr) ? output_multipliers->info() : nullptr, (output_shifts != nullptr) ? output_shifts->info() : nullptr));

    auto padding_info = get_padding_info({ input, output });

    const TensorShape output_shape = misc::shape_calculator::compute_depthwise_convolution_shape(*(input->info()), *(weights->info()), conv_info);
    auto_init_if_empty(*(output->info()), input->info()->clone()->set_tensor_shape(output_shape).set_quantization_info(output->info()->quantization_info()));

    _input                      = input;
    _output                     = output;
    _weights                    = weights;
    _biases                     = biases;
    _depth_multiplier           = conv_info.depth_multiplier;
    _output_multipliers         = output_multipliers;
    _output_shifts              = output_shifts;
    _export_input_to_cl_image   = dwc_info.export_input_to_cl_image;
    _export_weights_to_cl_image = dwc_info.export_weights_to_cl_image;
    _is_quantized               = is_data_type_quantized(input->info()->data_type());

    const unsigned int n0          = adjust_vec_size(dwc_info.n0, output->info()->dimension(0));
    const unsigned int m0          = std::min(dwc_info.m0, (unsigned int)output->info()->dimension(1));
    std::string        kernel_name = "";

    CLBuildOptions build_opts;

    // Update the padding for the input/weights tensor if we can export to cl_image
    if(_export_input_to_cl_image)
    {
        update_padding_for_cl_image(input->info());
    }

    if(_export_weights_to_cl_image)
    {
        update_padding_for_cl_image(weights->info());
    }

    // Conditions of -cl-fast-relaxed-math causing accuracy issues can be traced from COMPMID-5324
    const GPUTarget gpu_target    = get_target();
    const auto      act_function  = conv_info.act_info.activation();
    const auto      dst_data_type = _output->info()->data_type();

    if((gpu_target != GPUTarget::G71 && (gpu_target & GPUTarget::GPU_ARCH_MASK) == GPUTarget::BIFROST)
       && (act_function == ActivationLayerInfo::ActivationFunction::BOUNDED_RELU || act_function == ActivationLayerInfo::ActivationFunction::LU_BOUNDED_RELU)
       && (dst_data_type == DataType::F32 || dst_data_type == DataType::F16))
    {
        // -cl-fast-relaxed-math also sets -cl-finite-math-only and -cl-unsafe-math-optimizations
        // to disable -cl-finite-math-only, we only include -cl-unsafe-math-optimizations
        build_opts.add_option("-cl-unsafe-math-optimizations");
    }
    else
    {
        build_opts.add_option("-cl-fast-relaxed-math");
    }

    build_opts.add_option("-DACTIVATION_TYPE=" + lower_string(string_from_activation_func(act_function)));
    build_opts.add_option("-DDEPTH_MULTIPLIER=" + support::cpp11::to_string(conv_info.depth_multiplier));
    build_opts.add_option_if_else(_export_input_to_cl_image, "-DSRC_TENSOR_TYPE=IMAGE", "-DSRC_TENSOR_TYPE=BUFFER");
    // Note: SRC_DATA_TYPE must have the same data type of WEI_DATA_TYPE. In quantized, we could
    // have a case where the data types for the activation and weights are different. However, since the implementation
    // only works when both have same data type, we have to change the offset to take into account this aspect
    build_opts.add_option("-DSRC_DATA_TYPE=" + get_cl_type_from_data_type(_input->info()->data_type()));
    build_opts.add_option("-DDST_TENSOR_TYPE=BUFFER");
    build_opts.add_option("-DDST_DATA_TYPE=" + get_cl_type_from_data_type(dst_data_type));
    build_opts.add_option_if_else(_export_weights_to_cl_image, "-DWEI_TENSOR_TYPE=IMAGE", "-DWEI_TENSOR_TYPE=BUFFER");
    build_opts.add_option("-DSRC_WIDTH=" + support::cpp11::to_string(_input->info()->dimension(1)));
    build_opts.add_option("-DSRC_HEIGHT=" + support::cpp11::to_string(_input->info()->dimension(2)));
    build_opts.add_option("-DDST_WIDTH=" + support::cpp11::to_string(_output->info()->dimension(1)));
    build_opts.add_option("-DDST_HEIGHT=" + support::cpp11::to_string(_output->info()->dimension(2)));
    build_opts.add_option("-DWEI_WIDTH=" + support::cpp11::to_string(_weights->info()->dimension(1)));
    build_opts.add_option("-DWEI_HEIGHT=" + support::cpp11::to_string(_weights->info()->dimension(2)));
    build_opts.add_option("-DWEI_DATA_TYPE=" + get_cl_type_from_data_type(_weights->info()->data_type()));
    build_opts.add_option("-DPAD_TOP=" + support::cpp11::to_string(conv_info.pad_stride_info.pad_top()));
    build_opts.add_option("-DPAD_LEFT=" + support::cpp11::to_string(conv_info.pad_stride_info.pad_left()));
    build_opts.add_option("-DSTRIDE_X=" + support::cpp11::to_string(conv_info.pad_stride_info.stride().first));
    build_opts.add_option("-DSTRIDE_Y=" + support::cpp11::to_string(conv_info.pad_stride_info.stride().second));
    build_opts.add_option("-DDILATION_X=" + support::cpp11::to_string(conv_info.dilation.x()));
    build_opts.add_option("-DDILATION_Y=" + support::cpp11::to_string(conv_info.dilation.y()));
    build_opts.add_option("-DN0=" + support::cpp11::to_string(n0));
    build_opts.add_option("-DM0=" + support::cpp11::to_string(m0));
    build_opts.add_option("-DM0_A=" + support::cpp11::to_string(_weights->info()->dimension(1) + m0 - 1));
    build_opts.add_option_if_else(conv_info.depth_multiplier > 1, "-DN0_A=1", "-DN0_A=" + support::cpp11::to_string(n0));
    build_opts.add_option("-DPARTIAL_N0=" + support::cpp11::to_string(_output->info()->dimension(0) % n0));
    build_opts.add_option_if(_input->info()->num_dimensions() > 3, "-DBATCHED_EXECUTION");

    // Force unroll with pragma when any of the following values exceed the maximum number of manual unroll
    set_unroll_with_pragma(build_opts, { static_cast<int>(_weights->info()->dimension(1) + m0 - 1),
                                         static_cast<int>(_weights->info()->dimension(1)),
                                         static_cast<int>(_weights->info()->dimension(2))
                                       });

    if(biases != nullptr)
    {
        build_opts.add_option(std::string("-DHAS_BIAS"));
        build_opts.add_option(std::string("-DBIA_DATA_TYPE=" + get_cl_type_from_data_type(biases->info()->data_type())));
    }

    if(_is_quantized)
    {
        kernel_name = "dwc_native_quantized_nhwc";
        const UniformQuantizationInfo iqinfo = input->info()->quantization_info().uniform();
        const UniformQuantizationInfo wqinfo = weights->info()->quantization_info().uniform();
        const UniformQuantizationInfo oqinfo = output->info()->quantization_info().uniform();

        PixelValue zero_value = PixelValue(0, input->info()->data_type(), input->info()->quantization_info());
        int        zero_value_s32;
        zero_value.get(zero_value_s32);

        float multiplier        = iqinfo.scale * wqinfo.scale / oqinfo.scale;
        int   output_multiplier = 0;
        int   output_shift      = 0;
        quantization::calculate_quantized_multiplier(multiplier, &output_multiplier, &output_shift);
        build_opts.add_option("-DDST_MULTIPLIER=" + support::cpp11::to_string(output_multiplier));
        build_opts.add_option("-DDST_SHIFT=" + support::cpp11::to_string(output_shift));
        build_opts.add_option("-DSRC_OFFSET=" + support::cpp11::to_string(-iqinfo.offset));
        build_opts.add_option("-DWEI_OFFSET=" + support::cpp11::to_string(-wqinfo.offset));
        build_opts.add_option("-DDST_OFFSET=" + support::cpp11::to_string(oqinfo.offset));
        build_opts.add_option("-DZERO_VALUE=" + support::cpp11::to_string(zero_value_s32));
        build_opts.add_option("-DACC_DATA_TYPE=" + get_cl_type_from_data_type(DataType::S32));
        build_opts.add_option("-DDST_MULTIPLIERS_DATA_TYPE=" + get_cl_type_from_data_type(_output_multipliers->info()->data_type()));
        build_opts.add_option("-DDST_SHIFTS_DATA_TYPE=" + get_cl_type_from_data_type(_output_shifts->info()->data_type()));
        build_opts.add_option_if_else(weights->info()->data_type() == DataType::QSYMM8_PER_CHANNEL, "-DQUANTIZATION_TYPE=PER_CHANNEL", "-DQUANTIZATION_TYPE=PER_TENSOR");
        // Note: We expect the input and output tensors to always adopt a per-tensor quantization approach
        int a_val{};
        int b_val{};
        std::tie(b_val, a_val) = get_quantized_activation_min_max(conv_info.act_info, input->info()->data_type(), oqinfo);

        build_opts.add_option_if(conv_info.act_info.enabled(), "-DA_VAL=" + support::cpp11::to_string(a_val));
        build_opts.add_option_if(conv_info.act_info.enabled(), "-DB_VAL=" + support::cpp11::to_string(b_val));
    }
    else
    {
        kernel_name = "dwc_native_fp_nhwc";
        build_opts.add_option("-DACC_DATA_TYPE=" + get_cl_type_from_data_type(input->info()->data_type()));
        build_opts.add_option_if(conv_info.act_info.enabled(), "-DA_VAL=" + float_to_string_with_full_precision(conv_info.act_info.a()));
        build_opts.add_option_if(conv_info.act_info.enabled(), "-DB_VAL=" + float_to_string_with_full_precision(conv_info.act_info.b()));
    }

    Window win = calculate_max_window(*(output->info()), Steps(n0, m0));
    ICLKernel::configure_internal(win);

    _kernel = create_kernel(compile_context, kernel_name, build_opts.options());

    ARM_COMPUTE_ERROR_ON(has_padding_changed(padding_info));

    // Set config_id for enabling LWS tuning
    _config_id = kernel_name;
    _config_id += "_";
    _config_id += support::cpp11::to_string(input->info()->dimension(0));
    _config_id += "_";
    _config_id += support::cpp11::to_string(input->info()->dimension(1));
    _config_id += "_";
    _config_id += support::cpp11::to_string(input->info()->dimension(2));
    _config_id += "_";
    _config_id += support::cpp11::to_string(output->info()->dimension(0));
    _config_id += "_";
    _config_id += support::cpp11::to_string(output->info()->dimension(1));
    _config_id += "_";
    _config_id += support::cpp11::to_string(output->info()->dimension(2));
    _config_id += "_";
    _config_id += string_from_data_type(input->info()->data_type());
}

◆ configure() [2/2]

void configure ( ICLTensor *            input,
                 const ICLTensor *      weights,
                 const ICLTensor *      biases,
                 ICLTensor *            output,
                 const DWCComputeKernelInfo & dwc_info,
                 const ConvolutionInfo & conv_info,
                 const ICLTensor *      output_multipliers = nullptr,
                 const ICLTensor *      output_shifts = nullptr 
               )

Initialize the function's source, destination and parameters.

Similar to CLDepthwiseConvolutionLayerNativeKernel::configure()

Definition at line 172 of file CLDepthwiseConvolutionLayerNativeKernel.cpp.

References CLDepthwiseConvolutionLayerNativeKernel::configure(), and CLKernelLibrary::get().

{
    configure(CLKernelLibrary::get().get_compile_context(), input, weights, biases, output, dwc_info, conv_info, output_multipliers, output_shifts);
}

◆ operator=() [1/2]

Prevent instances of this class from being copied (As this class contains pointers)

◆ operator=() [2/2]

Allow instances of this class to be moved.

◆ run()

void run ( const Window &     window,
           cl::CommandQueue & queue 
         )
override virtual

Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.

Note
The queue is not flushed by this method, and therefore the kernel will not have been executed by the time this method returns.
Parameters
[in]windowRegion on which to execute the kernel. (Must be a valid region of the window returned by window()).
[in,out]queueCommand queue on which to enqueue the kernel.

Reimplemented from ICLKernel.

Definition at line 358 of file CLDepthwiseConvolutionLayerNativeKernel.cpp.

References ICLKernel::add_1D_tensor_argument(), ICLKernel::add_4D_tensor_argument(), ICLKernel::add_4d_tensor_nhwc_argument(), ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, ICLTensor::cl_buffer(), Window::collapse(), arm_compute::create_image2d_from_buffer(), ITensorInfo::data_type(), ITensorInfo::dimension(), Window::DimZ, arm_compute::enqueue(), Window::first_slice_window_4D(), CLKernelLibrary::get(), ITensor::info(), ICLKernel::lws_hint(), arm_compute::test::validation::reference::slice(), ITensorInfo::strides_in_bytes(), and IKernel::window().

{
    ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
    ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(IKernel::window(), window);

    // Collapse window
    Window window_collapsed = window.collapse(ICLKernel::window(), Window::DimZ);

    Window slice = window_collapsed.first_slice_window_4D();

    cl::Image2D input_cl_image;
    cl::Image2D weights_cl_image;

    if(_export_input_to_cl_image || _export_weights_to_cl_image)
    {
        // Export cl_buffer to cl_image
        if(_export_input_to_cl_image)
        {
            const size_t      image_w = _input->info()->dimension(0) / 4;
            const size_t      image_h = _input->info()->dimension(1) * _input->info()->dimension(2) * _input->info()->dimension(3);
            const TensorShape shape2d(image_w, image_h);
            const size_t      image_row_pitch = _input->info()->strides_in_bytes()[1];
            input_cl_image = create_image2d_from_buffer(CLKernelLibrary::get().context(), _input->cl_buffer(), shape2d, _input->info()->data_type(), image_row_pitch);
        }

        if(_export_weights_to_cl_image)
        {
            const size_t      image_w = _weights->info()->dimension(0) / 4;
            const size_t      image_h = _weights->info()->dimension(1) * _weights->info()->dimension(2) * _weights->info()->dimension(3);
            const TensorShape shape2d(image_w, image_h);
            const size_t      image_row_pitch = _weights->info()->strides_in_bytes()[1];
            weights_cl_image = create_image2d_from_buffer(CLKernelLibrary::get().context(), _weights->cl_buffer(), shape2d, _weights->info()->data_type(), image_row_pitch);
        }
    }

    unsigned int idx = 0;
    if(_export_input_to_cl_image)
    {
        _kernel.setArg(idx++, input_cl_image);
    }
    add_4d_tensor_nhwc_argument(idx, _input);
    add_4d_tensor_nhwc_argument(idx, _output);
    if(_export_weights_to_cl_image)
    {
        _kernel.setArg(idx++, weights_cl_image);
    }
    add_4D_tensor_argument(idx, _weights, slice);
    if(_is_quantized)
    {
        add_1D_tensor_argument(idx, _output_multipliers, slice);
        add_1D_tensor_argument(idx, _output_shifts, slice);
    }
    if(_biases != nullptr)
    {
        add_1D_tensor_argument(idx, _biases, slice);
    }
    enqueue(queue, *this, slice, lws_hint());
}

◆ validate()

Status validate ( const ITensorInfo * input,
                  const ITensorInfo * weights,
                  const ITensorInfo * biases,
                  const ITensorInfo * output,
                  const DWCComputeKernelInfo & dwc_info,
                  const ConvolutionInfo & conv_info,
                  const ITensorInfo * output_multipliers = nullptr,
                  const ITensorInfo * output_shifts = nullptr 
                )
static

Static function to check if given info will lead to a valid configuration of CLDepthwiseConvolutionLayerNativeKernel.

Similar to CLDepthwiseConvolutionLayerNativeKernel::configure()

Returns
a status

Definition at line 351 of file CLDepthwiseConvolutionLayerNativeKernel.cpp.

References ARM_COMPUTE_RETURN_ON_ERROR, and arm_compute::cpu::kernels::validate_arguments().

Referenced by CLDepthwiseConvolutionLayer::validate().

{
    ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(input, weights, biases, output, dwc_info, conv_info, output_multipliers, output_shifts));
    return Status{};
}

The documentation for this class was generated from the following files: