Compute Library
 21.11
CLDepthwiseConvolutionLayerNativeKernel Class Reference

Interface for the kernel to run an MxN depthwise convolution. More...

#include <CLDepthwiseConvolutionLayerNativeKernel.h>


Public Member Functions

 CLDepthwiseConvolutionLayerNativeKernel ()
 Default Constructor. More...
 
 CLDepthwiseConvolutionLayerNativeKernel (const CLDepthwiseConvolutionLayerNativeKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
CLDepthwiseConvolutionLayerNativeKernel & operator= (const CLDepthwiseConvolutionLayerNativeKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 CLDepthwiseConvolutionLayerNativeKernel (CLDepthwiseConvolutionLayerNativeKernel &&)=default
 Allow instances of this class to be moved. More...
 
CLDepthwiseConvolutionLayerNativeKernel & operator= (CLDepthwiseConvolutionLayerNativeKernel &&)=default
 Allow instances of this class to be moved. More...
 
void configure (const CLCompileContext &compile_context, ICLTensor *input, const ICLTensor *weights, const ICLTensor *biases, ICLTensor *output, const DWCComputeKernelInfo &dwc_info, const ConvolutionInfo &conv_info, const ICLTensor *output_multipliers=nullptr, const ICLTensor *output_shifts=nullptr)
 Initialize the function's source, destination and parameters. More...
 
void configure (ICLTensor *input, const ICLTensor *weights, const ICLTensor *biases, ICLTensor *output, const DWCComputeKernelInfo &dwc_info, const ConvolutionInfo &conv_info, const ICLTensor *output_multipliers=nullptr, const ICLTensor *output_shifts=nullptr)
 Initialize the function's source, destination and parameters. More...
 
void run (const Window &window, cl::CommandQueue &queue) override
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
- Public Member Functions inherited from ICLKernel
 ICLKernel ()
 Constructor. More...
 
cl::Kernel & kernel ()
 Returns a reference to the OpenCL kernel of this object. More...
 
CLKernelType type () const
 Returns the CL kernel type. More...
 
template<typename T >
void add_1D_array_argument (unsigned int &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed 1D array's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_2D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_2D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_3D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 3D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_4D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 4D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
virtual void run_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue)
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
template<typename T >
void add_argument (unsigned int &idx, T value)
 Add the passed parameters to the object's kernel's arguments starting from the index idx. More...
 
void set_lws_hint (const cl::NDRange &lws_hint)
 Set the Local-Workgroup-Size hint. More...
 
cl::NDRange lws_hint () const
 Return the Local-Workgroup-Size hint. More...
 
void set_wbsm_hint (const cl_int &wbsm_hint)
 Set the workgroup batch size modifier hint. More...
 
cl_int wbsm_hint () const
 Return the workgroup batch size modifier hint. More...
 
const std::string & config_id () const
 Get the configuration ID. More...
 
void set_target (GPUTarget target)
 Set the targeted GPU architecture. More...
 
void set_target (cl::Device &device)
 Set the targeted GPU architecture according to the CL device. More...
 
GPUTarget get_target () const
 Get the targeted GPU architecture. More...
 
size_t get_max_workgroup_size ()
 Get the maximum workgroup size for the device the CLKernelLibrary uses. More...
 
template<unsigned int dimension_size>
void add_tensor_argument (unsigned &idx, const ICLTensor *tensor, const Window &window)
 
template<typename T , unsigned int dimension_size>
void add_array_argument (unsigned &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed array's parameters to the object's kernel's arguments starting from the index idx. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 
bool is_window_configured () const
 Function to check if the embedded window of this kernel has been configured. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *weights, const ITensorInfo *biases, const ITensorInfo *output, const DWCComputeKernelInfo &dwc_info, const ConvolutionInfo &conv_info, const ITensorInfo *output_multipliers=nullptr, const ITensorInfo *output_shifts=nullptr)
 Static function to check if given info will lead to a valid configuration of CLDepthwiseConvolutionLayerNativeKernel. More...
 
- Static Public Member Functions inherited from ICLKernel
static constexpr unsigned int num_arguments_per_1D_array ()
 Returns the number of arguments enqueued per 1D array object. More...
 
static constexpr unsigned int num_arguments_per_1D_tensor ()
 Returns the number of arguments enqueued per 1D tensor object. More...
 
static constexpr unsigned int num_arguments_per_2D_tensor ()
 Returns the number of arguments enqueued per 2D tensor object. More...
 
static constexpr unsigned int num_arguments_per_3D_tensor ()
 Returns the number of arguments enqueued per 3D tensor object. More...
 
static constexpr unsigned int num_arguments_per_4D_tensor ()
 Returns the number of arguments enqueued per 4D tensor object. More...
 
static cl::NDRange gws_from_window (const Window &window)
 Get the global work size given an execution window. More...
 

Detailed Description

Interface for the kernel to run an MxN depthwise convolution.

M and N are respectively the rows and columns of the filter. This kernel assumes that the tensor for the weights is NOT reshaped (Native version).

Definition at line 37 of file CLDepthwiseConvolutionLayerNativeKernel.h.

Constructor & Destructor Documentation

◆ CLDepthwiseConvolutionLayerNativeKernel() [1/3]

Default Constructor.

Definition at line 153 of file CLDepthwiseConvolutionLayerNativeKernel.cpp.

References arm_compute::DEPTHWISE.

154  : _input(nullptr),
155  _weights(nullptr),
156  _biases(nullptr),
157  _output(nullptr),
158  _depth_multiplier(1),
159  _output_multipliers(nullptr),
160  _output_shifts(nullptr),
161  _export_to_cl_image(false),
162  _is_quantized(false)
163 {
164  _type = CLKernelType::DEPTHWISE;
165 }

◆ CLDepthwiseConvolutionLayerNativeKernel() [2/3]

Prevent instances of this class from being copied (As this class contains pointers)

◆ CLDepthwiseConvolutionLayerNativeKernel() [3/3]

Allow instances of this class to be moved.

Member Function Documentation

◆ configure() [1/2]

void configure ( const CLCompileContext & compile_context,
ICLTensor * input,
const ICLTensor * weights,
const ICLTensor * biases,
ICLTensor * output,
const DWCComputeKernelInfo & dwc_info,
const ConvolutionInfo & conv_info,
const ICLTensor * output_multipliers = nullptr,
const ICLTensor * output_shifts = nullptr 
)

Initialize the function's source, destination and parameters.

Parameters
[in] compile_context The compile context to be used.
[in] input Source tensor. Data type supported: QASYMM8/QASYMM8_SIGNED/FP32/FP16. Data layout supported: NHWC
[in] weights Weights tensor. A 3D tensor with dimensions [IFM, N, M]. Data type supported: Same as input or QASYMM8/QASYMM8_SIGNED/QSYMM8_PER_CHANNEL when input is QASYMM8.
[in] biases Biases tensor. A 1D tensor with dimensions [IFM]. Must be nullptr if not needed. Data type supported: Same as input, S32 when input is QASYMM8/QASYMM8_SIGNED.
[out] output Destination tensor. Pass in nullptr or input for in-place operation. Data type supported: Same as input.
[in] dwc_info Depthwise convolution layer info
[in] conv_info Convolution info (padding, stride, dilation, ...)
[in] output_multipliers (Optional) Output multipliers tensor for quantized computations. In case of per-channel quantization, the number of multipliers must be equal to the number of filters (IFM). Supported data types: S32
[in] output_shifts (Optional) Output shifts tensor for quantized computations. In case of per-channel quantization, the number of shifts must be equal to the number of filters (IFM). Supported data types: S32
Note:
In-place is only supported when
  • data layout: NHWC
  • filter: 1x1
  • depth_multiplier: 1
  • strides: 1
  • dilation: 1
  • no padding
  • no change of data layout after configure

Definition at line 174 of file CLDepthwiseConvolutionLayerNativeKernel.cpp.

References CLBuildOptions::add_option(), CLBuildOptions::add_option_if(), CLBuildOptions::add_option_if_else(), arm_compute::adjust_vec_size(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), arm_compute::calculate_max_window(), arm_compute::quantization::calculate_quantized_multiplier(), ICloneable< T >::clone(), arm_compute::misc::shape_calculator::compute_depthwise_convolution_shape(), arm_compute::test::validation::conv_info, arm_compute::create_kernel(), ITensorInfo::data_type(), ITensorInfo::dimension(), DWCComputeKernelInfo::export_weights_to_cl_image, arm_compute::float_to_string_with_full_precision(), PixelValue::get(), arm_compute::get_cl_type_from_data_type(), arm_compute::get_padding_info(), arm_compute::get_quantized_activation_min_max(), arm_compute::has_padding_changed(), ITensor::info(), arm_compute::test::validation::input, arm_compute::is_data_type_quantized(), kernel_name, arm_compute::lower_string(), DWCComputeKernelInfo::m0, DWCComputeKernelInfo::n0, ITensorInfo::num_dimensions(), UniformQuantizationInfo::offset, CLBuildOptions::options(), arm_compute::QSYMM8_PER_CHANNEL, ITensorInfo::quantization_info(), arm_compute::S32, UniformQuantizationInfo::scale, arm_compute::set_unroll_with_pragma(), arm_compute::string_from_activation_func(), arm_compute::string_from_data_type(), arm_compute::support::cpp11::to_string(), QuantizationInfo::uniform(), and arm_compute::opencl::kernels::gemm::update_padding_for_cl_image().

Referenced by CLDepthwiseConvolutionLayerNativeKernel::configure().

177 {
178  ARM_COMPUTE_ERROR_ON_NULLPTR(input, weights, output);
179  if(output == nullptr)
180  {
181  // In-place
182  output = input;
183  }
184  ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(input->info(), weights->info(), (biases != nullptr) ? biases->info() : nullptr, output->info(),
185  dwc_info, conv_info, (output_multipliers != nullptr) ? output_multipliers->info() : nullptr, (output_shifts != nullptr) ? output_shifts->info() : nullptr));
186 
187  auto padding_info = get_padding_info({ input, output });
188 
189  const TensorShape output_shape = misc::shape_calculator::compute_depthwise_convolution_shape(*(input->info()), *(weights->info()), conv_info);
190  auto_init_if_empty(*(output->info()), input->info()->clone()->set_tensor_shape(output_shape).set_quantization_info(output->info()->quantization_info()));
191 
192  _input = input;
193  _output = output;
194  _weights = weights;
195  _biases = biases;
196  _depth_multiplier = conv_info.depth_multiplier;
197  _output_multipliers = output_multipliers;
198  _output_shifts = output_shifts;
199  _export_to_cl_image = dwc_info.export_weights_to_cl_image;
200  _is_quantized = is_data_type_quantized(input->info()->data_type());
201 
202  const unsigned int n0 = adjust_vec_size(dwc_info.n0, input->info()->dimension(0));
203  const unsigned int m0 = std::min(dwc_info.m0, (unsigned int)output->info()->dimension(1));
204  std::string kernel_name = "";
205 
206  CLBuildOptions build_opts;
207 
208  // Update the padding for the weights tensor if we can export to cl_image
209  if(_export_to_cl_image)
210  {
211  opencl::kernels::gemm::update_padding_for_cl_image(weights->info());
212  }
213 
214  build_opts.add_option("-cl-fast-relaxed-math");
215  build_opts.add_option("-DACTIVATION_TYPE=" + lower_string(string_from_activation_func(conv_info.act_info.activation())));
216  build_opts.add_option("-DDEPTH_MULTIPLIER=" + support::cpp11::to_string(conv_info.depth_multiplier));
217  build_opts.add_option("-DSRC_TENSOR_TYPE=BUFFER");
218  build_opts.add_option("-DSRC_WIDTH=" + support::cpp11::to_string(_input->info()->dimension(1)));
219  build_opts.add_option("-DSRC_HEIGHT=" + support::cpp11::to_string(_input->info()->dimension(2)));
220  // Note: SRC_DATA_TYPE must have the same data type of WEI_DATA_TYPE. In quantized, we could
221  // have a case where the data types for the activation and weights are different. However, since the implementation
222  // only works when both have same data type, we have to change the offset to take into account this aspect
223  build_opts.add_option("-DSRC_DATA_TYPE=" + get_cl_type_from_data_type(_input->info()->data_type()));
224  build_opts.add_option("-DDST_TENSOR_TYPE=BUFFER");
225  build_opts.add_option("-DDST_WIDTH=" + support::cpp11::to_string(_output->info()->dimension(1)));
226  build_opts.add_option("-DDST_HEIGHT=" + support::cpp11::to_string(_output->info()->dimension(2)));
227  build_opts.add_option("-DDST_DATA_TYPE=" + get_cl_type_from_data_type(_output->info()->data_type()));
228  build_opts.add_option_if_else(_export_to_cl_image, "-DWEI_TENSOR_TYPE=IMAGE", "-DWEI_TENSOR_TYPE=BUFFER");
229  build_opts.add_option("-DWEI_WIDTH=" + support::cpp11::to_string(_weights->info()->dimension(1)));
230  build_opts.add_option("-DWEI_HEIGHT=" + support::cpp11::to_string(_weights->info()->dimension(2)));
231  build_opts.add_option("-DWEI_DATA_TYPE=" + get_cl_type_from_data_type(_weights->info()->data_type()));
232  build_opts.add_option("-DPAD_TOP=" + support::cpp11::to_string(conv_info.pad_stride_info.pad_top()));
233  build_opts.add_option("-DPAD_LEFT=" + support::cpp11::to_string(conv_info.pad_stride_info.pad_left()));
234  build_opts.add_option("-DSTRIDE_X=" + support::cpp11::to_string(conv_info.pad_stride_info.stride().first));
235  build_opts.add_option("-DSTRIDE_Y=" + support::cpp11::to_string(conv_info.pad_stride_info.stride().second));
236  build_opts.add_option("-DDILATION_X=" + support::cpp11::to_string(conv_info.dilation.x()));
237  build_opts.add_option("-DDILATION_Y=" + support::cpp11::to_string(conv_info.dilation.y()));
238  build_opts.add_option("-DN0=" + support::cpp11::to_string(n0));
239  build_opts.add_option("-DM0=" + support::cpp11::to_string(m0));
240  build_opts.add_option("-DM0_A=" + support::cpp11::to_string(_weights->info()->dimension(1) + m0 - 1));
241  build_opts.add_option("-DPARTIAL_N0=" + support::cpp11::to_string(_input->info()->dimension(0) % n0));
242  build_opts.add_option_if(_input->info()->num_dimensions() > 3, "-DBATCHED_EXECUTION");
243 
244  // Force unroll with pragma when any of the following values exceed the maximum number of manual unroll
245  set_unroll_with_pragma(build_opts, { static_cast<int>(_weights->info()->dimension(1) + m0 - 1),
246  static_cast<int>(_weights->info()->dimension(1)),
247  static_cast<int>(_weights->info()->dimension(2))
248  });
249 
250  if(biases != nullptr)
251  {
252  build_opts.add_option(std::string("-DHAS_BIAS"));
253  build_opts.add_option(std::string("-DBIA_DATA_TYPE=" + get_cl_type_from_data_type(biases->info()->data_type())));
254  }
255 
256  if(_is_quantized)
257  {
258  kernel_name = "dwc_native_quantized_nhwc";
259  const UniformQuantizationInfo iqinfo = input->info()->quantization_info().uniform();
260  const UniformQuantizationInfo wqinfo = weights->info()->quantization_info().uniform();
261  const UniformQuantizationInfo oqinfo = output->info()->quantization_info().uniform();
262 
263  PixelValue zero_value = PixelValue(0, input->info()->data_type(), input->info()->quantization_info());
264  int zero_value_s32;
265  zero_value.get(zero_value_s32);
266 
267  float multiplier = iqinfo.scale * wqinfo.scale / oqinfo.scale;
268  int output_multiplier = 0;
269  int output_shift = 0;
270  quantization::calculate_quantized_multiplier(multiplier, &output_multiplier, &output_shift);
271  build_opts.add_option("-DDST_MULTIPLIER=" + support::cpp11::to_string(output_multiplier));
272  build_opts.add_option("-DDST_SHIFT=" + support::cpp11::to_string(output_shift));
273  build_opts.add_option("-DSRC_OFFSET=" + support::cpp11::to_string(-iqinfo.offset));
274  build_opts.add_option("-DWEI_OFFSET=" + support::cpp11::to_string(-wqinfo.offset));
275  build_opts.add_option("-DDST_OFFSET=" + support::cpp11::to_string(oqinfo.offset));
276  build_opts.add_option("-DZERO_VALUE=" + support::cpp11::to_string(zero_value_s32));
277  build_opts.add_option("-DACC_DATA_TYPE=" + get_cl_type_from_data_type(DataType::S32));
278  build_opts.add_option("-DDST_MULTIPLIERS_DATA_TYPE=" + get_cl_type_from_data_type(_output_multipliers->info()->data_type()));
279  build_opts.add_option("-DDST_SHIFTS_DATA_TYPE=" + get_cl_type_from_data_type(_output_shifts->info()->data_type()));
280  build_opts.add_option_if_else(weights->info()->data_type() == DataType::QSYMM8_PER_CHANNEL, "-DQUANTIZATION_TYPE=PER_CHANNEL", "-DQUANTIZATION_TYPE=PER_TENSOR");
281  // Note: We expect the input and output tensors to always adopt a per-tensor quantization approach
282  int a_val{};
283  int b_val{};
284  std::tie(b_val, a_val) = get_quantized_activation_min_max(conv_info.act_info, input->info()->data_type(), oqinfo);
285 
286  build_opts.add_option_if(conv_info.act_info.enabled(), "-DA_VAL=" + support::cpp11::to_string(a_val));
287  build_opts.add_option_if(conv_info.act_info.enabled(), "-DB_VAL=" + support::cpp11::to_string(b_val));
288  }
289  else
290  {
291  kernel_name = "dwc_native_fp_nhwc";
292  build_opts.add_option("-DACC_DATA_TYPE=" + get_cl_type_from_data_type(input->info()->data_type()));
293  build_opts.add_option("-DZERO_VALUE=" + support::cpp11::to_string(0));
294  build_opts.add_option_if(conv_info.act_info.enabled(), "-DA_VAL=" + float_to_string_with_full_precision(conv_info.act_info.a()));
295  build_opts.add_option_if(conv_info.act_info.enabled(), "-DB_VAL=" + float_to_string_with_full_precision(conv_info.act_info.b()));
296  }
297 
298  Window win = calculate_max_window(*(output->info()), Steps(n0, m0));
299  ICLKernel::configure_internal(win);
300 
301  _kernel = create_kernel(compile_context, kernel_name, build_opts.options());
302 
303  ARM_COMPUTE_ERROR_ON(has_padding_changed(padding_info));
304 
305  // Set config_id for enabling LWS tuning
306  _config_id = kernel_name;
307  _config_id += "_";
308  _config_id += support::cpp11::to_string(input->info()->dimension(0));
309  _config_id += "_";
310  _config_id += support::cpp11::to_string(input->info()->dimension(1));
311  _config_id += "_";
312  _config_id += support::cpp11::to_string(input->info()->dimension(2));
313  _config_id += "_";
314  _config_id += support::cpp11::to_string(output->info()->dimension(0));
315  _config_id += "_";
316  _config_id += support::cpp11::to_string(output->info()->dimension(1));
317  _config_id += "_";
318  _config_id += support::cpp11::to_string(output->info()->dimension(2));
319  _config_id += "_";
320  _config_id += string_from_data_type(input->info()->data_type());
321 }

◆ configure() [2/2]

void configure ( ICLTensor * input,
const ICLTensor * weights,
const ICLTensor * biases,
ICLTensor * output,
const DWCComputeKernelInfo & dwc_info,
const ConvolutionInfo & conv_info,
const ICLTensor * output_multipliers = nullptr,
const ICLTensor * output_shifts = nullptr 
)

Initialize the function's source, destination and parameters.

Similar to CLDepthwiseConvolutionLayerNativeKernel::configure()

Definition at line 167 of file CLDepthwiseConvolutionLayerNativeKernel.cpp.

References CLDepthwiseConvolutionLayerNativeKernel::configure(), and CLKernelLibrary::get().

170 {
171  configure(CLKernelLibrary::get().get_compile_context(), input, weights, biases, output, dwc_info, conv_info, output_multipliers, output_shifts);
172 }

◆ operator=() [1/2]

Prevent instances of this class from being copied (As this class contains pointers)

◆ operator=() [2/2]

Allow instances of this class to be moved.

◆ run()

void run ( const Window & window,
cl::CommandQueue & queue 
)
override virtual

Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.

Note
The queue is not flushed by this method, and therefore the kernel will not have been executed by the time this method returns.
Parameters
[in]windowRegion on which to execute the kernel. (Must be a valid region of the window returned by window()).
[in,out]queueCommand queue on which to enqueue the kernel.

Reimplemented from ICLKernel.

Definition at line 330 of file CLDepthwiseConvolutionLayerNativeKernel.cpp.

References ICLKernel::add_1D_tensor_argument(), ICLKernel::add_4D_tensor_argument(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, ICLTensor::cl_buffer(), Window::collapse(), arm_compute::create_image2d_from_buffer(), ITensorInfo::data_type(), ITensorInfo::dimension(), Window::DimX, Window::DimZ, arm_compute::enqueue(), Window::first_slice_window_4D(), CLKernelLibrary::get(), ITensor::info(), ICLKernel::lws_hint(), Window::set(), arm_compute::test::validation::reference::slice(), Window::Dimension::step(), ITensorInfo::strides_in_bytes(), ITensorInfo::tensor_shape(), IKernel::window(), and Window::x().

331 {
332  ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
333  ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(IKernel::window(), window);
334 
335  // Collapse window
336  Window window_collapsed = window.collapse(ICLKernel::window(), Window::DimZ);
337 
338  Window slice = window_collapsed.first_slice_window_4D();
339 
340  if(_depth_multiplier != 1)
341  {
342  // If the depth multiplier > 1, we need to use the input channels rather than the output channels
343  ARM_COMPUTE_ERROR_ON(slice.x().step() != 1);
344  slice.set(Window::DimX, Window::Dimension(0, _input->info()->tensor_shape()[0], 1));
345  }
346 
347  cl::Image2D weights_cl_image;
348 
349  if(_export_to_cl_image)
350  {
351  const size_t image_w = _weights->info()->dimension(0) / 4;
352  const size_t image_h = _weights->info()->dimension(1) * _weights->info()->dimension(2) * _weights->info()->dimension(3);
353  const TensorShape shape2d(image_w, image_h);
354  const size_t image_row_pitch = _weights->info()->strides_in_bytes()[1];
355 
356  // Export cl_buffer to cl_image
357  weights_cl_image = create_image2d_from_buffer(CLKernelLibrary::get().context(), _weights->cl_buffer(), shape2d, _weights->info()->data_type(), image_row_pitch);
358  }
359 
360  unsigned int idx = 0;
361  add_4D_tensor_argument(idx, _input, slice);
362  add_4D_tensor_argument(idx, _output, slice);
363  if(_export_to_cl_image)
364  {
365  _kernel.setArg(idx++, weights_cl_image);
366  }
367  add_4D_tensor_argument(idx, _weights, slice);
368  if(_is_quantized)
369  {
370  add_1D_tensor_argument(idx, _output_multipliers, slice);
371  add_1D_tensor_argument(idx, _output_shifts, slice);
372  }
373  if(_biases != nullptr)
374  {
375  add_1D_tensor_argument(idx, _biases, slice);
376  }
377  enqueue(queue, *this, slice, lws_hint());
378 }

◆ validate()

Status validate ( const ITensorInfo * input,
const ITensorInfo * weights,
const ITensorInfo * biases,
const ITensorInfo * output,
const DWCComputeKernelInfo * dwc_info,
const ConvolutionInfo & conv_info,
const ITensorInfo * output_multipliers = nullptr,
const ITensorInfo * output_shifts = nullptr 
)
static

Static function to check if given info will lead to a valid configuration of CLDepthwiseConvolutionLayerNativeKernel.

Similar to CLDepthwiseConvolutionLayerNativeKernel::configure()

Returns
a status

Definition at line 323 of file CLDepthwiseConvolutionLayerNativeKernel.cpp.

References ARM_COMPUTE_RETURN_ON_ERROR.

Referenced by CLDepthwiseConvolutionLayer::validate().

325 {
326  ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(input, weights, biases, output, dwc_info, conv_info, output_multipliers, output_shifts));
327  return Status{};
328 }

The documentation for this class was generated from the following files:

CLDepthwiseConvolutionLayerNativeKernel.h
CLDepthwiseConvolutionLayerNativeKernel.cpp