Compute Library 22.11
ClDirectConv2dKernel Class Reference

Interface for the direct convolution kernel. More...

#include <ClDirectConv2dKernel.h>

Collaboration diagram for ClDirectConv2dKernel: (diagram not shown)

Public Member Functions

 ClDirectConv2dKernel ()
 
 ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE (ClDirectConv2dKernel)
 
void configure (const CLCompileContext &compile_context, ITensorInfo *src, ITensorInfo *weights, ITensorInfo *biases, ITensorInfo *dst, const PadStrideInfo &conv_info, const ActivationLayerInfo &act_info, const DirectConvComputeKernelInfo &desc)
 Set the src, weights, biases and dst tensors info. More...
 
void run_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue) override
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
- Public Member Functions inherited from ICLKernel
 ICLKernel ()
 Constructor. More...
 
cl::Kernel & kernel ()
 Returns a reference to the OpenCL kernel of this object. More...
 
CLKernelType type () const
 Returns the CL kernel type. More...
 
template<typename T >
void add_1D_array_argument (unsigned int &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed 1D array's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_2D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_2D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_3D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 3D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_4D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 4D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_5D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 5D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_3d_tensor_nhw_argument (unsigned int &idx, const ICLTensor *tensor)
 Add the passed NHW 3D tensor's parameters to the object's kernel's arguments by passing strides, dimensions and the offset to the first valid element in bytes. More...
 
void add_4d_tensor_nhwc_argument (unsigned int &idx, const ICLTensor *tensor)
 Add the passed NHWC 4D tensor's parameters to the object's kernel's arguments by passing strides, dimensions and the offset to the first valid element in bytes. More...
 
virtual void run (const Window &window, cl::CommandQueue &queue)
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
virtual void run_composite_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue, const experimental::dynamic_fusion::ClExecutionDescriptor &exec_desc)
 Execution is carried out through the run_op() method, but run_op() needs to be extended to include ClExecutionDescriptor now that LWS/GWS tuning is separated from the IKernel. More...
 
template<typename T >
void add_argument (unsigned int &idx, T value)
 Add the passed parameters to the object's kernel's arguments starting from the index idx. More...
 
void set_lws_hint (const cl::NDRange &lws_hint)
 Set the Local-Workgroup-Size hint. More...
 
cl::NDRange lws_hint () const
 Return the Local-Workgroup-Size hint. More...
 
void set_wbsm_hint (const cl_int &wbsm_hint)
 Set the workgroup batch size modifier hint. More...
 
cl_int wbsm_hint () const
 Return the workgroup batch size modifier hint. More...
 
const std::string & config_id () const
 Get the configuration ID. More...
 
void set_target (GPUTarget target)
 Set the targeted GPU architecture. More...
 
void set_target (cl::Device &device)
 Set the targeted GPU architecture according to the CL device. More...
 
GPUTarget get_target () const
 Get the targeted GPU architecture. More...
 
size_t get_max_workgroup_size ()
 Get the maximum workgroup size for the device the CLKernelLibrary uses. More...
 
template<unsigned int dimension_size>
void add_tensor_argument (unsigned &idx, const ICLTensor *tensor, const Window &window)
 
template<typename T , unsigned int dimension_size>
void add_array_argument (unsigned &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed array's parameters to the object's kernel's arguments starting from the index idx. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 
bool is_window_configured () const
 Function to check if the embedded window of this kernel has been configured. More...
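
The add_*_tensor_argument() helpers above append a tensor's OpenCL buffer, strides and offset to the kernel's argument list and advance idx by the matching num_arguments_per_*_tensor() count. A minimal sketch of how the index accumulates inside a hypothetical run() override (the tensor names and the scalar value are illustrative, not part of this class):

    // Inside a hypothetical ICLKernel subclass; src and dst are ICLTensor*, slice is a Window.
    unsigned int idx = 0;
    add_3D_tensor_argument(idx, src, slice); // advances idx by num_arguments_per_3D_tensor()
    add_3D_tensor_argument(idx, dst, slice); // idx is now 2 * num_arguments_per_3D_tensor()
    add_argument<float>(idx, 0.5f);          // scalar arguments advance idx by one
    enqueue(queue, *this, slice, lws_hint());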
 

Static Public Member Functions

static Status validate (const ITensorInfo *src, const ITensorInfo *weights, const ITensorInfo *biases, const ITensorInfo *dst, const PadStrideInfo &conv_info, const ActivationLayerInfo &act_info, const DirectConvComputeKernelInfo &desc)
 Static function to check if given info will lead to a valid configuration. More...
 
- Static Public Member Functions inherited from ICLKernel
static constexpr unsigned int num_arguments_per_3d_tensor_nhw ()
 Returns the number of arguments enqueued per NHW 3D Tensor object. More...
 
static constexpr unsigned int num_arguments_per_4d_tensor_nhwc ()
 Returns the number of arguments enqueued per NHWC 4D Tensor object. More...
 
static constexpr unsigned int num_arguments_per_1D_array ()
 Returns the number of arguments enqueued per 1D array object. More...
 
static constexpr unsigned int num_arguments_per_1D_tensor ()
 Returns the number of arguments enqueued per 1D tensor object. More...
 
static constexpr unsigned int num_arguments_per_2D_tensor ()
 Returns the number of arguments enqueued per 2D tensor object. More...
 
static constexpr unsigned int num_arguments_per_3D_tensor ()
 Returns the number of arguments enqueued per 3D tensor object. More...
 
static constexpr unsigned int num_arguments_per_4D_tensor ()
 Returns the number of arguments enqueued per 4D tensor object. More...
 
static cl::NDRange gws_from_window (const Window &window)
 Get the global work size given an execution window. More...
 

Data Fields

DataLayout _data_layout {}
 
PadStrideInfo _conv_info {}
 
bool _export_to_cl_image { false }
 

Detailed Description

Interface for the direct convolution kernel.

Definition at line 41 of file ClDirectConv2dKernel.h.

Constructor & Destructor Documentation

◆ ClDirectConv2dKernel()

Definition at line 142 of file ClDirectConv2dKernel.cpp.

References arm_compute::DIRECT.

143 {
144  _type = CLKernelType::DIRECT;
145 }

Member Function Documentation

◆ ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE()

ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE ( ClDirectConv2dKernel  )

◆ configure()

void configure ( const CLCompileContext &  compile_context,
ITensorInfo *  src,
ITensorInfo *  weights,
ITensorInfo *  biases,
ITensorInfo *  dst,
const PadStrideInfo &  conv_info,
const ActivationLayerInfo &  act_info,
const DirectConvComputeKernelInfo &  desc 
)

Set the src, weights, biases and dst tensors info.

Note
: Due to set_valid_region() in NCHW, src/weights/biases cannot be const. This needs to change once set_valid_region() is removed.
: DirectConvolution only works in the following configurations for the NCHW data layout:
  - 1x1 convolution with stride_x = 1/2/3, stride_y = 1/2/3
  - 3x3 convolution with stride_x = 1/2, stride_y = 1/2
  - 5x5 convolution with stride_x = 1/2, stride_y = 1/2
  - 9x9 convolution with stride_x = 1/2, stride_y = 1/2
Parameters
[in]  compile_context  The compile context to be used.
[in]  src              The src tensor info to convolve. The 3 lower dimensions represent a single src [width, height, IFM], while every optional dimension from 4 and above represents a batch of inputs. Data types supported: QASYMM8_SIGNED/QASYMM8/F16/F32.
[in]  weights          Weights tensor info. Weights are a 4D tensor with dimensions [kernel_x, kernel_y, IFM, OFM]. The 3rd dimension must be the same as the src's volume 3rd dimension. Data type supported: Same as src.
[in]  biases           Biases tensor info. Biases are a 1D tensor with dimension [OFM]. Data type supported: Should match the src data type, except for src of QASYMM8 and QASYMM8_SIGNED type, where biases should be of S32 type.
[out] dst              Output tensor info. The 3rd dimension must be equal to the 4th dimension of the weights tensor. Data types supported: Same as src.
[in]  conv_info        Contains padding and stride information described in PadStrideInfo.
[in]  act_info         Contains activation information described in ActivationLayerInfo.
[in]  desc             Direct convolution descriptor used to build the NHWC direct convolution kernel. For NCHW, this parameter is ignored.
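
A hedged usage sketch of validate()/configure() for an NHWC F32 case. ClDirectConv2dKernel lives in the internal arm_compute::opencl::kernels namespace, so the namespace usage, the CLCompileContext retrieval and the m0/n0/k0 values below are illustrative assumptions rather than the documented public API surface:

    using namespace arm_compute;
    using arm_compute::opencl::kernels::ClDirectConv2dKernel;

    // NHWC tensor infos: dimension 0 = channels, 1 = width, 2 = height, 3 = batches
    TensorInfo src(TensorShape(64U, 32U, 32U, 1U), 1, DataType::F32);
    src.set_data_layout(DataLayout::NHWC);
    TensorInfo wei(TensorShape(64U, 3U, 3U, 128U), 1, DataType::F32);
    wei.set_data_layout(DataLayout::NHWC);
    TensorInfo bia(TensorShape(128U), 1, DataType::F32);
    TensorInfo dst{}; // left empty: configure() auto-initialises it from the computed output shape

    const PadStrideInfo       conv_info(1, 1, 1, 1); // stride_x, stride_y, pad_x, pad_y
    const ActivationLayerInfo act_info(ActivationLayerInfo::ActivationFunction::RELU);
    DirectConvComputeKernelInfo desc{};
    desc.m0 = 4; desc.n0 = 4; desc.k0 = 4; // assumed values; normally picked by a heuristic

    if(bool(ClDirectConv2dKernel::validate(&src, &wei, &bia, &dst, conv_info, act_info, desc)))
    {
        ClDirectConv2dKernel kernel;
        // Assumption: the default compile context from CLKernelLibrary is acceptable here
        kernel.configure(CLKernelLibrary::get().get_compile_context(), &src, &wei, &bia, &dst, conv_info, act_info, desc);
    }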

Definition at line 147 of file ClDirectConv2dKernel.cpp.

References ClDirectConv2dKernel::_conv_info, ClDirectConv2dKernel::_data_layout, ClDirectConv2dKernel::_export_to_cl_image, ActivationLayerInfo::a(), ActivationLayerInfo::activation(), CLBuildOptions::add_option(), CLBuildOptions::add_option_if(), CLBuildOptions::add_option_if_else(), arm_compute::adjust_vec_size(), ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), ActivationLayerInfo::b(), arm_compute::BIFROST, IKernel::border_size(), ActivationLayerInfo::BOUNDED_RELU, build_options, arm_compute::calculate_max_window(), arm_compute::quantization::calculate_quantized_multiplier(), arm_compute::CHANNEL, TensorShape::collapse(), arm_compute::misc::shape_calculator::compute_deep_convolution_shape(), arm_compute::test::validation::conv_info, conv_stride_x, conv_stride_y, arm_compute::create_kernel(), ITensorInfo::data_layout(), arm_compute::test::validation::data_type, ITensorInfo::data_type(), ITensorInfo::dimension(), ActivationLayerInfo::enabled(), DirectConvComputeKernelInfo::export_weights_to_cl_image, arm_compute::F16, arm_compute::F32, arm_compute::float_to_string_with_full_precision(), arm_compute::G71, PixelValue::get(), arm_compute::get_cl_type_from_data_type(), arm_compute::get_data_layout_dimension_index(), arm_compute::get_data_size_from_data_type(), CLCompileContext::get_ddk_version(), ICLKernel::get_target(), arm_compute::GPU_ARCH_MASK, arm_compute::HEIGHT, arm_compute::is_data_type_quantized(), DirectConvComputeKernelInfo::k0, kernel_name, arm_compute::lower_string(), ActivationLayerInfo::LU_BOUNDED_RELU, DirectConvComputeKernelInfo::m0, DirectConvComputeKernelInfo::n0, arm_compute::NCHW, arm_compute::NHWC, UniformQuantizationInfo::offset, CLBuildOptions::options(), arm_compute::test::validation::output_shape, PadStrideInfo::pad_left(), PadStrideInfo::pad_top(), ITensorInfo::quantization_info(), arm_compute::S32, UniformQuantizationInfo::scale, PadStrideInfo::stride(), arm_compute::string_from_activation_func(), arm_compute::string_from_data_layout(), arm_compute::string_from_data_type(), arm_compute::support::cpp11::to_string(), arm_compute::utils::cast::U, QuantizationInfo::uniform(), arm_compute::opencl::kernels::gemm::update_padding_for_cl_image(), arm_compute::cpu::kernels::validate_arguments(), and arm_compute::WIDTH.

149 {
150  ARM_COMPUTE_ERROR_ON_NULLPTR(src, weights, dst);
151 
152  // Perform validation
153  ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(src, weights, biases, dst, conv_info, act_info, desc));
154 
155  const int conv_stride_x = std::get<0>(conv_info.stride());
156  const int conv_stride_y = std::get<1>(conv_info.stride());
157 
158  _data_layout = src->data_layout();
159  _conv_info   = conv_info;
160 
161  const unsigned int width_idx = get_data_layout_dimension_index(_data_layout, DataLayoutDimension::WIDTH);
162  const unsigned int height_idx = get_data_layout_dimension_index(_data_layout, DataLayoutDimension::HEIGHT);
163  const unsigned int channel_idx = get_data_layout_dimension_index(_data_layout, DataLayoutDimension::CHANNEL);
164  const unsigned int kernel_size = weights->dimension(width_idx);
165  const DataType data_type = src->data_type();
166 
167  const GPUTarget gpu_target = get_target();
168  unsigned int _num_elems_processed_per_iteration = 0;
169 
170  // Get dst shape
171  TensorShape output_shape = misc::shape_calculator::compute_deep_convolution_shape(*src, *weights, conv_info);
172 
173  // Output auto initialization if not yet initialized
174  auto_init_if_empty(*dst, output_shape,
175  1,
176  src->data_type(),
177  src->quantization_info());
178 
179  // Configure kernel window
180  Window win;
181  if(_data_layout == DataLayout::NHWC)
182  {
183  output_shape.collapse(2U, 1U);
184  const unsigned int n0 = adjust_vec_size(desc.n0, output_shape[0]);
185  const unsigned int m0 = adjust_vec_size(desc.m0, output_shape[1]);
186 
187  // Create window and update padding
188  win = calculate_max_window(output_shape, Steps(n0, m0));
189  }
190  else if(_data_layout == DataLayout::NCHW)
191  {
192  _num_elems_processed_per_iteration = 1u;
193  win = calculate_max_window(*dst, Steps(_num_elems_processed_per_iteration));
194  }
195 
196  ICLKernel::configure_internal(win);
197 
198  std::stringstream kernel_name;
199  CLBuildOptions build_options;
200 
201  if(_data_layout == DataLayout::NHWC)
202  {
203  kernel_name << "direct_convolution_nhwc";
204 
205  const unsigned int n0 = win.x().step();
206  const unsigned int m0 = win.y().step();
207  const unsigned int k0 = adjust_vec_size(desc.k0, src->dimension(channel_idx));
208  const unsigned int partial_store_n0 = dst->dimension(channel_idx) % n0;
209  const unsigned int pad_left = conv_info.pad_left();
210  const unsigned int pad_top = conv_info.pad_top();
211 
212  _export_to_cl_image = desc.export_weights_to_cl_image;
213 
214  // Update the padding for the weights tensor if we can export to cl_image
215  if(_export_to_cl_image)
216  {
217  gemm::update_padding_for_cl_image(weights);
218  }
219 
220  if(biases != nullptr)
221  {
222  build_options.add_option(std::string("-DHAS_BIAS"));
223  build_options.add_option(std::string("-DBIA_DATA_TYPE=" + get_cl_type_from_data_type(biases->data_type())));
224  }
225 
226  // Conditions of -cl-fast-relaxed-math causing accuracy issues can be traced from COMPMID-5324
227  const auto act_function = act_info.activation();
228  const auto dst_data_type = dst->data_type();
229 
230  if((gpu_target != GPUTarget::G71 && (gpu_target & GPUTarget::GPU_ARCH_MASK) == GPUTarget::BIFROST)
231  && (act_function == ActivationLayerInfo::ActivationFunction::BOUNDED_RELU || act_function == ActivationLayerInfo::ActivationFunction::LU_BOUNDED_RELU)
232  && (dst_data_type == DataType::F32 || dst_data_type == DataType::F16))
233  {
234  // -cl-fast-relaxed-math also sets -cl-finite-math-only and -cl-unsafe-math-optimizations
235  // to disable -cl-finite-math-only, we only include -cl-unsafe-math-optimizations
236  build_options.add_option("-cl-unsafe-math-optimizations");
237  }
238  else
239  {
240  build_options.add_option("-cl-fast-relaxed-math");
241  }
242 
243  build_options.add_option("-DSRC_TENSOR_TYPE=BUFFER");
244  build_options.add_option("-DSRC_DATA_TYPE=" + get_cl_type_from_data_type(src->data_type()));
245  build_options.add_option("-DSRC_CHANNELS=" + support::cpp11::to_string(src->dimension(0)));
246  build_options.add_option("-DSRC_WIDTH=" + support::cpp11::to_string(src->dimension(1)));
247  build_options.add_option("-DSRC_HEIGHT=" + support::cpp11::to_string(src->dimension(2)));
248  build_options.add_option("-DDST_CHANNELS=" + support::cpp11::to_string(dst->dimension(0)));
249  build_options.add_option("-DDST_WIDTH=" + support::cpp11::to_string(dst->dimension(1)));
250  build_options.add_option("-DDST_HEIGHT=" + support::cpp11::to_string(dst->dimension(2)));
251  build_options.add_option("-DDST_TENSOR_TYPE=BUFFER");
252  build_options.add_option("-DDST_DATA_TYPE=" + get_cl_type_from_data_type(dst_data_type));
253  build_options.add_option_if_else(_export_to_cl_image, "-DWEI_TENSOR_TYPE=IMAGE", "-DWEI_TENSOR_TYPE=BUFFER");
254  build_options.add_option("-DWEI_WIDTH=" + support::cpp11::to_string(weights->dimension(width_idx)));
255  build_options.add_option("-DWEI_HEIGHT=" + support::cpp11::to_string(weights->dimension(height_idx)));
256  build_options.add_option("-DWEI_DATA_TYPE=" + get_cl_type_from_data_type(weights->data_type()));
257  build_options.add_option("-DSTRIDE_X=" + support::cpp11::to_string(conv_stride_x));
258  build_options.add_option("-DSTRIDE_Y=" + support::cpp11::to_string(conv_stride_y));
259  build_options.add_option("-DPAD_LEFT=" + support::cpp11::to_string(pad_left));
260  build_options.add_option("-DPAD_TOP=" + support::cpp11::to_string(pad_top));
261  build_options.add_option("-DN0=" + support::cpp11::to_string(n0));
262  build_options.add_option("-DM0=" + support::cpp11::to_string(m0));
263  build_options.add_option("-DK0=" + support::cpp11::to_string(k0));
264  build_options.add_option("-DPARTIAL_N0=" + support::cpp11::to_string(partial_store_n0));
265  build_options.add_option_if((src->dimension(channel_idx) % k0) != 0, "-DLEFTOVER_LOOP");
266  build_options.add_option("-DACTIVATION_TYPE=" + lower_string(string_from_activation_func(act_function)));
267 
268  if(is_data_type_quantized(data_type))
269  {
270  const UniformQuantizationInfo iqinfo = src->quantization_info().uniform();
271  const UniformQuantizationInfo wqinfo = weights->quantization_info().uniform();
272  const UniformQuantizationInfo oqinfo = dst->quantization_info().uniform();
273 
274  PixelValue zero_value = PixelValue(0, src->data_type(), src->quantization_info());
275  int zero_value_s32;
276  zero_value.get(zero_value_s32);
277 
278  float multiplier = iqinfo.scale * wqinfo.scale / oqinfo.scale;
279  int output_multiplier = 0;
280  int output_shift = 0;
281  quantization::calculate_quantized_multiplier(multiplier, &output_multiplier, &output_shift);
282  build_options.add_option("-DIS_QUANTIZED");
283  build_options.add_option("-DDST_MULTIPLIER=" + support::cpp11::to_string(output_multiplier));
284  build_options.add_option("-DDST_SHIFT=" + support::cpp11::to_string(output_shift));
285  build_options.add_option("-DSRC_OFFSET=" + support::cpp11::to_string(-iqinfo.offset));
286  build_options.add_option("-DWEI_OFFSET=" + support::cpp11::to_string(-wqinfo.offset));
287  build_options.add_option("-DDST_OFFSET=" + support::cpp11::to_string(oqinfo.offset));
288  build_options.add_option("-DZERO_VALUE=" + support::cpp11::to_string(zero_value_s32));
289  build_options.add_option("-DACC_DATA_TYPE=" + get_cl_type_from_data_type(DataType::S32));
290  }
291  else
292  {
293  build_options.add_option("-DACC_DATA_TYPE=" + get_cl_type_from_data_type(data_type));
294  build_options.add_option("-DZERO_VALUE=" + support::cpp11::to_string(0));
295  build_options.add_option("-DSRC_OFFSET=" + support::cpp11::to_string(0));
296  build_options.add_option("-DWEI_OFFSET=" + support::cpp11::to_string(0));
297  build_options.add_option("-DDST_OFFSET=" + support::cpp11::to_string(0));
298  build_options.add_option_if(act_info.enabled(), "-DA_VAL=" + float_to_string_with_full_precision(act_info.a()));
299  build_options.add_option_if(act_info.enabled(), "-DB_VAL=" + float_to_string_with_full_precision(act_info.b()));
300  }
301 
302  if(compile_context.get_ddk_version() >= 30)
303  {
304  build_options.add_option("-fregister-allocation=64");
305  }
306  }
307  else
308  {
309  _export_to_cl_image = false;
310 
311  kernel_name << "direct_convolution_nchw";
312  build_options.add_option_if(biases != nullptr, std::string("-DHAS_BIAS"));
313  build_options.add_option("-DSRC_WIDTH=" + support::cpp11::to_string(src->dimension(width_idx)));
314  build_options.add_option("-DSRC_HEIGHT=" + support::cpp11::to_string(src->dimension(height_idx)));
315  build_options.add_option("-DSRC_CHANNELS=" + support::cpp11::to_string(src->dimension(channel_idx)));
316  build_options.add_option("-DPAD_LEFT=" + support::cpp11::to_string(conv_info.pad_left()));
317  build_options.add_option("-DPAD_TOP=" + support::cpp11::to_string(conv_info.pad_top()));
318  build_options.add_option("-DSTRIDE_X=" + support::cpp11::to_string(conv_stride_x));
319  build_options.add_option("-DSTRIDE_Y=" + support::cpp11::to_string(conv_stride_y));
320  build_options.add_option("-DWEI_WIDTH=" + support::cpp11::to_string(weights->dimension(width_idx)));
321  build_options.add_option("-DWEI_HEIGHT=" + support::cpp11::to_string(weights->dimension(height_idx)));
322  build_options.add_option(std::string("-DDATA_TYPE=" + get_cl_type_from_data_type(data_type)));
323  build_options.add_option(std::string("-DDATA_SIZE=" + get_data_size_from_data_type(data_type)));
324  build_options.add_option(std::string("-DWEIGHTS_DEPTH=" + support::cpp11::to_string(weights->dimension(channel_idx))));
325  build_options.add_option(std::string("-DSTRIDE_X=" + support::cpp11::to_string(conv_stride_x)));
326  build_options.add_option(std::string("-DDATA_TYPE_PROMOTED=" + get_cl_type_from_data_type(data_type)));
327  build_options.add_option(std::string("-DVEC_SIZE=" + support::cpp11::to_string(_num_elems_processed_per_iteration)));
328  build_options.add_option(std::string("-DVEC_SIZE_LEFTOVER=" + support::cpp11::to_string(src->dimension(0) % _num_elems_processed_per_iteration)));
329 
330  if(is_data_type_quantized(data_type))
331  {
332  const UniformQuantizationInfo iqinfo = src->quantization_info().uniform();
333  const UniformQuantizationInfo wqinfo = weights->quantization_info().uniform();
334  const UniformQuantizationInfo oqinfo = dst->quantization_info().uniform();
335 
336  float multiplier = iqinfo.scale * wqinfo.scale / oqinfo.scale;
337  int output_multiplier = 0;
338  int output_shift = 0;
339  quantization::calculate_quantized_multiplier(multiplier, &output_multiplier, &output_shift);
340  build_options.add_option("-DIS_QUANTIZED");
341  build_options.add_option("-DOUTPUT_MULTIPLIER=" + support::cpp11::to_string(output_multiplier));
342  build_options.add_option("-DOUTPUT_SHIFT=" + support::cpp11::to_string(output_shift));
343  build_options.add_option("-DKERNEL_SIZE=" + support::cpp11::to_string(kernel_size));
344  build_options.add_option("-DINPUT_OFFSET=" + support::cpp11::to_string(-iqinfo.offset));
345  build_options.add_option("-DWEIGHTS_OFFSET=" + support::cpp11::to_string(-wqinfo.offset));
346  build_options.add_option("-DOUTPUT_OFFSET=" + support::cpp11::to_string(oqinfo.offset));
347  }
348  }
349 
350  _kernel = create_kernel(compile_context, kernel_name.str(), build_options.options());
351 
352  // Set config_id for enabling LWS tuning
353  _config_id = kernel_name.str();
354  _config_id += "_";
355  _config_id += lower_string(string_from_data_type(data_type));
356  _config_id += "_";
357  _config_id += support::cpp11::to_string(kernel_size);
358  _config_id += "_";
359  _config_id += support::cpp11::to_string(border_size().left);
360  _config_id += "_";
361  _config_id += support::cpp11::to_string(border_size().top);
362  _config_id += "_";
363  _config_id += support::cpp11::to_string(border_size().right);
364  _config_id += "_";
365  _config_id += support::cpp11::to_string(border_size().bottom);
366  _config_id += "_";
367  _config_id += support::cpp11::to_string(conv_stride_x);
368  _config_id += "_";
369  _config_id += support::cpp11::to_string(conv_stride_y);
370  _config_id += "_";
371  _config_id += support::cpp11::to_string(dst->dimension(width_idx));
372  _config_id += "_";
373  _config_id += support::cpp11::to_string(dst->dimension(height_idx));
374  _config_id += "_";
375  _config_id += lower_string(string_from_data_layout(_data_layout));
376 }
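
For quantized data types the listing folds the three per-tensor scales into a single effective multiplier, multiplier = iqinfo.scale * wqinfo.scale / oqinfo.scale, and calculate_quantized_multiplier() re-expresses it as an integer multiplier plus a shift so the kernel can requantize accumulators with integer arithmetic. A standalone sketch of the idea; the frexp-based decomposition below only illustrates the principle and is not ACL's exact rounding convention:

    #include <cmath>
    #include <cstdint>
    #include <cstdio>

    int main()
    {
        const float src_scale = 0.02f, wei_scale = 0.005f, dst_scale = 0.1f;
        const float multiplier = src_scale * wei_scale / dst_scale; // effective requantization scale

        // Illustrative fixed-point decomposition: multiplier ~= q31 * 2^-31 * 2^exponent
        int         exponent = 0;
        const float mantissa = std::frexp(multiplier, &exponent); // mantissa in [0.5, 1)
        const auto  q31      = static_cast<std::int32_t>(std::lround(mantissa * 2147483648.0));

        std::printf("multiplier=%f -> q31=%d exponent=%d\n", static_cast<double>(multiplier), q31, exponent);
        // Conceptually the kernel then applies: requantized ~= (acc * q31) >> (31 - exponent), plus the offsets.
        return 0;
    }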

◆ run_op()

void run_op ( ITensorPack &  tensors,
const Window &  window,
cl::CommandQueue &  queue 
)
overridevirtual

Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.

Note
The queue is not flushed by this method, and therefore the kernel will not have been executed by the time this method returns.
Parameters
[in]     tensors  A vector containing the tensors to operate on.
[in]     window   Region on which to execute the kernel. (Must be a valid region of the window returned by window()).
[in,out] queue    Command queue on which to enqueue the kernel.
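
A hedged sketch of how the tensors are typically packed and the kernel dispatched. The CLTensor objects and their allocation are assumed to exist elsewhere and match the infos passed to configure(); the ACL_SRC_0/ACL_SRC_1/ACL_SRC_2/ACL_DST slots mirror the ones the listing below reads back with get_const_tensor()/get_tensor():

    // src_t, wei_t, bia_t, dst_t are allocated CLTensor objects; kernel is the configured ClDirectConv2dKernel.
    ITensorPack pack;
    pack.add_const_tensor(TensorType::ACL_SRC_0, &src_t); // input
    pack.add_const_tensor(TensorType::ACL_SRC_1, &wei_t); // weights
    pack.add_const_tensor(TensorType::ACL_SRC_2, &bia_t); // biases (optional)
    pack.add_tensor(TensorType::ACL_DST, &dst_t);         // output

    kernel.run_op(pack, kernel.window(), CLScheduler::get().queue());
    CLScheduler::get().queue().finish(); // run_op does not flush the queue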

Reimplemented from ICLKernel.

Definition at line 385 of file ClDirectConv2dKernel.cpp.

References ClDirectConv2dKernel::_data_layout, ClDirectConv2dKernel::_export_to_cl_image, arm_compute::ACL_DST, arm_compute::ACL_SRC_0, arm_compute::ACL_SRC_1, arm_compute::ACL_SRC_2, ICLKernel::add_1D_tensor_argument(), ICLKernel::add_3D_tensor_argument(), ICLKernel::add_4d_tensor_nhwc_argument(), ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, arm_compute::create_image2d_from_buffer(), arm_compute::enqueue(), Window::first_slice_window_3D(), CLKernelLibrary::get(), ITensorPack::get_const_tensor(), ITensorPack::get_tensor(), ICLKernel::lws_hint(), arm_compute::NHWC, ICLKernel::num_arguments_per_3D_tensor(), arm_compute::test::validation::reference::slice(), Window::slide_window_slice_3D(), Window::use_tensor_dimensions(), and IKernel::window().

386 {
387  ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
388  ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(IKernel::window(), window);
389 
390  // Get initial windows
391  Window slice = window.first_slice_window_3D();
392 
393  const auto src = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_SRC_0));
394  const auto weights = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_SRC_1));
395  const auto biases = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_SRC_2));
396  auto dst = utils::cast::polymorphic_downcast<ICLTensor *>(tensors.get_tensor(TensorType::ACL_DST));
397 
398  if(_data_layout == DataLayout::NHWC)
399  {
400  cl::Image2D weights_cl_image;
401 
402  if(_export_to_cl_image)
403  {
404  const size_t image_w = weights->info()->dimension(0) / 4;
405  const size_t image_h = weights->info()->dimension(1) * weights->info()->dimension(2) * weights->info()->dimension(3);
406  const TensorShape shape2d(image_w, image_h);
407  const size_t image_row_pitch = weights->info()->strides_in_bytes()[1];
408 
409  // Export cl_buffer to cl_image
410  weights_cl_image = create_image2d_from_buffer(CLKernelLibrary::get().context(), weights->cl_buffer(), shape2d, weights->info()->data_type(), image_row_pitch);
411  }
412 
413  unsigned int idx = 0;
414  add_4d_tensor_nhwc_argument(idx, src);
415  add_4d_tensor_nhwc_argument(idx, dst);
416  if(_export_to_cl_image)
417  {
418  _kernel.setArg(idx++, weights_cl_image);
419  }
420  add_4d_tensor_nhwc_argument(idx, weights);
421  if(biases != nullptr)
422  {
423  add_1D_tensor_argument(idx, biases, slice);
424  }
425  enqueue(queue, *this, slice, lws_hint());
426  }
427  else
428  {
429  unsigned int idx1 = 2 * num_arguments_per_3D_tensor();
430  add_3D_tensor_argument(idx1, weights, slice);
431 
432  if(biases != nullptr)
433  {
434  Window slice_biases;
435  slice_biases.use_tensor_dimensions(biases->info()->tensor_shape());
436  add_1D_tensor_argument(idx1, biases, slice_biases);
437  }
438 
439  _kernel.setArg(idx1++, static_cast<unsigned int>(weights->info()->strides_in_bytes()[3]));
440 
441  do
442  {
443  unsigned int idx = 0;
444  add_3D_tensor_argument(idx, src, slice);
445  add_3D_tensor_argument(idx, dst, slice);
446  enqueue(queue, *this, slice, lws_hint());
447  }
448  while(window.slide_window_slice_3D(slice));
449  }
450 }
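
When _export_to_cl_image is set, the NHWC path re-exports the weights cl_buffer as a cl_image2d: each image texel packs four elements along tensor dimension 0, so the image width is dimension(0) / 4, the image height collapses the remaining three dimensions, and the row pitch comes from the byte stride of dimension 1. A small sketch of that arithmetic with illustrative shape values (in the listing they come from weights->info()):

    // Illustrative weights geometry and element size (F32).
    const size_t dim0 = 64, dim1 = 3, dim2 = 3, dim3 = 128; // tensor dimensions 0..3
    const size_t elem_size = 4;                             // bytes per element

    const size_t image_w         = dim0 / 4;                // 4 elements per RGBA texel; dim0 assumed divisible by 4
    const size_t image_h         = dim1 * dim2 * dim3;      // remaining dimensions collapsed into image rows
    const size_t image_row_pitch = dim0 * elem_size;        // stride of dimension 1 in bytes (assuming unpadded dim 0)
    // create_image2d_from_buffer(context, buffer, TensorShape(image_w, image_h), data_type, image_row_pitch)
    // then creates the cl_image2d view over the existing buffer.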

◆ validate()

Status validate ( const ITensorInfo *  src,
const ITensorInfo *  weights,
const ITensorInfo *  biases,
const ITensorInfo *  dst,
const PadStrideInfo &  conv_info,
const ActivationLayerInfo &  act_info,
const DirectConvComputeKernelInfo &  desc 
)
static

Static function to check if given info will lead to a valid configuration.

Similar to ClDirectConv2dKernel::configure()

Returns
a status
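
Since validate() mirrors configure() without building the OpenCL kernel, it is typically called up front to reject unsupported configurations. A minimal sketch, reusing the tensor infos from the configure() example above (error_description() is used purely for reporting):

    const Status status = ClDirectConv2dKernel::validate(&src, &wei, &bia, &dst, conv_info, act_info, desc);
    if(!bool(status))
    {
        std::printf("ClDirectConv2dKernel rejected the configuration: %s\n", status.error_description().c_str());
    }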

Definition at line 378 of file ClDirectConv2dKernel.cpp.

References ARM_COMPUTE_RETURN_ON_ERROR, and arm_compute::cpu::kernels::validate_arguments().

380 {
381  ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(src, weights, biases, dst, conv_info, act_info, desc));
382  return Status{};
383 }

Field Documentation

◆ _conv_info

PadStrideInfo _conv_info {}

Definition at line 86 of file ClDirectConv2dKernel.h.

Referenced by ClDirectConv2dKernel::configure().

◆ _data_layout

DataLayout _data_layout {}

◆ _export_to_cl_image

bool _export_to_cl_image { false }

The documentation for this class was generated from the following files:
ClDirectConv2dKernel.h
ClDirectConv2dKernel.cpp