Compute Library
 21.02
CLDepthwiseConvolutionLayerNativeKernel Class Reference

Interface for the kernel to run an MxN depthwise convolution. More...

#include <CLDepthwiseConvolutionLayerNativeKernel.h>

Collaboration diagram for CLDepthwiseConvolutionLayerNativeKernel:

Public Member Functions

 CLDepthwiseConvolutionLayerNativeKernel ()
 Default Constructor. More...
 
 CLDepthwiseConvolutionLayerNativeKernel (const CLDepthwiseConvolutionLayerNativeKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
CLDepthwiseConvolutionLayerNativeKernel & operator= (const CLDepthwiseConvolutionLayerNativeKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 CLDepthwiseConvolutionLayerNativeKernel (CLDepthwiseConvolutionLayerNativeKernel &&)=default
 Allow instances of this class to be moved. More...
 
CLDepthwiseConvolutionLayerNativeKernel & operator= (CLDepthwiseConvolutionLayerNativeKernel &&)=default
 Allow instances of this class to be moved. More...
 
void configure (const ICLTensor *input, const ICLTensor *weights, const ICLTensor *biases, ICLTensor *output, const DWCWeightsKernelInfo &dwc_weights_info, const DWCKernelInfo &dwc_info, const PadStrideInfo &conv_info, unsigned int depth_multiplier=1, const Size2D &dilation=Size2D(1U, 1U), const ICLTensor *output_multipliers=nullptr, const ICLTensor *output_shifts=nullptr)
 Initialize the function's source, destination and parameters. More...
 
void configure (const CLCompileContext &compile_context, const ICLTensor *input, const ICLTensor *weights, const ICLTensor *biases, ICLTensor *output, const DWCWeightsKernelInfo &dwc_weights_info, const DWCKernelInfo &dwc_info, const PadStrideInfo &conv_info, unsigned int depth_multiplier=1, const Size2D &dilation=Size2D(1U, 1U), const ICLTensor *output_multipliers=nullptr, const ICLTensor *output_shifts=nullptr)
 Initialize the function's source, destination and parameters. More...
 
void run (const Window &window, cl::CommandQueue &queue) override
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
- Public Member Functions inherited from ICLKernel
 ICLKernel ()
 Constructor. More...
 
cl::Kernel & kernel ()
 Returns a reference to the OpenCL kernel of this object. More...
 
template<typename T >
void add_1D_array_argument (unsigned int &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed 1D array's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_2D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_2D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_3D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 3D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_4D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 4D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
virtual void run_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue)
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
template<typename T >
void add_argument (unsigned int &idx, T value)
 Add the passed parameters to the object's kernel's arguments starting from the index idx. More...
 
void set_lws_hint (const cl::NDRange &lws_hint)
 Set the Local-Workgroup-Size hint. More...
 
cl::NDRange lws_hint () const
 Return the Local-Workgroup-Size hint. More...
 
void set_wbsm_hint (const cl_int &wbsm_hint)
 Set the workgroup batch size modifier hint. More...
 
cl_int wbsm_hint () const
 Return the workgroup batch size modifier hint. More...
 
const std::string & config_id () const
 Get the configuration ID. More...
 
void set_target (GPUTarget target)
 Set the targeted GPU architecture. More...
 
void set_target (cl::Device &device)
 Set the targeted GPU architecture according to the CL device. More...
 
GPUTarget get_target () const
 Get the targeted GPU architecture. More...
 
size_t get_max_workgroup_size ()
 Get the maximum workgroup size for the device the CLKernelLibrary uses. More...
 
template<unsigned int dimension_size>
void add_tensor_argument (unsigned &idx, const ICLTensor *tensor, const Window &window)
 
template<typename T , unsigned int dimension_size>
void add_array_argument (unsigned &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed array's parameters to the object's kernel's arguments starting from the index idx. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *weights, const ITensorInfo *biases, const ITensorInfo *output, const DWCWeightsKernelInfo &dwc_weights_info, const DWCKernelInfo &dwc_info, const PadStrideInfo &conv_info, unsigned int depth_multiplier=1, const Size2D &dilation=Size2D(1U, 1U), const ITensorInfo *output_multipliers=nullptr, const ITensorInfo *output_shifts=nullptr)
 Static function to check if given info will lead to a valid configuration of CLDepthwiseConvolutionLayerNativeKernel. More...
 
- Static Public Member Functions inherited from ICLKernel
static constexpr unsigned int num_arguments_per_1D_array ()
 Returns the number of arguments enqueued per 1D array object. More...
 
static constexpr unsigned int num_arguments_per_1D_tensor ()
 Returns the number of arguments enqueued per 1D tensor object. More...
 
static constexpr unsigned int num_arguments_per_2D_tensor ()
 Returns the number of arguments enqueued per 2D tensor object. More...
 
static constexpr unsigned int num_arguments_per_3D_tensor ()
 Returns the number of arguments enqueued per 3D tensor object. More...
 
static constexpr unsigned int num_arguments_per_4D_tensor ()
 Returns the number of arguments enqueued per 4D tensor object. More...
 
static cl::NDRange gws_from_window (const Window &window)
 Get the global work size given an execution window. More...
 

Detailed Description

Interface for the kernel to run an MxN depthwise convolution.

M and N are respectively the rows and columns of the filter. This kernel assumes that the weights tensor is NOT reshaped (Native version).

Definition at line 37 of file CLDepthwiseConvolutionLayerNativeKernel.h.
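As a plain-C++ illustration of what this kernel computes, the following is a hedged reference sketch of an MxN depthwise convolution in NHWC layout with a depth multiplier. It is illustrative only: the names, the zero-padding handling, and the weight layout are assumptions for this sketch, not the library's OpenCL implementation.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Naive MxN depthwise convolution, NHWC layout (channels fastest-varying).
// Each input channel c is convolved with its own KHxKW filter; with a
// depth_multiplier m, channel c produces output channels c*m .. c*m + m-1.
std::vector<float> depthwise_conv_nhwc(const std::vector<float> &src, int H, int W, int C,
                                       const std::vector<float> &wei, int KH, int KW,
                                       int depth_multiplier, int stride, int pad)
{
    const int OH = (H + 2 * pad - KH) / stride + 1;
    const int OW = (W + 2 * pad - KW) / stride + 1;
    const int OC = C * depth_multiplier;
    std::vector<float> dst(static_cast<size_t>(OH) * OW * OC, 0.f);
    for(int oy = 0; oy < OH; ++oy)
        for(int ox = 0; ox < OW; ++ox)
            for(int c = 0; c < C; ++c)
                for(int m = 0; m < depth_multiplier; ++m)
                {
                    float acc = 0.f;
                    for(int ky = 0; ky < KH; ++ky)
                        for(int kx = 0; kx < KW; ++kx)
                        {
                            const int iy = oy * stride + ky - pad;
                            const int ix = ox * stride + kx - pad;
                            if(iy < 0 || iy >= H || ix < 0 || ix >= W)
                                continue; // implicit zero padding
                            acc += src[(iy * W + ix) * C + c] *
                                   wei[(ky * KW + kx) * OC + c * depth_multiplier + m];
                        }
                    dst[(oy * OW + ox) * OC + c * depth_multiplier + m] = acc;
                }
    return dst;
}
```

For example, a 3x3 input convolved with a 3x3 all-ones filter (no padding, stride 1) produces a single output equal to the sum of the inputs.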

Constructor & Destructor Documentation

◆ CLDepthwiseConvolutionLayerNativeKernel() [1/3]

Default Constructor.

Definition at line 128 of file CLDepthwiseConvolutionLayerNativeKernel.cpp.

    : _input(nullptr),
      _weights(nullptr),
      _biases(nullptr),
      _output(nullptr),
      _depth_multiplier(1),
      _output_multipliers(nullptr),
      _output_shifts(nullptr),
      _is_quantized(false)
{
}

◆ CLDepthwiseConvolutionLayerNativeKernel() [2/3]

Prevent instances of this class from being copied (As this class contains pointers)

◆ CLDepthwiseConvolutionLayerNativeKernel() [3/3]

Allow instances of this class to be moved.

Member Function Documentation

◆ configure() [1/2]

void configure ( const ICLTensor *            input,
                 const ICLTensor *            weights,
                 const ICLTensor *            biases,
                 ICLTensor *                  output,
                 const DWCWeightsKernelInfo & dwc_weights_info,
                 const DWCKernelInfo &        dwc_info,
                 const PadStrideInfo &        conv_info,
                 unsigned int                 depth_multiplier = 1,
                 const Size2D &               dilation = Size2D(1U, 1U),
                 const ICLTensor *            output_multipliers = nullptr,
                 const ICLTensor *            output_shifts = nullptr
               )

Initialize the function's source, destination and parameters.

Parameters
  [in]   input               Source tensor. Data type supported: QASYMM8/QASYMM8_SIGNED/FP32/FP16. Data layout supported: NHWC
  [in]   weights             Weights tensor. A 3D tensor with dimensions [IFM, N, M]. Data type supported: same as input, or QASYMM8/QASYMM8_SIGNED/QSYMM8_PER_CHANNEL when input is QASYMM8.
  [in]   biases              Biases tensor. A 1D tensor with dimensions [IFM]. Must be nullptr if not needed. Data type supported: same as input, or S32 when input is QASYMM8/QASYMM8_SIGNED.
  [out]  output              Destination tensor. Data type supported: same as input.
  [in]   dwc_weights_info    Depthwise convolution layer weights info, used to retrieve the number of output elements processed by each thread
  [in]   dwc_info            Depthwise convolution layer info
  [in]   conv_info           Padding and stride information to use for the convolution.
  [in]   depth_multiplier    (Optional) Multiplier to apply to the input's depth in order to retrieve the output's depth. Defaults to 1.
  [in]   dilation            (Optional) Dilation, in elements, across x and y. Defaults to (1, 1).
  [in]   output_multipliers  (Optional) Output multipliers tensor for quantized computations. In case of per-channel quantization, the number of multipliers must be equal to the number of filters (IFM). Supported data types: S32
  [in]   output_shifts       (Optional) Output shifts tensor for quantized computations. In case of per-channel quantization, the number of shifts must be equal to the number of filters (IFM). Supported data types: S32

Definition at line 140 of file CLDepthwiseConvolutionLayerNativeKernel.cpp.

References CLKernelLibrary::get().

{
    configure(CLKernelLibrary::get().get_compile_context(), input, weights, biases, output, dwc_weights_info, dwc_info, conv_info, depth_multiplier, dilation, output_multipliers, output_shifts);
}

◆ configure() [2/2]

void configure ( const CLCompileContext &     compile_context,
                 const ICLTensor *            input,
                 const ICLTensor *            weights,
                 const ICLTensor *            biases,
                 ICLTensor *                  output,
                 const DWCWeightsKernelInfo & dwc_weights_info,
                 const DWCKernelInfo &        dwc_info,
                 const PadStrideInfo &        conv_info,
                 unsigned int                 depth_multiplier = 1,
                 const Size2D &               dilation = Size2D(1U, 1U),
                 const ICLTensor *            output_multipliers = nullptr,
                 const ICLTensor *            output_shifts = nullptr
               )

Initialize the function's source, destination and parameters.

Parameters
  [in]   compile_context     The compile context to be used.
  [in]   input               Source tensor. Data type supported: QASYMM8/QASYMM8_SIGNED/FP32/FP16. Data layout supported: NHWC
  [in]   weights             Weights tensor. A 3D tensor with dimensions [IFM, N, M]. Data type supported: same as input, or QASYMM8/QASYMM8_SIGNED/QSYMM8_PER_CHANNEL when input is QASYMM8.
  [in]   biases              Biases tensor. A 1D tensor with dimensions [IFM]. Must be nullptr if not needed. Data type supported: same as input, or S32 when input is QASYMM8/QASYMM8_SIGNED.
  [out]  output              Destination tensor. Data type supported: same as input.
  [in]   dwc_weights_info    Depthwise convolution layer weights info, used to retrieve the number of output elements processed by each thread
  [in]   dwc_info            Depthwise convolution layer info
  [in]   conv_info           Padding and stride information to use for the convolution.
  [in]   depth_multiplier    (Optional) Multiplier to apply to the input's depth in order to retrieve the output's depth. Defaults to 1.
  [in]   dilation            (Optional) Dilation, in elements, across x and y. Defaults to (1, 1).
  [in]   output_multipliers  (Optional) Output multipliers tensor for quantized computations. In case of per-channel quantization, the number of multipliers must be equal to the number of filters (IFM). Supported data types: S32
  [in]   output_shifts       (Optional) Output shifts tensor for quantized computations. In case of per-channel quantization, the number of shifts must be equal to the number of filters (IFM). Supported data types: S32
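The destination shape derived from conv_info, depth_multiplier and dilation follows the standard convolution arithmetic: each spatial dimension shrinks by the dilated kernel extent, and the channel count is IFM * depth_multiplier. A minimal sketch of that arithmetic (the helper names here are hypothetical, not library functions):

```cpp
// Output extent of one spatial dimension: the dilated kernel covers
// dilation*(k-1)+1 input elements, so with padding p_b/p_a and stride s:
//   out = (in + p_b + p_a - (dilation*(k-1)+1)) / s + 1
int out_dim(int in, int k, int stride, int pad_before, int pad_after, int dilation)
{
    const int k_eff = dilation * (k - 1) + 1; // effective (dilated) kernel extent
    return (in + pad_before + pad_after - k_eff) / stride + 1;
}

// Depthwise convolution output channels: each input channel produces
// depth_multiplier output channels.
int out_channels(int ifm, int depth_multiplier)
{
    return ifm * depth_multiplier;
}
```

For example, a 7-element dimension with a 3-tap kernel, stride 1 and no padding yields 5 output elements; with dilation 2, symmetric padding 1 and stride 2 it yields 3.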

Definition at line 147 of file CLDepthwiseConvolutionLayerNativeKernel.cpp.

References CLBuildOptions::add_option_if(), arm_compute::adjust_vec_size(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), arm_compute::calculate_max_window(), arm_compute::quantization::calculate_quantized_multiplier(), ICloneable< T >::clone(), arm_compute::misc::shape_calculator::compute_depthwise_convolution_shape(), arm_compute::test::validation::conv_info, arm_compute::create_kernel(), ITensorInfo::data_type(), ITensorInfo::dimension(), arm_compute::float_to_string_with_full_precision(), arm_compute::get_cl_type_from_data_type(), arm_compute::get_padding_info(), arm_compute::get_quantized_activation_min_max(), arm_compute::has_padding_changed(), ITensor::info(), arm_compute::test::validation::input, arm_compute::is_data_type_quantized(), arm_compute::is_data_type_quantized_per_channel(), kernel_name, arm_compute::lower_string(), DWCWeightsKernelInfo::n0, UniformQuantizationInfo::offset, ITensorInfo::quantization_info(), UniformQuantizationInfo::scale, arm_compute::string_from_activation_func(), arm_compute::string_from_data_type(), ITensorInfo::tensor_shape(), arm_compute::support::cpp11::to_string(), TensorShape::total_size_upper(), QuantizationInfo::uniform(), and arm_compute::validate_arguments().

{
    ARM_COMPUTE_ERROR_ON_NULLPTR(input, weights, output);
    ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(input->info(), weights->info(), (biases != nullptr) ? biases->info() : nullptr, output->info(),
                                                  dwc_weights_info, dwc_info, conv_info, depth_multiplier, dilation,
                                                  (output_multipliers != nullptr) ? output_multipliers->info() : nullptr, (output_shifts != nullptr) ? output_shifts->info() : nullptr));

    auto padding_info = get_padding_info({ input, output });

    const TensorShape output_shape = arm_compute::misc::shape_calculator::compute_depthwise_convolution_shape(*(input->info()), *(weights->info()), conv_info, depth_multiplier, dilation);
    auto_init_if_empty(*(output->info()), input->info()->clone()->set_tensor_shape(output_shape).set_quantization_info(output->info()->quantization_info()));

    _input              = input;
    _output             = output;
    _weights            = weights;
    _biases             = biases;
    _depth_multiplier   = depth_multiplier;
    _output_multipliers = output_multipliers;
    _output_shifts      = output_shifts;
    _is_quantized       = is_data_type_quantized(input->info()->data_type());

    const unsigned int n0 = adjust_vec_size(dwc_weights_info.n0, input->info()->dimension(0));

    CLBuildOptions build_opts;
    build_opts.add_option_if(_biases != nullptr, "-DHAS_BIAS");
    build_opts.add_option_if(_input->info()->tensor_shape().total_size_upper(3) > 1, "-DDST_DEPTH=" + support::cpp11::to_string(static_cast<int>(_output->info()->dimension(2))));
    build_opts.add_option("-DDATA_TYPE=" + get_cl_type_from_data_type(_input->info()->data_type()));
    build_opts.add_option("-DACTIVATION_TYPE=" + lower_string(string_from_activation_func(dwc_info.activation_info.activation())));
    build_opts.add_option("-DDEPTH_MULTIPLIER=" + support::cpp11::to_string(depth_multiplier));
    build_opts.add_option("-DN0=" + support::cpp11::to_string(n0));
    build_opts.add_option("-DSRC_DIM1=" + support::cpp11::to_string(_input->info()->dimension(1)));
    build_opts.add_option("-DSRC_DIM2=" + support::cpp11::to_string(_input->info()->dimension(2)));
    build_opts.add_option("-DKERNEL_WIDTH=" + support::cpp11::to_string(weights->info()->dimension(1)));
    build_opts.add_option("-DKERNEL_HEIGHT=" + support::cpp11::to_string(weights->info()->dimension(2)));
    build_opts.add_option("-DCONV_PAD_TOP=" + support::cpp11::to_string(conv_info.pad_top()));
    build_opts.add_option("-DCONV_PAD_LEFT=" + support::cpp11::to_string(conv_info.pad_left()));
    build_opts.add_option("-DCONV_STRIDE_X=" + support::cpp11::to_string(conv_info.stride().first));
    build_opts.add_option("-DCONV_STRIDE_Y=" + support::cpp11::to_string(conv_info.stride().second));
    build_opts.add_option("-DDILATION_X=" + support::cpp11::to_string(dilation.x()));
    build_opts.add_option("-DDILATION_Y=" + support::cpp11::to_string(dilation.y()));
    build_opts.add_option("-DVEC_SIZE_LEFTOVER=" + support::cpp11::to_string(_input->info()->dimension(0) % n0));

    std::string kernel_name = (_is_quantized) ? "dwc_MxN_native_quantized8_nhwc" : "dwc_MxN_native_fp_nhwc";

    if(_is_quantized)
    {
        const UniformQuantizationInfo iq_info = _input->info()->quantization_info().uniform();
        const UniformQuantizationInfo wq_info = _weights->info()->quantization_info().uniform();
        const UniformQuantizationInfo oq_info = _output->info()->quantization_info().uniform();

        build_opts.add_option("-DINPUT_OFFSET=" + support::cpp11::to_string(-iq_info.offset));
        build_opts.add_option("-DWEIGHTS_OFFSET=" + support::cpp11::to_string(-wq_info.offset));
        build_opts.add_option("-DOUTPUT_OFFSET=" + support::cpp11::to_string(oq_info.offset));
        build_opts.add_option_if(is_data_type_quantized_per_channel(weights->info()->data_type()), "-DPER_CHANNEL_QUANTIZATION");

        // Compute non-per-channel multiplier and shift anyway to make OpenCL kernel simpler
        float multiplier        = iq_info.scale * wq_info.scale / oq_info.scale;
        int   output_multiplier = 0;
        int   output_shift      = 0;
        quantization::calculate_quantized_multiplier(multiplier, &output_multiplier, &output_shift);
        build_opts.add_option("-DOUTPUT_MULTIPLIER=" + support::cpp11::to_string(output_multiplier));
        build_opts.add_option("-DOUTPUT_SHIFT=" + support::cpp11::to_string(output_shift));

        if(dwc_info.activation_info.enabled())
        {
            int a_val{};
            int b_val{};
            std::tie(b_val, a_val) = get_quantized_activation_min_max(dwc_info.activation_info, input->info()->data_type(), oq_info);

            const int o1 = oq_info.offset;

            build_opts.add_option("-DA_VAL=" + support::cpp11::to_string(a_val));
            build_opts.add_option("-DB_VAL=" + support::cpp11::to_string(b_val));
            build_opts.add_option("-DCONST_0=" + support::cpp11::to_string(o1));

            const float s1 = iq_info.scale;
            build_opts.add_option("-DS1_VAL=" + float_to_string_with_full_precision(s1));
            build_opts.add_option("-DO1_VAL=" + support::cpp11::to_string(o1));
        }

        build_opts.add_option("-DDATA_TYPE=" + get_cl_type_from_data_type(input->info()->data_type()));
        build_opts.add_option("-DWEIGHTS_TYPE=" + get_cl_type_from_data_type(weights->info()->data_type()));
    }
    else
    {
        build_opts.add_option_if(dwc_info.activation_info.enabled(), "-DA_VAL=" + float_to_string_with_full_precision(dwc_info.activation_info.a()));
        build_opts.add_option_if(dwc_info.activation_info.enabled(), "-DB_VAL=" + float_to_string_with_full_precision(dwc_info.activation_info.b()));
    }

    Window win = calculate_max_window(*(output->info()), Steps(n0));
    ICLKernel::configure_internal(win);

    _kernel = create_kernel(compile_context, kernel_name, build_opts.options());

    ARM_COMPUTE_ERROR_ON(has_padding_changed(padding_info));

    // Set config_id for enabling LWS tuning
    _config_id = kernel_name;
    _config_id += "_";
    _config_id += support::cpp11::to_string(input->info()->dimension(0));
    _config_id += "_";
    _config_id += support::cpp11::to_string(input->info()->dimension(1));
    _config_id += "_";
    _config_id += support::cpp11::to_string(input->info()->dimension(2));
    _config_id += "_";
    _config_id += support::cpp11::to_string(output->info()->dimension(0));
    _config_id += "_";
    _config_id += support::cpp11::to_string(output->info()->dimension(1));
    _config_id += "_";
    _config_id += support::cpp11::to_string(output->info()->dimension(2));
    _config_id += "_";
    _config_id += string_from_data_type(input->info()->data_type());
}
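In the quantized path above, the real-valued multiplier iq_info.scale * wq_info.scale / oq_info.scale is decomposed into an integer multiplier and a shift for use on the device. The following is a standalone sketch of that decomposition, assuming the usual Q0.31 fixed-point convention; it approximates what quantization::calculate_quantized_multiplier() does but is not the library's exact code:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Express a positive real multiplier as quant_mult * 2^(-31 - shift),
// i.e. a Q0.31 fixed-point multiply followed by a right shift.
void quantize_multiplier_sketch(float multiplier, int32_t *quant_mult, int32_t *shift)
{
    int exponent = 0;
    // multiplier = q * 2^exponent, with q in [0.5, 1)
    const double q = std::frexp(static_cast<double>(multiplier), &exponent);
    *shift = -exponent;
    auto q_fixed = static_cast<int64_t>(std::llround(q * (1LL << 31)));
    if(q_fixed == (1LL << 31)) // q rounded up to exactly 1.0: renormalize
    {
        q_fixed /= 2;
        --*shift;
    }
    *quant_mult = static_cast<int32_t>(q_fixed);
}
```

For instance, a multiplier of 0.5 maps to quant_mult = 2^30 with shift 0, and 0.25 maps to the same quant_mult with shift 1 (one extra right shift).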

◆ operator=() [1/2]

Prevent instances of this class from being copied (As this class contains pointers)

◆ operator=() [2/2]

Allow instances of this class to be moved.

◆ run()

void run ( const Window &      window,
           cl::CommandQueue &  queue
         ) override

Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.

Note
The queue is not flushed by this method, and therefore the kernel will not have been executed by the time this method returns.
Parameters
  [in]      window  Region on which to execute the kernel. (Must be a valid region of the window returned by window()).
  [in,out]  queue   Command queue on which to enqueue the kernel.

Reimplemented from ICLKernel.

Definition at line 272 of file CLDepthwiseConvolutionLayerNativeKernel.cpp.

References ICLKernel::add_1D_tensor_argument(), ICLKernel::add_3D_tensor_argument(), ICLKernel::add_4D_tensor_argument(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, Window::collapse(), Window::DimX, Window::DimZ, arm_compute::enqueue(), Window::first_slice_window_4D(), ITensor::info(), ICLKernel::lws_hint(), ICLKernel::num_arguments_per_3D_tensor(), ICLKernel::num_arguments_per_4D_tensor(), Window::set(), Window::slide_window_slice_4D(), Window::Dimension::step(), ITensorInfo::tensor_shape(), IKernel::window(), and Window::x().

{
    ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
    ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(IKernel::window(), window);

    // Collapse window
    Window window_collapsed = window.collapse(ICLKernel::window(), Window::DimZ);
    Window slice_in         = window.first_slice_window_4D();
    Window slice_out        = window_collapsed.first_slice_window_4D();

    if(_depth_multiplier != 1)
    {
        ARM_COMPUTE_ERROR_ON(slice_out.x().step() != 1);
        slice_out.set(Window::DimX, Window::Dimension(0, _input->info()->tensor_shape()[0], 1));
    }

    unsigned int idx = 2 * num_arguments_per_4D_tensor() + num_arguments_per_3D_tensor();

    // Set output multipliers in case of quantized data type
    if(_is_quantized)
    {
        add_1D_tensor_argument(idx, _output_multipliers, slice_in);
        add_1D_tensor_argument(idx, _output_shifts, slice_in);
    }

    if(_biases != nullptr)
    {
        add_1D_tensor_argument(idx, _biases, slice_in);
    }

    do
    {
        idx = 0;
        add_4D_tensor_argument(idx, _input, slice_in);
        add_4D_tensor_argument(idx, _output, slice_out);
        add_3D_tensor_argument(idx, _weights, slice_out);
        enqueue(queue, *this, slice_out, lws_hint());
    }
    while(window_collapsed.slide_window_slice_4D(slice_out) && window.slide_window_slice_4D(slice_in));
}
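The do/while loop above walks the execution window slice by slice: first_slice_window_4D() yields the first slice and slide_window_slice_4D() advances it, returning false once the window is exhausted. A toy model of that slicing contract (ToyWindow is an illustrative stand-in, not the library's Window class):

```cpp
#include <array>
#include <cassert>

// Toy model of 4D slicing: a slice fixes the coordinates of all dimensions
// above the first four, so the number of slices is the product of those
// outer extents. slide_slice_4D() advances to the next slice (odometer-style
// carry) and returns false when every slice has been visited.
struct ToyWindow
{
    std::array<int, 6> end{}; // extent of each of 6 dimensions
    std::array<int, 6> cur{}; // current coordinates of the outer dimensions

    bool slide_slice_4D()
    {
        for(int d = 4; d < 6; ++d)
        {
            if(++cur[d] < end[d])
                return true; // another slice is available
            cur[d] = 0;      // wrap this dimension, carry into the next
        }
        return false; // all slices consumed
    }
};
```

A window with outer extents 2 and 3 is therefore executed as 6 slices, matching the enqueue-per-slice pattern in run().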

◆ validate()

static Status validate ( const ITensorInfo *          input,
                         const ITensorInfo *          weights,
                         const ITensorInfo *          biases,
                         const ITensorInfo *          output,
                         const DWCWeightsKernelInfo & dwc_weights_info,
                         const DWCKernelInfo &        dwc_info,
                         const PadStrideInfo &        conv_info,
                         unsigned int                 depth_multiplier = 1,
                         const Size2D &               dilation = Size2D(1U, 1U),
                         const ITensorInfo *          output_multipliers = nullptr,
                         const ITensorInfo *          output_shifts = nullptr
                       )

Static function to check if given info will lead to a valid configuration of CLDepthwiseConvolutionLayerNativeKernel.

Parameters
  [in]   input               Source tensor info. Data type supported: QASYMM8/QASYMM8_SIGNED/FP32/FP16. Data layout supported: NHWC
  [in]   weights             Weights tensor info. A 3D tensor with dimensions [IFM, N, M]. Data type supported: same as input, or QASYMM8/QASYMM8_SIGNED/QSYMM8_PER_CHANNEL when input is QASYMM8.
  [in]   biases              Biases tensor info. A 1D tensor with dimensions [IFM]. Must be nullptr if not needed. Data type supported: same as input, or S32 when input is QASYMM8/QASYMM8_SIGNED.
  [in]   output              Destination tensor info. Data type supported: same as input.
  [in]   dwc_weights_info    Depthwise convolution layer weights info, used to retrieve the number of output elements processed by each thread
  [in]   dwc_info            Depthwise convolution layer info
  [in]   conv_info           Padding and stride information to use for the convolution.
  [in]   depth_multiplier    (Optional) Multiplier to apply to the input's depth in order to retrieve the output's depth. Defaults to 1.
  [in]   dilation            (Optional) Dilation, in elements, across x and y. Defaults to (1, 1).
  [in]   output_multipliers  (Optional) Output multipliers tensor info for quantized computations. In case of per-channel quantization, the number of multipliers must be equal to the number of filters (IFM). Supported data types: S32
  [in]   output_shifts       (Optional) Output shifts tensor info for quantized computations. In case of per-channel quantization, the number of shifts must be equal to the number of filters (IFM). Supported data types: S32
Returns
a status
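validate() composes per-argument checks and propagates the first failure via ARM_COMPUTE_RETURN_ON_ERROR, so callers can vet a configuration before allocating any tensors. A simplified, self-contained sketch of that idiom (the Status type, the macro, and the check functions here are stand-ins for the library's, kept deliberately minimal):

```cpp
#include <cassert>
#include <string>

// Minimal stand-in for arm_compute::Status: success by default,
// or a failure carrying an error message.
struct Status
{
    bool        ok{ true };
    std::string error{};
};

// Stand-in for ARM_COMPUTE_RETURN_ON_ERROR: evaluate a check and
// short-circuit out of the enclosing function on the first failure.
#define RETURN_ON_ERROR(status)   \
    do                            \
    {                             \
        Status s_ = (status);     \
        if(!s_.ok)                \
            return s_;            \
    } while(false)

Status check_depth_multiplier(unsigned int depth_multiplier)
{
    if(depth_multiplier == 0)
        return { false, "depth_multiplier must be > 0" };
    return {};
}

Status check_dilation(unsigned int dx, unsigned int dy)
{
    if(dx < 1 || dy < 1)
        return { false, "dilation must be >= 1 in both dimensions" };
    return {};
}

// Sketch of validate(): runs every check, returns the first failure,
// and performs no allocation or configuration.
Status validate_sketch(unsigned int depth_multiplier, unsigned int dx, unsigned int dy)
{
    RETURN_ON_ERROR(check_depth_multiplier(depth_multiplier));
    RETURN_ON_ERROR(check_dilation(dx, dy));
    return Status{};
}
```

Because the checks are pure, the same logic can back both the static validate() entry point and the THROW_ON assertion inside configure().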

Definition at line 264 of file CLDepthwiseConvolutionLayerNativeKernel.cpp.

References ARM_COMPUTE_RETURN_ON_ERROR, and arm_compute::validate_arguments().

{
    ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(input, weights, biases, output, dwc_weights_info, dwc_info, conv_info, depth_multiplier, dilation, output_multipliers, output_shifts));
    return Status{};
}

The documentation for this class was generated from the following files:
CLDepthwiseConvolutionLayerNativeKernel.h
CLDepthwiseConvolutionLayerNativeKernel.cpp