Compute Library
 22.11
ClWinogradOutputTransformKernel Class Reference

Interface for the Winograd output transform kernel. More...

#include <ClWinogradOutputTransformKernel.h>

Collaboration diagram for ClWinogradOutputTransformKernel:

Public Member Functions

 ClWinogradOutputTransformKernel ()
 
 ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE (ClWinogradOutputTransformKernel)
 
void configure (const ClCompileContext &compile_context, ITensorInfo *src, ITensorInfo *bias, ITensorInfo *dst, const WinogradInfo &winograd_info, const ActivationLayerInfo &act_info=ActivationLayerInfo())
 Set the input and output tensor. More...
 
void run_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue) override
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
- Public Member Functions inherited from ICLKernel
 ICLKernel ()
 Constructor. More...
 
cl::Kernel & kernel ()
 Returns a reference to the OpenCL kernel of this object. More...
 
CLKernelType type () const
 Returns the CL kernel type. More...
 
template<typename T >
void add_1D_array_argument (unsigned int &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed 1D array's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_2D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_2D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_3D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 3D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_4D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 4D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_5D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 5D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_3d_tensor_nhw_argument (unsigned int &idx, const ICLTensor *tensor)
 Add the passed NHW 3D tensor's parameters to the object's kernel's arguments by passing strides, dimensions and the offset to the first valid element in bytes. More...
 
void add_4d_tensor_nhwc_argument (unsigned int &idx, const ICLTensor *tensor)
 Add the passed NHWC 4D tensor's parameters to the object's kernel's arguments by passing strides, dimensions and the offset to the first valid element in bytes. More...
 
virtual void run (const Window &window, cl::CommandQueue &queue)
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
virtual void run_composite_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue, const experimental::dynamic_fusion::ClExecutionDescriptor &exec_desc)
 The execution is carried out through the run_op method, but run_op needs to be extended to include a ClExecutionDescriptor, as LWS/GWS tuning will now be separated from the IKernel. More...
 
template<typename T >
void add_argument (unsigned int &idx, T value)
 Add the passed parameters to the object's kernel's arguments starting from the index idx. More...
 
void set_lws_hint (const cl::NDRange &lws_hint)
 Set the Local-Workgroup-Size hint. More...
 
cl::NDRange lws_hint () const
 Return the Local-Workgroup-Size hint. More...
 
void set_wbsm_hint (const cl_int &wbsm_hint)
 Set the workgroup batch size modifier hint. More...
 
cl_int wbsm_hint () const
 Return the workgroup batch size modifier hint. More...
 
const std::string & config_id () const
 Get the configuration ID. More...
 
void set_target (GPUTarget target)
 Set the targeted GPU architecture. More...
 
void set_target (cl::Device &device)
 Set the targeted GPU architecture according to the CL device. More...
 
GPUTarget get_target () const
 Get the targeted GPU architecture. More...
 
size_t get_max_workgroup_size ()
 Get the maximum workgroup size for the device the CLKernelLibrary uses. More...
 
template<unsigned int dimension_size>
void add_tensor_argument (unsigned &idx, const ICLTensor *tensor, const Window &window)
 
template<typename T , unsigned int dimension_size>
void add_array_argument (unsigned &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed array's parameters to the object's kernel's arguments starting from the index idx. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 
bool is_window_configured () const
 Function to check if the embedded window of this kernel has been configured. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *src, const ITensorInfo *bias, const ITensorInfo *dst, const WinogradInfo &winograd_info, const ActivationLayerInfo &act_info=ActivationLayerInfo())
 Static function to check if given info will lead to a valid configuration. More...
 
- Static Public Member Functions inherited from ICLKernel
static constexpr unsigned int num_arguments_per_3d_tensor_nhw ()
 Returns the number of arguments enqueued per NHW 3D Tensor object. More...
 
static constexpr unsigned int num_arguments_per_4d_tensor_nhwc ()
 Returns the number of arguments enqueued per NHWC 4D Tensor object. More...
 
static constexpr unsigned int num_arguments_per_1D_array ()
 Returns the number of arguments enqueued per 1D array object. More...
 
static constexpr unsigned int num_arguments_per_1D_tensor ()
 Returns the number of arguments enqueued per 1D tensor object. More...
 
static constexpr unsigned int num_arguments_per_2D_tensor ()
 Returns the number of arguments enqueued per 2D tensor object. More...
 
static constexpr unsigned int num_arguments_per_3D_tensor ()
 Returns the number of arguments enqueued per 3D tensor object. More...
 
static constexpr unsigned int num_arguments_per_4D_tensor ()
 Returns the number of arguments enqueued per 4D tensor object. More...
 
static cl::NDRange gws_from_window (const Window &window)
 Get the global work size given an execution window. More...
 

Detailed Description

Interface for the Winograd output transform kernel.

Definition at line 39 of file ClWinogradOutputTransformKernel.h.

Constructor & Destructor Documentation

◆ ClWinogradOutputTransformKernel()

Definition at line 125 of file ClWinogradOutputTransformKernel.cpp.

References arm_compute::WINOGRAD.

126 {
127  _type = CLKernelType::WINOGRAD;
128 }

Member Function Documentation

◆ ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE()

ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE ( ClWinogradOutputTransformKernel  )

◆ configure()

void configure ( const ClCompileContext & compile_context,
  ITensorInfo * src,
  ITensorInfo * bias,
  ITensorInfo * dst,
  const WinogradInfo & winograd_info,
  const ActivationLayerInfo & act_info = ActivationLayerInfo()
)

Set the input and output tensor.

Note
Winograd output transform supports the following configurations for NCHW data layout F(output tile, kernel size): F(2x2, 3x3), F(2x1, 3x1), F(1x2, 1x3), F(4x4, 3x3), F(4x1, 3x1), F(1x4, 1x3), F(4x4, 5x5), F(4x1, 5x1), F(1x4, 1x5)
Winograd output transform supports the following configurations for NHWC data layout F(output tile, kernel size): F(4x4, 3x3), F(4x1, 3x1), F(1x4, 1x3), F(4x4, 5x5), F(4x1, 5x1), F(1x4, 1x5)

Strides: only unit strides

Parameters
[in]  compile_context  The compile context to be used.
[in]  src              Source tensor info with shape [C, N, K, batches]. Data types supported: F16/F32.
[in]  bias             Biases tensor info. Shared biases supported. Biases are 1D tensor with dimensions [OFM]. It can be a nullptr. Data type supported: as src
[out] dst              The output tensor info. The shape for this tensor can be calculated using the utility function compute_winograd_output_transform_shape. Data types supported: Same as src
[in]  winograd_info    Contains Winograd's information described in WinogradInfo
[in]  act_info         (Optional) Activation layer information in case of a fused activation.

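For orientation, a minimal configuration sketch follows. The tensor shapes, the WinogradInfo values and the use of CLKernelLibrary::get().get_compile_context() are illustrative assumptions rather than library test code; only the call pattern mirrors the interface documented above.

    // Sketch only: an F(4x4, 3x3) Winograd output transform in NHWC with unit strides.
    // Shapes are hypothetical (56x56 input, 32 output feature maps).
    using namespace arm_compute;
    using namespace arm_compute::opencl::kernels;
    using namespace arm_compute::misc::shape_calculator;

    const WinogradInfo winograd_info(Size2D(4U, 4U),            // output tile size
                                     Size2D(3U, 3U),            // kernel size
                                     Size2D(56U, 56U),          // input spatial dimensions
                                     PadStrideInfo(1, 1, 1, 1), // unit strides, 1-pixel padding
                                     DataLayout::NHWC);         // output data layout

    TensorInfo src(TensorShape(32U, 196U, 36U), 1, DataType::F32); // [C, N, K]: OFMs, tiles, tile elements (hypothetical)
    TensorInfo bias(TensorShape(32U), 1, DataType::F32);           // 1D biases with dimensions [OFM]
    TensorInfo dst(compute_winograd_output_transform_shape(src, winograd_info), 1, DataType::F32);

    ClWinogradOutputTransformKernel kernel;
    if(bool(ClWinogradOutputTransformKernel::validate(&src, &bias, &dst, winograd_info)))
    {
        kernel.configure(CLKernelLibrary::get().get_compile_context(), &src, &bias, &dst, winograd_info);
    }
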
Definition at line 130 of file ClWinogradOutputTransformKernel.cpp.

References ActivationLayerInfo::a(), ActivationLayerInfo::activation(), CLBuildOptions::add_option(), CLBuildOptions::add_option_if(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), ActivationLayerInfo::b(), bias, arm_compute::BIFROST, ActivationLayerInfo::BOUNDED_RELU, ICloneable< T >::clone(), arm_compute::compute_winograd_convolution_tiles(), arm_compute::misc::shape_calculator::compute_winograd_output_transform_shape(), WinogradInfo::convolution_info, arm_compute::create_kernel(), ITensorInfo::data_type(), ITensorInfo::dimension(), ActivationLayerInfo::enabled(), arm_compute::F16, arm_compute::F32, arm_compute::float_to_string_with_full_precision(), arm_compute::G71, arm_compute::get_cl_type_from_data_type(), arm_compute::get_data_layout_dimension_index(), arm_compute::get_padding_info(), arm_compute::GPU_ARCH_MASK, arm_compute::has_padding_changed(), Size2D::height, arm_compute::HEIGHT, arm_compute::test::validation::idx_height, arm_compute::test::validation::idx_width, WinogradInfo::input_dimensions, kernel_name, WinogradInfo::kernel_size, arm_compute::lower_string(), ActivationLayerInfo::LU_BOUNDED_RELU, arm_compute::NHWC, CLBuildOptions::options(), WinogradInfo::output_data_layout, WinogradInfo::output_tile_size, arm_compute::test::validation::src, arm_compute::string_from_activation_func(), arm_compute::string_from_data_layout(), arm_compute::string_from_data_type(), ITensorInfo::tensor_shape(), Size2D::to_string(), arm_compute::support::cpp11::to_string(), TensorShape::total_size_upper(), arm_compute::upper_string(), arm_compute::cpu::kernels::validate_and_configure_window(), arm_compute::cpu::kernels::validate_arguments(), Size2D::width, arm_compute::WIDTH, Size2D::x(), and Size2D::y().

132 {
134 
135  // Output tensor auto initialization if not yet initialized
136  auto_init_if_empty(*dst, src->clone()->set_tensor_shape(compute_winograd_output_transform_shape(*src, winograd_info)));
137 
138  ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(src, bias, dst, winograd_info, act_info));
139 
140  // Configure kernel window
141  auto win_config = validate_and_configure_window(src, bias, dst, winograd_info.output_tile_size);
142  ARM_COMPUTE_ERROR_THROW_ON(win_config.first);
143  IClKernel::configure_internal(win_config.second);
144 
145  auto padding_info = get_padding_info({ src, bias, dst });
146 
147  _is_nhwc = winograd_info.output_data_layout == DataLayout::NHWC;
148 
149  // Compute num_tiles_x
150  const Size2D input_dimensions = winograd_info.input_dimensions;
151  const Size2D kernel_size = winograd_info.kernel_size;
152  const Size2D output_tile_size = winograd_info.output_tile_size;
153  const PadStrideInfo conv_info = winograd_info.convolution_info;
154  const int idx_width = get_data_layout_dimension_index(winograd_info.output_data_layout, DataLayoutDimension::WIDTH);
155  const int idx_height = get_data_layout_dimension_index(winograd_info.output_data_layout, DataLayoutDimension::HEIGHT);
156 
157  // Compute the number of output tiles along the x and y direction of size "output_tile_size"
158  const Size2D num_tiles = compute_winograd_convolution_tiles(input_dimensions,
159  kernel_size,
160  output_tile_size,
161  conv_info);
162  const size_t total_batches = dst->tensor_shape().total_size_upper(3);
163 
164  // Set build options
165  CLBuildOptions build_opts;
166  build_opts.add_option("-DACTIVATION_TYPE=" + lower_string(string_from_activation_func(act_info.activation())));
167  build_opts.add_option_if(act_info.enabled(), "-DA_VAL=" + float_to_string_with_full_precision(act_info.a()));
168  build_opts.add_option_if(act_info.enabled(), "-DB_VAL=" + float_to_string_with_full_precision(act_info.b()));
169 
170  if((output_tile_size.x() == 2) || (output_tile_size.x() == 1 && output_tile_size.y() == 2))
171  {
172  build_opts.add_option("-DVEC_SIZE=2");
173  }
174  else if((output_tile_size.x() == 4) || (output_tile_size.x() == 1 && output_tile_size.y() == 4))
175  {
176  build_opts.add_option("-DVEC_SIZE=4");
177  }
178 
179  _num_tiles_x = num_tiles.width;
180 
181  // Conditions of -cl-fast-relaxed-math causing accuracy issues can be traced from COMPMID-5324
182  const GPUTarget gpu_target = get_target();
183  const auto act_function = act_info.activation();
184  const auto src_data_type = src->data_type();
185 
186  if((gpu_target != GPUTarget::G71 && (gpu_target & GPUTarget::GPU_ARCH_MASK) == GPUTarget::BIFROST)
188  && (src_data_type == DataType::F32 || src_data_type == DataType::F16))
189  {
190  // -cl-fast-relaxed-math also sets -cl-finite-math-only and -cl-unsafe-math-optimizations
191  // to disable -cl-finite-math-only, we only include -cl-unsafe-math-optimizations
192  build_opts.add_option("-cl-unsafe-math-optimizations");
193  }
194  else
195  {
196  build_opts.add_option("-cl-fast-relaxed-math");
197  }
198 
199  if(_is_nhwc)
200  {
201  build_opts.add_option_if(bias != nullptr, std::string("-DHAS_BIAS"));
202  build_opts.add_option("-DN0=" + support::cpp11::to_string(win_config.second.x().step()));
203  build_opts.add_option("-DOUTPUT_TILE_W=" + support::cpp11::to_string(output_tile_size.width));
204  build_opts.add_option("-DOUTPUT_TILE_H=" + support::cpp11::to_string(output_tile_size.height));
205  build_opts.add_option("-DDATA_TYPE=" + get_cl_type_from_data_type(src_data_type));
206  build_opts.add_option_if(total_batches > 1, "-DSRC_DEPTH=" + support::cpp11::to_string(src->dimension(2)));
207  build_opts.add_option_if(winograd_info.kernel_size.height == 1, "-DWINOGRAD_OUTPUT_TRANSFORM_HORIZONTAL");
208  build_opts.add_option_if(winograd_info.kernel_size.width == 1, "-DWINOGRAD_OUTPUT_TRANSFORM_VERTICAL");
209  build_opts.add_option("-DNUM_TILES_X=" + support::cpp11::to_string(_num_tiles_x));
210  }
211  else
212  {
213  build_opts.add_option_if(bias != nullptr, std::string("-DHAS_BIAS"));
214  build_opts.add_option("-DN0=" + support::cpp11::to_string(win_config.second.x().step()));
215  build_opts.add_option("-DNUM_TILES_X=" + support::cpp11::to_string(num_tiles.width));
216  build_opts.add_option("-DOUTPUT_TILE_W=" + support::cpp11::to_string(output_tile_size.width));
217  build_opts.add_option("-DOUTPUT_TILE_H=" + support::cpp11::to_string(output_tile_size.height));
218  build_opts.add_option("-DDATA_TYPE=" + get_cl_type_from_data_type(src_data_type));
219  build_opts.add_option("-DSRC_HEIGHT=" + support::cpp11::to_string(src->dimension(1)));
220  build_opts.add_option("-DDST_WIDTH=" + support::cpp11::to_string(dst->dimension(idx_width)));
221  build_opts.add_option("-DDST_HEIGHT=" + support::cpp11::to_string(dst->dimension(idx_height)));
222  build_opts.add_option_if(total_batches > 1, "-DSRC_DEPTH=" + support::cpp11::to_string(src->dimension(2)));
223  build_opts.add_option_if(winograd_info.kernel_size.height == 1, "-DWINOGRAD_OUTPUT_TRANSFORM_HORIZONTAL");
224  build_opts.add_option_if(winograd_info.kernel_size.width == 1, "-DWINOGRAD_OUTPUT_TRANSFORM_VERTICAL");
225  }
226 
227  // Storing tensor dimensions to be sent later as kernel arguments
228  _src_height = src->dimension(1);
229  _dst_width = dst->dimension(idx_width);
230  _dst_height = dst->dimension(idx_height);
231 
232  // Create kernel
233  std::string kernel_name = "winograd_output_transform_" + output_tile_size.to_string() + "_" + kernel_size.to_string() + "_" + lower_string(string_from_data_layout(winograd_info.output_data_layout));
234 
235  // A macro guard to compile ONLY the kernel of interest
236  build_opts.add_option("-D" + upper_string(kernel_name));
237  _kernel = create_kernel(compile_context, kernel_name, build_opts.options());
238 
239  // Set config_id for enabling LWS tuning
240  _config_id = kernel_name;
241  _config_id += "_";
242  _config_id += lower_string(string_from_data_type(src_data_type));
243  _config_id += "_";
244  _config_id += support::cpp11::to_string(src->dimension(0));
245  _config_id += "_";
246  _config_id += support::cpp11::to_string(src->dimension(1));
247  _config_id += "_";
248  _config_id += support::cpp11::to_string(dst->dimension(0));
249  _config_id += "_";
250  _config_id += support::cpp11::to_string(dst->dimension(1));
251  _config_id += "_";
252  _config_id += lower_string(string_from_data_layout(winograd_info.output_data_layout));
253 
254  ARM_COMPUTE_ERROR_ON(has_padding_changed(padding_info) && _is_nhwc);
255 }
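
As a quick sanity check on the num_tiles computation above (a sketch of the arithmetic only; compute_winograd_convolution_tiles handles the exact rounding and padding bookkeeping), with unit strides the number of output tiles per spatial direction is

\[
\text{out}_w = \text{in}_w + \text{pad}_{\text{left}} + \text{pad}_{\text{right}} - \text{kernel}_w + 1,
\qquad
\text{num\_tiles}_x = \left\lceil \frac{\text{out}_w}{\text{output\_tile}_w} \right\rceil
\]

For example, a 56-wide input with one pixel of padding on each side, a 3x3 kernel and an F(4x4, 3x3) transform gives out_w = 56 + 1 + 1 - 3 + 1 = 56 and num_tiles_x = ceil(56 / 4) = 14.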

◆ run_op()

void run_op ( ITensorPack & tensors,
  const Window & window,
  cl::CommandQueue & queue
)
override virtual

Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.

Note
The queue is not flushed by this method, and therefore the kernel will not have been executed by the time this method returns.
Parameters
[in]     tensors  A vector containing the tensors to operate on.
[in]     window   Region on which to execute the kernel. (Must be a valid region of the window returned by window()).
[in,out] queue    Command queue on which to enqueue the kernel.

Reimplemented from ICLKernel.

Definition at line 264 of file ClWinogradOutputTransformKernel.cpp.

References arm_compute::ACL_DST, arm_compute::ACL_SRC_0, arm_compute::ACL_SRC_1, ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, Window::collapse_if_possible(), Window::DimX, Window::DimY, Window::DimZ, arm_compute::test::validation::dst, arm_compute::enqueue(), ITensorPack::get_const_tensor(), ITensorPack::get_tensor(), Window::set(), arm_compute::test::validation::reference::slice(), Window::slide_window_slice_3D(), arm_compute::test::validation::src, and Window::use_tensor_dimensions().

265 {
268 
269  auto src = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_SRC_0));
270  auto bias = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_SRC_1));
271  auto dst = utils::cast::polymorphic_downcast<ICLTensor *>(tensors.get_tensor(TensorType::ACL_DST));
272 
273  // Collapse window
274  Window window_collapsed = window.collapse_if_possible(IClKernel::window(), Window::DimZ);
275 
276  // Get initial windows
277  Window slice = window_collapsed.first_slice_window_4D();
278  slice.set(Window::DimZ, Window::Dimension(0, 1, 1));
279 
280  // Setup output slice
281  Window slice_out(slice);
282  slice_out.set(Window::DimX, Window::Dimension(0, 0, 0));
283  slice_out.set(Window::DimY, Window::Dimension(0, 0, 0));
284 
285  if(bias != nullptr)
286  {
287  unsigned int idx1 = 2 * num_arguments_per_4D_tensor();
288  Window slice_biases;
289  slice_biases.use_tensor_dimensions(bias->info()->tensor_shape());
290  add_1D_tensor_argument(idx1, bias, slice_biases);
291  }
292 
293  if(_is_nhwc)
294  {
295  unsigned int idx2 = 2 * num_arguments_per_4D_tensor() + ((bias != nullptr) ? num_arguments_per_1D_tensor() : 0);
296  _kernel.setArg(idx2++, static_cast<int>(dst->info()->total_size() - dst->info()->strides_in_bytes().y()));
297  _kernel.setArg<cl_int>(idx2++, _src_height);
298  _kernel.setArg<cl_int>(idx2++, _dst_width);
299  _kernel.setArg<cl_int>(idx2++, _dst_height);
300  }
301 
302  do
303  {
304  unsigned int idx = 0;
305  add_4D_tensor_argument(idx, src, slice);
306  add_4D_tensor_argument(idx, dst, slice_out);
307  enqueue(queue, *this, slice, lws_hint());
308  }
309  while(window.slide_window_slice_3D(slice) && window.slide_window_slice_3D(slice_out));
310 }
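
To show how the transform is enqueued, here is a hedged sketch. The CLTensor objects (src_tensor, bias_tensor, dst_tensor) are hypothetical and assumed to be allocated with infos matching the earlier configure() call; ITensorPack, the TensorType ids and the CLScheduler calls are the library API used above.

    // Pack the tensors under the ids the kernel expects and enqueue it.
    ITensorPack pack;
    pack.add_const_tensor(TensorType::ACL_SRC_0, &src_tensor);  // Winograd GEMM output
    pack.add_const_tensor(TensorType::ACL_SRC_1, &bias_tensor); // optional biases
    pack.add_tensor(TensorType::ACL_DST, &dst_tensor);          // transformed output

    // Process the kernel's full execution window on the scheduler's queue.
    kernel.run_op(pack, kernel.window(), CLScheduler::get().queue());
    CLScheduler::get().sync(); // run_op does not flush the queue, so synchronise explicitly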

◆ validate()

Status validate ( const ITensorInfo * src,
  const ITensorInfo * bias,
  const ITensorInfo * dst,
  const WinogradInfo & winograd_info,
  const ActivationLayerInfo & act_info = ActivationLayerInfo()
)
static

Static function to check if given info will lead to a valid configuration.

Similar to ClWinogradOutputTransformKernel::configure()

Returns
a status

Definition at line 257 of file ClWinogradOutputTransformKernel.cpp.

References ARM_COMPUTE_RETURN_ON_ERROR, ICloneable< T >::clone(), WinogradInfo::output_tile_size, arm_compute::cpu::kernels::validate_and_configure_window(), and arm_compute::cpu::kernels::validate_arguments().

258 {
259  ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(src, (bias != nullptr ? bias->clone().get() : nullptr), dst, winograd_info, act_info));
260  ARM_COMPUTE_RETURN_ON_ERROR(validate_and_configure_window(src->clone().get(), (bias != nullptr ? bias->clone().get() : nullptr), dst->clone().get(), winograd_info.output_tile_size).first);
261  return Status{};
262 }
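
For completeness, a small sketch of the early validation pattern; the infos mirror those assumed in the configure() example above, and the error-reporting style is illustrative only.

    // Query support before allocating any OpenCL resources.
    const Status status = ClWinogradOutputTransformKernel::validate(&src, &bias, &dst, winograd_info);
    if(!bool(status))
    {
        std::cerr << "Winograd output transform rejected: " << status.error_description() << std::endl;
    }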

The documentation for this class was generated from the following files:

ClWinogradOutputTransformKernel.h
ClWinogradOutputTransformKernel.cpp