Compute Library
 23.11
ClGemmLowpMatrixMultiplyNativeKernel Class Reference

OpenCL kernel to multiply matrices with QASYMM8/QASYMM8_SIGNED data type. More...

#include <ClGemmLowpMatrixMultiplyNativeKernel.h>

Collaboration diagram for ClGemmLowpMatrixMultiplyNativeKernel (diagram omitted).

Public Member Functions

 ClGemmLowpMatrixMultiplyNativeKernel ()
 
 ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE (ClGemmLowpMatrixMultiplyNativeKernel)
 
void configure (const CLCompileContext &compile_context, const ITensorInfo *src0, ITensorInfo *src1, ITensorInfo *dst, const GEMMLHSMatrixInfo &lhs_info, const GEMMRHSMatrixInfo &rhs_info, const GEMMReshapeInfo &gemm_info)
 Initialise the kernel's input and dst. More...
 
void run_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue) override
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
- Public Member Functions inherited from ICLKernel
 ICLKernel ()
 Constructor. More...
 
cl::Kernel & kernel ()
 Returns a reference to the OpenCL kernel of this object. More...
 
CLKernelType type () const
 Returns the CL kernel type. More...
 
template<typename T >
void add_1D_array_argument (unsigned int &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed 1D array's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_2D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_2D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_3D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 3D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_4D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 4D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_5D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 5D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_3d_tensor_nhw_argument (unsigned int &idx, const ICLTensor *tensor)
 Add the passed NHW 3D tensor's parameters to the object's kernel's arguments by passing strides, dimensions and the offset to the first valid element in bytes. More...
 
void add_4d_tensor_nhwc_argument (unsigned int &idx, const ICLTensor *tensor)
 Add the passed NHWC 4D tensor's parameters to the object's kernel's arguments by passing strides, dimensions and the offset to the first valid element in bytes. More...
 
virtual void run (const Window &window, cl::CommandQueue &queue)
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
template<typename T >
void add_argument (unsigned int &idx, T value)
 Add the passed parameters to the object's kernel's arguments starting from the index idx. More...
 
void set_lws_hint (const cl::NDRange &lws_hint)
 Set the Local-Workgroup-Size hint. More...
 
cl::NDRange lws_hint () const
 Return the Local-Workgroup-Size hint. More...
 
void set_wbsm_hint (const cl_int &wbsm_hint)
 Set the workgroup batch size modifier hint. More...
 
cl_int wbsm_hint () const
 Return the workgroup batch size modifier hint. More...
 
const std::string & config_id () const
 Get the configuration ID. More...
 
void set_target (GPUTarget target)
 Set the targeted GPU architecture. More...
 
void set_target (cl::Device &device)
 Set the targeted GPU architecture according to the CL device. More...
 
GPUTarget get_target () const
 Get the targeted GPU architecture. More...
 
size_t get_max_workgroup_size ()
 Get the maximum workgroup size for the device the CLKernelLibrary uses. More...
 
cl::NDRange get_cached_gws () const
 Get the cached gws used to enqueue this kernel. More...
 
void cache_gws (const cl::NDRange &gws)
 Cache the latest gws used to enqueue this kernel. More...
 
template<unsigned int dimension_size>
void add_tensor_argument (unsigned &idx, const ICLTensor *tensor, const Window &window)
 
template<typename T , unsigned int dimension_size>
void add_array_argument (unsigned &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed array's parameters to the object's kernel's arguments starting from the index idx. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 
bool is_window_configured () const
 Function to check if the embedded window of this kernel has been configured. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *src0, const ITensorInfo *src1, const ITensorInfo *dst, const GEMMLHSMatrixInfo &lhs_info, const GEMMRHSMatrixInfo &rhs_info, const GEMMReshapeInfo &gemm_info)
 Static function to check if given info will lead to a valid configuration. More...
 
- Static Public Member Functions inherited from ICLKernel
constexpr static unsigned int num_arguments_per_3d_tensor_nhw ()
 Returns the number of arguments enqueued per NHW 3D Tensor object. More...
 
constexpr static unsigned int num_arguments_per_4d_tensor_nhwc ()
 Returns the number of arguments enqueued per NHWC 4D Tensor object. More...
 
constexpr static unsigned int num_arguments_per_1D_array ()
 Returns the number of arguments enqueued per 1D array object. More...
 
constexpr static unsigned int num_arguments_per_1D_tensor ()
 Returns the number of arguments enqueued per 1D tensor object. More...
 
constexpr static unsigned int num_arguments_per_2D_tensor ()
 Returns the number of arguments enqueued per 2D tensor object. More...
 
constexpr static unsigned int num_arguments_per_3D_tensor ()
 Returns the number of arguments enqueued per 3D tensor object. More...
 
constexpr static unsigned int num_arguments_per_4D_tensor ()
 Returns the number of arguments enqueued per 4D tensor object. More...
 
static cl::NDRange gws_from_window (const Window &window, bool use_dummy_work_items)
 Get the global work size given an execution window. More...
 

Detailed Description

OpenCL kernel to multiply matrices with QASYMM8/QASYMM8_SIGNED data type.

Definition at line 40 of file ClGemmLowpMatrixMultiplyNativeKernel.h.

Constructor & Destructor Documentation

◆ ClGemmLowpMatrixMultiplyNativeKernel()

Definition at line 178 of file ClGemmLowpMatrixMultiplyNativeKernel.cpp.

{
    _type = CLKernelType::GEMM;
}

References arm_compute::GEMM.

Member Function Documentation

◆ ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE()

ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE ( ClGemmLowpMatrixMultiplyNativeKernel  )

◆ configure()

void configure ( const CLCompileContext &  compile_context,
const ITensorInfo *  src0,
ITensorInfo *  src1,
ITensorInfo *  dst,
const GEMMLHSMatrixInfo &  lhs_info,
const GEMMRHSMatrixInfo &  rhs_info,
const GEMMReshapeInfo &  gemm_info 
)

Initialise the kernel's input and dst.

Parameters
    [in]  compile_context  The compile context to be used.
    [in]  src0             Source tensor containing the LHS matrix. Data type supported: QASYMM8/QASYMM8_SIGNED
    [in]  src1             Source tensor containing the RHS matrix. Data type supported: same as src0
    [out] dst              Destination tensor to store the result of matrix multiplication. Data type supported: S32
    [in]  lhs_info         LHS matrix information used to retrieve the number of rows to be processed by each thread.
                           lhs_info.m0: 2,3,4,5,6,7,8; lhs_info.k0: 2,3,4,8,16
    [in]  rhs_info         RHS matrix information used to retrieve the number of columns to be processed by each thread.
                           rhs_info.n0: 2,3,4,8,16; rhs_info.k0: same as lhs_info.k0
    [in]  gemm_info        GEMM information used to retrieve the original dimensions of the input matrices
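
A minimal configuration sketch follows. Everything in it is illustrative: the tensor shapes, the tile sizes, the helper function name and the use of CLKernelLibrary::get().get_compile_context() are assumptions chosen to satisfy the constraints listed above, not values taken from the library's tests or heuristics; the kernel class is assumed to live in namespace arm_compute::opencl::kernels.

#include <ClGemmLowpMatrixMultiplyNativeKernel.h> // core headers (TensorInfo, CLKernelLibrary, ...) assumed available

using namespace arm_compute;
using namespace arm_compute::opencl::kernels;

void configure_gemmlowp_native_sketch() // hypothetical helper, for illustration only
{
    // LHS is M x K, RHS is K x N; ACL tensor shapes list the innermost (x) dimension first.
    const TensorInfo src0_info(TensorShape(16U /*K*/, 64U /*M*/), 1, DataType::QASYMM8);
    TensorInfo       src1_info(TensorShape(32U /*N*/, 16U /*K*/), 1, DataType::QASYMM8);
    TensorInfo       dst_info(TensorShape(32U /*N*/, 64U /*M*/), 1, DataType::S32);

    GEMMLHSMatrixInfo lhs_info;
    lhs_info.m0 = 4;           // rows per work-item, allowed: 2,3,4,5,6,7,8
    lhs_info.k0 = 16;          // accumulation depth per step, allowed: 2,3,4,8,16

    GEMMRHSMatrixInfo rhs_info;
    rhs_info.n0 = 4;           // columns per work-item, allowed: 2,3,4,8,16
    rhs_info.k0 = lhs_info.k0; // must match lhs_info.k0

    const GEMMReshapeInfo gemm_info(64 /*m*/, 32 /*n*/, 16 /*k*/);

    ClGemmLowpMatrixMultiplyNativeKernel mm_kernel;
    mm_kernel.configure(CLKernelLibrary::get().get_compile_context(),
                        &src0_info, &src1_info, &dst_info, lhs_info, rhs_info, gemm_info);
}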

Definition at line 183 of file ClGemmLowpMatrixMultiplyNativeKernel.cpp.

{
    ARM_COMPUTE_ERROR_ON_NULLPTR(src0, src1, dst);

    ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(src0, src1, dst, lhs_info, rhs_info, gemm_info));

    _reinterpret_input_as_3d  = gemm_info.reinterpret_input_as_3d();
    _reinterpret_output_as_3d = (gemm_info.depth_output_gemm3d() != 0);
    _use_dummy_work_items     = preferred_dummy_work_items_support(CLKernelLibrary::get().get_device());

    // We still need padding on the X dimension for the RHS matrix
    auto padding_info = get_padding_info({src0, dst});

    // In case both input and dst have to be reinterpreted as 3D tensors,
    // force reinterpret_input_as_3d and reinterpret_dst_as_3d to be false.
    if (_reinterpret_input_as_3d == _reinterpret_output_as_3d)
    {
        _reinterpret_input_as_3d  = false;
        _reinterpret_output_as_3d = false;
    }

    // Check if we need to slide the matrix B
    const unsigned int num_dimensions_src0 = src0->num_dimensions();
    _slide_matrix_b                        = (src1->num_dimensions() >= num_dimensions_src0);

    ElementsProcessed num_elements_processed{};

    // Configure kernel window
    auto win_config =
        validate_and_configure_window(src0, src1, dst, lhs_info, rhs_info, gemm_info, num_elements_processed);
    ARM_COMPUTE_ERROR_THROW_ON(win_config.first);
    ICLKernel::configure_internal(win_config.second);

    // If _reinterpret_input_as_3d = _reinterpret_output_as_3d = true,
    // we will dispatch a batched-GEMM to reduce the complexity of the address calculation within the OpenCL kernel.
    // This means that the actual m used by the kernel is given by dst->info()->dimension(1) and not by gemm_info.m
    const unsigned int internal_m = _reinterpret_output_as_3d ? gemm_info.m() : dst->dimension(1);
    // Calculate partial (store instead of load) M0 and partial N0 for the partial blocks at the end of a row/column if any. This is to avoid padding.
    const unsigned int partial_store_m0 = internal_m % lhs_info.m0;
    const unsigned int partial_store_n0 = gemm_info.n() % rhs_info.n0;

    // Shrink M0 to be always <= M (internal_m) to prevent out-of-bounds reads.
    // NOTE: This might have implications on heuristics and performance
    const unsigned int internal_m0 = std::min(internal_m, lhs_info.m0);

    // Create build options
    CLBuildOptions build_opts;
    build_opts.add_option_if(_reinterpret_input_as_3d, "-DREINTERPRET_INPUT_AS_3D");
    build_opts.add_option_if(_reinterpret_output_as_3d, "-DREINTERPRET_OUTPUT_AS_3D");
    build_opts.add_option_if(_reinterpret_input_as_3d || _reinterpret_output_as_3d,
                             "-DHEIGHT_GEMM3D=" + support::cpp11::to_string(dst->dimension(1)));
    build_opts.add_option_if(_reinterpret_input_as_3d || _reinterpret_output_as_3d,
                             "-DDEPTH_GEMM3D=" + support::cpp11::to_string(dst->dimension(2)));
    build_opts.add_option_if(!_slide_matrix_b, "-DMATRIX_B_DEPTH=" + support::cpp11::to_string(src1->dimension(2)));
    build_opts.add_option_if(_use_dummy_work_items, "-DDUMMY_WORK_ITEMS");
    build_opts.add_option("-DM=" + support::cpp11::to_string(src0->dimension(1)));
    build_opts.add_option("-DN=" + support::cpp11::to_string(gemm_info.n()));
    build_opts.add_option("-DK=" + support::cpp11::to_string(gemm_info.k()));
    build_opts.add_option("-DM0=" + support::cpp11::to_string(internal_m0));
    build_opts.add_option("-DN0=" + support::cpp11::to_string(rhs_info.n0));
    build_opts.add_option("-DK0=" + support::cpp11::to_string(rhs_info.k0));
    build_opts.add_option("-DDATA_TYPE=" + get_cl_type_from_data_type(src0->data_type()));
    build_opts.add_option("-DACC_DATA_TYPE=" + get_cl_dot8_acc_type_from_data_type(src0->data_type()));
    build_opts.add_option("-DPARTIAL_STORE_M0=" + support::cpp11::to_string(partial_store_m0));
    build_opts.add_option("-DPARTIAL_STORE_N0=" + support::cpp11::to_string(partial_store_n0));
    std::string kernel_name("gemmlowp_mm_native");

    // A macro guard to compile ONLY the kernel of interest
    build_opts.add_option("-D" + upper_string(kernel_name));

    // Create kernel
    _kernel = create_kernel(compile_context, kernel_name, build_opts.options());

    // Set config_id for enabling LWS tuning
    _config_id = kernel_name;
    _config_id += "_";
    _config_id += dot8_supported(CLKernelLibrary::get().get_device()) ? "_dot8" : "";
    _config_id += "_";
    _config_id += (_reinterpret_input_as_3d ? "3di_" : "");
    _config_id += (_reinterpret_output_as_3d ? "3do_" : "");
    _config_id += support::cpp11::to_string(dst->dimension(1));
    _config_id += "_";
    _config_id += support::cpp11::to_string(dst->dimension(0));
    _config_id += "_";
    _config_id += support::cpp11::to_string(gemm_info.k());
    _config_id += "_";
    _config_id += support::cpp11::to_string(dst->dimension(2));
    _config_id += "_";
    _config_id += support::cpp11::to_string(lhs_info.m0);
    _config_id += "_";
    _config_id += support::cpp11::to_string(rhs_info.n0);
    _config_id += "_";
    _config_id += support::cpp11::to_string(lhs_info.k0);

    ARM_COMPUTE_ERROR_ON(has_padding_changed(padding_info));
}

References CLBuildOptions::add_option(), CLBuildOptions::add_option_if(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::create_kernel(), ITensorInfo::data_type(), GEMMReshapeInfo::depth_output_gemm3d(), ITensorInfo::dimension(), arm_compute::dot8_supported(), arm_compute::test::validation::dst, CLKernelLibrary::get(), arm_compute::get_cl_dot8_acc_type_from_data_type(), arm_compute::get_cl_type_from_data_type(), arm_compute::get_padding_info(), arm_compute::has_padding_changed(), GEMMReshapeInfo::k(), GEMMLHSMatrixInfo::k0, GEMMRHSMatrixInfo::k0, kernel_name, GEMMReshapeInfo::m(), GEMMLHSMatrixInfo::m0, GEMMReshapeInfo::n(), GEMMRHSMatrixInfo::n0, ITensorInfo::num_dimensions(), CLBuildOptions::options(), arm_compute::preferred_dummy_work_items_support(), GEMMReshapeInfo::reinterpret_input_as_3d(), arm_compute::support::cpp11::to_string(), arm_compute::upper_string(), arm_compute::cpu::kernels::validate_and_configure_window(), and arm_compute::cpu::kernels::validate_arguments().
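
For intuition on the PARTIAL_STORE_M0/PARTIAL_STORE_N0 arithmetic in the listing above, here is a small worked example. The sizes (M = 100, N = 33, m0 = 6, n0 = 4) are hypothetical and chosen only so that both remainders are non-zero; they are not taken from the library.

#include <algorithm>
#include <cassert>

int main()
{
    const unsigned int internal_m = 100; // rows of the output (M)
    const unsigned int n          = 33;  // columns of the output (N)
    const unsigned int m0         = 6;   // lhs_info.m0, within the allowed range 2..8
    const unsigned int n0         = 4;   // rhs_info.n0, within the allowed set {2,3,4,8,16}

    // Same expressions as in configure(): the remainder is the size of the ragged edge block.
    const unsigned int partial_store_m0 = internal_m % m0;          // 100 % 6 = 4
    const unsigned int partial_store_n0 = n % n0;                   //  33 % 4 = 1
    const unsigned int internal_m0      = std::min(internal_m, m0); // M0 is clamped to M

    // The last row/column blocks are stored partially (4 rows, 1 column) instead of
    // requiring the destination to be padded up to a multiple of the block size.
    assert(partial_store_m0 == 4 && partial_store_n0 == 1 && internal_m0 == 6);
    return 0;
}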

◆ run_op()

void run_op ( ITensorPack &  tensors,
const Window &  window,
cl::CommandQueue &  queue 
)
override virtual

Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.

Note
The queue is not flushed by this method, and therefore the kernel will not have been executed by the time this method returns.
Parameters
    [in]     tensors  A vector containing the tensors to operate on.
    [in]     window   Region on which to execute the kernel. (Must be a valid region of the window returned by window()).
    [in,out] queue    Command queue on which to enqueue the kernel.

Reimplemented from ICLKernel.

Definition at line 303 of file ClGemmLowpMatrixMultiplyNativeKernel.cpp.

{
    ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
    ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(ICLKernel::window(), window);

    const auto src0 =
        utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_SRC_0));
    const auto src1 =
        utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_SRC_1));
    auto dst = utils::cast::polymorphic_downcast<ICLTensor *>(tensors.get_tensor(TensorType::ACL_DST));

    if (src1->info()->num_dimensions() < 3)
    {
        // The stride_z for matrix B must be zero if we do not slice
        ARM_COMPUTE_ERROR_ON(src1->info()->strides_in_bytes()[3] != 0);
    }

    Window slice          = window.first_slice_window_3D();
    Window slice_matrix_b = slice;

    slice_matrix_b.set(Window::DimX, Window::Dimension(0, 1, 1));
    slice_matrix_b.set(Window::DimY, Window::Dimension(0, 1, 1));

    if (_reinterpret_input_as_3d)
    {
        // Pass bottom paddings to the kernel if the input has to be reinterpreted as 3D tensor
        const unsigned int idx0                  = 3 * num_arguments_per_2D_tensor() + 3;
        const unsigned int total_cross_plane_pad = src0->info()->padding().top + src0->info()->padding().bottom;
        _kernel.setArg<cl_uint>(idx0, static_cast<unsigned int>(total_cross_plane_pad));
    }

    if (_reinterpret_output_as_3d)
    {
        // Pass bottom paddings to the kernel if the output has to be reinterpreted as 3D tensor
        const unsigned int idx0                  = 3 * num_arguments_per_2D_tensor() + 3 + (_reinterpret_input_as_3d ? 1 : 0);
        const unsigned int total_cross_plane_pad = dst->info()->padding().top + dst->info()->padding().bottom;
        _kernel.setArg<cl_uint>(idx0, static_cast<unsigned int>(total_cross_plane_pad));
    }

    do
    {
        Window slice_b = slice;
        // Don't slice matrix B along the z dimension if matrix B has just 2 dimensions and matrix A more than 2
        // This scenario can happen when the matrix multiplication is used to perform a convolution operation
        if (!_slide_matrix_b)
        {
            slice_b = slice_matrix_b;
        }

        unsigned int idx = 0;
        add_2D_tensor_argument(idx, src0, slice);
        add_2D_tensor_argument(idx, src1, slice_b);
        add_2D_tensor_argument(idx, dst, slice);
        _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(src0->info()->strides_in_bytes()[2]));
        _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(src1->info()->strides_in_bytes()[2]));
        _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(dst->info()->strides_in_bytes()[2]));
        enqueue(queue, *this, slice, lws_hint(), _use_dummy_work_items);
    } while (window.slide_window_slice_3D(slice));
}

References arm_compute::ACL_DST, arm_compute::ACL_SRC_0, arm_compute::ACL_SRC_1, ICLKernel::add_2D_tensor_argument(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, Window::DimX, Window::DimY, arm_compute::test::validation::dst, arm_compute::enqueue(), Window::first_slice_window_3D(), ITensorPack::get_const_tensor(), ITensorPack::get_tensor(), ICLKernel::lws_hint(), ICLKernel::num_arguments_per_2D_tensor(), Window::set(), arm_compute::test::validation::reference::slice(), Window::slide_window_slice_3D(), and IKernel::window().
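
A rough sketch of how a caller might bind tensors and enqueue this kernel. The function name and the lhs/rhs/out tensors are placeholders; mm_kernel is assumed to have been configured as in the earlier configure() sketch, and CL runtime setup and error handling are omitted.

// Assumptions: lhs, rhs and out are allocated CLTensor objects whose infos match the configuration.
void run_gemmlowp_native_sketch(ClGemmLowpMatrixMultiplyNativeKernel &mm_kernel,
                                const CLTensor &lhs, const CLTensor &rhs, CLTensor &out)
{
    ITensorPack pack;
    pack.add_const_tensor(TensorType::ACL_SRC_0, &lhs); // LHS matrix, QASYMM8/QASYMM8_SIGNED
    pack.add_const_tensor(TensorType::ACL_SRC_1, &rhs); // RHS matrix, same type as LHS
    pack.add_tensor(TensorType::ACL_DST, &out);         // S32 destination

    // Enqueue over the kernel's full execution window. run_op() does not flush the queue,
    // so the result is only guaranteed to be ready after a later flush/finish on that queue.
    mm_kernel.run_op(pack, mm_kernel.window(), CLScheduler::get().queue());
    CLScheduler::get().queue().finish();
}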

◆ validate()

Status validate ( const ITensorInfo *  src0,
const ITensorInfo *  src1,
const ITensorInfo *  dst,
const GEMMLHSMatrixInfo &  lhs_info,
const GEMMRHSMatrixInfo &  rhs_info,
const GEMMReshapeInfo &  gemm_info 
)
static

Static function to check if given info will lead to a valid configuration.

Similar to ClGemmLowpMatrixMultiplyNativeKernel::configure()

Returns
a status

Definition at line 286 of file ClGemmLowpMatrixMultiplyNativeKernel.cpp.

{
    ElementsProcessed num_elements_processed{};
    ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(src0, src1, dst, lhs_info, rhs_info, gemm_info));
    ARM_COMPUTE_RETURN_ON_ERROR(validate_and_configure_window(src0->clone().get(), src1->clone().get(),
                                                              dst->clone().get(), lhs_info, rhs_info, gemm_info,
                                                              num_elements_processed)
                                    .first);

    return Status{};
}

References ARM_COMPUTE_RETURN_ON_ERROR, ICloneable< T >::clone(), arm_compute::test::validation::dst, arm_compute::cpu::kernels::validate_and_configure_window(), and arm_compute::cpu::kernels::validate_arguments().

Referenced by ClGemmLowpMatrixMultiplyCore::validate().
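
Since validate() mirrors configure(), a typical pre-flight check might look like the following sketch. It reuses the hypothetical infos and block-size structs from the configure() sketch above; the fallback strategy in the comment is only an example.

const Status status = ClGemmLowpMatrixMultiplyNativeKernel::validate(&src0_info, &src1_info, &dst_info,
                                                                     lhs_info, rhs_info, gemm_info);
if (!bool(status))
{
    // The combination would be rejected by configure(); inspect status.error_description()
    // and fall back, for example to different block sizes or another GEMM kernel.
}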


The documentation for this class was generated from the following files:

ClGemmLowpMatrixMultiplyNativeKernel.h
ClGemmLowpMatrixMultiplyNativeKernel.cpp