Compute Library
 23.11
ClGemmMatrixMultiplyNativeKernel Class Reference

OpenCL kernel to multiply matrices when neither of the input matrices has been reshaped. More...

#include <ClGemmMatrixMultiplyNativeKernel.h>

Collaboration diagram for ClGemmMatrixMultiplyNativeKernel.

Public Member Functions

 ClGemmMatrixMultiplyNativeKernel ()
 
 ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE (ClGemmMatrixMultiplyNativeKernel)
 
void configure (const ClCompileContext &compile_context, ITensorInfo *src0, ITensorInfo *src1, ITensorInfo *src2, ITensorInfo *dst, float alpha, float beta, const GEMMLHSMatrixInfo &lhs_info, const GEMMRHSMatrixInfo &rhs_info, const GEMMKernelInfo &gemm_info)
 Initialise the kernel's input and dst. More...
 
void run_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue) override
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
- Public Member Functions inherited from ICLKernel
 ICLKernel ()
 Constructor. More...
 
cl::Kernel & kernel ()
 Returns a reference to the OpenCL kernel of this object. More...
 
CLKernelType type () const
 Returns the CL kernel type. More...
 
template<typename T >
void add_1D_array_argument (unsigned int &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed 1D array's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_2D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_2D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_3D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 3D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_4D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 4D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_5D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 5D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_3d_tensor_nhw_argument (unsigned int &idx, const ICLTensor *tensor)
 Add the passed NHW 3D tensor's parameters to the object's kernel's arguments by passing strides, dimensions and the offset to the first valid element in bytes. More...
 
void add_4d_tensor_nhwc_argument (unsigned int &idx, const ICLTensor *tensor)
 Add the passed NHWC 4D tensor's parameters to the object's kernel's arguments by passing strides, dimensions and the offset to the first valid element in bytes. More...
 
virtual void run (const Window &window, cl::CommandQueue &queue)
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
template<typename T >
void add_argument (unsigned int &idx, T value)
 Add the passed parameters to the object's kernel's arguments starting from the index idx. More...
 
void set_lws_hint (const cl::NDRange &lws_hint)
 Set the Local-Workgroup-Size hint. More...
 
cl::NDRange lws_hint () const
 Return the Local-Workgroup-Size hint. More...
 
void set_wbsm_hint (const cl_int &wbsm_hint)
 Set the workgroup batch size modifier hint. More...
 
cl_int wbsm_hint () const
 Return the workgroup batch size modifier hint. More...
 
const std::string & config_id () const
 Get the configuration ID. More...
 
void set_target (GPUTarget target)
 Set the targeted GPU architecture. More...
 
void set_target (cl::Device &device)
 Set the targeted GPU architecture according to the CL device. More...
 
GPUTarget get_target () const
 Get the targeted GPU architecture. More...
 
size_t get_max_workgroup_size ()
 Get the maximum workgroup size for the device the CLKernelLibrary uses. More...
 
cl::NDRange get_cached_gws () const
 Get the cached gws used to enqueue this kernel. More...
 
void cache_gws (const cl::NDRange &gws)
 Cache the latest gws used to enqueue this kernel. More...
 
template<unsigned int dimension_size>
void add_tensor_argument (unsigned &idx, const ICLTensor *tensor, const Window &window)
 
template<typename T , unsigned int dimension_size>
void add_array_argument (unsigned &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed array's parameters to the object's kernel's arguments starting from the index idx. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 
bool is_window_configured () const
 Function to check if the embedded window of this kernel has been configured. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *src0, const ITensorInfo *src1, const ITensorInfo *src2, const ITensorInfo *dst, float alpha, float beta, const GEMMLHSMatrixInfo &lhs_info, const GEMMRHSMatrixInfo &rhs_info, const GEMMKernelInfo &gemm_info)
 Static function to check if given info will lead to a valid configuration. More...
 
- Static Public Member Functions inherited from ICLKernel
constexpr static unsigned int num_arguments_per_3d_tensor_nhw ()
 Returns the number of arguments enqueued per NHW 3D Tensor object. More...
 
constexpr static unsigned int num_arguments_per_4d_tensor_nhwc ()
 Returns the number of arguments enqueued per NHWC 4D Tensor object. More...
 
constexpr static unsigned int num_arguments_per_1D_array ()
 Returns the number of arguments enqueued per 1D array object. More...
 
constexpr static unsigned int num_arguments_per_1D_tensor ()
 Returns the number of arguments enqueued per 1D tensor object. More...
 
constexpr static unsigned int num_arguments_per_2D_tensor ()
 Returns the number of arguments enqueued per 2D tensor object. More...
 
constexpr static unsigned int num_arguments_per_3D_tensor ()
 Returns the number of arguments enqueued per 3D tensor object. More...
 
constexpr static unsigned int num_arguments_per_4D_tensor ()
 Returns the number of arguments enqueued per 4D tensor object. More...
 
static cl::NDRange gws_from_window (const Window &window, bool use_dummy_work_items)
 Get the global work size given an execution window. More...
 

Detailed Description

OpenCL kernel to multiply matrices when neither of the input matrices has been reshaped.

Definition at line 40 of file ClGemmMatrixMultiplyNativeKernel.h.

Constructor & Destructor Documentation

◆ ClGemmMatrixMultiplyNativeKernel()

Definition at line 220 of file ClGemmMatrixMultiplyNativeKernel.cpp.

ClGemmMatrixMultiplyNativeKernel::ClGemmMatrixMultiplyNativeKernel()
{
    _type = CLKernelType::GEMM;
}

References arm_compute::GEMM.

Member Function Documentation

◆ ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE()

ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE ( ClGemmMatrixMultiplyNativeKernel  )

◆ configure()

void configure ( const ClCompileContext &compile_context,
    ITensorInfo *src0,
    ITensorInfo *src1,
    ITensorInfo *src2,
    ITensorInfo *dst,
    float alpha,
    float beta,
    const GEMMLHSMatrixInfo &lhs_info,
    const GEMMRHSMatrixInfo &rhs_info,
    const GEMMKernelInfo &gemm_info
)

Initialise the kernel's input and dst.

Parameters
  [in]   compile_context  The compile context to be used.
  [in]   src0             Input tensor info for the LHS matrix. Data type supported: F32/F16. The number of dimensions of the LHS matrix must be less than or equal to 4.
  [in]   src1             Input tensor info for the RHS matrix. Data type supported: same as src0. The number of dimensions of the RHS matrix must be less than or equal to 3.
  [in]   src2             Input tensor info containing the bias matrix. Data type supported: same as src0.
  [out]  dst              Destination tensor info. Data type supported: same as src0.
  [in]   alpha            Weight of the matrix product.
  [in]   beta             Weight of the matrix bias.
  [in]   lhs_info         LHS matrix information used to retrieve the number of rows and accumulations to be processed by each thread. Only the following values are supported: lhs_info.m0: 1,2,3,4,5,6,7,8; lhs_info.k0: 2,3,4,8,16.
  [in]   rhs_info         RHS matrix information used to retrieve the number of columns and accumulations to be processed by each thread. Only the following values are supported: rhs_info.n0: 2,3,4,8,16; rhs_info.k0: same as lhs_info.k0.
  [in]   gemm_info        GEMM information used to retrieve the original dimensions of the input matrices.

Definition at line 225 of file ClGemmMatrixMultiplyNativeKernel.cpp.

{
    ARM_COMPUTE_ERROR_ON_NULLPTR(src0, src1, dst);

    // dst tensor auto initialization if not yet initialized
    auto_init_if_empty(
        *dst, src0->clone()->set_tensor_shape(misc::shape_calculator::compute_mm_shape(*src0, *src1, gemm_info)));

    ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(src0, src1, src2, dst, alpha, beta, lhs_info, rhs_info, gemm_info));

    auto padding_info         = get_padding_info({src0, dst});
    _reinterpret_input_as_3d  = gemm_info.reinterpret_input_as_3d;
    _reinterpret_output_as_3d = gemm_info.depth_output_gemm3d != 0;
    _use_dummy_work_items     = preferred_dummy_work_items_support(CLKernelLibrary::get().get_device());
    _add_bias                 = src2 != nullptr;

    // In case both input and dst have to be reinterpreted as 3D tensors,
    // force reinterpret_input_as_3d and reinterpret_output_as_3d to be false.
    if (_reinterpret_input_as_3d == _reinterpret_output_as_3d)
    {
        _reinterpret_input_as_3d  = false;
        _reinterpret_output_as_3d = false;
    }

    // Check if we need to slide the matrix B
    const unsigned int num_dimensions_src0 = src0->num_dimensions();
    _slide_matrix_b                        = (src1->num_dimensions() >= num_dimensions_src0);

    ElementsProcessed num_elements_processed{};

    // Configure kernel window
    auto win_config = validate_and_configure_window(src0, src1, src2 != nullptr ? src2 : nullptr, dst, lhs_info,
                                                    rhs_info, gemm_info, num_elements_processed);
    ARM_COMPUTE_ERROR_THROW_ON(win_config.first);
    IClKernel::configure_internal(win_config.second);

    // If _reinterpret_input_as_3d = _reinterpret_output_as_3d = true,
    // we will dispatch a batched-GEMM to reduce the complexity of the address calculation within the OpenCL kernel.
    // This means that the actual m used by the kernel is given by dst->dimension(1) and not by gemm_info.m
    const unsigned int internal_m = _reinterpret_output_as_3d ? gemm_info.m : dst->dimension(1);

    const unsigned int h_gemm_3d = _reinterpret_output_as_3d ? dst->dimension(1) : src0->dimension(1);
    const unsigned int d_gemm_3d = _reinterpret_output_as_3d ? dst->dimension(2) : src0->dimension(2);

    // Calculate partial (store instead of load) M0 and partial N0 for the partial blocks at the end of a row/column if any. This is to avoid padding.
    const unsigned int partial_store_m0 = internal_m % lhs_info.m0;
    const unsigned int partial_store_n0 = gemm_info.n % rhs_info.n0;

    // Shrink M0 to be always <= M (internal_m) to prevent out-of-bounds reads.
    // NOTE: This might have implications on heuristics and performance
    const unsigned int internal_m0 = std::min(internal_m, lhs_info.m0);
    _m                             = internal_m;
    _n                             = gemm_info.n;
    _k                             = gemm_info.k;

    // Create build options
    CLBuildOptions build_opts;
    build_opts.add_option("-DDATA_TYPE=" + get_cl_type_from_data_type(src0->data_type()));
    build_opts.add_option_if(!(helpers::float_ops::is_one(alpha)),
                             "-DALPHA=" + float_to_string_with_full_precision(alpha));
    build_opts.add_option_if(src2 != nullptr, "-DBETA=" + float_to_string_with_full_precision(beta));
    build_opts.add_option_if(helpers::float_ops::is_one(beta), "-DUNIT_BETA");
    build_opts.add_option_if(gemm_info.broadcast_bias, "-DBROADCAST_BIAS");
    build_opts.add_option_if(_reinterpret_input_as_3d, "-DREINTERPRET_INPUT_AS_3D");
    build_opts.add_option_if(_reinterpret_output_as_3d, "-DREINTERPRET_OUTPUT_AS_3D");
    build_opts.add_option_if(_reinterpret_input_as_3d || _reinterpret_output_as_3d,
                             "-DHEIGHT_GEMM3D=" + support::cpp11::to_string(h_gemm_3d));
    build_opts.add_option_if(_reinterpret_input_as_3d || _reinterpret_output_as_3d,
                             "-DDEPTH_GEMM3D=" + support::cpp11::to_string(d_gemm_3d));
    build_opts.add_option_if(!_slide_matrix_b, "-DMATRIX_B_DEPTH=" + support::cpp11::to_string(src1->dimension(2)));
    build_opts.add_option_if(_use_dummy_work_items, "-DDUMMY_WORK_ITEMS");
    build_opts.add_option("-DM0=" + support::cpp11::to_string(internal_m0));
    build_opts.add_option("-DN0=" + support::cpp11::to_string(rhs_info.n0));
    build_opts.add_option("-DK0=" + support::cpp11::to_string(rhs_info.k0));
    build_opts.add_option("-DPARTIAL_STORE_M0=" + support::cpp11::to_string(partial_store_m0));
    build_opts.add_option("-DPARTIAL_STORE_N0=" + support::cpp11::to_string(partial_store_n0));
    build_opts.add_option_if(gemm_info.activation_info.enabled(),
                             "-DACTIVATION_TYPE=" +
                                 lower_string(string_from_activation_func(gemm_info.activation_info.activation())));
    build_opts.add_option_if(gemm_info.activation_info.enabled(),
                             "-DA_VAL=" + float_to_string_with_full_precision(gemm_info.activation_info.a()));
    build_opts.add_option_if(gemm_info.activation_info.enabled(),
                             "-DB_VAL=" + float_to_string_with_full_precision(gemm_info.activation_info.b()));

    std::string kernel_name("gemm_mm_native");

    // A macro guard to compile ONLY the kernel of interest
    build_opts.add_option("-D" + upper_string(kernel_name));

    // Create kernel
    _kernel = create_kernel(compile_context, kernel_name, build_opts.options());

    // Set config_id for enabling LWS tuning
    _config_id = kernel_name;
    _config_id += "_";
    _config_id += (_add_bias ? "add_bias_" : "");
    _config_id += (gemm_info.broadcast_bias ? "broadcast_bias_" : "");
    _config_id += (_reinterpret_input_as_3d ? "3di_" : "");
    _config_id += (_reinterpret_output_as_3d ? "3do_" : "");
    _config_id += (gemm_info.activation_info.enabled() ? "fused_activation_" : "");
    _config_id += lower_string(string_from_data_type(src0->data_type()));
    _config_id += "_";
    _config_id += support::cpp11::to_string(dst->dimension(1));
    _config_id += "_";
    _config_id += support::cpp11::to_string(dst->dimension(0));
    _config_id += "_";
    _config_id += support::cpp11::to_string(gemm_info.k);
    _config_id += "_";
    _config_id += support::cpp11::to_string(dst->dimension(2));
    _config_id += "_";
    _config_id += support::cpp11::to_string(lhs_info.m0);
    _config_id += "_";
    _config_id += support::cpp11::to_string(rhs_info.n0);
    _config_id += "_";
    _config_id += support::cpp11::to_string(rhs_info.k0);

    ARM_COMPUTE_ERROR_ON(has_padding_changed(padding_info));
}

References ActivationLayerInfo::a(), ActivationLayerInfo::activation(), GEMMKernelInfo::activation_info, CLBuildOptions::add_option(), CLBuildOptions::add_option_if(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), ActivationLayerInfo::b(), GEMMKernelInfo::broadcast_bias, ICloneable< T >::clone(), arm_compute::misc::shape_calculator::compute_mm_shape(), arm_compute::create_kernel(), ITensorInfo::data_type(), GEMMKernelInfo::depth_output_gemm3d, ITensorInfo::dimension(), arm_compute::test::validation::dst, ActivationLayerInfo::enabled(), arm_compute::float_to_string_with_full_precision(), CLKernelLibrary::get(), arm_compute::get_cl_type_from_data_type(), arm_compute::get_padding_info(), arm_compute::has_padding_changed(), arm_compute::helpers::float_ops::is_one(), GEMMKernelInfo::k, GEMMRHSMatrixInfo::k0, kernel_name, arm_compute::lower_string(), GEMMKernelInfo::m, GEMMLHSMatrixInfo::m0, GEMMKernelInfo::n, GEMMRHSMatrixInfo::n0, ITensorInfo::num_dimensions(), CLBuildOptions::options(), arm_compute::preferred_dummy_work_items_support(), GEMMKernelInfo::reinterpret_input_as_3d, arm_compute::string_from_activation_func(), arm_compute::string_from_data_type(), arm_compute::support::cpp11::to_string(), arm_compute::upper_string(), arm_compute::cpu::kernels::validate_and_configure_window(), and arm_compute::cpu::kernels::validate_arguments().
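
As a rough illustration, the sketch below configures the kernel for a plain dst = alpha * src0 * src1 with no bias. It is a minimal example under stated assumptions: the tensor shapes, block sizes, include paths and the helper function name are illustrative and not taken from this page; only the configure()/validate() signatures and the documented m0/n0/k0 constraints are. Most applications would use the higher-level GEMM operators rather than driving this kernel directly.

// Illustrative sketch only. Shapes, block sizes and the function name are assumptions.
#include "ClGemmMatrixMultiplyNativeKernel.h"   // include path depends on the build setup
#include "arm_compute/core/KernelDescriptors.h" // GEMMKernelInfo (path may differ by version)
#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/Types.h"             // GEMMLHSMatrixInfo, GEMMRHSMatrixInfo, DataType

using namespace arm_compute;

void configure_native_gemm_example(opencl::kernels::ClGemmMatrixMultiplyNativeKernel &gemm_kernel,
                                   const CLCompileContext                            &compile_context)
{
    const unsigned int M = 64, N = 128, K = 256;

    // Tensor metadata only. ACL shapes are (width, height, ...), so the MxK LHS is TensorShape(K, M).
    TensorInfo src0(TensorShape(K, M), 1, DataType::F32); // LHS
    TensorInfo src1(TensorShape(N, K), 1, DataType::F32); // RHS
    TensorInfo dst(TensorShape(N, M), 1, DataType::F32);  // destination, MxN

    // Block sizes must respect the documented constraints:
    // lhs_info.m0 in 1..8, lhs_info.k0 in {2,3,4,8,16}, rhs_info.n0 in {2,3,4,8,16}, rhs_info.k0 == lhs_info.k0.
    GEMMLHSMatrixInfo lhs_info;
    lhs_info.m0 = 4;
    lhs_info.k0 = 4;

    GEMMRHSMatrixInfo rhs_info;
    rhs_info.n0 = 4;
    rhs_info.k0 = lhs_info.k0;

    GEMMKernelInfo gemm_info;
    gemm_info.m = M;
    gemm_info.n = N;
    gemm_info.k = K;

    // Validate first, then configure. No bias: src2 is nullptr and beta is irrelevant.
    ARM_COMPUTE_ERROR_THROW_ON(opencl::kernels::ClGemmMatrixMultiplyNativeKernel::validate(
        &src0, &src1, nullptr, &dst, 1.0f /*alpha*/, 0.0f /*beta*/, lhs_info, rhs_info, gemm_info));
    gemm_kernel.configure(compile_context, &src0, &src1, nullptr, &dst, 1.0f, 0.0f, lhs_info, rhs_info, gemm_info);
}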

◆ run_op()

void run_op ( ITensorPack &tensors,
    const Window &window,
    cl::CommandQueue &queue
)
override virtual

Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.

Note
The queue is not flushed by this method, and therefore the kernel will not have been executed by the time this method returns.
Parameters
  [in]      tensors  A vector containing the tensors to operate on.
  [in]      window   Region on which to execute the kernel. (Must be a valid region of the window returned by window()).
  [in,out]  queue    Command queue on which to enqueue the kernel.

Reimplemented from ICLKernel.

Definition at line 374 of file ClGemmMatrixMultiplyNativeKernel.cpp.

{
    ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
    ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(IKernel::window(), window);

    const auto src0 =
        utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_SRC_0));
    const auto src1 =
        utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_SRC_1));
    const auto src2 =
        utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_SRC_2));
    auto dst = utils::cast::polymorphic_downcast<ICLTensor *>(tensors.get_tensor(TensorType::ACL_DST));

    ARM_COMPUTE_ERROR_ON_NULLPTR(src0, src1, dst);
    ARM_COMPUTE_ERROR_ON(_add_bias && src2 == nullptr);

    if (src1->info()->num_dimensions() < 3)
    {
        // The stride_z for matrix B must be zero if we do not slice
        ARM_COMPUTE_ERROR_ON(src1->info()->strides_in_bytes()[3] != 0);
    }

    Window slice          = window.first_slice_window_3D();
    Window slice_matrix_b = slice;

    slice_matrix_b.set(Window::DimX, Window::Dimension(0, 1, 1));
    slice_matrix_b.set(Window::DimY, Window::Dimension(0, 1, 1));

    if (_reinterpret_input_as_3d)
    {
        // Pass bottom paddings to the kernel if the input has to be reinterpreted as 3D tensor
        unsigned int idx0;
        if (_add_bias)
        {
            idx0 = 4 * num_arguments_per_2D_tensor() + 7;
        }
        else
        {
            idx0 = 3 * num_arguments_per_2D_tensor() + 6;
        }
        const unsigned int total_cross_plane_pad = src0->info()->padding().top + src0->info()->padding().bottom;
        _kernel.setArg<cl_uint>(idx0, static_cast<unsigned int>(total_cross_plane_pad));
    }

    if (_reinterpret_output_as_3d)
    {
        // Pass bottom paddings to the kernel if the dst has to be reinterpreted as 3D tensor
        unsigned int idx0;
        if (_add_bias)
        {
            idx0 = 4 * num_arguments_per_2D_tensor() + 7 + (_reinterpret_input_as_3d ? 1 : 0);
        }
        else
        {
            idx0 = 3 * num_arguments_per_2D_tensor() + 6 + (_reinterpret_input_as_3d ? 1 : 0);
        }
        const unsigned int total_cross_plane_pad = dst->info()->padding().top + dst->info()->padding().bottom;
        _kernel.setArg<cl_uint>(idx0, static_cast<unsigned int>(total_cross_plane_pad));
    }

    do
    {
        Window slice_b = slice;
        // Don't slice matrix B along the z dimension if matrix B has just 2 dimensions and matrix A more than 2
        // This scenario can happen when the matrix multiplication is used to perform a convolution operation
        if (!_slide_matrix_b)
        {
            slice_b = slice_matrix_b;
        }

        unsigned int idx = 0;
        add_2D_tensor_argument(idx, src0, slice);
        add_2D_tensor_argument(idx, src1, slice_b);
        if (_add_bias)
        {
            add_2D_tensor_argument(idx, src2, slice);
        }
        add_2D_tensor_argument(idx, dst, slice);

        _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(src0->info()->strides_in_bytes()[2]));
        _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(src1->info()->strides_in_bytes()[2]));
        if (_add_bias)
        {
            _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(src2->info()->strides_in_bytes()[2]));
        }
        _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(dst->info()->strides_in_bytes()[2]));

        // Pass m, n and k at runtime
        _kernel.setArg<cl_int>(idx++, _m);
        _kernel.setArg<cl_int>(idx++, _n);
        _kernel.setArg<cl_int>(idx++, _k);

        enqueue(queue, *this, slice, lws_hint(), _use_dummy_work_items);
    } while (window.slide_window_slice_3D(slice));
}

References arm_compute::ACL_DST, arm_compute::ACL_SRC_0, arm_compute::ACL_SRC_1, arm_compute::ACL_SRC_2, ICLKernel::add_2D_tensor_argument(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, Window::DimX, Window::DimY, arm_compute::test::validation::dst, arm_compute::enqueue(), Window::first_slice_window_3D(), ITensorPack::get_const_tensor(), ITensorPack::get_tensor(), ICLKernel::lws_hint(), ICLKernel::num_arguments_per_2D_tensor(), Window::set(), arm_compute::test::validation::reference::slice(), Window::slide_window_slice_3D(), and IKernel::window().
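
The sketch below shows one way the configured kernel might be dispatched: the tensors are bound to the ACL_SRC_0/ACL_SRC_1/ACL_DST slots of an ITensorPack and the kernel is enqueued over its configured window. It is a minimal example under stated assumptions: the function and tensor names, the omitted bias, the use of CLScheduler's default queue and the include paths are illustrative, not part of this page.

// Minimal dispatch sketch (illustrative). Assumes the kernel was configured as in the
// configure() example and that the ICLTensors were allocated and filled elsewhere.
#include "arm_compute/core/ITensorPack.h"       // include paths are approximate
#include "arm_compute/runtime/CL/CLScheduler.h"

using namespace arm_compute;

void run_native_gemm_example(opencl::kernels::ClGemmMatrixMultiplyNativeKernel &gemm_kernel,
                             const ICLTensor &src0_t, // LHS, matches the src0 info passed to configure()
                             const ICLTensor &src1_t, // RHS, matches the src1 info
                             ICLTensor       &dst_t)  // destination
{
    ITensorPack tensors;
    tensors.add_const_tensor(TensorType::ACL_SRC_0, &src0_t);
    tensors.add_const_tensor(TensorType::ACL_SRC_1, &src1_t);
    tensors.add_tensor(TensorType::ACL_DST, &dst_t);

    // Enqueue over the full window computed at configure() time, on the default CL queue.
    // As noted above, the queue is not flushed by run_op(); synchronise (e.g. CLScheduler::get().sync())
    // before reading back the result.
    gemm_kernel.run_op(tensors, gemm_kernel.window(), CLScheduler::get().queue());
}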

◆ validate()

Status validate ( const ITensorInfo *src0,
    const ITensorInfo *src1,
    const ITensorInfo *src2,
    const ITensorInfo *dst,
    float alpha,
    float beta,
    const GEMMLHSMatrixInfo &lhs_info,
    const GEMMRHSMatrixInfo &rhs_info,
    const GEMMKernelInfo &gemm_info
)
static

Static function to check if given info will lead to a valid configuration.

Similar to ClGemmMatrixMultiplyNativeKernel::configure()

Returns
a status

Definition at line 353 of file ClGemmMatrixMultiplyNativeKernel.cpp.

{
    ElementsProcessed num_elements_processed{};
    ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(src0, src1, src2, dst, alpha, beta, lhs_info, rhs_info, gemm_info));
    ARM_COMPUTE_RETURN_ON_ERROR(validate_and_configure_window(src0->clone().get(), src1->clone().get(),
                                                              src2 != nullptr ? src2->clone().get() : nullptr,
                                                              dst->clone().get(), lhs_info, rhs_info, gemm_info,
                                                              num_elements_processed)
                                    .first);

    return Status{};
}

References ARM_COMPUTE_RETURN_ON_ERROR, ICloneable< T >::clone(), arm_compute::test::validation::dst, arm_compute::cpu::kernels::validate_and_configure_window(), and arm_compute::cpu::kernels::validate_arguments().
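
A minimal sketch of the validate-before-configure pattern follows: the static validate() call returns a Status that can be inspected before any OpenCL program is built. The function name is illustrative, and the tensor infos and block-size descriptors are assumed to have been prepared as in the configure() example above.

// Illustrative helper: returns true if the native GEMM kernel accepts this configuration.
#include <iostream>

using namespace arm_compute;

bool native_gemm_is_supported(const ITensorInfo &src0, const ITensorInfo &src1, const ITensorInfo &dst,
                              const GEMMLHSMatrixInfo &lhs_info, const GEMMRHSMatrixInfo &rhs_info,
                              const GEMMKernelInfo &gemm_info)
{
    const Status status = opencl::kernels::ClGemmMatrixMultiplyNativeKernel::validate(
        &src0, &src1, nullptr /*no bias*/, &dst, 1.0f /*alpha*/, 0.0f /*beta*/, lhs_info, rhs_info, gemm_info);

    if (status.error_code() != ErrorCode::OK)
    {
        std::cerr << "Native GEMM rejected: " << status.error_description() << std::endl;
        return false;
    }
    return true;
}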


The documentation for this class was generated from the following files:
ClGemmMatrixMultiplyNativeKernel.h
ClGemmMatrixMultiplyNativeKernel.cpp