Compute Library
 22.11
ClGemmLowpMatrixMultiplyReshapedOnlyRhsKernel Class Reference

OpenCL kernel to multiply matrices with QASYMM8 data type when only the input matrix RHS (src1) has been reshaped. More...

#include <ClGemmLowpMatrixMultiplyReshapedOnlyRhsKernel.h>


Public Member Functions

 ClGemmLowpMatrixMultiplyReshapedOnlyRhsKernel ()
 
 ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE (ClGemmLowpMatrixMultiplyReshapedOnlyRhsKernel)
 
void configure (const CLCompileContext &compile_context, const ITensorInfo *src0, const ITensorInfo *src1, ITensorInfo *dst, const GEMMKernelInfo &gemm_info, ITensorInfo *vector_sum_col=nullptr, const ITensorInfo *vector_sum_row=nullptr, ITensorInfo *bias=nullptr, ITensorInfo *output_multipliers=nullptr, ITensorInfo *output_shifts=nullptr)
 Initialise the kernel's source and destination. More...
 
void run_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue) override
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
- Public Member Functions inherited from ICLKernel
 ICLKernel ()
 Constructor. More...
 
cl::Kernel & kernel ()
 Returns a reference to the OpenCL kernel of this object. More...
 
CLKernelType type () const
 Returns the CL kernel type. More...
 
template<typename T >
void add_1D_array_argument (unsigned int &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed 1D array's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_2D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_2D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_3D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 3D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_4D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 4D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_5D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 5D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_3d_tensor_nhw_argument (unsigned int &idx, const ICLTensor *tensor)
 Add the passed NHW 3D tensor's parameters to the object's kernel's arguments by passing strides, dimensions and the offset to the first valid element in bytes. More...
 
void add_4d_tensor_nhwc_argument (unsigned int &idx, const ICLTensor *tensor)
 Add the passed NHWC 4D tensor's parameters to the object's kernel's arguments by passing strides, dimensions and the offset to the first valid element in bytes. More...
 
virtual void run (const Window &window, cl::CommandQueue &queue)
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
virtual void run_composite_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue, const experimental::dynamic_fusion::ClExecutionDescriptor &exec_desc)
 Execution is carried out through the run_op method, extended here to take a ClExecutionDescriptor, since LWS/GWS tuning is now separated from the IKernel. More...
 
template<typename T >
void add_argument (unsigned int &idx, T value)
 Add the passed parameters to the object's kernel's arguments starting from the index idx. More...
 
void set_lws_hint (const cl::NDRange &lws_hint)
 Set the Local-Workgroup-Size hint. More...
 
cl::NDRange lws_hint () const
 Return the Local-Workgroup-Size hint. More...
 
void set_wbsm_hint (const cl_int &wbsm_hint)
 Set the workgroup batch size modifier hint. More...
 
cl_int wbsm_hint () const
 Return the workgroup batch size modifier hint. More...
 
const std::string & config_id () const
 Get the configuration ID. More...
 
void set_target (GPUTarget target)
 Set the targeted GPU architecture. More...
 
void set_target (cl::Device &device)
 Set the targeted GPU architecture according to the CL device. More...
 
GPUTarget get_target () const
 Get the targeted GPU architecture. More...
 
size_t get_max_workgroup_size ()
 Get the maximum workgroup size for the device the CLKernelLibrary uses. More...
 
template<unsigned int dimension_size>
void add_tensor_argument (unsigned &idx, const ICLTensor *tensor, const Window &window)
 
template<typename T , unsigned int dimension_size>
void add_array_argument (unsigned &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed array's parameters to the object's kernel's arguments starting from the index idx. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 
bool is_window_configured () const
 Function to check if the embedded window of this kernel has been configured. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *src0, const ITensorInfo *src1, const ITensorInfo *dst, const GEMMKernelInfo &gemm_info, const ITensorInfo *vector_sum_col=nullptr, const ITensorInfo *vector_sum_row=nullptr, const ITensorInfo *bias=nullptr, const ITensorInfo *output_multipliers=nullptr, const ITensorInfo *output_shifts=nullptr)
 Static function to check if given info will lead to a valid configuration. More...
 
- Static Public Member Functions inherited from ICLKernel
static constexpr unsigned int num_arguments_per_3d_tensor_nhw ()
 Returns the number of arguments enqueued per NHW 3D Tensor object. More...
 
static constexpr unsigned int num_arguments_per_4d_tensor_nhwc ()
 Returns the number of arguments enqueued per NHWC 4D Tensor object. More...
 
static constexpr unsigned int num_arguments_per_1D_array ()
 Returns the number of arguments enqueued per 1D array object. More...
 
static constexpr unsigned int num_arguments_per_1D_tensor ()
 Returns the number of arguments enqueued per 1D tensor object. More...
 
static constexpr unsigned int num_arguments_per_2D_tensor ()
 Returns the number of arguments enqueued per 2D tensor object. More...
 
static constexpr unsigned int num_arguments_per_3D_tensor ()
 Returns the number of arguments enqueued per 3D tensor object. More...
 
static constexpr unsigned int num_arguments_per_4D_tensor ()
 Returns the number of arguments enqueued per 4D tensor object. More...
 
static cl::NDRange gws_from_window (const Window &window)
 Get the global work size given an execution window. More...
 

Detailed Description

OpenCL kernel to multiply matrices with QASYMM8 data type when only the input matrix RHS (src1) has been reshaped.

Note
The input matrix src1 must be reshaped through opencl::kernels::ClGemmReshapeRhsMatrixKernel
For fused output stage, only GEMMLowpOutputStageType::QUANTIZE_DOWN_FIXEDPOINT type is supported

Definition at line 43 of file ClGemmLowpMatrixMultiplyReshapedOnlyRhsKernel.h.
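
For orientation, a minimal end-to-end sketch of driving this kernel (not taken from the library's documentation; the shapes, block sizes, the use of misc::shape_calculator::compute_rhs_reshaped_shape() and CLKernelLibrary::get().get_compile_context() are illustrative assumptions, and the relevant Compute Library headers are assumed to be included):

    // Hypothetical usage sketch for a GEMMLowp dst = src0 * src1 with m x k LHS
    // and n x k RHS; all values below are illustrative, not prescriptive.
    using namespace arm_compute;

    const unsigned int m = 64, n = 128, k = 256;

    GEMMKernelInfo gemm_info{};
    gemm_info.m                   = m;
    gemm_info.n                   = n;
    gemm_info.k                   = k;
    gemm_info.lhs_info.m0         = 4;    // supported: 2,3,4,5,6,7,8
    gemm_info.lhs_info.k0         = 16;   // supported: 2,3,4,8,16
    gemm_info.rhs_info.n0         = 4;    // supported: 2,3,4,8,16
    gemm_info.rhs_info.k0         = 16;   // must equal lhs_info.k0
    gemm_info.rhs_info.h0         = 4;    // illustrative
    gemm_info.rhs_info.transpose  = true; // required by this kernel

    // LHS in its original layout; the RHS must first be reshaped by
    // opencl::kernels::ClGemmReshapeRhsMatrixKernel, whose output shape we
    // derive here via the shape calculator (an assumption of this sketch).
    TensorInfo src0_info(TensorShape(k, m), 1, DataType::QASYMM8);
    TensorInfo rhs_original_info(TensorShape(n, k), 1, DataType::QASYMM8);
    TensorInfo src1_info(misc::shape_calculator::compute_rhs_reshaped_shape(rhs_original_info, gemm_info.rhs_info),
                         1, DataType::QASYMM8);
    TensorInfo dst_info(TensorShape(n, m), 1, DataType::S32); // S32 without a fused output stage

    opencl::kernels::ClGemmLowpMatrixMultiplyReshapedOnlyRhsKernel kernel;
    Status status = opencl::kernels::ClGemmLowpMatrixMultiplyReshapedOnlyRhsKernel::validate(
        &src0_info, &src1_info, &dst_info, gemm_info);
    if(status.error_code() == ErrorCode::OK)
    {
        kernel.configure(CLKernelLibrary::get().get_compile_context(),
                         &src0_info, &src1_info, &dst_info, gemm_info);
    }

The constraints in the comments mirror the supported values listed under configure() below.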

Constructor & Destructor Documentation

◆ ClGemmLowpMatrixMultiplyReshapedOnlyRhsKernel()

ClGemmLowpMatrixMultiplyReshapedOnlyRhsKernel ( )

Member Function Documentation

◆ ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE()

ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE ( ClGemmLowpMatrixMultiplyReshapedOnlyRhsKernel  )

◆ configure()

void configure ( const CLCompileContext & compile_context,
const ITensorInfo * src0,
const ITensorInfo * src1,
ITensorInfo * dst,
const GEMMKernelInfo & gemm_info,
ITensorInfo * vector_sum_col = nullptr,
const ITensorInfo * vector_sum_row = nullptr,
ITensorInfo * bias = nullptr,
ITensorInfo * output_multipliers = nullptr,
ITensorInfo * output_shifts = nullptr
)

Initialise the kernel's source and destination.

Parameters
[in] compile_context The compile context to be used.
[in] src0 Input tensor containing the LHS matrix. Data type supported: QASYMM8/QASYMM8_SIGNED
[in] src1 Input tensor containing the RHS reshaped matrix. Data type supported: same as src0
[out] dst Destination tensor. Data type supported: QASYMM8/QASYMM8_SIGNED/S32.
[in] gemm_info GEMM information used to retrieve the original dimensions of the input matrices, output stage information and RHS/LHS info. Only the following values are supported for LHS info: lhs_info.m0: 2,3,4,5,6,7,8; lhs_info.k0: 2,3,4,8,16. Only the following values are supported for RHS info: rhs_info.n0: 2,3,4,8,16; rhs_info.k0: same as lhs_info.k0; rhs_info.transpose: true.
[in] vector_sum_col (Optional) Input row-vector of sums of all the entries in each column of matrix B. Note: vector_sum_col can be a nullptr in case a_offset = 0. Data type supported: S32
[in] vector_sum_row (Optional) Input row-vector of sums of all the entries in each row of matrix A. Note: vector_sum_row can be a nullptr in case b_offset = 0. Data type supported: S32
[in] bias (Optional) Biases tensor. Only shared biases are supported and it can be a nullptr if the addition of biases is not required. Biases are a 1D tensor with dimensions [OFM]. Data type supported: S32.
[in] output_multipliers (Optional) Output multipliers tensor. In case of per-channel quantization, the number of multipliers must be equal to the number of filters (OFM). Supported data types: S32.
[in] output_shifts (Optional) Output shifts tensor. In case of per-channel quantization, the number of shifts must be equal to the number of filters (OFM). Supported data types: S32.

Definition at line 288 of file ClGemmLowpMatrixMultiplyReshapedOnlyRhsKernel.cpp.

References GEMMKernelInfo::a_offset, CLBuildOptions::add_option(), CLBuildOptions::add_option_if(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, GEMMKernelInfo::b_offset, arm_compute::create_kernel(), ITensorInfo::data_type(), GEMMKernelInfo::depth_output_gemm3d, ITensorInfo::dimension(), arm_compute::dot8_supported(), arm_compute::test::validation::dst, GEMMLowpOutputStageInfo::gemmlowp_max_bound, GEMMLowpOutputStageInfo::gemmlowp_min_bound, GEMMLowpOutputStageInfo::gemmlowp_multipliers, GEMMLowpOutputStageInfo::gemmlowp_offset, GEMMLowpOutputStageInfo::gemmlowp_shifts, CLKernelLibrary::get(), arm_compute::get_cl_dot8_acc_type_from_data_type(), arm_compute::get_cl_type_from_data_type(), arm_compute::get_min_max(), arm_compute::get_padding_info(), arm_compute::has_padding_changed(), GEMMLowpOutputStageInfo::is_quantized_per_channel, GEMMKernelInfo::k, kernel_name, GEMMKernelInfo::lhs_info, GEMMKernelInfo::m, GEMMLHSMatrixInfo::m0, GEMMKernelInfo::n, Dimensions< T >::num_dimensions(), ITensorInfo::num_dimensions(), CLBuildOptions::options(), GEMMKernelInfo::output_stage, arm_compute::preferred_dummy_work_items_support(), arm_compute::QUANTIZE_DOWN_FIXEDPOINT, GEMMKernelInfo::reinterpret_input_as_3d, GEMMKernelInfo::rhs_info, ITensorInfo::tensor_shape(), arm_compute::support::cpp11::to_string(), GEMMLowpOutputStageInfo::type, arm_compute::upper_string(), arm_compute::cpu::kernels::validate_and_configure_window(), and arm_compute::cpu::kernels::validate_arguments().

{
    ARM_COMPUTE_ERROR_ON_NULLPTR(src0, src1, dst);
    ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(src0, src1, dst, gemm_info, vector_sum_col, vector_sum_row, bias, output_multipliers, output_shifts));

    auto padding_info = get_padding_info({ src0, src1, dst, vector_sum_row });
    const GEMMRHSMatrixInfo       rhs_info     = gemm_info.rhs_info;
    const GEMMLHSMatrixInfo       lhs_info     = gemm_info.lhs_info;
    const GEMMLowpOutputStageInfo output_stage = gemm_info.output_stage;
    const int32_t                 a_offset     = gemm_info.a_offset;
    const int32_t                 b_offset     = gemm_info.b_offset;

    _reinterpret_input_as_3d  = gemm_info.reinterpret_input_as_3d;
    _reinterpret_output_as_3d = (gemm_info.depth_output_gemm3d != 0);
    _use_dummy_work_items     = preferred_dummy_work_items_support(CLKernelLibrary::get().get_device());
    _is_quantized_per_channel = output_stage.is_quantized_per_channel;

    // In case both input and dst have to be reinterpreted as 3D tensors,
    // force reinterpret_input_as_3d and reinterpret_output_as_3d to be false.
    if(_reinterpret_input_as_3d == _reinterpret_output_as_3d)
    {
        _reinterpret_input_as_3d  = false;
        _reinterpret_output_as_3d = false;
    }

    // Check if we need to slide the matrix B
    const unsigned int num_dimensions_src0 = src0->num_dimensions();
    _slide_matrix_b                        = (src1->num_dimensions() >= num_dimensions_src0);

    ElementsProcessed num_elements_processed{};

    // Configure kernel window
    auto win_config = validate_and_configure_window(src0, src1, dst, gemm_info, vector_sum_col, vector_sum_row, bias, output_multipliers, output_shifts, num_elements_processed);
    ARM_COMPUTE_ERROR_THROW_ON(win_config.first);
    ICLKernel::configure_internal(win_config.second);

    // If _reinterpret_input_as_3d = _reinterpret_output_as_3d = true,
    // we will dispatch a batched-GEMM to reduce the complexity of the address calculation within the OpenCL kernel.
    // This means that the actual m used by the kernel is given by dst->dimension(1) and not by gemm_info.m
    const unsigned int internal_m = _reinterpret_output_as_3d ? gemm_info.m : dst->dimension(1);

    // Shrink M0 to be always <= M (internal_m) to prevent out-of-bounds reads.
    // NOTE: This might have implications on heuristics and performance
    const unsigned int internal_m0 = std::min(internal_m, lhs_info.m0);

    // Calculate partial (store instead of load) M0 and partial N0 for the partial blocks at the end of a row/column if any. This is to avoid padding.
    const unsigned int partial_store_m0 = internal_m % internal_m0;
    const unsigned int partial_store_n0 = gemm_info.n % rhs_info.n0;

    // Create build options
    CLBuildOptions build_opts;
    build_opts.add_option_if(_reinterpret_input_as_3d, "-DREINTERPRET_INPUT_AS_3D");
    build_opts.add_option_if(_reinterpret_output_as_3d, "-DREINTERPRET_OUTPUT_AS_3D");
    build_opts.add_option_if(_reinterpret_input_as_3d || _reinterpret_output_as_3d, "-DHEIGHT_GEMM3D=" + support::cpp11::to_string(dst->dimension(1)));
    build_opts.add_option_if(_reinterpret_input_as_3d || _reinterpret_output_as_3d, "-DDEPTH_GEMM3D=" + support::cpp11::to_string(dst->dimension(2)));
    build_opts.add_option_if(!_slide_matrix_b, "-DMATRIX_B_DEPTH=" + support::cpp11::to_string(src1->dimension(2)));
    build_opts.add_option_if(rhs_info.interleave, "-DRHS_INTERLEAVE");
    build_opts.add_option_if(_use_dummy_work_items, "-DDUMMY_WORK_ITEMS");
    build_opts.add_option("-DM=" + support::cpp11::to_string(internal_m));
    build_opts.add_option("-DN=" + support::cpp11::to_string(gemm_info.n));
    build_opts.add_option("-DK=" + support::cpp11::to_string(gemm_info.k));
    build_opts.add_option("-DM0=" + support::cpp11::to_string(internal_m0));
    build_opts.add_option("-DN0=" + support::cpp11::to_string(rhs_info.n0));
    build_opts.add_option("-DK0=" + support::cpp11::to_string(rhs_info.k0));
    build_opts.add_option("-DH0=" + support::cpp11::to_string(rhs_info.h0));
    build_opts.add_option("-DPARTIAL_STORE_M0=" + support::cpp11::to_string(partial_store_m0));
    build_opts.add_option("-DPARTIAL_STORE_N0=" + support::cpp11::to_string(partial_store_n0));
    build_opts.add_option("-DDATA_TYPE=" + get_cl_type_from_data_type(src0->data_type()));
    build_opts.add_option("-DACC_DATA_TYPE=" + get_cl_dot8_acc_type_from_data_type(src0->data_type()));

    std::string kernel_name("gemmlowp_mm_reshaped_only_rhs_");
    kernel_name += rhs_info.transpose ? "t" : "nt";

    if(output_stage.type == GEMMLowpOutputStageType::QUANTIZE_DOWN_FIXEDPOINT)
    {
        kernel_name += "_fused_output_stage_fixedpoint";
        _fuse_output_stage = true;
        // If a_offset == 0, vector_sum_col can be a nullptr
        if(a_offset != 0 && vector_sum_col != nullptr)
        {
            build_opts.add_option("-DA_OFFSET=" + support::cpp11::to_string(a_offset));
            build_opts.add_option_if(vector_sum_col->tensor_shape().num_dimensions() > 1, "-DSUM_COL_HAS_BATCHES");
        }
        // If b_offset == 0, vector_sum_row can be a nullptr
        build_opts.add_option_if(b_offset != 0, "-DB_OFFSET=" + support::cpp11::to_string(b_offset));
        build_opts.add_option("-DK_OFFSET=" + support::cpp11::to_string(a_offset * b_offset * src0->dimension(0)));
        build_opts.add_option_if(bias != nullptr, "-DADD_BIAS");
        build_opts.add_option("-DRESULT_OFFSET=" + support::cpp11::to_string(output_stage.gemmlowp_offset));
        // In case of _is_quantized_per_channel, RESULT_MULTIPLIER and RESULT_SHIFT are not utilized, but they are passed as a part of T_QUANTIZE8 macro.
        if(!_is_quantized_per_channel)
        {
            build_opts.add_option("-DRESULT_MULTIPLIER=" + support::cpp11::to_string(output_stage.gemmlowp_multipliers[0]));
            build_opts.add_option("-DRESULT_SHIFT=" + support::cpp11::to_string(output_stage.gemmlowp_shifts[0]));
        }
        else
        {
            build_opts.add_option("-DRESULT_MULTIPLIER=0");
            build_opts.add_option("-DRESULT_SHIFT=0");
        }
        build_opts.add_option_if(_is_quantized_per_channel, "-DPER_CHANNEL_QUANTIZATION");

        const int min = output_stage.gemmlowp_min_bound;
        const int max = output_stage.gemmlowp_max_bound;

        PixelValue min_val{};
        PixelValue max_val{};
        std::tie(min_val, max_val) = get_min_max(dst->data_type());
        build_opts.add_option_if(min != min_val.get<int32_t>(), "-DMIN_BOUND=" + support::cpp11::to_string(min));
        build_opts.add_option_if(max != max_val.get<int32_t>(), "-DMAX_BOUND=" + support::cpp11::to_string(max));
    }

    // A macro guard to compile ONLY the kernel of interest
    build_opts.add_option("-D" + upper_string(kernel_name));

    // Create kernel
    _kernel = create_kernel(compile_context, kernel_name, build_opts.options());

    // Set config_id for enabling LWS tuning
    _config_id = kernel_name;
    _config_id += "_";
    _config_id += dot8_supported(CLKernelLibrary::get().get_device()) ? "_dot8" : "";
    _config_id += "_";
    _config_id += (_reinterpret_input_as_3d ? "3di_" : "");
    _config_id += (_reinterpret_output_as_3d ? "3do_" : "");
    _config_id += support::cpp11::to_string(dst->dimension(1));
    _config_id += "_";
    _config_id += support::cpp11::to_string(dst->dimension(0));
    _config_id += "_";
    _config_id += support::cpp11::to_string(gemm_info.k);
    _config_id += "_";
    _config_id += support::cpp11::to_string(dst->dimension(2));
    _config_id += "_";
    _config_id += support::cpp11::to_string(lhs_info.m0);
    _config_id += "_";
    _config_id += support::cpp11::to_string(rhs_info.n0);
    _config_id += "_";
    _config_id += support::cpp11::to_string(rhs_info.k0);
    _config_id += "_";
    _config_id += support::cpp11::to_string(rhs_info.h0);
    _config_id += "_";
    _config_id += support::cpp11::to_string(rhs_info.interleave);

    ARM_COMPUTE_ERROR_ON(has_padding_changed(padding_info));
}
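The output-stage branch above keys off GEMMKernelInfo::output_stage. A hedged sketch of populating it for the fused fixed-point stage (all numeric values are illustrative placeholders, not derived from a real quantization):

    GEMMLowpOutputStageInfo stage{};
    stage.type                     = GEMMLowpOutputStageType::QUANTIZE_DOWN_FIXEDPOINT; // only fused type supported
    stage.gemmlowp_offset          = -128;           // becomes -DRESULT_OFFSET
    stage.gemmlowp_multipliers     = { 1395864371 }; // becomes -DRESULT_MULTIPLIER (per-tensor case)
    stage.gemmlowp_shifts          = { 8 };          // becomes -DRESULT_SHIFT (per-tensor case)
    stage.gemmlowp_min_bound       = 0;              // -DMIN_BOUND, emitted only if tighter than the type's range
    stage.gemmlowp_max_bound       = 255;            // -DMAX_BOUND, likewise
    stage.is_quantized_per_channel = false;          // true would select -DPER_CHANNEL_QUANTIZATION
    gemm_info.output_stage         = stage;
    // With a fused stage the kernel name gains the "_fused_output_stage_fixedpoint"
    // suffix, dst becomes QASYMM8/QASYMM8_SIGNED rather than S32, and
    // vector_sum_col / vector_sum_row / bias may be passed to configure().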

◆ run_op()

void run_op ( ITensorPack & tensors,
const Window & window,
cl::CommandQueue & queue
)
overridevirtual

Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.

Note
The queue is not flushed by this method, and therefore the kernel will not have been executed by the time this method returns.
Parameters
[in] tensors A vector containing the tensors to operate on.
[in] window Region on which to execute the kernel. (Must be a valid region of the window returned by window()).
[in,out] queue Command queue on which to enqueue the kernel.

Reimplemented from ICLKernel.

Definition at line 456 of file ClGemmLowpMatrixMultiplyReshapedOnlyRhsKernel.cpp.

References arm_compute::ACL_BIAS, arm_compute::ACL_DST, arm_compute::ACL_MULTIPLIERS, arm_compute::ACL_SHIFTS, arm_compute::ACL_SRC_0, arm_compute::ACL_SRC_1, arm_compute::ACL_VEC_COL_SUM, arm_compute::ACL_VEC_ROW_SUM, ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, Window::DimX, Window::DimY, Window::DimZ, arm_compute::enqueue(), Window::first_slice_window_3D(), ITensorPack::get_const_tensor(), ITensorPack::get_tensor(), Window::set(), arm_compute::test::validation::reference::slice(), Window::slide_window_slice_3D(), and IKernel::window().

{
    ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
    ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(IKernel::window(), window);

    const auto src0               = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_SRC_0));
    const auto src1               = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_SRC_1));
    const auto bias               = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_BIAS));
    const auto vector_sum_col     = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_VEC_COL_SUM));
    const auto vector_sum_row     = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_VEC_ROW_SUM));
    const auto output_shifts      = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_SHIFTS));
    const auto output_multipliers = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_MULTIPLIERS));
    auto       dst                = utils::cast::polymorphic_downcast<ICLTensor *>(tensors.get_tensor(TensorType::ACL_DST));

    if(src1->info()->num_dimensions() < 3)
    {
        // The stride_z for matrix B must be zero if we do not slice
        ARM_COMPUTE_ERROR_ON(src1->info()->strides_in_bytes()[3] != 0);
    }

    Window slice          = window.first_slice_window_3D();
    Window slice_matrix_b = slice;

    slice_matrix_b.set(Window::DimX, Window::Dimension(0, 1, 1));
    slice_matrix_b.set(Window::DimY, Window::Dimension(0, 1, 1));

    if(_reinterpret_input_as_3d)
    {
        // Pass bottom paddings to the kernel if the input has to be reinterpreted as 3D tensor
        const unsigned int idx0                  = 3 * num_arguments_per_2D_tensor() + 3;
        const unsigned int total_cross_plane_pad = src0->info()->padding().top + src0->info()->padding().bottom;
        _kernel.setArg<cl_uint>(idx0, static_cast<unsigned int>(total_cross_plane_pad));
    }

    if(_reinterpret_output_as_3d)
    {
        // Pass bottom paddings to the kernel if the dst has to be reinterpreted as 3D tensor
        const unsigned int idx0                  = 3 * num_arguments_per_2D_tensor() + 3 + (_reinterpret_input_as_3d ? 1 : 0);
        const unsigned int total_cross_plane_pad = dst->info()->padding().top + dst->info()->padding().bottom;
        _kernel.setArg<cl_uint>(idx0, static_cast<unsigned int>(total_cross_plane_pad));
    }

    // Set window for vector_sum_col
    Window win_vector_sum_col = slice;
    win_vector_sum_col.set(Window::DimY, Window::Dimension(0, 0, 0));
    win_vector_sum_col.set(Window::DimZ, Window::Dimension(0, 0, 0));

    // Set window for vector_sum_row
    Window win_vector_sum_row = slice;
    win_vector_sum_row.set(Window::DimX, Window::Dimension(0, 0, 0));
    win_vector_sum_row.set(Window::DimY, Window::Dimension(0, 0, 0));
    win_vector_sum_row.set(Window::DimZ, Window::Dimension(0, 0, 0));

    Window biases_slice = slice;
    biases_slice.set(Window::DimY, Window::Dimension(0, 1, 1));
    biases_slice.set(Window::DimZ, Window::Dimension(0, 1, 1));

    do
    {
        Window slice_b = slice;
        // Don't slice matrix B along the z dimension if matrix B has just 2 dimensions and matrix A more than 2
        // This scenario can happen when the matrix multiplication is used to perform a convolution operation
        if(!_slide_matrix_b)
        {
            slice_b = slice_matrix_b;
        }

        unsigned int idx = 0;
        add_2D_tensor_argument(idx, src0, slice);
        add_2D_tensor_argument(idx, src1, slice_b);
        add_2D_tensor_argument(idx, dst, slice);
        _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(src0->info()->strides_in_bytes()[2]));
        _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(src1->info()->strides_in_bytes()[2]));
        _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(dst->info()->strides_in_bytes()[2]));
        if(_reinterpret_input_as_3d)
        {
            // Skip the argument slot holding the input's bottom padding (set once before the loop)
            idx++;
        }

        if(_reinterpret_output_as_3d)
        {
            // Skip the argument slot holding the dst's bottom padding (set once before the loop)
            idx++;
        }

        if(_fuse_output_stage)
        {
            add_2D_tensor_argument_if((vector_sum_col != nullptr), idx, vector_sum_col, win_vector_sum_col);
            add_2D_tensor_argument_if((vector_sum_row != nullptr), idx, vector_sum_row, win_vector_sum_row);
            add_1D_tensor_argument_if((bias != nullptr), idx, bias, biases_slice);
            add_1D_tensor_argument_if(_is_quantized_per_channel, idx, output_multipliers, biases_slice);
            add_1D_tensor_argument_if(_is_quantized_per_channel, idx, output_shifts, biases_slice);
        }
        enqueue(queue, *this, slice, lws_hint(), _use_dummy_work_items);
    }
    while(window.slide_window_slice_3D(slice));
}
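A hedged dispatch sketch for this method, assuming `kernel` was configured as in the earlier sketch and that lhs, rhs_reshaped and dst are hypothetical, allocated CLTensor objects matching the ITensorInfo objects given to configure():

    // Bind the tensors to the slots the kernel reads in run_op().
    ITensorPack pack;
    pack.add_const_tensor(TensorType::ACL_SRC_0, &lhs);          // LHS matrix
    pack.add_const_tensor(TensorType::ACL_SRC_1, &rhs_reshaped); // reshaped RHS
    pack.add_tensor(TensorType::ACL_DST, &dst);
    // With a fused output stage, also add ACL_VEC_COL_SUM, ACL_VEC_ROW_SUM,
    // ACL_BIAS and, for per-channel quantization, ACL_MULTIPLIERS / ACL_SHIFTS.

    // Enqueue over the kernel's full configured window; the queue is not flushed.
    kernel.run_op(pack, kernel.window(), CLScheduler::get().queue());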

◆ validate()

Status validate ( const ITensorInfo * src0,
const ITensorInfo * src1,
const ITensorInfo * dst,
const GEMMKernelInfo & gemm_info,
const ITensorInfo * vector_sum_col = nullptr,
const ITensorInfo * vector_sum_row = nullptr,
const ITensorInfo * bias = nullptr,
const ITensorInfo * output_multipliers = nullptr,
const ITensorInfo * output_shifts = nullptr
)
static

Static function to check if given info will lead to a valid configuration.

Similar to ClGemmLowpMatrixMultiplyReshapedOnlyRhsKernel::configure()

Returns
a status

Definition at line 435 of file ClGemmLowpMatrixMultiplyReshapedOnlyRhsKernel.cpp.

References ARM_COMPUTE_RETURN_ON_ERROR, ICloneable< T >::clone(), arm_compute::test::validation::gemm_info, arm_compute::cpu::kernels::validate_and_configure_window(), and arm_compute::cpu::kernels::validate_arguments().

Referenced by ClGemmLowpMatrixMultiplyCore::validate().

{
    ElementsProcessed num_elements_processed{};
    ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(src0, src1, dst, gemm_info, vector_sum_col, vector_sum_row, bias, output_multipliers, output_shifts));
    ARM_COMPUTE_RETURN_ON_ERROR(validate_and_configure_window(src0->clone().get(),
                                                              src1->clone().get(),
                                                              dst->clone().get(),
                                                              gemm_info,
                                                              vector_sum_col != nullptr ? vector_sum_col->clone().get() : nullptr,
                                                              vector_sum_row != nullptr ? vector_sum_row->clone().get() : nullptr,
                                                              bias != nullptr ? bias->clone().get() : nullptr,
                                                              output_multipliers != nullptr ? output_multipliers->clone().get() : nullptr,
                                                              output_shifts != nullptr ? output_shifts->clone().get() : nullptr,
                                                              num_elements_processed)
                                .first);

    return Status{};
}
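Applying the nullptr rules documented under configure(), a sketch of validating the fused case; vector_sum_col_info, vector_sum_row_info and bias_info are hypothetical S32 TensorInfo objects:

    using K = opencl::kernels::ClGemmLowpMatrixMultiplyReshapedOnlyRhsKernel;
    Status s = K::validate(&src0_info, &src1_info, &dst_info, gemm_info,
                           gemm_info.a_offset != 0 ? &vector_sum_col_info : nullptr, // nullptr allowed when a_offset == 0
                           gemm_info.b_offset != 0 ? &vector_sum_row_info : nullptr, // nullptr allowed when b_offset == 0
                           &bias_info);
    ARM_COMPUTE_ERROR_THROW_ON(s);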

The documentation for this class was generated from the following files:

ClGemmLowpMatrixMultiplyReshapedOnlyRhsKernel.h
ClGemmLowpMatrixMultiplyReshapedOnlyRhsKernel.cpp