Compute Library
 21.11
ClGemmLowpMatrixMultiplyNativeKernel Class Reference

OpenCL kernel to multiply matrices with QASYMM8/QASYMM8_SIGNED data type. More...

#include <ClGemmLowpMatrixMultiplyNativeKernel.h>


Public Member Functions

 ClGemmLowpMatrixMultiplyNativeKernel ()
 
 ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE (ClGemmLowpMatrixMultiplyNativeKernel)
 
void configure (const CLCompileContext &compile_context, const ITensorInfo *src0, ITensorInfo *src1, ITensorInfo *dst, const GEMMLHSMatrixInfo &lhs_info, const GEMMRHSMatrixInfo &rhs_info, const GEMMReshapeInfo &gemm_info)
 Initialise the kernel's input and dst. More...
 
void run_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue) override
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
- Public Member Functions inherited from ICLKernel
 ICLKernel ()
 Constructor. More...
 
cl::Kernel & kernel ()
 Returns a reference to the OpenCL kernel of this object. More...
 
CLKernelType type () const
 Returns the CL kernel type. More...
 
template<typename T >
void add_1D_array_argument (unsigned int &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed 1D array's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_2D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_2D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_3D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 3D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_4D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 4D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
virtual void run (const Window &window, cl::CommandQueue &queue)
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
template<typename T >
void add_argument (unsigned int &idx, T value)
 Add the passed parameters to the object's kernel's arguments starting from the index idx. More...
 
void set_lws_hint (const cl::NDRange &lws_hint)
 Set the Local-Workgroup-Size hint. More...
 
cl::NDRange lws_hint () const
 Return the Local-Workgroup-Size hint. More...
 
void set_wbsm_hint (const cl_int &wbsm_hint)
 Set the workgroup batch size modifier hint. More...
 
cl_int wbsm_hint () const
 Return the workgroup batch size modifier hint. More...
 
const std::string & config_id () const
 Get the configuration ID. More...
 
void set_target (GPUTarget target)
 Set the targeted GPU architecture. More...
 
void set_target (cl::Device &device)
 Set the targeted GPU architecture according to the CL device. More...
 
GPUTarget get_target () const
 Get the targeted GPU architecture. More...
 
size_t get_max_workgroup_size ()
 Get the maximum workgroup size for the device the CLKernelLibrary uses. More...
 
template<unsigned int dimension_size>
void add_tensor_argument (unsigned &idx, const ICLTensor *tensor, const Window &window)
 
template<typename T , unsigned int dimension_size>
void add_array_argument (unsigned &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed array's parameters to the object's kernel's arguments starting from the index idx. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 
bool is_window_configured () const
 Function to check if the embedded window of this kernel has been configured. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *src0, const ITensorInfo *src1, const ITensorInfo *dst, const GEMMLHSMatrixInfo &lhs_info, const GEMMRHSMatrixInfo &rhs_info, const GEMMReshapeInfo &gemm_info)
 Static function to check if given info will lead to a valid configuration. More...
 
- Static Public Member Functions inherited from ICLKernel
static constexpr unsigned int num_arguments_per_1D_array ()
 Returns the number of arguments enqueued per 1D array object. More...
 
static constexpr unsigned int num_arguments_per_1D_tensor ()
 Returns the number of arguments enqueued per 1D tensor object. More...
 
static constexpr unsigned int num_arguments_per_2D_tensor ()
 Returns the number of arguments enqueued per 2D tensor object. More...
 
static constexpr unsigned int num_arguments_per_3D_tensor ()
 Returns the number of arguments enqueued per 3D tensor object. More...
 
static constexpr unsigned int num_arguments_per_4D_tensor ()
 Returns the number of arguments enqueued per 4D tensor object. More...
 
static cl::NDRange gws_from_window (const Window &window)
 Get the global work size given an execution window. More...
 

Detailed Description

OpenCL kernel to multiply matrices with QASYMM8/QASYMM8_SIGNED data type.

Definition at line 39 of file ClGemmLowpMatrixMultiplyNativeKernel.h.
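As a rough mental model of what this kernel computes, the following standalone sketch (plain C++, no OpenCL; the helper name is hypothetical, not library code) performs the same lowp GEMM: 8-bit quantized inputs (QASYMM8/QASYMM8_SIGNED map to uint8_t/int8_t) are multiplied and accumulated in 32-bit integers, which is why the destination tensor's data type is S32.

```cpp
#include <cstdint>
#include <vector>

// Reference lowp GEMM: row-major MxK (a) times KxN (b), raw int8 products
// accumulated into int32, matching the S32 destination of this kernel.
std::vector<int32_t> gemmlowp_reference(const std::vector<int8_t> &a,
                                        const std::vector<int8_t> &b,
                                        int m, int n, int k)
{
    std::vector<int32_t> dst(m * n, 0);
    for(int i = 0; i < m; ++i)
    {
        for(int j = 0; j < n; ++j)
        {
            int32_t acc = 0;
            for(int p = 0; p < k; ++p)
            {
                // Widen before multiplying so the product cannot overflow 8 bits
                acc += static_cast<int32_t>(a[i * k + p]) * static_cast<int32_t>(b[p * n + j]);
            }
            dst[i * n + j] = acc;
        }
    }
    return dst;
}
```

Requantization of the S32 result back to 8 bits is handled by separate output-stage kernels, not by this one.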

Constructor & Destructor Documentation

◆ ClGemmLowpMatrixMultiplyNativeKernel()

Definition at line 161 of file ClGemmLowpMatrixMultiplyNativeKernel.cpp.

References arm_compute::GEMM.

162 {
163  _type = CLKernelType::GEMM;
164 }

Member Function Documentation

◆ ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE()

ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE ( ClGemmLowpMatrixMultiplyNativeKernel  )

◆ configure()

void configure ( const CLCompileContext & compile_context,
const ITensorInfo * src0,
ITensorInfo * src1,
ITensorInfo * dst,
const GEMMLHSMatrixInfo & lhs_info,
const GEMMRHSMatrixInfo & rhs_info,
const GEMMReshapeInfo & gemm_info 
)

Initialise the kernel's input and dst.

Parameters
[in]	compile_context	The compile context to be used.
[in]	src0	Source tensor containing the LHS matrix. Data type supported: QASYMM8/QASYMM8_SIGNED
[in]	src1	Source tensor containing the RHS matrix. Data type supported: same as src0
[out]	dst	Destination tensor to store the result of matrix multiplication. Data type supported: S32
[in]	lhs_info	LHS matrix information used to retrieve the number of rows to be processed by each thread. lhs_info.m0: 2,3,4,5,6,7,8; lhs_info.k0: 2,3,4,8,16
[in]	rhs_info	RHS matrix information used to retrieve the number of columns to be processed by each thread. rhs_info.n0: 2,3,4,8,16; rhs_info.k0: same as lhs_info.k0
[in]	gemm_info	GEMM information used to retrieve the original dimensions of the input matrices
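The block-size constraints above can be captured in a small standalone check (a hypothetical helper for illustration, not the library's internal validate_arguments()): m0 must lie in 2..8, n0 and k0 must come from {2,3,4,8,16}, and the LHS and RHS k0 values must match.

```cpp
#include <algorithm>
#include <initializer_list>

// Returns true when the documented constraints on the GEMM block sizes for
// this kernel hold: lhs_info.m0 in 2..8, lhs_info.k0 and rhs_info.n0 in
// {2,3,4,8,16}, and rhs_info.k0 equal to lhs_info.k0.
bool block_sizes_supported(unsigned m0, unsigned n0, unsigned lhs_k0, unsigned rhs_k0)
{
    const auto one_of = [](unsigned v, std::initializer_list<unsigned> values) {
        return std::find(values.begin(), values.end(), v) != values.end();
    };
    return m0 >= 2 && m0 <= 8
        && one_of(n0, { 2, 3, 4, 8, 16 })
        && one_of(lhs_k0, { 2, 3, 4, 8, 16 })
        && lhs_k0 == rhs_k0;
}
```

In the library itself, passing unsupported block sizes is rejected by validate() / configure() at configuration time.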

Definition at line 166 of file ClGemmLowpMatrixMultiplyNativeKernel.cpp.

References CLBuildOptions::add_option(), CLBuildOptions::add_option_if(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::create_kernel(), ITensorInfo::data_type(), GEMMReshapeInfo::depth_output_gemm3d(), ITensorInfo::dimension(), arm_compute::dot8_supported(), CLKernelLibrary::get(), arm_compute::get_cl_dot8_acc_type_from_data_type(), arm_compute::get_cl_type_from_data_type(), arm_compute::get_padding_info(), arm_compute::has_padding_changed(), GEMMReshapeInfo::k(), GEMMLHSMatrixInfo::k0, GEMMRHSMatrixInfo::k0, kernel_name, GEMMReshapeInfo::m(), GEMMLHSMatrixInfo::m0, GEMMReshapeInfo::n(), GEMMRHSMatrixInfo::n0, ITensorInfo::num_dimensions(), CLBuildOptions::options(), arm_compute::preferred_dummy_work_items_support(), GEMMReshapeInfo::reinterpret_input_as_3d(), and arm_compute::support::cpp11::to_string().

168 {
169  ARM_COMPUTE_ERROR_ON_NULLPTR(src0, src1, dst);
170 
171  ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(src0, src1, dst, lhs_info, rhs_info, gemm_info));
172 
173  _reinterpret_input_as_3d = gemm_info.reinterpret_input_as_3d();
174  _reinterpret_output_as_3d = (gemm_info.depth_output_gemm3d() != 0);
175  _use_dummy_work_items = preferred_dummy_work_items_support(CLKernelLibrary::get().get_device());
176 
177  // We still need padding on the X dimension for the RHS matrix
178  auto padding_info = get_padding_info({ src0, dst });
179 
180  // In case both input and dst have to be reinterpreted as 3D tensors,
181  // force reinterpret_input_as_3d and reinterpret_dst_as_3d to be false.
182  if(_reinterpret_input_as_3d == _reinterpret_output_as_3d)
183  {
184  _reinterpret_input_as_3d = false;
185  _reinterpret_output_as_3d = false;
186  }
187 
188  // Check if we need to slide the matrix B
189  const unsigned int num_dimensions_src0 = src0->num_dimensions();
190  _slide_matrix_b = (src1->num_dimensions() >= num_dimensions_src0);
191 
192  ElementsProcessed num_elements_processed{};
193 
194  // Configure kernel window
195  auto win_config = validate_and_configure_window(src0, src1, dst, lhs_info, rhs_info, gemm_info, num_elements_processed);
196  ARM_COMPUTE_ERROR_THROW_ON(win_config.first);
197  ICLKernel::configure_internal(win_config.second);
198 
199  // If _reinterpret_input_as_3d = _reinterpret_output_as_3d = true,
200  // we will dispatch a batched-GEMM to reduce the complexity of the address calculation within the OpenCL kernel.
201  // This means that the actual m used by the kernel is given by dst->info()->dimension(1) and not by gemm_info.m
202  const unsigned int internal_m = _reinterpret_output_as_3d ? gemm_info.m() : dst->dimension(1);
203  // Calculate partial (store instead of load) M0 and partial N0 for the partial blocks at the end of a row/column if any. This is to avoid padding.
204  const unsigned int partial_store_m0 = internal_m % lhs_info.m0;
205  const unsigned int partial_store_n0 = gemm_info.n() % rhs_info.n0;
206 
207  // Shrink M0 to be always <= M (internal_m) to prevent out-of-bounds reads.
208  // NOTE: This might have implications on heuristics and performance
209  const unsigned int internal_m0 = std::min(internal_m, lhs_info.m0);
210 
211  // Create build options
212  CLBuildOptions build_opts;
213  build_opts.add_option_if(_reinterpret_input_as_3d, "-DREINTERPRET_INPUT_AS_3D");
214  build_opts.add_option_if(_reinterpret_output_as_3d, "-DREINTERPRET_OUTPUT_AS_3D");
215  build_opts.add_option_if(_reinterpret_input_as_3d || _reinterpret_output_as_3d, "-DHEIGHT_GEMM3D=" + support::cpp11::to_string(dst->dimension(1)));
216  build_opts.add_option_if(_reinterpret_input_as_3d || _reinterpret_output_as_3d, "-DDEPTH_GEMM3D=" + support::cpp11::to_string(dst->dimension(2)));
217  build_opts.add_option_if(!_slide_matrix_b, "-DMATRIX_B_DEPTH=" + support::cpp11::to_string(src1->dimension(2)));
218  build_opts.add_option_if(_use_dummy_work_items, "-DDUMMY_WORK_ITEMS");
219  build_opts.add_option("-DM=" + support::cpp11::to_string(src0->dimension(1)));
220  build_opts.add_option("-DN=" + support::cpp11::to_string(gemm_info.n()));
221  build_opts.add_option("-DK=" + support::cpp11::to_string(gemm_info.k()));
222  build_opts.add_option("-DM0=" + support::cpp11::to_string(internal_m0));
223  build_opts.add_option("-DN0=" + support::cpp11::to_string(rhs_info.n0));
224  build_opts.add_option("-DK0=" + support::cpp11::to_string(rhs_info.k0));
225  build_opts.add_option("-DDATA_TYPE=" + get_cl_type_from_data_type(src0->data_type()));
226  build_opts.add_option("-DACC_DATA_TYPE=" + get_cl_dot8_acc_type_from_data_type(src0->data_type()));
227  build_opts.add_option("-DPARTIAL_STORE_M0=" + support::cpp11::to_string(partial_store_m0));
228  build_opts.add_option("-DPARTIAL_STORE_N0=" + support::cpp11::to_string(partial_store_n0));
229  std::string kernel_name("gemmlowp_mm_native");
230 
231  // Create kernel
232  _kernel = create_kernel(compile_context, kernel_name, build_opts.options());
233 
234  // Set config_id for enabling LWS tuning
235  _config_id = kernel_name;
236  _config_id += "_";
237  _config_id += dot8_supported(CLKernelLibrary::get().get_device()) ? "_dot8" : "";
238  _config_id += "_";
239  _config_id += (_reinterpret_input_as_3d ? "3di_" : "");
240  _config_id += (_reinterpret_output_as_3d ? "3do_" : "");
241  _config_id += support::cpp11::to_string(dst->dimension(1));
242  _config_id += "_";
243  _config_id += support::cpp11::to_string(dst->dimension(0));
244  _config_id += "_";
245  _config_id += support::cpp11::to_string(gemm_info.k());
246  _config_id += "_";
247  _config_id += support::cpp11::to_string(dst->dimension(2));
248  _config_id += "_";
249  _config_id += support::cpp11::to_string(lhs_info.m0);
250  _config_id += "_";
251  _config_id += support::cpp11::to_string(rhs_info.n0);
252  _config_id += "_";
253  _config_id += support::cpp11::to_string(lhs_info.k0);
254 
255  ARM_COMPUTE_ERROR_ON(has_padding_changed(padding_info));
256 }
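The partial-block arithmetic in the listing above (PARTIAL_STORE_M0 / PARTIAL_STORE_N0 and the clamped M0) can be reproduced as a standalone sketch; the struct and function names here are hypothetical, for illustration only. Trailing partial blocks are handled with partial stores instead of padding, and M0 is clamped to M so a small matrix cannot trigger out-of-bounds reads.

```cpp
#include <algorithm>

struct PartialBlocks
{
    unsigned internal_m0;      // block height actually used by the kernel
    unsigned partial_store_m0; // rows in the trailing partial block, if any
    unsigned partial_store_n0; // columns in the trailing partial block, if any
};

// Mirrors the computation in configure(): the partial sizes are the
// remainders of M and N modulo the block sizes, and M0 is shrunk to M.
PartialBlocks compute_partial_blocks(unsigned internal_m, unsigned n, unsigned m0, unsigned n0)
{
    return { std::min(internal_m, m0), internal_m % m0, n % n0 };
}
```

A zero partial size means the dimension divides evenly and every block is full.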

◆ run_op()

void run_op ( ITensorPack & tensors,
const Window & window,
cl::CommandQueue &  queue 
)
override

Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.

Note
The queue is not flushed by this method, and therefore the kernel will not have been executed by the time this method returns.
Parameters
[in]	tensors	A vector containing the tensors to operate on.
[in]	window	Region on which to execute the kernel. (Must be a valid region of the window returned by window()).
[in,out]	queue	Command queue on which to enqueue the kernel.

Reimplemented from ICLKernel.

Definition at line 275 of file ClGemmLowpMatrixMultiplyNativeKernel.cpp.

References arm_compute::ACL_DST, arm_compute::ACL_SRC_0, arm_compute::ACL_SRC_1, ICLKernel::add_2D_tensor_argument(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, Window::DimX, Window::DimY, arm_compute::enqueue(), Window::first_slice_window_3D(), ITensorPack::get_const_tensor(), ITensorPack::get_tensor(), ICLKernel::lws_hint(), ICLKernel::num_arguments_per_2D_tensor(), Window::set(), arm_compute::test::validation::reference::slice(), Window::slide_window_slice_3D(), and IKernel::window().

276 {
277  ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
278  ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(IKernel::window(), window);
279 
280  const auto src0 = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_SRC_0));
281  const auto src1 = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_SRC_1));
282  auto dst = utils::cast::polymorphic_downcast<ICLTensor *>(tensors.get_tensor(TensorType::ACL_DST));
283 
284  if(src1->info()->num_dimensions() < 3)
285  {
286  // The stride_z for matrix B must be zero if we do not slice
287  ARM_COMPUTE_ERROR_ON(src1->info()->strides_in_bytes()[3] != 0);
288  }
289 
290  Window slice = window.first_slice_window_3D();
291  Window slice_matrix_b = slice;
292 
293  slice_matrix_b.set(Window::DimX, Window::Dimension(0, 1, 1));
294  slice_matrix_b.set(Window::DimY, Window::Dimension(0, 1, 1));
295 
296  if(_reinterpret_input_as_3d)
297  {
298  // Pass bottom paddings to the kernel if the input has to be reinterpreted as 3D tensor
299  const unsigned int idx0 = 3 * num_arguments_per_2D_tensor() + 3;
300  const unsigned int total_cross_plane_pad = src0->info()->padding().top + src0->info()->padding().bottom;
301  _kernel.setArg<cl_uint>(idx0, static_cast<unsigned int>(total_cross_plane_pad));
302  }
303 
304  if(_reinterpret_output_as_3d)
305  {
306  // Pass bottom paddings to the kernel if the output has to be reinterpreted as 3D tensor
307  const unsigned int idx0 = 3 * num_arguments_per_2D_tensor() + 3 + (_reinterpret_input_as_3d ? 1 : 0);
308  const unsigned int total_cross_plane_pad = dst->info()->padding().top + dst->info()->padding().bottom;
309  _kernel.setArg<cl_uint>(idx0, static_cast<unsigned int>(total_cross_plane_pad));
310  }
311 
312  do
313  {
314  Window slice_b = slice;
315  // Don't slice matrix B along the z dimension if matrix B has just 2 dimensions and matrix A more than 2
316  // This scenario can happen when the matrix multiplication is used to perform a convolution operation
317  if(!_slide_matrix_b)
318  {
319  slice_b = slice_matrix_b;
320  }
321 
322  unsigned int idx = 0;
323  add_2D_tensor_argument(idx, src0, slice);
324  add_2D_tensor_argument(idx, src1, slice_b);
325  add_2D_tensor_argument(idx, dst, slice);
326  _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(src0->info()->strides_in_bytes()[2]));
327  _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(src1->info()->strides_in_bytes()[2]));
328  _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(dst->info()->strides_in_bytes()[2]));
329  enqueue(queue, *this, slice, lws_hint(), _use_dummy_work_items);
330  }
331  while(window.slide_window_slice_3D(slice));
332 }
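The matrix-B handling in run_op() above boils down to one rule, illustrated here with a hypothetical standalone helper (not library code): B is slid along Z together with A only when it has at least as many dimensions as A; a 2D B paired with a batched (3D) A is reused for every slice instead, which is the convolution-as-GEMM case noted in the code comments.

```cpp
#include <string>

// Decides how matrix B is presented to each 3D slice of the execution
// window, following the _slide_matrix_b logic in configure()/run_op().
std::string matrix_b_slice_policy(unsigned num_dims_a, unsigned num_dims_b)
{
    return num_dims_b >= num_dims_a ? "slide-with-A" : "broadcast-first-slice";
}
```

When B is broadcast, the kernel is built with -DMATRIX_B_DEPTH so it can wrap its Z index into B's actual depth.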

◆ validate()

Status validate ( const ITensorInfo * src0,
const ITensorInfo * src1,
const ITensorInfo * dst,
const GEMMLHSMatrixInfo & lhs_info,
const GEMMRHSMatrixInfo & rhs_info,
const GEMMReshapeInfo & gemm_info 
)
static

Static function to check if given info will lead to a valid configuration.

Similar to ClGemmLowpMatrixMultiplyNativeKernel::configure()

Returns
a status

Definition at line 258 of file ClGemmLowpMatrixMultiplyNativeKernel.cpp.

References ARM_COMPUTE_RETURN_ON_ERROR, ICloneable< T >::clone(), arm_compute::test::validation::gemm_info, arm_compute::test::validation::lhs_info, and arm_compute::test::validation::rhs_info.

Referenced by ClGemmLowpMatrixMultiplyCore::validate().

260 {
261  ElementsProcessed num_elements_processed{};
262  ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(src0, src1, dst, lhs_info, rhs_info, gemm_info));
263  ARM_COMPUTE_RETURN_ON_ERROR(validate_and_configure_window(src0->clone().get(),
264  src1->clone().get(),
265  dst->clone().get(),
266  lhs_info,
267  rhs_info,
268  gemm_info,
269  num_elements_processed)
270  .first);
271 
272  return Status{};
273 }

The documentation for this class was generated from the following files:

ClGemmLowpMatrixMultiplyNativeKernel.h
ClGemmLowpMatrixMultiplyNativeKernel.cpp