Compute Library
 21.02
CLGEMMLowpMatrixMultiplyNativeKernel Class Reference

OpenCL kernel to multiply matrices with QASYMM8/QASYMM8_SIGNED data type. More...

#include <CLGEMMLowpMatrixMultiplyNativeKernel.h>

Collaboration diagram for CLGEMMLowpMatrixMultiplyNativeKernel:

Public Member Functions

 CLGEMMLowpMatrixMultiplyNativeKernel ()
 Default Constructor. More...
 
 CLGEMMLowpMatrixMultiplyNativeKernel (const CLGEMMLowpMatrixMultiplyNativeKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
CLGEMMLowpMatrixMultiplyNativeKernel & operator= (const CLGEMMLowpMatrixMultiplyNativeKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 CLGEMMLowpMatrixMultiplyNativeKernel (CLGEMMLowpMatrixMultiplyNativeKernel &&)=default
 Allow instances of this class to be moved. More...
 
CLGEMMLowpMatrixMultiplyNativeKernel & operator= (CLGEMMLowpMatrixMultiplyNativeKernel &&)=default
 Allow instances of this class to be moved. More...
 
void configure (const ICLTensor *input0, const ICLTensor *input1, ICLTensor *output, const GEMMLHSMatrixInfo &lhs_info, const GEMMRHSMatrixInfo &rhs_info, const GEMMReshapeInfo &gemm_info)
 Initialise the kernel's input and output. More...
 
void configure (const CLCompileContext &compile_context, const ICLTensor *input0, const ICLTensor *input1, ICLTensor *output, const GEMMLHSMatrixInfo &lhs_info, const GEMMRHSMatrixInfo &rhs_info, const GEMMReshapeInfo &gemm_info)
 Initialise the kernel's input and output. More...
 
void run (const Window &window, cl::CommandQueue &queue) override
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
- Public Member Functions inherited from ICLKernel
 ICLKernel ()
 Constructor. More...
 
cl::Kernel & kernel ()
 Returns a reference to the OpenCL kernel of this object. More...
 
template<typename T >
void add_1D_array_argument (unsigned int &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed 1D array's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_2D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_2D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_3D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 3D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_4D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 4D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
virtual void run_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue)
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
template<typename T >
void add_argument (unsigned int &idx, T value)
 Add the passed parameters to the object's kernel's arguments starting from the index idx. More...
 
void set_lws_hint (const cl::NDRange &lws_hint)
 Set the Local-Workgroup-Size hint. More...
 
cl::NDRange lws_hint () const
 Return the Local-Workgroup-Size hint. More...
 
void set_wbsm_hint (const cl_int &wbsm_hint)
 Set the workgroup batch size modifier hint. More...
 
cl_int wbsm_hint () const
 Return the workgroup batch size modifier hint. More...
 
const std::string & config_id () const
 Get the configuration ID. More...
 
void set_target (GPUTarget target)
 Set the targeted GPU architecture. More...
 
void set_target (cl::Device &device)
 Set the targeted GPU architecture according to the CL device. More...
 
GPUTarget get_target () const
 Get the targeted GPU architecture. More...
 
size_t get_max_workgroup_size ()
 Get the maximum workgroup size for the device the CLKernelLibrary uses. More...
 
template<unsigned int dimension_size>
void add_tensor_argument (unsigned &idx, const ICLTensor *tensor, const Window &window)
 
template<typename T , unsigned int dimension_size>
void add_array_argument (unsigned &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed array's parameters to the object's kernel's arguments starting from the index idx. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input0, const ITensorInfo *input1, const ITensorInfo *output, const GEMMLHSMatrixInfo &lhs_info, const GEMMRHSMatrixInfo &rhs_info, const GEMMReshapeInfo &gemm_info)
 Static function to check if given info will lead to a valid configuration of CLGEMMLowpMatrixMultiplyNativeKernel. More...
 
- Static Public Member Functions inherited from ICLKernel
static constexpr unsigned int num_arguments_per_1D_array ()
 Returns the number of arguments enqueued per 1D array object. More...
 
static constexpr unsigned int num_arguments_per_1D_tensor ()
 Returns the number of arguments enqueued per 1D tensor object. More...
 
static constexpr unsigned int num_arguments_per_2D_tensor ()
 Returns the number of arguments enqueued per 2D tensor object. More...
 
static constexpr unsigned int num_arguments_per_3D_tensor ()
 Returns the number of arguments enqueued per 3D tensor object. More...
 
static constexpr unsigned int num_arguments_per_4D_tensor ()
 Returns the number of arguments enqueued per 4D tensor object. More...
 
static cl::NDRange gws_from_window (const Window &window)
 Get the global work size given an execution window. More...
 

Detailed Description

OpenCL kernel to multiply matrices with QASYMM8/QASYMM8_SIGNED data type.

Definition at line 34 of file CLGEMMLowpMatrixMultiplyNativeKernel.h.

Constructor & Destructor Documentation

◆ CLGEMMLowpMatrixMultiplyNativeKernel() [1/3]

Default Constructor.

Definition at line 156 of file CLGEMMLowpMatrixMultiplyNativeKernel.cpp.

: _input0(nullptr), _input1(nullptr), _output(nullptr), _slide_matrix_b(true), _reinterpret_input_as_3d(false), _reinterpret_output_as_3d(false), _use_dummy_work_items(false)
{
}

◆ CLGEMMLowpMatrixMultiplyNativeKernel() [2/3]

Prevent instances of this class from being copied (As this class contains pointers)

◆ CLGEMMLowpMatrixMultiplyNativeKernel() [3/3]

Allow instances of this class to be moved.

Member Function Documentation

◆ configure() [1/2]

void configure (const ICLTensor *input0,
                const ICLTensor *input1,
                ICLTensor *output,
                const GEMMLHSMatrixInfo &lhs_info,
                const GEMMRHSMatrixInfo &rhs_info,
                const GEMMReshapeInfo &gemm_info)

Initialise the kernel's input and output.

Parameters
[in]  input0     Input tensor containing the LHS matrix. Data type supported: QASYMM8/QASYMM8_SIGNED
[in]  input1     Input tensor containing the RHS matrix. Data type supported: same as input0
[out] output     Output tensor to store the result of matrix multiplication. Data type supported: S32
[in]  lhs_info   LHS matrix information used to retrieve the number of rows to be processed by each thread. lhs_info.m0: 2,3,4,5,6,7,8; lhs_info.k0: 2,3,4,8,16
[in]  rhs_info   RHS matrix information used to retrieve the number of columns to be processed by each thread. rhs_info.n0: 2,3,4,8,16; rhs_info.k0: same as lhs_info.k0
[in]  gemm_info  GEMM information used to retrieve the original dimensions of the input matrices

Definition at line 161 of file CLGEMMLowpMatrixMultiplyNativeKernel.cpp.

References CLKernelLibrary::get().

{
    configure(CLKernelLibrary::get().get_compile_context(), input0, input1, output, lhs_info, rhs_info, gemm_info);
}

◆ configure() [2/2]

void configure (const CLCompileContext &compile_context,
                const ICLTensor *input0,
                const ICLTensor *input1,
                ICLTensor *output,
                const GEMMLHSMatrixInfo &lhs_info,
                const GEMMRHSMatrixInfo &rhs_info,
                const GEMMReshapeInfo &gemm_info)

Initialise the kernel's input and output.

Parameters
[in]  compile_context  The compile context to be used.
[in]  input0           Input tensor containing the LHS matrix. Data type supported: QASYMM8/QASYMM8_SIGNED
[in]  input1           Input tensor containing the RHS matrix. Data type supported: same as input0
[out] output           Output tensor to store the result of matrix multiplication. Data type supported: S32
[in]  lhs_info         LHS matrix information used to retrieve the number of rows to be processed by each thread. lhs_info.m0: 2,3,4,5,6,7,8; lhs_info.k0: 2,3,4,8,16
[in]  rhs_info         RHS matrix information used to retrieve the number of columns to be processed by each thread. rhs_info.n0: 2,3,4,8,16; rhs_info.k0: same as lhs_info.k0
[in]  gemm_info        GEMM information used to retrieve the original dimensions of the input matrices

Definition at line 167 of file CLGEMMLowpMatrixMultiplyNativeKernel.cpp.

References CLBuildOptions::add_option(), CLBuildOptions::add_option_if(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::create_kernel(), ITensorInfo::data_type(), GEMMReshapeInfo::depth_output_gemm3d(), ITensorInfo::dimension(), arm_compute::dot8_supported(), CLKernelLibrary::get(), arm_compute::get_cl_dot8_acc_type_from_data_type(), arm_compute::get_cl_type_from_data_type(), arm_compute::get_padding_info(), arm_compute::has_padding_changed(), ITensor::info(), GEMMLHSMatrixInfo::k0, kernel_name, GEMMLHSMatrixInfo::m0, ITensorInfo::num_dimensions(), CLBuildOptions::options(), arm_compute::preferred_dummy_work_items_support(), GEMMReshapeInfo::reinterpret_input_as_3d(), arm_compute::support::cpp11::to_string(), and arm_compute::validate_arguments().

{
    ARM_COMPUTE_ERROR_ON_NULLPTR(input0, input1, output);

    ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(input0->info(), input1->info(), output->info(), lhs_info, rhs_info, gemm_info));

    _input0                   = input0;
    _input1                   = input1;
    _output                   = output;
    _reinterpret_input_as_3d  = gemm_info.reinterpret_input_as_3d();
    _reinterpret_output_as_3d = (gemm_info.depth_output_gemm3d() != 0);
    _use_dummy_work_items     = preferred_dummy_work_items_support(CLKernelLibrary::get().get_device());

    // We still need padding on the X dimension for the RHS matrix
    auto padding_info = get_padding_info({ input0, output });

    // In case both input and output have to be reinterpreted as 3D tensors,
    // force reinterpret_input_as_3d and reinterpret_output_as_3d to be false.
    if(_reinterpret_input_as_3d == _reinterpret_output_as_3d)
    {
        _reinterpret_input_as_3d  = false;
        _reinterpret_output_as_3d = false;
    }

    // Check if we need to slide the matrix B
    const unsigned int num_dimensions_input0 = _input0->info()->num_dimensions();
    _slide_matrix_b = (_input1->info()->num_dimensions() >= num_dimensions_input0);

    ElementsProcessed num_elements_processed{};

    // Configure kernel window
    auto win_config = validate_and_configure_window(input0->info(), input1->info(), output->info(), lhs_info, rhs_info, gemm_info, num_elements_processed);
    ARM_COMPUTE_ERROR_THROW_ON(win_config.first);
    ICLKernel::configure_internal(win_config.second);

    // If _reinterpret_input_as_3d = _reinterpret_output_as_3d = true,
    // we will dispatch a batched-GEMM to reduce the complexity of the address calculation within the OpenCL kernel.
    // This means that the actual m used by the kernel is given by output->info()->dimension(1) and not by gemm_info.m
    const unsigned int internal_m = _reinterpret_output_as_3d ? gemm_info.m() : output->info()->dimension(1);
    // Calculate partial (store instead of load) M0 and partial N0 for the partial blocks at the end of a row/column if any. This is to avoid padding.
    const unsigned int partial_store_m0 = internal_m % lhs_info.m0;
    const unsigned int partial_store_n0 = gemm_info.n() % rhs_info.n0;

    // Shrink M0 to be always <= M (internal_m) to prevent out-of-bounds reads.
    // NOTE: This might have implications on heuristics and performance
    const unsigned int internal_m0 = std::min(internal_m, lhs_info.m0);

    // Create build options
    CLBuildOptions build_opts;
    build_opts.add_option_if(_reinterpret_input_as_3d, "-DREINTERPRET_INPUT_AS_3D");
    build_opts.add_option_if(_reinterpret_output_as_3d, "-DREINTERPRET_OUTPUT_AS_3D");
    build_opts.add_option_if(_reinterpret_input_as_3d || _reinterpret_output_as_3d, "-DHEIGHT_GEMM3D=" + support::cpp11::to_string(output->info()->dimension(1)));
    build_opts.add_option_if(_reinterpret_input_as_3d || _reinterpret_output_as_3d, "-DDEPTH_GEMM3D=" + support::cpp11::to_string(output->info()->dimension(2)));
    build_opts.add_option_if(!_slide_matrix_b, "-DMATRIX_B_DEPTH=" + support::cpp11::to_string(input1->info()->dimension(2)));
    build_opts.add_option_if(_use_dummy_work_items, "-DDUMMY_WORK_ITEMS");
    build_opts.add_option("-DM=" + support::cpp11::to_string(input0->info()->dimension(1)));
    build_opts.add_option("-DN=" + support::cpp11::to_string(gemm_info.n()));
    build_opts.add_option("-DK=" + support::cpp11::to_string(gemm_info.k()));
    build_opts.add_option("-DM0=" + support::cpp11::to_string(internal_m0));
    build_opts.add_option("-DN0=" + support::cpp11::to_string(rhs_info.n0));
    build_opts.add_option("-DK0=" + support::cpp11::to_string(rhs_info.k0));
    build_opts.add_option("-DDATA_TYPE=" + get_cl_type_from_data_type(input0->info()->data_type()));
    build_opts.add_option("-DACC_DATA_TYPE=" + get_cl_dot8_acc_type_from_data_type(input0->info()->data_type()));
    build_opts.add_option("-DPARTIAL_STORE_M0=" + support::cpp11::to_string(partial_store_m0));
    build_opts.add_option("-DPARTIAL_STORE_N0=" + support::cpp11::to_string(partial_store_n0));
    std::string kernel_name("gemmlowp_mm_native");

    // Create kernel
    _kernel = create_kernel(compile_context, kernel_name, build_opts.options());

    // Set config_id for enabling LWS tuning
    _config_id = kernel_name;
    _config_id += "_";
    _config_id += dot8_supported(CLKernelLibrary::get().get_device()) ? "_dot8" : "";
    _config_id += "_";
    _config_id += (_reinterpret_input_as_3d ? "3di_" : "");
    _config_id += (_reinterpret_output_as_3d ? "3do_" : "");
    _config_id += support::cpp11::to_string(output->info()->dimension(1));
    _config_id += "_";
    _config_id += support::cpp11::to_string(output->info()->dimension(0));
    _config_id += "_";
    _config_id += support::cpp11::to_string(gemm_info.k());
    _config_id += "_";
    _config_id += support::cpp11::to_string(output->info()->dimension(2));
    _config_id += "_";
    _config_id += support::cpp11::to_string(lhs_info.m0);
    _config_id += "_";
    _config_id += support::cpp11::to_string(rhs_info.n0);
    _config_id += "_";
    _config_id += support::cpp11::to_string(lhs_info.k0);

    ARM_COMPUTE_ERROR_ON(has_padding_changed(padding_info));
}

◆ operator=() [1/2]

Prevent instances of this class from being copied (As this class contains pointers)

◆ operator=() [2/2]

Allow instances of this class to be moved.

◆ run()

void run (const Window &window,
          cl::CommandQueue &queue)
override, virtual

Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.

Note
The queue is not flushed by this method, and therefore the kernel will not have been executed by the time this method returns.
Parameters
[in]     window  Region on which to execute the kernel. (Must be a valid region of the window returned by window()).
[in,out] queue   Command queue on which to enqueue the kernel.

Reimplemented from ICLKernel.

Definition at line 280 of file CLGEMMLowpMatrixMultiplyNativeKernel.cpp.

References ICLKernel::add_2D_tensor_argument(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, BorderSize::bottom, Window::DimX, Window::DimY, arm_compute::enqueue(), Window::first_slice_window_3D(), ITensor::info(), ICLKernel::lws_hint(), ICLKernel::num_arguments_per_2D_tensor(), ITensorInfo::num_dimensions(), ITensorInfo::padding(), Window::set(), arm_compute::test::validation::reference::slice(), Window::slide_window_slice_3D(), ITensorInfo::strides_in_bytes(), BorderSize::top, and IKernel::window().

{
    ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
    ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(IKernel::window(), window);

    if(_input1->info()->num_dimensions() < 3)
    {
        // The stride_z for matrix B must be zero if we do not slice
        ARM_COMPUTE_ERROR_ON(_input1->info()->strides_in_bytes()[3] != 0);
    }

    Window slice          = window.first_slice_window_3D();
    Window slice_matrix_b = slice;

    slice_matrix_b.set(Window::DimX, Window::Dimension(0, 1, 1));
    slice_matrix_b.set(Window::DimY, Window::Dimension(0, 1, 1));

    if(_reinterpret_input_as_3d)
    {
        // Pass bottom paddings to the kernel if the input has to be reinterpreted as 3D tensor
        const unsigned int idx0                  = 3 * num_arguments_per_2D_tensor() + 3;
        const unsigned int total_cross_plane_pad = _input0->info()->padding().top + _input0->info()->padding().bottom;
        _kernel.setArg<cl_uint>(idx0, static_cast<unsigned int>(total_cross_plane_pad));
    }

    if(_reinterpret_output_as_3d)
    {
        // Pass bottom paddings to the kernel if the output has to be reinterpreted as 3D tensor
        const unsigned int idx0                  = 3 * num_arguments_per_2D_tensor() + 3 + (_reinterpret_input_as_3d ? 1 : 0);
        const unsigned int total_cross_plane_pad = _output->info()->padding().top + _output->info()->padding().bottom;
        _kernel.setArg<cl_uint>(idx0, static_cast<unsigned int>(total_cross_plane_pad));
    }

    do
    {
        Window slice_b = slice;
        // Don't slice matrix B along the z dimension if matrix B has just 2 dimensions and matrix A more than 2
        // This scenario can happen when the matrix multiplication is used to perform a convolution operation
        if(!_slide_matrix_b)
        {
            slice_b = slice_matrix_b;
        }

        unsigned int idx = 0;
        add_2D_tensor_argument(idx, _input0, slice);
        add_2D_tensor_argument(idx, _input1, slice_b);
        add_2D_tensor_argument(idx, _output, slice);
        _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(_input0->info()->strides_in_bytes()[2]));
        _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(_input1->info()->strides_in_bytes()[2]));
        _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(_output->info()->strides_in_bytes()[2]));
        enqueue(queue, *this, slice, lws_hint(), _use_dummy_work_items);
    }
    while(window.slide_window_slice_3D(slice));
}

◆ validate()

static Status validate (const ITensorInfo *input0,
                        const ITensorInfo *input1,
                        const ITensorInfo *output,
                        const GEMMLHSMatrixInfo &lhs_info,
                        const GEMMRHSMatrixInfo &rhs_info,
                        const GEMMReshapeInfo &gemm_info)

Static function to check if given info will lead to a valid configuration of CLGEMMLowpMatrixMultiplyNativeKernel.

Parameters
[in]  input0     Input tensor info for the LHS matrix. Data type supported: QASYMM8/QASYMM8_SIGNED
[in]  input1     Input tensor info for the RHS matrix. Data type supported: same as input0
[in]  output     Output tensor info. Data type supported: S32
[in]  lhs_info   LHS matrix information used to retrieve the number of rows to be processed by each thread. lhs_info.m0: 2,3,4,5,6,7,8; lhs_info.k0: 2,3,4,8,16
[in]  rhs_info   RHS matrix information used to retrieve the number of columns to be processed by each thread. rhs_info.n0: 2,3,4,8,16; rhs_info.k0: same as lhs_info.k0
[in]  gemm_info  GEMM information used to retrieve the original dimensions of the input matrices
Returns
a status

Definition at line 263 of file CLGEMMLowpMatrixMultiplyNativeKernel.cpp.

References ARM_COMPUTE_RETURN_ON_ERROR, ICloneable< T >::clone(), and arm_compute::validate_arguments().

Referenced by CLGEMMLowpMatrixMultiplyCore::validate().

{
    ElementsProcessed num_elements_processed{};
    ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(input0, input1, output, lhs_info, rhs_info, gemm_info));
    ARM_COMPUTE_RETURN_ON_ERROR(validate_and_configure_window(input0->clone().get(),
                                                              input1->clone().get(),
                                                              output->clone().get(),
                                                              lhs_info,
                                                              rhs_info,
                                                              gemm_info,
                                                              num_elements_processed)
                                .first);

    return Status{};
}

The documentation for this class was generated from the following files: