Compute Library
 21.02
CLGEMMMatrixMultiplyReshapedOnlyRHSKernel Class Reference

OpenCL kernel to multiply matrices when only the input matrix RHS (input1) has been reshaped. More...

#include <CLGEMMMatrixMultiplyReshapedOnlyRHSKernel.h>

Collaboration diagram for CLGEMMMatrixMultiplyReshapedOnlyRHSKernel:

Public Member Functions

 CLGEMMMatrixMultiplyReshapedOnlyRHSKernel ()
 Default Constructor. More...
 
 CLGEMMMatrixMultiplyReshapedOnlyRHSKernel (const CLGEMMMatrixMultiplyReshapedOnlyRHSKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
CLGEMMMatrixMultiplyReshapedOnlyRHSKernel & operator= (const CLGEMMMatrixMultiplyReshapedOnlyRHSKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 CLGEMMMatrixMultiplyReshapedOnlyRHSKernel (CLGEMMMatrixMultiplyReshapedOnlyRHSKernel &&)=default
 Allow instances of this class to be moved. More...
 
CLGEMMMatrixMultiplyReshapedOnlyRHSKernel & operator= (CLGEMMMatrixMultiplyReshapedOnlyRHSKernel &&)=default
 Allow instances of this class to be moved. More...
 
void configure (const ICLTensor *input0, const ICLTensor *input1, const ICLTensor *input2, ICLTensor *output, float alpha, float beta, const GEMMLHSMatrixInfo &lhs_info, const GEMMRHSMatrixInfo &rhs_info, const GEMMKernelInfo &gemm_info)
 Initialise the kernel's input and output. More...
 
void configure (const CLCompileContext &compile_context, const ICLTensor *input0, const ICLTensor *input1, const ICLTensor *input2, ICLTensor *output, float alpha, float beta, const GEMMLHSMatrixInfo &lhs_info, const GEMMRHSMatrixInfo &rhs_info, const GEMMKernelInfo &gemm_info)
 Initialise the kernel's input and output. More...
 
void run (const Window &window, cl::CommandQueue &queue) override
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
- Public Member Functions inherited from ICLKernel
 ICLKernel ()
 Constructor. More...
 
cl::Kernel & kernel ()
 Returns a reference to the OpenCL kernel of this object. More...
 
template<typename T >
void add_1D_array_argument (unsigned int &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed 1D array's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_2D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_2D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_3D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 3D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_4D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 4D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
virtual void run_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue)
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
template<typename T >
void add_argument (unsigned int &idx, T value)
 Add the passed parameters to the object's kernel's arguments starting from the index idx. More...
 
void set_lws_hint (const cl::NDRange &lws_hint)
 Set the Local-Workgroup-Size hint. More...
 
cl::NDRange lws_hint () const
 Return the Local-Workgroup-Size hint. More...
 
void set_wbsm_hint (const cl_int &wbsm_hint)
 Set the workgroup batch size modifier hint. More...
 
cl_int wbsm_hint () const
 Return the workgroup batch size modifier hint. More...
 
const std::string & config_id () const
 Get the configuration ID. More...
 
void set_target (GPUTarget target)
 Set the targeted GPU architecture. More...
 
void set_target (cl::Device &device)
 Set the targeted GPU architecture according to the CL device. More...
 
GPUTarget get_target () const
 Get the targeted GPU architecture. More...
 
size_t get_max_workgroup_size ()
 Get the maximum workgroup size for the device the CLKernelLibrary uses. More...
 
template<unsigned int dimension_size>
void add_tensor_argument (unsigned &idx, const ICLTensor *tensor, const Window &window)
 
template<typename T , unsigned int dimension_size>
void add_array_argument (unsigned &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed array's parameters to the object's kernel's arguments starting from the index idx. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input0, const ITensorInfo *input1, const ITensorInfo *input2, const ITensorInfo *output, float alpha, float beta, const GEMMLHSMatrixInfo &lhs_info, const GEMMRHSMatrixInfo &rhs_info, const GEMMKernelInfo &gemm_info)
 Static function to check if given info will lead to a valid configuration of CLGEMMMatrixMultiplyReshapedOnlyRHSKernel. More...
 
- Static Public Member Functions inherited from ICLKernel
static constexpr unsigned int num_arguments_per_1D_array ()
 Returns the number of arguments enqueued per 1D array object. More...
 
static constexpr unsigned int num_arguments_per_1D_tensor ()
 Returns the number of arguments enqueued per 1D tensor object. More...
 
static constexpr unsigned int num_arguments_per_2D_tensor ()
 Returns the number of arguments enqueued per 2D tensor object. More...
 
static constexpr unsigned int num_arguments_per_3D_tensor ()
 Returns the number of arguments enqueued per 3D tensor object. More...
 
static constexpr unsigned int num_arguments_per_4D_tensor ()
 Returns the number of arguments enqueued per 4D tensor object. More...
 
static cl::NDRange gws_from_window (const Window &window)
 Get the global work size given an execution window. More...
 

Detailed Description

OpenCL kernel to multiply matrices when only the input matrix RHS (input1) has been reshaped.

Note
The input matrix input1 must be reshaped through CLGEMMReshapeRHSMatrixKernel.

Definition at line 39 of file CLGEMMMatrixMultiplyReshapedOnlyRHSKernel.h.
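
A minimal usage sketch is given below. It is illustrative only: the tensors (lhs, rhs_reshaped, bias, dst) are assumed to be allocated elsewhere, rhs_reshaped is assumed to have been produced by CLGEMMReshapeRHSMatrixKernel with the same rhs_info, and the block sizes are example values rather than tuned choices.

CLGEMMMatrixMultiplyReshapedOnlyRHSKernel mm_kernel;

GEMMLHSMatrixInfo lhs_info;
lhs_info.m0 = 4;                 // rows processed per work-item (1..8)
lhs_info.k0 = 4;                 // typically kept equal to rhs_info.k0

GEMMRHSMatrixInfo rhs_info;
rhs_info.n0         = 4;         // 2,3,4,8,16
rhs_info.k0         = 4;         // 2,3,4,8,16
rhs_info.h0         = 4;
rhs_info.transpose  = true;
rhs_info.interleave = true;

GEMMKernelInfo gemm_info;
gemm_info.m = 64;                // original (un-reshaped) GEMM dimensions
gemm_info.n = 64;
gemm_info.k = 128;

mm_kernel.configure(&lhs, &rhs_reshaped, &bias, &dst,
                    1.0f /* alpha */, 1.0f /* beta */,
                    lhs_info, rhs_info, gemm_info);

// run() only enqueues the kernel; here the default scheduler queue is used.
CLScheduler::get().enqueue(mm_kernel);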

Constructor & Destructor Documentation

◆ CLGEMMMatrixMultiplyReshapedOnlyRHSKernel() [1/3]

Default Constructor.

Definition at line 185 of file CLGEMMMatrixMultiplyReshapedOnlyRHSKernel.cpp.

186  : _input0(nullptr), _input1(nullptr), _input2(nullptr), _output(nullptr), _slide_matrix_b(true), _reinterpret_input_as_3d(false), _reinterpret_output_as_3d(false), _use_dummy_work_items(false),
187  _add_bias(false), _broadcast_bias(false), _export_to_cl_image(false), _has_pad_y(false)
188 {
189 }

◆ CLGEMMMatrixMultiplyReshapedOnlyRHSKernel() [2/3]

Prevent instances of this class from being copied (As this class contains pointers)

◆ CLGEMMMatrixMultiplyReshapedOnlyRHSKernel() [3/3]

Allow instances of this class to be moved.

Member Function Documentation

◆ configure() [1/2]

void configure ( const ICLTensor *  input0,
const ICLTensor *  input1,
const ICLTensor *  input2,
ICLTensor *  output,
float  alpha,
float  beta,
const GEMMLHSMatrixInfo &  lhs_info,
const GEMMRHSMatrixInfo &  rhs_info,
const GEMMKernelInfo &  gemm_info 
)

Initialise the kernel's input and output.

Note
If rhs_info.export_to_cl_image = true, this OpenCL kernel fetches the RHS data using the OpenCL read_image built-in function. Reading from an OpenCL image object can improve performance. However, since the OpenCL image object is created by importing the OpenCL buffer, the following conditions must hold:
  1. rhs_info.n0 can only be 4, 8 or 16
  2. rhs_info.k0 can only be 4, 8 or 16
  3. The data type can only be F32
  4. The platform must support the OpenCL cl_khr_image2d_from_buffer extension
  5. The Y stride of input1 must satisfy the OpenCL pitch alignment requirement
  6. The input1 width must be less than or equal to (CL_DEVICE_IMAGE2D_MAX_WIDTH * 4)
  7. The input1 (height * depth) must be less than or equal to CL_DEVICE_IMAGE2D_MAX_HEIGHT
Parameters
[in]   input0     Input tensor containing the LHS matrix. Data type supported: F16/F32 (only F32 if rhs_info.export_to_cl_image = true). The number of dimensions of the LHS matrix must be less than or equal to 4.
[in]   input1     Input tensor containing the reshaped RHS matrix. Data type supported: same as input0. The number of dimensions of the RHS matrix must be less than or equal to 3.
[in]   input2     Input tensor containing the bias matrix. Data type supported: same as input0.
[out]  output     Output tensor to store the result of the matrix multiplication. Data type supported: same as input0.
[in]   alpha      Weight of the matrix product.
[in]   beta       Weight of the matrix bias.
[in]   lhs_info   LHS matrix information used to retrieve the number of rows to be processed by each thread. Only the following values are supported: lhs_info.m0: 1,2,3,4,5,6,7,8
[in]   rhs_info   RHS matrix information used for reshaping the input1 tensor. Only the following values are supported: rhs_info.k0: 2,3,4,8,16; rhs_info.n0: 2,3,4,8,16; rhs_info.transpose: true,false
[in]   gemm_info  GEMM information used to retrieve the original dimensions of the input matrices.
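
In terms of the parameters above, the kernel computes, broadly (ignoring the fused activation and the 3D reinterpretation options):

    output = alpha * (input0 x input1_original) + beta * input2

where input1_original denotes the RHS matrix before reshaping, input2 is broadcast across the batch when gemm_info.broadcast_bias is set, and the bias is skipped entirely when beta is zero (see the configure() implementation below).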

Definition at line 191 of file CLGEMMMatrixMultiplyReshapedOnlyRHSKernel.cpp.

References CLKernelLibrary::get().

194 {
195  configure(CLKernelLibrary::get().get_compile_context(), input0, input1, input2, output, alpha, beta, lhs_info, rhs_info, gemm_info);
196 }

◆ configure() [2/2]

void configure ( const CLCompileContext &  compile_context,
const ICLTensor *  input0,
const ICLTensor *  input1,
const ICLTensor *  input2,
ICLTensor *  output,
float  alpha,
float  beta,
const GEMMLHSMatrixInfo &  lhs_info,
const GEMMRHSMatrixInfo &  rhs_info,
const GEMMKernelInfo &  gemm_info 
)

Initialise the kernel's input and output.

Note
If rhs_info.export_to_cl_image = true, this OpenCL kernel fetches the RHS data using the OpenCL read_image built-in function. Reading from an OpenCL image object can improve performance. However, since the OpenCL image object is created by importing the OpenCL buffer, the following conditions must hold:
  1. rhs_info.n0 can only be 4, 8 or 16
  2. rhs_info.k0 can only be 4, 8 or 16
  3. The data type can only be F32
  4. The platform must support the OpenCL cl_khr_image2d_from_buffer extension
  5. The Y stride of input1 must satisfy the OpenCL pitch alignment requirement
  6. The input1 width must be less than or equal to (CL_DEVICE_IMAGE2D_MAX_WIDTH * 4)
  7. The input1 (height * depth) must be less than or equal to CL_DEVICE_IMAGE2D_MAX_HEIGHT
Parameters
[in]   compile_context  The compile context to be used.
[in]   input0           Input tensor containing the LHS matrix. Data type supported: F16/F32 (only F32 if rhs_info.export_to_cl_image = true). The number of dimensions of the LHS matrix must be less than or equal to 4.
[in]   input1           Input tensor containing the reshaped RHS matrix. Data type supported: same as input0. The number of dimensions of the RHS matrix must be less than or equal to 3.
[in]   input2           Input tensor containing the bias matrix. Data type supported: same as input0.
[out]  output           Output tensor to store the result of the matrix multiplication. Data type supported: same as input0.
[in]   alpha            Weight of the matrix product.
[in]   beta             Weight of the matrix bias.
[in]   lhs_info         LHS matrix information used to retrieve the number of rows to be processed by each thread. Only the following values are supported: lhs_info.m0: 1,2,3,4,5,6,7,8
[in]   rhs_info         RHS matrix information used for reshaping the input1 tensor. Only the following values are supported: rhs_info.k0: 2,3,4,8,16; rhs_info.n0: 2,3,4,8,16; rhs_info.transpose: true,false
[in]   gemm_info        GEMM information used to retrieve the original dimensions of the input matrices.
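
As a rough illustration of the conditions above, a caller might gate rhs_info.export_to_cl_image with a check along the following lines. This helper is not part of the library, the pitch-alignment requirement is not checked, and the arm_compute and OpenCL headers are assumed to be included.

// Not part of the library: a rough pre-check for the export_to_cl_image
// conditions listed above (the pitch-alignment requirement is omitted).
bool can_export_rhs_to_cl_image(const cl::Device &device, const ITensorInfo &rhs_reshaped,
                                unsigned int n0, unsigned int k0)
{
    const bool n0_ok   = (n0 == 4) || (n0 == 8) || (n0 == 16);
    const bool k0_ok   = (k0 == 4) || (k0 == 8) || (k0 == 16);
    const bool type_ok = rhs_reshaped.data_type() == DataType::F32;

    // cl_khr_image2d_from_buffer must be exposed by the device
    const std::string ext = device.getInfo<CL_DEVICE_EXTENSIONS>();
    const bool ext_ok     = ext.find("cl_khr_image2d_from_buffer") != std::string::npos;

    // Width/height limits of the image object created from the buffer
    const size_t max_w = device.getInfo<CL_DEVICE_IMAGE2D_MAX_WIDTH>();
    const size_t max_h = device.getInfo<CL_DEVICE_IMAGE2D_MAX_HEIGHT>();
    const bool   w_ok  = rhs_reshaped.dimension(0) <= max_w * 4;
    const bool   h_ok  = rhs_reshaped.dimension(1) * rhs_reshaped.dimension(2) <= max_h;

    return n0_ok && k0_ok && type_ok && ext_ok && w_ok && h_ok;
}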

Definition at line 198 of file CLGEMMMatrixMultiplyReshapedOnlyRHSKernel.cpp.

References CLBuildOptions::add_option(), CLBuildOptions::add_option_if(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, GEMMKernelInfo::broadcast_bias, arm_compute::create_kernel(), ITensorInfo::data_type(), GEMMKernelInfo::depth_output_gemm3d, ITensorInfo::dimension(), arm_compute::float_to_string_with_full_precision(), CLKernelLibrary::get(), arm_compute::get_cl_type_from_data_type(), arm_compute::get_padding_info(), GEMMKernelInfo::has_pad_y, arm_compute::has_padding_changed(), ITensor::info(), arm_compute::helpers::float_ops::is_one(), arm_compute::helpers::float_ops::is_zero(), kernel_name, arm_compute::lower_string(), ITensorInfo::num_dimensions(), CLBuildOptions::options(), arm_compute::preferred_dummy_work_items_support(), GEMMKernelInfo::reinterpret_input_as_3d, arm_compute::string_from_activation_func(), arm_compute::string_from_data_type(), arm_compute::support::cpp11::to_string(), and arm_compute::validate_arguments().

203 {
204  ARM_COMPUTE_ERROR_ON_NULLPTR(input0, input1, output);
205 
206  ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(input0->info(), input1->info(), (input2 != nullptr ? input2->info() : nullptr), output->info(), alpha, beta, lhs_info, rhs_info, gemm_info));
207 
208  _input0 = input0;
209  _input1 = input1;
210  _input2 = helpers::float_ops::is_zero(beta) ? nullptr : input2;
211  _output = output;
212  _reinterpret_input_as_3d = gemm_info.reinterpret_input_as_3d;
213  _reinterpret_output_as_3d = gemm_info.depth_output_gemm3d != 0;
214  _use_dummy_work_items = preferred_dummy_work_items_support(CLKernelLibrary::get().get_device());
215  _add_bias = _input2 != nullptr;
216  _broadcast_bias = gemm_info.broadcast_bias;
217  _export_to_cl_image = rhs_info.export_to_cl_image;
218  _has_pad_y = gemm_info.has_pad_y;
219 
220  auto padding_info = get_padding_info({ input0, input1, output });
221 
222  // In case both input and output have to be reinterpreted as 3D tensors,
223  // force reinterpret_input_as_3d and reinterpret_output_as_3d to be false.
224  if((_reinterpret_input_as_3d == _reinterpret_output_as_3d) && _has_pad_y)
225  {
226  _reinterpret_input_as_3d = false;
227  _reinterpret_output_as_3d = false;
228  }
229 
230  // Check if we need to slide the matrix B
231  const unsigned int num_dimensions_input0 = _input0->info()->num_dimensions();
232  _slide_matrix_b = (_input1->info()->num_dimensions() >= num_dimensions_input0);
233 
234  ElementsProcessed num_elements_processed{};
235 
236  // Configure kernel window
237  auto win_config = validate_and_configure_window(input0->info(), input1->info(), input2 != nullptr ? input2->info() : nullptr, output->info(), lhs_info, rhs_info, gemm_info, num_elements_processed);
238  ARM_COMPUTE_ERROR_THROW_ON(win_config.first);
239  ICLKernel::configure_internal(win_config.second);
240 
241  // If _reinterpret_input_as_3d = _reinterpret_output_as_3d = true,
242  // we will dispatch a batched-GEMM to reduce the complexity of the address calculation within the OpenCL kernel.
243  // This means that the actual m used by the kernel is given by output->info()->dimension(1) and not by gemm_info.m
244  const unsigned int internal_m = _reinterpret_output_as_3d ? gemm_info.m : output->info()->dimension(1);
245 
246  // These variables are used only if gemm_info.has_pad_y == true
247  const unsigned int h_gemm_3d = _reinterpret_output_as_3d ? output->info()->dimension(1) : input0->info()->dimension(1);
248  const unsigned int d_gemm_3d = _reinterpret_output_as_3d ? output->info()->dimension(2) : input0->info()->dimension(2);
249 
250  // Shrink M0 to be always <= M (internal_m) to prevent out-of-bounds reads.
251  // NOTE: This might have implications on heuristics and performance
252  const unsigned int internal_m0 = std::min(internal_m, lhs_info.m0);
253 
254  // Calculate partial (store instead of load) M0 and partial N0 for the partial blocks at the end of a row/column if any. This is to avoid padding.
255  const unsigned int partial_store_m0 = internal_m % internal_m0;
256  const unsigned int partial_store_n0 = gemm_info.n % rhs_info.n0;
257 
258  // Create build options
259  CLBuildOptions build_opts;
260  build_opts.add_option("-DDATA_TYPE=" + get_cl_type_from_data_type(input0->info()->data_type()));
261  build_opts.add_option_if(!(helpers::float_ops::is_one(alpha)), "-DALPHA=" + float_to_string_with_full_precision(alpha));
262  build_opts.add_option_if(_input2 != nullptr, "-DBETA=" + float_to_string_with_full_precision(beta));
263  build_opts.add_option_if(helpers::float_ops::is_one(beta), "-DUNIT_BETA");
264  build_opts.add_option_if(gemm_info.broadcast_bias, "-DBROADCAST_BIAS");
265  build_opts.add_option_if(!_slide_matrix_b, "-DMATRIX_B_DEPTH=" + support::cpp11::to_string(input1->info()->dimension(2)));
266  build_opts.add_option_if(rhs_info.interleave, "-DRHS_INTERLEAVE");
267  build_opts.add_option_if(_use_dummy_work_items, "-DDUMMY_WORK_ITEMS");
268  build_opts.add_option_if(rhs_info.export_to_cl_image, "-DOPENCL_IMAGE_SUPPORT");
269  build_opts.add_option("-DRHS_HEIGHT=" + support::cpp11::to_string(input1->info()->dimension(1)));
270  build_opts.add_option("-DM=" + support::cpp11::to_string(internal_m));
271  build_opts.add_option("-DN=" + support::cpp11::to_string(gemm_info.n));
272  build_opts.add_option("-DK=" + support::cpp11::to_string(gemm_info.k));
273  build_opts.add_option("-DM0=" + support::cpp11::to_string(internal_m0));
274  build_opts.add_option("-DN0=" + support::cpp11::to_string(rhs_info.n0));
275  build_opts.add_option("-DK0=" + support::cpp11::to_string(rhs_info.k0));
276  build_opts.add_option("-DH0=" + support::cpp11::to_string(rhs_info.h0));
277  build_opts.add_option("-DPARTIAL_STORE_M0=" + support::cpp11::to_string(partial_store_m0));
278  build_opts.add_option("-DPARTIAL_STORE_N0=" + support::cpp11::to_string(partial_store_n0));
279  build_opts.add_option_if(gemm_info.activation_info.enabled(), "-DACTIVATION_TYPE=" + lower_string(string_from_activation_func(gemm_info.activation_info.activation())));
280  build_opts.add_option_if(gemm_info.activation_info.enabled(), "-DA_VAL=" + float_to_string_with_full_precision(gemm_info.activation_info.a()));
281  build_opts.add_option_if(gemm_info.activation_info.enabled(), "-DB_VAL=" + float_to_string_with_full_precision(gemm_info.activation_info.b()));
282  if(_has_pad_y)
283  {
284  build_opts.add_option_if(_reinterpret_input_as_3d, "-DREINTERPRET_INPUT_AS_3D");
285  build_opts.add_option_if(_reinterpret_output_as_3d, "-DREINTERPRET_OUTPUT_AS_3D");
286  build_opts.add_option_if(_reinterpret_input_as_3d || _reinterpret_output_as_3d, "-DHEIGHT_GEMM3D=" + support::cpp11::to_string(h_gemm_3d));
287  build_opts.add_option_if(_reinterpret_input_as_3d || _reinterpret_output_as_3d, "-DDEPTH_GEMM3D=" + support::cpp11::to_string(d_gemm_3d));
288  }
289 
290  std::string kernel_name("gemm_mm_reshaped_only_rhs_");
291  kernel_name += rhs_info.transpose ? "t" : "nt";
292  kernel_name += rhs_info.export_to_cl_image ? "_texture" : "";
293 
294  // Create kernel
295  _kernel = create_kernel(compile_context, kernel_name, build_opts.options());
296 
297  // Set config_id for enabling LWS tuning
298  _config_id = kernel_name;
299  _config_id += "_";
300  _config_id += (_has_pad_y ? "" : "no_pad_y_");
301  _config_id += (_add_bias ? "add_bias_" : "");
302  _config_id += (_broadcast_bias ? "broadcast_bias_" : "");
303  _config_id += (_reinterpret_input_as_3d ? "3di_" : "");
304  _config_id += (_reinterpret_output_as_3d ? "3do_" : "");
305  _config_id += (gemm_info.activation_info.enabled() ? "fused_activation_" : "");
306  _config_id += lower_string(string_from_data_type(input0->info()->data_type()));
307  _config_id += "_";
308  _config_id += support::cpp11::to_string(output->info()->dimension(1));
309  _config_id += "_";
310  _config_id += support::cpp11::to_string(output->info()->dimension(0));
311  _config_id += "_";
312  _config_id += support::cpp11::to_string(gemm_info.k);
313  _config_id += "_";
314  _config_id += support::cpp11::to_string(output->info()->dimension(2));
315  _config_id += "_";
316  _config_id += support::cpp11::to_string(lhs_info.m0);
317  _config_id += "_";
318  _config_id += support::cpp11::to_string(rhs_info.n0);
319  _config_id += "_";
320  _config_id += support::cpp11::to_string(rhs_info.k0);
321  _config_id += "_";
322  _config_id += support::cpp11::to_string(rhs_info.h0);
323  _config_id += "_";
324  _config_id += support::cpp11::to_string(rhs_info.interleave);
325 
327 }
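
As a worked example of the tiling logic at lines 250-256 above (values chosen for illustration): with internal_m = 33, lhs_info.m0 = 8, gemm_info.n = 30 and rhs_info.n0 = 4, the kernel uses internal_m0 = min(33, 8) = 8, partial_store_m0 = 33 % 8 = 1 and partial_store_n0 = 30 % 4 = 2, so the last block row stores only 1 row and the last block column stores only 2 columns instead of requiring the tensors to be padded.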

◆ operator=() [1/2]

Prevent instances of this class from being copied (As this class contains pointers)

◆ operator=() [2/2]

Allow instances of this class to be moved.

◆ run()

void run ( const Window &  window,
cl::CommandQueue &  queue 
)
override virtual

Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.

Note
The queue is not flushed by this method, and therefore the kernel will not have been executed by the time this method returns.
Parameters
[in]      window  Region on which to execute the kernel. (Must be a valid region of the window returned by window()).
[in,out]  queue   Command queue on which to enqueue the kernel.
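
Continuing the configuration sketch from the detailed description, a direct invocation might look as follows (the scheduler queue is used here; any valid cl::CommandQueue would do):

cl::CommandQueue &queue = CLScheduler::get().queue();
mm_kernel.run(mm_kernel.window(), queue);
queue.finish(); // run() only enqueues; flush/finish explicitly when results are needed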

Reimplemented from ICLKernel.

Definition at line 348 of file CLGEMMMatrixMultiplyReshapedOnlyRHSKernel.cpp.

References ICLKernel::add_2D_tensor_argument(), ICLKernel::add_2D_tensor_argument_if(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, BorderSize::bottom, ICLTensor::cl_buffer(), arm_compute::create_image2d_from_buffer(), ITensorInfo::data_type(), ITensorInfo::dimension(), Window::DimX, Window::DimY, arm_compute::enqueue(), Window::first_slice_window_3D(), CLKernelLibrary::get(), ITensor::info(), ICLKernel::lws_hint(), ITensorInfo::num_dimensions(), ITensorInfo::padding(), Window::set(), arm_compute::test::validation::reference::slice(), Window::slide_window_slice_3D(), ITensorInfo::strides_in_bytes(), BorderSize::top, and IKernel::window().

349 {
352 
353  if(_input1->info()->num_dimensions() < 3)
354  {
355  // The stride_z for matrix B must be zero if we do not slice
356  ARM_COMPUTE_ERROR_ON(_input1->info()->strides_in_bytes()[3] != 0);
357  }
358 
359  const size_t lhs_idx_batch_size = _reinterpret_input_as_3d && !_has_pad_y ? 3u : 2u;
360  const size_t rhs_idx_batch_size = 2u;
361  const size_t bia_idx_batch_size = 2u;
362  const size_t out_idx_batch_size = _reinterpret_output_as_3d && !_has_pad_y ? 3u : 2u;
363 
365  Window slice_matrix_b = slice;
366 
367  slice_matrix_b.set(Window::DimX, Window::Dimension(0, 1, 1));
368  slice_matrix_b.set(Window::DimY, Window::Dimension(0, 1, 1));
369 
370  // Get cross plane pads
371  const unsigned int total_cross_plane_pad_lhs = _input0->info()->padding().top + _input0->info()->padding().bottom;
372  const unsigned int total_cross_plane_pad_out = _output->info()->padding().top + _output->info()->padding().bottom;
373 
374  // The execution should fail if we try to run with has_pad_y = false but we have padding in either the LHS or DST tensor
375  ARM_COMPUTE_ERROR_ON(!_has_pad_y && ((total_cross_plane_pad_lhs != 0) || (total_cross_plane_pad_out != 0)));
376 
377  cl::Image2D input1_image2d;
378 
379  if(_export_to_cl_image)
380  {
381  const TensorShape shape2d(_input1->info()->dimension(0) / 4, _input1->info()->dimension(1) * _input1->info()->dimension(2));
382  const size_t image_row_pitch = _input1->info()->strides_in_bytes()[1];
383 
384  input1_image2d = create_image2d_from_buffer(CLKernelLibrary::get().context(), _input1->cl_buffer(), shape2d, _input1->info()->data_type(), image_row_pitch);
385  }
386 
387  do
388  {
389  Window slice_b = slice;
390  // Don't slice matrix B along the z dimension if matrix B has just 2 dimensions and matrix A more than 2
391  // This scenario can happen when the matrix multiplication is used to perform a convolution operation
392  if(!_slide_matrix_b)
393  {
394  slice_b = slice_matrix_b;
395  }
396 
397  unsigned int idx = 0;
398 
399  // LHS buffer
400  add_2D_tensor_argument(idx, _input0, slice);
401 
402  // RHS buffer or RHS OpenCL image (_export_to_cl_image == true)
403  if(_export_to_cl_image)
404  {
405  _kernel.setArg(idx++, input1_image2d);
406  }
407  else
408  {
409  add_2D_tensor_argument(idx, _input1, slice_b);
410  }
411 
412  // Bias buffer (_add_bias == true)
413  add_2D_tensor_argument_if(_add_bias, idx, _input2, slice);
414 
415  // Output buffer
416  add_2D_tensor_argument(idx, _output, slice);
417 
418  // LHS stride_z
419  _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(_input0->info()->strides_in_bytes()[lhs_idx_batch_size]));
420 
421  // RHS stride_z (not used if _export_to_cl_image == true)
422  _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(_input1->info()->strides_in_bytes()[rhs_idx_batch_size]));
423 
424  // Bias stride_z (if _add_bias == true)
425  if(_add_bias)
426  {
427  _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(_input2->info()->strides_in_bytes()[bia_idx_batch_size]));
428  }
429 
430  // Output stride_z
431  _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(_output->info()->strides_in_bytes()[out_idx_batch_size]));
432 
433  // Cross-plan padding (if _reinterpret_input_as_3d = true)
434  if(_reinterpret_input_as_3d && _has_pad_y)
435  {
436  _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(total_cross_plane_pad_lhs));
437  }
438 
439  // Cross-plan padding (if _reinterpret_output_as_3d = true)
440  if(_reinterpret_output_as_3d && _has_pad_y)
441  {
442  _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(total_cross_plane_pad_out));
443  }
444 
445  enqueue(queue, *this, slice, lws_hint(), _use_dummy_work_items);
446  }
447  while(window.slide_window_slice_3D(slice));
448 }
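
For reference, the image object created at lines 379-385 above maps the reshaped RHS buffer to a 2D image of width dimension(0) / 4 pixels (each RGBA F32 pixel packs 4 floats) and height dimension(1) * dimension(2), with the row pitch taken from strides_in_bytes()[1]. For example (illustrative shapes), a reshaped RHS of shape 512 x 64 x 2 in F32 becomes a 128 x 128 image, which is why conditions 5-7 in the configure() notes constrain the stride and the image dimensions.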

◆ validate()

Status validate ( const ITensorInfo *  input0,
const ITensorInfo *  input1,
const ITensorInfo *  input2,
const ITensorInfo *  output,
float  alpha,
float  beta,
const GEMMLHSMatrixInfo &  lhs_info,
const GEMMRHSMatrixInfo &  rhs_info,
const GEMMKernelInfo &  gemm_info 
)
static

Static function to check if given info will lead to a valid configuration of CLGEMMMatrixMultiplyReshapedOnlyRHSKernel.

Note
If rhs_info.export_to_cl_image = true, this OpenCL kernel fetches the RHS data using the OpenCL read_image built-in function. Reading from an OpenCL image object can improve performance. However, since the OpenCL image object is created by importing the OpenCL buffer, the following conditions must hold:
  1. rhs_info.n0 can only be 4, 8 or 16
  2. rhs_info.k0 can only be 4, 8 or 16
  3. The data type can only be F32
  4. The platform must support the OpenCL cl_khr_image2d_from_buffer extension
  5. The Y stride of input1 must satisfy the OpenCL pitch alignment requirement
  6. The input1 width must be less than or equal to (CL_DEVICE_IMAGE2D_MAX_WIDTH * 4)
  7. The input1 (height * depth) must be less than or equal to CL_DEVICE_IMAGE2D_MAX_HEIGHT
Parameters
[in]  input0     Input tensor info for the LHS matrix. Data type supported: F16/F32 (only F32 if rhs_info.export_to_cl_image = true). The number of dimensions of the LHS matrix must be less than or equal to 4.
[in]  input1     Input tensor info for the reshaped RHS matrix. Data type supported: same as input0. The number of dimensions of the RHS matrix must be less than or equal to 3.
[in]  input2     Input tensor info containing the bias matrix. Data type supported: same as input0.
[in]  output     Output tensor info. Data type supported: same as input0.
[in]  alpha      Weight of the matrix product.
[in]  beta       Weight of the matrix bias.
[in]  lhs_info   LHS matrix information used to retrieve the number of rows to be processed by each thread. Only the following values are supported: lhs_info.m0: 1,2,3,4,5,6,7,8
[in]  rhs_info   RHS matrix information used for reshaping the input1 tensor. Only the following values are supported: rhs_info.k0: 2,3,4,8,16; rhs_info.n0: 2,3,4,8,16; rhs_info.transpose: true,false
[in]  gemm_info  GEMM information used to retrieve the original dimensions of the input matrices.
Returns
a status
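
A hedged usage sketch, reusing the tensor and descriptor names from the earlier configuration example (only tensor metadata is needed, so this can be called before any memory is allocated):

const Status status = CLGEMMMatrixMultiplyReshapedOnlyRHSKernel::validate(
    lhs.info(), rhs_reshaped.info(), bias.info(), dst.info(),
    1.0f /* alpha */, 1.0f /* beta */, lhs_info, rhs_info, gemm_info);
if(status.error_code() != ErrorCode::OK)
{
    std::cerr << "Unsupported GEMM configuration: " << status.error_description() << std::endl;
}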

Definition at line 329 of file CLGEMMMatrixMultiplyReshapedOnlyRHSKernel.cpp.

References ARM_COMPUTE_RETURN_ON_ERROR, ICloneable< T >::clone(), and arm_compute::validate_arguments().

Referenced by CLGEMMReshapeRHSMatrixKernelManaged::configure().

332 {
333  ElementsProcessed num_elements_processed{};
334  ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(input0, input1, input2, output, alpha, beta, lhs_info, rhs_info, gemm_info));
335  ARM_COMPUTE_RETURN_ON_ERROR(validate_and_configure_window(input0->clone().get(),
336  input1->clone().get(),
337  input2 != nullptr ? input2->clone().get() : nullptr,
338  output->clone().get(),
339  lhs_info,
340  rhs_info,
341  gemm_info,
342  num_elements_processed)
343  .first);
344 
345  return Status{};
346 }

The documentation for this class was generated from the following files:
CLGEMMMatrixMultiplyReshapedOnlyRHSKernel.h
CLGEMMMatrixMultiplyReshapedOnlyRHSKernel.cpp