Compute Library 22.11
ClGemmMatrixMultiplyReshapedKernel Class Reference

OpenCL kernel to multiply matrices when both the input matrices LHS (src0) and RHS (src1) have been reshaped. More...

#include <ClGemmMatrixMultiplyReshapedKernel.h>

Collaboration diagram for ClGemmMatrixMultiplyReshapedKernel (diagram not shown).

Public Member Functions

 ClGemmMatrixMultiplyReshapedKernel ()
 
 ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE (ClGemmMatrixMultiplyReshapedKernel)
 
void configure (const ClCompileContext &compile_context, const ITensorInfo *src0, const ITensorInfo *src1, const ITensorInfo *src2, ITensorInfo *dst, float alpha, float beta, const GEMMLHSMatrixInfo &lhs_info, const GEMMRHSMatrixInfo &rhs_info, const GEMMKernelInfo &gemm_info)
 Initialise the kernel's input and output. More...
 
void run_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue) override
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
- Public Member Functions inherited from ICLKernel
 ICLKernel ()
 Constructor. More...
 
cl::Kernel & kernel ()
 Returns a reference to the OpenCL kernel of this object. More...
 
CLKernelType type () const
 Returns the CL kernel type. More...
 
template<typename T >
void add_1D_array_argument (unsigned int &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed 1D array's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_2D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_2D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_3D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 3D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_4D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 4D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_5D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 5D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_3d_tensor_nhw_argument (unsigned int &idx, const ICLTensor *tensor)
 Add the passed NHW 3D tensor's parameters to the object's kernel's arguments by passing strides, dimensions and the offset to the first valid element in bytes. More...
 
void add_4d_tensor_nhwc_argument (unsigned int &idx, const ICLTensor *tensor)
 Add the passed NHWC 4D tensor's parameters to the object's kernel's arguments by passing strides, dimensions and the offset to the first valid element in bytes. More...
 
virtual void run (const Window &window, cl::CommandQueue &queue)
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
virtual void run_composite_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue, const experimental::dynamic_fusion::ClExecutionDescriptor &exec_desc)
 The execution is carried out through the run_op() method, but run_op() needs to be extended to include a ClExecutionDescriptor now that LWS/GWS tuning is separated from the IKernel. More...
 
template<typename T >
void add_argument (unsigned int &idx, T value)
 Add the passed parameters to the object's kernel's arguments starting from the index idx. More...
 
void set_lws_hint (const cl::NDRange &lws_hint)
 Set the Local-Workgroup-Size hint. More...
 
cl::NDRange lws_hint () const
 Return the Local-Workgroup-Size hint. More...
 
void set_wbsm_hint (const cl_int &wbsm_hint)
 Set the workgroup batch size modifier hint. More...
 
cl_int wbsm_hint () const
 Return the workgroup batch size modifier hint. More...
 
const std::string & config_id () const
 Get the configuration ID. More...
 
void set_target (GPUTarget target)
 Set the targeted GPU architecture. More...
 
void set_target (cl::Device &device)
 Set the targeted GPU architecture according to the CL device. More...
 
GPUTarget get_target () const
 Get the targeted GPU architecture. More...
 
size_t get_max_workgroup_size ()
 Get the maximum workgroup size for the device the CLKernelLibrary uses. More...
 
template<unsigned int dimension_size>
void add_tensor_argument (unsigned &idx, const ICLTensor *tensor, const Window &window)
 
template<typename T , unsigned int dimension_size>
void add_array_argument (unsigned &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed array's parameters to the object's kernel's arguments starting from the index idx. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 
bool is_window_configured () const
 Function to check if the embedded window of this kernel has been configured. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *src0, const ITensorInfo *src1, const ITensorInfo *src2, const ITensorInfo *dst, float alpha, float beta, const GEMMLHSMatrixInfo &lhs_info, const GEMMRHSMatrixInfo &rhs_info, const GEMMKernelInfo &gemm_info)
 Static function to check if given info will lead to a valid configuration. More...
 
- Static Public Member Functions inherited from ICLKernel
static constexpr unsigned int num_arguments_per_3d_tensor_nhw ()
 Returns the number of arguments enqueued per NHW 3D Tensor object. More...
 
static constexpr unsigned int num_arguments_per_4d_tensor_nhwc ()
 Returns the number of arguments enqueued per NHWC 4D Tensor object. More...
 
static constexpr unsigned int num_arguments_per_1D_array ()
 Returns the number of arguments enqueued per 1D array object. More...
 
static constexpr unsigned int num_arguments_per_1D_tensor ()
 Returns the number of arguments enqueued per 1D tensor object. More...
 
static constexpr unsigned int num_arguments_per_2D_tensor ()
 Returns the number of arguments enqueued per 2D tensor object. More...
 
static constexpr unsigned int num_arguments_per_3D_tensor ()
 Returns the number of arguments enqueued per 3D tensor object. More...
 
static constexpr unsigned int num_arguments_per_4D_tensor ()
 Returns the number of arguments enqueued per 4D tensor object. More...
 
static cl::NDRange gws_from_window (const Window &window)
 Get the global work size given an execution window. More...
 

Detailed Description

OpenCL kernel to multiply matrices when both the input matrices LHS (src0) and RHS (src1) have been reshaped.

Note
The input matrices src0 and src1 must be reshaped through:
 ClGemmReshapeLhsMatrixKernel
 ClGemmReshapeRhsMatrixKernel

Definition at line 45 of file ClGemmMatrixMultiplyReshapedKernel.h.
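To give a rough feel for how this kernel fits into the reshaped-GEMM flow, the sketch below builds the LHS/RHS block descriptors and derives the shapes of the reshaped inputs with the library's shape_calculator helpers. The block sizes are arbitrary, and the helper names compute_lhs_reshaped_shape / compute_rhs_reshaped_shape are quoted from memory rather than from this page, so treat them as assumptions.

// Sketch only: block sizes are illustrative, not tuned values.
#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/Types.h"
#include "arm_compute/core/utils/misc/ShapeCalculator.h"

using namespace arm_compute;

void reshaped_gemm_shapes(const ITensorInfo &lhs, const ITensorInfo &rhs)
{
    // Block descriptors shared by the reshape kernels and this kernel.
    // configure() below requires lhs_info.k0 == rhs_info.k0,
    // lhs_info.transpose == false and rhs_info.transpose == true.
    GEMMLHSMatrixInfo lhs_info{};
    lhs_info.m0 = 4; lhs_info.k0 = 4; lhs_info.v0 = 2;
    lhs_info.interleave = true; lhs_info.transpose = false;

    GEMMRHSMatrixInfo rhs_info{};
    rhs_info.n0 = 4; rhs_info.k0 = 4; rhs_info.h0 = 2;
    rhs_info.interleave = true; rhs_info.transpose = true;

    // Shapes of the tensors produced by ClGemmReshapeLhsMatrixKernel and
    // ClGemmReshapeRhsMatrixKernel, i.e. the src0/src1 consumed by this kernel.
    const TensorShape lhs_reshaped = misc::shape_calculator::compute_lhs_reshaped_shape(lhs, lhs_info);
    const TensorShape rhs_reshaped = misc::shape_calculator::compute_rhs_reshaped_shape(rhs, rhs_info);

    const TensorInfo src0(lhs_reshaped, 1, lhs.data_type());
    const TensorInfo src1(rhs_reshaped, 1, rhs.data_type());
    (void)src0;
    (void)src1; // these infos are what configure()/validate() of this kernel expect
}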

Constructor & Destructor Documentation

◆ ClGemmMatrixMultiplyReshapedKernel()

Definition at line 183 of file ClGemmMatrixMultiplyReshapedKernel.cpp.

References arm_compute::GEMM.

184 {
185  _type = CLKernelType::GEMM;
186 }

Member Function Documentation

◆ ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE()

ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE ( ClGemmMatrixMultiplyReshapedKernel  )

◆ configure()

void configure ( const ClCompileContext &  compile_context,
                 const ITensorInfo *       src0,
                 const ITensorInfo *       src1,
                 const ITensorInfo *       src2,
                 ITensorInfo *             dst,
                 float                     alpha,
                 float                     beta,
                 const GEMMLHSMatrixInfo & lhs_info,
                 const GEMMRHSMatrixInfo & rhs_info,
                 const GEMMKernelInfo &    gemm_info 
               )

Initialise the kernel's input and output.

Note
The F16 computation also supports mixed precision through the gemm_info.fp_mixed_precision flag. Mixed precision combines different floating precisions during the computation, in particular, F32 for the accumulations and F16 for the multiplications. i.e. float c = (half)a * (half)b
If rhs_info.export_to_cl_image = true, this OpenCL kernel will fetch the RHS data using the OpenCL read_image built-in function. Reading from the OpenCL image object can increase the performance. However, since the OpenCL image object is created importing the OpenCL buffer, the following conditions are required:
  1. rhs_info.n0 can only be 4, 8 and 16
  2. rhs_info.k0 can only be 4, 8 and 16
  3. Data type can only be F32
  4. The platform should support the OpenCL cl_khr_image2d_from_buffer extension
  5. The stride Y for the src1 should satisfy the OpenCL pitch alignment requirement
  6. src1 width should be less or equal to (CL_DEVICE_IMAGE2D_MAX_WIDTH * 4)
  7. src1 (height * depth) should be less or equal to CL_DEVICE_IMAGE2D_MAX_HEIGHT
Parameters
[in]   compile_context  The compile context to be used.
[in]   src0             Input tensor containing the LHS reshaped matrix. Data type supported: F16/F32 (only F32 if rhs_info.export_to_cl_image = true). The number of dimensions for the LHS matrix must be less than or equal to 4.
[in]   src1             Input tensor containing the RHS reshaped matrix. Data type supported: same as src0. The number of dimensions for the RHS matrix must be less than or equal to 3.
[in]   src2             Input tensor containing the bias matrix. Data type supported: same as src0.
[out]  dst              Output tensor to store the result of the matrix multiplication. Data type supported: same as src0.
[in]   alpha            Weight of the matrix product.
[in]   beta             Weight of the matrix bias.
[in]   lhs_info         LHS matrix information used for reshaping the src0 tensor. Only the following values are supported: lhs_info.m0: 2,3,4,5,6,7,8; lhs_info.k0: 2,3,4,8,16; lhs_info.transpose: false
[in]   rhs_info         RHS matrix information used for reshaping the src1 tensor. Only the following values are supported: rhs_info.n0: 2,3,4,8,16 (only 4, 8 and 16 if rhs_info.export_to_cl_image = true); rhs_info.k0: 2,3,4,8,16 (only 4, 8 and 16 if rhs_info.export_to_cl_image = true); rhs_info.transpose: true
[in]   gemm_info        GEMM information used to retrieve the original dimensions of the input matrices
Note
lhs_info.k0 must be equal to rhs_info.k0
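As a hedged illustration of a call that respects these constraints, the sketch below configures the kernel for a plain F32 GEMM without bias (alpha = 1, beta = 0) using block sizes taken from the lists above. The block sizes, the namespace/include assumptions and the surrounding helper function are illustrative, not values or code taken from the library.

// Sketch only: src0/src1 must already contain the reshaped LHS/RHS tensor infos.
#include "ClGemmMatrixMultiplyReshapedKernel.h" // internal header, included as on this page

using namespace arm_compute;
using namespace arm_compute::opencl;
using namespace arm_compute::opencl::kernels;

void configure_reshaped_gemm(ClGemmMatrixMultiplyReshapedKernel &kernel,
                             const ClCompileContext &compile_context,
                             const ITensorInfo *src0, // reshaped LHS
                             const ITensorInfo *src1, // reshaped RHS
                             ITensorInfo *dst,
                             unsigned int m, unsigned int n, unsigned int k)
{
    GEMMLHSMatrixInfo lhs_info{};
    lhs_info.m0         = 4;     // in {2,...,8}
    lhs_info.k0         = 4;     // must match rhs_info.k0
    lhs_info.v0         = 2;
    lhs_info.interleave = true;
    lhs_info.transpose  = false; // required by this kernel

    GEMMRHSMatrixInfo rhs_info{};
    rhs_info.n0         = 4;
    rhs_info.k0         = 4;
    rhs_info.h0         = 2;
    rhs_info.interleave = true;
    rhs_info.transpose  = true;  // required by this kernel
    rhs_info.export_to_cl_image = false; // keep the plain cl_buffer path

    GEMMKernelInfo gemm_info{};
    gemm_info.m = m; // original (un-reshaped) GEMM dimensions
    gemm_info.n = n;
    gemm_info.k = k;

    // No bias: src2 == nullptr; in that case the kernel's build options ignore beta.
    kernel.configure(compile_context, src0, src1, nullptr, dst, 1.0f, 0.0f, lhs_info, rhs_info, gemm_info);
}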

Definition at line 188 of file ClGemmMatrixMultiplyReshapedKernel.cpp.

References CLBuildOptions::add_option(), CLBuildOptions::add_option_if(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), ICloneable< T >::clone(), arm_compute::misc::shape_calculator::compute_mm_shape(), arm_compute::create_kernel(), arm_compute::test::validation::data_type, ITensorInfo::data_type(), GEMMKernelInfo::depth_output_gemm3d, ITensorInfo::dimension(), GEMMRHSMatrixInfo::export_to_cl_image, arm_compute::F32, arm_compute::float_to_string_with_full_precision(), arm_compute::test::validation::gemm_info, CLKernelLibrary::get(), arm_compute::get_cl_type_from_data_type(), arm_compute::get_padding_info(), arm_compute::has_padding_changed(), GEMMLHSMatrixInfo::interleave, arm_compute::helpers::float_ops::is_one(), GEMMLHSMatrixInfo::k0, kernel_name, arm_compute::test::validation::lhs_info, arm_compute::lower_string(), GEMMLHSMatrixInfo::m0, ITensorInfo::num_dimensions(), CLBuildOptions::options(), GEMMKernelInfo::post_ops, arm_compute::preferred_dummy_work_items_support(), arm_compute::test::validation::rhs_info, arm_compute::string_from_activation_func(), arm_compute::string_from_data_type(), arm_compute::support::cpp11::to_string(), GEMMLHSMatrixInfo::transpose, arm_compute::upper_string(), GEMMLHSMatrixInfo::v0, arm_compute::cpu::kernels::validate_and_configure_window(), and arm_compute::cpu::kernels::validate_arguments().

191 {
192  ARM_COMPUTE_ERROR_ON_NULLPTR(src0, src1, dst);
193 
194  // dst tensor auto initialization if not yet initialized
195  auto_init_if_empty(*dst, src0->clone()->set_tensor_shape(misc::shape_calculator::compute_mm_shape(*src0, *src1, gemm_info)));
196 
197  ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(src0, src1, src2, dst, alpha, beta, lhs_info, rhs_info, gemm_info));
198 
199  auto padding_info = get_padding_info({ src0, src1, src2, dst });
200  _reinterpret_output_as_3d = gemm_info.depth_output_gemm3d != 0;
201  _use_dummy_work_items = preferred_dummy_work_items_support(CLKernelLibrary::get().get_device());
202  _add_bias = src2 != nullptr;
203  _export_to_cl_image = rhs_info.export_to_cl_image;
204  _num_post_op_args = gemm_info.post_ops.total_num_arguments();
205 
206  // Check if we need to slide the matrix B
207  const unsigned int num_dimensions_src0 = src0->num_dimensions();
208  _slide_matrix_b = (src1->num_dimensions() >= num_dimensions_src0);
209 
210  ElementsProcessed num_elements_processed{};
211 
212  // Configure kernel window
213  auto win_config = validate_and_configure_window(src0->clone().get(),
214  src1->clone().get(),
215  (src2 != nullptr) ? src2->clone().get() : nullptr,
216  dst->clone().get(),
217  lhs_info,
218  rhs_info,
219  gemm_info,
220  num_elements_processed);
221  ARM_COMPUTE_ERROR_THROW_ON(win_config.first);
222  ICLKernel::configure_internal(win_config.second);
223 
224  const bool enable_mixed_precision = gemm_info.fp_mixed_precision;
225  const DataType data_type = src0->data_type();
226 
227  // Calculate partial (store instead of load) M0 and partial N0 for the partial blocks at the end of a row/column if any. This is to avoid padding.
228  const unsigned int internal_m = _reinterpret_output_as_3d ? gemm_info.m : dst->dimension(1);
229 
230  const unsigned int partial_store_m0 = internal_m % lhs_info.m0;
231  const unsigned int partial_store_n0 = gemm_info.n % rhs_info.n0;
232  _m = gemm_info.m;
233  _n = gemm_info.n;
234  _k = gemm_info.k;
235 
236  // Create build options
237  CLBuildOptions build_opts;
238  build_opts.add_option_if(!(helpers::float_ops::is_one(alpha)), "-DALPHA=" + float_to_string_with_full_precision(alpha));
239  build_opts.add_option_if(src2 != nullptr, "-DBETA=" + float_to_string_with_full_precision(beta));
240  build_opts.add_option_if(helpers::float_ops::is_one(beta), "-DUNIT_BETA");
241  build_opts.add_option_if(_reinterpret_output_as_3d, "-DREINTERPRET_OUTPUT_AS_3D");
242  build_opts.add_option_if(_reinterpret_output_as_3d, "-DHEIGHT_GEMM3D=" + support::cpp11::to_string(dst->dimension(1)));
243  build_opts.add_option_if(_reinterpret_output_as_3d, "-DDEPTH_GEMM3D=" + support::cpp11::to_string(dst->dimension(2)));
244  build_opts.add_option_if(gemm_info.broadcast_bias, "-DBROADCAST_BIAS");
245  build_opts.add_option_if(!_slide_matrix_b, "-DMATRIX_B_DEPTH=" + support::cpp11::to_string(src1->dimension(2)));
246  build_opts.add_option_if(lhs_info.interleave, "-DLHS_INTERLEAVE");
247  build_opts.add_option_if(rhs_info.interleave, "-DRHS_INTERLEAVE");
248  build_opts.add_option_if(lhs_info.transpose, "-DLHS_TRANSPOSE");
249  build_opts.add_option_if(_use_dummy_work_items, "-DDUMMY_WORK_ITEMS");
250  build_opts.add_option_if(enable_mixed_precision, "-DMIXED_PRECISION");
251  build_opts.add_option_if(rhs_info.export_to_cl_image, "-DOPENCL_IMAGE_SUPPORT");
252  build_opts.add_option("-DRHS_HEIGHT=" + support::cpp11::to_string(src1->dimension(1)));
253  build_opts.add_option("-DDATA_TYPE=" + get_cl_type_from_data_type(data_type));
254  build_opts.add_option("-DDATA_TYPE_ACCUMULATOR=" + (enable_mixed_precision ? get_cl_type_from_data_type(DataType::F32) : get_cl_type_from_data_type(data_type)));
255  build_opts.add_option("-DM0=" + support::cpp11::to_string(lhs_info.m0));
256  build_opts.add_option("-DN0=" + support::cpp11::to_string(rhs_info.n0));
257  build_opts.add_option("-DK0=" + support::cpp11::to_string(lhs_info.k0));
258  build_opts.add_option("-DV0=" + support::cpp11::to_string(lhs_info.v0));
259  build_opts.add_option("-DH0=" + support::cpp11::to_string(rhs_info.h0));
260  build_opts.add_option("-DPARTIAL_STORE_M0=" + support::cpp11::to_string(partial_store_m0));
261  build_opts.add_option("-DPARTIAL_STORE_N0=" + support::cpp11::to_string(partial_store_n0));
262  // If post_ops are used, then we disable the use of gemm_info.activation_info
263  if(gemm_info.post_ops.size() > 0)
264  {
265  post_op_utils.set_post_ops_cl_build_options(build_opts, gemm_info.post_ops);
266  }
267  else
268  {
269  build_opts.add_option_if(gemm_info.activation_info.enabled(), "-DACTIVATION_TYPE=" + lower_string(string_from_activation_func(gemm_info.activation_info.activation())));
270  build_opts.add_option_if(gemm_info.activation_info.enabled(), "-DA_VAL=" + float_to_string_with_full_precision(gemm_info.activation_info.a()));
271  build_opts.add_option_if(gemm_info.activation_info.enabled(), "-DB_VAL=" + float_to_string_with_full_precision(gemm_info.activation_info.b()));
272  }
273 
274  std::string kernel_name("gemm_mm_reshaped_");
275  kernel_name += lhs_info.transpose ? "lhs_t_" : "lhs_nt_";
276  kernel_name += rhs_info.transpose ? "rhs_t" : "rhs_nt";
277  kernel_name += rhs_info.export_to_cl_image ? "_texture" : "";
278  post_op_utils.set_post_ops_cl_kernel_name(kernel_name, gemm_info.post_ops);
279 
280  // A macro guard to compile ONLY the kernel of interest
281  build_opts.add_option("-D" + upper_string(kernel_name));
282 
283  // Create kernel
284  _kernel = create_kernel(compile_context, kernel_name, build_opts.options());
285 
286  // Set config_id for enabling LWS tuning
287  _config_id = kernel_name;
288  _config_id += "_";
289  _config_id += (_add_bias ? "add_bias_" : "");
290  _config_id += (gemm_info.broadcast_bias ? "broadcast_bias_" : "");
291  _config_id += (_reinterpret_output_as_3d ? "3do_" : "");
292  _config_id += (gemm_info.activation_info.enabled() ? "fused_activation_" : "");
293  _config_id += lower_string(string_from_data_type(src0->data_type()));
294  _config_id += "_";
295  _config_id += (enable_mixed_precision ? "mixed_precision_" : "");
296  _config_id += support::cpp11::to_string(dst->dimension(1));
297  _config_id += "_";
298  _config_id += support::cpp11::to_string(dst->dimension(0));
299  _config_id += "_";
300  _config_id += support::cpp11::to_string(gemm_info.k);
301  _config_id += "_";
302  _config_id += support::cpp11::to_string(dst->dimension(2));
303  _config_id += "_";
304  _config_id += support::cpp11::to_string(lhs_info.m0);
305  _config_id += "_";
306  _config_id += support::cpp11::to_string(rhs_info.n0);
307  _config_id += "_";
308  _config_id += support::cpp11::to_string(lhs_info.k0);
309  _config_id += "_";
310  _config_id += support::cpp11::to_string(lhs_info.v0);
311  _config_id += "_";
312  _config_id += support::cpp11::to_string(rhs_info.h0);
313  _config_id += "_";
314  _config_id += support::cpp11::to_string(lhs_info.interleave);
315  _config_id += "_";
316  _config_id += support::cpp11::to_string(rhs_info.interleave);
317 
318  ARM_COMPUTE_ERROR_ON(has_padding_changed(padding_info));
319 }

◆ run_op()

void run_op ( ITensorPack &      tensors,
              const Window &     window,
              cl::CommandQueue & queue 
            )
overridevirtual

Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.

Note
The queue is not flushed by this method, and therefore the kernel will not have been executed by the time this method returns.
Parameters
[in]      tensors  A vector containing the tensors to operate on.
[in]      window   Region on which to execute the kernel. (Must be a valid region of the window returned by window()).
[in,out]  queue    Command queue on which to enqueue the kernel.

Reimplemented from ICLKernel.
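A hedged sketch of dispatching a previously configured kernel follows: the ICLTensor pointers are assumed to be already allocated and filled with the reshaped data, and taking the command queue from CLScheduler is purely illustrative.

// Sketch only: tensor allocation/import is omitted; src2 may be nullptr when no bias was configured.
#include "arm_compute/core/CL/ICLTensor.h"
#include "arm_compute/core/ITensorPack.h"
#include "arm_compute/runtime/CL/CLScheduler.h"

#include "ClGemmMatrixMultiplyReshapedKernel.h" // internal header, included as on this page

using namespace arm_compute;

void run_reshaped_gemm(opencl::kernels::ClGemmMatrixMultiplyReshapedKernel &kernel,
                       const ICLTensor *src0, const ICLTensor *src1, const ICLTensor *src2, ICLTensor *dst)
{
    // Pack the operands under the slots the kernel reads in run_op().
    ITensorPack pack;
    pack.add_const_tensor(TensorType::ACL_SRC_0, src0); // reshaped LHS
    pack.add_const_tensor(TensorType::ACL_SRC_1, src1); // reshaped RHS
    if(src2 != nullptr)
    {
        pack.add_const_tensor(TensorType::ACL_SRC_2, src2); // bias
    }
    pack.add_tensor(TensorType::ACL_DST, dst);

    // Enqueue over the kernel's full execution window; the queue is not flushed by run_op().
    kernel.run_op(pack, kernel.window(), CLScheduler::get().queue());
    CLScheduler::get().sync(); // flush and wait when the result is needed on the host side
}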

Definition at line 329 of file ClGemmMatrixMultiplyReshapedKernel.cpp.

References arm_compute::ACL_DST, arm_compute::ACL_SRC_0, arm_compute::ACL_SRC_1, arm_compute::ACL_SRC_2, ICLKernel::add_2D_tensor_argument(), ICLKernel::add_2D_tensor_argument_if(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, arm_compute::create_image2d_from_buffer(), Window::DimX, Window::DimY, arm_compute::enqueue(), Window::first_slice_window_3D(), CLKernelLibrary::get(), ITensorPack::get_const_tensor(), arm_compute::experimental::get_post_op_arg_type(), ITensorPack::get_tensor(), ICLKernel::lws_hint(), Window::set(), arm_compute::test::validation::reference::slice(), Window::slide_window_slice_3D(), and IKernel::window().

330 {
331  ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(*this);
332  ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(IKernel::window(), window);
333 
334  const auto src0 = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_SRC_0));
335  const auto src1 = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_SRC_1));
336  const auto src2 = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_SRC_2));
337  auto dst = utils::cast::polymorphic_downcast<ICLTensor *>(tensors.get_tensor(TensorType::ACL_DST));
338 
339  ARM_COMPUTE_ERROR_ON_NULLPTR(src0, src1, dst);
340  ARM_COMPUTE_ERROR_ON(_add_bias && src2 == nullptr);
341 
342  if(src1->info()->num_dimensions() < 3)
343  {
344  // The stride_z for matrix B must be zero if we do not slice
345  ARM_COMPUTE_ERROR_ON(src1->info()->strides_in_bytes()[3] != 0);
346  }
347 
348  Window slice = window.first_slice_window_3D();
349  Window slice_matrix_b = slice;
350 
351  slice_matrix_b.set(Window::DimX, Window::Dimension(0, 1, 1));
352  slice_matrix_b.set(Window::DimY, Window::Dimension(0, 1, 1));
353 
354  const unsigned int total_cross_plane_pad = dst->info()->padding().top + dst->info()->padding().bottom;
355 
356  cl::Image2D src1_image2d;
357 
358  if(_export_to_cl_image)
359  {
360  const TensorShape shape2d(src1->info()->dimension(0) / 4, src1->info()->dimension(1) * src1->info()->dimension(2));
361  const size_t image_row_pitch = src1->info()->strides_in_bytes()[1];
362 
363  src1_image2d = create_image2d_from_buffer(CLKernelLibrary::get().context(), src1->cl_buffer(), shape2d, src1->info()->data_type(), image_row_pitch);
364  }
365 
366  do
367  {
368  Window slice_b = slice;
369  // Don't slice matrix B along the z dimension if matrix B has just 2 dimensions and matrix A more than 2
370  // This scenario can happen when the matrix multiplication is used to perform a convolution operation
371  if(!_slide_matrix_b)
372  {
373  slice_b = slice_matrix_b;
374  }
375 
376  unsigned int idx = 0;
377 
378  // LHS buffer
379  add_2D_tensor_argument(idx, src0, slice);
380 
381  // RHS buffer or RHS OpenCL image (_export_to_cl_image == true)
382  if(_export_to_cl_image)
383  {
384  _kernel.setArg(idx++, src1_image2d);
385  }
386  else
387  {
388  add_2D_tensor_argument(idx, src1, slice_b);
389  }
390 
391  // Bias buffer (_add_bias == true)
392  add_2D_tensor_argument_if(_add_bias, idx, src2, slice);
393 
394  // dst buffer
395  add_2D_tensor_argument(idx, dst, slice);
396 
397  // post op argument buffers
398  for(size_t i = 0; i < _num_post_op_args; ++i)
399  {
400  const auto post_op_arg = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(experimental::get_post_op_arg_type(i)));
401  add_2D_tensor_argument(idx, post_op_arg, slice);
402  }
403 
404  // LHS stride_z
405  _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(src0->info()->strides_in_bytes()[2]));
406 
407  // RHS stride_z (not used if _export_to_cl_image == true)
408  _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(src1->info()->strides_in_bytes()[2]));
409 
410  // Bias stride_z (if _add_bias == true)
411  if(_add_bias)
412  {
413  _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(src2->info()->strides_in_bytes()[2]));
414  }
415 
416  // dst stride_z
417  _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(dst->info()->strides_in_bytes()[2]));
418 
419  // post op argument stride_z
420  for(size_t i = 0; i < _num_post_op_args; ++i)
421  {
422  const auto post_op_arg = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(experimental::get_post_op_arg_type(i)));
423  _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(post_op_arg->info()->strides_in_bytes()[2]));
424  }
425  // Cross-plan padding (if _reinterpret_output_as_3d = true)
426  if(_reinterpret_output_as_3d)
427  {
428  _kernel.setArg<cl_uint>(idx++, static_cast<unsigned int>(total_cross_plane_pad));
429  }
430 
431  // Pass m, n and k at runtime
432  _kernel.setArg<cl_int>(idx++, _m);
433  _kernel.setArg<cl_int>(idx++, _n);
434 
435  // K dimension (not used if _export_to_cl_image == true)
436  _kernel.setArg<cl_int>(idx++, _k);
437 
438  // Dispatch kernel
439  enqueue(queue, *this, slice, lws_hint(), _use_dummy_work_items);
440  }
441  while(window.slide_window_slice_3D(slice));
442 }

◆ validate()

Status validate ( const ITensorInfo *       src0,
                  const ITensorInfo *       src1,
                  const ITensorInfo *       src2,
                  const ITensorInfo *       dst,
                  float                     alpha,
                  float                     beta,
                  const GEMMLHSMatrixInfo & lhs_info,
                  const GEMMRHSMatrixInfo & rhs_info,
                  const GEMMKernelInfo &    gemm_info 
                )
static

Static function to check if given info will lead to a valid configuration.

Similar to ClGemmMatrixMultiplyReshapedKernel::configure()

Returns
a status
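A small sketch of the intended guard pattern, assuming tensor infos and GEMM descriptors built as in the configure() example above: call validate() first and fall back to another GEMM variant when it reports an error. The helper function name is illustrative.

// Sketch only: returns whether this kernel accepts the proposed configuration.
#include <iostream>

#include "ClGemmMatrixMultiplyReshapedKernel.h" // internal header, included as on this page

using namespace arm_compute;

bool reshaped_gemm_is_valid(const ITensorInfo *src0, const ITensorInfo *src1,
                            const ITensorInfo *src2, const ITensorInfo *dst,
                            float alpha, float beta,
                            const GEMMLHSMatrixInfo &lhs_info,
                            const GEMMRHSMatrixInfo &rhs_info,
                            const GEMMKernelInfo &gemm_info)
{
    const Status status = opencl::kernels::ClGemmMatrixMultiplyReshapedKernel::validate(
        src0, src1, src2, dst, alpha, beta, lhs_info, rhs_info, gemm_info);

    if(status.error_code() != ErrorCode::OK)
    {
        // e.g. fall back to a non-reshaped GEMM kernel
        std::cerr << "Reshaped GEMM rejected: " << status.error_description() << std::endl;
        return false;
    }
    return true;
}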

Definition at line 321 of file ClGemmMatrixMultiplyReshapedKernel.cpp.

References ARM_COMPUTE_RETURN_ON_ERROR, and arm_compute::cpu::kernels::validate_arguments().

Referenced by arm_compute::test::validation::DATA_TEST_CASE(), and arm_compute::test::validation::FIXTURE_DATA_TEST_CASE().

324 {
325  ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(src0, src1, src2, dst, alpha, beta, lhs_info, rhs_info, gemm_info));
326  return Status{};
327 }

The documentation for this class was generated from the following files:

ClGemmMatrixMultiplyReshapedKernel.h
ClGemmMatrixMultiplyReshapedKernel.cpp