Compute Library
 21.08
CpuGemmMatrixMultiplyKernel Class Reference

Kernel to multiply two input matrices "A" and "B". More...

#include <CpuGemmMatrixMultiplyKernel.h>

Collaboration diagram for CpuGemmMatrixMultiplyKernel:

Public Member Functions

 CpuGemmMatrixMultiplyKernel ()=default
 
 ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE (CpuGemmMatrixMultiplyKernel)
 
void configure (const ITensorInfo *lhs, const ITensorInfo *rhs, ITensorInfo *dst, float alpha, bool is_interleaved, const GEMMReshapeInfo &reshape_info=GEMMReshapeInfo())
 Initialise the kernel's input and output. More...
 
void run_op (ITensorPack &tensors, const Window &window, const ThreadInfo &info) override
 Execute the kernel on the passed window. More...
 
const char * name () const override
 Name of the kernel. More...
 
- Public Member Functions inherited from ICPPKernel
virtual ~ICPPKernel ()=default
 Default destructor. More...
 
virtual void run (const Window &window, const ThreadInfo &info)
 Execute the kernel on the passed window. More...
 
virtual void run_nd (const Window &window, const ThreadInfo &info, const Window &thread_locator)
 Legacy compatibility layer for implementations which do not support thread_locator. In these cases we simply narrow the interface down to the legacy version. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 
bool is_window_configured () const
 Function to check if the embedded window of this kernel has been configured. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *lhs, const ITensorInfo *rhs, const ITensorInfo *dst, float alpha, bool is_interleaved, const GEMMReshapeInfo &reshape_info)
 Static function to check if given info will lead to a valid configuration of CpuGemmMatrixMultiplyKernel. More...
 

Detailed Description

Kernel to multiply two input matrices "A" and "B".

All elements of the output matrix/vector will be multiplied by alpha after the matrix multiplication

Note
If the output tensor is a matrix, the implementation assumes that the input tensors lhs and rhs are both matrices and have been reshaped respectively with CpuGemmInterleave4x4Kernel and CpuGemmTranspose1xWKernel
If the output tensor is a vector and the data type is F32, the implementation assumes that the first input tensor lhs is a vector and the second input tensor rhs is a matrix. The implementation also assumes that both tensors have not been reshaped

Definition at line 42 of file CpuGemmMatrixMultiplyKernel.h.
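
The snippet below is a minimal sketch of the non-reshaped F32 vector-matrix case described in the second note: lhs is a vector, rhs is the original (not reshaped) matrix B, and the destination info is left empty so that configure() can auto-initialise it. Only the configure() prototype is taken from this page; the include path, headers and the cpu::kernels namespace are assumptions about how the internal kernel is reached.

#include "CpuGemmMatrixMultiplyKernel.h" // include as shown at the top of this page (assumed path)

#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/TensorShape.h"
#include "arm_compute/core/Types.h"

using namespace arm_compute;

void configure_vector_matrix_sketch()
{
    // A is a 1 x 64 vector, B is the original 64 x 32 matrix (not reshaped).
    TensorInfo lhs(TensorShape(64U, 1U), 1, DataType::F32);
    TensorInfo rhs(TensorShape(32U, 64U), 1, DataType::F32);
    TensorInfo dst{}; // left empty: configure() auto-initialises it to 32 x 1

    cpu::kernels::CpuGemmMatrixMultiplyKernel k;
    // is_interleaved = false selects the vector-matrix path; the result is scaled by alpha.
    k.configure(&lhs, &rhs, &dst, /*alpha=*/1.0f, /*is_interleaved=*/false);
}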

Constructor & Destructor Documentation

◆ CpuGemmMatrixMultiplyKernel()

Member Function Documentation

◆ ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE()

ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE ( CpuGemmMatrixMultiplyKernel  )

◆ configure()

void configure ( const ITensorInfo *  lhs,
const ITensorInfo *  rhs,
ITensorInfo *  dst,
float  alpha,
bool  is_interleaved,
const GEMMReshapeInfo &  reshape_info = GEMMReshapeInfo() 
)

Initialise the kernel's input and output.

Note
If the output tensor is a matrix, the input matrices lhs and rhs should be the output of the kernels CpuGemmInterleave4x4Kernel and CpuGemmTranspose1xWKernel. These two kernels change the layout of the original matrices to be more cache-friendly.
Parameters
[in]   lhs             Left-hand side tensor info containing the interleaved matrix A or the vector A. Data types supported: F16/F32
[in]   rhs             Right-hand side tensor info containing the transposed matrix B if the first input tensor A is not a vector. If the output tensor is a vector, rhs must contain the matrix B not reshaped. Data type supported: same as lhs
[out]  dst             Output tensor info to store the result of the matrix multiplication. Data type supported: same as lhs
[in]   alpha           Weight of the matrix product
[in]   is_interleaved  (Optional) True if lhs and rhs have been reshaped respectively using CpuGemmInterleave4x4Kernel and CpuGemmTranspose1xWKernel
[in]   reshape_info    (Optional) GEMM reshape info. If is_interleaved_transposed = true, this object must contain the information needed to understand how lhs and rhs have been reshaped

Definition at line 1088 of file CpuGemmMatrixMultiplyKernel.cpp.

References ARM_COMPUTE_ERROR, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), arm_compute::calculate_max_window(), ICloneable< T >::clone(), ITensorInfo::data_type(), ITensorInfo::dimension(), arm_compute::F16, arm_compute::F32, GEMMReshapeInfo::m(), GEMMReshapeInfo::n(), TensorShape::set(), and ITensorInfo::tensor_shape().

void CpuGemmMatrixMultiplyKernel::configure(const ITensorInfo *lhs, const ITensorInfo *rhs, ITensorInfo *dst, float alpha, bool is_interleaved, const GEMMReshapeInfo &reshape_info)
{
    ARM_COMPUTE_ERROR_ON_NULLPTR(lhs, rhs, dst);

    // dst tensor auto initialization if not yet initialized
    TensorShape tensor_shape{ lhs->tensor_shape() };
    tensor_shape.set(0, is_interleaved ? reshape_info.n() : rhs->dimension(0));
    tensor_shape.set(1, is_interleaved ? reshape_info.m() : lhs->dimension(1));

    auto_init_if_empty(*dst, lhs->clone()->set_tensor_shape(tensor_shape));

    // Perform validate step
    ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(lhs, rhs, dst, alpha, is_interleaved, reshape_info));

    _alpha = alpha;

    // Configure kernel window
    Window win{};

    // Check if the dst tensor is a vector. If so, the kernel runs the vector-matrix multiplication
    const bool is_dst_vector = (dst->dimension(1) == 1);
    if(is_dst_vector)
    {
        const unsigned int num_elems_processed_per_iteration_x = (lhs->data_type() == DataType::F32) ? 16 : 32;

        win = calculate_max_window(*dst, Steps(num_elems_processed_per_iteration_x));
    }
    else
    {
        constexpr unsigned int num_elems_processed_per_iteration_x = 8;
        constexpr unsigned int num_elems_processed_per_iteration_y = 4;

        win = calculate_max_window(*dst, Steps(num_elems_processed_per_iteration_x, num_elems_processed_per_iteration_y));
    }

    switch(lhs->data_type())
    {
        case DataType::F32:
        {
            _func = (is_dst_vector) ? vector_matrix_multiply_f32 : matrix_matrix_multiply_f32;
            break;
        }
#ifdef __ARM_FEATURE_FP16_VECTOR_ARITHMETIC
        case DataType::F16:
        {
            _func = (is_dst_vector) ? vector_matrix_multiply_f16 : matrix_matrix_multiply_f16;
            break;
        }
#endif /* __ARM_FEATURE_FP16_VECTOR_ARITHMETIC */
        default:
        {
            ARM_COMPUTE_ERROR("Data type not supported");
            break;
        }
    }
    ICPPKernel::configure(win);
}
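
As a complement to the listing above, the following hedged sketch (assuming the same includes and using-declaration as the sketch in the Detailed Description) shows the reshaped matrix-matrix path: both inputs are assumed to have already been reshaped by CpuGemmInterleave4x4Kernel and CpuGemmTranspose1xWKernel, and reshape_info carries the original M, N, K so that configure() can derive the destination shape from reshape_info.m() and reshape_info.n(). The exact reshaped shapes (4x4 interleave for A, 1x4 transpose for an F32 B) are assumptions about those reshape kernels, not something documented on this page.

void configure_interleaved_sketch()
{
    // Hypothetical shapes for an M=8, N=8, K=12 F32 GEMM after reshaping (assumed layouts).
    TensorInfo lhs_reshaped(TensorShape(48U, 2U), 1, DataType::F32); // A after 4x4 interleave: (K*4, M/4)
    TensorInfo rhs_reshaped(TensorShape(48U, 2U), 1, DataType::F32); // B after 1x4 transpose:  (K*4, N/4)
    TensorInfo dst{};                                                // auto-initialised to (n, m) = (8, 8)

    cpu::kernels::CpuGemmMatrixMultiplyKernel k;
    k.configure(&lhs_reshaped, &rhs_reshaped, &dst, /*alpha=*/1.0f,
                /*is_interleaved=*/true, GEMMReshapeInfo(/*m=*/8, /*n=*/8, /*k=*/12));
}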

◆ name()

const char * name ( ) const
override virtual

Name of the kernel.

Returns
Kernel name

Implements ICPPKernel.

Definition at line 1168 of file CpuGemmMatrixMultiplyKernel.cpp.

1169 {
1170  return "CpuGemmMatrixMultiplyKernel";
1171 }

◆ run_op()

void run_op ( ITensorPack &  tensors,
const Window &  window,
const ThreadInfo &  info 
)
override virtual

Execute the kernel on the passed window.

Warning
If is_parallelisable() returns false then the passed window must be equal to window()
Note
The window has to be a region within the window returned by the window() method
The width of the window has to be a multiple of num_elems_processed_per_iteration().
Parameters
[in]  tensors  A vector containing the tensors to operate on.
[in]  window   Region on which to execute the kernel. (Must be a region of the window returned by window())
[in]  info     Info about executing thread and CPU.

Reimplemented from ICPPKernel.

Definition at line 1154 of file CpuGemmMatrixMultiplyKernel.cpp.

References arm_compute::ACL_DST, arm_compute::ACL_SRC_0, arm_compute::ACL_SRC_1, ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, arm_compute::test::validation::dst, ITensorPack::empty(), ITensorPack::get_const_tensor(), ITensorPack::get_tensor(), arm_compute::test::validation::info, and IKernel::window().

void CpuGemmMatrixMultiplyKernel::run_op(ITensorPack &tensors, const Window &window, const ThreadInfo &info)
{
    ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
    ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(IKernel::window(), window);
    ARM_COMPUTE_ERROR_ON(tensors.empty());
    ARM_COMPUTE_ERROR_ON(_func == nullptr);

    const ITensor *lhs = tensors.get_const_tensor(TensorType::ACL_SRC_0);
    const ITensor *rhs = tensors.get_const_tensor(TensorType::ACL_SRC_1);
    ITensor       *dst = tensors.get_tensor(TensorType::ACL_DST);

    (*_func)(lhs, rhs, dst, window, info, _alpha);
}
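
For illustration, here is a hedged sketch of driving a configured kernel directly through run_op(): the operands are wrapped in an ITensorPack under the ACL_SRC_0 / ACL_SRC_1 / ACL_DST slots referenced above, and the kernel's own maximum window is used for a single-threaded run. In the library this call is normally issued by the scheduler rather than by user code; the Tensor/Scheduler usage here is an assumption, and data initialisation is omitted.

#include "arm_compute/core/CPP/CPPTypes.h"
#include "arm_compute/core/ITensorPack.h"
#include "arm_compute/runtime/Scheduler.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

void run_vector_matrix_sketch(cpu::kernels::CpuGemmMatrixMultiplyKernel &k,
                              const TensorInfo &lhs_info, const TensorInfo &rhs_info, const TensorInfo &dst_info)
{
    // Allocate backing memory for the three tensors described by the configured infos.
    Tensor a, b, d;
    a.allocator()->init(lhs_info);
    b.allocator()->init(rhs_info);
    d.allocator()->init(dst_info);
    a.allocator()->allocate();
    b.allocator()->allocate();
    d.allocator()->allocate();

    // Pack the operands under the slots the kernel expects.
    ITensorPack pack;
    pack.add_const_tensor(TensorType::ACL_SRC_0, &a);
    pack.add_const_tensor(TensorType::ACL_SRC_1, &b);
    pack.add_tensor(TensorType::ACL_DST, &d);

    // Single-threaded execution over the kernel's full window.
    ThreadInfo info{};
    info.cpu_info = &Scheduler::get().cpu_info();
    k.run_op(pack, k.window(), info);
}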

◆ validate()

Status validate ( const ITensorInfo *  lhs,
const ITensorInfo *  rhs,
const ITensorInfo *  dst,
float  alpha,
bool  is_interleaved,
const GEMMReshapeInfo &  reshape_info 
)
static

Static function to check if given info will lead to a valid configuration of CpuGemmMatrixMultiplyKernel.

Similar to CpuGemmMatrixMultiplyKernel::configure()

Returns
a status

Definition at line 1146 of file CpuGemmMatrixMultiplyKernel.cpp.

References ARM_COMPUTE_RETURN_ON_ERROR.

Referenced by CpuGemm::validate().

Status CpuGemmMatrixMultiplyKernel::validate(const ITensorInfo *lhs, const ITensorInfo *rhs, const ITensorInfo *dst, float alpha, bool is_interleaved, const GEMMReshapeInfo &reshape_info)
{
    ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(lhs, rhs, dst, alpha, is_interleaved, reshape_info));

    return Status{};
}
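
Since validate() mirrors configure(), it can serve as a cheap pre-check before any tensors are allocated or the kernel is configured. The sketch below reuses the assumed includes and shapes of the earlier vector-matrix example (plus <iostream>) and simply inspects the returned Status.

#include <iostream>

void validate_sketch()
{
    TensorInfo lhs(TensorShape(64U, 1U), 1, DataType::F32);
    TensorInfo rhs(TensorShape(32U, 64U), 1, DataType::F32);
    TensorInfo dst(TensorShape(32U, 1U), 1, DataType::F32);

    const Status st = cpu::kernels::CpuGemmMatrixMultiplyKernel::validate(
        &lhs, &rhs, &dst, /*alpha=*/1.0f, /*is_interleaved=*/false, GEMMReshapeInfo());

    if(st.error_code() != ErrorCode::OK)
    {
        // The configuration would be rejected; report why.
        std::cerr << st.error_description() << std::endl;
    }
}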

The documentation for this class was generated from the following files:
CpuGemmMatrixMultiplyKernel.h
CpuGemmMatrixMultiplyKernel.cpp