Compute Library 21.02
CLGEMMLowpOffsetContributionKernel Class Reference

OpenCL kernel used to add the offset contribution after the matrix multiplication. More...

#include <CLGEMMLowpOffsetContributionKernel.h>

Collaboration diagram for CLGEMMLowpOffsetContributionKernel (diagram omitted)

Public Member Functions

 CLGEMMLowpOffsetContributionKernel ()
 Constructor. More...
 
 CLGEMMLowpOffsetContributionKernel (const CLGEMMLowpOffsetContributionKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
CLGEMMLowpOffsetContributionKernel & operator= (const CLGEMMLowpOffsetContributionKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 CLGEMMLowpOffsetContributionKernel (CLGEMMLowpOffsetContributionKernel &&)=default
 Allow instances of this class to be moved. More...
 
CLGEMMLowpOffsetContributionKernel & operator= (CLGEMMLowpOffsetContributionKernel &&)=default
 Allow instances of this class to be moved. More...
 
void configure (ICLTensor *mm_result, const ICLTensor *vector_sum_col, const ICLTensor *vector_sum_row, const ICLTensor *bias, int32_t k, int32_t a_offset, int32_t b_offset)
 Initialise the kernel's input and output. More...
 
void configure (const CLCompileContext &compile_context, ICLTensor *mm_result, const ICLTensor *vector_sum_col, const ICLTensor *vector_sum_row, const ICLTensor *bias, int32_t k, int32_t a_offset, int32_t b_offset)
 Initialise the kernel's input and output. More...
 
void run (const Window &window, cl::CommandQueue &queue) override
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
- Public Member Functions inherited from ICLKernel
 ICLKernel ()
 Constructor. More...
 
cl::Kernel & kernel ()
 Returns a reference to the OpenCL kernel of this object. More...
 
template<typename T >
void add_1D_array_argument (unsigned int &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed 1D array's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_2D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_2D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_3D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 3D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_4D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 4D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
virtual void run_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue)
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
template<typename T >
void add_argument (unsigned int &idx, T value)
 Add the passed parameters to the object's kernel's arguments starting from the index idx. More...
 
void set_lws_hint (const cl::NDRange &lws_hint)
 Set the Local-Workgroup-Size hint. More...
 
cl::NDRange lws_hint () const
 Return the Local-Workgroup-Size hint. More...
 
void set_wbsm_hint (const cl_int &wbsm_hint)
 Set the workgroup batch size modifier hint. More...
 
cl_int wbsm_hint () const
 Return the workgroup batch size modifier hint. More...
 
const std::string & config_id () const
 Get the configuration ID. More...
 
void set_target (GPUTarget target)
 Set the targeted GPU architecture. More...
 
void set_target (cl::Device &device)
 Set the targeted GPU architecture according to the CL device. More...
 
GPUTarget get_target () const
 Get the targeted GPU architecture. More...
 
size_t get_max_workgroup_size ()
 Get the maximum workgroup size for the device the CLKernelLibrary uses. More...
 
template<unsigned int dimension_size>
void add_tensor_argument (unsigned &idx, const ICLTensor *tensor, const Window &window)
 
template<typename T , unsigned int dimension_size>
void add_array_argument (unsigned &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed array's parameters to the object's kernel's arguments starting from the index idx. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *mm_result, const ITensorInfo *vector_sum_col, const ITensorInfo *vector_sum_row, const ITensorInfo *bias, int32_t a_offset, int32_t b_offset)
 Static function to check if given info will lead to a valid configuration of CLGEMMLowpOffsetContributionKernel. More...
 
- Static Public Member Functions inherited from ICLKernel
static constexpr unsigned int num_arguments_per_1D_array ()
 Returns the number of arguments enqueued per 1D array object. More...
 
static constexpr unsigned int num_arguments_per_1D_tensor ()
 Returns the number of arguments enqueued per 1D tensor object. More...
 
static constexpr unsigned int num_arguments_per_2D_tensor ()
 Returns the number of arguments enqueued per 2D tensor object. More...
 
static constexpr unsigned int num_arguments_per_3D_tensor ()
 Returns the number of arguments enqueued per 3D tensor object. More...
 
static constexpr unsigned int num_arguments_per_4D_tensor ()
 Returns the number of arguments enqueued per 4D tensor object. More...
 
static cl::NDRange gws_from_window (const Window &window)
 Get the global work size given an execution window. More...
 

Detailed Description

OpenCL kernel used to add the offset contribution after the matrix multiplication.

The computation is performed in-place.

This kernel takes the final int32 accumulator values (the output of the matrix multiplication) and adds to them, in-place, the offset contributions of matrix A and matrix B.

The final result is:

mm_result[i][k] = mm_result[i][k] + (vector_sum_col[k] * a_offset) + (vector_sum_row[i] * b_offset) + (a_offset * b_offset * k)

Definition at line 46 of file CLGEMMLowpOffsetContributionKernel.h.
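
Note that in the formula above k denotes both the column index into mm_result and the number of matrix A columns. The following is a minimal scalar reference of the same contribution, written purely for illustration (the names, shapes and row-major layout are assumptions, not library code); the OpenCL kernel performs the same arithmetic in-place on the device:

#include <cstdint>
#include <vector>

// Illustrative scalar reference (not library code): applies the offset
// contribution in-place to an M x N row-major S32 accumulator. The column
// index is written as j here to avoid clashing with k, the number of
// matrix A columns.
void offset_contribution_reference(std::vector<int32_t> &mm_result, int M, int N,
                                   const std::vector<int32_t> &vector_sum_col, // N sums, one per column of B
                                   const std::vector<int32_t> &vector_sum_row, // M sums, one per row of A
                                   int32_t k, int32_t a_offset, int32_t b_offset)
{
    for(int i = 0; i < M; ++i)
    {
        for(int j = 0; j < N; ++j)
        {
            mm_result[i * N + j] += vector_sum_col[j] * a_offset
                                    + vector_sum_row[i] * b_offset
                                    + a_offset * b_offset * k;
        }
    }
}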

Constructor & Destructor Documentation

◆ CLGEMMLowpOffsetContributionKernel() [1/3]

Constructor.

Definition at line 97 of file CLGEMMLowpOffsetContributionKernel.cpp.

98  : _vector_sum_col(nullptr), _vector_sum_row(nullptr), _mm_result(nullptr), _bias(nullptr)
99 {
100 }

◆ CLGEMMLowpOffsetContributionKernel() [2/3]

Prevent instances of this class from being copied (As this class contains pointers)

◆ CLGEMMLowpOffsetContributionKernel() [3/3]

Allow instances of this class to be moved.

Member Function Documentation

◆ configure() [1/2]

void configure ( ICLTensor *  mm_result,
const ICLTensor *  vector_sum_col,
const ICLTensor *  vector_sum_row,
const ICLTensor *  bias,
int32_t  k,
int32_t  a_offset,
int32_t  b_offset 
)

Initialise the kernel's input and output.

Parameters
  [in,out] mm_result       Input tensor containing the result of the matrix multiplication. Data type supported: S32
  [in]     vector_sum_col  Input row-vector of sums of all the entries in each column of matrix B. Note: vector_sum_col can be a nullptr in case a_offset = 0. Data type supported: same as mm_result
  [in]     vector_sum_row  Input row-vector of sums of all the entries in each row of matrix A. Note: vector_sum_row can be a nullptr in case b_offset = 0. Data type supported: same as mm_result
  [in]     bias            Biases tensor. Only shared biases supported and it can be a nullptr if the addition of biases is not required. Biases are 1D tensor with dimensions [OFM]. Data type supported: Same as input.
  [in]     k               Number of matrix A columns or matrix B rows
  [in]     a_offset        Offset to be added to each element of the matrix A.
  [in]     b_offset        Offset to be added to each element of the matrix B.

Definition at line 102 of file CLGEMMLowpOffsetContributionKernel.cpp.

References CLKernelLibrary::get().

104 {
105  configure(CLKernelLibrary::get().get_compile_context(), mm_result, vector_sum_col, vector_sum_row, bias, k, a_offset, b_offset);
106 }
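
A hedged usage sketch for this overload follows (tensor shapes, offset values and the exact include paths are illustrative assumptions; in application code this kernel is normally driven through CLGEMMLowpMatrixMultiplyCore rather than configured by hand):

#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/CL/CLScheduler.h"
#include "arm_compute/runtime/CL/CLTensor.h"
#include "CLGEMMLowpOffsetContributionKernel.h" // header as included at the top of this page

using namespace arm_compute;

int main()
{
    CLScheduler::get().default_init(); // create the CL context and queue

    constexpr int     M = 4, N = 8, K = 16;       // C = A(M x K) * B(K x N)
    constexpr int32_t a_offset = -5, b_offset = 3; // quantization offsets of A and B

    // S32 accumulator produced by the low-precision matrix multiplication.
    CLTensor mm_result;
    mm_result.allocator()->init(TensorInfo(TensorShape(N, M), 1, DataType::S32));

    // Column sums of B and row sums of A (see the parameter notes above).
    CLTensor vector_sum_col, vector_sum_row;
    vector_sum_col.allocator()->init(TensorInfo(TensorShape(N), 1, DataType::S32));
    vector_sum_row.allocator()->init(TensorInfo(TensorShape(M), 1, DataType::S32));

    CLGEMMLowpOffsetContributionKernel kernel;
    kernel.configure(&mm_result, &vector_sum_col, &vector_sum_row,
                     nullptr /* no bias */, K, a_offset, b_offset);

    mm_result.allocator()->allocate();
    vector_sum_col.allocator()->allocate();
    vector_sum_row.allocator()->allocate();

    // ... fill the tensors, then enqueue the kernel (see run() below).
    CLScheduler::get().enqueue(kernel);
    CLScheduler::get().sync();
    return 0;
}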

◆ configure() [2/2]

void configure ( const CLCompileContext &  compile_context,
ICLTensor *  mm_result,
const ICLTensor *  vector_sum_col,
const ICLTensor *  vector_sum_row,
const ICLTensor *  bias,
int32_t  k,
int32_t  a_offset,
int32_t  b_offset 
)

Initialise the kernel's input and output.

Parameters
  [in]     compile_context The compile context to be used.
  [in,out] mm_result       Input tensor containing the result of the matrix multiplication. Data type supported: S32
  [in]     vector_sum_col  Input row-vector of sums of all the entries in each column of matrix B. Note: vector_sum_col can be a nullptr in case a_offset = 0. Data type supported: same as mm_result
  [in]     vector_sum_row  Input row-vector of sums of all the entries in each row of matrix A. Note: vector_sum_row can be a nullptr in case b_offset = 0. Data type supported: same as mm_result
  [in]     bias            Biases tensor. Only shared biases supported and it can be a nullptr if the addition of biases is not required. Biases are 1D tensor with dimensions [OFM]. Data type supported: Same as input.
  [in]     k               Number of matrix A columns or matrix B rows
  [in]     a_offset        Offset to be added to each element of the matrix A.
  [in]     b_offset        Offset to be added to each element of the matrix B.

Definition at line 108 of file CLGEMMLowpOffsetContributionKernel.cpp.

References CLBuildOptions::add_option(), arm_compute::adjust_vec_size(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::calculate_max_window(), arm_compute::create_kernel(), ITensorInfo::dimension(), arm_compute::get_padding_info(), arm_compute::has_padding_changed(), ITensor::info(), kernel_name, Dimensions< T >::num_dimensions(), ITensorInfo::num_dimensions(), num_elems_processed_per_iteration, ITensorInfo::tensor_shape(), arm_compute::support::cpp11::to_string(), arm_compute::validate_arguments(), Dimensions< T >::x(), and Dimensions< T >::y().

112 {
113  // Perform validate step
114  ARM_COMPUTE_ERROR_ON_NULLPTR(mm_result);
115  ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(mm_result->info(),
116  vector_sum_col != nullptr ? vector_sum_col->info() : nullptr,
117  vector_sum_row != nullptr ? vector_sum_row->info() : nullptr,
118  bias != nullptr ? bias->info() : nullptr,
119  a_offset, b_offset)); // NOLINT
120 
121  auto padding_info = get_padding_info({ mm_result, vector_sum_col, vector_sum_row, bias });
122 
123  _vector_sum_col = vector_sum_col;
124  _vector_sum_row = vector_sum_row;
125  _mm_result = mm_result;
126  _bias = bias;
127 
128  // Check if input is a 3D reinterpretation
129  const bool reinterpret_as_3d = vector_sum_row != nullptr
130  && mm_result->info()->num_dimensions() > 1
131  && mm_result->info()->tensor_shape().y() != vector_sum_row->info()->tensor_shape().x();
132 
133  const unsigned int num_elems_processed_per_iteration = adjust_vec_size(4, mm_result->info()->dimension(0));
134 
135  // Set the arguments to pass at compile time
136  CLBuildOptions build_opts;
137  build_opts.add_option("-DVEC_SIZE=" + support::cpp11::to_string(num_elems_processed_per_iteration));
138  build_opts.add_option("-DVEC_SIZE_LEFTOVER=" + support::cpp11::to_string(mm_result->info()->dimension(0) % num_elems_processed_per_iteration));
139 
140  // If a_offset == 0, vector_sum_col can be a nullptr
141  if(a_offset != 0)
142  {
143  build_opts.add_option("-DA_OFFSET=" + support::cpp11::to_string(a_offset));
144  build_opts.add_option_if(vector_sum_col->info()->tensor_shape().num_dimensions() > 1, "-DSUM_COL_HAS_BATCHES");
145  }
146  // If b_offset == 0, vector_sum_row can be a nullptr
147  build_opts.add_option_if(b_offset != 0, "-DB_OFFSET=" + support::cpp11::to_string(b_offset));
148  build_opts.add_option("-DK_OFFSET=" + support::cpp11::to_string(a_offset * b_offset * k));
149  build_opts.add_option_if(reinterpret_as_3d, "-DHEIGHT_INPUT3D=" + support::cpp11::to_string(mm_result->info()->dimension(1)));
150  build_opts.add_option_if(reinterpret_as_3d, "-DDEPTH_INPUT3D=" + support::cpp11::to_string(mm_result->info()->dimension(2)));
151  build_opts.add_option_if(bias != nullptr, "-DADD_BIAS");
152 
153  std::string kernel_name("gemmlowp_offset_contribution");
154 
155  // Create kernel
156  _kernel = create_kernel(compile_context, kernel_name, build_opts.options());
157 
158  // Configure kernel window
159  Window win = calculate_max_window(*mm_result->info(), Steps(num_elems_processed_per_iteration));
160  ICLKernel::configure_internal(win);
161 
162  // Set config_id for enabling LWS tuning
163  _config_id = kernel_name + "_";
164  _config_id += support::cpp11::to_string(mm_result->info()->dimension(0));
165  _config_id += "_";
166  _config_id += support::cpp11::to_string(mm_result->info()->dimension(1));
167  _config_id += "_";
168  _config_id += support::cpp11::to_string(mm_result->info()->dimension(2));
169 
170  ARM_COMPUTE_ERROR_ON(has_padding_changed(padding_info));
171 }

◆ operator=() [1/2]

Prevent instances of this class from being copied (As this class contains pointers)

◆ operator=() [2/2]

Allow instances of this class to be moved.

◆ run()

void run ( const Window &  window,
cl::CommandQueue &  queue 
)
override virtual

Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.

Note
The queue is not flushed by this method, and therefore the kernel will not have been executed by the time this method returns.
Parameters
  [in]     window Region on which to execute the kernel. (Must be a valid region of the window returned by window()).
  [in,out] queue  Command queue on which to enqueue the kernel.

Reimplemented from ICLKernel.

Definition at line 180 of file CLGEMMLowpOffsetContributionKernel.cpp.

References ICLKernel::add_1D_tensor_argument_if(), ICLKernel::add_2D_tensor_argument_if(), ICLKernel::add_3D_tensor_argument(), ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, Window::collapse_if_possible(), Window::DimX, Window::DimY, Window::DimZ, arm_compute::enqueue(), Window::first_slice_window_3D(), ICLKernel::lws_hint(), Window::set(), arm_compute::test::validation::reference::slice(), Window::slide_window_slice_3D(), and IKernel::window().

181 {
182  ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
183  ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(ICLKernel::window(), window);
184 
185  Window collapsed = window.collapse_if_possible(ICLKernel::window(), Window::DimZ);
186  Window slice = collapsed.first_slice_window_3D();
187 
188  // Set window for vector_sum_col
189  Window win_vector_sum_col = slice;
190  win_vector_sum_col.set(Window::DimY, Window::Dimension(0, 0, 0));
191  win_vector_sum_col.set(Window::DimZ, Window::Dimension(0, 0, 0));
192 
193  // Set window for vector_sum_row
194  Window win_vector_sum_row = slice;
195  win_vector_sum_row.set(Window::DimX, Window::Dimension(0, 0, 0));
196  win_vector_sum_row.set(Window::DimY, Window::Dimension(0, 0, 0));
197  win_vector_sum_col.set(Window::DimZ, Window::Dimension(0, 0, 0));
198 
199  Window biases_slice = slice;
200  biases_slice.set(Window::DimY, Window::Dimension(0, 1, 1));
201  biases_slice.set(Window::DimZ, Window::Dimension(0, 1, 1));
202 
203  do
204  {
205  unsigned int idx = 0;
206  add_3D_tensor_argument(idx, _mm_result, slice);
207  add_2D_tensor_argument_if((_vector_sum_col != nullptr), idx, _vector_sum_col, win_vector_sum_col);
208  add_2D_tensor_argument_if((_vector_sum_row != nullptr), idx, _vector_sum_row, win_vector_sum_row);
209  add_1D_tensor_argument_if((_bias != nullptr), idx, _bias, biases_slice);
210 
211  enqueue(queue, *this, slice, lws_hint());
212  }
213  while(collapsed.slide_window_slice_3D(slice));
214 }
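
As a minimal illustration (assuming a kernel configured as in the sketch under configure() above, with all tensors allocated and filled), the kernel can be launched on the scheduler's queue as follows; the final finish() is only needed if you want to wait for completion, since run() neither flushes nor blocks:

#include "arm_compute/runtime/CL/CLScheduler.h"
#include "CLGEMMLowpOffsetContributionKernel.h" // header as included at the top of this page

using namespace arm_compute;

// `kernel` is assumed to be already configured; window() is the maximum
// window the kernel can be executed on, which is always a valid region.
void launch(CLGEMMLowpOffsetContributionKernel &kernel)
{
    cl::CommandQueue &queue = CLScheduler::get().queue();
    kernel.run(kernel.window(), queue);
    queue.finish(); // wait for the result; run() does not flush the queue
}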

◆ validate()

Status validate ( const ITensorInfo *  mm_result,
const ITensorInfo *  vector_sum_col,
const ITensorInfo *  vector_sum_row,
const ITensorInfo *  bias,
int32_t  a_offset,
int32_t  b_offset 
)
static

Static function to check if given info will lead to a valid configuration of CLGEMMLowpOffsetContributionKernel.

Parameters
  [in] mm_result       Input tensor containing the result of the matrix multiplication. Data type supported: S32
  [in] vector_sum_col  Input row-vector of sums of all the entries in each column of matrix B. Note: vector_sum_col can be a nullptr in case a_offset = 0. Data type supported: same as mm_result
  [in] vector_sum_row  Input row-vector of sums of all the entries in each row of matrix A. Note: vector_sum_row can be a nullptr in case b_offset = 0. Data type supported: same as mm_result
  [in] bias            Biases tensor. Only shared biases supported and it can be a nullptr if the addition of biases is not required. Biases are 1D tensor with dimensions [OFM]. Data type supported: Same as input.
  [in] a_offset        Offset to be added to each element of the matrix A.
  [in] b_offset        Offset to be added to each element of the matrix B.
Returns
a status

Definition at line 173 of file CLGEMMLowpOffsetContributionKernel.cpp.

References ARM_COMPUTE_RETURN_ON_ERROR, and arm_compute::validate_arguments().

Referenced by CLGEMMLowpMatrixMultiplyCore::validate().

175 {
176  ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(mm_result, vector_sum_col, vector_sum_row, bias, a_offset, b_offset));
177  return Status{};
178 }
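
A hedged sketch of the usual pattern, checking a candidate configuration against ITensorInfo descriptors before any CL tensors are created (shapes and offset values are illustrative):

#include "arm_compute/core/Error.h"
#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/Types.h"
#include "CLGEMMLowpOffsetContributionKernel.h" // header as included at the top of this page

using namespace arm_compute;

bool offset_contribution_config_is_valid()
{
    // Illustrative shapes: 4 x 8 S32 accumulator, per-column and per-row sums.
    const TensorInfo mm_result_info(TensorShape(8U, 4U), 1, DataType::S32);
    const TensorInfo sum_col_info(TensorShape(8U), 1, DataType::S32);
    const TensorInfo sum_row_info(TensorShape(4U), 1, DataType::S32);

    const Status status = CLGEMMLowpOffsetContributionKernel::validate(
        &mm_result_info, &sum_col_info, &sum_row_info, nullptr /* no bias */,
        /* a_offset */ -5, /* b_offset */ 3);

    // error_description() holds the reason when the configuration is rejected.
    return status.error_code() == ErrorCode::OK;
}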

The documentation for this class was generated from the following files:

CLGEMMLowpOffsetContributionKernel.h
CLGEMMLowpOffsetContributionKernel.cpp