Compute Library
 21.02
CLGEMMLowpOffsetContributionOutputStageKernel Class Reference

OpenCL kernel used to add the offset contribution after the matrix multiplication and perform the output stage. More...

#include <CLGEMMLowpOffsetContributionOutputStageKernel.h>


Public Member Functions

 CLGEMMLowpOffsetContributionOutputStageKernel ()
 Constructor. More...
 
 CLGEMMLowpOffsetContributionOutputStageKernel (const CLGEMMLowpOffsetContributionOutputStageKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
CLGEMMLowpOffsetContributionOutputStageKernel & operator= (const CLGEMMLowpOffsetContributionOutputStageKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 CLGEMMLowpOffsetContributionOutputStageKernel (CLGEMMLowpOffsetContributionOutputStageKernel &&)=default
 Allow instances of this class to be moved. More...
 
CLGEMMLowpOffsetContributionOutputStageKernel & operator= (CLGEMMLowpOffsetContributionOutputStageKernel &&)=default
 Allow instances of this class to be moved. More...
 
void configure (const ICLTensor *mm_result, const ICLTensor *vector_sum_col, const ICLTensor *vector_sum_row, const ICLTensor *bias, ICLTensor *output, int32_t k, int32_t a_offset, int32_t b_offset, const GEMMLowpOutputStageInfo &output_stage, const ICLTensor *output_multipliers, const ICLTensor *output_shifts)
 Initialise the kernel's input and output. More...
 
void configure (const CLCompileContext &compile_context, const ICLTensor *mm_result, const ICLTensor *vector_sum_col, const ICLTensor *vector_sum_row, const ICLTensor *bias, ICLTensor *output, int32_t k, int32_t a_offset, int32_t b_offset, const GEMMLowpOutputStageInfo &output_stage, const ICLTensor *output_multipliers, const ICLTensor *output_shifts)
 Initialise the kernel's input and output. More...
 
void run (const Window &window, cl::CommandQueue &queue) override
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
- Public Member Functions inherited from ICLKernel
 ICLKernel ()
 Constructor. More...
 
cl::Kernel & kernel ()
 Returns a reference to the OpenCL kernel of this object. More...
 
template<typename T >
void add_1D_array_argument (unsigned int &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed 1D array's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_2D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_2D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_3D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 3D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_4D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 4D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
virtual void run_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue)
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
template<typename T >
void add_argument (unsigned int &idx, T value)
 Add the passed parameters to the object's kernel's arguments starting from the index idx. More...
 
void set_lws_hint (const cl::NDRange &lws_hint)
 Set the Local-Workgroup-Size hint. More...
 
cl::NDRange lws_hint () const
 Return the Local-Workgroup-Size hint. More...
 
void set_wbsm_hint (const cl_int &wbsm_hint)
 Set the workgroup batch size modifier hint. More...
 
cl_int wbsm_hint () const
 Return the workgroup batch size modifier hint. More...
 
const std::string & config_id () const
 Get the configuration ID. More...
 
void set_target (GPUTarget target)
 Set the targeted GPU architecture. More...
 
void set_target (cl::Device &device)
 Set the targeted GPU architecture according to the CL device. More...
 
GPUTarget get_target () const
 Get the targeted GPU architecture. More...
 
size_t get_max_workgroup_size ()
 Get the maximum workgroup size for the device the CLKernelLibrary uses. More...
 
template<unsigned int dimension_size>
void add_tensor_argument (unsigned &idx, const ICLTensor *tensor, const Window &window)
 
template<typename T , unsigned int dimension_size>
void add_array_argument (unsigned &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed array's parameters to the object's kernel's arguments starting from the index idx. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *mm_result, const ITensorInfo *vector_sum_col, const ITensorInfo *vector_sum_row, const ITensorInfo *bias, const ITensorInfo *output, int32_t a_offset, int32_t b_offset, const GEMMLowpOutputStageInfo &output_stage, const ITensorInfo *output_multipliers, const ITensorInfo *output_shifts)
 Static function to check if given info will lead to a valid configuration of CLGEMMLowpOffsetContributionOutputStageKernel. More...
 
- Static Public Member Functions inherited from ICLKernel
static constexpr unsigned int num_arguments_per_1D_array ()
 Returns the number of arguments enqueued per 1D array object. More...
 
static constexpr unsigned int num_arguments_per_1D_tensor ()
 Returns the number of arguments enqueued per 1D tensor object. More...
 
static constexpr unsigned int num_arguments_per_2D_tensor ()
 Returns the number of arguments enqueued per 2D tensor object. More...
 
static constexpr unsigned int num_arguments_per_3D_tensor ()
 Returns the number of arguments enqueued per 3D tensor object. More...
 
static constexpr unsigned int num_arguments_per_4D_tensor ()
 Returns the number of arguments enqueued per 4D tensor object. More...
 
static cl::NDRange gws_from_window (const Window &window)
 Get the global work size given an execution window. More...
 

Detailed Description

OpenCL kernel used to add the offset contribution after the matrix multiplication and perform the output stage.

This kernel takes a final int32 accumulator value (the output of the matrix multiplication), adds to it the offset contributions of matrix A and matrix B, and performs the output stage defined by the output_stage argument.

Note
For quantized computations the output data type for auto-initialization must be passed as part of the GEMMLowpOutputStageInfo.

Definition at line 40 of file CLGEMMLowpOffsetContributionOutputStageKernel.h.

Constructor & Destructor Documentation

◆ CLGEMMLowpOffsetContributionOutputStageKernel() [1/3]

Constructor.

Definition at line 121 of file CLGEMMLowpOffsetContributionOutputStageKernel.cpp.

CLGEMMLowpOffsetContributionOutputStageKernel::CLGEMMLowpOffsetContributionOutputStageKernel()
    : _mm_result(nullptr),
      _vector_sum_col(nullptr),
      _vector_sum_row(nullptr),
      _bias(nullptr),
      _output(nullptr),
      _output_multipliers(nullptr),
      _output_shifts(nullptr),
      _is_quantized_per_channel(false)
{
}

◆ CLGEMMLowpOffsetContributionOutputStageKernel() [2/3]

Prevent instances of this class from being copied (As this class contains pointers)

◆ CLGEMMLowpOffsetContributionOutputStageKernel() [3/3]

Allow instances of this class to be moved.

Member Function Documentation

◆ configure() [1/2]

void configure(const ICLTensor               *mm_result,
               const ICLTensor               *vector_sum_col,
               const ICLTensor               *vector_sum_row,
               const ICLTensor               *bias,
               ICLTensor                     *output,
               int32_t                        k,
               int32_t                        a_offset,
               int32_t                        b_offset,
               const GEMMLowpOutputStageInfo &output_stage,
               const ICLTensor               *output_multipliers,
               const ICLTensor               *output_shifts)

Initialise the kernel's input and output.

Parameters
[in]  mm_result           Input tensor containing the result of the matrix multiplication. Data type supported: S32
[in]  vector_sum_col      Input row-vector of sums of all the entries in each column of matrix B. Note: vector_sum_col can be a nullptr in case a_offset = 0. Data type supported: same as mm_result
[in]  vector_sum_row      Input row-vector of sums of all the entries in each row of matrix A. Note: vector_sum_row can be a nullptr in case b_offset = 0. Data type supported: same as mm_result
[in]  bias                Biases tensor. Only shared biases are supported and it can be a nullptr if the addition of biases is not required. Biases are 1D tensors with dimensions [OFM]. Data type supported: same as input.
[out] output              Output tensor. Data type supported: QASYMM8/QASYMM8_SIGNED.
[in]  k                   Number of matrix A columns or matrix B rows.
[in]  a_offset            Offset to be added to each element of the matrix A.
[in]  b_offset            Offset to be added to each element of the matrix B.
[in]  output_stage        GEMMLowp output stage info.
[in]  output_multipliers  Output multipliers tensor. In case of per-channel quantization, the number of multipliers must be equal to the number of filters (OFM). Supported data type: S32
[in]  output_shifts       Output shifts tensor. In case of per-channel quantization, the number of shifts must be equal to the number of filters (OFM). Supported data type: S32

Definition at line 133 of file CLGEMMLowpOffsetContributionOutputStageKernel.cpp.

References CLKernelLibrary::get().

void CLGEMMLowpOffsetContributionOutputStageKernel::configure(const ICLTensor *mm_result, const ICLTensor *vector_sum_col, const ICLTensor *vector_sum_row, const ICLTensor *bias, ICLTensor *output,
                                                              int32_t k, int32_t a_offset, int32_t b_offset, const GEMMLowpOutputStageInfo &output_stage,
                                                              const ICLTensor *output_multipliers, const ICLTensor *output_shifts)
{
    configure(CLKernelLibrary::get().get_compile_context(), mm_result, vector_sum_col, vector_sum_row, bias, output, k, a_offset, b_offset, output_stage, output_multipliers, output_shifts);
}

◆ configure() [2/2]

void configure(const CLCompileContext        &compile_context,
               const ICLTensor               *mm_result,
               const ICLTensor               *vector_sum_col,
               const ICLTensor               *vector_sum_row,
               const ICLTensor               *bias,
               ICLTensor                     *output,
               int32_t                        k,
               int32_t                        a_offset,
               int32_t                        b_offset,
               const GEMMLowpOutputStageInfo &output_stage,
               const ICLTensor               *output_multipliers,
               const ICLTensor               *output_shifts)

Initialise the kernel's input and output.

Parameters
[in]  compile_context     The compile context to be used.
[in]  mm_result           Input tensor containing the result of the matrix multiplication. Data type supported: S32
[in]  vector_sum_col      Input row-vector of sums of all the entries in each column of matrix B. Note: vector_sum_col can be a nullptr in case a_offset = 0. Data type supported: same as mm_result
[in]  vector_sum_row      Input row-vector of sums of all the entries in each row of matrix A. Note: vector_sum_row can be a nullptr in case b_offset = 0. Data type supported: same as mm_result
[in]  bias                Biases tensor. Only shared biases are supported and it can be a nullptr if the addition of biases is not required. Biases are 1D tensors with dimensions [OFM]. Data type supported: same as input.
[out] output              Output tensor. Data type supported: QASYMM8/QASYMM8_SIGNED.
[in]  k                   Number of matrix A columns or matrix B rows.
[in]  a_offset            Offset to be added to each element of the matrix A.
[in]  b_offset            Offset to be added to each element of the matrix B.
[in]  output_stage        GEMMLowp output stage info.
[in]  output_multipliers  Output multipliers tensor. In case of per-channel quantization, the number of multipliers must be equal to the number of filters (OFM). Supported data type: S32
[in]  output_shifts       Output shifts tensor. In case of per-channel quantization, the number of shifts must be equal to the number of filters (OFM). Supported data type: S32

Definition at line 140 of file CLGEMMLowpOffsetContributionOutputStageKernel.cpp.

References CLBuildOptions::add_option(), arm_compute::adjust_vec_size(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), arm_compute::calculate_max_window(), ICloneable< T >::clone(), arm_compute::create_kernel(), ITensorInfo::data_type(), ITensorInfo::dimension(), arm_compute::get_cl_type_from_data_type(), arm_compute::get_min_max(), arm_compute::get_padding_info(), arm_compute::has_padding_changed(), ITensor::info(), kernel_name, Dimensions< T >::num_dimensions(), ITensorInfo::num_dimensions(), num_elems_processed_per_iteration, arm_compute::string_from_gemmlowp_output_stage(), ITensorInfo::tensor_shape(), arm_compute::support::cpp11::to_string(), arm_compute::validate_arguments(), Dimensions< T >::x(), and Dimensions< T >::y().

void CLGEMMLowpOffsetContributionOutputStageKernel::configure(const CLCompileContext &compile_context, const ICLTensor *mm_result, const ICLTensor *vector_sum_col, const ICLTensor *vector_sum_row,
                                                              const ICLTensor *bias, ICLTensor *output, int32_t k, int32_t a_offset, int32_t b_offset, const GEMMLowpOutputStageInfo &output_stage,
                                                              const ICLTensor *output_multipliers, const ICLTensor *output_shifts)
{
    // Perform validate step
    ARM_COMPUTE_ERROR_ON_NULLPTR(mm_result, output, output_multipliers, output_shifts);
    ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(mm_result->info(),
                                                  vector_sum_col != nullptr ? vector_sum_col->info() : nullptr,
                                                  vector_sum_row != nullptr ? vector_sum_row->info() : nullptr,
                                                  bias != nullptr ? bias->info() : nullptr,
                                                  output->info(),
                                                  a_offset, b_offset, output_stage,
                                                  output_multipliers->info(), output_shifts->info())); // NOLINT

    auto padding_info = get_padding_info({ mm_result, vector_sum_col, vector_sum_row, bias, output, output_multipliers, output_shifts });

    const int min = output_stage.gemmlowp_min_bound;
    const int max = output_stage.gemmlowp_max_bound;

    _vector_sum_col           = vector_sum_col;
    _vector_sum_row           = vector_sum_row;
    _mm_result                = mm_result;
    _bias                     = bias;
    _output                   = output;
    _output_multipliers       = output_multipliers;
    _output_shifts            = output_shifts;
    _is_quantized_per_channel = output_stage.is_quantized_per_channel;

    // Check if input is a 3D reinterpretation
    const bool reinterpret_as_3d = vector_sum_row != nullptr
                                   && mm_result->info()->num_dimensions() > 1
                                   && mm_result->info()->tensor_shape().y() != vector_sum_row->info()->tensor_shape().x();

    // Auto initialize the output
    auto_init_if_empty(*output->info(), mm_result->info()->clone()->set_data_type(output_stage.output_data_type));

    const unsigned int num_elems_processed_per_iteration = adjust_vec_size(4, mm_result->info()->dimension(0));

    // Set the arguments to pass at compile time
    CLBuildOptions build_opts;
    build_opts.add_option("-DVEC_SIZE=" + support::cpp11::to_string(num_elems_processed_per_iteration));
    build_opts.add_option("-DVEC_SIZE_LEFTOVER=" + support::cpp11::to_string(mm_result->info()->dimension(0) % num_elems_processed_per_iteration));

    // If a_offset == 0, vector_sum_col can be a nullptr
    if(a_offset != 0)
    {
        build_opts.add_option("-DA_OFFSET=" + support::cpp11::to_string(a_offset));
        build_opts.add_option_if(vector_sum_col->info()->tensor_shape().num_dimensions() > 1, "-DSUM_COL_HAS_BATCHES");
    }
    // If b_offset == 0, vector_sum_row can be a nullptr
    build_opts.add_option_if(b_offset != 0, "-DB_OFFSET=" + support::cpp11::to_string(b_offset));
    build_opts.add_option("-DK_OFFSET=" + support::cpp11::to_string(a_offset * b_offset * k));
    build_opts.add_option_if(reinterpret_as_3d, "-DHEIGHT_INPUT3D=" + support::cpp11::to_string(mm_result->info()->dimension(1)));
    build_opts.add_option_if(reinterpret_as_3d, "-DDEPTH_INPUT3D=" + support::cpp11::to_string(mm_result->info()->dimension(2)));
    build_opts.add_option_if(bias != nullptr, "-DADD_BIAS");
    build_opts.add_option("-DRESULT_OFFSET=" + support::cpp11::to_string(output_stage.gemmlowp_offset));
    build_opts.add_option("-DRESULT_MULTIPLIER=" + support::cpp11::to_string(output_stage.gemmlowp_multipliers[0]));
    build_opts.add_option("-DRESULT_SHIFT=" + support::cpp11::to_string(output_stage.gemmlowp_shifts[0]));
    build_opts.add_option_if(_is_quantized_per_channel, "-DPER_CHANNEL_QUANTIZATION");
    build_opts.add_option("-DOUTPUT_DATA_TYPE=" + get_cl_type_from_data_type(output->info()->data_type()));

    PixelValue min_val{};
    PixelValue max_val{};
    std::tie(min_val, max_val) = get_min_max(output->info()->data_type());
    build_opts.add_option_if((min > min_val.get<int32_t>()), "-DMIN_BOUND=" + support::cpp11::to_string(min));
    build_opts.add_option_if((max < max_val.get<int32_t>()), "-DMAX_BOUND=" + support::cpp11::to_string(max));

    std::string kernel_name("gemmlowp_offset_contribution");
    kernel_name += "_" + string_from_gemmlowp_output_stage(output_stage.type);

    // Create kernel
    _kernel = create_kernel(compile_context, kernel_name, build_opts.options());

    // Configure kernel window
    Window win = calculate_max_window(*mm_result->info(), Steps(num_elems_processed_per_iteration));
    ICLKernel::configure_internal(win);

    // Set config_id for enabling LWS tuning
    _config_id = kernel_name + "_";
    _config_id += support::cpp11::to_string(mm_result->info()->dimension(0));
    _config_id += "_";
    _config_id += support::cpp11::to_string(mm_result->info()->dimension(1));
    _config_id += "_";
    _config_id += support::cpp11::to_string(mm_result->info()->dimension(2));

    ARM_COMPUTE_ERROR_ON(has_padding_changed(padding_info));
}

◆ operator=() [1/2]

Prevent instances of this class from being copied (As this class contains pointers)

◆ operator=() [2/2]

Allow instances of this class to be moved.

◆ run()

void run(const Window &window, cl::CommandQueue &queue) override

Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.

Note
The queue is not flushed by this method, and therefore the kernel will not have been executed by the time this method returns.
Parameters
[in]     window  Region on which to execute the kernel. (Must be a valid region of the window returned by window()).
[in,out] queue   Command queue on which to enqueue the kernel.

Reimplemented from ICLKernel.

Definition at line 237 of file CLGEMMLowpOffsetContributionOutputStageKernel.cpp.

References ICLKernel::add_1D_tensor_argument_if(), ICLKernel::add_2D_tensor_argument_if(), ICLKernel::add_3D_tensor_argument(), ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, Window::collapse_if_possible(), Window::DimX, Window::DimY, Window::DimZ, arm_compute::enqueue(), Window::first_slice_window_3D(), ICLKernel::lws_hint(), Window::set(), arm_compute::test::validation::reference::slice(), Window::slide_window_slice_3D(), and IKernel::window().

void CLGEMMLowpOffsetContributionOutputStageKernel::run(const Window &window, cl::CommandQueue &queue)
{
    ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
    ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(IKernel::window(), window);

    Window collapsed = window.collapse_if_possible(ICLKernel::window(), Window::DimZ);
    Window slice     = collapsed.first_slice_window_3D();

    // Set window for vector_sum_col
    Window win_vector_sum_col = slice;
    win_vector_sum_col.set(Window::DimY, Window::Dimension(0, 0, 0));
    win_vector_sum_col.set(Window::DimZ, Window::Dimension(0, 0, 0));

    // Set window for vector_sum_row
    Window win_vector_sum_row = slice;
    win_vector_sum_row.set(Window::DimX, Window::Dimension(0, 0, 0));
    win_vector_sum_row.set(Window::DimY, Window::Dimension(0, 0, 0));
    win_vector_sum_row.set(Window::DimZ, Window::Dimension(0, 0, 0));

    Window biases_slice = slice;
    biases_slice.set(Window::DimY, Window::Dimension(0, 1, 1));
    biases_slice.set(Window::DimZ, Window::Dimension(0, 1, 1));

    do
    {
        unsigned int idx = 0;
        add_3D_tensor_argument(idx, _mm_result, slice);
        add_2D_tensor_argument_if((_vector_sum_col != nullptr), idx, _vector_sum_col, win_vector_sum_col);
        add_2D_tensor_argument_if((_vector_sum_row != nullptr), idx, _vector_sum_row, win_vector_sum_row);
        add_1D_tensor_argument_if((_bias != nullptr), idx, _bias, biases_slice);
        add_3D_tensor_argument(idx, _output, slice);
        add_1D_tensor_argument_if(_is_quantized_per_channel, idx, _output_multipliers, biases_slice);
        add_1D_tensor_argument_if(_is_quantized_per_channel, idx, _output_shifts, biases_slice);
        enqueue(queue, *this, slice, lws_hint());
    }
    while(collapsed.slide_window_slice_3D(slice));
}

◆ validate()

static Status validate(const ITensorInfo             *mm_result,
                       const ITensorInfo             *vector_sum_col,
                       const ITensorInfo             *vector_sum_row,
                       const ITensorInfo             *bias,
                       const ITensorInfo             *output,
                       int32_t                        a_offset,
                       int32_t                        b_offset,
                       const GEMMLowpOutputStageInfo &output_stage,
                       const ITensorInfo             *output_multipliers,
                       const ITensorInfo             *output_shifts)

Static function to check if given info will lead to a valid configuration of CLGEMMLowpOffsetContributionOutputStageKernel.

Parameters
[in]  mm_result           Input tensor info containing the result of the matrix multiplication. Data type supported: S32
[in]  vector_sum_col      Input row-vector of sums of all the entries in each column of matrix B. Note: vector_sum_col can be a nullptr in case a_offset = 0. Data type supported: same as mm_result
[in]  vector_sum_row      Input row-vector of sums of all the entries in each row of matrix A. Note: vector_sum_row can be a nullptr in case b_offset = 0. Data type supported: same as mm_result
[in]  bias                Biases tensor. Only shared biases are supported and it can be a nullptr if the addition of biases is not required. Biases are 1D tensors with dimensions [OFM]. Data type supported: same as input.
[in]  output              Output tensor. Data type supported: QASYMM8/QASYMM8_SIGNED.
[in]  a_offset            Offset to be added to each element of the matrix A.
[in]  b_offset            Offset to be added to each element of the matrix B.
[in]  output_stage        GEMMLowp output stage info.
[in]  output_multipliers  Output multipliers tensor info. In case of per-channel quantization, the number of multipliers must be equal to the number of filters (OFM). Supported data type: S32
[in]  output_shifts       Output shifts tensor info. In case of per-channel quantization, the number of shifts must be equal to the number of filters (OFM). Supported data type: S32
Returns
a status

Definition at line 229 of file CLGEMMLowpOffsetContributionOutputStageKernel.cpp.

References ARM_COMPUTE_RETURN_ON_ERROR, and arm_compute::validate_arguments().

Referenced by CLGEMMLowpMatrixMultiplyCore::validate().

Status CLGEMMLowpOffsetContributionOutputStageKernel::validate(const ITensorInfo *mm_result, const ITensorInfo *vector_sum_col, const ITensorInfo *vector_sum_row, const ITensorInfo *bias,
                                                               const ITensorInfo *output, int32_t a_offset, int32_t b_offset, const GEMMLowpOutputStageInfo &output_stage,
                                                               const ITensorInfo *output_multipliers, const ITensorInfo *output_shifts)
{
    ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(mm_result, vector_sum_col, vector_sum_row, bias, output, a_offset, b_offset, output_stage, output_multipliers, output_shifts));
    return Status{};
}

The documentation for this class was generated from the following files:
CLGEMMLowpOffsetContributionOutputStageKernel.h
CLGEMMLowpOffsetContributionOutputStageKernel.cpp