Compute Library 22.05
ClGemmLowpOffsetContributionOutputStageKernel Class Reference

OpenCL kernel used to add the offset contribution after the matrix multiplication and perform the output stage. More...

#include <ClGemmLowpOffsetContributionOutputStageKernel.h>

Collaboration diagram for ClGemmLowpOffsetContributionOutputStageKernel (diagram not shown).

Public Member Functions

 ClGemmLowpOffsetContributionOutputStageKernel ()
 
 ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE (ClGemmLowpOffsetContributionOutputStageKernel)
 
void configure (const CLCompileContext &compile_context, const ITensorInfo *mm_result, const ITensorInfo *vector_sum_col, const ITensorInfo *vector_sum_row, const ITensorInfo *bias, ITensorInfo *dst, int32_t k, int32_t a_offset, int32_t b_offset, const GEMMLowpOutputStageInfo &output_stage, const ITensorInfo *output_multipliers, const ITensorInfo *output_shifts)
 Initialise the kernel's input and output. More...
 
void run_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue) override
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
- Public Member Functions inherited from ICLKernel
 ICLKernel ()
 Constructor. More...
 
cl::Kernel & kernel ()
 Returns a reference to the OpenCL kernel of this object. More...
 
CLKernelType type () const
 Returns the CL kernel type. More...
 
template<typename T >
void add_1D_array_argument (unsigned int &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed 1D array's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_2D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_2D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_3D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 3D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_4D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 4D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_5D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 5D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_3d_tensor_nhw_argument (unsigned int &idx, const ICLTensor *tensor)
 Add the passed NHW 3D tensor's parameters to the object's kernel's arguments by passing strides, dimensions and the offset to the first valid element in bytes. More...
 
void add_4d_tensor_nhwc_argument (unsigned int &idx, const ICLTensor *tensor)
 Add the passed NHWC 4D tensor's parameters to the object's kernel's arguments by passing strides, dimensions and the offset to the first valid element in bytes. More...
 
virtual void run (const Window &window, cl::CommandQueue &queue)
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
virtual void run_composite_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue, const experimental::dynamic_fusion::ClExecutionDescriptor &exec_desc)
 The execution is carried out through the run_op() method, which needs to be extended to include ClExecutionDescriptor, as LWS/GWS tuning will be separated from IKernel. More...
 
template<typename T >
void add_argument (unsigned int &idx, T value)
 Add the passed parameters to the object's kernel's arguments starting from the index idx. More...
 
void set_lws_hint (const cl::NDRange &lws_hint)
 Set the Local-Workgroup-Size hint. More...
 
cl::NDRange lws_hint () const
 Return the Local-Workgroup-Size hint. More...
 
void set_wbsm_hint (const cl_int &wbsm_hint)
 Set the workgroup batch size modifier hint. More...
 
cl_int wbsm_hint () const
 Return the workgroup batch size modifier hint. More...
 
const std::string & config_id () const
 Get the configuration ID. More...
 
void set_target (GPUTarget target)
 Set the targeted GPU architecture. More...
 
void set_target (cl::Device &device)
 Set the targeted GPU architecture according to the CL device. More...
 
GPUTarget get_target () const
 Get the targeted GPU architecture. More...
 
size_t get_max_workgroup_size ()
 Get the maximum workgroup size for the device the CLKernelLibrary uses. More...
 
template<unsigned int dimension_size>
void add_tensor_argument (unsigned &idx, const ICLTensor *tensor, const Window &window)
 
template<typename T , unsigned int dimension_size>
void add_array_argument (unsigned &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed array's parameters to the object's kernel's arguments starting from the index idx. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 
bool is_window_configured () const
 Function to check if the embedded window of this kernel has been configured. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *mm_result, const ITensorInfo *vector_sum_col, const ITensorInfo *vector_sum_row, const ITensorInfo *bias, const ITensorInfo *dst, int32_t a_offset, int32_t b_offset, const GEMMLowpOutputStageInfo &output_stage, const ITensorInfo *output_multipliers, const ITensorInfo *output_shifts)
 Static function to check if given info will lead to a valid configuration. More...
 
- Static Public Member Functions inherited from ICLKernel
static constexpr unsigned int num_arguments_per_3d_tensor_nhw ()
 Returns the number of arguments enqueued per NHW 3D Tensor object. More...
 
static constexpr unsigned int num_arguments_per_4d_tensor_nhwc ()
 Returns the number of arguments enqueued per NHWC 4D Tensor object. More...
 
static constexpr unsigned int num_arguments_per_1D_array ()
 Returns the number of arguments enqueued per 1D array object. More...
 
static constexpr unsigned int num_arguments_per_1D_tensor ()
 Returns the number of arguments enqueued per 1D tensor object. More...
 
static constexpr unsigned int num_arguments_per_2D_tensor ()
 Returns the number of arguments enqueued per 2D tensor object. More...
 
static constexpr unsigned int num_arguments_per_3D_tensor ()
 Returns the number of arguments enqueued per 3D tensor object. More...
 
static constexpr unsigned int num_arguments_per_4D_tensor ()
 Returns the number of arguments enqueued per 4D tensor object. More...
 
static cl::NDRange gws_from_window (const Window &window)
 Get the global work size given an execution window. More...
 

Detailed Description

OpenCL kernel used to add the offset contribution after the matrix multiplication and perform the output stage.

This kernel takes a final int32 accumulator value (the output of the matrix multiplication), adds to it the offset contributions of matrix A and matrix B, and performs the output stage defined by the output_stage argument.

Note
For quantized computations the output data type for auto-initialization must be passed as part of the GEMMLowpOutputStageInfo.

Definition at line 44 of file ClGemmLowpOffsetContributionOutputStageKernel.h.

Constructor & Destructor Documentation

◆ ClGemmLowpOffsetContributionOutputStageKernel()

ClGemmLowpOffsetContributionOutputStageKernel ( )

Member Function Documentation

◆ ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE()

ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE ( ClGemmLowpOffsetContributionOutputStageKernel  )

◆ configure()

void configure ( const CLCompileContext &compile_context,
                 const ITensorInfo      *mm_result,
                 const ITensorInfo      *vector_sum_col,
                 const ITensorInfo      *vector_sum_row,
                 const ITensorInfo      *bias,
                 ITensorInfo            *dst,
                 int32_t                 k,
                 int32_t                 a_offset,
                 int32_t                 b_offset,
                 const GEMMLowpOutputStageInfo &output_stage,
                 const ITensorInfo      *output_multipliers,
                 const ITensorInfo      *output_shifts )

Initialise the kernel's input and output.

Parameters
    [in]  compile_context     The compile context to be used.
    [in]  mm_result           Input tensor containing the result of the matrix multiplication. Data type supported: S32
    [in]  vector_sum_col      Input row-vector of sums of all the entries in each column of matrix B. Note: vector_sum_col can be a nullptr in case a_offset = 0. Data type supported: same as mm_result
    [in]  vector_sum_row      Input row-vector of sums of all the entries in each row of matrix A. Note: vector_sum_row can be a nullptr in case b_offset = 0. Data type supported: same as mm_result
    [in]  bias                Biases tensor. Only shared biases supported and it can be a nullptr if the addition of biases is not required. Biases are 1D tensors with dimensions [OFM]. Data type supported: same as mm_result
    [out] dst                 Destination tensor. Data type supported: QASYMM8/QASYMM8_SIGNED
    [in]  k                   Number of matrix A columns (equal to the number of matrix B rows)
    [in]  a_offset            Offset to be added to each element of matrix A
    [in]  b_offset            Offset to be added to each element of matrix B
    [in]  output_stage        GEMMLowp output stage info
    [in]  output_multipliers  Output multipliers tensor. In case of per-channel quantization, the number of multipliers must be equal to the number of filters (OFM). Supported data type: S32
    [in]  output_shifts       Output shifts tensor. In case of per-channel quantization, the number of shifts must be equal to the number of filters (OFM). Supported data type: S32

Definition at line 132 of file ClGemmLowpOffsetContributionOutputStageKernel.cpp.

References CLBuildOptions::add_option(), arm_compute::adjust_vec_size(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), bias, arm_compute::calculate_max_window(), ICloneable< T >::clone(), arm_compute::create_kernel(), ITensorInfo::data_type(), ITensorInfo::dimension(), arm_compute::test::validation::dst, GEMMLowpOutputStageInfo::gemmlowp_max_bound, GEMMLowpOutputStageInfo::gemmlowp_min_bound, GEMMLowpOutputStageInfo::gemmlowp_multipliers, GEMMLowpOutputStageInfo::gemmlowp_offset, GEMMLowpOutputStageInfo::gemmlowp_shifts, arm_compute::get_cl_type_from_data_type(), arm_compute::get_min_max(), arm_compute::get_padding_info(), arm_compute::has_padding_changed(), GEMMLowpOutputStageInfo::is_quantized_per_channel, kernel_name, Dimensions< T >::num_dimensions(), ITensorInfo::num_dimensions(), num_elems_processed_per_iteration, GEMMLowpOutputStageInfo::output_data_type, arm_compute::string_from_gemmlowp_output_stage(), ITensorInfo::tensor_shape(), arm_compute::support::cpp11::to_string(), GEMMLowpOutputStageInfo::type, arm_compute::cpu::kernels::validate_arguments(), Dimensions< T >::x(), and Dimensions< T >::y().

{
    // Perform validate step
    ARM_COMPUTE_ERROR_ON_NULLPTR(mm_result, dst, output_multipliers, output_shifts);
    ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(mm_result, vector_sum_col, vector_sum_row, bias, dst, a_offset, b_offset, output_stage, output_multipliers, output_shifts));

    auto padding_info = get_padding_info({ mm_result, vector_sum_col, vector_sum_row, bias, dst, output_multipliers, output_shifts });

    const int min = output_stage.gemmlowp_min_bound;
    const int max = output_stage.gemmlowp_max_bound;

    _is_quantized_per_channel = output_stage.is_quantized_per_channel;

    // Check if input is a 3D reinterpretation
    const bool reinterpret_as_3d = vector_sum_row != nullptr
                                   && mm_result->num_dimensions() > 1
                                   && mm_result->tensor_shape().y() != vector_sum_row->tensor_shape().x();

    // Auto initialize the output
    auto_init_if_empty(*dst, mm_result->clone()->set_data_type(output_stage.output_data_type));

    const unsigned int num_elems_processed_per_iteration = adjust_vec_size(4, mm_result->dimension(0));

    // Set the arguments to pass at compile time
    CLBuildOptions build_opts;
    build_opts.add_option("-DVEC_SIZE=" + support::cpp11::to_string(num_elems_processed_per_iteration));
    build_opts.add_option("-DVEC_SIZE_LEFTOVER=" + support::cpp11::to_string(mm_result->dimension(0) % num_elems_processed_per_iteration));

    // If a_offset == 0, vector_sum_col can be a nullptr
    if(a_offset != 0)
    {
        build_opts.add_option("-DA_OFFSET=" + support::cpp11::to_string(a_offset));
        build_opts.add_option_if(vector_sum_col->tensor_shape().num_dimensions() > 1, "-DSUM_COL_HAS_BATCHES");
    }
    // If b_offset == 0, vector_sum_row can be a nullptr
    build_opts.add_option_if(b_offset != 0, "-DB_OFFSET=" + support::cpp11::to_string(b_offset));
    build_opts.add_option("-DK_OFFSET=" + support::cpp11::to_string(a_offset * b_offset * k));
    build_opts.add_option_if(reinterpret_as_3d, "-DHEIGHT_INPUT3D=" + support::cpp11::to_string(mm_result->dimension(1)));
    build_opts.add_option_if(reinterpret_as_3d, "-DDEPTH_INPUT3D=" + support::cpp11::to_string(mm_result->dimension(2)));
    build_opts.add_option_if(bias != nullptr, "-DADD_BIAS");
    build_opts.add_option("-DRESULT_OFFSET=" + support::cpp11::to_string(output_stage.gemmlowp_offset));
    build_opts.add_option("-DRESULT_MULTIPLIER=" + support::cpp11::to_string(output_stage.gemmlowp_multipliers[0]));
    build_opts.add_option("-DRESULT_SHIFT=" + support::cpp11::to_string(output_stage.gemmlowp_shifts[0]));
    build_opts.add_option_if(_is_quantized_per_channel, "-DPER_CHANNEL_QUANTIZATION");
    build_opts.add_option("-DOUTPUT_DATA_TYPE=" + get_cl_type_from_data_type(dst->data_type()));

    PixelValue min_val{};
    PixelValue max_val{};
    std::tie(min_val, max_val) = get_min_max(dst->data_type());
    build_opts.add_option_if((min > min_val.get<int32_t>()), "-DMIN_BOUND=" + support::cpp11::to_string(min));
    build_opts.add_option_if((max < max_val.get<int32_t>()), "-DMAX_BOUND=" + support::cpp11::to_string(max));

    std::string kernel_name("gemmlowp_offset_contribution");
    kernel_name += "_" + string_from_gemmlowp_output_stage(output_stage.type);

    // Create kernel
    _kernel = create_kernel(compile_context, kernel_name, build_opts.options());

    // Configure kernel window
    Window win = calculate_max_window(*mm_result, Steps(num_elems_processed_per_iteration));
    ICLKernel::configure_internal(win);

    // Set config_id for enabling LWS tuning
    _config_id = kernel_name + "_";
    _config_id += support::cpp11::to_string(mm_result->dimension(0));
    _config_id += "_";
    _config_id += support::cpp11::to_string(mm_result->dimension(1));
    _config_id += "_";
    _config_id += support::cpp11::to_string(mm_result->dimension(2));

    ARM_COMPUTE_ERROR_ON(has_padding_changed(padding_info));
}

◆ run_op()

void run_op ( ITensorPack      &tensors,
              const Window     &window,
              cl::CommandQueue &queue ) override

Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.

Note
The queue is not flushed by this method, and therefore the kernel will not have been executed by the time this method returns.
Parameters
    [in]      tensors  A vector containing the tensors to operate on.
    [in]      window   Region on which to execute the kernel. (Must be a valid region of the window returned by window()).
    [in,out]  queue    Command queue on which to enqueue the kernel.

Reimplemented from ICLKernel.

Definition at line 216 of file ClGemmLowpOffsetContributionOutputStageKernel.cpp.

References arm_compute::ACL_BIAS, arm_compute::ACL_DST, arm_compute::ACL_MULTIPLIERS, arm_compute::ACL_SHIFTS, arm_compute::ACL_SRC, arm_compute::ACL_VEC_COL_SUM, arm_compute::ACL_VEC_ROW_SUM, ICLKernel::add_1D_tensor_argument_if(), ICLKernel::add_2D_tensor_argument_if(), ICLKernel::add_3D_tensor_argument(), ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, Window::collapse_if_possible(), Window::DimX, Window::DimY, Window::DimZ, arm_compute::enqueue(), ITensorPack::get_const_tensor(), ITensorPack::get_tensor(), ICLKernel::lws_hint(), Window::set(), arm_compute::test::validation::reference::slice(), and IKernel::window().

{
    ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
    ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(IKernel::window(), window);

    const auto mm_result          = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_SRC));
    const auto bias               = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_BIAS));
    const auto vector_sum_col     = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_VEC_COL_SUM));
    const auto vector_sum_row     = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_VEC_ROW_SUM));
    const auto output_shifts      = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_SHIFTS));
    const auto output_multipliers = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_MULTIPLIERS));
    auto       dst                = utils::cast::polymorphic_downcast<ICLTensor *>(tensors.get_tensor(TensorType::ACL_DST));

    Window collapsed = window.collapse_if_possible(ICLKernel::window(), Window::DimZ);
    Window slice     = collapsed.first_slice_window_3D();

    // Set window for vector_sum_col
    Window win_vector_sum_col = slice;
    win_vector_sum_col.set(Window::DimY, Window::Dimension(0, 0, 0));
    win_vector_sum_col.set(Window::DimZ, Window::Dimension(0, 0, 0));

    // Set window for vector_sum_row
    Window win_vector_sum_row = slice;
    win_vector_sum_row.set(Window::DimX, Window::Dimension(0, 0, 0));
    win_vector_sum_row.set(Window::DimY, Window::Dimension(0, 0, 0));
    win_vector_sum_row.set(Window::DimZ, Window::Dimension(0, 0, 0));

    Window biases_slice = slice;
    biases_slice.set(Window::DimY, Window::Dimension(0, 1, 1));
    biases_slice.set(Window::DimZ, Window::Dimension(0, 1, 1));

    do
    {
        unsigned int idx = 0;
        add_3D_tensor_argument(idx, mm_result, slice);
        add_2D_tensor_argument_if((vector_sum_col != nullptr), idx, vector_sum_col, win_vector_sum_col);
        add_2D_tensor_argument_if((vector_sum_row != nullptr), idx, vector_sum_row, win_vector_sum_row);
        add_1D_tensor_argument_if((bias != nullptr), idx, bias, biases_slice);
        add_3D_tensor_argument(idx, dst, slice);
        add_1D_tensor_argument_if(_is_quantized_per_channel, idx, output_multipliers, biases_slice);
        add_1D_tensor_argument_if(_is_quantized_per_channel, idx, output_shifts, biases_slice);
        enqueue(queue, *this, slice, lws_hint());
    }
    while(collapsed.slide_window_slice_3D(slice));
}

◆ validate()

static Status validate ( const ITensorInfo *mm_result,
                         const ITensorInfo *vector_sum_col,
                         const ITensorInfo *vector_sum_row,
                         const ITensorInfo *bias,
                         const ITensorInfo *dst,
                         int32_t            a_offset,
                         int32_t            b_offset,
                         const GEMMLowpOutputStageInfo &output_stage,
                         const ITensorInfo *output_multipliers,
                         const ITensorInfo *output_shifts )

Static function to check if given info will lead to a valid configuration.

Similar to ClGemmLowpOffsetContributionOutputStageKernel::configure()

Returns
a status

Definition at line 208 of file ClGemmLowpOffsetContributionOutputStageKernel.cpp.

References ARM_COMPUTE_RETURN_ON_ERROR, and arm_compute::cpu::kernels::validate_arguments().

Referenced by ClGemmLowpMatrixMultiplyCore::validate().

{
    ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(mm_result, vector_sum_col, vector_sum_row, bias, dst, a_offset, b_offset, output_stage, output_multipliers, output_shifts));
    return Status{};
}

The documentation for this class was generated from the following files:
    ClGemmLowpOffsetContributionOutputStageKernel.h
    ClGemmLowpOffsetContributionOutputStageKernel.cpp