Compute Library
 19.08
CLDirectConvolutionLayerOutputStageKernel Class Reference

OpenCL kernel to accumulate the biases, if provided, or downscale in case of quantized input. More...

#include <CLDirectConvolutionLayerOutputStageKernel.h>


Public Member Functions

 CLDirectConvolutionLayerOutputStageKernel ()
 Default constructor. More...
 
 CLDirectConvolutionLayerOutputStageKernel (const CLDirectConvolutionLayerOutputStageKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
CLDirectConvolutionLayerOutputStageKernel & operator= (const CLDirectConvolutionLayerOutputStageKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 CLDirectConvolutionLayerOutputStageKernel (CLDirectConvolutionLayerOutputStageKernel &&)=default
 Allow instances of this class to be moved. More...
 
CLDirectConvolutionLayerOutputStageKernel & operator= (CLDirectConvolutionLayerOutputStageKernel &&)=default
 Allow instances of this class to be moved. More...
 
 ~CLDirectConvolutionLayerOutputStageKernel ()=default
 Default destructor. More...
 
void configure (ICLTensor *input, const ICLTensor *bias=nullptr, ICLTensor *output=nullptr, int result_fixedpoint_multiplier=0, int result_shift=0, int result_offset_after_shift=0)
 Set the accumulate buffer and the biases of the kernel. More...
 
void run (const Window &window, cl::CommandQueue &queue) override
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
- Public Member Functions inherited from ICLKernel
 ICLKernel ()
 Constructor. More...
 
cl::Kernel & kernel ()
 Returns a reference to the OpenCL kernel of this object. More...
 
template<typename T >
void add_1D_array_argument (unsigned int &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed 1D array's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_2D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_2D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_3D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 3D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_4D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 4D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
template<typename T >
void add_argument (unsigned int &idx, T value)
 Add the passed parameters to the object's kernel's arguments starting from the index idx. More...
 
void set_lws_hint (const cl::NDRange &lws_hint)
 Set the Local-Workgroup-Size hint. More...
 
cl::NDRange lws_hint () const
 Return the Local-Workgroup-Size hint. More...
 
const std::string & config_id () const
 Get the configuration ID. More...
 
void set_target (GPUTarget target)
 Set the targeted GPU architecture. More...
 
void set_target (cl::Device &device)
 Set the targeted GPU architecture according to the CL device. More...
 
GPUTarget get_target () const
 Get the targeted GPU architecture. More...
 
size_t get_max_workgroup_size ()
 Get the maximum workgroup size for the device the CLKernelLibrary uses. More...
 
template<typename T , unsigned int dimension_size>
void add_array_argument (unsigned &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed array's parameters to the object's kernel's arguments starting from the index idx. More...
 
template<unsigned int dimension_size>
void add_tensor_argument (unsigned &idx, const ICLTensor *tensor, const Window &window)
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *bias=nullptr, const ITensorInfo *output=nullptr)
 Static function to check if given info will lead to a valid configuration of CLDirectConvolutionLayerOutputStageKernel. More...
 
- Static Public Member Functions inherited from ICLKernel
static constexpr unsigned int num_arguments_per_1D_array ()
 Returns the number of arguments enqueued per 1D array object. More...
 
static constexpr unsigned int num_arguments_per_1D_tensor ()
 Returns the number of arguments enqueued per 1D tensor object. More...
 
static constexpr unsigned int num_arguments_per_2D_tensor ()
 Returns the number of arguments enqueued per 2D tensor object. More...
 
static constexpr unsigned int num_arguments_per_3D_tensor ()
 Returns the number of arguments enqueued per 3D tensor object. More...
 
static constexpr unsigned int num_arguments_per_4D_tensor ()
 Returns the number of arguments enqueued per 4D tensor object. More...
 
static cl::NDRange gws_from_window (const Window &window)
 Get the global work size given an execution window. More...
 

Detailed Description

OpenCL kernel to accumulate the biases, if provided, or downscale in case of quantized input.

Deprecated:
This kernel is deprecated and will be removed in release 19.05
Note
We assume bias to be shared

Definition at line 39 of file CLDirectConvolutionLayerOutputStageKernel.h.

Constructor & Destructor Documentation

◆ CLDirectConvolutionLayerOutputStageKernel() [1/3]

Default constructor.

Definition at line 124 of file CLDirectConvolutionOutputStageKernel.cpp.

125  : _input(nullptr), _bias(nullptr), _output(nullptr), _result_fixedpoint_multiplier(0), _result_shift(0), _result_offset_after_shift(0)
126 {
127 }

◆ CLDirectConvolutionLayerOutputStageKernel() [2/3]

Prevent instances of this class from being copied (As this class contains pointers)

◆ CLDirectConvolutionLayerOutputStageKernel() [3/3]

Allow instances of this class to be moved.

◆ ~CLDirectConvolutionLayerOutputStageKernel()

Default destructor.

Member Function Documentation

◆ configure()

void configure ( ICLTensor * input,
const ICLTensor * bias = nullptr,
ICLTensor * output = nullptr,
int  result_fixedpoint_multiplier = 0,
int  result_shift = 0,
int  result_offset_after_shift = 0 
)

Set the accumulate buffer and the biases of the kernel.

Parameters
[in,out] input Input to add the bias to. If output is not specified then accumulation is done in-place. Data type supported: S32/F16/F32
[in] bias (Optional) The shared bias tensor to add. It must be a 1D tensor. Data type supported: Same as input
[in] output (Optional) If the output tensor is specified the accumulation is done out-of-place. (Defaults to nullptr) Required parameter if output is of QASYMM8 type. Data types supported: QASYMM8/F16/F32
[in] result_fixedpoint_multiplier (Optional) Fixed-point value each element of the input matrix is multiplied by once the result_offset has been added
[in] result_shift (Optional) Integer value used to round the result of the fixed-point multiplication to the nearest power-of-two division
[in] result_offset_after_shift (Optional) Offset to be applied to the result before converting it back to QASYMM8

Definition at line 129 of file CLDirectConvolutionOutputStageKernel.cpp.

131 {
132  ARM_COMPUTE_ERROR_ON_NULLPTR(input);
133 
134  // Auto-initialize output if required
135  if(output != nullptr)
136  {
137  // Work out expected output data type
138  const DataType output_dt = (input->info()->data_type() == DataType::S32) ? DataType::QASYMM8 : input->info()->data_type();
139  // Output tensor auto initialization if not yet initialized
140  auto_init_if_empty(*output->info(), input->info()->clone()->set_data_type(output_dt));
141  }
142 
143  // Perform validation step
144  ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(input->info(), (bias == nullptr) ? nullptr : bias->info(), (output == nullptr) ? nullptr : output->info()));
145 
146  _bias = bias;
147  _input = input;
148  _output = output;
149  _result_fixedpoint_multiplier = result_fixedpoint_multiplier;
150  _result_shift = result_shift;
151  _result_offset_after_shift = result_offset_after_shift;
152 
153  const unsigned int num_elems_accessed_per_iteration = 16 / element_size_from_data_type(input->info()->data_type());
154 
155  // Create kernel
156  CLBuildOptions build_opts;
157  build_opts.add_option_if(bias != nullptr, "-DHAS_BIAS");
158  build_opts.add_option("-D" + string_from_data_layout(input->info()->data_layout()));
159  build_opts.add_option("-DVEC_SIZE=" + support::cpp11::to_string(num_elems_accessed_per_iteration));
160  _kernel = static_cast<cl::Kernel>(CLKernelLibrary::get().create_kernel("output_stage_quantized", build_opts.options()));
161 
162  // Set static kernel arguments
163  int idx = 2 * num_arguments_per_3D_tensor() + ((bias != nullptr) ? num_arguments_per_1D_tensor() : 0);
164  _kernel.setArg<int>(idx++, _result_offset_after_shift);
165  _kernel.setArg<int>(idx++, _result_fixedpoint_multiplier);
166  _kernel.setArg<int>(idx++, _result_shift);
167 
168  // Configure kernel window
169  auto win_config = validate_and_configure_window(input->info(), (bias == nullptr) ? nullptr : bias->info(), (output == nullptr) ? nullptr : output->info());
170  ARM_COMPUTE_ERROR_THROW_ON(win_config.first);
171  ICLKernel::configure_internal(win_config.second);
172 }

References CLBuildOptions::add_option_if(), ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), arm_compute::test::validation::bias, ICloneable< T >::clone(), arm_compute::create_kernel(), ITensorInfo::data_layout(), ITensorInfo::data_type(), arm_compute::element_size_from_data_type(), CLKernelLibrary::get(), ITensor::info(), CLTensor::info(), ICLKernel::num_arguments_per_1D_tensor(), ICLKernel::num_arguments_per_3D_tensor(), arm_compute::QASYMM8, arm_compute::S32, arm_compute::string_from_data_layout(), arm_compute::support::cpp11::to_string(), and arm_compute::validate_and_configure_window().

Referenced by CLDepthwiseConvolutionLayer::configure().

◆ operator=() [1/2]

Prevent instances of this class from being copied (As this class contains pointers)

◆ operator=() [2/2]

Allow instances of this class to be moved.

◆ run()

void run ( const Window & window,
cl::CommandQueue &  queue 
)
overridevirtual

Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.

Note
The queue is not flushed by this method, and therefore the kernel will not have been executed by the time this method returns.
Parameters
[in]windowRegion on which to execute the kernel. (Must be a valid region of the window returned by window()).
[in,out]queueCommand queue on which to enqueue the kernel.

Implements ICLKernel.

Definition at line 182 of file CLDirectConvolutionOutputStageKernel.cpp.

183 {
184  ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
185  ARM_COMPUTE_ERROR_ON_MISMATCHING_WINDOWS(ICLKernel::window(), window);
186 
187  Window slice = window.first_slice_window_3D();
188 
189  // Set bias vector
190  if(_bias != nullptr)
191  {
192  unsigned int idx1 = 2 * num_arguments_per_3D_tensor();
193  Window slice_biases;
194  slice_biases.use_tensor_dimensions(_bias->info()->tensor_shape());
195  add_1D_tensor_argument(idx1, _bias, slice_biases);
196  }
197 
198  // Run kernel
199  do
200  {
201  // Set arguments
202  unsigned int idx = 0;
203  add_3D_tensor_argument(idx, _input, slice);
204  add_3D_tensor_argument(idx, _output, slice);
205  enqueue(queue, *this, slice, lws_hint());
206  }
207  while(window.slide_window_slice_3D(slice));
208 }

References ICLKernel::add_1D_tensor_argument(), ICLKernel::add_3D_tensor_argument(), ARM_COMPUTE_ERROR_ON_MISMATCHING_WINDOWS, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, arm_compute::enqueue(), Window::first_slice_window_3D(), ITensor::info(), ICLKernel::lws_hint(), ICLKernel::num_arguments_per_3D_tensor(), arm_compute::test::validation::reference::slice(), Window::slide_window_slice_3D(), ITensorInfo::tensor_shape(), Window::use_tensor_dimensions(), and IKernel::window().

◆ validate()

Status validate ( const ITensorInfo * input,
const ITensorInfo * bias = nullptr,
const ITensorInfo * output = nullptr 
)
static

Static function to check if given info will lead to a valid configuration of CLDirectConvolutionLayerOutputStageKernel.

Parameters
[in] input Input to add the bias to. If output is not specified then accumulation is done in-place. Data type supported: F16/F32
[in] bias (Optional) The shared bias tensor to add. It must be a 1D tensor. Data type supported: Same as input
[in] output (Optional) If the output tensor is specified the accumulation is done out-of-place. (Defaults to nullptr) Data type supported: F16/F32
Returns
a status

Definition at line 174 of file CLDirectConvolutionOutputStageKernel.cpp.

175 {
176  ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(input, bias, output));
177  ARM_COMPUTE_RETURN_ON_ERROR(validate_and_configure_window(input->clone().get(), bias == nullptr ? nullptr : bias->clone().get(), output == nullptr ? nullptr : output->clone().get()).first);
178 
179  return Status{};
180 }

References ARM_COMPUTE_RETURN_ON_ERROR, arm_compute::test::validation::bias, ICloneable< T >::clone(), and arm_compute::validate_and_configure_window().

Referenced by CLDepthwiseConvolutionLayer::validate().


The documentation for this class was generated from the following files: