Compute Library
 21.02
CLDirectConvolutionLayer Class Reference

Basic function to execute direct convolution. More...

#include <CLDirectConvolutionLayer.h>

Collaboration diagram for CLDirectConvolutionLayer:

Public Member Functions

 CLDirectConvolutionLayer ()
 Default constructor. More...
 
 CLDirectConvolutionLayer (const CLDirectConvolutionLayer &)=delete
 Prevent instances of this class from being copied. More...
 
CLDirectConvolutionLayer & operator= (const CLDirectConvolutionLayer &)=delete
 Prevent instances of this class from being copied. More...
 
 ~CLDirectConvolutionLayer ()
 Default destructor. More...
 
void configure (ICLTensor *input, const ICLTensor *weights, const ICLTensor *biases, ICLTensor *output, const PadStrideInfo &conv_info, const ActivationLayerInfo &act_info=ActivationLayerInfo())
 Set the input and output tensors. More...
 
void configure (const CLCompileContext &compile_context, ICLTensor *input, const ICLTensor *weights, const ICLTensor *biases, ICLTensor *output, const PadStrideInfo &conv_info, const ActivationLayerInfo &act_info=ActivationLayerInfo())
 Set the input and output tensors. More...
 
void run () override
 Run the kernels contained in the function. More...
 
- Public Member Functions inherited from IFunction
virtual ~IFunction ()=default
 Destructor. More...
 
virtual void prepare ()
 Prepare the function for executing. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *weights, const ITensorInfo *biases, const ITensorInfo *output, const PadStrideInfo &conv_info, const ActivationLayerInfo &act_info=ActivationLayerInfo())
 Static function to check if given info will lead to a valid configuration of CLDirectConvolutionLayer. More...
 

Detailed Description

Basic function to execute direct convolution. As shown in configure() and run(), it uses CLFillBorderKernel for border handling, CLDirectConvolutionLayerKernel for the convolution itself, and CLActivationLayer when a fused activation is enabled.

Definition at line 43 of file CLDirectConvolutionLayer.h.
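To make the operation concrete independently of the OpenCL implementation, here is a minimal single-channel direct convolution in plain C++ (unit stride, no padding). The function name and shapes are ours, for illustration only; the layer itself additionally handles IFM/OFM dimensions, batches, and padding:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Direct convolution of a single-channel w x h input with a k x k kernel,
// "valid" padding and unit stride. Each output element is the dot product of
// the kernel with the corresponding input window, plus the bias - which is
// what a direct (non-GEMM, non-Winograd) convolution computes per element.
std::vector<float> direct_conv2d(const std::vector<float> &in, std::size_t w, std::size_t h,
                                 const std::vector<float> &kernel, std::size_t k, float bias)
{
    const std::size_t out_w = w - k + 1;
    const std::size_t out_h = h - k + 1;
    std::vector<float> out(out_w * out_h, 0.f);
    for(std::size_t oy = 0; oy < out_h; ++oy)
    {
        for(std::size_t ox = 0; ox < out_w; ++ox)
        {
            float acc = bias;
            for(std::size_t ky = 0; ky < k; ++ky)
            {
                for(std::size_t kx = 0; kx < k; ++kx)
                {
                    acc += in[(oy + ky) * w + (ox + kx)] * kernel[ky * k + kx];
                }
            }
            out[oy * out_w + ox] = acc;
        }
    }
    return out;
}
```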

Constructor & Destructor Documentation

◆ CLDirectConvolutionLayer() [1/2]

Default constructor.

Definition at line 36 of file CLDirectConvolutionLayer.cpp.

References CLDirectConvolutionLayer::~CLDirectConvolutionLayer().

    : _direct_conv_kernel(std::make_unique<CLDirectConvolutionLayerKernel>()),
      _input_border_handler(std::make_unique<CLFillBorderKernel>()),
      _activationlayer_function(),
      _is_activationlayer_enabled(false)
{
}

◆ CLDirectConvolutionLayer() [2/2]

Prevent instances of this class from being copied.

◆ ~CLDirectConvolutionLayer()

Default destructor.

Referenced by CLDirectConvolutionLayer::CLDirectConvolutionLayer().

Member Function Documentation

◆ configure() [1/2]

void configure ( ICLTensor *                 input,
                 const ICLTensor *           weights,
                 const ICLTensor *           biases,
                 ICLTensor *                 output,
                 const PadStrideInfo &       conv_info,
                 const ActivationLayerInfo & act_info = ActivationLayerInfo()
               )

Set the input and output tensors.

Parameters
[in]  input      Source tensor. 3 lower dimensions represent a single input [width, height, IFM], while every optional dimension from 4 and above represents a batch of inputs. Data types supported: QASYMM8_SIGNED/QASYMM8/F16/F32.
[in]  weights    Weights tensor. Weights are a 4D tensor with dimensions [kernel_x, kernel_y, IFM, OFM]. Data type supported: Same as input.
[in]  biases     Biases tensor. Shared biases supported. Biases are a 1D tensor with dimensions [OFM]. Data type supported: Should match the input data type, except for inputs of QASYMM8 and QASYMM8_SIGNED type, where biases should be of S32 type.
[out] output     Destination tensor. 3 lower dimensions represent a single output [width, height, OFM], while the rest represent a batch of outputs. Data types supported: Same as input.
[in]  conv_info  Contains padding and stride information described in PadStrideInfo.
[in]  act_info   (Optional) Activation layer information in case of a fused activation.
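conv_info carries the strides and paddings from which the destination's spatial extents follow via the usual convolution formula. A small sketch of that arithmetic (the helper function is ours, not part of the library):

```cpp
#include <cassert>
#include <cstddef>

// Output extent along one dimension of a convolution:
//   out = (in + pad_before + pad_after - kernel) / stride + 1  (integer division).
// This mirrors what a PadStrideInfo configuration implies for the destination
// tensor shape; the helper itself is illustrative only.
std::size_t conv_out_dim(std::size_t in, std::size_t kernel, std::size_t stride,
                         std::size_t pad_before, std::size_t pad_after)
{
    return (in + pad_before + pad_after - kernel) / stride + 1;
}
```

For example, a 3x3 kernel with stride 1 and padding 1 preserves the spatial size ("same" padding), while stride 2 with no padding roughly halves it.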

Definition at line 44 of file CLDirectConvolutionLayer.cpp.

References CLKernelLibrary::get().

{
    configure(CLKernelLibrary::get().get_compile_context(), input, weights, biases, output, conv_info, act_info);
}

◆ configure() [2/2]

void configure ( const CLCompileContext &    compile_context,
                 ICLTensor *                 input,
                 const ICLTensor *           weights,
                 const ICLTensor *           biases,
                 ICLTensor *                 output,
                 const PadStrideInfo &       conv_info,
                 const ActivationLayerInfo & act_info = ActivationLayerInfo()
               )

Set the input and output tensors.

Parameters
[in]  compile_context  The compile context to be used.
[in]  input            Source tensor. 3 lower dimensions represent a single input [width, height, IFM], while every optional dimension from 4 and above represents a batch of inputs. Data types supported: QASYMM8_SIGNED/QASYMM8/F16/F32.
[in]  weights          Weights tensor. Weights are a 4D tensor with dimensions [kernel_x, kernel_y, IFM, OFM]. Data type supported: Same as input.
[in]  biases           Biases tensor. Shared biases supported. Biases are a 1D tensor with dimensions [OFM]. Data type supported: Should match the input data type, except for inputs of QASYMM8 and QASYMM8_SIGNED type, where biases should be of S32 type.
[out] output           Destination tensor. 3 lower dimensions represent a single output [width, height, OFM], while the rest represent a batch of outputs. Data types supported: Same as input.
[in]  conv_info        Contains padding and stride information described in PadStrideInfo.
[in]  act_info         (Optional) Activation layer information in case of a fused activation.
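The reason QASYMM8/QASYMM8_SIGNED inputs pair with S32 biases is that products of 8-bit values are accumulated in 32 bits, and the bias is added to that 32-bit accumulator before requantization. A self-contained sketch of that accumulation (values and the function name are illustrative, not taken from the library):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Quantized dot product with an S32 bias: each 8-bit value is shifted by its
// zero-point offset, products are accumulated in int32, and the 32-bit bias is
// added to the accumulator. Requantization back to 8 bits would follow.
int32_t quantized_dot_plus_bias(const std::vector<uint8_t> &a, int32_t a_offset,
                                const std::vector<uint8_t> &w, int32_t w_offset,
                                int32_t bias_s32)
{
    int32_t acc = bias_s32;
    for(std::size_t i = 0; i < a.size(); ++i)
    {
        acc += (static_cast<int32_t>(a[i]) - a_offset) * (static_cast<int32_t>(w[i]) - w_offset);
    }
    return acc;
}
```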

Definition at line 49 of file CLDirectConvolutionLayer.cpp.

References CLActivationLayer::configure(), arm_compute::CONSTANT, ITensorInfo::data_type(), ActivationLayerInfo::enabled(), CLScheduler::get(), ITensor::info(), arm_compute::is_data_type_quantized_asymmetric(), ITensorInfo::quantization_info(), and CLScheduler::tune_kernel_static().

{
    // Set GPU target
    _direct_conv_kernel->set_target(CLScheduler::get().target());

    // Configure direct convolution
    _direct_conv_kernel->configure(compile_context, input, weights, biases, output, conv_info);

    // Configure border handler
    PixelValue &&zero_value(0.f);
    if(is_data_type_quantized_asymmetric(input->info()->data_type()))
    {
        zero_value = PixelValue(0, input->info()->data_type(), input->info()->quantization_info());
    }
    _input_border_handler->configure(compile_context, input, _direct_conv_kernel->border_size(), BorderMode::CONSTANT, zero_value);

    // Tune kernels
    CLScheduler::get().tune_kernel_static(*_direct_conv_kernel);

    _is_activationlayer_enabled = act_info.enabled();

    // Configure activation layer
    if(_is_activationlayer_enabled)
    {
        _activationlayer_function.configure(compile_context, output, nullptr, act_info);
    }
}

◆ operator=()

CLDirectConvolutionLayer & operator= ( const CLDirectConvolutionLayer & )
delete

Prevent instances of this class from being copied.

◆ run()

void run ( )
override virtual

Run the kernels contained in the function.

For Neon kernels:

  • Multi-threading is used for the kernels which are parallelisable.
  • By default, std::thread::hardware_concurrency() threads are used.
Note
CPPScheduler::set_num_threads() can be used to manually set the number of threads

For OpenCL kernels:

  • All the kernels are enqueued on the queue associated with CLScheduler.
  • The queue is then flushed.
Note
The function will not block until the kernels are executed. It is the user's responsibility to wait.
Will call prepare() on the first run if it hasn't been done already.

Implements IFunction.

Definition at line 90 of file CLDirectConvolutionLayer.cpp.

References CLScheduler::enqueue(), CLScheduler::get(), and CLActivationLayer::run().

{
    // Run border handler
    CLScheduler::get().enqueue(*_input_border_handler, false);

    // Run direct convolution
    CLScheduler::get().enqueue(*_direct_conv_kernel);

    // Run activation layer
    if(_is_activationlayer_enabled)
    {
        _activationlayer_function.run();
    }
}

◆ validate()

Status validate ( const ITensorInfo *         input,
                  const ITensorInfo *         weights,
                  const ITensorInfo *         biases,
                  const ITensorInfo *         output,
                  const PadStrideInfo &       conv_info,
                  const ActivationLayerInfo & act_info = ActivationLayerInfo()
                )
static

Static function to check if given info will lead to a valid configuration of CLDirectConvolutionLayer.

Parameters
[in]  input      Source tensor. 3 lower dimensions represent a single input [width, height, IFM], while every optional dimension from 4 and above represents a batch of inputs. Data types supported: QASYMM8_SIGNED/QASYMM8/F16/F32.
[in]  weights    Weights tensor. Weights are a 4D tensor with dimensions [kernel_x, kernel_y, IFM, OFM]. Data type supported: Same as input.
[in]  biases     Biases tensor. Shared biases supported. Biases are a 1D tensor with dimensions [OFM]. Data type supported: Should match the input data type, except for inputs of QASYMM8 and QASYMM8_SIGNED type, where biases should be of S32 type.
[out] output     Destination tensor. 3 lower dimensions represent a single output [width, height, OFM], while the rest represent a batch of outputs. Data types supported: Same as input.
[in]  conv_info  Contains padding and stride information described in PadStrideInfo.
[in]  act_info   (Optional) Activation layer information in case of a fused activation.
Returns
a status

Definition at line 79 of file CLDirectConvolutionLayer.cpp.

References ARM_COMPUTE_RETURN_ON_ERROR, ActivationLayerInfo::enabled(), CLScheduler::get(), CLActivationLayer::validate(), and CLDirectConvolutionLayerKernel::validate().

Referenced by arm_compute::test::validation::DATA_TEST_CASE(), CLConvolutionLayer::get_convolution_method(), and CLConvolutionLayer::validate().

{
    ARM_COMPUTE_RETURN_ON_ERROR(CLDirectConvolutionLayerKernel::validate(input, weights, biases, output, conv_info, CLScheduler::get().target()));
    if(act_info.enabled())
    {
        ARM_COMPUTE_RETURN_ON_ERROR(CLActivationLayer::validate(output, nullptr, act_info));
    }
    return Status{};
}

The documentation for this class was generated from the following files:

CLDirectConvolutionLayer.h
CLDirectConvolutionLayer.cpp