Compute Library 21.02
CLDeconvolutionLayer Class Reference

Basic function to compute the deconvolution layer. More...

#include <CLDeconvolutionLayer.h>


Public Member Functions

 CLDeconvolutionLayer (std::shared_ptr< IMemoryManager > memory_manager=nullptr)
 Default constructor. More...
 
void configure (ICLTensor *input, ICLTensor *weights, const ICLTensor *bias, ICLTensor *output, const PadStrideInfo &deconv_info, const WeightsInfo &weights_info=WeightsInfo())
 Set the input, weights, biases and output tensors. More...
 
void configure (const CLCompileContext &compile_context, ICLTensor *input, ICLTensor *weights, const ICLTensor *bias, ICLTensor *output, const PadStrideInfo &deconv_info, const WeightsInfo &weights_info=WeightsInfo())
 Set the input, weights, biases and output tensors. More...
 
void run () override
 Run the kernels contained in the function. More...
 
void prepare () override
 Prepare the function for executing. More...
 
- Public Member Functions inherited from IFunction
virtual ~IFunction ()=default
 Destructor. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *weights, const ITensorInfo *bias, ITensorInfo *output, const PadStrideInfo &deconv_info, const WeightsInfo &weights_info=WeightsInfo())
 Static function to check if given info will lead to a valid configuration of CLDeconvolutionLayer. More...
 
static DeconvolutionMethod get_deconvolution_method (const ITensorInfo *input, const ITensorInfo *weights, const ITensorInfo *bias, ITensorInfo *output, const PadStrideInfo &deconv_info, const WeightsInfo &weights_info)
 

Detailed Description

Basic function to compute the deconvolution layer.

This function calls the following OpenCL kernels/functions:

  1. CLGEMMDeconvolutionLayer
  2. CLDirectDeconvolutionLayer

Definition at line 41 of file CLDeconvolutionLayer.h.
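
A minimal usage sketch is shown below. Tensor shapes, data type and stride are illustrative only; the function selects CLDirectDeconvolutionLayer or CLGEMMDeconvolutionLayer internally via get_deconvolution_method().

#include "arm_compute/runtime/CL/CLScheduler.h"
#include "arm_compute/runtime/CL/CLTensor.h"
#include "arm_compute/runtime/CL/functions/CLDeconvolutionLayer.h"

using namespace arm_compute;

int main()
{
    // Initialise the default OpenCL context and queue used by the CL backend
    CLScheduler::get().default_init();

    // Illustrative shapes: 8x8 input with 16 channels, 2x2 kernels (16 IFM, 4 OFM), stride 2
    CLTensor input, weights, bias, output;
    input.allocator()->init(TensorInfo(TensorShape(8U, 8U, 16U), 1, DataType::F32));
    weights.allocator()->init(TensorInfo(TensorShape(2U, 2U, 16U, 4U), 1, DataType::F32));
    bias.allocator()->init(TensorInfo(TensorShape(4U), 1, DataType::F32));
    output.allocator()->init(TensorInfo(TensorShape(16U, 16U, 4U), 1, DataType::F32));

    // Configure the deconvolution: stride 2x2, no padding
    CLDeconvolutionLayer deconv;
    deconv.configure(&input, &weights, &bias, &output, PadStrideInfo(2, 2, 0, 0));

    // Allocate the backing CL buffers (fill input/weights/bias before running in real code)
    input.allocator()->allocate();
    weights.allocator()->allocate();
    bias.allocator()->allocate();
    output.allocator()->allocate();

    deconv.run();
    CLScheduler::get().sync(); // run() does not block; wait for the enqueued kernels
    return 0;
}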

Constructor & Destructor Documentation

◆ CLDeconvolutionLayer()

CLDeconvolutionLayer ( std::shared_ptr< IMemoryManager > memory_manager = nullptr )

Default constructor.

Definition at line 39 of file CLDeconvolutionLayer.cpp.

    : _memory_manager(std::move(memory_manager)), _function()
{
}

Member Function Documentation

◆ configure() [1/2]

void configure ( ICLTensor * input,
ICLTensor * weights,
const ICLTensor * bias,
ICLTensor * output,
const PadStrideInfo & deconv_info,
const WeightsInfo & weights_info = WeightsInfo()
)

Set the input, weights, biases and output tensors.

Parameters
[in,out] input        Input tensor. 3 lower dimensions represent a single input, and an optional 4th dimension for batch of inputs. Data types supported: QASYMM8_SIGNED/QASYMM8/F16/F32.
[in]     weights      The 4d weights with dimensions [width, height, IFM, OFM]. Data type supported: Same as input.
[in]     bias         (Optional) The biases have one dimension. Data type supported: Same as input.
[out]    output       Output tensor. The output has the same number of dimensions as the input.
[in]     deconv_info  Contains padding and policies to be used in the deconvolution, this is described in PadStrideInfo.
[in]     weights_info (Optional) Weights information needed for CLConvolutionLayer, specifies if the weights tensor has been reshaped with CLWeightsReshapeKernel.

Definition at line 44 of file CLDeconvolutionLayer.cpp.

References CLKernelLibrary::get().

{
    configure(CLKernelLibrary::get().get_compile_context(), input, weights, bias, output, deconv_info, weights_info);
}
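
Since the bias is optional, a bias-less configuration can simply pass nullptr. A short sketch, assuming input, weights and output are ICLTensor objects initialised as in the example in the detailed description:

// Sketch: configure without a bias tensor; deconv_info carries the stride/padding
CLDeconvolutionLayer deconv;
deconv.configure(&input, &weights, /* bias */ nullptr, &output, PadStrideInfo(2, 2, 0, 0));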

◆ configure() [2/2]

void configure ( const CLCompileContext & compile_context,
ICLTensor * input,
ICLTensor * weights,
const ICLTensor * bias,
ICLTensor * output,
const PadStrideInfo & deconv_info,
const WeightsInfo & weights_info = WeightsInfo()
)

Set the input, weights, biases and output tensors.

Parameters
[in]     compile_context The compile context to be used.
[in,out] input           Input tensor. 3 lower dimensions represent a single input, and an optional 4th dimension for batch of inputs. Data types supported: QASYMM8_SIGNED/QASYMM8/F16/F32.
[in]     weights         The 4d weights with dimensions [width, height, IFM, OFM]. Data type supported: Same as input.
[in]     bias            (Optional) The biases have one dimension. Data type supported: Same as input.
[out]    output          Output tensor. The output has the same number of dimensions as the input.
[in]     deconv_info     Contains padding and policies to be used in the deconvolution, this is described in PadStrideInfo.
[in]     weights_info    (Optional) Weights information needed for CLConvolutionLayer, specifies if the weights tensor has been reshaped with CLWeightsReshapeKernel.

Definition at line 50 of file CLDeconvolutionLayer.cpp.

References ARM_COMPUTE_ERROR, ARM_COMPUTE_ERROR_ON_NULLPTR, arm_compute::DIRECT, arm_compute::GEMM, CLDeconvolutionLayer::get_deconvolution_method(), ITensor::info(), and arm_compute::test::validation::weights_info.

{
    ARM_COMPUTE_ERROR_ON_NULLPTR(input, weights, output);

    switch(CLDeconvolutionLayer::get_deconvolution_method(input->info(), weights->info(), nullptr, output->info(), deconv_info, weights_info))
    {
        case DeconvolutionMethod::DIRECT:
        {
            auto f = std::make_unique<CLDirectDeconvolutionLayer>();
            f->configure(compile_context, input, weights, bias, output, deconv_info, weights_info);
            _function = std::move(f);
            break;
        }
        case DeconvolutionMethod::GEMM:
        {
            auto f = std::make_unique<CLGEMMDeconvolutionLayer>(_memory_manager);
            f->configure(compile_context, input, weights, bias, output, deconv_info);
            _function = std::move(f);
            break;
        }
        default:
            ARM_COMPUTE_ERROR("Not supported.");
            break;
    }
}
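
When several functions should share a compile context, it can be passed explicitly; the parameter-less overload above simply forwards the default one, as its definition shows. A short sketch, with tensors assumed to be initialised as before:

// Sketch: configure with an explicitly chosen compile context
const CLCompileContext &ctx = CLKernelLibrary::get().get_compile_context();
CLDeconvolutionLayer deconv;
deconv.configure(ctx, &input, &weights, &bias, &output, PadStrideInfo(2, 2, 0, 0));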

◆ get_deconvolution_method()

DeconvolutionMethod get_deconvolution_method ( const ITensorInfo * input,
const ITensorInfo * weights,
const ITensorInfo * bias,
ITensorInfo * output,
const PadStrideInfo & deconv_info,
const WeightsInfo & weights_info
)
static

Definition at line 103 of file CLDeconvolutionLayer.cpp.

References ARM_COMPUTE_UNUSED, arm_compute::test::validation::data_layout, ITensorInfo::data_layout(), ITensorInfo::dimension(), arm_compute::DIRECT, arm_compute::GEMM, arm_compute::get_data_layout_dimension_index(), arm_compute::HEIGHT, PadStrideInfo::stride(), and arm_compute::WIDTH.

Referenced by CLDeconvolutionLayer::configure(), and CLDeconvolutionLayer::validate().

{
    ARM_COMPUTE_UNUSED(output, bias, weights_info);

    const DataLayout data_layout = input->data_layout();

    const size_t idx_w = get_data_layout_dimension_index(data_layout, DataLayoutDimension::WIDTH);
    const size_t idx_h = get_data_layout_dimension_index(data_layout, DataLayoutDimension::HEIGHT);

    if(weights->dimension(idx_w) != deconv_info.stride().first || weights->dimension(idx_h) != deconv_info.stride().second)
    {
        return DeconvolutionMethod::DIRECT;
    }

    return DeconvolutionMethod::GEMM;
}

◆ prepare()

void prepare ( )
override virtual

Prepare the function for executing.

Any one-off pre-processing step required by the function is handled here.

Note
Prepare stage might not need all the function's buffers' backing memory to be available in order to execute

Reimplemented from IFunction.

Definition at line 127 of file CLDeconvolutionLayer.cpp.

Referenced by CLDeconvolutionLayer::run().

{
    _function->prepare();
}
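
prepare() is invoked automatically on the first run(), but it can also be called explicitly after configuration to front-load the one-off work outside of the latency-critical path. A short sketch, assuming the deconv function and tensors from the earlier example:

deconv.configure(&input, &weights, &bias, &output, PadStrideInfo(2, 2, 0, 0));
// ... allocate the tensors and fill weights/bias ...
deconv.prepare(); // one-off pre-processing done up front
deconv.run();     // the first run no longer pays the preparation cost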

◆ run()

void run ( )
override virtual

Run the kernels contained in the function.

For Neon kernels:

  • Multi-threading is used for the kernels which are parallelisable.
  • By default std::thread::hardware_concurrency() threads are used.
Note
CPPScheduler::set_num_threads() can be used to manually set the number of threads

For OpenCL kernels:

  • All the kernels are enqueued on the queue associated with CLScheduler.
  • The queue is then flushed.
Note
The function will not block until the kernels are executed. It is the user's responsibility to wait.
Will call prepare() on the first run if it has not been done already.

Implements IFunction.

Definition at line 121 of file CLDeconvolutionLayer.cpp.

References CLDeconvolutionLayer::prepare().

{
    prepare();
    _function->run();
}
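
Because the kernels are only enqueued and flushed, a caller that needs the results on the host must synchronise explicitly, for example:

deconv.run();              // enqueue and flush the OpenCL kernels (non-blocking)
CLScheduler::get().sync(); // block until the enqueued work has finished
// the output tensor can now be mapped and read back safely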

◆ validate()

Status validate ( const ITensorInfo * input,
const ITensorInfo * weights,
const ITensorInfo * bias,
ITensorInfo * output,
const PadStrideInfo & deconv_info,
const WeightsInfo & weights_info = WeightsInfo()
)
static

Static function to check if given info will lead to a valid configuration of CLDeconvolutionLayer.

Parameters
[in] input        Input tensor info. 3 lower dimensions represent a single input, and an optional 4th dimension for batch of inputs. Data types supported: QASYMM8_SIGNED/QASYMM8/F16/F32.
[in] weights      The 4d weights info with dimensions [width, height, IFM, OFM]. Data type supported: Same as input.
[in] bias         (Optional) The biases have one dimension. Data type supported: Same as input.
[in] output       Output tensor info. The output has the same number of dimensions as the input.
[in] deconv_info  Contains padding and policies to be used in the deconvolution, this is described in PadStrideInfo.
[in] weights_info (Optional) Weights information needed for CLConvolutionLayer, specifies if the weights tensor has been reshaped with CLWeightsReshapeKernel.
Returns
a status

Definition at line 77 of file CLDeconvolutionLayer.cpp.

References ARM_COMPUTE_ERROR, ARM_COMPUTE_RETURN_ERROR_ON_NULLPTR, ARM_COMPUTE_RETURN_ON_ERROR, arm_compute::DIRECT, arm_compute::GEMM, CLDeconvolutionLayer::get_deconvolution_method(), CLGEMMDeconvolutionLayer::validate(), and CLDirectDeconvolutionLayer::validate().

Referenced by arm_compute::test::validation::DATA_TEST_CASE().

{
    ARM_COMPUTE_RETURN_ERROR_ON_NULLPTR(input, weights, output);
    switch(CLDeconvolutionLayer::get_deconvolution_method(input, weights, bias, output, deconv_info, weights_info))
    {
        case DeconvolutionMethod::DIRECT:
        {
            // Validate direct convolution layer
            ARM_COMPUTE_RETURN_ON_ERROR(CLDirectDeconvolutionLayer::validate(input, weights, bias, output, deconv_info, weights_info));
            break;
        }
        case DeconvolutionMethod::GEMM:
        {
            // Validate gemm-based convolution layer
            ARM_COMPUTE_RETURN_ON_ERROR(CLGEMMDeconvolutionLayer::validate(input, weights, bias, output, deconv_info));
            break;
        }
        default:
            ARM_COMPUTE_ERROR("Not supported.");
            break;
    }

    return Status{};
}
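
validate() takes tensor metadata only, so a configuration can be checked before any OpenCL buffers are allocated. A sketch with illustrative shapes:

// Sketch: check a configuration up front using TensorInfo metadata only
const TensorInfo    in_info(TensorShape(8U, 8U, 16U), 1, DataType::F32);
const TensorInfo    wei_info(TensorShape(2U, 2U, 16U, 4U), 1, DataType::F32);
const TensorInfo    bias_info(TensorShape(4U), 1, DataType::F32);
TensorInfo          out_info(TensorShape(16U, 16U, 4U), 1, DataType::F32);
const PadStrideInfo deconv_info(2, 2, 0, 0);

const Status status = CLDeconvolutionLayer::validate(&in_info, &wei_info, &bias_info, &out_info, deconv_info);
if(status.error_code() != ErrorCode::OK)
{
    // Configuration is not supported; status.error_description() holds the reason
}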

The documentation for this class was generated from the following files:

  • CLDeconvolutionLayer.h
  • CLDeconvolutionLayer.cpp