24.04
Basic function to simulate a convolution layer.
#include <NEConvolutionLayer.h>
Public Member Functions

NEConvolutionLayer(std::shared_ptr<IMemoryManager> memory_manager = nullptr)
    Constructor.
NEConvolutionLayer(const NEConvolutionLayer &) = delete
    Prevent instances of this class from being copied (as this class contains pointers).
NEConvolutionLayer &operator=(const NEConvolutionLayer &) = delete
    Prevent instances of this class from being copied (as this class contains pointers).
NEConvolutionLayer(NEConvolutionLayer &&) = default
    Default move constructor.
NEConvolutionLayer &operator=(NEConvolutionLayer &&) = default
    Default move assignment operator.
~NEConvolutionLayer()
    Default destructor.
void configure(ITensor *input, const ITensor *weights, const ITensor *biases, ITensor *output, const PadStrideInfo &conv_info, const WeightsInfo &weights_info = WeightsInfo(), const Size2D &dilation = Size2D(1U, 1U), const ActivationLayerInfo &act_info = ActivationLayerInfo(), bool enable_fast_math = false, unsigned int num_groups = 1)
    Set the input and output tensors.
void run() override
    Run the kernels contained in the function.
void prepare() override
    Prepare the function for executing.

Public Member Functions inherited from IFunction

virtual ~IFunction() = default
    Destructor.

Static Public Member Functions

static Status validate(const ITensorInfo *input, const ITensorInfo *weights, const ITensorInfo *biases, const ITensorInfo *output, const PadStrideInfo &conv_info, const WeightsInfo &weights_info = WeightsInfo(), const Size2D &dilation = Size2D(1U, 1U), const ActivationLayerInfo &act_info = ActivationLayerInfo(), bool enable_fast_math = false, unsigned int num_groups = 1)
    Static function to check if the given info will lead to a valid configuration of NEConvolutionLayer.
static ConvolutionMethod get_convolution_method(const ITensorInfo *input, const ITensorInfo *weights, const ITensorInfo *output, const PadStrideInfo &conv_info, const WeightsInfo &weights_info = WeightsInfo(), const Size2D &dilation = Size2D(1U, 1U), const ActivationLayerInfo &act_info = ActivationLayerInfo(), bool enable_fast_math = false)
    Static function that returns the convolution method NEConvolutionLayer would select for the given configuration.
Basic function to simulate a convolution layer.
This function calls one of the following functions: NEGEMMConvolutionLayer, NEWinogradConvolutionLayer, NEFFTConvolutionLayer, or NEDirectConvolutionLayer.
The function selects one of these algorithms based on the data type, the filter size, and the input/output feature map sizes, as summarized in the tables below.
Generally, GEMM-based convolution is executed when neither Winograd, FFT, nor direct convolution can be performed.
FP32 Algorithm | Filter Size | Input/Output feature maps
---|---|---
Winograd | 3x3, 1x3, 3x1, 5x1, 1x5, 5x5 (fast maths), 7x1, 1x7 | Input channels greater than 3
FFT | Square kernels larger than 9x9 | Input feature maps > output feature maps
DirectConv | 9x9 |
GEMM | Any size |

Note: Winograd with a 5x5 filter requires fast maths to be enabled.
FP16 Algorithm | Filter Size
---|---
Winograd | Not supported
FFT | Not supported
DirectConv | 9x9
GEMM | Any size
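The FP32 selection rules above can be sketched as a small decision function. This is an illustrative encoding of the table only, not the library's actual heuristic (the real CpuConv2d::get_convolution_method also weighs tensor shapes, data layout, and memory):

```cpp
#include <set>
#include <utility>

// Illustrative sketch of the FP32 method-selection table above.
// NOT the library's actual logic; it only encodes the documented table rows.
enum class Method { Winograd, FFT, DirectConv, Gemm };

Method select_fp32_method(int kx, int ky, int input_channels,
                          int input_maps, int output_maps, bool fast_math)
{
    static const std::set<std::pair<int, int>> winograd_filters = {
        {3, 3}, {1, 3}, {3, 1}, {5, 1}, {1, 5}, {7, 1}, {1, 7}};
    // Winograd: listed filter sizes with more than 3 input channels;
    // 5x5 additionally requires fast maths.
    if (input_channels > 3 &&
        (winograd_filters.count({kx, ky}) != 0 ||
         (kx == 5 && ky == 5 && fast_math)))
        return Method::Winograd;
    // FFT: square kernels larger than 9x9 when input maps exceed output maps.
    if (kx == ky && kx > 9 && input_maps > output_maps)
        return Method::FFT;
    // Direct convolution: 9x9 filters.
    if (kx == 9 && ky == 9)
        return Method::DirectConv;
    // GEMM handles any remaining size.
    return Method::Gemm;
}
```

For example, a 5x5 filter falls back to GEMM unless fast maths is enabled, matching the note under the table.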
Definition at line 72 of file NEConvolutionLayer.h.
NEConvolutionLayer(std::shared_ptr<IMemoryManager> memory_manager = nullptr)
Constructor.
Definition at line 56 of file NEConvolutionLayer.cpp.
NEConvolutionLayer(const NEConvolutionLayer &) = delete
Prevent instances of this class from being copied (as this class contains pointers).
NEConvolutionLayer(NEConvolutionLayer &&) = default
Default move constructor.
~NEConvolutionLayer()
Default destructor.
void configure(ITensor *input,
               const ITensor *weights,
               const ITensor *biases,
               ITensor *output,
               const PadStrideInfo &conv_info,
               const WeightsInfo &weights_info = WeightsInfo(),
               const Size2D &dilation = Size2D(1U, 1U),
               const ActivationLayerInfo &act_info = ActivationLayerInfo(),
               bool enable_fast_math = false,
               unsigned int num_groups = 1)
Set the input and output tensors.
Valid data layouts: All
Valid data type configurations:
src0 | src1 | src2 | dst
---|---|---|---
F16 | F16 | F16 | F16
F32 | F32 | F32 | F32
QASYMM8 | QASYMM8 | S32 | QASYMM8
QASYMM8 | QSYMM8_PER_CHANNEL | S32 | QASYMM8
QASYMM8_SIGNED | QASYMM8_SIGNED | S32 | QASYMM8_SIGNED
QASYMM8_SIGNED | QSYMM8_PER_CHANNEL | S32 | QASYMM8_SIGNED
Parameters
    [in]  input             Source tensor. The 3 lower dimensions represent a single input [width, height, IFM], while every optional dimension from 4 and above represents a batch of inputs. Data types supported: QASYMM8/QASYMM8_SIGNED/F16/F32.
    [in]  weights           Weights tensor. Weights are a 4D tensor with dimensions [kernel_x, kernel_y, IFM, OFM]. Data type supported: same as input; may also be QSYMM8_PER_CHANNEL if input is QASYMM8/QASYMM8_SIGNED.
    [in]  biases            Biases tensor. Shared biases are supported. Biases are a 1D tensor with dimensions [OFM]. Data type supported: same as input, except for input of QASYMM8/QASYMM8_SIGNED type, where biases should be of S32 type.
    [out] output            Destination tensor. The 3 lower dimensions represent a single output [width, height, OFM], while the rest represent a batch of outputs. Data types supported: same as input.
    [in]  conv_info         Contains padding and stride information described in PadStrideInfo.
    [in]  weights_info      Specifies if the weights tensor has been reshaped with NEWeightsReshapeKernel. If this is not part of the fully connected layer, the weights tensor has also been transposed with cpu::kernels::CpuGemmTranspose1xWKernel. Data type supported: same as input.
    [in]  dilation          (Optional) Dilation, in elements, across x and y. Defaults to (1, 1).
    [in]  act_info          (Optional) Activation layer information in case of a fused activation. Only RELU, BOUNDED_RELU and LU_BOUNDED_RELU are supported.
    [in]  enable_fast_math  (Optional) Enable fast math computation. If this flag is set, the function may dispatch the fastest implementation available, which can reduce accuracy. Defaults to false.
    [in]  num_groups        (Optional) Number of groups when performing a grouped convolution. num_groups != 1 is not supported.
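The output spatial dimensions implied by conv_info and dilation follow the usual convolution shape arithmetic. As a self-contained sketch (this helper is illustrative, not part of the library API):

```cpp
// Illustrative helper (not part of the Arm Compute Library API): computes one
// spatial output dimension of a convolution from PadStrideInfo-style
// parameters. With dilation, the effective kernel extent grows to
// dilation * (kernel - 1) + 1.
int conv_output_dim(int in_dim, int kernel, int stride,
                    int pad_before, int pad_after, int dilation = 1)
{
    const int effective_kernel = dilation * (kernel - 1) + 1;
    return (in_dim + pad_before + pad_after - effective_kernel) / stride + 1;
}
```

For example, a 224-wide input with a 3x3 kernel, stride 2 and padding 1 yields a 112-wide output.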
Definition at line 63 of file NEConvolutionLayer.cpp.
References arm_compute::ACL_DST, arm_compute::ACL_SRC_0, arm_compute::ACL_SRC_1, arm_compute::ACL_SRC_2, arm_compute::test::validation::act_info, ARM_COMPUTE_ERROR, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, ARM_COMPUTE_LOG_PARAMS, ARM_COMPUTE_UNUSED, arm_compute::test::validation::conv_info, arm_compute::DIRECT, arm_compute::FFT, arm_compute::GEMM, arm_compute::GEMM_CONV2D, CpuConv2d::get_convolution_method(), ITensor::info(), arm_compute::test::validation::info, arm_compute::test::validation::input, arm_compute::test::validation::num_groups, NEConvolutionLayer::validate(), arm_compute::test::validation::weights_info, and arm_compute::WINOGRAD.
Referenced by NEDeconvolutionLayer::configure().
static ConvolutionMethod get_convolution_method(const ITensorInfo *input, const ITensorInfo *weights, const ITensorInfo *output, const PadStrideInfo &conv_info, const WeightsInfo &weights_info = WeightsInfo(), const Size2D &dilation = Size2D(1U, 1U), const ActivationLayerInfo &act_info = ActivationLayerInfo(), bool enable_fast_math = false)
Static function that returns the convolution method NEConvolutionLayer would select for the given configuration.
Parameters
    [in]  input             Source tensor. The 3 lower dimensions represent a single input [width, height, IFM], while every optional dimension from 4 and above represents a batch of inputs. Data types supported: QASYMM8/QASYMM8_SIGNED/F16/F32.
    [in]  weights           Weights tensor. Weights are a 4D tensor with dimensions [kernel_x, kernel_y, IFM, OFM]. Data type supported: same as input; may also be QSYMM8_PER_CHANNEL if input is QASYMM8/QASYMM8_SIGNED.
    [in]  output            Destination tensor. The 3 lower dimensions represent a single output [width, height, OFM], while the rest represent a batch of outputs. Data types supported: same as input.
    [in]  conv_info         Contains padding and stride information described in PadStrideInfo.
    [in]  weights_info      Specifies if the weights tensor has been reshaped with NEWeightsReshapeKernel. If this is not part of the fully connected layer, the weights tensor has also been transposed with cpu::kernels::CpuGemmTranspose1xWKernel. Data type supported: same as input.
    [in]  dilation          (Optional) Dilation, in elements, across x and y. Defaults to (1, 1).
    [in]  act_info          (Optional) Activation layer information in case of a fused activation.
    [in]  enable_fast_math  (Optional) Enable fast math computation. If this flag is set, the function may dispatch the fastest implementation available, which can reduce accuracy. Defaults to false.
Definition at line 165 of file NEConvolutionLayer.cpp.
References arm_compute::test::validation::act_info, arm_compute::test::validation::conv_info, CpuConv2d::get_convolution_method(), arm_compute::test::validation::input, and arm_compute::test::validation::weights_info.
NEConvolutionLayer &operator=(const NEConvolutionLayer &) = delete
Prevent instances of this class from being copied (as this class contains pointers).
NEConvolutionLayer &operator=(NEConvolutionLayer &&) = default
Default move assignment operator.
void prepare() override
Prepare the function for executing.
Any one-off pre-processing step required by the function is handled here.
Reimplemented from IFunction.
Definition at line 194 of file NEConvolutionLayer.cpp.
Referenced by NEDeconvolutionLayer::prepare(), and NEConvolutionLayer::run().
void run() override
Run the kernels contained in the function.
For CPU kernels:
- Multi-threading is used for the kernels which are parallelisable.
- By default std::thread::hardware_concurrency() threads are used.

For OpenCL kernels:
- All the kernels are enqueued on the queue associated with CLScheduler.
- The queue is then flushed.
Implements IFunction.
Definition at line 178 of file NEConvolutionLayer.cpp.
References NEConvolutionLayer::prepare().
Referenced by NEDeconvolutionLayer::run().
static Status validate(const ITensorInfo *input, const ITensorInfo *weights, const ITensorInfo *biases, const ITensorInfo *output, const PadStrideInfo &conv_info, const WeightsInfo &weights_info = WeightsInfo(), const Size2D &dilation = Size2D(1U, 1U), const ActivationLayerInfo &act_info = ActivationLayerInfo(), bool enable_fast_math = false, unsigned int num_groups = 1)
Static function to check if the given info will lead to a valid configuration of NEConvolutionLayer.
Parameters
    [in]  input             Source tensor. The 3 lower dimensions represent a single input [width, height, IFM], while every optional dimension from 4 and above represents a batch of inputs. Data types supported: QASYMM8/QASYMM8_SIGNED/F16/F32.
    [in]  weights           Weights tensor. Weights are a 4D tensor with dimensions [kernel_x, kernel_y, IFM, OFM]. Data type supported: same as input; may also be QSYMM8_PER_CHANNEL if input is QASYMM8/QASYMM8_SIGNED.
    [in]  biases            Biases tensor. Shared biases are supported. Biases are a 1D tensor with dimensions [OFM]. Data type supported: same as input, except for input of QASYMM8/QASYMM8_SIGNED type, where biases should be of S32 type.
    [in]  output            Destination tensor. The 3 lower dimensions represent a single output [width, height, OFM], while the rest represent a batch of outputs. Data types supported: same as input.
    [in]  conv_info         Contains padding and stride information described in PadStrideInfo.
    [in]  weights_info      Specifies if the weights tensor has been reshaped with NEWeightsReshapeKernel. If this is not part of the fully connected layer, the weights tensor has also been transposed with cpu::kernels::CpuGemmTranspose1xWKernel. Data type supported: same as input.
    [in]  dilation          (Optional) Dilation, in elements, across x and y. Defaults to (1, 1).
    [in]  act_info          (Optional) Activation layer information in case of a fused activation.
    [in]  enable_fast_math  (Optional) Enable fast math computation. If this flag is set, the function may dispatch the fastest implementation available, which can reduce accuracy. Defaults to false.
    [in]  num_groups        (Optional) Number of groups when performing a grouped convolution. num_groups != 1 is not supported.
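A common usage pattern is to call validate() on tensor metadata before paying the cost of configure(). Sketched below with a minimal stand-in Status type and a hypothetical check, since the real arm_compute types are not reproduced here; it only mirrors two constraints documented above (num_groups != 1 unsupported, S32 biases for quantized input):

```cpp
#include <string>

// Minimal stand-in for arm_compute::Status, for illustration only: the real
// class carries an ErrorCode plus an error description string.
struct Status {
    bool ok;
    std::string description;
    explicit operator bool() const { return ok; }
};

// Hypothetical validate-style check (not the library's validate()) mirroring
// two documented constraints of NEConvolutionLayer::validate.
Status validate_conv(const std::string &input_type,
                     const std::string &bias_type, unsigned int num_groups)
{
    if (num_groups != 1)
        return {false, "Grouped convolution is not supported"};
    if ((input_type == "QASYMM8" || input_type == "QASYMM8_SIGNED") &&
        bias_type != "S32")
        return {false, "Quantized input requires S32 biases"};
    return {true, ""};
}
```

The design point this mirrors: because validate() is static and operates on ITensorInfo only, configurations can be rejected before any tensors are allocated or kernels configured.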
Definition at line 121 of file NEConvolutionLayer.cpp.
References arm_compute::test::validation::act_info, ITensorInfo::are_values_constant(), ARM_COMPUTE_ERROR, ARM_COMPUTE_RETURN_ERROR_ON_MSG, ARM_COMPUTE_RETURN_ON_ERROR, arm_compute::test::validation::conv_info, arm_compute::DIRECT, arm_compute::FFT, arm_compute::GEMM, arm_compute::GEMM_CONV2D, CpuConv2d::get_convolution_method(), arm_compute::test::validation::info, arm_compute::test::validation::input, arm_compute::is_data_type_quantized(), arm_compute::test::validation::num_groups, NEFFTConvolutionLayer::validate(), CpuConv2d::validate(), arm_compute::test::validation::weights_info, and arm_compute::WINOGRAD.
Referenced by NEConvolutionLayer::configure(), and NEDeconvolutionLayer::validate().