Arm Compute Library 24.02.1
NEGEMMConvolutionLayer.h (excerpt):

#ifndef ARM_COMPUTE_NEGEMMCONVOLUTIONLAYER_H
#define ARM_COMPUTE_NEGEMMCONVOLUTIONLAYER_H
...
bool enable_fast_math = false,
...
std::unique_ptr<Impl> _impl;
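A typical call sequence against this class is validate, configure, allocate, then prepare/run. The sketch below is illustrative only: the tensor shapes, padding, and stride values are assumptions for the example, not taken from the header.

```cpp
#include "arm_compute/runtime/NEON/functions/NEGEMMConvolutionLayer.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

int main()
{
    // Illustrative shapes: 32x32 single-channel FP32 input, 16 3x3 kernels,
    // stride 1 and padding 1 so the spatial size is preserved.
    Tensor src, weights, biases, dst;
    src.allocator()->init(TensorInfo(TensorShape(32U, 32U, 1U), 1, DataType::F32));
    weights.allocator()->init(TensorInfo(TensorShape(3U, 3U, 1U, 16U), 1, DataType::F32));
    biases.allocator()->init(TensorInfo(TensorShape(16U), 1, DataType::F32));
    dst.allocator()->init(TensorInfo(TensorShape(32U, 32U, 16U), 1, DataType::F32));

    PadStrideInfo conv_info(1, 1, 1, 1); // stride x/y, pad x/y

    // Check the configuration before committing to it.
    Status status = NEGEMMConvolutionLayer::validate(src.info(), weights.info(),
                                                     biases.info(), dst.info(), conv_info);
    if(status.error_code() != ErrorCode::OK)
    {
        return 1;
    }

    NEGEMMConvolutionLayer conv;
    conv.configure(&src, &weights, &biases, &dst, conv_info);

    src.allocator()->allocate();
    weights.allocator()->allocate();
    biases.allocator()->allocate();
    dst.allocator()->allocate();

    conv.prepare(); // one-off work such as weight transformations
    conv.run();     // execute the kernels
    return 0;
}
```

Calling prepare() explicitly is optional; run() triggers it on first use, but doing it ahead of time keeps the first inference call's latency predictable.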
NEGEMMConvolutionLayer
Basic function to compute the convolution layer.
Copyright (c) 2017-2024 Arm Limited.

NEGEMMConvolutionLayer(const std::shared_ptr< IMemoryManager > &memory_manager=nullptr, IWeightsManager *weights_manager=nullptr)
Constructor.

NEGEMMConvolutionLayer & operator=(const NEGEMMConvolutionLayer &)=delete
Prevent instances of this class from being copied (as this class contains pointers).

~NEGEMMConvolutionLayer()
Default destructor.

void configure(const ITensor *input, const ITensor *weights, const ITensor *biases, ITensor *output, const PadStrideInfo &conv_info, const WeightsInfo &weights_info=WeightsInfo(), const Size2D &dilation=Size2D(1U, 1U), const ActivationLayerInfo &act_info=ActivationLayerInfo(), bool enable_fast_math=false, unsigned int num_groups=1)
Set the input and output tensors.

static Status validate(const ITensorInfo *input, const ITensorInfo *weights, const ITensorInfo *biases, const ITensorInfo *output, const PadStrideInfo &conv_info, const WeightsInfo &weights_info=WeightsInfo(), const Size2D &dilation=Size2D(1U, 1U), const ActivationLayerInfo &act_info=ActivationLayerInfo(), bool enable_fast_math=false, unsigned int num_groups=1)
Static function to check if the given info will lead to a valid configuration of NEGEMMConvolutionLayer.

static Status has_opt_impl(arm_compute::WeightFormat &expected_weight_format, const ITensorInfo *src, const ITensorInfo *weights, const ITensorInfo *biases, const ITensorInfo *dst, const PadStrideInfo &conv_info, const WeightsInfo &weights_info=WeightsInfo(), const Size2D &dilation=Size2D(1U, 1U), const ActivationLayerInfo &act_info=ActivationLayerInfo(), bool enable_fast_math=false)
Static function to check if there is an optimized version of GEMM available for the given input parameters.

void run() override
Run the kernels contained in the function.

void prepare() override
Prepare the function for executing.

Related types:
IFunction: Base class for all functions.
ITensor: Interface for a CPU tensor.
ITensorInfo: Store the tensor's metadata.
WeightsInfo: Convolution Layer Weights Information class.
IWeightsManager: Weights manager interface to handle weights transformations.
ActivationLayerInfo: Activation Layer Information class.
Size2D: Class for specifying the size of an image or rectangle.
arm_compute::WeightFormat: Memory layouts for the weights tensor.