Compute Library
 21.02
NEDeconvolutionLayer Class Reference

Function to run the deconvolution layer. More...

#include <NEDeconvolutionLayer.h>

Collaboration diagram for NEDeconvolutionLayer:

Public Member Functions

 NEDeconvolutionLayer (std::shared_ptr< IMemoryManager > memory_manager=nullptr)
 Constructor. More...
 
 NEDeconvolutionLayer (const NEDeconvolutionLayer &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
NEDeconvolutionLayer & operator= (const NEDeconvolutionLayer &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 NEDeconvolutionLayer (NEDeconvolutionLayer &&)=delete
 Prevent instances of this class from being moved (As this class contains pointers) More...
 
NEDeconvolutionLayer & operator= (NEDeconvolutionLayer &&)=delete
 Prevent instances of this class from being moved (As this class contains pointers) More...
 
virtual ~NEDeconvolutionLayer ()=default
 Default destructor. More...
 
void configure (ITensor *input, const ITensor *weights, const ITensor *bias, ITensor *output, const PadStrideInfo &info)
 Set the input, weights, biases and output tensors. More...
 
void run () override
 Run the kernels contained in the function. More...
 
void prepare () override
 Prepare the function for executing. More...
 
- Public Member Functions inherited from IFunction
virtual ~IFunction ()=default
 Destructor. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *weights, const ITensorInfo *bias, const ITensorInfo *output, const PadStrideInfo &info)
 Static function to check if given info will lead to a valid configuration of NEDeconvolutionLayer. More...
 

Detailed Description

Function to run the deconvolution layer.

Deconvolution Layer is the backward pass of Convolution Layer. First we transform the input depending on the stride and pad info and then perform a 1x1 convolution pass. The input stride defines how many zeroes we insert between each element of the input, pad is the amount of padding, and finally a is a user-specified value, with a < stride - 1, that increases the padding at the top and right of the input image.

The relation between input to output is as follows:

\[ width\_output = (width\_input - 1) \cdot stride\_x - 2 \cdot padding\_x + kernel\_x \]

\[ height\_output = (height\_input - 1) \cdot stride\_y - 2 \cdot padding\_y + kernel\_y \]

where width_input and height_input are the sizes of the first and second input dimensions, width_output and height_output are the sizes of the first and second output dimensions, kernel_x and kernel_y are the convolution kernel sizes in x and y, and stride_x and stride_y are the input strides of the first and second dimension.

The weights used by Deconvolution are supposed to be the same as the ones used for Convolution. Therefore, the weights must be used in reverse order to perform an actual convolution. This is achieved by using NEReverse.

This function calls the following Neon kernels/functions:

  1. CPPUpsample
  2. NEConvolutionLayer
  3. NEPermute
  4. NEReverse

Definition at line 75 of file NEDeconvolutionLayer.h.

Constructor & Destructor Documentation

◆ NEDeconvolutionLayer() [1/3]

NEDeconvolutionLayer (std::shared_ptr< IMemoryManager > memory_manager = nullptr)

Constructor.

Definition at line 69 of file NEDeconvolutionLayer.cpp.

    : _memory_group(std::move(memory_manager)),
      _conv_f(),
      _upsample_f(),
      _flip_weights(),
      _scaled_output(),
      _weights_flipped(),
      _flip_axis(),
      _original_weights(nullptr),
      _input(nullptr),
      _info(),
      _is_prepared(false)
{
}

◆ NEDeconvolutionLayer() [2/3]

NEDeconvolutionLayer (const NEDeconvolutionLayer &)=delete

Prevent instances of this class from being copied (As this class contains pointers)

◆ NEDeconvolutionLayer() [3/3]

NEDeconvolutionLayer (NEDeconvolutionLayer &&)=delete

Prevent instances of this class from being moved (As this class contains pointers)

◆ ~NEDeconvolutionLayer()

virtual ~NEDeconvolutionLayer ()=default

Default destructor.

Member Function Documentation

◆ configure()

void configure (ITensor *input, const ITensor *weights, const ITensor *bias, ITensor *output, const PadStrideInfo &info)

Set the input, weights, biases and output tensors.

Parameters
    [in,out] input    Input tensor. 3 lower dimensions represent a single input, and an optional 4th dimension for batch of inputs. Data types supported: F32/F16/QASYMM8/QASYMM8_SIGNED.
    [in]     weights  The 4d weights with dimensions [width, height, IFM, OFM]. Data type supported: Same as input.
    [in]     bias     Optional, ignored if NULL. The biases have one dimension. Data types supported: S32 for QASYMM8 and QASYMM8_SIGNED input, F32 for F32 input, F16 for F16 input.
    [out]    output   Output tensor. The output has the same number of dimensions as the input.
    [in]     info     Contains padding and policies to be used in the deconvolution; this is described in PadStrideInfo.

Definition at line 150 of file NEDeconvolutionLayer.cpp.

References TensorAllocator::allocate(), Tensor::allocator(), ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), Tensor::buffer(), arm_compute::CEIL, ICloneable< T >::clone(), arm_compute::misc::shape_calculator::compute_deconvolution_output_shape(), arm_compute::misc::shape_calculator::compute_deconvolution_upsampled_shape(), CPPUpsample::configure(), NEReverse::configure(), NEConvolutionLayer::configure(), arm_compute::test::validation::conv_info, arm_compute::test::validation::data_layout, ITensorInfo::data_layout(), ITensorInfo::data_type(), arm_compute::deconvolution_output_dimensions(), ITensorInfo::dimension(), arm_compute::get_data_layout_dimension_index(), arm_compute::HEIGHT, ITensor::info(), arm_compute::test::validation::info, TensorAllocator::init(), arm_compute::test::validation::input, MemoryGroup::manage(), arm_compute::test::validation::output_shape, ITensorInfo::quantization_info(), TensorInfo::set_data_layout(), PadStrideInfo::stride(), arm_compute::U, arm_compute::U32, NEDeconvolutionLayer::validate(), and arm_compute::WIDTH.

{
    // Perform validation step
    ARM_COMPUTE_ERROR_ON_NULLPTR(input, weights, output);
    ARM_COMPUTE_ERROR_THROW_ON(NEDeconvolutionLayer::validate(input->info(), weights->info(), (bias == nullptr) ? nullptr : bias->info(), output->info(), info));

    const DataLayout data_layout  = input->info()->data_layout();
    const unsigned int width_idx  = get_data_layout_dimension_index(data_layout, DataLayoutDimension::WIDTH);
    const unsigned int height_idx = get_data_layout_dimension_index(data_layout, DataLayoutDimension::HEIGHT);
    auto out_dims = deconvolution_output_dimensions(input->info()->dimension(width_idx), input->info()->dimension(height_idx),
                                                    weights->info()->dimension(width_idx), weights->info()->dimension(height_idx), info);

    const TensorShape output_shape = compute_deconvolution_output_shape(out_dims, *input->info(), *weights->info());

    _input            = input;
    _original_weights = weights;
    _info             = info;
    _is_prepared      = false;

    const unsigned int stride_x = info.stride().first;
    const unsigned int stride_y = info.stride().second;

    // Output auto initialization if not yet initialized
    auto_init_if_empty(*output->info(), output_shape, 1, input->info()->data_type(), input->info()->quantization_info());

    _flip_axis.allocator()->init(TensorInfo(TensorShape(2U), 1, DataType::U32));
    _memory_group.manage(&_scaled_output);

    _weights_flipped.allocator()->init(weights->info()->clone()->set_data_layout(data_layout));
    _flip_weights.configure(weights, &_weights_flipped, &_flip_axis);

    // Setup the function to convolve the upscaled output
    const PadStrideInfo conv_info(1, 1, 0, 0, 0, 0, DimensionRoundingType::CEIL);
    uint32_t deconv_pad_x = 0;
    uint32_t deconv_pad_y = 0;

    const TensorShape scale_out_shape = compute_deconvolution_upsampled_shape(*input->info(), *weights->info(),
                                                                              stride_x, stride_y,
                                                                              out_dims, deconv_pad_x, deconv_pad_y);

    const PadStrideInfo upsample_info = compute_upsample_info(info, deconv_pad_x, deconv_pad_y);

    TensorInfo scale_out_info(scale_out_shape, 1, input->info()->data_type(), input->info()->quantization_info());
    scale_out_info.set_data_layout(data_layout);
    _scaled_output.allocator()->init(scale_out_info);

    _upsample_f.configure(input, &_scaled_output, upsample_info);

    _conv_f.configure(&_scaled_output, &_weights_flipped, bias, output, conv_info);

    // Setup flip axis data
    _flip_axis.allocator()->allocate();
    auto axis_data = reinterpret_cast<uint32_t *>(_flip_axis.buffer());
    axis_data[0]   = static_cast<uint32_t>(width_idx);
    axis_data[1]   = static_cast<uint32_t>(height_idx);

    _scaled_output.allocator()->allocate();
}

◆ operator=() [1/2]

NEDeconvolutionLayer& operator= (const NEDeconvolutionLayer &)=delete

Prevent instances of this class from being copied (As this class contains pointers)

◆ operator=() [2/2]

NEDeconvolutionLayer& operator= (NEDeconvolutionLayer &&)=delete

Prevent instances of this class from being moved (As this class contains pointers)

◆ prepare()

void prepare () override

Prepare the function for executing.

Any one-off pre-processing step required by the function is handled here.

Note
Prepare stage might not need all the function's buffers' backing memory to be available in order to execute

Reimplemented from IFunction.

Definition at line 219 of file NEDeconvolutionLayer.cpp.

References TensorAllocator::allocate(), Tensor::allocator(), ARM_COMPUTE_ERROR_ON, ITensor::is_used(), ITensor::mark_as_unused(), NEConvolutionLayer::prepare(), and INESimpleFunctionNoBorder::run().

Referenced by NEDeconvolutionLayer::run().

{
    if(!_is_prepared)
    {
        ARM_COMPUTE_ERROR_ON(!_original_weights->is_used());

        // Run weights flipping and mark original weights tensor as unused
        _weights_flipped.allocator()->allocate();
        _flip_weights.run();
        _original_weights->mark_as_unused();

        // Prepare convolution
        _conv_f.prepare();

        _is_prepared = true;
    }
}

◆ run()

void run () override

Run the kernels contained in the function.

For Neon kernels:

  • Multi-threading is used for the kernels which are parallelisable.
  • By default std::thread::hardware_concurrency() threads are used.
Note
CPPScheduler::set_num_threads() can be used to manually set the number of threads

For OpenCL kernels:

  • All the kernels are enqueued on the queue associated with CLScheduler.
  • The queue is then flushed.
Note
The function will not block until the kernels are executed. It is the user's responsibility to wait.
Will call prepare() on the first run if it has not already been done.

Implements IFunction.

Definition at line 209 of file NEDeconvolutionLayer.cpp.

References NEDeconvolutionLayer::prepare(), ICPPSimpleFunction::run(), and NEConvolutionLayer::run().

{
    prepare();

    MemoryGroupResourceScope scope_mg(_memory_group);

    _upsample_f.run();
    _conv_f.run();
}

◆ validate()

static Status validate (const ITensorInfo *input, const ITensorInfo *weights, const ITensorInfo *bias, const ITensorInfo *output, const PadStrideInfo &info)

Static function to check if given info will lead to a valid configuration of NEDeconvolutionLayer.

Parameters
    [in] input    Input tensor info. 3 lower dimensions represent a single input, and an optional 4th dimension for batch of inputs. Data types supported: F32/F16/QASYMM8/QASYMM8_SIGNED.
    [in] weights  The 4d weights info with dimensions [width, height, IFM, OFM]. Data type supported: Same as input.
    [in] bias     (Optional) The biases have one dimension. Data types supported: S32 for QASYMM8 and QASYMM8_SIGNED input, F32 for F32 input, F16 for F16 input.
    [in] output   Output tensor info. The output has the same number of dimensions as the input.
    [in] info     Contains padding and policies to be used in the deconvolution; this is described in PadStrideInfo.
Returns
a status

Definition at line 84 of file NEDeconvolutionLayer.cpp.

References ARM_COMPUTE_RETURN_ERROR_ON, ARM_COMPUTE_RETURN_ERROR_ON_DATA_TYPE_CHANNEL_NOT_IN, ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_DATA_LAYOUT, ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_DATA_TYPES, ARM_COMPUTE_RETURN_ERROR_ON_MSG, ARM_COMPUTE_RETURN_ERROR_ON_NULLPTR, ARM_COMPUTE_RETURN_ON_ERROR, arm_compute::BATCHES, arm_compute::CEIL, arm_compute::CHANNEL, ICloneable< T >::clone(), arm_compute::misc::shape_calculator::compute_deconvolution_output_shape(), arm_compute::misc::shape_calculator::compute_deconvolution_upsampled_shape(), arm_compute::test::validation::conv_info, arm_compute::test::validation::data_layout, ITensorInfo::data_layout(), ITensorInfo::data_type(), arm_compute::deconvolution_output_dimensions(), ITensorInfo::dimension(), Window::DimX, Window::DimY, Window::DimZ, arm_compute::F16, arm_compute::F32, arm_compute::get_data_layout_dimension_index(), arm_compute::HEIGHT, arm_compute::test::validation::info, arm_compute::is_data_type_quantized_asymmetric(), arm_compute::test::validation::output_shape, arm_compute::QASYMM8, arm_compute::QASYMM8_SIGNED, arm_compute::S32, PadStrideInfo::stride(), ITensorInfo::tensor_shape(), TensorShape::total_size(), NEConvolutionLayer::validate(), arm_compute::WIDTH, Dimensions< T >::x(), Dimensions< T >::y(), and Dimensions< T >::z().

Referenced by NEDeconvolutionLayer::configure().

{
    ARM_COMPUTE_RETURN_ERROR_ON_NULLPTR(input, weights, output);
    ARM_COMPUTE_RETURN_ERROR_ON_DATA_TYPE_CHANNEL_NOT_IN(input, 1, DataType::F32, DataType::F16, DataType::QASYMM8, DataType::QASYMM8_SIGNED);
    ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_DATA_LAYOUT(input, weights);
    ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_DATA_TYPES(input, weights);
    const unsigned int width_idx  = get_data_layout_dimension_index(weights->data_layout(), DataLayoutDimension::WIDTH);
    const unsigned int height_idx = get_data_layout_dimension_index(weights->data_layout(), DataLayoutDimension::HEIGHT);
    ARM_COMPUTE_RETURN_ERROR_ON(weights->dimension(width_idx) != weights->dimension(height_idx));
    ARM_COMPUTE_RETURN_ERROR_ON(weights->dimension(width_idx) < 1);

    auto out_dims = deconvolution_output_dimensions(input->dimension(width_idx), input->dimension(height_idx), weights->dimension(width_idx), weights->dimension(height_idx), info);

    if(bias != nullptr)
    {
        if(is_data_type_quantized_asymmetric(input->data_type()))
        {
            ARM_COMPUTE_RETURN_ERROR_ON_DATA_TYPE_CHANNEL_NOT_IN(bias, 1, DataType::S32);
        }
        else
        {
            ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_DATA_TYPES(input, bias);
        }
    }

    if(output->tensor_shape().total_size() > 0)
    {
        ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_DATA_TYPES(input, output);

        const TensorShape output_shape = compute_deconvolution_output_shape(out_dims, *input, *weights);

        ARM_COMPUTE_RETURN_ERROR_ON_MSG(output->dimension(Window::DimX) != output_shape.x(), "Output's width is invalid.");
        ARM_COMPUTE_RETURN_ERROR_ON_MSG(output->dimension(Window::DimY) != output_shape.y(), "Output's height is invalid.");
        ARM_COMPUTE_RETURN_ERROR_ON_MSG(output->dimension(Window::DimZ) != output_shape.z(), "Output's depth is invalid.");
    }

    uint32_t deconv_pad_x = 0;
    uint32_t deconv_pad_y = 0;
    const unsigned int stride_x = info.stride().first;
    const unsigned int stride_y = info.stride().second;
    // Guard against overflows in compute_deconvolution_upsampled_shape()
    const DataLayout data_layout = input->data_layout();
    const size_t idx_w = get_data_layout_dimension_index(data_layout, DataLayoutDimension::WIDTH);
    const size_t idx_h = get_data_layout_dimension_index(data_layout, DataLayoutDimension::HEIGHT);
    const unsigned int out_x = (input->dimension(idx_w) - 1) * stride_x + 1;
    const unsigned int out_y = (input->dimension(idx_h) - 1) * stride_y + 1;
    ARM_COMPUTE_RETURN_ERROR_ON(weights->dimension(idx_w) > out_x);
    ARM_COMPUTE_RETURN_ERROR_ON(weights->dimension(idx_h) > out_y);
    ARM_COMPUTE_RETURN_ERROR_ON((out_x - weights->dimension(idx_w) + 1) > out_dims.first);
    ARM_COMPUTE_RETURN_ERROR_ON((out_y - weights->dimension(idx_h) + 1) > out_dims.second);

    const TensorShape scale_out_shape = compute_deconvolution_upsampled_shape(*input, *weights, stride_x, stride_y, out_dims, deconv_pad_x, deconv_pad_y);
    TensorInfo scale_out_info(input->clone()->set_is_resizable(true).reset_padding().set_tensor_shape(scale_out_shape));
    const PadStrideInfo conv_info(1, 1, 0, 0, 0, 0, DimensionRoundingType::CEIL);

    const unsigned int batches_idx = get_data_layout_dimension_index(weights->data_layout(), DataLayoutDimension::BATCHES);
    const unsigned int channel_idx = get_data_layout_dimension_index(weights->data_layout(), DataLayoutDimension::CHANNEL);
    ARM_COMPUTE_RETURN_ERROR_ON(input->dimension(batches_idx) != scale_out_info.dimension(batches_idx));
    ARM_COMPUTE_RETURN_ERROR_ON(input->dimension(channel_idx) != scale_out_info.dimension(channel_idx));

    ARM_COMPUTE_RETURN_ON_ERROR(NEConvolutionLayer::validate(&scale_out_info, weights, bias, output, conv_info, WeightsInfo()));

    return Status{};
}

The documentation for this class was generated from the following files: