Compute Library 21.02
NEFullyConnectedLayer Class Reference

Basic function to compute a Fully Connected layer on Neon.

#include <NEFullyConnectedLayer.h>


Public Member Functions

 NEFullyConnectedLayer (std::shared_ptr< IMemoryManager > memory_manager=nullptr, IWeightsManager *weights_manager=nullptr)
 Constructor.
 
 NEFullyConnectedLayer (const NEFullyConnectedLayer &)=delete
 Prevent instances of this class from being copied (as this class contains pointers).
 
 NEFullyConnectedLayer (NEFullyConnectedLayer &&)=delete
 Prevent instances of this class from being moved (as this class contains pointers).
 
NEFullyConnectedLayer & operator= (const NEFullyConnectedLayer &)=delete
 Prevent instances of this class from being copied (as this class contains pointers).
 
NEFullyConnectedLayer & operator= (NEFullyConnectedLayer &&)=delete
 Prevent instances of this class from being moved (as this class contains pointers).
 
 ~NEFullyConnectedLayer ()
 Default destructor.
 
void configure (const ITensor *input, const ITensor *weights, const ITensor *biases, ITensor *output, FullyConnectedLayerInfo fc_info=FullyConnectedLayerInfo())
 Set the input and output tensors.
 
void run () override
 Run the kernels contained in the function.
 
void prepare () override
 Prepare the function for executing.
 
- Public Member Functions inherited from IFunction
virtual ~IFunction ()=default
 Destructor.
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *weights, const ITensorInfo *biases, const ITensorInfo *output, FullyConnectedLayerInfo fc_info=FullyConnectedLayerInfo())
 Static function to check if given info will lead to a valid configuration of NEFullyConnectedLayer.
 

Detailed Description

Basic function to compute a Fully Connected layer on Neon.

This function calls the following Neon kernels:

  1. NEIm2ColKernel (called when the input comes from a convolutional layer)
  2. NEFullyConnectedLayerReshapeWeights (called once; run if are_weights_reshaped is set to false and transpose_weights is set to true)
  3. NEGEMMMatrixMultiplyKernel, or NEGEMMLowpMatrixMultiplyCore if the input is quantized asymmetric
  4. NEGEMMMatrixAdditionKernel, or NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint if the input is quantized asymmetric (run only if biases is not nullptr)
Note
The fully connected layer accepts "weights" tensors only with 2 dimensions.

Definition at line 122 of file NEFullyConnectedLayer.h.
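
For orientation, a minimal usage sketch follows; the shapes, the F32 data type, and the tensor names are illustrative assumptions, and filling the tensors with data is elided:

#include "arm_compute/runtime/NEON/functions/NEFullyConnectedLayer.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

int main()
{
    Tensor src, weights, bias, dst;

    // Illustrative shapes: 128 input features -> 64 outputs, no batching.
    src.allocator()->init(TensorInfo(TensorShape(128U), 1, DataType::F32));
    weights.allocator()->init(TensorInfo(TensorShape(128U, 64U), 1, DataType::F32)); // weights must be 2D
    bias.allocator()->init(TensorInfo(TensorShape(64U), 1, DataType::F32));
    dst.allocator()->init(TensorInfo(TensorShape(64U), 1, DataType::F32));

    NEFullyConnectedLayer fc;
    fc.configure(&src, &weights, &bias, &dst); // default FullyConnectedLayerInfo

    src.allocator()->allocate();
    weights.allocator()->allocate();
    bias.allocator()->allocate();
    dst.allocator()->allocate();
    // ... fill src, weights and bias here ...

    fc.run(); // prepare() is invoked internally on the first run
    return 0;
}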

Constructor & Destructor Documentation

◆ NEFullyConnectedLayer() [1/3]

NEFullyConnectedLayer ( std::shared_ptr< IMemoryManager >  memory_manager = nullptr,
IWeightsManager *  weights_manager = nullptr 
)

Constructor.

Definition at line 159 of file NEFullyConnectedLayer.cpp.

160  : _memory_group(std::move(memory_manager)), _weights_manager(weights_manager), _flatten(), _convert_weights(), _convert_weights_managed(), _reshape_weights_function(),
161  _reshape_weights_managed_function(), _mm_gemm(nullptr, weights_manager), _mm_gemmlowp(nullptr, weights_manager), _flatten_output(), _converted_weights_output(), _reshape_weights_output(),
162  _original_weights(nullptr), _are_weights_converted(true), _are_weights_reshaped(false), _is_fc_after_conv(false), _is_quantized_asymmetric(false), _is_prepared(false)
163 {
164 }

◆ NEFullyConnectedLayer() [2/3]

NEFullyConnectedLayer ( const NEFullyConnectedLayer &  )
delete

Prevent instances of this class from being copied (as this class contains pointers).

◆ NEFullyConnectedLayer() [3/3]

NEFullyConnectedLayer ( NEFullyConnectedLayer &&  )
delete

Prevent instances of this class from being moved (as this class contains pointers).

◆ ~NEFullyConnectedLayer()

~NEFullyConnectedLayer ( )
default

Default destructor.

Referenced by NEFullyConnectedLayerReshapeWeights::validate().

Member Function Documentation

◆ configure()

void configure ( const ITensor *  input,
const ITensor *  weights,
const ITensor *  biases,
ITensor *  output,
FullyConnectedLayerInfo  fc_info = FullyConnectedLayerInfo() 
)

Set the input and output tensors.

Parameters
  [in]   input    Source tensor. Data type supported: QASYMM8/QASYMM8_SIGNED/F16/F32.
  [in]   weights  Weights tensor. The weights must be 2 dimensional. If this function is called after a Convolution Layer, the (transposed) weights will have as many rows as the product of the input's first three dimensions. If it is called after another Fully Connected Layer, the (transposed) weights will have as many rows as the input's first dimension. Data type supported: same as input.
  [in]   biases   Bias tensor. Can be nullptr. Data type supported: same as weights; S32 if weights is QASYMM8/QASYMM8_SIGNED.
  [out]  output   Destination tensor. Its shape should be equal to the output of a matrix multiplication between:
                  • the output of im2col on the input and the (transposed) 2D weights, if the function is called after a Convolution Layer
                  • the input tensor and the (transposed) 2D weights, if the function is called after another Fully Connected Layer.
                  Data type supported: same as input.
  [in]   fc_info  (Optional) Fully connected layer additional info

Definition at line 231 of file NEFullyConnectedLayer.cpp.

References IWeightsManager::acquire(), FullyConnectedLayerInfo::activation_info, IWeightsManager::are_weights_managed(), FullyConnectedLayerInfo::are_weights_reshaped, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, Dimensions< T >::cbegin(), Dimensions< T >::cend(), NEConvertFullyConnectedWeights::configure(), NEFullyConnectedLayerReshapeWeights::configure(), NEFullyConnectedLayerReshapeWeightsManaged::configure(), NEConvertFullyConnectedWeightsManaged::configure(), ITensorInfo::data_layout(), ITensorInfo::data_type(), ITensorInfo::dimension(), ITensor::info(), arm_compute::is_data_type_quantized_asymmetric(), IWeightsManager::manage(), ITensorInfo::num_dimensions(), Dimensions< size_t >::num_max_dimensions, FullyConnectedLayerInfo::retain_internal_weights, ITensorInfo::tensor_shape(), FullyConnectedLayerInfo::transpose_weights, NEFullyConnectedLayer::validate(), and FullyConnectedLayerInfo::weights_trained_layout.

Referenced by NERNNLayer::configure(), and NELSTMLayer::configure().

233 {
234  // Perform validate step
235  ARM_COMPUTE_ERROR_ON_NULLPTR(input, weights, output);
236  ARM_COMPUTE_ERROR_THROW_ON(NEFullyConnectedLayer::validate(input->info(),
237  weights->info(),
238  biases != nullptr ? biases->info() : nullptr,
239  output->info(),
240  fc_info));
241 
242  _are_weights_converted = true;
243  _are_weights_reshaped = fc_info.transpose_weights ? fc_info.are_weights_reshaped : true;
244  _is_fc_after_conv = true;
245  _is_quantized_asymmetric = is_data_type_quantized_asymmetric(input->info()->data_type());
246  _original_weights = weights;
247 
248  if(_weights_manager)
249  {
250  _weights_manager->manage(weights);
251  }
252 
253  // With the Fully Connected layer we can have 4 different cases:
254  // 1) Convolution layer -> Fully Connected layer without batches
255  // 2) Fully Connected layer -> Fully Connected layer without batches
256  // 3) Convolution layer -> Fully Connected layer with batches
257  // 4) Fully Connected layer -> Fully Connected layer with batches
258 
259  const ITensor *weights_to_use = weights;
260 
261  // Check if we have a fully connected layer with batches
262  const bool is_batched_fc_layer = output->info()->dimension(1) > 1;
263  if(is_batched_fc_layer)
264  {
265  _is_fc_after_conv = (TensorShape::num_max_dimensions >= 4) && (std::equal(input->info()->tensor_shape().cbegin() + 3,
266  input->info()->tensor_shape().cend(),
267  output->info()->tensor_shape().cbegin() + 1));
268  }
269  else
270  {
271  _is_fc_after_conv = input->info()->num_dimensions() > 1;
272  }
273 
274  // Reshape weights if needed
275  if(!_are_weights_reshaped)
276  {
277  if(_weights_manager && _weights_manager->are_weights_managed(weights))
278  {
279  _reshape_weights_managed_function.configure(weights);
280  weights_to_use = _weights_manager->acquire(weights, &_reshape_weights_managed_function);
281  }
282  else
283  {
284  // Reshape the weights
285  _reshape_weights_function.configure(weights, &_reshape_weights_output);
286  weights_to_use = &_reshape_weights_output;
287  }
288  }
289 
290  // Convert weights if needed
291  if(_is_fc_after_conv && (input->info()->data_layout() != fc_info.weights_trained_layout))
292  {
293  if(_weights_manager && _weights_manager->are_weights_managed(weights_to_use))
294  {
295  _convert_weights_managed.configure(weights_to_use,
296  input->info()->tensor_shape(),
297  fc_info.weights_trained_layout);
298  weights_to_use = _weights_manager->acquire(weights, &_convert_weights_managed);
299  }
300  else
301  {
302  // Convert weights
303  _convert_weights.configure(weights_to_use,
304  &_converted_weights_output,
305  input->info()->tensor_shape(),
306  fc_info.weights_trained_layout);
307 
308  weights_to_use = &_converted_weights_output;
309  }
310  _are_weights_converted = false;
311  }
312 
313  if(_is_fc_after_conv)
314  {
315  // Fully Connected layer after a Convolution Layer without batches
316  configure_conv_fc(input, weights_to_use, biases, output, fc_info.activation_info);
317  }
318  else
319  {
320  // Fully Connected layer after a Fully Connected Layer without batches
321  configure_fc_fc(input, weights_to_use, biases, output, fc_info.activation_info);
322  }
323 
324  _are_weights_reshaped = _are_weights_reshaped || fc_info.retain_internal_weights;
325 }
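
As an illustrative sketch (the tensor and function names are the hypothetical ones from the sketch in the Detailed Description), a non-default fc_info can request a fused activation and skip the weights transposition when the weights are already in the layout this function expects:

FullyConnectedLayerInfo fc_info;
fc_info.transpose_weights = false; // weights are already transposed
fc_info.activation_info   = ActivationLayerInfo(ActivationLayerInfo::ActivationFunction::RELU);

fc.configure(&src, &weights, &bias, &dst, fc_info);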

◆ operator=() [1/2]

NEFullyConnectedLayer & operator= ( const NEFullyConnectedLayer &  )
delete

Prevent instances of this class from being copied (as this class contains pointers).

◆ operator=() [2/2]

NEFullyConnectedLayer & operator= ( NEFullyConnectedLayer &&  )
delete

Prevent instances of this class from being moved (as this class contains pointers).

◆ prepare()

void prepare ( )
override virtual

Prepare the function for executing.

Any one-off pre-processing steps required by the function are handled here.

Note
The prepare stage might not need all the function's buffers' backing memory to be available in order to execute.

Reimplemented from IFunction.

Definition at line 429 of file NEFullyConnectedLayer.cpp.

References TensorAllocator::allocate(), Tensor::allocator(), IWeightsManager::are_weights_managed(), ARM_COMPUTE_ERROR_ON, ITensor::is_used(), ITensor::mark_as_unused(), NEGEMM::prepare(), INESimpleFunctionNoBorder::run(), IWeightsManager::run(), NEConvertFullyConnectedWeights::run(), and arm_compute::test::validation::w.

Referenced by NERNNLayer::prepare(), and NEFullyConnectedLayer::run().

430 {
431  if(!_is_prepared)
432  {
433  if(!_weights_manager)
434  {
435  ARM_COMPUTE_ERROR_ON(!_original_weights->is_used());
436  }
437 
438  auto release_unused = [](Tensor * w)
439  {
440  if(!w->is_used())
441  {
442  w->allocator()->free();
443  }
444  };
445 
446  // Pointer to current weights
447  const ITensor *cur_weights = _original_weights;
448 
449  // Reshape of the weights (happens only once)
450  if(!_are_weights_reshaped)
451  {
452  if(_weights_manager && _weights_manager->are_weights_managed(_original_weights))
453  {
454  cur_weights = _weights_manager->run(cur_weights, &_reshape_weights_managed_function);
455  }
456  else
457  {
458  // Reshape of the weights (happens only once)
459  if(!_are_weights_reshaped)
460  {
461  // Run reshape weights kernel and mark weights as unused
462  _reshape_weights_output.allocator()->allocate();
463  _reshape_weights_function.run();
464  }
465  cur_weights->mark_as_unused();
466  cur_weights = &_reshape_weights_output;
467  }
468  _are_weights_reshaped = true;
469  }
470 
471  // Convert weights if needed (happens only once)
472  if(!_are_weights_converted)
473  {
474  if(_weights_manager && _weights_manager->are_weights_managed(cur_weights))
475  {
476  _weights_manager->run(cur_weights, &_convert_weights_managed);
477  }
478  else
479  {
480  _converted_weights_output.allocator()->allocate();
481  _convert_weights.run();
482  cur_weights->mark_as_unused();
483  }
484 
485  _are_weights_converted = true;
486  }
487 
488  // Release reshaped weights if unused
489  release_unused(&_reshape_weights_output);
490 
491  // Prepare GEMM prepare and release unused weights
492  if(!_is_quantized_asymmetric)
493  {
494  _mm_gemm.prepare();
495  }
496 
497  // Release converted weights if unused
498  release_unused(&_reshape_weights_output);
499  release_unused(&_converted_weights_output);
500 
501  _is_prepared = true;
502  }
503 }
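
A hedged sketch: prepare() can also be called explicitly once, e.g. during a setup phase, so that the first run() does not pay for the one-off weight reshape/conversion (fc is the hypothetical function object from the earlier sketches, already configured and with its tensors allocated):

fc.prepare(); // one-off weight reshape/convert happens here instead of inside the first run()
fc.run();     // subsequent runs only execute the flatten/matrix-multiply path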

◆ run()

void run ( )
override virtual

Run the kernels contained in the function.

For Neon kernels:

  • Multi-threading is used for the kernels which are parallelisable.
  • By default std::thread::hardware_concurrency() threads are used.
Note
CPPScheduler::set_num_threads() can be used to manually set the number of threads

For OpenCL kernels:

  • All the kernels are enqueued on the queue associated with CLScheduler.
  • The queue is then flushed.
Note
The function will not block until the kernels are executed. It is the user's responsibility to wait.
Will call prepare() on first run if it hasn't been done.

Implements IFunction.

Definition at line 406 of file NEFullyConnectedLayer.cpp.

References NEFullyConnectedLayer::prepare(), NEFlattenLayer::run(), NEGEMM::run(), and NEGEMMLowpMatrixMultiplyCore::run().

Referenced by NERNNLayer::run(), and NELSTMLayer::run().

407 {
408  prepare();
409 
410  MemoryGroupResourceScope scope_mg(_memory_group);
411 
412  // Linearize input if it comes from a convolutional layer
413  if(_is_fc_after_conv)
414  {
415  _flatten.run();
416  }
417 
418  // Run matrix multiply
419  if(_is_quantized_asymmetric)
420  {
421  _mm_gemmlowp.run();
422  }
423  else
424  {
425  _mm_gemm.run();
426  }
427 }
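
A minimal sketch of overriding the default thread count before running; the count 2 is arbitrary, and set_num_threads() is called here through the IScheduler interface returned by Scheduler::get():

#include "arm_compute/runtime/Scheduler.h"

arm_compute::Scheduler::get().set_num_threads(2); // default: std::thread::hardware_concurrency()
fc.run();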

◆ validate()

Status validate ( const ITensorInfo *  input,
const ITensorInfo *  weights,
const ITensorInfo *  biases,
const ITensorInfo *  output,
FullyConnectedLayerInfo  fc_info = FullyConnectedLayerInfo() 
)
static

Static function to check if given info will lead to a valid configuration of NEFullyConnectedLayer.

Parameters
  [in]  input    Source tensor info. Data type supported: QASYMM8/QASYMM8_SIGNED/F16/F32.
  [in]  weights  Weights tensor info. The weights must be 2 dimensional. If this function is called after a Convolution Layer, the (transposed) weights will have as many rows as the product of the input's first three dimensions. If it is called after another Fully Connected Layer, the (transposed) weights will have as many rows as the input's first dimension. Data type supported: same as input.
  [in]  biases   Bias tensor info. Can be nullptr. Data type supported: same as weights; S32 if weights is QASYMM8/QASYMM8_SIGNED.
  [in]  output   Destination tensor info. Its shape should be equal to the output of a matrix multiplication between:
                 • the output of im2col on the input and the (transposed) 2D weights, if the function is called after a Convolution Layer
                 • the input tensor and the (transposed) 2D weights, if the function is called after another Fully Connected Layer.
                 Data type supported: same as input.
  [in]  fc_info  (Optional) Fully connected layer additional info
Returns
a status

Definition at line 327 of file NEFullyConnectedLayer.cpp.

References ActivationLayerInfo::activation(), FullyConnectedLayerInfo::activation_info, FullyConnectedLayerInfo::are_weights_reshaped, ARM_COMPUTE_RETURN_ERROR_ON, ARM_COMPUTE_RETURN_ERROR_ON_DATA_TYPE_CHANNEL_NOT_IN, ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_DATA_TYPES, ARM_COMPUTE_RETURN_ERROR_ON_NULLPTR, ARM_COMPUTE_RETURN_ON_ERROR, ARM_COMPUTE_UNUSED, ActivationLayerInfo::BOUNDED_RELU, Dimensions< T >::cbegin(), Dimensions< T >::cend(), ICloneable< T >::clone(), arm_compute::misc::shape_calculator::compute_flatten_shape(), arm_compute::misc::shape_calculator::compute_transposed_shape(), ITensorInfo::data_layout(), ITensorInfo::data_type(), ITensorInfo::dimension(), ActivationLayerInfo::enabled(), arm_compute::F16, arm_compute::F32, arm_compute::test::validation::input, arm_compute::is_data_type_quantized(), ActivationLayerInfo::LU_BOUNDED_RELU, ITensorInfo::num_dimensions(), Dimensions< size_t >::num_max_dimensions, arm_compute::QASYMM8, arm_compute::QASYMM8_SIGNED, ActivationLayerInfo::RELU, FullyConnectedLayerInfo::retain_internal_weights, ITensorInfo::tensor_shape(), FullyConnectedLayerInfo::transpose_weights, NEFlattenLayer::validate(), NEFullyConnectedLayerReshapeWeights::validate(), NEConvertFullyConnectedWeights::validate(), and FullyConnectedLayerInfo::weights_trained_layout.

Referenced by NEFullyConnectedLayer::configure(), arm_compute::test::validation::DATA_TEST_CASE(), NERNNLayer::validate(), and NELSTMLayer::validate().

329 {
330  ARM_COMPUTE_UNUSED(fc_info.retain_internal_weights);
331  ARM_COMPUTE_RETURN_ERROR_ON_NULLPTR(input, weights, output);
332  ARM_COMPUTE_RETURN_ERROR_ON_DATA_TYPE_CHANNEL_NOT_IN(input, 1, DataType::QASYMM8, DataType::QASYMM8_SIGNED, DataType::F16, DataType::F32);
333  ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_DATA_TYPES(input, weights, output);
334  ARM_COMPUTE_RETURN_ERROR_ON(weights->num_dimensions() > 2);
335  ARM_COMPUTE_RETURN_ERROR_ON(biases != nullptr && biases->num_dimensions() > 1);
336  ARM_COMPUTE_RETURN_ERROR_ON(fc_info.activation_info.enabled() && is_data_type_quantized(input->data_type()) && fc_info.activation_info.activation() != ActivationLayerInfo::ActivationFunction::RELU
337  && fc_info.activation_info.activation() != ActivationLayerInfo::ActivationFunction::BOUNDED_RELU && fc_info.activation_info.activation() != ActivationLayerInfo::ActivationFunction::LU_BOUNDED_RELU);
338 
339  bool weights_reshaped = fc_info.transpose_weights ? fc_info.are_weights_reshaped : true;
340  bool is_fc_after_conv = true;
341 
342  const ITensorInfo &flatten_input = TensorInfo(input->clone()->set_is_resizable(true).reset_padding().set_tensor_shape(compute_flatten_shape(input)));
343  const ITensorInfo &reshaped_weights = TensorInfo(weights->clone()->set_is_resizable(true).reset_padding().set_tensor_shape(compute_transposed_shape(*weights)));
344  const ITensorInfo &converted_weights = weights_reshaped ? TensorInfo(weights->clone()->set_is_resizable(true).reset_padding()) : TensorInfo(*reshaped_weights.clone());
345 
346  // With the Fully Connected layer we can have 4 different cases:
347  // 1) Convolution layer -> Fully Connected layer without batches
348  // 2) Fully Connected layer -> Fully Connected layer without batches
349  // 3) Convolution layer -> Fully Connected layer with batches
350  // 4) Fully Connected layer -> Fully Connected layer with batches
351 
352  const ITensorInfo *input_to_use = input;
353  const ITensorInfo *weights_to_use = weights;
354 
355  // Check if we have a fully connected layer with batches
356  const bool is_batched_fc_layer = output->dimension(1) > 1;
357 
358  if(is_batched_fc_layer)
359  {
360  is_fc_after_conv = (TensorShape::num_max_dimensions >= 4) && (std::equal(input->tensor_shape().cbegin() + 3,
361  input->tensor_shape().cend(),
362  output->tensor_shape().cbegin() + 1));
363  }
364  else
365  {
366  is_fc_after_conv = input->num_dimensions() > 1;
367  }
368 
369  if(!weights_reshaped)
370  {
371  // Validate reshape weights kernel
372  ARM_COMPUTE_RETURN_ON_ERROR(NEFullyConnectedLayerReshapeWeights::validate(weights, &reshaped_weights));
373  weights_to_use = &reshaped_weights;
374  }
375 
376  if(is_fc_after_conv && (input->data_layout() != fc_info.weights_trained_layout))
377  {
378  // Validate convert weights kernel
379  ARM_COMPUTE_RETURN_ON_ERROR(NEConvertFullyConnectedWeights::validate(weights_to_use,
380  &converted_weights,
381  input->tensor_shape(),
382  fc_info.weights_trained_layout));
383  weights_to_use = &converted_weights;
384  }
385 
386  if(is_fc_after_conv)
387  {
388  // Fully Connected layer after a Convolution Layer without batches
389  ARM_COMPUTE_RETURN_ERROR_ON((weights_to_use->dimension(1) != (input->dimension(0) * input->dimension(1) * input->dimension(2))));
390 
391  // Validate flatten kernel
392  ARM_COMPUTE_RETURN_ON_ERROR(NEFlattenLayer::validate(input, &flatten_input));
393  input_to_use = &flatten_input;
394  }
395  else
396  {
397  // Fully Connected layer after a Fully Connected Layer without batches
398  ARM_COMPUTE_RETURN_ERROR_ON(input->dimension(0) != weights_to_use->dimension(1));
399  }
400  // Validate matrix multiply kernel
401  ARM_COMPUTE_RETURN_ON_ERROR(validate_mm(input_to_use, weights_to_use, biases, output, fc_info.activation_info));
402 
403  return Status{};
404 }
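
A hedged sketch of checking a configuration before calling configure(); src, weights, bias, dst and fc_info are the hypothetical objects from the earlier sketches:

#include <iostream>

const arm_compute::Status st = arm_compute::NEFullyConnectedLayer::validate(
    src.info(), weights.info(), bias.info(), dst.info(), fc_info);
if(st.error_code() != arm_compute::ErrorCode::OK)
{
    std::cerr << st.error_description() << std::endl;
}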

The documentation for this class was generated from the following files:

  • NEFullyConnectedLayer.h
  • NEFullyConnectedLayer.cpp