Compute Library
 21.02
NEConvolutionLayerReshapeWeights Class Reference

Function to reshape the weights. More...

#include <NEGEMMConvolutionLayer.h>


Public Member Functions

 NEConvolutionLayerReshapeWeights ()
 Constructor. More...
 
 NEConvolutionLayerReshapeWeights (const NEConvolutionLayerReshapeWeights &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 NEConvolutionLayerReshapeWeights (NEConvolutionLayerReshapeWeights &&)=delete
 Prevent instances of this class from being moved (As this class contains non movable objects) More...
 
NEConvolutionLayerReshapeWeights & operator= (const NEConvolutionLayerReshapeWeights &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
NEConvolutionLayerReshapeWeights & operator= (NEConvolutionLayerReshapeWeights &&)=delete
 Prevent instances of this class from being moved (As this class contains non movable objects) More...
 
 ~NEConvolutionLayerReshapeWeights ()
 Default destructor. More...
 
void configure (const ITensor *weights, const ITensor *biases, ITensor *output)
 Set the input and output tensors. More...
 
void run () override
 Run the kernels contained in the function. More...
 
- Public Member Functions inherited from IFunction
virtual ~IFunction ()=default
 Destructor. More...
 
virtual void prepare ()
 Prepare the function for executing. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *weights, const ITensorInfo *biases, const ITensorInfo *output)
 Static function to check if given info will lead to a valid configuration of NEConvolutionLayerReshapeWeights. More...
 

Detailed Description

Function to reshape the weights.

This function calls the following kernel:

  1. NEWeightsReshapeKernel

Definition at line 50 of file NEGEMMConvolutionLayer.h.
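
The reshape this function performs can be pictured in plain C++. The sketch below is a simplified, hypothetical model (the function name and the exact element ordering are assumptions, not the library's implementation): each of the OFM kernels of a [kernel_x, kernel_y, IFM, OFM] weights tensor becomes one column of a 2D matrix, with one optional extra row holding the appended bias.

```cpp
#include <cstddef>
#include <vector>

// Simplified model of the weights reshape: kernel o of a
// [kernel_x, kernel_y, IFM, OFM] tensor becomes column o of a matrix with
// kx*ky*ifm rows, plus one extra row when biases are appended.
std::vector<float> reshape_weights(const std::vector<float> &weights,
                                   std::size_t kx, std::size_t ky,
                                   std::size_t ifm, std::size_t ofm,
                                   const std::vector<float> *biases)
{
    const std::size_t kernel_size = kx * ky * ifm;
    const std::size_t rows        = kernel_size + (biases != nullptr ? 1 : 0);
    std::vector<float> out(rows * ofm);
    for(std::size_t o = 0; o < ofm; ++o)
    {
        for(std::size_t r = 0; r < kernel_size; ++r)
        {
            // Element r of kernel o lands in row r, column o of the matrix.
            out[r * ofm + o] = weights[o * kernel_size + r];
        }
        if(biases != nullptr)
        {
            // Bias of kernel o is appended as the final row of column o.
            out[(rows - 1) * ofm + o] = (*biases)[o];
        }
    }
    return out;
}
```

With kx = ky = 1, ifm = 2, ofm = 2 and weights {1, 2, 3, 4}, the result is the 2x2 matrix {1, 3, 2, 4}; appending biases {10, 20} adds a final row, giving {1, 3, 2, 4, 10, 20}.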

Constructor & Destructor Documentation

◆ NEConvolutionLayerReshapeWeights() [1/3]

Constructor.

Definition at line 55 of file NEGEMMConvolutionLayer.cpp.

56  : _weights_reshape_kernel()
57 {
58 }

◆ NEConvolutionLayerReshapeWeights() [2/3]

Prevent instances of this class from being copied (As this class contains pointers)

◆ NEConvolutionLayerReshapeWeights() [3/3]

Prevent instances of this class from being moved (As this class contains non movable objects)

◆ ~NEConvolutionLayerReshapeWeights()

Default destructor.

Member Function Documentation

◆ configure()

void configure ( const ITensor * weights,
const ITensor * biases,
ITensor * output 
)

Set the input and output tensors.

Parameters
    [in]  weights  Weights tensor. Weights are 4D tensor with dimensions [kernel_x, kernel_y, IFM, OFM]. Data type supported: All.
    [in]  biases   Biases tensor. Shared biases supported. Biases are 1D tensor with dimensions [OFM]. Data type supported: same as weights.
    [out] output   Destination tensor. Data types supported: same as weights.

Warning
    Appending biases to weights reshaped matrix is not supported for quantized asymmetric types.

Definition at line 60 of file NEGEMMConvolutionLayer.cpp.

References ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, ITensorInfo::data_type(), ITensor::info(), arm_compute::is_data_type_quantized_asymmetric(), ITensorInfo::quantization_info(), ITensorInfo::set_quantization_info(), and NEConvolutionLayerReshapeWeights::validate().

Referenced by NEGEMMConvolutionLayer::configure().

{
    // Perform validation step
    ARM_COMPUTE_ERROR_ON_NULLPTR(weights, output);
    ARM_COMPUTE_ERROR_THROW_ON(NEConvolutionLayerReshapeWeights::validate(weights->info(),
                                                                          (biases != nullptr) ? biases->info() : nullptr,
                                                                          output->info()));

    const bool     append_biases = (biases != nullptr) && !is_data_type_quantized_asymmetric(weights->info()->data_type());
    const ITensor *biases_to_use = (append_biases) ? biases : nullptr;

    _weights_reshape_kernel = std::make_unique<NEWeightsReshapeKernel>();
    _weights_reshape_kernel->configure(weights, biases_to_use, output);

    output->info()->set_quantization_info(weights->info()->quantization_info());
}
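
One behaviour worth noting in configure() is the append_biases decision: biases are only folded into the reshaped matrix when the weights are not of a quantized asymmetric type. A minimal stand-alone restatement of that predicate (this DataType enum is a simplified stand-in, not the library's):

```cpp
// Simplified stand-in for arm_compute::DataType, covering only the cases
// relevant to the append-biases decision.
enum class DataType { F32, F16, QASYMM8, QASYMM8_SIGNED };

// Mirrors arm_compute::is_data_type_quantized_asymmetric() for the stand-in enum.
bool is_quantized_asymmetric(DataType dt)
{
    return dt == DataType::QASYMM8 || dt == DataType::QASYMM8_SIGNED;
}

// Biases are appended to the reshaped weights only when they exist and the
// weights are not quantized asymmetric (see the Warning above).
bool append_biases(bool has_biases, DataType weights_type)
{
    return has_biases && !is_quantized_asymmetric(weights_type);
}
```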

◆ operator=() [1/2]

Prevent instances of this class from being copied (As this class contains pointers)

◆ operator=() [2/2]

Prevent instances of this class from being moved (As this class contains non movable objects)

◆ run()

void run ( )
override virtual

Run the kernels contained in the function.

For Neon kernels:

  • Multi-threading is used for the kernels which are parallelisable.
  • By default std::thread::hardware_concurrency() threads are used.
Note
CPPScheduler::set_num_threads() can be used to manually set the number of threads

For OpenCL kernels:

  • All the kernels are enqueued on the queue associated with CLScheduler.
  • The queue is then flushed.
Note
The function does not block until the kernels have executed; it is the user's responsibility to wait.
Will call prepare() on the first run if it hasn't already been done

Implements IFunction.

Definition at line 103 of file NEGEMMConvolutionLayer.cpp.

References Scheduler::get(), IScheduler::schedule(), and NEGEMMConvolutionLayer::~NEGEMMConvolutionLayer().

Referenced by NEGEMMConvolutionLayer::prepare().

{
    NEScheduler::get().schedule(_weights_reshape_kernel.get(), 3);
}
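
As noted above, the Neon scheduler defaults to std::thread::hardware_concurrency() worker threads. That value can be queried portably in standard C++; the single-thread fallback below is an assumption for the case where the runtime cannot determine the count (the standard allows a return value of 0):

```cpp
#include <thread>

// Number of worker threads a scheduler following the documented default
// would use; falls back to 1 when hardware_concurrency() reports "unknown".
unsigned default_thread_count()
{
    const unsigned hc = std::thread::hardware_concurrency();
    return hc != 0 ? hc : 1;
}
```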

◆ validate()

Status validate ( const ITensorInfo * weights,
const ITensorInfo * biases,
const ITensorInfo * output 
)
static

Static function to check if given info will lead to a valid configuration of NEConvolutionLayerReshapeWeights.

Parameters
    [in] weights Weights tensor info. Weights are 4D tensor with dimensions [kernel_x, kernel_y, IFM, OFM]. Data type supported: All.
    [in] biases  Biases tensor info. Shared biases supported. Biases are 1D tensor with dimensions [OFM]. Data type supported: same as weights.
    [in] output  Destination tensor info. Data types supported: same as weights.

Warning
    Appending biases to weights reshaped matrix is not supported for quantized asymmetric types.

Returns
    an error status

Definition at line 76 of file NEGEMMConvolutionLayer.cpp.

References ARM_COMPUTE_RETURN_ERROR_ON, ARM_COMPUTE_RETURN_ERROR_ON_DATA_TYPE_CHANNEL_NOT_IN, ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_DATA_TYPES, ARM_COMPUTE_RETURN_ERROR_ON_NULLPTR, arm_compute::BATCHES, arm_compute::BFLOAT16, ITensorInfo::data_layout(), ITensorInfo::data_type(), ITensorInfo::dimension(), arm_compute::F16, arm_compute::F32, arm_compute::get_data_layout_dimension_index(), arm_compute::is_data_type_quantized_asymmetric(), ITensorInfo::num_dimensions(), arm_compute::QASYMM8, arm_compute::QASYMM8_SIGNED, arm_compute::QSYMM8_PER_CHANNEL, ITensorInfo::total_size(), and NEWeightsReshapeKernel::validate().

Referenced by NEConvolutionLayerReshapeWeights::configure(), and NEGEMMConvolutionLayer::validate().

{
    ARM_COMPUTE_RETURN_ERROR_ON_NULLPTR(weights);
    ARM_COMPUTE_RETURN_ERROR_ON_DATA_TYPE_CHANNEL_NOT_IN(weights, 1, DataType::QASYMM8, DataType::QASYMM8_SIGNED, DataType::QSYMM8_PER_CHANNEL,
                                                         DataType::BFLOAT16, DataType::F16, DataType::F32);
    ARM_COMPUTE_RETURN_ERROR_ON(weights->num_dimensions() > 4);

    if(biases != nullptr)
    {
        const int idx_kernels = get_data_layout_dimension_index(weights->data_layout(), DataLayoutDimension::BATCHES);
        ARM_COMPUTE_RETURN_ERROR_ON(is_data_type_quantized_asymmetric(weights->data_type()));
        ARM_COMPUTE_RETURN_ERROR_ON(biases->dimension(0) != weights->dimension(idx_kernels));
        ARM_COMPUTE_RETURN_ERROR_ON(biases->num_dimensions() > 1);
    }

    if((output != nullptr) && (output->total_size() != 0))
    {
        ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_DATA_TYPES(weights, output);

        NEWeightsReshapeKernel::validate(weights, biases, output);
    }

    return Status{};
}
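
The shape constraints validate() enforces can be restated in stand-alone C++. This is an illustrative sketch only: TensorDesc and the assumption that OFM is the last weights dimension are mine, not the library's (the real code derives the kernels index from the data layout).

```cpp
#include <cstddef>
#include <vector>

// Minimal tensor descriptor: dims[i] is the size of dimension i.
struct TensorDesc
{
    std::vector<std::size_t> dims;
};

// Restates the dimension checks performed by validate():
//  - weights must be at most 4D,
//  - biases (if present) must be 1D with one entry per kernel (OFM).
bool reshape_weights_shapes_ok(const TensorDesc &weights, const TensorDesc *biases)
{
    if(weights.dims.empty() || weights.dims.size() > 4)
    {
        return false;
    }
    if(biases != nullptr)
    {
        if(biases->dims.size() != 1)
        {
            return false; // biases must be a 1D tensor
        }
        // Assumed: OFM is the last weights dimension.
        if(biases->dims[0] != weights.dims.back())
        {
            return false; // one bias per output kernel
        }
    }
    return true;
}
```

For example, weights of shape [3, 3, 16, 32] pass with a 32-element bias vector and fail with a 16-element one.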

The documentation for this class was generated from the following files:

  • NEGEMMConvolutionLayer.h
  • NEGEMMConvolutionLayer.cpp