Compute Library
 21.02
CLConvolutionLayerReshapeWeights Class Reference

Function to reshape and transpose the weights. More...

#include <CLGEMMConvolutionLayer.h>

Collaboration diagram for CLConvolutionLayerReshapeWeights:

Public Member Functions

 CLConvolutionLayerReshapeWeights ()
 Constructor. More...
 
 CLConvolutionLayerReshapeWeights (const CLConvolutionLayerReshapeWeights &)=delete
 Prevent instances of this class from being copied. More...
 
CLConvolutionLayerReshapeWeights & operator= (const CLConvolutionLayerReshapeWeights &)=delete
 Prevent instances of this class from being copied. More...
 
 CLConvolutionLayerReshapeWeights (CLConvolutionLayerReshapeWeights &&)=default
 Default move constructor. More...
 
CLConvolutionLayerReshapeWeights & operator= (CLConvolutionLayerReshapeWeights &&)=default
 Default move assignment operator. More...
 
 ~CLConvolutionLayerReshapeWeights ()
 Default destructor. More...
 
void configure (const ICLTensor *weights, const ICLTensor *biases, ICLTensor *output, unsigned int num_groups=1)
 Set the input and output tensors. More...
 
void configure (const CLCompileContext &compile_context, const ICLTensor *weights, const ICLTensor *biases, ICLTensor *output, unsigned int num_groups=1)
 Set the input and output tensors. More...
 
void run () override
 Run the kernels contained in the function. More...
 
- Public Member Functions inherited from IFunction
virtual ~IFunction ()=default
 Destructor. More...
 
virtual void prepare ()
 Prepare the function for executing. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *weights, const ITensorInfo *biases, const ITensorInfo *output, unsigned int num_groups=1)
 Static function to check if given info will lead to a valid configuration of CLConvolutionLayerReshapeWeights. More...
 

Detailed Description

Function to reshape and transpose the weights.

This function calls the following kernels:

  1. CLWeightsReshapeKernel

Definition at line 52 of file CLGEMMConvolutionLayer.h.
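The reshape this function performs can be illustrated with a small standalone sketch. The snippet below is a simplified, hypothetical model of the idea only; it does not reproduce the library's exact memory layout, data types, or grouped-convolution handling. Each [kernel_x, kernel_y, IFM] filter is flattened, with an optional bias value appended, so that the convolution can later be computed as a single GEMM.

```cpp
#include <cstddef>
#include <vector>

// Simplified sketch of weights reshaping for GEMM-based convolution.
// `weights` holds OFM filters of kernel_x * kernel_y * ifm values each
// (OFM-major here, for simplicity). Each filter becomes one row of the
// result; when `biases` is non-null its value is appended as an extra column.
std::vector<std::vector<float>> reshape_weights(const std::vector<float> &weights,
                                                std::size_t kx, std::size_t ky,
                                                std::size_t ifm, std::size_t ofm,
                                                const std::vector<float> *biases)
{
    const std::size_t filter_size = kx * ky * ifm;
    std::vector<std::vector<float>> reshaped(ofm);
    for(std::size_t o = 0; o < ofm; ++o)
    {
        // Copy one flattened filter.
        reshaped[o].assign(weights.begin() + o * filter_size,
                           weights.begin() + (o + 1) * filter_size);
        // Append the bias as an extra column when biases are folded in.
        if(biases != nullptr)
        {
            reshaped[o].push_back((*biases)[o]);
        }
    }
    return reshaped;
}
```

For a 2x2 kernel with 3 input channels and 4 output channels, each of the 4 rows has 12 values, or 13 once a bias is appended.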

Constructor & Destructor Documentation

◆ CLConvolutionLayerReshapeWeights() [1/3]

Constructor.

Definition at line 59 of file CLGEMMConvolutionLayer.cpp.

References CLConvolutionLayerReshapeWeights::~CLConvolutionLayerReshapeWeights().

60  : _weights_reshape_kernel(std::make_unique<CLWeightsReshapeKernel>())
61 {
62 }

◆ CLConvolutionLayerReshapeWeights() [2/3]

Prevent instances of this class from being copied.

◆ CLConvolutionLayerReshapeWeights() [3/3]

Default move constructor.

◆ ~CLConvolutionLayerReshapeWeights()

Default destructor.

Member Function Documentation

◆ configure() [1/2]

void configure ( const ICLTensor * weights,
const ICLTensor * biases,
ICLTensor * output,
unsigned int  num_groups = 1 
)

Set the input and output tensors.

Parameters
[in]weightsWeights tensor. Weights are 4D tensor with dimensions [kernel_x, kernel_y, IFM, OFM]. Data type supported: QASYMM8/QASYMM8_SIGNED/QSYMM8_PER_CHANNEL/F16/F32.
[in]biasesBiases tensor. Shared biases supported. Biases are 1D tensor with dimensions [OFM]. Data type supported: Same as weights.
[out]outputDestination tensor. Data types supported: Same as weights.
[in]num_groups(Optional) Number of groups when performing a grouped convolution. num_groups != 1 is only supported for NCHW data layout

Definition at line 66 of file CLGEMMConvolutionLayer.cpp.

References CLKernelLibrary::get().

Referenced by CLConvolutionLayerReshapeWeightsTransform::configure(), and CLGEMMConvolutionLayer::configure().

67 {
68  configure(CLKernelLibrary::get().get_compile_context(), weights, biases, output, num_groups);
69 }

◆ configure() [2/2]

void configure ( const CLCompileContext & compile_context,
const ICLTensor * weights,
const ICLTensor * biases,
ICLTensor * output,
unsigned int  num_groups = 1 
)

Set the input and output tensors.

Parameters
[in]compile_contextThe compile context to be used.
[in]weightsWeights tensor. Weights are 4D tensor with dimensions [kernel_x, kernel_y, IFM, OFM]. Data type supported: QASYMM8/QASYMM8_SIGNED/QSYMM8_PER_CHANNEL/F16/F32.
[in]biasesBiases tensor. Shared biases supported. Biases are 1D tensor with dimensions [OFM]. Data type supported: Same as weights.
[out]outputDestination tensor. Data types supported: Same as weights.
[in]num_groups(Optional) Number of groups when performing a grouped convolution. num_groups != 1 is only supported for NCHW data layout

Definition at line 71 of file CLGEMMConvolutionLayer.cpp.

References ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, ITensorInfo::data_type(), ITensor::info(), arm_compute::is_data_type_quantized_asymmetric(), arm_compute::test::validation::num_groups, ITensorInfo::quantization_info(), ITensorInfo::set_quantization_info(), and CLConvolutionLayerReshapeWeights::validate().

72 {
73  // Perform validation step
74  ARM_COMPUTE_ERROR_ON_NULLPTR(weights, output);
75  ARM_COMPUTE_ERROR_THROW_ON(CLConvolutionLayerReshapeWeights::validate(weights->info(),
76  (biases != nullptr) ? biases->info() : nullptr,
77  output->info(),
78  num_groups));
79 
80  const bool append_biases = (biases != nullptr) && !is_data_type_quantized_asymmetric(weights->info()->data_type());
81  const ICLTensor *biases_to_use = (append_biases) ? biases : nullptr;
82 
83  _weights_reshape_kernel->configure(compile_context, weights, biases_to_use, output, num_groups);
84 
85  output->info()->set_quantization_info(weights->info()->quantization_info());
86 }
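The append_biases logic in the body above can be isolated into a small standalone sketch. The DataType enum and helpers below are simplified stand-ins for the library's own types, shown only to illustrate the rule: biases are folded into the reshaped weights unless the weights are asymmetric-quantized (in which case, as we understand it, the bias is applied later, in the GEMM output stage).

```cpp
#include <cassert>

// Hypothetical stand-in for arm_compute::DataType, reduced to the types this
// function documents as supported.
enum class DataType { QASYMM8, QASYMM8_SIGNED, QSYMM8_PER_CHANNEL, F16, F32 };

// Mirrors the role of is_data_type_quantized_asymmetric() in the snippet above.
bool is_quantized_asymmetric(DataType dt)
{
    return dt == DataType::QASYMM8 || dt == DataType::QASYMM8_SIGNED;
}

// Biases are appended to the reshaped weights only when present AND the
// weights are not asymmetric-quantized.
bool append_biases(bool has_biases, DataType weights_type)
{
    return has_biases && !is_quantized_asymmetric(weights_type);
}
```

With this rule, a float network folds its biases into the reshaped weights matrix, while a QASYMM8 network keeps them separate.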

◆ operator=() [1/2]

Prevent instances of this class from being copied.

◆ operator=() [2/2]

Default move assignment operator.

◆ run()

void run ( )
override virtual

Run the kernels contained in the function.

For Neon kernels:

  • Multi-threading is used for the kernels which are parallelisable.
  • By default std::thread::hardware_concurrency() threads are used.
Note
CPPScheduler::set_num_threads() can be used to manually set the number of threads

For OpenCL kernels:

  • All the kernels are enqueued on the queue associated with CLScheduler.
  • The queue is then flushed.
Note
The function will not block until the kernels are executed; it is the user's responsibility to wait for their completion.
prepare() will be called on the first run if it has not been done already

Implements IFunction.

Definition at line 113 of file CLGEMMConvolutionLayer.cpp.

References CLScheduler::enqueue(), and CLScheduler::get().

Referenced by CLGEMMConvolutionLayer::prepare().

114 {
115  CLScheduler::get().enqueue(*_weights_reshape_kernel);
116 }
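The note above ("prepare() will be called on the first run...") describes a common lazy-preparation contract for IFunction implementations. Below is a minimal, library-independent sketch of that pattern; the class and its counters are hypothetical and not part of the Compute Library API.

```cpp
// A function object that runs its one-off preparation step exactly once,
// before the first execution (e.g. reshaping weights, marking the original
// weights tensor as unused afterwards).
class PreparedFunction
{
public:
    void run()
    {
        if(!_is_prepared)
        {
            prepare(); // lazily prepare on first run
        }
        ++run_count;   // enqueue/execute the actual kernels here
    }

    void prepare()
    {
        if(!_is_prepared)
        {
            ++prepare_count; // one-off work happens here
            _is_prepared = true;
        }
    }

    int run_count{ 0 };
    int prepare_count{ 0 };

private:
    bool _is_prepared{ false };
};
```

Calling prepare() explicitly before the first run() is also valid; the guard ensures the preparation work is never repeated.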

◆ validate()

Status validate ( const ITensorInfo * weights,
const ITensorInfo * biases,
const ITensorInfo * output,
unsigned int  num_groups = 1 
)
static

Static function to check if given info will lead to a valid configuration of CLConvolutionLayerReshapeWeights.

Parameters
[in]weightsWeights tensor. Weights are 4D tensor with dimensions [kernel_x, kernel_y, IFM, OFM]. Data type supported: QASYMM8/QASYMM8_SIGNED/QSYMM8_PER_CHANNEL/F16/F32.
[in]biasesBiases tensor. Shared biases supported. Biases are 1D tensor with dimensions [OFM]. Data type supported: Same as weights.
[in]outputDestination tensor. Data types supported: Same as weights.
[in]num_groups(Optional) Number of groups when performing a grouped convolution. num_groups != 1 is only supported for NCHW data layout
Returns
a status

Definition at line 88 of file CLGEMMConvolutionLayer.cpp.

References ARM_COMPUTE_RETURN_ERROR_ON, ARM_COMPUTE_RETURN_ERROR_ON_DATA_TYPE_CHANNEL_NOT_IN, ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_DATA_TYPES, ARM_COMPUTE_RETURN_ERROR_ON_NULLPTR, arm_compute::BATCHES, ITensorInfo::data_layout(), ITensorInfo::data_type(), ITensorInfo::dimension(), arm_compute::F16, arm_compute::F32, arm_compute::get_data_layout_dimension_index(), arm_compute::is_data_type_quantized(), ITensorInfo::num_dimensions(), arm_compute::QASYMM8, arm_compute::QASYMM8_SIGNED, arm_compute::QSYMM8_PER_CHANNEL, ITensorInfo::total_size(), and CLWeightsReshapeKernel::validate().

Referenced by CLConvolutionLayerReshapeWeights::configure(), and CLGEMMConvolutionLayer::validate().

89 {
90  ARM_COMPUTE_RETURN_ERROR_ON_NULLPTR(weights);
91  ARM_COMPUTE_RETURN_ERROR_ON_DATA_TYPE_CHANNEL_NOT_IN(weights, 1, DataType::QASYMM8, DataType::QASYMM8_SIGNED, DataType::QSYMM8_PER_CHANNEL, DataType::F16, DataType::F32);
92  ARM_COMPUTE_RETURN_ERROR_ON(weights->num_dimensions() > 4);
93 
94  if(biases != nullptr)
95  {
96  const int idx_kernels = get_data_layout_dimension_index(weights->data_layout(), DataLayoutDimension::BATCHES);
97  ARM_COMPUTE_RETURN_ERROR_ON(is_data_type_quantized(weights->data_type()));
98 
99  ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_DATA_TYPES(weights, biases);
100  ARM_COMPUTE_RETURN_ERROR_ON(biases->dimension(0) != weights->dimension(idx_kernels));
101  ARM_COMPUTE_RETURN_ERROR_ON(biases->num_dimensions() > 1);
102  }
103 
104  if((output != nullptr) && (output->total_size() != 0))
105  {
106  ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_DATA_TYPES(weights, output);
107  CLWeightsReshapeKernel::validate(weights, biases, output, num_groups);
108  }
109 
110  return Status{};
111 }
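The checks above can be mirrored in a small self-contained sketch. The function below is a hypothetical, simplified model of validate(): it works on plain shape vectors rather than ITensorInfo, and covers only the null, rank, and bias-dimension checks (data-type and kernel-level validation are omitted). A returned string is an error description; std::nullopt plays the role of returning Status{}.

```cpp
#include <optional>
#include <string>
#include <vector>

// Dimension order [kernel_x, kernel_y, IFM, OFM] follows the documentation.
std::optional<std::string> validate_reshape(const std::vector<size_t> &weights_shape,
                                            const std::vector<size_t> *biases_shape)
{
    // ~ ARM_COMPUTE_RETURN_ERROR_ON_NULLPTR(weights)
    if(weights_shape.empty())
    {
        return "weights must not be null/empty";
    }
    // ~ ARM_COMPUTE_RETURN_ERROR_ON(weights->num_dimensions() > 4)
    if(weights_shape.size() > 4)
    {
        return "weights must be at most 4D";
    }
    if(biases_shape != nullptr)
    {
        // ~ ARM_COMPUTE_RETURN_ERROR_ON(biases->num_dimensions() > 1)
        if(biases_shape->size() != 1)
        {
            return "biases must be a 1D tensor";
        }
        // ~ biases->dimension(0) != weights->dimension(idx_kernels)
        const size_t ofm = (weights_shape.size() == 4) ? weights_shape[3] : 1;
        if((*biases_shape)[0] != ofm)
        {
            return "biases dimension must match the number of kernels (OFM)";
        }
    }
    return std::nullopt; // valid configuration, i.e. Status{}
}
```

Like the real validate(), this can be called before configure() to reject an invalid configuration without allocating anything.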

The documentation for this class was generated from the following files:

  • CLGEMMConvolutionLayer.h
  • CLGEMMConvolutionLayer.cpp