Compute Library 21.02
CLFlattenLayer Class Reference

Basic function to execute flatten.

#include <CLFlattenLayer.h>

[Collaboration diagram for CLFlattenLayer: CLFlattenLayer inherits IFunction]

Public Member Functions

void configure (const ICLTensor *input, ICLTensor *output)
 Initialise the kernel's input and output.
 
void configure (const CLCompileContext &compile_context, const ICLTensor *input, ICLTensor *output)
 Initialise the kernel's input and output.
 
void run () override
 Run the kernels contained in the function.
 
- Public Member Functions inherited from IFunction
virtual ~IFunction ()=default
 Destructor.
 
virtual void prepare ()
 Prepare the function for executing.
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *output)
 Static function to check if given info will lead to a valid configuration of CLFlattenLayer.
 

Detailed Description

Basic function to execute flatten.

This function calls the following OpenCL function:

  1. CLReshapeLayer

Definition at line 42 of file CLFlattenLayer.h.
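
A minimal end-to-end usage sketch, not taken from the library's documentation; the tensor shape and data type are illustrative assumptions:

#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/CL/CLScheduler.h"
#include "arm_compute/runtime/CL/CLTensor.h"
#include "arm_compute/runtime/CL/functions/CLFlattenLayer.h"

using namespace arm_compute;

int main()
{
    // Initialise the default OpenCL context and command queue.
    CLScheduler::get().default_init();

    // 4x3x2 tensor with a batch size of 5; the flattened output is [24, 5].
    CLTensor input;
    CLTensor output;
    input.allocator()->init(TensorInfo(TensorShape(4U, 3U, 2U, 5U), 1, DataType::F32));

    // Configure the function; the output tensor info is inferred automatically.
    CLFlattenLayer flatten;
    flatten.configure(&input, &output);

    // Allocate the backing OpenCL buffers and execute.
    input.allocator()->allocate();
    output.allocator()->allocate();
    flatten.run();
    CLScheduler::get().sync(); // run() does not block; wait explicitly.

    return 0;
}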

Member Function Documentation

◆ configure() [1/2]

void configure (const ICLTensor *input, ICLTensor *output)

Initialise the kernel's input and output.

Parameters

  [in]  input   First input tensor to flatten, with at least 3 dimensions. The dimensions above the third will be interpreted as batches. Data types supported: All.
  [out] output  Output tensor with shape [w*h*d, input_batches], where w, h and d are the width, height and depth of the input tensor. Data type supported: same as input.
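
For example, an input of shape [W=4, H=3, D=2, batches=5] is flattened to an output of shape [24, 5].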

Definition at line 36 of file CLFlattenLayer.cpp.

References CLKernelLibrary::get().

{
    configure(CLKernelLibrary::get().get_compile_context(), input, output);
}

◆ configure() [2/2]

void configure (const CLCompileContext &compile_context, const ICLTensor *input, ICLTensor *output)

Initialise the kernel's input and output.

Parameters

  [in]  compile_context  The compile context to be used.
  [in]  input            First input tensor to flatten, with at least 3 dimensions. The dimensions above the third will be interpreted as batches. Data types supported: All.
  [out] output           Output tensor with shape [w*h*d, input_batches], where w, h and d are the width, height and depth of the input tensor. Data type supported: same as input.

Definition at line 41 of file CLFlattenLayer.cpp.

References ARM_COMPUTE_ERROR_ON_NULLPTR, arm_compute::auto_init_if_empty(), ICloneable< T >::clone(), arm_compute::misc::shape_calculator::compute_flatten_shape(), CLReshapeLayer::configure(), and ITensor::info().

{
    ARM_COMPUTE_ERROR_ON_NULLPTR(input, output);
    auto_init_if_empty(*output->info(), input->info()->clone()->set_tensor_shape(misc::shape_calculator::compute_flatten_shape(input->info())));
    _reshape.configure(compile_context, input, output);
}
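
A minimal sketch of calling this overload directly, assuming input and output are set up as in the earlier example; passing the library's default compile context reproduces what the two-argument overload does:

CLFlattenLayer flatten;
// Any valid CLCompileContext can be supplied here; this sketch simply
// reuses the library's default context, as the two-argument overload does.
flatten.configure(CLKernelLibrary::get().get_compile_context(), &input, &output);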

◆ run()

void run () override
[virtual]

Run the kernels contained in the function.

For Neon kernels:

  • Multi-threading is used for the kernels which are parallelisable.
  • By default std::thread::hardware_concurrency() threads are used.
Note
CPPScheduler::set_num_threads() can be used to manually set the number of threads

For OpenCL kernels:

  • All the kernels are enqueued on the queue associated with CLScheduler.
  • The queue is then flushed.
Note
The function will not block until the kernels are executed. It is the user's responsibility to wait.
Will call prepare() on first run if it hasn't been done already.

Implements IFunction.

Definition at line 59 of file CLFlattenLayer.cpp.

References CLReshapeLayer::run().

Referenced by CLFullyConnectedLayer::run().

{
    _reshape.run();
}
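
A short sketch of the enqueue-then-wait pattern described above, assuming the flatten, input and output objects from the earlier example; run() only enqueues and flushes the kernels, so synchronise before reading the result on the host:

flatten.run();
CLScheduler::get().sync(); // block until the enqueued kernels have finished

output.map();   // make the OpenCL buffer accessible from the host
// ... read the flattened data through output.buffer() ...
output.unmap();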

◆ validate()

static Status validate (const ITensorInfo *input, const ITensorInfo *output)

Static function to check if given info will lead to a valid configuration of CLFlattenLayer.

Parameters

  [in]  input   First input tensor info to flatten, with at least 3 dimensions. The dimensions above the third will be interpreted as batches. Data types supported: All.
  [in]  output  Output tensor info with shape [w*h*d, input_batches], where w, h and d are the width, height and depth of the input tensor. Data type supported: same as input.

Returns
  a status

Definition at line 48 of file CLFlattenLayer.cpp.

References ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_SHAPES, ICloneable< T >::clone(), arm_compute::misc::shape_calculator::compute_flatten_shape(), ITensorInfo::total_size(), and CLReshapeLayer::validate().

Referenced by CLFullyConnectedLayer::validate().

{
    // Checks performed when output is configured
    if(output->total_size() != 0)
    {
        const TensorInfo tensor_info_output = input->clone()->set_tensor_shape(misc::shape_calculator::compute_flatten_shape(input));
        ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_SHAPES(output, &tensor_info_output);
    }
    return CLReshapeLayer::validate(input, output);
}
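
A minimal sketch of validating a configuration before creating any OpenCL resources; the shapes and data type are illustrative assumptions:

#include <iostream>

#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/CL/functions/CLFlattenLayer.h"

using namespace arm_compute;

int main()
{
    // A 4x3x2 input with 5 batches must flatten to [24, 5].
    const TensorInfo input_info(TensorShape(4U, 3U, 2U, 5U), 1, DataType::F32);
    const TensorInfo output_info(TensorShape(24U, 5U), 1, DataType::F32);

    const Status status = CLFlattenLayer::validate(&input_info, &output_info);
    if(!status)
    {
        std::cerr << status.error_description() << std::endl;
    }
    return 0;
}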

The documentation for this class was generated from the following files:

  • CLFlattenLayer.h
  • CLFlattenLayer.cpp