Compute Library 21.02
NEPoolingLayer Class Reference

Basic function to simulate a pooling layer with the specified pooling operation. More...

#include <NEPoolingLayer.h>

Collaboration diagram for NEPoolingLayer: [diagram not shown]

Public Member Functions

 NEPoolingLayer (std::shared_ptr< IMemoryManager > memory_manager=nullptr)
 Constructor. More...
 
 NEPoolingLayer (const NEPoolingLayer &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
NEPoolingLayer & operator= (const NEPoolingLayer &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 NEPoolingLayer (NEPoolingLayer &&)=delete
 Prevent instances of this class from being moved (As this class contains non movable objects) More...
 
NEPoolingLayer & operator= (NEPoolingLayer &&)=delete
 Prevent instances of this class from being moved (As this class contains non movable objects) More...
 
 ~NEPoolingLayer ()
 Default destructor. More...
 
void configure (ITensor *input, ITensor *output, const PoolingLayerInfo &pool_info, ITensor *indices=nullptr)
 Set the input and output tensors. More...
 
void run () override
 Run the kernels contained in the function. More...
 
- Public Member Functions inherited from IFunction
virtual ~IFunction ()=default
 Destructor. More...
 
virtual void prepare ()
 Prepare the function for executing. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *output, const PoolingLayerInfo &pool_info, const ITensorInfo *indices=nullptr)
 Static function to check if given info will lead to a valid configuration of NEPoolingLayer. More...
 

Detailed Description

Basic function to simulate a pooling layer with the specified pooling operation.

This function calls the following Neon kernels:

  1. NEFillBorderKernel (executed if padding size is different from zero)
  2. cpu::kernels::CpuPoolingKernel
  3. cpu::CpuPoolingAssemblyDispatch

Definition at line 45 of file NEPoolingLayer.h.
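
A minimal end-to-end usage sketch (not part of the generated documentation) is shown below. It assumes an NCHW F32 input of shape 32x32x16 and the PoolingLayerInfo constructor that takes pool type, pool size, data layout and PadStrideInfo:

#include "arm_compute/runtime/NEON/NEFunctions.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

int main()
{
    // 2x2 average pooling with stride 2: 32x32x16 -> 16x16x16 (F32, NCHW).
    Tensor src, dst;
    src.allocator()->init(TensorInfo(TensorShape(32U, 32U, 16U), 1, DataType::F32));
    dst.allocator()->init(TensorInfo(TensorShape(16U, 16U, 16U), 1, DataType::F32));

    NEPoolingLayer pool;
    PoolingLayerInfo pool_info(PoolingType::AVG, 2, DataLayout::NCHW, PadStrideInfo(2, 2, 0, 0));
    pool.configure(&src, &dst, pool_info);

    // Allocate backing memory after configure() so padding requirements are known.
    src.allocator()->allocate();
    dst.allocator()->allocate();

    // ... fill src with input data ...
    pool.run();
    return 0;
}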

Constructor & Destructor Documentation

◆ NEPoolingLayer() [1/3]

NEPoolingLayer ( std::shared_ptr< IMemoryManager > memory_manager = nullptr )

Constructor.

Definition at line 42 of file NEPoolingLayer.cpp.

43  : _impl(std::make_unique<Impl>())
44 {
45  _impl->memory_manager = std::move(memory_manager);
46 }
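
The constructor simply stores the manager. As a hedged sketch (not from the upstream docs), a manager assembled from BlobLifetimeManager and PoolManager could be shared with the function; calling populate() after configuration and before the first run is assumed here:

#include "arm_compute/runtime/Allocator.h"
#include "arm_compute/runtime/BlobLifetimeManager.h"
#include "arm_compute/runtime/MemoryManagerOnDemand.h"
#include "arm_compute/runtime/PoolManager.h"
#include "arm_compute/runtime/NEON/functions/NEPoolingLayer.h"
#include <memory>

using namespace arm_compute;

// Assemble an on-demand memory manager (assumed typical runtime setup).
auto lifetime_mgr = std::make_shared<BlobLifetimeManager>();
auto pool_mgr     = std::make_shared<PoolManager>();
auto mm           = std::make_shared<MemoryManagerOnDemand>(lifetime_mgr, pool_mgr);

NEPoolingLayer pool(mm); // the function keeps a copy of the shared_ptr

// ... configure the function and allocate tensors ...

Allocator alloc;
mm->populate(alloc, 1); // assumption: one memory pool is enough for single-threaded use
pool.run();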

◆ NEPoolingLayer() [2/3]

NEPoolingLayer ( const NEPoolingLayer & )
delete

Prevent instances of this class from being copied (As this class contains pointers)

◆ NEPoolingLayer() [3/3]

NEPoolingLayer ( NEPoolingLayer &&  )
delete

Prevent instances of this class from being moved (As this class contains non movable objects)

◆ ~NEPoolingLayer()

~NEPoolingLayer ( )
default

Default destructor.

Member Function Documentation

◆ configure()

void configure ( ITensor *                input,
                 ITensor *                output,
                 const PoolingLayerInfo & pool_info,
                 ITensor *                indices = nullptr 
               )

Set the input and output tensors.

Note
F16 is supported for pool sizes 2 and 3 only
Parameters
    [in,out] input     Source tensor. (Written to only when padding != 0) Data types supported: QASYMM8/QASYMM8_SIGNED/F16/F32.
    [out]    output    Destination tensor. Data types supported: Same as input.
    [in]     pool_info Contains pooling operation information described in PoolingLayerInfo.
    [out]    indices   (optional) The indices of the maximal values. Data type supported: U32.

Definition at line 48 of file NEPoolingLayer.cpp.

References ITensor::info(), and arm_compute::test::validation::input.

49 {
50  _impl->src = input;
51  _impl->dst = output;
52  _impl->indices = indices;
53  _impl->op = std::make_unique<cpu::CpuPooling>(_impl->memory_manager);
54  _impl->op->configure(input->info(), output->info(), pool_info, (indices) ? indices->info() : nullptr);
55 }
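
For illustration only (not taken from the upstream docs), a max-pooling configuration that also requests the indices output might look as follows; the U32 indices tensor is assumed to have the same shape as the pooled output:

// 2x2 max pooling with stride 2, also recording the positions of the maxima.
Tensor src, dst, idx;
src.allocator()->init(TensorInfo(TensorShape(8U, 8U, 4U), 1, DataType::F32));
dst.allocator()->init(TensorInfo(TensorShape(4U, 4U, 4U), 1, DataType::F32));
idx.allocator()->init(TensorInfo(TensorShape(4U, 4U, 4U), 1, DataType::U32));

NEPoolingLayer pool;
PoolingLayerInfo pool_info(PoolingType::MAX, 2, DataLayout::NCHW, PadStrideInfo(2, 2, 0, 0));
pool.configure(&src, &dst, pool_info, &idx);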

◆ operator=() [1/2]

NEPoolingLayer& operator= ( const NEPoolingLayer & )
delete

Prevent instances of this class from being copied (As this class contains pointers)

◆ operator=() [2/2]

NEPoolingLayer& operator= ( NEPoolingLayer &&  )
delete

Prevent instances of this class from being moved (As this class contains non movable objects)

◆ run()

void run ( )
overridevirtual

Run the kernels contained in the function.

For Neon kernels:

  • Multi-threading is used for the kernels which are parallelisable.
  • By default std::thread::hardware_concurrency() threads are used.
Note
CPPScheduler::set_num_threads() can be used to manually set the number of threads

For OpenCL kernels:

  • All the kernels are enqueued on the queue associated with CLScheduler.
  • The queue is then flushed.
Note
The function will not block until the kernels are executed. It is the user's responsibility to wait.
Will call prepare() on the first run if it hasn't been done already

Implements IFunction.

Definition at line 62 of file NEPoolingLayer.cpp.

References arm_compute::ACL_DST_0, arm_compute::ACL_DST_1, arm_compute::ACL_SRC, and ITensorPack::add_tensor().

63 {
64  ITensorPack pack;
65  pack.add_tensor(TensorType::ACL_SRC, _impl->src);
66  pack.add_tensor(TensorType::ACL_DST_0, _impl->dst);
67  pack.add_tensor(TensorType::ACL_DST_1, _impl->indices);
68  _impl->op->run(pack);
69 }
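
A short sketch of the note above (assuming CPPScheduler is the active scheduler and pool is an already-configured NEPoolingLayer):

#include "arm_compute/runtime/CPP/CPPScheduler.h"

// Limit the Neon kernels dispatched by run() to four worker threads.
arm_compute::CPPScheduler::get().set_num_threads(4);
pool.run();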

◆ validate()

Status validate ( const ITensorInfo *      input,
                  const ITensorInfo *      output,
                  const PoolingLayerInfo & pool_info,
                  const ITensorInfo *      indices = nullptr 
                )
static

Static function to check if given info will lead to a valid configuration of NEPoolingLayer.

Note
F16 is supported for pool sizes 2 and 3 only
Parameters
    [in] input     Source tensor info. (Written to only when padding != 0) Data types supported: QASYMM8/QASYMM8_SIGNED/F16/F32.
    [in] output    Destination tensor info. Data types supported: Same as input.
    [in] pool_info Contains pooling operation information described in PoolingLayerInfo.
    [in] indices   (optional) Tensor info of the indices of the maximal values. Data type supported: U32.
Returns
a status

Definition at line 57 of file NEPoolingLayer.cpp.

References CpuPooling::validate().

Referenced by arm_compute::test::validation::DATA_TEST_CASE().

58 {
59  return cpu::CpuPooling::validate(input, output, pool_info, indices);
60 }
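
As a hedged example (not part of the generated docs), the configuration can be pre-checked from TensorInfo objects alone, before any memory is allocated; the error-code comparison assumes arm_compute::ErrorCode::OK:

#include <iostream>

using namespace arm_compute;

TensorInfo src_info(TensorShape(32U, 32U, 16U), 1, DataType::F32);
TensorInfo dst_info(TensorShape(16U, 16U, 16U), 1, DataType::F32);
PoolingLayerInfo pool_info(PoolingType::AVG, 2, DataLayout::NCHW, PadStrideInfo(2, 2, 0, 0));

Status status = NEPoolingLayer::validate(&src_info, &dst_info, pool_info);
if(status.error_code() != ErrorCode::OK)
{
    std::cerr << status.error_description() << std::endl;
}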

The documentation for this class was generated from the following files:

  • NEPoolingLayer.h
  • NEPoolingLayer.cpp