Compute Library 19.08
NEPoolingLayer Class Reference

Basic function to simulate a pooling layer with the specified pooling operation. More...

#include <NEPoolingLayer.h>

Collaboration diagram for NEPoolingLayer:

Public Member Functions

 NEPoolingLayer ()
 Constructor. More...
 
void configure (ITensor *input, ITensor *output, const PoolingLayerInfo &pool_info)
 Set the input and output tensors. More...
 
void run () override
 Run the kernels contained in the function. More...
 
- Public Member Functions inherited from IFunction
virtual ~IFunction ()=default
 Destructor. More...
 
virtual void prepare ()
 Prepare the function for executing. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *output, const PoolingLayerInfo &pool_info)
 Static function to check if given info will lead to a valid configuration of NEPoolingLayer. More...
 

Detailed Description

Basic function to simulate a pooling layer with the specified pooling operation.

This function calls the following NEON kernels:

  1. NEFillBorderKernel (executed only if the padding size is non-zero)
  2. NEPoolingLayerKernel

Definition at line 42 of file NEPoolingLayer.h.
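
A minimal usage sketch (not taken from the library documentation) showing how the function is typically driven; the tensor shape, the pooling parameters, and the main() wrapper are illustrative assumptions:

#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/NEON/functions/NEPoolingLayer.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

int main()
{
    // Illustrative NCHW source: 32 (W) x 32 (H) x 16 (C), single-precision float.
    Tensor src, dst;
    src.allocator()->init(TensorInfo(TensorShape(32U, 32U, 16U), 1, DataType::F32));

    // 2x2 max pooling with stride 2 and no padding.
    const PoolingLayerInfo pool_info(PoolingType::MAX, Size2D(2, 2), PadStrideInfo(2, 2, 0, 0));

    NEPoolingLayer pool;
    pool.configure(&src, &dst, pool_info); // dst's TensorInfo is deduced during configuration

    // Allocate backing memory only after configuration.
    src.allocator()->allocate();
    dst.allocator()->allocate();

    // ... fill src with data ...

    pool.run(); // schedules NEFillBorderKernel (NCHW) and NEPoolingLayerKernel
    return 0;
}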

Constructor & Destructor Documentation

◆ NEPoolingLayer()

Constructor.

Definition at line 33 of file NEPoolingLayer.cpp.

NEPoolingLayer::NEPoolingLayer()
    : _pooling_layer_kernel(), _border_handler(), _is_global_pooling_layer(false), _data_layout(DataLayout::NCHW)
{
}

References arm_compute::NCHW.

Member Function Documentation

◆ configure()

void configure(ITensor *input, ITensor *output, const PoolingLayerInfo &pool_info)

Set the input and output tensors.

Note
    F16 is supported for pool sizes 2 and 3 only.

Parameters
    [in,out]  input      Source tensor. (Written to only when padding != 0) Data types supported: QASYMM8/F16/F32.
    [out]     output     Destination tensor. Data types supported: Same as input.
    [in]      pool_info  Contains pooling operation information described in PoolingLayerInfo.

Definition at line 38 of file NEPoolingLayer.cpp.

void NEPoolingLayer::configure(ITensor *input, ITensor *output, const PoolingLayerInfo &pool_info)
{
    // Check if we have Global Pooling Layer
    _is_global_pooling_layer = (input->info()->dimension(0) == pool_info.pool_size().width) && (input->info()->dimension(1) == pool_info.pool_size().height);

    // Get data layout
    _data_layout = input->info()->data_layout();

    // Configure pooling kernel
    _pooling_layer_kernel.configure(input, output, pool_info);

    switch(_data_layout)
    {
        case DataLayout::NCHW:
        {
            // Configure border depending on operation required (quantize border in case of asymmetric data_type)
            BorderMode border_mode = (pool_info.pool_type() == PoolingType::MAX) ? BorderMode::REPLICATE : BorderMode::CONSTANT;
            PixelValue zero_value(0.f);
            if(is_data_type_quantized_asymmetric(input->info()->data_type()) && !pool_info.exclude_padding())
            {
                zero_value = PixelValue(static_cast<uint32_t>(input->info()->quantization_info().uniform().offset));
            }
            _border_handler.configure(input, _pooling_layer_kernel.border_size(), border_mode, zero_value);
            break;
        }
        case DataLayout::NHWC:
            break;
        default:
            ARM_COMPUTE_ERROR("Data layout not supported");
    }
}

References ARM_COMPUTE_ERROR, arm_compute::test::validation::border_mode, NEPoolingLayerKernel::border_size(), NEPoolingLayerKernel::configure(), NEFillBorderKernel::configure(), arm_compute::CONSTANT, ITensorInfo::data_layout(), ITensorInfo::data_type(), ITensorInfo::dimension(), PoolingLayerInfo::exclude_padding(), Size2D::height, ITensor::info(), arm_compute::is_data_type_quantized_asymmetric(), arm_compute::MAX, arm_compute::NCHW, arm_compute::NHWC, UniformQuantizationInfo::offset, PoolingLayerInfo::pool_size(), PoolingLayerInfo::pool_type(), ITensorInfo::quantization_info(), arm_compute::REPLICATE, QuantizationInfo::uniform(), and Size2D::width.
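
As an illustration of the global-pooling branch detected above (pool size equal to the input's spatial dimensions), a hedged sketch; the 7x7x512 NCHW shape and the helper function are assumptions, not part of the library:

#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/NEON/functions/NEPoolingLayer.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

// Illustrative helper: global average pooling over an NCHW 7 (W) x 7 (H) x 512 (C) feature map.
void configure_global_average_pool(Tensor &src, Tensor &dst, NEPoolingLayer &pool)
{
    src.allocator()->init(TensorInfo(TensorShape(7U, 7U, 512U), 1, DataType::F32));

    // Pool size equal to the spatial dimensions, so configure() flags this as a
    // global pooling layer and run() schedules the kernel over Window::DimZ.
    const PoolingLayerInfo global_avg(PoolingType::AVG, Size2D(7, 7), PadStrideInfo(1, 1, 0, 0));

    pool.configure(&src, &dst, global_avg);

    src.allocator()->allocate();
    dst.allocator()->allocate();
}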

◆ run()

void run() override

Run the kernels contained in the function.

For NEON kernels:

  • Multi-threading is used for the kernels which are parallelisable.
  • By default std::thread::hardware_concurrency() threads are used.
Note
CPPScheduler::set_num_threads() can be used to manually set the number of threads (a sketch follows at the end of this section)

For OpenCL kernels:

  • All the kernels are enqueued on the queue associated with CLScheduler.
  • The queue is then flushed.
Note
The function will not block until the kernels are executed. It is the user's responsibility to wait.
Will call prepare() on the first run if it has not already been done

Implements IFunction.

Definition at line 75 of file NEPoolingLayer.cpp.

void NEPoolingLayer::run()
{
    switch(_data_layout)
    {
        case DataLayout::NCHW:
            // Fill border
            NEScheduler::get().schedule(&_border_handler, Window::DimY);

            // Run pooling layer
            NEScheduler::get().schedule(&_pooling_layer_kernel, _is_global_pooling_layer ? Window::DimZ : Window::DimY);
            break;
        case DataLayout::NHWC:
            // Run pooling layer
            NEScheduler::get().schedule(&_pooling_layer_kernel, Window::DimX);
            break;
        default:
            ARM_COMPUTE_ERROR("Data layout not supported");
    }
}

References ARM_COMPUTE_ERROR, Window::DimX, Window::DimY, Window::DimZ, Scheduler::get(), arm_compute::NCHW, arm_compute::NHWC, and IScheduler::schedule().
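
A minimal sketch of the thread-count note above, using NEScheduler::get() as in the run() body; the value 4 and the helper name are illustrative assumptions:

#include "arm_compute/runtime/NEON/NEScheduler.h"

void limit_neon_worker_threads()
{
    // Cap the pool of worker threads used by parallelisable NEON kernels.
    // Without this call, std::thread::hardware_concurrency() threads are used.
    arm_compute::NEScheduler::get().set_num_threads(4);
}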

◆ validate()

static Status validate(const ITensorInfo *input, const ITensorInfo *output, const PoolingLayerInfo &pool_info)

Static function to check if given info will lead to a valid configuration of NEPoolingLayer.

Note
    F16 is supported for pool sizes 2 and 3 only.

Parameters
    [in]  input      Source tensor. (Written to only when padding != 0) Data types supported: QASYMM8/F16/F32.
    [in]  output     Destination tensor. Data types supported: Same as input.
    [in]  pool_info  Contains pooling operation information described in PoolingLayerInfo.
Returns
a status

Definition at line 70 of file NEPoolingLayer.cpp.

Status NEPoolingLayer::validate(const ITensorInfo *input, const ITensorInfo *output, const PoolingLayerInfo &pool_info)
{
    return NEPoolingLayerKernel::validate(input, output, pool_info);
}

References NEPoolingLayerKernel::validate().
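
As an illustration of validating a configuration before allocating any tensors, a hedged sketch; the shapes, data type, and helper name are assumptions:

#include "arm_compute/core/Error.h"
#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/NEON/functions/NEPoolingLayer.h"

using namespace arm_compute;

// Illustrative check: 3x3 average pooling, stride 1, no padding, which maps a
// 16x16x8 F32 input onto a 14x14x8 output. No tensor memory is needed for this.
bool pooling_config_is_supported()
{
    const TensorInfo       src_info(TensorShape(16U, 16U, 8U), 1, DataType::F32);
    const TensorInfo       dst_info(TensorShape(14U, 14U, 8U), 1, DataType::F32);
    const PoolingLayerInfo pool_info(PoolingType::AVG, Size2D(3, 3), PadStrideInfo(1, 1, 0, 0));

    const Status status = NEPoolingLayer::validate(&src_info, &dst_info, pool_info);
    return status.error_code() == ErrorCode::OK;
}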


The documentation for this class was generated from the following files:

  • NEPoolingLayer.h
  • NEPoolingLayer.cpp