Compute Library 19.11
NEActivationLayerKernel Class Reference

Interface for the activation layer kernel.

#include <NEActivationLayerKernel.h>

Collaboration diagram for NEActivationLayerKernel: (diagram not reproduced)

Public Member Functions

const char * name () const override
    Name of the kernel.

 NEActivationLayerKernel ()
    Constructor.

 NEActivationLayerKernel (const NEActivationLayerKernel &)=delete
    Prevent instances of this class from being copied (as this class contains pointers).

 NEActivationLayerKernel (NEActivationLayerKernel &&)=default
    Default move constructor.

NEActivationLayerKernel & operator= (const NEActivationLayerKernel &)=delete
    Prevent instances of this class from being copied (as this class contains pointers).

NEActivationLayerKernel & operator= (NEActivationLayerKernel &&)=default
    Default move assignment operator.

void configure (ITensor *input, ITensor *output, ActivationLayerInfo activation_info)
    Set the input and output tensor.

void run (const Window &window, const ThreadInfo &info) override
    Execute the kernel on the passed window.

- Public Member Functions inherited from ICPPKernel

virtual ~ICPPKernel ()=default
    Default destructor.

- Public Member Functions inherited from IKernel

 IKernel ()
    Constructor.

virtual ~IKernel ()=default
    Destructor.

virtual bool is_parallelisable () const
    Indicates whether or not the kernel is parallelisable.

virtual BorderSize border_size () const
    The size of the border for that kernel.

const Window & window () const
    The maximum window the kernel can be executed on.

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *output, const ActivationLayerInfo &act_info)
    Static function to check if given info will lead to a valid configuration of NEActivationLayerKernel.

Detailed Description

Interface for the activation layer kernel.

Definition at line 39 of file NEActivationLayerKernel.h.
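
For orientation, the following is a minimal usage sketch; it is not part of the generated reference, and the tensor shape and choice of ReLU are illustrative only. It initialises two matching F32 tensors, configures the kernel, and lets the NEON scheduler execute it across threads.

#include "arm_compute/core/NEON/kernels/NEActivationLayerKernel.h"
#include "arm_compute/runtime/NEON/NEScheduler.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

int main()
{
    // Two matching F32 tensors; the shape is arbitrary for this sketch
    Tensor input, output;
    input.allocator()->init(TensorInfo(TensorShape(32U, 32U), 1, DataType::F32));
    output.allocator()->init(TensorInfo(TensorShape(32U, 32U), 1, DataType::F32));

    // Describe the activation to apply (here a plain ReLU)
    const ActivationLayerInfo act_info(ActivationLayerInfo::ActivationFunction::RELU);

    // Configure against the tensor metadata before allocating backing memory
    NEActivationLayerKernel kernel;
    kernel.configure(&input, &output, act_info);

    input.allocator()->allocate();
    output.allocator()->allocate();

    // ... fill `input` ...

    // Split the kernel window across threads along the Y dimension
    NEScheduler::get().schedule(&kernel, Window::DimY);
    return 0;
}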

Constructor & Destructor Documentation

◆ NEActivationLayerKernel() [1/3]

Constructor.

Definition at line 111 of file NEActivationLayerKernel.cpp.

NEActivationLayerKernel::NEActivationLayerKernel()
    : _input(nullptr), _output(nullptr), _func(nullptr), _act_info()
{
}

◆ NEActivationLayerKernel() [2/3]

Prevent instances of this class from being copied (as this class contains pointers).

◆ NEActivationLayerKernel() [3/3]

Default move constructor.

Member Function Documentation

◆ configure()

void configure ( ITensor *           input,
                 ITensor *           output,
                 ActivationLayerInfo activation_info
               )

Set the input and output tensor.

Note
    If the output tensor is a nullptr, the activation function will be performed in-place.

Parameters
    [in,out]  input            Source tensor. In case of output tensor = nullptr, this tensor will store the result of the activation function. Data types supported: QASYMM8/QSYMM16/F16/F32.
    [out]     output           Destination tensor. Data type supported: same as input.
    [in]      activation_info  Activation layer information.

Definition at line 116 of file NEActivationLayerKernel.cpp.

void NEActivationLayerKernel::configure(ITensor *input, ITensor *output, ActivationLayerInfo activation_info)
{
    ARM_COMPUTE_ERROR_ON_NULLPTR(input);

    _input    = input;
    _act_info = activation_info;
    _output   = input;

    // Out-of-place calculation
    if(output != nullptr)
    {
        _output = output;
    }

    // Disabled activation, thus no operation needed
    if(!activation_info.enabled())
    {
        _func = nullptr;
    }

    ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(input->info(), (output != nullptr) ? output->info() : nullptr, activation_info));

    // Activation functions : FP32
    static std::map<ActivationFunction, ActivationFunctionExecutorPtr> act_map_f32 =
    {
        { ActivationFunction::ABS, &NEActivationLayerKernel::activation<ActivationFunction::ABS, float> },
        { ActivationFunction::LINEAR, &NEActivationLayerKernel::activation<ActivationFunction::LINEAR, float> },
        { ActivationFunction::LOGISTIC, &NEActivationLayerKernel::activation<ActivationFunction::LOGISTIC, float> },
        { ActivationFunction::RELU, &NEActivationLayerKernel::activation<ActivationFunction::RELU, float> },
        { ActivationFunction::BOUNDED_RELU, &NEActivationLayerKernel::activation<ActivationFunction::BOUNDED_RELU, float> },
        { ActivationFunction::LU_BOUNDED_RELU, &NEActivationLayerKernel::activation<ActivationFunction::LU_BOUNDED_RELU, float> },
        { ActivationFunction::LEAKY_RELU, &NEActivationLayerKernel::activation<ActivationFunction::LEAKY_RELU, float> },
        { ActivationFunction::SOFT_RELU, &NEActivationLayerKernel::activation<ActivationFunction::SOFT_RELU, float> },
        { ActivationFunction::ELU, &NEActivationLayerKernel::activation<ActivationFunction::ELU, float> },
        { ActivationFunction::SQRT, &NEActivationLayerKernel::activation<ActivationFunction::SQRT, float> },
        { ActivationFunction::SQUARE, &NEActivationLayerKernel::activation<ActivationFunction::SQUARE, float> },
        { ActivationFunction::TANH, &NEActivationLayerKernel::activation<ActivationFunction::TANH, float> },
        { ActivationFunction::IDENTITY, &NEActivationLayerKernel::activation<ActivationFunction::IDENTITY, float> },
    };

#ifdef __ARM_FEATURE_FP16_VECTOR_ARITHMETIC
    // Activation functions : FP16
    static std::map<ActivationFunction, ActivationFunctionExecutorPtr> act_map_f16 =
    {
        { ActivationFunction::ABS, &NEActivationLayerKernel::activation<ActivationFunction::ABS, float16_t> },
        { ActivationFunction::LINEAR, &NEActivationLayerKernel::activation<ActivationFunction::LINEAR, float16_t> },
        { ActivationFunction::LOGISTIC, &NEActivationLayerKernel::activation<ActivationFunction::LOGISTIC, float16_t> },
        { ActivationFunction::RELU, &NEActivationLayerKernel::activation<ActivationFunction::RELU, float16_t> },
        { ActivationFunction::BOUNDED_RELU, &NEActivationLayerKernel::activation<ActivationFunction::BOUNDED_RELU, float16_t> },
        { ActivationFunction::LU_BOUNDED_RELU, &NEActivationLayerKernel::activation<ActivationFunction::LU_BOUNDED_RELU, float16_t> },
        { ActivationFunction::LEAKY_RELU, &NEActivationLayerKernel::activation<ActivationFunction::LEAKY_RELU, float16_t> },
        { ActivationFunction::SOFT_RELU, &NEActivationLayerKernel::activation<ActivationFunction::SOFT_RELU, float16_t> },
        { ActivationFunction::ELU, &NEActivationLayerKernel::activation<ActivationFunction::ELU, float16_t> },
        { ActivationFunction::SQRT, &NEActivationLayerKernel::activation<ActivationFunction::SQRT, float16_t> },
        { ActivationFunction::SQUARE, &NEActivationLayerKernel::activation<ActivationFunction::SQUARE, float16_t> },
        { ActivationFunction::TANH, &NEActivationLayerKernel::activation<ActivationFunction::TANH, float16_t> },
        { ActivationFunction::IDENTITY, &NEActivationLayerKernel::activation<ActivationFunction::IDENTITY, float16_t> },
    };
#endif /* __ARM_FEATURE_FP16_VECTOR_ARITHMETIC */

    // Activation functions : QASYMM8
    static std::map<ActivationFunction, ActivationFunctionExecutorPtr> act_map_qasymm8 =
    {
        { ActivationFunction::LOGISTIC, &NEActivationLayerKernel::activation<ActivationFunction::LOGISTIC, qasymm8_t> },
        { ActivationFunction::BOUNDED_RELU, &NEActivationLayerKernel::activation<ActivationFunction::BOUNDED_RELU, qasymm8_t> },
        { ActivationFunction::LU_BOUNDED_RELU, &NEActivationLayerKernel::activation<ActivationFunction::LU_BOUNDED_RELU, qasymm8_t> },
        { ActivationFunction::RELU, &NEActivationLayerKernel::activation<ActivationFunction::RELU, qasymm8_t> },
        { ActivationFunction::TANH, &NEActivationLayerKernel::activation<ActivationFunction::TANH, qasymm8_t> },
        { ActivationFunction::IDENTITY, &NEActivationLayerKernel::activation<ActivationFunction::IDENTITY, qasymm8_t> },
    };

    // Activation functions : QSYMM16
    static std::map<ActivationFunction, ActivationFunctionExecutorPtr> act_map_qsymm16 =
    {
        { ActivationFunction::LOGISTIC, &NEActivationLayerKernel::activation<ActivationFunction::LOGISTIC, qsymm16_t> },
        { ActivationFunction::TANH, &NEActivationLayerKernel::activation<ActivationFunction::TANH, qsymm16_t> },
    };

    // Pick the templated executor matching the input data type
    switch(input->info()->data_type())
    {
        case DataType::QASYMM8:
            _func = act_map_qasymm8[activation_info.activation()];
            break;
        case DataType::QSYMM16:
            _func = act_map_qsymm16[activation_info.activation()];
            break;
        case DataType::F32:
            _func = act_map_f32[activation_info.activation()];
            break;
#ifdef __ARM_FEATURE_FP16_VECTOR_ARITHMETIC
        case DataType::F16:
            _func = act_map_f16[activation_info.activation()];
            break;
#endif /* __ARM_FEATURE_FP16_VECTOR_ARITHMETIC */
        default:
            ARM_COMPUTE_ERROR("Unsupported data type.");
    }

    // Configure kernel window
    auto win_config = validate_and_configure_window(input->info(), (output != nullptr) ? output->info() : nullptr);
    ARM_COMPUTE_ERROR_THROW_ON(win_config.first);
    ICPPKernel::configure(win_config.second);
}

References ActivationLayerInfo::ABS, ActivationLayerInfo::activation(), ARM_COMPUTE_ERROR, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, ActivationLayerInfo::BOUNDED_RELU, ActivationLayerInfo::ELU, ActivationLayerInfo::enabled(), arm_compute::F16, arm_compute::F32, ActivationLayerInfo::IDENTITY, ITensor::info(), arm_compute::test::validation::input, ActivationLayerInfo::LEAKY_RELU, ActivationLayerInfo::LINEAR, ActivationLayerInfo::LOGISTIC, ActivationLayerInfo::LU_BOUNDED_RELU, arm_compute::QASYMM8, arm_compute::QSYMM16, ActivationLayerInfo::RELU, ActivationLayerInfo::SOFT_RELU, ActivationLayerInfo::SQRT, ActivationLayerInfo::SQUARE, and ActivationLayerInfo::TANH.

Referenced by NERNNLayer::configure(), and NELSTMLayer::configure().
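
The in-place behaviour described in the note above is selected by passing nullptr as the output tensor. A short sketch, assuming an initialised F32 Tensor named src (src is a placeholder name, not part of the reference):

NEActivationLayerKernel kernel;
// With a nullptr output, the activation result overwrites `src`
kernel.configure(&src, nullptr, ActivationLayerInfo(ActivationLayerInfo::ActivationFunction::LOGISTIC));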

◆ name()

const char * name ( ) const
inline override virtual

Name of the kernel.

Returns
Kernel name

Implements ICPPKernel.

Definition at line 42 of file NEActivationLayerKernel.h.

const char *name() const override
{
    return "NEActivationLayerKernel";
}

◆ operator=() [1/2]

NEActivationLayerKernel & operator= ( const NEActivationLayerKernel & )
delete

Prevent instances of this class from being copied (as this class contains pointers).

◆ operator=() [2/2]

NEActivationLayerKernel& operator= ( NEActivationLayerKernel &&  )
default

Default move assignment operator.

◆ run()

void run ( const Window &     window,
           const ThreadInfo & info
         )
override virtual

Execute the kernel on the passed window.

Warning
If is_parallelisable() returns false then the passed window must be equal to window()
Note
The window has to be a region within the window returned by the window() method
The width of the window has to be a multiple of num_elems_processed_per_iteration().
Parameters
    [in]  window  Region on which to execute the kernel. (Must be a region of the window returned by window())
    [in]  info    Info about executing thread and CPU.

Implements ICPPKernel.

Definition at line 618 of file NEActivationLayerKernel.cpp.

void NEActivationLayerKernel::run(const Window &window, const ThreadInfo &info)
{
    // Early exit on disabled activation
    if(!_act_info.enabled())
    {
        return;
    }

    ARM_COMPUTE_UNUSED(info);
    ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
    ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(IKernel::window(), window);
    ARM_COMPUTE_ERROR_ON(_func == nullptr);

    (this->*_func)(window);
}

References ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, ARM_COMPUTE_UNUSED, ActivationLayerInfo::enabled(), arm_compute::test::validation::info, and IKernel::window().
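
run() is normally invoked through the scheduler, which splits window() into per-thread sub-windows. A single-threaded call can pass the full window directly; a sketch, assuming a kernel configured as in the example under Detailed Description:

ThreadInfo info;
info.cpu_info = &NEScheduler::get().cpu_info(); // kernels may query CPU features
kernel.run(kernel.window(), info);              // whole window, single thread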

◆ validate()

Status validate ( const ITensorInfo *         input,
                  const ITensorInfo *         output,
                  const ActivationLayerInfo & act_info
                )
static

Static function to check if given info will lead to a valid configuration of NEActivationLayerKernel.

Parameters
    [in]  input     Source tensor info. In case of output tensor info = nullptr, this tensor will store the result of the activation function. Data types supported: QASYMM8/QSYMM16/F16/F32.
    [in]  output    Destination tensor info. Data type supported: same as input.
    [in]  act_info  Activation layer information.
Returns
a status

Definition at line 609 of file NEActivationLayerKernel.cpp.

Status NEActivationLayerKernel::validate(const ITensorInfo *input, const ITensorInfo *output, const ActivationLayerInfo &act_info)
{
    ARM_COMPUTE_UNUSED(act_info);
    ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(input, output, act_info));
    ARM_COMPUTE_RETURN_ON_ERROR(validate_and_configure_window(input->clone().get(), (output != nullptr) ? output->clone().get() : nullptr).first);

    return Status{};
}

References arm_compute::test::validation::act_info, ARM_COMPUTE_RETURN_ON_ERROR, ARM_COMPUTE_UNUSED, ICloneable< T >::clone(), and arm_compute::test::validation::input.

Referenced by NEActivationLayer::validate(), NERNNLayer::validate(), and NELSTMLayer::validate().
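
Because validate() operates on ITensorInfo only, a configuration can be checked before any tensor memory exists. A minimal sketch; the shape is illustrative:

#include "arm_compute/core/NEON/kernels/NEActivationLayerKernel.h"
#include <iostream>

using namespace arm_compute;

int main()
{
    const TensorInfo          in_info(TensorShape(224U, 224U, 3U), 1, DataType::F32);
    const TensorInfo          out_info(TensorShape(224U, 224U, 3U), 1, DataType::F32);
    const ActivationLayerInfo act(ActivationLayerInfo::ActivationFunction::TANH);

    // Metadata-only check: no tensors are created and nothing is allocated
    const Status status = NEActivationLayerKernel::validate(&in_info, &out_info, act);
    if(!bool(status))
    {
        std::cerr << status.error_description() << std::endl;
        return 1;
    }
    return 0;
}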


The documentation for this class was generated from the following files:

NEActivationLayerKernel.h
NEActivationLayerKernel.cpp