Compute Library 22.08
ActivationLayerInfo Class Reference

Activation Layer Information class.

#include <Types.h>

Public Types

enum  ActivationFunction {
  LOGISTIC, TANH, RELU, BOUNDED_RELU,
  LU_BOUNDED_RELU, LEAKY_RELU, SOFT_RELU, ELU,
  ABS, SQUARE, SQRT, LINEAR,
  IDENTITY, HARD_SWISH
}
 Available activation functions.
 
using LookupTable256 = std::array<qasymm8_t, 256>
 Lookup table.
 

Public Member Functions

 ActivationLayerInfo() = default
 
 ActivationLayerInfo(ActivationFunction f, float a = 0.0f, float b = 0.0f)
 Constructor.
 
ActivationFunction activation() const
 Get the type of activation function.
 
float a() const
 Get the alpha value.
 
float b() const
 Get the beta value.
 
bool enabled() const
 Check if the activation layer is enabled.
 

Static Public Member Functions

static bool is_lut_supported(ActivationFunction act_func, DataType data_type)
 

Detailed Description

Activation Layer Information class.

Definition at line 1625 of file Types.h.

Member Typedef Documentation

◆ LookupTable256

using LookupTable256 = std::array<qasymm8_t, 256>

Lookup table with 256 entries: one precomputed output value for each possible QASYMM8 input.

Definition at line 1648 of file Types.h.
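
Because QASYMM8 data has only 256 possible input values, an activation can be precomputed once into such a table and then applied as a plain array lookup. A minimal standalone sketch (assumptions: qasymm8_t is an unsigned 8-bit type, and the identity mapping stands in for a real dequantize-activate-requantize round trip):

#include <array>
#include <cstdint>

using qasymm8_t = std::uint8_t; // assumption: ACL's unsigned 8-bit quantized type

// Precompute f(i) for every possible 8-bit input i. A real table would
// dequantize, apply the activation, then requantize; identity keeps the
// sketch short.
std::array<qasymm8_t, 256> make_lut()
{
    std::array<qasymm8_t, 256> lut{};
    for(int i = 0; i < 256; ++i)
    {
        lut[i] = static_cast<qasymm8_t>(i);
    }
    return lut;
}

// Applying the activation then degenerates to: out[j] = lut[in[j]];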

Member Enumeration Documentation

◆ ActivationFunction

enum ActivationFunction  [strong]

Available activation functions.

Enumerator
LOGISTIC 

Logistic ( \( f(x) = \frac{1}{1 + e^{-x}} \) )

TANH 

Hyperbolic tangent ( \( f(x) = a \cdot tanh(b \cdot x) \) )

RELU 

Rectifier ( \( f(x) = max(0,x) \) )

BOUNDED_RELU 

Upper Bounded Rectifier ( \( f(x) = min(a, max(0,x)) \) )

LU_BOUNDED_RELU 

Lower and Upper Bounded Rectifier ( \( f(x) = min(a, max(b,x)) \) )

LEAKY_RELU 

Leaky Rectifier ( \( f(x) = \begin{cases} \alpha x & \quad \text{if } x \text{ < 0}\\ x & \quad \text{if } x \geq \text{ 0 } \end{cases} \) )

SOFT_RELU 

Soft Rectifier ( \( f(x)= log(1+e^x) \) )

ELU 

Exponential Linear Unit ( \( f(x) = \begin{cases} \alpha (exp(x) - 1) & \quad \text{if } x \text{ < 0}\\ x & \quad \text{if } x \geq \text{ 0 } \end{cases} \) )

ABS 

Absolute ( \( f(x)= |x| \) )

SQUARE 

Square ( \( f(x)= x^2 \) )

SQRT 

Square root ( \( f(x) = \sqrt{x} \) )

LINEAR 

Linear ( \( f(x)= ax + b \) )

IDENTITY 

Identity ( \( f(x)= x \) )

HARD_SWISH 

Hard-swish ( \( f(x) = (x * relu6(x+3))/6 \) )

Definition at line 1629 of file Types.h.

{
    LOGISTIC,        /**< Logistic ( \f$ f(x) = \frac{1}{1 + e^{-x}} \f$ ) */
    TANH,            /**< Hyperbolic tangent ( \f$ f(x) = a \cdot tanh(b \cdot x) \f$ ) */
    RELU,            /**< Rectifier ( \f$ f(x) = max(0,x) \f$ ) */
    BOUNDED_RELU,    /**< Upper Bounded Rectifier ( \f$ f(x) = min(a, max(0,x)) \f$ ) */
    LU_BOUNDED_RELU, /**< Lower and Upper Bounded Rectifier ( \f$ f(x) = min(a, max(b,x)) \f$ ) */
    LEAKY_RELU,      /**< Leaky Rectifier ( \f$ f(x) = \begin{cases} \alpha x & \quad \text{if } x \text{ < 0}\\ x & \quad \text{if } x \geq \text{ 0 } \end{cases} \f$ ) */
    SOFT_RELU,       /**< Soft Rectifier ( \f$ f(x) = log(1+e^x) \f$ ) */
    ELU,             /**< Exponential Linear Unit ( \f$ f(x) = \begin{cases} \alpha (exp(x) - 1) & \quad \text{if } x \text{ < 0}\\ x & \quad \text{if } x \geq \text{ 0 } \end{cases} \f$ ) */
    ABS,             /**< Absolute ( \f$ f(x) = |x| \f$ ) */
    SQUARE,          /**< Square ( \f$ f(x) = x^2 \f$ ) */
    SQRT,            /**< Square root ( \f$ f(x) = \sqrt{x} \f$ ) */
    LINEAR,          /**< Linear ( \f$ f(x) = ax + b \f$ ) */
    IDENTITY,        /**< Identity ( \f$ f(x) = x \f$ ) */
    HARD_SWISH       /**< Hard-swish ( \f$ f(x) = (x * relu6(x+3))/6 \f$ ) */
};
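
As a concrete reading of one entry, here is a minimal scalar sketch of HARD_SWISH (an illustrative reference of the formula above, not the library's optimized kernel; hard_swish_ref is our own name):

#include <algorithm>

// f(x) = x * relu6(x + 3) / 6, with relu6(y) = min(6, max(0, y)).
float hard_swish_ref(float x)
{
    const float relu6 = std::min(6.0f, std::max(0.0f, x + 3.0f));
    return x * relu6 / 6.0f;
}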

Constructor & Destructor Documentation

◆ ActivationLayerInfo() [1/2]

ActivationLayerInfo() = default

◆ ActivationLayerInfo() [2/2]

ActivationLayerInfo(ActivationFunction f, float a = 0.0f, float b = 0.0f)  [inline]

Constructor.

Parameters

 [in] f  The activation function to use.
 [in] a  (Optional) The alpha parameter used by some activation functions (ActivationFunction::BOUNDED_RELU, ActivationFunction::LU_BOUNDED_RELU, ActivationFunction::LINEAR, ActivationFunction::TANH).
 [in] b  (Optional) The beta parameter used by some activation functions (ActivationFunction::LINEAR, ActivationFunction::LU_BOUNDED_RELU, ActivationFunction::TANH).

Definition at line 1658 of file Types.h.

    : _act(f), _a(a), _b(b), _enabled(true)
{
}
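
A short usage sketch (assuming the header's full include path is arm_compute/core/Types.h and the class lives in the arm_compute namespace); BOUNDED_RELU with a = 6 yields the common ReLU6 clamp:

#include "arm_compute/core/Types.h"

using namespace arm_compute;

// ReLU6, i.e. f(x) = min(6, max(0, x)), expressed as BOUNDED_RELU with alpha = 6.
const ActivationLayerInfo relu6_info(ActivationLayerInfo::ActivationFunction::BOUNDED_RELU, 6.0f);

// Scaled tanh uses both parameters: f(x) = a * tanh(b * x).
const ActivationLayerInfo tanh_info(ActivationLayerInfo::ActivationFunction::TANH, 1.0f, 0.5f);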

Member Function Documentation

◆ a()

float a() const  [inline]

Get the alpha value.

Definition at line 1668 of file Types.h.

◆ activation()

ActivationFunction activation() const  [inline]

Get the type of activation function.

Definition at line 1663 of file Types.h.

Referenced by arm_compute::test::validation::reference::activation_layer(), ClElementwiseKernel::ClElementwiseKernel(), ClActivationKernel::configure(), CpuActivationKernel::configure(), ClGemmMatrixMultiplyNativeKernel::configure(), ClGemmMatrixMultiplyReshapedOnlyRhsMMULKernel::configure(), ClWinogradOutputTransformKernel::configure(), ClDirectConv2dKernel::configure(), ClMulKernel::configure(), CLBatchNormalizationLayerKernel::configure(), ClComplexMulKernel::configure(), ClGemmConv2d::configure(), arm_compute::graph::backends::detail::create_batch_normalization_layer(), arm_compute::graph::backends::detail::create_convolution_layer(), arm_compute::graph::backends::detail::create_depthwise_convolution_layer(), arm_compute::graph::backends::detail::create_fused_convolution_batch_normalization_layer(), arm_compute::graph::backends::detail::create_fused_depthwise_convolution_batch_normalization_layer(), arm_compute::cpu::fp32_neon_batch_normalization(), arm_compute::cpu::fp_neon_activation_impl(), arm_compute::get_quantized_activation_min_max(), arm_compute::utils::info_helpers::is_relu(), arm_compute::utils::info_helpers::is_relu6(), arm_compute::assembly_utils::map_to_arm_gemm_activation(), arm_compute::cpu::neon_qasymm8_activation(), arm_compute::cpu::neon_qasymm8_activation_lut(), arm_compute::cpu::neon_qasymm8_signed_activation(), arm_compute::cpu::neon_qsymm16_activation(), arm_compute::operator<<(), arm_compute::cpu::sve2_qasymm8_activation(), arm_compute::cpu::sve2_qasymm8_signed_activation(), arm_compute::cpu::sve2_qsymm16_activation(), arm_compute::cpu::sve_fp32_activation(), arm_compute::to_string(), ClFullyConnected::validate(), CpuFullyConnected::validate(), ClGemmConv2d::validate(), and DotGraphVisitor::visit().

{
    return _act;
}
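
A small query sketch combining activation() with a() and b(), assuming the same include and namespace as the constructor sketch above (parameter values chosen purely for illustration):

const ActivationLayerInfo act(ActivationLayerInfo::ActivationFunction::LU_BOUNDED_RELU, 6.0f, -1.0f);

if(act.activation() == ActivationLayerInfo::ActivationFunction::LU_BOUNDED_RELU)
{
    const float upper = act.a(); //  6.0f: the upper bound in min(a, max(b, x))
    const float lower = act.b(); // -1.0f: the lower bound
}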

◆ b()

float b() const  [inline]

Get the beta value.

Definition at line 1673 of file Types.h.

◆ enabled()

bool enabled() const  [inline]

Check if the activation layer is enabled (i.e. initialised with an activation function).

Definition at line 1678 of file Types.h.

Referenced by arm_compute::test::validation::reference::batch_normalization_layer(), ClElementwiseKernel::ClElementwiseKernel(), ClGemmMatrixMultiplyNativeKernel::configure(), ClDirectConv2d::configure(), ClGemmMatrixMultiplyReshapedOnlyRhsMMULKernel::configure(), ClWinogradOutputTransformKernel::configure(), FusedDepthwiseConvolutionBatchNormalizationFunction< TargetInfo, FusedLayerTypes >::configure(), NEBatchNormalizationLayerKernel::configure(), ClDirectConv2dKernel::configure(), CpuGemmDirectConv2d::configure(), ClMulKernel::configure(), CpuWinogradConv2d::configure(), CpuDirectConv2d::configure(), CpuDirectConv3d::configure(), CLBatchNormalizationLayerKernel::configure(), NEFFTConvolutionLayer::configure(), ClComplexMulKernel::configure(), ClGemmConv2d::configure(), CpuGemmLowpMatrixMultiplyCore::configure(), arm_compute::graph::backends::detail::create_batch_normalization_layer(), arm_compute::graph::backends::detail::create_convolution_layer(), arm_compute::graph::backends::detail::create_depthwise_convolution_layer(), arm_compute::graph::backends::detail::create_fused_convolution_batch_normalization_layer(), arm_compute::graph::backends::detail::create_fused_depthwise_convolution_batch_normalization_layer(), arm_compute::cpu::fp32_neon_batch_normalization(), arm_compute::utils::info_helpers::is_relu(), arm_compute::utils::info_helpers::is_relu6(), arm_compute::operator<<(), CpuActivationKernel::run_op(), ClMulKernel::run_op(), arm_compute::to_string(), CpuSub::validate(), CpuAdd::validate(), ClDirectConv2d::validate(), CpuMul::validate(), CpuDirectConv2d::validate(), NEElementwiseMax::validate(), CpuDirectConv3d::validate(), ClFullyConnected::validate(), CpuFullyConnected::validate(), CpuComplexMul::validate(), CpuGemm::validate(), ClGemmConv2d::validate(), NEFFTConvolutionLayer::validate(), CpuGemmLowpMatrixMultiplyCore::validate(), CLFFTConvolutionLayer::validate(), NEElementwiseMin::validate(), ClSaturatedArithmeticKernel::validate(), ClArithmeticKernel::validate(), NEElementwiseSquaredDiff::validate(), NEElementwiseDivision::validate(), NEElementwisePower::validate(), and DotGraphVisitor::visit().

{
    return _enabled;
}
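
Because only the parameterized constructor sets _enabled to true, enabled() works as a cheap "is an activation configured here?" test. A sketch, under the same assumptions as the constructor example:

const ActivationLayerInfo no_act{};                                            // default-constructed: no activation configured
const ActivationLayerInfo relu(ActivationLayerInfo::ActivationFunction::RELU); // _enabled set to true by the constructor

if(relu.enabled() && !no_act.enabled())
{
    // e.g. fuse the ReLU into the preceding kernel and skip the disabled one
}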

◆ is_lut_supported()

static bool is_lut_supported(ActivationFunction act_func, DataType data_type)  [inline, static]

Definition at line 1702 of file Types.h.

References ARM_COMPUTE_UNUSED, ActivationLayerInfo::IDENTITY, arm_compute::QASYMM8, arm_compute::qasymm8_hard_swish(), and arm_compute::qasymm8_leaky_relu().

Referenced by CpuActivationKernel::configure(), and arm_compute::cpu::neon_qasymm8_activation_lut().

{
#ifdef __aarch64__
    auto supported = (data_type == DataType::QASYMM8 && (act_func == ActivationFunction::HARD_SWISH || act_func == ActivationFunction::LEAKY_RELU));
    return supported;
#else  // __aarch64__
    ARM_COMPUTE_UNUSED(act_func);
    ARM_COMPUTE_UNUSED(data_type);
    return false;
#endif // __aarch64__
}
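
A call sketch; per the definition above, this can only return true on AArch64 builds, and only for QASYMM8 with HARD_SWISH or LEAKY_RELU:

const bool use_lut = ActivationLayerInfo::is_lut_supported(
    ActivationLayerInfo::ActivationFunction::HARD_SWISH, DataType::QASYMM8);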

The documentation for this class was generated from the following file:

Types.h