Compute Library 24.02.1
Kernel to perform layer normalization for QLSTM.
#include <NEQLSTMLayerNormalizationKernel.h>
Definition at line 36 of file NEQLSTMLayerNormalizationKernel.h.
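The kernel normalizes the input across the feature dimension and applies a per-element weight and bias. The library operates on quantized QSYMM16 data, but the underlying arithmetic follows the standard layer-normalization formula; the sketch below is a hypothetical float reference (names and epsilon are assumptions, not the library's internals):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Float reference of layer normalization: subtract the mean, divide by the
// standard deviation, then scale by weight (gamma) and shift by bias (beta).
std::vector<float> layer_norm(const std::vector<float> &x,
                              const std::vector<float> &weight,
                              const std::vector<float> &bias,
                              float epsilon = 1e-8f)
{
    const size_t n = x.size();

    float mean = 0.f;
    for (float v : x) mean += v;
    mean /= static_cast<float>(n);

    float var = 0.f;
    for (float v : x) var += (v - mean) * (v - mean);
    var /= static_cast<float>(n);

    const float inv_std = 1.f / std::sqrt(var + epsilon);

    std::vector<float> y(n);
    for (size_t i = 0; i < n; ++i)
        y[i] = (x[i] - mean) * inv_std * weight[i] + bias[i];
    return y;
}
```

The QSYMM16 path computes the same quantity in fixed point, using a quantized multiplier in place of the float scale.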
◆ NEQLSTMLayerNormalizationKernel() [1/3]
Default constructor.
◆ NEQLSTMLayerNormalizationKernel() [2/3]
Prevent instances of this class from being copied (as this class contains pointers).
◆ NEQLSTMLayerNormalizationKernel() [3/3]
Default move constructor.
◆ ~NEQLSTMLayerNormalizationKernel()
Default destructor.
◆ configure()
Set the input and output tensors.
- Parameters

| [in] | input | Source tensor. Data types supported: QSYMM16. |
| [out] | output | Destination tensor. Data types supported: Same as input. |
| [in] | weight | Weight tensor. Data types supported: Same as input. |
| [in] | bias | Bias tensor. Data types supported: S32. |
Definition at line 84 of file NEQLSTMLayerNormalizationKernel.cpp.
static const std::map<DataType, ComputeFuncType> fn_map = {
    {DataType::QSYMM16, std::mem_fn(&NEQLSTMLayerNormalizationKernel::compute_qsymm16)},
};
...
_output_multiplier = 0;
...
Window win = configure_window(output);
INEKernel::configure(win);
References ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), bias, arm_compute::quantization::calculate_quantized_multiplier(), ITensorInfo::data_type(), ITensor::info(), arm_compute::test::validation::input, arm_compute::QSYMM16, ITensorInfo::quantization_info(), UniformQuantizationInfo::scale, ITensorInfo::set_quantization_info(), QuantizationInfo::uniform(), and NEQLSTMLayerNormalizationKernel::validate().
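The excerpt above dispatches the compute routine by data type through a static map of member functions. The self-contained sketch below illustrates that pattern with a hypothetical class (the names `Kernel` and `calls` are placeholders, not the library's API):

```cpp
#include <cassert>
#include <functional>
#include <map>

enum class DataType { QSYMM16, S32 };

// Hypothetical kernel: configure() looks up the compute routine for the
// tensor's data type in a static map of member functions, mirroring the
// fn_map in the configure() excerpt.
class Kernel
{
public:
    void configure(DataType dt)
    {
        using ComputeFuncType = std::function<void(Kernel &)>;
        static const std::map<DataType, ComputeFuncType> fn_map = {
            {DataType::QSYMM16, std::mem_fn(&Kernel::compute_qsymm16)},
        };
        _fn = fn_map.at(dt); // throws std::out_of_range for unsupported types
    }
    void run() { _fn(*this); } // invoke the routine selected at configure time
    int calls = 0;             // counts compute invocations, for illustration

private:
    void compute_qsymm16() { ++calls; }
    std::function<void(Kernel &)> _fn;
};
```

Keeping the map `static const` means the type-to-routine table is built once, and adding a data type is a one-line change.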
◆ name()
Name of the kernel.

const char* name() const
(inline, override, virtual)
◆ operator=() [1/2]
Prevent instances of this class from being copied (as this class contains pointers).
◆ operator=() [2/2]
Default move assignment operator.
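The copy operations are deleted because the class holds pointers, while the move operations are defaulted so ownership can still be transferred. A minimal sketch of that rule of thumb, using a hypothetical `KernelHandle` class (not the library's):

```cpp
#include <cassert>
#include <type_traits>
#include <utility>
#include <vector>

// Copying is deleted because the object owns resources through pointers or
// handles; moving is defaulted because it transfers ownership safely.
class KernelHandle
{
public:
    KernelHandle() = default;
    KernelHandle(const KernelHandle &) = delete;            // no copies
    KernelHandle &operator=(const KernelHandle &) = delete; // no copy-assign
    KernelHandle(KernelHandle &&) = default;                // moves allowed
    KernelHandle &operator=(KernelHandle &&) = default;     // move-assign allowed

private:
    std::vector<int> _workspace; // stands in for owned resources
};

static_assert(!std::is_copy_constructible<KernelHandle>::value, "copies disabled");
static_assert(std::is_move_constructible<KernelHandle>::value, "moves enabled");
```

Deleting the copy operations makes an accidental double-ownership bug a compile error instead of a runtime crash.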
◆ run()
Execute the kernel on the passed window.
◆ validate()
Static function to check if given info will lead to a valid configuration of NEQLSTMLayerNormalizationKernel.
- Parameters

| [in] | input | Source tensor info. Data types supported: QSYMM16. |
| [in] | output | Destination tensor info. Data types supported: Same as input. |
| [in] | weight | Weight tensor info. Data types supported: Same as input. |
| [in] | bias | Bias tensor info. Data types supported: S32. |
- Returns
- a status
Definition at line 139 of file NEQLSTMLayerNormalizationKernel.cpp.
if (output->total_size() != 0)
References ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_RETURN_ERROR_ON, ARM_COMPUTE_RETURN_ERROR_ON_DATA_TYPE_CHANNEL_NOT_IN, ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_DATA_TYPES, ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_SHAPES, ARM_COMPUTE_UNUSED, bias, arm_compute::test::validation::input, ITensorInfo::num_dimensions(), arm_compute::QSYMM16, arm_compute::S32, ITensorInfo::tensor_shape(), ITensorInfo::total_size(), and Dimensions< T >::x().
Referenced by NEQLSTMLayerNormalizationKernel::configure().
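The `ARM_COMPUTE_RETURN_ERROR_ON_*` macros referenced above implement an early-return pattern: each check that fails returns an error Status immediately, and an empty Status means the configuration is valid. A self-contained sketch of that pattern, with a simplified stand-in `Status` and tensor descriptor (the real library types carry far more information):

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Simplified stand-in for the library's Status: empty error string means OK.
struct Status
{
    std::string error;
    bool ok() const { return error.empty(); }
};

// Simplified tensor descriptor for illustration.
struct TensorDesc
{
    size_t num_elements;
    bool   is_qsymm16;
};

// Each check returns an error Status immediately, mirroring the
// ARM_COMPUTE_RETURN_ERROR_ON_* macros; the last line is the success path.
Status validate(const TensorDesc &input, const TensorDesc &weight, const TensorDesc &bias)
{
    if (!input.is_qsymm16)
        return {"input must be QSYMM16"};
    if (weight.num_elements != input.num_elements)
        return {"weight shape must match input"};
    if (bias.num_elements != weight.num_elements)
        return {"bias shape must match weight"};
    return {};
}
```

Because validate() needs only tensor metadata, it can be called statically before any tensor memory is allocated, which is exactly how configure() uses it.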
The documentation for this class was generated from the following files:
- NEQLSTMLayerNormalizationKernel.h
- NEQLSTMLayerNormalizationKernel.cpp