Compute Library 21.02

CLLSTMLayerQuantized Class Reference

Basic function to run CLLSTMLayerQuantized. More...

#include <CLLSTMLayerQuantized.h>
Public Member Functions

CLLSTMLayerQuantized (std::shared_ptr< IMemoryManager > memory_manager=nullptr)
    Default constructor.
CLLSTMLayerQuantized (const CLLSTMLayerQuantized &)=delete
    Prevent instances of this class from being copied (as this class contains pointers).
CLLSTMLayerQuantized (CLLSTMLayerQuantized &&)=default
    Default move constructor.
CLLSTMLayerQuantized & operator= (const CLLSTMLayerQuantized &)=delete
    Prevent instances of this class from being copied (as this class contains pointers).
CLLSTMLayerQuantized & operator= (CLLSTMLayerQuantized &&)=default
    Default move assignment operator.
void configure (const ICLTensor *input, const ICLTensor *input_to_input_weights, const ICLTensor *input_to_forget_weights, const ICLTensor *input_to_cell_weights, const ICLTensor *input_to_output_weights, const ICLTensor *recurrent_to_input_weights, const ICLTensor *recurrent_to_forget_weights, const ICLTensor *recurrent_to_cell_weights, const ICLTensor *recurrent_to_output_weights, const ICLTensor *input_gate_bias, const ICLTensor *forget_gate_bias, const ICLTensor *cell_bias, const ICLTensor *output_gate_bias, ICLTensor *cell_state_in, const ICLTensor *output_state_in, ICLTensor *cell_state_out, ICLTensor *output_state_out)
    Initialize function's tensors.
void configure (const CLCompileContext &compile_context, const ICLTensor *input, const ICLTensor *input_to_input_weights, const ICLTensor *input_to_forget_weights, const ICLTensor *input_to_cell_weights, const ICLTensor *input_to_output_weights, const ICLTensor *recurrent_to_input_weights, const ICLTensor *recurrent_to_forget_weights, const ICLTensor *recurrent_to_cell_weights, const ICLTensor *recurrent_to_output_weights, const ICLTensor *input_gate_bias, const ICLTensor *forget_gate_bias, const ICLTensor *cell_bias, const ICLTensor *output_gate_bias, ICLTensor *cell_state_in, const ICLTensor *output_state_in, ICLTensor *cell_state_out, ICLTensor *output_state_out)
    Initialize function's tensors.
void run () override
    Run the kernels contained in the function.
void prepare () override
    Prepare the function for executing.

Public Member Functions inherited from IFunction

virtual ~IFunction ()=default
    Destructor.

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *input_to_input_weights, const ITensorInfo *input_to_forget_weights, const ITensorInfo *input_to_cell_weights, const ITensorInfo *input_to_output_weights, const ITensorInfo *recurrent_to_input_weights, const ITensorInfo *recurrent_to_forget_weights, const ITensorInfo *recurrent_to_cell_weights, const ITensorInfo *recurrent_to_output_weights, const ITensorInfo *input_gate_bias, const ITensorInfo *forget_gate_bias, const ITensorInfo *cell_bias, const ITensorInfo *output_gate_bias, const ITensorInfo *cell_state_in, const ITensorInfo *output_state_in, const ITensorInfo *cell_state_out, const ITensorInfo *output_state_out)
    Static function to check if given info will lead to a valid configuration of CLLSTMLayerQuantized.
Basic function to run CLLSTMLayerQuantized.

This function calls the following CL functions/kernels:

    CLGEMMLowpMatrixMultiplyCore
    CLGEMMLowpQuantizeDownInt32ToInt16ScaleByFixedPoint
    CLTranspose
    CLConcatenateLayer
    CLActivationLayer
    CLArithmeticAddition
    CLSlice
    CLDequantizationLayer
    CLQuantizationLayer
    CLPixelWiseMultiplication

Definition at line 61 of file CLLSTMLayerQuantized.h.
CLLSTMLayerQuantized ( std::shared_ptr< IMemoryManager > memory_manager = nullptr )
Default constructor.
Definition at line 53 of file CLLSTMLayerQuantized.cpp.
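As a sketch of how the constructor's optional memory manager might be supplied, the snippet below wires up the standard Compute Library runtime pieces (BlobLifetimeManager, PoolManager, MemoryManagerOnDemand) so that the function's intermediate tensors can share memory. This is an illustrative assumption about typical usage, not the only way to construct the function.

```cpp
// Sketch: constructing CLLSTMLayerQuantized with an on-demand memory
// manager so intermediate tensors used by the internal kernels can
// share backing memory.
#include "arm_compute/runtime/BlobLifetimeManager.h"
#include "arm_compute/runtime/MemoryManagerOnDemand.h"
#include "arm_compute/runtime/PoolManager.h"
#include "arm_compute/runtime/CL/functions/CLLSTMLayerQuantized.h"

using namespace arm_compute;

int main()
{
    // The lifetime manager tracks when intermediate buffers are alive;
    // the pool manager owns the memory pools they are served from.
    auto lifetime_mgr = std::make_shared<BlobLifetimeManager>();
    auto pool_mgr     = std::make_shared<PoolManager>();
    auto memory_mgr   = std::make_shared<MemoryManagerOnDemand>(lifetime_mgr, pool_mgr);

    // Passing nullptr (the default) is also valid: the function then
    // allocates its intermediate tensors independently.
    CLLSTMLayerQuantized lstm{memory_mgr};
    return 0;
}
```

Before running, the manager's pools would typically be populated with a suitable allocator; that step is omitted here for brevity.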
CLLSTMLayerQuantized ( const CLLSTMLayerQuantized & )  [delete]

Prevent instances of this class from being copied (as this class contains pointers).

CLLSTMLayerQuantized ( CLLSTMLayerQuantized && )  [default]

Default move constructor.
void configure ( const ICLTensor * input,
                 const ICLTensor * input_to_input_weights,
                 const ICLTensor * input_to_forget_weights,
                 const ICLTensor * input_to_cell_weights,
                 const ICLTensor * input_to_output_weights,
                 const ICLTensor * recurrent_to_input_weights,
                 const ICLTensor * recurrent_to_forget_weights,
                 const ICLTensor * recurrent_to_cell_weights,
                 const ICLTensor * recurrent_to_output_weights,
                 const ICLTensor * input_gate_bias,
                 const ICLTensor * forget_gate_bias,
                 const ICLTensor * cell_bias,
                 const ICLTensor * output_gate_bias,
                 ICLTensor *       cell_state_in,
                 const ICLTensor * output_state_in,
                 ICLTensor *       cell_state_out,
                 ICLTensor *       output_state_out )
Initialize function's tensors.
Parameters

    [in]  input                        Source tensor. Input is a 2D tensor with dimensions [input_size, batch_size]. Data types supported: QASYMM8.
    [in]  input_to_input_weights       2D weights tensor with dimensions [input_size, output_size]. Data type supported: Same as input.
    [in]  input_to_forget_weights      2D weights tensor with dimensions [input_size, output_size]. Data type supported: Same as input.
    [in]  input_to_cell_weights        2D weights tensor with dimensions [input_size, output_size]. Data type supported: Same as input.
    [in]  input_to_output_weights      2D weights tensor with dimensions [input_size, output_size]. Data type supported: Same as input.
    [in]  recurrent_to_input_weights   2D weights tensor with dimensions [output_size, output_size]. Data type supported: Same as input.
    [in]  recurrent_to_forget_weights  2D weights tensor with dimensions [output_size, output_size]. Data type supported: Same as input.
    [in]  recurrent_to_cell_weights    2D weights tensor with dimensions [output_size, output_size]. Data type supported: Same as input.
    [in]  recurrent_to_output_weights  2D weights tensor with dimensions [output_size, output_size]. Data type supported: Same as input.
    [in]  input_gate_bias              1D weights tensor with dimensions [output_size]. Data type supported: S32.
    [in]  forget_gate_bias             1D weights tensor with dimensions [output_size]. Data type supported: S32.
    [in]  cell_bias                    1D weights tensor with dimensions [output_size]. Data type supported: S32.
    [in]  output_gate_bias             1D weights tensor with dimensions [output_size]. Data type supported: S32.
    [in]  cell_state_in                2D tensor with dimensions [output_size, batch_size]. Data type supported: QSYMM16.
    [in]  output_state_in              2D tensor with dimensions [output_size, batch_size]. Data type supported: Same as input.
    [out] cell_state_out               Destination tensor. Output is a 2D tensor with dimensions [output_size, batch_size]. Data type supported: QSYMM16.
    [out] output_state_out             Destination tensor. Output is a 2D tensor with dimensions [output_size, batch_size]. Data types supported: Same as input.
Definition at line 65 of file CLLSTMLayerQuantized.cpp.
References CLKernelLibrary::get().
Referenced by arm_compute::test::validation::TEST_CASE().
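The following sketch shows a typical configure/allocate/run sequence. The sizes (input_size = 32, output_size = 16, batch_size = 1) and quantization parameters are illustrative assumptions, not values mandated by the API; real models supply their own scales and offsets, and the cell-state scale shown (2^-11) follows the usual quantized-LSTM convention.

```cpp
// Sketch: end-to-end use of CLLSTMLayerQuantized::configure() and run()
// with illustrative shapes and quantization parameters.
#include <initializer_list>
#include "arm_compute/runtime/CL/CLScheduler.h"
#include "arm_compute/runtime/CL/CLTensor.h"
#include "arm_compute/runtime/CL/functions/CLLSTMLayerQuantized.h"

using namespace arm_compute;

int main()
{
    CLScheduler::get().default_init();  // set up context/queue once

    const unsigned int input_size = 32, output_size = 16, batch_size = 1;
    const QuantizationInfo qasymm(1.f / 128.f, 128);   // input / output states
    const QuantizationInfo qweights(1.f / 16.f, 16);   // all weight tensors
    const QuantizationInfo qsymm16(1.f / 2048.f, 0);   // cell state (2^-11)

    // Local helper (not part of the API) to initialise a tensor's metadata.
    auto init = [](CLTensor &t, TensorShape s, DataType dt, QuantizationInfo qi) {
        t.allocator()->init(TensorInfo(s, 1, dt, qi));
    };

    CLTensor input, w_i2i, w_i2f, w_i2c, w_i2o, w_r2i, w_r2f, w_r2c, w_r2o;
    CLTensor b_i, b_f, b_c, b_o, cell_in, out_in, cell_out, out_out;

    init(input, TensorShape(input_size, batch_size), DataType::QASYMM8, qasymm);
    for (CLTensor *w : {&w_i2i, &w_i2f, &w_i2c, &w_i2o})
        init(*w, TensorShape(input_size, output_size), DataType::QASYMM8, qweights);
    for (CLTensor *w : {&w_r2i, &w_r2f, &w_r2c, &w_r2o})
        init(*w, TensorShape(output_size, output_size), DataType::QASYMM8, qweights);
    for (CLTensor *b : {&b_i, &b_f, &b_c, &b_o})
        init(*b, TensorShape(output_size), DataType::S32, QuantizationInfo());
    init(cell_in,  TensorShape(output_size, batch_size), DataType::QSYMM16, qsymm16);
    init(out_in,   TensorShape(output_size, batch_size), DataType::QASYMM8, qasymm);
    init(cell_out, TensorShape(output_size, batch_size), DataType::QSYMM16, qsymm16);
    init(out_out,  TensorShape(output_size, batch_size), DataType::QASYMM8, qasymm);

    CLLSTMLayerQuantized lstm;
    lstm.configure(&input, &w_i2i, &w_i2f, &w_i2c, &w_i2o,
                   &w_r2i, &w_r2f, &w_r2c, &w_r2o,
                   &b_i, &b_f, &b_c, &b_o,
                   &cell_in, &out_in, &cell_out, &out_out);

    // Allocate backing CL buffers only after configure().
    for (CLTensor *t : {&input, &w_i2i, &w_i2f, &w_i2c, &w_i2o,
                        &w_r2i, &w_r2f, &w_r2c, &w_r2o,
                        &b_i, &b_f, &b_c, &b_o,
                        &cell_in, &out_in, &cell_out, &out_out})
        t->allocator()->allocate();

    // (Fill input, weights and biases here via map()/unmap(), then:)
    lstm.run();
    CLScheduler::get().sync();
    return 0;
}
```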
void configure ( const CLCompileContext & compile_context,
                 const ICLTensor *        input,
                 const ICLTensor *        input_to_input_weights,
                 const ICLTensor *        input_to_forget_weights,
                 const ICLTensor *        input_to_cell_weights,
                 const ICLTensor *        input_to_output_weights,
                 const ICLTensor *        recurrent_to_input_weights,
                 const ICLTensor *        recurrent_to_forget_weights,
                 const ICLTensor *        recurrent_to_cell_weights,
                 const ICLTensor *        recurrent_to_output_weights,
                 const ICLTensor *        input_gate_bias,
                 const ICLTensor *        forget_gate_bias,
                 const ICLTensor *        cell_bias,
                 const ICLTensor *        output_gate_bias,
                 ICLTensor *              cell_state_in,
                 const ICLTensor *        output_state_in,
                 ICLTensor *              cell_state_out,
                 ICLTensor *              output_state_out )
Initialize function's tensors.
Parameters

    [in]  compile_context              The compile context to be used.
    [in]  input                        Source tensor. Input is a 2D tensor with dimensions [input_size, batch_size]. Data types supported: QASYMM8.
    [in]  input_to_input_weights       2D weights tensor with dimensions [input_size, output_size]. Data type supported: Same as input.
    [in]  input_to_forget_weights      2D weights tensor with dimensions [input_size, output_size]. Data type supported: Same as input.
    [in]  input_to_cell_weights        2D weights tensor with dimensions [input_size, output_size]. Data type supported: Same as input.
    [in]  input_to_output_weights      2D weights tensor with dimensions [input_size, output_size]. Data type supported: Same as input.
    [in]  recurrent_to_input_weights   2D weights tensor with dimensions [output_size, output_size]. Data type supported: Same as input.
    [in]  recurrent_to_forget_weights  2D weights tensor with dimensions [output_size, output_size]. Data type supported: Same as input.
    [in]  recurrent_to_cell_weights    2D weights tensor with dimensions [output_size, output_size]. Data type supported: Same as input.
    [in]  recurrent_to_output_weights  2D weights tensor with dimensions [output_size, output_size]. Data type supported: Same as input.
    [in]  input_gate_bias              1D weights tensor with dimensions [output_size]. Data type supported: S32.
    [in]  forget_gate_bias             1D weights tensor with dimensions [output_size]. Data type supported: S32.
    [in]  cell_bias                    1D weights tensor with dimensions [output_size]. Data type supported: S32.
    [in]  output_gate_bias             1D weights tensor with dimensions [output_size]. Data type supported: S32.
    [in]  cell_state_in                2D tensor with dimensions [output_size, batch_size]. Data type supported: QSYMM16.
    [in]  output_state_in              2D tensor with dimensions [output_size, batch_size]. Data type supported: Same as input.
    [out] cell_state_out               Destination tensor. Output is a 2D tensor with dimensions [output_size, batch_size]. Data type supported: QSYMM16.
    [out] output_state_out             Destination tensor. Output is a 2D tensor with dimensions [output_size, batch_size]. Data types supported: Same as input.
Definition at line 77 of file CLLSTMLayerQuantized.cpp.
References CLTensorAllocator::allocate(), CLTensor::allocator(), ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), arm_compute::quantization::calculate_quantized_multiplier(), CLDequantizationLayer::configure(), CLTranspose::configure(), CLQuantizationLayer::configure(), CLActivationLayer::configure(), CLConcatenateLayer::configure(), CLGEMMLowpMatrixMultiplyCore::configure(), CLArithmeticAddition::configure(), CLSlice::configure(), CLPixelWiseMultiplication::configure(), CLGEMMLowpQuantizeDownInt32ToInt16ScaleByFixedPoint::configure(), ITensorInfo::dimension(), Window::DimX, Window::DimY, arm_compute::F32, arm_compute::test::validation::forget_gate_bias, ITensor::info(), CLTensor::info(), ITensorAllocator::init(), arm_compute::test::validation::input_gate_bias, arm_compute::test::validation::input_size, arm_compute::test::validation::input_to_cell_weights, arm_compute::test::validation::input_to_forget_weights, arm_compute::test::validation::input_to_input_weights, arm_compute::test::validation::input_to_output_weights, ActivationLayerInfo::LOGISTIC, MemoryGroup::manage(), UniformQuantizationInfo::offset, arm_compute::test::validation::output_gate_bias, arm_compute::test::validation::output_size, arm_compute::test::validation::qasymm(), arm_compute::QASYMM8, arm_compute::QSYMM16, arm_compute::test::validation::qsymm_3(), arm_compute::test::validation::qsymm_4(), ITensorInfo::quantization_info(), arm_compute::test::validation::qweights(), arm_compute::test::validation::recurrent_to_cell_weights, arm_compute::test::validation::recurrent_to_forget_weights, arm_compute::test::validation::recurrent_to_input_weights, arm_compute::test::validation::recurrent_to_output_weights, arm_compute::S32, arm_compute::SATURATE, UniformQuantizationInfo::scale, TensorInfo::set_quantization_info(), ActivationLayerInfo::TANH, ITensorInfo::tensor_shape(), TensorInfo::tensor_shape(), arm_compute::TO_ZERO, QuantizationInfo::uniform(), and CLLSTMLayerQuantized::validate().
CLLSTMLayerQuantized & operator= ( const CLLSTMLayerQuantized & )  [delete]

Prevent instances of this class from being copied (as this class contains pointers).

CLLSTMLayerQuantized & operator= ( CLLSTMLayerQuantized && )  [default]

Default move assignment operator.
void prepare ( )  [override, virtual]

Prepare the function for executing.

Any one-off pre-processing steps required by the function are handled here.
Reimplemented from IFunction.
Definition at line 532 of file CLLSTMLayerQuantized.cpp.
References CLTensorAllocator::allocate(), CLTensor::allocator(), CLTensorAllocator::free(), ITensor::mark_as_unused(), ICLSimpleFunction::run(), and CLConcatenateLayer::run().
Referenced by CLLSTMLayerQuantized::run().
void run ( )  [override, virtual]

Run the kernels contained in the function.

For Neon kernels:
    Multi-threading is used for the kernels which are parallelisable.
    By default std::thread::hardware_concurrency() threads are used.
For OpenCL kernels:
    All the kernels are enqueued on the queue associated with CLScheduler.
    The queue is then flushed.

Implements IFunction.
Definition at line 485 of file CLLSTMLayerQuantized.cpp.
References CLLSTMLayerQuantized::prepare(), ICLSimpleFunction::run(), CLActivationLayer::run(), CLConcatenateLayer::run(), CLGEMMLowpMatrixMultiplyCore::run(), CLArithmeticAddition::run(), CLSlice::run(), and CLPixelWiseMultiplication::run().
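Since run() computes a single LSTM timestep (and, per the References above, calls prepare() internally on first use), driving a sequence means calling run() once per step and feeding the produced states back as the next step's inputs. The sketch below assumes tensors configured as in the configure() example; copy_state is a hypothetical helper, not a library function.

```cpp
// Sketch: running CLLSTMLayerQuantized over a sequence, one run() per
// timestep, feeding the output states back as the next step's inputs.
#include <cstring>
#include "arm_compute/runtime/CL/CLScheduler.h"
#include "arm_compute/runtime/CL/CLTensor.h"
#include "arm_compute/runtime/CL/functions/CLLSTMLayerQuantized.h"

using namespace arm_compute;

// Hypothetical helper: copy one state tensor into another by mapping
// both onto the host and doing a raw byte copy (shapes/types must match).
static void copy_state(CLTensor &src, CLTensor &dst)
{
    src.map(true);
    dst.map(true);
    std::memcpy(dst.buffer(), src.buffer(), src.info()->total_size());
    dst.unmap();
    src.unmap();
}

void run_sequence(CLLSTMLayerQuantized &lstm, size_t num_steps,
                  CLTensor &cell_in, CLTensor &out_in,
                  CLTensor &cell_out, CLTensor &out_out)
{
    for (size_t t = 0; t < num_steps; ++t)
    {
        // (upload this timestep's input data here)
        lstm.run();                      // prepare() happens on the first run
        CLScheduler::get().sync();       // wait for the enqueued kernels
        copy_state(cell_out, cell_in);   // feed states back for step t + 1
        copy_state(out_out, out_in);
    }
}
```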
static Status validate ( const ITensorInfo * input, const ITensorInfo * input_to_input_weights, const ITensorInfo * input_to_forget_weights, const ITensorInfo * input_to_cell_weights, const ITensorInfo * input_to_output_weights, const ITensorInfo * recurrent_to_input_weights, const ITensorInfo * recurrent_to_forget_weights, const ITensorInfo * recurrent_to_cell_weights, const ITensorInfo * recurrent_to_output_weights, const ITensorInfo * input_gate_bias, const ITensorInfo * forget_gate_bias, const ITensorInfo * cell_bias, const ITensorInfo * output_gate_bias, const ITensorInfo * cell_state_in, const ITensorInfo * output_state_in, const ITensorInfo * cell_state_out, const ITensorInfo * output_state_out )  [static]
Static function to check if given info will lead to a valid configuration of CLLSTMLayerQuantized.
Parameters

    [in]  input                        Source tensor info. Input is a 2D tensor info with dimensions [input_size, batch_size]. Data types supported: QASYMM8.
    [in]  input_to_input_weights       2D weights tensor info with dimensions [input_size, output_size]. Data type supported: Same as input.
    [in]  input_to_forget_weights      2D weights tensor info with dimensions [input_size, output_size]. Data type supported: Same as input.
    [in]  input_to_cell_weights        2D weights tensor info with dimensions [input_size, output_size]. Data type supported: Same as input.
    [in]  input_to_output_weights      2D weights tensor info with dimensions [input_size, output_size]. Data type supported: Same as input.
    [in]  recurrent_to_input_weights   2D weights tensor info with dimensions [output_size, output_size]. Data type supported: Same as input.
    [in]  recurrent_to_forget_weights  2D weights tensor info with dimensions [output_size, output_size]. Data type supported: Same as input.
    [in]  recurrent_to_cell_weights    2D weights tensor info with dimensions [output_size, output_size]. Data type supported: Same as input.
    [in]  recurrent_to_output_weights  2D weights tensor info with dimensions [output_size, output_size]. Data type supported: Same as input.
    [in]  input_gate_bias              1D weights tensor info with dimensions [output_size]. Data type supported: S32.
    [in]  forget_gate_bias             1D weights tensor info with dimensions [output_size]. Data type supported: S32.
    [in]  cell_bias                    1D weights tensor info with dimensions [output_size]. Data type supported: S32.
    [in]  output_gate_bias             1D weights tensor info with dimensions [output_size]. Data type supported: S32.
    [in]  cell_state_in                2D tensor info with dimensions [output_size, batch_size]. Data type supported: QSYMM16.
    [in]  output_state_in              2D tensor info with dimensions [output_size, batch_size]. Data type supported: Same as input.
    [out] cell_state_out               Destination tensor info. Output is a 2D tensor info with dimensions [output_size, batch_size]. Data type supported: QSYMM16.
    [out] output_state_out             Destination tensor info. Output is a 2D tensor info with dimensions [output_size, batch_size]. Data types supported: Same as input.
Definition at line 275 of file CLLSTMLayerQuantized.cpp.
References ARM_COMPUTE_RETURN_ERROR_ON, ARM_COMPUTE_RETURN_ERROR_ON_DATA_TYPE_NOT_IN, ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_DATA_TYPES, ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_QUANTIZATION_INFO, ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_SHAPES, ARM_COMPUTE_RETURN_ERROR_ON_NULLPTR, ARM_COMPUTE_RETURN_ON_ERROR, arm_compute::test::validation::bias_info, arm_compute::quantization::calculate_quantized_multiplier(), ICloneable< T >::clone(), TensorInfo::clone(), ITensorInfo::dimension(), Window::DimX, Window::DimY, arm_compute::F32, arm_compute::test::validation::input_size, ActivationLayerInfo::LOGISTIC, ITensorInfo::num_dimensions(), UniformQuantizationInfo::offset, arm_compute::test::validation::output_size, arm_compute::test::validation::qasymm(), arm_compute::QASYMM8, arm_compute::QSYMM16, arm_compute::test::validation::qsymm_3(), arm_compute::test::validation::qsymm_4(), ITensorInfo::quantization_info(), arm_compute::test::validation::qweights(), arm_compute::S32, arm_compute::SATURATE, UniformQuantizationInfo::scale, ITensorInfo::set_quantization_info(), TensorInfo::set_quantization_info(), ActivationLayerInfo::TANH, ITensorInfo::tensor_shape(), TensorInfo::tensor_shape(), arm_compute::TO_ZERO, ITensorInfo::total_size(), QuantizationInfo::uniform(), CLDequantizationLayer::validate(), CLTranspose::validate(), CLQuantizationLayer::validate(), CLActivationLayer::validate(), CLConcatenateLayer::validate(), CLGEMMLowpMatrixMultiplyCore::validate(), CLArithmeticAddition::validate(), CLSlice::validate(), CLPixelWiseMultiplication::validate(), and CLGEMMLowpQuantizeDownInt32ToInt16ScaleByFixedPoint::validate().
Referenced by CLLSTMLayerQuantized::configure().
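Because validate() operates on ITensorInfo only, a configuration can be checked without creating any CL tensors or touching the device. The sketch below mirrors the shapes and quantization parameters of the configure() example; all concrete values are illustrative assumptions.

```cpp
// Sketch: checking a CLLSTMLayerQuantized configuration up front with
// validate(), using only TensorInfo metadata (no CL resources needed).
#include "arm_compute/runtime/CL/functions/CLLSTMLayerQuantized.h"

using namespace arm_compute;

int main()
{
    const unsigned int input_size = 32, output_size = 16, batch_size = 1;
    const QuantizationInfo qasymm(1.f / 128.f, 128);
    const QuantizationInfo qweights(1.f / 16.f, 16);
    const QuantizationInfo qsymm16(1.f / 2048.f, 0);

    // One info object per distinct shape/type; weight inputs sharing a
    // shape can reuse the same info for the purposes of this check.
    const TensorInfo input(TensorShape(input_size, batch_size), 1, DataType::QASYMM8, qasymm);
    const TensorInfo in_w(TensorShape(input_size, output_size), 1, DataType::QASYMM8, qweights);
    const TensorInfo rec_w(TensorShape(output_size, output_size), 1, DataType::QASYMM8, qweights);
    const TensorInfo bias(TensorShape(output_size), 1, DataType::S32, QuantizationInfo());
    const TensorInfo cell(TensorShape(output_size, batch_size), 1, DataType::QSYMM16, qsymm16);
    const TensorInfo state(TensorShape(output_size, batch_size), 1, DataType::QASYMM8, qasymm);

    const Status s = CLLSTMLayerQuantized::validate(&input,
                                                    &in_w, &in_w, &in_w, &in_w,
                                                    &rec_w, &rec_w, &rec_w, &rec_w,
                                                    &bias, &bias, &bias, &bias,
                                                    &cell, &state, &cell, &state);
    return (s.error_code() == ErrorCode::OK) ? 0 : 1;
}
```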