Arm Compute Library 24.02.1: CLLSTMLayerQuantized.h
#ifndef ARM_COMPUTE_CLLSTMLAYERQUANTIZED_H
#define ARM_COMPUTE_CLLSTMLAYERQUANTIZED_H

const ICLTensor *_input_to_input_weights;
const ICLTensor *_input_to_forget_weights;
const ICLTensor *_input_to_output_weights;
const ICLTensor *_recurrent_to_input_weights;
const ICLTensor *_recurrent_to_forget_weights;
const ICLTensor *_recurrent_to_cell_weights;
const ICLTensor *_recurrent_to_output_weights;

CLTensor _input_modulation_gate_input;
CLTensor _input_modulation_gate_output;
Basic function to execute GEMMLowpMatrixMultiplyCore on OpenCL.
Basic function to execute an opencl::kernels::ClTransposeKernel.
void run() override
Run the kernels contained in the function.
Base class for all functions.
CLLSTMLayerQuantized & operator=(const CLLSTMLayerQuantized &)=delete
Prevent instances of this class from being copied (as this class contains pointers).
Basic function to run opencl::ClDequantize that dequantizes an input tensor.
void prepare() override
Prepare the function for execution.
Interface for OpenCL tensor.
Basic implementation of the OpenCL tensor interface.
Basic function to execute concatenate tensors along a given axis.
Basic function to simulate a quantization layer.
Basic function to perform tensor slicing.
Basic function to execute GEMMLowpQuantizeDown kernels on CL.
static Status validate(const ITensorInfo *input,
                       const ITensorInfo *input_to_input_weights, const ITensorInfo *input_to_forget_weights,
                       const ITensorInfo *input_to_cell_weights, const ITensorInfo *input_to_output_weights,
                       const ITensorInfo *recurrent_to_input_weights, const ITensorInfo *recurrent_to_forget_weights,
                       const ITensorInfo *recurrent_to_cell_weights, const ITensorInfo *recurrent_to_output_weights,
                       const ITensorInfo *input_gate_bias, const ITensorInfo *forget_gate_bias,
                       const ITensorInfo *cell_bias, const ITensorInfo *output_gate_bias,
                       const ITensorInfo *cell_state_in, const ITensorInfo *output_state_in,
                       const ITensorInfo *cell_state_out, const ITensorInfo *output_state_out)
Static function to check if given info will lead to a valid configuration of CLLSTMLayerQuantized.
Basic function to run opencl::ClMul.
Copyright (c) 2017-2024 Arm Limited.
CLLSTMLayerQuantized(std::shared_ptr< IMemoryManager > memory_manager=nullptr)
Default constructor.
void configure(const ICLTensor *input,
               const ICLTensor *input_to_input_weights, const ICLTensor *input_to_forget_weights,
               const ICLTensor *input_to_cell_weights, const ICLTensor *input_to_output_weights,
               const ICLTensor *recurrent_to_input_weights, const ICLTensor *recurrent_to_forget_weights,
               const ICLTensor *recurrent_to_cell_weights, const ICLTensor *recurrent_to_output_weights,
               const ICLTensor *input_gate_bias, const ICLTensor *forget_gate_bias,
               const ICLTensor *cell_bias, const ICLTensor *output_gate_bias,
               ICLTensor *cell_state_in, const ICLTensor *output_state_in,
               ICLTensor *cell_state_out, ICLTensor *output_state_out)
Initialize the function's tensors.
Basic function to run CLLSTMLayerQuantized.
Basic function to run opencl::kernels::ClSaturatedArithmeticKernel for addition.
Store the tensor's metadata.
Basic function to run opencl::kernels::ClActivationKernel.