Compute Library 24.02.1: NERNNLayer.h
Fragments recovered from the file listing:

    #ifndef ARM_COMPUTE_NERNNLAYER_H
    #define ARM_COMPUTE_NERNNLAYER_H
    ...
    NERNNLayer(std::shared_ptr<IMemoryManager> memory_manager = nullptr);
    ...
    const ITensor *recurrent_weights,
    ...
    Tensor _fully_connected_out;
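NERNNLayer computes one step of a vanilla recurrent layer: output = activation(input * weights + hidden_state * recurrent_weights + bias), with hidden_state then updated from the output. Below is a minimal plain-C++ sketch of that arithmetic; it is not the Compute Library API, all names are illustrative, and ReLU stands in for the configurable activation:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// One vanilla RNN step on row-major matrices (illustrative only):
// out = act(x * W + h * R + b), then h is updated from out.
// x: [batch, in], W: [in, hidden], R: [hidden, hidden], b: [hidden].
std::vector<float> rnn_step(const std::vector<float> &x,
                            const std::vector<float> &W,
                            std::vector<float> &h,
                            const std::vector<float> &R,
                            const std::vector<float> &b,
                            std::size_t batch, std::size_t in, std::size_t hidden)
{
    std::vector<float> out(batch * hidden, 0.0f);
    for (std::size_t r = 0; r < batch; ++r)
    {
        for (std::size_t c = 0; c < hidden; ++c)
        {
            float acc = b[c];
            for (std::size_t k = 0; k < in; ++k)     // input * weights
                acc += x[r * in + k] * W[k * hidden + c];
            for (std::size_t k = 0; k < hidden; ++k) // hidden_state * recurrent_weights
                acc += h[r * hidden + k] * R[k * hidden + c];
            out[r * hidden + c] = acc > 0.0f ? acc : 0.0f; // ReLU stand-in activation
        }
    }
    h = out; // the hidden state is overwritten with the new output
    return out;
}
```

In the real layer the two products are handled by a fully connected function and a GEMM, the sum by an addition function, and the hidden-state update by a copy, which matches the member functions listed on this page.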
Basic function to run NERNNLayer.

Public member functions:

NERNNLayer(std::shared_ptr<IMemoryManager> memory_manager = nullptr)
    Default constructor.
NERNNLayer &operator=(const NERNNLayer &) = delete
    Prevent instances of this class from being copied (as this class contains pointers).
~NERNNLayer()
    Default destructor.
void configure(const ITensor *input, const ITensor *weights, const ITensor *recurrent_weights, const ITensor *bias, ITensor *hidden_state, ITensor *output, ActivationLayerInfo &info)
    Initialize the function.
static Status validate(const ITensorInfo *input, const ITensorInfo *weights, const ITensorInfo *recurrent_weights, const ITensorInfo *bias, const ITensorInfo *hidden_state, const ITensorInfo *output, const ActivationLayerInfo &info)
    Check whether the given tensor info would lead to a valid configuration of the function.
void run() override
    Run the kernels contained in the function.
void prepare() override
    Prepare the function for executing.

Referenced classes (briefs taken from the page's tooltips; class names reconstructed from the Compute Library):

IFunction: Base class for all functions.
ITensor: Interface for CPU tensor.
ITensorInfo: Store the tensor's metadata.
Tensor: Basic implementation of the tensor interface.
ActivationLayerInfo: Activation Layer Information class.
NEFullyConnectedLayer: Basic function to compute a Fully Connected layer.
NEGEMM: Basic function to execute GEMM.
NEArithmeticAddition: Basic function to run cpu::kernels::CpuAddKernel.
NEActivationLayer: Basic function to run cpu::kernels::CpuActivationKernel.
NECopy: Basic function to run cpu::kernels::CpuCopyKernel.

Copyright (c) 2017-2024 Arm Limited.
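validate() lets callers check tensor metadata before any memory is allocated. The sketch below illustrates the kind of dimension-compatibility checks involved, in plain C++ with an invented SimpleShape descriptor; it is not arm_compute::ITensorInfo, and the library's actual checks (data types, layouts, quantization) go further than this:

```cpp
#include <cassert>
#include <cstddef>

// Invented stand-in for a 2-D tensor descriptor (not arm_compute::ITensorInfo).
struct SimpleShape
{
    std::size_t rows;
    std::size_t cols;
};

// Shape checks in the spirit of NERNNLayer::validate(), assuming:
// input:             [batch, input_size]
// weights:           [input_size, hidden]
// recurrent_weights: [hidden, hidden]
// bias:              [1, hidden]
// hidden_state/out:  [batch, hidden]
bool rnn_shapes_valid(SimpleShape input, SimpleShape weights,
                      SimpleShape recurrent, SimpleShape bias,
                      SimpleShape hidden_state, SimpleShape output)
{
    const std::size_t hidden = weights.cols;
    return input.cols == weights.rows       // input * weights is well formed
        && recurrent.rows == hidden         // hidden_state * recurrent_weights matches
        && recurrent.cols == hidden         // recurrent_weights must be square
        && bias.cols == hidden
        && hidden_state.rows == input.rows
        && hidden_state.cols == hidden
        && output.rows == hidden_state.rows
        && output.cols == hidden_state.cols;
}
```

Because validate() is static and takes only ITensorInfo pointers, this style of check can run before tensors are allocated or the function is configured.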