Compute Library
 21.02
NERNNLayer Class Reference

Basic function to run NERNNLayer. More...

#include <NERNNLayer.h>


Public Member Functions

 NERNNLayer (std::shared_ptr< IMemoryManager > memory_manager=nullptr)
 Default constructor. More...
 
 NERNNLayer (const NERNNLayer &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 NERNNLayer (NERNNLayer &&)=delete
 Prevent instances of this class from being moved (As this class contains pointers) More...
 
NERNNLayer & operator= (const NERNNLayer &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
NERNNLayer & operator= (NERNNLayer &&)=delete
 Prevent instances of this class from being moved (As this class contains pointers) More...
 
 ~NERNNLayer ()
 Default destructor. More...
 
void configure (const ITensor *input, const ITensor *weights, const ITensor *recurrent_weights, const ITensor *bias, ITensor *hidden_state, ITensor *output, ActivationLayerInfo &info)
 Initialize the function. More...
 
void run () override
 Run the kernels contained in the function. More...
 
void prepare () override
 Prepare the function for executing. More...
 
- Public Member Functions inherited from IFunction
virtual ~IFunction ()=default
 Destructor. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *weights, const ITensorInfo *recurrent_weights, const ITensorInfo *bias, const ITensorInfo *hidden_state, const ITensorInfo *output, const ActivationLayerInfo &info)
 Initialize the function. More...
 

Detailed Description

Basic function to run NERNNLayer.

Definition at line 40 of file NERNNLayer.h.

Constructor & Destructor Documentation

◆ NERNNLayer() [1/3]

NERNNLayer ( std::shared_ptr< IMemoryManager > memory_manager = nullptr )

Default constructor.

Definition at line 48 of file NERNNLayer.cpp.

49  : _memory_group(std::move(memory_manager)), _gemm_state_f(), _add_f(), _activation(), _fully_connected(memory_manager), _copy_f(), _fully_connected_out(), _gemm_output(), _add_output(),
50  _is_prepared(false)
51 {
52 }

◆ NERNNLayer() [2/3]

NERNNLayer ( const NERNNLayer & )
delete

Prevent instances of this class from being copied (As this class contains pointers)

◆ NERNNLayer() [3/3]

NERNNLayer ( NERNNLayer &&  )
delete

Prevent instances of this class from being moved (As this class contains pointers)

◆ ~NERNNLayer()

~NERNNLayer ( )
default

Default destructor.

Member Function Documentation

◆ configure()

void configure ( const ITensor * input,
const ITensor * weights,
const ITensor * recurrent_weights,
const ITensor * bias,
ITensor * hidden_state,
ITensor * output,
ActivationLayerInfo & info 
)

Initialize the function.

Parameters
[in]     input              Input is a 2-D tensor of shape [input_size, batch_size]. Data types supported: F16/F32
[in]     weights            Weights tensor of shape [input_size, num_units] that multiplies the input. Data types supported: Same as input
[in]     recurrent_weights  Weights tensor of shape [num_units, num_units] that multiplies the current 'state'. Data types supported: Same as input
[in]     bias               Bias vector of shape [num_units]. Data types supported: Same as input
[out]    output             Output tensor of shape [num_units, batch_size]. Data types supported: Same as input
[in,out] hidden_state       Output tensor of shape [num_units, batch_size]. Data types supported: Same as input
[in]     info               Activation layer parameter.

Definition at line 81 of file NERNNLayer.cpp.

References TensorAllocator::allocate(), Tensor::allocator(), ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::misc::shape_calculator::compute_rnn_shape(), NECopy::configure(), NEActivationLayer::configure(), NEArithmeticAddition::configure(), NEGEMM::configure(), NEFullyConnectedLayer::configure(), ITensorInfo::data_layout(), ITensorInfo::data_type(), ITensorInfo::dimension(), arm_compute::get_data_layout_dimension_index(), arm_compute::HEIGHT, arm_compute::test::validation::idx_height, ITensor::info(), arm_compute::test::validation::info, TensorAllocator::init(), MemoryGroup::manage(), arm_compute::SATURATE, arm_compute::test::validation::shape, and NERNNLayer::validate().

83 {
84  ARM_COMPUTE_ERROR_ON_NULLPTR(input, weights, recurrent_weights, bias, hidden_state, output);
85  ARM_COMPUTE_ERROR_THROW_ON(NERNNLayer::validate(input->info(), weights->info(), recurrent_weights->info(), bias->info(), hidden_state->info(), output->info(), info));
86 
87  const int idx_height = get_data_layout_dimension_index(input->info()->data_layout(), DataLayoutDimension::HEIGHT);
88  TensorShape shape = misc::shape_calculator::compute_rnn_shape(recurrent_weights->info(), hidden_state->info()->dimension(idx_height));
89 
90  _is_prepared = false;
91 
92  // Manage intermediate buffers and configure
93  _fully_connected_out.allocator()->init(TensorInfo(shape, 1, input->info()->data_type()));
94  _gemm_output.allocator()->init(TensorInfo(shape, 1, input->info()->data_type()));
95 
96  // Manage intermediate buffers and configure
97  _memory_group.manage(&_fully_connected_out);
98  _fully_connected.configure(input, weights, bias, &_fully_connected_out);
99 
100  _memory_group.manage(&_gemm_output);
101  _gemm_state_f.configure(hidden_state, recurrent_weights, nullptr, &_gemm_output, 1.f, 0.f);
102 
103  _add_output.allocator()->init(TensorInfo(shape, 1, input->info()->data_type()));
104  _memory_group.manage(&_add_output);
105 
106  _add_f.configure(&_fully_connected_out, &_gemm_output, &_add_output, ConvertPolicy::SATURATE);
107 
108  _fully_connected_out.allocator()->allocate();
109  _gemm_output.allocator()->allocate();
110 
111  _activation.configure(&_add_output, hidden_state, info);
112  _add_output.allocator()->allocate();
113 
114  _copy_f.configure(hidden_state, output);
115 }
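The configure() sequence above chains a fully-connected layer, a GEMM on the hidden state, an addition, an activation, and a copy. The computation for one batch element can be sketched in plain C++ (this is not the arm_compute API; the function name, row-major layout, and the tanh activation are illustrative assumptions):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch of the update configure() wires up:
//   hidden_state = activation(weights * input + bias + recurrent_weights * hidden_state)
//   output       = hidden_state
// weights is [num_units x input_size], recurrent is [num_units x num_units], row-major.
std::vector<float> rnn_cell_step(const std::vector<float> &input,     // [input_size]
                                 const std::vector<float> &weights,   // [num_units * input_size]
                                 const std::vector<float> &recurrent, // [num_units * num_units]
                                 const std::vector<float> &bias,      // [num_units]
                                 std::vector<float>       &hidden_state) // [num_units], updated in place
{
    const std::size_t num_units  = bias.size();
    const std::size_t input_size = input.size();
    std::vector<float> acc(num_units, 0.f);
    for(std::size_t u = 0; u < num_units; ++u)
    {
        float sum = bias[u];                                       // _fully_connected adds the bias
        for(std::size_t i = 0; i < input_size; ++i)
        {
            sum += weights[u * input_size + i] * input[i];         // _fully_connected: W * x
        }
        for(std::size_t j = 0; j < num_units; ++j)
        {
            sum += recurrent[u * num_units + j] * hidden_state[j]; // _gemm_state_f + _add_f: R * h
        }
        acc[u] = std::tanh(sum);                                   // _activation (TANH as an example)
    }
    hidden_state = acc;                                            // _activation writes hidden_state
    return hidden_state;                                           // _copy_f: output = hidden_state
}
```

In the real function, _fully_connected, _gemm_state_f, _add_f, _activation and _copy_f each run as a separate Neon kernel over the whole batch, with the intermediate results held in the memory-group-managed tensors.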

◆ operator=() [1/2]

NERNNLayer& operator= ( const NERNNLayer & )
delete

Prevent instances of this class from being copied (As this class contains pointers)

◆ operator=() [2/2]

NERNNLayer& operator= ( NERNNLayer &&  )
delete

Prevent instances of this class from being moved (As this class contains pointers)

◆ prepare()

void prepare ( )
overridevirtual

Prepare the function for executing.

Any one-off pre-processing step required by the function is handled here.

Note
Prepare stage might not need all the function's buffers' backing memory to be available in order to execute

Reimplemented from IFunction.

Definition at line 134 of file NERNNLayer.cpp.

References NEGEMM::prepare(), and NEFullyConnectedLayer::prepare().

Referenced by NERNNLayer::run().

135 {
136  if(!_is_prepared)
137  {
138  _fully_connected.prepare();
139  _gemm_state_f.prepare();
140 
141  _is_prepared = true;
142  }
143 }
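The guard above is the standard lazy-preparation idiom: expensive one-off work (here, whatever NEFullyConnectedLayer::prepare() and NEGEMM::prepare() do, such as weight reshaping) runs at most once, no matter how many times run() is called afterwards. A minimal self-contained sketch of the idiom (the type and member names are illustrative, not arm_compute types):

```cpp
// Lazy one-time preparation guarded by a flag, as prepare()/run() use it.
struct LazyFunction
{
    bool _is_prepared  = false;
    int  prepare_calls = 0; // stands in for the expensive one-off work

    void prepare()
    {
        if(!_is_prepared)
        {
            ++prepare_calls;   // e.g. reshape/transpose weights once
            _is_prepared = true;
        }
    }

    void run()
    {
        prepare();             // run() triggers preparation on first use
        // ... launch kernels ...
    }
};
```

Because run() calls prepare() itself, callers may invoke prepare() explicitly ahead of time (e.g. outside a latency-critical loop) or simply let the first run() do it.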

◆ run()

void run ( )
overridevirtual

Run the kernels contained in the function.

For Neon kernels:

  • Multi-threading is used for the kernels which are parallelisable.
  • By default std::thread::hardware_concurrency() threads are used.
Note
CPPScheduler::set_num_threads() can be used to manually set the number of threads

For OpenCL kernels:

  • All the kernels are enqueued on the queue associated with CLScheduler.
  • The queue is then flushed.
Note
The function will not block until the kernels are executed. It is the user's responsibility to wait.
Will call prepare() on first run if it hasn't been done

Implements IFunction.

Definition at line 117 of file NERNNLayer.cpp.

References NERNNLayer::prepare(), NECopy::run(), NEActivationLayer::run(), NEArithmeticAddition::run(), NEGEMM::run(), and NEFullyConnectedLayer::run().

118 {
119  prepare();
120 
121  MemoryGroupResourceScope scope_mg(_memory_group);
122 
123  _fully_connected.run();
124 
125  _gemm_state_f.run();
126 
127  _add_f.run();
128  _activation.run();
129 
130  // copy hidden out to output
131  _copy_f.run();
132 }

◆ validate()

Status validate ( const ITensorInfo * input,
const ITensorInfo * weights,
const ITensorInfo * recurrent_weights,
const ITensorInfo * bias,
const ITensorInfo * hidden_state,
const ITensorInfo * output,
const ActivationLayerInfo & info 
)
static

Initialize the function.

Parameters
[in] input              Input is a 2-D tensor of shape [input_size, batch_size]. Data types supported: F16/F32
[in] weights            Weights tensor of shape [input_size, num_units] that multiplies the input. Data types supported: Same as input
[in] recurrent_weights  Weights tensor of shape [num_units, num_units] that multiplies the current 'state'. Data types supported: Same as input
[in] bias               Bias vector of shape [num_units]. Data types supported: Same as input
[in] output             Output tensor of shape [num_units, batch_size]. Data types supported: Same as input
[in] hidden_state       Output tensor of shape [num_units, batch_size]. Data types supported: Same as input
[in] info               Activation layer parameter.
Returns
a status

Definition at line 54 of file NERNNLayer.cpp.

References ARM_COMPUTE_RETURN_ERROR_ON, ARM_COMPUTE_RETURN_ERROR_ON_DATA_TYPE_NOT_IN, ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_DIMENSIONS, ARM_COMPUTE_RETURN_ERROR_ON_NULLPTR, ARM_COMPUTE_RETURN_ON_ERROR, arm_compute::misc::shape_calculator::compute_rnn_shape(), ITensorInfo::data_layout(), ITensorInfo::data_type(), ITensorInfo::dimension(), arm_compute::F16, arm_compute::F32, arm_compute::get_data_layout_dimension_index(), arm_compute::HEIGHT, arm_compute::test::validation::idx_height, arm_compute::test::validation::idx_width, ITensorInfo::num_dimensions(), arm_compute::SATURATE, ITensorInfo::tensor_shape(), NEActivationLayer::validate(), NEArithmeticAddition::validate(), NEFullyConnectedLayer::validate(), and arm_compute::WIDTH.

Referenced by NERNNLayer::configure(), and arm_compute::test::validation::DATA_TEST_CASE().

56 {
57  ARM_COMPUTE_RETURN_ERROR_ON_NULLPTR(input, weights, recurrent_weights, bias, hidden_state, output);
58  ARM_COMPUTE_RETURN_ERROR_ON_DATA_TYPE_NOT_IN(input, DataType::F16, DataType::F32);
59 
60  const int idx_width  = get_data_layout_dimension_index(input->data_layout(), DataLayoutDimension::WIDTH);
61  const int idx_height = get_data_layout_dimension_index(input->data_layout(), DataLayoutDimension::HEIGHT);
62  ARM_COMPUTE_RETURN_ERROR_ON(input->dimension(idx_width) != weights->dimension(idx_width));
63  ARM_COMPUTE_RETURN_ERROR_ON(input->num_dimensions() != 2);
64  ARM_COMPUTE_RETURN_ERROR_ON(weights->dimension(idx_height) != recurrent_weights->dimension(idx_width));
65  ARM_COMPUTE_RETURN_ERROR_ON(recurrent_weights->dimension(idx_width) != recurrent_weights->dimension(idx_height));
66  ARM_COMPUTE_RETURN_ERROR_ON(bias->num_dimensions() != 1);
67  ARM_COMPUTE_RETURN_ERROR_ON(bias->dimension(idx_width) != weights->dimension(idx_height));
68  ARM_COMPUTE_RETURN_ERROR_ON(hidden_state->dimension(idx_width) != weights->dimension(idx_height));
69  ARM_COMPUTE_RETURN_ERROR_ON(hidden_state->dimension(idx_height) != input->dimension(idx_height));
70  ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_DIMENSIONS(output->tensor_shape(), hidden_state->tensor_shape());
71 
72  auto shape_info = TensorInfo(misc::shape_calculator::compute_rnn_shape(recurrent_weights, hidden_state->dimension(idx_height)), 1, input->data_type());
73 
74  ARM_COMPUTE_RETURN_ON_ERROR(NEFullyConnectedLayer::validate(input, weights, bias, &shape_info));
75  ARM_COMPUTE_RETURN_ON_ERROR(NEArithmeticAddition::validate(&shape_info, &shape_info, &shape_info, ConvertPolicy::SATURATE));
76  ARM_COMPUTE_RETURN_ON_ERROR(NEActivationLayer::validate(&shape_info, &shape_info, info));
77 
78  return Status{};
79 }
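The ARM_COMPUTE_RETURN_ERROR_ON checks above encode the shape contract between the six tensors. Those rules can be sketched in plain C++ as a boolean predicate (a stand-in for the Status machinery; the function name and argument order are illustrative):

```cpp
#include <cstddef>

// Shape rules validate() enforces, given the raw dimensions:
//   input             [input_size, batch_size]
//   weights           [input_size, num_units]
//   recurrent_weights [num_units, num_units]
//   bias              [num_units]
//   hidden_state      [num_units, batch_size]
bool rnn_shapes_valid(std::size_t input_size, std::size_t batch_size,
                      std::size_t w_width,  std::size_t w_height,  // weights dims
                      std::size_t rw_width, std::size_t rw_height, // recurrent_weights dims
                      std::size_t bias_len,
                      std::size_t hs_width, std::size_t hs_height) // hidden_state dims
{
    if(input_size != w_width)    return false; // input width must match weights width
    if(w_height   != rw_width)   return false; // num_units must agree between weight tensors
    if(rw_width   != rw_height)  return false; // recurrent weights must be square
    if(bias_len   != w_height)   return false; // one bias value per unit
    if(hs_width   != w_height)   return false; // hidden state holds num_units values...
    if(hs_height  != batch_size) return false; // ...for each batch element
    return true;
}
```

The remaining checks (input is 2-D, bias is 1-D, output shape matches hidden_state, data type is F16/F32) follow the same pattern, and the sub-function validate() calls then vet the fully-connected, addition, and activation stages against the computed intermediate shape.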

The documentation for this class was generated from the following files:

NERNNLayer.h
NERNNLayer.cpp