ArmNN
 25.11
OutputLayer Class Reference

A layer user-provided data can be bound to (e.g. inputs, outputs). More...

#include <OutputLayer.hpp>

Inheritance diagram for OutputLayer
Collaboration diagram for OutputLayer

Public Member Functions

virtual std::unique_ptr< IWorkload > CreateWorkload (const IWorkloadFactory &factory) const override
 Returns nullptr for Output type.
virtual void CreateTensorHandles (const TensorHandleFactoryRegistry &registry, const IWorkloadFactory &factory, const bool isMemoryManaged=true) override
 Set the outputs to be appropriate sub-tensors of the input if sub-tensors are supported; otherwise creates tensor handles by default.
OutputLayer * Clone (Graph &graph) const override
 Creates a dynamically-allocated copy of this layer.
void ValidateTensorShapesFromInputs () override
 Check if the input tensor shape(s) will lead to a valid configuration of OutputLayer.
void ExecuteStrategy (IStrategy &strategy) const override
 Apply a visitor to this layer.
Public Member Functions inherited from BindableLayer
 BindableLayer (unsigned int numInputSlots, unsigned int numOutputSlots, LayerType type, const char *name, LayerBindingId id)
LayerBindingId GetBindingId () const
Public Member Functions inherited from Layer
 Layer (unsigned int numInputSlots, unsigned int numOutputSlots, LayerType type, const char *name)
 Layer (unsigned int numInputSlots, unsigned int numOutputSlots, LayerType type, DataLayout layout, const char *name)
const std::string & GetNameStr () const
const OutputHandler & GetOutputHandler (unsigned int i=0) const
OutputHandler & GetOutputHandler (unsigned int i=0)
ShapeInferenceMethod GetShapeInferenceMethod () const
bool GetAllowExpandedDims () const
const std::vector< InputSlot > & GetInputSlots () const
const std::vector< OutputSlot > & GetOutputSlots () const
std::vector< InputSlot >::iterator BeginInputSlots ()
std::vector< InputSlot >::iterator EndInputSlots ()
std::vector< OutputSlot >::iterator BeginOutputSlots ()
std::vector< OutputSlot >::iterator EndOutputSlots ()
bool IsOutputUnconnected ()
void ResetPriority () const
LayerPriority GetPriority () const
LayerType GetType () const override
 Returns the armnn::LayerType of this layer.
DataType GetDataType () const
const BackendId & GetBackendId () const
void SetBackendId (const BackendId &id) override
 Set the backend of the IConnectableLayer.
void VerifyLayerConnections (unsigned int expectedConnections, const CheckLocation &location) const
std::vector< TensorShape > InferOutputShapes (const std::vector< TensorShape > &inputShapes) const override
 Infer the shape of the output(s) based on the provided input shape(s)
virtual void SerializeLayerParameters (ParameterStringifyFunction &fn) const
 Helper to serialize the layer parameters to string.
virtual void ReleaseConstantData ()
template<typename Op>
void OperateOnConstantTensors (Op op)
const char * GetName () const override
 Returns the name of the layer.
unsigned int GetNumInputSlots () const override
 Returns the number of connectable input slots.
unsigned int GetNumOutputSlots () const override
 Returns the number of connectable output slots.
const InputSlot & GetInputSlot (unsigned int index) const override
 Get a const input slot handle by slot index.
InputSlot & GetInputSlot (unsigned int index) override
 Get the input slot handle by slot index.
const OutputSlot & GetOutputSlot (unsigned int index=0) const override
 Get the const output slot handle by slot index.
OutputSlot & GetOutputSlot (unsigned int index=0) override
 Get the output slot handle by slot index.
void SetGuid (LayerGuid guid)
LayerGuid GetGuid () const final
 Returns the unique id of the layer.
void AddRelatedLayerName (const std::string layerName)
const std::list< std::string > & GetRelatedLayerNames ()
virtual void Reparent (Graph &dest, std::list< Layer * >::const_iterator iterator)=0
void BackendSelectionHint (Optional< BackendId > backend) final
 Provide a hint for the optimizer as to which backend to prefer for this layer.
Optional< BackendId > GetBackendHint () const
void SetShapeInferenceMethod (ShapeInferenceMethod shapeInferenceMethod)
void SetAllowExpandedDims (bool allowExpandedDims)
template<typename T>
std::shared_ptr< T > GetAdditionalInformation () const
void SetAdditionalInfoForObject (const AdditionalInfoObjectPtr &additionalInfo)
virtual const BaseDescriptor & GetParameters () const override
 If the layer has a descriptor return it.

Protected Member Functions

 OutputLayer (LayerBindingId id, const char *name)
 Constructor to create an OutputLayer.
 ~OutputLayer ()=default
 Default destructor.
Protected Member Functions inherited from BindableLayer
 ~BindableLayer ()=default
Protected Member Functions inherited from Layer
virtual ~Layer ()=default
template<typename QueueDescriptor>
void CollectQueueDescriptorInputs (QueueDescriptor &descriptor, WorkloadInfo &info) const
template<typename QueueDescriptor>
void CollectQueueDescriptorOutputs (QueueDescriptor &descriptor, WorkloadInfo &info) const
void ValidateAndCopyShape (const TensorShape &outputShape, const TensorShape &inferredShape, const ShapeInferenceMethod shapeInferenceMethod, const std::string &layerName, const unsigned int outputSlotIndex=0)
void VerifyShapeInferenceType (const TensorShape &outputShape, ShapeInferenceMethod shapeInferenceMethod)
template<typename QueueDescriptor>
WorkloadInfo PrepInfoAndDesc (QueueDescriptor &descriptor) const
 Helper function to reduce duplication in *Layer::CreateWorkload implementations.
template<typename LayerType, typename ... Params>
LayerType * CloneBase (Graph &graph, Params &&... params) const
virtual ConstantTensors GetConstantTensorsByRef () override final
virtual ImmutableConstantTensors GetConstantTensorsByRef () const override
void SetAdditionalInfo (QueueDescriptor &descriptor) const
Protected Member Functions inherited from IConnectableLayer
 ~IConnectableLayer ()
 Objects are not deletable via the handle.

Additional Inherited Members

Public Types inherited from IConnectableLayer
using ConstantTensors = std::vector<std::reference_wrapper<std::shared_ptr<ConstTensorHandle>>>
using ImmutableConstantTensors = std::vector<std::reference_wrapper<const std::shared_ptr<ConstTensorHandle>>>
Protected Attributes inherited from Layer
AdditionalInfoObjectPtr m_AdditionalInfoObject
std::vector< OutputHandler > m_OutputHandlers
ShapeInferenceMethod m_ShapeInferenceMethod

Detailed Description

A layer user-provided data can be bound to (e.g. inputs, outputs).

Examples
SimpleSample.cpp.

Definition at line 13 of file OutputLayer.hpp.
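
The class itself is internal to ArmNN; applications obtain output binding points through the public INetwork API, as in SimpleSample.cpp. The snippet below is a minimal sketch of that pattern, not taken from the sample itself: the binding ids, layer names, tensor shape and data type are illustrative assumptions.

#include <armnn/ArmNN.hpp>

int main()
{
    using namespace armnn;

    // Build a trivial network: one input bound to id 0 feeding one output bound to id 0.
    INetworkPtr network = INetwork::Create();

    IConnectableLayer* input  = network->AddInputLayer(0, "input");
    IConnectableLayer* output = network->AddOutputLayer(0, "output");

    // Connect the input's output slot to the output layer's single input slot
    // and describe the tensor that will flow across the connection.
    input->GetOutputSlot(0).Connect(output->GetInputSlot(0));
    input->GetOutputSlot(0).SetTensorInfo(TensorInfo(TensorShape({1, 4}), DataType::Float32));

    return 0;
}

AddOutputLayer() is what ultimately produces an OutputLayer node in the graph; the LayerBindingId passed to it is the id later returned by GetBindingId().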

Constructor & Destructor Documentation

◆ OutputLayer()

OutputLayer ( LayerBindingId id,
const char * name )
protected

Constructor to create an OutputLayer.

Parameters
id The layer binding id number.
name Optional name for the layer.

Definition at line 16 of file OutputLayer.cpp.

 : BindableLayer(1, 0, LayerType::Output, name, id)
{
}

References BindableLayer::BindableLayer(), and armnn::Output.

Referenced by Clone().
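
Within ArmNN the constructor is not called directly; layers are added to a Graph, which forwards the binding id and name here. The following is a minimal internal-usage sketch, assuming the internal src/armnn headers are on the include path:

// Internal ArmNN headers, shown for illustration only.
#include <Graph.hpp>
#include <layers/OutputLayer.hpp>

armnn::Graph graph;

// Graph::AddLayer forwards the binding id and name to OutputLayer's protected
// constructor and registers the binding id with the graph's outputs.
armnn::OutputLayer* outputLayer = graph.AddLayer<armnn::OutputLayer>(0, "output");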

◆ ~OutputLayer()

~OutputLayer ( )
protected default

Default destructor.

Member Function Documentation

◆ Clone()

OutputLayer * Clone ( Graph & graph) const
override virtual

Creates a dynamically-allocated copy of this layer.

Parameters
[in] graph The graph into which this layer is being cloned.

Implements Layer.

Definition at line 27 of file OutputLayer.cpp.

{
    return CloneBase<OutputLayer>(graph, GetBindingId(), GetName());
}

References Layer::CloneBase(), BindableLayer::GetBindingId(), Layer::GetName(), and OutputLayer().

◆ CreateTensorHandles()

virtual void CreateTensorHandles ( const TensorHandleFactoryRegistry & registry,
const IWorkloadFactory & factory,
const bool isMemoryManaged = true )
inline override virtual

Set the outputs to be appropriate sub-tensors of the input if sub-tensors are supported; otherwise creates tensor handles by default.

Ignores parameters for Output type.

Parameters
[in] registry Contains all the registered tensor handle factories available for use.
[in] factory The workload factory which will create the workload.
[in] isMemoryManaged Determines whether or not to assign a memory manager during creation.

Reimplemented from Layer.

Definition at line 27 of file OutputLayer.hpp.

{
    IgnoreUnused(registry, factory, isMemoryManaged);
}

References armnn::IgnoreUnused().

◆ CreateWorkload()

std::unique_ptr< IWorkload > CreateWorkload ( const IWorkloadFactory & factory) const
override virtual

Returns nullptr for Output type.

Parameters
[in] graph The graph where this layer can be found.
[in] factory The workload factory which will create the workload.
Returns
A pointer to the created workload, or nullptr if not created.

Implements Layer.

Definition at line 21 of file OutputLayer.cpp.

{
    IgnoreUnused(factory);
    return nullptr;
}

References armnn::IgnoreUnused().

◆ ExecuteStrategy()

void ExecuteStrategy ( IStrategy & strategy) const
override virtual

Apply a visitor to this layer.

Reimplemented from BindableLayer.

Definition at line 40 of file OutputLayer.cpp.

{
    strategy.ExecuteStrategy(this, GetParameters(), {}, GetName(), GetBindingId());
}

References IStrategy::ExecuteStrategy(), BindableLayer::GetBindingId(), Layer::GetName(), and Layer::GetParameters().
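
As an illustration of the visitor contract, the sketch below defines a minimal custom IStrategy that simply prints what each visited layer hands over; for OutputLayer that is its parameters, name and binding id, as forwarded in the override above. The class name and output format are assumptions, not part of ArmNN.

#include <armnn/IStrategy.hpp>
#include <armnn/Tensor.hpp>
#include <armnn/Types.hpp>
#include <iostream>
#include <vector>

// Minimal visitor: logs the name and binding id that each visited layer passes in.
class PrintingStrategy : public armnn::IStrategy
{
public:
    void ExecuteStrategy(const armnn::IConnectableLayer* /*layer*/,
                         const armnn::BaseDescriptor& /*descriptor*/,
                         const std::vector<armnn::ConstTensor>& /*constants*/,
                         const char* name,
                         const armnn::LayerBindingId id = 0) override
    {
        std::cout << "Visited layer '" << (name ? name : "") << "' (binding id " << id << ")\n";
    }
};

Applying such a strategy to a whole network (for example via INetwork::ExecuteStrategy) invokes OutputLayer::ExecuteStrategy for each output layer, which forwards GetParameters(), an empty constant-tensor list, GetName() and GetBindingId().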

◆ ValidateTensorShapesFromInputs()

void ValidateTensorShapesFromInputs ( )
override virtual

Check if the input tensor shape(s) will lead to a valid configuration of OutputLayer.

Parameters
[in] shapeInferenceMethod Indicates if the output shape shall be overwritten or just validated.

Implements Layer.

Definition at line 32 of file OutputLayer.cpp.

{
    // Just validates that the input is connected.
    ConditionalThrow<LayerValidationException>(GetInputSlot(0).GetConnection() != nullptr,
                                               "OutputLayer: Input slot must be connected.");
}

References armnn::ConditionalThrow(), and Layer::GetInputSlot().


The documentation for this class was generated from the following files:

OutputLayer.hpp
OutputLayer.cpp