ArmNN
 25.11
RefBackend Class Reference

#include <RefBackend.hpp>

Inheritance diagram for RefBackend: (diagram not reproduced)
Collaboration diagram for RefBackend: (diagram not reproduced)

Public Member Functions

 RefBackend ()=default
 ~RefBackend ()=default
const BackendId & GetId () const override
IBackendInternal::IMemoryManagerUniquePtr CreateMemoryManager () const override
IBackendInternal::IWorkloadFactoryPtr CreateWorkloadFactory (const IBackendInternal::IMemoryManagerSharedPtr &memoryManager=nullptr) const override
IBackendInternal::IWorkloadFactoryPtr CreateWorkloadFactory (class TensorHandleFactoryRegistry &tensorHandleFactoryRegistry) const override
IBackendInternal::IBackendContextPtr CreateBackendContext (const IRuntime::CreationOptions &) const override
 Create the runtime context of the backend.
IBackendInternal::IBackendProfilingContextPtr CreateBackendProfilingContext (const IRuntime::CreationOptions &creationOptions, IBackendProfilingPtr &backendProfiling) override
 Create context specifically used for profiling interaction from backends.
IBackendInternal::ILayerSupportSharedPtr GetLayerSupport () const override
OptimizationViews OptimizeSubgraphView (const SubgraphView &subgraph, const ModelOptions &modelOptions) const override
std::vector< ITensorHandleFactory::FactoryId > GetHandleFactoryPreferences () const override
 (Optional) Returns a vector of supported TensorHandleFactory ids in preference order.
void RegisterTensorHandleFactories (class TensorHandleFactoryRegistry &registry) override
 (Optional) Register TensorHandleFactories. Either this method, or CreateMemoryManager() together with IWorkloadFactory::CreateTensor() and IWorkloadFactory::CreateSubtensor(), must be implemented.
BackendCapabilities GetCapabilities () const override
 Returns a BackendCapability if the backend lists the capability; the returned BackendCapability must then be inspected to check whether it is supported. Returns an EmptyOptional if the BackendCapability is unlisted.
std::unique_ptr< ICustomAllocator > GetDefaultAllocator () const override
 Returns the default memory allocator for the backend.
Public Member Functions inherited from IBackendInternal
 ~IBackendInternal () override=default
 Allow backends created by the factory function to be destroyed through IBackendInternal.
virtual IWorkloadFactoryPtr CreateWorkloadFactory (const IMemoryManagerSharedPtr &memoryManager, const ModelOptions &modelOptions) const
virtual IWorkloadFactoryPtr CreateWorkloadFactory (class TensorHandleFactoryRegistry &tensorHandleFactoryRegistry, const ModelOptions &modelOptions) const
virtual IWorkloadFactoryPtr CreateWorkloadFactory (class TensorHandleFactoryRegistry &tensorHandleFactoryRegistry, const ModelOptions &modelOptions, MemorySourceFlags inputFlags, MemorySourceFlags outputFlags) const
virtual IBackendSpecificModelContextPtr CreateBackendSpecificModelContext (const ModelOptions &modelOptions) const
virtual ILayerSupportSharedPtr GetLayerSupport (const ModelOptions &modelOptions) const
virtual OptimizationViews OptimizeSubgraphView (const SubgraphView &subgraph) const
bool SupportsTensorAllocatorAPI () const
ITensorHandleFactory::FactoryId GetBackwardCompatibleFavoriteHandleFactory ()
virtual void RegisterTensorHandleFactories (class TensorHandleFactoryRegistry &registry, MemorySourceFlags inputFlags, MemorySourceFlags outputFlags)
 (Optional) Register TensorHandleFactories. Either this method, or CreateMemoryManager() together with IWorkloadFactory::CreateTensor() and IWorkloadFactory::CreateSubtensor(), must be implemented.
virtual bool UseCustomMemoryAllocator (std::shared_ptr< ICustomAllocator > allocator, armnn::Optional< std::string & > errMsg)
 Signals the backend to use a custom memory allocator provided by the user.
virtual unsigned int GetNumberOfCacheFiles () const
 Returns the number of files cached if backend supports caching.

Static Public Member Functions

static const BackendId & GetIdStatic ()
Static Public Member Functions inherited from IBackendInternal
static constexpr BackendVersion GetApiVersion ()
 Returns the version of the Backend API.

Additional Inherited Members

Public Types inherited from IBackendInternal
using IWorkloadFactoryPtr = std::unique_ptr<IWorkloadFactory>
using IBackendContextPtr = std::unique_ptr<IBackendContext>
using IBackendProfilingContextPtr = std::shared_ptr<arm::pipe::IBackendProfilingContext>
 This is the bridge between backend and backend profiling; we keep it in the backend namespace.
using IBackendProfilingPtr = std::unique_ptr<arm::pipe::IBackendProfiling>
using ILayerSupportSharedPtr = std::shared_ptr<ILayerSupport>
using IBackendSpecificModelContextPtr = std::shared_ptr<IBackendModelContext>
using IMemoryManagerUniquePtr = std::unique_ptr<IMemoryManager>
using IMemoryManagerSharedPtr = std::shared_ptr<IMemoryManager>
Protected Member Functions inherited from IBackendInternal
 IBackendInternal ()=default
 Creation must be done through a specific backend interface.
Protected Member Functions inherited from IBackend
 IBackend ()
virtual ~IBackend ()

Detailed Description

Definition at line 30 of file RefBackend.hpp.

Constructor & Destructor Documentation

◆ RefBackend()

RefBackend ( ) = default

◆ ~RefBackend()

~RefBackend ( ) = default

Member Function Documentation

◆ CreateBackendContext()

IBackendInternal::IBackendContextPtr CreateBackendContext ( const IRuntime::CreationOptions & ) const
overridevirtual

Create the runtime context of the backend.

Implementations may return a default-constructed IBackendContextPtr if no context is needed at runtime. Implementations must throw BackendUnavailableException if the backend cannot be used (for example, necessary accelerator hardware is not present). The default implementation always returns a default-constructed pointer.

Reimplemented from IBackendInternal.

Definition at line 50 of file RefBackend.cpp.

{
    return IBackendContextPtr{};
}
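To make the contract concrete, here is a minimal sketch using mock types (IBackendContext, BackendUnavailableException, and the function names here are illustrative stand-ins, not the real armnn declarations): a hardware backend would probe its device and throw when it is unusable, while a software backend such as RefBackend simply returns an empty pointer.

```cpp
#include <cassert>
#include <memory>
#include <stdexcept>

// Simplified stand-ins for the armnn types; all names here are illustrative.
struct IBackendContext { virtual ~IBackendContext() = default; };
using IBackendContextPtr = std::unique_ptr<IBackendContext>;

struct BackendUnavailableException : std::runtime_error
{
    using std::runtime_error::runtime_error;
};

// A hypothetical accelerator backend probes its device before creating a context.
IBackendContextPtr CreateAcceleratorBackendContext(bool deviceIsPresent)
{
    if (!deviceIsPresent)
    {
        // Per the contract above: signal that this backend cannot be used.
        throw BackendUnavailableException("accelerator hardware not found");
    }
    struct AcceleratorContext : IBackendContext {};
    return std::make_unique<AcceleratorContext>();
}

// A pure-software backend needs no runtime context, so it returns a
// default-constructed (null) pointer, mirroring RefBackend's implementation.
IBackendContextPtr CreateRefLikeBackendContext()
{
    return IBackendContextPtr{};
}
```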

◆ CreateBackendProfilingContext()

IBackendInternal::IBackendProfilingContextPtr CreateBackendProfilingContext ( const IRuntime::CreationOptions & creationOptions,
IBackendProfilingPtr & backendProfiling )
overridevirtual

Create context specifically used for profiling interaction from backends.

Reimplemented from IBackendInternal.

Definition at line 55 of file RefBackend.cpp.

{
    return IBackendProfilingContextPtr{};
}

◆ CreateMemoryManager()

IBackendInternal::IMemoryManagerUniquePtr CreateMemoryManager ( ) const
overridevirtual

Reimplemented from IBackendInternal.

Definition at line 61 of file RefBackend.cpp.

{
    return std::make_unique<RefMemoryManager>();
}

◆ CreateWorkloadFactory() [1/2]

IBackendInternal::IWorkloadFactoryPtr CreateWorkloadFactory ( class TensorHandleFactoryRegistry & tensorHandleFactoryRegistry) const
overridevirtual

Reimplemented from IBackendInternal.

Definition at line 34 of file RefBackend.cpp.

{
    auto memoryManager = std::make_shared<RefMemoryManager>();

    tensorHandleFactoryRegistry.RegisterMemoryManager(memoryManager);

    std::unique_ptr<RefTensorHandleFactory> factory = std::make_unique<RefTensorHandleFactory>(memoryManager);
    // Register copy and import factory pair
    tensorHandleFactoryRegistry.RegisterCopyAndImportFactoryPair(factory->GetId(), factory->GetId());
    // Register the factory
    tensorHandleFactoryRegistry.RegisterFactory(std::move(factory));

    return std::make_unique<RefWorkloadFactory>(PolymorphicPointerDowncast<RefMemoryManager>(memoryManager));
}

References armnn::PolymorphicPointerDowncast(), TensorHandleFactoryRegistry::RegisterCopyAndImportFactoryPair(), TensorHandleFactoryRegistry::RegisterFactory(), and TensorHandleFactoryRegistry::RegisterMemoryManager().
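The registration sequence above (share the memory manager with the registry first, then register the factory as its own copy/import partner, then transfer ownership of the factory) can be sketched with a toy registry. MockRegistry and its members are invented for illustration and do not match armnn's real TensorHandleFactoryRegistry API.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Toy stand-in for TensorHandleFactoryRegistry; the real armnn types differ.
struct MockRegistry
{
    std::vector<std::string> events; // records registration order

    void RegisterMemoryManager(std::shared_ptr<int> /*memoryManager*/)
    {
        events.push_back("memory-manager");
    }
    void RegisterCopyAndImportFactoryPair(const std::string& copyId, const std::string& importId)
    {
        events.push_back("pair:" + copyId + "/" + importId);
    }
    void RegisterFactory(std::unique_ptr<std::string> factory)
    {
        events.push_back("factory:" + *factory);
    }
};

// Mirrors the sequence in RefBackend::CreateWorkloadFactory:
// 1) share the memory manager with the registry,
// 2) register the factory as its own copy/import partner,
// 3) hand ownership of the factory to the registry.
void RegisterLikeRefBackend(MockRegistry& registry)
{
    auto memoryManager = std::make_shared<int>(0);
    registry.RegisterMemoryManager(memoryManager);

    auto factory = std::make_unique<std::string>("RefTensorHandleFactory");
    registry.RegisterCopyAndImportFactoryPair(*factory, *factory);
    registry.RegisterFactory(std::move(factory));
}
```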

◆ CreateWorkloadFactory() [2/2]

IBackendInternal::IWorkloadFactoryPtr CreateWorkloadFactory ( const IBackendInternal::IMemoryManagerSharedPtr & memoryManager = nullptr) const
overridevirtual

Implements IBackendInternal.

Definition at line 28 of file RefBackend.cpp.

{
    return std::make_unique<RefWorkloadFactory>(PolymorphicPointerDowncast<RefMemoryManager>(memoryManager));
}

References armnn::PolymorphicPointerDowncast().

◆ GetCapabilities()

BackendCapabilities GetCapabilities ( ) const
inlineoverridevirtual

Returns a BackendCapability if the backend lists the capability; the returned BackendCapability must then be inspected to check whether it is supported. Returns an EmptyOptional if the BackendCapability is unlisted.

Reimplemented from IBackendInternal.

Definition at line 61 of file RefBackend.hpp.

{
    return cpuRefCapabilities;
};

const BackendCapabilities cpuRefCapabilities("CpuRef",
{
    {"NonConstWeights", true},
    {"ProtectedContentAllocation", false},
    {"ConstantTensorsAsInputs", true},
    {"PreImportIOTensors", true},
    {"ExternallyManagedMemory", true},
    {"MultiAxisPacking", false},
    {"SingleAxisPacking", true},
    {"HasFp16", true},
    {"AllOrNothing", false}
});

References armnn::cpuRefCapabilities.
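As a rough model of how a caller might inspect such a capability list (using std::optional in place of armnn's Optional/EmptyOptional, and a plain name/bool pair list in place of the real BackendOptions-based BackendCapabilities; all names here are hypothetical):

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <utility>
#include <vector>

// Simplified model of a capability list like cpuRefCapabilities.
using CapabilityList = std::vector<std::pair<std::string, bool>>;

// Returns the capability's value if listed, std::nullopt otherwise
// (modeling armnn's EmptyOptional for unlisted names).
std::optional<bool> HasCapability(const CapabilityList& caps, const std::string& name)
{
    for (const auto& [capName, supported] : caps)
    {
        if (capName == name)
        {
            return supported;
        }
    }
    return std::nullopt; // unlisted: the backend says nothing about it
}

// A subset of the CpuRef capabilities shown above.
const CapabilityList cpuRefLike = {
    {"NonConstWeights", true},
    {"ProtectedContentAllocation", false},
    {"ConstantTensorsAsInputs", true},
};
```

Note the three-way outcome: listed-and-true, listed-and-false, and unlisted, which callers must treat as "unknown" rather than "unsupported".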

◆ GetDefaultAllocator()

std::unique_ptr< ICustomAllocator > GetDefaultAllocator ( ) const
overridevirtual

Returns the default memory allocator for the backend.

Returns
 A unique pointer to the default allocator of the backend.

Reimplemented from IBackendInternal.

Definition at line 205 of file RefBackend.cpp.

{
    return std::make_unique<DefaultAllocator>();
}

◆ GetHandleFactoryPreferences()

std::vector< ITensorHandleFactory::FactoryId > GetHandleFactoryPreferences ( ) const
overridevirtual

(Optional) Returns a vector of supported TensorHandleFactory ids in preference order.

Reimplemented from IBackendInternal.

Definition at line 186 of file RefBackend.cpp.

{
    return std::vector<ITensorHandleFactory::FactoryId> { RefTensorHandleFactory::GetIdStatic() };
}

References RefTensorHandleFactory::GetIdStatic().
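A consumer of this preference list typically walks it in order and takes the first factory the other side also supports. A hedged sketch, with FactoryId reduced to a string alias and the selection logic simplified from what the armnn optimizer actually does (the real selection also weighs memory-source flags):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

using FactoryId = std::string; // illustrative alias for ITensorHandleFactory::FactoryId

// Pick the first entry of `preferences` that also appears in `supported`.
FactoryId SelectFactory(const std::vector<FactoryId>& preferences,
                        const std::vector<FactoryId>& supported)
{
    for (const auto& id : preferences)
    {
        if (std::find(supported.begin(), supported.end(), id) != supported.end())
        {
            return id; // first match wins: the list is in preference order
        }
    }
    return {}; // no match: caller falls back to a default copy path
}
```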

◆ GetId()

const BackendId & GetId ( ) const
inlineoverridevirtual

Implements IBackend.

Definition at line 37 of file RefBackend.hpp.

{ return GetIdStatic(); }

References GetIdStatic().

◆ GetIdStatic()

const BackendId & GetIdStatic ( )
static

Definition at line 22 of file RefBackend.cpp.

{
    static const BackendId s_Id{RefBackendId()};
    return s_Id;
}

References armnn::RefBackendId().

Referenced by GetBackendId(), and GetId().
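The implementation relies on a function-local static, which C++11 guarantees is initialized exactly once, on first use, and in a thread-safe way ("magic statics"). A self-contained sketch of the same pattern, with BackendId reduced to a string alias for illustration:

```cpp
#include <cassert>
#include <string>

// Stand-in for armnn::BackendId (the real type wraps a std::string).
using BackendId = std::string;

constexpr const char* RefBackendId() { return "CpuRef"; }

// The function-local static is constructed once on first call; every
// subsequent call returns a reference to the same object.
const BackendId& GetIdStatic()
{
    static const BackendId s_Id{RefBackendId()};
    return s_Id;
}
```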

◆ GetLayerSupport()

IBackendInternal::ILayerSupportSharedPtr GetLayerSupport ( ) const
overridevirtual

Implements IBackendInternal.

Definition at line 66 of file RefBackend.cpp.

{
    static ILayerSupportSharedPtr layerSupport{new RefLayerSupport};
    return layerSupport;
}

◆ OptimizeSubgraphView()

OptimizationViews OptimizeSubgraphView ( const SubgraphView & subgraph,
const ModelOptions & modelOptions ) const
overridevirtual

Reimplemented from IBackendInternal.

Definition at line 72 of file RefBackend.cpp.

{
    OptimizationViews optimizationViews(modelOptions);

    auto it = subgraph.end();
    std::map<LayerGuid, Layer*> untouched;

    while (it != subgraph.begin())
    {
        --it;
        Layer& base = *(PolymorphicDowncast<Layer*>(*it));
        untouched.insert({base.GetGuid(), &base});
    }

    it = subgraph.end();
    while (it != subgraph.begin())
    {
        --it;
        Layer& base = *(PolymorphicDowncast<Layer*>(*it));

        // Special case to fuse padding into average pooling 2d for quantized datatype.
        // Required to be done as a backend specific optimization as Neon does not support this special case.
        if (base.GetType() == LayerType::Pooling2d)
        {
            Pooling2dLayer* baseLayer = PolymorphicDowncast<Pooling2dLayer*>(&base);
            Pooling2dDescriptor poolingDescriptor = baseLayer->GetParameters();

            if (baseLayer->GetInputSlot(0).GetConnectedOutputSlot()->GetOwningLayer().GetType() == LayerType::Pad)
            {
                PadLayer* padLayer = PolymorphicDowncast<PadLayer*>(
                    &baseLayer->GetInputSlot(0).GetConnectedOutputSlot()->GetOwningLayer());
                if (padLayer->GetOutputSlot(0).GetNumConnections() == 1 &&
                    optimizations::pad_fold::TryFoldPadIntoLayer2d(padLayer->GetParameters(),
                                                                   poolingDescriptor,
                                                                   padLayer->GetOutputSlot().GetTensorInfo(),
                                                                   true))
                {
                    FoldPadLayer2d<Pooling2dLayer, Pooling2dDescriptor>(optimizationViews, baseLayer,
                                                                        poolingDescriptor, padLayer);
                    untouched.erase(baseLayer->GetGuid());
                    untouched.erase(padLayer->GetGuid());
                }
            }
        }

        if (base.GetType() == LayerType::Convolution2d)
        {
            Convolution2dLayer* baseLayer = PolymorphicDowncast<Convolution2dLayer*>(&base);
            Convolution2dDescriptor convDescriptor = baseLayer->GetParameters();
            if (baseLayer->GetInputSlot(0).GetConnectedOutputSlot()->GetOwningLayer().GetType() == LayerType::Pad)
            {
                // perform fold pad into conv2d if possible
                PadLayer* padLayer = PolymorphicDowncast<PadLayer*>(
                    &baseLayer->GetInputSlot(0).GetConnectedOutputSlot()->GetOwningLayer());
                if (padLayer->GetOutputSlot(0).GetNumConnections() == 1 &&
                    optimizations::pad_fold::TryFoldPadIntoLayer2d<Convolution2dDescriptor>(
                        padLayer->GetParameters(),
                        convDescriptor,
                        padLayer->GetOutputSlot().GetTensorInfo()))
                {
                    FoldPadLayer2d<Convolution2dLayer, Convolution2dDescriptor>(optimizationViews, baseLayer,
                                                                                convDescriptor, padLayer);

                    untouched.erase(baseLayer->GetGuid());
                    untouched.erase(padLayer->GetGuid());
                }
            }
        }
        if (base.GetType() == LayerType::DepthwiseConvolution2d)
        {
            DepthwiseConvolution2dLayer* baseLayer = PolymorphicDowncast<DepthwiseConvolution2dLayer*>(&base);
            DepthwiseConvolution2dDescriptor convDescriptor = baseLayer->GetParameters();
            if (baseLayer->GetInputSlot(0).GetConnectedOutputSlot()->GetOwningLayer().GetType() == LayerType::Pad)
            {
                // perform fold pad into depthwiseconv2d if possible
                PadLayer* padLayer = PolymorphicDowncast<PadLayer*>(
                    &baseLayer->GetInputSlot(0).GetConnectedOutputSlot()->GetOwningLayer());
                if (padLayer->GetOutputSlot(0).GetNumConnections() == 1 &&
                    optimizations::pad_fold::TryFoldPadIntoLayer2d<DepthwiseConvolution2dDescriptor>(
                        padLayer->GetParameters(),
                        convDescriptor,
                        padLayer->GetOutputSlot().GetTensorInfo()))
                {
                    FoldPadLayer2d<DepthwiseConvolution2dLayer, DepthwiseConvolution2dDescriptor>(optimizationViews,
                                                                                                  baseLayer,
                                                                                                  convDescriptor,
                                                                                                  padLayer);

                    untouched.erase(baseLayer->GetGuid());
                    untouched.erase(padLayer->GetGuid());
                }
            }
        }

        // Remove Reshape where possible
        if (base.GetType() == LayerType::Reshape)
        {
            ReshapeLayer* baseLayer = PolymorphicDowncast<ReshapeLayer*>(&base);
            RemoveReshapeLayer(baseLayer, untouched, optimizationViews);
        }
    }

    if (optimizationViews.GetSubstitutions().empty() && optimizationViews.GetDeletedSubgraphs().empty())
    {
        optimizationViews.AddUntouchedSubgraph(SubgraphView(subgraph));
    }
    else
    {
        ReportUntouchedLayers(optimizationViews, untouched);
    }

    return optimizationViews;
}

References OptimizationViews::AddUntouchedSubgraph(), SubgraphView::begin(), armnn::Convolution2d, armnn::DepthwiseConvolution2d, SubgraphView::end(), armnn::FoldPadLayer2d(), InputSlot::GetConnectedOutputSlot(), OptimizationViews::GetDeletedSubgraphs(), Layer::GetGuid(), Layer::GetInputSlot(), OutputSlot::GetNumConnections(), Layer::GetOutputSlot(), OutputSlot::GetOwningLayer(), LayerWithParameters< Parameters >::GetParameters(), OptimizationViews::GetSubstitutions(), OutputSlot::GetTensorInfo(), Layer::GetType(), armnn::Pad, armnn::PolymorphicDowncast(), armnn::Pooling2d, armnn::RemoveReshapeLayer(), armnn::ReportUntouchedLayers(), armnn::Reshape, and armnn::optimizations::pad_fold::TryFoldPadIntoLayer2d().
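Each pad-fold above is guarded by the same two conditions: the Pad layer's output must have exactly one connection, and TryFoldPadIntoLayer2d must accept the descriptors. A tiny sketch of that guard, with invented mock types rather than armnn's real layer classes:

```cpp
#include <cassert>

// Minimal model of a Pad layer for illustrating the guard; not an armnn type.
struct MockPadLayer
{
    int numOutputConnections;
};

// Folding rewrites the consumer to absorb the padding. That is only safe
// when the Pad output feeds exactly one layer: any other consumer would
// otherwise see un-padded data. The second flag stands in for the
// descriptor-compatibility check done by TryFoldPadIntoLayer2d.
bool CanFoldPad(const MockPadLayer& pad, bool padValuesCompatible)
{
    return pad.numOutputConnections == 1 && padValuesCompatible;
}
```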

◆ RegisterTensorHandleFactories()

void RegisterTensorHandleFactories ( class TensorHandleFactoryRegistry & registry )
overridevirtual

(Optional) Register TensorHandleFactories. Either this method, or CreateMemoryManager() together with IWorkloadFactory::CreateTensor() and IWorkloadFactory::CreateSubtensor(), must be implemented.

Reimplemented from IBackendInternal.

Definition at line 191 of file RefBackend.cpp.

{
    auto memoryManager = std::make_shared<RefMemoryManager>();

    registry.RegisterMemoryManager(memoryManager);

    std::unique_ptr<RefTensorHandleFactory> factory = std::make_unique<RefTensorHandleFactory>(memoryManager);

    // Register copy and import factory pair
    registry.RegisterCopyAndImportFactoryPair(factory->GetId(), factory->GetId());
    // Register the factory
    registry.RegisterFactory(std::move(factory));
}

References TensorHandleFactoryRegistry::RegisterCopyAndImportFactoryPair(), TensorHandleFactoryRegistry::RegisterFactory(), and TensorHandleFactoryRegistry::RegisterMemoryManager().


The documentation for this class was generated from the following files:
RefBackend.hpp
RefBackend.cpp