ArmNN
 25.11
IOptimizedNetwork Class Reference

#include <INetwork.hpp>

Public Member Functions

Status PrintGraph ()
Status SerializeToDot (std::ostream &stream) const
arm::pipe::ProfilingGuid GetGuid () const
size_t GetNumInputs () const
size_t GetNumOutputs () const
void ExecuteStrategy (IStrategy &strategy) const
 IOptimizedNetwork (const IOptimizedNetwork &other, const ModelOptions &modelOptions)
 Creates a copy of the IOptimizedNetwork.
 IOptimizedNetwork (std::unique_ptr< Graph > graph)
 IOptimizedNetwork (std::unique_ptr< OptimizedNetworkImpl > impl)
 ~IOptimizedNetwork ()
const std::shared_ptr< IProfiler > & GetProfiler () const

Static Public Member Functions

static void Destroy (IOptimizedNetwork *network)

Protected Member Functions

 IOptimizedNetwork (std::unique_ptr< Graph > graph, const ModelOptions &modelOptions)

Protected Attributes

std::unique_ptr< OptimizedNetworkImpl > pOptimizedNetworkImpl

Friends

class LoadedNetwork
Graph & GetGraphForTesting (IOptimizedNetwork *optNetPtr)
ModelOptions & GetModelOptionsForTesting (IOptimizedNetwork *optNetPtr)
IOptimizedNetworkPtr Optimize (const INetwork &inNetwork, const std::vector< BackendId > &backendPreferences, const IDeviceSpec &deviceSpec, const OptimizerOptionsOpaque &options=OptimizerOptionsOpaque(), Optional< std::vector< std::string > & > messages=EmptyOptional())
 Create an optimized version of the network.
IOptimizedNetworkPtr Optimize (const Graph &inGraph, const std::vector< BackendId > &backendPreferences, const IDeviceSpec &deviceSpec, const OptimizerOptionsOpaque &options, Optional< std::vector< std::string > & > messages=EmptyOptional())
 Create an optimized version of the network.

Detailed Description

Definition at line 902 of file INetwork.hpp.
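
An IOptimizedNetwork is produced by armnn::Optimize() and is the form of a network that an IRuntime can load and execute; the IOptimizedNetworkPtr returned by Optimize() already carries IOptimizedNetwork::Destroy as its deleter. A minimal lifecycle sketch (the umbrella header, the CpuRef backend choice and the omitted layer-building code are illustrative assumptions, not requirements of this class):

#include <armnn/ArmNN.hpp>

int main()
{
    // Create a runtime and an empty network description.
    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(armnn::IRuntime::CreationOptions());
    armnn::INetworkPtr network = armnn::INetwork::Create();

    // ... add input, operator and output layers to 'network' here ...

    // Optimize for the reference backend. The returned smart pointer
    // destroys the IOptimizedNetwork via IOptimizedNetwork::Destroy.
    armnn::IOptimizedNetworkPtr optNet = armnn::Optimize(*network,
                                                         {armnn::Compute::CpuRef},
                                                         runtime->GetDeviceSpec());

    // Hand the optimized network to the runtime for execution.
    armnn::NetworkId networkId;
    runtime->LoadNetwork(networkId, std::move(optNet));
    return 0;
}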

Constructor & Destructor Documentation

◆ IOptimizedNetwork() [1/4]

IOptimizedNetwork ( const IOptimizedNetwork & other,
const ModelOptions & modelOptions )

Creates a copy of the IOptimizedNetwork.

The IOptimizedNetwork is not re-optimized; the provided ModelOptions are only used when creating a LoadedNetwork.

Definition at line 692 of file Network.cpp.

693 : pOptimizedNetworkImpl(new OptimizedNetworkImpl(*other.pOptimizedNetworkImpl.get(), modelOptions)) {}

References IOptimizedNetwork(), and pOptimizedNetworkImpl.

Referenced by Destroy(), GetGraphForTesting, GetModelOptionsForTesting, IOptimizedNetwork(), LoadedNetwork, and Optimize.
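
The copy-and-reconfigure pattern this constructor enables, as a hedged sketch (originalOptNet is a hypothetical pointer to an already optimized network; the FastMathEnabled entry is just an example ModelOptions item):

// Build new ModelOptions to be consumed when the copy is loaded.
armnn::ModelOptions modelOptions;
modelOptions.push_back(armnn::BackendOptions("CpuAcc", {{"FastMathEnabled", true}}));

// The graph is copied but not re-optimized; only the options carried
// to the eventual LoadedNetwork differ from the original.
armnn::IOptimizedNetwork reconfigured(*originalOptNet, modelOptions);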

◆ IOptimizedNetwork() [2/4]

IOptimizedNetwork ( std::unique_ptr< Graph > graph)

Definition at line 695 of file Network.cpp.

696 : pOptimizedNetworkImpl(new OptimizedNetworkImpl(std::move(graph))) {}

References pOptimizedNetworkImpl.

◆ IOptimizedNetwork() [3/4]

IOptimizedNetwork ( std::unique_ptr< OptimizedNetworkImpl > impl)

Definition at line 698 of file Network.cpp.

699 : pOptimizedNetworkImpl(std::move(impl)) {}

References pOptimizedNetworkImpl.

◆ ~IOptimizedNetwork()

~IOptimizedNetwork ( )
default

◆ IOptimizedNetwork() [4/4]

IOptimizedNetwork ( std::unique_ptr< Graph > graph,
const ModelOptions & modelOptions )
protected

Definition at line 701 of file Network.cpp.

702 : pOptimizedNetworkImpl(new OptimizedNetworkImpl(std::move(graph), modelOptions)) {}

References pOptimizedNetworkImpl.

Member Function Documentation

◆ Destroy()

void Destroy ( IOptimizedNetwork * network)
static

Definition at line 706 of file Network.cpp.

707{
708 delete network;
709}

References IOptimizedNetwork().

Referenced by Optimize.

◆ ExecuteStrategy()

void ExecuteStrategy ( IStrategy & strategy) const

Definition at line 3277 of file Network.cpp.

3278{
3279 pOptimizedNetworkImpl->ExecuteStrategy(strategy);
3280}

References pOptimizedNetworkImpl.
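
ExecuteStrategy visits every layer of the optimized graph with the supplied visitor. A minimal sketch of an IStrategy implementation that counts and names layers (LayerCounter is a hypothetical class; the override follows the IStrategy::ExecuteStrategy signature declared in armnn/IStrategy.hpp):

#include <armnn/IStrategy.hpp>
#include <iostream>

class LayerCounter : public armnn::IStrategy
{
public:
    void ExecuteStrategy(const armnn::IConnectableLayer* /*layer*/,
                         const armnn::BaseDescriptor& /*descriptor*/,
                         const std::vector<armnn::ConstTensor>& /*constants*/,
                         const char* name,
                         const armnn::LayerBindingId /*id*/) override
    {
        // Called once per layer in the optimized graph.
        ++m_Count;
        std::cout << "layer: " << (name ? name : "<unnamed>") << "\n";
    }

    unsigned int m_Count = 0;
};

// Usage: LayerCounter counter; optNet->ExecuteStrategy(counter);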

◆ GetGuid()

arm::pipe::ProfilingGuid GetGuid ( ) const

Definition at line 726 of file Network.cpp.

727{
728 return pOptimizedNetworkImpl->GetGuid();
729}

References pOptimizedNetworkImpl.

◆ GetNumInputs()

size_t GetNumInputs ( ) const

Definition at line 731 of file Network.cpp.

732{
733 return pOptimizedNetworkImpl->GetNumInputs();
734}

References pOptimizedNetworkImpl.

◆ GetNumOutputs()

size_t GetNumOutputs ( ) const

Definition at line 736 of file Network.cpp.

737{
738 return pOptimizedNetworkImpl->GetNumOutputs();
739}

References pOptimizedNetworkImpl.

◆ GetProfiler()

const std::shared_ptr< IProfiler > & GetProfiler ( ) const

Definition at line 721 of file Network.cpp.

722{
723 return pOptimizedNetworkImpl->GetGraph().GetProfiler();
724}

References pOptimizedNetworkImpl.
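
The returned profiler belongs to the optimized graph and can be used to inspect events collected while the network runs. A short sketch (optNet is assumed to be an IOptimizedNetworkPtr obtained from Optimize(); profiling is normally switched on via OptimizerOptionsOpaque::SetProfilingEnabled):

#include <iostream>

const std::shared_ptr<armnn::IProfiler>& profiler = optNet->GetProfiler();
profiler->EnableProfiling(true);
// ... run inferences through the runtime ...
profiler->Print(std::cout); // write out the collected profiling events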

◆ PrintGraph()

Status PrintGraph ( )

Definition at line 711 of file Network.cpp.

712{
713 return pOptimizedNetworkImpl->PrintGraph();
714}

References pOptimizedNetworkImpl.

◆ SerializeToDot()

Status SerializeToDot ( std::ostream & stream) const

Definition at line 716 of file Network.cpp.

717{
718 return pOptimizedNetworkImpl->SerializeToDot(stream);
719}

References pOptimizedNetworkImpl.
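
A short sketch that dumps the optimized graph in Graphviz dot format (the file name is arbitrary; optNet is assumed to come from Optimize()):

#include <fstream>

std::ofstream dotStream("graph.dot");
if (optNet->SerializeToDot(dotStream) != armnn::Status::Success)
{
    // Serialization failed; the stream contents are not usable.
}
// Render offline, e.g.: dot -Tpng graph.dot -o graph.png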

◆ GetGraphForTesting

Graph & GetGraphForTesting ( IOptimizedNetwork * optNetPtr)
friend

Definition at line 49 of file TestUtils.cpp.

50{
51 return optNet->pOptimizedNetworkImpl->GetGraph();
52}

References IOptimizedNetwork(), and pOptimizedNetworkImpl.

◆ GetModelOptionsForTesting

ModelOptions & GetModelOptionsForTesting ( IOptimizedNetwork * optNetPtr)
friend

Definition at line 54 of file TestUtils.cpp.

55{
56 return optNet->pOptimizedNetworkImpl->GetModelOptions();
57}

References IOptimizedNetwork(), and pOptimizedNetworkImpl.

◆ LoadedNetwork

friend class LoadedNetwork
friend

Definition at line 927 of file INetwork.hpp.

References IOptimizedNetwork(), and LoadedNetwork.

Referenced by LoadedNetwork.

◆ Optimize [1/2]

IOptimizedNetworkPtr Optimize ( const Graph & inGraph,
const std::vector< BackendId > & backendPreferences,
const IDeviceSpec & deviceSpec,
const OptimizerOptionsOpaque & options,
Optional< std::vector< std::string > & > messages = EmptyOptional() )
friend

Create an optimized version of the network.

Parameters
inGraph: Graph to be optimized.
backendPreferences: The choice of backends, ordered by user preference.
deviceSpec: DeviceSpec object as queried from the runtime. See IRuntime::GetDeviceSpec().
options: OptimizerOptionsOpaque object with optimizer configuration options.
messages: If there are failures or warnings, a message describing each one is appended to the vector.
Returns
An IOptimizedNetworkPtr interface to the optimized network; throws an exception derived from armnn::Exception if the process fails.

Definition at line 2026 of file Network.cpp.

2031{
2032 ARMNN_LOG(debug) << options.ToString();
2033
2034 // Enable profiling
2035 auto profiler = inGraph.GetProfiler();
2036 ProfilerManager::GetInstance().RegisterProfiler(profiler.get());
2037 profiler->EnableProfiling(options.GetProfilingEnabled());
2038
2039 ARMNN_SCOPED_PROFILING_EVENT(Compute::Undefined, "Optimizer");
2040 if (backendPreferences.empty())
2041 {
2042 throw InvalidArgumentException("Invoked Optimize with no backends specified");
2043 }
2044
2045 if (options.GetReduceFp32ToBf16())
2046 {
2047 throw InvalidArgumentException("BFloat16 optimization is currently ignored. In order to use Bf16 optimization "
2048 "Please use the FastMathEnabled backend option for CpuAcc or GpuAcc.");
2049 }
2050
2051 if (options.GetReduceFp32ToFp16() && options.GetReduceFp32ToBf16())
2052 {
2053 throw InvalidArgumentException("BFloat16 and Float16 optimization cannot be enabled at the same time.");
2054 }
2055
2056 // Ensure TensorInfo is set on all output slots of ConstantLayers in the graph
2057 inGraph.VerifyConstantLayerSetTensorInfo();
2058
2059 std::unique_ptr<Graph> graph = std::make_unique<Graph>(inGraph);
2060
2061 // We need to pass on the information about whether import and export is enabled to the LoadNetwork phase.
2062 // The mechanism to do that is to add model options to the optimized network.
2063 armnn::BackendOptions importExport("Global",
2064 {{"ImportEnabled", options.GetImportEnabled()},
2065 {"ExportEnabled", options.GetExportEnabled()}});
2066 ModelOptions optimizedOptions(options.GetModelOptions());
2067 optimizedOptions.push_back(importExport);
2068
2069 auto optNet = IOptimizedNetworkPtr(new IOptimizedNetwork(std::move(graph), optimizedOptions),
2070 &IOptimizedNetwork::Destroy);
2071
2072 IOptimizedNetwork* optNetObjPtr = optNet.get();
2073
2074 // Get the optimized graph
2075 Graph& optGraph = optNetObjPtr->pOptimizedNetworkImpl->GetGraph();
2076
2077 if(options.GetShapeInferenceMethod() == ShapeInferenceMethod::InferAndValidate)
2078 {
2079 // Infer the tensor infos for all output slots. Throws an exception on failure
2080 optGraph.InferTensorInfos();
2081 }
2082
2083 using namespace optimizations;
2084 // Substitute Max + Min with Bounded Relu before AddBroadcastReshapeLayer optimisation,
2085 // as Bounded ReLu needs the constants to be 1D size 1
2086 Optimizer::Pass(optGraph, MakeOptimizations(MaxMinIntoBoundedRelu()));
2087
2088 // Perform BroadcastToOptimizationLayer before AddBroadcastReshapeLayer optimisation
2089 Optimizer::Pass(optGraph, MakeOptimizations(BroadcastToOptimizationLayer()));
2090
2091 Optimizer::Pass(optGraph, MakeOptimizations(AddBroadcastReshapeLayer()));
2092
2093 if(options.GetShapeInferenceMethod() == ShapeInferenceMethod::ValidateOnly)
2094 {
2095 // Validate the tensor infos for all output slots. Throws an exception on failure
2096 optGraph.InferTensorInfos();
2097 }
2098
2099 // Initialize backend settings
2100 BackendSettings backendSettings(backendPreferences, deviceSpec);
2101 auto availablePreferredBackends = backendSettings.GetAvailablePreferredBackends();
2102 if (availablePreferredBackends.empty())
2103 {
2104 std::stringstream failureMsg;
2105 failureMsg << "None of the preferred backends " << backendPreferences
2106 << " are supported. Current platform provides " << backendSettings.m_SupportedBackends;
2107 ReportError(failureMsg.str(), messages);
2108 throw InvalidArgumentException(failureMsg.str());
2109 }
2110
2111 // Create a map to temporarily hold initialized backend objects
2112 TensorHandleFactoryRegistry tensorHandleFactoryRegistry;
2113 BackendsMap backends = CreateSupportedBackends(tensorHandleFactoryRegistry, backendSettings);
2114 bool hasFp16 = CheckFp16Support(backends, availablePreferredBackends);
2115
2116 bool reduceFp32ToFp16 = options.GetReduceFp32ToFp16();
2117 // If fp16 is supported on the backend and fastmath has been enabled and the model is a TfLite converted Fp16
2118 // model: enable turbo mode optimizations
2119 if (hasFp16 && CheckFastMathSupport(availablePreferredBackends, optimizedOptions) && IsTfLiteTurboModel(optGraph))
2120 {
2121 Optimizer::Pass(optGraph, MakeOptimizations(TurboConvertConstDequantisationLayersToConstLayers()));
2122 reduceFp32ToFp16 = true;
2123 }
2124 else
2125 {
2126 Optimizer::Pass(optGraph, MakeOptimizations(ConvertConstDequantisationLayersToConstLayers()));
2127 }
2128
2129 // Group Constant Layer optimizations together where possible.
2130 // This is important as:
2131 // FusePermuteIntoConstantLayer must happen before FoldPadIntoDepthwiseConvolution2d and
2132 // FuseBatchNormIntoDepthwiseConvolution2D.
2133 Optimizer::Pass(optGraph, MakeOptimizations(FusePermuteIntoConstLayer()));
2134 // Perform optimisation passes
2135 Optimizer::Pass(optGraph, MakeOptimizations(SquashEqualPermuteSiblings(),
2136 SquashEqualTransposeSiblings(),
2137 SquashEqualReshapeSiblings(),
2138 OptimizeInversePermutes(),
2139 OptimizeInverseTransposes(),
2140 MovePermuteUp(),
     /* lines 2141-2151: further passes elided from this listing, among them
        MoveTransposeUp, PermuteAsReshape, TransposeAsReshape, OptimizeConsecutiveReshapes,
        PermuteAndBatchToSpaceAsDepthToSpace, TransposeAndBatchToSpaceAsDepthToSpace
        and the FuseBatchNormInto* fusions */ ));
2152
2153 const std::vector<BackendId> mappedGpuBackends = BackendRegistryInstance().GetMappedGpuBackends();
2154
2155 // All or nothing Gpu backends cannot be used as fallback
2156 for (auto backend : mappedGpuBackends)
2157 {
2158 if (std::count(backendPreferences.begin(), backendPreferences.end(), backend)
2159 && (backendPreferences[0] != backend) &&
2160 (backendPreferences[0] != armnn::BackendId("GpuAcc")))
2161 {
2162 std::stringstream failureMsg;
2163 failureMsg << backend << " backend cannot be specified as fallback.";
2164 ReportError(failureMsg.str(), messages);
2165 throw InvalidArgumentException(failureMsg.str());
2166 }
2167 }
2168
2169 std::vector<BackendId> amendedBackendPreferences = backendPreferences;
2170 std::unordered_set<BackendId> supportedBackends = armnn::BackendRegistryInstance().GetBackendIds();
2171 if (amendedBackendPreferences[0] == armnn::BackendId("GpuAcc"))
2172 {
2173 // Add mapped Gpu backends if not already there and GpuAcc is first backend requested
2174 for (auto backend : mappedGpuBackends)
2175 {
2176 if (!std::count(amendedBackendPreferences.begin(), amendedBackendPreferences.end(), backend))
2177 {
2178 amendedBackendPreferences.insert(amendedBackendPreferences.begin(), backend);
2179 }
2180 }
2181 }
2182
2183 if (reduceFp32ToFp16 && hasFp16)
2184 {
2185 ARMNN_SCOPED_PROFILING_EVENT(Compute::Undefined, "Optimizer_ReduceFp32ToFp16");
2186 Optimizer::Pass(optGraph, MakeOptimizations(Fp32NetworkToFp16Converter()));
2187 Optimizer::Pass(optGraph, MakeOptimizations(ConvertConstantsFloatToHalf()));
2188 }
2189 // Assign an available backend to each layer
2190 Graph::Iterator firstLayer = optGraph.begin();
2191 Graph::Iterator lastLayer = optGraph.end();
2192 OptimizationResult assignBackendsResult = AssignBackends(optNetObjPtr->pOptimizedNetworkImpl.get(),
2193 backendSettings,
2194 firstLayer,
2195 lastLayer,
2196 messages);
2197 if (assignBackendsResult.m_Error)
2198 {
2199 // Failed to assign a backend to each layer
2200 throw InvalidArgumentException("Failed to assign a backend to each layer");
2201 }
2202
2203 Optimizer::Pass(optGraph, MakeOptimizations(OptimizeInverseConversionsFp16(),
2204 OptimizeInverseConversionsFp32()));
2205
2206 // Apply the backend-specific optimizations
2207 OptimizationResult backendOptimizationResult = ApplyBackendOptimizations(optNetObjPtr->pOptimizedNetworkImpl.get(),
2208 backendSettings,
2209 backends,
2210 options.GetModelOptions(),
2211 messages);
2212 if (backendOptimizationResult.m_Error)
2213 {
2214 // Failed to apply the backend-specific optimizations
2215 throw InvalidArgumentException("Failed to apply the backend-specific optimizations");
2216 }
2217
2218 // Convert constants
2219 {
2220 ARMNN_SCOPED_PROFILING_EVENT(Compute::Undefined, "Optimizer_ConvertConstants");
2221 Optimizer::Pass(optGraph, MakeOptimizations(ConvertConstantsFloatToHalf()));
2222 Optimizer::Pass(optGraph, MakeOptimizations(ConvertConstantsHalfToFloat()));
2223 }
2224
2225 // This must occur after all topological changes to the graph and any redirection of variables
2226 // If the debug flag is set, then insert a DebugLayer after each layer
2227 // Doing this after applying the backend optimizations as they might have changed some layers
2228 if (options.GetDebugEnabled() && !options.GetDebugToFileEnabled())
2229 {
2230 Optimizer::Pass(optGraph, MakeOptimizations(InsertDebugLayer()));
2231 }
2232 else if (options.GetDebugToFileEnabled())
2233 {
2234 // Setup the output file path
2235 try
2236 {
2237 #if !defined(ARMNN_DISABLE_FILESYSTEM)
2238 auto result = armnnUtils::Filesystem::CreateDirectory("/ArmNNIntermediateLayerOutputs");
2239 ARMNN_LOG(info) << "Intermediate tensors will be written to: " << result;
2240 #endif
2241 Optimizer::Pass(optGraph, MakeOptimizations(InsertDebugToFileLayer()));
2242 }
2243 catch (const armnn::RuntimeException& e)
2244 {
2245 // If we cannot create the output directory then we'll issue a warning and continue.
2246 ARMNN_LOG(warning) << "Unable to print intermediate layer outputs : " << e.what();
2247 }
2248 }
2249
2250 // Calculate the compatibility strategies for tensor handles
2251 OptimizationResult strategyResult = SelectTensorHandleStrategy(optGraph,
2252 backends,
2253 tensorHandleFactoryRegistry,
2254 options.GetImportEnabled(),
2255 options.GetExportEnabled(),
2256 messages);
2257
2258 if (strategyResult.m_Error)
2259 {
2260 // Failed to select a tensor handle strategy
2261 return IOptimizedNetworkPtr(nullptr, &IOptimizedNetwork::Destroy);
2262 }
2263
2264 // Based on the tensor handle strategy determined above, insert copy layers where required.
2265 {
2266 ARMNN_SCOPED_PROFILING_EVENT(Compute::Undefined, "Optimizer_AddCompatibilityLayers");
2267 optGraph.AddCompatibilityLayers(backends, tensorHandleFactoryRegistry);
2268 }
2269
2270 return optNet;
2271}

References Graph::AddCompatibilityLayers(), armnn::ApplyBackendOptimizations(), ARMNN_LOG, ARMNN_SCOPED_PROFILING_EVENT, armnn::AssignBackends(), armnn::BackendRegistryInstance(), Graph::begin(), armnn::CheckFastMathSupport(), armnn::CheckFp16Support(), armnnUtils::Filesystem::CreateDirectory(), armnn::CreateSupportedBackends(), armnn::debug, Destroy(), Graph::end(), BackendSettings::GetAvailablePreferredBackends(), BackendRegistry::GetBackendIds(), OptimizerOptionsOpaque::GetDebugEnabled(), OptimizerOptionsOpaque::GetDebugToFileEnabled(), OptimizerOptionsOpaque::GetExportEnabled(), OptimizerOptionsOpaque::GetImportEnabled(), ProfilerManager::GetInstance(), BackendRegistry::GetMappedGpuBackends(), OptimizerOptionsOpaque::GetModelOptions(), Graph::GetProfiler(), OptimizerOptionsOpaque::GetProfilingEnabled(), OptimizerOptionsOpaque::GetReduceFp32ToBf16(), OptimizerOptionsOpaque::GetReduceFp32ToFp16(), OptimizerOptionsOpaque::GetShapeInferenceMethod(), armnn::InferAndValidate, Graph::InferTensorInfos(), armnn::info, IOptimizedNetwork(), armnn::IsTfLiteTurboModel(), OptimizationResult::m_Error, BackendSettings::m_SupportedBackends, armnn::MakeOptimizations(), Optimizer::Pass(), pOptimizedNetworkImpl, ProfilerManager::RegisterProfiler(), armnn::ReportError(), armnn::SelectTensorHandleStrategy(), OptimizerOptionsOpaque::ToString(), armnn::Undefined, armnn::ValidateOnly, Graph::VerifyConstantLayerSetTensorInfo(), armnn::warning, and Exception::what().

◆ Optimize [2/2]

IOptimizedNetworkPtr Optimize ( const INetwork & inNetwork,
const std::vector< BackendId > & backendPreferences,
const IDeviceSpec & deviceSpec,
const OptimizerOptionsOpaque & options = OptimizerOptionsOpaque(),
Optional< std::vector< std::string > & > messages = EmptyOptional() )
friend

Create an optimized version of the network.

Parameters
inNetwork: INetwork description of the network to be optimized.
backendPreferences: The choice of backends, ordered by user preference.
deviceSpec: DeviceSpec object as queried from the runtime. See IRuntime::GetDeviceSpec().
options: OptimizerOptionsOpaque object with optimizer configuration options.
messages: If there are failures or warnings, a message describing each one is appended to the vector.
Returns
An IOptimizedNetworkPtr interface to the optimized network; throws an exception derived from armnn::Exception if the process fails.

Definition at line 2287 of file Network.cpp.

2292{
2293 return Optimize(inNetwork.pNetworkImpl->GetGraph(),
2294 backendPreferences,
2295 deviceSpec,
2296 options,
2297 messages);
2298}
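
A hedged sketch of a typical call, reusing the network and runtime names from the lifecycle example above (the backend order and the FP16 reduction are illustrative choices):

#include <iostream>
#include <string>
#include <vector>

std::vector<std::string> messages;
armnn::OptimizerOptionsOpaque options;
options.SetReduceFp32ToFp16(true); // run in FP16 where the backend supports it

armnn::IOptimizedNetworkPtr optNet =
    armnn::Optimize(*network,
                    {armnn::Compute::CpuAcc, armnn::Compute::CpuRef}, // CpuRef as fallback
                    runtime->GetDeviceSpec(),
                    options,
                    armnn::Optional<std::vector<std::string>&>(messages));

// Report any warnings or failure descriptions the optimizer accumulated.
for (const std::string& msg : messages)
{
    std::cerr << msg << "\n";
}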

Member Data Documentation

◆ pOptimizedNetworkImpl

std::unique_ptr< OptimizedNetworkImpl > pOptimizedNetworkImpl
protected

The documentation for this class was generated from the following files:

INetwork.hpp
Network.cpp