ArmNN
 25.02
IOptimizedNetwork Class Reference

#include <INetwork.hpp>

Public Member Functions

Status PrintGraph ()
 
Status SerializeToDot (std::ostream &stream) const
 
arm::pipe::ProfilingGuid GetGuid () const
 
size_t GetNumInputs () const
 
size_t GetNumOutputs () const
 
void ExecuteStrategy (IStrategy &strategy) const
 
 IOptimizedNetwork (const IOptimizedNetwork &other, const ModelOptions &modelOptions)
 Creates a copy of the IOptimizedNetwork. More...
 
 IOptimizedNetwork (std::unique_ptr< Graph > graph)
 
 IOptimizedNetwork (std::unique_ptr< OptimizedNetworkImpl > impl)
 
 ~IOptimizedNetwork ()
 
const std::shared_ptr< IProfiler > & GetProfiler () const
 

Static Public Member Functions

static void Destroy (IOptimizedNetwork *network)
 

Protected Member Functions

 IOptimizedNetwork (std::unique_ptr< Graph > graph, const ModelOptions &modelOptions)
 

Protected Attributes

std::unique_ptr< OptimizedNetworkImpl > pOptimizedNetworkImpl
 

Friends

class LoadedNetwork
 
Graph & GetGraphForTesting (IOptimizedNetwork *optNetPtr)
 
ModelOptions & GetModelOptionsForTesting (IOptimizedNetwork *optNetPtr)
 
IOptimizedNetworkPtr Optimize (const INetwork &inNetwork, const std::vector< BackendId > &backendPreferences, const IDeviceSpec &deviceSpec, const OptimizerOptionsOpaque &options, Optional< std::vector< std::string > & > messages)
 Create an optimized version of the network. More...
 
IOptimizedNetworkPtr Optimize (const Graph &inGraph, const std::vector< BackendId > &backendPreferences, const IDeviceSpec &deviceSpec, const OptimizerOptionsOpaque &options, Optional< std::vector< std::string > & > messages)
 Create an optimized version of the network. More...
 

Detailed Description

Definition at line 902 of file INetwork.hpp.

Constructor & Destructor Documentation

◆ IOptimizedNetwork() [1/4]

IOptimizedNetwork ( const IOptimizedNetwork &  other,
const ModelOptions &  modelOptions 
)

Creates a copy of the IOptimizedNetwork.

The IOptimizedNetwork will not be reoptimized, the provided ModelOptions will only be used when creating a LoadedNetwork.

Definition at line 692 of file Network.cpp.

IOptimizedNetwork::IOptimizedNetwork(const IOptimizedNetwork& other, const ModelOptions& modelOptions)
    : pOptimizedNetworkImpl(new OptimizedNetworkImpl(*other.pOptimizedNetworkImpl.get(), modelOptions)) {}
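A minimal usage sketch: cloning an already-optimized network so it can be loaded with different ModelOptions, without re-running the optimizer. The "NumberOfThreads" CpuAcc option is illustrative; substitute options relevant to your backend.

#include <armnn/ArmNN.hpp>

// optNet: an IOptimizedNetwork previously produced by armnn::Optimize().
void CloneWithNewModelOptions(const armnn::IOptimizedNetwork& optNet)
{
    // Illustrative option set; only consulted later, at IRuntime::LoadNetwork() time.
    armnn::BackendOptions cpuOptions("CpuAcc", {{"NumberOfThreads", 4u}});
    armnn::ModelOptions modelOptions{cpuOptions};

    // Copy constructor: no re-optimization is performed; modelOptions only
    // take effect when this copy is loaded into a runtime.
    armnn::IOptimizedNetwork copy(optNet, modelOptions);
    // ... load `copy` via IRuntime::LoadNetwork() here ...
}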

◆ IOptimizedNetwork() [2/4]

IOptimizedNetwork ( std::unique_ptr< Graph >  graph)

Definition at line 695 of file Network.cpp.

IOptimizedNetwork::IOptimizedNetwork(std::unique_ptr<Graph> graph)
    : pOptimizedNetworkImpl(new OptimizedNetworkImpl(std::move(graph))) {}

◆ IOptimizedNetwork() [3/4]

IOptimizedNetwork ( std::unique_ptr< OptimizedNetworkImpl >  impl)

Definition at line 698 of file Network.cpp.

IOptimizedNetwork::IOptimizedNetwork(std::unique_ptr<OptimizedNetworkImpl> impl)
    : pOptimizedNetworkImpl(std::move(impl)) {}

◆ ~IOptimizedNetwork()

~IOptimizedNetwork ( )
default

◆ IOptimizedNetwork() [4/4]

IOptimizedNetwork ( std::unique_ptr< Graph >  graph,
const ModelOptions &  modelOptions 
)
protected

Definition at line 701 of file Network.cpp.

IOptimizedNetwork::IOptimizedNetwork(std::unique_ptr<Graph> graph, const ModelOptions& modelOptions)
    : pOptimizedNetworkImpl(new OptimizedNetworkImpl(std::move(graph), modelOptions)) {}

Member Function Documentation

◆ Destroy()

void Destroy ( IOptimizedNetwork *  network)
static

Definition at line 706 of file Network.cpp.

void IOptimizedNetwork::Destroy(IOptimizedNetwork* network)
{
    delete network;
}
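Destroy is the deleter that IOptimizedNetworkPtr (the std::unique_ptr alias returned by Optimize) is constructed with, so explicit calls are rarely needed. A sketch of wrapping a raw pointer the same way:

#include <armnn/INetwork.hpp>

// IOptimizedNetworkPtr is std::unique_ptr<IOptimizedNetwork, void(*)(IOptimizedNetwork*)>,
// so Destroy slots in directly as the deleter.
armnn::IOptimizedNetworkPtr TakeOwnership(armnn::IOptimizedNetwork* raw)
{
    return armnn::IOptimizedNetworkPtr(raw, &armnn::IOptimizedNetwork::Destroy);
}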

◆ ExecuteStrategy()

void ExecuteStrategy ( IStrategy &  strategy) const

Definition at line 3276 of file Network.cpp.

void IOptimizedNetwork::ExecuteStrategy(IStrategy& strategy) const
{
    pOptimizedNetworkImpl->ExecuteStrategy(strategy);
}

References IOptimizedNetwork::pOptimizedNetworkImpl.
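A sketch of a visitor that walks the optimized graph and prints each layer's name. The overridden signature follows armnn::IStrategy; verify it against the IStrategy.hpp of your ArmNN version.

#include <armnn/INetwork.hpp>
#include <armnn/IStrategy.hpp>
#include <iostream>

struct LayerPrinter : public armnn::IStrategy
{
    void ExecuteStrategy(const armnn::IConnectableLayer* /*layer*/,
                         const armnn::BaseDescriptor& /*descriptor*/,
                         const std::vector<armnn::ConstTensor>& /*constants*/,
                         const char* name,
                         const armnn::LayerBindingId /*id*/) override
    {
        std::cout << (name ? name : "<unnamed>") << std::endl;
    }
};

// Usage:
//     LayerPrinter printer;
//     optNet->ExecuteStrategy(printer);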

◆ GetGuid()

arm::pipe::ProfilingGuid GetGuid ( ) const

Definition at line 726 of file Network.cpp.

arm::pipe::ProfilingGuid IOptimizedNetwork::GetGuid() const
{
    return pOptimizedNetworkImpl->GetGuid();
}

References IOptimizedNetwork::pOptimizedNetworkImpl.

◆ GetNumInputs()

size_t GetNumInputs ( ) const

Definition at line 731 of file Network.cpp.

size_t IOptimizedNetwork::GetNumInputs() const
{
    return pOptimizedNetworkImpl->GetNumInputs();
}

References IOptimizedNetwork::pOptimizedNetworkImpl.

◆ GetNumOutputs()

size_t GetNumOutputs ( ) const

Definition at line 736 of file Network.cpp.

size_t IOptimizedNetwork::GetNumOutputs() const
{
    return pOptimizedNetworkImpl->GetNumOutputs();
}

References IOptimizedNetwork::pOptimizedNetworkImpl.

◆ GetProfiler()

const std::shared_ptr< IProfiler > & GetProfiler ( ) const

Definition at line 721 of file Network.cpp.

const std::shared_ptr<IProfiler>& IOptimizedNetwork::GetProfiler() const
{
    return pOptimizedNetworkImpl->GetGraph().GetProfiler();
}

References IOptimizedNetwork::pOptimizedNetworkImpl.
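A sketch of dumping the collected profiling report after inference has run, assuming profiling was switched on beforehand (e.g. through the optimizer options):

#include <armnn/INetwork.hpp>
#include <armnn/IProfiler.hpp>
#include <iostream>

void DumpProfilingReport(const armnn::IOptimizedNetwork& optNet)
{
    const std::shared_ptr<armnn::IProfiler>& profiler = optNet.GetProfiler();
    if (profiler && profiler->IsProfilingEnabled())
    {
        // Writes the profiling report to the stream in JSON format.
        profiler->Print(std::cout);
    }
}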

◆ PrintGraph()

Status PrintGraph ( )

Definition at line 711 of file Network.cpp.

Status IOptimizedNetwork::PrintGraph()
{
    return pOptimizedNetworkImpl->PrintGraph();
}

References IOptimizedNetwork::pOptimizedNetworkImpl.

◆ SerializeToDot()

Status SerializeToDot ( std::ostream &  stream) const

Definition at line 716 of file Network.cpp.

Status IOptimizedNetwork::SerializeToDot(std::ostream& stream) const
{
    return pOptimizedNetworkImpl->SerializeToDot(stream);
}

References IOptimizedNetwork::pOptimizedNetworkImpl.
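A sketch of writing the optimized graph in Graphviz DOT format; the file can then be rendered offline, e.g. with `dot -Tpng optimized.dot -o optimized.png`:

#include <armnn/INetwork.hpp>
#include <fstream>

armnn::Status WriteDotFile(const armnn::IOptimizedNetwork& optNet)
{
    std::ofstream dotFile("optimized.dot");
    return optNet.SerializeToDot(dotFile);
}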

Friends And Related Function Documentation

◆ GetGraphForTesting

Graph & GetGraphForTesting ( IOptimizedNetwork *  optNetPtr)
friend

Definition at line 49 of file TestUtils.cpp.

Graph& GetGraphForTesting(IOptimizedNetwork* optNet)
{
    return optNet->pOptimizedNetworkImpl->GetGraph();
}

◆ GetModelOptionsForTesting

ModelOptions & GetModelOptionsForTesting ( IOptimizedNetwork *  optNetPtr)
friend

Definition at line 54 of file TestUtils.cpp.

ModelOptions& GetModelOptionsForTesting(IOptimizedNetwork* optNet)
{
    return optNet->pOptimizedNetworkImpl->GetModelOptions();
}

◆ LoadedNetwork

friend class LoadedNetwork
friend

Definition at line 927 of file INetwork.hpp.

◆ Optimize [1/2]

IOptimizedNetworkPtr Optimize ( const Graph &  inGraph,
const std::vector< BackendId > &  backendPreferences,
const IDeviceSpec &  deviceSpec,
const OptimizerOptionsOpaque &  options,
Optional< std::vector< std::string > & >  messages = EmptyOptional() 
)
friend

Create an optimized version of the network.

Parameters
    inGraph	Graph to be optimized.
    backendPreferences	The choice of the backend ordered by user preferences.
    deviceSpec	DeviceSpec object as queried from the runtime. See IRuntime::GetDeviceSpec()
    messages	If there are failures or warnings, a string describing them will be added to the vector.
    options	OptimizerOptions object with optimizer configuration options.
Returns
An IOptimizedNetworkPtr interface to the optimized network. Throws an exception derived from armnn::Exception if the process fails.

Definition at line 2022 of file Network.cpp.

IOptimizedNetworkPtr Optimize(const Graph& inGraph,
                              const std::vector<BackendId>& backendPreferences,
                              const IDeviceSpec& deviceSpec,
                              const OptimizerOptionsOpaque& options,
                              Optional<std::vector<std::string>&> messages)
{
    ARMNN_LOG(debug) << options.ToString();

    // Enable profiling
    auto profiler = inGraph.GetProfiler();
    ProfilerManager::GetInstance().RegisterProfiler(profiler.get());
    profiler->EnableProfiling(options.GetProfilingEnabled());

    if (backendPreferences.empty())
    {
        throw InvalidArgumentException("Invoked Optimize with no backends specified");
    }

    if (options.GetReduceFp32ToBf16())
    {
        throw InvalidArgumentException("BFloat16 optimization is currently ignored. In order to use Bf16 optimization "
                                       "Please use the FastMathEnabled backend option for CpuAcc or GpuAcc.");
    }

    if (options.GetReduceFp32ToFp16() && options.GetReduceFp32ToBf16())
    {
        throw InvalidArgumentException("BFloat16 and Float16 optimization cannot be enabled at the same time.");
    }

    // Ensure TensorInfo is set on all output slots of ConstantLayers in the graph
    inGraph.VerifyConstantLayerSetTensorInfo();

    std::unique_ptr<Graph> graph = std::make_unique<Graph>(inGraph);

    // We need to pass on the information about whether import and export is enabled to the LoadNetwork phase.
    // The mechanism to do that is to add model options to the optimized network.
    armnn::BackendOptions importExport("Global",
                                       {{"ImportEnabled", options.GetImportEnabled()},
                                        {"ExportEnabled", options.GetExportEnabled()}});
    ModelOptions optimizedOptions(options.GetModelOptions());
    optimizedOptions.push_back(importExport);

    auto optNet = IOptimizedNetworkPtr(new IOptimizedNetwork(std::move(graph), optimizedOptions),
                                       &IOptimizedNetwork::Destroy);

    IOptimizedNetwork* optNetObjPtr = optNet.get();

    // Get the optimized graph
    Graph& optGraph = optNetObjPtr->pOptimizedNetworkImpl->GetGraph();

    if (options.GetShapeInferenceMethod() == ShapeInferenceMethod::InferAndValidate)
    {
        // Infer the tensor infos for all output slots. Throws an exception on failure
        optGraph.InferTensorInfos();
    }

    using namespace optimizations;
    // Substitute Max + Min with Bounded Relu before AddBroadcastReshapeLayer optimisation,
    // as Bounded ReLu needs the constants to be 1D size 1
    Optimizer::Pass(optGraph, MakeOptimizations(MaxMinIntoBoundedRelu()));

    // Perform BroadcastToOptimizationLayer before AddBroadcastReshapeLayer optimisation
    Optimizer::Pass(optGraph, MakeOptimizations(BroadcastToOptimizationLayer()));

    Optimizer::Pass(optGraph, MakeOptimizations(AddBroadcastReshapeLayer()));

    if (options.GetShapeInferenceMethod() == ShapeInferenceMethod::ValidateOnly)
    {
        // Validate the tensor infos for all output slots. Throws an exception on failure
        optGraph.InferTensorInfos();
    }

    // Initialize backend settings
    BackendSettings backendSettings(backendPreferences, deviceSpec);
    auto availablePreferredBackends = backendSettings.GetAvailablePreferredBackends();
    if (availablePreferredBackends.empty())
    {
        std::stringstream failureMsg;
        failureMsg << "None of the preferred backends " << backendPreferences
                   << " are supported. Current platform provides " << backendSettings.m_SupportedBackends;
        ReportError(failureMsg.str(), messages);
        throw InvalidArgumentException(failureMsg.str());
    }

    // Create a map to temporarily hold initialized backend objects
    TensorHandleFactoryRegistry tensorHandleFactoryRegistry;
    BackendsMap backends = CreateSupportedBackends(tensorHandleFactoryRegistry, backendSettings);
    bool hasFp16 = CheckFp16Support(backends, availablePreferredBackends);

    bool reduceFp32ToFp16 = options.GetReduceFp32ToFp16();
    // If fp16 is supported on the backend and fastmath has been enabled and the model is a TfLite converted Fp16
    // model: enable turbo mode optimizations
    if (hasFp16 && CheckFastMathSupport(availablePreferredBackends, optimizedOptions) && IsTfLiteTurboModel(optGraph))
    {
        Optimizer::Pass(optGraph, MakeOptimizations(TurboConvertConstDequantisationLayersToConstLayers()));
        reduceFp32ToFp16 = true;
    }
    else
    {
        Optimizer::Pass(optGraph, MakeOptimizations(ConvertConstDequantisationLayersToConstLayers()));
    }

    // Group Constant Layer optimizations together where possible.
    // This is important as:
    // FusePermuteIntoConstantLayer must happen before FoldPadIntoDepthwiseConvolution2d and
    // FuseBatchNormIntoDepthwiseConvolution2D.
    Optimizer::Pass(optGraph, MakeOptimizations(FusePermuteIntoConstLayer()));
    // Perform optimisation passes
    Optimizer::Pass(optGraph, MakeOptimizations(SquashEqualPermuteSiblings(),
                                                SquashEqualTransposeSiblings(),
                                                SquashEqualReshapeSiblings(),
                                                OptimizeInversePermutes(),
                                                OptimizeInverseTransposes(),
                                                MovePermuteUp(),
                                                MoveTransposeUp(),
                                                PermuteAsReshape(),
                                                TransposeAsReshape(),
                                                OptimizeConsecutiveReshapes(),
                                                FoldPadIntoConvolution2d(),
                                                FoldPadIntoDepthwiseConvolution2d(),
                                                FoldPadIntoPooling2d(),
                                                PermuteAndBatchToSpaceAsDepthToSpace(),
                                                TransposeAndBatchToSpaceAsDepthToSpace(),
                                                FuseBatchNormIntoConvolution2DFloat32(),
                                                FuseBatchNormIntoConvolution2DFloat16(),
                                                FuseBatchNormIntoDepthwiseConvolution2DFloat32(),
                                                FuseBatchNormIntoDepthwiseConvolution2DFloat16()));

    const std::vector<BackendId> mappedGpuBackends = BackendRegistryInstance().GetMappedGpuBackends();

    // All or nothing Gpu backends cannot be used as fallback
    for (auto backend : mappedGpuBackends)
    {
        if (std::count(backendPreferences.begin(), backendPreferences.end(), backend)
            && (backendPreferences[0] != backend) &&
            (backendPreferences[0] != armnn::BackendId("GpuAcc")))
        {
            std::stringstream failureMsg;
            failureMsg << backend << " backend cannot be specified as fallback.";
            ReportError(failureMsg.str(), messages);
            throw InvalidArgumentException(failureMsg.str());
        }
    }

    std::vector<BackendId> amendedBackendPreferences = backendPreferences;
    std::unordered_set<BackendId> supportedBackends = armnn::BackendRegistryInstance().GetBackendIds();
    if (amendedBackendPreferences[0] == armnn::BackendId("GpuAcc"))
    {
        // Add mapped Gpu backends if not already there and GpuAcc is first backend requested
        for (auto backend : mappedGpuBackends)
        {
            if (!std::count(amendedBackendPreferences.begin(), amendedBackendPreferences.end(), backend))
            {
                amendedBackendPreferences.insert(amendedBackendPreferences.begin(), backend);
            }
        }
    }

    if (reduceFp32ToFp16 && hasFp16)
    {
        ARMNN_SCOPED_PROFILING_EVENT(Compute::Undefined, "Optimizer_ReduceFp32ToFp16");
        Optimizer::Pass(optGraph, MakeOptimizations(Fp32NetworkToFp16Converter()));
        Optimizer::Pass(optGraph, MakeOptimizations(ConvertConstantsFloatToHalf()));
    }
    // Assign an available backend to each layer
    Graph::Iterator firstLayer = optGraph.begin();
    Graph::Iterator lastLayer = optGraph.end();
    OptimizationResult assignBackendsResult = AssignBackends(optNetObjPtr->pOptimizedNetworkImpl.get(),
                                                             backendSettings,
                                                             firstLayer,
                                                             lastLayer,
                                                             messages);
    if (assignBackendsResult.m_Error)
    {
        // Failed to assign a backend to each layer
        throw InvalidArgumentException("Failed to assign a backend to each layer");
    }

    Optimizer::Pass(optGraph, MakeOptimizations(OptimizeInverseConversionsFp16(),
                                                OptimizeInverseConversionsFp32()));

    // Apply the backend-specific optimizations
    OptimizationResult backendOptimizationResult = ApplyBackendOptimizations(optNetObjPtr->pOptimizedNetworkImpl.get(),
                                                                             backendSettings,
                                                                             backends,
                                                                             options.GetModelOptions(),
                                                                             messages);
    if (backendOptimizationResult.m_Error)
    {
        // Failed to apply the backend-specific optimizations
        throw InvalidArgumentException("Failed to apply the backend-specific optimizations");
    }

    // Convert constants
    {
        ARMNN_SCOPED_PROFILING_EVENT(Compute::Undefined, "Optimizer_ConvertConstants");
        Optimizer::Pass(optGraph, MakeOptimizations(ConvertConstantsFloatToHalf()));
        Optimizer::Pass(optGraph, MakeOptimizations(ConvertConstantsHalfToFloat()));
    }

    // This must occur after all topological changes to the graph and any redirection of variables
    // If the debug flag is set, then insert a DebugLayer after each layer
    // Doing this after applying the backend optimizations as they might have changed some layers
    if (options.GetDebugEnabled() && !options.GetDebugToFileEnabled())
    {
        Optimizer::Pass(optGraph, MakeOptimizations(InsertDebugLayer()));
    }
    else if (options.GetDebugToFileEnabled())
    {
        // Setup the output file path
        try
        {
#if !defined(ARMNN_DISABLE_FILESYSTEM)
            auto result = armnnUtils::Filesystem::CreateDirectory("/ArmNNIntermediateLayerOutputs");
            ARMNN_LOG(info) << "Intermediate tensors will be written to: " << result;
#endif
            Optimizer::Pass(optGraph, MakeOptimizations(InsertDebugToFileLayer()));
        }
        catch (const armnn::RuntimeException& e)
        {
            // If we cannot create the output directory then we'll issue a warning and continue.
            ARMNN_LOG(warning) << "Unable to print intermediate layer outputs : " << e.what();
        }
    }

    // Calculate the compatibility strategies for tensor handles
    OptimizationResult strategyResult = SelectTensorHandleStrategy(optGraph,
                                                                   backends,
                                                                   tensorHandleFactoryRegistry,
                                                                   options.GetImportEnabled(),
                                                                   options.GetExportEnabled(),
                                                                   messages);

    if (strategyResult.m_Error)
    {
        // Failed to apply the backend-specific optimizations
        return IOptimizedNetworkPtr(nullptr, &IOptimizedNetwork::Destroy);
    }

    // Based on the tensor handle strategy determined above, insert copy layers where required.
    {
        ARMNN_SCOPED_PROFILING_EVENT(Compute::Undefined, "Optimizer_AddCompatibilityLayers");
        optGraph.AddCompatibilityLayers(backends, tensorHandleFactoryRegistry);
    }

    return optNet;
}
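A sketch of surfacing optimizer warnings and failure descriptions through the `messages` parameter. It uses the public INetwork overload (the Graph overload handles messages identically); the backend list is illustrative.

#include <armnn/ArmNN.hpp>
#include <iostream>
#include <string>
#include <vector>

armnn::IOptimizedNetworkPtr OptimizeVerbose(const armnn::INetwork& network,
                                            armnn::IRuntime& runtime)
{
    std::vector<std::string> messages;
    armnn::IOptimizedNetworkPtr optNet =
        armnn::Optimize(network,
                        {armnn::Compute::CpuAcc, armnn::Compute::CpuRef},
                        runtime.GetDeviceSpec(),
                        armnn::OptimizerOptionsOpaque(),
                        armnn::Optional<std::vector<std::string>&>(messages));

    // Any warnings or failure descriptions accumulated during optimization land here.
    for (const std::string& msg : messages)
    {
        std::cerr << msg << std::endl;
    }
    return optNet;
}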

◆ Optimize [2/2]

IOptimizedNetworkPtr Optimize ( const INetwork &  inNetwork,
const std::vector< BackendId > &  backendPreferences,
const IDeviceSpec &  deviceSpec,
const OptimizerOptionsOpaque &  options = OptimizerOptionsOpaque(),
Optional< std::vector< std::string > & >  messages = EmptyOptional() 
)
friend

Create an optimized version of the network.

Parameters
    network	INetwork description of the network to be optimized.
    backendPreferences	The choice of the backend ordered by user preferences.
    deviceSpec	DeviceSpec object as queried from the runtime. See IRuntime::GetDeviceSpec()
    messages	If there are failures or warnings, a string describing them will be added to the vector.
    options	OptimizerOptions object with optimizer configuration options.
Returns
An IOptimizedNetworkPtr interface to the optimized network. Throws an exception derived from armnn::Exception if the process fails.

Definition at line 2286 of file Network.cpp.

IOptimizedNetworkPtr Optimize(const INetwork& inNetwork,
                              const std::vector<BackendId>& backendPreferences,
                              const IDeviceSpec& deviceSpec,
                              const OptimizerOptionsOpaque& options,
                              Optional<std::vector<std::string>&> messages)
{
    return Optimize(inNetwork.pNetworkImpl->GetGraph(),
                    backendPreferences,
                    deviceSpec,
                    options,
                    messages);
}
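An end-to-end sketch of the typical flow this overload sits in: build an INetwork, optimize it, and load it into a runtime. The network layout and tensor shape are placeholders.

#include <armnn/ArmNN.hpp>

int main()
{
    using namespace armnn;

    IRuntimePtr runtime = IRuntime::Create(IRuntime::CreationOptions());

    // A trivial network: one input forwarded straight to one output.
    INetworkPtr network = INetwork::Create();
    IConnectableLayer* input  = network->AddInputLayer(0);
    IConnectableLayer* output = network->AddOutputLayer(0);
    input->GetOutputSlot(0).Connect(output->GetInputSlot(0));
    input->GetOutputSlot(0).SetTensorInfo(TensorInfo({1, 4}, DataType::Float32));

    // Throws an exception derived from armnn::Exception on failure;
    // options and messages take their defaults here.
    IOptimizedNetworkPtr optNet = Optimize(*network,
                                           {Compute::CpuRef},
                                           runtime->GetDeviceSpec());

    NetworkId networkId;
    runtime->LoadNetwork(networkId, std::move(optNet));
    return 0;
}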

Member Data Documentation

◆ pOptimizedNetworkImpl

std::unique_ptr< OptimizedNetworkImpl > pOptimizedNetworkImpl
protected

Definition at line 944 of file INetwork.hpp.

The documentation for this class was generated from the following files:
INetwork.hpp
Network.cpp