ArmNN 25.11
OptimizerOptions Struct Reference

#include <INetwork.hpp>

Public Member Functions

 OptimizerOptions ()
 OptimizerOptions (bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16, bool importEnabled, ModelOptions modelOptions={}, bool exportEnabled=false, bool debugToFile=false)
 OptimizerOptions (bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16=false, ShapeInferenceMethod shapeInferenceMethod=armnn::ShapeInferenceMethod::ValidateOnly, bool importEnabled=false, ModelOptions modelOptions={}, bool exportEnabled=false, bool debugToFile=false, bool allowExpandedDims=false)
const std::string ToString () const

Public Attributes

bool m_ReduceFp32ToFp16
 Reduces all Fp32 operators in the model to Fp16 for faster processing.
bool m_Debug
 Add debug data for easier troubleshooting.
bool m_DebugToFile
 Pass debug data to separate output files for easier troubleshooting.
bool m_ReduceFp32ToBf16
 Note: this feature has been replaced by enabling Fast Math in the Compute Library backend options.
ShapeInferenceMethod m_shapeInferenceMethod
 Infer output size when not available.
bool m_ImportEnabled
 Enable Import.
ModelOptions m_ModelOptions
 Enable Model Options.
bool m_ProfilingEnabled
 Enable profiling dump of the optimizer phase.
bool m_ExportEnabled
 Enable Export.
bool m_AllowExpandedDims
 When calculating tensor sizes, dimensions of size == 1 will be ignored.

Detailed Description

Definition at line 151 of file INetwork.hpp.
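
These options are consumed by the optimizer. A minimal, hedged usage sketch, assuming a network already built through INetwork and the armnn::Optimize overload that accepts OptimizerOptions (newer releases route this through OptimizerOptionsOpaque):

#include <armnn/ArmNN.hpp>

// Sketch only: a real network would come from a parser or be built by hand.
armnn::INetworkPtr network = armnn::INetwork::Create();
armnn::IRuntimePtr runtime = armnn::IRuntime::Create(armnn::IRuntime::CreationOptions());

armnn::OptimizerOptions options;
options.m_ReduceFp32ToFp16 = true;   // run Fp32 operators in Fp16 where supported
options.m_Debug            = true;   // attach debug data for troubleshooting

armnn::IOptimizedNetworkPtr optNet = armnn::Optimize(
    *network, {armnn::Compute::CpuAcc}, runtime->GetDeviceSpec(), options);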

Constructor & Destructor Documentation

◆ OptimizerOptions() [1/3]

OptimizerOptions ( )
inline

Definition at line 154 of file INetwork.hpp.

OptimizerOptions()
    : m_ReduceFp32ToFp16(false)
    , m_Debug(false)
    , m_DebugToFile(false)
    , m_ReduceFp32ToBf16(false)
    , m_shapeInferenceMethod(armnn::ShapeInferenceMethod::ValidateOnly)
    , m_ImportEnabled(false)
    , m_ModelOptions()
    , m_ProfilingEnabled(false)
    , m_ExportEnabled(false)
    , m_AllowExpandedDims(false)
{}

References m_AllowExpandedDims, m_Debug, m_DebugToFile, m_ExportEnabled, m_ImportEnabled, m_ModelOptions, m_ProfilingEnabled, m_ReduceFp32ToBf16, m_ReduceFp32ToFp16, m_shapeInferenceMethod, OptimizerOptions(), and armnn::ValidateOnly.

Referenced by OptimizerOptions(), OptimizerOptions(), and OptimizerOptions().

◆ OptimizerOptions() [2/3]

OptimizerOptions ( bool reduceFp32ToFp16,
bool debug,
bool reduceFp32ToBf16,
bool importEnabled,
ModelOptions modelOptions = {},
bool exportEnabled = false,
bool debugToFile = false )
inline

Definition at line 168 of file INetwork.hpp.

OptimizerOptions(bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16,
                 bool importEnabled, ModelOptions modelOptions = {},
                 bool exportEnabled = false, bool debugToFile = false)
    : m_ReduceFp32ToFp16(reduceFp32ToFp16)
    , m_Debug(debug)
    , m_DebugToFile(debugToFile)
    , m_ReduceFp32ToBf16(reduceFp32ToBf16)
    , m_shapeInferenceMethod(armnn::ShapeInferenceMethod::ValidateOnly)
    , m_ImportEnabled(importEnabled)
    , m_ModelOptions(modelOptions)
    , m_ProfilingEnabled(false)
    , m_ExportEnabled(exportEnabled)
    , m_AllowExpandedDims(false)
{
}

References armnn::debug, and OptimizerOptions().
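
A hedged call sketch for this overload; all four leading bool parameters must be supplied, which is what distinguishes it from overload [3/3]:

// modelOptions, exportEnabled and debugToFile keep their defaults.
armnn::OptimizerOptions options(/*reduceFp32ToFp16=*/true,
                                /*debug=*/false,
                                /*reduceFp32ToBf16=*/false,
                                /*importEnabled=*/true);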

◆ OptimizerOptions() [3/3]

OptimizerOptions ( bool reduceFp32ToFp16,
bool debug,
bool reduceFp32ToBf16 = false,
ShapeInferenceMethod shapeInferenceMethod = armnn::ShapeInferenceMethod::ValidateOnly,
bool importEnabled = false,
ModelOptions modelOptions = {},
bool exportEnabled = false,
bool debugToFile = false,
bool allowExpandedDims = false )
inline

Definition at line 184 of file INetwork.hpp.

OptimizerOptions(bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16 = false,
                 ShapeInferenceMethod shapeInferenceMethod = armnn::ShapeInferenceMethod::ValidateOnly,
                 bool importEnabled = false, ModelOptions modelOptions = {},
                 bool exportEnabled = false, bool debugToFile = false,
                 bool allowExpandedDims = false)
    : m_ReduceFp32ToFp16(reduceFp32ToFp16)
    , m_Debug(debug)
    , m_DebugToFile(debugToFile)
    , m_ReduceFp32ToBf16(reduceFp32ToBf16)
    , m_shapeInferenceMethod(shapeInferenceMethod)
    , m_ImportEnabled(importEnabled)
    , m_ModelOptions(modelOptions)
    , m_ProfilingEnabled(false)
    , m_ExportEnabled(exportEnabled)
    , m_AllowExpandedDims(allowExpandedDims)
{
}

References armnn::debug, OptimizerOptions(), and armnn::ValidateOnly.
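
A hedged call sketch for the full overload, overriding the shape inference method while leaving the trailing parameters at their defaults:

// InferAndValidate asks the optimizer to infer missing output shapes
// instead of only validating the ones the model provides.
armnn::OptimizerOptions options(/*reduceFp32ToFp16=*/false,
                                /*debug=*/false,
                                /*reduceFp32ToBf16=*/false,
                                armnn::ShapeInferenceMethod::InferAndValidate);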

Member Function Documentation

◆ ToString()

const std::string ToString ( ) const
inline

Definition at line 201 of file INetwork.hpp.

const std::string ToString() const
{
    std::stringstream stream;
    stream << "OptimizerOptions: \n";
    stream << "\tReduceFp32ToFp16: " << m_ReduceFp32ToFp16 << "\n";
    stream << "\tReduceFp32ToBf16: " << m_ReduceFp32ToBf16 << "\n";
    stream << "\tDebug: " << m_Debug << "\n";
    stream << "\tDebug to file: " << m_DebugToFile << "\n";
    stream << "\tShapeInferenceMethod: " <<
        (m_shapeInferenceMethod == ShapeInferenceMethod::ValidateOnly
         ? "ValidateOnly" : "InferAndValidate") << "\n";
    stream << "\tImportEnabled: " << m_ImportEnabled << "\n";
    stream << "\tExportEnabled: " << m_ExportEnabled << "\n";
    stream << "\tProfilingEnabled: " << m_ProfilingEnabled << "\n";
    stream << "\tAllowExpandedDims: " << m_AllowExpandedDims << "\n";

    stream << "\tModelOptions: \n";
    for (auto optionsGroup : m_ModelOptions)
    {
        for (size_t i = 0; i < optionsGroup.GetOptionCount(); i++)
        {
            const armnn::BackendOptions::BackendOption option = optionsGroup.GetOption(i);
            stream << "\t\tBackend: " << optionsGroup.GetBackendId() << "\n"
                   << "\t\t\tOption: " << option.GetName() << "\n"
                   << "\t\t\tValue: " << std::string(option.GetValue().ToString()) << "\n";
        }
    }

    return stream.str();
}

References BackendOptions::BackendOption::GetName(), BackendOptions::BackendOption::GetValue(), m_AllowExpandedDims, m_Debug, m_DebugToFile, m_ExportEnabled, m_ImportEnabled, m_ModelOptions, m_ProfilingEnabled, m_ReduceFp32ToBf16, m_ReduceFp32ToFp16, m_shapeInferenceMethod, BackendOptions::Var::ToString(), and armnn::ValidateOnly.
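
A small usage sketch, assuming <iostream> is included; useful for logging the option set before an optimization run:

armnn::OptimizerOptions options;
options.m_Debug = true;

// Prints every option on its own line, including any ModelOptions entries.
std::cout << options.ToString();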

Member Data Documentation

◆ m_AllowExpandedDims

bool m_AllowExpandedDims

When calculating tensor sizes, dimensions of size == 1 will be ignored.

Definition at line 265 of file INetwork.hpp.

Referenced by OptimizerOptions(), OptimizerOptionsOpaque::OptimizerOptionsOpaque(), and ToString().

◆ m_Debug

bool m_Debug

Add debug data for easier troubleshooting.

Definition at line 240 of file INetwork.hpp.

Referenced by OptimizerOptions(), OptimizerOptionsOpaque::OptimizerOptionsOpaque(), and ToString().

◆ m_DebugToFile

bool m_DebugToFile

Pass debug data to separate output files for easier troubleshooting.

Definition at line 243 of file INetwork.hpp.

Referenced by OptimizerOptions(), OptimizerOptionsOpaque::OptimizerOptionsOpaque(), and ToString().

◆ m_ExportEnabled

bool m_ExportEnabled

Enable Export.

Definition at line 262 of file INetwork.hpp.

Referenced by OptimizerOptions(), OptimizerOptionsOpaque::OptimizerOptionsOpaque(), and ToString().

◆ m_ImportEnabled

bool m_ImportEnabled

Enable Import.

Definition at line 253 of file INetwork.hpp.

Referenced by OptimizerOptions(), OptimizerOptionsOpaque::OptimizerOptionsOpaque(), and ToString().

◆ m_ModelOptions

ModelOptions m_ModelOptions

Enable Model Options.

Definition at line 256 of file INetwork.hpp.

Referenced by OptimizerOptions(), OptimizerOptionsOpaque::OptimizerOptionsOpaque(), and ToString().
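
ModelOptions is a vector of BackendOptions, so backend-specific settings can be appended directly. A hedged sketch passing the Compute Library Fast Math switch to the GpuAcc backend (the replacement for m_ReduceFp32ToBf16):

// Each BackendOptions group names a backend and carries key/value options.
armnn::BackendOptions gpuFastMath("GpuAcc", { { "FastMathEnabled", true } });

armnn::OptimizerOptions options;
options.m_ModelOptions.push_back(gpuFastMath);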

◆ m_ProfilingEnabled

bool m_ProfilingEnabled

Enable profiling dump of the optimizer phase.

Definition at line 259 of file INetwork.hpp.

Referenced by OptimizerOptions(), OptimizerOptionsOpaque::OptimizerOptionsOpaque(), and ToString().

◆ m_ReduceFp32ToBf16

bool m_ReduceFp32ToBf16

Note: this feature has been replaced by enabling Fast Math in the Compute Library backend options. This is currently a placeholder option.

Definition at line 247 of file INetwork.hpp.

Referenced by OptimizerOptions(), OptimizerOptionsOpaque::OptimizerOptionsOpaque(), and ToString().

◆ m_ReduceFp32ToFp16

bool m_ReduceFp32ToFp16

Reduces all Fp32 operators in the model to Fp16 for faster processing.

Note: this feature works best if all operators in the model are in Fp32. ArmNN will add conversion layers around layers that were not in Fp32 in the first place, or whose operator is not supported in Fp16. The overhead of these conversions can lead to slower overall performance if too many conversions are required.

Definition at line 237 of file INetwork.hpp.

Referenced by OptimizerOptions(), OptimizerOptionsOpaque::OptimizerOptionsOpaque(), and ToString().

◆ m_shapeInferenceMethod

ShapeInferenceMethod m_shapeInferenceMethod

Infer output size when not available.

Definition at line 250 of file INetwork.hpp.

Referenced by OptimizerOptions(), OptimizerOptionsOpaque::OptimizerOptionsOpaque(), and ToString().
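
The method can also be set after construction; a one-line sketch:

// Default is ValidateOnly; InferAndValidate fills in shapes the model omits.
armnn::OptimizerOptions options;
options.m_shapeInferenceMethod = armnn::ShapeInferenceMethod::InferAndValidate;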


The documentation for this struct was generated from the following file: INetwork.hpp