21.05
Copyright (c) 2021 ARM Limited and Contributors. More...
Namespaces | |
experimental | |
gatordmock | |
optimizations | |
profiling | |
stringUtils | |
test | |
timelinedecoder | |
utility | |
Functions | |
LayerSupportHandle | GetILayerSupportByBackendId (const armnn::BackendId &backend) |
Convenience function to retrieve the LayerSupportHandle for a backend. More... | |
bool | IsCapabilitySupported (const armnn::BackendId &backend, armnn::BackendCapability capability) |
Convenience function to check a capability on a backend. More... | |
constexpr char const * | GetComputeDeviceAsCString (Compute compute) |
Deprecated function that will be removed together with the Compute enum. More... | |
std::ostream & | operator<< (std::ostream &os, const std::vector< Compute > &compute) |
Deprecated function that will be removed together with the Compute enum. More... | |
std::ostream & | operator<< (std::ostream &os, const std::set< Compute > &compute) |
Deprecated function that will be removed together with the Compute enum. More... | |
std::ostream & | operator<< (std::ostream &os, const Compute &compute) |
Deprecated function that will be removed together with the Compute enum. More... | |
std::ostream & | operator<< (std::ostream &os, const BackendId &id) |
template<template< typename... > class TContainer, typename... TContainerTemplateArgs> | |
std::ostream & | operator<< (std::ostream &os, const TContainer< BackendId, TContainerTemplateArgs... > &ids) |
template<typename F > | |
void | ParseOptions (const std::vector< BackendOptions > &options, BackendId backend, F f) |
BackendRegistry & | BackendRegistryInstance () |
std::ostream & | operator<< (std::ostream &os, const BackendVersion &backendVersion) |
template<typename TensorShapeIt > | |
OriginsDescriptor | CreateMergerDescriptorForConcatenation (TensorShapeIt first, TensorShapeIt last, unsigned int concatenationDimension) |
template<typename TensorShapeIt > | |
OriginsDescriptor | CreateDescriptorForConcatenation (TensorShapeIt first, TensorShapeIt last, unsigned int concatenationDimension) |
Convenience template to create an OriginsDescriptor to use when creating a ConcatLayer for performing concatenation of a number of input tensors. More... | |
template<typename ExceptionType > | |
void | ConditionalThrow (bool condition, const std::string &message) |
template<typename ExceptionType > | |
void | ConditionalThrow (bool condition) |
template<typename ExceptionType , typename ComparedType > | |
void | ConditionalThrowIfNotEqual (const std::string &message, const ComparedType &leftHandSide, const ComparedType &rightHandSide) |
ComparedType must support operator==(const ComparedType&) and operator<<(ostream&, const ComparedType&). More... | |
IOptimizedNetworkPtr | Optimize (const INetwork &network, const std::vector< BackendId > &backendPreferences, const IDeviceSpec &deviceSpec, const OptimizerOptions &options=OptimizerOptions(), Optional< std::vector< std::string > &> messages=EmptyOptional()) |
Create an optimized version of the network. More... | |
bool | IsActivationSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const ActivationDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsAdditionSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsBatchNormalizationSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const TensorInfo &mean, const TensorInfo &var, const TensorInfo &beta, const TensorInfo &gamma, const BatchNormalizationDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsBatchToSpaceNdSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const BatchToSpaceNdDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsConcatSupported (const BackendId &backend, const std::vector< const TensorInfo *> inputs, const TensorInfo &output, const OriginsDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsConstantSupported (const BackendId &backend, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsConvertFp16ToFp32Supported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsConvertFp32ToFp16Supported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsConvolution2dSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const Convolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsDebugSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsDepthwiseConvolutionSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const DepthwiseConvolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsDequantizeSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsDivisionSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsEqualSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsFakeQuantizationSupported (const BackendId &backend, const TensorInfo &input, const FakeQuantizationDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsFloorSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsFullyConnectedSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const TensorInfo &weights, const TensorInfo &biases, const FullyConnectedDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsGreaterSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsInputSupported (const BackendId &backend, const TensorInfo &input, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsL2NormalizationSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const L2NormalizationDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsLstmSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &outputStateIn, const TensorInfo &cellStateIn, const TensorInfo &scratchBuffer, const TensorInfo &outputStateOut, const TensorInfo &cellStateOut, const TensorInfo &output, const LstmDescriptor &descriptor, const LstmInputParamsInfo &paramsInfo, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsMaximumSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnSupported=nullptr, size_t reasonIfUnSupportedMaxLength=0) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsMeanSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const MeanDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsMemCopySupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsMergeSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsMergerSupported (const BackendId &backend, const std::vector< const TensorInfo *> inputs, const TensorInfo &output, const OriginsDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsMinimumSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsMultiplicationSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsNormalizationSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const NormalizationDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsOutputSupported (const BackendId &backend, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsPadSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const PadDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsPermuteSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const PermuteDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsPreCompiledSupported (const BackendId &backend, const TensorInfo &input, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsPreluSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &alpha, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsPooling2dSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const Pooling2dDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsQuantizedLstmSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &previousCellStateIn, const TensorInfo &previousOutputIn, const TensorInfo &cellStateOut, const TensorInfo &output, const QuantizedLstmInputParamsInfo &paramsInfo, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsReduceSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const ReduceDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsReshapeSupported (const BackendId &backend, const TensorInfo &input, const ReshapeDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsResizeBilinearSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsResizeSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const ResizeDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsRsqrtSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsSoftmaxSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const SoftmaxDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsSpaceToBatchNdSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const SpaceToBatchNdDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsSpaceToDepthSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const SpaceToDepthDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsSplitterSupported (const BackendId &backend, const TensorInfo &input, const ViewsDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
bool | IsSplitterSupported (const BackendId &backend, const TensorInfo &input, const std::vector< std::reference_wrapper< TensorInfo >> &outputs, const ViewsDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsStackSupported (const BackendId &backend, const std::vector< const TensorInfo *> inputs, const TensorInfo &output, const StackDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsStridedSliceSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const StridedSliceDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsSubtractionSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsSwitchSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output0, const TensorInfo &output1, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsTransposeConvolution2dSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const TransposeConvolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
std::string | LevelToString (LogSeverity level) |
LogSeverity | StringToLogLevel (std::string level) |
void | SetLogFilter (LogSeverity level) |
void | SetAllLoggingSinks (bool standardOut, bool debugOut, bool coloured) |
constexpr LogSeverity | ConvertLogSeverity (BoostLogSeverityMapping severity) |
template<typename Arg , typename std::enable_if< IsMemorySource< Arg >::value >::type * = nullptr> | |
MemorySourceFlags | Combine (Arg sourceA, Arg sourceB) |
template<typename Arg , typename ... Args, typename std::enable_if< IsMemorySource< Arg >::value >::type * = nullptr> | |
MemorySourceFlags | Combine (Arg source, Args... rest) |
bool | CheckFlag (MemorySourceFlags flags, MemorySource source) |
template<typename T , class... Args> | |
Optional< T > | MakeOptional (Args &&... args) |
Utility template that constructs an object of type T in-place and wraps it inside an Optional<T> object. More... | |
const char * | GetLayerTypeAsCString (LayerType type) |
constexpr char const * | GetStatusAsCString (Status status) |
constexpr char const * | GetActivationFunctionAsCString (ActivationFunction activation) |
constexpr char const * | GetArgMinMaxFunctionAsCString (ArgMinMaxFunction function) |
constexpr char const * | GetComparisonOperationAsCString (ComparisonOperation operation) |
constexpr char const * | GetUnaryOperationAsCString (UnaryOperation operation) |
constexpr char const * | GetLogicalBinaryOperationAsCString (LogicalBinaryOperation operation) |
constexpr char const * | GetPoolingAlgorithmAsCString (PoolingAlgorithm pooling) |
constexpr char const * | GetOutputShapeRoundingAsCString (OutputShapeRounding rounding) |
constexpr char const * | GetPaddingMethodAsCString (PaddingMethod method) |
constexpr char const * | GetReduceOperationAsCString (ReduceOperation reduce_operation) |
constexpr unsigned int | GetDataTypeSize (DataType dataType) |
template<unsigned N> | |
constexpr bool | StrEqual (const char *strA, const char(&strB)[N]) |
constexpr armnn::Compute | ParseComputeDevice (const char *str) |
Deprecated function that will be removed together with the Compute enum. More... | |
constexpr const char * | GetDataTypeName (DataType dataType) |
constexpr const char * | GetDataLayoutName (DataLayout dataLayout) |
constexpr const char * | GetNormalizationAlgorithmChannelAsCString (NormalizationAlgorithmChannel channel) |
constexpr const char * | GetNormalizationAlgorithmMethodAsCString (NormalizationAlgorithmMethod method) |
constexpr const char * | GetResizeMethodAsCString (ResizeMethod method) |
template<typename T > | |
constexpr bool | IsQuantizedType () |
constexpr bool | IsQuantized8BitType (DataType dataType) |
constexpr bool | IsQuantizedType (DataType dataType) |
std::ostream & | operator<< (std::ostream &os, Status stat) |
std::ostream & | operator<< (std::ostream &os, const armnn::TensorShape &shape) |
template<typename QuantizedType > | |
QuantizedType | Quantize (float value, float scale, int32_t offset) |
Quantize a floating point data type into an 8-bit data type. More... | |
template<typename QuantizedType > | |
float | Dequantize (QuantizedType value, float scale, int32_t offset) |
Dequantize an 8-bit data type into a floating point data type. More... | |
void | VerifyTensorInfoDataType (const armnn::TensorInfo &info, armnn::DataType dataType) |
template<typename ... Ts> | |
void | IgnoreUnused (Ts &&...) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_unsigned< Source >::value &&std::is_unsigned< Dest >::value, Dest > | numeric_cast (Source source) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_signed< Source >::value &&std::is_integral< Source >::value &&std::is_signed< Dest >::value &&std::is_integral< Dest >::value, Dest > | numeric_cast (Source source) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_floating_point< Source >::value &&std::is_floating_point< Dest >::value, Dest > | numeric_cast (Source source) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_floating_point< Source >::value &&std::is_signed< Dest >::value &&std::is_integral< Dest >::value, Dest > | numeric_cast (Source source) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_signed< Source >::value &&std::is_integral< Source >::value &&std::is_floating_point< Dest >::value, Dest > | numeric_cast (Source source) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_signed< Dest >::value &&std::is_integral< Dest >::value &&std::is_unsigned< Source >::value, Dest > | numeric_cast (Source sValue) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_floating_point< Dest >::value &&std::is_unsigned< Source >::value, Dest > | numeric_cast (Source sValue) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_unsigned< Dest >::value &&std::is_signed< Source >::value &&std::is_integral< Source >::value, Dest > | numeric_cast (Source sValue) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_unsigned< Dest >::value &&std::is_floating_point< Source >::value, Dest > | numeric_cast (Source sValue) |
template<typename DestType , typename SourceType > | |
DestType | PolymorphicDowncast (SourceType value) |
Polymorphic downcast for built-in pointers only. More... | |
template<typename DestType , typename SourceType > | |
auto | PolymorphicPointerDowncast (const SourceType &value) |
Polymorphic downcast for shared pointers and built-in pointers. More... | |
std::chrono::high_resolution_clock::time_point | GetTimeNow () |
std::chrono::duration< double, std::milli > | GetTimeDuration (std::chrono::high_resolution_clock::time_point start_time) |
template<typename Function , typename Iterator > | |
constexpr TransformIterator< Function, Iterator > | MakeTransformIterator (Iterator i, Function f) |
void | ConfigureLogging (bool printToStandardOutput, bool printToDebugOutput, LogSeverity severity) |
Configures the logging behaviour of the ARMNN library. More... | |
bool | NeonDetected () |
const std::string | GetVersion () |
template<typename T > | |
bool | CompatibleTypes (DataType) |
template<> | |
bool | CompatibleTypes< float > (DataType dataType) |
template<> | |
bool | CompatibleTypes< Half > (DataType dataType) |
template<> | |
bool | CompatibleTypes< BFloat16 > (DataType dataType) |
template<> | |
bool | CompatibleTypes< uint8_t > (DataType dataType) |
template<> | |
bool | CompatibleTypes< int8_t > (DataType dataType) |
template<> | |
bool | CompatibleTypes< int16_t > (DataType dataType) |
template<> | |
bool | CompatibleTypes< int32_t > (DataType dataType) |
void | swap (OriginsDescriptor &first, OriginsDescriptor &second) |
void | swap (ViewsDescriptor &first, ViewsDescriptor &second) |
template<typename T > | |
constexpr LayerType | LayerEnumOf (const T *=nullptr) |
template<> | |
constexpr LayerType | LayerEnumOf (const ActivationLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const AdditionLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ArgMinMaxLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const BatchNormalizationLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const BatchToSpaceNdLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ComparisonLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ConcatLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ConstantLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ConvertBf16ToFp32Layer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ConvertFp16ToFp32Layer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ConvertFp32ToBf16Layer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ConvertFp32ToFp16Layer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const Convolution2dLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const DebugLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const DepthToSpaceLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const DepthwiseConvolution2dLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const DequantizeLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const DetectionPostProcessLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const DivisionLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ElementwiseUnaryLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const FakeQuantizationLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const FillLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const FloorLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const FullyConnectedLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const GatherLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const InputLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const InstanceNormalizationLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const L2NormalizationLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const LogicalBinaryLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const LogSoftmaxLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const LstmLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MapLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MaximumLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MeanLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MemCopyLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MemImportLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MergeLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MinimumLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MultiplicationLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const NormalizationLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const OutputLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const PadLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const PermuteLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const Pooling2dLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const PreCompiledLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const PreluLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const QuantizeLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const QLstmLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const QuantizedLstmLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const RankLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ReduceLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ReshapeLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ResizeLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const SliceLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const SoftmaxLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const SpaceToBatchNdLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const SpaceToDepthLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const SplitterLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const StackLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const StandInLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const StridedSliceLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const SubtractionLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const SwitchLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const TransposeLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const TransposeConvolution2dLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const UnmapLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const CastLayer *) |
bool | CheckTensorDataTypesEqual (const TensorInfo &input0, const TensorInfo &input1) |
bool | IsArgMinMaxSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const ArgMinMaxDescriptor &descriptor, char *reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength) |
bool | IsConcatSupported (const BackendId &backend, std::vector< const TensorInfo *> inputs, const TensorInfo &output, const OriginsDescriptor &descriptor, char *reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength) |
bool | IsDetectionPostProcessSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const DetectionPostProcessDescriptor &descriptor, char *reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength) |
bool | IsGatherSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength) |
bool | IsGatherSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const GatherDescriptor &descriptor, char *reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength) |
bool | IsMemImportSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength) |
bool | IsMergerSupported (const BackendId &backend, std::vector< const TensorInfo *> inputs, const TensorInfo &output, const OriginsDescriptor &descriptor, char *reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength) |
bool | IsQuantizeSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength) |
bool | IsQLstmSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &previousOutputIn, const TensorInfo &previousCellStateIn, const TensorInfo &outputStateOut, const TensorInfo &cellStateOut, const TensorInfo &output, const QLstmDescriptor &descriptor, const LstmInputParamsInfo &paramsInfo, char *reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength) |
bool | IsReshapeSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const ReshapeDescriptor &descriptor, char *reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength) |
template<typename T , typename V > | |
void | SetValueChecked (Optional< T &> optionalRef, V &&val) |
template<typename Float16Func , typename Float32Func , typename Uint8Func , typename Int32Func , typename BooleanFunc , typename ... Params> | |
bool | IsSupportedForDataTypeGeneric (Optional< std::string &> reasonIfUnsupported, DataType dataType, Float16Func float16FuncPtr, Float32Func float32FuncPtr, Uint8Func uint8FuncPtr, Int32Func int32FuncPtr, BooleanFunc booleanFuncPtr, Params &&... params) |
template<typename ... Params> | |
bool | TrueFunc (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseFunc (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseFuncF16 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseFuncF32 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseFuncU8 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseFuncI32 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseInputFuncF32 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseInputFuncF16 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseOutputFuncF32 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseOutputFuncF16 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
const armnn::ConstTensor | GetInputTensor (const LayerBindingId layerId, const InputTensors &inputTensors) |
const armnn::Tensor | GetOutputTensor (const LayerBindingId layerId, const OutputTensors &outputTensors) |
template<LogSeverity Level> | |
void | SetLoggingSinks (bool standardOut, bool debugOut, bool coloured) |
void | ReportError (const std::string &errorMessage, Optional< std::vector< std::string > &> errorMessages) |
void | ReportWarning (const std::string &warningMessage, Optional< std::vector< std::string > &> warningMessages) |
OptimizationResult | ReturnWithError (OptimizationResult res, const Layer *layer, const BackendSettings &backendSettings, Optional< std::vector< std::string > &> errMessages) |
bool | CheckScaleSetOnQuantizedType (Layer *layer, Optional< std::vector< std::string > &> errMessages) |
template<typename LayerT > | |
LayerT * | ConvertBf16ToFp32Weight (Layer *l) |
OptimizationResult | AttemptBackendAssignment (BackendSettings &backendSettings, Graph &graph, Layer *layer, BackendId backend, DataType dataTypeIn, DataType dataTypeOut, const std::vector< BackendId > &availablePreferredBackends, std::string &reasonIfUnsupported, Optional< std::vector< std::string > &> errMessages) |
OptimizationResult | AssignBackends (OptimizedNetworkImpl *optNetObjPtr, BackendSettings &backendSettings, Graph::Iterator &firstLayer, Graph::Iterator &lastLayer, Optional< std::vector< std::string > &> errMessages) |
OptimizationResult | AssignBackends (OptimizedNetworkImpl *optNetObjPtr, BackendSettings &backendSettings, SubgraphView &subgraph, Optional< std::vector< std::string > &> errMessages) |
BackendsMap | CreateSupportedBackends (TensorHandleFactoryRegistry &handleFactoryRegistry, BackendSettings &backendSettings) |
OptimizationResult | ApplyBackendOptimizations (OptimizedNetworkImpl *optNetObjPtr, BackendSettings &backendSettings, BackendsMap &backends, const ModelOptions &modelOptions, Optional< std::vector< std::string > &> errMessages) |
bool | RequiresCopy (ITensorHandleFactory::FactoryId src, ITensorHandleFactory::FactoryId dst, TensorHandleFactoryRegistry &registry) |
ITensorHandleFactory::FactoryId | CalculateSlotOptionForInput (BackendsMap &backends, OutputSlot &slot, TensorHandleFactoryRegistry &registry, bool importEnabled) |
ITensorHandleFactory::FactoryId | CalculateSlotOptionForOutput (BackendsMap &backends, OutputSlot &slot, TensorHandleFactoryRegistry &registry) |
ITensorHandleFactory::FactoryId | CalculateSlotOption (BackendsMap &backends, OutputSlot &outputSlot, TensorHandleFactoryRegistry &registry, bool importEnabled) |
EdgeStrategy | CalculateEdgeStrategy (BackendsMap &backends, ITensorHandleFactory::FactoryId srcFactoryId, const Layer &layer, const Layer &connectedLayer, TensorHandleFactoryRegistry &registry, bool importEnabled) |
OptimizationResult | SelectTensorHandleStrategy (Graph &optGraph, BackendsMap &backends, TensorHandleFactoryRegistry &registry, bool importEnabled, Optional< std::vector< std::string > &> errMessages) |
std::vector< ConvertBf16ToFp32Layer * > | InsertConvertBf16ToFp32LayersBefore (Graph &graph, Layer &layer, bool expectCorrectInputType) |
std::vector< ConvertFp32ToBf16Layer * > | InsertConvertFp32ToBf16LayersBefore (Graph &graph, Layer &layer, bool expectCorrectInputType) |
std::vector< ConvertFp16ToFp32Layer * > | InsertConvertFp16ToFp32LayersBefore (Graph &graph, Layer &layer, bool expectCorrectInputType) |
std::vector< ConvertFp32ToBf16Layer * > | InsertConvertFp32ToBf16LayersAfter (Graph &graph, Layer &layer) |
std::vector< ConvertFp32ToFp16Layer * > | InsertConvertFp32ToFp16LayersAfter (Graph &graph, Layer &layer) |
std::vector< DebugLayer * > | InsertDebugLayerAfter (Graph &graph, Layer &layer) |
template<typename T > | |
void | Append (Optimizer::Optimizations &optimizations, T &&optimization) |
template<typename Front , typename... Others> | |
void | Append (Optimizer::Optimizations &optimizations, Front &&front, Others &&... others) |
template<typename... Args> | |
Optimizer::Optimizations | MakeOptimizations (Args &&... args) |
Measurement | FindMeasurement (const std::string &name, const Event *event) |
std::vector< Measurement > | FindKernelMeasurements (const Event *event) |
const Event * | GetEventPtr (const Event *ptr) |
const Event * | GetEventPtr (const std::unique_ptr< Event > &ptr) |
int | CalcLevel (const Event *eventPtr) |
void | ExtractJsonObjects (unsigned int inferenceIndex, const Event *parentEvent, JsonChildObject &parentObject, std::map< const Event *, std::vector< const Event *>> descendantsMap) |
template<typename Delegate > | |
void | ForEachLayerInput (LayerSelectionInfo::LayerInfoContainer &layerInfos, LayerSelectionInfo &layerInfo, Delegate function) |
template<typename Delegate > | |
void | ForEachLayerOutput (LayerSelectionInfo::LayerInfoContainer &layerInfos, LayerSelectionInfo &layerInfo, Delegate function) |
void | AssignSplitId (LayerSelectionInfo::LayerInfoContainer &layerInfos, LayerSelectionInfo &layerInfo) |
bool | IsReadyForSplitAssignment (LayerSelectionInfo::LayerInfoContainer &layerInfos, LayerSelectionInfo &layerInfo) |
BOOST_AUTO_TEST_CASE (CheckConvolution2dLayer) | |
BOOST_AUTO_TEST_CASE (CheckNamedConvolution2dLayer) | |
BOOST_AUTO_TEST_CASE (CheckConvolution2dLayerWithBiases) | |
BOOST_AUTO_TEST_CASE (CheckNamedConvolution2dLayerWithBiases) | |
BOOST_AUTO_TEST_CASE (CheckDepthwiseConvolution2dLayer) | |
BOOST_AUTO_TEST_CASE (CheckNamedDepthwiseConvolution2dLayer) | |
BOOST_AUTO_TEST_CASE (CheckDepthwiseConvolution2dLayerWithBiases) | |
BOOST_AUTO_TEST_CASE (CheckNamedDepthwiseConvolution2dLayerWithBiases) | |
BOOST_AUTO_TEST_CASE (CheckFullyConnectedLayer) | |
BOOST_AUTO_TEST_CASE (CheckNamedFullyConnectedLayer) | |
BOOST_AUTO_TEST_CASE (CheckFullyConnectedLayerWithBiases) | |
BOOST_AUTO_TEST_CASE (CheckNamedFullyConnectedLayerWithBiases) | |
BOOST_AUTO_TEST_CASE (CheckBatchNormalizationLayer) | |
BOOST_AUTO_TEST_CASE (CheckNamedBatchNormalizationLayer) | |
BOOST_AUTO_TEST_CASE (CheckConstLayer) | |
BOOST_AUTO_TEST_CASE (CheckNamedConstLayer) | |
BOOST_AUTO_TEST_CASE (CheckLstmLayerBasic) | |
BOOST_AUTO_TEST_CASE (CheckNamedLstmLayerBasic) | |
BOOST_AUTO_TEST_CASE (CheckLstmLayerCifgDisabled) | |
BOOST_AUTO_TEST_CASE (CheckNamedLstmLayerCifgDisabled) | |
BOOST_AUTO_TEST_CASE (CheckLstmLayerPeephole) | |
BOOST_AUTO_TEST_CASE (CheckLstmLayerPeepholeCifgDisabled) | |
BOOST_AUTO_TEST_CASE (CheckNamedLstmLayerPeephole) | |
BOOST_AUTO_TEST_CASE (CheckLstmLayerProjection) | |
BOOST_AUTO_TEST_CASE (CheckNamedLstmLayerProjection) | |
BOOST_AUTO_TEST_CASE (CheckQLstmLayerBasic) | |
BOOST_AUTO_TEST_CASE (CheckNamedQLstmLayerBasic) | |
BOOST_AUTO_TEST_CASE (CheckQLstmLayerCifgDisabled) | |
BOOST_AUTO_TEST_CASE (CheckQLstmLayerCifgDisabledPeepholeEnabled) | |
BOOST_AUTO_TEST_CASE (CheckQLstmLayerCifgEnabledPeepholeEnabled) | |
BOOST_AUTO_TEST_CASE (CheckQLstmLayerProjectionEnabled) | |
BOOST_AUTO_TEST_CASE (CheckQLstmLayerCifgDisabledLayerNormEnabled) | |
BOOST_AUTO_TEST_CASE (CheckQuantizedLstmLayer) | |
BOOST_AUTO_TEST_CASE (CheckNamedQuantizedLstmLayer) | |
template<typename T > | |
std::vector< T > | GetVector (unsigned int size, float initial, float increment) |
template<typename LayerTest , DataType ArmnnType> | |
INetworkPtr | CreatNetwork (ActivationDescriptor activationDescriptor, bool preventFusing, float scale, int32_t offset) |
template<typename LayerTest , DataType ArmnnType, typename LayerType = typename LayerTest::LayerType, typename T = ResolveType<ArmnnType>> | |
void | FuseActivationIntoPreviousLayerTest (ActivationDescriptor activationDescriptor, float tolerance, Compute backendId, float scale=1.f, int32_t offset=0) |
template<typename LayerTest , DataType ArmnnType, typename LayerType = typename LayerTest::LayerType, typename T = ResolveType<ArmnnType>> | |
bool | FuseActivationSimpleTest (ActivationDescriptor activationDescriptor, Compute backendId, float scale=1.f, int32_t offset=0) |
size_t | GetProfilerEventSequenceSize (armnn::IProfiler *profiler) |
void | RuntimeLoadedNetworksReserve (armnn::RuntimeImpl *runtime) |
std::ostream & | boost_test_print_type (std::ostream &ostr, const TensorInfo &right) |
std::ostream & | boost_test_print_type (std::ostream &ostr, const TensorShape &shape) |
BOOST_AUTO_TEST_CASE (CheckInputLayerVisitorBindingIdAndName) | |
BOOST_AUTO_TEST_CASE (CheckInputLayerVisitorBindingIdAndNameNull) | |
BOOST_AUTO_TEST_CASE (CheckOutputLayerVisitorBindingIdAndName) | |
BOOST_AUTO_TEST_CASE (CheckOutputLayerVisitorBindingIdAndNameNull) | |
void | CheckLayerBindingId (LayerBindingId visitorId, LayerBindingId id) |
Graph & | GetGraphForTesting (IOptimizedNetwork *optNet) |
ModelOptions & | GetModelOptionsForTesting (IOptimizedNetwork *optNet) |
profiling::ProfilingService & | GetProfilingService (armnn::RuntimeImpl *runtime) |
std::ostream & | operator<< (std::ostream &os, const BFloat16 &b) |
void | ReportUntouchedLayers (OptimizationViews &optimizationViews, std::map< LayerGuid, Layer *> untouched) |
template<typename LayerType > | |
LayerType * | FuseLayerWithoutParameters (OptimizationViews &optimizationViews, LayerType *baseLayer, ActivationLayer *activationLayer, ActivationDescriptor &activationDesc, std::string name) |
template<typename LayerType > | |
LayerType * | FuseLayerWithParameters (OptimizationViews &optimizationViews, LayerType *baseLayer, ActivationLayer *activationLayer, ActivationDescriptor &activationDesc, std::string name) |
template<typename LayerType > | |
LayerType * | FuseLayerWithWeightsAndBiases (OptimizationViews &optimizationViews, LayerType *baseLayer, ActivationLayer *activationLayer, ActivationDescriptor &activationDesc, std::string name) |
arm_compute::NormalizationLayerInfo | CreateAclNormalizationLayerInfoForL2Normalization (const armnn::TensorInfo &tensorInfo, armnn::DataLayout dataLayout) |
arm_compute::ActivationLayerInfo::ActivationFunction | ConvertActivationFunctionToAclActivationFunction (ActivationFunction armnnFunction) |
arm_compute::ActivationLayerInfo | ConvertActivationDescriptorToAclActivationLayerInfo (const ActivationDescriptor &actDesc) |
arm_compute::ActivationLayerInfo | ConvertActivationDescriptorToAclActivationLayerInfo (const ActivationDescriptor *activationDescPtr) |
arm_compute::ActivationLayerInfo | ConvertAdditionalInfoToAclActivationLayerInfo (const QueueDescriptor &queueDescriptor) |
arm_compute::ComparisonOperation | ConvertComparisonOperationToAcl (const ComparisonDescriptor &descriptor) |
arm_compute::PoolingType | ConvertPoolingAlgorithmToAclPoolingType (PoolingAlgorithm poolingAlgorithm) |
arm_compute::DimensionRoundingType | ConvertOutputShapeRoundingToAclDimensionRoundingType (OutputShapeRounding rounding) |
arm_compute::NormType | ConvertNormalizationAlgorithmChannelToAclNormType (NormalizationAlgorithmChannel channelType) |
arm_compute::FullyConnectedLayerInfo | ConvertFullyConnectedDescriptorToAclFullyConnectedLayerInfo (const FullyConnectedDescriptor &fullyConnectedDesc, const ActivationDescriptor *activationDesc) |
arm_compute::FullyConnectedLayerInfo | ConvertFullyConnectedDescriptorToAclFullyConnectedLayerInfo (const FullyConnectedDescriptor &fullyConnectedDesc, arm_compute::ActivationLayerInfo activationLayerInfo) |
arm_compute::InterpolationPolicy | ConvertResizeMethodToAclInterpolationPolicy (ResizeMethod resizeMethod) |
template<typename T > | |
T | ComputeSoftmaxAclAxis (const SoftmaxDescriptor &softmaxDesc, const armnn::TensorInfo &tensor) |
std::set< unsigned int > | ComputeSplitAxis (const armnn::SplitterDescriptor &desc, const TensorShape &input) |
int | ComputeAclAxis (const int &armnnAxis, const armnn::TensorInfo &tensor) |
Function to convert ArmNN axis (left to right) to ACL axis (right to left) ranging from [-rank, rank) More... | |
unsigned int | ComputePositiveAxis (const int &axis, const armnn::TensorInfo &tensor) |
Function to convert axis to its positive equivalent value. More... | |
arm_compute::ReductionOperation | ConvertReductionOperationToAcl (const ReduceDescriptor &descriptor) |
armnn::Optional< armnn::DataType > | GetBiasTypeFromWeightsType (armnn::Optional< armnn::DataType > weightsType) |
template<typename F > | |
bool | CheckSupportRule (F rule, Optional< std::string &> reasonIfUnsupported, const char *reason) |
template<typename T > | |
bool | AllTypesAreEqualImpl (T) |
template<typename T , typename... Rest> | |
bool | AllTypesAreEqualImpl (T t1, T t2, Rest... rest) |
TensorShape | GetUnpaddedTensorStrides (const TensorInfo &tensorInfo) |
constexpr const char * | MockImportBackendId () |
constexpr const char * | MockBackendId () |
DataType | GetBiasDataType (DataType inputDataType) |
armnn::ConstTensor | PermuteTensor (const ConstTensorHandle *tensor, const PermutationVector &permutationVector, void *permuteBuffer) |
void | ReshapeWeightsForAcl (TensorInfo &weightInfo, DataLayout dataLayout) |
template<typename DataType > | |
ConstTensor | ReorderWeightChannelsForAcl (const ConstTensor &weightHandle, DataLayout dataLayout, void *permuteBuffer) |
TensorInfo | ConvertWeightTensorInfoFromArmnnToAcl (const TensorInfo &weightInfo, DataLayout dataLayout) |
armnn::ConstTensor | ConvertWeightTensorFromArmnnToAcl (const ConstTensorHandle *weightTensor, DataLayout dataLayout, void *permuteBuffer) |
int32_t | ConvertMaskToACLFormat (int32_t mask, int32_t numDim) |
template<typename CopyFunc > | |
void | CopyTensorContentsGeneric (const ITensorHandle *srcTensor, ITensorHandle *dstTensor, CopyFunc copy) |
template<typename SrcTensorHandleType , typename DstTensorHandleType , typename DescriptorType > | |
void | GatherTensorHandlePairs (const DescriptorType &descriptor, std::vector< std::pair< SrcTensorHandleType *, DstTensorHandleType *>> &tensorHandlePairs) |
std::string | LowerString (std::string value) |
TuningLevel | ParseTuningLevel (const BackendOptions::Var &value, TuningLevel defaultValue) |
bool | ParseBoolean (const BackendOptions::Var &value, bool defaultValue) |
std::string | ParseFile (const BackendOptions::Var &value, std::string defaultValue) |
void | ConfigureTuner (arm_compute::CLTuner &tuner, TuningLevel level) |
constexpr const char * | ClBackendId () |
flatbuffers::Offset< ClContext > | CreateClContext (flatbuffers::FlatBufferBuilder &_fbb, flatbuffers::Offset< flatbuffers::Vector< flatbuffers::Offset< armnn::Program >>> programs=0) |
flatbuffers::Offset< ClContext > | CreateClContextDirect (flatbuffers::FlatBufferBuilder &_fbb, const std::vector< flatbuffers::Offset< armnn::Program >> *programs=nullptr) |
flatbuffers::Offset< Program > | CreateProgram (flatbuffers::FlatBufferBuilder &_fbb, flatbuffers::Offset< flatbuffers::String > name=0, flatbuffers::Offset< flatbuffers::Vector< uint8_t >> binary=0) |
flatbuffers::Offset< Program > | CreateProgramDirect (flatbuffers::FlatBufferBuilder &_fbb, const char *name=nullptr, const std::vector< uint8_t > *binary=nullptr) |
const armnn::ClContext * | GetClContext (const void *buf) |
const armnn::ClContext * | GetSizePrefixedClContext (const void *buf) |
const char * | ClContextIdentifier () |
bool | ClContextBufferHasIdentifier (const void *buf) |
bool | VerifyClContextBuffer (flatbuffers::Verifier &verifier) |
bool | VerifySizePrefixedClContextBuffer (flatbuffers::Verifier &verifier) |
const char * | ClContextExtension () |
void | FinishClContextBuffer (flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset< armnn::ClContext > root) |
void | FinishSizePrefixedClContextBuffer (flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset< armnn::ClContext > root) |
constexpr const char * | ClImportTensorHandleFactoryId () |
constexpr const char * | ClTensorHandleFactoryId () |
arm_compute::Status | ClAbsWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClActivationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ActivationDescriptor &descriptor) |
arm_compute::Status | ClAdditionValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClArgMinMaxWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ArgMinMaxDescriptor &descriptor) |
arm_compute::Status | ClBatchNormalizationValidate (const TensorInfo &input, const TensorInfo &output, const TensorInfo &mean, const TensorInfo &var, const TensorInfo &beta, const TensorInfo &gamma, const BatchNormalizationDescriptor &desc, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClBatchToSpaceNdWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const BatchToSpaceNdDescriptor &desc) |
arm_compute::Status | ClCastValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClComparisonWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ComparisonDescriptor &descriptor) |
arm_compute::Status | ClConcatWorkloadValidate (const std::vector< const TensorInfo *> &inputs, const TensorInfo &output, const OriginsDescriptor &descriptor) |
arm_compute::Status | ClConstantWorkloadValidate (const TensorInfo &output) |
arm_compute::Status | ClConvertFp16ToFp32WorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClConvertFp32ToFp16WorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClConvolution2dWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const Convolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases, bool isFastMathEnabled, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClDepthToSpaceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const DepthToSpaceDescriptor &desc) |
arm_compute::Status | ClDepthwiseConvolutionWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const DepthwiseConvolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClDequantizeWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClDivisionWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClExpWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClFloorWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClFullyConnectedWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const TensorInfo &weights, const TensorInfo &biases, const FullyConnectedDescriptor &descriptor, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClGatherWorkloadValidate (const TensorInfo &input, const TensorInfo &indices, const TensorInfo &output, const GatherDescriptor &descriptor) |
arm_compute::Status | ClInstanceNormalizationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const InstanceNormalizationDescriptor &descriptor) |
arm_compute::Status | ClL2NormalizationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const L2NormalizationDescriptor &descriptor) |
arm_compute::Status | ClLogicalAndWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
arm_compute::Status | ClLogicalNotWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClLogicalOrWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
arm_compute::Status | ClLogSoftmaxWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const LogSoftmaxDescriptor &descriptor) |
arm_compute::Status | ClLstmFloatWorkloadValidate (const TensorInfo &input, const TensorInfo &outputStateIn, const TensorInfo &cellStateIn, const TensorInfo &scratchBuffer, const TensorInfo &outputStateOut, const TensorInfo &cellStateOut, const TensorInfo &output, const LstmDescriptor &descriptor, const LstmInputParamsInfo &paramsInfo) |
arm_compute::Status | ClMaximumWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
arm_compute::Status | ClMeanValidate (const TensorInfo &input, const TensorInfo &output, const MeanDescriptor &desc) |
arm_compute::Status | ClMinimumWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
arm_compute::Status | ClMultiplicationWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClNegWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClNormalizationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const NormalizationDescriptor &descriptor) |
arm_compute::Status | ClPadValidate (const TensorInfo &input, const TensorInfo &output, const PadDescriptor &descriptor) |
arm_compute::Status | ClPermuteWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const PermuteDescriptor &descriptor) |
arm_compute::Status | ClPooling2dWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const Pooling2dDescriptor &descriptor) |
arm_compute::Status | ClPreluWorkloadValidate (const TensorInfo &input, const TensorInfo &alpha, const TensorInfo &output) |
arm_compute::Status | ClQLstmWorkloadValidate (const TensorInfo &input, const TensorInfo &cellStateIn, const TensorInfo &outputStateIn, const TensorInfo &cellStateOut, const TensorInfo &outputStateOut, const TensorInfo &output, const QLstmDescriptor &descriptor, const LstmInputParamsInfo &paramsInfo) |
arm_compute::Status | ClQuantizedLstmWorkloadValidate (const TensorInfo &input, const TensorInfo &previousCellStateIn, const TensorInfo &previousOutputIn, const TensorInfo &cellStateOut, const TensorInfo &output, const QuantizedLstmInputParamsInfo &paramsInfo) |
arm_compute::Status | ClQuantizeWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClReduceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ReduceDescriptor &desc) |
arm_compute::Status | ClReshapeWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClResizeWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ResizeDescriptor &descriptor) |
arm_compute::Status | ClRsqrtWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClSliceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SliceDescriptor &descriptor) |
arm_compute::Status | ClSoftmaxWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SoftmaxDescriptor &descriptor) |
arm_compute::Status | ClSpaceToBatchNdWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SpaceToBatchNdDescriptor &descriptor) |
arm_compute::Status | ClSpaceToDepthWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SpaceToDepthDescriptor &desc) |
arm_compute::Status | ClSplitterWorkloadValidate (const TensorInfo &input, const std::vector< std::reference_wrapper< TensorInfo >> &outputs, unsigned int splitAxis) |
arm_compute::Status | ClStackWorkloadValidate (const std::vector< const TensorInfo *> &inputs, const TensorInfo &output, const StackDescriptor &descriptor) |
arm_compute::Status | ClStridedSliceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const StridedSliceDescriptor &descriptor) |
arm_compute::Status | ClSubtractionValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClTransposeConvolution2dWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const TransposeConvolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases) |
arm_compute::Status | ClTransposeWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const TransposeDescriptor &descriptor) |
template<typename T > | |
void | CopyArmComputeClTensorData (arm_compute::CLTensor &dstTensor, const T *srcData) |
auto | SetClStridedSliceData (const std::vector< int > &m_begin, const std::vector< int > &m_end, const std::vector< int > &m_stride) |
auto | SetClSliceData (const std::vector< unsigned int > &m_begin, const std::vector< unsigned int > &m_size) |
void | InitializeArmComputeClTensorData (arm_compute::CLTensor &clTensor, const ConstTensorHandle *handle) |
RuntimeException | WrapClError (const cl::Error &clError, const CheckLocation &location) |
void | RunClFunction (arm_compute::IFunction &function, const CheckLocation &location) |
template<typename DataType , typename PayloadType > | |
DataType * | GetOutputTensorData (unsigned int idx, const PayloadType &data) |
constexpr const char * | NeonBackendId () |
constexpr const char * | NeonTensorHandleFactoryId () |
arm_compute::Status | NeonAbsWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonActivationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ActivationDescriptor &descriptor) |
arm_compute::Status | NeonAdditionWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonArgMinMaxWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ArgMinMaxDescriptor &descriptor) |
arm_compute::Status | NeonBatchNormalizationValidate (const TensorInfo &input, const TensorInfo &output, const TensorInfo &mean, const TensorInfo &var, const TensorInfo &beta, const TensorInfo &gamma, const BatchNormalizationDescriptor &descriptor, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonBatchToSpaceNdWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const BatchToSpaceNdDescriptor &desc) |
arm_compute::Status | NeonCastValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonComparisonWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ComparisonDescriptor &descriptor) |
arm_compute::Status | NeonConcatWorkloadValidate (const std::vector< const TensorInfo *> &inputs, const TensorInfo &output, const OriginsDescriptor &descriptor) |
arm_compute::Status | NeonConstantWorkloadValidate (const TensorInfo &output) |
arm_compute::Status | NeonConvolution2dWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const Convolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases, bool isFastMathEnabled, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonDepthToSpaceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const DepthToSpaceDescriptor &descriptor) |
arm_compute::Status | NeonDepthwiseConvolutionWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const DepthwiseConvolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonDequantizeWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::DetectionPostProcessLayerInfo | MakeInfo (const DetectionPostProcessDescriptor &desc) |
arm_compute::Status | NeonDetectionPostProcessValidate (const TensorInfo &boxEncodings, const TensorInfo &scores, const TensorInfo &anchors, const TensorInfo &detectionBoxes, const TensorInfo &detectionClasses, const TensorInfo &detectionScores, const TensorInfo &numDetections, const DetectionPostProcessDescriptor &desc) |
arm_compute::Status | NeonDivisionWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonExpWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonFullyConnectedWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const TensorInfo &weights, const TensorInfo &biases, const FullyConnectedDescriptor &descriptor, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonGatherWorkloadValidate (const TensorInfo &input, const TensorInfo &indices, const TensorInfo &output, const GatherDescriptor &descriptor) |
arm_compute::Status | NeonInstanceNormalizationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const InstanceNormalizationDescriptor &descriptor) |
arm_compute::Status | NeonL2NormalizationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const L2NormalizationDescriptor &descriptor) |
arm_compute::Status | NeonLogicalAndWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
arm_compute::Status | NeonLogicalNotWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonLogicalOrWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
arm_compute::Status | NeonLogSoftmaxWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const LogSoftmaxDescriptor &descriptor) |
arm_compute::Status | NeonLstmFloatWorkloadValidate (const TensorInfo &input, const TensorInfo &outputStateIn, const TensorInfo &cellStateIn, const TensorInfo &scratchBuffer, const TensorInfo &outputStateOut, const TensorInfo &cellStateOut, const TensorInfo &output, const LstmDescriptor &descriptor, const LstmInputParamsInfo &paramsInfo) |
arm_compute::Status | NeonMaximumWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
arm_compute::Status | NeonMeanWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const MeanDescriptor &desc) |
arm_compute::Status | NeonMinimumWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
Validates the inputs and output. More... | |
arm_compute::Status | NeonMultiplicationWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonNegWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonNormalizationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const NormalizationDescriptor &descriptor) |
arm_compute::Status | NeonPadWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const PadDescriptor &descriptor) |
arm_compute::Status | NeonPermuteWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const PermuteDescriptor &descriptor) |
arm_compute::Status | NeonPooling2dWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const Pooling2dDescriptor &descriptor) |
arm_compute::Status | NeonPreluWorkloadValidate (const TensorInfo &input, const TensorInfo &alpha, const TensorInfo &output) |
arm_compute::Status | NeonQLstmWorkloadValidate (const TensorInfo &input, const TensorInfo &cellStateIn, const TensorInfo &outputStateIn, const TensorInfo &cellStateOut, const TensorInfo &outputStateOut, const TensorInfo &output, const QLstmDescriptor &descriptor, const LstmInputParamsInfo ¶msInfo) |
arm_compute::Status | NeonQuantizedLstmWorkloadValidate (const TensorInfo &input, const TensorInfo &cellStateIn, const TensorInfo &outputStateIn, const TensorInfo &cellStateOut, const TensorInfo &outputStateOut, const QuantizedLstmInputParamsInfo ¶msInfo) |
arm_compute::Status | NeonQuantizeWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonReduceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ReduceDescriptor &desc) |
arm_compute::Status | NeonReshapeWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonResizeWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ResizeDescriptor &descriptor) |
arm_compute::Status | NeonRsqrtWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonSliceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SliceDescriptor &descriptor) |
arm_compute::Status | NeonSoftmaxWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SoftmaxDescriptor &descriptor) |
arm_compute::Status | NeonSpaceToBatchNdWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SpaceToBatchNdDescriptor &descriptor) |
arm_compute::Status | NeonSpaceToDepthWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SpaceToDepthDescriptor &descriptor) |
arm_compute::Status | NeonSplitterWorkloadValidate (const TensorInfo &input, const std::vector< std::reference_wrapper< TensorInfo >> &outputs, unsigned int splitAxis) |
arm_compute::Status | NeonStackWorkloadValidate (const std::vector< const TensorInfo *> &inputs, const TensorInfo &output, const StackDescriptor &descriptor) |
arm_compute::Status | NeonStridedSliceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const StridedSliceDescriptor &descriptor) |
arm_compute::Status | NeonSubtractionWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonTransposeConvolution2dWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const TransposeConvolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases) |
arm_compute::Status | NeonTransposeWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const TransposeDescriptor &descriptor) |
template<typename T > | |
void | CopyArmComputeTensorData (arm_compute::Tensor &dstTensor, const T *srcData) |
void | InitializeArmComputeTensorData (arm_compute::Tensor &tensor, const ConstTensorHandle *handle) |
auto | SetNeonStridedSliceData (const std::vector< int > &m_begin, const std::vector< int > &m_end, const std::vector< int > &m_stride) |
auto | SetNeonSliceData (const std::vector< unsigned int > &m_begin, const std::vector< unsigned int > &m_size) |
constexpr const char * | RefBackendId () |
constexpr const char * | RefTensorHandleFactoryId () |
template<DataType ArmnnType> | |
bool | IsDataType (const WorkloadInfo &info) |
bool | IsSigned32 (const WorkloadInfo &info) |
bool | IsBFloat16 (const WorkloadInfo &info) |
bool | IsFloat16 (const WorkloadInfo &info) |
bool | IsQSymmS16 (const WorkloadInfo &info) |
bool | IsQSymmS8 (const WorkloadInfo &info) |
bool | IsQAsymmS8 (const WorkloadInfo &info) |
bool | IsQAsymmU8 (const WorkloadInfo &info) |
template<typename QueueDescriptorType > | |
constexpr bool | IsOperationQueueDescriptor (const QueueDescriptorType &) |
template<> | |
constexpr bool | IsOperationQueueDescriptor (const MemCopyQueueDescriptor &) |
template<> | |
constexpr bool | IsOperationQueueDescriptor (const ConstantQueueDescriptor &) |
template<> | |
constexpr bool | IsOperationQueueDescriptor (const PermuteQueueDescriptor &) |
float | Activation (float in, ActivationFunction function, float a, float b) |
void | Activation (Decoder< float > &in, Encoder< float > &out, const TensorInfo &tensorInfo, ActivationFunction function, float a, float b) |
template<typename OUT > | |
void | ArgMinMax (Decoder< float > &in, OUT *out, const TensorInfo &inputTensorInfo, const TensorInfo &outputTensorInfo, ArgMinMaxFunction function, int axis) |
template void | ArgMinMax (Decoder< float > &in, int32_t *out, const TensorInfo &inputTensorInfo, const TensorInfo &outputTensorInfo, ArgMinMaxFunction function, int axis) |
template void | ArgMinMax (Decoder< float > &in, int64_t *out, const TensorInfo &inputTensorInfo, const TensorInfo &outputTensorInfo, ArgMinMaxFunction function, int axis) |
void | BatchNormImpl (const BatchNormalizationQueueDescriptor &data, Decoder< float > &meanDecoder, Decoder< float > &varianceDecoder, Decoder< float > &betaDecoder, Decoder< float > &gammaDecoder, Decoder< float > &inputDecoder, Encoder< float > &outputEncoder) |
unsigned int | Offset (const TensorShape &shape, unsigned int batch, unsigned int height, unsigned int width, unsigned int channels, const DataLayoutIndexed &dataLayout) |
void | BatchToSpaceNd (const DataLayoutIndexed &dataLayout, const TensorInfo &inputTensorInfo, const TensorInfo &outputTensorInfo, const std::vector< unsigned int > &blockShape, const std::vector< std::pair< unsigned int, unsigned int >> &cropsData, Decoder< float > &inputDecoder, Encoder< float > &outputEncoder) |
void | Concatenate (const ConcatQueueDescriptor &data, std::vector< ITensorHandle *> inputs, std::vector< ITensorHandle *> outputs) |
void | Convolve (const TensorShape &rInputShape, Decoder< float > &rInputDecoder, const TensorShape &rOutputShape, Encoder< float > &rOutputEncoder, const TensorShape &rFilterShape, Decoder< float > &rFilterDecoder, bool biasEnabled, Decoder< float > *pBiasDecoder, DataLayout dataLayout, unsigned int paddingTop, unsigned int paddingLeft, unsigned int xStride, unsigned int yStride, unsigned int xDilation, unsigned int yDilation, bool depthwise) |
template<typename T > | |
void | Debug (const TensorInfo &inputInfo, const T *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template void | Debug< BFloat16 > (const TensorInfo &inputInfo, const BFloat16 *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template void | Debug< Half > (const TensorInfo &inputInfo, const Half *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template void | Debug< float > (const TensorInfo &inputInfo, const float *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template void | Debug< uint8_t > (const TensorInfo &inputInfo, const uint8_t *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template void | Debug< int8_t > (const TensorInfo &inputInfo, const int8_t *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template void | Debug< int16_t > (const TensorInfo &inputInfo, const int16_t *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template void | Debug< int32_t > (const TensorInfo &inputInfo, const int32_t *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template<typename T > | |
std::unique_ptr< Decoder< T > > | MakeDecoder (const TensorInfo &info, const void *data=nullptr) |
template<> | |
std::unique_ptr< Decoder< float > > | MakeDecoder (const TensorInfo &info, const void *data) |
template<> | |
std::unique_ptr< Decoder< bool > > | MakeDecoder (const TensorInfo &info, const void *data) |
template<> | |
std::unique_ptr< Decoder< int32_t > > | MakeDecoder (const TensorInfo &info, const void *data) |
void | DepthToSpace (const TensorInfo &inputInfo, const DepthToSpaceDescriptor &descriptor, const void *inputData, void *outputData, unsigned int dataTypeSize) |
void | Dequantize (Decoder< float > &inputDecoder, Encoder< float > &outputEncoder, const TensorInfo &inputInfo, const TensorInfo &outputInfo) |
std::vector< unsigned int > | GenerateRangeK (unsigned int k) |
void | TopKSort (unsigned int k, unsigned int *indices, const float *values, unsigned int numElement) |
float | IntersectionOverUnion (const float *boxI, const float *boxJ) |
std::vector< unsigned int > | NonMaxSuppression (unsigned int numBoxes, const std::vector< float > &boxCorners, const std::vector< float > &scores, float nmsScoreThreshold, unsigned int maxDetection, float nmsIouThreshold) |
void | AllocateOutputData (unsigned int numOutput, unsigned int numSelected, const std::vector< float > &boxCorners, const std::vector< unsigned int > &outputIndices, const std::vector< unsigned int > &selectedBoxes, const std::vector< unsigned int > &selectedClasses, const std::vector< float > &selectedScores, float *detectionBoxes, float *detectionScores, float *detectionClasses, float *numDetections) |
void | DetectionPostProcess (const TensorInfo &boxEncodingsInfo, const TensorInfo &scoresInfo, const TensorInfo &anchorsInfo, const TensorInfo &detectionBoxesInfo, const TensorInfo &detectionClassesInfo, const TensorInfo &detectionScoresInfo, const TensorInfo &numDetectionsInfo, const DetectionPostProcessDescriptor &desc, Decoder< float > &boxEncodings, Decoder< float > &scores, Decoder< float > &anchors, float *detectionBoxes, float *detectionClasses, float *detectionScores, float *numDetections) |
template<typename T > | |
std::unique_ptr< Encoder< T > > | MakeEncoder (const TensorInfo &info, void *data=nullptr) |
template<> | |
std::unique_ptr< Encoder< float > > | MakeEncoder (const TensorInfo &info, void *data) |
template<> | |
std::unique_ptr< Encoder< bool > > | MakeEncoder (const TensorInfo &info, void *data) |
template<> | |
std::unique_ptr< Encoder< int32_t > > | MakeEncoder (const TensorInfo &info, void *data) |
void | Fill (Encoder< float > &output, const TensorShape &desiredOutputShape, const float value) |
Creates a tensor and fills it with a scalar value. More... | |
void | FullyConnected (const TensorShape &rInputShape, Decoder< float > &rInputDecoder, const TensorShape &rOutputShape, Encoder< float > &rOutputEncoder, const TensorShape &rWeightsShape, Decoder< float > &rWeightDecoder, Decoder< float > &rBiasDecoder, bool biasEnabled, unsigned int K, bool transposeWeights) |
Performs a matrix multiplication and optionally adds a bias. More... | |
void | Gather (const TensorInfo ¶msInfo, const TensorInfo &indicesInfo, const TensorInfo &outputInfo, Decoder< float > ¶ms, const int32_t *indices, Encoder< float > &output, const int32_t axis) |
void | InstanceNorm (const InstanceNormalizationQueueDescriptor &data, const TensorInfo &inputInfo, Decoder< float > &inputDecoder, Encoder< float > &outputEncoder) |
void | LogSoftmax (Decoder< float > &input, Encoder< float > &output, const TensorInfo &inputInfo, const LogSoftmaxDescriptor &descriptor) |
void | Pad (const TensorInfo &inputInfo, const TensorInfo &outputInfo, const ITensorHandle *inputHandle, ITensorHandle *outputHandle, const PadQueueDescriptor &data) |
void | Pooling2d (Decoder< float > &rInputDecoder, Encoder< float > &rOutputEncoder, const TensorInfo &inputInfo, const TensorInfo &outputInfo, const Pooling2dDescriptor ¶ms) |
Computes the Pooling2d operation. More... | |
void | PreluImpl (const TensorInfo &inputInfo, const TensorInfo &alphaInfo, const TensorInfo &outputInfo, Decoder< float > &inputData, Decoder< float > &alphaData, Encoder< float > &outputData) |
bool | NextIndex (const unsigned int numDims, const armnn::TensorShape &dims, std::vector< unsigned int > ¤t) |
unsigned int | ReducedOutputOffset (const unsigned int numDims, const armnn::TensorShape &dims, std::vector< unsigned int > &index, const unsigned int numAxis, const std::vector< unsigned int > &axis) |
void | Reduce (const TensorInfo &inputInfo, const TensorInfo &outputInfo, Decoder< float > &input, Encoder< float > &output, const std::vector< uint32_t > axis, const ReduceOperation reduceOperation) |
void | FakeQuantization (const float *inputData, float *outputData, uint32_t numElements, float min, float max) |
const TensorInfo & | GetTensorInfo (const ITensorHandle *tensorHandle) |
float32 helpers More... | |
template<typename DataType , typename PayloadType > | |
const DataType * | GetInputTensorData (unsigned int idx, const PayloadType &data) |
template<typename DataType > | |
DataType * | GetOutputTensorData (ITensorHandle *tensorHandle) |
template<typename PayloadType > | |
const float * | GetInputTensorDataFloat (unsigned int idx, const PayloadType &data) |
template<typename PayloadType > | |
float * | GetOutputTensorDataFloat (unsigned int idx, const PayloadType &data) |
template<typename PayloadType > | |
const Half * | GetInputTensorDataHalf (unsigned int idx, const PayloadType &data) |
template<typename PayloadType > | |
Half * | GetOutputTensorDataHalf (unsigned int idx, const PayloadType &data) |
template<typename PayloadType > | |
const BFloat16 * | GetInputTensorDataBFloat16 (unsigned int idx, const PayloadType &data) |
template<typename PayloadType > | |
BFloat16 * | GetOutputTensorDataBFloat16 (unsigned int idx, const PayloadType &data) |
template<typename T > | |
std::vector< float > | Dequantize (const T *quant, const TensorInfo &info) |
u8 helpers More... | |
template<typename T > | |
void | Dequantize (const T *inputData, float *outputData, const TensorInfo &info) |
void | Quantize (uint8_t *quant, const float *dequant, const TensorInfo &info) |
void | Resize (Decoder< float > &in, const TensorInfo &inputInfo, Encoder< float > &out, const TensorInfo &outputInfo, DataLayoutIndexed dataLayout, armnn::ResizeMethod resizeMethod, bool alignCorners, bool halfPixelCenters) |
void | Slice (const TensorInfo &inputInfo, const SliceDescriptor &descriptor, const void *inputData, void *outputData, unsigned int dataTypeSize) |
void | Softmax (Decoder< float > &in, Encoder< float > &out, const TensorInfo &inputTensorInfo, float beta, int axis) |
Computes the softmax function on some inputs, into outputs, with a shape given by tensorInfo. More... | |
unsigned int | GetOffset (const TensorShape &shape, unsigned int b, unsigned int h, unsigned int w, unsigned int c, const DataLayoutIndexed &dataLayout) |
void | SpaceToBatchNd (const TensorInfo &inputInfo, const TensorInfo &outputInfo, const SpaceToBatchNdDescriptor ¶ms, Decoder< float > &inputData, Encoder< float > &outputData) |
void | SpaceToDepth (const TensorInfo &inputInfo, const TensorInfo &outputInfo, const SpaceToDepthDescriptor ¶ms, Decoder< float > &inputData, Encoder< float > &outputData) |
void | Split (const SplitterQueueDescriptor &data, std::vector< ITensorHandle *> inputs, std::vector< ITensorHandle *> outputs) |
template<typename DataType > | |
void | Splitter (const SplitterQueueDescriptor &data, std::vector< ITensorHandle *> inputs, std::vector< ITensorHandle *> outputs) |
void | Stack (const StackQueueDescriptor &data, std::vector< std::unique_ptr< Decoder< float >>> &inputs, Encoder< float > &output, const TensorInfo &inputInfo, const TensorInfo &outputInfo) |
void | StridedSlice (const TensorInfo &inputInfo, const StridedSliceDescriptor ¶ms, const void *inputData, void *outputData, unsigned int dataTypeSize) |
void | TransposeConvolution2dImpl (const TransposeConvolution2dDescriptor &descriptor, const TensorShape &inputShape, Decoder< float > &inputDecoder, const TensorShape &outputShape, Encoder< float > &outputEncoder, const TensorShape &weightsShape, Decoder< float > &weightsDecoder, Decoder< float > *biasesDecoder) |
std::istream & | operator>> (std::istream &in, armnn::Compute &compute) |
std::istream & | operator>> (std::istream &in, armnn::BackendId &backend) |
Variables | |
constexpr unsigned int | MaxNumOfTensorDimensions = 5U |
constexpr unsigned int | LOWEST_CAPTURE_PERIOD = 10000u |
The lowest performance data capture interval we support is 10 milliseconds. More... | |
constexpr unsigned int | EXPIRE_RATE = 3U |
Variable to control expire rate of priority queue. More... | |
constexpr std::size_t | g_ProfilingEventCountHint = 1024 |
constexpr bool | g_WriteProfilingEventSequence = true |
constexpr bool | g_AggregateProfilingEventsByInference = true |
constexpr bool | g_WriteReportToStdOutOnProfilerDestruction = false |
thread_local IProfiler * | tl_Profiler = nullptr |
const std::set< armnn::BackendCapability > | gpuAccCapabilities |
const std::set< armnn::BackendCapability > | cpuAccCapabilities |
const std::set< armnn::LayerType > | paddingRequiredLayers |
const std::set< armnn::BackendCapability > | cpuRefCapabilities |
Copyright (c) 2021 ARM Limited and Contributors. All rights reserved.
SPDX-License-Identifier: MIT
Optional is a drop-in replacement for std::optional until we migrate to C++17.
Only a subset of the optional features is implemented: the subset we intend to use in Arm NN. There are two distinct implementations here: 1. for normal constructible/destructible types; 2. for reference types.
using AdditionalInfoObjectPtr = std::shared_ptr<void> |
using BackendIdSet = std::unordered_set<BackendId> |
Definition at line 191 of file BackendId.hpp.
using BackendIdVector = std::vector<BackendId> |
Definition at line 190 of file BackendId.hpp.
using BackendsMap = std::map<BackendId, std::unique_ptr<class IBackendInternal> > |
Definition at line 317 of file Network.hpp.
using BaseFloat32ComparisonWorkload = MultiTypedWorkload<QueueDescriptor, armnn::DataType::Float32, armnn::DataType::Boolean> |
Definition at line 187 of file Workload.hpp.
using BaseUint8ComparisonWorkload = MultiTypedWorkload<QueueDescriptor, armnn::DataType::QAsymmU8, armnn::DataType::Boolean> |
Definition at line 192 of file Workload.hpp.
using BFloat16ToFloat32Workload = MultiTypedWorkload<QueueDescriptor, armnn::DataType::BFloat16, armnn::DataType::Float32> |
Definition at line 197 of file Workload.hpp.
using BindingPointInfo = std::pair<armnn::LayerBindingId, armnn::TensorInfo> |
Definition at line 261 of file Tensor.hpp.
Definition at line 182 of file Workload.hpp.
using CompiledBlobDeleter = std::function<void(const void*)> |
Definition at line 17 of file ISubgraphViewConverter.hpp.
using CompiledBlobPtr = std::unique_ptr<void, CompiledBlobDeleter> |
Definition at line 18 of file ISubgraphViewConverter.hpp.
using ConcatDescriptor = OriginsDescriptor |
Definition at line 52 of file DescriptorsFwd.hpp.
using Coordinates = std::array<unsigned int, MaxNumOfTensorDimensions> |
Definition at line 14 of file InternalTypes.hpp.
using DebugCallbackFunction = std::function<void(LayerGuid guid, unsigned int slotIndex, ITensorHandle* tensorHandle)> |
Define the type of callback for the Debug layer to call.
guid | - guid of layer connected to the input of the Debug layer |
slotIndex | - index of the output slot connected to the input of the Debug layer |
tensorHandle | - TensorHandle for the input tensor to the Debug layer |
A DepthToSpaceDescriptor for the DepthToSpaceLayer.
Definition at line 916 of file Descriptors.hpp.
using Dimensions = std::array<unsigned int, MaxNumOfTensorDimensions> |
Definition at line 15 of file InternalTypes.hpp.
using DynamicBackendPtr = std::unique_ptr<DynamicBackend> |
Definition at line 52 of file DynamicBackend.hpp.
Definition at line 18 of file ClImportTensorHandleFactory.cpp.
using Float16ToFloat32Workload = MultiTypedWorkload<QueueDescriptor, armnn::DataType::Float16, armnn::DataType::Float32> |
Definition at line 207 of file Workload.hpp.
using Float32ToBFloat16Workload = MultiTypedWorkload<QueueDescriptor, armnn::DataType::Float32, armnn::DataType::BFloat16> |
Definition at line 202 of file Workload.hpp.
using Float32ToFloat16Workload = MultiTypedWorkload<QueueDescriptor, armnn::DataType::Float32, armnn::DataType::Float16> |
Definition at line 212 of file Workload.hpp.
Definition at line 173 of file Workload.hpp.
using FloatWorkload = TypedWorkload<QueueDescriptor, armnn::DataType::Float16, armnn::DataType::Float32> |
Definition at line 170 of file Workload.hpp.
using HighResolutionClock = std::chrono::high_resolution_clock::time_point |
using IBackendContextUniquePtr = std::unique_ptr<IBackendContext> |
Definition at line 31 of file IBackendContext.hpp.
typedef std::unique_ptr< IBackendInternal > IBackendInternalUniquePtr |
Definition at line 23 of file BackendRegistry.hpp.
using IBackendSharedPtr = std::shared_ptr<IBackend> |
using IBackendUniquePtr = std::unique_ptr<IBackend, void(*)(IBackend* backend)> |
using IGpuAccTunedParametersPtr = std::shared_ptr<IGpuAccTunedParameters> |
The following API is replaced by the backend options API.
Definition at line 237 of file IRuntime.hpp.
using ILayerSupportSharedPtr = std::shared_ptr<ILayerSupport> |
Definition at line 425 of file ILayerSupport.hpp.
using IMemoryManagerUniquePtr = std::unique_ptr<IMemoryManager> |
Definition at line 24 of file IMemoryManager.hpp.
using INetworkPtr = std::unique_ptr<INetwork, void(*)(INetwork* network)> |
Definition at line 173 of file INetwork.hpp.
using InferenceTimingPair = std::pair<HighResolutionClock, HighResolutionClock> |
Definition at line 81 of file WorkloadData.hpp.
using InputTensors = std::vector<std::pair<LayerBindingId, class ConstTensor> > |
Definition at line 340 of file Tensor.hpp.
Deprecated typedef; use ConstPassthroughTensorHandle instead |
Definition at line 105 of file SubgraphView.hpp.
Definition at line 179 of file Workload.hpp.
using IOptimizedNetworkPtr = std::unique_ptr<IOptimizedNetwork, void(*)(IOptimizedNetwork* network)> |
Definition at line 174 of file INetwork.hpp.
Definition at line 28 of file Runtime.hpp.
using IRuntimePtr = std::unique_ptr<IRuntime, void(*)(IRuntime* runtime)> |
Definition at line 28 of file IRuntime.hpp.
using LayerBindingId = int |
using LayerGuid = profiling::ProfilingGuid |
using LayerPriority = unsigned int |
using LayerTypeOf = typename LayerTypeOfImpl<Type>::Type |
Definition at line 84 of file LayersFwd.hpp.
using LoadedNetworks = std::unordered_map<NetworkId, std::unique_ptr<LoadedNetwork> > |
Definition at line 27 of file Runtime.hpp.
A LogSoftmaxDescriptor for the LogSoftmaxLayer.
Definition at line 158 of file Descriptors.hpp.
using MemorySourceFlags = unsigned int |
Definition at line 15 of file MemorySources.hpp.
using MergerDescriptor = OriginsDescriptor |
MergerDescriptor is deprecated, use ConcatDescriptor instead.
Definition at line 56 of file DescriptorsFwd.hpp.
Definition at line 139 of file WorkloadData.hpp.
using ModelOptions = std::vector<BackendOptions> |
Definition at line 17 of file BackendOptions.hpp.
typedef int NetworkId |
Definition at line 22 of file IRuntime.hpp.
using NetworkImplPtr = std::unique_ptr<NetworkImpl, void(*)(NetworkImpl* network)> |
Definition at line 28 of file Network.hpp.
using NetworkOptions = std::vector<BackendOptions> |
Definition at line 15 of file BackendOptions.hpp.
Definition at line 82 of file WorkloadData.hpp.
using OutputTensors = std::vector<std::pair<LayerBindingId, class Tensor> > |
Definition at line 341 of file Tensor.hpp.
using ParameterStringifyFunction = std::function<void(const std::string& name, const std::string& value)> |
Definition at line 14 of file SerializeLayerParameters.hpp.
using PreCompiledObjectDeleter = std::function<void(const void*)> |
Definition at line 19 of file PreCompiledLayer.hpp.
using PreCompiledObjectPtr = std::unique_ptr<void, PreCompiledObjectDeleter> |
Definition at line 20 of file PreCompiledLayer.hpp.
using RefAdditionWorkload = RefElementwiseWorkload<std::plus<DataType>, AdditionQueueDescriptor, StringMapping::RefAdditionWorkload_Execute> |
Definition at line 40 of file RefElementwiseWorkload.hpp.
Definition at line 42 of file RefDebugWorkload.hpp.
Definition at line 43 of file RefDebugWorkload.hpp.
Definition at line 44 of file RefDebugWorkload.hpp.
Definition at line 46 of file RefDebugWorkload.hpp.
Definition at line 45 of file RefDebugWorkload.hpp.
Definition at line 47 of file RefDebugWorkload.hpp.
Definition at line 48 of file RefDebugWorkload.hpp.
Definition at line 49 of file RefDebugWorkload.hpp.
using RefDivisionWorkload = RefElementwiseWorkload<std::divides<DataType>, DivisionQueueDescriptor, StringMapping::RefDivisionWorkload_Execute> |
Definition at line 58 of file RefElementwiseWorkload.hpp.
using RefMaximumWorkload = RefElementwiseWorkload<armnn::maximum<DataType>, MaximumQueueDescriptor, StringMapping::RefMaximumWorkload_Execute> |
Definition at line 64 of file RefElementwiseWorkload.hpp.
using RefMinimumWorkload = RefElementwiseWorkload<armnn::minimum<DataType>, MinimumQueueDescriptor, StringMapping::RefMinimumWorkload_Execute> |
Definition at line 70 of file RefElementwiseWorkload.hpp.
using RefMultiplicationWorkload = RefElementwiseWorkload<std::multiplies<DataType>, MultiplicationQueueDescriptor, StringMapping::RefMultiplicationWorkload_Execute> |
Definition at line 52 of file RefElementwiseWorkload.hpp.
Definition at line 33 of file RefPermuteWorkload.hpp.
Definition at line 34 of file RefPermuteWorkload.hpp.
Definition at line 35 of file RefPermuteWorkload.hpp.
Definition at line 37 of file RefPermuteWorkload.hpp.
Definition at line 36 of file RefPermuteWorkload.hpp.
Definition at line 38 of file RefPermuteWorkload.hpp.
using RefSubtractionWorkload = RefElementwiseWorkload<std::minus<DataType>, SubtractionQueueDescriptor, StringMapping::RefSubtractionWorkload_Execute> |
Definition at line 46 of file RefElementwiseWorkload.hpp.
Definition at line 33 of file RefTransposeWorkload.hpp.
Definition at line 34 of file RefTransposeWorkload.hpp.
Definition at line 35 of file RefTransposeWorkload.hpp.
Definition at line 37 of file RefTransposeWorkload.hpp.
Definition at line 36 of file RefTransposeWorkload.hpp.
Definition at line 38 of file RefTransposeWorkload.hpp.
using ResolveType = typename ResolveTypeImpl<DT>::Type |
Definition at line 79 of file ResolveType.hpp.
using SplitterDescriptor = ViewsDescriptor |
Definition at line 57 of file DescriptorsFwd.hpp.
using supported = ISubgraphViewConverter |
Definition at line 31 of file ISubgraphViewConverter.hpp.
using Uint8ToFloat32Workload = MultiTypedWorkload<QueueDescriptor, armnn::DataType::QAsymmU8, armnn::DataType::Float32> |
Definition at line 217 of file Workload.hpp.
Definition at line 176 of file Workload.hpp.
using WorkloadQueue = std::vector< std::unique_ptr<IWorkload> > |
Definition at line 13 of file ExecutionFrame.hpp.
BackendCapability class.
Enumerator | |
---|---|
NonConstWeights | Constant weights can be accessed through the descriptors; non-const weights, on the other hand, can be accessed through inputs. |
AsyncExecution | Asynchronous Execution. |
Definition at line 220 of file Types.hpp.
Capability class used within the GetCapabilities function so that only capabilities within the requested scope are calculated.
Enumerator | |
---|---|
PaddingRequired | |
FallbackImportDisabled | |
CapabilityClassMax |
Definition at line 20 of file ITensorHandleFactory.hpp.
The Compute enum is deprecated and is being replaced by BackendId.
Enumerator | |
---|---|
Undefined | |
CpuRef | CPU Execution: Reference C++ kernels. |
CpuAcc | CPU Execution: NEON: ArmCompute. |
GpuAcc | GPU Execution: OpenCL: ArmCompute. |
Definition at line 21 of file BackendId.hpp.
Enumerator | |
---|---|
Float16 | |
Float32 | |
QAsymmU8 | |
Signed32 | |
Boolean | |
QSymmS16 | |
QuantizedSymm8PerAxis | |
QSymmS8 | |
QAsymmS8 | |
BFloat16 | |
Signed64 | |
QuantisedAsymm8 | |
QuantisedSymm16 |
Definition at line 36 of file Types.hpp.
Definition at line 100 of file ITensorHandleFactory.hpp.
Enumerator | |
---|---|
Measurement | |
Event |
Definition at line 18 of file JsonPrinter.hpp.
When adding a new layer, also adapt the LastLayer enum value in the enum class LayerType below.
Definition at line 455 of file Types.hpp.
Enumerator | |
---|---|
Trace | |
Debug | |
Info | |
Warning | |
Error | |
Fatal |
Definition at line 13 of file Utils.hpp.
The padding method modifies the output of pooling layers.
In both supported methods, the padding values are ignored (they are not even zeroes, which would make a difference when max pooling a tensor with negative values). The difference between IgnoreValue and Exclude is that the former counts the padding fields in the divisor of Average and L2 pooling, while Exclude does not.
Enumerator | |
---|---|
IgnoreValue | The padding fields count, but are ignored. |
Exclude | The padding fields don't count and are ignored. |
Definition at line 152 of file Types.hpp.
The ShapeInferenceMethod modifies how the output shapes are treated.
When ValidateOnly is selected, the output shapes are inferred from the input parameters of the layer and any mismatch is reported. When InferAndValidate is selected, two actions are performed: (1) infer the output shape from the inputs and (2) validate the shapes as in ValidateOnly. This option has been added to support tensors whose rank or dimension sizes are not specified explicitly but can be calculated from the inputs.
Enumerator | |
---|---|
ValidateOnly | Validate all output shapes. |
InferAndValidate | Infer missing output shapes and validate all output shapes. |
Definition at line 188 of file Types.hpp.
Enumerator | |
---|---|
None | |
Rapid | |
Normal | |
Exhaustive |
Definition at line 70 of file ClBackendContext.cpp.
float Activation | ( | float | in, |
ActivationFunction | function, | ||
float | a, | ||
float | b | ||
) |
Definition at line 13 of file Activation.cpp.
References Abs, BoundedReLu, Elu, HardSwish, LeakyReLu, Linear, ReLu, Sigmoid, SoftReLu, Sqrt, Square, and TanH.
Referenced by Activation().
void Activation(Decoder<float>& in, Encoder<float>& out, const TensorInfo& tensorInfo, ActivationFunction function, float a, float b)
Definition at line 95 of file Activation.cpp.
References Activation(), Decoder< IType >::Get(), TensorInfo::GetNumElements(), and Encoder< IType >::Set().
void armnn::AllocateOutputData(unsigned int numOutput, unsigned int numSelected, const std::vector<float>& boxCorners, const std::vector<unsigned int>& outputIndices, const std::vector<unsigned int>& selectedBoxes, const std::vector<unsigned int>& selectedClasses, const std::vector<float>& selectedScores, float* detectionBoxes, float* detectionScores, float* detectionClasses, float* numDetections)
Definition at line 102 of file DetectionPostProcess.cpp.
References numeric_cast().
Referenced by DetectionPostProcess().
bool armnn::AllTypesAreEqualImpl(T)
Definition at line 59 of file LayerSupportRules.hpp.
Referenced by AllTypesAreEqualImpl(), and TypesAreEqual::TypesAreEqual().
bool armnn::AllTypesAreEqualImpl(T t1, T t2, Rest... rest)
Definition at line 65 of file LayerSupportRules.hpp.
References AllTypesAreEqualImpl().
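The two overloads form a classic recursive variadic-template pair: the two-plus-argument overload compares the first two values and recurses, and the single-argument overload terminates the recursion. A generic sketch of the same pattern (plain values stand in for the `armnn::DataType` values the real rule compares; `AllEqual` is a hypothetical name):

```cpp
#include <cassert>

// Base case: a single remaining value is trivially "all equal".
template <typename T>
bool AllEqual(T)
{
    return true;
}

// Recursive case: compare the first pair, then recurse on the tail.
template <typename T, typename... Rest>
bool AllEqual(T t1, T t2, Rest... rest)
{
    return (t1 == t2) && AllEqual(t2, rest...);
}
```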
void armnn::Append(Optimizer::Optimizations& optimizations, T&& optimization)
Definition at line 30 of file Optimizer.hpp.
Referenced by Append(), and MakeOptimizations().
void armnn::Append(Optimizer::Optimizations& optimizations, Front&& front, Others&&... others)
Definition at line 36 of file Optimizer.hpp.
References Append().
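Append uses the same head/tail recursion to flatten an arbitrary number of optimizations into one container. A generic sketch under stated assumptions (`AppendAll` is a hypothetical name, and a `std::vector` stands in for `Optimizer::Optimizations`):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Base case: append the last remaining item.
template <typename T>
void AppendAll(std::vector<T>& out, T&& item)
{
    out.push_back(std::move(item));
}

// Recursive case: append the front item, then recurse on the rest.
template <typename T, typename... Others>
void AppendAll(std::vector<T>& out, T&& front, Others&&... others)
{
    AppendAll(out, std::move(front));
    AppendAll(out, std::forward<Others>(others)...);
}
```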
OptimizationResult armnn::ApplyBackendOptimizations(OptimizedNetworkImpl* optNetObjPtr, BackendSettings& backendSettings, BackendsMap& backends, const ModelOptions& modelOptions, Optional<std::vector<std::string>&> errMessages)
Definition at line 1047 of file Network.cpp.
References ARMNN_ASSERT, AssignBackends(), SubgraphView::begin(), SubgraphView::end(), Layer::GetBackendId(), OptimizationViews::GetFailedSubgraphs(), OptimizedNetworkImpl::GetGraph(), OptimizationViews::GetSubstitutions(), Layer::GetType(), Input, OptimizationResult::m_Error, BackendSettings::m_SelectedBackends, Output, ReportWarning(), SubgraphViewSelector::SelectSubgraphs(), Graph::SubstituteSubgraph(), and OptimizationViews::Validate().
Referenced by Optimize().