ArmNN 24.08
QuantizeQueueDescriptor Struct Reference

#include <WorkloadData.hpp>

Inheritance diagram for QuantizeQueueDescriptor: [legend]
Collaboration diagram for QuantizeQueueDescriptor: [legend]

Public Member Functions

void Validate (const WorkloadInfo &workloadInfo) const
 
- Public Member Functions inherited from QueueDescriptor
virtual ~QueueDescriptor ()=default
 
void ValidateTensorNumDimensions (const TensorInfo &tensor, std::string const &descName, unsigned int numDimensions, std::string const &tensorName) const
 
void ValidateTensorNumDimNumElem (const TensorInfo &tensorInfo, unsigned int numDimension, unsigned int numElements, std::string const &tensorName) const
 
void ValidateInputsOutputs (const std::string &descName, unsigned int numExpectedIn, unsigned int numExpectedOut) const
 
template<typename T >
const T * GetAdditionalInformation () const
 

Additional Inherited Members

- Public Attributes inherited from QueueDescriptor
std::vector< ITensorHandle * > m_Inputs
 
std::vector< ITensorHandle * > m_Outputs
 
void * m_AdditionalInfoObject
 
bool m_AllowExpandedDims = false
 
- Protected Member Functions inherited from QueueDescriptor
 QueueDescriptor ()
 
 QueueDescriptor (QueueDescriptor const &)=default
 
QueueDescriptor & operator= (QueueDescriptor const &)=default
 

Detailed Description

Definition at line 299 of file WorkloadData.hpp.

Member Function Documentation

◆ Validate()

void Validate (const WorkloadInfo &workloadInfo) const

Definition at line 2499 of file WorkloadData.cpp.

{
    const std::string descriptorName{"QuantizeQueueDescriptor"};

    ValidateNumInputs(workloadInfo, descriptorName, 1);
    ValidateNumOutputs(workloadInfo, descriptorName, 1);

    const TensorInfo& inputTensorInfo  = workloadInfo.m_InputTensorInfos[0];
    const TensorInfo& outputTensorInfo = workloadInfo.m_OutputTensorInfos[0];

    // Supported input types, reconstructed from the reference list below.
    std::vector<DataType> supportedTypes =
    {
        DataType::BFloat16,
        DataType::Float16,
        DataType::Float32,
        DataType::QAsymmS8,
        DataType::QAsymmU8,
        DataType::QSymmS8,
        DataType::QSymmS16
    };

    ValidateDataTypes(inputTensorInfo, supportedTypes, descriptorName);

    if (!IsQuantizedType(outputTensorInfo.GetDataType()))
    {
        throw InvalidArgumentException(descriptorName + ": Output of quantized layer must be quantized type.");
    }
}

References armnn::BFloat16, armnn::Float16, armnn::Float32, TensorInfo::GetDataType(), armnn::IsQuantizedType(), WorkloadInfo::m_InputTensorInfos, WorkloadInfo::m_OutputTensorInfos, armnn::QAsymmS8, armnn::QAsymmU8, armnn::QSymmS16, and armnn::QSymmS8.
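The validation pattern above can be illustrated with a minimal, self-contained sketch. Note that `DataType`, `IsQuantizedType`, and `ValidateQuantize` below are simplified stand-ins written for this example, not the actual ArmNN API; the real implementation operates on `WorkloadInfo` and `TensorInfo` objects as shown in the source excerpt.

```cpp
#include <algorithm>
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical stand-in for armnn::DataType, restricted to the types
// listed in the Validate() reference list above.
enum class DataType { BFloat16, Float16, Float32, QAsymmS8, QAsymmU8, QSymmS8, QSymmS16 };

// Stand-in for armnn::IsQuantizedType(): true only for quantized types.
constexpr bool IsQuantizedType(DataType t)
{
    return t == DataType::QAsymmS8 || t == DataType::QAsymmU8 ||
           t == DataType::QSymmS8  || t == DataType::QSymmS16;
}

// Mirrors the shape of QuantizeQueueDescriptor::Validate: exactly one input
// of a supported type, and exactly one output that must be a quantized type.
void ValidateQuantize(const std::vector<DataType>& inputs,
                      const std::vector<DataType>& outputs)
{
    const std::string descriptorName{"QuantizeQueueDescriptor"};

    if (inputs.size() != 1 || outputs.size() != 1)
    {
        throw std::invalid_argument(descriptorName + ": Expected 1 input and 1 output.");
    }

    const std::vector<DataType> supportedTypes =
    {
        DataType::BFloat16, DataType::Float16, DataType::Float32,
        DataType::QAsymmS8, DataType::QAsymmU8, DataType::QSymmS8, DataType::QSymmS16
    };

    if (std::find(supportedTypes.begin(), supportedTypes.end(), inputs[0]) ==
        supportedTypes.end())
    {
        throw std::invalid_argument(descriptorName + ": Input type not supported.");
    }

    if (!IsQuantizedType(outputs[0]))
    {
        throw std::invalid_argument(
            descriptorName + ": Output of quantized layer must be quantized type.");
    }
}
```

For example, a Float32 input quantized to QAsymmU8 output passes validation, while a Float32 output causes the final check to throw, matching the behavior of the real `Validate()` shown above.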


The documentation for this struct was generated from the following files:
WorkloadData.hpp
WorkloadData.cpp