ArmNN
 25.11
DepthwiseConvolution2dQueueDescriptor Struct Reference

Depthwise Convolution 2D layer workload data. More...

#include <WorkloadData.hpp>

Inheritance diagram for DepthwiseConvolution2dQueueDescriptor:
Collaboration diagram for DepthwiseConvolution2dQueueDescriptor:

Public Member Functions

void Validate (const WorkloadInfo &workloadInfo) const
Public Member Functions inherited from QueueDescriptorWithParameters< DepthwiseConvolution2dDescriptor >
virtual ~QueueDescriptorWithParameters ()=default
Public Member Functions inherited from QueueDescriptor
virtual ~QueueDescriptor ()=default
void ValidateTensorNumDimensions (const TensorInfo &tensor, std::string const &descName, unsigned int numDimensions, std::string const &tensorName) const
void ValidateTensorNumDimNumElem (const TensorInfo &tensorInfo, unsigned int numDimension, unsigned int numElements, std::string const &tensorName) const
void ValidateInputsOutputs (const std::string &descName, unsigned int numExpectedIn, unsigned int numExpectedOut) const
template<typename T>
const T * GetAdditionalInformation () const

Additional Inherited Members

Public Attributes inherited from QueueDescriptorWithParameters< DepthwiseConvolution2dDescriptor >
DepthwiseConvolution2dDescriptor m_Parameters
Public Attributes inherited from QueueDescriptor
std::vector< ITensorHandle * > m_Inputs
std::vector< ITensorHandle * > m_Outputs
void * m_AdditionalInfoObject
bool m_AllowExpandedDims = false
Protected Member Functions inherited from QueueDescriptorWithParameters< DepthwiseConvolution2dDescriptor >
 QueueDescriptorWithParameters ()=default
QueueDescriptorWithParameters & operator= (QueueDescriptorWithParameters const &)=default
Protected Member Functions inherited from QueueDescriptor
 QueueDescriptor ()
 QueueDescriptor (QueueDescriptor const &)=default
QueueDescriptor & operator= (QueueDescriptor const &)=default

Detailed Description

Depthwise Convolution 2D layer workload data.

Note
The weights are in the format [1, H, W, I*M], where I is the input channel size, M is the depthwise multiplier, and H and W are the height and width of the filter kernel. If per-channel quantization is applied, the weights are quantized along the last dimension/axis (I*M), which corresponds to the output channel size. In that case the weights tensor has I*M scales, one for each channel along the quantization axis. Be aware of this when reshaping the weights tensor: splitting the I*M axis, e.g. [1, H, W, I*M] --> [H, W, I, M], won't work without also rearranging the corresponding quantization scales. If no per-channel quantization is applied, reshaping the weights tensor causes no issues. There are preconfigured permutation functions available here.

Definition at line 234 of file WorkloadData.hpp.

Member Function Documentation

◆ Validate()

void Validate ( const WorkloadInfo & workloadInfo) const

Definition at line 1392 of file WorkloadData.cpp.

{
    const std::string descriptorName{"DepthwiseConvolution2dQueueDescriptor"};

    uint32_t numInputs = 2;
    if (m_Parameters.m_BiasEnabled)
    {
        numInputs = 3;
    }

    ValidateNumInputs(workloadInfo, descriptorName, numInputs);
    ValidateNumOutputs(workloadInfo, descriptorName, 1);

    const TensorInfo& inputTensorInfo  = workloadInfo.m_InputTensorInfos[0];
    const TensorInfo& outputTensorInfo = workloadInfo.m_OutputTensorInfos[0];

    ValidateTensorNumDimensions(inputTensorInfo, descriptorName, 4, "input");
    ValidateTensorNumDimensions(outputTensorInfo, descriptorName, 4, "output");

    const TensorInfo& weightTensorInfo = workloadInfo.m_InputTensorInfos[1];
    ValidateTensorNumDimensions(weightTensorInfo, descriptorName, 4, "weight");

    if (m_Parameters.m_DilationX < 1 || m_Parameters.m_DilationY < 1)
    {
        throw InvalidArgumentException(
            fmt::format("{}: dilationX (provided {}) and dilationY (provided {}) "
                        "cannot be smaller than 1.",
                        descriptorName, m_Parameters.m_DilationX, m_Parameters.m_DilationY));
    }

    if (m_Parameters.m_StrideX <= 0 || m_Parameters.m_StrideY <= 0)
    {
        throw InvalidArgumentException(
            fmt::format("{}: strideX (provided {}) and strideY (provided {}) "
                        "cannot be either negative or 0.",
                        descriptorName, m_Parameters.m_StrideX, m_Parameters.m_StrideY));
    }

    if (weightTensorInfo.GetShape()[0] != 1)
    {
        throw InvalidArgumentException(fmt::format(
            "{0}: The weight format in armnn is expected to be [1, H, W, Cout]."
            "But first dimension is not equal to 1. Provided weight shape: [{1}, {2}, {3}, {4}]",
            descriptorName,
            weightTensorInfo.GetShape()[0],
            weightTensorInfo.GetShape()[1],
            weightTensorInfo.GetShape()[2],
            weightTensorInfo.GetShape()[3]));
    }

    const unsigned int channelIndex = (m_Parameters.m_DataLayout == DataLayout::NCHW) ? 1 : 3;
    const unsigned int numWeightOutputChannelsRefFormat = weightTensorInfo.GetShape()[3];
    const unsigned int numWeightOutputChannelsAclFormat = weightTensorInfo.GetShape()[1];
    const unsigned int numOutputChannels = outputTensorInfo.GetShape()[channelIndex];

    // Weights format has two valid options: [1, H, W, Cout] (CpuRef) or [1, Cout, H, W] (CpuAcc/GpuAcc).
    bool validRefFormat = (numWeightOutputChannelsRefFormat == numOutputChannels);
    bool validAclFormat = (numWeightOutputChannelsAclFormat == numOutputChannels);

    if (!(validRefFormat || validAclFormat))
    {
        throw InvalidArgumentException(fmt::format(
            "{0}: The weight format in armnn is expected to be [1, H, W, Cout] (CpuRef) or [1, Cout, H, W] "
            "(CpuAcc/GpuAcc). But neither the 4th (CpuRef) or 2nd (CpuAcc/GpuAcc) dimension is equal to Cout."
            "Cout = {1} Provided weight shape: [{2}, {3}, {4}, {5}]",
            descriptorName,
            numOutputChannels,
            weightTensorInfo.GetShape()[0],
            weightTensorInfo.GetShape()[1],
            weightTensorInfo.GetShape()[2],
            weightTensorInfo.GetShape()[3]));
    }

    ValidateWeightDataType(inputTensorInfo, weightTensorInfo, descriptorName);

    Optional<TensorInfo> optionalBiasTensorInfo;
    if (m_Parameters.m_BiasEnabled)
    {
        optionalBiasTensorInfo = MakeOptional<TensorInfo>(workloadInfo.m_InputTensorInfos[2]);
        const TensorInfo& biasTensorInfo = optionalBiasTensorInfo.value();

        ValidateBiasTensorQuantization(biasTensorInfo, weightTensorInfo, descriptorName);
        ValidateTensorDataType(biasTensorInfo, GetBiasDataType(inputTensorInfo.GetDataType()), descriptorName, "bias");
    }
    ValidatePerAxisQuantization(inputTensorInfo,
                                outputTensorInfo,
                                weightTensorInfo,
                                optionalBiasTensorInfo,
                                descriptorName);

    std::vector<DataType> supportedTypes =
    {
        DataType::BFloat16,
        DataType::Float16,
        DataType::Float32,
        DataType::QAsymmS8,
        DataType::QAsymmU8,
        DataType::QSymmS16
    };

    ValidateDataTypes(inputTensorInfo, supportedTypes, descriptorName);
    ValidateTensorDataTypesMatch(inputTensorInfo, outputTensorInfo, descriptorName, "input", "output");
}

References armnn::BFloat16, armnn::Float16, armnn::Float32, armnn::GetBiasDataType(), TensorInfo::GetDataType(), TensorInfo::GetShape(), WorkloadInfo::m_InputTensorInfos, WorkloadInfo::m_OutputTensorInfos, QueueDescriptorWithParameters< DepthwiseConvolution2dDescriptor >::m_Parameters, armnn::MakeOptional(), armnn::NCHW, armnn::QAsymmS8, armnn::QAsymmU8, armnn::QSymmS16, QueueDescriptor::ValidateTensorNumDimensions(), and OptionalReferenceSwitch< IsReference, T >::value().


The documentation for this struct was generated from the following files: WorkloadData.hpp and WorkloadData.cpp.