ArmNN
 24.08
NeonConvertFp32ToFp16Workload Class Reference

#include <NeonConvertFp32ToFp16Workload.hpp>

Inheritance diagram for NeonConvertFp32ToFp16Workload:
Collaboration diagram for NeonConvertFp32ToFp16Workload:

Public Member Functions

 NeonConvertFp32ToFp16Workload (const ConvertFp32ToFp16QueueDescriptor &descriptor, const WorkloadInfo &info)
 
virtual void Execute () const override
 
void ReplaceInputTensorHandle (ITensorHandle *tensorHandle, unsigned int slot) override
 
void ReplaceOutputTensorHandle (ITensorHandle *tensorHandle, unsigned int slot) override
 
- Public Member Functions inherited from MultiTypedWorkload< QueueDescriptor, InputDataType, OutputDataType >
 MultiTypedWorkload (const QueueDescriptor &descriptor, const WorkloadInfo &info)
 
- Public Member Functions inherited from BaseWorkload< QueueDescriptor >
 BaseWorkload (const QueueDescriptor &descriptor, const WorkloadInfo &info)
 
virtual const std::string & GetName () const override
 
void ExecuteAsync (ExecutionData &executionData) override
 
void PostAllocationConfigure () override
 
const QueueDescriptor & GetData () const
 
arm::pipe::ProfilingGuid GetGuid () const final
 
virtual bool SupportsTensorHandleReplacement () const override
 
- Public Member Functions inherited from IWorkload
virtual ~IWorkload ()
 
virtual void RegisterDebugCallback (const DebugCallbackFunction &)
 
virtual armnn::Optional< armnn::MemoryRequirements > GetMemoryRequirements ()
 

Additional Inherited Members

- Protected Attributes inherited from BaseWorkload< QueueDescriptor >
QueueDescriptor m_Data
 
const arm::pipe::ProfilingGuid m_Guid
 
const std::string m_Name
 

Detailed Description

Definition at line 19 of file NeonConvertFp32ToFp16Workload.hpp.

Constructor & Destructor Documentation

◆ NeonConvertFp32ToFp16Workload()

NeonConvertFp32ToFp16Workload ( const ConvertFp32ToFp16QueueDescriptor &  descriptor,
const WorkloadInfo &  info 
)

Definition at line 31 of file NeonConvertFp32ToFp16Workload.cpp.

NeonConvertFp32ToFp16Workload::NeonConvertFp32ToFp16Workload(const ConvertFp32ToFp16QueueDescriptor& descriptor,
                                                             const WorkloadInfo& info)
    : Float32ToFloat16Workload<ConvertFp32ToFp16QueueDescriptor>(descriptor, info)
{
    this->m_Data.ValidateInputsOutputs("NeonConvertFp32ToFp16Workload", 1, 1);

    arm_compute::ITensor& input  = PolymorphicDowncast<IAclTensorHandle*>(m_Data.m_Inputs[0])->GetTensor();
    arm_compute::ITensor& output = PolymorphicDowncast<IAclTensorHandle*>(m_Data.m_Outputs[0])->GetTensor();

    if (arm_compute::NECast::validate(input.info(), output.info(), g_AclConvertPolicy))
    {
        // Use NECast if supported (needs hardware support for FP16)
        m_Cast.reset(new arm_compute::NECast);
        m_Cast->configure(&input, &output, g_AclConvertPolicy);
    }
    else
    {
        // Else use software implementation from Half.hpp
        GatherTensorHandlePairs(descriptor, m_TensorHandlePairs);
    }
}

References armnn::GatherTensorHandlePairs(), armnn::info, BaseWorkload< QueueDescriptor >::m_Data, QueueDescriptor::m_Inputs, QueueDescriptor::m_Outputs, and QueueDescriptor::ValidateInputsOutputs().
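
The constructor expects exactly one input and one output handle in the queue descriptor (enforced by ValidateInputsOutputs) and then selects either arm_compute::NECast or the Half.hpp software path. The following is a minimal, hedged sketch of populating the descriptor and workload info for this workload; the include paths and the tensor handles are assumptions, and in practice the handles would be Neon-backed (IAclTensorHandle) handles created by the Neon backend's tensor handle factory rather than the placeholders used here.

#include <NeonConvertFp32ToFp16Workload.hpp>
#include <armnn/Tensor.hpp>

// Sketch only: inputHandle/outputHandle are placeholders for Neon-backed handles
// created and allocated elsewhere.
void CreateConvertWorkload(armnn::ITensorHandle* inputHandle,
                           armnn::ITensorHandle* outputHandle)
{
    armnn::ConvertFp32ToFp16QueueDescriptor descriptor;
    descriptor.m_Inputs.push_back(inputHandle);    // exactly one FP32 input
    descriptor.m_Outputs.push_back(outputHandle);  // exactly one FP16 output

    armnn::TensorShape shape({1, 16});
    armnn::WorkloadInfo info;
    info.m_InputTensorInfos.push_back(armnn::TensorInfo(shape, armnn::DataType::Float32));
    info.m_OutputTensorInfos.push_back(armnn::TensorInfo(shape, armnn::DataType::Float16));

    // ValidateInputsOutputs("NeonConvertFp32ToFp16Workload", 1, 1) runs inside the
    // constructor; NECast is configured only if arm_compute::NECast::validate succeeds.
    armnn::NeonConvertFp32ToFp16Workload workload(descriptor, info);
    workload.Execute();
}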

Member Function Documentation

◆ Execute()

void Execute ( ) const
override virtual

Implements IWorkload.

Definition at line 53 of file NeonConvertFp32ToFp16Workload.cpp.

void NeonConvertFp32ToFp16Workload::Execute() const
{
    ARMNN_SCOPED_PROFILING_EVENT_NEON_NAME_GUID("NeonConvertFp32ToFp16Workload_Execute");

    if (m_Cast)
    {
        // Use NECast if supported and initialised
        m_Cast->run();
    }
    else
    {
        // Else use software implementation using Half.hpp
        auto convertFunc = [](uint8_t* dst, const uint8_t* src, size_t size)
        {
            auto input = reinterpret_cast<const float*>(src);
            auto output = reinterpret_cast<Half*>(dst);
            size_t numElements = size/2; // 2 bytes per fp16
            armnnUtils::FloatingPointConverter::ConvertFloat32To16(input, numElements, output);
        };

        for (const auto& pair : m_TensorHandlePairs)
        {
            CopyTensorContentsGeneric(pair.first, pair.second, convertFunc);
        }
    }
}

References ARMNN_SCOPED_PROFILING_EVENT_NEON_NAME_GUID, FloatingPointConverter::ConvertFloat32To16(), and armnn::CopyTensorContentsGeneric().
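
When NECast is unavailable, Execute() routes every input/output handle pair through CopyTensorContentsGeneric with the lambda above, which delegates to armnnUtils::FloatingPointConverter::ConvertFloat32To16. The standalone sketch below shows that helper converting a small FP32 buffer into raw 16-bit storage; the include path is an assumption based on the armnnUtils headers.

#include <armnnUtils/FloatingPointConverter.hpp>
#include <cstdint>
#include <vector>

int main()
{
    std::vector<float>    src = {0.5f, -1.25f, 3.0f, 65504.0f}; // FP32 source values
    std::vector<uint16_t> dst(src.size());                      // 2 bytes of storage per FP16 value

    // Same helper the workload's software fallback calls from its convert lambda.
    armnnUtils::FloatingPointConverter::ConvertFloat32To16(src.data(), src.size(), dst.data());
    return 0;
}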

◆ ReplaceInputTensorHandle()

void ReplaceInputTensorHandle ( ITensorHandle *  tensorHandle,
unsigned int  slot 
)
override virtual

Reimplemented from BaseWorkload< QueueDescriptor >.

Definition at line 80 of file NeonConvertFp32ToFp16Workload.cpp.

void NeonConvertFp32ToFp16Workload::ReplaceInputTensorHandle(ITensorHandle* tensorHandle, unsigned int slot)
{
    ITensorHandle* backupHandle = this->m_Data.m_Inputs[slot];
    this->m_Data.m_Inputs[slot] = tensorHandle;
    try
    {
        Reconfigure();
    }
    catch (armnn::UnimplementedException& e)
    {
        // Cannot reconfigure, revert the slot back and throw the exception.
        this->m_Data.m_Inputs[slot] = backupHandle;
        throw e;
    }
}

References BaseWorkload< QueueDescriptor >::m_Data, and QueueDescriptor::m_Inputs.

◆ ReplaceOutputTensorHandle()

void ReplaceOutputTensorHandle ( ITensorHandle *  tensorHandle,
unsigned int  slot 
)
override virtual

Reimplemented from BaseWorkload< QueueDescriptor >.

Definition at line 97 of file NeonConvertFp32ToFp16Workload.cpp.

void NeonConvertFp32ToFp16Workload::ReplaceOutputTensorHandle(ITensorHandle* tensorHandle, unsigned int slot)
{
    ITensorHandle* backupHandle = this->m_Data.m_Inputs[slot];
    this->m_Data.m_Inputs[slot] = tensorHandle;
    try
    {
        Reconfigure();
    }
    catch (armnn::UnimplementedException& e)
    {
        // Cannot reconfigure, revert the slot back and throw the exception.
        this->m_Data.m_Inputs[slot] = backupHandle;
        throw e;
    }
}

References BaseWorkload< QueueDescriptor >::m_Data, and QueueDescriptor::m_Inputs.
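
Both replacement methods follow the same pattern: swap the handle in the chosen slot, attempt Reconfigure(), and roll back the slot before rethrowing if reconfiguration is not possible. A hedged usage sketch is shown below; workload and newHandle are assumed to exist already, and newHandle stands in for a Neon-backed ITensorHandle.

// Sketch: guard replacement with SupportsTensorHandleReplacement() and handle
// the exception that a failed Reconfigure() rethrows.
if (workload.SupportsTensorHandleReplacement())
{
    try
    {
        workload.ReplaceInputTensorHandle(newHandle, 0); // slot 0 is the only input
    }
    catch (const armnn::UnimplementedException&)
    {
        // Reconfigure() failed; the original handle was restored before the
        // exception was rethrown, so the workload remains usable as-is.
    }
}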


The documentation for this class was generated from the following files:
NeonConvertFp32ToFp16Workload.hpp
NeonConvertFp32ToFp16Workload.cpp