ArmNN
 24.08
SyncMemGenericWorkload Class Reference

#include <MemSyncWorkload.hpp>

Inheritance diagram for SyncMemGenericWorkload: (diagram omitted)
Collaboration diagram for SyncMemGenericWorkload: (diagram omitted)

Public Member Functions

 SyncMemGenericWorkload (const MemSyncQueueDescriptor &descriptor, const WorkloadInfo &info)
 
void Execute () const override
 
void ExecuteAsync (ExecutionData &executionData) override
 
- Public Member Functions inherited from BaseWorkload< MemSyncQueueDescriptor >
 BaseWorkload (const MemSyncQueueDescriptor &descriptor, const WorkloadInfo &info)
 
virtual const std::string & GetName () const override
 
void ExecuteAsync (ExecutionData &executionData) override
 
void PostAllocationConfigure () override
 
const MemSyncQueueDescriptor & GetData () const
 
arm::pipe::ProfilingGuid GetGuid () const final
 
virtual bool SupportsTensorHandleReplacement () const override
 
void ReplaceInputTensorHandle (ITensorHandle *tensorHandle, unsigned int slot) override
 
void ReplaceOutputTensorHandle (ITensorHandle *tensorHandle, unsigned int slot) override
 
- Public Member Functions inherited from IWorkload
virtual ~IWorkload ()
 
virtual arm::pipe::ProfilingGuid GetGuid () const =0
 
virtual bool SupportsTensorHandleReplacement () const =0
 
virtual const std::string & GetName () const =0
 
virtual void RegisterDebugCallback (const DebugCallbackFunction &)
 
virtual armnn::Optional< armnn::MemoryRequirements > GetMemoryRequirements ()
 

Additional Inherited Members

- Protected Attributes inherited from BaseWorkload< MemSyncQueueDescriptor >
MemSyncQueueDescriptor m_Data
 
const arm::pipe::ProfilingGuid m_Guid
 
const std::string m_Name
 

Detailed Description

A generic workload that synchronizes the memory backing its single input tensor: Execute() maps the input tensor handle with blocking enabled and then immediately unmaps it, forcing any outstanding memory transfers on that handle to complete without copying any data.

Definition at line 17 of file MemSyncWorkload.hpp.

Constructor & Destructor Documentation

◆ SyncMemGenericWorkload()

SyncMemGenericWorkload ( const MemSyncQueueDescriptor & descriptor,
const WorkloadInfo & info 
)

Definition at line 16 of file MemSyncWorkload.cpp.

SyncMemGenericWorkload::SyncMemGenericWorkload(const MemSyncQueueDescriptor& descriptor,
                                               const WorkloadInfo& info)
    : BaseWorkload<MemSyncQueueDescriptor>(descriptor, info)
{
    m_TensorHandle = descriptor.m_Inputs[0];
}

References armnn::info, and QueueDescriptor::m_Inputs.

Member Function Documentation

◆ Execute()

void Execute ( ) const
override virtual

Implements IWorkload.

Definition at line 23 of file MemSyncWorkload.cpp.

void SyncMemGenericWorkload::Execute() const
{
    ARMNN_SCOPED_PROFILING_EVENT(Compute::Undefined, "SyncMemGeneric_Execute");
    m_TensorHandle->Map(true);
    m_TensorHandle->Unmap();
}

References ARMNN_SCOPED_PROFILING_EVENT, ITensorHandle::Map(), armnn::Undefined, and ITensorHandle::Unmap().

◆ ExecuteAsync()

void ExecuteAsync ( ExecutionData & executionData)
override virtual

Implements IWorkload.

Definition at line 30 of file MemSyncWorkload.cpp.

void SyncMemGenericWorkload::ExecuteAsync(ExecutionData& executionData)
{
    ARMNN_SCOPED_PROFILING_EVENT(Compute::Undefined, "SyncMemGeneric_Execute_WorkingMemDescriptor");

    WorkingMemDescriptor* workingMemDescriptor = static_cast<WorkingMemDescriptor*>(executionData.m_Data);
    workingMemDescriptor->m_Inputs[0]->Map(true);
    workingMemDescriptor->m_Inputs[0]->Unmap();
}

References ARMNN_SCOPED_PROFILING_EVENT, ExecutionData::m_Data, WorkingMemDescriptor::m_Inputs, and armnn::Undefined.


The documentation for this class was generated from the following files:
MemSyncWorkload.hpp
MemSyncWorkload.cpp