ArmNN 24.08
ClImportTensorHandle Class Reference

#include <ClImportTensorHandle.hpp>

Inheritance diagram for ClImportTensorHandle: ITensorHandle → IClTensorHandle → ClImportTensorHandle.
Collaboration diagram for ClImportTensorHandle.

Public Member Functions

 ClImportTensorHandle (const TensorInfo &tensorInfo, MemorySourceFlags importFlags)
 
 ClImportTensorHandle (const TensorInfo &tensorInfo, DataLayout dataLayout, MemorySourceFlags importFlags)
 
arm_compute::CLTensor & GetTensor () override
 
arm_compute::CLTensor const & GetTensor () const override
 
virtual void Allocate () override
 Indicate to the memory manager that this resource is no longer active. More...
 
virtual void Manage () override
 Indicate to the memory manager that this resource is active. More...
 
virtual const void * Map (bool blocking=true) const override
 Map the tensor data for access. More...
 
virtual void Unmap () const override
 Unmap the tensor data. More...
 
virtual ITensorHandle * GetParent () const override
 Get the parent tensor if this is a subtensor. More...
 
virtual arm_compute::DataType GetDataType () const override
 
virtual void SetMemoryGroup (const std::shared_ptr< arm_compute::IMemoryGroup > &memoryGroup) override
 
TensorShape GetStrides () const override
 Get the strides for each dimension ordered from largest to smallest where the smallest value is the same as the size of a single element in the tensor. More...
 
TensorShape GetShape () const override
 Get the number of elements for each dimension ordered from slowest iterating dimension to fastest iterating dimension. More...
 
void SetImportFlags (MemorySourceFlags importFlags)
 
MemorySourceFlags GetImportFlags () const override
 Get flags describing supported import sources. More...
 
virtual bool Import (void *memory, MemorySource source) override
 Import externally allocated memory. More...
 
virtual bool CanBeImported (void *, MemorySource source) override
 Implementations must determine if this memory block can be imported. More...
 
- Public Member Functions inherited from ITensorHandle
virtual ~ITensorHandle ()
 
void * Map (bool blocking=true)
 Map the tensor data for access. More...
 
void Unmap ()
 Unmap the tensor data that was previously mapped with call to Map(). More...
 
virtual void Unimport ()
 Unimport externally allocated memory. More...
 
virtual std::shared_ptr< ITensorHandle > DecorateTensorHandle (const TensorInfo &tensorInfo)
 Returns a decorated version of this TensorHandle allowing us to override the TensorInfo for it. More...
 

Detailed Description

Definition at line 30 of file ClImportTensorHandle.hpp.

Constructor & Destructor Documentation

◆ ClImportTensorHandle() [1/2]

ClImportTensorHandle ( const TensorInfo &  tensorInfo,
MemorySourceFlags  importFlags 
)
inline

Definition at line 33 of file ClImportTensorHandle.hpp.

34  : m_ImportFlags(importFlags)
35  {
36  armnn::armcomputetensorutils::BuildArmComputeTensor(m_Tensor, tensorInfo);
37  }

◆ ClImportTensorHandle() [2/2]

ClImportTensorHandle ( const TensorInfo &  tensorInfo,
DataLayout  dataLayout,
MemorySourceFlags  importFlags 
)
inline

Definition at line 39 of file ClImportTensorHandle.hpp.

42  : m_ImportFlags(importFlags), m_Imported(false)
43  {
44  armnn::armcomputetensorutils::BuildArmComputeTensor(m_Tensor, tensorInfo, dataLayout);
45  }
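
Both constructors only populate the arm_compute tensor metadata via BuildArmComputeTensor; no CL memory is allocated or imported at construction time. A minimal construction sketch follows, assuming an illustrative Float32 shape and NHWC layout; the include paths are assumptions, since this is a backend-internal header and paths can differ between releases.

    #include "ClImportTensorHandle.hpp"      // backend-internal header; path is an assumption
    #include <armnn/Tensor.hpp>
    #include <armnn/Types.hpp>
    #include <armnn/MemorySources.hpp>

    using namespace armnn;

    void ConstructionSketch()
    {
        // 1x2x3x4 Float32 tensor; shape and layout chosen purely for illustration.
        TensorInfo info(TensorShape({1, 2, 3, 4}), DataType::Float32);

        // Only allow imports from host (malloc'd) memory.
        MemorySourceFlags flags = static_cast<MemorySourceFlags>(MemorySource::Malloc);

        // The second overload also forwards the data layout to BuildArmComputeTensor.
        ClImportTensorHandle handle(info, DataLayout::NHWC, flags);
    }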

Member Function Documentation

◆ Allocate()

virtual void Allocate ( )
inlineoverridevirtual

Indicate to the memory manager that this resource is no longer active.

This is used to compute overlapping lifetimes of resources.

Implements ITensorHandle.

Definition at line 49 of file ClImportTensorHandle.hpp.

49 {}

◆ CanBeImported()

virtual bool CanBeImported ( void *  memory,
MemorySource  source 
)
inlineoverridevirtual

Implementations must determine if this memory block can be imported.

This might be based on alignment or memory source type.

Returns
true if this memory can be imported.
false by default, cannot be imported.

Reimplemented from ITensorHandle.

Definition at line 187 of file ClImportTensorHandle.hpp.

188  {
189  if (m_ImportFlags & static_cast<MemorySourceFlags>(source))
190  {
191  if (source == MemorySource::Malloc)
192  {
193  // Returning true as ClImport() function will decide if memory can be imported or not
194  return true;
195  }
196  }
197  else
198  {
199  throw MemoryImportException("ClImportTensorHandle::Incorrect import flag");
200  }
201  return false;
202  }

References armnn::Malloc.
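
Note that when the requested source is not covered by the handle's import flags, CanBeImported() throws MemoryImportException rather than returning false, so it is normally called only for sources the handle was configured to accept. A short usage sketch, where handle and hostBuffer are assumptions for illustration:

    // Sketch only: 'handle' was built with MemorySource::Malloc in its import
    // flags and 'hostBuffer' points to suitably aligned host memory.
    if (handle.CanBeImported(hostBuffer, MemorySource::Malloc))
    {
        // For Malloc this returns true and defers the real check to Import().
        handle.Import(hostBuffer, MemorySource::Malloc);
    }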

◆ GetDataType()

virtual arm_compute::DataType GetDataType ( ) const
inlineoverridevirtual

Implements IClTensorHandle.

Definition at line 62 of file ClImportTensorHandle.hpp.

63  {
64  return m_Tensor.info()->data_type();
65  }

◆ GetImportFlags()

MemorySourceFlags GetImportFlags ( ) const
inlineoverridevirtual

Get flags describing supported import sources.

Reimplemented from ITensorHandle.

Definition at line 87 of file ClImportTensorHandle.hpp.

88  {
89  return m_ImportFlags;
90  }

◆ GetParent()

virtual ITensorHandle* GetParent ( ) const
inlineoverridevirtual

Get the parent tensor if this is a subtensor.

Returns
a pointer to the parent tensor. Otherwise nullptr if not a subtensor.

Implements ITensorHandle.

Definition at line 60 of file ClImportTensorHandle.hpp.

60 { return nullptr; }

◆ GetShape()

TensorShape GetShape ( ) const
inlineoverridevirtual

Get the number of elements for each dimension ordered from slowest iterating dimension to fastest iterating dimension.

Returns
a TensorShape filled with the number of elements for each dimension.

Implements ITensorHandle.

Definition at line 77 of file ClImportTensorHandle.hpp.

78  {
79  return armcomputetensorutils::GetShape(m_Tensor.info()->tensor_shape());
80  }

◆ GetStrides()

TensorShape GetStrides ( ) const
inlineoverridevirtual

Get the strides for each dimension ordered from largest to smallest where the smallest value is the same as the size of a single element in the tensor.

Returns
a TensorShape filled with the strides for each dimension

Implements ITensorHandle.

Definition at line 72 of file ClImportTensorHandle.hpp.

73  {
74  return armcomputetensorutils::GetStrides(m_Tensor.info()->strides_in_bytes());
75  }
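
GetShape() and GetStrides() use the same dimension ordering, so pairing them up gives the byte offset of an arbitrary element. A small sketch, where handle and the index values are illustrative assumptions:

    // Sketch only: compute the byte offset of one element from its indices.
    TensorShape shape   = handle.GetShape();     // elements per dimension
    TensorShape strides = handle.GetStrides();   // byte strides, same ordering

    unsigned int indices[] = {0, 1, 2, 3};       // one index per dimension (illustrative)
    unsigned int offsetInBytes = 0;
    for (unsigned int d = 0; d < shape.GetNumDimensions(); ++d)
    {
        offsetInBytes += indices[d] * strides[d];
    }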

◆ GetTensor() [1/2]

arm_compute::CLTensor const& GetTensor ( ) const
inlineoverridevirtual

Implements IClTensorHandle.

Definition at line 48 of file ClImportTensorHandle.hpp.

48 { return m_Tensor; }

◆ GetTensor() [2/2]

arm_compute::CLTensor& GetTensor ( )
inlineoverridevirtual

Implements IClTensorHandle.

Definition at line 47 of file ClImportTensorHandle.hpp.

47 { return m_Tensor; }

◆ Import()

virtual bool Import ( void *  memory,
MemorySource  source 
)
inlineoverridevirtual

Import externally allocated memory.

Parameters
    memory    base address of the memory being imported.
    source    source of the allocation for the memory being imported.
Returns
true on success or false on failure

Reimplemented from ITensorHandle.

Definition at line 92 of file ClImportTensorHandle.hpp.

93  {
94  if (m_ImportFlags & static_cast<MemorySourceFlags>(source))
95  {
96  if (source == MemorySource::Malloc)
97  {
98  const cl_import_properties_arm importProperties[] =
99  {
100  CL_IMPORT_TYPE_ARM,
101  CL_IMPORT_TYPE_HOST_ARM,
102  0
103  };
104  return ClImport(importProperties, memory);
105  }
106  if (source == MemorySource::DmaBuf)
107  {
108  const cl_import_properties_arm importProperties[] =
109  {
110  CL_IMPORT_TYPE_ARM,
111  CL_IMPORT_TYPE_DMA_BUF_ARM,
112  CL_IMPORT_DMA_BUF_DATA_CONSISTENCY_WITH_HOST_ARM,
113  CL_TRUE,
114  0
115  };
116 
117  return ClImport(importProperties, memory);
118 
119  }
120  if (source == MemorySource::DmaBufProtected)
121  {
122  const cl_import_properties_arm importProperties[] =
123  {
124  CL_IMPORT_TYPE_ARM,
125  CL_IMPORT_TYPE_DMA_BUF_ARM,
126  CL_IMPORT_TYPE_PROTECTED_ARM,
127  CL_TRUE,
128  0
129  };
130 
131  return ClImport(importProperties, memory, true);
132 
133  }
134  // Case for importing memory allocated by OpenCl externally directly into the tensor
135  else if (source == MemorySource::Gralloc)
136  {
137  // m_Tensor not yet Allocated
138  if (!m_Imported && !m_Tensor.buffer())
139  {
140  // Importing memory allocated by OpenCl into the tensor directly.
141  arm_compute::Status status =
142  m_Tensor.allocator()->import_memory(cl::Buffer(static_cast<cl_mem>(memory)));
143  m_Imported = bool(status);
144  if (!m_Imported)
145  {
146  throw MemoryImportException(status.error_description());
147  }
148  return m_Imported;
149  }
150 
151  // m_Tensor.buffer() initially allocated with Allocate().
152  else if (!m_Imported && m_Tensor.buffer())
153  {
154  throw MemoryImportException(
155  "ClImportTensorHandle::Import Attempting to import on an already allocated tensor");
156  }
157 
158  // m_Tensor.buffer() previously imported.
159  else if (m_Imported)
160  {
161  // Importing memory allocated by OpenCl into the tensor directly.
162  arm_compute::Status status =
163  m_Tensor.allocator()->import_memory(cl::Buffer(static_cast<cl_mem>(memory)));
164  m_Imported = bool(status);
165  if (!m_Imported)
166  {
167  throw MemoryImportException(status.error_description());
168  }
169  return m_Imported;
170  }
171  else
172  {
173  throw MemoryImportException("ClImportTensorHandle::Failed to Import Gralloc Memory");
174  }
175  }
176  else
177  {
178  throw MemoryImportException("ClImportTensorHandle::Import flag is not supported");
179  }
180  }
181  else
182  {
183  throw MemoryImportException("ClImportTensorHandle::Incorrect import flag");
184  }
185  }

References armnn::DmaBuf, armnn::DmaBufProtected, armnn::Gralloc, and armnn::Malloc.
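
For MemorySource::Malloc the private ClImport() helper maps the host pointer through the cl_arm_import_memory extension, which typically requires the allocation to be aligned (and sized) to the device page size. A hedged sketch of importing host memory; the 4096-byte page size is an assumption for illustration, and handle and info are assumed to come from the construction sketch earlier on this page.

    #include <cstdlib>   // std::aligned_alloc / std::free

    // Sketch only: the real alignment requirement comes from the OpenCL device
    // (CL_DEVICE_PAGE_SIZE); 4096 bytes is assumed here.
    constexpr size_t pageSize = 4096;
    size_t payloadBytes = info.GetNumBytes();
    size_t alignedBytes = ((payloadBytes + pageSize - 1) / pageSize) * pageSize;

    void* hostBuffer = std::aligned_alloc(pageSize, alignedBytes);

    // Throws MemoryImportException if Malloc is not in the import flags or the
    // underlying import call fails; returns true on success.
    bool imported = handle.Import(hostBuffer, MemorySource::Malloc);

    // The buffer must stay alive while the tensor uses it; release it with
    // std::free() only once the tensor no longer needs the memory.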

◆ Manage()

virtual void Manage ( )
inlineoverridevirtual

Indicate to the memory manager that this resource is active.

This is used to compute overlapping lifetimes of resources.

Implements ITensorHandle.

Definition at line 50 of file ClImportTensorHandle.hpp.

50 {}

◆ Map()

virtual const void* Map ( bool  blocking = true) const
inlineoverridevirtual

Map the tensor data for access.

Parameters
    blocking    hint to block the calling thread until all other accesses are complete. (backend dependent)
Returns
pointer to the first element of the mapped data.

Implements ITensorHandle.

Definition at line 52 of file ClImportTensorHandle.hpp.

53  {
54  IgnoreUnused(blocking);
55  return static_cast<const void*>(m_Tensor.buffer() + m_Tensor.info()->offset_first_element_in_bytes());
56  }

References armnn::IgnoreUnused().
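
Because the handle wraps imported (or externally provided) memory, Map() simply returns a pointer into the CLTensor buffer without copying. A small read-back sketch, assuming the tensor holds Float32 data and has already been imported:

    // Sketch only: 'handle' is assumed to hold imported Float32 data.
    const float* data = static_cast<const float*>(handle.Map(/*blocking=*/true));
    float firstElement = data[0];
    handle.Unmap();   // a no-op for this handle, but keeps calling code backend-agnostic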

◆ SetImportFlags()

void SetImportFlags ( MemorySourceFlags  importFlags)
inline

Definition at line 82 of file ClImportTensorHandle.hpp.

83  {
84  m_ImportFlags = importFlags;
85  }
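
MemorySourceFlags is a plain bitmask (an unsigned int), so several memory sources can be enabled on the same handle before Import() is called. A short sketch combining two sources; whether a given source actually works still depends on the device and on Import():

    // Sketch only: accept both host-pointer and dma_buf imports on this handle.
    MemorySourceFlags flags = static_cast<MemorySourceFlags>(MemorySource::Malloc) |
                              static_cast<MemorySourceFlags>(MemorySource::DmaBuf);
    handle.SetImportFlags(flags);

    MemorySourceFlags accepted = handle.GetImportFlags();   // query what was configured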

◆ SetMemoryGroup()

virtual void SetMemoryGroup ( const std::shared_ptr< arm_compute::IMemoryGroup > &  memoryGroup)
inlineoverridevirtual

Implements IClTensorHandle.

Definition at line 67 of file ClImportTensorHandle.hpp.

68  {
69  IgnoreUnused(memoryGroup);
70  }

References armnn::IgnoreUnused().

◆ Unmap()

virtual void Unmap ( ) const
inlineoverridevirtual

Unmap the tensor data.

Implements ITensorHandle.

Definition at line 58 of file ClImportTensorHandle.hpp.

58 {}

The documentation for this class was generated from the following file:
    ClImportTensorHandle.hpp