Compute Library 24.02.1
namespace arm_compute
{
namespace experimental
{
namespace dynamic_fusion
{
GpuTarget GpuWorkloadContext::gpu_target() const
{
    return _impl->cl_compile_context()->get_gpu_target();
}

GpuLanguage GpuWorkloadContext::gpu_language() const
{
    return _impl->gpu_language();
}

const CLCompileContext *GpuWorkloadContext::cl_compile_context() const
{
    return _impl->cl_compile_context();
}

void GpuWorkloadContext::register_user_tensor(std::unique_ptr<TensorInfo> &&tensor_info)
{
    _impl->register_user_tensor(std::move(tensor_info));
}

GpuWorkloadContext::Impl::Impl(GpuLanguage gpu_language, CLCompileContext *cl_compile_ctx)
    : _gpu_language(gpu_language),
      _cl_compile_ctx(cl_compile_ctx),
      _next_tensor_id(1),
      _mem_map(),
      _managed_tensor_info()
{
}

const CLCompileContext *GpuWorkloadContext::Impl::cl_compile_context() const
{
    return _cl_compile_ctx;
}

void GpuWorkloadContext::Impl::register_user_tensor(std::unique_ptr<TensorInfo> &&tensor_info)
{
    const auto tensor_id = next_tensor_id();
    tensor_info->set_id(tensor_id);
    _mem_map[tensor_id] = MemoryDescriptor{MemoryType::User};
    _managed_tensor_info.emplace(tensor_id, std::move(tensor_info));
}

ITensorInfo *GpuWorkloadContext::Impl::create_virtual_tensor()
{
    auto       tensor_info = std::make_unique<TensorInfo>();
    const auto tensor_id   = -next_tensor_id();
    tensor_info->set_id(tensor_id);
    _mem_map[tensor_id] = MemoryDescriptor{MemoryType::Virtual};
    const auto inserted = _managed_tensor_info.emplace(tensor_id, std::move(tensor_info));
    return inserted.first->second.get();
}

ITensorInfo *GpuWorkloadContext::Impl::create_auxiliary_tensor(const ITensorInfo &itensor_info)
{
    auto       tensor_info = std::make_unique<TensorInfo>(itensor_info);
    const auto tensor_id   = next_tensor_id();
    tensor_info->set_id(tensor_id);
    _mem_map[tensor_id] = MemoryDescriptor{MemoryType::Auxiliary, AuxMemoryInfo{tensor_info->total_size()}};
    const auto inserted = _managed_tensor_info.emplace(tensor_id, std::move(tensor_info));
    return inserted.first->second.get();
}

ITensorInfo *GpuWorkloadContext::Impl::get_tensor_info(ITensorInfo::Id id)
{
    return _managed_tensor_info.at(id).get();
}

const ITensorInfo *GpuWorkloadContext::Impl::get_tensor_info(ITensorInfo::Id id) const
{
    return _managed_tensor_info.at(id).get();
}

ITensorInfo::Id GpuWorkloadContext::Impl::next_tensor_id()
{
    return _next_tensor_id++;
}
} // namespace dynamic_fusion
} // namespace experimental
} // namespace arm_compute
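The Impl methods above share one tensor-ID scheme: a single monotonically increasing counter, used as-is for user and auxiliary tensors and negated for virtual ones, with lookups going through std::map::at. That scheme can be sketched in isolation with only the standard library; IdRegistry and FakeTensorInfo below are invented names for this illustration, not arm_compute types.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <memory>
#include <string>

using TensorId = std::int32_t;

// Invented stand-in for TensorInfo: just enough state for the sketch.
struct FakeTensorInfo
{
    TensorId    id{};
    std::string kind{};
};

// Mirrors the ID scheme: one counter starting at 1; user tensors take the
// counter value as-is, virtual tensors take its negation, so the sign of
// an id is enough to tell the two kinds apart.
class IdRegistry
{
public:
    TensorId register_user()
    {
        return insert(next_id(), "user");
    }

    TensorId register_virtual()
    {
        // Virtual (no-alloc) tensors are keyed by a negative id.
        return insert(-next_id(), "virtual");
    }

    // Forwards to std::map::at, so an unknown id throws std::out_of_range
    // instead of returning a null pointer.
    const FakeTensorInfo &lookup(TensorId id) const
    {
        return *_tensors.at(id);
    }

private:
    TensorId next_id()
    {
        return _next_id++;
    }

    TensorId insert(TensorId id, std::string kind)
    {
        const auto inserted =
            _tensors.emplace(id, std::make_unique<FakeTensorInfo>(FakeTensorInfo{id, std::move(kind)}));
        return inserted.first->second->id;
    }

    TensorId                                            _next_id{1};
    std::map<TensorId, std::unique_ptr<FakeTensorInfo>> _tensors{};
};
```

The first registration yields id 1, the next (virtual) registration yields id -2, and asking the registry for an id it never handed out fails loudly via std::out_of_range, matching the std::map::at call in get_tensor_info.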
GpuWorkloadContext(CLCompileContext *cl_compile_context)
Constructor.
MemoryDescriptor
Descriptor of a workload tensor memory.
MemoryType::Auxiliary
Additional memory required by the workload tensor.
MemoryType::Virtual
Virtual type is of No-Alloc type.
const MemoryDescriptorMap & mem_map() const
Get memory descriptor registry.
ITensorInfo * create_auxiliary_tensor(const ITensorInfo &tensor_info)
Create an auxiliary (see MemoryType) tensor info and save it.
~GpuWorkloadContext()
Destructor.
Internal implementation of workload context.
void register_user_tensor(std::unique_ptr< TensorInfo > &&tensor_info)
Set a new ID and register the user tensor info.
const CLCompileContext * cl_compile_context() const
Get CL compile context.
Impl(GpuLanguage gpu_language, CLCompileContext *cl_compile_ctx)
Constructor.
ITensorInfo * create_virtual_tensor()
Create a virtual (see MemoryType) tensor info and save it.
#define ARM_COMPUTE_ERROR_ON(cond)
If the condition is true, an error message is printed and an exception is thrown.
Impl & implementation()
Get the internal implementation.
GpuWorkloadContext
Provide context necessary for the creation and configuration of a workload.
int32_t Id
An id that uniquely identifies an ITensorInfo within some domain.
const CLCompileContext * cl_compile_context() const
Get CLCompileContext. If the gpu language is not OpenCL, then return nullptr.
GpuWorkloadContext & operator=(const GpuWorkloadContext &config)=delete
Prevent instances of this class from being copied.
ITensorInfo * get_tensor_info(ITensorInfo::Id id)
Get tensor info created by this context, from id.
AuxMemoryInfo
Memory information for tensors with MemoryType::Auxiliary.
GPUTarget
Available GPU Targets.
MemoryType::User
Both User and Auxiliary types are of Alloc type.
GpuTarget gpu_target() const
Get GpuTarget of the context.
Copyright (c) 2017-2024 Arm Limited.
GpuLanguage gpu_language() const
Get GpuLanguage of the context.
GpuLanguage gpu_language() const
Get target GPU language.
TensorInfo
Store the tensor's metadata.
TensorInfo tensor_info
Associated tensor info.
std::map< ITensorInfo::Id, MemoryDescriptor > MemoryDescriptorMap
A map from tensor ids (ITensorInfo::Id) to their corresponding MemoryDescriptor.
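The typedef above fixes the shape of the memory-descriptor registry: tensor id in, descriptor out. A minimal self-contained sketch of that shape follows; the enum and struct here are simplified stand-ins for the arm_compute types, and is_alloc_type is an invented helper that encodes the Alloc/No-Alloc split described above.

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// Simplified stand-in for the MemoryType enum.
enum class MemoryType
{
    User,      // tensor memory allocated and owned by the user
    Auxiliary, // additional memory required by the workload tensor
    Virtual    // no-alloc type: never allocated
};

// Simplified stand-in for MemoryDescriptor.
struct MemoryDescriptor
{
    MemoryType memory_type{};
};

// A map from tensor ids to their corresponding MemoryDescriptor.
using MemoryDescriptorMap = std::map<std::int32_t, MemoryDescriptor>;

// Invented helper: both User and Auxiliary types are of Alloc type,
// while Virtual is of No-Alloc type.
bool is_alloc_type(MemoryType type)
{
    return type == MemoryType::User || type == MemoryType::Auxiliary;
}
```

Since std::map accepts any int32_t key, the same registry can hold the positive ids used for user/auxiliary tensors and the negative ids used for virtual ones.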