This is a simple example that shows how to use a dynamic backend. Dynamic backends can be compiled standalone against Arm NN and loaded by Arm NN at runtime. This way you can quickly integrate new backends without having to recompile Arm NN itself.
This example makes use of a very simplistic dynamic backend called 'SampleDynamic'. There is a separate guide, 'Dynamically loadable Backend', that tells you more about dynamic backends and how this particular backend was created, so you can create a dynamic backend yourself.
#include <armnn/ArmNN.hpp>
#include <iostream>

int main()
{
    using namespace armnn;

    // Construct the Arm NN network: two inputs feeding an addition layer
    NetworkId networkIdentifier;
    INetworkPtr myNetwork = INetwork::Create();

    IConnectableLayer* input0 = myNetwork->AddInputLayer(0);
    IConnectableLayer* input1 = myNetwork->AddInputLayer(1);
    IConnectableLayer* add    = myNetwork->AddAdditionLayer();
    IConnectableLayer* output = myNetwork->AddOutputLayer(0);

    input0->GetOutputSlot(0).Connect(add->GetInputSlot(0));
    input1->GetOutputSlot(0).Connect(add->GetInputSlot(1));
    add->GetOutputSlot(0).Connect(output->GetInputSlot(0));

    TensorInfo tensorInfo(TensorShape({2, 1}), DataType::Float32);
    input0->GetOutputSlot(0).SetTensorInfo(tensorInfo);
    input1->GetOutputSlot(0).SetTensorInfo(tensorInfo);
    add->GetOutputSlot(0).SetTensorInfo(tensorInfo);

    // Create the runtime and optimise the network for the 'SampleDynamic' backend
    IRuntime::CreationOptions options;
    IRuntimePtr run = IRuntime::Create(options);
    IOptimizedNetworkPtr optNet = Optimize(*myNetwork, {"SampleDynamic"}, run->GetDeviceSpec());
    if (!optNet)
    {
        std::cerr << "Error: Failed to optimise the input network." << std::endl;
        return 1;
    }
    run->LoadNetwork(networkIdentifier, std::move(optNet));

    // Input and output buffers
    std::vector<float> input0Data
    {
        5.0f, 3.0f
    };
    std::vector<float> input1Data
    {
        10.0f, 8.0f
    };
    std::vector<float> outputData(2);

    TensorInfo inputTensorInfo = run->GetInputTensorInfo(networkIdentifier, 0);
    inputTensorInfo.SetConstant(true);
    InputTensors inputTensors
    {
        {0, armnn::ConstTensor(inputTensorInfo, input0Data.data())},
        {1, armnn::ConstTensor(inputTensorInfo, input1Data.data())}
    };
    OutputTensors outputTensors
    {
        {0, armnn::Tensor(run->GetOutputTensorInfo(networkIdentifier, 0), outputData.data())}
    };

    // Execute the network and print the result
    run->EnqueueWorkload(networkIdentifier, inputTensors, outputTensors);
    std::cout << "Addition operator result is {" << outputData[0] << "," << outputData[1] << "}" << std::endl;
    return 0;
}
The Arm NN API used in this example:

- ConstTensor: a tensor defined by a TensorInfo (shape and data type) and an immutable backing store.
- Tensor: a tensor defined by a TensorInfo (shape and data type) and a mutable backing store.
- IConnectableLayer: interface for a layer that is connectable to other layers via InputSlots and OutputSlots.
- virtual const IInputSlot& GetInputSlot(unsigned int index) const = 0: get a const input slot handle by slot index.
- virtual const IOutputSlot& GetOutputSlot(unsigned int index) const = 0: get the const output slot handle by slot index.
- static INetworkPtr INetwork::Create(const NetworkOptions& networkOptions = {})
- virtual void IOutputSlot::SetTensorInfo(const TensorInfo& tensorInfo) = 0
- virtual int IOutputSlot::Connect(IInputSlot& destination) = 0
- static IRuntimePtr IRuntime::Create(const CreationOptions& options)
- void TensorInfo::SetConstant(const bool IsConstant = true): marks the data corresponding to this tensor info as constant.
- IOptimizedNetworkPtr Optimize(const INetwork& network, const std::vector<BackendId>& backendPreferences, const IDeviceSpec& deviceSpec, const OptimizerOptionsOpaque& options = OptimizerOptionsOpaque(), Optional<std::vector<std::string>&> messages = EmptyOptional()): create an optimized version of the network.
- INetworkPtr: std::unique_ptr<INetwork, void(*)(INetwork* network)>
- IRuntimePtr: std::unique_ptr<IRuntime, void(*)(IRuntime* runtime)>
- IOptimizedNetworkPtr: std::unique_ptr<IOptimizedNetwork, void(*)(IOptimizedNetwork* network)>
- InputTensors: std::vector<std::pair<LayerBindingId, class ConstTensor>>
- OutputTensors: std::vector<std::pair<LayerBindingId, class Tensor>>

Copyright (c) 2021 ARM Limited and Contributors.