This is a very simple example that uses the Arm NN SDK API to create a neural network consisting of nothing but a single fully connected layer with a single weight value. It is about as minimal as an Arm NN program can get.
(You can find more complex examples using the TfLite Parser in samples/ObjectDetection and samples/SpeechRecognition.)
#include <armnn/INetwork.hpp>
#include <armnn/IRuntime.hpp>
#include <armnn/Utils.hpp>
#include <armnn/Descriptors.hpp>

#include <iostream>
#include <vector>

int main()
{
    using namespace armnn;

    float number;
    std::cout << "Please enter a number: " << std::endl;
    std::cin >> number;

    // Turn on logging so users can learn more about what is going on
    ConfigureLogging(true, false, LogSeverity::Warning);

    // Construct the Arm NN network: input -> fully connected (weight 1.0f) -> output
    INetworkPtr myNetwork = INetwork::Create();

    float weightsData[] = {1.0f}; // identity weight
    TensorInfo weightsInfo(TensorShape({1, 1}), DataType::Float32);
    weightsInfo.SetConstant();
    ConstTensor weights(weightsInfo, weightsData);

    // Constant layer that holds the weights data for the FullyConnected layer
    IConnectableLayer* const constantWeightsLayer = myNetwork->AddConstantLayer(weights,
                                                                               "const weights");
    FullyConnectedDescriptor fullyConnectedDesc;
    IConnectableLayer* const fullyConnectedLayer = myNetwork->AddFullyConnectedLayer(fullyConnectedDesc,
                                                                                    "fully connected");
    IConnectableLayer* const inputLayer  = myNetwork->AddInputLayer(0);
    IConnectableLayer* const outputLayer = myNetwork->AddOutputLayer(0);

    inputLayer->GetOutputSlot(0).Connect(fullyConnectedLayer->GetInputSlot(0));
    constantWeightsLayer->GetOutputSlot(0).Connect(fullyConnectedLayer->GetInputSlot(1));
    fullyConnectedLayer->GetOutputSlot(0).Connect(outputLayer->GetInputSlot(0));

    // Every output slot in the graph needs a TensorInfo
    TensorInfo inputTensorInfo(TensorShape({1, 1}), DataType::Float32);
    inputLayer->GetOutputSlot(0).SetTensorInfo(inputTensorInfo);
    fullyConnectedLayer->GetOutputSlot(0).SetTensorInfo(TensorInfo(TensorShape({1, 1}), DataType::Float32));
    constantWeightsLayer->GetOutputSlot(0).SetTensorInfo(weightsInfo);

    // Create the runtime and optimise the network for the reference CPU backend
    IRuntime::CreationOptions options;
    IRuntimePtr run = IRuntime::Create(options);
    IOptimizedNetworkPtr optNet = Optimize(*myNetwork, {Compute::CpuRef}, run->GetDeviceSpec());
    if (!optNet)
    {
        // Optimize can fail if the selected backends cannot support the model provided
        std::cerr << "Error: Failed to optimise the input network." << std::endl;
        return 1;
    }

    NetworkId networkIdentifier;
    run->LoadNetwork(networkIdentifier, std::move(optNet));

    // Bind user-provided buffers to the network's input and output
    std::vector<float> inputData{number};
    std::vector<float> outputData(1);
    inputTensorInfo = run->GetInputTensorInfo(networkIdentifier, 0);
    inputTensorInfo.SetConstant(true);
    InputTensors inputTensors{{0, ConstTensor(inputTensorInfo,
                                              inputData.data())}};
    OutputTensors outputTensors{{0, Tensor(run->GetOutputTensorInfo(networkIdentifier, 0),
                                           outputData.data())}};

    // Execute the network
    run->EnqueueWorkload(networkIdentifier, inputTensors, outputTensors);

    std::cout << "Your number was " << outputData[0] << std::endl;
    return 0;
}
Reference for the Arm NN API elements used in this sample:

ConstTensor
    A tensor defined by a TensorInfo (shape and data type) and an immutable backing store.
Tensor
    A tensor defined by a TensorInfo (shape and data type) and a mutable backing store.
TensorInfo::SetConstant
    void SetConstant(const bool IsConstant = true)
    Marks the data corresponding to this tensor info as constant.
IConnectableLayer
    Interface for a layer that is connectable to other layers via InputSlots and OutputSlots.
    virtual const IInputSlot& GetInputSlot(unsigned int index) const = 0
        Get a const input slot handle by slot index.
    virtual const IOutputSlot& GetOutputSlot(unsigned int index) const = 0
        Get the const output slot handle by slot index.
BindableLayer
    A layer user-provided data can be bound to (e.g. inputs, outputs).
IOutputSlot
    virtual void SetTensorInfo(const TensorInfo& tensorInfo) = 0
    virtual int Connect(IInputSlot& destination) = 0
INetwork::Create
    static INetworkPtr Create(const NetworkOptions& networkOptions = {})
IRuntime::Create
    static IRuntimePtr Create(const CreationOptions& options)
Optimize
    IOptimizedNetworkPtr Optimize(const INetwork& network,
                                  const std::vector<BackendId>& backendPreferences,
                                  const IDeviceSpec& deviceSpec,
                                  const OptimizerOptionsOpaque& options = OptimizerOptionsOpaque(),
                                  Optional<std::vector<std::string>&> messages = EmptyOptional())
    Create an optimized version of the network.
ConfigureLogging
    void ConfigureLogging(bool printToStandardOutput, bool printToDebugOutput, LogSeverity severity)
    Configures the logging behaviour of the Arm NN library.
Compute::CpuRef
    CPU execution: reference C++ kernels.
FullyConnectedDescriptor
    A FullyConnectedDescriptor for the FullyConnectedLayer.
Type aliases
    INetworkPtr = std::unique_ptr<INetwork, void(*)(INetwork* network)>
    IOptimizedNetworkPtr = std::unique_ptr<IOptimizedNetwork, void(*)(IOptimizedNetwork* network)>
    IRuntimePtr = std::unique_ptr<IRuntime, void(*)(IRuntime* runtime)>
    InputTensors = std::vector<std::pair<LayerBindingId, ConstTensor>>
    OutputTensors = std::vector<std::pair<LayerBindingId, Tensor>>

Copyright (c) 2021 ARM Limited and Contributors.