Compute Library
 23.08
DependencyGraph Class Reference

A multi-input (tensors), multi-output (tensors) directed acyclic graph, represented as a doubly-linked adjacency list that distinguishes between sources and destinations. More...

#include <DependencyGraph.h>

Data Structures

struct  OpPack
 A pack of an operator together with its input and output tensors, used when traversing the graph in topological order. More...
 

Public Types

using Id = int32_t
 
using TensorId = Id
 
using OperatorId = Id
 
using AdjList = std::map< Id, std::vector< Id > >
 Adjacency list. More...
 

Public Member Functions

 DependencyGraph ()=default
 
bool try_add_operator_as_linear (OperatorId op, const std::vector< TensorId > &inputs, const std::vector< TensorId > &outputs, bool is_output=false) const
 Try adding an operator (without actually adding it), while keeping the graph as a "linear sequence" / list. More...
 
void add_operator_as_linear (OperatorId op, const std::vector< TensorId > &inputs, const std::vector< TensorId > &outputs, bool is_output=false)
 Add an operator, while keeping the graph as a "linear sequence". More...
 
bool add_operator (OperatorId op, const std::vector< TensorId > &inputs, const std::vector< TensorId > &outputs, bool is_output=false)
 Add a new operator. Returns false if adding it would violate the DAG invariant; an invalid operation does not change the graph. More...
 
std::vector< OpPack > build_operators_sequence () const
 Build a sequence of operators from the acyclic graph of operators. More...
 
std::vector< OperatorId > src_ops_from_tensor (TensorId tensor) const
 
std::vector< OperatorId > dst_ops_from_tensor (TensorId tensor) const
 
std::vector< TensorId > all_tensors () const
 Get all tensors. More...
 
std::vector< TensorId > global_src_tensors () const
 Get source tensors of the whole graph. More...
 
std::vector< TensorId > global_dst_tensors () const
 Get destination tensors of the whole graph. More...
 
std::vector< TensorId > intermediate_tensors () const
 Get intermediate tensors of the whole graph. More...
 
std::vector< OperatorId > get_root_ops () const
 Get all root ops. More...
 

Friends

std::ostream & operator<< (std::ostream &os, const DependencyGraph &)
 
bool operator== (const DependencyGraph &g0, const DependencyGraph &g1)
 Strict equality comparison (all internal ids and order of insertion matter). More...
 

Detailed Description

A multi-input (tensors), multi-output (tensors) directed acyclic graph, represented as a doubly-linked adjacency list that distinguishes between sources and destinations.

Definition at line 52 of file DependencyGraph.h.

Member Typedef Documentation

◆ AdjList

using AdjList = std::map<Id, std::vector<Id> >

Adjacency list.

Definition at line 61 of file DependencyGraph.h.

◆ Id

using Id = int32_t

Definition at line 55 of file DependencyGraph.h.

◆ OperatorId

using OperatorId = Id

Definition at line 57 of file DependencyGraph.h.

◆ TensorId

using TensorId = Id

Definition at line 56 of file DependencyGraph.h.

Constructor & Destructor Documentation

◆ DependencyGraph()

DependencyGraph ( )
default

Member Function Documentation

◆ add_operator()

bool add_operator ( OperatorId  op,
const std::vector< TensorId > &  inputs,
const std::vector< TensorId > &  outputs,
bool  is_output = false 
)
inline

Add a new operator. Returns false if adding it would violate the DAG invariant; an invalid operation does not change the graph.

Parameters
[in]opOperator to add
[in]inputsInput tensors to the operator
[in]outputsOutput tensors to the operator
[in]is_outputWhether this is an output operator

Definition at line 186 of file DependencyGraph.h.

{
    if(operator_exists(op))
    {
        return false;
    }
    _adj_src_tensors[op] = {};
    _adj_dst_tensors[op] = {};
    for(auto in_tensor : inputs)
    {
        // Linking an input tensor to the operator node can never create a cycle, because
        // each op is guaranteed to be newly created, so every <input, op> edge is new
        link_input(op, in_tensor);
    }
    for(auto out_tensor : outputs)
    {
        // If a back path already exists from the op's output tensor to the op, linking the two would create a cycle
        if(path_exists_from_tensor_to_op(out_tensor, op))
        {
            remove_operator(op);
            return false;
        }
        else
        {
            link_output(op, out_tensor);
        }
    }

    if(!is_output)
    {
        _last_op_available = true;
        _last_op = op;
    }

    return true;
}

Referenced by GpuKernelComponentGraph::add_new_component(), and DependencyGraph::add_operator_as_linear().

◆ add_operator_as_linear()

void add_operator_as_linear ( OperatorId  op,
const std::vector< TensorId > &  inputs,
const std::vector< TensorId > &  outputs,
bool  is_output = false 
)
inline

Add an operator, while keeping the graph as a "linear sequence".

PRECONDITION: The current graph is already linear. INVARIANT: The list can only grow from head to tail. POSTCONDITION: The graph is linear.

Definition at line 171 of file DependencyGraph.h.

{
    const auto success = add_operator(op, inputs, outputs, is_output);
    ARM_COMPUTE_UNUSED(success);
    ARM_COMPUTE_ERROR_ON(!success);
}

References DependencyGraph::add_operator(), ARM_COMPUTE_ERROR_ON, and ARM_COMPUTE_UNUSED.

Referenced by GpuOperatorGroup::add_operator().

◆ all_tensors()

std::vector<TensorId> all_tensors ( ) const
inline

Get all tensors.

Returns
std::vector<TensorId>

Definition at line 280 of file DependencyGraph.h.

{
    std::vector<TensorId> tensors{};
    std::transform(std::begin(_adj_src_ops), std::end(_adj_src_ops), std::back_inserter(tensors), [](const auto & it)
    {
        return it.first;
    });
    return tensors;
}

References arm_compute::mlgo::parser::end().

◆ build_operators_sequence()

std::vector<OpPack> build_operators_sequence ( ) const
inline

Build a sequence of operators from the acyclic graph of operators.

The graph is visited with a depth-first strategy. An operator can be added to the sequence only when all operators that supply its input tensors have been added; otherwise it is skipped and visited again later. In other words, the dependencies between operators are preserved in the sequence.

Definition at line 230 of file DependencyGraph.h.

{
    std::vector<OpPack> ops_seq;
    std::set<Id> done_ops;
    std::set<Id> done_tensors;

    const auto input_tensors = global_src_tensors();

    for(auto tensor : input_tensors)
    {
        done_tensors.insert(tensor);

        for(auto op : _adj_dst_ops.at(tensor))
        {
            build_operators_sequence_from_op(op, ops_seq, done_ops, done_tensors);
        }
    }

    return ops_seq;
}

References DependencyGraph::global_src_tensors(), and tensor.

Referenced by GpuKernelComponentGraph::fuse().

◆ dst_ops_from_tensor()

std::vector<OperatorId> dst_ops_from_tensor ( TensorId  tensor) const
inline

Definition at line 272 of file DependencyGraph.h.

{
    return _adj_dst_ops.at(tensor);
}

References tensor.

◆ get_root_ops()

std::vector<OperatorId> get_root_ops ( ) const
inline

Get all root ops.

Root ops can also be referred to as "src ops" of the whole graph

Returns
std::vector<OperatorId>

Definition at line 352 of file DependencyGraph.h.

{
    std::vector<OperatorId> ops{};
    const auto op_list = all_ops();

    for(auto op : op_list)
    {
        if(src_ops(op).empty())
        {
            ops.emplace_back(op);
        }
    }
    return ops;
}

Referenced by GpuOperatorGroup::get_root_operator().

◆ global_dst_tensors()

std::vector<TensorId> global_dst_tensors ( ) const
inline

Get destination tensors of the whole graph.

Returns
std::vector<TensorId>

Definition at line 309 of file DependencyGraph.h.

{
    std::vector<TensorId> tensors;
    for(auto tensor_dst_ops : _adj_dst_ops)
    {
        if(tensor_dst_ops.second.empty())
        {
            tensors.push_back(tensor_dst_ops.first);
        }
    }
    return tensors;
}

◆ global_src_tensors()

std::vector<TensorId> global_src_tensors ( ) const
inline

Get source tensors of the whole graph.

Returns
std::vector<TensorId>

Definition at line 293 of file DependencyGraph.h.

{
    std::vector<TensorId> tensors;
    for(auto tensor_src_ops : _adj_src_ops)
    {
        if(tensor_src_ops.second.empty())
        {
            tensors.push_back(tensor_src_ops.first);
        }
    }
    return tensors;
}

Referenced by DependencyGraph::build_operators_sequence().

◆ intermediate_tensors()

std::vector<TensorId> intermediate_tensors ( ) const
inline

Get intermediate tensors of the whole graph.

Returns
std::vector<TensorId>

Definition at line 325 of file DependencyGraph.h.

{
    std::vector<TensorId> tensors;

    // If a tensor connects the output of one operator to the input of another,
    // it is not allocated in memory; it exists only as a temporary variable.
    for(auto src_tensor : _adj_src_ops)
    {
        if(!src_tensor.second.empty())
        {
            const auto dst_tensor = _adj_dst_ops.find(src_tensor.first);
            if(dst_tensor != _adj_dst_ops.end())
            {
                if(!dst_tensor->second.empty())
                {
                    tensors.push_back(src_tensor.first);
                }
            }
        }
    }

    return tensors;
}

◆ src_ops_from_tensor()

std::vector<OperatorId> src_ops_from_tensor ( TensorId  tensor) const
inline

Definition at line 268 of file DependencyGraph.h.

{
    return _adj_src_ops.at(tensor);
}

References tensor.

◆ try_add_operator_as_linear()

bool try_add_operator_as_linear ( OperatorId  op,
const std::vector< TensorId > &  inputs,
const std::vector< TensorId > &  outputs,
bool  is_output = false 
) const
inline

Try adding an operator (without actually adding it), while keeping the graph as a "linear sequence" / list.

Rule: If the new operator is not the first operator, at least one input tensor must be the output tensor of the last non-output operator. All other input tensors must be the global input of the graph (i.e. not the output of any operator).

Rule: The output tensor of the new operator must not be the input tensor of any previously added operator.

PRECONDITION: The current graph is already linear

Returns
true If the operator can be added while keeping the graph as a linear sequence
false Otherwise

Definition at line 98 of file DependencyGraph.h.

{
    ARM_COMPUTE_UNUSED(op, is_output);
    if(all_ops().empty())
    {
        return true;
    }

    // If the new operator is not the first operator, at least one input tensor must be
    // the output tensor of the last non-output operator. All other input tensors must be
    // the global input of the graph (i.e. not the output of any operator).
    if(_last_op_available)
    {
        auto use_input_from_last_op = false;

        for(auto src_tensor : inputs)
        {
            const auto src_ops = _adj_src_ops.find(src_tensor);

            if(src_ops != _adj_src_ops.end())
            {
                ARM_COMPUTE_ERROR_ON(src_ops->second.size() > 1);

                if(!src_ops->second.empty())
                {
                    const auto src_op = src_ops->second[0];

                    if(src_op == _last_op)
                    {
                        if(use_input_from_last_op)
                        {
                            // To be safe, we also forbid using the output tensor
                            // of the last operator twice.
                            return false;
                        }

                        use_input_from_last_op = true;
                    }
                    else
                    {
                        // The input tensor of this operator must not be the output tensor
                        // of any other operator except the last non-output operator.
                        return false;
                    }
                }
            }
        }

        if(!use_input_from_last_op)
        {
            // At least one input tensor must be the output tensor of the last non-output operator.
            return false;
        }
    }

    // The output tensor of the new operator must not be the input tensor of any previously
    // added operator.
    for(auto dst_tensor : outputs)
    {
        if(_adj_dst_ops.find(dst_tensor) != _adj_dst_ops.end())
        {
            return false;
        }
    }

    return true;
}

References ARM_COMPUTE_ERROR_ON, and ARM_COMPUTE_UNUSED.

Referenced by GpuOperatorGroup::try_add_operator().

Friends And Related Function Documentation

◆ operator<<

std::ostream& operator<< ( std::ostream &  os,
const DependencyGraph  
)
friend

◆ operator==

bool operator== ( const DependencyGraph g0,
const DependencyGraph g1 
)
friend

Strict equality comparison (all internal ids and order of insertion matter).

In the future this may be replaced with a topological comparison, allowing equivalent graphs with different internal ids to be equal

Parameters
[in]g0
[in]g1
Returns
true If the same
false Otherwise

Definition at line 260 of file DependencyGraph.h.

{
    // Do not compare id allocators
    return std::make_tuple(
               g0._adj_src_tensors, g0._adj_dst_tensors, g0._adj_src_ops, g0._adj_dst_ops)
           == std::make_tuple(
               g1._adj_src_tensors, g1._adj_dst_tensors, g1._adj_src_ops, g1._adj_dst_ops);
}

The documentation for this class was generated from the following file: DependencyGraph.h