Compute Library
 22.11
DepthConcatSubTensorMutator Class Reference (final)

Mutation pass to optimize depth concatenation operations by using sub-tensors.

#include <DepthConcatSubTensorMutator.h>

Collaboration diagram for DepthConcatSubTensorMutator: (diagram omitted)

Public Member Functions

virtual void mutate (Graph &g) override
 Walk the graph and perform a specific mutation.
 
MutationType type () const override
 Returns mutation type.
 
const char * name () override
 Returns mutator name.
 
- Public Member Functions inherited from IGraphMutator
virtual ~IGraphMutator ()=default
 Virtual Destructor.
 

Additional Inherited Members

- Public Types inherited from IGraphMutator
enum  MutationType { IR, Backend }
 Mutation type.
 

Detailed Description

Mutation pass to optimize depth concatenation operations by using sub-tensors.

Warning
Always run this as one of the last mutation passes, as other optimizations might change the parent of the sub-tensors.

Definition at line 37 of file DepthConcatSubTensorMutator.h.

Member Function Documentation

◆ mutate()

void mutate (Graph &g)
override virtual

Walk the graph and perform a specific mutation.

Parameters
[in,out]  g  Graph to walk and mutate

Implements IGraphMutator.

Definition at line 50 of file DepthConcatSubTensorMutator.cpp.

References ARM_COMPUTE_LOG_GRAPH_VERBOSE, arm_compute::graph::ConcatenateLayer, IDeviceBackend::create_subtensor(), Tensor::desc(), arm_compute::graph::dfs(), Graph::edge(), BackendRegistry::get(), BackendRegistry::get_backend(), arm_compute::graph::get_dimension_idx(), INode::id(), INode::input(), INode::input_edges(), arm_compute::test::validation::input_shape, arm_compute::graph::is_target_supported(), INode::name(), Graph::node(), Graph::nodes(), INode::output(), TensorDescriptor::quant_info, arm_compute::utils::iterable::reverse_iterate(), TensorDescriptor::target, Edge::tensor(), and INode::type().

{
    // Early exit if no Concatenation layers exist in graph
    if(g.nodes(NodeType::ConcatenateLayer).empty())
    {
        return;
    }

    // Perform topological sort
    std::vector<NodeID> topological_sorted_node_ids = dfs(g);

    // Should be in reverse order of execution
    for(auto &node_id : arm_compute::utils::iterable::reverse_iterate(topological_sorted_node_ids))
    {
        INode *node = g.node(node_id);
        if(node != nullptr && node->type() == NodeType::ConcatenateLayer && node->output(0) != nullptr)
        {
            // Get output tensor
            auto output_tensor = node->output(0);

            // Check concatenation axis (sub-tensor optimization is supported for concatenation axis >= 2)
            auto *concat_node = arm_compute::utils::cast::polymorphic_downcast<ConcatenateLayerNode *>(node);
            if(output_tensor == nullptr || get_dimension_idx(output_tensor->desc().layout, concat_node->concatenation_axis()) < 2)
            {
                continue;
            }

            // Check that all tensors have the same target, valid inputs and the same quantization info
            bool is_valid = std::all_of(node->input_edges().cbegin(), node->input_edges().cend(),
                                        [&](const EdgeID & eid)
            {
                return (g.edge(eid) != nullptr) && (g.edge(eid)->tensor() != nullptr) && (g.edge(eid)->tensor()->desc().target == output_tensor->desc().target)
                       && (g.edge(eid)->tensor()->desc().quant_info == output_tensor->desc().quant_info);
            });

            // Create sub-tensors
            if(is_valid && is_target_supported(output_tensor->desc().target))
            {
                ARM_COMPUTE_LOG_GRAPH_VERBOSE("Using sub-tensors for the node with ID : "
                                              << node->id() << " and name : " << node->name() << std::endl);
                // Create sub-tensor handles
                unsigned depth = 0;
                for(unsigned int i = 0; i < node->input_edges().size(); ++i)
                {
                    auto       input_tensor = node->input(i);
                    const auto input_shape  = input_tensor->desc().shape;

                    backends::IDeviceBackend      &backend = backends::BackendRegistry::get().get_backend(input_tensor->desc().target);
                    std::unique_ptr<ITensorHandle> handle  = backend.create_subtensor(output_tensor->handle(), input_shape, Coordinates(0, 0, depth), false);
                    input_tensor->set_handle(std::move(handle));

                    depth += input_shape.z();
                }

                auto *dc_node = arm_compute::utils::cast::polymorphic_downcast<ConcatenateLayerNode *>(node);
                dc_node->set_enabled(false);
            }
        }
    }
}

◆ name()

const char * name ( )
override virtual

Returns mutator name.

Returns
Mutator name

Implements IGraphMutator.

Definition at line 40 of file DepthConcatSubTensorMutator.cpp.

{
    return "DepthConcatSubTensorMutator";
}

◆ type()

IGraphMutator::MutationType type ( ) const
override virtual

Returns mutation type.

Returns
Mutation type enumeration

Implements IGraphMutator.

Definition at line 45 of file DepthConcatSubTensorMutator.cpp.

References IGraphMutator::Backend.


The documentation for this class was generated from the following files:
DepthConcatSubTensorMutator.h
DepthConcatSubTensorMutator.cpp