Compute Library
 21.02
CLArgMinMaxLayer Class Reference

Function to calculate the index of the minimum or maximum values in a tensor based on an axis. More...

#include <CLArgMinMaxLayer.h>

Collaboration diagram for CLArgMinMaxLayer:

Public Member Functions

 CLArgMinMaxLayer (std::shared_ptr< IMemoryManager > memory_manager=nullptr)
 Default Constructor. More...
 
 CLArgMinMaxLayer (const CLArgMinMaxLayer &)=delete
 Prevent instances of this class from being copied. More...
 
CLArgMinMaxLayer & operator= (const CLArgMinMaxLayer &)=delete
 Prevent instances of this class from being copied. More...
 
 CLArgMinMaxLayer (CLArgMinMaxLayer &&)=delete
 Prevent instances of this class from being moved. More...
 
CLArgMinMaxLayer & operator= (CLArgMinMaxLayer &&)=delete
 Prevent instances of this class from being moved. More...
 
 ~CLArgMinMaxLayer ()
 Default destructor. More...
 
void configure (const ICLTensor *input, int axis, ICLTensor *output, const ReductionOperation &op)
 Set the input and output tensors. More...
 
void configure (const CLCompileContext &compile_context, const ICLTensor *input, int axis, ICLTensor *output, const ReductionOperation &op)
 Set the input and output tensors. More...
 
void run () override
 Run the kernels contained in the function. More...
 
- Public Member Functions inherited from IFunction
virtual ~IFunction ()=default
 Destructor. More...
 
virtual void prepare ()
 Prepare the function for executing. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, int axis, const ITensorInfo *output, const ReductionOperation &op)
 Static function to check if given info will lead to a valid configuration of CLArgMinMaxLayer. More...
 

Detailed Description

Function to calculate the index of the minimum or maximum values in a tensor based on an axis.

Note
The default data type for an uninitialized output tensor is signed 32-bit integer (S32). It is the user's responsibility to check that the results do not overflow because the indices are computed in unsigned 32-bit (U32).

Definition at line 48 of file CLArgMinMaxLayer.h.
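The semantics can be illustrated with a plain C++ sketch, independent of the OpenCL implementation: for each slice along the reduced axis, the function returns the index of the extreme element. The helper below is hypothetical (it is not part of the library) and reduces the x-axis (axis 0) of a row-major 2D tensor:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical reference implementation of ARG_IDX_MAX over axis 0 (the
// innermost/x dimension) of a row-major width x height tensor: for every
// row, return the index of the largest element. Indices are produced as
// unsigned 32-bit values, mirroring the U32 arithmetic noted above.
std::vector<uint32_t> arg_idx_max_x(const std::vector<float> &data,
                                    std::size_t width, std::size_t height)
{
    std::vector<uint32_t> out(height, 0);
    for(std::size_t y = 0; y < height; ++y)
    {
        const float *row = data.data() + y * width;
        for(std::size_t x = 1; x < width; ++x)
        {
            if(row[x] > row[out[y]])
            {
                out[y] = static_cast<uint32_t>(x);
            }
        }
    }
    return out;
}
```

With an input of shape (width = 3, height = 2), the output has shape (2): one index per row, matching the reduced output shape produced by the function.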

Constructor & Destructor Documentation

◆ CLArgMinMaxLayer() [1/3]

CLArgMinMaxLayer ( std::shared_ptr< IMemoryManager > memory_manager = nullptr )

Default Constructor.

Parameters
[in] memory_manager (Optional) Memory manager.

Definition at line 39 of file CLArgMinMaxLayer.cpp.

References CLArgMinMaxLayer::~CLArgMinMaxLayer().

40  : _memory_group(std::move(memory_manager)), _results_vector(), _not_reshaped_output(), _reduction_kernels_vector(), _reshape(), _num_of_stages(), _reduction_axis()
41 {
42 }

◆ CLArgMinMaxLayer() [2/3]

CLArgMinMaxLayer ( const CLArgMinMaxLayer & )
delete

Prevent instances of this class from being copied.

◆ CLArgMinMaxLayer() [3/3]

CLArgMinMaxLayer ( CLArgMinMaxLayer && )
delete

Prevent instances of this class from being moved.

◆ ~CLArgMinMaxLayer()

~CLArgMinMaxLayer ( )
default

Default destructor.

Referenced by CLArgMinMaxLayer::CLArgMinMaxLayer().

Member Function Documentation

◆ configure() [1/2]

void configure ( const ICLTensor * input,
int  axis,
ICLTensor * output,
const ReductionOperation & op 
)

Set the input and output tensors.

Parameters
[in] input Input source tensor. Data types supported: QASYMM8/QASYMM8_SIGNED/S32/F16/F32.
[in] axis Axis to find max/min index.
[out] output Output destination tensor. Data types supported: U32/S32.
[in] op Reduction operation to perform. Operations supported: ARG_IDX_MAX, ARG_IDX_MIN

Definition at line 114 of file CLArgMinMaxLayer.cpp.

References CLKernelLibrary::get().

115 {
116  configure(CLKernelLibrary::get().get_compile_context(), input, axis, output, op);
117 }

◆ configure() [2/2]

void configure ( const CLCompileContext & compile_context,
const ICLTensor * input,
int  axis,
ICLTensor * output,
const ReductionOperation & op 
)

Set the input and output tensors.

Parameters
[in] compile_context The compile context to be used.
[in] input Input source tensor. Data types supported: QASYMM8/QASYMM8_SIGNED/S32/F16/F32.
[in] axis Axis to find max/min index.
[out] output Output destination tensor. Data types supported: U32/S32.
[in] op Reduction operation to perform. Operations supported: ARG_IDX_MAX, ARG_IDX_MIN

Definition at line 119 of file CLArgMinMaxLayer.cpp.

References CLTensorAllocator::allocate(), CLTensor::allocator(), ARM_COMPUTE_ERROR_ON_NULLPTR, arm_compute::auto_init_if_empty(), arm_compute::utils::calculate_number_of_stages_only_x_axis(), ICloneable< T >::clone(), arm_compute::misc::shape_calculator::compute_reduced_shape(), CLReshapeLayer::configure(), ITensorInfo::data_type(), ITensorInfo::dimension(), ITensor::info(), arm_compute::test::validation::input, MemoryGroup::manage(), arm_compute::test::validation::output_shape, arm_compute::S32, arm_compute::test::validation::shape, ITensorInfo::tensor_shape(), and arm_compute::UNKNOWN.

120 {
122  _num_of_stages = utils::calculate_number_of_stages_only_x_axis(input->info()->dimension(0), axis);
123  _reduction_axis = axis;
124 
125  const TensorShape output_shape = arm_compute::misc::shape_calculator::compute_reduced_shape(input->info()->tensor_shape(), axis, false);
126  DataType output_data_type = (output->info()->data_type() == DataType::UNKNOWN) ? DataType::S32 : output->info()->data_type();
127  auto_init_if_empty(*output->info(), input->info()->clone()->set_tensor_shape(output_shape).set_data_type(output_data_type).reset_padding().set_is_resizable(true));
128 
129  // Configure reduction operation kernels
130  _reduction_kernels_vector.reserve(_num_of_stages);
131 
132  auto add_reduction_kernel = [this, &compile_context, axis, op](const ICLTensor * input, const ICLTensor * prev_output, ICLTensor * output)
133  {
134  _reduction_kernels_vector.emplace_back(std::make_unique<CLArgMinMaxLayerKernel>());
135  _reduction_kernels_vector.back()->configure(compile_context, input, prev_output, output, axis, op);
136  };
137 
138  _memory_group.manage(&_not_reshaped_output);
139  // Create temporary tensors
140  if(_num_of_stages == 1)
141  {
142  add_reduction_kernel(input, nullptr, &_not_reshaped_output);
143  }
144  else
145  {
146  _results_vector.resize(_num_of_stages - 1);
147  TensorShape shape{ input->info()->tensor_shape() };
148  for(unsigned int i = 0; i < _num_of_stages - 1; i++)
149  {
150  shape.set(0, ceil(shape.x() / 128.f));
151  _results_vector[i].allocator()->init(input->info()->clone()->set_tensor_shape(shape).set_data_type(output_data_type));
152  }
153 
154  // Apply ReductionOperation only on first kernel
155  _memory_group.manage(&_results_vector[0]);
156  add_reduction_kernel(input, nullptr, &_results_vector[0]);
157 
158  // Apply ReductionOperation on intermediate stages
159  for(unsigned int i = 1; i < _num_of_stages - 1; ++i)
160  {
161  _memory_group.manage(&_results_vector[i]);
162  add_reduction_kernel(input, &_results_vector[i - 1], &_results_vector[i]);
163  _results_vector[i - 1].allocator()->allocate();
164  }
165 
166  // Apply ReductionOperation on the last stage
167  const unsigned int last_stage = _num_of_stages - 1;
168  add_reduction_kernel(input, &_results_vector[last_stage - 1], &_not_reshaped_output);
169  _results_vector[last_stage - 1].allocator()->allocate();
170  }
171  _reshape.configure(compile_context, &_not_reshaped_output, output);
172  _not_reshaped_output.allocator()->allocate();
173 }
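The staging logic above repeatedly shrinks the x-dimension: each intermediate result has its width reduced by a factor of 128 via ceil division (`shape.set(0, ceil(shape.x() / 128.f))`), until a single value per slice remains. The following is a hypothetical model of how the stage count and intermediate widths could be derived under that assumption; it is not the library's `calculate_number_of_stages_only_x_axis()`:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical model of the multi-stage reduction: each stage collapses
// windows of 128 elements along x, so an input of width w needs a further
// stage until the remaining width fits into a single window. One entry is
// returned per intermediate stage; the total stage count is size() + 1.
std::vector<std::size_t> intermediate_widths(std::size_t w)
{
    std::vector<std::size_t> widths;
    while(w > 128)
    {
        w = (w + 127) / 128; // ceil(w / 128)
        widths.push_back(w);
    }
    return widths;
}
```

For example, a width of 100 fits in one stage with no intermediate tensors, while a width of 300 produces one intermediate of width 3 followed by a final stage, mirroring the `_results_vector` sizing in the code above.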

◆ operator=() [1/2]

CLArgMinMaxLayer & operator= ( const CLArgMinMaxLayer & )
delete

Prevent instances of this class from being copied.

◆ operator=() [2/2]

CLArgMinMaxLayer& operator= ( CLArgMinMaxLayer &&  )
delete

Prevent instances of this class from being moved.

◆ run()

void run ( )
overridevirtual

Run the kernels contained in the function.

For Neon kernels:

  • Multi-threading is used for the kernels which are parallelisable.
  • By default std::thread::hardware_concurrency() threads are used.
Note
CPPScheduler::set_num_threads() can be used to manually set the number of threads

For OpenCL kernels:

  • All the kernels are enqueued on the queue associated with CLScheduler.
  • The queue is then flushed.
Note
The function will not block until the kernels are executed. It is the user's responsibility to wait.
Will call prepare() on the first run if it has not already been done.

Implements IFunction.

Definition at line 175 of file CLArgMinMaxLayer.cpp.

References CLScheduler::enqueue(), CLScheduler::get(), and CLReshapeLayer::run().

176 {
177  MemoryGroupResourceScope scope_mg(_memory_group);
178 
179  for(unsigned int i = 0; i < _num_of_stages; ++i)
180  {
181  CLScheduler::get().enqueue(*_reduction_kernels_vector[i], false);
182  }
183  _reshape.run();
184 }

◆ validate()

Status validate ( const ITensorInfo * input,
int  axis,
const ITensorInfo * output,
const ReductionOperation & op 
)
static

Static function to check if given info will lead to a valid configuration of CLArgMinMaxLayer.

Parameters
[in] input Input source tensor info. Data types supported: QASYMM8/QASYMM8_SIGNED/S32/F16/F32.
[in] axis Axis to find max/min index.
[in] output Output destination tensor info. Data types supported: U32/S32.
[in] op Reduction operation to perform. Operations supported: ARG_IDX_MAX, ARG_IDX_MIN
Returns
a status

Definition at line 46 of file CLArgMinMaxLayer.cpp.

References arm_compute::ARG_IDX_MAX, arm_compute::ARG_IDX_MIN, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_RETURN_ERROR_ON_DATA_TYPE_CHANNEL_NOT_IN, ARM_COMPUTE_RETURN_ERROR_ON_F16_UNSUPPORTED, ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_SHAPES, ARM_COMPUTE_RETURN_ERROR_ON_MSG, ARM_COMPUTE_RETURN_ON_ERROR, arm_compute::utils::calculate_number_of_stages_only_x_axis(), ICloneable< T >::clone(), arm_compute::misc::shape_calculator::compute_reduced_shape(), arm_compute::test::validation::data_type, ITensorInfo::data_type(), ITensorInfo::dimension(), arm_compute::F16, arm_compute::F32, ITensorInfo::num_channels(), Dimensions< size_t >::num_max_dimensions, arm_compute::QASYMM8, arm_compute::QASYMM8_SIGNED, arm_compute::test::validation::qinfo, ITensorInfo::quantization_info(), arm_compute::S32, TensorShape::set(), TensorInfo::set_data_type(), ITensorInfo::set_num_channels(), ITensorInfo::set_quantization_info(), ITensorInfo::set_tensor_shape(), arm_compute::test::validation::shape, ITensorInfo::tensor_shape(), ITensorInfo::total_size(), CLReshapeLayer::validate(), and CLArgMinMaxLayerKernel::validate().

Referenced by arm_compute::test::validation::DATA_TEST_CASE().

47 {
52  ARM_COMPUTE_RETURN_ERROR_ON_MSG(axis >= static_cast<int>(TensorShape::num_max_dimensions), "Reduction axis greater than max number of dimensions");
53  ARM_COMPUTE_RETURN_ERROR_ON_MSG(axis > 3, "Unsupported reduction axis");
54  const unsigned int num_of_stages = utils::calculate_number_of_stages_only_x_axis(input->dimension(0), axis);
55 
56  DataType output_data_type = DataType::S32;
57  TensorInfo not_reshaped_output;
58  const auto input_num_channles = input->num_channels();
59  const auto input_qinfo = input->quantization_info();
60 
61  if(output->total_size() != 0)
62  {
63  output_data_type = output->data_type();
64  const TensorInfo expected_output_shape = output->clone()->set_tensor_shape(arm_compute::misc::shape_calculator::compute_reduced_shape(input->tensor_shape(), axis, false));
65  ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_SHAPES(&expected_output_shape, output);
66  }
67 
68  auto shape_before_reshape = input->tensor_shape();
69  shape_before_reshape.set(axis, 1);
70  auto initialize_tensorinfo = [](TensorInfo & ti, TensorShape shape, DataType data_type, int num_channels, QuantizationInfo qinfo)
71  {
72  ti.set_data_type(data_type).set_tensor_shape(shape).set_num_channels(num_channels).set_quantization_info(qinfo);
73  };
74 
75  initialize_tensorinfo(not_reshaped_output, shape_before_reshape, output_data_type, input_num_channles, input_qinfo);
76 
77  if(num_of_stages == 1)
78  {
79  ARM_COMPUTE_RETURN_ON_ERROR(CLArgMinMaxLayerKernel::validate(input, nullptr, &not_reshaped_output, axis, op));
80  }
81  else
82  {
83  // Create temporary tensor infos
84  std::vector<TensorInfo> sums_vector(num_of_stages - 1);
85 
86  // Create intermediate tensor info
87  TensorShape shape{ input->tensor_shape() };
88 
89  for(unsigned int i = 0; i < num_of_stages - 1; i++)
90  {
91  shape.set(0, ceil(shape.x() / 128.f));
92  sums_vector[i].set_data_type(input->data_type());
93  sums_vector[i].set_tensor_shape(shape);
94  sums_vector[i].set_num_channels(input->num_channels());
95  }
96 
97  // Validate ReductionOperation only on first kernel
98  ARM_COMPUTE_RETURN_ON_ERROR(CLArgMinMaxLayerKernel::validate(input, nullptr, &sums_vector[0], axis, op));
99 
100  // Validate ReductionOperation on intermediate stages
101  for(unsigned int i = 1; i < num_of_stages - 1; ++i)
102  {
103  ARM_COMPUTE_RETURN_ON_ERROR(CLArgMinMaxLayerKernel::validate(input, &sums_vector[i - 1], &sums_vector[i], axis, op));
104  }
105 
106  // Validate ReductionOperation on the last stage
107  const unsigned int last_stage = num_of_stages - 1;
108  ARM_COMPUTE_RETURN_ON_ERROR(CLArgMinMaxLayerKernel::validate(input, &sums_vector[last_stage - 1], &not_reshaped_output, axis, op));
109  }
110  ARM_COMPUTE_RETURN_ON_ERROR(CLReshapeLayer::validate(&not_reshaped_output, output));
111  return Status{};
112 }
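validate() derives the expected output shape by dropping the reduced axis (compute_reduced_shape with keep_dims = false), while the intermediate, not-yet-reshaped tensor keeps the axis at size 1. A minimal illustrative version of that shape calculation, assuming those keep_dims semantics (this helper is a sketch, not the library's implementation):

```cpp
#include <cstddef>
#include <vector>

// Illustrative version of compute_reduced_shape: with keep_dims the reduced
// axis is set to 1 (as for the intermediate tensor above); without it the
// axis is removed entirely (the final output shape after the reshape).
std::vector<std::size_t> reduced_shape(std::vector<std::size_t> shape,
                                       std::size_t axis, bool keep_dims)
{
    if(keep_dims)
    {
        shape[axis] = 1;
    }
    else
    {
        shape.erase(shape.begin() + static_cast<std::ptrdiff_t>(axis));
    }
    return shape;
}
```

For an input of shape (4, 3, 2) reduced over axis 1, the intermediate tensor has shape (4, 1, 2) and the final output has shape (4, 2), which is the shape validate() compares against the user-provided output info.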

The documentation for this class was generated from the following files:

CLArgMinMaxLayer.h
CLArgMinMaxLayer.cpp