Compute Library
 19.08
CLGenerateProposalsLayer Class Reference

Basic function to generate proposals for an RPN (Region Proposal Network). More...

#include <CLGenerateProposalsLayer.h>

Collaboration diagram for CLGenerateProposalsLayer:

Public Member Functions

 CLGenerateProposalsLayer (std::shared_ptr< IMemoryManager > memory_manager=nullptr)
 Default constructor. More...
 
 CLGenerateProposalsLayer (const CLGenerateProposalsLayer &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 CLGenerateProposalsLayer (CLGenerateProposalsLayer &&)=default
 Default move constructor. More...
 
CLGenerateProposalsLayer & operator= (const CLGenerateProposalsLayer &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
CLGenerateProposalsLayer & operator= (CLGenerateProposalsLayer &&)=default
 Default move assignment operator. More...
 
void configure (const ICLTensor *scores, const ICLTensor *deltas, const ICLTensor *anchors, ICLTensor *proposals, ICLTensor *scores_out, ICLTensor *num_valid_proposals, const GenerateProposalsInfo &info)
 Set the input and output tensors. More...
 
void run () override
 Run the kernels contained in the function. More...
 
- Public Member Functions inherited from IFunction
virtual ~IFunction ()=default
 Destructor. More...
 
virtual void prepare ()
 Prepare the function for executing. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *scores, const ITensorInfo *deltas, const ITensorInfo *anchors, const ITensorInfo *proposals, const ITensorInfo *scores_out, const ITensorInfo *num_valid_proposals, const GenerateProposalsInfo &info)
 Static function to check if given info will lead to a valid configuration of CLGenerateProposalsLayer. More...
 

Detailed Description

Basic function to generate proposals for an RPN (Region Proposal Network).

This function calls the following OpenCL kernels:

  1. CLComputeAllAnchors
  2. CLPermute x 2
  3. CLReshapeLayer x 2
  4. CLStridedSlice x 3
  5. CLBoundingBoxTransform
  6. CLCopyKernel
  7. CLMemsetKernel

And the following CPP kernel:

  8. CPPBoxWithNonMaximaSuppressionLimit

Definition at line 58 of file CLGenerateProposalsLayer.h.

Constructor & Destructor Documentation

◆ CLGenerateProposalsLayer() [1/3]

CLGenerateProposalsLayer ( std::shared_ptr< IMemoryManager > memory_manager = nullptr)

Default constructor.

Parameters
[in] memory_manager  (Optional) Memory manager.

Definition at line 32 of file CLGenerateProposalsLayer.cpp.

    : _memory_group(std::move(memory_manager)),
      _permute_deltas_kernel(),
      _flatten_deltas_kernel(),
      _permute_scores_kernel(),
      _flatten_scores_kernel(),
      _compute_anchors_kernel(),
      _bounding_box_kernel(),
      _memset_kernel(),
      _padded_copy_kernel(),
      _cpp_nms_kernel(),
      _is_nhwc(false),
      _deltas_permuted(),
      _deltas_flattened(),
      _scores_permuted(),
      _scores_flattened(),
      _all_anchors(),
      _all_proposals(),
      _keeps_nms_unused(),
      _classes_nms_unused(),
      _proposals_4_roi_values(),
      _num_valid_proposals(nullptr),
      _scores_out(nullptr)
{
}

◆ CLGenerateProposalsLayer() [2/3]

Prevent instances of this class from being copied (As this class contains pointers)

◆ CLGenerateProposalsLayer() [3/3]

Default move constructor.

Member Function Documentation

◆ configure()

void configure ( const ICLTensor * scores,
const ICLTensor * deltas,
const ICLTensor * anchors,
ICLTensor * proposals,
ICLTensor * scores_out,
ICLTensor * num_valid_proposals,
const GenerateProposalsInfo & info 
)

Set the input and output tensors.

Parameters
[in]  scores               Scores from convolution layer of size (W, H, A), where H and W are the height and width of the feature map, and A is the number of anchors. Data types supported: F16/F32
[in]  deltas               Bounding box deltas from convolution layer of size (W, H, 4*A). Data types supported: Same as scores
[in]  anchors              Anchors tensor of size (4, A). Data types supported: Same as input
[out] proposals            Box proposals output tensor of size (5, W*H*A). Data types supported: Same as input
[out] scores_out           Box scores output tensor of size (W*H*A). Data types supported: Same as input
[out] num_valid_proposals  Scalar output tensor indicating how many of the first proposals are valid. Data types supported: U32
[in]  info                 Contains GenerateProposals operation information described in GenerateProposalsInfo
Note
Only single image prediction is supported. Height and Width (and scale) of the image will be contained in the GenerateProposalsInfo struct.
Proposals contains all the proposals. Of those, only the first num_valid_proposals are valid.

Definition at line 58 of file CLGenerateProposalsLayer.cpp.

{
    ARM_COMPUTE_ERROR_ON_NULLPTR(scores, deltas, anchors, proposals, scores_out, num_valid_proposals);
    ARM_COMPUTE_ERROR_THROW_ON(CLGenerateProposalsLayer::validate(scores->info(), deltas->info(), anchors->info(), proposals->info(), scores_out->info(), num_valid_proposals->info(), info));

    _is_nhwc                    = scores->info()->data_layout() == DataLayout::NHWC;
    const DataType data_type    = deltas->info()->data_type();
    const int num_anchors       = scores->info()->dimension(get_data_layout_dimension_index(scores->info()->data_layout(), DataLayoutDimension::CHANNEL));
    const int feat_width        = scores->info()->dimension(get_data_layout_dimension_index(scores->info()->data_layout(), DataLayoutDimension::WIDTH));
    const int feat_height       = scores->info()->dimension(get_data_layout_dimension_index(scores->info()->data_layout(), DataLayoutDimension::HEIGHT));
    const int total_num_anchors = num_anchors * feat_width * feat_height;
    const int pre_nms_topN      = info.pre_nms_topN();
    const int post_nms_topN     = info.post_nms_topN();
    const size_t values_per_roi = info.values_per_roi();

    // Compute all the anchors
    _memory_group.manage(&_all_anchors);
    _compute_anchors_kernel.configure(anchors, &_all_anchors, ComputeAnchorsInfo(feat_width, feat_height, info.spatial_scale()));

    const TensorShape flatten_shape_deltas(values_per_roi, total_num_anchors);
    _deltas_flattened.allocator()->init(TensorInfo(flatten_shape_deltas, 1, data_type));

    // Permute and reshape deltas
    if(!_is_nhwc)
    {
        _memory_group.manage(&_deltas_permuted);
        _memory_group.manage(&_deltas_flattened);
        _permute_deltas_kernel.configure(deltas, &_deltas_permuted, PermutationVector{ 2, 0, 1 });
        _flatten_deltas_kernel.configure(&_deltas_permuted, &_deltas_flattened);
        _deltas_permuted.allocator()->allocate();
    }
    else
    {
        _memory_group.manage(&_deltas_flattened);
        _flatten_deltas_kernel.configure(deltas, &_deltas_flattened);
    }

    const TensorShape flatten_shape_scores(1, total_num_anchors);
    _scores_flattened.allocator()->init(TensorInfo(flatten_shape_scores, 1, data_type));

    // Permute and reshape scores
    if(!_is_nhwc)
    {
        _memory_group.manage(&_scores_permuted);
        _memory_group.manage(&_scores_flattened);
        _permute_scores_kernel.configure(scores, &_scores_permuted, PermutationVector{ 2, 0, 1 });
        _flatten_scores_kernel.configure(&_scores_permuted, &_scores_flattened);
        _scores_permuted.allocator()->allocate();
    }
    else
    {
        _memory_group.manage(&_scores_flattened);
        _flatten_scores_kernel.configure(scores, &_scores_flattened);
    }

    // Bounding box transform
    _memory_group.manage(&_all_proposals);
    BoundingBoxTransformInfo bbox_info(info.im_width(), info.im_height(), 1.f);
    _bounding_box_kernel.configure(&_all_anchors, &_all_proposals, &_deltas_flattened, bbox_info);
    _deltas_flattened.allocator()->allocate();
    _all_anchors.allocator()->allocate();

    // The original layer implementation first selects the best pre_nms_topN anchors (thus having a lightweight sort)
    // that are then transformed by bbox_transform. The boxes generated are then fed into a non-sorting NMS operation.
    // Since we are reusing the NMS layer and we don't implement any CL sort, we let NMS do the sorting (of all the input)
    // and the filtering
    const int   scores_nms_size = std::min<int>(std::min<int>(post_nms_topN, pre_nms_topN), total_num_anchors);
    const float min_size_scaled = info.min_size() * info.im_scale();
    _memory_group.manage(&_classes_nms_unused);
    _memory_group.manage(&_keeps_nms_unused);

    // Note that NMS needs outputs preinitialized.
    auto_init_if_empty(*scores_out->info(), TensorShape(scores_nms_size), 1, data_type);
    auto_init_if_empty(*_proposals_4_roi_values.info(), TensorShape(values_per_roi, scores_nms_size), 1, data_type);
    auto_init_if_empty(*num_valid_proposals->info(), TensorShape(1), 1, DataType::U32);

    // Initialize temporary (unused) outputs
    _classes_nms_unused.allocator()->init(TensorInfo(TensorShape(1, 1), 1, data_type));
    _keeps_nms_unused.allocator()->init(*scores_out->info());

    // Save the outputs (to map and unmap them at run)
    _scores_out          = scores_out;
    _num_valid_proposals = num_valid_proposals;

    _memory_group.manage(&_proposals_4_roi_values);
    _cpp_nms_kernel.configure(&_scores_flattened, &_all_proposals, nullptr, scores_out, &_proposals_4_roi_values, &_classes_nms_unused, nullptr, &_keeps_nms_unused, num_valid_proposals,
                              BoxNMSLimitInfo(0.0f, info.nms_thres(), scores_nms_size, false, NMSType::LINEAR, 0.5f, 0.001f, true, min_size_scaled, info.im_width(), info.im_height()));
    _keeps_nms_unused.allocator()->allocate();
    _classes_nms_unused.allocator()->allocate();
    _all_proposals.allocator()->allocate();
    _scores_flattened.allocator()->allocate();

    // Add the first column that represents the batch id. This will be all zeros, as we don't support multiple images
    _padded_copy_kernel.configure(&_proposals_4_roi_values, proposals, PaddingList{ { 1, 0 } });
    _proposals_4_roi_values.allocator()->allocate();

    _memset_kernel.configure(proposals, PixelValue());
}

References CLTensorAllocator::allocate(), CLTensor::allocator(), ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), arm_compute::CHANNEL, CLReshapeLayerKernel::configure(), CLCopyKernel::configure(), CLComputeAllAnchorsKernel::configure(), CLPermuteKernel::configure(), CLMemsetKernel::configure(), CLBoundingBoxTransformKernel::configure(), CPPBoxWithNonMaximaSuppressionLimitKernel::configure(), ITensorInfo::data_layout(), arm_compute::test::validation::data_type, ITensorInfo::data_type(), ITensorInfo::dimension(), arm_compute::get_data_layout_dimension_index(), arm_compute::HEIGHT, ITensor::info(), CLTensor::info(), arm_compute::test::validation::info, ITensorAllocator::init(), arm_compute::LINEAR, MemoryGroupBase< TensorType >::manage(), arm_compute::NHWC, arm_compute::U32, CLGenerateProposalsLayer::validate(), and arm_compute::WIDTH.

◆ operator=() [1/2]

CLGenerateProposalsLayer& operator= ( const CLGenerateProposalsLayer & )
delete

Prevent instances of this class from being copied (As this class contains pointers)

◆ operator=() [2/2]

Default move assignment operator.

◆ run()

void run ( )
override

Run the kernels contained in the function.

For NEON kernels:

  • Multi-threading is used for the kernels which are parallelisable.
  • By default std::thread::hardware_concurrency() threads are used.
Note
CPPScheduler::set_num_threads() can be used to manually set the number of threads

For OpenCL kernels:

  • All the kernels are enqueued on the queue associated with CLScheduler.
  • The queue is then flushed.
Note
The function will not block until the kernels are executed. It is the user's responsibility to wait.
Will call prepare() on first run if it hasn't been done.

Implements IFunction.

Definition at line 256 of file CLGenerateProposalsLayer.cpp.

{
    // Acquire all the temporaries
    MemoryGroupResourceScope scope_mg(_memory_group);

    // Compute all the anchors
    CLScheduler::get().enqueue(_compute_anchors_kernel, false);

    // Transpose and reshape the inputs
    if(!_is_nhwc)
    {
        CLScheduler::get().enqueue(_permute_deltas_kernel, false);
        CLScheduler::get().enqueue(_permute_scores_kernel, false);
    }
    CLScheduler::get().enqueue(_flatten_deltas_kernel, false);
    CLScheduler::get().enqueue(_flatten_scores_kernel, false);

    // Build the boxes
    CLScheduler::get().enqueue(_bounding_box_kernel, false);
    // Non maxima suppression
    run_cpp_nms_kernel();
    // Add dummy batch indexes
    CLScheduler::get().enqueue(_memset_kernel, true);
    CLScheduler::get().enqueue(_padded_copy_kernel, true);
}

References CLScheduler::enqueue(), and CLScheduler::get().

◆ validate()

Status validate ( const ITensorInfo * scores,
const ITensorInfo * deltas,
const ITensorInfo * anchors,
const ITensorInfo * proposals,
const ITensorInfo * scores_out,
const ITensorInfo * num_valid_proposals,
const GenerateProposalsInfo & info 
)
static

Static function to check if given info will lead to a valid configuration of CLGenerateProposalsLayer.

Parameters
[in] scores               Scores info from convolution layer of size (W, H, A), where H and W are the height and width of the feature map, and A is the number of anchors. Data types supported: F16/F32
[in] deltas               Bounding box deltas info from convolution layer of size (W, H, 4*A). Data types supported: Same as scores
[in] anchors              Anchors tensor info of size (4, A). Data types supported: Same as input
[in] proposals            Box proposals output tensor info of size (5, W*H*A). Data types supported: Same as input
[in] scores_out           Box scores output tensor info of size (W*H*A). Data types supported: Same as input
[in] num_valid_proposals  Scalar output tensor info indicating how many of the first proposals are valid. Data types supported: U32
[in] info                 Contains GenerateProposals operation information described in GenerateProposalsInfo
Returns
a Status

Definition at line 158 of file CLGenerateProposalsLayer.cpp.

{
    ARM_COMPUTE_RETURN_ERROR_ON_NULLPTR(scores, deltas, anchors, proposals, scores_out, num_valid_proposals);

    const int num_anchors       = scores->dimension(get_data_layout_dimension_index(scores->data_layout(), DataLayoutDimension::CHANNEL));
    const int feat_width        = scores->dimension(get_data_layout_dimension_index(scores->data_layout(), DataLayoutDimension::WIDTH));
    const int feat_height       = scores->dimension(get_data_layout_dimension_index(scores->data_layout(), DataLayoutDimension::HEIGHT));
    const int num_images        = scores->dimension(3);
    const int total_num_anchors = num_anchors * feat_width * feat_height;
    const int values_per_roi    = info.values_per_roi();

    ARM_COMPUTE_RETURN_ERROR_ON(num_images > 1);

    TensorInfo all_anchors_info(anchors->clone()->set_tensor_shape(TensorShape(values_per_roi, total_num_anchors)).set_is_resizable(true));
    ARM_COMPUTE_RETURN_ON_ERROR(CLComputeAllAnchorsKernel::validate(anchors, &all_anchors_info, ComputeAnchorsInfo(feat_width, feat_height, info.spatial_scale())));

    TensorInfo deltas_permuted_info = deltas->clone()->set_tensor_shape(TensorShape(values_per_roi * num_anchors, feat_width, feat_height)).set_is_resizable(true);
    TensorInfo scores_permuted_info = scores->clone()->set_tensor_shape(TensorShape(num_anchors, feat_width, feat_height)).set_is_resizable(true);
    if(scores->data_layout() == DataLayout::NHWC)
    {
        ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_SHAPES(deltas, &deltas_permuted_info);
        ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_SHAPES(scores, &scores_permuted_info);
    }
    else
    {
        ARM_COMPUTE_RETURN_ON_ERROR(CLPermuteKernel::validate(deltas, &deltas_permuted_info, PermutationVector{ 2, 0, 1 }));
        ARM_COMPUTE_RETURN_ON_ERROR(CLPermuteKernel::validate(scores, &scores_permuted_info, PermutationVector{ 2, 0, 1 }));
    }

    TensorInfo deltas_flattened_info(deltas->clone()->set_tensor_shape(TensorShape(values_per_roi, total_num_anchors)).set_is_resizable(true));
    ARM_COMPUTE_RETURN_ON_ERROR(CLReshapeLayerKernel::validate(&deltas_permuted_info, &deltas_flattened_info));

    TensorInfo scores_flattened_info(deltas->clone()->set_tensor_shape(TensorShape(1, total_num_anchors)).set_is_resizable(true));
    TensorInfo proposals_4_roi_values(deltas->clone()->set_tensor_shape(TensorShape(values_per_roi, total_num_anchors)).set_is_resizable(true));

    ARM_COMPUTE_RETURN_ON_ERROR(CLReshapeLayerKernel::validate(&scores_permuted_info, &scores_flattened_info));
    ARM_COMPUTE_RETURN_ON_ERROR(CLBoundingBoxTransformKernel::validate(&all_anchors_info, &proposals_4_roi_values, &deltas_flattened_info, BoundingBoxTransformInfo(info.im_width(), info.im_height(),
                                                                                                                                                                    1.f)));

    ARM_COMPUTE_RETURN_ON_ERROR(CLCopyKernel::validate(&proposals_4_roi_values, proposals, PaddingList{ { 0, 1 } }));
    ARM_COMPUTE_RETURN_ON_ERROR(CLMemsetKernel::validate(proposals, PixelValue()));

    if(num_valid_proposals->total_size() > 0)
    {
        ARM_COMPUTE_RETURN_ERROR_ON(num_valid_proposals->num_dimensions() > 1);
        ARM_COMPUTE_RETURN_ERROR_ON(num_valid_proposals->dimension(0) > 1);
    }

    if(proposals->total_size() > 0)
    {
        ARM_COMPUTE_RETURN_ERROR_ON(proposals->num_dimensions() > 2);
        ARM_COMPUTE_RETURN_ERROR_ON(proposals->dimension(0) != size_t(values_per_roi) + 1);
        ARM_COMPUTE_RETURN_ERROR_ON(proposals->dimension(1) != size_t(total_num_anchors));
    }

    if(scores_out->total_size() > 0)
    {
        ARM_COMPUTE_RETURN_ERROR_ON(scores_out->num_dimensions() > 1);
        ARM_COMPUTE_RETURN_ERROR_ON(scores_out->dimension(0) != size_t(total_num_anchors));
    }

    return Status{};
}

References ARM_COMPUTE_RETURN_ERROR_ON, ARM_COMPUTE_RETURN_ERROR_ON_DATA_LAYOUT_NOT_IN, ARM_COMPUTE_RETURN_ERROR_ON_DATA_TYPE_CHANNEL_NOT_IN, ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_DATA_LAYOUT, ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_DATA_TYPES, ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_SHAPES, ARM_COMPUTE_RETURN_ERROR_ON_NULLPTR, ARM_COMPUTE_RETURN_ON_ERROR, arm_compute::CHANNEL, ICloneable< T >::clone(), ITensorInfo::data_layout(), ITensorInfo::dimension(), arm_compute::get_data_layout_dimension_index(), arm_compute::HEIGHT, arm_compute::test::validation::info, arm_compute::NCHW, arm_compute::NHWC, ITensorInfo::num_dimensions(), ITensorInfo::total_size(), arm_compute::U32, CLReshapeLayerKernel::validate(), CLCopyKernel::validate(), CLPermuteKernel::validate(), CLComputeAllAnchorsKernel::validate(), CLMemsetKernel::validate(), CLBoundingBoxTransformKernel::validate(), and arm_compute::WIDTH.

Referenced by CLGenerateProposalsLayer::configure().


The documentation for this class was generated from the following files:

CLGenerateProposalsLayer.h
CLGenerateProposalsLayer.cpp