Compute Library
 23.08
ClMulKernel Class Reference

Interface for the pixelwise multiplication kernel. More...

#include <ClMulKernel.h>


Public Member Functions

 ClMulKernel ()
 
 ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE (ClMulKernel)
 
void configure (const CLCompileContext &compile_context, ITensorInfo *src1, ITensorInfo *src2, ITensorInfo *dst, float scale, ConvertPolicy overflow_policy, RoundingPolicy rounding_policy, const ActivationLayerInfo &act_info=ActivationLayerInfo())
 Initialise the kernel's src and dst. More...
 
void run_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue) override
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
- Public Member Functions inherited from ICLKernel
 ICLKernel ()
 Constructor. More...
 
cl::Kernel & kernel ()
 Returns a reference to the OpenCL kernel of this object. More...
 
CLKernelType type () const
 Returns the CL kernel type. More...
 
template<typename T >
void add_1D_array_argument (unsigned int &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed 1D array's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_2D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_2D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_3D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 3D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_4D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 4D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_5D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 5D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_3d_tensor_nhw_argument (unsigned int &idx, const ICLTensor *tensor)
 Add the passed NHW 3D tensor's parameters to the object's kernel's arguments by passing strides, dimensions and the offset to the first valid element in bytes. More...
 
void add_4d_tensor_nhwc_argument (unsigned int &idx, const ICLTensor *tensor)
 Add the passed NHWC 4D tensor's parameters to the object's kernel's arguments by passing strides, dimensions and the offset to the first valid element in bytes. More...
 
virtual void run (const Window &window, cl::CommandQueue &queue)
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
template<typename T >
void add_argument (unsigned int &idx, T value)
 Add the passed parameters to the object's kernel's arguments starting from the index idx. More...
 
void set_lws_hint (const cl::NDRange &lws_hint)
 Set the Local-Workgroup-Size hint. More...
 
cl::NDRange lws_hint () const
 Return the Local-Workgroup-Size hint. More...
 
void set_wbsm_hint (const cl_int &wbsm_hint)
 Set the workgroup batch size modifier hint. More...
 
cl_int wbsm_hint () const
 Return the workgroup batch size modifier hint. More...
 
const std::string & config_id () const
 Get the configuration ID. More...
 
void set_target (GPUTarget target)
 Set the targeted GPU architecture. More...
 
void set_target (cl::Device &device)
 Set the targeted GPU architecture according to the CL device. More...
 
GPUTarget get_target () const
 Get the targeted GPU architecture. More...
 
size_t get_max_workgroup_size ()
 Get the maximum workgroup size for the device the CLKernelLibrary uses. More...
 
cl::NDRange get_cached_gws () const
 Get the cached gws used to enqueue this kernel. More...
 
void cache_gws (const cl::NDRange &gws)
 Cache the latest gws used to enqueue this kernel. More...
 
template<unsigned int dimension_size>
void add_tensor_argument (unsigned &idx, const ICLTensor *tensor, const Window &window)
 
template<typename T , unsigned int dimension_size>
void add_array_argument (unsigned &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed array's parameters to the object's kernel's arguments starting from the index idx. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 
bool is_window_configured () const
 Function to check if the embedded window of this kernel has been configured. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *src1, const ITensorInfo *src2, const ITensorInfo *dst, float scale, ConvertPolicy overflow_policy, RoundingPolicy rounding_policy, const ActivationLayerInfo &act_info=ActivationLayerInfo())
 Static function to check if given info will lead to a valid configuration. More...
 
- Static Public Member Functions inherited from ICLKernel
constexpr static unsigned int num_arguments_per_3d_tensor_nhw ()
 Returns the number of arguments enqueued per NHW 3D Tensor object. More...
 
constexpr static unsigned int num_arguments_per_4d_tensor_nhwc ()
 Returns the number of arguments enqueued per NHWC 4D Tensor object. More...
 
constexpr static unsigned int num_arguments_per_1D_array ()
 Returns the number of arguments enqueued per 1D array object. More...
 
constexpr static unsigned int num_arguments_per_1D_tensor ()
 Returns the number of arguments enqueued per 1D tensor object. More...
 
constexpr static unsigned int num_arguments_per_2D_tensor ()
 Returns the number of arguments enqueued per 2D tensor object. More...
 
constexpr static unsigned int num_arguments_per_3D_tensor ()
 Returns the number of arguments enqueued per 3D tensor object. More...
 
constexpr static unsigned int num_arguments_per_4D_tensor ()
 Returns the number of arguments enqueued per 4D tensor object. More...
 
static cl::NDRange gws_from_window (const Window &window, bool use_dummy_work_items)
 Get the global work size given an execution window. More...
 

Detailed Description

Interface for the pixelwise multiplication kernel.

For binary elementwise ops, in-place computation cannot be enabled by passing nullptr to dst; it can only be enabled by passing either src1 or src2 as dst instead.

Definition at line 43 of file ClMulKernel.h.
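
As an illustration of the in-place rule above, a minimal sketch (not taken from the library documentation; the tensor infos and compile_context are assumed to be set up elsewhere):

    ClMulKernel mul_out_of_place;
    // Out-of-place: dst must be a valid tensor info, never nullptr.
    mul_out_of_place.configure(compile_context, &src1_info, &src2_info, &dst_info,
                               1.f, ConvertPolicy::SATURATE, RoundingPolicy::TO_ZERO);

    ClMulKernel mul_in_place;
    // In-place: reuse one of the inputs (here src1) as the destination.
    mul_in_place.configure(compile_context, &src1_info, &src2_info, &src1_info,
                           1.f, ConvertPolicy::SATURATE, RoundingPolicy::TO_ZERO);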

Constructor & Destructor Documentation

◆ ClMulKernel()

Definition at line 112 of file ClMulKernel.cpp.

ClMulKernel::ClMulKernel()
{
    _type = CLKernelType::ELEMENTWISE;
}

References arm_compute::ELEMENTWISE.

Member Function Documentation

◆ ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE()

ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE ( ClMulKernel  )

◆ configure()

void configure (const CLCompileContext &compile_context,
                ITensorInfo *src1,
                ITensorInfo *src2,
                ITensorInfo *dst,
                float scale,
                ConvertPolicy overflow_policy,
                RoundingPolicy rounding_policy,
                const ActivationLayerInfo &act_info = ActivationLayerInfo())

Initialise the kernel's src and dst.

Valid configurations (Input1,Input2) -> Output :

  • (U8,U8) -> U8
  • (U8,U8) -> S16
  • (U8,S16) -> S16
  • (S16,U8) -> S16
  • (S16,S16) -> S16
  • (S32,S32) -> S32
  • (F16,F16) -> F16
  • (F32,F32) -> F32
  • (QASYMM8,QASYMM8) -> QASYMM8
  • (QASYMM8_SIGNED,QASYMM8_SIGNED) -> QASYMM8_SIGNED
  • (QSYMM16,QSYMM16) -> QSYMM16
  • (QSYMM16,QSYMM16) -> S32
Parameters
    [in]  compile_context  The compile context to be used.
    [in]  src1             An src tensor info. Data types supported: U8/QASYMM8/QASYMM8_SIGNED/S16/QSYMM16/F16/F32/S32
    [in]  src2             An src tensor info. Data types supported: U8/QASYMM8/QASYMM8_SIGNED/S16/QSYMM16/F16/F32/S32
    [out] dst              The dst tensor info. Data types supported: U8/QASYMM8/QASYMM8_SIGNED/S16/QSYMM16/F16/F32/S32
    [in]  scale            Scale to apply after multiplication. Scale must be positive and its value must be either 1/255 or 1/2^n where n is between 0 and 15.
    [in]  overflow_policy  Overflow policy. Supported overflow policies: Wrap, Saturate
    [in]  rounding_policy  Rounding policy. Supported rounding modes: to zero, to nearest even.
    [in]  act_info         (Optional) Activation layer information in case of a fused activation.
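
A hedged usage sketch of configure() (the shapes, data types and the compile_context are illustrative assumptions, not taken from the documentation); note that 0.5f satisfies the 1/2^n constraint on scale:

    TensorInfo src1_info(TensorShape(32U, 16U), 1, DataType::F32);
    TensorInfo src2_info(TensorShape(32U, 16U), 1, DataType::F32);
    TensorInfo dst_info(TensorShape(32U, 16U), 1, DataType::F32);

    ClMulKernel mul;
    mul.configure(compile_context, &src1_info, &src2_info, &dst_info,
                  0.5f /* 1/2^1 */, ConvertPolicy::SATURATE, RoundingPolicy::TO_NEAREST_EVEN,
                  ActivationLayerInfo(ActivationLayerInfo::ActivationFunction::RELU));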

Definition at line 117 of file ClMulKernel.cpp.

{
    ARM_COMPUTE_ERROR_ON_NULLPTR(src1, src2, dst);
    ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(src1, src2, dst,
                                                  scale, overflow_policy, rounding_policy, act_info));

    auto padding_info = get_padding_info({ src1, src2, dst });

    const TensorShape &out_shape = TensorShape::broadcast_shape(src1->tensor_shape(), src2->tensor_shape());
    auto_init_if_empty(*dst, src1->clone()->set_tensor_shape(out_shape));

    int scale_int = -1;
    // Extract sign, exponent and mantissa
    int   exponent            = 0;
    float normalized_mantissa = std::frexp(scale, &exponent);
    // Use int scaling if factor is equal to 1/2^n for 0 <= n <= 15
    // frexp returns 0.5 as mantissa which means that the exponent will be in the range of -1 <= e <= 14
    // Moreover, it will be negative as we deal with 1/2^n
    if((normalized_mantissa == 0.5f) && (-14 <= exponent) && (exponent <= 1))
    {
        // Store the positive exponent. We know that we compute 1/2^n
        // Additionally we need to subtract 1 to compensate that frexp used a mantissa of 0.5
        scale_int = std::abs(exponent - 1);
    }

    std::string acc_type;
    // Check if it has float src and dst
    if(is_data_type_float(src1->data_type()) || is_data_type_float(src2->data_type()))
    {
        scale_int = -1;
        acc_type  = (src1->data_type() == DataType::F32 || src2->data_type() == DataType::F32) ? "float" : "half";
    }
    else
    {
        if(src1->element_size() == 4 || src2->element_size() == 4)
        {
            // Use 64-bit accumulator for 32-bit input
            acc_type = "long";
        }
        else if(src1->element_size() == 2 || src2->element_size() == 2)
        {
            // Use 32-bit accumulator for 16-bit input
            acc_type = "int";
        }
        else
        {
            // Use 16-bit accumulator for 8-bit input
            acc_type = "ushort";
        }
    }

    const bool         is_quantized      = is_data_type_quantized(src1->data_type());
    const unsigned int vec_size          = adjust_vec_size(16 / dst->element_size(), dst->dimension(0));
    const unsigned int vec_size_leftover = dst->dimension(0) % vec_size;

    // Set kernel build options
    std::string    kernel_name = "pixelwise_mul";
    CLBuildOptions build_opts;
    build_opts.add_option("-DDATA_TYPE_IN1=" + get_cl_type_from_data_type(src1->data_type()));
    build_opts.add_option("-DDATA_TYPE_IN2=" + get_cl_type_from_data_type(src2->data_type()));
    build_opts.add_option("-DDATA_TYPE_OUT=" + get_cl_type_from_data_type(dst->data_type()));
    build_opts.add_option("-DVEC_SIZE_IN1=" + ((dst->dimension(0) != 1 && src1->dimension(0) == 1) ? "1" : support::cpp11::to_string(vec_size)));
    build_opts.add_option("-DVEC_SIZE_IN2=" + ((dst->dimension(0) != 1 && src2->dimension(0) == 1) ? "1" : support::cpp11::to_string(vec_size)));
    build_opts.add_option("-DVEC_SIZE_OUT=" + support::cpp11::to_string(vec_size));
    build_opts.add_option("-DVEC_SIZE_LEFTOVER=" + support::cpp11::to_string(vec_size_leftover));
    if(is_quantized && (dst->data_type() != DataType::S32))
    {
        const UniformQuantizationInfo iq1_info = src1->quantization_info().uniform();
        const UniformQuantizationInfo iq2_info = src2->quantization_info().uniform();
        const UniformQuantizationInfo oq_info  = dst->quantization_info().uniform();

        build_opts.add_option_if(is_data_type_quantized_asymmetric(src1->data_type()),
                                 "-DOFFSET_IN1=" + support::cpp11::to_string(iq1_info.offset));
        build_opts.add_option_if(is_data_type_quantized_asymmetric(src2->data_type()),
                                 "-DOFFSET_IN2=" + support::cpp11::to_string(iq2_info.offset));
        build_opts.add_option_if(is_data_type_quantized_asymmetric(dst->data_type()),
                                 "-DOFFSET_OUT=" + support::cpp11::to_string(oq_info.offset));
        build_opts.add_option("-DSCALE_IN1=" + float_to_string_with_full_precision(iq1_info.scale));
        build_opts.add_option("-DSCALE_IN2=" + float_to_string_with_full_precision(iq2_info.scale));
        build_opts.add_option("-DSCALE_OUT=" + float_to_string_with_full_precision(oq_info.scale));
        kernel_name += "_quantized";
    }
    else
    {
        kernel_name += (scale_int >= 0) ? "_int" : "_float";
        build_opts.add_option_if_else(overflow_policy == ConvertPolicy::WRAP || is_data_type_float(dst->data_type()), "-DWRAP", "-DSATURATE");
        build_opts.add_option_if_else(rounding_policy == RoundingPolicy::TO_ZERO, "-DROUND=_rtz", "-DROUND=_rte");
        build_opts.add_option("-DACC_DATA_TYPE=" + acc_type);
        if(act_info.enabled())
        {
            build_opts.add_option("-DACTIVATION_TYPE=" + lower_string(string_from_activation_func(act_info.activation())));
            build_opts.add_option("-DA_VAL=" + float_to_string_with_full_precision(act_info.a()));
            build_opts.add_option("-DB_VAL=" + float_to_string_with_full_precision(act_info.b()));
        }
    }

    // Check whether it is in_place calculation
    const bool in_place      = (src1 == dst) || (src2 == dst);
    const bool src1_in_place = in_place && (src1 == dst);
    build_opts.add_option_if(in_place, "-DIN_PLACE");
    build_opts.add_option_if(src1_in_place, "-DSRC1_IN_PLACE");

    // Create kernel
    _kernel = create_kernel(compile_context, kernel_name, build_opts.options());

    // Set scale argument
    unsigned int idx = (in_place ? 2 : 3) * num_arguments_per_3D_tensor(); // Skip the src and dst parameters

    if(scale_int >= 0 && !is_quantized)
    {
        _kernel.setArg(idx++, scale_int);
    }
    else
    {
        _kernel.setArg(idx++, scale);
    }

    Window win = calculate_max_window(*dst, Steps(vec_size));
    ICLKernel::configure_internal(win);

    ARM_COMPUTE_ERROR_ON(has_padding_changed(padding_info));

    // Set config_id for enabling LWS tuning
    _config_id = kernel_name;
    _config_id += "_";
    _config_id += lower_string(string_from_data_type(dst->data_type()));
    _config_id += "_";
    _config_id += support::cpp11::to_string(src1->dimension(0));
    _config_id += "_";
    _config_id += support::cpp11::to_string(src1->dimension(1));
    _config_id += "_";
    _config_id += support::cpp11::to_string(src1->dimension(2));
    _config_id += "_";
    _config_id += support::cpp11::to_string(src2->dimension(0));
    _config_id += "_";
    _config_id += support::cpp11::to_string(src2->dimension(1));
    _config_id += "_";
    _config_id += support::cpp11::to_string(src2->dimension(2));
    _config_id += "_";
    _config_id += support::cpp11::to_string(dst->dimension(0));
    _config_id += "_";
    _config_id += support::cpp11::to_string(dst->dimension(1));
    _config_id += "_";
    _config_id += support::cpp11::to_string(dst->dimension(2));
}
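
To make the integer-scale path above concrete, here is a small illustrative helper (my own example, not part of the library; the function name extract_scale_exponent is hypothetical). For scale = 1/8, std::frexp yields a mantissa of 0.5 and an exponent of -2, so the result is |(-2) - 1| = 3 and the "_int" kernel variant is selected with an effective factor of 1/2^3:

    #include <cmath>   // std::frexp
    #include <cstdlib> // std::abs

    // Returns the positive exponent n if scale == 1/2^n with 0 <= n <= 15, otherwise -1.
    int extract_scale_exponent(float scale)
    {
        int   exponent = 0;
        float mantissa = std::frexp(scale, &exponent);
        return (mantissa == 0.5f && -14 <= exponent && exponent <= 1) ? std::abs(exponent - 1) : -1;
    }
    // extract_scale_exponent(0.125f) == 3 -> "_int" kernel; 1.f/255 gives -1 -> "_float" kernel.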

References arm_compute::test::validation::act_info, CLBuildOptions::add_option(), CLBuildOptions::add_option_if(), CLBuildOptions::add_option_if_else(), arm_compute::adjust_vec_size(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), TensorShape::broadcast_shape(), arm_compute::calculate_max_window(), ICloneable< T >::clone(), arm_compute::create_kernel(), ITensorInfo::data_type(), ITensorInfo::dimension(), arm_compute::test::validation::dst, ITensorInfo::element_size(), arm_compute::F32, arm_compute::float_to_string_with_full_precision(), arm_compute::get_cl_type_from_data_type(), arm_compute::get_padding_info(), arm_compute::has_padding_changed(), arm_compute::is_data_type_float(), arm_compute::is_data_type_quantized(), arm_compute::is_data_type_quantized_asymmetric(), kernel_name, arm_compute::lower_string(), ICLKernel::num_arguments_per_3D_tensor(), UniformQuantizationInfo::offset, CLBuildOptions::options(), ITensorInfo::quantization_info(), arm_compute::S32, UniformQuantizationInfo::scale, arm_compute::test::validation::scale, arm_compute::string_from_activation_func(), arm_compute::string_from_data_type(), ITensorInfo::tensor_shape(), arm_compute::support::cpp11::to_string(), arm_compute::TO_ZERO, QuantizationInfo::uniform(), arm_compute::cpu::kernels::validate_arguments(), and arm_compute::WRAP.

◆ run_op()

void run_op (ITensorPack &tensors,
             const Window &window,
             cl::CommandQueue &queue)  [override], [virtual]

Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.

Note
The queue is not flushed by this method, and therefore the kernel will not have been executed by the time this method returns.
Parameters
    [in]     tensors  A vector containing the tensors to operate on.
    [in]     window   Region on which to execute the kernel. (Must be a valid region of the window returned by window()).
    [in,out] queue    Command queue on which to enqueue the kernel.
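
A hedged sketch of how the tensor pack is typically assembled for run_op() (the ICLTensor objects src0, src1, dst, the configured kernel mul and the command queue are assumed to exist; this is not taken from the documentation):

    ITensorPack pack;
    pack.add_const_tensor(TensorType::ACL_SRC_0, &src0);
    pack.add_const_tensor(TensorType::ACL_SRC_1, &src1);
    pack.add_tensor(TensorType::ACL_DST, &dst);

    // Enqueue over the kernel's maximum window; note that the queue is not flushed here.
    mul.run_op(pack, mul.window(), queue);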

Reimplemented from ICLKernel.

Definition at line 273 of file ClMulKernel.cpp.

{
    ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
    ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(ICLKernel::window(), window);

    const auto src_0 = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_SRC_0));
    const auto src_1 = utils::cast::polymorphic_downcast<const ICLTensor *>(tensors.get_const_tensor(TensorType::ACL_SRC_1));
    auto       dst   = utils::cast::polymorphic_downcast<ICLTensor *>(tensors.get_tensor(TensorType::ACL_DST));

    ARM_COMPUTE_ERROR_ON_NULLPTR(src_0, src_1, dst);

    const TensorShape &in_shape1 = src_0->info()->tensor_shape();
    const TensorShape &in_shape2 = src_1->info()->tensor_shape();
    const TensorShape &out_shape = dst->info()->tensor_shape();

    bool can_collapse = true;
    if(std::min(in_shape1.total_size(), in_shape2.total_size()) > 1)
    {
        can_collapse = (std::min(in_shape1.num_dimensions(), in_shape2.num_dimensions()) > Window::DimZ);
        for(size_t d = Window::DimZ; can_collapse && (d < out_shape.num_dimensions()); ++d)
        {
            can_collapse = (in_shape1[d] == in_shape2[d]);
        }
    }

    bool   has_collapsed = false;
    Window collapsed     = can_collapse ? window.collapse_if_possible(ICLKernel::window(), Window::DimZ, &has_collapsed) : window;

    const TensorShape &in_shape1_collapsed = has_collapsed ? in_shape1.collapsed_from(Window::DimZ) : in_shape1;
    const TensorShape &in_shape2_collapsed = has_collapsed ? in_shape2.collapsed_from(Window::DimZ) : in_shape2;

    Window slice        = collapsed.first_slice_window_3D();
    Window slice_input1 = slice.broadcast_if_dimension_le_one(in_shape1_collapsed);
    Window slice_input2 = slice.broadcast_if_dimension_le_one(in_shape2_collapsed);

    // Check whether it is in_place calculation
    const bool in_place = (src_0 == dst) || (src_1 == dst);
    do
    {
        unsigned int idx = 0;
        add_3D_tensor_argument(idx, src_0, slice_input1);
        add_3D_tensor_argument(idx, src_1, slice_input2);
        if(!in_place)
        {
            add_3D_tensor_argument(idx, dst, slice);
        }
        enqueue(queue, *this, slice, lws_hint());

        ARM_COMPUTE_UNUSED(collapsed.slide_window_slice_3D(slice_input1));
        ARM_COMPUTE_UNUSED(collapsed.slide_window_slice_3D(slice_input2));
    }
    while(collapsed.slide_window_slice_3D(slice));
}

References arm_compute::ACL_DST, arm_compute::ACL_SRC_0, arm_compute::ACL_SRC_1, ICLKernel::add_3D_tensor_argument(), ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, ARM_COMPUTE_UNUSED, Window::collapse_if_possible(), TensorShape::collapsed_from(), Window::DimZ, arm_compute::test::validation::dst, arm_compute::enqueue(), Window::first_slice_window_3D(), ITensorPack::get_const_tensor(), ITensorPack::get_tensor(), ICLKernel::lws_hint(), Dimensions< T >::num_dimensions(), arm_compute::test::validation::reference::slice(), Window::slide_window_slice_3D(), TensorShape::total_size(), and IKernel::window().

◆ validate()

static Status validate (const ITensorInfo *src1,
                        const ITensorInfo *src2,
                        const ITensorInfo *dst,
                        float scale,
                        ConvertPolicy overflow_policy,
                        RoundingPolicy rounding_policy,
                        const ActivationLayerInfo &act_info = ActivationLayerInfo())

Static function to check if given info will lead to a valid configuration.

Similar to ClMulKernel::configure()

Returns
a status
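
A hedged sketch of calling validate() before configuring (the tensor infos are assumed to exist; inspecting a failing Status via error_description() is shown only as an illustration):

    Status status = ClMulKernel::validate(&src1_info, &src2_info, &dst_info,
                                          0.5f, ConvertPolicy::SATURATE, RoundingPolicy::TO_ZERO);
    if(!status)
    {
        // The configuration is not supported; status.error_description() explains why.
    }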

Definition at line 264 of file ClMulKernel.cpp.

{
    ARM_COMPUTE_ERROR_ON_NULLPTR(src1, src2, dst);
    ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(src1, src2, dst, scale, overflow_policy, rounding_policy, act_info));

    return Status{};
}

References arm_compute::test::validation::act_info, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_RETURN_ON_ERROR, arm_compute::test::validation::dst, arm_compute::test::validation::scale, and arm_compute::cpu::kernels::validate_arguments().

Referenced by ClMul::validate().


The documentation for this class was generated from the following files:

    ClMulKernel.h
    ClMulKernel.cpp