Compute Library
 21.02
CLLogits1DMaxShiftExpSumKernel Class Reference

Interface for max, shifting, exponentiating and summing the logits. More...

#include <CLSoftmaxLayerKernel.h>

Collaboration diagram for CLLogits1DMaxShiftExpSumKernel:

Public Types

using ParallelReductionInfo = std::tuple< bool, unsigned int >
 Info for whether a parallel reduction will be run and the vector size of the execution. More...
 

Public Member Functions

 CLLogits1DMaxShiftExpSumKernel ()
 Default constructor. More...
 
 CLLogits1DMaxShiftExpSumKernel (const CLLogits1DMaxShiftExpSumKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
CLLogits1DMaxShiftExpSumKernel & operator= (const CLLogits1DMaxShiftExpSumKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 CLLogits1DMaxShiftExpSumKernel (CLLogits1DMaxShiftExpSumKernel &&)=default
 Allow instances of this class to be moved. More...
 
CLLogits1DMaxShiftExpSumKernel & operator= (CLLogits1DMaxShiftExpSumKernel &&)=default
 Allow instances of this class to be moved. More...
 
void configure (const ICLTensor *input, ICLTensor *max, ICLTensor *output, ICLTensor *sum, const SoftmaxKernelInfo &info)
 Set the input and output tensors. More...
 
void configure (const CLCompileContext &compile_context, const ICLTensor *input, ICLTensor *max, ICLTensor *output, ICLTensor *sum, const SoftmaxKernelInfo &info)
 Set the input and output tensors. More...
 
void run (const Window &window, cl::CommandQueue &queue) override
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
- Public Member Functions inherited from ICLKernel
 ICLKernel ()
 Constructor. More...
 
cl::Kernel & kernel ()
 Returns a reference to the OpenCL kernel of this object. More...
 
template<typename T >
void add_1D_array_argument (unsigned int &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed 1D array's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_2D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_2D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_3D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 3D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_4D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 4D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
virtual void run_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue)
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
template<typename T >
void add_argument (unsigned int &idx, T value)
 Add the passed parameters to the object's kernel's arguments starting from the index idx. More...
 
void set_lws_hint (const cl::NDRange &lws_hint)
 Set the Local-Workgroup-Size hint. More...
 
cl::NDRange lws_hint () const
 Return the Local-Workgroup-Size hint. More...
 
void set_wbsm_hint (const cl_int &wbsm_hint)
 Set the workgroup batch size modifier hint. More...
 
cl_int wbsm_hint () const
 Return the workgroup batch size modifier hint. More...
 
const std::string & config_id () const
 Get the configuration ID. More...
 
void set_target (GPUTarget target)
 Set the targeted GPU architecture. More...
 
void set_target (cl::Device &device)
 Set the targeted GPU architecture according to the CL device. More...
 
GPUTarget get_target () const
 Get the targeted GPU architecture. More...
 
size_t get_max_workgroup_size ()
 Get the maximum workgroup size for the device the CLKernelLibrary uses. More...
 
template<unsigned int dimension_size>
void add_tensor_argument (unsigned &idx, const ICLTensor *tensor, const Window &window)
 
template<typename T , unsigned int dimension_size>
void add_array_argument (unsigned &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed array's parameters to the object's kernel's arguments starting from the index idx. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *max, const ITensorInfo *output, const ITensorInfo *sum)
 Static function to check if given info will lead to a valid configuration of CLLogits1DMaxShiftExpSumKernel. More...
 
static ParallelReductionInfo is_parallel_reduction (size_t size)
 Checks if the given size is eligible for parallel reduction. More...
 
- Static Public Member Functions inherited from ICLKernel
static constexpr unsigned int num_arguments_per_1D_array ()
 Returns the number of arguments enqueued per 1D array object. More...
 
static constexpr unsigned int num_arguments_per_1D_tensor ()
 Returns the number of arguments enqueued per 1D tensor object. More...
 
static constexpr unsigned int num_arguments_per_2D_tensor ()
 Returns the number of arguments enqueued per 2D tensor object. More...
 
static constexpr unsigned int num_arguments_per_3D_tensor ()
 Returns the number of arguments enqueued per 3D tensor object. More...
 
static constexpr unsigned int num_arguments_per_4D_tensor ()
 Returns the number of arguments enqueued per 4D tensor object. More...
 
static cl::NDRange gws_from_window (const Window &window)
 Get the global work size given an execution window. More...
 

Detailed Description

Interface for max, shifting, exponentiating and summing the logits.

Definition at line 35 of file CLSoftmaxLayerKernel.h.

Member Typedef Documentation

◆ ParallelReductionInfo

using ParallelReductionInfo = std::tuple<bool, unsigned int>

Info for whether a parallel reduction will be run and the vector size of the execution.

Definition at line 39 of file CLSoftmaxLayerKernel.h.

Constructor & Destructor Documentation

◆ CLLogits1DMaxShiftExpSumKernel() [1/3]

Default constructor.

Definition at line 153 of file CLSoftmaxLayerKernel.cpp.

154  : _input(nullptr), _max(nullptr), _output(nullptr), _sum(nullptr)
155 {
156 }

◆ CLLogits1DMaxShiftExpSumKernel() [2/3]

Prevent instances of this class from being copied (As this class contains pointers)

◆ CLLogits1DMaxShiftExpSumKernel() [3/3]

Allow instances of this class to be moved.

Member Function Documentation

◆ configure() [1/2]

void configure ( const ICLTensor *  input,
const ICLTensor *  max,
ICLTensor *  output,
ICLTensor *  sum,
const SoftmaxKernelInfo &  info 
)

Set the input and output tensors.

Parameters
    [in]     input   Source tensor. Data types supported: QASYMM8/QASYMM8_SIGNED/F16/F32
    [in,out] max     Max values tensor. Data types supported: same as input
    [out]    output  Destination tensor. Data types supported: same as input
    [out]    sum     Sum of 1D logits tensor. Data types supported: same as input
    [in]     info    Contains information consumed by kernels for softmax described in SoftmaxKernelInfo.

Definition at line 158 of file CLSoftmaxLayerKernel.cpp.

References CLKernelLibrary::get().

159 {
160  configure(CLKernelLibrary::get().get_compile_context(), input, max, output, sum, info);
161 }

◆ configure() [2/2]

void configure ( const CLCompileContext &  compile_context,
const ICLTensor *  input,
const ICLTensor *  max,
ICLTensor *  output,
ICLTensor *  sum,
const SoftmaxKernelInfo &  info 
)

Set the input and output tensors.

Parameters
    [in]     compile_context  The compile context to be used.
    [in]     input            Source tensor. Data types supported: QASYMM8/QASYMM8_SIGNED/F16/F32
    [in,out] max              Max values tensor. Data types supported: same as input
    [out]    output           Destination tensor. Data types supported: same as input
    [out]    sum              Sum of 1D logits tensor. Data types supported: same as input
    [in]     info             Contains information consumed by kernels for softmax described in SoftmaxKernelInfo.

Definition at line 163 of file CLSoftmaxLayerKernel.cpp.

References CLBuildOptions::add_option(), CLBuildOptions::add_option_if(), CLBuildOptions::add_options_if(), arm_compute::adjust_vec_size(), ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), SoftmaxKernelInfo::beta, ICloneable< T >::clone(), ITensorInfo::data_type(), ITensorInfo::dimension(), dt, arm_compute::F16, arm_compute::float_to_string_with_full_precision(), arm_compute::get_cl_type_from_data_type(), arm_compute::get_padding_info(), ITensor::info(), arm_compute::test::validation::input, SoftmaxKernelInfo::input_data_type, arm_compute::is_data_type_float(), arm_compute::is_data_type_quantized_asymmetric(), arm_compute::is_data_type_quantized_asymmetric_signed(), SoftmaxKernelInfo::is_log, CLLogits1DMaxShiftExpSumKernel::is_parallel_reduction(), ICLKernel::lws_hint(), arm_compute::test::validation::qinfo, ITensorInfo::quantization_info(), UniformQuantizationInfo::scale, sum(), ITensorInfo::tensor_shape(), arm_compute::support::cpp11::to_string(), and QuantizationInfo::uniform().

164 {
165  ARM_COMPUTE_ERROR_ON_NULLPTR(input, max, sum, output);
166 
167  auto padding_info = get_padding_info({ input, max, output, sum });
168 
169  // Output auto initialization if not yet initialized
170  auto_init_if_empty(*sum->info(), input->info()->clone()->set_tensor_shape(max->info()->tensor_shape()));
171  auto_init_if_empty(*output->info(), *input->info()->clone());
172 
173  // Perform validation step
174  ARM_COMPUTE_ERROR_THROW_ON(validate_arguments_1DMaxShiftExpSum(input->info(), max->info(), output->info(), sum->info()));
175 
176  _input = input;
177  _max = max;
178  _output = output;
179  _sum = sum;
180 
181  const DataType dt = input->info()->data_type();
182  const UniformQuantizationInfo qinfo = input->info()->quantization_info().uniform();
183  const size_t reduction_dim_size = input->info()->dimension(0);
184  const float beta = info.beta;
185  const auto is_signed_qasymm8 = is_data_type_quantized_asymmetric_signed(info.input_data_type);
186  const int min_value = is_signed_qasymm8 ? CL_SCHAR_MIN : 0;
187 
188  ParallelReductionInfo parallel_reduction_info = is_parallel_reduction(reduction_dim_size);
189  const unsigned int vector_size = adjust_vec_size(std::get<1>(parallel_reduction_info), reduction_dim_size);
190 
191  // Set build options
192  CLBuildOptions build_opts;
193  build_opts.add_option("-DDATA_TYPE=" + get_cl_type_from_data_type(dt));
194  build_opts.add_option("-DMIN_VALUE=" + support::cpp11::to_string(min_value));
195  build_opts.add_option("-DVECTOR_SIZE=" + support::cpp11::to_string(vector_size));
196  build_opts.add_option("-DSRC_WIDTH=" + support::cpp11::to_string(reduction_dim_size));
197  build_opts.add_option("-DVECTOR_SIZE_LEFTOVER=" + support::cpp11::to_string(reduction_dim_size % vector_size));
198  build_opts.add_option("-DLOG_VECTOR_SIZE=" + support::cpp11::to_string(lround(log2(vector_size))));
199  build_opts.add_option_if((reduction_dim_size % vector_size) != 0, "-DNON_MULTIPLE_OF_VECTOR_SIZE");
200  build_opts.add_option_if(is_signed_qasymm8, "-DQASYMM8_SIGNED");
201  build_opts.add_option_if(is_data_type_float(dt) && (beta != 1.0f), "-DBETA=" + float_to_string_with_full_precision(beta));
202  build_opts.add_option_if(is_data_type_float(dt) && info.is_log, "-DLOG_SOFTMAX");
203  build_opts.add_option_if(is_data_type_float(dt), "-DMINVAL=" + ((dt == DataType::F16) ? std::string("-HALF_MAX") : std::string("-FLT_MAX")));
204  build_opts.add_options_if(is_data_type_quantized_asymmetric(dt), prepare_quantized_softmax_build_options(qinfo.scale, beta).options());
205 
206  cl::NDRange lws_hint(cl::NullRange);
207  std::string kernel_name = std::string("softmax_layer_max_shift_exp_sum_") + (is_data_type_quantized_asymmetric(dt) ? "quantized_" : "");
208 
209  // Configure parallel kernel if needed
210  if(std::get<0>(parallel_reduction_info))
211  {
212  kernel_name += "parallel";
213  bool is_grid_size_pow2 = (_grid_size != 0) && ((_grid_size & (_grid_size - 1)) == 0);
214  build_opts.add_option_if(is_grid_size_pow2 && _grid_size <= 256, "-DGRID_SIZE=" + support::cpp11::to_string(_grid_size));
215 
216  // Handle boundary conditions.
217  const unsigned int multiple_grid_size = (reduction_dim_size / vector_size) % _grid_size;
218  build_opts.add_option_if((multiple_grid_size != 0) || ((reduction_dim_size % vector_size) != 0), "-DNON_MULTIPLE_OF_GRID_SIZE");
219  // Setting _lws_hint in this way can also communicate grid_size to CLLogits1DMaxShiftExpSumKernel::run().
220  // A single workgroup performs reduction in dimension 0 in the parallel case, hence lws[0]==gws[0].
221  lws_hint = cl::NDRange(_grid_size);
222  }
223  else
224  {
225  kernel_name += "serial";
226  }
227 
228  // Create kernel.
229  _kernel = create_kernel(compile_context, kernel_name, build_opts.options());
230 
231  // Configure window
232  Window win = calculate_max_window(*(input->info()), Steps(reduction_dim_size));
233  ICLKernel::configure_internal(win, lws_hint);
234 
235  ARM_COMPUTE_ERROR_ON(has_padding_changed(padding_info));
236 }

◆ is_parallel_reduction()

CLLogits1DMaxShiftExpSumKernel::ParallelReductionInfo is_parallel_reduction ( size_t  size)
static

Checks if the given size is eligible for parallel reduction.

Note
Serial reduction is launched for width < (_grid_size * _serial_vector_size).
Parallel reduction is launched for width >= (_grid_size * _serial_vector_size) and vector_size is forced to 4.
Parameters
    [in] size  Size to check
Returns
A two-element tuple where the first element is a boolean specifying if a parallel reduction will be run, while the second element is the vector size of the execution.

Definition at line 244 of file CLSoftmaxLayerKernel.cpp.

Referenced by CLLogits1DMaxShiftExpSumKernel::configure(), and CLLogits1DMaxShiftExpSumKernel::run().

245 {
246  bool is_parallel_reduction = (size >= (_grid_size * _serial_vector_size)) && (_grid_size > 1);
247  unsigned int vector_size = is_parallel_reduction ? _parallel_vector_size : _serial_vector_size;
248  return std::make_tuple(is_parallel_reduction, vector_size);
249 }

◆ operator=() [1/2]

Prevent instances of this class from being copied (As this class contains pointers)

◆ operator=() [2/2]

Allow instances of this class to be moved.

◆ run()

void run ( const Window &  window,
cl::CommandQueue &  queue 
)
overridevirtual

Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.

Note
The queue is not flushed by this method, and therefore the kernel will not have been executed by the time this method returns.
Parameters
    [in]     window  Region on which to execute the kernel. (Must be a valid region of the window returned by window()).
    [in,out] queue   Command queue on which to enqueue the kernel.

Reimplemented from ICLKernel.

Definition at line 251 of file CLSoftmaxLayerKernel.cpp.

References ICLKernel::add_3D_tensor_argument(), ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, Window::collapse_if_possible(), ITensorInfo::dimension(), Window::DimX, Window::DimZ, arm_compute::enqueue(), Window::first_slice_window_3D(), ITensor::info(), CLLogits1DMaxShiftExpSumKernel::is_parallel_reduction(), ICLKernel::lws_hint(), Window::set(), arm_compute::test::validation::reference::slice(), Window::slide_window_slice_3D(), and IKernel::window().

252 {
253  ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
254  ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(IKernel::window(), window);
255 
256  // Collapse window in Z dimension
257  Window window_collapsed = window.collapse_if_possible(ICLKernel::window(), Window::DimZ);
258 
259  // Reconfigure window in case of parallel reduction
260  ParallelReductionInfo parallel_reduction_info = is_parallel_reduction(_input->info()->dimension(0));
261  if(std::get<0>(parallel_reduction_info))
262  {
263  // Launch grid_size parallel work items
264  window_collapsed.set(Window::DimX, Window::Dimension(0, _grid_size, 1));
265  }
266 
267  // Get slices
268  Window slice = window_collapsed.first_slice_window_3D();
269  do
270  {
271  unsigned int idx = 0;
272  // Set inputs
273  add_3D_tensor_argument(idx, _input, slice);
274  add_3D_tensor_argument(idx, _max, slice);
275  add_3D_tensor_argument(idx, _output, slice);
276  add_3D_tensor_argument(idx, _sum, slice);
277  enqueue(queue, *this, slice, lws_hint());
278  }
279  while(window_collapsed.slide_window_slice_3D(slice));
280 }

◆ validate()

Status validate ( const ITensorInfo *  input,
const ITensorInfo *  max,
const ITensorInfo *  output,
const ITensorInfo *  sum 
)
static

Static function to check if given info will lead to a valid configuration of CLLogits1DMaxShiftExpSumKernel.

Parameters
    [in] input   Source tensor. Data types supported: QASYMM8/QASYMM8_SIGNED/F16/F32
    [in] max     Max values tensor. Data types supported: same as input
    [in] output  Destination tensor. Data types supported: same as input
    [in] sum     Sum of 1D logits tensor. Data types supported: same as input
Returns
a status

Definition at line 238 of file CLSoftmaxLayerKernel.cpp.

References ARM_COMPUTE_RETURN_ON_ERROR.

Referenced by CLSoftmaxLayerGeneric< IS_LOG >::validate().

239 {
240  ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments_1DMaxShiftExpSum(input, max, output, sum));
241  return Status{};
242 }

The documentation for this class was generated from the following files:

CLSoftmaxLayerKernel.h
CLSoftmaxLayerKernel.cpp