Compute Library
 21.02
NEDirectConvolutionLayerOutputStageKernel Class Reference

Neon kernel to accumulate the biases, if provided, or downscale in case of quantized input. More...

#include <NEDirectConvolutionLayerOutputStageKernel.h>

Collaboration diagram for NEDirectConvolutionLayerOutputStageKernel (diagram omitted)

Public Member Functions

const char * name () const override
 Name of the kernel. More...
 
 NEDirectConvolutionLayerOutputStageKernel ()
 Default constructor. More...
 
 NEDirectConvolutionLayerOutputStageKernel (const NEDirectConvolutionLayerOutputStageKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
NEDirectConvolutionLayerOutputStageKernel & operator= (const NEDirectConvolutionLayerOutputStageKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 NEDirectConvolutionLayerOutputStageKernel (NEDirectConvolutionLayerOutputStageKernel &&)=default
 Allow instances of this class to be moved. More...
 
NEDirectConvolutionLayerOutputStageKernel & operator= (NEDirectConvolutionLayerOutputStageKernel &&)=default
 Allow instances of this class to be moved. More...
 
 ~NEDirectConvolutionLayerOutputStageKernel ()=default
 Default destructor. More...
 
void configure (ITensor *input, const ITensor *bias=nullptr, ITensor *output=nullptr, const DirectConvolutionLayerOutputStageKernelInfo &info=DirectConvolutionLayerOutputStageKernelInfo())
 Set the accumulate buffer and the biases of the kernel. More...
 
void run (const Window &window, const ThreadInfo &info) override
 Execute the kernel on the passed window. More...
 
- Public Member Functions inherited from ICPPKernel
virtual ~ICPPKernel ()=default
 Default destructor. More...
 
virtual void run_nd (const Window &window, const ThreadInfo &info, const Window &thread_locator)
 Legacy compatibility layer for implementations which do not support thread_locator. In these cases we simply narrow the interface down to the legacy version. More...
 
virtual void run_op (ITensorPack &tensors, const Window &window, const ThreadInfo &info)
 Execute the kernel on the passed window. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *bias=nullptr, const ITensorInfo *output=nullptr, const DirectConvolutionLayerOutputStageKernelInfo &info=DirectConvolutionLayerOutputStageKernelInfo())
 Static function to check if given info will lead to a valid configuration of NEDirectConvolutionLayerOutputStageKernel. More...
 

Detailed Description

Neon kernel to accumulate the biases, if provided, or downscale in case of quantized input.

Note
We assume bias to be shared
For quantized computations (i.e. input of S32 type) the output data type for auto-initialization must be passed as part of the DirectConvolutionLayerOutputStageKernelInfo.

Definition at line 39 of file NEDirectConvolutionLayerOutputStageKernel.h.

Constructor & Destructor Documentation

◆ NEDirectConvolutionLayerOutputStageKernel() [1/3]

Default constructor.

Definition at line 380 of file NEDirectConvolutionLayerOutputStageKernel.cpp.


 : _func(nullptr), _input(nullptr), _bias(nullptr), _output(nullptr), _result_fixedpoint_multiplier(0), _result_shift(0), _result_offset_after_shift(0)
{
}

◆ NEDirectConvolutionLayerOutputStageKernel() [2/3]

Prevent instances of this class from being copied (As this class contains pointers)

◆ NEDirectConvolutionLayerOutputStageKernel() [3/3]

Allow instances of this class to be moved.

◆ ~NEDirectConvolutionLayerOutputStageKernel()

Default destructor.

Member Function Documentation

◆ configure()

void configure ( ITensor * input,
const ITensor * bias = nullptr,
ITensor * output = nullptr,
const DirectConvolutionLayerOutputStageKernelInfo & info = DirectConvolutionLayerOutputStageKernelInfo()
)

Set the accumulate buffer and the biases of the kernel.

Parameters
[in,out] input  Input to add the bias to. If output is not specified then accumulation is done in-place. Data type supported: F16/F32/S32
[in]     bias   (Optional) The shared bias tensor to add. It must be a 1D tensor. Data type supported: same as input
[out]    output (Optional) If the output tensor is specified the accumulation is done out-of-place. (Defaults to nullptr) Note that in-place computation is only supported for F16/F32. For S32 this must not be nullptr. Data type supported: F16/F32, or QASYMM8/QASYMM8_SIGNED if input is S32
[in]     info   (Optional) DirectConvolutionLayerOutputStageKernelInfo descriptor metadata

Definition at line 385 of file NEDirectConvolutionLayerOutputStageKernel.cpp.

References ARM_COMPUTE_ERROR, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), arm_compute::calculate_max_window(), ICloneable< T >::clone(), ITensorInfo::data_layout(), ITensorInfo::data_type(), arm_compute::F16, arm_compute::F32, ITensor::info(), arm_compute::test::validation::info, arm_compute::test::validation::input, arm_compute::is_data_type_quantized_asymmetric_signed(), arm_compute::NCHW, ITensorInfo::num_dimensions(), DirectConvolutionLayerOutputStageKernelInfo::output_data_type, DirectConvolutionLayerOutputStageKernelInfo::result_fixedpoint_multiplier, DirectConvolutionLayerOutputStageKernelInfo::result_offset_after_shift, DirectConvolutionLayerOutputStageKernelInfo::result_shift, arm_compute::S32, Dimensions< T >::set_num_dimensions(), ITensorInfo::set_valid_region(), ITensorInfo::tensor_shape(), ITensorInfo::total_size(), and arm_compute::validate_arguments().


{
    // Perform validation step
    ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(input->info(), (bias == nullptr) ? nullptr : bias->info(), (output == nullptr) ? nullptr : output->info(), info));

    _func                         = nullptr;
    _bias                         = bias;
    _input                        = input;
    _output                       = (output != nullptr) ? output : input;
    _result_fixedpoint_multiplier = info.result_fixedpoint_multiplier;
    _result_shift                 = info.result_shift;
    _result_offset_after_shift    = info.result_offset_after_shift;

    // Auto-initialize output if required
    if(output != nullptr && output->info() != nullptr)
    {
        // Work out expected output data type
        const DataType output_dt = (input->info()->data_type() == DataType::S32) ? info.output_data_type : DataType::S32;
        // Output tensor auto initialization if not yet initialized
        auto_init_if_empty(*output->info(), input->info()->clone()->set_data_type(output_dt));
    }

    Window win = calculate_max_window(*input->info(), Steps());
    Coordinates coord;
    coord.set_num_dimensions(input->info()->num_dimensions());

    if(output != nullptr && (output->info()->total_size() != 0))
    {
        output->info()->set_valid_region(ValidRegion(coord, output->info()->tensor_shape()));
    }
    else
    {
        input->info()->set_valid_region(ValidRegion(coord, input->info()->tensor_shape()));
    }

    INEKernel::configure(win);

    const bool is_qasymm8_signed = (output != nullptr) ? is_data_type_quantized_asymmetric_signed(output->info()->data_type()) : false;

    // Set appropriate function
    if(input->info()->data_layout() == DataLayout::NCHW)
    {
        switch(input->info()->data_type())
        {
            case DataType::S32:
            {
                if(is_qasymm8_signed)
                {
                    _func = &output_stage_nchw<int8_t>;
                }
                else
                {
                    _func = &output_stage_nchw<uint8_t>;
                }
                break;
            }
#ifdef __ARM_FEATURE_FP16_VECTOR_ARITHMETIC
            case DataType::F16:
            {
                _func = &output_stage_nchw<float16_t>;
                break;
            }
#endif /* __ARM_FEATURE_FP16_VECTOR_ARITHMETIC */
            case DataType::F32:
            {
                _func = &output_stage_nchw<float>;
                break;
            }
            default:
            {
                ARM_COMPUTE_ERROR("Unsupported combination of types among the inputs.");
            }
        }
    }
    else
    {
        switch(input->info()->data_type())
        {
            case DataType::S32:
            {
                if(is_qasymm8_signed)
                {
                    _func = &output_stage_nhwc<int8_t>;
                }
                else
                {
                    _func = &output_stage_nhwc<uint8_t>;
                }
                break;
            }
#ifdef __ARM_FEATURE_FP16_VECTOR_ARITHMETIC
            case DataType::F16:
            {
                _func = &output_stage_nhwc<float16_t>;
                break;
            }
#endif /* __ARM_FEATURE_FP16_VECTOR_ARITHMETIC */
            case DataType::F32:
            {
                _func = &output_stage_nhwc<float>;
                break;
            }
            default:
            {
                ARM_COMPUTE_ERROR("Unsupported combination of types among the inputs.");
            }
        }
    }
}

◆ name()

◆ operator=() [1/2]

Prevent instances of this class from being copied (As this class contains pointers)


◆ operator=() [2/2]

Allow instances of this class to be moved.

◆ run()

void run ( const Window & window,
const ThreadInfo & info
)
override, virtual

Execute the kernel on the passed window.

Warning
If is_parallelisable() returns false then the passed window must be equal to window()
Note
The window has to be a region within the window returned by the window() method
The width of the window has to be a multiple of num_elems_processed_per_iteration().
Parameters
[in] window Region on which to execute the kernel. (Must be a region of the window returned by window())
[in] info   Info about executing thread and CPU.

Reimplemented from ICPPKernel.

Definition at line 505 of file NEDirectConvolutionLayerOutputStageKernel.cpp.

References ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, ARM_COMPUTE_UNUSED, arm_compute::test::validation::has_bias, and IKernel::window().


{
    ARM_COMPUTE_ERROR_ON(_func == nullptr);

    const bool has_bias = _bias != nullptr;
    (*_func)(_input, _bias, window, _output, _result_fixedpoint_multiplier, _result_shift, _result_offset_after_shift, has_bias);
}

◆ validate()

Status validate ( const ITensorInfo * input,
const ITensorInfo * bias = nullptr,
const ITensorInfo * output = nullptr,
const DirectConvolutionLayerOutputStageKernelInfo & info = DirectConvolutionLayerOutputStageKernelInfo()
)
static

Static function to check if given info will lead to a valid configuration of NEDirectConvolutionLayerOutputStageKernel.

Parameters
[in] input  Input to add the bias to. If output is not specified then accumulation is done in-place. Data type supported: F16/F32/S32
[in] bias   (Optional) The shared bias tensor to add. It must be a 1D tensor. Data type supported: same as input
[in] output (Optional) If the output tensor is specified the accumulation is done out-of-place. (Defaults to nullptr) Note that in-place computation is only supported for F16/F32. For S32 this must not be nullptr. Data type supported: F16/F32, or QASYMM8/QASYMM8_SIGNED if input is S32
[in] info   (Optional) DirectConvolutionLayerOutputStageKernelInfo descriptor metadata
Returns
a status

Definition at line 497 of file NEDirectConvolutionLayerOutputStageKernel.cpp.

References ARM_COMPUTE_RETURN_ON_ERROR, and arm_compute::validate_arguments().

Referenced by NEDirectConvolutionLayer::validate().

{
    ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(input, bias, output, info));

    return Status{};
}

The documentation for this class was generated from the following files: