Compute Library
 19.08
NEDirectConvolutionLayerOutputStageKernel Class Reference

NEON kernel to accumulate the biases, if provided, or downscale in case of quantized input. More...

#include <NEDirectConvolutionLayerOutputStageKernel.h>


Public Member Functions

const char * name () const override
 Name of the kernel. More...
 
 NEDirectConvolutionLayerOutputStageKernel ()
 Default constructor. More...
 
 NEDirectConvolutionLayerOutputStageKernel (const NEDirectConvolutionLayerOutputStageKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
NEDirectConvolutionLayerOutputStageKernel & operator= (const NEDirectConvolutionLayerOutputStageKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 NEDirectConvolutionLayerOutputStageKernel (NEDirectConvolutionLayerOutputStageKernel &&)=default
 Allow instances of this class to be moved. More...
 
NEDirectConvolutionLayerOutputStageKernel & operator= (NEDirectConvolutionLayerOutputStageKernel &&)=default
 Allow instances of this class to be moved. More...
 
 ~NEDirectConvolutionLayerOutputStageKernel ()=default
 Default destructor. More...
 
void configure (ITensor *input, const ITensor *bias=nullptr, ITensor *output=nullptr, int result_fixedpoint_multiplier=0, int result_shift=0, int result_offset_after_shift=0)
 Set the accumulate buffer and the biases of the kernel. More...
 
void run (const Window &window, const ThreadInfo &info) override
 Execute the kernel on the passed window. More...
 
- Public Member Functions inherited from ICPPKernel
virtual ~ICPPKernel ()=default
 Default destructor. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *bias=nullptr, const ITensorInfo *output=nullptr, int result_fixedpoint_multiplier=0, int result_shift=0, int result_offset_after_shift=0)
 Static function to check if given info will lead to a valid configuration of NEDirectConvolutionLayerOutputStageKernel. More...
 

Detailed Description

NEON kernel to accumulate the biases, if provided, or downscale in case of quantized input.

Note
The bias is assumed to be shared.

Definition at line 36 of file NEDirectConvolutionLayerOutputStageKernel.h.
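The kernel's effect on the float path can be pictured as a plain scalar loop. The sketch below is purely illustrative (the function name and the flattened single-batch buffer layout are assumptions, not the library's NEON implementation): a shared 1D bias, one value per channel, is added to an NCHW accumulator in place.

```cpp
#include <cstddef>
#include <vector>

// Illustrative sketch only: add a shared 1D bias (one value per channel) to a
// single-batch NCHW accumulator in place. Names and the flattened-buffer
// layout are assumptions, not the library's vectorized code.
void accumulate_bias_nchw(std::vector<float> &acc, const std::vector<float> &bias,
                          std::size_t channels, std::size_t hw)
{
    // acc holds channels * hw elements, channel-major (NCHW, single batch).
    for (std::size_t c = 0; c < channels; ++c)
        for (std::size_t i = 0; i < hw; ++i)
            acc[c * hw + i] += bias[c]; // the same bias value covers every spatial position
}
```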

Constructor & Destructor Documentation

◆ NEDirectConvolutionLayerOutputStageKernel() [1/3]

Default constructor.

Definition at line 462 of file NEDirectConvolutionLayerOutputStageKernel.cpp.

463  : _func(nullptr), _input(nullptr), _bias(nullptr), _output(nullptr), _result_fixedpoint_multiplier(0), _result_shift(0), _result_offset_after_shift(0)
464 {
465 }

◆ NEDirectConvolutionLayerOutputStageKernel() [2/3]

Prevent instances of this class from being copied (As this class contains pointers)

◆ NEDirectConvolutionLayerOutputStageKernel() [3/3]

Allow instances of this class to be moved.

◆ ~NEDirectConvolutionLayerOutputStageKernel()

Default destructor.

Member Function Documentation

◆ configure()

void configure ( ITensor *  input,
 const ITensor *  bias = nullptr,
 ITensor *  output = nullptr,
 int  result_fixedpoint_multiplier = 0,
 int  result_shift = 0,
 int  result_offset_after_shift = 0 
 )

Set the accumulate buffer and the biases of the kernel.

Parameters
[in,out]	input	Input to add the bias to. If output is not specified, the accumulation is done in-place. Data type supported: F16/F32
[in]	bias	(Optional) The shared bias tensor to add. It must be a 1D tensor. Data type supported: same as input
[out]	output	(Optional) If the output tensor is specified, the accumulation is done out-of-place. (Defaults to nullptr.) Data type supported: F16/F32
[in]	result_fixedpoint_multiplier	(Optional) Fixed-point value multiplied with each element of the input matrix once the result_offset has been added
[in]	result_shift	(Optional) Integer value used to round the result of the fixed-point multiplication to the nearest power-of-two division
[in]	result_offset_after_shift	(Optional) Offset applied to the result before it is converted back to QASYMM8
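The three quantization parameters are easiest to understand as scalar arithmetic. The sketch below is a simplified, illustrative re-implementation of the per-element S32 → QASYMM8 output stage, assuming gemmlowp-style rounding (which the NEON path follows); it is not the library's vectorized code, and all function names are made up for the example.

```cpp
#include <algorithm>
#include <cstdint>

// Rounding-doubling high multiply: (a * b * 2) / 2^32, rounded to nearest,
// computed as (a * b) / 2^31 with a sign-dependent nudge (gemmlowp-style).
int32_t rounding_doubling_high_mul(int32_t a, int32_t b)
{
    const int64_t prod  = static_cast<int64_t>(a) * static_cast<int64_t>(b);
    const int64_t nudge = (prod >= 0) ? (1ll << 30) : (1 - (1ll << 30));
    return static_cast<int32_t>((prod + nudge) / (1ll << 31)); // truncates toward zero
}

// Divide by 2^shift, rounding to nearest (ties away from zero).
int32_t rounding_divide_by_pow2(int32_t x, int shift)
{
    const int32_t mask      = (1 << shift) - 1;
    const int32_t remainder = x & mask;
    const int32_t threshold = (mask >> 1) + ((x < 0) ? 1 : 0);
    return (x >> shift) + ((remainder > threshold) ? 1 : 0);
}

// Per-element output stage for the quantized (S32 -> QASYMM8) path.
uint8_t output_stage_scalar(int32_t acc, int32_t bias,
                            int32_t result_fixedpoint_multiplier,
                            int     result_shift,
                            int32_t result_offset_after_shift)
{
    acc = acc + bias;                                             // accumulate shared bias
    acc = rounding_doubling_high_mul(acc, result_fixedpoint_multiplier);
    acc = rounding_divide_by_pow2(acc, result_shift);
    acc = acc + result_offset_after_shift;                        // re-apply zero point
    return static_cast<uint8_t>(std::min(255, std::max(0, acc))); // clamp to QASYMM8
}
```

With `result_fixedpoint_multiplier = 1 << 30` (representing 0.5) and `result_shift = 2`, an accumulator value of 100 becomes round(100 * 0.5 / 4) + offset.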

Definition at line 467 of file NEDirectConvolutionLayerOutputStageKernel.cpp.

469 {
470  ARM_COMPUTE_ERROR_ON_NULLPTR(input);
471 
472  // Auto-initialize output output if required
473  if(output != nullptr)
474  {
475  // Work out expected output data type
476  const DataType output_dt = (input->info()->data_type() == DataType::S32) ? DataType::QASYMM8 : input->info()->data_type();
477  // Output tensor auto initialization if not yet initialized
478  auto_init_if_empty(*output->info(), input->info()->clone()->set_data_type(output_dt));
479  }
480 
481  // Perform validation step
482  ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(input->info(), (bias == nullptr) ? nullptr : bias->info(), (output == nullptr) ? nullptr : output->info(),
483  result_fixedpoint_multiplier, result_shift, result_offset_after_shift));
484 
485  _func = nullptr;
486  _bias = bias;
487  _input = input;
488  _output = output;
489  _result_fixedpoint_multiplier = result_fixedpoint_multiplier;
490  _result_shift = result_shift;
491  _result_offset_after_shift = result_offset_after_shift;
492 
493  // Configure kernel window
494  auto win_config = validate_and_configure_window(input->info(), (bias == nullptr) ? nullptr : bias->info(), (output == nullptr) ? nullptr : output->info());
495  ARM_COMPUTE_ERROR_THROW_ON(win_config.first);
496  INEKernel::configure(win_config.second);
497 
498  const bool has_bias = bias != nullptr;
499 
500  // Set appropriate function
501  if(input->info()->data_layout() == DataLayout::NCHW)
502  {
503  switch(input->info()->data_type())
504  {
505  case DataType::S32:
506  {
507  _func = (bias == nullptr) ? &output_stage_nchw<int32_t, uint8_t, false, false> : &output_stage_nchw<int32_t, uint8_t, false, true>;
508  break;
509  }
510 #ifdef __ARM_FEATURE_FP16_VECTOR_ARITHMETIC
511  case DataType::F16:
512  {
513  if(has_bias)
514  {
515  _func = (output == nullptr) ? &output_stage_nchw<float16_t, float16_t, true, true> : &output_stage_nchw<float16_t, float16_t, false, true>;
516  }
517  else
518  {
519  _func = (output == nullptr) ? &output_stage_nchw<float16_t, float16_t, true, false> : &output_stage_nchw<float16_t, float16_t, false, false>;
520  }
521  break;
522  }
523 #endif /* __ARM_FEATURE_FP16_VECTOR_ARITHMETIC */
524  case DataType::F32:
525  {
526  if(has_bias)
527  {
528  _func = (output == nullptr) ? &output_stage_nchw<float, float, true, true> : &output_stage_nchw<float, float, false, true>;
529  }
530  else
531  {
532  _func = (output == nullptr) ? &output_stage_nchw<float, float, true, false> : &output_stage_nchw<float, float, false, false>;
533  }
534  break;
535  }
536  default:
537  {
538  ARM_COMPUTE_ERROR("Unsupported combination of types among the inputs.");
539  }
540  }
541  }
542  else
543  {
544  switch(input->info()->data_type())
545  {
546  case DataType::S32:
547  {
548  _func = (bias == nullptr) ? &output_stage_nhwc<int32_t, uint8_t, false, false> : &output_stage_nhwc<int32_t, uint8_t, false, true>;
549  break;
550  }
551 #ifdef __ARM_FEATURE_FP16_VECTOR_ARITHMETIC
552  case DataType::F16:
553  {
554  if(has_bias)
555  {
556  _func = (output == nullptr) ? &output_stage_nhwc<float16_t, float16_t, true, true> : &output_stage_nhwc<float16_t, float16_t, false, true>;
557  }
558  else
559  {
560  _func = (output == nullptr) ? &output_stage_nhwc<float16_t, float16_t, true, false> : &output_stage_nhwc<float16_t, float16_t, false, false>;
561  }
562  break;
563  }
564 #endif /* __ARM_FEATURE_FP16_VECTOR_ARITHMETIC */
565  case DataType::F32:
566  {
567  if(has_bias)
568  {
569  _func = (output == nullptr) ? &output_stage_nhwc<float, float, true, true> : &output_stage_nhwc<float, float, false, true>;
570  }
571  else
572  {
573  _func = (output == nullptr) ? &output_stage_nhwc<float, float, true, false> : &output_stage_nhwc<float, float, false, false>;
574  }
575  break;
576  }
577  default:
578  {
579  ARM_COMPUTE_ERROR("Unsupported combination of types among the inputs.");
580  }
581  }
582  }
583 }

References ARM_COMPUTE_ERROR, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), arm_compute::test::validation::bias, ICloneable< T >::clone(), ITensorInfo::data_layout(), ITensorInfo::data_type(), arm_compute::F16, arm_compute::F32, arm_compute::test::validation::has_bias, ITensor::info(), CLTensor::info(), arm_compute::NCHW, arm_compute::QASYMM8, arm_compute::S32, and arm_compute::validate_and_configure_window().

Referenced by NEDirectConvolutionLayer::configure(), and NEDepthwiseConvolutionLayer::configure().

◆ name()

const char* name ( ) const
inline override virtual

Name of the kernel.

Returns
Kernel name

Implements ICPPKernel.

Definition at line 39 of file NEDirectConvolutionLayerOutputStageKernel.h.

40  {
41  return "NEDirectConvolutionLayerOutputStageKernel";
42  }

◆ operator=() [1/2]

Prevent instances of this class from being copied (As this class contains pointers)

◆ operator=() [2/2]

Allow instances of this class to be moved.

◆ run()

void run ( const Window &  window,
 const ThreadInfo &  info 
 )
override virtual

Execute the kernel on the passed window.

Warning
If is_parallelisable() returns false then the passed window must be equal to window()
Note
The window has to be a region within the window returned by the window() method
The width of the window has to be a multiple of num_elems_processed_per_iteration().
Parameters
[in]	window	Region on which to execute the kernel. (Must be a region of the window returned by window())
[in]	info	Info about executing thread and CPU.

Implements ICPPKernel.
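The subwindow constraint above can be pictured with a simplified model of splitting a maximum window into per-thread regions whose bounds stay aligned to the iteration step. `Range` and `thread_slice` below are hypothetical stand-ins for illustration only, not arm_compute types.

```cpp
// Hypothetical model of window splitting; not an arm_compute type.
struct Range
{
    int start, end, step;
};

// Split [full.start, full.end) into num_threads contiguous slices whose
// bounds stay aligned to full.step; earlier threads absorb the remainder.
Range thread_slice(Range full, int num_threads, int tid)
{
    const int iters = (full.end - full.start) / full.step; // total iterations
    const int per   = iters / num_threads;
    const int rem   = iters % num_threads;
    const int begin = tid * per + (tid < rem ? tid : rem);
    const int count = per + (tid < rem ? 1 : 0);
    return { full.start + begin * full.step,
             full.start + (begin + count) * full.step,
             full.step };
}
```

Each slice is a valid region of the full window, which is what ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW checks at run time.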

Definition at line 594 of file NEDirectConvolutionLayerOutputStageKernel.cpp.

595 {
596  ARM_COMPUTE_UNUSED(info);
597  ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
598  ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(INEKernel::window(), window);
599  ARM_COMPUTE_ERROR_ON(_func == nullptr);
600 
601  (*_func)(_input, _bias, window, _output, _result_fixedpoint_multiplier, _result_shift, _result_offset_after_shift);
602 }

References ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, ARM_COMPUTE_UNUSED, arm_compute::test::validation::info, and IKernel::window().

◆ validate()

Status validate ( const ITensorInfo *  input,
 const ITensorInfo *  bias = nullptr,
 const ITensorInfo *  output = nullptr,
 int  result_fixedpoint_multiplier = 0,
 int  result_shift = 0,
 int  result_offset_after_shift = 0 
 )
static

Static function to check if given info will lead to a valid configuration of NEDirectConvolutionLayerOutputStageKernel.

Parameters
[in]inputInput to add the bias to. If output is not specified then accumulation is done in-place. Data type supported: F16/F32
[in]bias(Optional) The shared bias tensor to add. It must be 1D Tensor. Data type supported: Same as input
[in]output(Optional) If the output tensor is specified the accumulation is done out-of-place. (Defaults to nullptr) Data type supported: F16/F32
[in]result_fixedpoint_multiplier(Optional) Fixed point value to be multiplied to each element of the input matrix once the result_offset has been added
[in]result_shift(Optional) Integer value used to round the result of the fixed point multiplication to nearest division by a power-of-two
[in]result_offset_after_shift(Optional) Offset to be applied to result before converting it back to QASYMM8
Returns
a status
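A common pattern is to call validate() with tensor metadata before calling configure(), so that misconfigurations surface as a returned status rather than a thrown exception. The sketch below mimics that pattern with a hypothetical, self-contained `Status` type and check (not arm_compute::Status or the kernel's real validation logic):

```cpp
#include <string>

// Hypothetical stand-in for a status type; illustration only.
struct Status
{
    bool ok = true;
    std::string error;
};

// Check candidate metadata up front; configuration would only proceed
// when this returns an ok status.
Status validate_output_stage(int input_num_dims, int bias_num_dims)
{
    if (bias_num_dims != 1)
        return { false, "bias must be a 1D tensor" };
    if (input_num_dims < 1)
        return { false, "input must have at least one dimension" };
    return {};
}
```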

Definition at line 585 of file NEDirectConvolutionLayerOutputStageKernel.cpp.

587 {
588  ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(input, bias, output, result_fixedpoint_multiplier, result_shift, result_offset_after_shift));
589  ARM_COMPUTE_RETURN_ON_ERROR(validate_and_configure_window(input->clone().get(), bias == nullptr ? nullptr : bias->clone().get(), output == nullptr ? nullptr : output->clone().get()).first);
590 
591  return Status{};
592 }

References ARM_COMPUTE_RETURN_ON_ERROR, arm_compute::test::validation::bias, ICloneable< T >::clone(), and arm_compute::validate_and_configure_window().

Referenced by NEDirectConvolutionLayer::validate(), NEDepthwiseConvolutionLayer3x3::validate(), NEDepthwiseConvolutionLayerOptimized::validate(), and NEDepthwiseConvolutionLayer::validate().


The documentation for this class was generated from the following files:

NEDirectConvolutionLayerOutputStageKernel.h
NEDirectConvolutionLayerOutputStageKernel.cpp