Compute Library
 21.02
NEGEMMLowpOutputStage Class Reference

Basic function to execute GEMMLowpQuantizeDown kernels on Neon. More...

#include <NEGEMMLowpOutputStage.h>

Collaboration diagram for NEGEMMLowpOutputStage:

Public Member Functions

 NEGEMMLowpOutputStage ()=default
 Constructor. More...
 
 NEGEMMLowpOutputStage (const NEGEMMLowpOutputStage &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
NEGEMMLowpOutputStage & operator= (const NEGEMMLowpOutputStage &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 NEGEMMLowpOutputStage (NEGEMMLowpOutputStage &&)=delete
 Prevent instances of this class from being moved (As this class contains non movable objects) More...
 
NEGEMMLowpOutputStage & operator= (NEGEMMLowpOutputStage &&)=delete
 Prevent instances of this class from being moved (As this class contains non movable objects) More...
 
 ~NEGEMMLowpOutputStage ()
 Default destructor. More...
 
void configure (const ITensor *input, const ITensor *bias, ITensor *output, const GEMMLowpOutputStageInfo &info)
 Initialise the kernel's inputs, output. More...
 
- Public Member Functions inherited from INESimpleFunctionNoBorder
 INESimpleFunctionNoBorder (IRuntimeContext *ctx=nullptr)
 Constructor. More...
 
 INESimpleFunctionNoBorder (const INESimpleFunctionNoBorder &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 INESimpleFunctionNoBorder (INESimpleFunctionNoBorder &&)=default
 Default move constructor. More...
 
INESimpleFunctionNoBorder & operator= (const INESimpleFunctionNoBorder &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
INESimpleFunctionNoBorder & operator= (INESimpleFunctionNoBorder &&)=default
 Default move assignment operator. More...
 
 ~INESimpleFunctionNoBorder ()
 Default destructor. More...
 
void run () override final
 Run the kernels contained in the function. More...
 
- Public Member Functions inherited from IFunction
virtual ~IFunction ()=default
 Destructor. More...
 
virtual void prepare ()
 Prepare the function for executing. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input, const ITensorInfo *bias, const ITensorInfo *output, const GEMMLowpOutputStageInfo &info)
 Static function to check if given info will lead to a valid configuration of NEGEMMLowpOutputStage. More...
 

Detailed Description

Basic function to execute GEMMLowpQuantizeDown kernels on Neon.

This function calls one of the following Neon kernels, selected by the GEMMLowpOutputStageInfo type and output data type:

NEGEMMLowpQuantizeDownInt32ScaleKernel
NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel
NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPointKernel
NEGEMMLowpQuantizeDownInt32ToInt16ScaleByFixedPointKernel
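The QUANTIZE_DOWN_FIXEDPOINT path requantizes each int32 accumulator with a fixed-point multiply, a rounding right shift, an output offset, and a clamp. The following is a minimal standalone sketch of that scalar arithmetic, following the gemmlowp convention; the helper names are illustrative and this is not the library's vectorized Neon implementation.

```cpp
#include <algorithm>
#include <cstdint>

// Rounding-doubling high multiply: approximately (a * b) / 2^31, rounded
// (gemmlowp-style SaturatingRoundingDoublingHighMul without the saturation edge case).
int32_t rounding_doubling_high_mul(int32_t a, int32_t b)
{
    const int64_t ab    = static_cast<int64_t>(a) * static_cast<int64_t>(b);
    const int64_t nudge = (ab >= 0) ? (int64_t{1} << 30) : (1 - (int64_t{1} << 30));
    return static_cast<int32_t>((ab + nudge) / (int64_t{1} << 31));
}

// Rounding arithmetic right shift by a power of two (round half away from zero).
int32_t rounding_divide_by_pot(int32_t x, int exponent)
{
    const int32_t mask      = (int32_t{1} << exponent) - 1;
    const int32_t remainder = x & mask;
    const int32_t threshold = (mask >> 1) + ((x < 0) ? 1 : 0);
    return (x >> exponent) + ((remainder > threshold) ? 1 : 0);
}

// Requantize one int32 accumulator to an unsigned 8-bit value.
uint8_t quantize_down_fixedpoint(int32_t acc, int32_t bias,
                                 int32_t multiplier, int shift, int32_t offset,
                                 int32_t min_bound, int32_t max_bound)
{
    int32_t v = acc + bias;                        // add the (optional) bias
    v = rounding_doubling_high_mul(v, multiplier); // fixed-point multiply
    v = rounding_divide_by_pot(v, shift);          // rounding right shift
    v += offset;                                   // output zero-point
    v = std::max(min_bound, std::min(max_bound, v));
    return static_cast<uint8_t>(v);
}
```

With multiplier 2^30 (i.e. 0.5 in the Q31 convention), shift 0, and offset 10, an accumulator of 100 maps to 60.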

Constructor & Destructor Documentation

◆ NEGEMMLowpOutputStage() [1/3]

NEGEMMLowpOutputStage () = default

Constructor.

◆ NEGEMMLowpOutputStage() [2/3]

NEGEMMLowpOutputStage (const NEGEMMLowpOutputStage &) = delete

Prevent instances of this class from being copied (As this class contains pointers)

◆ NEGEMMLowpOutputStage() [3/3]

NEGEMMLowpOutputStage (NEGEMMLowpOutputStage &&) = delete

Prevent instances of this class from being moved (As this class contains non movable objects)

◆ ~NEGEMMLowpOutputStage()

~NEGEMMLowpOutputStage ()

Default destructor.

Member Function Documentation

◆ configure()

void configure (const ITensor *input, const ITensor *bias, ITensor *output, const GEMMLowpOutputStageInfo &info)

Initialise the kernel's inputs, output.

Parameters
[in]  input   Input tensor. Data type supported: S32
[in]  bias    Biases tensor. Only shared biases are supported, and it can be nullptr if the bias addition is not required. Biases are 1D tensors with dimensions [OFM]. Data type supported: same as input.
[out] output  Output tensor. Data type supported: QASYMM8/QASYMM8_SIGNED/QSYMM16
[in]  info    GEMMLowp output stage metadata.

Definition at line 81 of file NEGEMMLowpOutputStage.cpp.

References ARM_COMPUTE_ERROR, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, GEMMLowpOutputStageInfo::gemmlowp_max_bound, GEMMLowpOutputStageInfo::gemmlowp_min_bound, GEMMLowpOutputStageInfo::gemmlowp_multiplier, GEMMLowpOutputStageInfo::gemmlowp_offset, GEMMLowpOutputStageInfo::gemmlowp_shift, ITensor::info(), arm_compute::test::validation::info, GEMMLowpOutputStageInfo::output_data_type, arm_compute::QASYMM8, arm_compute::QASYMM8_SIGNED, arm_compute::QSYMM16, arm_compute::QUANTIZE_DOWN, arm_compute::QUANTIZE_DOWN_FIXEDPOINT, GEMMLowpOutputStageInfo::type, and NEGEMMLowpOutputStage::validate().

Referenced by NEQLSTMLayer::configure(), arm_compute::test::validation::DATA_TEST_CASE(), and NEQLSTMLayer::NEQLSTMLayer().

{
    // Perform validate step
    ARM_COMPUTE_ERROR_ON_NULLPTR(input, output);
    ARM_COMPUTE_ERROR_THROW_ON(NEGEMMLowpOutputStage::validate(input->info(), bias != nullptr ? bias->info() : nullptr, output->info(), info));

    switch(info.type)
    {
        case GEMMLowpOutputStageType::QUANTIZE_DOWN_FIXEDPOINT:
        {
            switch(info.output_data_type)
            {
                case DataType::QASYMM8:
                {
                    auto k = std::make_unique<NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel>();
                    k->configure(input, bias, output, info.gemmlowp_multiplier, info.gemmlowp_shift, info.gemmlowp_offset, info.gemmlowp_min_bound, info.gemmlowp_max_bound);
                    _kernel = std::move(k);
                    break;
                }
                case DataType::QASYMM8_SIGNED:
                {
                    auto k = std::make_unique<NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPointKernel>();
                    k->configure(input, bias, output, info.gemmlowp_multiplier, info.gemmlowp_shift, info.gemmlowp_offset, info.gemmlowp_min_bound, info.gemmlowp_max_bound);
                    _kernel = std::move(k);
                    break;
                }
                case DataType::QSYMM16:
                {
                    auto k = std::make_unique<NEGEMMLowpQuantizeDownInt32ToInt16ScaleByFixedPointKernel>();
                    k->configure(input, bias, output, info.gemmlowp_multiplier, info.gemmlowp_shift, info.gemmlowp_min_bound, info.gemmlowp_max_bound);
                    _kernel = std::move(k);
                    break;
                }
                default:
                {
                    ARM_COMPUTE_ERROR("Unsupported output data type.");
                    break;
                }
            }
            break;
        }
        case GEMMLowpOutputStageType::QUANTIZE_DOWN:
        {
            switch(info.output_data_type)
            {
                case DataType::QASYMM8:
                case DataType::QASYMM8_SIGNED:
                {
                    auto k = std::make_unique<NEGEMMLowpQuantizeDownInt32ScaleKernel>();
                    k->configure(input, bias, output, &info);
                    _kernel = std::move(k);
                    break;
                }
                default:
                {
                    ARM_COMPUTE_ERROR("Unsupported output data type.");
                    break;
                }
            }
            break;
        }
        default:
            ARM_COMPUTE_ERROR("Unsupported GEMMLowpOutputStage type.");
    }
}
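The QUANTIZE_DOWN branch dispatches to NEGEMMLowpQuantizeDownInt32ScaleKernel, which uses a plain integer multiply and shift rather than the fixed-point scheme. A rough standalone sketch of that per-element arithmetic (the function name is illustrative and this is not the library's vectorized implementation):

```cpp
#include <algorithm>
#include <cstdint>

// Integer-scale requantization sketch:
// ((acc + bias + offset) * multiplier) >> shift, clamped to [min_bound, max_bound].
uint8_t quantize_down_scale(int32_t acc, int32_t bias, int32_t offset,
                            int32_t multiplier, int shift,
                            int32_t min_bound, int32_t max_bound)
{
    int64_t v = static_cast<int64_t>(acc) + bias + offset; // widen to avoid overflow
    v *= multiplier;
    v >>= shift;
    v = std::max<int64_t>(min_bound, std::min<int64_t>(max_bound, v));
    return static_cast<uint8_t>(v);
}
```

For example, an accumulator of 1000 with bias 24, multiplier 1, and shift 4 yields (1024 >> 4) = 64, and out-of-range results saturate at the clamp bounds.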

◆ operator=() [1/2]

NEGEMMLowpOutputStage & operator= (const NEGEMMLowpOutputStage &) = delete

Prevent instances of this class from being copied (As this class contains pointers)

◆ operator=() [2/2]

NEGEMMLowpOutputStage & operator= (NEGEMMLowpOutputStage &&) = delete

Prevent instances of this class from being moved (As this class contains non movable objects)

◆ validate()

static Status validate (const ITensorInfo *input, const ITensorInfo *bias, const ITensorInfo *output, const GEMMLowpOutputStageInfo &info)

Static function to check if given info will lead to a valid configuration of NEGEMMLowpOutputStage.

Parameters
[in]  input   Input tensor info. It is the output of the NEGEMMLowpMatrixMultiplyCore function. Data type supported: S32
[in]  bias    Biases tensor info. Only shared biases are supported, and it can be nullptr if the bias addition is not required. Biases are 1D tensors with dimensions [OFM]. Data type supported: same as input.
[in]  output  Output tensor info. Data type supported: QASYMM8/QASYMM8_SIGNED/QSYMM16
[in]  info    GEMMLowp output stage metadata.
Returns
a status
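validate() accepts only QASYMM8, QASYMM8_SIGNED, and QSYMM16 output data types. As a hypothetical illustration of that dispatch (this enum and helper are illustrative, not part of the Arm Compute Library API), the natural clamp range of each supported output type is:

```cpp
#include <cstdint>
#include <stdexcept>
#include <utility>

enum class OutputDataType { QASYMM8, QASYMM8_SIGNED, QSYMM16 };

// Natural value range of each output data type supported by the output stage
// (standard unsigned/signed 8-bit and signed 16-bit ranges).
std::pair<int32_t, int32_t> output_type_range(OutputDataType dt)
{
    switch(dt)
    {
        case OutputDataType::QASYMM8:        return {0, 255};
        case OutputDataType::QASYMM8_SIGNED: return {-128, 127};
        case OutputDataType::QSYMM16:        return {-32768, 32767};
    }
    throw std::runtime_error("Unsupported output data type.");
}
```

The gemmlowp_min_bound/gemmlowp_max_bound fields in GEMMLowpOutputStageInfo may narrow these ranges further, e.g. to fuse a clamped activation into the output stage.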

Definition at line 147 of file NEGEMMLowpOutputStage.cpp.

References ARM_COMPUTE_CREATE_ERROR, ARM_COMPUTE_RETURN_ERROR_ON, ARM_COMPUTE_RETURN_ERROR_ON_DATA_TYPE_CHANNEL_NOT_IN, ARM_COMPUTE_RETURN_ERROR_ON_MSG, ARM_COMPUTE_RETURN_ERROR_ON_NULLPTR, ITensorInfo::data_type(), GEMMLowpOutputStageInfo::gemmlowp_max_bound, GEMMLowpOutputStageInfo::gemmlowp_min_bound, arm_compute::QASYMM8, arm_compute::QASYMM8_SIGNED, arm_compute::QSYMM16, arm_compute::QUANTIZE_DOWN, arm_compute::QUANTIZE_DOWN_FIXEDPOINT, arm_compute::RUNTIME_ERROR, GEMMLowpOutputStageInfo::type, arm_compute::UNKNOWN, NEGEMMLowpQuantizeDownInt32ScaleKernel::validate(), NEGEMMLowpQuantizeDownInt32ToInt16ScaleByFixedPointKernel::validate(), NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel::validate(), and NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPointKernel::validate().

Referenced by NEGEMMLowpOutputStage::configure(), arm_compute::test::validation::DATA_TEST_CASE(), and NEQLSTMLayer::validate().

{
    ARM_COMPUTE_RETURN_ERROR_ON_NULLPTR(input, output);
    ARM_COMPUTE_RETURN_ERROR_ON_MSG(output->data_type() == DataType::UNKNOWN, "NEGEMMLowpQuantizeDownScaleByFixedPoint cannot be used with UNKNOWN output data type.");
    ARM_COMPUTE_RETURN_ERROR_ON_DATA_TYPE_CHANNEL_NOT_IN(output, 1, DataType::QASYMM8, DataType::QASYMM8_SIGNED, DataType::QSYMM16);
    ARM_COMPUTE_RETURN_ERROR_ON((info.type != GEMMLowpOutputStageType::QUANTIZE_DOWN) && (info.type != GEMMLowpOutputStageType::QUANTIZE_DOWN_FIXEDPOINT));

    switch(info.type)
    {
        case GEMMLowpOutputStageType::QUANTIZE_DOWN_FIXEDPOINT:
        {
            switch(output->data_type())
            {
                case DataType::QASYMM8:
                    return NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel::validate(input, bias, output, info.gemmlowp_min_bound, info.gemmlowp_max_bound);
                case DataType::QASYMM8_SIGNED:
                    return NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPointKernel::validate(input, bias, output, info.gemmlowp_min_bound, info.gemmlowp_max_bound);
                case DataType::QSYMM16:
                    return NEGEMMLowpQuantizeDownInt32ToInt16ScaleByFixedPointKernel::validate(input, bias, output, info.gemmlowp_min_bound, info.gemmlowp_max_bound);
                default:
                    return ARM_COMPUTE_CREATE_ERROR(ErrorCode::RUNTIME_ERROR, "Unsupported output data type.");
            }
        }
        case GEMMLowpOutputStageType::QUANTIZE_DOWN:
        {
            switch(output->data_type())
            {
                case DataType::QASYMM8:
                case DataType::QASYMM8_SIGNED:
                    return NEGEMMLowpQuantizeDownInt32ScaleKernel::validate(input, bias, output, &info);
                default:
                    return ARM_COMPUTE_CREATE_ERROR(ErrorCode::RUNTIME_ERROR, "Unsupported output data type.");
            }
        }
        default:
            return ARM_COMPUTE_CREATE_ERROR(ErrorCode::RUNTIME_ERROR, "Unsupported GEMMLowpOutputStage type.");
    }
}

The documentation for this class was generated from the following files:

NEGEMMLowpOutputStage.h
NEGEMMLowpOutputStage.cpp