Compute Library
 21.08
CpuQuantizeKernel Class Reference

Interface for the quantization layer kernel. More...

#include <CpuQuantizeKernel.h>

Collaboration diagram for CpuQuantizeKernel (diagram not shown).

Public Member Functions

 CpuQuantizeKernel ()=default
 
 ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE (CpuQuantizeKernel)
 
void configure (const ITensorInfo *src, ITensorInfo *dst)
 Set the input and output tensor info. More...
 
void run_op (ITensorPack &tensors, const Window &window, const ThreadInfo &info) override
 Execute the kernel on the passed window. More...
 
const char * name () const override
 Name of the kernel. More...
 
- Public Member Functions inherited from ICPPKernel
virtual ~ICPPKernel ()=default
 Default destructor. More...
 
virtual void run (const Window &window, const ThreadInfo &info)
 Execute the kernel on the passed window. More...
 
virtual void run_nd (const Window &window, const ThreadInfo &info, const Window &thread_locator)
 Legacy compatibility layer for implementations which do not support thread_locator. In these cases we simply narrow the interface down to the legacy version. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 
bool is_window_configured () const
 Function to check if the embedded window of this kernel has been configured. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *src, const ITensorInfo *dst)
 Static function to check if given info will lead to a valid configuration. More...
 

Detailed Description

Interface for the quantization layer kernel.

Note
The implementation supports only 3D input tensors

Definition at line 40 of file CpuQuantizeKernel.h.
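
For orientation, the element-wise operation on the F32 -> QASYMM8 path is the usual asymmetric affine quantization, q = round(x / scale) + offset clamped to the 8-bit range, with scale and offset taken from the destination tensor's quantization info. The stand-alone helper below is only an illustrative sketch of that arithmetic; the kernel itself runs the vectorised implementations selected in configure(), and the helper's name, rounding and clamping details are assumptions rather than library API.

#include <algorithm>
#include <cmath>
#include <cstdint>

// Hypothetical reference helper: q = round(x / scale) + offset, clamped to the
// unsigned 8-bit range. scale/offset stand in for the destination tensor's
// uniform quantization parameters.
inline uint8_t quantize_qasymm8_ref(float x, float scale, int32_t offset)
{
    const int32_t q = static_cast<int32_t>(std::lround(x / scale)) + offset;
    return static_cast<uint8_t>(std::min(255, std::max(0, q)));
}

// Example: with scale = 0.1f and offset = 10, an input of 1.5f maps to 25.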

Constructor & Destructor Documentation

◆ CpuQuantizeKernel()

CpuQuantizeKernel ( )
default

Member Function Documentation

◆ ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE()

ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE ( CpuQuantizeKernel  )

◆ configure()

void configure ( const ITensorInfo * src,
                 ITensorInfo *       dst 
               )

Set the input and output tensor info.

Parameters
[in]  src  Source tensor info. The dimensions over the third will be interpreted as batches. Data types supported: QASYMM8/QASYMM8_SIGNED/F32/F16.
[out] dst  Destination tensor info with the same dimensions as the input. Data types supported: QASYMM8/QASYMM8_SIGNED/QASYMM16.
Note
Output auto initialization is not supported by this kernel

Definition at line 111 of file CpuQuantizeKernel.cpp.

References ARM_COMPUTE_ERROR, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::calculate_max_window(), ITensorInfo::data_type(), and arm_compute::string_from_data_type().

void CpuQuantizeKernel::configure(const ITensorInfo *src, ITensorInfo *dst)
{
    ARM_COMPUTE_ERROR_ON_NULLPTR(src, dst);
    ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(src, dst));

    // Map "op_<src type>_<dst type>" keys to the matching templated quantize function
    static const std::map<std::string, QuantizeFunctionExecutorPtr> quant_map =
    {
        { "op_QASYMM8_QASYMM8", &CpuQuantizeKernel::run_quantize_qasymm8<uint8_t, uint8_t> },
        { "op_QASYMM8_QASYMM8_SIGNED", &CpuQuantizeKernel::run_quantize_qasymm8<uint8_t, int8_t> },
        { "op_QASYMM8_QASYMM16", &CpuQuantizeKernel::run_quantize_qasymm16<uint8_t> },

        { "op_QASYMM8_SIGNED_QASYMM8", &CpuQuantizeKernel::run_quantize_qasymm8<int8_t, uint8_t> },
        { "op_QASYMM8_SIGNED_QASYMM8_SIGNED", &CpuQuantizeKernel::run_quantize_qasymm8<int8_t, int8_t> },
        { "op_QASYMM8_SIGNED_QASYMM16", &CpuQuantizeKernel::run_quantize_qasymm16<int8_t> },

        { "op_F32_QASYMM8", &CpuQuantizeKernel::run_quantize_qasymm8<float, uint8_t> },
        { "op_F32_QASYMM8_SIGNED", &CpuQuantizeKernel::run_quantize_qasymm8<float, int8_t> },
        { "op_F32_QASYMM16", &CpuQuantizeKernel::run_quantize_qasymm16<float> },

#ifdef __ARM_FEATURE_FP16_VECTOR_ARITHMETIC
        { "op_F16_QASYMM8", &CpuQuantizeKernel::run_quantize_qasymm8<float16_t, uint8_t> },
        { "op_F16_QASYMM8_SIGNED", &CpuQuantizeKernel::run_quantize_qasymm8<float16_t, int8_t> },
        { "op_F16_QASYMM16", &CpuQuantizeKernel::run_quantize_qasymm16<float16_t> },
#endif /* __ARM_FEATURE_FP16_VECTOR_ARITHMETIC */
    };

    // Build the lookup key from the source and destination data types
    std::string function_to_call("op_");
    function_to_call += string_from_data_type(src->data_type()) + "_";
    function_to_call += string_from_data_type(dst->data_type());

    auto it = quant_map.find(function_to_call);

    if(it == quant_map.end())
    {
        ARM_COMPUTE_ERROR("Unsupported combination of input and output data types");
    }
    _func = it->second;

    // Configure kernel window
    Window win_config = calculate_max_window(*src, Steps());
    ICpuKernel::configure(win_config);
}
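
As a usage illustration, the sketch below validates and configures the kernel from tensor metadata alone. The include path, the arm_compute::cpu::kernels namespace, and the shapes and quantization parameters are assumptions made for the example; this internal kernel is normally driven through the CpuQuantize operator / NEQuantizationLayer rather than instantiated directly.

#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/TensorShape.h"
#include "src/cpu/kernels/CpuQuantizeKernel.h" // internal header; exact path depends on the release

using namespace arm_compute;

void configure_quantize_kernel_sketch()
{
    // 3D F32 source; the third dimension is treated as batches.
    TensorInfo src_info(TensorShape(16U, 16U, 2U), 1, DataType::F32);

    // Destination with matching shape; it must already carry the target
    // quantization parameters, since output auto initialization is not supported.
    TensorInfo dst_info(TensorShape(16U, 16U, 2U), 1, DataType::QASYMM8);
    dst_info.set_quantization_info(QuantizationInfo(0.1f, 10));

    // Check the data-type combination before configuring.
    if(!bool(cpu::kernels::CpuQuantizeKernel::validate(&src_info, &dst_info)))
    {
        return; // unsupported combination
    }

    cpu::kernels::CpuQuantizeKernel kernel;
    kernel.configure(&src_info, &dst_info);
    // kernel.window() now holds the maximum execution window.
}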

◆ name()

const char * name ( ) const
override virtual

Name of the kernel.

Returns
Kernel name

Implements ICPPKernel.

Definition at line 260 of file CpuQuantizeKernel.cpp.

{
    return "CpuQuantizeKernel";
}

◆ run_op()

void run_op ( ITensorPack &      tensors,
              const Window &     window,
              const ThreadInfo & info 
            )
override virtual

Execute the kernel on the passed window.

Warning
If is_parallelisable() returns false then the passed window must be equal to window()
Note
The window has to be a region within the window returned by the window() method
The width of the window has to be a multiple of num_elems_processed_per_iteration().
Parameters
[in] tensors A vector containing the tensors to operate on.
[in] window  Region on which to execute the kernel. (Must be a region of the window returned by window())
[in] info    Info about executing thread and CPU.

Reimplemented from ICPPKernel.

Definition at line 248 of file CpuQuantizeKernel.cpp.

References arm_compute::ACL_DST, arm_compute::ACL_SRC, ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, ARM_COMPUTE_UNUSED, ITensorPack::get_const_tensor(), ITensorPack::get_tensor(), and IKernel::window().

void CpuQuantizeKernel::run_op(ITensorPack &tensors, const Window &window, const ThreadInfo &info)
{
    ARM_COMPUTE_UNUSED(info);
    ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
    ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(IKernel::window(), window);
    ARM_COMPUTE_ERROR_ON(_func == nullptr);

    const auto src = tensors.get_const_tensor(TensorType::ACL_SRC);
    auto       dst = tensors.get_tensor(TensorType::ACL_DST);
    (this->*_func)(src, dst, window);
}
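
For completeness, a hedged sketch of the call side: the tensors are packed under the ACL_SRC / ACL_DST ids that run_op() extracts, and the kernel is executed over its full maximum window on a single thread. The function and variable names are illustrative, and src_tensor / dst_tensor are assumed to be allocated ITensors matching the configured infos; in practice the owning operator dispatches the kernel through the scheduler rather than calling run_op() directly.

#include "arm_compute/core/ITensor.h"
#include "arm_compute/core/ITensorPack.h"
#include "arm_compute/core/experimental/Types.h"  // TensorType::ACL_SRC / ACL_DST
#include "arm_compute/core/CPP/CPPTypes.h"        // ThreadInfo
#include "src/cpu/kernels/CpuQuantizeKernel.h"    // internal header; exact path depends on the release

using namespace arm_compute;

// Single-threaded sketch: run a configured CpuQuantizeKernel on already
// allocated tensors.
void run_quantize_kernel_sketch(cpu::kernels::CpuQuantizeKernel &kernel,
                                const ITensor *src_tensor, ITensor *dst_tensor)
{
    ITensorPack pack;
    pack.add_const_tensor(TensorType::ACL_SRC, src_tensor);
    pack.add_tensor(TensorType::ACL_DST, dst_tensor);

    ThreadInfo thread_info{}; // thread 0 of 1; a real scheduler fills this in
    kernel.run_op(pack, kernel.window(), thread_info);
}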

◆ validate()

static Status validate ( const ITensorInfo * src,
                         const ITensorInfo * dst 
                       )
static

Static function to check if given info will lead to a valid configuration.

Parameters
[in] src Source tensor info. See configure().
[in] dst Destination tensor info. See configure().

Returns
a status

The documentation for this class was generated from the following files:
CpuQuantizeKernel.h
CpuQuantizeKernel.cpp