Compute Library
 19.08
NEPixelWiseMultiplication Class Reference

Basic function to run NEPixelWiseMultiplicationKernel. More...

#include <NEPixelWiseMultiplication.h>

Collaboration diagram for NEPixelWiseMultiplication:

Public Member Functions

void configure (ITensor *input1, ITensor *input2, ITensor *output, float scale, ConvertPolicy overflow_policy, RoundingPolicy rounding_policy)
 Initialise the kernel's inputs, output and conversion policy. More...
 
- Public Member Functions inherited from INESimpleFunction
 INESimpleFunction ()
 Constructor. More...
 
void run () override final
 Run the kernels contained in the function. More...
 
- Public Member Functions inherited from IFunction
virtual ~IFunction ()=default
 Destructor. More...
 
virtual void prepare ()
 Prepare the function for executing. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input1, const ITensorInfo *input2, const ITensorInfo *output, float scale, ConvertPolicy overflow_policy, RoundingPolicy rounding_policy)
 Static function to check if given info will lead to a valid configuration of NEPixelWiseMultiplication. More...
 

Detailed Description

Basic function to run NEPixelWiseMultiplicationKernel.

Definition at line 35 of file NEPixelWiseMultiplication.h.

Member Function Documentation

◆ configure()

void configure (ITensor *input1, ITensor *input2, ITensor *output, float scale, ConvertPolicy overflow_policy, RoundingPolicy rounding_policy)

Initialise the kernel's inputs, output and conversion policy.

Note
For scale equal to 1/255 only round to nearest even (implemented as round half up) is supported. For all other scale values only round to zero (implemented as round towards minus infinity) is supported.
Parameters
[in,out] input1 An input tensor. Data types supported: U8/QASYMM8/S16/QSYMM16/F16/F32. This input tensor is [in, out] because its TensorInfo might be modified inside the kernel in case of broadcasting of dimension 0.
[in,out] input2 An input tensor. Data types supported: U8, QASYMM8 (only if input1 is QASYMM8), S16, QSYMM16 (only if input1 is QSYMM16), F16 (only if input1 is F16), F32 (only if input1 is F32). This input tensor is [in, out] because its TensorInfo might be modified inside the kernel in case of broadcasting of dimension 0.
[out] output Output tensor. Data types supported: U8 (only if both inputs are U8), QASYMM8 (only if both inputs are QASYMM8), S16, QSYMM16 (only if both inputs are QSYMM16), F16 (only if input1 is F16), F32 (only if both inputs are F32).
[in] scale Scale to apply after multiplication. Scale must be positive and its value must be either 1/255 or 1/2^n where n is between 0 and 15.
[in] overflow_policy Overflow policy. ConvertPolicy cannot be WRAP if the data type is QASYMM8 or QSYMM16.
[in] rounding_policy Rounding policy.

Definition at line 34 of file NEPixelWiseMultiplication.cpp.

{
    auto k = arm_compute::support::cpp14::make_unique<NEPixelWiseMultiplicationKernel>();
    k->configure(input1, input2, output, scale, overflow_policy, rounding_policy);
    _kernel = std::move(k);

    if(output->info()->dimension(0) > 1)
    {
        ITensor *broadcasted_info = (input1->info()->dimension(0) == 1) ? input1 : input2;

        if(broadcasted_info->info()->dimension(0) == 1)
        {
            _border_handler.configure(broadcasted_info, _kernel->border_size(), BorderMode::REPLICATE);
        }
    }
}

References ITensorInfo::dimension(), ITensor::info(), arm_compute::REPLICATE, arm_compute::test::validation::rounding_policy, and arm_compute::test::validation::scale.

Referenced by NELSTMLayerQuantized::configure().
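As a usage illustration (not part of the generated reference), a minimal configure-and-run sequence might look like the following sketch. It assumes the standard arm_compute runtime headers and an F32 element-wise multiply with unit scale; filling the input tensors is elided:

```cpp
#include "arm_compute/runtime/NEON/functions/NEPixelWiseMultiplication.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

int main()
{
    Tensor input1, input2, output;
    const TensorShape shape(16U, 16U);

    // Initialise tensor metadata before configuring the function.
    input1.allocator()->init(TensorInfo(shape, 1, DataType::F32));
    input2.allocator()->init(TensorInfo(shape, 1, DataType::F32));
    output.allocator()->init(TensorInfo(shape, 1, DataType::F32));

    // For scale values other than 1/255, only RoundingPolicy::TO_ZERO is supported.
    NEPixelWiseMultiplication mul;
    mul.configure(&input1, &input2, &output, 1.f, ConvertPolicy::SATURATE, RoundingPolicy::TO_ZERO);

    // Allocate backing memory, fill the inputs, then execute.
    input1.allocator()->allocate();
    input2.allocator()->allocate();
    output.allocator()->allocate();
    mul.run();
    return 0;
}
```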

◆ validate()

static Status validate (const ITensorInfo *input1, const ITensorInfo *input2, const ITensorInfo *output, float scale, ConvertPolicy overflow_policy, RoundingPolicy rounding_policy)

Static function to check if given info will lead to a valid configuration of NEPixelWiseMultiplication.

Note
For scale equal to 1/255 only round to nearest even (implemented as round half up) is supported. For all other scale values only round to zero (implemented as round towards minus infinity) is supported.
Parameters
[in] input1 An input tensor info. Data types supported: U8/QASYMM8/S16/QSYMM16/F16/F32.
[in] input2 An input tensor info. Data types supported: U8, QASYMM8 (only if input1 is QASYMM8), S16, QSYMM16 (only if input1 is QSYMM16), F16 (only if input1 is F16), F32 (only if input1 is F32).
[in] output Output tensor info. Data types supported: U8 (only if both inputs are U8), QASYMM8 (only if both inputs are QASYMM8), S16, QSYMM16 (only if both inputs are QSYMM16), F16 (only if input1 is F16), F32 (only if both inputs are F32).
[in] scale Scale to apply after multiplication. Scale must be positive and its value must be either 1/255 or 1/2^n where n is between 0 and 15.
[in] overflow_policy Overflow policy. ConvertPolicy cannot be WRAP if the data type is QASYMM8 or QSYMM16.
[in] rounding_policy Rounding policy.
Returns
a status

Definition at line 50 of file NEPixelWiseMultiplication.cpp.

{
    return NEPixelWiseMultiplicationKernel::validate(input1, input2, output, scale, overflow_policy, rounding_policy);
}

References arm_compute::test::validation::rounding_policy, arm_compute::test::validation::scale, and NEPixelWiseMultiplicationKernel::validate().

Referenced by NELSTMLayerQuantized::validate().
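A common pattern, sketched here under the same arm_compute type assumptions as above, is to call validate() on the tensor infos before configure(), so that an unsupported combination is reported as a Status rather than failing at configure time. The wrapper name is hypothetical:

```cpp
#include "arm_compute/runtime/NEON/functions/NEPixelWiseMultiplication.h"

using namespace arm_compute;

// Hypothetical wrapper: returns true only when the proposed configuration
// passes the static validation check.
bool can_multiply(const ITensorInfo *a, const ITensorInfo *b, const ITensorInfo *out, float scale)
{
    const Status status = NEPixelWiseMultiplication::validate(
        a, b, out, scale, ConvertPolicy::SATURATE, RoundingPolicy::TO_ZERO);
    return bool(status); // Status converts to true on success
}
```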


The documentation for this class was generated from the following files:
NEPixelWiseMultiplication.h
NEPixelWiseMultiplication.cpp