Compute Library
 22.05
NEFuseBatchNormalizationKernel Class Reference

Neon kernel to fuse the batch normalization node into a preceding convolution node. More...

#include <NEFuseBatchNormalizationKernel.h>

Collaboration diagram for NEFuseBatchNormalizationKernel (diagram not shown)

Public Member Functions

const char * name () const override
 Name of the kernel. More...
 
 NEFuseBatchNormalizationKernel ()
 Default constructor. More...
 
 NEFuseBatchNormalizationKernel (const NEFuseBatchNormalizationKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
NEFuseBatchNormalizationKernel & operator= (const NEFuseBatchNormalizationKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 NEFuseBatchNormalizationKernel (NEFuseBatchNormalizationKernel &&)=default
 Allow instances of this class to be moved. More...
 
NEFuseBatchNormalizationKernel & operator= (NEFuseBatchNormalizationKernel &&)=default
 Allow instances of this class to be moved. More...
 
 ~NEFuseBatchNormalizationKernel ()=default
 Default destructor. More...
 
void configure (const ITensor *input_weights, const ITensor *bn_mean, const ITensor *bn_var, ITensor *fused_weights, ITensor *fused_bias, const ITensor *input_bias=nullptr, const ITensor *bn_beta=nullptr, const ITensor *bn_gamma=nullptr, float epsilon=0.001f, FuseBatchNormalizationType fbn_type=FuseBatchNormalizationType::CONVOLUTION)
 Set the source and destination of the kernel. More...
 
void run (const Window &window, const ThreadInfo &info) override
 Execute the kernel on the passed window. More...
 
- Public Member Functions inherited from ICPPKernel
virtual ~ICPPKernel ()=default
 Default destructor. More...
 
virtual void run_nd (const Window &window, const ThreadInfo &info, const Window &thread_locator)
 Legacy compatibility layer for implementations which do not support thread_locator. In these cases we simply narrow the interface down to the legacy version. More...
 
virtual void run_op (ITensorPack &tensors, const Window &window, const ThreadInfo &info)
 Execute the kernel on the passed window. More...
 
virtual size_t get_mws (const CPUInfo &platform, size_t thread_count) const
 Return minimum workload size of the relevant kernel. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 
bool is_window_configured () const
 Function to check if the embedded window of this kernel has been configured. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input_weights, const ITensorInfo *bn_mean, const ITensorInfo *bn_var, const ITensorInfo *fused_weights, const ITensorInfo *fused_bias, const ITensorInfo *input_bias=nullptr, const ITensorInfo *bn_beta=nullptr, const ITensorInfo *bn_gamma=nullptr, float epsilon=0.001f, FuseBatchNormalizationType fbn_type=FuseBatchNormalizationType::CONVOLUTION)
 Static function to check if given info will lead to a valid configuration of NEFuseBatchNormalizationKernel. More...
 

Additional Inherited Members

- Static Public Attributes inherited from ICPPKernel
static constexpr size_t default_mws = 1
 

Detailed Description

Neon kernel to fuse the batch normalization node into a preceding convolution node.

Definition at line 35 of file NEFuseBatchNormalizationKernel.h.
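
The fusion folds the batch-normalization parameters into the convolution weights and bias, so at inference time only the convolution remains. A minimal per-channel sketch of that arithmetic in plain C++ (illustrative only, independent of the library; the parameter names mirror configure(), but this is not the kernel's implementation):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative fusion: for each output channel c,
//   scale = gamma[c] / sqrt(var[c] + epsilon)
//   w'[c] = w[c] * scale
//   b'[c] = (b[c] - mean[c]) * scale + beta[c]
// Per the docs, a nullptr bn_beta behaves as beta = 0.0 and a
// nullptr bn_gamma as gamma = 1.0.
void fuse_batch_norm(std::vector<float>       &weights, // flattened, one block per channel
                     std::vector<float>       &bias,    // one entry per channel
                     const std::vector<float> &mean,
                     const std::vector<float> &var,
                     const std::vector<float> &beta,
                     const std::vector<float> &gamma,
                     float epsilon = 0.001f)
{
    const size_t channels       = bias.size();
    const size_t elems_per_chan = weights.size() / channels;
    for(size_t c = 0; c < channels; ++c)
    {
        const float scale = gamma[c] / std::sqrt(var[c] + epsilon);
        for(size_t i = 0; i < elems_per_chan; ++i)
        {
            weights[c * elems_per_chan + i] *= scale;
        }
        bias[c] = (bias[c] - mean[c]) * scale + beta[c];
    }
}
```

Because the normalization is affine per channel, the fold is exact: the fused convolution produces bit-comparable results to convolution followed by batch normalization (up to floating-point rounding).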

Constructor & Destructor Documentation

◆ NEFuseBatchNormalizationKernel() [1/3]

Default constructor.

Definition at line 214 of file NEFuseBatchNormalizationKernel.cpp.

Referenced by NEFuseBatchNormalizationKernel::name().

    : _input_weights(nullptr), _input_bias(nullptr), _bn_mean(nullptr), _bn_var(nullptr), _bn_gamma(nullptr), _bn_beta(nullptr), _fused_weights(nullptr), _fused_bias(nullptr), _epsilon(),
      _run_in_place_weights(false), _run_in_place_bias(false), _func(nullptr)
{
}

◆ NEFuseBatchNormalizationKernel() [2/3]

Prevent instances of this class from being copied (As this class contains pointers)

◆ NEFuseBatchNormalizationKernel() [3/3]

Allow instances of this class to be moved.

◆ ~NEFuseBatchNormalizationKernel()

Default destructor.

Referenced by NEFuseBatchNormalizationKernel::name().

Member Function Documentation

◆ configure()

void configure ( const ITensor *  input_weights,
    const ITensor *  bn_mean,
    const ITensor *  bn_var,
    ITensor *  fused_weights,
    ITensor *  fused_bias,
    const ITensor *  input_bias = nullptr,
    const ITensor *  bn_beta = nullptr,
    const ITensor *  bn_gamma = nullptr,
    float  epsilon = 0.001f,
    FuseBatchNormalizationType  fbn_type = FuseBatchNormalizationType::CONVOLUTION 
)

Set the source, destination of the kernel.

Parameters
[in]   input_weights   Input weights tensor for convolution or depthwise convolution layer. Data type supported: F16/F32. Data layout supported: NCHW, NHWC
[in]   bn_mean         Batch normalization layer mean tensor. Same as input_weights
[in]   bn_var          Batch normalization layer variance tensor. Same as input_weights
[out]  fused_weights   (Optional) Output fused weights tensor. It can be a nullptr in case of in-place computation. Same as input_weights
[out]  fused_bias      (Optional) Output fused bias tensor. It can be a nullptr in case of in-place computation and input_bias != nullptr. Same as input_weights
[in]   input_bias      (Optional) Input bias tensor for convolution or depthwise convolution layer. It can be a nullptr in case the bias tensor is not required. Same as input_weights
[in]   bn_beta         (Optional) Batch normalization layer beta tensor. It can be a nullptr in case the beta tensor is not required. Same as input_weights
Note
If nullptr, bn_beta is set to 0.0
Parameters
[in]   bn_gamma        (Optional) Batch normalization layer gamma tensor. It can be a nullptr in case the gamma tensor is not required. Same as input_weights
Note
If nullptr, bn_gamma is set to 1.0
Parameters
[in]   epsilon         (Optional) Batch normalization layer epsilon parameter. Defaults to 0.001f.
[in]   fbn_type        (Optional) Fused batch normalization type. Defaults to CONVOLUTION.

Definition at line 220 of file NEFuseBatchNormalizationKernel.cpp.

References ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), arm_compute::calculate_max_window(), ICloneable< T >::clone(), ITensorInfo::data_layout(), ITensorInfo::data_type(), arm_compute::quantization::epsilon, fbn_type, CPUInfo::get(), CPUInfo::get_isa(), ITensor::info(), and arm_compute::cpu::kernels::validate_arguments().

Referenced by NEFuseBatchNormalizationKernel::name().

{
    ARM_COMPUTE_ERROR_ON_NULLPTR(input_weights, bn_mean, bn_var);

    _input_weights = input_weights;
    _input_bias    = input_bias;
    _bn_mean       = bn_mean;
    _bn_var        = bn_var;
    _bn_beta       = bn_beta;
    _bn_gamma      = bn_gamma;
    _fused_weights = fused_weights;
    _fused_bias    = fused_bias;
    _epsilon       = epsilon;

    _run_in_place_weights = (fused_weights == nullptr) || (fused_weights == input_weights);
    _run_in_place_bias    = (fused_bias == nullptr) || (input_bias != nullptr && fused_bias == input_bias);

    // Auto initialize outputs
    if(_fused_weights != nullptr)
    {
        // Output tensor auto initialization if not yet initialized
        auto_init_if_empty(*_fused_weights->info(), *_input_weights->info()->clone());
    }
    if(_fused_bias != nullptr)
    {
        // Output tensor auto initialization if not yet initialized
        auto_init_if_empty(*_fused_bias->info(), *_bn_mean->info()->clone());
    }

    // Validate arguments
    ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(input_weights->info(), bn_mean->info(), bn_var->info(),
                                                  (fused_weights != nullptr) ? fused_weights->info() : nullptr,
                                                  (fused_bias != nullptr) ? fused_bias->info() : nullptr,
                                                  (input_bias != nullptr) ? input_bias->info() : nullptr,
                                                  (bn_beta != nullptr) ? bn_beta->info() : nullptr,
                                                  (bn_gamma != nullptr) ? bn_gamma->info() : nullptr,
                                                  epsilon, fbn_type));

    const auto *uk = get_implementation(FuseBatchNormalizeSelectorData{ input_weights->info()->data_type(), input_weights->info()->data_layout(), fbn_type, CPUInfo::get().get_isa() });
    ARM_COMPUTE_ERROR_ON(uk->ukernel == nullptr);
    _func = uk->ukernel;

    // Configure kernel window
    Window win = calculate_max_window(*input_weights->info());
    INEKernel::configure(win);
}

◆ name()

◆ operator=() [1/2]

Prevent instances of this class from being copied (As this class contains pointers)

Referenced by NEFuseBatchNormalizationKernel::name().

◆ operator=() [2/2]

Allow instances of this class to be moved.

◆ run()

void run ( const Window &  window,
    const ThreadInfo &  info 
)
override virtual

Execute the kernel on the passed window.

Warning
If is_parallelisable() returns false then the passed window must be equal to window()
Note
The window has to be a region within the window returned by the window() method
The width of the window has to be a multiple of num_elems_processed_per_iteration().
Parameters
[in]windowRegion on which to execute the kernel. (Must be a region of the window returned by window())
[in]infoInfo about executing thread and CPU.

Reimplemented from ICPPKernel.

Definition at line 280 of file NEFuseBatchNormalizationKernel.cpp.

References ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, ARM_COMPUTE_UNUSED, and IKernel::window().

Referenced by NEFuseBatchNormalizationKernel::name().

{
    ARM_COMPUTE_UNUSED(info);
    ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL(this);
    ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW(IKernel::window(), window);

    ARM_COMPUTE_ERROR_ON(_func == nullptr);
    (*_func)(_input_weights, _input_bias, _fused_weights, _fused_bias, _bn_mean, _bn_var, _bn_beta, _bn_gamma, _epsilon, window);
}

◆ validate()

Status validate ( const ITensorInfo *  input_weights,
    const ITensorInfo *  bn_mean,
    const ITensorInfo *  bn_var,
    const ITensorInfo *  fused_weights,
    const ITensorInfo *  fused_bias,
    const ITensorInfo *  input_bias = nullptr,
    const ITensorInfo *  bn_beta = nullptr,
    const ITensorInfo *  bn_gamma = nullptr,
    float  epsilon = 0.001f,
    FuseBatchNormalizationType  fbn_type = FuseBatchNormalizationType::CONVOLUTION 
)
static

Static function to check if given info will lead to a valid configuration of NEFuseBatchNormalizationKernel.

Parameters
[in]   input_weights   Input weights tensor info for convolution or depthwise convolution layer. Data type supported: F16/F32. Data layout supported: NCHW, NHWC
[in]   bn_mean         Batch normalization layer mean tensor info. Same as input_weights
[in]   bn_var          Batch normalization layer variance tensor info. Same as input_weights
[in]   fused_weights   (Optional) Output fused weights tensor info. It can be a nullptr in case of in-place computation. Same as input_weights
[in]   fused_bias      (Optional) Output fused bias tensor info. It can be a nullptr in case of in-place computation and input_bias != nullptr. Same as input_weights
[in]   input_bias      (Optional) Input bias tensor info for convolution or depthwise convolution layer. It can be a nullptr in case the bias tensor is not required. Same as input_weights
[in]   bn_beta         (Optional) Batch normalization layer beta tensor info. It can be a nullptr in case the beta tensor is not required. Same as input_weights
Note
If nullptr, bn_beta is set to 0.0
Parameters
[in]   bn_gamma        (Optional) Batch normalization layer gamma tensor info. It can be a nullptr in case the gamma tensor is not required. Same as input_weights
Note
If nullptr, bn_gamma is set to 1.0
Parameters
[in]   epsilon         (Optional) Batch normalization layer epsilon parameter. Defaults to 0.001f.
[in]   fbn_type        (Optional) Fused batch normalization type. Defaults to CONVOLUTION.
Returns
a status

Definition at line 271 of file NEFuseBatchNormalizationKernel.cpp.

References ARM_COMPUTE_RETURN_ON_ERROR, and arm_compute::cpu::kernels::validate_arguments().

Referenced by NEFuseBatchNormalizationKernel::name(), and NEFuseBatchNormalization::validate().

{
    ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(input_weights, bn_mean, bn_var, fused_weights, fused_bias, input_bias, bn_beta, bn_gamma, epsilon, fbn_type));
    return Status{};
}
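
ARM_COMPUTE_RETURN_ON_ERROR is the usual early-return-on-status idiom: evaluate a check, and if it carries an error, propagate it immediately. A self-contained sketch of the pattern (the Status type and macro here are stand-ins, not the library's definitions):

```cpp
#include <cassert>
#include <string>

// Stand-in status type: default-constructed means success.
struct Status
{
    bool        ok = true;
    std::string msg;
};

// Early-return macro in the spirit of ARM_COMPUTE_RETURN_ON_ERROR.
#define RETURN_ON_ERROR(status)       \
    do                                \
    {                                 \
        Status s_ = (status);         \
        if(!s_.ok) return s_;         \
    } while(false)

// Hypothetical argument check, standing in for validate_arguments().
Status check_epsilon(float epsilon)
{
    if(epsilon <= 0.f) return Status{ false, "epsilon must be positive" };
    return Status{};
}

Status validate(float epsilon)
{
    RETURN_ON_ERROR(check_epsilon(epsilon)); // propagate the first failure
    return Status{};                         // all checks passed
}
```

This is why validate() can be called before allocating any tensors: it runs the same checks as configure() but reports failures through the returned Status instead of throwing.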

The documentation for this class was generated from the following files:

NEFuseBatchNormalizationKernel.h
NEFuseBatchNormalizationKernel.cpp