Compute Library
 21.02
CLFuseBatchNormalizationKernel Class Reference

OpenCL kernel to fuse the batch normalization node to a preceding convolution node. More...

#include <CLFuseBatchNormalizationKernel.h>

Collaboration diagram for CLFuseBatchNormalizationKernel:

Public Member Functions

 CLFuseBatchNormalizationKernel ()
 Default constructor. More...
 
 CLFuseBatchNormalizationKernel (const CLFuseBatchNormalizationKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
CLFuseBatchNormalizationKernel & operator= (const CLFuseBatchNormalizationKernel &)=delete
 Prevent instances of this class from being copied (As this class contains pointers) More...
 
 CLFuseBatchNormalizationKernel (CLFuseBatchNormalizationKernel &&)=default
 Allow instances of this class to be moved. More...
 
CLFuseBatchNormalizationKernel & operator= (CLFuseBatchNormalizationKernel &&)=default
 Allow instances of this class to be moved. More...
 
 ~CLFuseBatchNormalizationKernel ()=default
 Default destructor. More...
 
void configure (const ICLTensor *input_weights, const ICLTensor *bn_mean, const ICLTensor *bn_var, ICLTensor *fused_weights, ICLTensor *fused_bias, const ICLTensor *input_bias=nullptr, const ICLTensor *bn_beta=nullptr, const ICLTensor *bn_gamma=nullptr, float epsilon=0.001f, FuseBatchNormalizationType fbn_type=FuseBatchNormalizationType::CONVOLUTION)
 Set the source and destination of the kernel. More...
 
void configure (const CLCompileContext &compile_context, const ICLTensor *input_weights, const ICLTensor *bn_mean, const ICLTensor *bn_var, ICLTensor *fused_weights, ICLTensor *fused_bias, const ICLTensor *input_bias=nullptr, const ICLTensor *bn_beta=nullptr, const ICLTensor *bn_gamma=nullptr, float epsilon=0.001f, FuseBatchNormalizationType fbn_type=FuseBatchNormalizationType::CONVOLUTION)
 Set the source and destination of the kernel. More...
 
void run (const Window &window, cl::CommandQueue &queue) override
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
- Public Member Functions inherited from ICLKernel
 ICLKernel ()
 Constructor. More...
 
cl::Kernel & kernel ()
 Returns a reference to the OpenCL kernel of this object. More...
 
template<typename T >
void add_1D_array_argument (unsigned int &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed 1D array's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_1D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 1D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_2D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_2D_tensor_argument_if (bool cond, unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 2D tensor's parameters to the object's kernel's arguments starting from the index idx if the condition is true. More...
 
void add_3D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 3D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
void add_4D_tensor_argument (unsigned int &idx, const ICLTensor *tensor, const Window &window)
 Add the passed 4D tensor's parameters to the object's kernel's arguments starting from the index idx. More...
 
virtual void run_op (ITensorPack &tensors, const Window &window, cl::CommandQueue &queue)
 Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue. More...
 
template<typename T >
void add_argument (unsigned int &idx, T value)
 Add the passed parameters to the object's kernel's arguments starting from the index idx. More...
 
void set_lws_hint (const cl::NDRange &lws_hint)
 Set the Local-Workgroup-Size hint. More...
 
cl::NDRange lws_hint () const
 Return the Local-Workgroup-Size hint. More...
 
void set_wbsm_hint (const cl_int &wbsm_hint)
 Set the workgroup batch size modifier hint. More...
 
cl_int wbsm_hint () const
 Return the workgroup batch size modifier hint. More...
 
const std::string & config_id () const
 Get the configuration ID. More...
 
void set_target (GPUTarget target)
 Set the targeted GPU architecture. More...
 
void set_target (cl::Device &device)
 Set the targeted GPU architecture according to the CL device. More...
 
GPUTarget get_target () const
 Get the targeted GPU architecture. More...
 
size_t get_max_workgroup_size ()
 Get the maximum workgroup size for the device the CLKernelLibrary uses. More...
 
template<unsigned int dimension_size>
void add_tensor_argument (unsigned &idx, const ICLTensor *tensor, const Window &window)
 
template<typename T , unsigned int dimension_size>
void add_array_argument (unsigned &idx, const ICLArray< T > *array, const Strides &strides, unsigned int num_dimensions, const Window &window)
 Add the passed array's parameters to the object's kernel's arguments starting from the index idx. More...
 
- Public Member Functions inherited from IKernel
 IKernel ()
 Constructor. More...
 
virtual ~IKernel ()=default
 Destructor. More...
 
virtual bool is_parallelisable () const
 Indicates whether or not the kernel is parallelisable. More...
 
virtual BorderSize border_size () const
 The size of the border for that kernel. More...
 
const Window & window () const
 The maximum window the kernel can be executed on. More...
 

Static Public Member Functions

static Status validate (const ITensorInfo *input_weights, const ITensorInfo *bn_mean, const ITensorInfo *bn_var, const ITensorInfo *fused_weights, const ITensorInfo *fused_bias, const ITensorInfo *input_bias=nullptr, const ITensorInfo *bn_beta=nullptr, const ITensorInfo *bn_gamma=nullptr, float epsilon=0.001f, FuseBatchNormalizationType fbn_type=FuseBatchNormalizationType::CONVOLUTION)
 Static function to check if given info will lead to a valid configuration of CLFuseBatchNormalizationKernel. More...
 
- Static Public Member Functions inherited from ICLKernel
static constexpr unsigned int num_arguments_per_1D_array ()
 Returns the number of arguments enqueued per 1D array object. More...
 
static constexpr unsigned int num_arguments_per_1D_tensor ()
 Returns the number of arguments enqueued per 1D tensor object. More...
 
static constexpr unsigned int num_arguments_per_2D_tensor ()
 Returns the number of arguments enqueued per 2D tensor object. More...
 
static constexpr unsigned int num_arguments_per_3D_tensor ()
 Returns the number of arguments enqueued per 3D tensor object. More...
 
static constexpr unsigned int num_arguments_per_4D_tensor ()
 Returns the number of arguments enqueued per 4D tensor object. More...
 
static cl::NDRange gws_from_window (const Window &window)
 Get the global work size given an execution window. More...
 

Detailed Description

OpenCL kernel to fuse the batch normalization node to a preceding convolution node.

Definition at line 35 of file CLFuseBatchNormalizationKernel.h.
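
For reference, fusing batch normalization into a preceding (depthwise) convolution folds the normalization statistics and affine parameters into the convolution weights and bias. A minimal sketch of the standard folding formulas, written here for illustration rather than copied from the kernel source (with mu = bn_mean, sigma^2 = bn_var, gamma = bn_gamma or 1 if nullptr, beta = bn_beta or 0 if nullptr, b = input_bias or 0 if nullptr):

\[ W_{\text{fused}} = \frac{\gamma}{\sqrt{\sigma^2 + \epsilon}}\, W \qquad b_{\text{fused}} = \frac{\gamma\,(b - \mu)}{\sqrt{\sigma^2 + \epsilon}} + \beta \]

The scaling is applied per output channel, which is why bn_mean, bn_var, bn_beta and bn_gamma are 1D tensors while the weights are multi-dimensional.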

Constructor & Destructor Documentation

◆ CLFuseBatchNormalizationKernel() [1/3]

Default constructor.

Definition at line 102 of file CLFuseBatchNormalizationKernel.cpp.

103  : _input_weights(nullptr), _input_bias(nullptr), _bn_mean(nullptr), _bn_var(nullptr), _bn_gamma(nullptr), _bn_beta(nullptr), _fused_weights(nullptr), _fused_bias(nullptr), _epsilon(),
104  _run_in_place_weights(false), _run_in_place_bias(false)
105 {
106 }

◆ CLFuseBatchNormalizationKernel() [2/3]

Prevent instances of this class from being copied (As this class contains pointers)

◆ CLFuseBatchNormalizationKernel() [3/3]

Allow instances of this class to be moved.

◆ ~CLFuseBatchNormalizationKernel()

Default destructor.

Member Function Documentation

◆ configure() [1/2]

void configure ( const ICLTensor * input_weights,
                 const ICLTensor * bn_mean,
                 const ICLTensor * bn_var,
                 ICLTensor * fused_weights,
                 ICLTensor * fused_bias,
                 const ICLTensor * input_bias = nullptr,
                 const ICLTensor * bn_beta = nullptr,
                 const ICLTensor * bn_gamma = nullptr,
                 float epsilon = 0.001f,
                 FuseBatchNormalizationType fbn_type = FuseBatchNormalizationType::CONVOLUTION 
               )

Set the source and destination of the kernel.

Parameters
    [in]  input_weights   Input weights tensor for convolution or depthwise convolution layer. Data type supported: F16/F32. Data layout supported: NCHW, NHWC
    [in]  bn_mean         Batch normalization layer mean tensor. Same as input_weights
    [in]  bn_var          Batch normalization layer variance tensor. Same as input_weights
    [out] fused_weights   Output fused weights tensor. It can be a nullptr in case of in-place computation. Same as input_weights
    [out] fused_bias      Output fused bias tensor. It can be a nullptr in case of in-place computation and input_bias != nullptr. Same as input_weights
    [in]  input_bias      (Optional) Input bias tensor for convolution or depthwise convolution layer. It can be a nullptr in case the bias tensor is not required. Same as input_weights
    [in]  bn_beta         (Optional) Batch normalization layer beta tensor. It can be a nullptr in case the beta tensor is not required. Same as input_weights
Note
    If nullptr, bn_beta is set to 0.0
Parameters
    [in]  bn_gamma        (Optional) Batch normalization layer gamma tensor. It can be a nullptr in case the gamma tensor is not required. Same as input_weights
Note
    If nullptr, bn_gamma is set to 1.0
Parameters
    [in]  epsilon         (Optional) Batch normalization layer epsilon parameter. Defaults to 0.001f.
    [in]  fbn_type        (Optional) Fused batch normalization type. Defaults to CONVOLUTION.

Definition at line 108 of file CLFuseBatchNormalizationKernel.cpp.

References CLKernelLibrary::get().

112 {
113  configure(CLKernelLibrary::get().get_compile_context(), input_weights, bn_mean, bn_var, fused_weights, fused_bias, input_bias, bn_beta, bn_gamma, epsilon, fbn_type);
114 }
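
A minimal usage sketch of this overload. The tensor names are hypothetical: conv_weights, bn_mean, bn_var, bn_beta, bn_gamma, fused_weights and fused_bias are assumed to be CLTensor objects that have already been initialised and allocated with compatible shapes and an F16/F32 data type, and the kernel header location may differ between releases.

    CLFuseBatchNormalizationKernel fuse_bn_kernel;

    // Fuse the BN parameters into new weight/bias tensors (out-of-place).
    // Passing nullptr for input_bias means the convolution has no bias;
    // epsilon and fbn_type keep their documented defaults.
    fuse_bn_kernel.configure(&conv_weights, &bn_mean, &bn_var,
                             &fused_weights, &fused_bias,
                             /* input_bias */ nullptr,
                             &bn_beta, &bn_gamma,
                             0.001f,
                             FuseBatchNormalizationType::CONVOLUTION);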

◆ configure() [2/2]

void configure ( const CLCompileContext & compile_context,
                 const ICLTensor * input_weights,
                 const ICLTensor * bn_mean,
                 const ICLTensor * bn_var,
                 ICLTensor * fused_weights,
                 ICLTensor * fused_bias,
                 const ICLTensor * input_bias = nullptr,
                 const ICLTensor * bn_beta = nullptr,
                 const ICLTensor * bn_gamma = nullptr,
                 float epsilon = 0.001f,
                 FuseBatchNormalizationType fbn_type = FuseBatchNormalizationType::CONVOLUTION 
               )

Set the source and destination of the kernel.

Parameters
    [in]  compile_context The compile context to be used.
    [in]  input_weights   Input weights tensor for convolution or depthwise convolution layer. Data type supported: F16/F32. Data layout supported: NCHW, NHWC
    [in]  bn_mean         Batch normalization layer mean tensor. Same as input_weights
    [in]  bn_var          Batch normalization layer variance tensor. Same as input_weights
    [out] fused_weights   Output fused weights tensor. It can be a nullptr in case of in-place computation. Same as input_weights
    [out] fused_bias      Output fused bias tensor. It can be a nullptr in case of in-place computation and input_bias != nullptr. Same as input_weights
    [in]  input_bias      (Optional) Input bias tensor for convolution or depthwise convolution layer. It can be a nullptr in case the bias tensor is not required. Same as input_weights
    [in]  bn_beta         (Optional) Batch normalization layer beta tensor. It can be a nullptr in case the beta tensor is not required. Same as input_weights
Note
    If nullptr, bn_beta is set to 0.0
Parameters
    [in]  bn_gamma        (Optional) Batch normalization layer gamma tensor. It can be a nullptr in case the gamma tensor is not required. Same as input_weights
Note
    If nullptr, bn_gamma is set to 1.0
Parameters
    [in]  epsilon         (Optional) Batch normalization layer epsilon parameter. Defaults to 0.001f.
    [in]  fbn_type        (Optional) Fused batch normalization type. Defaults to CONVOLUTION.

Definition at line 116 of file CLFuseBatchNormalizationKernel.cpp.

References CLBuildOptions::add_option(), CLBuildOptions::add_option_if(), ARM_COMPUTE_ERROR_ON, ARM_COMPUTE_ERROR_ON_NULLPTR, ARM_COMPUTE_ERROR_THROW_ON, arm_compute::auto_init_if_empty(), arm_compute::calculate_max_window(), ICloneable< T >::clone(), arm_compute::CONVOLUTION, arm_compute::create_kernel(), ITensorInfo::data_type(), ITensorInfo::dimension(), arm_compute::quantization::epsilon, arm_compute::float_to_string_with_full_precision(), arm_compute::get_cl_type_from_data_type(), arm_compute::get_padding_info(), arm_compute::has_padding_changed(), ITensor::info(), arm_compute::NHWC, CLBuildOptions::options(), arm_compute::support::cpp11::to_string(), and arm_compute::validate_arguments().

120 {
121  ARM_COMPUTE_ERROR_ON_NULLPTR(input_weights, bn_mean, bn_var);
122 
123  auto padding_info = get_padding_info({ input_weights, bn_mean, bn_var, fused_weights, fused_bias, input_bias, bn_beta, bn_gamma });
124 
125  _input_weights = input_weights;
126  _input_bias = input_bias;
127  _bn_mean = bn_mean;
128  _bn_var = bn_var;
129  _bn_beta = bn_beta;
130  _bn_gamma = bn_gamma;
131  _fused_weights = fused_weights;
132  _fused_bias = fused_bias;
133  _epsilon = epsilon;
134 
135  _run_in_place_weights = (fused_weights == nullptr) || (fused_weights == input_weights);
136  _run_in_place_bias = (input_bias != nullptr && fused_bias == nullptr) || (input_bias != nullptr && fused_bias == input_bias);
137 
138  // Auto initialize outputs
139  if(_fused_weights != nullptr)
140  {
141  // Output tensor auto initialization if not yet initialized
142  auto_init_if_empty(*_fused_weights->info(), *_input_weights->info()->clone());
143  }
144  if(_fused_bias != nullptr)
145  {
146  // Output tensor auto initialization if not yet initialized
147  auto_init_if_empty(*_fused_bias->info(), *_bn_mean->info()->clone());
148  }
149 
150  // Validate arguments
151  ARM_COMPUTE_ERROR_THROW_ON(validate_arguments(input_weights->info(), bn_mean->info(), bn_var->info(),
152  (fused_weights != nullptr) ? fused_weights->info() : nullptr,
153  (fused_bias != nullptr) ? fused_bias->info() : nullptr,
154  (input_bias != nullptr) ? input_bias->info() : nullptr,
155  (bn_beta != nullptr) ? bn_beta->info() : nullptr,
156  (bn_gamma != nullptr) ? bn_gamma->info() : nullptr,
157  epsilon, fbn_type));
158 
159  // Configure kernel window
160  Window win = calculate_max_window(*input_weights->info());
161  ICLKernel::configure_internal(win);
162 
163  // Set build options
164  CLBuildOptions build_opts;
165  build_opts.add_option("-DDATA_TYPE=" + get_cl_type_from_data_type(input_weights->info()->data_type()));
166  build_opts.add_option_if(fbn_type == FuseBatchNormalizationType::CONVOLUTION, "-DDIM2=" + support::cpp11::to_string(input_weights->info()->dimension(2)));
167  build_opts.add_option("-DEPSILON=" + float_to_string_with_full_precision(epsilon));
168  build_opts.add_option_if(_input_weights->info()->data_layout() == DataLayout::NHWC, "-DNHWC");
169  build_opts.add_option_if(_run_in_place_weights, "-DIN_PLACE_W");
170  build_opts.add_option_if(_run_in_place_bias, "-DIN_PLACE_B");
171  build_opts.add_option_if(input_bias != nullptr, "-DBIAS");
172  build_opts.add_option_if(bn_beta != nullptr, "-DBETA");
173  build_opts.add_option_if(bn_gamma != nullptr, "-DGAMMA");
174 
175  // Create kernel
176  _kernel = create_kernel(compile_context, "fuse_batchnormalization_layer", build_opts.options());
177 
179 }
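
A hedged sketch of calling the compile-context overload directly (same hypothetical tensors as above). Passing CLKernelLibrary's default compile context reproduces what the first overload does internally, as shown in the forwarding call documented above; a caller-owned CLCompileContext could be supplied instead:

    // Explicitly supply the compile context; trailing optional arguments keep their defaults.
    fuse_bn_kernel.configure(CLKernelLibrary::get().get_compile_context(),
                             &conv_weights, &bn_mean, &bn_var,
                             &fused_weights, &fused_bias,
                             nullptr, &bn_beta, &bn_gamma);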

◆ operator=() [1/2]

Prevent instances of this class from being copied (As this class contains pointers)

◆ operator=() [2/2]

Allow instances of this class to be moved.

◆ run()

void run ( const Window & window,
           cl::CommandQueue & queue 
         )
override virtual

Enqueue the OpenCL kernel to process the given window on the passed OpenCL command queue.

Note
The queue is not flushed by this method, and therefore the kernel will not have been executed by the time this method returns.
Parameters
    [in]     window  Region on which to execute the kernel. (Must be a valid region of the window returned by window()).
    [in,out] queue   Command queue on which to enqueue the kernel.

Reimplemented from ICLKernel.

Definition at line 190 of file CLFuseBatchNormalizationKernel.cpp.

References ICLKernel::add_1D_tensor_argument(), ICLKernel::add_3D_tensor_argument(), ARM_COMPUTE_ERROR_ON_INVALID_SUBWINDOW, ARM_COMPUTE_ERROR_ON_UNCONFIGURED_KERNEL, Window::collapse(), Window::DimZ, arm_compute::enqueue(), Window::first_slice_window_1D(), Window::first_slice_window_3D(), ICLKernel::lws_hint(), and IKernel::window().

191 {
194 
195  // Create window slice
196  Window collapsed_window = window.collapse(window, Window::DimZ);
197  Window slice_1d = window.first_slice_window_1D();
198  Window slice_3d = collapsed_window.first_slice_window_3D();
199 
200  // Add kernel arguments
201  unsigned int idx = 0;
202  add_3D_tensor_argument(idx, _input_weights, slice_3d);
203  if(_input_bias != nullptr)
204  {
205  add_1D_tensor_argument(idx, _input_bias, slice_1d);
206  }
207  add_1D_tensor_argument(idx, _bn_mean, slice_1d);
208  add_1D_tensor_argument(idx, _bn_var, slice_1d);
209  if(!_run_in_place_weights)
210  {
211  add_3D_tensor_argument(idx, _fused_weights, slice_3d);
212  }
213  if(!_run_in_place_bias)
214  {
215  add_1D_tensor_argument(idx, _fused_bias, slice_1d);
216  }
217  if(_bn_beta != nullptr)
218  {
219  add_1D_tensor_argument(idx, _bn_beta, slice_1d);
220  }
221  if(_bn_gamma != nullptr)
222  {
223  add_1D_tensor_argument(idx, _bn_gamma, slice_1d);
224  }
225  enqueue(queue, *this, slice_3d, lws_hint());
226 }
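
As an illustrative sketch (assuming the fuse_bn_kernel configuration shown earlier), the kernel can be launched either through the library's CLScheduler, which is the usual path inside the runtime functions, or by calling run() directly with an explicit command queue:

    // Typical path: let CLScheduler pick the command queue.
    CLScheduler::get().enqueue(fuse_bn_kernel);

    // Direct path: run over the kernel's maximum window on an explicit queue.
    cl::CommandQueue &queue = CLScheduler::get().queue();
    fuse_bn_kernel.run(fuse_bn_kernel.window(), queue);

    // run() does not flush the queue (see the note above), so synchronise
    // before reading fused_weights / fused_bias back.
    CLScheduler::get().sync();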

◆ validate()

Status validate ( const ITensorInfo * input_weights,
                  const ITensorInfo * bn_mean,
                  const ITensorInfo * bn_var,
                  const ITensorInfo * fused_weights,
                  const ITensorInfo * fused_bias,
                  const ITensorInfo * input_bias = nullptr,
                  const ITensorInfo * bn_beta = nullptr,
                  const ITensorInfo * bn_gamma = nullptr,
                  float epsilon = 0.001f,
                  FuseBatchNormalizationType fbn_type = FuseBatchNormalizationType::CONVOLUTION 
                )
static

Static function to check if given info will lead to a valid configuration of CLFuseBatchNormalizationKernel.

Parameters
    [in]  input_weights   Input weights tensor info for convolution or depthwise convolution layer. Data type supported: F16/F32. Data layout supported: NCHW, NHWC
    [in]  bn_mean         Batch normalization layer mean tensor info. Same as input_weights
    [in]  bn_var          Batch normalization layer variance tensor info. Same as input_weights
    [in]  fused_weights   Output fused weights tensor info. It can be a nullptr in case of in-place computation. Same as input_weights
    [in]  fused_bias      Output fused bias tensor info. It can be a nullptr in case of in-place computation and input_bias != nullptr. Same as input_weights
    [in]  input_bias      (Optional) Input bias tensor info for convolution or depthwise convolution layer. It can be a nullptr in case the bias tensor is not required. Same as input_weights
    [in]  bn_beta         (Optional) Batch normalization layer beta tensor info. It can be a nullptr in case the beta tensor is not required. Same as input_weights
Note
    If nullptr, bn_beta is set to 0.0
Parameters
    [in]  bn_gamma        (Optional) Batch normalization layer gamma tensor info. It can be a nullptr in case the gamma tensor is not required. Same as input_weights
Note
    If nullptr, bn_gamma is set to 1.0
Parameters
    [in]  epsilon         (Optional) Batch normalization layer epsilon parameter. Defaults to 0.001f.
    [in]  fbn_type        (Optional) Fused batch normalization type. Defaults to CONVOLUTION.
Returns
a status

Definition at line 181 of file CLFuseBatchNormalizationKernel.cpp.

References ARM_COMPUTE_RETURN_ON_ERROR, and arm_compute::validate_arguments().

Referenced by CLFuseBatchNormalization::validate().

185 {
186  ARM_COMPUTE_RETURN_ON_ERROR(validate_arguments(input_weights, bn_mean, bn_var, fused_weights, fused_bias, input_bias, bn_beta, bn_gamma, epsilon, fbn_type));
187  return Status{};
188 }
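
A short sketch of the usual validate-before-configure pattern (illustrative; the ITensorInfo objects are obtained from the same hypothetical, already-initialised tensors used above via info()):

    // Check the configuration before creating/configuring the kernel.
    const Status status = CLFuseBatchNormalizationKernel::validate(conv_weights.info(), bn_mean.info(), bn_var.info(),
                                                                   fused_weights.info(), fused_bias.info(),
                                                                   nullptr, bn_beta.info(), bn_gamma.info());
    // Raises an error with the status description if validation failed.
    ARM_COMPUTE_ERROR_THROW_ON(status);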

The documentation for this class was generated from the following files: