Compute Library
IScheduler Class Reference (abstract)

Scheduler interface to run kernels. More...

#include <IScheduler.h>

Data Structures

class  Hints
 Scheduler hints. More...

Public Types

enum  StrategyHint { STATIC, DYNAMIC }
 Strategies available to split a workload. More...
using Workload = std::function< void(const ThreadInfo &)>
 Signature for the workloads to execute. More...

Public Member Functions

 IScheduler ()
 Default constructor. More...
virtual ~IScheduler ()=default
 Destructor. More...
virtual void set_num_threads (unsigned int num_threads)=0
 Sets the number of threads the scheduler will use to run the kernels. More...
virtual unsigned int num_threads () const =0
 Returns the number of threads that the scheduler has in its pool. More...
virtual void schedule (ICPPKernel *kernel, const Hints &hints)=0
 Runs the kernel in the same thread as the caller synchronously. More...
virtual void run_tagged_workloads (std::vector< Workload > &workloads, const char *tag)
 Execute all the passed workloads. More...
CPUInfo & cpu_info ()
 Get CPU info. More...
unsigned int num_threads_hint () const
 Get a hint for the best possible number of execution threads. More...

Detailed Description

Scheduler interface to run kernels.

Definition at line 36 of file IScheduler.h.

Member Typedef Documentation

◆ Workload

using Workload = std::function<void(const ThreadInfo &)>

Signature for the workloads to execute.

Definition at line 106 of file IScheduler.h.
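
A Workload is a plain std::function invoked once per participating thread. The following self-contained sketch mimics the pattern without linking arm_compute: ThreadInfo is reduced here to the two fields a workload typically reads (an assumption; the real struct lives in arm_compute), and make_fill_workloads is a hypothetical helper for illustration.

```cpp
#include <functional>
#include <vector>

// Reduced stand-in for arm_compute::ThreadInfo (assumption: only the
// fields a typical workload reads are modelled).
struct ThreadInfo
{
    int thread_id{0};
    int num_threads{1};
};

using Workload = std::function<void(const ThreadInfo &)>;

// Build one workload per slice of a buffer; each workload fills only
// the slice owned by its thread id, so they can run concurrently.
std::vector<Workload> make_fill_workloads(std::vector<int> &buffer, int value, int num_slices)
{
    std::vector<Workload> workloads;
    const int slice = static_cast<int>(buffer.size()) / num_slices;
    for(int i = 0; i < num_slices; ++i)
    {
        workloads.emplace_back([&buffer, value, slice, num_slices](const ThreadInfo &info)
        {
            const int start = info.thread_id * slice;
            const int end   = (info.thread_id == num_slices - 1) ? static_cast<int>(buffer.size())
                                                                 : start + slice;
            for(int j = start; j < end; ++j)
            {
                buffer[j] = value;
            }
        });
    }
    return workloads;
}
```

Each workload derives its slice from ThreadInfo rather than from captured state, which is what lets a scheduler hand the same vector to any number of threads.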

Member Enumeration Documentation

◆ StrategyHint

enum StrategyHint

Strategies available to split a workload.

Enumerator
STATIC 	Split the workload evenly among the threads.
DYNAMIC 	Split the workload dynamically using a bucket system.

Definition at line 40 of file IScheduler.h.

41  {
42      STATIC,  /**< Split the workload evenly among the threads */
43      DYNAMIC, /**< Split the workload dynamically using a bucket system */
44  };
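
The two strategies can be illustrated with a self-contained sketch (no arm_compute types; the bucket system shown here, a shared atomic counter handing out fixed-size ranges, is an assumption about how dynamic splitting typically works, not the library's exact implementation):

```cpp
#include <algorithm>
#include <atomic>
#include <utility>
#include <vector>

// STATIC: each thread receives one contiguous, near-equal chunk of the
// iteration range [0, total).
std::vector<std::pair<int, int>> split_static(int total, int num_threads)
{
    std::vector<std::pair<int, int>> chunks;
    const int base = total / num_threads;
    const int rem  = total % num_threads;
    int start = 0;
    for(int t = 0; t < num_threads; ++t)
    {
        const int size = base + (t < rem ? 1 : 0);
        chunks.emplace_back(start, start + size);
        start += size;
    }
    return chunks;
}

// DYNAMIC: threads repeatedly grab the next fixed-size bucket from a
// shared counter until the range is exhausted; faster threads simply
// take more buckets. Returns 1 while a bucket was obtained.
int take_next_bucket(std::atomic<int> &next, int total, int bucket_size, int &start, int &end)
{
    start = next.fetch_add(bucket_size);
    if(start >= total)
    {
        return 0; // nothing left
    }
    end = std::min(start + bucket_size, total);
    return 1;
}
```

STATIC has no synchronization cost but assumes equal per-iteration work; DYNAMIC tolerates uneven work (e.g. big.LITTLE cores) at the price of contention on the counter.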

Constructor & Destructor Documentation

◆ IScheduler()

Default constructor.

Definition at line 31 of file IScheduler.cpp.

32  : _cpu_info()
33 {
34  get_cpu_configuration(_cpu_info);
35  // Work out the best possible number of execution threads
36  _num_threads_hint = get_threads_hint();
37 }

References arm_compute::get_cpu_configuration(), and arm_compute::get_threads_hint().

◆ ~IScheduler()

virtual ~IScheduler ( ) = default


Member Function Documentation

◆ cpu_info()

CPUInfo & cpu_info ( )

Get CPU info.

Returns
CPU info.

Definition at line 39 of file IScheduler.cpp.

40 {
41  return _cpu_info;
42 }

Referenced by NEGEMMInterleavedWrapper::configure(), and main().

◆ num_threads()

virtual unsigned int num_threads ( ) const
pure virtual

◆ num_threads_hint()

unsigned int num_threads_hint ( ) const

Get a hint for the best possible number of execution threads.

Returns
Best possible number of execution threads to use. If this cannot be determined, std::thread::hardware_concurrency() is returned, or 1 on bare-metal builds.

Definition at line 44 of file IScheduler.cpp.

45 {
46  return _num_threads_hint;
47 }

Referenced by CPPScheduler::set_num_threads().
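
The fallback described above can be sketched without the library's big.LITTLE core detection (that logic lives in CPUUtils.cpp and is not reproduced here; threads_hint_fallback is a hypothetical name):

```cpp
#include <thread>

// Fallback when the core configuration cannot be determined:
// hardware_concurrency() if available, otherwise 1
// (it returns 0 on bare-metal / unsupported targets).
unsigned int threads_hint_fallback()
{
    const unsigned int hw = std::thread::hardware_concurrency();
    return hw != 0 ? hw : 1;
}
```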

◆ run_tagged_workloads()

void run_tagged_workloads ( std::vector< Workload > & workloads,
const char * tag
)
virtual
Execute all the passed workloads.

Note: there is no guarantee regarding the order in which the workloads will be executed, or whether they will be executed in parallel.

Parameters
[in]	workloads	Array of workloads to run
[in]	tag	String that can be used by profiling tools to identify the workloads run by the scheduler (can be null).

Definition at line 48 of file IScheduler.cpp.

49 {
50     ARM_COMPUTE_UNUSED(tag);
51     run_workloads(workloads);
52 }


Referenced by NEGEMMInterleavedWrapper::run().
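
How a pool-based scheduler might execute such a workload vector can be sketched as follows. This is a simplified stand-in, not the library's implementation: the real schedulers reuse a persistent thread pool and cap concurrency at their configured thread count, and ThreadInfo/Workload are modelled here with reduced local types.

```cpp
#include <functional>
#include <thread>
#include <vector>

// Reduced stand-ins for the arm_compute types (assumption).
struct ThreadInfo
{
    int thread_id{0};
    int num_threads{1};
};

using Workload = std::function<void(const ThreadInfo &)>;

// Run every workload, one thread per workload (simplified). The tag is
// accepted but unused, mirroring the base-class implementation above.
void run_tagged_workloads_sketch(std::vector<Workload> &workloads, const char * /*tag*/)
{
    std::vector<std::thread> pool;
    const int n = static_cast<int>(workloads.size());
    for(int i = 0; i < n; ++i)
    {
        pool.emplace_back([&workloads, i, n]() { workloads[i](ThreadInfo{i, n}); });
    }
    for(auto &t : pool)
    {
        t.join();
    }
}
```

Because the note above promises no ordering, each workload must only touch data partitioned by its thread_id.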

◆ schedule()

virtual void schedule ( ICPPKernel * kernel,
const Hints & hints
)
pure virtual

Runs the kernel in the same thread as the caller synchronously.

Parameters
[in]	kernel	Kernel to execute.
[in]	hints	Hints for the scheduler.

Implemented in CPPScheduler, OMPScheduler, and SingleThreadScheduler.

Referenced by NEWinogradConvolutionLayer::prepare(), NELocallyConnectedLayer::prepare(), NEGEMM::prepare(), NEGEMMLowpMatrixMultiplyCore::prepare(), NEDeconvolutionLayer::prepare(), NEGEMMInterleavedWrapper::prepare(), NEDepthwiseConvolutionLayer::prepare(), ICPPSimpleFunction::run(), INESimpleFunctionNoBorder::run(), INESimpleFunction::run(), NESimpleAssemblyFunction::run(), NEHistogram::run(), NEFillBorder::run(), NEMeanStdDev::run(), NEGEMMLowpAssemblyMatrixMultiplyCore::run(), NEConvertFullyConnectedWeights::run(), NEDerivative::run(), NEHOGDescriptor::run(), NEEqualizeHistogram::run(), NEROIPoolingLayer::run(), NEMinMaxLocation::run(), NEHOGGradient::run(), NERange::run(), NEGaussian5x5::run(), NEUpsampleLayer::run(), NESobel5x5::run(), NESobel7x7::run(), NEArgMinMaxLayer::run(), NEPoolingLayer::run(), NEReductionOperation::run(), NEFastCorners::run(), NEFFT1D::run(), NEL2NormalizeLayer::run(), NEStackLayer::run(), NESpaceToDepthLayer::run(), NEIm2Col::run(), NENormalizationLayer::run(), NEPadLayer::run(), NEConvolutionLayerReshapeWeights::run(), NEScale::run(), NEWinogradConvolutionLayer::run(), NERNNLayer::run(), NECannyEdge::run(), NEConcatenateLayer::run(), NEBatchNormalizationLayer::run(), NEOpticalFlow::run(), CLHarrisCorners::run(), NEHarrisCorners::run(), NESoftmaxLayer::run(), NEConvolutionSquare< matrix_size >::run(), CLHOGMultiDetection::run(), NEHOGMultiDetection::run(), NEGaussianPyramidHalf::run(), NELocallyConnectedLayer::run(), NECropResize::run(), NEFuseBatchNormalization::run(), NEGEMM::run(), NEDirectConvolutionLayer::run(), NESpaceToBatchLayer::run(), NEGEMMLowpMatrixMultiplyCore::run(), NEDepthwiseConvolutionAssemblyDispatch::run(), NEFullyConnectedLayer::run(), NELSTMLayer::run(), NEGEMMConvolutionLayer::run(), and NEDepthwiseConvolutionLayer::run().

◆ set_num_threads()

virtual void set_num_threads ( unsigned int  num_threads)
pure virtual

Sets the number of threads the scheduler will use to run the kernels.

Parameters
[in]	num_threads	If set to 0, then one thread per CPU core available on the system will be used, otherwise the number of threads specified.

Implemented in CPPScheduler, OMPScheduler, and SingleThreadScheduler.

Referenced by main(), and NEDeviceBackend::setup_backend_context().
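
The num_threads == 0 convention documented above can be sketched with a small helper (resolve_num_threads is a hypothetical name illustrating the documented behaviour, not a library function):

```cpp
#include <thread>

// Resolve the documented convention: 0 means "one thread per CPU core
// available on the system", any other value is taken as-is.
unsigned int resolve_num_threads(unsigned int requested)
{
    if(requested != 0)
    {
        return requested;
    }
    const unsigned int cores = std::thread::hardware_concurrency();
    return cores != 0 ? cores : 1; // hardware_concurrency() may return 0
}
```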

The documentation for this class was generated from the following files:
IScheduler.h
IScheduler.cpp