scons 2.3 or above is required to build the library. To see the available build options, simply run scons -h
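For example, from the root of the source tree:
scons -h
The options listed by scons -h are the same key=value pairs used in the build commands shown throughout this section.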
For Linux, the library was successfully built and tested using the following Linaro GCC toolchain:
To cross-compile the library in debug mode, with Arm® Neon™ only support, for Linux 32bit:
scons Werror=1 -j8 debug=1 neon=1 opencl=0 os=linux arch=armv7a
To cross-compile the library in asserts mode, with OpenCL only support, for Linux 64bit:
scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=1 embed_kernels=1 os=linux arch=armv8a
You can also compile the library natively on an Arm device by using build=native:
scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv8a build=native
scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a build=native
For example, on a 64bit Debian based system you would have to install g++-arm-linux-gnueabihf:
apt-get install g++-arm-linux-gnueabihf
Then run
scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a build=cross_compile
or simply remove the build parameter as build=cross_compile is the default value:
scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a
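Similarly, to cross-compile for a 64bit target (arch=armv8a) on a Debian based system you need the AArch64 cross compiler; this is typically provided by the g++-aarch64-linux-gnu package (package name assumed from the usual Debian/Ubuntu naming):
apt-get install g++-aarch64-linux-gnu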
The examples get automatically built by scons as part of the build process of the library described above. This section just describes how you can build and link your own application against our library.
To cross compile an Arm® Neon™ example for Linux 32bit:
arm-linux-gnueabihf-g++ examples/neon_cnn.cpp utils/Utils.cpp -I. -Iinclude -std=c++14 -mfpu=neon -L. -larm_compute -larm_compute_core -o neon_cnn
To cross compile an Arm® Neon™ example for Linux 64bit:
aarch64-linux-gnu-g++ examples/neon_cnn.cpp utils/Utils.cpp -I. -Iinclude -std=c++14 -L. -larm_compute -larm_compute_core -o neon_cnn
(notice the only difference with the 32 bit command is that we don't need the -mfpu option and the compiler's name is different)
To cross compile an OpenCL example for Linux 32bit:
arm-linux-gnueabihf-g++ examples/cl_sgemm.cpp utils/Utils.cpp -I. -Iinclude -std=c++14 -mfpu=neon -L. -larm_compute -larm_compute_core -o cl_sgemm -DARM_COMPUTE_CL
To cross compile an OpenCL example for Linux 64bit:
aarch64-linux-gnu-g++ examples/cl_sgemm.cpp utils/Utils.cpp -I. -Iinclude -std=c++14 -L. -larm_compute -larm_compute_core -o cl_sgemm -DARM_COMPUTE_CL
(notice the only difference with the 32 bit command is that we don't need the -mfpu option and the compiler's name is different)
To cross compile the examples with the Graph API, such as graph_lenet.cpp, you need to link the examples against arm_compute_graph.so too.
i.e. to cross compile the "graph_lenet" example for Linux 32bit:
arm-linux-gnueabihf-g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++14 -mfpu=neon -L. -larm_compute_graph -larm_compute -larm_compute_core -Wl,--allow-shlib-undefined -o graph_lenet
i.e. to cross compile the "graph_lenet" example for Linux 64bit:
aarch64-linux-gnu-g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++14 -L. -larm_compute_graph -larm_compute -larm_compute_core -Wl,--allow-shlib-undefined -o graph_lenet
(notice the only difference with the 32 bit command is that we don't need the -mfpu option and the compiler's name is different)
To compile natively (i.e. directly on an Arm device) for Arm® Neon™ for Linux 32bit:
g++ examples/neon_cnn.cpp utils/Utils.cpp -I. -Iinclude -std=c++14 -mfpu=neon -larm_compute -larm_compute_core -o neon_cnn
To compile natively (i.e. directly on an Arm device) for Arm® Neon™ for Linux 64bit:
g++ examples/neon_cnn.cpp utils/Utils.cpp -I. -Iinclude -std=c++14 -larm_compute -larm_compute_core -o neon_cnn
(notice the only difference with the 32 bit command is that we don't need the -mfpu option)
To compile natively (i.e. directly on an Arm device) for OpenCL for Linux 32bit or Linux 64bit:
g++ examples/cl_sgemm.cpp utils/Utils.cpp -I. -Iinclude -std=c++14 -larm_compute -larm_compute_core -o cl_sgemm -DARM_COMPUTE_CL
To compile natively the examples with the Graph API, such as graph_lenet.cpp, you need to link the examples against arm_compute_graph.so too.
i.e. to natively compile the "graph_lenet" example for Linux 32bit:
g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++14 -mfpu=neon -L. -larm_compute_graph -larm_compute -larm_compute_core -Wl,--allow-shlib-undefined -o graph_lenet
i.e. to natively compile the "graph_lenet" example for Linux 64bit:
g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++14 -L. -larm_compute_graph -larm_compute -larm_compute_core -Wl,--allow-shlib-undefined -o graph_lenet
(notice the only difference with the 32 bit command is that we don't need the -mfpu option)
To run the built executable simply run:
LD_LIBRARY_PATH=build ./neon_cnn
or
LD_LIBRARY_PATH=build ./cl_sgemm
For example:
LD_LIBRARY_PATH=. ./graph_lenet --help
Below is a list of the common parameters among the graph examples:
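For instance, a hypothetical invocation combining a couple of these options (check --help for the exact option names and accepted values in your build) might look like:
LD_LIBRARY_PATH=. ./graph_lenet --threads=4 --target=NEON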
In order to build for SVE or SVE2 you need a compiler that supports them. You can find more information in the following links:
An example build command with SVE is:
scons arch=armv8.2-a-sve os=linux build_dir=arm64 -j55 standalone=0 opencl=0 openmp=0 validation_tests=1 neon=1 cppthreads=1 toolchain_prefix=aarch64-none-linux-gnu-
For Android, the library was successfully built and tested using Google's standalone toolchains:
For NDK r18 or older, here is a guide to create your Android standalone toolchains from the NDK:
Generate the 32bit and/or 64bit toolchains by running the following commands, installing them into your toolchain directory $MY_TOOLCHAINS:
$NDK/build/tools/make_standalone_toolchain.py --arch arm64 --install-dir $MY_TOOLCHAINS/aarch64-linux-android-ndk-r18b --stl libc++ --api 21
$NDK/build/tools/make_standalone_toolchain.py --arch arm --install-dir $MY_TOOLCHAINS/arm-linux-android-ndk-r18b --stl libc++ --api 21
For NDK r19 or newer, you can directly download the NDK package for your development platform, without the need to launch the make_standalone_toolchain.py script. You can find all the prebuilt binaries inside $NDK/toolchains/llvm/prebuilt/$OS_ARCH/bin/.
You can then point scons at the NDK's clang in either of the following ways:
CC=clang CXX=clang++ scons toolchain_prefix=aarch64-linux-android21-
or
CC=aarch64-linux-android21-clang CXX=aarch64-linux-android21-clang++ scons toolchain_prefix=""
export PATH=$PATH:$MY_TOOLCHAINS/aarch64-linux-android-ndk-r18b/bin:$MY_TOOLCHAINS/arm-linux-android-ndk-r18b/bin
To cross-compile the library in debug mode, with Arm® Neon™ only support, for Android 32bit:
CXX=clang++ CC=clang scons Werror=1 -j8 debug=1 neon=1 opencl=0 os=android arch=armv7a
To cross-compile the library in asserts mode, with OpenCL only support, for Android 64bit:
CXX=clang++ CC=clang scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=1 embed_kernels=1 os=android arch=armv8a
The examples get automatically built by scons as part of the build process of the library described above. This section just describes how you can build and link your own application against our library.
Once you've got your Android standalone toolchain built and added to your path you can do the following:
To cross compile an Arm® Neon™ example:
#32 bit:
arm-linux-androideabi-clang++ examples/neon_cnn.cpp utils/Utils.cpp -I. -Iinclude -std=c++14 -larm_compute-static -larm_compute_core-static -L. -o neon_cnn_arm -static-libstdc++ -pie
#64 bit:
aarch64-linux-android-clang++ examples/neon_cnn.cpp utils/Utils.cpp -I. -Iinclude -std=c++14 -larm_compute-static -larm_compute_core-static -L. -o neon_cnn_aarch64 -static-libstdc++ -pie
To cross compile an OpenCL example:
#32 bit:
arm-linux-androideabi-clang++ examples/cl_sgemm.cpp utils/Utils.cpp -I. -Iinclude -std=c++14 -larm_compute-static -larm_compute_core-static -L. -o cl_sgemm_arm -static-libstdc++ -pie -DARM_COMPUTE_CL
#64 bit:
aarch64-linux-android-clang++ examples/cl_sgemm.cpp utils/Utils.cpp -I. -Iinclude -std=c++14 -larm_compute-static -larm_compute_core-static -L. -o cl_sgemm_aarch64 -static-libstdc++ -pie -DARM_COMPUTE_CL
To cross compile the examples with the Graph API, such as graph_lenet.cpp, you also need to link against the arm_compute_graph library.
#32 bit:
arm-linux-androideabi-clang++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++14 -Wl,--whole-archive -larm_compute_graph-static -Wl,--no-whole-archive -larm_compute-static -larm_compute_core-static -L. -o graph_lenet_arm -static-libstdc++ -pie -DARM_COMPUTE_CL
#64 bit:
aarch64-linux-android-clang++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++14 -Wl,--whole-archive -larm_compute_graph-static -Wl,--no-whole-archive -larm_compute-static -larm_compute_core-static -L. -o graph_lenet_aarch64 -static-libstdc++ -pie -DARM_COMPUTE_CL
Then you need to upload the executable and the shared library to the device using ADB:
adb push neon_cnn_arm /data/local/tmp/
adb push cl_sgemm_arm /data/local/tmp/
adb push gc_absdiff_arm /data/local/tmp/
adb shell chmod 777 -R /data/local/tmp/
And finally to run the example:
adb shell /data/local/tmp/neon_cnn_arm
adb shell /data/local/tmp/cl_sgemm_arm
adb shell /data/local/tmp/gc_absdiff_arm
For 64bit:
adb push neon_cnn_aarch64 /data/local/tmp/
adb push cl_sgemm_aarch64 /data/local/tmp/
adb push gc_absdiff_aarch64 /data/local/tmp/
adb shell chmod 777 -R /data/local/tmp/
And finally to run the example:
adb shell /data/local/tmp/neon_cnn_aarch64
adb shell /data/local/tmp/cl_sgemm_aarch64
adb shell /data/local/tmp/gc_absdiff_aarch64
For example:
adb shell /data/local/tmp/graph_lenet --help
In this case the first argument of LeNet (like all the graph examples) is the target (i.e. 0 to run on Neon™, 1 to run on OpenCL if available, 2 to run on OpenCL using the CLTuner), the second argument is the path to the folder containing the npy files for the weights, and finally the third argument is the number of batches to run.
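For example, assuming the weights have been pushed to a hypothetical /data/local/tmp/lenet folder on the device, running LeNet on Neon™ with 10 batches would follow the argument order described above:
adb shell /data/local/tmp/graph_lenet_arm 0 /data/local/tmp/lenet 10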
The library was successfully natively built for Apple Silicon under macOS 11.1 using clang v12.0.0.
To natively compile the library with accelerated CPU support:
scons Werror=1 -j8 neon=1 opencl=0 os=macos arch=armv8a build=native
For bare metal, the library was successfully built using Linaro's latest (gcc-linaro-6.3.1-2017.05) bare metal toolchains:
Download Linaro for armv7a and armv8a.
To cross-compile the library with Arm® Neon™ support for baremetal armv8a:
scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=bare_metal arch=armv8a build=cross_compile cppthreads=0 openmp=0 standalone=1
Examples are disabled when building for bare metal. If you want to build the examples you need to provide a custom bootcode depending on the target architecture and link against the compute library. More information about bare metal bootcode can be found here.
Using scons directly from the Windows command line is known to cause problems. The reason seems to be that if scons is set up for cross-compilation it gets confused about Windows style paths (using backslashes). Thus it is recommended to follow one of the options outlined below.
The best and easiest option is to use Ubuntu on Windows. This feature is still marked as beta and thus might not be available. However, if it is, building the library is as simple as opening a Bash on Ubuntu on Windows shell and following the general guidelines given above.
If the Windows subsystem for Linux is not available, Cygwin can be used to install and run scons; the minimum Cygwin version must be 3.0.7 or later. In addition to the default packages installed by Cygwin, scons has to be selected in the installer. (git might also be useful but is not strictly required if you already have the source code of the library.) Linaro provides pre-built versions of GCC cross-compilers that can be used from the Cygwin terminal. When building for Android the compiler is included in the Android standalone toolchain. After everything has been set up in the Cygwin terminal, the general guide on building the library can be followed.
Native builds on Windows are experimental and some features from the library interacting with the OS are missing.
It is possible to build Compute Library natively on a Windows system running on ARM.
Windows on ARM (WoA) systems provide compatibility by emulating x86 binaries on aarch64. Unfortunately, Visual Studio 2022 does not work on aarch64 systems because it is an x86_64 application and such binaries cannot yet be executed on WoA.
Because we cannot use Visual Studio to build Compute Library, we have to set up a native standalone toolchain to compile C++ code for arm64 on Windows.
Native arm64 toolchain installation for WoA:
There are some additional tools we need to install to build Compute Library:
In order to use clang to build Windows binaries natively, we have to initialize the environment variables from VS22 correctly so that the compiler can find the arm64 C++ libraries. This can be done by pressing Windows + R and running the command:
cmd /k "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\Build\vcvarsx86_arm64.bat"
To build Compute Library type:
scons opencl=0 neon=1 os=windows examples=0 validation_tests=1 benchmark_examples=0 build=native arch=armv8a Werror=0 exceptions=1 standalone=1
Compute Library requires OpenCL 1.1 and above, with support for non-uniform work-group sizes, which is officially supported in the Arm® Mali™ OpenCL DDK r8p0 and above as an extension (the respective extension flag is -cl-arm-non-uniform-work-group-size).
Enabling 16-bit floating point calculations requires the cl_khr_fp16 extension to be supported. All Arm® Mali™ GPUs with compute capabilities have native support for half precision floating point.
Integer dot product built-in function extensions (and therefore optimized kernels) are available with Arm® Mali™ OpenCL DDK r22p0 and above for the following GPUs: G71, G76. The relevant extensions are cl_arm_integer_dot_product_int8, cl_arm_integer_dot_product_accumulate_int8 and cl_arm_integer_dot_product_accumulate_int16.
OpenCL kernel level debugging can be simplified with the use of printf; this requires the cl_arm_printf extension to be supported.
SVM allocations are supported for all the underlying allocations in Compute Library. To enable this, OpenCL 2.0 or above is required.
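To check which of these extensions the OpenCL implementation on a given target actually reports, one quick option (assuming the third-party clinfo utility is installed on the device) is to inspect the device extensions string:
clinfo | grep -i extensions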