Linux: Compiling TensorFlow with Bazel

Resources

Installing Bazel

# Bazel
sudo apt install apt-transport-https curl gnupg
curl -fsSL https://bazel.build/bazel-release.pub.gpg | gpg --dearmor > bazel.gpg
sudo mv bazel.gpg /etc/apt/trusted.gpg.d/
echo "deb [arch=amd64] https://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
sudo apt update
sudo apt install bazel
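
Once Bazel is installed from the repository, a quick sanity check (not part of the original page) is to print its version:

# Check that bazel is on the PATH and report its version
bazel --version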

Without a GPU on Debian

Uninstall the Python protobuf and tensorflow packages if they are already installed.
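
For example, a minimal way to do so (adjust to your setup, e.g. run it inside your venv):

# Remove previously installed packages that could conflict with the freshly built wheel
python3 -m pip uninstall -y protobuf tensorflow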

Fetching the tensorflow sources:

cd /your/src/directory
git clone https://github.com/tensorflow/tensorflow 

Installing the prerequisites:

python3 -m pip install pip six numpy wheel setuptools mock 'future>=0.17.1'
python3 -m pip install keras_applications==1.0.6 --no-deps
python3 -m pip install keras_preprocessing==1.0.5 --no-deps

Install Bazel, the build tool used to compile TensorFlow. In this case, after downloading bazel-0.26.0-installer-linux-x86_64.sh:

chmod +x bazel-0.26.0-installer-linux-x86_64.sh
./bazel-0.26.0-installer-linux-x86_64.sh --user
export PATH="$PATH:$HOME/bin"
bazel version

With a GPU on Ubuntu

Context

  • Ubuntu Mate 20.04
  • python 3.8
  • venv

The goal is to compile TensorFlow with the AVX2 and FMA options!
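
To check whether the CPU actually exposes AVX2 and FMA, the kernel flags can be inspected (a generic check, not from the original page):

# List the avx2 and fma flags reported in /proc/cpuinfo, if present
grep -o -w -E 'avx2|fma' /proc/cpuinfo | sort -u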

CUDA

cuDNN

  • cuDNN is a framework built on top of CUDA and developed by NVIDIA for deep learning primitives. It stands for CUDA Deep Neural Network. It serves as a building block for deep learning and machine learning frameworks.
sudo apt install libcudnn8 libcudnn8-dev
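
To confirm which cuDNN packages ended up installed (a generic check, not from the original page):

# Show the installed cuDNN runtime and development packages
dpkg -l | grep libcudnn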

TensorRT

  • NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications.
sudo apt install libnvinfer8 libnvinfer8-dev

TensorFlow

# Dependencies
python3 -m pip install numpy wheel
python3 -m pip install keras_preprocessing --no-deps
 
# Fetch the tensorflow sources
mkdir src
cd src/
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
./configure

The most abstruse part: configuring the build

Very important: the questions and their answers:

  • no to ROCm (no to Rocky)
  • no to TensorRT, otherwise the build crashes
  • no to clang
  • no to "download a fresh release of clang", it crashes
  • yes to GCC
  • optimization flags for --config=opt: -march=native
  • compute capability: GTX 1060 -> 6.1, GTX 850M -> 5.0

How to compile with AVX2 and FMA: answer -march=native to the optimization-flags question.

$ ./configure
You have bazel 4.2.1 installed.
Please specify the location of python. [Default is /usr/bin/python3]: 
 
Found possible Python library paths:
  /usr/lib/python3/dist-packages
  /usr/local/lib/python3.8/dist-packages
Please input the desired Python library path to use.  Default is [/usr/lib/python3/dist-packages]
 
Do you wish to build TensorFlow with ROCm support? [y/N]: n
No ROCm support will be enabled for TensorFlow.
 
Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.
 
Do you wish to build TensorFlow with TensorRT support? [y/N]: n
No TensorRT support will be enabled for TensorFlow.
 
Found CUDA 11.5 in:
    /usr/local/cuda-11.5/targets/x86_64-linux/lib
    /usr/local/cuda-11.5/targets/x86_64-linux/include
Found cuDNN 8 in:
    /usr/lib/x86_64-linux-gnu
    /usr/include
 
Please specify a list of comma-separated CUDA compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus. Each capability can be specified as "x.y" or "compute_xy" to include both virtual and binary GPU code, or as "sm_xy" to only include the binary code.
Please note that each additional compute capability significantly increases your build time and binary size, and that TensorFlow only supports compute capabilities >= 3.5 [Default is: 3.5,7.0]: 5.0
 
Do you want to use clang as CUDA compiler? [y/N]: n
nvcc will be used as CUDA compiler.
 
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]: 
 
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -Wno-sign-compare]: -march=native
 
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n
Not configuring the WORKSPACE for Android builds.
 
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
	--config=mkl         	# Build with MKL support.
	--config=mkl_aarch64 	# Build with oneDNN and Compute Library for the Arm Architecture (ACL).
	--config=monolithic  	# Config for mostly static monolithic build.
	--config=numa        	# Build with NUMA support.
	--config=dynamic_kernels	# (Experimental) Build kernels into separate shared objects.
	--config=v1          	# Build with TensorFlow 1 API instead of TF 2 API.
Preconfigured Bazel build configs to DISABLE default on features:
	--config=nogcp       	# Disable GCP support.
	--config=nonccl      	# Disable NVIDIA NCCL support.
Configuration finished
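
The same answers can also be given non-interactively through the environment variables read by ./configure; a minimal sketch matching the answers above (values to adapt to your setup, and any variable left unset will still be prompted for):

# Pre-answer ./configure through environment variables
export PYTHON_BIN_PATH=/usr/bin/python3
export TF_NEED_ROCM=0
export TF_NEED_CUDA=1
export TF_NEED_TENSORRT=0
export TF_CUDA_COMPUTE_CAPABILITIES=5.0
export TF_CUDA_CLANG=0
export GCC_HOST_COMPILER_PATH=/usr/bin/gcc
export CC_OPT_FLAGS="-march=native"
export TF_SET_ANDROID_WORKSPACE=0
./configure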

Resetting the config

bazel clean
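
Note that bazel clean only removes the build outputs; to also forget the answers recorded by ./configure (an extra step, not from the original page), the generated .tf_configure.bazelrc can be removed and a deeper clean used:

# Drop the recorded configure answers and wipe the whole output tree
rm -f .tf_configure.bazelrc
bazel clean --expunge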

Compiling, building the pip package, and installing

Because, on top of everything else, Bazel demands the exact expected version:

sudo apt update && sudo apt install bazel-4.2.1
bazel build //tensorflow/tools/pip_package:build_pip_package
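
Note that the -march=native flags entered during ./configure are only applied when --config=opt is passed, and the CUDA build is normally selected explicitly with --config=cuda, so the command typically looks like this (same target, extra flags):

# Build with the configured optimization flags and CUDA support enabled
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package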

After 5 hours of an i7 running flat out!
INFO: Build completed successfully, 32129 total actions

./bazel-bin/tensorflow/tools/pip_package/build_pip_package /media/data/tensorflow_pkg
python3 -m pip install /media/data/tensorflow_pkg/tensorflow-2.8.0-cp38-cp38-linux_x86_64.whl

Test

  • Doesn't work!
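
A quick way to see what the installed wheel reports (a generic check, not from the original page; run it inside the same venv):

# Print the TensorFlow version and the GPUs it can see
python3 -c 'import tensorflow as tf; print(tf.__version__); print(tf.config.list_physical_devices("GPU"))'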