Introduction
In Change your computer GPU hardware in 7 steps to achieve faster Deep Learning on your Windows PC, I discussed how you can upgrade your Windows PC hardware to incorporate a CUDA Toolkit compatible graphics processing card, and I installed an Nvidia GTX 1060 6GB. Part 2 of the series covered the installation of CUDA, cuDNN and Tensorflow on Windows 10. In Part 3, I wiped Windows 10 from my PC and installed Ubuntu 18.04 LTS from a bootable DVD.
In this Part 4 of the series, I am installing drivers for the Nvidia GPU which are compatible with the versions of the CUDA Toolkit, cuDNN and Tensorflow I wish to install on Ubuntu 18.04, namely Tensorflow 2.1, which requires CUDA 10.1 or above. In my case this also involves handling my existing installations of the Nvidia drivers, CUDA, cuDNN and Tensorflow (details of which are set out at Step 1).
This guide includes installing the Nvidia drivers, CUDA 10.1, cuDNN and Tensorflow 2.1.
- Introduction
- Step 1: Checking versions of drivers and software to install for compatibility with Tensorflow 2.1
- Step 2: Pre-CUDA installation: check existing installations
- Step 2.1: Check the system has a CUDA capable GPU
- Step 2.2: Check that your version of Linux is supported by the upcoming CUDA installation
- Step 2.3: Check gcc installation
- Step 2.4: Check that the Ubuntu system has the correct kernel headers and development packages installed
- Step 2.5: Check any current Nvidia drivers
- Step 3: Download NVIDIA package repositories
- Step 4: Install NVIDIA Driver version 430
- Step 5: Check the new Nvidia GPU driver installation
- Step 6: Install CUDA 10.1 and CuDNN development and runtime libraries
- Step 7: Reboot the computer again (as per instructions above) and check the driver version
- Step 8: Downloading CuDNN for Ubuntu 18.04
- Step 9: Installing CuDNN for Ubuntu 18.04
- Step 10: Test the cuDNN installation
- Step 11: Download TensorRT
- Step 12: Install TensorRT
- Step 13: Install Tensorflow (recommended: installation in a virtual environment)
- Step 14: Checking Correct Installation of Tensorflow and GPU support
- Conclusions and a Note on Keras and Tensorflow
- Other postings of this article:
Step 1: Checking versions of drivers and software to install for compatibility with Tensorflow 2.1
The version of Tensorflow you select will determine the compatible versions of CUDA, cuDNN, compiler, toolchain and the Nvidia driver versions to install. Therefore before moving through the steps of installing an Nvidia driver, CUDA, cuDNN and then Tensorflow 2.1, I’m “beginning with the end in mind” and first checking the correct software versions compatible with my target version of Tensorflow.
According to the Tensorflow website and CUDA installation guide:
- NVIDIA GPU drivers — CUDA 10.1 requires 418.x or higher.
- CUDA® Toolkit — TensorFlow supports CUDA 10.1 (TensorFlow >= 2.1.0). Tensorflow 1.13 and above requires CUDA 10. I would like to be able to install various versions of Tensorflow (with GPU support) between 1.13–2.1, so CUDA 10.1 is definitely required
- CUPTI (ships with the CUDA Toolkit)
- g++ compiler and toolchain
- cuDNN SDK (>= 7.6)
- (Optional) TensorRT 6.0 to improve latency and throughput for inference on some models.
When I previously installed Tensorflow on this Ubuntu 18.04 machine, only Tensorflow 1.12/CUDA 9 was available and CUDA 10 was not yet compatible with Tensorflow. I therefore already have the following installed on this machine prior to completing the new steps below:
- Tensorflow version 1.12
- CUDA Toolkit version 9.0
- cuDNN version 7.2, required for Tensorflow version 1.12
- gcc and g++ compilers and toolchain
- NVIDIA GPU driver 390.132 (CUDA 9.0 requires 384.x or greater)
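For reference, a quick way to check which versions are already installed (a sketch using standard commands; paths and availability may differ on your system) is:
# current Nvidia driver version (and the highest CUDA version the driver supports)
nvidia-smi
# current CUDA Toolkit version, if nvcc is on your PATH
nvcc --version
# current cuDNN version, assuming the headers are installed under /usr/include
cat /usr/include/cudnn.h | grep -A 2 CUDNN_MAJOR
# current Tensorflow version, if installed for this Python environment
python -c "import tensorflow as tf; print(tf.__version__)"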
Step 2: Pre-CUDA installation: check existing installations
Prior to starting CUDA download and installation, make the checks suggested here on the Nvidia CUDA website:
- Verify the system has a CUDA-capable GPU
- Verify the system is running a supported version of Linux.
- Verify the system has gcc installed.
- Verify the system has the correct kernel headers and development packages installed.
Step 2.1: Check the system has a CUDA capable GPU
Run the following command in your Ubuntu terminal to check your graphics card (GPU):
lspci | grep -i nvidia
If your graphics card is from NVIDIA and it is listed here, your GPU is CUDA-capable.
Step 2.2: Check that your version of Linux is supported by the upcoming CUDA installation
Check your Linux system version using the following command:
uname -m && cat /etc/*release
Once you have the output showing your system version, you can check the CUDA online documentation to ensure that this version is supported.
Step 2.3: Check gcc installation
gcc --version
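If gcc is not installed (the command above returns 'command not found'), it can be installed together with the rest of the standard build toolchain, for example:
# installs gcc, g++, make and related build tools
sudo apt-get install build-essential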
Step 2.4: Check that the Ubuntu system has the correct kernel headers and development packages installed
The version of the kernel your system is running can be found by running the following command:
uname -r
For Ubuntu, the kernel headers and development packages for the currently running kernel can be installed with:
# my current kernel version is 4.15.0-99-generic; yours may differ
uname_r="4.15.0-99-generic"
sudo apt-get install linux-headers-${uname_r}
Step 2.5: Check any current Nvidia drivers
You can also check which Nvidia GPU driver you currently have (if any) by running the 'nvidia-smi' command; in my case this shows that I initially have Nvidia driver 390.132:
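The command is simply:
# shows the installed Nvidia driver version, GPU details and current GPU utilisation
nvidia-smi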
Step 3: Download NVIDIA package repositories
Next, download the Nvidia package repositories. These commands are based on the instructions from the Tensorflow website to add Nvidia package repositories. Start by downloading the CUDA 10.1 .deb file by typing the following into the bash terminal on your Ubuntu 18.04 machine:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-repo-ubuntu1804_10.1.243-1_amd64.deb
Next, get the keys:
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
Then install the CUDA 10.1 .deb file for 64 bit Ubuntu 18.04:
sudo dpkg -i cuda-repo-ubuntu1804_10.1.243-1_amd64.deb
Run the update packages command to download the updated versions of various packages:
sudo apt-get update
Get further Nvidia package repositories for CUDA 10.1 (as per commands on Tensorflow website):
wget http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb
Once the Nvidia package repositories have been downloaded, install the Nvidia .deb package with privileges (sudo):
sudo apt install ./nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb
And update packages again:
sudo apt-get update
Step 4: Install NVIDIA Driver version 430
I already have NVIDIA driver version 390 installed; the next step is to install Nvidia driver version 430:
sudo apt-get install --no-install-recommends nvidia-driver-430
I got a message that my system has unmet dependencies, so I remedied this by removing the PPA using the following command (note: -r is for remove):
sudo apt-add-repository -r ppa:graphics-drivers/ppa
Make sure package listing is up to date:
sudo apt update
Remove all the existing nvidia drivers:
sudo apt-get remove --purge '^nvidia-.*'
Then try reinstalling the Nvidia driver version 430:
sudo apt-get install --no-install-recommends nvidia-driver-430
The fresh Nvidia version 430 driver install gives the output below and completes successfully.
Once you have installed Nvidia driver 430, shut down and restart your PC.
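If you prefer to do this from the terminal, a standard command is:
# restart the machine so that the new driver is loaded
sudo reboot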
Step 5: Check the new Nvidia GPU driver installation
Check that the new GPU driver is visible with the following command in your bash terminal:
nvidia-smi
The output of that command will look something like this and will confirm that the new Nvidia driver version is 430.50. Note that nvidia-smi now reports CUDA Version 10.1; this is the highest CUDA version supported by the new driver (the CUDA Toolkit itself is installed at Step 6):
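If you only want the driver version number itself, nvidia-smi also accepts a query format, for example (in my case this prints 430.50):
# prints just the installed driver version
nvidia-smi --query-gpu=driver_version --format=csv,noheader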
Step 6: Install CUDA 10.1 and CuDNN development and runtime libraries
In order to install the CUDA and cuDNN development and runtime libraries, the Tensorflow installation page recommends the following commands:
First, the installation of CUDA 10.1; when this is run, it will take about 20-30 minutes to download, unpack and install:
# Install development and runtime libraries (~4GB)
sudo apt-get install --no-install-recommends cuda-10-1
Note that amongst the terminal printout, there is an instruction to 'reboot your computer and verify that the NVIDIA graphics driver can be loaded' (Fig 6.2).
The installation messages complete as shown in Fig 6.3.
Step 7: Reboot the computer again (as per instructions above) and check the driver version
Reboot the computer (as per the instruction in Fig 6.2). Following reboot, run the following command to check the installation of drivers again:
nvidia-smi
Note that following the reboot, the NVIDIA driver has (unexpectedly) been upgraded to version 440.64.00, and the CUDA version upgraded from version 10.1 to 10.2, as shown in Fig 7.1. Accordingly, I am choosing the appropriate cuDNN for CUDA 10.2 for the instructions below.
Step 8: Downloading CuDNN for Ubuntu 18.04
You can download cuDNN from here. Below are the steps which I took to download the cuDNN files.
You have to register (free) in order to download cuDNN, and the login screen is shown in Fig 8.2:
Once you have logged in, then you are taken to a cuDNN Download page:
In order to get the v.7.6.4 or any other slightly older cuDNN version, click on “Archived cuDNN Releases” shown in Fig 8.3. As it appears that CUDA 10.2 has been installed above on my machine (rather than CUDA 10.1), I have selected the appropriate cuDNN library for CUDA 10.2. This is cuDNN version 7.6.5:
From Fig. 8.4, choose the following three items from the list:
- cuDNN Runtime Library for Ubuntu 18.04 (Deb)
- cuDNN Developer Library for Ubuntu 18.04 (Deb)
- cuDNN Code Samples and User Guide for Ubuntu 18.04 (Deb) — optional
This gives the following three .deb packages to be downloaded (the file names match the installation commands in Step 9):
- libcudnn7_7.6.5.32-1+cuda10.2_amd64.deb
- libcudnn7-dev_7.6.5.32-1+cuda10.2_amd64.deb
- libcudnn7-doc_7.6.5.32-1+cuda10.2_amd64.deb
Step 9: Installing CuDNN for Ubuntu 18.04
In order to install cuDNN, some instructions are available from the Tensorflow website, with more detailed commands for the cuDNN libraries given in the cuDNN installation guide. A screenshot from this guidance is shown below:
# change directory to where the cuDNN runtime library is downloaded
cd ~/Downloads
# Install cuDNN runtime library
sudo dpkg -i libcudnn7_7.6.5.32-1+cuda10.2_amd64.deb
# Install cuDNN developer library
sudo dpkg -i libcudnn7-dev_7.6.5.32-1+cuda10.2_amd64.deb
# (optional) install code samples and the cuDNN library documentation
sudo dpkg -i libcudnn7-doc_7.6.5.32-1+cuda10.2_amd64.deb
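As a simple additional check (not part of the official instructions), you can confirm that the three packages are registered with the package manager:
# lists the installed cuDNN runtime, developer and documentation packages
dpkg -l | grep libcudnn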
Step 10: Test the cuDNN installation
Following the cuDNN installation in Step 9, this step covers testing the new cuDNN installation. Commands for testing the cuDNN installation are covered in the cuDNN Installation Guide:
As per the cuDNN installation guide, verify that cuDNN is installed and running properly by compiling the mnistCUDNN sample located in /usr/src/cudnn_samples_v7, which is installed by the Debian (.deb) files.
When you have installed the .deb files, you can find the samples by going to the /usr/src folder, where you will see the cudnn_samples_v7 directory (Fig 10.2):
Then copy the cudnn_samples_v7 folder to the home directory, as shown in Fig 10.3:
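A minimal sketch of the copy and change-directory commands (assuming the samples were installed to the default /usr/src location) is:
# copy the cuDNN samples to a writable location (here, the home directory)
cp -r /usr/src/cudnn_samples_v7/ $HOME
# change into the mnistCUDNN sample folder
cd $HOME/cudnn_samples_v7/mnistCUDNN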
After copying the samples folder to the home folder (where you can run them), change directory into the mnistCUDNN folder and run the following to compile the mnistCUDNN example:
make clean && make
The output of this command is shown in Fig 9.8:
Then run the mnistCUDNN example from within the mnistCUDNN folder by using the following command:
./mnistCUDNN
When you run the mnistCUDNN example, a successful run should finish with “Test passed!” together with the classification result, as shown in Fig 10.5.
Step 11: Download TensorRT
The final package listed as part of the software requirements is the optional TensorRT, which can be downloaded and installed "to improve latency and throughput for inference on some models".
The installation instructions for TensorRT are outlined on Tensorflow's own website here, but more detailed instructions and clearer information on TensorRT installation for different operating systems, including the Debian-based Linux installation I am using, can be found here.
Clicking 'Download' in Fig 11.1 takes you to the page shown in Fig 11.2 (as with cuDNN, downloading TensorRT requires a free registration; log in again using the Nvidia developer credentials you used for cuDNN above):
This takes you to a further page (Fig 11.3) which shows the available TensorRT versions. I have chosen to download TensorRT 6 as per the Tensorflow suggestions at Step 1.
I’m choosing TensorRT 6.0.1.8 GA for Ubuntu 1804 and CUDA 10.2 .deb local repo packages (as my system changed to having CUDA 10.2 rather than 10.1 during the steps above).
You can check the version(s) of CUDA you have installed, as follows:
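A simple way to do this (assuming the default install location under /usr/local) is to list the versioned CUDA directories:
# each installed CUDA Toolkit sits in its own directory, e.g. /usr/local/cuda-10.2
ls /usr/local | grep cuda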
Fig. 11.6 shows that I have CUDA 9.0, CUDA 10.1 and CUDA 10.2 (CUDA 10.2 was installed last). Based on this excellent article called MultiCUDA: Multiple Versions of CUDA on One Machine, multiple versions of CUDA can live side by side. It states “Installing multiple versions won’t cause any of the previous versions to get overwritten, so no need to worry. Each version you install will overwrite the configurations that cause the operating system to use a certain version, but by default, they all get installed under /usr/local in separate directories by their version numbers.”
I have therefore left all three versions of CUDA in place — CUDA 10.2 will be used in the first instance.
Step 12: Install TensorRT
Notes on installing TensorRT are on the Nvidia website here.
I am installing the .deb version of TensorRT, which, as Fig 12.1 above states, automatically installs dependencies. The version-independent instructions for installing TensorRT are as follows:
os="ubuntu1x04"
tag="cudax.x-trt7.x.x.x-ea-yyyymmdd"
sudo dpkg -i nv-tensorrt-repo-${os}-${tag}_1-1_amd64.deb
sudo apt-key add /var/nv-tensorrt-repo-${tag}/7fa2af80.pub
sudo apt-get update
sudo apt-get install tensorrt
The .deb file for TensorRT which I have downloaded is "nv-tensorrt-repo-ubuntu1804-cuda10.2-trt6.0.1.8-ga-20191108_1-1_amd64.deb", and I have saved it to the Downloads folder.
For my operating system and CUDA ‘tag’, these installation instructions become:
# change directory to where you have downloaded the .deb file
# (in my case, downloads)
cd ~/Downloads
# specific instructions for nv-tensorrt-repo-ubuntu1804-cuda10.2-trt6.0.1.8-ga-20191108_1-1_amd64.deb
os="ubuntu1804"
tag="cuda10.2-trt6.0.1.8-ga-20191108"
sudo dpkg -i nv-tensorrt-repo-${os}-${tag}_1-1_amd64.deb
sudo apt-key add /var/nv-tensorrt-repo-${tag}/7fa2af80.pub
Changing to the Downloads folder and installing the TensorRT .deb file is shown in the terminal in Fig 12.2.
Having added the key for the TensorRT installation, finally update the packages and install TensorRT. This will take a few minutes. You will be prompted for a 'y/n' answer during the TensorRT installation, which you can pre-empt by adding a '-y' flag to the installation command.
sudo apt-get update
# install TensorRT (the '-y' flag at the end of the command pre-empts the prompt)
sudo apt-get install tensorrt -y
As per the instructions here, if you intend to use TensorRT with Python 3, also install the following package:
sudo apt-get install python3-libnvinfer-dev
If you plan to use TensorRT with TensorFlow, run the following command:
sudo apt-get install uff-converter-tf
Verify the TensorRT installation using the following command:
dpkg -l | grep TensorRT
Step 13: Install Tensorflow (recommended: installation in a virtual environment)
Every installation step I have carried out up until this point has been a system-wide installation. You could install Tensorflow on a similar system-wide basis, but it is more advisable to install it within a virtual environment, so that your Tensorflow installation does not unintentionally install, uninstall or otherwise interfere with other packages.
This step sets out installation within a new virtual environment using the command line and the Pycharm IDE, but you can create a virtual environment using your preferred method. The commands for installing Tensorflow 2 within a virtual environment are here. As of Tensorflow 2.1, there is no separate installation command for the CPU-only and GPU-supported versions; the standard tensorflow pip package includes GPU support.
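For reference, if you prefer to create and activate a virtual environment directly from the command line rather than through Pycharm, a minimal sketch (assuming Python 3 and the python3-venv package are installed) is:
# create a virtual environment called 'venv' in the current project folder
python3 -m venv venv
# activate the virtual environment
source venv/bin/activate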
Step 13.1: Set up a new Pycharm project with virtual environment
I have created an illustrative project in Pycharm to show the initial creation of the virtual environment. The version of Pycharm is Community Edition 2020.1.1.
This starts by creating a new (demo) project in Pycharm, using File -> New Project as shown in Fig 13.1
This produces the window for creating a project within a virtual environment as shown in Fig 13.2
The window in Fig 13.2 gives the option to name the new project. If you click the arrow to the left of the words “Project Interpreter” it provides the options shown in Fig 13.3.
Naming the project "example_tf_2" in the Location box makes changes to the window as shown in Fig 13.4:
When you click ‘Create’ to create a new project, the window shown in Fig 13.5 appears:
I choose 'attach', which adds this new project to the drop-down list of other projects in Pycharm which I already have open. This now creates a new project folder called 'example_tf_2'; note that it has a 'venv' folder.
In the bottom window of the Pycharm viewer (the Terminal), change directory to the relevant directory (in this case, the new project directory 'example_tf_2').
Activate a new virtual environment (called 'venv') by using the following command in the command line:
source venv/bin/activate
If you have called your virtual environment something other than 'venv', e.g. 'myvenv', then the corresponding command would instead be:
source myvenv/bin/activate
When you run the activation command, the command line changes to show "(venv)" at the beginning, which means that the virtual environment has now been activated.
Step 13.2: Install Tensorflow 2 in the virtual environment
To install Tensorflow within this virtual environment, run the following pip command in the command line window (either in Pycharm or in your own terminal):
# choosing 'tensorflow' without specifying the version installs the latest stable version of tensorflow (here, version 2.1.0)
# the command prompt should read something like:
# (venv) /your/particular/path$
# installs latest stable version of tensorflow, with GPU or CPU support
pip install tensorflow
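If you want a particular release rather than the latest stable version, you can pin it explicitly using standard pip syntax; for example, to install version 2.1.0:
# installs exactly tensorflow 2.1.0 (GPU support is included from version 2.1 onwards)
pip install tensorflow==2.1.0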
When the installation of Tensorflow 2.1.0 has finalised within the virtual environment, the terminal will return to the command prompt "$":
Step 14: Checking Correct Installation of Tensorflow and GPU support
To test CUDA support for your Tensorflow installation and that Tensorflow has found your GPU devices, first invoke Python from within the terminal by typing 'python' at the command line:
python
The command prompt should change from "$" to ">>>". Then import Tensorflow, following which you can run the build test in the shell:
# import tensorflow package
import tensorflow as tf
# test that tensorflow has been built with cuda - should return True
tf.test.is_built_with_cuda()
This should return the output 'True'.
In order to check the GPU(s) found by Tensorflow, you can list these out using the following command (in the Pycharm terminal):
tf.config.list_physical_devices('GPU')
Tensorflow should output something like the example above in Fig 14.2 — the name and device type.
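If you prefer to run this check in a single step from the shell rather than from the Python prompt, an equivalent one-liner (a sketch; the exact device name will depend on your system) is:
# should print a list such as [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"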
Finally, you could verify the install by using the command shown on the Tensorflow website (from a standard command prompt "$"). N.B. If you are at the Python command prompt (">>>") and wish to return to the standard shell command prompt ("$"), type exit() or press CTRL+D.
The command below imports Tensorflow and carries out a sample computation:
python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
This will output information on your GPU and installed packages, together with the output to the test function.
Conclusions and a Note on Keras and Tensorflow
This article has set out the process I used to install new Nvidia drivers, CUDA, cuDNN and TensorRT (optional), all precursors to using Tensorflow 2 with GPU support on my Ubuntu 18.04 machine.
Previously I have used stand-alone Keras with a Tensorflow backend. Now with Tensorflow 2.0 and above, Keras is included in the form “tf.keras”, so it is no longer necessary to install Keras separately (although I suppose you still can).
Tensorflow Keras (tf.keras) appears to have many of the same features of stand-alone Keras, and a guide for tf.keras can be found here.
The guide states that tf.keras can run any Keras-compatible code, but note:
- The tf.keras version in the latest TensorFlow release might not be the same as the latest keras version from PyPI. Check tf.keras.__version__.
- When saving a model's weights, tf.keras defaults to the checkpoint format. Pass save_format='h5' to use HDF5 (or pass a filename that ends in .h5).
Other postings of this article:
This tutorial was originally posted by Dr Joanne Kitson in Towards Data Science on medium.com.