Compilation
OpenCV with dnn module
I usually work with TensorFlow, Keras, PyTorch, or Caffe, but recently I had to use OpenCV's "dnn" module for detection, which meant compiling OpenCV with CUDA and cuDNN support. The details are as follows:
NVIDIA driver installation
apt search nvidia-driver
apt-cache search nvidia-driver
sudo apt install nvidia-410
sudo reboot
nvidia-smi
sudo nvidia-settings
Install CUDA and cuDNN
Download cuda-linux.10.0.130-24817639.run
sudo ./cuda-linux.10.0.130-24817639.run
sudo bash -c "echo /usr/local/cuda/lib64/ > /etc/ld.so.conf.d/cuda.conf"
sudo ldconfig
sudo gedit /etc/environment
Append ':/usr/local/cuda/bin' to the end of the PATH variable
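For the current shell session, the same change can be made with a one-liner (a sketch; adjust the path if CUDA is installed elsewhere):

```shell
# Make the CUDA binaries visible to the current shell session only;
# the /etc/environment edit above makes the change permanent.
export PATH="$PATH:/usr/local/cuda/bin"
echo "$PATH"
```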
Install cuDNN 7.3 for CUDA 10.0
Download all three .deb files for Ubuntu 16.04: the runtime library, the developer library, and the code samples library. Install them in the following order: runtime, developer, then code samples.
sudo dpkg -i libcudnn7_7.3.0.29-1+cuda10.0_amd64.deb
sudo dpkg -i libcudnn7-dev_7.3.0.29-1+cuda10.0_amd64.deb
sudo dpkg -i libcudnn7-doc_7.3.0.29-1+cuda10.0_amd64.deb
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/extras/CUPTI/lib64"
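A quick way to confirm that the toolkit and cuDNN are wired up correctly (a hedged sketch; each check just prints a message if something is missing):

```shell
# Report the CUDA compiler version if nvcc is on the PATH.
command -v nvcc >/dev/null && nvcc --version || echo "nvcc not on PATH yet"
# Check that the dynamic linker can see the cuDNN libraries.
ldconfig -p | grep -i cudnn || echo "cuDNN not visible to ldconfig yet"
```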
OpenCV Compilation
For this part, I have followed this article.
1. Install OpenCV and “dnn” GPU dependencies
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install build-essential cmake unzip pkg-config
sudo apt-get install libjpeg-dev libpng-dev libtiff-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install libv4l-dev libxvidcore-dev libx264-dev
sudo apt-get install libgtk-3-dev
sudo apt-get install libatlas-base-dev gfortran
sudo apt-get install python3-dev
2. Download OpenCV source code
cd ~
wget -O opencv.zip https://github.com/opencv/opencv/archive/4.2.0.zip
wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.2.0.zip
unzip opencv.zip
unzip opencv_contrib.zip
mv opencv-4.2.0 opencv
mv opencv_contrib-4.2.0 opencv_contrib
3. Configure Python virtual environment
wget https://bootstrap.pypa.io/get-pip.py
sudo python3 get-pip.py
sudo pip install virtualenv virtualenvwrapper
sudo rm -rf ~/get-pip.py ~/.cache/pip
nano ~/.bashrc
Once you have the ~/.bashrc file open, scroll to the bottom of the file, and insert the following:
# virtualenv and virtualenvwrapper
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh
You can then reload your ~/.bashrc file in your terminal session:
source ~/.bashrc
The final step is to create your Python virtual environment:
mkvirtualenv opencv_cuda -p python3
You should then install NumPy into the opencv_cuda environment:
pip install numpy
If you ever close your terminal or deactivate your Python virtual environment, you can access it again via the workon command:
workon opencv_cuda
4. Determine your CUDA architecture version
nvidia-smi
You can look up the architecture version (also called the compute capability) for your particular GPU on NVIDIA's CUDA GPUs page.
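As an illustration, here are the values for a few common cards (a hypothetical helper with a small excerpt of NVIDIA's table; look up your own card to be sure):

```python
# Excerpt of NVIDIA's compute-capability table; CUDA_ARCH_BIN takes
# exactly this value. Extend the dictionary with your own GPU as needed.
ARCH_BIN = {
    "Tesla K80": "3.7",
    "GeForce GTX 1080": "6.1",
    "Tesla V100": "7.0",
    "GeForce RTX 2080 Ti": "7.5",
}

def arch_for(gpu_name):
    """Return the CUDA_ARCH_BIN string for a known GPU name."""
    return ARCH_BIN[gpu_name]

print(arch_for("Tesla V100"))  # -> 7.0, the value used in the cmake command below
```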
5. Configure OpenCV with NVIDIA GPU support
workon opencv_cuda
cd ~/opencv
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D INSTALL_C_EXAMPLES=OFF \
-D OPENCV_ENABLE_NONFREE=ON \
-D WITH_CUDA=ON \
-D WITH_CUDNN=ON \
-D OPENCV_DNN_CUDA=ON \
-D ENABLE_FAST_MATH=1 \
-D CUDA_FAST_MATH=1 \
-D CUDA_ARCH_BIN=7.0 \
-D WITH_CUBLAS=1 \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
-D HAVE_opencv_python3=ON \
-D PYTHON_EXECUTABLE=~/.virtualenvs/opencv_cuda/bin/python \
-D BUILD_EXAMPLES=ON ..
The most important, and most error-prone, setting is CUDA_ARCH_BIN; make sure it matches the architecture version you found in Step 4.
You can verify that your cmake command executed properly by looking at the output.
You can also look at the Python 3 section to verify that both your Interpreter and numpy point to your Python virtual environment.
Make sure you take note of the install path shown in the cmake output as well; you'll need it when we finish the OpenCV install.
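One way to double-check which flags actually took effect (a sketch; assumes you are still in the build directory):

```shell
# CMakeCache.txt records the final value of every -D option passed to cmake.
if [ -f CMakeCache.txt ]; then
    grep -E "WITH_CUDA|OPENCV_DNN_CUDA|CUDA_ARCH_BIN" CMakeCache.txt
else
    echo "CMakeCache.txt not found - run cmake first"
fi
```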
6. Compile OpenCV with “dnn” GPU support
make -j8
7. Install OpenCV with “dnn” GPU support
sudo make install
sudo ldconfig
The final step is to sym-link the OpenCV library into your Python virtual environment.
To do so, you need to know the location of where the OpenCV bindings were installed — you can determine that path via the install path configuration in Step #5.
In my case, the install path was
lib/python3.5/site-packages/cv2/python-3.5
That means that my OpenCV bindings should be in
/usr/local/lib/python3.5/site-packages/cv2/python-3.5
I can confirm the location by using the ls command:
ls -l /usr/local/lib/python3.5/site-packages/cv2/python-3.5
Now that I know the location of my OpenCV bindings, I need to sym-link them into my Python virtual environment using the ln command:
cd ~/.virtualenvs/opencv_cuda/lib/python3.5/site-packages/
ln -s /usr/local/lib/python3.5/site-packages/cv2/python-3.5/cv2.cpython-35m-x86_64-linux-gnu.so cv2.so
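If you would rather not type the versioned filename by hand, the bindings can be located automatically (a sketch; assumes the /usr/local install prefix from Step 5):

```shell
# Find the compiled cv2 bindings and link them into the current directory.
SO=$(find /usr/local/lib -name "cv2.cpython-*.so" 2>/dev/null | head -n 1)
if [ -n "$SO" ]; then
    ln -sf "$SO" cv2.so && echo "linked $SO"
else
    echo "no cv2 bindings found under /usr/local/lib"
fi
```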
8. Verify OpenCV installation
workon opencv_cuda
python
import cv2
cv2.__version__
The reported version should be 4.2.0, which is the version we just compiled.
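To also confirm that the GPU side made it into the build, you can query the CUDA module and point "dnn" at the GPU (a sketch; it falls back gracefully if cv2 is not importable, and the model filenames in the comments are hypothetical):

```python
# Verify the compiled bindings and count the CUDA devices OpenCV can see.
try:
    import cv2
    print("OpenCV version:", cv2.__version__)
    print("CUDA devices:", cv2.cuda.getCudaEnabledDeviceCount())
    # For an actual model you would then select the CUDA backend, e.g.:
    # net = cv2.dnn.readNet(weights_path, config_path)  # hypothetical files
    # net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
    # net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
except ImportError:
    print("cv2 not importable - re-check the sym-link from Step 7")
```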