We want to run inference from Python with onnxruntime-gpu, starting from a clean environment.
| Item | Value |
|---|---|
| OS | Ubuntu 18.04 |
| GPU | GeForce RTX 2080 Ti |
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install -y vim csh flex gfortran libgfortran3 g++ \
cmake xorg-dev patch zlib1g-dev libbz2-dev \
libboost-all-dev openssh-server libcairo2 \
libcairo2-dev libeigen3-dev lsb-core \
lsb-base net-tools network-manager \
git-core git-gui git-doc xclip gdebi-core libffi-dev \
make build-essential libssl-dev zlib1g-dev libbz2-dev \
libreadline-dev libsqlite3-dev wget curl llvm \
libncurses5-dev libncursesw5-dev \
xz-utils tk-dev libffi-dev liblzma-dev python-openssl
▼ TensorFlow's GPU support page is kept up to date and is very accurate: https://www.tensorflow.org/install/gpu
Add the following to ~/.bashrc (or similar):
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/extras/CUPTI/lib64
export PATH="/usr/local/cuda/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"
# Add NVIDIA package repositories
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-repo-ubuntu1804_10.1.243-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1804_10.1.243-1_amd64.deb
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
sudo apt-get update
wget http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb
sudo apt install ./nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb
sudo apt-get update
# Install NVIDIA driver
sudo apt-get install --no-install-recommends nvidia-driver-430
# Reboot. Check that GPUs are visible using the command: nvidia-smi
# Install development and runtime libraries (~4GB)
sudo apt-get install --no-install-recommends \
cuda-10-1 \
libcudnn7=7.6.4.38-1+cuda10.1 \
libcudnn7-dev=7.6.4.38-1+cuda10.1
# Install TensorRT. Requires that libcudnn7 is installed above.
sudo apt-get install -y --no-install-recommends libnvinfer6=6.0.1-1+cuda10.1 \
libnvinfer-dev=6.0.1-1+cuda10.1 \
libnvinfer-plugin6=6.0.1-1+cuda10.1
git clone https://github.com/yyuu/pyenv.git ~/.pyenv
Put pyenv on your PATH.
$ vim ~/.bashrc
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
$ source ~/.bashrc
Install your preferred Python version and make it the active one.
pyenv install 3.6.8
pyenv install 3.7.2
pyenv global 3.6.8
pyenv rehash
python -V # 3.6.8
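To double-check from inside the interpreter that the pyenv-managed Python is really the one in use (a small optional sketch):
import sys

print(sys.version.split()[0])   # expected: 3.6.8
print(sys.executable)           # expected: somewhere under ~/.pyenv/versions/3.6.8/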
Up to here this is the usual procedure; the problem starts from this point.
pip install onnxruntime-gpu
When onnxruntime is imported in a program, the following error occurs:
self._handle = _dlopen(self._name, mode)
OSError: libcublas.so.10.0: cannot open shared object file: No such file or directory
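The traceback comes from ctypes trying to dlopen libcublas.so.10.0. You can run the same check directly, independent of onnxruntime, to see which CUDA runtime libraries the dynamic loader can actually find (a minimal diagnostic sketch; the library names are taken from the error and from the 10.1 install above):
import ctypes

for name in ("libcublas.so.10.0", "libcudart.so.10.0",
             "libcublas.so.10",   "libcudart.so.10.1"):
    try:
        ctypes.CDLL(name)
        print(name, "-> found")
    except OSError as exc:
        print(name, "-> NOT found:", exc)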
The requirements for onnxruntime-gpu are as follows:
GPU builds require CUDA runtime libraries being installed on the system:
Version: CUDA 10.0 and cuDNN 7.6 on Linux
cuDNN 7.3 on Windows
-- Older ONNX Runtime releases used CUDA 9.1 and cuDNN 7.1 - please refer to prior release notes for more details.
Solution: the CUDA 10.0 libraries are needed.
And of course, no 10.0 libraries are installed:
ls -la /usr/local
drwxr-xr-x 12 root root 4096 Feb 21 13:52 .
drwxr-xr-x 11 root root 4096 Aug  7  2019 ..
drwxr-xr-x  2 root root 4096 Feb 21 13:52 bin
lrwxrwxrwx  1 root root    9 Feb 21 13:52 cuda -> cuda-10.1
drwxr-xr-x 15 root root 4096 Feb 21 13:52 cuda-10.1
drwxr-xr-x  3 root root 4096 Feb 21 13:51 cuda-10.2
drwxr-xr-x  2 root root 4096 Aug  7  2019 etc
drwxr-xr-x  2 root root 4096 Aug  7  2019 games
drwxr-xr-x  2 root root 4096 Aug  7  2019 include
drwxr-xr-x  4 root root 4096 Feb 21 13:43 lib
lrwxrwxrwx  1 root root    9 Feb 21 13:35 man -> share/man
drwxr-xr-x  2 root root 4096 Aug  7  2019 sbin
drwxr-xr-x  6 root root 4096 Aug  7  2019 share
drwxr-xr-x  2 root root 4096 Aug  7  2019 src
ls /usr/local/cuda/lib64
libOpenCL.so libnppicom.so.10
libOpenCL.so.1 libnppicom.so.10.2.0.243
libOpenCL.so.1.1 libnppicom_static.a
libaccinj64.so libnppidei.so
libaccinj64.so.10.1 libnppidei.so.10
libaccinj64.so.10.1.243 libnppidei.so.10.2.0.243
libcudadevrt.a libnppidei_static.a
libcudart.so libnppif.so
libcudart.so.10.1 libnppif.so.10
libcudart.so.10.1.243 libnppif.so.10.2.0.243
libcudart_static.a libnppif_static.a
libcufft.so libnppig.so
libcufft.so.10 libnppig.so.10
libcufft.so.10.1.1.243 libnppig.so.10.2.0.243
libcufft_static.a libnppig_static.a
libcufft_static_nocallback.a libnppim.so
libcufftw.so libnppim.so.10
libcufftw.so.10 libnppim.so.10.2.0.243
libcufftw.so.10.1.1.243 libnppim_static.a
libcufftw_static.a libnppist.so
libcuinj64.so libnppist.so.10
libcuinj64.so.10.1 libnppist.so.10.2.0.243
libcuinj64.so.10.1.243 libnppist_static.a
libculibos.a libnppisu.so
libcurand.so libnppisu.so.10
libcurand.so.10 libnppisu.so.10.2.0.243
libcurand.so.10.1.1.243 libnppisu_static.a
libcurand_static.a libnppitc.so
libcusolver.so libnppitc.so.10
libcusolver.so.10 libnppitc.so.10.2.0.243
libcusolver.so.10.2.0.243 libnppitc_static.a
libcusolverMg.so libnpps.so
libcusolverMg.so.10 libnpps.so.10
libcusolverMg.so.10.2.0.243 libnpps.so.10.2.0.243
libcusolver_static.a libnpps_static.a
libcusparse.so libnvToolsExt.so
libcusparse.so.10 libnvToolsExt.so.1
libcusparse.so.10.3.0.243 libnvToolsExt.so.1.0.0
libcusparse_static.a libnvgraph.so
liblapack_static.a libnvgraph.so.10
libmetis_static.a libnvgraph.so.10.1.243
libnppc.so libnvgraph_static.a
libnppc.so.10 libnvjpeg.so
libnppc.so.10.2.0.243 libnvjpeg.so.10
libnppc_static.a libnvjpeg.so.10.3.0.243
libnppial.so libnvjpeg_static.a
libnppial.so.10 libnvrtc-builtins.so
libnppial.so.10.2.0.243 libnvrtc-builtins.so.10.1
libnppial_static.a libnvrtc-builtins.so.10.1.243
libnppicc.so libnvrtc.so
libnppicc.so.10 libnvrtc.so.10.1
libnppicc.so.10.2.0.243 libnvrtc.so.10.1.243
libnppicc_static.a stubs
libnppicom.so
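The same can be checked more broadly with a short script; on Ubuntu the 10.1+ cuBLAS packages live under /usr/lib/x86_64-linux-gnu rather than /usr/local/cuda/lib64, so both locations are searched here (the paths are assumptions about a standard Ubuntu layout):
import glob

patterns = (
    "/usr/local/cuda*/lib64/libcublas.so*",
    "/usr/lib/x86_64-linux-gnu/libcublas.so*",
)
for pattern in patterns:
    for path in sorted(glob.glob(pattern)):
        print(path)
# Only 10.1/10.2 variants show up; libcublas.so.10.0 is nowhere to be found.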
▼ Download the CUDA 10.0 packages from the NVIDIA download archive and install them: https://developer.nvidia.com/cuda-10.0-download-archive?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=1804&target_type=deblocal
▼ Install CUDA 10.0
sudo apt-get install --no-install-recommends \
cuda-10-0 \
libcudnn7=7.6.5.32-1+cuda10.0 \
libcudnn7-dev=7.6.5.32-1+cuda10.0
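After this install, the library from the error message should be loadable again. A quick check (same name as in the traceback; if it still fails, make sure /usr/local/cuda-10.0/lib64 is visible to the loader, e.g. via LD_LIBRARY_PATH or ldconfig):
import ctypes

ctypes.CDLL("libcublas.so.10.0")   # raises OSError if the 10.0 runtime is still missing
print("libcublas.so.10.0 loaded")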
Run the following Python code:
import onnxruntime
print(onnxruntime.get_device())
model_path = "path/to/onnxfile"
session = onnxruntime.InferenceSession(model_path)
print(session.get_providers())
It is a success if you get back the following:
GPU
['CUDAExecutionProvider', 'CPUExecutionProvider']
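Beyond this device check, here is a minimal end-to-end inference sketch; the model path is the same placeholder as above, and the input name, shape, and float32 dtype are assumptions that depend on your ONNX file:
import numpy as np
import onnxruntime

model_path = "path/to/onnxfile"          # placeholder, replace with your model
session = onnxruntime.InferenceSession(model_path)

# Ask the model what it expects as input (name/shape/type depend on the model).
input_meta = session.get_inputs()[0]
print(input_meta.name, input_meta.shape, input_meta.type)

# Build a dummy tensor; dynamic dimensions (non-int entries) are set to 1 here.
shape = [d if isinstance(d, int) else 1 for d in input_meta.shape]
dummy = np.zeros(shape, dtype=np.float32)

outputs = session.run(None, {input_meta.name: dummy})
print([o.shape for o in outputs])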