Get started with GPU accelerated machine learning in WSL
Machine learning (ML) is becoming a key part of many development workflows. Whether you're a data scientist, an ML engineer, or just starting your learning journey with ML, the Windows Subsystem for Linux (WSL) offers a great environment to run the most common and popular GPU-accelerated ML tools.
There are several ways to set up these tools. For example, NVIDIA CUDA in WSL, TensorFlow-DirectML, and PyTorch-DirectML each offer a different way to use your GPU for ML with WSL. To learn more about the reasons for choosing one over another, see GPU accelerated ML training.
This guide will show how to set up:
- NVIDIA CUDA, if you have an NVIDIA graphics card, along with running a sample ML framework container
- TensorFlow-DirectML and PyTorch-DirectML on your AMD, Intel, or NVIDIA graphics card
Prerequisites
- Ensure you are running Windows 11 or Windows 10, version 21H2 or higher; a quick way to check the build is sketched after this list.
- Install WSL and set up a username and password for your Linux distribution.
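Windows 10, version 21H2 corresponds to OS build 19044. To confirm your build from inside a WSL shell, one option is to use Windows interop; this is a minimal sketch and assumes interop is enabled, which it is by default:
# Print the Windows version string, e.g. "Microsoft Windows [Version 10.0.19044.2846]"
cmd.exe /c ver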
Setting up NVIDIA CUDA with Docker
- Install Docker Desktop, or install the Docker engine directly in WSL by running the following commands:
curl https://get.docker.com | sh
sudo service docker start
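Before moving on, you can confirm that the daemon is up and reachable; a minimal smoke test:
# Query the daemon; this fails with a connection error if dockerd is not running
sudo docker info
# Optionally run Docker's standard hello-world container end to end
sudo docker run --rm hello-world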
- If you installed the Docker engine directly, install the NVIDIA Container Toolkit by following the steps below.
Set up the stable repository for the NVIDIA Container Toolkit by running the following commands:
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-docker-keyring.gpg
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-docker-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
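The repository setup above only registers the package source; the toolkit itself still has to be installed and Docker restarted so it picks up the NVIDIA runtime. The following is a sketch: nvidia-docker2 is the package name this repository historically provides, and the CUDA image tag in the final smoke test is an assumption, so substitute any CUDA image that ships nvidia-smi:
# Install the NVIDIA container runtime from the repository configured above
sudo apt-get update
sudo apt-get install -y nvidia-docker2
# Restart the Docker daemon so the NVIDIA runtime is registered
sudo service docker restart
# Smoke test: nvidia-smi inside a container should list your GPU (image tag is an assumption)
sudo docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi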