We will set up a project for running PyTorch as part of our MLOps work.

Setup

Use uv

I cannot recommend uv enough: install it and use it instead of Poetry, pip, and the rest. It is many times faster and more convenient, and it lets you swap Python versions easily.
Installation | uv

Let’s create the environment. The following commands will:

  1. Create a directory
  2. Initialize a proper Python project there
  3. Install the necessary packages
mkdir pytorch-sandbox && cd pytorch-sandbox
uv init
uv add mlflow torch torchvision torchaudio --index https://download.pytorch.org/whl/cu128

uv add creates the .venv for us (it implicitly runs uv venv [--python x.xx]), but if you don’t want to use uv add, you can go old school with uv venv followed by uv pip install.


Our initial directory now contains all the boilerplate environment configuration needed to start coding.
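As an illustration, the layout should look roughly like this (exact file names vary slightly between uv versions):

pytorch-sandbox/
├── .python-version
├── .venv/
├── main.py
├── pyproject.toml
├── README.md
└── uv.lock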

For more info on pyproject.toml and other environment files, refer to: Working on projects | uv

Enabling Jupyter

I do my work in VS Code, and it integrates well with uv.
We can enable Jupyter support like this: uv add --dev ipykernel
For more info: Using uv with Jupyter | Using Jupyter from VS Code | uv
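
After selecting the project interpreter as the notebook kernel in VS Code, a quick sanity-check cell confirms the notebook really runs inside the project’s .venv:

import sys
print(sys.executable)  # should point into pytorch-sandbox/.venv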

Install MLflow for experiment tracking

Use MLflow for experiment tracking, logging, and statistical insights; it is also supported within Databricks. You can install it in a Python project via uv add mlflow
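
As a quick smoke test, here is a minimal sketch of logging a run to the local tracking store; the experiment name, parameter, and metric values are made up for illustration:

import mlflow

mlflow.set_experiment("pytorch-sandbox")     # hypothetical experiment name
with mlflow.start_run():
    mlflow.log_param("lr", 1e-3)             # log a hyperparameter
    mlflow.log_metric("loss", 0.42, step=1)  # log a metric at a given step

You can then run mlflow ui and browse the run at http://127.0.0.1:5000.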


If you use WSL 2 and NVIDIA

To enable CUDA on WSL Ubuntu 24.04, install the following:

wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-wsl-ubuntu.pin
sudo mv cuda-wsl-ubuntu.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/12.9.0/local_installers/cuda-repo-wsl-ubuntu-12-9-local_12.9.0-1_amd64.deb
sudo dpkg -i cuda-repo-wsl-ubuntu-12-9-local_12.9.0-1_amd64.deb
sudo cp /var/cuda-repo-wsl-ubuntu-12-9-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda-toolkit-12-9

Then place the following variables in your ~/.bashrc or ~/.zshrc:

export PATH=${PATH}:/usr/local/cuda-12.9/bin
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-12.9/lib64

Then, after reloading your shell, you can run the following to verify that your graphics card is visible and check the CUDA version:

nvidia-smi && nvcc --version
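
Finally, to confirm that PyTorch itself sees the GPU, here is a short check you can run from the project environment with uv run python (the snippet is a sanity-check sketch, not part of the setup):

import torch

print(torch.__version__)              # should report a +cu128 build
print(torch.cuda.is_available())      # True if the driver and runtime are wired up
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # your GPU model name
    print(torch.version.cuda)             # CUDA version the wheel was built against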