An X79 "e-waste" box of mine that had been sitting idle for a while now runs PVE as a home server. It has an NVIDIA GTX 1060 6G installed that I had never put to use. After reading an introduction to cloud-gaming architectures, I learned that GPUs can be virtualized too, so I decided to virtualize this card under PVE myself. That way it can serve a Linux guest and a Windows guest at the same time, and the card earns its keep instead of going to waste.
Environment:
- PVE: 7.3
- GPU: NVIDIA GTX 1060 6G
- Host platform: dual-socket X79 E5, 64 GB RAM
I started out by following this article: first install the required packages and the GPU driver on the host. Note that the GPU driver and mdevctl are the core pieces of software here.
apt update && apt install dkms git build-essential pve-kernel-5.15 pve-headers-5.15 cargo jq uuid-runtime -y
wget -P /opt/ http://ftp.br.debian.org/debian/pool/main/m/mdevctl/mdevctl_0.81-1_all.deb
dpkg -i /opt/mdevctl_0.81-1_all.deb
Then configure the kernel:
echo vfio >> /etc/modules
echo vfio_iommu_type1 >> /etc/modules
echo vfio_pci >> /etc/modules
echo vfio_virqfd >> /etc/modules
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf
# Update the initramfs
update-initramfs -k all -u
Configure the bootloader:
# Edit grub. Don't change things blindly; pick the settings that match your environment.
nano /etc/default/grub
# Find this line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
# and change it to:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
# On an AMD CPU, use this instead:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"
# Update the bootloader
update-grub
update-grub
Reboot the host once.
Check whether IOMMU came up successfully. If IOMMU group entries like the ones below appear, it worked:
root@pve3:~# dmesg |grep iommu
[ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-5.11.22-7-pve root=/dev/mapper/pve-root ro quiet iommu=pt intel_iommu=on
[ 0.075784] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-5.11.22-7-pve root=/dev/mapper/pve-root ro quiet iommu=pt intel_iommu=on
[ 0.352588] iommu: Default domain type: Passthrough (set via kernel command line)
[ 1.373583] pci 0000:00:00.0: Adding to iommu group 0
[ 1.373592] pci 0000:00:02.0: Adding to iommu group 1
[ 1.373605] pci 0000:00:14.0: Adding to iommu group 2
[ 1.373613] pci 0000:00:17.0: Adding to iommu group 3
[ 1.373623] pci 0000:00:1c.0: Adding to iommu group 4
[ 1.373637] pci 0000:00:1d.0: Adding to iommu group 5
[ 1.373647] pci 0000:00:1d.2: Adding to iommu group 6
[ 1.373656] pci 0000:00:1d.3: Adding to iommu group 7
[ 1.373675] pci 0000:00:1f.0: Adding to iommu group 8
[ 1.373683] pci 0000:00:1f.2: Adding to iommu group 8
[ 1.373691] pci 0000:00:1f.3: Adding to iommu group 8
[ 1.373699] pci 0000:00:1f.4: Adding to iommu group 8
[ 1.373707] pci 0000:00:1f.6: Adding to iommu group 9
[ 1.373717] pci 0000:01:00.0: Adding to iommu group 10
[ 1.373726] pci 0000:03:00.0: Adding to iommu group 11
[ 1.373735] pci 0000:05:00.0: Adding to iommu group 12
[ 1.656483] intel_iommu=on
Note: the log above must contain IOMMU group entries; otherwise IOMMU failed to come up. The most likely cause is that VT-d is disabled (or not fully enabled) in the machine's BIOS. Set VT-d and the related BIOS options to enabled, reboot, and run the dmesg command above again to check whether the log looks right.
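As an alternative to grepping dmesg, the groups can also be read straight from sysfs. A minimal sketch (the /sys/kernel/iommu_groups path is standard on Linux; the function name is my own):

```shell
# list_iommu_groups: print each IOMMU group and the PCI devices inside it.
# Reads /sys/kernel/iommu_groups by default; a different base directory
# can be passed as the first argument.
list_iommu_groups() {
  base="${1:-/sys/kernel/iommu_groups}"
  for g in "$base"/*; do
    [ -d "$g" ] || continue
    printf 'IOMMU group %s:\n' "${g##*/}"
    for d in "$g"/devices/*; do
      [ -e "$d" ] || continue
      printf '  %s\n' "${d##*/}"
    done
  done
}
list_iommu_groups
```

If this prints nothing at all, IOMMU is not active and the BIOS/grub settings above need another look.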
Install the driver
# Download the driver to /opt
wget https://foxi.buduanwang.vip/pan/foxi/Virtualization/vGPU/NVIDIA-Linux-x86_64-460.73.01-grid-vgpu-kvm-v5-5.15.run -P /opt
# Make the installer executable
chmod +x /opt/NVIDIA-Linux-x86_64-460.73.01-grid-vgpu-kvm-v5-5.15.run
# Install
sh /opt/NVIDIA-Linux-x86_64-460.73.01-grid-vgpu-kvm-v5-5.15.run
The installation steps are covered in the article referenced above.
Configure vgpu_unlock
vgpu_unlock unlocks virtualization on consumer-grade cards. NVIDIA's consumer cards cannot enable vGPU out of the box; for officially supported GPU virtualization you would have to buy an NVIDIA Tesla or similar. Since we are on a GTX 1060, we need vgpu_unlock to unlock vGPU.
# Clone vgpu_unlock-rs
cd /opt/ && git clone https://github.com/mbilker/vgpu_unlock-rs.git
# Build it
cd /opt/vgpu_unlock-rs && git checkout v2.0.1 && cargo build --release
# Install vgpu_unlock
cp /opt/vgpu_unlock-rs/target/release/libvgpu_unlock_rs.so /lib/nvidia/libvgpu_unlock_rs.so
Reboot the host once more.
Now verify that the driver and vgpu_unlock work. After the reboot, run nvidia-smi and confirm it prints GPU information like the following:
➜ ~ nvidia-smi
Tue Nov 29 09:31:35 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.73.01 Driver Version: 460.73.01 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce GTX 106... On | 00000000:03:00.0 Off | N/A |
| 10% 53C P8 9W / 120W | 4084MiB / 6143MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
Use mdevctl types to verify that mdev device types appear:
➜ ~ mdevctl types
0000:03:00.0
nvidia-156
Available instances: 0
Device API: vfio-pci
Name: GRID P40-2B
Description: num_heads=4, frl_config=45, framebuffer=2048M, max_resolution=5120x2880, max_instance=12
nvidia-215
Available instances: 0
Device API: vfio-pci
Name: GRID P40-2B4
Description: num_heads=4, frl_config=45, framebuffer=2048M, max_resolution=5120x2880, max_instance=12
nvidia-241
Available instances: 0
Device API: vfio-pci
Name: GRID P40-1B4
Description: num_heads=4, frl_config=45, framebuffer=1024M, max_resolution=5120x2880, max_instance=24
nvidia-283
Available instances: 0
Device API: vfio-pci
Name: GRID P40-4C
Description: num_heads=1, frl_config=60, framebuffer=4096M, max_resolution=4096x2160, max_instance=6
nvidia-284
Available instances: 0
Device API: vfio-pci
Name: GRID P40-6C
Description: num_heads=1, frl_config=60, framebuffer=6144M, max_resolution=4096x2160, max_instance=4
nvidia-285
Available instances: 0
Device API: vfio-pci
Name: GRID P40-8C
Description: num_heads=1, frl_config=60, framebuffer=8192M, max_resolution=4096x2160, max_instance=3
nvidia-286
Available instances: 0
Device API: vfio-pci
Name: GRID P40-12C
Description: num_heads=1, frl_config=60, framebuffer=12288M, max_resolution=4096x2160, max_instance=2
nvidia-287
Available instances: 0
Device API: vfio-pci
Name: GRID P40-24C
Description: num_heads=1, frl_config=60, framebuffer=24576M, max_resolution=4096x2160, max_instance=1
nvidia-46
Available instances: 0
Device API: vfio-pci
Name: GRID P40-1Q
Description: num_heads=4, frl_config=60, framebuffer=1024M, max_resolution=5120x2880, max_instance=24
nvidia-47
Available instances: 0
Device API: vfio-pci
Name: GRID P40-2Q
Description: num_heads=4, frl_config=60, framebuffer=2048M, max_resolution=7680x4320, max_instance=12
nvidia-48
Available instances: 0
Device API: vfio-pci
Name: GRID P40-3Q
Description: num_heads=4, frl_config=60, framebuffer=3072M, max_resolution=7680x4320, max_instance=8
nvidia-49
Available instances: 0
Device API: vfio-pci
Name: GRID P40-4Q
Description: num_heads=4, frl_config=60, framebuffer=4096M, max_resolution=7680x4320, max_instance=6
nvidia-50
Available instances: 0
Device API: vfio-pci
Name: GRID P40-6Q
Description: num_heads=4, frl_config=60, framebuffer=6144M, max_resolution=7680x4320, max_instance=4
nvidia-51
Available instances: 0
Device API: vfio-pci
Name: GRID P40-8Q
Description: num_heads=4, frl_config=60, framebuffer=8192M, max_resolution=7680x4320, max_instance=3
nvidia-52
Available instances: 0
Device API: vfio-pci
Name: GRID P40-12Q
Description: num_heads=4, frl_config=60, framebuffer=12288M, max_resolution=7680x4320, max_instance=2
nvidia-53
Available instances: 0
Device API: vfio-pci
Name: GRID P40-24Q
Description: num_heads=4, frl_config=60, framebuffer=24576M, max_resolution=7680x4320, max_instance=1
nvidia-54
Available instances: 0
Device API: vfio-pci
Name: GRID P40-1A
Description: num_heads=1, frl_config=60, framebuffer=1024M, max_resolution=1280x1024, max_instance=24
nvidia-55
Available instances: 10
Device API: vfio-pci
Name: GRID P40-2A
Description: num_heads=1, frl_config=60, framebuffer=2048M, max_resolution=1280x1024, max_instance=12
nvidia-56
Available instances: 0
Device API: vfio-pci
Name: GRID P40-3A
Description: num_heads=1, frl_config=60, framebuffer=3072M, max_resolution=1280x1024, max_instance=8
nvidia-57
Available instances: 0
Device API: vfio-pci
Name: GRID P40-4A
Description: num_heads=1, frl_config=60, framebuffer=4096M, max_resolution=1280x1024, max_instance=6
nvidia-58
Available instances: 0
Device API: vfio-pci
Name: GRID P40-6A
Description: num_heads=1, frl_config=60, framebuffer=6144M, max_resolution=1280x1024, max_instance=4
nvidia-59
Available instances: 0
Device API: vfio-pci
Name: GRID P40-8A
Description: num_heads=1, frl_config=60, framebuffer=8192M, max_resolution=1280x1024, max_instance=3
nvidia-60
Available instances: 0
Device API: vfio-pci
Name: GRID P40-12A
Description: num_heads=1, frl_config=60, framebuffer=12288M, max_resolution=1280x1024, max_instance=2
nvidia-61
Available instances: 0
Device API: vfio-pci
Name: GRID P40-24A
Description: num_heads=1, frl_config=60, framebuffer=24576M, max_resolution=1280x1024, max_instance=1
nvidia-62
Available instances: 0
Device API: vfio-pci
Name: GRID P40-1B
Description: num_heads=4, frl_config=45, framebuffer=1024M, max_resolution=5120x2880, max_instance=24
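The full mdevctl types listing is long. A small awk filter (a sketch; it just keys off the nvidia-NN type lines and the "Available instances:" lines shown above) reduces it to the profiles that still have free instances:

```shell
# Keep only vGPU profiles whose "Available instances" count is non-zero.
filter_free='
/^[[:space:]]*nvidia-/               { type = $1 }
/Available instances:/ && $3 + 0 > 0 { print type ": " $3 " free" }
'
mdevctl types | awk "$filter_free"
```

On the output above this would report only nvidia-55 with 10 free instances.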
If either of the two checks above fails, inspect the log output of the nvidia-vgpud and nvidia-vgpu-mgr services with:
journalctl -u nvidia-vgpud
journalctl -u nvidia-vgpu-mgr
Search for the specific log errors and try to fix them accordingly.
Next, configure the vGPU parameters, that is, the configuration file of the vgpu_unlock-rs installed earlier, located at /etc/vgpu_unlock/profile_override.toml. The contents:
➜ ~ cat /etc/vgpu_unlock/profile_override.toml
[profile.nvidia-55]
num_displays = 1
display_width = 1920
display_height = 1080
max_pixels = 2073600
cuda_enabled = 1
frl_enabled = 0
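For reference, the max_pixels value in the profile above is simply the product of the display dimensions, here a single 1920x1080 head as configured:

```shell
# max_pixels must equal display_width * display_height for the profile.
width=1920
height=1080
echo $((width * height))   # 2073600, the max_pixels value above
```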
Note: do not set the framebuffer, pci_id, or pci_device_id options. You are unlikely to guess correct values for these three, and wrong values will keep the VM from starting after a virtual GPU is attached, with errors such as:

Input/output error Verify all devices in group 29 are bound to vfio-<bus> or pci-stub and not already in use

Note: I picked the nvidia-55 profile here. It has 2 GB of framebuffer, so my 6 GB card can provide three instances of it. You can pick a different profile; see the mdevctl types output above for each profile's parameters.

For the full set of options, see the vgpu_unlock-rs project page.
Now you can create the virtual GPU in the PVE web UI: for the device, pick the GTX 1060, and for the MDev type pick the nvidia-55 profile configured in vgpu_unlock/profile_override.toml above.
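The same thing can be done from the PVE shell with qm set. A sketch assuming VM id 100 and the GPU at 0000:03:00.0 (both are examples; substitute your own VM id and PCI address):

```shell
# Attach an mdev vGPU of type nvidia-55 to VM 100 as its hostpci0 device.
qm set 100 -hostpci0 0000:03:00.0,mdev=nvidia-55
```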
With the vGPU added, boot the VM. In Windows, Device Manager will show an unknown display device. Install the driver: download it from here and pick the native grid_win10 driver, or fetch a GRID driver from NVIDIA's site. Reportedly the driver version inside the guest must not be newer than the host driver version, though I have not verified that. After installation it looks like this:
Next, let's add a vGPU to our ubuntu-20.04 VM as well. The steps are the same; here is the result:
root@ubuntu-gpu:~# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Device 1234:1111 (rev 02)
00:03.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon
00:05.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
00:10.0 VGA compatible controller: NVIDIA Corporation GP102GL [Tesla P40] (rev a1)
00:12.0 Ethernet controller: Red Hat, Inc. Virtio network device
00:1e.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
00:1f.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
01:01.0 SCSI storage controller: Red Hat, Inc. Virtio SCSI
Our NVIDIA card is visible, which means the virtual GPU was attached correctly. Inside the guest, download and install the GRID driver from NVIDIA's site yourself.
Unfortunately, although the GPU virtualization itself works, you need to buy an NVIDIA license to actually use it inside the VMs. You have to hand it to NVIDIA's market segmentation... 😅