Installing DPDK and OVS in an Ubuntu 16.04.2 VM under VMware Fusion on OS X

Why this article exists:

I wanted to try vhost-user, the highest-performance network I/O option for guests on OVS with DPDK, but I had no spare physical machine, and debugging on physical hardware is less convenient than in a VM anyway. So I spent some time working out how to get DPDK vhost-user running entirely inside a virtual machine. This article covers installing KVM, building OVS, building DPDK, booting a nested VM with KVM from inside the outer VM, OVS flow tables, and so on. It is practice-oriented: getting things running first is the foundation for deeper understanding. Because environments differ, you may hit problems not covered here; please resolve them with Google.

Notes:
1. So that the VM itself supports hardware virtualization, I installed Ubuntu 16.04 under Fusion 8 (downloading Fusion requires registration, which is honestly a hassle) and checked the "Intel VT-x/EPT" option in the VM's settings.
2. My laptop is a 2016 MacBook Pro; with it, no extra setup is needed for hardware virtualization inside the VM. Don't try this with VirtualBox: VMs created with it do not support KVM.

The installation and debugging platform

Ubuntu 16.04.2 installed under VMware Fusion 8 on OS X, with DPDK and OVS installed inside that Ubuntu guest:

root@qinlong:~/dpdk-16.11# lsb_release  -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.2 LTS
Release: 16.04
Codename: xenial

Give the VM four virtual NICs in Fusion's settings:
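The Fusion screenshot isn't reproduced here, but the four emulated NICs can be confirmed from inside the guest; the vendor:device ID below matches the e1000 devices that show up in the EAL probe log later in this article.

# list the Ethernet devices VMware Fusion presents to the guest
lspci -nn | grep -i ethernet
# expect four Intel 82545EM [8086:100f] entries at 02:01.0 through 02:04.0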

Installing KVM and booting a VM

Does the host support hardware virtualization?

egrep -c '(svm|vmx)' /proc/cpuinfo

If the result is greater than 0, the host supports hardware virtualization.

Install KVM and its dependencies

apt-get install kvm qemu-kvm libvirt-bin virtinst bridge-utils

Once installation finishes, try booting a prepared disk image that already contains an operating system.

/root/ubuntu-16.04-root-1.img is the disk image with the OS on it.

qemu-system-x86_64 -m 1024 -smp 2 -cpu host -hda /root/ubuntu-16.04-root-1.img -boot c -enable-kvm -no-reboot -net none -nographic -vnc :0
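With -vnc :0, QEMU serves the nested guest's console on TCP port 5900 of the Ubuntu VM, so it can be watched from the Mac with any VNC client (the address is a placeholder for the Ubuntu VM's IP):

# on the Mac: attach to the nested guest's console (display :0 = port 5900)
open vnc://<ubuntu-vm-ip>:5900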

Building and installing DPDK

Install the prerequisite packages

sudo apt-get install m4 bison flex

wget http://dpdk.org/browse/dpdk/snapshot/dpdk-16.11.tar.gz

tar -zxvf dpdk-16.11.tar.gz
cd dpdk-16.11/

Patch one spot in the code, otherwise you will hit this error later:

EAL: Error reading from file descriptor 23: Input/output error

vim lib/librte_eal/linuxapp/igb_uio/igb_uio.c
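The post doesn't show the diff itself. The workaround that circulates for this I/O error, reportedly triggered because the emulated e1000 NIC doesn't support INTx masking, is to force the legacy interrupt path in igbuio_pci_probe(); take this as an assumption about the intended change, not the author's exact patch:

/* igb_uio.c, igbuio_pci_probe(): force the legacy INTX path even though
 * the emulated NIC reports no INTx mask support (assumed edit) */
case RTE_INTR_MODE_LEGACY:
        if (pci_intx_mask_supported(dev) || true) { /* was: if (pci_intx_mask_supported(dev)) { */
                dev_dbg(&dev->dev, "using INTX");
                udev->info.irq_flags = IRQF_SHARED;
                udev->info.irq = dev->irq;
                udev->mode = RTE_INTR_MODE_LEGACY;
                break;
        }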

Build and install

mkdir -p /usr/src/dpdk
make config T=x86_64-native-linuxapp-gcc
make install T=x86_64-native-linuxapp-gcc DESTDIR=/usr/src/dpdk
make install T=x86_64-native-linuxapp-gcc DESTDIR=/usr

Note: the /usr/src/dpdk and /usr install paths are referenced later when building OVS.

Building and installing OVS

Download

wget http://openvswitch.org/releases/openvswitch-2.7.0.tar.gz

Build

tar -zxvf openvswitch-2.7.0.tar.gz
cd openvswitch-2.7.0/
./boot.sh
./configure \
--with-dpdk=/usr/src/dpdk \
--prefix=/usr \
--exec-prefix=/usr \
--sysconfdir=/etc \
--localstatedir=/var
make
make install
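As a sanity check that the installed binaries are the ones on PATH and were linked against DPDK (the version lines shown are what OVS 2.7 normally prints; treat the exact output as illustrative):

ovs-vswitchd --version
# ovs-vswitchd (Open vSwitch) 2.7.0
# DPDK 16.11.0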

Running OVS

Setup

Step 1:

Edit /etc/default/grub

Add iommu=pt intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT (merge with any options already there):

GRUB_CMDLINE_LINUX_DEFAULT="iommu=pt intel_iommu=on"

Step 2:

update-grub

Step 3: reboot.

Step 4: check that the change took effect:

root@qinlong:~/openvswitch-2.7.0# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-4.4.0-62-generic root=/dev/mapper/qinlong--vg-root ro iommu=pt intel_iommu=on

Binding NICs

Load the uio and igb_uio drivers

modprobe uio
insmod dpdk-16.11/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko

Check that the modules loaded successfully:

root@qinlong:~# lsmod |grep uio
igb_uio 16384 0
uio 20480 1 igb_uio

Check the current NIC status.
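The status listing itself is missing from the post. DPDK 16.11 ships tools/dpdk-devbind.py for both inspecting and binding NICs; the PCI address below is one of the four e1000 devices from the EAL log further down, used as an example. Note that the vhost-user test at the end of this article works without binding any physical NIC.

# from the dpdk-16.11 source tree: show which driver each NIC is using
./tools/dpdk-devbind.py --status
# to bind one NIC to igb_uio, bring it down first (interface name is an
# assumption; never bind the NIC you are logged in over)
ifconfig ens34 down
./tools/dpdk-devbind.py --bind=igb_uio 0000:02:02.0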

Configuring hugepages

Reserve 4 GB of memory as hugepages

echo 2048 > /proc/sys/vm/nr_hugepages
echo 'vm.nr_hugepages=2048' > /etc/sysctl.d/hugepages.conf

Note: 2048 × 2 MB = 4 GB. Make sure the system actually has that much memory, or the hugepage allocation will fail; my VM has 8 GB of RAM.

Check the current hugepage pool

root@qinlong:~/dpdk-16.11# grep HugePages_ /proc/meminfo
HugePages_Total: 2048
HugePages_Free: 2048
HugePages_Rsvd: 0
HugePages_Surp: 0

Mount hugetlbfs

mount -t hugetlbfs none /dev/hugepages
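To keep the mount across reboots, a standard fstab entry can be added (the original post doesn't do this):

echo 'nodev /dev/hugepages hugetlbfs defaults 0 0' >> /etc/fstab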

Starting the OVS processes

root@qinlong:~/dpdk-16.11# mkdir -p /etc/openvswitch
root@qinlong:~/dpdk-16.11# mkdir -p /var/run/openvswitch
root@qinlong:~/dpdk-16.11# ovsdb-server /etc/openvswitch/conf.db \
-vconsole:emer -vsyslog:err -vfile:info \
--remote=punix:/var/run/openvswitch/db.sock \
--private-key=db:Open_vSwitch,SSL,private_key \
--certificate=db:Open_vSwitch,SSL,certificate \
--bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --no-chdir \
--log-file=/var/log/openvswitch/ovsdb-server.log \
--pidfile=/var/run/openvswitch/ovsdb-server.pid \
--detach --monitor
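The transcript above assumes two things already exist: the log directory and the database file. On a truly fresh install, create both before starting ovsdb-server; the schema path follows from the --prefix=/usr used at configure time:

mkdir -p /var/log/openvswitch
ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema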

On the first run, initialize the database:

ovs-vsctl --no-wait init

Initializing DPDK

Enable DPDK in OVS:

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true

Set custom DPDK parameters (here, 1024 MB of hugepage memory on NUMA socket 0 and none on socket 1):

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024,0"

Pin the DPDK PMD threads to specific cores (mask 0x03 = CPU cores 0 and 1):

ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x03

Review and confirm the DPDK settings

root@qinlong:~/dpdk-16.11# ovs-vsctl get Open_vSwitch . other_config:dpdk-socket-mem
"1024,0"
root@qinlong:~/dpdk-16.11# ovs-vsctl get Open_vSwitch . other_config:pmd-cpu-mask
"0x03"
root@qinlong:~/dpdk-16.11# ovs-vsctl get Open_vSwitch . other_config:dpdk-init
"true"

Start the vswitchd process

ovs-vswitchd unix:/var/run/openvswitch/db.sock \
-vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir \
--log-file=/var/log/openvswitch/ovs-vswitchd.log \
--pidfile=/var/run/openvswitch/ovs-vswitchd.pid \
--detach --monitor

The startup log is recorded below:

root@qinlong:~/dpdk-16.11# ovs-vswitchd unix:/var/run/openvswitch/db.sock \
> -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir \
> --log-file=/var/log/openvswitch/ovs-vswitchd.log \
> --pidfile=/var/run/openvswitch/ovs-vswitchd.pid \
> --detach --monitor
EAL: Detected 8 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:02:01.0 on NUMA socket -1
EAL: probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:02.0 on NUMA socket -1
EAL: probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:03.0 on NUMA socket -1
EAL: probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:04.0 on NUMA socket -1
EAL: probe driver: 8086:100f net_e1000_em
VHOST_CONFIG: vhost-user server: socket created, fd: 35
VHOST_CONFIG: bind to /var/run/openvswitch/vhost-user2
VHOST_CONFIG: vhost-user server: socket created, fd: 45
VHOST_CONFIG: bind to /var/run/openvswitch/vhost-user1

Once it is up, the ovs-vswitchd process sits at 200% CPU: the two PMD threads selected by pmd-cpu-mask=0x03 busy-poll their cores at full speed.
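Two quick ways to see where those cores are going (standard tooling for OVS 2.7 with the netdev datapath):

# each PMD thread pins one of the cores from pmd-cpu-mask=0x03 at ~100%
top -H -p $(pidof ovs-vswitchd)
# per-PMD cycle and packet counters
ovs-appctl dpif-netdev/pmd-stats-show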

Running vhost-user on OVS and connecting two VMs


Create bridge br0 and one dpdkvhostuser port for each of the two guests:

sudo ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
sudo ovs-vsctl add-port br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
sudo ovs-vsctl add-port br0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser
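The intro mentioned OVS flow tables, but nothing special is needed here: a newly created bridge carries a single default flow with actions=NORMAL, i.e. ordinary MAC-learning forwarding, which is exactly what the two-VM ping test below relies on. It can be inspected, and restored if ever cleared, with ovs-ofctl:

# a fresh bridge forwards via one default flow: actions=NORMAL
ovs-ofctl dump-flows br0
# restore the default behaviour if the table was cleared
ovs-ofctl add-flow br0 actions=NORMAL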

Check whether the bridge was created successfully:

root@qinlong:~# ovs-vsctl show
280e45c6-9143-4aad-ac4b-2c2305a96d0f
    Bridge "br0"
        Port "vhost-user2"
            Interface "vhost-user2"
                type: dpdkvhostuser
        Port "vhost-user1"
            Interface "vhost-user1"
                type: dpdkvhostuser
        Port "br0"
            Interface "br0"
                type: internal

Start VM 1:

Pick a VNC display number and attach the vhost-user NIC, then boot; ubuntu-16.04-root-1.img is a disk image with Ubuntu installed.

qemu-system-x86_64 -m 1024 -smp 4 -cpu host -hda ~/ubuntu-16.04-root-1.img -boot c -enable-kvm -no-reboot -nographic -net none -vnc :0 \
-chardev socket,id=char1,path=/var/run/openvswitch/vhost-user1 \
-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
-device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 \
-object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem -mem-prealloc

Start VM 2:

Same as VM 1, but with a different VNC display, vhost-user socket, and MAC address; ubuntu-16.04-root-2.img is another disk image with Ubuntu installed.

qemu-system-x86_64 -m 1024 -smp 4 -cpu host -hda /root/ubuntu-16.04-root-2.img -boot c -enable-kvm -no-reboot -nographic -net none -vnc :1 \
-chardev socket,id=char2,path=/var/run/openvswitch/vhost-user2 \
-netdev type=vhost-user,id=mynet2,chardev=char2,vhostforce \
-device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2 \
-object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem -mem-prealloc

Connectivity test between VM 1 and VM 2

Log in to each guest over VNC, give the two of them addresses on the same subnet, and they can reach each other.
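For example (the interface name inside the guests is an assumption; check it with ip link):

# inside VM 1
ip addr add 192.168.100.1/24 dev ens3
ip link set ens3 up
# inside VM 2
ip addr add 192.168.100.2/24 dev ens3
ip link set ens3 up
# then, from VM 1
ping 192.168.100.2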