OVS egress rate limiting

netperf throughput test method

Step 1: run netserver in the ns4 namespace (e.g. ip netns exec ns4 netserver)

Starting netserver with host 'IN(6)ADDR_ANY' port '12865' and family AF_UNSPEC

Step 2: in the ns1 namespace, run netperf -H 1.1.1.4 -t UDP_STREAM

root@compute:~# netperf -H 1.1.1.4 -t UDP_STREAM
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 1.1.1.4 (1.1.1.4) port 0 AF_INET : demo
Socket Message Elapsed Messages
Size Size Time Okay Errors Throughput
bytes bytes secs # # 10^6bits/sec
212992 65507 10.01 795007 0 41640.99
212992 10.01 794006 41588.56

Measured throughput: 41.58856 Gbps
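
The reported rate can be roughly reproduced from the receive-side counters: messages received, times message size, times 8 bits, divided by elapsed time. A minimal sketch (the function name is illustrative, not from netperf):

```python
def udp_stream_mbps(messages_ok, msg_size_bytes, elapsed_s):
    # Throughput in 10^6 bits/sec, the unit netperf reports
    return messages_ok * msg_size_bytes * 8 / elapsed_s / 1e6

# Receive side of the run above: 794006 messages of 65507 bytes in ~10.01 s
rate = udp_stream_mbps(794006, 65507, 10.01)
print(round(rate, 2))  # ~41568.8; the reported 41588.56 differs slightly
                       # because netperf uses the unrounded elapsed time
```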

OVS rate-limiting procedure

Adding a rate-limiting policy


Step 1: create a QoS record

ovs-vsctl --timeout=10 -- set Port firstbr qos=@newqos -- --id=@newqos create QoS type=linux-htb  other-config:max-rate=60000000000

Note: this command limits the rate to 60 G. Even if the command is executed repeatedly, only the last invocation takes effect.

Additional note: the QoS has a default maximum-throughput ceiling. If the configured max-rate exceeds that ceiling, the ceiling itself is used as the limit; only when the configured max-rate stays below the ceiling does max-rate take effect.

Example

root@compute:~# tc -s -d class show dev firstbr
class htb 1:1 parent 1:fffe prio 0 quantum 1500 rate 12Kbit ceil 25640Mbit linklayer ethernet burst 1563b/1 mpu 0b overhead 0b cburst 0b/1 mpu 0b overhead 0b level 0
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
lended: 0 borrowed: 0 giants: 0
tokens: 16291666 ctokens: 7

Although the limit was set to 60 G, that rate cannot actually take effect: the QoS falls back to the default maximum rate of 25640 Mbit.
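
The fallback behavior amounts to clamping the configured rate against the ceiling. A sketch, assuming the 25640 Mbit value observed on this host (the names are illustrative):

```python
DEFAULT_CEIL_BPS = 25_640_000_000  # default htb ceiling seen in the tc dump above

def effective_max_rate(configured_bps, ceil_bps=DEFAULT_CEIL_BPS):
    # A max-rate above the ceiling is clamped to the ceiling;
    # below it, the configured value applies as-is.
    return min(configured_bps, ceil_bps)

print(effective_max_rate(60_000_000_000))  # 25640000000: the 60G request is clamped
print(effective_max_rate(10_000_000_000))  # 10000000000: 10G applies as configured
```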

Further note: if other-config:max-rate is not set, the QoS throughput is taken from the interface speed.

Example

root@compute:~# ethtool firstbr
Settings for firstbr:
Supported ports: [ ]
Supported link modes: Not reported
Supported pause frame use: No
Supports auto-negotiation: No
Advertised link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Speed: 10000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
MDI-X: Unknown
Link detected: yes

Since max-rate is not configured here, the QoS limit follows the interface speed of 10000Mb/s.


Step 2: create a queue limited to 10 G

ovs-vsctl --timeout=10 create Queue other-config:max-rate=10000000000
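
Note that other-config:max-rate is expressed in bits per second, so 10000000000 corresponds to 10 Gbit/s. A tiny helper (hypothetical name) makes the conversion explicit:

```python
def gbps_to_max_rate(gbps):
    # other-config:max-rate takes a value in bits per second
    return gbps * 1_000_000_000

print(gbps_to_max_rate(10))  # 10000000000, the value used in this step
print(gbps_to_max_rate(60))  # 60000000000, the value used in Step 1
```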

Step 3: bind the queue to a specific QoS

ovs-vsctl --timeout=10 add qos 7cf6a845-ce22-4aa4-886f-1ad76e2914bc queues 0=826c230d-3f28-4ab6-b0d1-794f7e2a0602

Note: 7cf6a845-ce22-4aa4-886f-1ad76e2914bc is the QoS id and 826c230d-3f28-4ab6-b0d1-794f7e2a0602 is the Queue id.

The queue number 0 has a special meaning: packets that are not explicitly assigned to a queue use queue 0 by default, so if a rate limit exists on queue 0, all traffic leaving through firstbr is capped at 10 G.
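The queue-0 fallback described above can be sketched as follows (illustrative pseudologic, not OVS source):

```python
def select_queue(set_queue_id=None):
    # Packets with no explicit set_queue action land on queue 0,
    # so a limit on queue 0 applies to all unclassified traffic.
    return set_queue_id if set_queue_id is not None else 0

print(select_queue())   # 0: unclassified traffic
print(select_queue(1))  # 1: traffic matched by a set_queue:1 flow
```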

Step 4: steer traffic to the queue via the flow table (concrete flows are shown in the experiment section below)

Removing rate-limiting policies

Clear the QoS from a port

ovs-vsctl clear port firstbr qos

Destroy all QoS and Queue records

ovs-vsctl -- --all destroy QoS -- --all destroy Queue

Destroy the QoS record 6e8b837c-2386-410e-a430-1ee319f25b01

ovs-vsctl -- destroy Qos 6e8b837c-2386-410e-a430-1ee319f25b01

If the QoS is still applied to a port, this command fails with the following error:

root@compute:~# ovs-vsctl -- destroy Qos 6e8b837c-2386-410e-a430-1ee319f25b01
ovs-vsctl: transaction error: {"details":"cannot delete QoS row 6e8b837c-2386-410e-a430-1ee319f25b01 because of 1 remaining reference(s)","error":"referential integrity violation"}

To resolve the error, first clear the QoS from the port.

Experiment notes

Topology

(topology diagram: y.png)

Topology setup script

ip netns add ns1
ip netns add ns2
ip netns add ns3
ip netns add ns4
ovs-vsctl add-br br0
ovs-vsctl add-br br1
ovs-vsctl add-port br0 tap1 -- set Interface tap1 type=internal
ip link set tap1 netns ns1
ip netns exec ns1 ip addr add 1.1.1.1/24 dev tap1
ip netns exec ns1 ip link set tap1 up
ip netns exec ns1 ip link set lo up
ovs-vsctl add-port br0 tap2 -- set Interface tap2 type=internal
ip link set tap2 netns ns2
ip netns exec ns2 ip addr add 1.1.1.2/24 dev tap2
ip netns exec ns2 ip link set tap2 up
ip netns exec ns2 ip link set lo up
ovs-vsctl add-port br0 tap3 -- set Interface tap3 type=internal
ip link set tap3 netns ns3
ip netns exec ns3 ip addr add 1.1.1.3/24 dev tap3
ip netns exec ns3 ip link set tap3 up
ip netns exec ns3 ip link set lo up
ip link add firstbr type veth peer name firstif
ovs-vsctl add-port br0 firstbr
ovs-vsctl add-port br1 firstif
ip link set firstbr up
ip link set firstif up
ovs-vsctl add-port br1 tap4 -- set Interface tap4 type=internal
ip link set tap4 netns ns4
ip netns exec ns4 ip addr add 1.1.1.4/24 dev tap4
ip netns exec ns4 ip link set tap4 up
ip netns exec ns4 ip link set lo up

Adding the QoS policy

  • Create the QoS and queues
ovs-vsctl --timeout=10 -- set Port firstbr qos=@newqos -- --id=@newqos create QoS type=linux-htb  other-config:max-rate=60000000000
ovs-vsctl --timeout=10 create Queue other-config:max-rate=10000000000
ovs-vsctl --timeout=10 create Queue other-config:max-rate=5000000000
  • Bind the queues to the QoS
ovs-vsctl --timeout=10 add qos 418d3c99-073c-4509-b5cd-fa928423f47f queues 1=25831471-552b-4d5a-83b0-179f8f8e8991
ovs-vsctl --timeout=10 add qos 418d3c99-073c-4509-b5cd-fa928423f47f queues 2=70001227-95ae-469f-aa25-c93d6b876650
  • Steer traffic to the specific queues
ovs-ofctl --timeout=5 add-flow br0 hard_timeout=0,idle_timeout=0,priority=50,ip,ip_src=1.1.1.1,actions=set_queue:1,NORMAL
ovs-ofctl --timeout=5 add-flow br0 hard_timeout=0,idle_timeout=0,priority=50,ip,ip_src=1.1.1.2,actions=set_queue:2,NORMAL

Throughput tests

  • ns1 -> ns4 UDP throughput is 9.217 Gbit/s < 10 G (queue 1)
root@compute:~# netperf -H 1.1.1.4 -t UDP_STREAM
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 1.1.1.4 (1.1.1.4) port 0 AF_INET : demo
Socket Message Elapsed Messages
Size Size Time Okay Errors Throughput
bytes bytes secs # # 10^6bits/sec

212992 65507 10.00 570032 0 29871.54
212992 10.00 175900 9217.74
  • ns2 -> ns4 UDP throughput is 4.949 Gbit/s < 5 G (queue 2)
root@compute:~# netperf -H 1.1.1.4 -t UDP_STREAM
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 1.1.1.4 (1.1.1.4) port 0 AF_INET : demo
Socket Message Elapsed Messages
Size Size Time Okay Errors Throughput
bytes bytes secs # # 10^6bits/sec

212992 65507 10.01 1379035 0 72220.42
212992 10.01 94505 4949.25
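
The gap between the send-side and receive-side message counts shows how aggressively the shaper drops UDP traffic. A quick calculation from the counters above (the helper name is illustrative):

```python
def delivered_ratio(sent_ok, received_ok):
    # Fraction of UDP messages that survived the htb shaper
    return received_ok / sent_ok

print(round(delivered_ratio(570032, 175900), 3))  # ns1 -> ns4 through the 10G queue, ~0.31
print(round(delivered_ratio(1379035, 94505), 3))  # ns2 -> ns4 through the 5G queue, ~0.07
```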

Inspecting the policy results

View the flow table on br0

root@compute:~# ovs-ofctl dump-flows br0
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=519.507s, table=0, n_packets=1271232, n_bytes=83320651952, idle_age=238, priority=50,ip,nw_src=1.1.1.1 actions=set_queue:1,NORMAL
cookie=0x0, duration=519.197s, table=0, n_packets=2130362, n_bytes=139571241079, idle_age=247, priority=50,ip,nw_src=1.1.1.2 actions=set_queue:2,NORMAL
cookie=0x0, duration=5773.205s, table=0, n_packets=12914956, n_bytes=736062122789, idle_age=238, priority=0 actions=NORMAL

View the rate-limiting TC classes

root@compute:~# tc -s -d class show dev firstbr
class htb 1:fffe root rate 25640Mbit ceil 25640Mbit linklayer ethernet burst 0b/1 mpu 0b overhead 0b cburst 0b/1 mpu 0b overhead 0b level 7
Sent 17715662461 bytes 340 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
lended: 270756 borrowed: 0 giants: 0
tokens: 6 ctokens: 6

class htb 1:1 parent 1:fffe prio 0 quantum 1500 rate 12Kbit ceil 25640Mbit linklayer ethernet burst 1563b/1 mpu 0b overhead 0b cburst 0b/1 mpu 0b overhead 0b level 0
Sent 168 bytes 4 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
lended: 4 borrowed: 0 giants: 0
tokens: 15417840 ctokens: 6

class htb 1:2 parent 1:fffe prio 0 quantum 1500 rate 12Kbit ceil 10Gbit linklayer ethernet burst 1563b/1 mpu 0b overhead 0b cburst 1250b/1 mpu 0b overhead 0b level 0
Sent 11523732906 bytes 55 pkt (dropped 394118, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
lended: 6 borrowed: 175964 giants: 0
tokens: -525916978 ctokens: 18

class htb 1:3 parent 1:fffe prio 0 quantum 1500 rate 12Kbit ceil 5Gbit linklayer ethernet burst 1563b/1 mpu 0b overhead 0b cburst 1250b/1 mpu 0b overhead 0b level 0
Sent 6191929387 bytes 281 pkt (dropped 1284513, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
lended: 6 borrowed: 94792 giants: 0
tokens: -524514101 ctokens: 37
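
Comparing this tc dump with the queue configuration, OVS Queue N appears as htb class 1:(N+1): queue 1 (10G) is class 1:2 with ceil 10Gbit, queue 2 (5G) is class 1:3 with ceil 5Gbit, and the default queue 0 is class 1:1. A sketch of the mapping as observed here (the function name is illustrative; tc prints the minor id in hexadecimal):

```python
def queue_to_class(queue_id):
    # Observed mapping in the tc dump above: Queue N -> htb class 1:(N+1)
    return "1:%x" % (queue_id + 1)

print(queue_to_class(0))  # 1:1, the default queue
print(queue_to_class(1))  # 1:2, ceil 10Gbit
print(queue_to_class(2))  # 1:3, ceil 5Gbit
```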

Script that prints the queue transmit rate

import os
import time

# Read the "Sent <bytes>" counter of the target htb class on bond1
cmd = "tc -s -d class show dev bond1 | tail -n 6 | grep Sent | awk '{print $2}'"

print("start")

while True:
    result1 = os.popen(cmd).readlines()
    time.sleep(1)
    result2 = os.popen(cmd).readlines()
    # Byte delta over one second, printed in KB/s
    print("queue tx speed:", (int(result2[0]) - int(result1[0])) // 1000)

The script reads the tc class counters on the bond1 port (adjust the tail -n 6 filter to target the desired class) and prints the transmit rate in KB/s.