TCP performance for OpenVPN vs delay

LovingFox

New Member
Joined
Apr 23, 2019
Messages
1
Reaction score
0
Credits
0
I have run into an issue with TCP performance that depends on delay.

My lab schema:
CPE --- (p1) Router (p2) --- Server

The CPE connects to the Server via OpenVPN over TCP. I know about the TCP-over-TCP problem, but the TCP protocol is mandatory in my case. I then run a series of network benchmarks through the OpenVPN tunnel.
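For reference, a minimal sketch of the client side of such a setup; the remote address, port, and certificate paths below are placeholders, not my actual config:

  # Hypothetical OpenVPN client config illustrating the mandatory TCP transport
  client
  dev tun
  proto tcp-client            # TCP is the hard requirement here
  remote vpn.example.com 1194 # placeholder server address and port
  ca ca.crt                   # placeholder certificate paths
  cert client.crt
  key client.key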

Series:
  1. WITHOUT artificial delay.
  2. WITH an artificial delay of 30 ms RTT, added on the Router's p1 and p2 interfaces (15 ms each).
  3. WITH an artificial delay of 60 ms RTT (30 ms per interface).
The series results:
  1. "to Server": 40 Mbps, "from Server": 50 Mbps
  2. "to Server": 34 Mbps, "from Server": 38 Mbps
  3. "to Server": 20 Mbps, "from Server": 10 Mbps
The rate limit in test 1 is due to the CPU on the CPE; that's expected. But why does the 60 ms delay in test 3 cut "from Server" performance roughly 5 times, while "to Server" drops only about 2 times?
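For anyone who wants to dig into the numbers, the per-connection TCP state on the CPE could be sampled during a run like this (10.8.0.1 stands in for the OpenVPN server's tunnel IP):

  # Sample cwnd, rtt and retransmit counters of the connections to the server
  # once per second while iperf3 is running
  watch -n 1 'ss -ti dst 10.8.0.1'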

P.S. The CPE is OpenWRT on a Banana Pi R2 board. The Server is an Ubuntu server, and the Router is an Ubuntu server too. All devices are connected via Ethernet. The CPE interface is 100M; the others are 1G.

TCP congestion control on both the CPE and the Server is BBR:

  • net.ipv4.tcp_congestion_control = bbr
  • net.core.default_qdisc = fq
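A quick way to confirm these settings are actually in effect on both machines (plain sysctl reads, nothing specific to this lab):

  # Check the active congestion control and default qdisc
  sysctl net.ipv4.tcp_congestion_control
  sysctl net.core.default_qdisc
  # List the congestion control algorithms available in the kernel
  cat /proc/sys/net/ipv4/tcp_available_congestion_control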
Benchmark commands on the CPE:

  • "to Server": iperf3 -c ovpn_server_ip
  • "from Server": iperf3 -c ovpn_server_ip -R
Commands for the artificial delay on the Router:

test 2:

  • p1: sudo tc qdisc add dev ens224 root handle 1: netem delay 15ms
  • p2: sudo tc qdisc add dev ens256 root handle 1: netem delay 15ms
test 3:

  • p1: sudo tc qdisc add dev ens224 root handle 1: netem delay 30ms
  • p2: sudo tc qdisc add dev ens256 root handle 1: netem delay 30ms
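For completeness, the standard tc commands to inspect and later remove the netem qdiscs on the Router:

  # Show what is currently attached to each port
  sudo tc qdisc show dev ens224
  sudo tc qdisc show dev ens256
  # Remove the artificial delay after a test series
  sudo tc qdisc del dev ens224 root
  sudo tc qdisc del dev ens256 root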
 
