Steady-state RTT Fairness Experiment

This experiment evaluates how the two-way propagation delay influences the steady-state operating point of a congestion control algorithm (CCA). The sending rates of all flows are controlled by the same CCA, i.e., it is an intra-protocol competition scenario.

The CCA should be efficient (high bandwidth utilization) and reasonably fair despite the differing two-way propagation delays of the flows. A larger two-way propagation delay lengthens the feedback cycle of a CCA, i.e., the time between a packet transmission and the reception of its acknowledgement. Typically, when flows compete for bandwidth, flows with larger two-way propagation delays obtain a smaller share of it. The experiment tests whether that is the case for a CCA, i.e., it evaluates RTT fairness.
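For loss-based CCAs such as NewReno, the classical square-root throughput model predicts this RTT bias directly: at the same loss probability, the steady-state rate is inversely proportional to the RTT. The following sketch is illustrative only (the segment size and loss probability are assumed, not taken from the experiment):

```python
# Illustrative sketch of the classical square-root throughput model for a
# loss-based CCA: rate ≈ (MSS / RTT) * sqrt(3 / (2 p)).
# With equal loss probability p for both flows, the rate ratio reduces to
# the inverse ratio of their RTTs.
from math import sqrt

def model_rate(mss_bytes, rtt_s, loss_prob):
    """Predicted steady-state sending rate in bytes per second."""
    return (mss_bytes / rtt_s) * sqrt(3.0 / (2.0 * loss_prob))

mss, p = 1448, 0.01                    # assumed segment size and loss probability
r_short = model_rate(mss, 0.020, p)    # flow with a 20 ms propagation delay
r_long = model_rate(mss, 0.080, p)     # flow with an 80 ms propagation delay
print(r_short / r_long)                # ratio ≈ 4: the short-RTT flow dominates
```

Under this model a fourfold RTT difference yields a fourfold bandwidth imbalance, which is the effect the experiment quantifies.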

Scenario

Multiple flows are set up to compete against each other in a static dumbbell network. Greedy source traffic ensures that the flows are network-limited. The flows start simultaneously but have different two-way propagation delays. The number of flows is set with the experiment parameter k, and the two-way propagation delays with the parameter rtts.

To summarize the setup:

  • Topology: Dumbbell topology with static network parameters defined by the path parameter

  • Flows: Multiple flows (\(K>1\)) with different two-way propagation delays using the same CCA (intra-protocol competition)

  • Traffic Generation Model: Greedy source traffic

Experiment Results

Experiment #3

Parameters

Command: ns3-dev-ccperf-static-dumbbell-default --experiment-name=steady_state_rtt_fairness --db-path=benchmark_TcpNewReno.db '--parameters={aut:TcpNewReno,k:10,path:static.default,rtts:[19ms,33ms,41ms,49ms,81ms,24ms,55ms,62ms,10ms,64ms]}' --aut=TcpNewReno --stop-time=15s --seed=42 --rtts=19ms,33ms,41ms,49ms,81ms,24ms,55ms,62ms,10ms,64ms --bw=160Mbps --loss=0.0 --qlen=200p --qdisc=FifoQueueDisc --sources=src_0,src_1,src_2,src_3,src_4,src_5,src_6,src_7,src_8,src_9 --destinations=dst_0,dst_1,dst_2,dst_3,dst_4,dst_5,dst_6,dst_7,dst_8,dst_9 --protocols=TCP,TCP,TCP,TCP,TCP,TCP,TCP,TCP,TCP,TCP --algs=TcpNewReno,TcpNewReno,TcpNewReno,TcpNewReno,TcpNewReno,TcpNewReno,TcpNewReno,TcpNewReno,TcpNewReno,TcpNewReno --recoveries=TcpPrrRecovery,TcpPrrRecovery,TcpPrrRecovery,TcpPrrRecovery,TcpPrrRecovery,TcpPrrRecovery,TcpPrrRecovery,TcpPrrRecovery,TcpPrrRecovery,TcpPrrRecovery --start-times=0s,0s,0s,0s,0s,0s,0s,0s,0s,0s --stop-times=15s,15s,15s,15s,15s,15s,15s,15s,15s,15s '--traffic-models=Greedy(bytes=0),Greedy(bytes=0),Greedy(bytes=0),Greedy(bytes=0),Greedy(bytes=0),Greedy(bytes=0),Greedy(bytes=0),Greedy(bytes=0),Greedy(bytes=0),Greedy(bytes=0)'

Flows

src dst transport_protocol cca cc_recovery_alg traffic_model start_time stop_time
src_0 dst_0 TCP TcpNewReno TcpPrrRecovery Greedy(bytes=0) 0.00 15.00
src_1 dst_1 TCP TcpNewReno TcpPrrRecovery Greedy(bytes=0) 0.00 15.00
src_2 dst_2 TCP TcpNewReno TcpPrrRecovery Greedy(bytes=0) 0.00 15.00
src_3 dst_3 TCP TcpNewReno TcpPrrRecovery Greedy(bytes=0) 0.00 15.00
src_4 dst_4 TCP TcpNewReno TcpPrrRecovery Greedy(bytes=0) 0.00 15.00
src_5 dst_5 TCP TcpNewReno TcpPrrRecovery Greedy(bytes=0) 0.00 15.00
src_6 dst_6 TCP TcpNewReno TcpPrrRecovery Greedy(bytes=0) 0.00 15.00
src_7 dst_7 TCP TcpNewReno TcpPrrRecovery Greedy(bytes=0) 0.00 15.00
src_8 dst_8 TCP TcpNewReno TcpPrrRecovery Greedy(bytes=0) 0.00 15.00
src_9 dst_9 TCP TcpNewReno TcpPrrRecovery Greedy(bytes=0) 0.00 15.00

Metrics

The following tables list the flow, link, and network metrics of experiment #3. Refer to the metrics page for definitions of the listed metrics.

Flow Metrics

Flow metrics capture the performance of an individual flow. They are measured at the endpoints of a network path, at the source, the receiver, or both. Bold values indicate which flow achieved the best performance.

Metric flow_1 flow_2 flow_3 flow_4 flow_5 flow_6 flow_7 flow_8 flow_9 flow_10
cov_in_flight_l4 0.43 0.43 0.60 0.41 1.19 0.38 0.70 0.41 0.66 0.49
cov_throughput_l4 0.40 0.49 0.63 0.39 1.22 0.41 0.65 0.41 0.33 0.55
flow_completion_time_l4 15.00 15.00 15.00 15.00 15.00 15.00 15.00 15.00 15.00 15.00
mean_cwnd_l4 78.02 35.87 131.07 35.41 65.01 30.86 45.37 33.31 66.10 28.47
mean_delivery_rate_l4 31.62 9.13 29.31 6.85 8.32 9.97 8.08 5.31 38.80 4.22
mean_est_qdelay_l4 9.04 8.77 8.64 9.30 9.70 9.53 9.36 9.31 9.16 8.91
mean_idt_ewma_l4 0.44 1.58 1.13 1.87 2.71 1.76 1.80 2.32 0.33 3.51
mean_in_flight_l4 77.56 35.49 130.63 34.98 64.65 30.38 44.95 32.86 65.62 28.05
mean_network_power_l4 1155.36 215.54 585.01 117.44 92.61 300.27 127.04 74.42 2142.42 57.38
mean_one_way_delay_l7 1052.65 2952.87 1355.87 3903.68 4927.37 2544.72 3567.29 4412.50 828.40 5326.20
mean_recovery_time_l4 33.72 43.55 nan 65.89 100.18 99.85 75.32 79.70 26.89 68.29
mean_sending_rate_l4 31.70 9.22 29.53 6.93 8.42 10.03 8.12 5.34 39.09 4.29
mean_sending_rate_l7 33.76 11.27 31.44 8.99 10.45 12.11 10.22 7.45 40.94 6.36
mean_srtt_l4 28.04 41.77 49.64 58.30 90.70 33.53 64.36 71.31 19.16 72.91
mean_throughput_l4 31.63 9.15 29.39 6.87 8.33 9.97 8.09 5.32 38.81 4.24
mean_throughput_l7 31.63 9.15 29.39 6.87 8.33 9.97 8.09 5.32 38.81 4.24
mean_utility_mpdf_l4 -0.04 -0.14 -0.08 -0.17 -0.25 -0.12 -0.17 -0.22 -0.03 -0.34
mean_utility_pf_l4 3.36 2.18 3.19 1.84 1.67 2.29 1.94 1.59 3.62 1.38
mean_utilization_bdp_l4 0.32 0.08 0.25 0.06 0.06 0.10 0.06 0.04 0.51 0.03
mean_utilization_bw_l4 0.20 0.06 0.18 0.04 0.05 0.06 0.05 0.03 0.24 0.03
total_retransmissions_l4 57.00 32.00 19.00 34.00 74.00 70.00 32.00 20.00 333.00 15.00
total_rtos_l4 0.00 1.00 1.00 0.00 0.00 1.00 0.00 0.00 0.00 1.00
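As a sanity check on the table above, the mean sRTT should decompose into the configured two-way propagation delay plus the estimated queuing delay. The following hypothetical post-processing snippet (not part of the experiment tooling) verifies this against the rtts parameter:

```python
# Mean sRTT and estimated queuing delay (ms) for flows 1..10, copied from
# the flow metrics table above.
mean_srtt = [28.04, 41.77, 49.64, 58.30, 90.70, 33.53, 64.36, 71.31, 19.16, 72.91]
mean_est_qdelay = [9.04, 8.77, 8.64, 9.30, 9.70, 9.53, 9.36, 9.31, 9.16, 8.91]
# Configured two-way propagation delays (ms) from the rtts parameter.
rtts = [19, 33, 41, 49, 81, 24, 55, 62, 10, 64]

for srtt, qdelay, base in zip(mean_srtt, mean_est_qdelay, rtts):
    # sRTT ≈ propagation delay + queuing delay, to within rounding.
    assert abs((srtt - qdelay) - base) < 0.5
```

For every flow the difference matches the configured delay to within rounding, i.e., all flows see a similar standing queue (roughly 9 ms) at the bottleneck.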

Network Metrics

Network metrics assess the network as a whole by aggregating other metrics, e.g., the aggregated throughput of all flows. Hence, the network metrics table has only one column, named net.

Metric net
mean_agg_in_flight_l4 545.18
mean_agg_throughput_l4 151.80
mean_agg_utility_mpdf_l4 -1.57
mean_agg_utility_pf_l4 23.06
mean_agg_utilization_bdp_l4 1.52
mean_agg_utilization_bw_l4 0.95
mean_entropy_fairness_throughput_l4 2.30
mean_jains_fairness_throughput_l4 0.61
mean_product_fairness_throughput_l4 34395244203.14
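The aggregate and fairness values can be cross-checked against the per-flow means. The sketch below (a hypothetical post-processing snippet, not part of the experiment tooling) recomputes Jain's fairness index from the reported mean throughputs; exact agreement with the time-averaged metric is not guaranteed in general, but here it matches to two decimals:

```python
# Per-flow mean throughput (Mbit/s) for flows 1..10, copied from the flow
# metrics table above (mean_throughput_l4).
throughputs = [31.63, 9.15, 29.39, 6.87, 8.33, 9.97, 8.09, 5.32, 38.81, 4.24]

agg = sum(throughputs)  # aggregated throughput of all flows
# Jain's fairness index: (sum x)^2 / (n * sum x^2); 1.0 means a perfectly
# equal allocation, 1/n means one flow takes everything.
jain = agg ** 2 / (len(throughputs) * sum(x * x for x in throughputs))

print(round(agg, 2))   # 151.8  (matches mean_agg_throughput_l4)
print(round(jain, 2))  # 0.61   (matches mean_jains_fairness_throughput_l4)
```

An index of 0.61 for ten flows confirms a substantial RTT-induced imbalance: the short-RTT flows (e.g., flows 1, 3, and 9) capture most of the 160 Mbps bottleneck.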

Figures

The following figures show the results of experiment #3.

Time Series Plot of the Operating Point

Time series plot of the number of segments in flight, the smoothed round-trip time (sRTT), and the throughput at the transport layer.

Distribution of the Operating Point

The empirical cumulative distribution function (eCDF) of the throughput and smoothed round-trip time (sRTT) at the transport layer of each flow.

In Flight vs Mean Operating Point

The mean throughput and mean smoothed round-trip time (sRTT) at the transport layer of each flow. The optimal operating point is highlighted with a magenta star. The joint operating point is given by the aggregated throughput and the mean sRTT over all flows.

Mean Operating Point Plane

The mean throughput and mean smoothed round-trip time (sRTT) at the transport layer of each flow.

Comparison of Congestion Control Algorithms (CCAs)

Figures