Reno-Friendliness Experiment

To be deployable, new congestion control algorithms (CCAs) have to be able to compete against established CCAs. For a long time, Reno was the most widely used CCA, and fairness towards Reno was referred to as TCP-friendliness. Nowadays, Reno is hardly used anymore and can be regarded as a legacy CCA. The Reno-friendliness experiment evaluates whether a CCA is fair towards Reno. It is debatable whether fairness towards Reno remains a desirable property of a CCA, given that Reno is no longer widely deployed. Regardless, ccperf includes this experiment for the sake of comprehensiveness.

Many newer CCAs that aim to be more efficient are, as a consequence, unfair towards Reno. That is, these CCAs grab a significantly larger share of the bandwidth when competing against Reno. Conversely, CCAs that try to avoid queueing delay may yield their bandwidth to Reno, which is a buffer-filling algorithm: in steady state, Reno blindly increases its congestion window (cwnd) even when the bandwidth is already exhausted.
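To make Reno's steady-state behavior concrete, the following is a minimal, illustrative sketch of its AIMD rules (additive increase during congestion avoidance, multiplicative decrease on loss) as described in RFC 5681. It is not ns-3's TcpNewReno implementation; the function names and the byte-based cwnd accounting are simplifications chosen for this example.

```python
MSS = 1460  # maximum segment size in bytes (illustrative value)

def reno_on_ack(cwnd: float, mss: int = MSS) -> float:
    # Congestion avoidance: additive increase of roughly one MSS per RTT,
    # approximated as mss*mss/cwnd bytes per ACK received.
    return cwnd + mss * mss / cwnd

def reno_on_loss(cwnd: float, mss: int = MSS) -> float:
    # Multiplicative decrease: halve the window on a loss event,
    # but never drop below two segments.
    return max(cwnd / 2, 2 * mss)
```

Because the increase step never stops while ACKs keep arriving, Reno keeps inflating the bottleneck queue until a loss forces the halving step, which is why delay-avoiding CCAs sharing a buffer with Reno tend to back off first.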

Scenario

In the Reno-friendliness experiment, multiple flows operate in a static dumbbell network. Each flow generates greedy source traffic and uses either the CCA under test or Reno. The experiment has one parameter, k, which sets the number of flows. Half of the flows (rounded down) use the CCA under test, whereas the other half (rounded up) use Reno.

To summarize the experiment setup:

  • Topology: Dumbbell topology with static network parameters

  • Flows: Multiple flows (\(k>1\)) that use either the CCA under test or Reno

  • Traffic Generation Model: Greedy source traffic
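The flow split described above (half rounded down for the CCA under test, half rounded up for Reno) can be sketched as follows; `split_flows` is a hypothetical helper written for this illustration, not part of ccperf.

```python
def split_flows(k: int) -> tuple[int, int]:
    # Half the flows (rounded down) run the CCA under test;
    # the remaining flows (rounded up) run Reno.
    n_aut = k // 2
    n_reno = k - n_aut
    return n_aut, n_reno
```

For the experiment below, k=4 yields two flows for the CCA under test (TcpNewReno) and two Reno flows (TcpLinuxReno), matching the flow table.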

Experiment Results

Experiment #74

Parameters

Command: ns3-dev-ccperf-static-dumbbell-default --experiment-name=reno_fairness --db-path=benchmark_TcpNewReno.db '--parameters={aut:TcpNewReno,k:4}' --aut=TcpNewReno --stop-time=15s --seed=42 --bw=64Mbps --loss=0.0 --qlen=80p --qdisc=FifoQueueDisc --rtts=15ms,15ms,15ms,15ms --sources=src_0,src_1,src_2,src_3 --destinations=dst_0,dst_1,dst_2,dst_3 --protocols=TCP,TCP,TCP,TCP --algs=TcpNewReno,TcpNewReno,TcpLinuxReno,TcpLinuxReno --recoveries=TcpPrrRecovery,TcpPrrRecovery,TcpPrrRecovery,TcpPrrRecovery --start-times=0s,0s,0s,0s --stop-times=15s,15s,15s,15s '--traffic-models=Greedy(bytes=0),Greedy(bytes=0),Greedy(bytes=0),Greedy(bytes=0)'

Flows

src dst transport_protocol cca cc_recovery_alg traffic_model start_time stop_time
src_0 dst_0 TCP TcpNewReno TcpPrrRecovery Greedy(bytes=0) 0.00 15.00
src_1 dst_1 TCP TcpNewReno TcpPrrRecovery Greedy(bytes=0) 0.00 15.00
src_2 dst_2 TCP TcpLinuxReno TcpPrrRecovery Greedy(bytes=0) 0.00 15.00
src_3 dst_3 TCP TcpLinuxReno TcpPrrRecovery Greedy(bytes=0) 0.00 15.00

Metrics

The following tables list the flow, link, and network metrics of experiment #74. Refer to the metrics page for definitions of the listed metrics.

Flow Metrics

Flow metrics capture the performance of an individual flow. They are measured at the endpoints of a network path at either the source, the receiver, or both. Bold values indicate which flow achieved the best performance.

Metric flow_1 flow_2 flow_3 flow_4
cov_in_flight_l4 0.24 0.25 0.06 0.05
cov_throughput_l4 0.21 0.20 0.16 0.16
flow_completion_time_l4 14.99 15.00 15.00 15.00
mean_cwnd_l4 31.88 31.84 40.50 40.49
mean_delivery_rate_l4 13.40 13.38 17.38 17.38
mean_est_qdelay_l4 11.72 11.74 11.70 11.69
mean_idt_ewma_l4 0.65 0.74 0.54 0.49
mean_in_flight_l4 31.43 31.41 40.00 40.00
mean_network_power_l4 501.16 499.39 661.99 660.80
mean_one_way_delay_l7 2153.43 2223.14 1713.23 1713.19
mean_recovery_time_l4 30.88 30.15 53.13 52.94
mean_sending_rate_l4 13.47 13.45 17.44 17.44
mean_sending_rate_l7 15.54 15.52 19.52 19.52
mean_srtt_l4 26.72 26.74 26.70 26.69
mean_throughput_l4 13.41 13.39 17.40 17.38
mean_throughput_l7 13.41 13.39 17.40 17.38
mean_utility_mpdf_l4 -0.08 -0.08 -0.06 -0.06
mean_utility_pf_l4 2.57 2.57 2.84 2.84
mean_utilization_bdp_l4 0.41 0.41 0.52 0.52
mean_utilization_bw_l4 0.21 0.21 0.27 0.27
total_retransmissions_l4 65.00 63.00 40.00 40.00
total_rtos_l4 0.00 0.00 0.00 0.00
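A quick sanity check on the bandwidth split, using the mean_throughput_l4 values from the table above (the grouping of flows into CCA-under-test and Reno follows the flow table):

```python
# mean_throughput_l4 values in Mbps, taken from the flow metrics table
aut_tputs = [13.41, 13.39]    # flow_1, flow_2: TcpNewReno (CCA under test)
reno_tputs = [17.40, 17.38]   # flow_3, flow_4: TcpLinuxReno

# Share of the achieved aggregate throughput taken by the CCA under test
aut_share = sum(aut_tputs) / (sum(aut_tputs) + sum(reno_tputs))
```

The CCA under test obtains roughly 44% of the aggregate throughput, i.e., in this run it yields bandwidth to the Reno flows rather than suppressing them.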

Network Metrics

Network metrics assess the entire network as a whole by aggregating other metrics, e.g., the aggregated throughput of all flows. Hence, the network metrics table has only a single column, named net.

Metric net
mean_agg_in_flight_l4 142.84
mean_agg_throughput_l4 61.57
mean_agg_utility_mpdf_l4 -0.28
mean_agg_utility_pf_l4 10.83
mean_agg_utilization_bdp_l4 1.86
mean_agg_utilization_bw_l4 0.96
mean_entropy_fairness_throughput_l4 1.39
mean_jains_fairness_throughput_l4 0.98
mean_product_fairness_throughput_l4 54280.02
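The reported mean_jains_fairness_throughput_l4 of 0.98 can be reproduced from the per-flow mean throughputs. The sketch below computes Jain's fairness index, \((\sum x_i)^2 / (n \sum x_i^2)\), which equals 1.0 for a perfectly fair allocation; the function name is chosen for this illustration.

```python
def jains_fairness(throughputs: list[float]) -> float:
    # Jain's fairness index: (sum x)^2 / (n * sum x^2); 1.0 = perfectly fair.
    n = len(throughputs)
    return sum(throughputs) ** 2 / (n * sum(x * x for x in throughputs))

# mean_throughput_l4 values (Mbps) from the flow metrics table
tputs = [13.41, 13.39, 17.40, 17.38]
index = jains_fairness(tputs)
```

Note that the time-averaged value reported by ccperf need not exactly equal the index of the time-averaged throughputs, but here the two agree to two decimal places.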

Figures

The following figures show the results of experiment #74.

Time Series Plot of the Operating Point

Time series plot of the number of segments in flight, the smoothed round-trip time (sRTT), and the throughput at the transport layer.

Distribution of the Operating Point

The empirical cumulative distribution function (eCDF) of the throughput and smoothed round-trip time (sRTT) at the transport layer of each flow.

In Flight vs Mean Operating Point

The mean throughput and mean smoothed round-trip time (sRTT) at the transport layer of each flow. The optimal operating point is highlighted with a star (magenta). The joint operating point is given by the aggregated throughput and the mean sRTT over all flows.