UDP Background Traffic Experiment

Congestion control algorithms (CCAs) face competing traffic from transport-layer protocols that are unresponsive to congestion. The most prominent example is UDP: a UDP sender does not slow down even when congestion is present.

The UDP background traffic experiment evaluates whether a CCA tolerates competing traffic that uses UDP. Flows may start and stop throughout the experiment, creating on-off traffic with varying load. On the one hand, a CCA has to adapt to sudden traffic bursts that cause congestion; on the other hand, the disappearance of a competing flow frees bandwidth that can be utilized. A CCA should maintain a reasonable level of bandwidth utilization despite the dynamics caused by the competing on-off traffic.

Scenario

In the UDP background traffic experiment, multiple flows operate in a static dumbbell network. Each flow generates traffic according to an experiment parameter and uses either TCP with the CCA under test or UDP. The traffic model, start time, and stop time can be set for each flow individually.

The experiment has the following parameters:

  • k_tcp, k_udp: The number of TCP and UDP flows, respectively

  • tcp_traffic, udp_traffic: The traffic model of the TCP flows and the UDP flows, respectively

  • start_times, stop_times: The start and stop time of each flow

To summarize the experiment setup:

  • Topology: Dumbbell topology with static network parameters

  • Flows: Multiple flows (\(K>1\)) that use TCP with the CCA under test or UDP

  • Traffic Generation Model: Set for each flow individually by an experiment parameter
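The two traffic models used in this experiment, CBR and Poisson, can be read as packet departure processes: CBR emits packets at fixed intervals, while a Poisson source draws exponentially distributed inter-departure times with the same mean rate. The following is a minimal sketch under the assumption of a fixed 1500-byte packet size; the function names are illustrative, not part of the benchmark tool.

```python
import random

PKT_BITS = 1500 * 8  # assumed packet size of 1500 bytes

def cbr_departures(rate_mbps, duration_s):
    """CBR: packets leave at fixed intervals matching the configured rate."""
    interval = PKT_BITS / (rate_mbps * 1e6)
    t, times = 0.0, []
    while t < duration_s:
        times.append(t)
        t += interval
    return times

def poisson_departures(rate_mbps, duration_s, rng=random.Random(42)):
    """Poisson: exponential inter-departure times with the same mean rate."""
    mean_interval = PKT_BITS / (rate_mbps * 1e6)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(1.0 / mean_interval)
        if t >= duration_s:
            return times
        times.append(t)

cbr = cbr_departures(16, 15.0)   # tcp_traffic: CBR(rate=16Mbps) over 15 s
poi = poisson_departures(8, 15.0)  # udp_traffic: Poisson(rate=8Mbps) over 15 s
```

Over 15 s, the CBR source emits a deterministic 20 000 packets, while the Poisson source emits a random count with mean 10 000; the burstiness of the Poisson arrivals is what stresses the CCA's queue occupancy.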

Experiment Results

Experiment #93

Parameters

Command: ns3-dev-ccperf-static-dumbbell-default --experiment-name=udp_background --db-path=benchmark_TcpNewReno.db '--parameters={aut:TcpNewReno,k_tcp:3,k_udp:3,tcp_traffic:CBR(bytes=0,rate=16Mbps),udp_traffic:Poisson(bytes=0,rate=8Mbps),start_times:[0s,0s,0s,0s,0s,0s],stop_times:[15s,15s,15s,15s,15s,15s]}' --aut=TcpNewReno --stop-time=15s --seed=42 --start-times=0s,0s,0s,0s,0s,0s --stop-times=15s,15s,15s,15s,15s,15s --bw=48Mbps --loss=0.0 --qlen=120p --qdisc=FifoQueueDisc --rtts=15ms,15ms,15ms,15ms,15ms,15ms --sources=src_0,src_1,src_2,src_3,src_4,src_5 --destinations=dst_0,dst_1,dst_2,dst_3,dst_4,dst_5 --protocols=TCP,TCP,TCP,UDP,UDP,UDP --algs=TcpNewReno,TcpNewReno,TcpNewReno,UDP,UDP,UDP --recoveries=TcpPrrRecovery,TcpPrrRecovery,TcpPrrRecovery,UDP,UDP,UDP '--traffic-models=CBR(bytes=0, rate=16Mbps),CBR(bytes=0, rate=16Mbps),CBR(bytes=0, rate=16Mbps),Poisson(bytes=0, rate=8Mbps),Poisson(bytes=0, rate=8Mbps),Poisson(bytes=0, rate=8Mbps)'

Flows

src    dst    transport_protocol  cca         cc_recovery_alg  traffic_model                 start_time  stop_time
src_0  dst_0  TCP                 TcpNewReno  TcpPrrRecovery   CBR(bytes=0, rate=16Mbps)     0.00        15.00
src_1  dst_1  TCP                 TcpNewReno  TcpPrrRecovery   CBR(bytes=0, rate=16Mbps)     0.00        15.00
src_2  dst_2  TCP                 TcpNewReno  TcpPrrRecovery   CBR(bytes=0, rate=16Mbps)     0.00        15.00
src_3  dst_3  UDP                 UDP         UDP              Poisson(bytes=0, rate=8Mbps)  0.00        15.00
src_4  dst_4  UDP                 UDP         UDP              Poisson(bytes=0, rate=8Mbps)  0.00        15.00
src_5  dst_5  UDP                 UDP         UDP              Poisson(bytes=0, rate=8Mbps)  0.00        15.00
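The flow table implies that the bottleneck is persistently overloaded while all six flows are active: three CBR flows at 16 Mbps plus three Poisson flows with a mean of 8 Mbps offer 72 Mbps to the 48 Mbps bottleneck link. A quick check of that arithmetic:

```python
tcp_rates = [16, 16, 16]  # Mbps, CBR rate of each TCP flow
udp_rates = [8, 8, 8]     # Mbps, mean rate of each Poisson UDP flow
bottleneck = 48           # Mbps, from --bw=48Mbps

offered = sum(tcp_rates) + sum(udp_rates)
overload = offered / bottleneck
print(offered, overload)  # 72 Mbps offered, 1.5x the link capacity
```

The unresponsive UDP flows alone offer 24 Mbps, half the link, so the three NewReno flows must share what remains under sustained congestion.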

Metrics

The following tables list the flow, link, and network metrics of experiment #93. Refer to the metrics page for definitions of the listed metrics.

Flow Metrics

Flow metrics capture the performance of an individual flow. They are measured at the endpoints of a network path: at the source, the receiver, or both. Bold values indicate which flow achieved the best performance.

Metric flow_1 flow_2 flow_3 flow_4 flow_5 flow_6
cov_in_flight_l4 0.35 0.42 0.45 nan nan nan
cov_throughput_l4 0.33 0.46 0.46 0.12 0.12 0.13
flow_completion_time_l4 15.00 14.99 14.99 15.00 15.00 15.00
mean_cwnd_l4 20.38 17.04 16.35 nan nan nan
mean_delivery_rate_l4 8.68 6.58 6.81 nan nan nan
mean_est_qdelay_l4 11.22 12.08 11.51 nan nan nan
mean_idt_ewma_l4 1.23 1.93 2.40 nan nan nan
mean_in_flight_l4 19.92 16.67 15.95 nan nan nan
mean_network_power_l4 336.76 248.82 267.34 nan nan nan
mean_one_way_delay_l7 2678.10 3429.28 3239.28 18.51 18.49 18.51
mean_recovery_time_l4 40.75 142.12 66.99 nan nan nan
mean_sending_rate_l4 8.77 6.68 6.92 nan nan nan
mean_sending_rate_l7 10.82 8.72 8.95 8.02 8.00 7.97
mean_srtt_l4 26.22 27.08 26.51 nan nan nan
mean_throughput_l4 8.68 6.59 6.82 7.90 7.89 7.86
mean_throughput_l7 8.68 6.59 6.82 7.90 7.89 7.86
mean_utility_mpdf_l4 -0.13 -0.21 -0.28 -0.13 -0.13 -0.13
mean_utility_pf_l4 2.11 1.75 1.74 2.06 2.06 2.05
mean_utilization_bdp_l4 0.35 0.29 0.28 nan nan nan
mean_utilization_bw_l4 0.18 0.14 0.14 0.16 0.16 0.16
total_retransmissions_l4 113.00 129.00 146.00 0.00 0.00 0.00
total_rtos_l4 0.00 3.00 1.00 0.00 0.00 0.00
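The cov_* rows are presumably coefficients of variation, i.e., the standard deviation of the sampled time series divided by its mean; this is an assumed definition, not one stated here. Under that assumption, a minimal sketch:

```python
import statistics

def coeff_of_variation(samples):
    """Assumed definition of the cov_* metrics: population standard
    deviation over the mean of the sampled series (nan if the mean is 0)."""
    mean = statistics.fmean(samples)
    return statistics.pstdev(samples) / mean if mean else float("nan")

# Hypothetical per-interval throughput samples (Mbps) of one flow
cov = coeff_of_variation([8.0, 10.0, 12.0])
```

Read this way, the TCP flows' cov_throughput_l4 around 0.33-0.46 versus 0.12-0.13 for the UDP flows says the NewReno flows' throughput fluctuates far more than the unresponsive Poisson sources'.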

Network Metrics

Network metrics assess the network as a whole by aggregating other metrics, e.g., the aggregated throughput of all flows. Hence, the network metrics table has only a single column, named net.

Metric net
mean_agg_in_flight_l4 52.54
mean_agg_throughput_l4 45.74
mean_agg_utility_mpdf_l4 -1.00
mean_agg_utility_pf_l4 11.77
mean_agg_utilization_bdp_l4 0.91
mean_agg_utilization_bw_l4 0.95
mean_entropy_fairness_throughput_l4 1.79
mean_jains_fairness_throughput_l4 0.92
mean_product_fairness_throughput_l4 142076.76
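Jain's fairness index, the basis of mean_jains_fairness_throughput_l4, is \((\sum_i x_i)^2 / (n \sum_i x_i^2)\) over the flows' throughputs; it is 1 for a perfectly even split and \(1/n\) when one flow takes everything. A sketch applied to the mean throughputs from the flow table:

```python
def jains_fairness(xs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2), in (1/n, 1]."""
    n = len(xs)
    return sum(xs) ** 2 / (n * sum(x * x for x in xs))

# Mean throughputs (Mbps) of the six flows from the flow metrics table
jfi = jains_fairness([8.68, 6.59, 6.82, 7.90, 7.89, 7.86])
```

Applied to the mean throughputs this yields roughly 0.99, noticeably above the reported 0.92, which suggests the table value averages the instantaneous index over time rather than applying the formula to the mean throughputs; that reading is an assumption, not stated in the source.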

Figures

The following figures show the results of experiment #93.

Time Series Plot of the Operating Point

Time series plot of the number of segments in flight, the smoothed round-trip time (sRTT), and the throughput at the transport layer.

Mean Operating Point Plane

The mean throughput and mean smoothed round-trip time (sRTT) at the transport layer of each flow.

In Flight vs Mean Operating Point

The mean throughput and mean smoothed round-trip time (sRTT) at the transport layer of each flow. The optimal operating point is highlighted with a star (magenta). The joint operating point is given by the aggregated throughput and the mean sRTT over all flows.

Distribution of the Operating Point

The empirical cumulative distribution function (eCDF) of the throughput and smoothed round-trip time (sRTT) at the transport layer of each flow.