
Flowtune: Flowlet Control for Datacenter Networks

dc.date.accessioned: 2016-08-15T20:00:07Z
dc.date.accessioned: 2018-11-26T22:27:37Z
dc.date.available: 2016-08-15T20:00:07Z
dc.date.available: 2018-11-26T22:27:37Z
dc.date.issued: 2016-08-15
dc.identifier.uri: http://hdl.handle.net/1721.1/103920
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/103920
dc.description.abstract: Rapid convergence to a desired allocation of network resources to endpoint traffic has been a long-standing challenge for packet-switched networks. The reason for this is that congestion control decisions are distributed across the endpoints, which vary their offered load in response to changes in application demand and network feedback on a packet-by-packet basis. We propose a different approach for datacenter networks, flowlet control, in which congestion control decisions are made at the granularity of a flowlet, not a packet. With flowlet control, allocations have to change only when flowlets arrive or leave. We have implemented this idea in a system called Flowtune using a centralized allocator that receives flowlet start and end notifications from endpoints. The allocator computes optimal rates using a new, fast method for network utility maximization, and updates endpoint congestion-control parameters. Experiments show that Flowtune outperforms DCTCP, pFabric, sfqCoDel, and XCP on tail packet delays in various settings, converging to optimal rates within a few packets rather than over several RTTs. Our implementation of Flowtune handles 10.4x more throughput per core and scales to 8x more cores than Fastpass, for an 83-fold throughput gain. [en_US]
dc.format.extent: 15 p. [en_US]
dc.title: Flowtune: Flowlet Control for Datacenter Networks [en_US]
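The abstract describes a centralized allocator that computes per-flowlet rates by solving a network utility maximization (NUM) problem. As a rough illustration of what such an allocation computes, the sketch below solves a toy NUM instance with log utilities (proportional fairness) by dual gradient descent on link prices. This is a minimal, assumed-for-illustration example: the topology, flow names, step size, and solver are not Flowtune's actual fast NUM method from the paper.

```python
# Toy network utility maximization (NUM) via dual gradient descent.
# Log utilities U(x) = log x yield the proportionally fair allocation.
# Illustrative only -- NOT Flowtune's optimized NUM algorithm; the
# topology and parameters below are assumptions for this example.

def num_allocate(links, flows, steps=20000, eta=0.01, eps=1e-6):
    """links: {link: capacity}; flows: {flow: [links on its path]}.
    Returns {flow: rate} approximating the proportionally fair rates."""
    price = {l: 1.0 for l in links}  # dual variables: per-link prices
    rate = {}
    for _ in range(steps):
        # Best response for U(x) = log x: rate = 1 / (sum of path prices).
        rate = {f: 1.0 / max(sum(price[l] for l in path), eps)
                for f, path in flows.items()}
        # Gradient step: raise a link's price when demand exceeds capacity,
        # lower it (down to eps) when the link is underutilized.
        for l, cap in links.items():
            load = sum(rate[f] for f, path in flows.items() if l in path)
            price[l] = max(price[l] + eta * (load - cap), eps)
    return rate

# Two unit-capacity links; flow B crosses both, A and C use one each.
links = {"l0": 1.0, "l1": 1.0}
flows = {"A": ["l0"], "B": ["l0", "l1"], "C": ["l1"]}
rates = num_allocate(links, flows)
# Proportional fairness gives roughly A = C = 2/3 and B = 1/3 here.
```

In Flowtune's setting this optimization runs only when flowlets start or end, rather than on every packet, which is why allocations can converge within a few packets instead of over several RTTs.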


Files in this item

File: MIT-CSAIL-TR-2016-011.pdf
Size: 384.3Kb
Format: application/pdf

