Thursday, November 15, 2007

Limiting outgoing bandwidth

I needed a way to limit the amount of bandwidth sent by an application.  Ideally the limiting would be done in a way that didn't introduce large gaps in transmission (which show up as jitter).  After looking around Wikipedia I came across an algorithm that looked like the perfect fit: the token bucket algorithm.

The simplicity of the token bucket algorithm for limiting bandwidth is exceeded only by its elegance.  Instead of attacking the problem of sending too much or too little data head-on, e.g., by sending more or less data based on what has already been sent, it uses a generative approach.

Controlling the rate of flow based on what has already been sent requires keeping track of what has been sent and deciding how much history to keep.  Those are two parameters that may vary depending on the characteristics of the flow (bursty vs. non-bursty).  In other words, determining how much data to send based solely on how much data has already been sent isn't an easy problem.

On the other hand, generating permission to send (tokens) in proportion to the desired rate of flow is relatively straightforward.  As the token bucket algorithm prescribes, you generate a token every 1/r seconds, where r is the desired rate of flow.  If an application is only allowed to send as much data as there are tokens in the bucket, then it will tend to send data at the desired rate.
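
Here's a rough sketch of that idea in Python.  The class and parameter names are mine, and the assumption that one token corresponds to one byte is just a convenient choice; rather than waking up every 1/r seconds, it adds the tokens that would have accrued since the last check.

    import time

    class TokenBucket:
        """Tokens accrue at `rate` per second, up to `capacity` (b)."""

        def __init__(self, rate, capacity):
            self.rate = float(rate)          # r: tokens (here, bytes) added per second
            self.capacity = float(capacity)  # b: maximum bucket size
            self.tokens = float(capacity)    # start with a full bucket
            self.last = time.monotonic()

        def _refill(self):
            # Adding (elapsed * rate) tokens is equivalent to adding one
            # token every 1/r seconds, without needing a timer.
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now

        def consume(self, amount):
            """Return True if `amount` tokens were available and removed."""
            self._refill()
            if amount <= self.tokens:
                self.tokens -= amount
                return True
            return False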

The other parameter in this algorithm is b, the maximum bucket size.  Again, this is a parameter largely dictated by the data; we have a pretty good idea of the largest burst we'll ever need to send.  In the absence of any other information, I suppose you could always use the Maximum Transmission Unit (MTU) for whatever network your application is using.
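
Continuing the sketch above, here's how that might look with b set to a typical 1500-byte Ethernet MTU and a rate of roughly 512 KB/s; the send_limited() helper and the numbers are illustrative, not part of the algorithm itself.

    bucket = TokenBucket(rate=512 * 1024, capacity=1500)

    def send_limited(sock, data):
        # Assumes len(data) <= bucket.capacity; larger payloads would need
        # to be split into MTU-sized chunks first.
        while not bucket.consume(len(data)):
            # Sleep roughly long enough for the needed tokens to accrue.
            time.sleep(len(data) / bucket.rate)
        sock.sendall(data)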
