
RE: [RPRWG] Cut through definition?

You make a good point here. From our analysis and simulation studies, we agree that flow control has to react faster than the traffic changes; otherwise it cannot keep up with the fluctuations in traffic. If you revisit two presentations made at the March meeting, one by Dynarc (Lars) and another by Lantern (Adisak), they clearly show that the TCP ramp-up time is an order of magnitude slower than the ring delay. Our results confirm that if the flow control converges within a few ring delays, it can track the traffic changes effectively and correctly.
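
As a rough illustration of the timescales involved, here is a small Python sketch; the ring size, the end-to-end RTT, and the window-growth numbers are my own assumptions for illustration, not figures from either presentation:

# Rough timescale comparison: metro ring delay vs. TCP ramp-up.
# All numbers below are illustrative assumptions, not measured values.
RING_CIRCUMFERENCE_KM = 100          # assumed metro ring size
PROPAGATION_KM_PER_S = 2.0e5         # roughly the speed of light in fiber

ring_delay_s = RING_CIRCUMFERENCE_KM / PROPAGATION_KM_PER_S
print(f"one trip around the ring: {ring_delay_s * 1e6:.0f} us")

# TCP slow start roughly doubles its window once per end-to-end RTT,
# so filling a large pipe takes many RTTs.
E2E_RTT_S = 5e-3                     # assumed end-to-end RTT of a TCP flow
DOUBLINGS_TO_FILL_PIPE = 8           # e.g. growing from 1 to ~256 segments
tcp_rampup_s = E2E_RTT_S * DOUBLINGS_TO_FILL_PIPE
print(f"TCP ramp-up (approx.):    {tcp_rampup_s * 1e3:.0f} ms")
print(f"ratio: roughly {tcp_rampup_s / ring_delay_s:.0f}x")

With these assumed numbers the ramp-up is well over an order of magnitude slower than one ring delay, so a flow-control loop that converges within a few ring delays sees the TCP traffic as nearly stationary.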
 
Maybe the performance ad hoc group can look into this and formulate criteria for evaluating the various flow control schemes.
 
- Kanaiya
 
-----Original Message-----
From: Ashwin Moranganti [mailto:amoranganti@xxxxxxxxxxxxx]
Sent: Thursday, March 29, 2001 11:18 AM
To: 'Carey Kloss'; stds-802-17@xxxxxxxx
Subject: RE: [RPRWG] Cut through definition?


Carey, I believe you have done a good job of describing the different schemes.
Now I think we can start discussing them.

The fundamental problem with messaging schemes that distribute traffic usage information is as follows.
The state of congestion, I believe, is "always on" (to maximize revenue, service providers will always over-subscribe the bandwidth). Add to that the fact that data traffic is bursty, and the sending of messages and the throttling of systems in response to them becomes a constant process. Closed-loop control algorithms fail badly if the systems do not respond faster than the events they are trying to control: by the time you are ready to apply the remedy for the congestion, the original problem has ceased to exist and a new problem has arisen.
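
To make the timing argument concrete, here is a toy Python model (my own illustration, not any proposed scheme): the controller only sees the congestion state after a feedback delay, and when the traffic changes faster than that delay, the action it takes no longer matches the current state.

import random

random.seed(1)
FEEDBACK_DELAY = 5        # control-loop delay, in time steps
BURST_PERIOD = 2          # traffic flips between burst and idle this often
STEPS = 1000

history = []              # congestion state observed at each step
wrong = 0
for t in range(STEPS):
    congested_now = (t // BURST_PERIOD) % 2 == 0 and random.random() < 0.8
    history.append(congested_now)

    # the throttling decision is based on stale information
    seen = history[t - FEEDBACK_DELAY] if t >= FEEDBACK_DELAY else False
    if seen != congested_now:
        wrong += 1

print(f"throttle decision contradicts the current state {100 * wrong / STEPS:.0f}% of the time")

By the time the "slow down" decision reaches the sources, the burst that triggered it is often already gone.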

Note: In metro access rings the above state is a certainty, and these algorithms do not work.
In metro core rings, where the traffic patterns do not change as much, these mechanisms can work.


Thank you,
Ashwin
 
-----Original Message-----
From: Carey Kloss [mailto:ckloss@xxxxxxxxxxxxxxxx]
Sent: Wednesday, March 28, 2001 9:07 PM
To: stds-802-17@xxxxxxxx
Subject: [RPRWG] Cut through definition?



I would like to revisit the cut-through vs. store-and-forward
discussion, if nobody objects.

The last discussion ended with a wish to get a more concrete definition
of cut-through. Towards that end, I would like to put out my own
understanding, and generate feedback on what's specifically different in
current schemes:

From what I understand, cut-through works as Sanjay has explained it
(see the sketch after this list):
1. Transit (pass-thru) traffic always has priority over transmit
(add-on) traffic, regardless of class.
2. There is a small (1-2 MTU) transit buffer to hold incoming transit
traffic when sending transmit traffic.
3. All prioritization happens at a higher layer, when deciding what to
transmit.
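
To make those three rules concrete, here is a minimal Python sketch of
the data path as I understand it; the names, the buffer size, and the
frame representation are my own assumptions, purely for illustration:

from collections import deque

MTU = 1500
TRANSIT_BUFFER_LIMIT = 2 * MTU       # rule 2: roughly 1-2 MTU of buffering

transit_buffer = deque()             # frames passing through from upstream
add_queue = deque()                  # frames this station wants to add;
                                     # rule 3: class arbitration happens
                                     # above the MAC when filling this queue

def accept_from_upstream(frame):
    """Buffer a transit frame; the buffer only has to absorb the time an
    add frame spends on the wire, so it stays small (rule 2)."""
    used = sum(len(f) for f in transit_buffer)
    if used + len(frame) <= TRANSIT_BUFFER_LIMIT:
        transit_buffer.append(frame)
        return True
    return False                     # would overflow: drop or backpressure

def select_next_frame():
    """Rule 1: transit (pass-thru) traffic always beats add traffic."""
    if transit_buffer:
        return transit_buffer.popleft()
    if add_queue:
        return add_queue.popleft()
    return None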

I was also wondering if there is any agreement on cut-through congestion
control mechanisms? Looking through the presentations on the RPR
website, I've seen a number of schemes, and this is my understanding
from the slides. Please correct me if I've misunderstood:

1. The simplest, local fairness, which I'm not sure anyone is
implementing: when the HOL (head-of-line) timer expires for high-pri
traffic, send a congestion packet upstream. This stalls the upstream
neighbor from sending low-pri traffic (after some delay).

2. Fujitsu: Keep a cache of the most active source nodes. If a node's
HOL timer expires, it sends a unicast "pause" message to throttle the
most active source for a time. After another timeout, it sends more
"pause" messages to other sources. This can be extended to cover
multiple priorities, although I didn't see that explicitly stated in
the slides. (A rough sketch of this follows the list.)

3. Nortel, iPT-CAP: When an HOL timer expires, the node calculates the
number of sources sending through the congested link, and apportions
the link fairly (if the link is 150M and there are 3 sources, it
decides that each source can use 50M). To do this, it sets its own B/W
cap to 50M, and then sends a message upstream to tell the other nodes
to start sending at only 50M. Once the affected link becomes
uncongested, new messages are sent upstream, advising that more B/W is
now available. This will converge to a fair B/W allocation. (A sketch
of the share calculation follows the list.)

4. Dynarc: Token passing and credits. No detailed description. What is
the "goodput"?

5. Lantern: Per-SLA weighted fairness, with remaining bandwidth
apportioned fairly to the SLAs. There wasn't a good explanation of
congestion handling, though. If the per-SLA rate limits are strictly
enforced to stop congestion, and traffic is bursty, what happens to
the "goodput"? (A weighted-share sketch also follows the list.)

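For scheme 2, here is a rough Python sketch of how I read the slides;
the cache size, the pause duration, and all the names are my own
assumptions:

import time
from collections import Counter

CACHE_SIZE = 8                 # assumed size of the active-source cache
PAUSE_SECONDS = 0.01           # assumed pause duration

byte_counts = Counter()        # bytes seen per source station
paused_until = {}              # source -> time its pause expires

def note_transit_frame(src, length):
    """Track the most active sources as transit frames go by."""
    byte_counts[src] += length
    for victim, _ in byte_counts.most_common()[CACHE_SIZE:]:
        del byte_counts[victim]              # keep only the busiest sources

def on_hol_timeout(send_pause, now=None):
    """Each HOL timeout pauses one more of the most active sources."""
    now = time.monotonic() if now is None else now
    for src, _ in byte_counts.most_common():
        if paused_until.get(src, 0) <= now:
            paused_until[src] = now + PAUSE_SECONDS
            send_pause(src, PAUSE_SECONDS)   # unicast "pause" message
            return src
    return None                              # everyone is already paused
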
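For scheme 3, a short sketch of the share calculation and the
advertised cap, using the 150M / 3 sources example from above; the
class name and the upstream-message callback are placeholders of my
own:

def fair_share(link_capacity_bps, active_sources):
    """Even split of the congested link across its current sources."""
    return link_capacity_bps / max(1, len(active_sources))

class IptCapNodeSketch:                      # hypothetical name
    def __init__(self, link_capacity_bps, send_upstream):
        self.link_capacity_bps = link_capacity_bps
        self.send_upstream = send_upstream   # callback carrying the cap
        self.local_cap_bps = link_capacity_bps

    def on_hol_timeout(self, active_sources):
        share = fair_share(self.link_capacity_bps, active_sources)
        self.local_cap_bps = share           # cap our own add traffic
        self.send_upstream(share)            # ask upstream nodes to do the same

    def on_link_uncongested(self):
        # advertise that bandwidth is available again; repeated adjustments
        # are what lets the allocation converge to a fair split
        self.local_cap_bps = self.link_capacity_bps
        self.send_upstream(self.link_capacity_bps)

# 150 Mb/s shared by 3 sources -> 50 Mb/s each
print(fair_share(150e6, ["A", "B", "C"]) / 1e6, "Mb/s per source")
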
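For scheme 5, a sketch of per-SLA weighted fairness where bandwidth
left over by light SLAs is re-apportioned, again by weight, among the
rest; the SLA structure and the example numbers are my own assumptions:

def weighted_fair_shares(capacity, demands, weights):
    """demands/weights are dicts keyed by SLA id; returns rate per SLA."""
    alloc = {sla: 0.0 for sla in demands}
    hungry = set(demands)
    remaining = capacity
    while hungry and remaining > 1e-9:
        total_w = sum(weights[s] for s in hungry)
        spent = 0.0
        satisfied = set()
        for s in hungry:
            share = remaining * weights[s] / total_w
            grant = min(share, demands[s] - alloc[s])
            alloc[s] += grant
            spent += grant
            if alloc[s] >= demands[s] - 1e-9:
                satisfied.add(s)
        remaining -= spent
        if not satisfied:            # everyone used their full share
            break
        hungry -= satisfied
    return alloc

# 100 Mb/s segment; SLA "a" is light, so its unused share goes to the others
demands = {"a": 10e6, "b": 60e6, "c": 80e6}
weights = {"a": 1, "b": 1, "c": 2}
shares = weighted_fair_shares(100e6, demands, weights)
print({sla: round(bps / 1e6, 1) for sla, bps in shares.items()})

My "goodput" question above is really about what happens when the
demands change faster than shares like these can be recomputed and
enforced.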
Thanks a lot,
--Carey Kloss


Sanjay Agrawal wrote:


     Please see comments inline.

     -Sanjay

        -----Original Message-----
        From: William Dai [mailto:wdai@xxxxxxxxxxxx]
        Sent: Thursday, March 22, 2001 2:37 PM
        To: Sanjay Agrawal; 'Devendra Tripathi'; Ajay Sahai; Ray Zeisz
        Cc: stds-802-17@xxxxxxxx
        Subject: Re: [RPRWG] MAC Question

        My understanding of the cut-through definition in Sanjay's example is

            1. Pass-through packet is allowed to transmit before it is
            completely received.

           [Sanjay Agarwal]
           Not necessarily. You get the same result whether you forward the
           packet after you have completely received it or you start
           transmitting before you have received it. In the former case you
           have one packet of delay; in the latter you don't. 1500 bytes at
           10 gig gives you 1.2 microseconds (1500 x 8 bits / 10 Gb/s).

            2. There is only one transit buffer (regardless of class).

           [Sanjay Agarwal]
           Yes, that is what the proposed cut-through schemes have. If you
           have multiple classes of service and you allow priority, then you
           have to arbitrate between add and pass classes of traffic, and at
           that moment it becomes store and forward.

            3. Scheduling algorithm always gives pass-through traffic
            (regardless of class) preference over add-on traffic.

           [Sanjay Agarwal]
           Yes, that is what the proposed cut-through schemes have. If you
           don't give pass traffic higher priority, then you don't have a
           cut-through scheme.

        which somewhat contradicts his first statement. Thus the
        interesting results.

           [Sanjay Agarwal]
           No, it doesn't.

        The debate should be based on a solid definition of cut-through
        transmission, otherwise there will be no convergence at the end of
        the discussion.

           [Sanjay Agarwal]
           I agree.

        I fully agree with Sanjay's first statement, but want to add that
        each class should have its own transit buffer (personally I prefer
        having 3 classes supported as RPR MAC services). Whether each
        transit buffer should reside in the MAC layer or the system layer
        is up for further discussion. Under this context, the Circuit
        Emulation (or some may prefer to call it Synchronous) class will
        benefit from cut-through transit.

           [Sanjay Agarwal]
           I don't agree, in the case of the present cut-through proposals.
           Unless you want to define cut-through differently.

        Ideally it could further benefit from preemptive transmission (yet
        another definition to be solidly defined).

        William Dai