
Re: [RPRWG] MAC Question



A transit buffer should always be there, whether it resides in the MAC or is hidden in the
upper system layer.
 
The RPR ring is a shared medium, so CoS support at layer 2 is definitely needed.
 
The three-class support is not a new concept; someone already implied it in their presentation
during the last meeting. Based on what I observed from the meeting, we need the following:
 
Class A: Guaranteed provisioned BW with low transit delay and jitter.
Class B: Guaranteed provisioned BW.
Class C: Best effort.
 
Classes B and C should be subject to the "fairness" algorithm; Class A has strict priority over
Class B, and Class B has strict priority over Class C. Class A should self-regulate
its injection rate subject to its provisioned BW; Class B should self-regulate
its injection rate subject to its provisioned BW as well as the "fairness" algorithm;
Class C should regulate its injection rate subject to the "fairness" algorithm only.
 
For Class A, pass-thru has strict priority over add-on; for Classes B and C, pass-thru vs.
add-on priority is subject to the "fairness" algorithm. (A rough sketch of this selection
logic follows below.)
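 
To make the rules above concrete, here is a minimal Python sketch of the transmit-selection
logic. Every name in it (ClassQueues, select_next, the token parameters) is hypothetical,
and the fairness algorithm itself is reduced to an opaque credit; this only illustrates the
priority ordering described above, not a proposed implementation.

from collections import deque

class ClassQueues:
    """Per-class pass-through (transit) and add-on queues at one station."""
    def __init__(self):
        self.transit = {"A": deque(), "B": deque(), "C": deque()}  # pass-through buffers
        self.add     = {"A": deque(), "B": deque(), "C": deque()}  # locally added traffic

def select_next(q, class_a_tokens, class_b_tokens, fair_tokens):
    """Pick the next packet to place on the ring.

    class_a_tokens / class_b_tokens: credits from each class's provisioned-BW
    shaper; fair_tokens: credit granted by the "fairness" algorithm to Class B
    and Class C add traffic (details of the algorithm are left open here).
    """
    # Class A: pass-thru has strict priority over add-on; add-on is shaped
    # by its provisioned bandwidth only.
    if q.transit["A"]:
        return q.transit["A"].popleft()
    if q.add["A"] and class_a_tokens > 0:
        return q.add["A"].popleft()

    # Class B: add-on is subject to both provisioned BW and fairness.
    # Pass-thru vs. add-on ordering is really up to the fairness algorithm;
    # pass-thru-first is shown only for illustration.
    if q.transit["B"]:
        return q.transit["B"].popleft()
    if q.add["B"] and class_b_tokens > 0 and fair_tokens > 0:
        return q.add["B"].popleft()

    # Class C: best effort, regulated by fairness only.
    if q.transit["C"]:
        return q.transit["C"].popleft()
    if q.add["C"] and fair_tokens > 0:
        return q.add["C"].popleft()
    return None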
 
As a side note, one of the objectives of the "fairness" algorithm should be no packet
loss at any transit node. Why? Again, because the RPR ring is a shared medium: a packet
on the medium can suffer a bit error, but it cannot simply disappear. A broken link is, of course, the exception.
 
This is just a conceptual discussion, no details yet. Please be patient with my wording.
 
William Dai
----- Original Message -----
Sent: Thursday, March 22, 2001 3:59 PM
Subject: Re: [RPRWG] MAC Question


You talk about wanting 3 classes supported by 3 transit buffers. Is one congestion agent enough? Or do we need to develop different ways to treat each class of traffic in cases of congestion?



"William Dai" <wdai@xxxxxxxxxxxx>
Sent by: owner-stds-802-17@xxxxxxxx

03/22/01 03:36 PM

       
        To:        "Sanjay Agrawal" <sanjay@xxxxxxxxxxxx>, "'Devendra Tripathi'" <tripathi@xxxxxxxxxxxx>, "Ajay Sahai" <Ajay.Sahai@xxxxxxxxxxxxxxx>, "Ray Zeisz" <Zeisz@xxxxxxxx>
        cc:        stds-802-17@xxxxxxxx
        Subject:        Re: [RPRWG] MAC Question




My understanding of the cut-through definition in Sanjay's example is
    1. A pass-through packet is allowed to transmit before it is completely received.
    2. There is only one transit buffer (regardless of class).
    3. The scheduling algorithm always gives pass-through traffic (regardless of class)
        preference over add-on traffic.
which somewhat contradicts his first statement. Thus the interesting results. (A rough
sketch of this model appears below.)
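 
For concreteness, here is a small Python sketch of that single-buffer cut-through model.
The names (CutThroughTransit, on_pass_through_header, next_to_send) are illustrative only
and are not taken from Sanjay's mail.

from collections import deque

class CutThroughTransit:
    """One transit FIFO for all classes; pass-through always wins (points 1-3 above)."""
    def __init__(self):
        self.transit_fifo = deque()  # single transit buffer, all classes mixed
        self.add_fifo = deque()      # locally added traffic

    def on_pass_through_header(self, pkt):
        # Point 1: forwarding may begin before the packet is fully received,
        # so the packet becomes eligible for transmission as soon as its
        # header has been parsed.
        self.transit_fifo.append(pkt)

    def next_to_send(self):
        # Point 3: pass-through traffic, regardless of class, is always
        # preferred over add-on traffic.
        if self.transit_fifo:
            return self.transit_fifo.popleft()
        if self.add_fifo:
            return self.add_fifo.popleft()
        return None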
 
The debate should be based on a solid definition of cut-through transmission; otherwise
there will be no convergence at the end of the discussion.
 
I fully agree with Sanjay's first statement, but want to add that each class should have its
own transit buffer (personally I prefer having 3 classes supported as RPR MAC services).
Whether each transit buffer should reside in the MAC layer or the system layer is up for further
discussion. In this context, the Circuit Emulation class (or, as some may prefer to call it,
Synchronous) will benefit from cut-through transit. Ideally it could further
benefit from preemptive transmission (yet another definition to be solidly defined).
 
William Dai
 
----- Original Message -----
From:  Sanjay  Agrawal
To: 'Devendra Tripathi' ; Ajay  Sahai ; Ray Zeisz  
Cc: stds-802-17@xxxxxxxx
Sent: Thursday, March 22, 2001 11:15  AM
Subject: RE: [RPRWG] MAC Question

Hi Ajay,

Latency and jitter requirements depend on the class of traffic. For some types (classes) of service they are critical; for others they are not.

Counterintuitive as it is, store-and-forward actually gives lower end-to-end latency than cut-through.

In the cut-through approach, high-priority add traffic waits while low-priority upstream traffic passes through. It takes two RTTs to shut off the low-priority traffic via BCN, so high-priority traffic waits 2 RTT because of the low-priority stream. In this case, low-priority pass streams impose 2 RTT of latency or jitter on the added high-priority stream. For a 200 km ring that is about 2 ms; for a 2000 km ring it is about 20 ms.

Total end-to-end latency = add latency + N * pass latency
Cut-through end-to-end latency = 2 RTT + N * packet delay at link speed

In the store-and-forward approach, if pass traffic is low priority it waits in the buffer while pass high-priority and local high-priority traffic get to go, in that order. Thus, the maximum jitter or latency imposed on high-priority traffic is at worst imposed by another high-priority stream. Since high-priority traffic streams are committed services, they never oversubscribe the link; only low-priority streams do.

Store-and-forward end-to-end latency = pass high-priority burst + N * packet delay at link speed

Pass high-priority burst: at 10 Gig speeds, depending on the high-priority provisioning levels, this is typically on the order of microseconds.

Store-and-forward gives clear class-based separation. It imposes no latency penalties on committed high-priority streams (typically voice and video) due to overcommitted low-priority streams (typically data).

There is no RTT dependence here; the RTT can range from 0.1 ms at 20 km to 10 ms at 2000 km.
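 
To put rough numbers on the two formulas, here is a back-of-the-envelope Python comparison.
The ring length, hop count, packet size, burst size, and link speed are illustrative
assumptions, not figures from this thread, and the RTT is taken as one trip around the ring
(consistent with the ~2 ms figure quoted above for a 200 km ring).

FIBER_DELAY_US_PER_KM = 5.0  # ~5 us/km propagation delay in fiber

def cut_through_latency_us(ring_km, hops, pkt_bytes, link_gbps):
    rtt_us = ring_km * FIBER_DELAY_US_PER_KM          # once around the ring
    per_hop_us = pkt_bytes * 8 / (link_gbps * 1e3)    # serialization delay per hop
    return 2 * rtt_us + hops * per_hop_us             # 2 RTT + N * packet delay

def store_forward_latency_us(hi_burst_bytes, hops, pkt_bytes, link_gbps):
    burst_us = hi_burst_bytes * 8 / (link_gbps * 1e3) # pass high-priority burst
    per_hop_us = pkt_bytes * 8 / (link_gbps * 1e3)
    return burst_us + hops * per_hop_us               # burst + N * packet delay

if __name__ == "__main__":
    # Assumed: 200 km ring, 16 hops, 1500-byte packets, 10 Gb/s links,
    # ~20 kB of committed high-priority burst ahead of the add traffic.
    print("cut-through      :", cut_through_latency_us(200, 16, 1500, 10), "us")
    print("store-and-forward:", store_forward_latency_us(20000, 16, 1500, 10), "us")

With these assumptions the cut-through figure is dominated by the 2 RTT term (about 2 ms),
while the store-and-forward figure stays in the tens of microseconds, which is the point
being made above.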

-Sanjay K. Agrawal
Luminous  networks


> -----Original Message-----
>  From: owner-stds-802-17@xxxxxxxx [mailto:owner-stds-802-17@xxxxxxxx]On  
> Behalf Of Ajay Sahai
> Sent:  Thursday, March 22, 2001 6:34 AM
> To: Ray  Zeisz
> Cc: stds-802-17@xxxxxxxx
> Subject: Re: [RPRWG] MAC Question
>
>
>  Ray:
>
> I guess the answer is that the group is still debating this issue. Some
> vendors prefer to have a largish transit buffer where transit frames
> are stored. Others are proposing "cut through" transit functionality.
>  
> I personally feel that latency will be larger in the  first approach.
>
> On another note, I do not believe that the similarity with 802.5 is
> along the lines of claiming a token, etc. The MAC mechanism
> is going to be different.
>
> Hope this helps.  
>
> Ajay Sahai
>
> Ray Zeisz wrote:
>
> > I am following the .17 group from afar, but I have a question:
> >
> > Is it acceptable for each node in the ring to buffer up an entire packet
> > before forwarding it to its neighbor?  Would the latency be too great if
> > this were done?  Or is the .17 direction more along the lines of 802.5,
> > where only a few bits in each ring node are buffered...just enough to
> > detect a token and set a bit to claim it.
> >
> > Ray
> >
> > Ray Zeisz
> > Technology Advisor
> > LVL7  Systems
> > http://www.LVL7.com
> > (919) 865-2735
>