
RE: [RPRWG] MAC Question



This is a very good point. Once a packet starts going out, even a high-priority packet that arrives must wait for it to finish. Store and forward at least provides a checkpoint that allows priority packets to bypass the big ones (very likely an e-mail, FTP, or HTTP load).
 
 

Regards,

Devendra Tripathi
VidyaWeb, Inc
90 Great Oaks Blvd #206
San Jose, Ca 95119
Tel: (408)226-6800,
Direct: (408)363-2375
Fax: (408)226-6862

-----Original Message-----
From: Sanjay Agrawal [mailto:sanjay@xxxxxxxxxxxx]
Sent: Thursday, March 22, 2001 11:15 AM
To: 'Devendra Tripathi'; Ajay Sahai; Ray Zeisz
Cc: stds-802-17@xxxxxxxx
Subject: RE: [RPRWG] MAC Question

Hi Ajay,

Latency and jitter requirements depend on the class of traffic. For some types (classes) of service they are critical; for others they are not.

Counter-intuitive as it is, store and forward actually gives lower end-to-end latency than cut through.

In the cut-through approach, high-priority add traffic waits while low-priority upstream pass traffic goes through. It takes two RTTs to shut off the low-priority traffic through BCN, so high priority waits 2 RTT because of the low-priority stream. In this case, low-priority pass streams impose 2 RTT of latency or jitter on the added high-priority stream. For a 200 km ring that is 2 ms; for a 2000 km ring it is 20 ms.

Total end-to-end latency = add latency + N * pass latency
In cut through: end-to-end latency = 2 RTT + N * packet delay at link speed

In the store-and-forward approach, if pass traffic is low priority it waits in the buffer while pass high-priority and local (add) high-priority traffic get to go, in that order. Thus the maximum jitter or latency imposed on high-priority traffic is, at worst, that imposed by another high-priority stream. Since high-priority streams are committed services, they never oversubscribe the link; only low-priority streams do.

In store and forward: end-to-end latency = pass high-priority burst + N * packet delay at link speed

Pass high-priority burst: at 10 Gig speeds, depending on the high-priority provisioning levels, typically on the order of microseconds.

Store and forward gives clear class-based separation. It imposes no latency penalties on committed high-priority streams (typically voice and video) due to overcommitted low-priority streams (typically data).

There is no RTT dependence here, which can range from 0.1 ms at 20 km to 10 ms at 2000 km.
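The two latency formulas above can be compared with a quick back-of-the-envelope sketch. This is purely illustrative: the propagation delay (~5 us/km in fiber), hop count, and burst size are assumed example values, not 802.17 requirements, and the function names are hypothetical.

```python
# Illustrative comparison of the cut-through vs store-and-forward
# end-to-end latency formulas from this thread. All parameter values
# are assumptions for the example, not 802.17 numbers.

PROP_DELAY_US_PER_KM = 5.0  # ~200,000 km/s propagation in fiber

def rtt_us(ring_km: float) -> float:
    """Round-trip time once around the ring, in microseconds."""
    return ring_km * PROP_DELAY_US_PER_KM

def cut_through_latency_us(ring_km: float, hops: int, pkt_delay_us: float) -> float:
    # end-to-end = 2*RTT (BCN shut-off of low-priority pass traffic)
    #              + N * per-hop packet delay at link speed
    return 2 * rtt_us(ring_km) + hops * pkt_delay_us

def store_and_forward_latency_us(hi_burst_us: float, hops: int, pkt_delay_us: float) -> float:
    # end-to-end = pass high-priority burst + N * per-hop packet delay
    return hi_burst_us + hops * pkt_delay_us

# Example: 200 km ring, 16 hops, 1500-byte packet at 10 Gb/s
pkt_us = 1500 * 8 / 10_000            # bits / (bits per microsecond) ~= 1.2 us
print(cut_through_latency_us(200, 16, pkt_us))       # dominated by 2*RTT = 2000 us
print(store_and_forward_latency_us(10, 16, pkt_us))  # ~tens of microseconds
```

With these example numbers the cut-through case is dominated by the 2 ms of 2 RTT, while the store-and-forward case stays in the tens of microseconds, matching the argument above.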


-Sanjay K. Agrawal
Luminous networks



> -----Original Message-----
> From: owner-stds-802-17@xxxxxxxx [mailto:owner-stds-802-17@xxxxxxxx]On
> Behalf Of Ajay Sahai
> Sent: Thursday, March 22, 2001 6:34 AM
> To: Ray Zeisz
> Cc: stds-802-17@xxxxxxxx
> Subject: Re: [RPRWG] MAC Question
>
>
> Ray:
>
> I guess the answer is that the group is still debating this issue. Some
> vendors prefer to have a largish transit buffer where transit frames
> are stored. Others are proposing "cut through"  transit functionality.
>
> I personally feel that latency will be larger in the first approach.
>
> On another note, I do not believe that the similarity with 802.5 is
> along the lines of claiming a token, etc. The MAC mechanism
> is going to be different.
>
> Hope this helps.
>
> Ajay Sahai
>
> Ray Zeisz wrote:
>
> > I am following the .17 group from afar, but I have a question:
> >
> > Is it acceptable for each node in the ring to buffer up an entire packet
> > before forwarding it to its neighbor?  Would the latency be too
> great if this
> > were done?  Or is the .17 direction more along the lines of
> 802.5 where only
> > a few bits in each ring node are buffered...just enough to
> detect a token
> > and set a bit to claim it.
> >
> > Ray
> >
> > Ray Zeisz
> > Technology Advisor
> > LVL7 Systems
> > http://www.LVL7.com
> > (919) 865-2735
>