
RE: [RPRWG] MAC Question



Yes, Tx means transmit.  By "blocking" I meant a large non-priority packet blocking a small priority packet, because
the transmit medium can take only one packet at a time.
 

Regards,

Devendra Tripathi
VidyaWeb, Inc
90 Great Oaks Blvd #206
San Jose, Ca 95119
Tel: (408)226-6800,
Direct: (408)363-2375
Fax: (408)226-6862

-----Original Message-----
From: William Dai [mailto:wdai@xxxxxxxxxxxx]
Sent: Thursday, March 22, 2001 6:04 PM
To: Devendra Tripathi; Sanjay Agrawal; Ajay Sahai; Ray Zeisz
Cc: stds-802-17@xxxxxxxx
Subject: Re: [RPRWG] MAC Question

I'm confused; I hope your comments are positive.
 
Do you mean the transit buffer when you say tx buffer?
When you say blocking, do you mean a pass-through packet blocking an add-on packet, or vice versa?
 
By the way, I think we need to somehow standardize the terminology we use in the RPR discussion.
Here, "transit" and "pass-through" mean the same thing, and "add-on", "injection", and "insertion"
mean the same thing too.
 
William Dai
----- Original Message -----
Sent: Thursday, March 22, 2001 5:18 PM
Subject: RE: [RPRWG] MAC Question

Hi William,
 
I see your point now. If there are separate tx buffers, the additional probability that a regular packet will block a priority
packet in cut-through mode is (PriorPktSize/AvgPktSize), not 1.0. If AvgPktSize is 512 bytes and
PriorPktSize (priority packet size) is, say, 128 bytes, we are talking about 25% additional blockage. If the network is congested, it may fall even further. When we translate the blockage into relative latency, we need to subtract the
storage time (PrioPktSize - MinProcessTime) from this. The net result becomes very small (though it is still
positive).
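A minimal sketch of this blocking arithmetic, using the packet sizes above; the line rate and all variable names are illustrative assumptions, not figures from the thread:

```python
# Sketch of the blocking estimate above. The 1 Gb/s line rate is an
# assumed example value; the packet sizes come from the email.

LINK_RATE_BPS = 1e9      # assumed line rate (illustrative)
AVG_PKT_SIZE = 512       # bytes, average packet size on the ring
PRIOR_PKT_SIZE = 128     # bytes, priority packet size

# Additional probability that a regular packet blocks a priority
# packet in cut-through mode, per the email's estimate:
blocking_prob = PRIOR_PKT_SIZE / AVG_PKT_SIZE  # 128/512 = 25%

# Worst-case extra wait: the full serialization time of the regular
# packet already occupying the transmit medium.
serialization_s = 8 * AVG_PKT_SIZE / LINK_RATE_BPS

print(f"blocking probability: {blocking_prob:.0%}")
print(f"max extra wait: {serialization_s * 1e6:.3f} us")
```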
 
I hope I have not confused things even more !
 

Regards,

Devendra Tripathi
VidyaWeb, Inc
90 Great Oaks Blvd #206
San Jose, Ca 95119
Tel: (408)226-6800,
Direct: (408)363-2375
Fax: (408)226-6862

-----Original Message-----
From: William Dai [mailto:wdai@xxxxxxxxxxxx]
Sent: Thursday, March 22, 2001 2:37 PM
To: Sanjay Agrawal; 'Devendra Tripathi'; Ajay Sahai; Ray Zeisz
Cc: stds-802-17@xxxxxxxx
Subject: Re: [RPRWG] MAC Question

My understanding of the cut-through definition in Sanjay's example is:
    1. A pass-through packet is allowed to transmit before it is completely received.
    2. There is only one transit buffer (regardless of class).
    3. The scheduling algorithm always gives pass-through traffic (regardless of class)
        preference over add-on traffic.
which somewhat contradicts his first statement. Hence the interesting results.
 
The debate should be based on a solid definition of cut-through transmission, otherwise
there will be no convergence at the end of the discussion.
 
I fully agree with Sanjay's first statement, but want to add that each class should have its
own transit buffer (personally I prefer having 3 classes supported as RPR MAC services).
Whether each transit buffer should reside in the MAC layer or the system layer is open to further
discussion. In this context, the Circuit Emulation class (or, as some may prefer to call it,
Synchronous) will benefit from cut-through transit. Ideally it could further
benefit from preemptive transmission (yet another term to be solidly defined).
 
William Dai
 
----- Original Message -----
Sent: Thursday, March 22, 2001 11:15 AM
Subject: RE: [RPRWG] MAC Question

Hi Ajay,

Latency and jitter requirements depend on the class of traffic. For some type (class) of services it is critical for others it is not.

Counterintuitive as it is, store-and-forward actually gives lower end-to-end latency than cut-through.

In the cut-through approach, high-priority add traffic waits while low-priority upstream traffic passes through. It takes two RTTs to shut off the low-priority traffic via BCN. Thus high-priority traffic waits 2 RTT because of the low-priority stream; in this case low-priority pass streams impose 2 RTT of latency or jitter on the added high-priority stream. For a 200 km ring that is 2 ms; for a 2000 km ring it is 20 ms.

Total end-to-end latency = add latency + N * pass latency
In cut-through: end-to-end latency = 2 RTT + N * (packet delay at link speed)

In the store-and-forward approach, if pass traffic is low priority it waits in the buffer while pass high-priority and local high-priority traffic get to go, in that order. Thus, the maximum jitter or latency imposed on high-priority traffic is at worst that imposed by another high-priority stream. Since high-priority streams are committed services, they never oversubscribe the link; only low-priority streams do.

In store-and-forward: end-to-end latency = pass high-priority burst + N * (packet delay at link speed)

Pass high-priority burst: at 10 Gig speeds, depending on the high-priority provisioning levels,
                                typically on the order of microseconds.

Store-and-forward gives clear class-based separation. It imposes no latency penalties on committed high-priority streams (typically voice and video) due to overcommitted low-priority streams (typically data).

There is no RTT dependence here; RTT can range from .1 msec at 20 km to 10 msec at 2000 km.
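The two latency formulas above can be compared numerically. In this sketch the ring length, line rate, hop count, and burst duration are all assumed example values, not figures from the discussion:

```python
# Illustrative comparison of the cut-through and store-and-forward
# end-to-end latency formulas from the email. All parameter values
# below are assumptions chosen for the example.

PROP_DELAY_S_PER_KM = 5e-6   # ~5 us/km propagation in fiber (one way)
LINE_RATE_BPS = 10e9         # assumed 10 Gb/s link
PKT_SIZE_BYTES = 512         # assumed packet size
N_HOPS = 16                  # assumed number of transit nodes
RING_KM = 200                # assumed ring circumference

rtt = 2 * RING_KM * PROP_DELAY_S_PER_KM          # round-trip time: 2 ms
pkt_delay = 8 * PKT_SIZE_BYTES / LINE_RATE_BPS   # serialization per hop

# Cut-through (per the email): 2 RTT to quench low-priority traffic
# via BCN, plus per-hop packet delay.
cut_through = 2 * rtt + N_HOPS * pkt_delay

# Store-and-forward (per the email): pass high-priority burst
# (assumed a few microseconds) plus per-hop packet delay.
hi_prio_burst = 5e-6                             # assumed 5 us burst
store_forward = hi_prio_burst + N_HOPS * pkt_delay

print(f"cut-through:       {cut_through * 1e3:.3f} ms")
print(f"store-and-forward: {store_forward * 1e6:.1f} us")
```

The RTT term dominates cut-through latency on long rings, which is the email's point: store-and-forward's cost stays in the microsecond range regardless of ring size.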


-Sanjay K. Agrawal
Luminous networks



> -----Original Message-----
> From: owner-stds-802-17@xxxxxxxx [mailto:owner-stds-802-17@xxxxxxxx]On
> Behalf Of Ajay Sahai
> Sent: Thursday, March 22, 2001 6:34 AM
> To: Ray Zeisz
> Cc: stds-802-17@xxxxxxxx
> Subject: Re: [RPRWG] MAC Question
>
>
> Ray:
>
> I guess the answer is that the group is still debating this issue. Some
> vendors prefer to have a largish transit buffer where transit frames
> are stored. Others are proposing "cut through"  transit functionality.
>
> I personally feel that latency will be larger in the first approach.
>
> On another note I do not believe that the similarity with 802.5 is
> on the lines of claiming a token etc. etc. The MAC mechanism
> is going to be different.
>
> Hope this helps.
>
> Ajay Sahai
>
> Ray Zeisz wrote:
>
> > I am following the .17 group from afar, but I have a question:
> >
> > Is it acceptable for each node in the ring to buffer up an entire packet
> > before forwarding it to its neighbor?  Would the latency be too great
> > if this were done?  Or is the .17 direction more along the lines of
> > 802.5, where only a few bits in each ring node are buffered...just
> > enough to detect a token and set a bit to claim it.
> >
> > Ray
> >
> > Ray Zeisz
> > Technology Advisor
> > LVL7 Systems
> > http://www.LVL7.com
> > (919) 865-2735
>