
Re: [RE] New white paper release (fixed, in response to Geoff's observation)



Sihai,

Thanks for your very timely and fundamental questions.

A slight aside: You should generally use the change-bars
to observe what has changed, but always use the new
draft (w/o change-bars) for reference purposes.

Sometimes (this version is an example), the change-bars
version slips clause, figure, table, or subclause
numbers.

In the new draft, I assume you were referring to Figure 10.3.


>> As my understanding to the Figure 11.3.(e),
>> the delay will not be recovered any more.

In the worst case, this is true.
The more typical case will be that the actual
supplied bandwidth is less than the negotiated
amount, and the recovery time will depend on
the difference between these two.
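A back-of-envelope sketch of that point (my own, not from the white paper; the numbers are made up): the backlog drains at the rate by which the negotiated bandwidth exceeds the actually supplied bandwidth, so recovery time grows as the headroom shrinks.

```python
def recovery_time(backlog_bits, negotiated_bps, supplied_bps):
    """Time to drain a backlog, given spare (unused negotiated) capacity."""
    spare = negotiated_bps - supplied_bps
    if spare <= 0:
        return float("inf")  # worst case: no headroom, the delay persists
    return backlog_bits / spare

# Ample headroom drains quickly; near-saturated headroom takes far longer.
print(recovery_time(12_000, 10_000_000, 8_000_000))
print(recovery_time(12_000, 10_000_000, 9_990_000))
```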

>> If there is a bunch again in
>> following frames (and following time), more delay is needed for
>> following-following frames. Then along time axis, more and more delay
>> present until buffers overflow... Will it happen?

I don't think this will happen. The key is that there
will be a gap before the next bunch. The gap supplies
credits, so the first post-bunch frame is not delayed
as much (if at all). The second post-bunch frame then
sustains the same delay as was present before the bunch.

The key is that bunches are normally preceded by
gaps, which is where the delayed bunch came from.
An intuitive proof:
"If there is constant bunching, without these gaps,
then the negotiated bandwidth has been exceeded
(which is presumed to not be the case)."
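That intuitive proof can be illustrated with a toy queue model (my own construction, not from the draft): a FIFO served at a fixed rate of one frame per cycle, fed first by a conforming-but-bunched stream (each bunch preceded by a gap of equal size) and then by constant bunching with no gaps.

```python
def queue_trace(arrivals, service_per_cycle=1):
    """Per-cycle queue occupancy for a FIFO served at a fixed rate."""
    q, trace = 0, []
    for a in arrivals:
        q = max(0, q + a - service_per_cycle)
        trace.append(q)
    return trace

# Conforming stream: average 1 frame per 2 cycles, but bunched; each
# bunch of 2 frames is preceded by a 2-cycle gap that restores credits.
print(queue_trace([0, 0, 2, 0, 0, 2, 0, 0, 2]))  # [0, 0, 1, 0, 0, 1, 0, 0, 1]

# Constant bunching with no gaps: the negotiated rate is exceeded,
# so the queue (and hence the delay) grows without bound.
print(queue_trace([2, 2, 2, 2]))  # [1, 2, 3, 4]
```

The bounded trace is the gap-preceded case; the unbounded one is exactly the "constant bunching without gaps" that the presumption rules out.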

HOWEVER, there can be a transient where there are
a few bunches without interleaving gaps. The amount
of this is proportional to the number of bridges and
the MTU+0.75*cycle delay that may be incurred when
passing through each bridge.

That does mean that the bridge (not just the end
station) has to provide a buffer that can handle
the difference between best-case and worst-case delays.
But, if a bridge has such buffers, then the continued
bunching is bounded, the buffer is sufficient, and
latencies are guaranteed.
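As a sketch of sizing that buffer (my own framing of the preceding paragraphs; the parameter values below are illustrative, not from the draft): the transient is proportional to the number of bridges times the per-bridge MTU + 0.75*cycle delay.

```python
def transient_bound(n_bridges, mtu_time, cycle_time):
    """Worst-case accumulated bunching delay across a path of bridges,
    assuming each bridge can add at most MTU + 0.75*cycle of delay."""
    return n_bridges * (mtu_time + 0.75 * cycle_time)

# Illustrative only: 7 bridges, 12 us MTU transmission time, 125 us cycle.
print(transient_bound(7, 12e-6, 125e-6))
```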

I hope that I answered the question that you asked,
but I might have read it incorrectly. I apologize if
my response is not clear, but it's hard to draw pictures
in email, and this is hard to explain without pictures.

Thanks again for your most insightful question,
DVJ


>> -----Original Message-----
>> From: Sihai Wang [mailto:sihai.wang@SAMSUNG.COM]
>> Sent: Wednesday, August 10, 2005 6:43 PM
>> To: STDS-802-3-RE@listserv.ieee.org
>> Subject: Re: [RE] New white paper release (fixed, in response to Geoff's
>> observation)
>>
>>
>> Hi all,
>>
>> Glad to see a new proposal in this version of white paper. I
>> have a question
>> about the new proposal.
>>
>> According to Figure 11.3 and its explanation, bunched flow will
>> be reshaped
>> by delaying following frames. As my understanding to the Figure 11.3.(e),
>> the delay will not be recovered any more. There is not problem
>> along space
>> axis because the hop number is limited and the maximum delay
>> will be bounded
>> (nXd). But let's turn our sight to time axis. If there is a
>> bunch again in
>> following frames (and following time), more delay is needed for
>> following-following frames. Then along time axis, more and more delay
>> present until buffers overflow... Will it happen?
>>
>> I am not sure that I understand this scheme right, so please
>> point out if I
>> missed something. Thanks.
>>
>> Regards,
>>
>> Sihai Wang
>>
>> ----- Original Message -----
>> From: "David V James" <dvj@ALUM.MIT.EDU>
>> To: <STDS-802-3-RE@listserv.ieee.org>
>> Sent: Thursday, August 11, 2005 5:40 AM
>> Subject: [RE] New white paper release (fixed, in response to Geoff's
>> observation)
>>
>>
>> > All,
>> >
>> > Sorry, this seems to happen everytime I don't
>> > check immediately after downloading. I have
>> > since done that. I can now more reliably state
>> > the following:
>> >
>> > I have updated the white papers to account for:
>> >  1) The addressing options discussed at the
>> >     last Wednesday meeting.
>> >  2) A rate-paced shaper proposal, that avoids
>> >     by using separate shapers for each distinct
>> >     combination of:
>> >       {source-port, target-port, priority}
>> >
>> > The clean and change-bars versions can be found
>> > at:
>> >   http://dvjames.com/esync/dvjReNext2005Aug10.pdf
>> >   http://dvjames.com/esync/dvjReBars2005Aug10.pdf
>> > I assume Michael will copy these to our web page.
>> >
>> > Cheers,
>> > DVJ
>> >
>> >