
Re: [RE] Latest white paper release and tomorrow's meeting



Kevin,

>> It seems like a couple big-ticket design elements (extra buffering,
>> queue sorting) are motivated by the analysis in the bursting and
>> bunching annex. I will definitely take another read through of that.
>> Who else has reviewed it in detail?

I believe that a couple of participants have scanned through it,
but it has not been discussed in a meeting or teleconference, and
the only written feedback I have received came from Michael Johas Teener.

We have tried to address the non-pacing issues first, since the
pacing issues are known to be the most controversial. The hope
was that pacing discussions would be easier if they could be
framed within an otherwise complete environment.

With only so much time in the day, detailed reviews of this
annex have not (to my knowledge) occurred.

So, a thorough read of this annex would be most appreciated,
by me as well as by others.

Respectfully,
DVJ


>> -----Original Message-----
>> From: owner-stds-802-3-re@ieee.org
>> [mailto:owner-stds-802-3-re@ieee.org] On Behalf Of Gross, Kevin
>> Sent: Thursday, July 07, 2005 7:51 AM
>> To: STDS-802-3-RE@listserv.ieee.org
>> Subject: Re: [RE] Latest white paper release and tomorrow's meeting
>>
>>
>> Thank you for your response. This goes a long way towards
>> clarifying things
>> for me.
>>
>> A constructive counter-proposal to the multicast stream transport is a
>> reasonable request. I'll try to find some time to work something up.
>>
>> It seems like a couple big-ticket design elements (extra buffering, queue
>> sorting) are motivated by the analysis in the bursting and
>> bunching annex. I
>> will definitely take another read through of that. Who else has
>> reviewed it
>> in detail?
>>
>> -----Original Message-----
>> From: owner-stds-802-3-re@ieee.org
>> [mailto:owner-stds-802-3-re@ieee.org] On
>> Behalf Of David V James
>> Sent: Wednesday, July 06, 2005 8:20 PM
>> To: STDS-802-3-RE@listserv.ieee.org
>> Subject: Re: [RE] Latest white paper release and tomorrow's meeting
>>
>> Kevin,
>>
>> I'll try for some quick answers right now, but we may want
>> to queue some topics for future discussion.
>>
>> >> The paragraph following figure 5.8 doesn't make much sense to me.
>>
>> It made sense to me, but I have the handicap of being the author.
>> Perhaps you could suggest modifications, or we could chat on the
>> phone to work out better wording?
>>
>>
>> >> Figure 5.20 shows FIFOs on the MII side of the PHY.
>> >> The main working FIFOs are typically on the MII side of the MAC.
>> >> Accurate receive packet time stamping can be obtained by
>> >> monitoring the MII receiver carrier sense (CRS) signal.
>>
>> I'll have to defer to the PHY experts on this one. I was
>> simply trying to illustrate that _if_ significant FIFOs
>> are in the PHY (as they may be on generalized PHY-addressable
>> OIF interconnects), then accuracy is better if a signal
>> could be provided when the frame first arrived.
>>
>> Do you have a preferred technology- and implementation-independent
>> wording/illustration?
>>
>>
>> >> In order to enforce bandwidth subscriptions, it seems
>> >> there needs to be an association between sourceID:plugID
>> >> and some sort of identifier in the stream data packets.
>> >> The network needs to identify these packets and associate
>> >> them with a stream, measure bandwidth consumed by the
>> >> stream and potentially drop packets if the subscription
>> >> terms are violated. Figure 5.22 and bullet (a) in
>> >> section 5.6.3 seem to indicate that a multicast destination
>> >> address for each stream is the preferred association.
>> >> We should be aware that this solution creates the following
>> >> side effects:
>> >> 1/ All media data is always multicast.
>> >> 2/ ClassA "routing tables" must store a multicast destination
>> >> MAC (I'll call it destinationID) along with sourceID and plugID.
>> >> 3/ destinationIDs must be unique on the network. We'll
>> >> need some network-wide means of allocating destinationIDs.
>> >> We'll need to deal with destinationID collisions when two
>> >> separate working networks are connected together.
>>
>> On topic (1), this seemed to be a reasonable restriction.
>> That would allow smooth transitions from 1-to-N listeners.
>>
>> On topics (2,3), your concerns are valid. The alternative of
>> using the denigrated G.1.1 approach solves these problems, but
>> appeared to raise "too many changes to bridges" concerns.
>>
>> Was there another option that you think should be considered
>> in more detail?
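>>
>> For concreteness, here is a rough sketch (in Python, purely my
>> own illustration; none of these field names come from the
>> working paper) of what a classA table entry and its per-stream
>> policing check might look like under the multicast
>> destinationID association:
>>
>>   # Hypothetical per-stream classA table entry; names are
>>   # illustrative only, not taken from the working paper.
>>   from dataclasses import dataclass
>>
>>   @dataclass
>>   class ClassAEntry:
>>       destination_id: bytes   # network-unique multicast MAC
>>       source_id: bytes        # talker MAC address
>>       plug_id: int            # talker plug (stream) number
>>       subscribed_bps: int     # admitted bandwidth for the stream
>>       observed_bits: int = 0  # bits seen in the current cycle
>>
>>   def police(entry: ClassAEntry, frame_bits: int,
>>              cycle_s: float = 125e-6) -> str:
>>       # Count the frame against the subscription for this cycle;
>>       # drop when the subscribed allocation is exceeded.
>>       entry.observed_bits += frame_bits
>>       if entry.observed_bits > entry.subscribed_bps * cycle_s:
>>           return "drop"
>>       return "forward"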
>>
>>
>> >> The pacing feature described in section 5.7 appears to
>> >> insert an isochronous cycle's worth of gating delay for
>> >> each switch hop. I assume the motivation here is to keep
>> >> traffic flowing smoothly.
>> Yes.
>>
>> >> Many applications would prefer to have a fixed, longer
>> >> latency than to have latency dependent on network location.
>>
>> Probably better to state this as "fixed, typically longer",
>> since the Annex illustrates that without this fixed delay,
>> bunching can occur and may actually produce a longer
>> worst case.
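>>
>> As a back-of-the-envelope illustration (my own numbers, not
>> from the Annex): with pacing, the gating delay grows only
>> linearly with hop count, one cycle per hop, which is the bound
>> the bunching examples cannot guarantee:
>>
>>   CYCLE_S = 125e-6                  # one isochronous cycle
>>
>>   def paced_latency(hops: int) -> float:
>>       # Each hop gates for one cycle, so the bound is linear.
>>       return hops * CYCLE_S
>>
>>   print(paced_latency(5))           # 625 us across five hops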
>>
>>
>> >> Also, holding gigabit data for 125us implies an additional
>> >> 120Kbits of buffer per port. There will be a cost associated
>> >> with this.
>>
>> Yes, there is a cost for this, but the approach is known to work.
>> And the buffers may be smaller than those required in
>> a bunching-tolerant bridge.
>>
>> If you have a preferred alternative, I'm sure everyone has
>> the flexibility to consider it. While I (and I suspect
>> others) don't like the buffer requirements, good alternatives
>> haven't been forthcoming.
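>>
>> Kevin's buffer figure is easy to check with a one-line
>> calculation (nominal units, my own arithmetic):
>>
>>   bits = 1e9 * 125e-6   # 1 Gb/s held for one 125 us cycle
>>   print(bits)           # 125,000 bits, i.e. ~122 Kbit per port,
>>                         # in line with the ~120 Kbits cited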
>>
>>
>> >> Transmission gating described in section 5.7.4 will only work
>> >> properly if ClassA packets in the transmission queue are
>> >> sorted according to isochronous cycle number. Are the
>> >> multiple arrows into the queue in figures 5.26b and 5.27b
>> >> trying to indicate the presence of sorting hardware?
>> Yes.
>> Due to receiver and transmitter cycle-slip, early and late
>> frames must have distinct precedences. This could be distinct
>> FIFOs, tag-ordered non-FIFO queues, linked lists with multiple
>> heads, etc. The arrows were intended to illustrate this.
>>
>> Could this be described better?
>> Words or figures are most welcome.
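>>
>> If it helps, here is one possible realization (a sketch of my
>> own, not a mandated design): a tag-ordered queue keyed by
>> isochronous cycle number, so a late frame from cycle n is
>> released ahead of an early frame from cycle n+1:
>>
>>   import heapq
>>
>>   class CycleSortedQueue:
>>       def __init__(self):
>>           self._heap = []
>>           self._seq = 0   # tie-breaker: FIFO within a cycle
>>
>>       def enqueue(self, cycle: int, frame) -> None:
>>           heapq.heappush(self._heap, (cycle, self._seq, frame))
>>           self._seq += 1
>>
>>       def release(self, gate_cycle: int):
>>           # Transmission gating: release only frames whose
>>           # cycle tag is at or before the gating cycle.
>>           while self._heap and self._heap[0][0] <= gate_cycle:
>>               yield heapq.heappop(self._heap)[2]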
>>
>>
>> >> I've taken in a first reading of Annex F. I would like to
>> >> understand how the graphs of latency vs. switch hops were
>> >> generated. These graphs show an unexpected exponential
>> >> relationship between switch hop count and latency.
>>
>> The exponential relationship was no surprise to me, but it
>> took a while to figure out how to illustrate the conditions.
>> The base problem is that if each of N stations has a burst,
>> the resulting bridge output effectively has to deal
>> with an (N-1)-length burst.
>>
>> One can argue that this loading is not realistic, but that
>> was not the point. The constraint being evaluated/simulated
>> was guaranteed behavior within any topology.
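>>
>> A toy recurrence (mine, and much cruder than the Annex's time
>> sequences) shows why the growth can compound: if each hop can
>> merge the bursts of N contending inputs, the worst-case burst
>> a bridge must absorb can multiply at every hop:
>>
>>   def worst_case_burst(hops: int, n: int, unit: int = 1) -> int:
>>       burst = unit
>>       for _ in range(hops):
>>           burst *= (n - 1)   # N-1 other stations' bursts pile on
>>       return burst
>>
>>   print([worst_case_burst(h, 4) for h in range(1, 6)])
>>   # [3, 9, 27, 81, 243] -- geometric in hop count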
>>
>> These graphs were generated in an amazingly primitive fashion.
>> The previous time sequences were generated manually, using
>> (ugh) FrameMaker graphics, with the snap-grid on.
>>
>> When generating the time sequences, an attempt was made to use
>> worst-case collision conditions (although I can't guarantee
>> these are actually the worst). In some cases, contending
>> traffic was momentarily stopped, as though the variable-rate
>> traffic was dramatically reduced or was paused. It is a
>> bit ironic that the worst-case bunching can actually occur
>> when the offered load is reduced, but at just the wrong time!
>>
>> This is _certainly_ not a typical scenario, but it does represent
>> a scenario that could occur. Since we are trying to illustrate
>> the guaranteed worst case, not the typical expected case, this
>> seemed to be a fair methodology.
>>
>> The ticks on the time sequences were counted to generate
>> the "cycles" numbers in Table F.6 and others. Manual
>> processing used a Windows calculator to convert cycles
>> to time. Time values were plotted via visual placement.
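>>
>> (In code, the manual conversion amounts to one line:
>>
>>   def cycles_to_seconds(cycles: int) -> float:
>>       return cycles * 125e-6   # each tick is one 125 us cycle
>>
>> which reproduces the Windows-calculator step exactly.)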
>>
>> This certainly wasn't the most elegant of simulations
>> (Ooohhhh, I feel so embarrassed...), but it is easy to verify
>> via visual observation and any '+-*/'-capable calculator.
>>
>> In some cases, the behavior was exponential but the width
>> of the page was insufficient, so the dotted lines continue
>> beyond the last computed data points.
>>
>> The point of a pretty document is to improve readability and
>> the best judge of readability is not the overly familiar
>> editor. So, specific "from-to" text proposals are always
>> helpful when I can't clearly visualize the difficulty.
>>
>> As always, thanks for the most careful review. Feel free to
>> call if some of this doesn't make sense and an interactive
>> conversation would help.
>>
>> Respectfully,
>> DVJ
>>
>>
>>
>> -----Original Message-----
>> From: owner-stds-802-3-re@ieee.org
>> [mailto:owner-stds-802-3-re@ieee.org] On
>> Behalf Of Gross, Kevin
>> Sent: Wednesday, July 06, 2005 3:35 PM
>> To: STDS-802-3-RE@listserv.ieee.org
>> Subject: Re: [RE] Latest white paper release and tomorrow's meeting
>>
>>
>> I've had a chance to read a bit further into the working paper.
>> I have a few comments and questions:
>>
>> The paragraph following figure 5.8 doesn't make much sense to me.
>>
>> Figure 5.20 shows FIFOs on the MII side of the PHY. The main
>> working FIFOs
>> are typically on the MII side of the MAC. Accurate receive packet time
>> stamping can be obtained by monitoring the MII receiver carrier
>> sense (CRS)
>> signal.
>>
>> In order to enforce bandwidth subscriptions, it seems there
>> needs to be an
>> association between sourceID:plugID and some sort of identifier in the
>> stream data packets. The network needs to identify these packets and
>> associate them with a stream, measure bandwidth consumed by the
>> stream and
>> potentially drop packets if the subscription terms are violated.
>> Figure 5.22
>> and bullet (a) in section 5.6.3 seem to indicate that a multicast
>> destination address for each stream is the preferred
>> association. We should
>> be aware that this solution creates the following side effects:
>> 1/ All media data is always multicast.
>> 2/ ClassA "routing tables" must store a multicast destination
>> MAC (I'll call
>> it destinationID) along with sourceID and plugID.
>> 3/ destinationIDs must be unique on the network. We'll need some
>> network-wide means of allocating destinationIDs. We'll need to deal with
>> destinationID collisions when two separate working networks
>> are connected
>> together.
>>
>> The pacing feature described in section 5.7 appears to insert an
>> isochronous
>> cycle's worth of gating delay for each switch hop. I assume the
>> motivation
>> here is to keep traffic flowing smoothly. Many applications
>> would prefer to
>> have a fixed, longer latency than to have latency dependent on network
>> location. Also, holding gigabit data for 125us implies an additional
>> 120Kbits of buffer per port. There will be a cost associated with this.
>>
>> Transmission gating described in section 5.7.4 will only work properly if
>> ClassA packets in the transmission queue are sorted according to
>> isochronous
>> cycle number. Are the multiple arrows into the queue in figures 5.26b and
>> 5.27b trying to indicate the presence of sorting hardware?
>>
>> I've taken in a first reading of Annex F. I would like to
>> understand how the
>> graphs of latency vs. switch hops were generated. These graphs show an
>> unexpected exponential relationship between switch hop count and latency.
>>
>>
>>
>>
>> From: owner-stds-802-3-re@ieee.org
>> [mailto:owner-stds-802-3-re@ieee.org] On
>> Behalf Of David V James
>> Sent: Tuesday, July 05, 2005 6:44 PM
>> To: STDS-802-3-RE@listserv.ieee.org
>> Subject: [RE] Latest white paper release and tomorrow's meeting
>>
>> All,
>>
>> Based on last meeting's consensus, the white
>> paper now includes multicast-address-selected
>> classes, pacing for classA and shaping for
>> classB.
>>
>> Things seemed to work out during the writing,
>> yielding good classB compatibility as well
>> as achievable low-latency bridge forwarding
>> possibilities:
>>   1 cycle (125 us) for Gb-to-Gb/100Mb
>>   2 cycles (250 us) for 100Mb-to-Gb/100Mb
>>
>> I have placed these on my DVJ web site, but assume
>> that Michael will transfer them to the SG web
>> site soon. You may find them at:
>>
>>   http://dvjames.com/esync/dvjReNext2005Jul05.pdf
>>   http://dvjames.com/esync/dvjReBars2005Jul05.pdf
>>
>> If anyone has a chance to read the pacing stuff,
>> we can discuss it briefly at tomorrow's ad hoc
>> meeting. In case anyone missed the announcement,
>> tomorrow's interest group meeting is as follows:
>>
>> REsE interest group conference call, code 802373
>>
>> Wednesday July 6, 2005
>> 2:00 pm - 4:00 pm
>> Event Location: 1 877-827-6232 / 1 949-926-5900
>> Street: 3151 Zanker Rd
>> City, State, Zip: San Jose/CA/95134
>> Phone: 1 877-827-6232(US)/ 1 949-926-5900(non-US)
>> Notes:
>> RSVP mikejt@broadcom.com if you are calling in so I can confirm
>> the number
>> of phone ports
>>
>> I normally wouldn't want to discuss things on such
>> short notice, but I'm aware that Michael will be
>> there tomorrow and not a week later.
>>
>> Cheers,
>> DVJ