
Re: [8023-CMSG] Purpose



Gary,

I would hope that even the most avid supporter of preemption would only
ask for two levels: preemptive & non-preemptive. Anything more would be
of vanishingly small benefit.

As Arthur said, we could define two virtual circuits through the PHY,
allowing a preemptive frame to interrupt the frame in progress without
corrupting it - in fact, if you are prepared to suffer the loss of the
preempted frame then you don't need any new standard at all. If, as
Arthur suggests, we make two MAC service interfaces for the two classes
of service, then we will need to alter the GMII & XGMII to identify
these two virtual circuits (much in the same manner as UTOPIA Level 2).
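To illustrate (and only to illustrate - none of these delimiters exist
in any standard today), here is a rough behavioral sketch in Python of
how two virtual circuits might share one PHY, with distinct start/end
codes so the receiver can tell them apart:

    # Two virtual circuits multiplexed onto one wire. The symbol names
    # below are invented for illustration; a real definition would need
    # new PCS code-groups assigned by the standard.
    SOP_NORMAL, EOP_NORMAL = "S0", "T0"    # preemptible circuit
    SOP_EXPRESS, EOP_EXPRESS = "S1", "T1"  # preempting circuit
    HOLD = "H"                             # "frame suspended, not aborted"

    def transmit(normal_frame, express_queue):
        """Emit wire symbols; an express frame may cut in mid-frame."""
        wire = [SOP_NORMAL]
        for octet in normal_frame:
            while express_queue:           # preempt at an octet boundary
                wire.append(HOLD)          # suspend the frame in progress
                wire += [SOP_EXPRESS, *express_queue.pop(0), EOP_EXPRESS]
            wire.append(octet)             # ...then resume where we left off
        wire.append(EOP_NORMAL)
        return wire

The receiver just keeps the partially received normal frame buffered
across the express burst - nothing is resent.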

Hugh.

McAlpine, Gary L wrote:

>All,
>
>Doesn't preemption just introduce the same kinds of complexities at the
>MAC as SAR? As soon as you start breaking frames apart, you need
>special buffers and state machines for re-assembling them at the
>receiver. With 8 priorities, preemption could get stacked up to 8 deep
>and the receiver will need to sort all that out. It seems like a lot of
>trouble to go to just to save <1 uS per hop (on average) when queuing
>latencies can easily reach 1 to 3 orders of magnitude more.
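(A rough sketch, in Python with invented names, of the reassembly state
Gary is describing - one buffer per priority plus a stack of suspended
frames, which is exactly the SAR-like machinery in question:)

    # Receive-side state for nested preemption: with 8 priorities, up
    # to 8 frames can be open at once, each needing its own buffer.
    class PreemptionReassembler:
        def __init__(self, levels=8):
            self.buffers = {p: bytearray() for p in range(levels)}
            self.open = []                    # priorities of open frames

        def start(self, prio):
            # only a strictly higher priority may preempt the top frame
            assert not self.open or prio > self.open[-1]
            self.open.append(prio)

        def data(self, octet):
            self.buffers[self.open[-1]].append(octet)

        def end(self):
            prio = self.open.pop()            # innermost frame completes
            frame = bytes(self.buffers[prio])
            self.buffers[prio] = bytearray()  # outer frame now resumes
            return frame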
>
>Gary
>
>-----Original Message-----
>From: owner-stds-802-3-cm@listserv.ieee.org
>[mailto:owner-stds-802-3-cm@listserv.ieee.org] On Behalf Of Arthur
>Marris
>Sent: Wednesday, May 05, 2004 1:33 AM
>To: STDS-802-3-CM@listserv.ieee.org
>Subject: Re: [8023-CMSG] Purpose
>
>
>Tom,
>   Preemption can be specified in such a way that the preempted frames
>are not dropped. The transmission of the preempted frames would be
>suspended to allow higher priority frames to pass by and then resumed
>without resending the previously sent frame data.
>
>   This could be done by specifying two (or more) channels in the MAC
>for different priority levels. The higher priority channel could preempt
>the lower priority channel. The priority level of the frame being
>transmitted and the preemption control would be communicated between
>MACs through the PHY using different codes for start of packet and end
>of packet. Using this mechanism means there would be no need for the
>MACs to examine the innards of the frame to discover the frame's
>priority.
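(As a sketch of what "different codes for start of packet" buys: the
delimiter alone identifies the channel, so the priority never has to be
parsed out of the frame. The code values below are invented - real ones
would have to be assigned in the PCS:)

    # Delimiter-per-channel encoding: the transmitter picks the start
    # code from the MAC service interface the frame arrived on, and the
    # receiver routes on the code alone. Values purely illustrative.
    SOP = {"low": 0xFB, "high": 0xF9}   # start-of-packet, per channel
    EOP = {"low": 0xFD, "high": 0xFA}   # end-of-packet, per channel

    def encode(channel, frame):
        """No header inspection: the channel comes from the interface."""
        return [("SOP", SOP[channel]),
                *(("DATA", b) for b in frame),
                ("EOP", EOP[channel])]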
>
>Arthur.
>
>
>-----Original Message-----
>From: owner-stds-802-3-cm@listserv.ieee.org
>[mailto:owner-stds-802-3-cm@listserv.ieee.org] On Behalf Of Thomas
>Dineen
>Sent: Tuesday, May 04, 2004 7:57 PM
>To: STDS-802-3-CM@listserv.ieee.org
>Subject: Re: [8023-CMSG] Purpose
>
>
>Gentle People:
>
>     An aspect of preemption that was not discussed below has just come
>to mind. What would be the effect on both overall link utilization and
>the low priority preempted flows?
>
>    First of all I assume that the preempted partial frames are just
>dropped and thus must be retransmitted later. The entrenched 802.3 mind
>set prevents any other viewpoint. As I see it this would in some cases
>reduce the effective link bandwidth for low priority flows by 50%. This
>would have a devastating effect on overall link utilization if
>preemption were constantly occurring.
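(That 50% figure is easy to sanity-check: in the worst case a frame is
preempted just before its final octet, dropped, and resent in full, so
nearly two frame times are spent delivering one payload. In Python:)

    # Worst case for drop-and-retransmit preemption:
    mtu = 1500
    wasted = mtu - 1                 # octets sent before the late preemption
    print(f"{mtu / (mtu + wasted):.0%}")   # -> 50% effective bandwidth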
>
>     Next, the low priority preempted flows would suffer greatly in a
>preemption scheme due to the constant drops and retransmissions. This
>would in effect be a form of double discrimination: first they are low
>priority at queuing, and second they are constantly being dropped and
>retransmitted.
>
>Thomas Dineen
>
>
>
>>Hugh Barrass wrote:
>>
>>
>>
>>>Arthur,
>>>
>>>I agree that preemption is a fine idea, but in my view it falls into
>>>the "not worth the effort" category. Assuming that any new definition
>>>that we could make will not be standardized until 2006 & will be
>>>commonly available in silicon at least a year later, I think we can
>>>safely ignore any Ethernet interfaces below 1Gbps. Even Gigabit
>>>Ethernet seems somewhat pedestrian for high-end data center
>>>applications and I would suggest that anyone concerned about the
>>>latency penalty of the frame in progress at Gigabit speed would be
>>>well advised to migrate to 10G before 2007.
>>>
>>>In that timeframe, a user will have the choice of 10GBASE-CX4 and
>>>10GBASE-T for (cheap) copper interfaces. The former seems ideal for
>>>the data center as it is extremely low latency and targeted at the
>>>shorter distances necessary for system-to-system communication. If
>>>reaches of up to 100m are required, making 10GBASE-T a necessity,
>>>then the latency budget will be swamped
>>>by the physical distance (500ns @ 100m) and the PMA/PCS latency of
>>>10GBASE-T (probably ~1uS).
>>>
>>>A maximum length frame in progress at 10Gbps will take ~1.2uS, making
>>>the average gain due to pre-emption ~600nS (ignoring packet mix and
>>>link utilization). Even taking the maximum delay (which will map to
>>>the delay jitter component), the order of magnitude is similar to the
>>>fixed delay of 10GBASE-T and therefore cannot possibly lead to a
>>>significant reduction for systems using that technology.
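(For reference, the arithmetic behind the 1.2uS and ~600nS figures, as
a Python fragment:)

    # Frame time and average preemption gain at 10Gb/s:
    octets = 1500 + 18 + 8 + 12    # payload + header/FCS + preamble + IPG
    max_wait = octets * 8 / 10e9   # worst case: preempted frame just began
    print(max_wait * 1e6)          # ~1.23  -> ~1.2uS maximum
    print(max_wait / 2 * 1e9)      # ~615   -> ~600nS average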
>>>
>>>Assuming that the speed-crazed implementor chooses 10GBASE-CX4 and
>>>wishes to eliminate the 1.2uS max jitter then there are two options.
>>>The first is preemption - which can significantly reduce this
>>>(depending on the definition) but will involve significant new work.
>>>The alternative is to reduce the MTU - which involves no new work.
>>>Changing the MTU from 1500 bytes to 500 bytes reduces the maximum
>>>jitter to 400nS at the expense of ~3% extra overhead. Further
>>>reductions can be achieved for larger overheads - which is a tradeoff
>>>that can be made at system configuration time. I'm fairly sure that
>>>some will argue that the MTU needs to be increased (to 9k, 16k, 64k
>>>or higher) because software/firmware based NICs cannot encapsulate
>>>small frames at line speed and 1982 vintage routers cannot switch
>>>line rate streams of minimum size packets. I would suggest that
>>>anyone who is serious enough to be asking for a new standard to
>>>improve latency should be using hardware acceleration for
>>>packetization and true wire speed switch fabrics.
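(Putting rough numbers on that tradeoff - note that whether the
preamble and inter-packet gap count as "overhead" moves the percentage
a little, which is why ~3% is quoted loosely:)

    # Serialization jitter vs. efficiency for two MTUs at 10Gb/s:
    per_frame = 18 + 8 + 12                # header/FCS + preamble + IPG
    for mtu in (1500, 500):
        jitter = (mtu + per_frame) * 8 / 10e9
        print(mtu, f"{jitter*1e9:.0f}nS", f"{mtu/(mtu+per_frame):.1%}")
    # 1500 -> ~1230nS at 97.5%; 500 -> ~430nS at 92.9%. Counting only
    # the 18 header/FCS octets gives 98.8% vs 96.5%; either way the
    # extra overhead lands in the ~2-5% range bracketing the ~3% above.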
>>>
>>>Assuming that the maximum jitter has been reduced to 400nS by the
>>>smaller MTU, smart switch fabric
>>>designers might wish to employ some techniques which can reduce the
>>>jitter further at the expense of an increase in fixed latency. Given
>>>that the fixed latency of the copper interconnect is approaching the
>>>same magnitude, this seems like a reasonable tradeoff to make for
>>>system performance (assuming that delay variation is the problem).
>>>
>>>In summary, the net gain that can be achieved by preemption is too
>>>small to make a difference except in the most extreme circumstances.
>>>For most applications, current standards can be utilized (at layers
>>>1 & 2) to attain acceptable performance; therefore the demand for
>>>silicon implementing a new standard will be limited to a niche of a
>>>niche. If the application area is sufficiently small then more exotic
>>>(or targeted) technologies may have a competitive edge - there will
>>>be no "Ethernet advantage."
>>>
>>>Hugh.
>>>
>>>Arthur Marris wrote:
>>>
>>>
>>>
>>>>Jonathan,
>>>>  The presentation you gave in March at the Data Center Ethernet CFI
>>>>suggested preemption as an area for exploration.
>>>>
>>>>  Preemption would require a minor change to the PCS to support
>>>>extra control-codes.
>>>>
>>>>  Supporting preemption seems like a worthwhile objective as every
>>>>microsecond is precious in cluster computing.
>>>>
>>>>Arthur.
>>>>