
Re: [HSSG] 40G MAC Rate Discussion



Dan,

>If we build two multi-rate-link chips, one nx10G/mx40G/px100G and one
>nx40/100, we can then build the four product classes I listed below. However,
>these chips are going to be more complicated than the originally proposed
>devices by definition. It's much more difficult to obtain the perfect
>resource balance for a switch chip that has different link speed
>requirements on individual ports. How wide a bus is defined, or what speed
>it runs at, is often based on the highest speed used, so it would be
>overprovisioned for the 10G/40G and 40G/100G links. Sometimes this is OK,
>but usually only when provisioning is inexpensive (aka 10/100/1000), not in a
>system that will be pushing the boundaries as it is.

The limiting factors for a switch fabric implemented in silicon are
either the total switching capacity or the number of pins (serial
links) that you can afford. Assuming that 100Gb support is a given,
the former will be determined primarily by the number of 100Gb
links that you want to provide, and the latter will be driven by the
number of 10Gb links. Adding support for 40Gb involves bonding
four 10Gb links into one (similar to the 100Gb bonding of 10x10Gb links).
So, in your example above, n = 4*m for the maximum configuration
at either speed. How much more complicated is that? There is no
overprovisioning anywhere if you want a completely flexible product.
Of course, you can always micro-optimize any implementation for a
given application.
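
To put some numbers on the lane argument, here is a rough sketch with a
made-up SerDes budget (the 80-lane figure and the Python below are purely
illustrative, not any vendor's design):

    # Hypothetical fabric: a fixed pool of 10Gb serial links (lanes),
    # with higher rates built by bonding lanes (4 for 40G, 10 for 100G).
    def max_ports(total_lanes, lanes_per_port):
        return total_lanes // lanes_per_port

    LANES = 80                  # illustrative pin/SerDes budget
    n = max_ports(LANES, 1)     # all lanes as 10Gb ports  -> 80
    m = max_ports(LANES, 4)     # all lanes as 40Gb ports  -> 20
    p = max_ports(LANES, 10)    # all lanes as 100Gb ports -> 8
    assert n == 4 * m           # the n = 4*m relationship above
    print(n, m, p)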

I am not inventing this as I go. This is how the vendors of switch
silicon tell me they would support 40Gb. This, combined with
the re-use of existing PHY technology, is deemed by them to
have real ROI for a negligible additional investment. Maybe this
is one of the reasons why all of the major silicon vendors support
the 40Gb effort.

>The product mix is greater, and therefore we have to determine which product
>to do first, or whether some classes of switch products will have such low
>volume that we would not do them at all. Short of a very high-end server that
>might need 40G, I see the real volume being at 10G with 100G uplinks, and the
>next highest volume being nx10G aggregators. Eventually, we would migrate to
>100G server connects, but that is when we are going to be talking about 1T
>uplinks. :)

What is a "very high-end server" these days?

Is it a SPARC system with 32 threads at 1.4GHz? Or is it a 4-socket,
dual-core x86 system at 3GHz? Not in my book. Both of the above
are the sweet spot in the server market, covering the low and mid
range.

Following a very conservative rule of thumb of "1Gb of throughput
for 1GHz of processing", these systems can handle 20Gb/s today
(yes, I can prove it).
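
For what it's worth, here is the back-of-the-envelope arithmetic for the
x86 example above, assuming the rule is applied per core-GHz (my reading
of the rule, stated as an assumption):

    # "1Gb of throughput for 1GHz of processing", applied to a
    # 4-socket, dual-core x86 system at 3GHz.
    sockets, cores_per_socket, clock_ghz = 4, 2, 3.0
    aggregate_ghz = sockets * cores_per_socket * clock_ghz   # 24 GHz
    throughput_gbps = aggregate_ghz * 1.0                    # ~24 Gb/s
    print(f"~{throughput_gbps:.0f} Gb/s by the rule of thumb")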

It is not unreasonable to assume that within 2-3 years these types
of systems will double in capacity. Therefore, we will need to double
the network capacity as well. LAG will help, but it is not good enough:
from a single application's perspective, the performance is not the same.
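
To illustrate (a toy sketch; the hash and the 4x10Gb configuration are
mine, not a statement about any particular implementation): a typical LAG
hashes each flow onto a single member link, so one application's flow
never sees more than one member's rate.

    members = 4                  # 4 x 10Gb LAG = 40Gb aggregate
    link_rate_gbps = 10

    def member_for(flow):        # flow = (src, dst, sport, dport)
        return hash(flow) % members

    flow = ("10.0.0.1", "10.0.0.2", 5001, 80)   # one application's flow
    link = member_for(flow)      # every packet maps to the same member
    print(f"flow pinned to member {link}: "
          f"{link_rate_gbps}Gb/s ceiling vs {members * link_rate_gbps}Gb/s aggregate")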

This is where I am coming from.

>From the perspective of added delay/risk to the standard, I believe that
>specifying multiple options is inherently more complex and more likely to
>encounter mistakes, not to mention simply adding pages to the spec that need
>to be written, reviewed and finalized.

I don't know how to respond to such a general statement.
I do not believe that you can prove that it is true any more than
I can prove that it is not. We (802.3) have a pretty good record of
taking on much more complicated projects, with a lot more options,
and doing them quite well.

>Concurrent with the delay added to the schedule is the likelihood (IMO)
>that the 40G opportunity will be waning and 100G will be gearing up but
>delayed for those applications that can use it.

We'll have to agree to disagree on this point.

Enough said,

Shimon.