
Re: [HSSG] 40G MAC Rate Discussion




On 4/3/07, Shimon Muller <Shimon.Muller@sun.com> wrote:

> 1) What is so magical about the 40Gbps number for servers?
> Why is any other multiple of 10 between 10Gbps and 100Gbps not considered
> good for servers?


Nothing magic about 40Gb for servers, just as there is nothing magic about
a 10x scaling factor, except for tradition.
 
Yes, there is good magic about the 10x number in the client-server networks
built upon Ethernet. Perhaps this magic (not just tradition) of 10x is how
Ethernet took over from ATM. 40G does not work well when the client side
runs on 10G interfaces.
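
To make the 10x point concrete, here is a small illustrative Python sketch
(mine, not part of the original exchange) that compares the candidate uplink
rates against a 10G client interface; the rate list and the "traditional 10x"
constant are simply the numbers being debated in this thread.

    # Illustrative sketch, not from the original thread: compare each candidate
    # uplink rate against a 10G client interface to see which one preserves the
    # traditional 10x client-to-uplink scaling factor.

    CLIENT_RATE_GBPS = 10      # assumed client-side interface speed
    TRADITIONAL_RATIO = 10     # the "magic" 10x step Ethernet has used so far

    for uplink_gbps in (40, 50, 100):    # candidate MAC rates from this thread
        ratio = uplink_gbps / CLIENT_RATE_GBPS
        verdict = ("matches the 10x tradition" if ratio == TRADITIONAL_RATIO
                   else f"only {ratio:g}x over the clients")
        print(f"{uplink_gbps:>3}G uplink vs {CLIENT_RATE_GBPS}G clients: {verdict}")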
 

 
40Gb happens to be in the right ballpark for server needs in that timeframe,
and it will be at the right cost.
 
Both "ball park" and "cost" are debatable and moving targets. One could very
easily argue that 50G is better since it is nice half of 100G number and should
cost half of what 100G and just run the same 100G architecture half the bit-rate and
we design everyhing just once and then it is just matter of clocking and no design change.
 
We should go for 50G and 100G. And 50G is in the same ballpark, and at the right cost, too.
 

 
> 2) Does one really think that a processor (MC/MT included) and memory system
> could meaningfully fill the 40Gbps pipe and still do some useful work?
> I mean by 2011 or maybe beyond?


Absolutely. I would be more than happy to demonstrate that to you.
 
 
Frankly, I would love to see this processor. Unless it is confidential, would
you please forward me a link to such a design?
 

 
> 3) What is PCIe's affinity to 40Gbps as I/O? PCIe doesn't seem to be tied
> to 40Gbps.
> It supports, or could support, any multiple of 10 (or maybe 2.5)
> between 2.5Gbps and 100Gbps.


Today, PCI-Ex 1.0 can do 32Gb raw, but at most 24Gb usable.
By 2010, PCI-Ex 2.0 will double that to 64Gb raw, 48Gb usable.
 
 
PCIe 2.0 (or PCI-Ex, as you call it) is already out, and it allows aggregating
32 lanes of 5Gbps each, which is 32 x 5 = 160Gbps. This is old news.
 
So, by this argument, servers should be able to handle a 100Gbps Ethernet
port, since that is well below the 160Gbps PCIe offers today.
 
We are talking about 100Gbps Ethernet in 2010. By your I/O bandwidth argument,
100Gbps Ethernet is lagging behind PCIe I/O.
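
For reference, here is a back-of-the-envelope Python sketch (mine, not from the
original exchange) of the lane arithmetic being used on both sides; the 75%
usable fraction is an assumption drawn from the ratios quoted earlier (24/32,
48/64), not a PCIe specification value.

    # Rough, illustrative arithmetic only (my sketch, not from the thread),
    # using the lane counts and per-lane rates quoted above. The "usable"
    # fraction is an assumption: the thread treats roughly 25% of the raw
    # rate as protocol/encoding overhead.

    def pcie_aggregate_gbps(lanes, per_lane_gbps, usable_fraction=0.75):
        """Return (raw, usable) aggregate bandwidth in Gbps."""
        raw = lanes * per_lane_gbps
        return raw, raw * usable_fraction

    # The figure from my reply above: 32 lanes at 5 Gbps each.
    raw, usable = pcie_aggregate_gbps(lanes=32, per_lane_gbps=5)
    print(f"x32 @  5 Gbps/lane: {raw:.0f} Gbps raw, ~{usable:.0f} Gbps usable")

    # Doubling the per-lane rate (the assumption behind the 320Gbps figure
    # discussed further down) simply doubles both numbers.
    raw, usable = pcie_aggregate_gbps(lanes=32, per_lane_gbps=10)
    print(f"x32 @ 10 Gbps/lane: {raw:.0f} Gbps raw, ~{usable:.0f} Gbps usable")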
 

 
By 2015, PCI-Ex 3.0 will double that again to 128Gb raw, 96Gb usable.
It's all about timing.
 
 
I haven't seen anything for PCIe 3.0, but it would have to be more than 160Gbps.
And if you double that, it is 320Gbps (with 25% less usable), but then the same
is true for Ethernet (the same 25% overhead).
 

 
> 4) Extending flexibility: if flexibility is desired, then why limit it to
> just 40G and 100G? If flexibility of
> multi-rate is a goal, why not an option for 20G or 50G (if not more)?


Some (very reasonable) people would argue that infinite flexibility is a
good thing. I am not one of them. Life is about trade-offs.
 
Yes, the classic steering-wheel argument. :-)
 
Thanks,
Sanjeev
 
 

 
Shimon.