
Re: [HSSG] 40G MAC Rate Discussion



>Please forgive a few questions from a lurker new to this process....

Welcome.

>>>We believe that doing a standard for 100Gb is important but not enough,
>>>for the following reasons:
>>>- The 40Gb speed will enable us to get the most out of our servers
>>>until 100Gb becomes technically and economically viable for server
>>>connectivity. We believe that there is a 5-year window of opportunity
>>>for this market.
>
>Just out of curiosity, what is the difference between running a 40G
>interface at wire speed and running a 100G interface at 40G?  Note that
>the first round of 10G interfaces were bus-limited to 8G (well,
>7.mumble) by the PCI-X bus, and this did not appear to damage early
>adoption.

Actually it did, big time.

The penetration of 10Gb to the server has been dismal to this day.
There are plenty of market surveys to prove that. Only now is it
starting to see meaningful volume. A big part of the problem was that
the server I/O subsystems were not ready. Today the subsystem is
PCI-Ex 1.0, which can handle 10Gb comfortably, and with costs
dropping, 10Gb is starting to be deployed in servers.

PCI-Ex 2.0 (DDR) will be deployed in servers in the next 2-3 years.
It will be able to handle 40Gb comfortably, but it comes nowhere
close to 100Gb. We will need PCI-Ex 3.0 (QDR) for that, and it is
not expected until much later.
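For anyone who wants to check the arithmetic, here is a rough sanity
check in Python (a sketch, not a measurement: nominal rates with
8b/10b line coding, ignoring packet and protocol overhead, which only
lowers the numbers further):

    # Peak payload bandwidth of the host buses mentioned above.
    GBIT = 1e9
    pcix      = 64 * 133e6 / GBIT          # PCI-X 64-bit @ 133 MHz
    pcie1_x8  = 8 * (2.5e9 * 0.8) / GBIT   # PCI-Ex 1.0: 2.5 GT/s/lane, 8b/10b
    pcie2_x16 = 16 * (5.0e9 * 0.8) / GBIT  # PCI-Ex 2.0: 5.0 GT/s/lane, 8b/10b
    print(f"PCI-X 64/133  : {pcix:4.1f} Gb/s (the ~8G ceiling on early 10G NICs)")
    print(f"PCI-Ex 1.0 x8 : {pcie1_x8:4.1f} Gb/s (10Gb fits comfortably)")
    print(f"PCI-Ex 2.0 x16: {pcie2_x16:4.1f} Gb/s (room for 40Gb, far short of 100Gb)")

Even the widest common slot on PCI-Ex 2.0 lands around 64 Gb/s peak,
which is why 100Gb has to wait for the next bus generation.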


>>>- The two speeds should be addressing two distinct markets. This can be
>>>accomplished by defining the PMDs based on reach, with 40Gb defined for
>>>short-reach datacenter connectivity only.
>
>It seems to me that if the market for 40G were limited in this way, it
>would increase the cost of both 40G and 100G since the economy of scale
>for both would be diluted (less 100G in data centers, no 40G in carrier
>networks)....thoughts?

I take at face value the argument that the carrier networks are screaming
for speed, so 40Gb does not play there. It is a 100Gb market, going to even
higher speeds in the future.
I also maintain that 100Gb is not going to play in the server market --- see
above.

So, I don't see how economies of scale come into play here.
If anything, the combination of the two speeds should increase the
need for 100Gb: faster on-ramps require faster backbones in the
datacenter.

>>>- 40Gb connectivity at the server will require a faster aggregation speed
>>>even in the datacenter. This will increase the market potential for 100Gb.
>
>Given that it is currently difficult for servers to make effective use
>of 10Gbps host interfaces (one often has to devote an entire high-speed
>host CPU to network I/O, though TOE cards are getting better), I would
>expect that 10G aggregation would fill the same role for providing
>demand for 100G uplink ports.  It seems to me that 10G host interfaces
>have seen limited market penetration, and have quite a ways to go yet.
>Is there data that says otherwise?  (note that vandoorn_01_0307.pdf
>states that 10G host interfaces are being adopted slowly due to cost
>concerns....it seems likely that the cost barrier for 40G will be
>significantly higher, unless there is data to suggest otherwise).

Several points:
- The network performance problem at 10Gb is yesterday's problem,
  and has already been addressed. A big part of it had to do with I/O,
  as discussed above. We would not be pushing for a higher speed at
  the server if we did not think we needed it and could take full
  advantage of it.
- For the same reasons that link aggregation is not good enough in
  carrier networks, it is not good enough for server networking (see
  the sketch after this list).
- From everything I have heard so far, the cost premium of 40Gb over
  10Gb will be very reasonable, and substantially lower than that of
  100Gb.
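To make the aggregation point concrete, here is a toy sketch (the
addresses and ports are made up for illustration, and real switches
use simpler CRC/XOR hashes, but the property is the same): link
aggregation spreads flows across member links, not bits, so a single
flow can never run faster than one member link.

    import hashlib

    def member_link(flow, n_links):
        # Per-flow hashing: every packet of a given flow maps to the
        # SAME member link, deterministically.
        return hashlib.md5(repr(flow).encode()).digest()[0] % n_links

    flow = ("10.0.0.1", "10.0.0.2", 6, 49152, 80)  # src, dst, proto, ports
    print(member_link(flow, 4))  # one fixed member of a 4 x 10Gb trunk:
                                 # this flow tops out at 10Gb, not 40Gb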

> It appears that there is broad consensus supporting the need for 100G
>(my organization certainly needs 100G by 2010).  If it can be shown
>that there is also broad consensus supporting the need for 40G for
>datacenter applications, then spinning off a 40G datacenter effort
>seems like a productive way to go.  That way neither effort would
>impede the other from a development perspective, and each could satisfy
>the needs of its market.

40+ individuals from 30 companies (and counting) indicated at the March
meeting that they believe they can make money selling 40Gb products.
That is broad enough consensus for me.

As for spinning off the effort:
There is going to be a lot of commonality and overlap between the two
standards. The solutions can and should be very similar, and they will
require the same body of experts. I do not believe it would be in our
best interest to have people multitasking in two groups.

Shimon.