Re: [HSSG] Soliciting Support for my Presentation



Hello. I'm just a lowly end user, so please have patience with my 
(perhaps) uninformed views...

On Tue, 12 Jun 2007, Scott Kipp wrote:

> I've heard that 40G transceivers would consume about 40% of the power of 
> 100G transceivers.  Transceiver power consumption adds up and consuming 
> 60% less power in a component is significant.  I would welcome more 
> input from the transceiver vendors about the expected power consumption 
> and initial cost of 40G vs. 100G transceivers.

I think this is one place where 100G and 40G differ considerably in their
intended application. I expect 100G to be used (at least initially) in
units that have 300-1000W of cooling and power allocated per slot, in
half- to full-rack units that draw 5-15kW. A 5-10W difference in
transceiver power is not as significant there as it might be in a server,
even though servers today also use a lot of power.
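
As a rough back-of-envelope illustration (these are my own assumed
numbers, not vendor figures): a single port drawing an extra 5-10W inside
a slot with 300-1000W allocated is roughly 0.5-3% of the slot budget, and
with only a few ports deployed per chassis the aggregate stays small
against a 5-15kW unit. The same 5-10W added to every NIC across thousands
of servers multiplies into tens of kilowatts.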

In my world (primarily networking equipment), 100G would first be used in
router/switch-to-router/switch scenarios, over both DWDM and dark fiber,
both between sites and within a site. Apart from the DWDM scenario, all
of these have been identified and (I believe) agreed upon, and customers
like me need it as soon as possible. It will be used in equipment that is
very costly anyway, so a high initial cost for 100G optics and
electronics is expected and acceptable to the intended customer base,
since we will use few ports in the beginning. This is also why 10GE
hasn't taken off the way some seem to have expected: it has primarily
been used in the aforementioned networking-equipment application.

I would expect that a similarly high cost is not acceptable for servers,
as they ship in much higher volumes, so per-unit cost is more
significant.

So if my assumption above is correct, and cost issues are much more
important for 40G than for 100G, then there is a distinct difference in
goals right there, which might cause conflicts if the two were developed
in the same group.

Also, the 2012-2014 timeframe has been mentioned as when servers will
start to require 40G. Wouldn't it be better if work on 40G started
approximately three years before it's needed, instead of today? By 2009
a lot of the 100G work will hopefully be done, and 40G could then either
draw on that to create a cost-efficient solution, or it could be decided
that 100G will have come down enough in cost by 2012 that 40G is not
needed.

Bringing out 40G in a very short timeframe, as has been suggested (by
using existing technology like QSFP), seems to me more applicable to the
networking scenario than to the server scenario. I don't really
understand how the 2012 timeframe for servers needing 40G fits together
with the suggestion that 40G can be brought out quickly. Perhaps someone
could give some insight?

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se