
Re: [802.3_100GNGOPTX] Warehouse Scale Computing, impact on reach objective



Ali,

I found the following passage interesting (p. 40):

"Datacenter sizes vary widely. Two thirds of US servers are housed in datacenters smaller than

5,000 sq ft (450 sqm) and with less than 1 MW of critical power [26] (p. 27). Most large datacenters

are built to host servers from multiple companies (often called co-location datacenters, or “colos”)

and can support a critical load of 10–20 MW. Very few datacenters today exceed 30 MW of critical

capacity."


5,000 sq ft translates to a mere 70 ft by 70 ft (√5000 ≈ 70.7 ft).

I have to wonder where this leads us. We hear of massive data centers taking up city blocks, and then we see something like this telling us that a super-majority of data centers could fit in a medium-sized McMansion.

I tend to believe both, and would like to see the distribution of actual data center reach requirements. I am now digging into some of the reference material, e.g. the Data Center Report to Congress.

Andy,

Thanks. This is great stuff!

Regards,

Dan
From: Ali Ghiasi <aghiasi@xxxxxxxxxxxx>
Reply-To: Ali Ghiasi <aghiasi@xxxxxxxxxxxx>
Date: Mon, 28 Nov 2011 10:00:08 -0800
To: 100G Group <STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx>
Subject: Re: [802.3_100GNGOPTX] Warehouse Scale Computing, impact on reach objective

Andy

Thank you for forwarding this report. I glanced through it, but I was not able to get any specific cable length distribution from it. Figure 1.2 is a good example of a TOR architecture. Can we assume that Cu will be used from the TOR to each CPU blade, and that MMF will be used from the TOR switch to the data center switch? The next question is: what is the typical reach between the TOR switch and the data center switch?

You also allude to a flatter architecture, I assume something like a Clos, where the data center switch is replaced with a distributed fabric. Do you have insight into what the typical cable reach will be in this case?

Thanks,
Ali


On Nov 26, 2011, at 12:53 PM, Andy Moorwood wrote:

Study Group Members,

I share the regret, expressed in several posts to this list, that large internet data center operators are unwilling to make their requirements known in an open, non-confidential manner. I would like to forward to the group a paper, recommended by a colleague, that may help close this information gap.

The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines

by Luiz André Barroso and Urs Hölzle

ISBN: 9781598295566 paperback

ISBN: 9781598295573 ebook

The paper may be viewed, without charge, at the Morgan & Claypool site, but please be aware of the restrictive notices on page (iv):

 http://www.morganclaypool.com/doi/pdf/10.2200/S00193ED1V01Y200905CAC006

Copies may also be purchased at internet book sites.

The paper, 120 pages in all, describes the specific challenges faced when applications implemented by internet content providers, such as Google and Microsoft, require many thousands, even tens of thousands, of servers. Indeed, the usage model of the data center changes from “a place to house servers” to “a building to host an application”.

The introduction, pages 1 to 11, gives insight into why these warehouse-scale computers differ from traditional data centers and how this impacts the need for communication bandwidth within the data center.

The ideal system, as described by Barroso and Hölzle, would be one in which the cross-sectional communication bandwidth of the data center equals the aggregate bandwidth of the servers, i.e. a network without oversubscription. In such a system the application developer can freely locate functions throughout the network, optimally distributing load and minimizing computational and HVAC hotspots. The authors admit that economic considerations cannot support such a model and that oversubscription levels of 5:1 are evident between racks of servers (80 servers per rack) and groups of 10 racks (800 servers). Using the terminology of kolesar_02_0911_NG100GOPTX, page 4, citing barbieri_01_0107.pdf, this oversubscription would refer to the links between the “access” and “distribution” network layers.

http://www.ieee802.org/3/100GNGOPTX/public/sept11/kolesar_02_0911_NG100GOPTX.pdf
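
As a back-of-the-envelope check on what a 5:1 figure implies, here is a minimal sketch in Python. Only the 80-servers-per-rack figure and the 5:1 ratio come from the paper; the NIC and uplink speeds are my own illustrative assumptions, not values from the text.

# Illustrative rack-level oversubscription calculation.
# Only the 80-servers-per-rack figure is from Barroso and Hoelzle;
# the NIC and uplink speeds below are assumed for illustration.

SERVERS_PER_RACK = 80        # per the paper
SERVER_NIC_GBPS = 10         # assumed server NIC speed
UPLINKS_PER_RACK = 4         # assumed uplinks toward the distribution layer
UPLINK_GBPS = 40             # assumed uplink speed

ingress = SERVERS_PER_RACK * SERVER_NIC_GBPS  # bandwidth offered by servers
egress = UPLINKS_PER_RACK * UPLINK_GBPS       # bandwidth toward distribution

print(f"Oversubscription = {ingress / egress:.0f}:1")  # prints 5:1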


Decreasing the relative cost of these access-to-distribution layer links would enable warehouse-scale computer builders to reduce the level of oversubscription and get closer to their ideal system. Throughout the paper the authors use a system-wide approach to find the lowest cost. By this I mean that reducing cost in one area is not beneficial if it results in an increase in overall cost, since cost is merely shifted from one area to another.


Such considerations should play a part in the determination of a reach objective. As we increase the reach to include an ever higher percentage of the links described in kolesar_02_0911_NG100GOPTX, we should be cognizant of the increase in relative cost required to achieve that reach, and evaluate whether, when considered at the network level with a distribution of link lengths as per Paul’s presentation, we are decreasing overall cost, or not.
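
To make that evaluation concrete, below is a minimal sketch of the kind of network-level comparison I have in mind. The link length distribution, cost figures, and the 2x penalty for out-of-reach links are placeholders of my own, not data from Paul’s presentation or the paper; the point is only the shape of the calculation.

# Toy network-level cost comparison between two reach objectives.
# All numbers are placeholders; real inputs would be a measured link
# length distribution (e.g. kolesar_02_0911_NG100GOPTX) and actual
# relative transceiver costs.

# Hypothetical fraction of access-to-distribution links per length bin (m).
LINK_LENGTH_BINS = {50: 0.55, 100: 0.25, 150: 0.12, 300: 0.08}

def network_cost(reach_m, relative_cost):
    """Total relative cost of covering every link with a PMD of this reach.

    Links within reach use the optics at relative_cost; links beyond
    reach are assumed (placeholder) to need a 2x-cost alternative.
    """
    total = 0.0
    for length, fraction in LINK_LENGTH_BINS.items():
        penalty = 1.0 if length <= reach_m else 2.0
        total += fraction * relative_cost * penalty
    return total

# Compare a shorter, cheaper PMD against a longer, pricier one.
print("100 m PMD:", network_cost(100, relative_cost=1.0))  # 1.20
print("300 m PMD:", network_cost(300, relative_cost=1.4))  # 1.40

Under these made-up numbers the shorter reach wins at the network level despite leaving 20% of links to a pricier solution; with a different distribution or cost ratio the answer flips, which is exactly what the comparison needs to expose.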


Best Regards

Andy