
Re: [HSSG] Fw: [hssg] CORRECTED 10GigE LR vs SR



Paul,
 
The delay in coming out with SR wasn't necessarily because it was any harder to make SR than LR. We were starting to produce 10 Gig ports mostly for switch uplinks and the volume was low. With low volume, people didn't want to qualify, stock and support two transceivers splitting that volume. LR would satisfy all the distances we needed to support and SR would only do a subset. So LR was supported and deployed first until volumes justified adding another transceiver. I think that is likely to repeat when we introduce a new speed even if there is no technical difficulty with producing short range parts. A PHY that can satisfy the whole of a low-volume high-end market will deploy before one that satisfies only a part of that market even if the partial coverage PHY has some better cost factors.
 
Pat


From: Paul Kolesar [mailto:PKOLESAR@xxxxxxxxxxxx]
Sent: Monday, August 07, 2006 3:14 PM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] Fw: [hssg] CORRECTED 10GigE LR vs SR


Adam,
if copper fulfills most of your need for short-range Ethernet interconnects, then I assume that the copper you refer to is either 1000BASE-T or 10GBASE-CX4.  These have significantly different distance capabilities and media: the first supports 100m on UTP, the second 15m on twinax.  If both of these suffice, then "sufficient short range coverage" for Yahoo! means the lesser of the two.  Please confirm or correct my interpretation, as this can help scope out objectives.  

I disagree that supporting new speeds on MM always causes significant delay.  The Ethernet standards at both 1G and 10G were published with both MM and SM PMDs in their initial addenda; both completed simultaneously.  There was a delay of a few months in finishing the 1GbE standard in order to support 1000BASE-LX on MM, and its additional value to GbE may be questioned since 1000BASE-SX is by far the leading optical PMD for 1GbE.  But 1000BASE-SX was immediately available as the standard was finished.  No delay whatsoever.  At 10G, telecoms drove SM PMD product development prior to 802.3ae publication, and 10GBASE-LR and -ER tapped into that.  Comparatively, MM solutions were developed as 10GbE unfolded.  And while I believe it is true that initial 10GbE deployments were primarily -LR, in 2005 MM deployments grew to be half the market as 10G continued to penetrate the data center.  I suspect the lead time problems with -SR in the past were more an issue of inadequate forecasting than of product availability.  -SR was and is available from multiple suppliers.  Demand outstripped the supply as more customers realized the value in -SR, providing incentive to qualify more of the available suppliers.  While many of these customers are not among the earliest adopters of 10GbE, as Yahoo! was, they are part of the growing base that arrives behind the leading edge and represents the bulk of sales.  

There were phased introductions of additional PMDs to both GbE and 10GbE.  Later addenda brought about 1000BASE-T and 10GBASE-CX4 (with 10GBASE-T and -LRM in the wings).  Despite citing delay as a reason to spurn a PMD, Yahoo! has apparently deployed these with some fervor, along with their associated media.  If relative tardiness in standardization were truly a reason for disdain, Yahoo! should have ignored these.  

At the next higher speed there are technologies available for both MM and SM that can be either repackaged or reconfigured versions of 10G technologies.  Given these, I see no inherent delay in standardization due to inclusion of MM solutions.  

Regards,
Paul Kolesar
CommScope Enterprise® Solutions
1300 East Lookout Drive
Richardson, TX 75082
Phone:  972.792.3155
Fax:      972.792.3111
eMail:   pkolesar@xxxxxxxxxxxxx



From: Adam Bechtel <abechtel@xxxxxxxxxxxxx>
Sent: 08/07/2006 12:10 PM
Please respond to: Adam Bechtel <abechtel@xxxxxxxxxxxxx>
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] Fw: [hssg] CORRECTED 10GigE LR vs SR

We (Yahoo!) don’t use SR either, and we definitely are a datacenter-centric business.  In addition to Lane’s reasons below:
 1. Copper technologies have replaced 95% of my need for short range optics and cabling (the remaining 5% being storage related).  
 2. There is always a significant delay to support new speeds on MM (e.g., GE, 10GE).  We adopted 10GE before SR was available.
 3. In the 2004-05 timeframe we were experiencing 3x lead times on procuring SR vs. LR optics.  That has since changed.
 
In the end, the flexibility of laying SMF in our datacenters outweighed the cost difference.  
 
-Adam
 



From: Paul Kolesar [mailto:PKOLESAR@xxxxxxxxxxxx]
Sent: Monday, August 07, 2006 9:35 AM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: [HSSG] Fw: [hssg] CORRECTED 10GigE LR vs SR


John DAmbrosia asked that I forward this thread to the 802.3 HSSG reflector so that discussions could continue after the EA-facilitated HSSG reflector shuts down.  I have modified the subject line to make it more to the point.

Regards,
Paul Kolesar
CommScope Enterprise® Solutions
1300 East Lookout Drive
Richardson, TX 75082
Phone:  972.792.3155
Fax:      972.792.3111
eMail:   pkolesar@xxxxxxxxxxxxx


----- Forwarded by Paul F Kolesar/CommScope on 08/07/2006 11:28 AM -----

"John DAmbrosia" <jdambrosia@xxxxxxxxxxxxxxxxxxx>

08/04/2006 08:28 PM


To
<PKOLESAR@xxxxxxxxxxxx>
cc
 
Subject
RE: [hssg] CORRECTED (RE: 10GigE LR vs SR (RE: [hssg] Update of CFI Presentation to IEEE))

 


   





Paul,

Please note that the Ethernet Alliance facilitated HSSG reflector is in the process of being shut down.  

The HSSG Reflector has been set up and is ready to go.  Please go to the URL below for directions on how to join the IEEE 802.3 HSSG Reflector.  

http://www.ieee802.org/3/hssg/reflector.html

In addition, please note that the HSSG website is up and running, and may be viewed at

http://grouper.ieee.org/groups/802/3/hssg/

Upon joining I would suggest forwarding this message to the IEEE 802.3 reflector.  The cost model is a big one that the SG must address.

Hope you had a good vacation.

John



From: PKOLESAR@xxxxxxxxxxxx [mailto:PKOLESAR@xxxxxxxxxxxx]
Sent: Friday, August 04, 2006 7:16 PM
To: Lane Patterson
Cc: hssg@xxxxxxxxxxxxxxxxxxxx
Subject: Re: [hssg] CORRECTED (RE: 10GigE LR vs SR (RE: [hssg] Update of CFI Presentation to IEEE))



Lane,

sorry for the delayed response.  Vacationitis interruptus.  


I appreciate the particular circumstances of your business and how they lead to the choices you have made.  But as you say, your center is not like that of the typical data center.  


While price is a sensitive issue, and direct prices are not to be mentioned on the reflector, I have confirmed with my PLM that those you stated (in your previous version, which you have since corrected below) are two orders of magnitude too large.  Perhaps the decimal point was left out.  I bring this up because such large discrepancies badly distort the picture.  In addition, it is not really possible to do a relative cost assessment when the absolute cost of one item is compared to the percentage cost difference of another.  Everything needs to be distilled to the same units ($) and then turned into relative costs for comparison.  


I agree that MM cable costs more than SM cable, and that the relative cost for the same cable construction is in the ballpark of what you stated for very high count cables (3 to 4x).  


However, one of the challenges in making sensible relative cost comparisons is picking a set of assumptions that is relevant.  In this discussion, that means comparing similar units of scale.  For example, if one installs a very high fiber count cable that can support many channels, one should not expect the cost differential between a single channel's worth of PMDs to justify the added cost of the entire cable.  The comparison needs to be broken down to the same relative units.  In this case, that means the differential between two strands of the MM cable and two strands of the SM cable, for the channel length of interest, plus the associated connector, panel and patch cord hardware that make up the channels.  Here, the channel lengths of interest must be confined to those that can be supported by both PMDs, since lengths exceeding the capability of one of the PMDs are out of scope.  If you examine the cost models on that basis, perhaps you will have a better appreciation for my statements.  
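
To make the point concrete, here is a minimal sketch of that kind of per-channel comparison.  It is illustrative only: every price, quantity and helper name below is a placeholder assumption, not a real cost figure and not anyone's actual cost model.

# Illustrative per-channel cost comparison (all figures are placeholder assumptions).
def channel_cost(fiber_cost_per_m, length_m, connectors, connector_cost,
                 panel_share, patch_cords, patch_cord_cost, pmd_pair_cost):
    """Installed cost of one duplex channel: two strands of cable for the
    channel length, connector/panel/patch-cord hardware, plus a PMD pair."""
    cabling = (2 * fiber_cost_per_m * length_m
               + connectors * connector_cost
               + panel_share
               + patch_cords * patch_cord_cost)
    return cabling + pmd_pair_cost

# Hypothetical 150 m channel, a length both PMD types can support.
mm = channel_cost(0.40, 150, 4, 10, 25, 2, 20, pmd_pair_cost=2 * 400)   # e.g. an -SR pair
sm = channel_cost(0.15, 150, 4, 10, 25, 2, 20, pmd_pair_cost=2 * 1200)  # e.g. an -LR pair
print(f"MM channel ${mm:.0f} vs SM channel ${sm:.0f} ({sm / mm:.2f}x)")

The comparison only becomes meaningful once both channels are expressed in the same units, so that the PMD differential is weighed against the cabling differential for the same channel.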


The cross connect lengths of 150m that you mentioned are not clear to me.  Is that the length from the equipment to the cross connect or the length of the entire channel from equipment thru cross connect to equipment?  If the latter, then -S with OM3 will certainly work.  If the former, which is what I think you likely meant, then it is a matter of connection loss vs supportable distance.   Solutions exist that can support 300m channels thru a cross connect for 10GBASE-S.  
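
On the connection loss versus supportable distance point, what is involved is a simple channel budget check.  The sketch below is a rough illustration only; the loss budget, fiber attenuation and per-connection loss values are placeholder assumptions, not the 802.3ae figures, and the real values would come from the PMD specification and the connector data sheets.

# Rough channel insertion-loss check for a cross-connect topology.
# All dB figures below are placeholder assumptions for illustration.
def within_budget(length_m, mated_connections,
                  loss_budget_db=2.6,      # assumed total channel loss allowance
                  fiber_db_per_km=3.5,     # assumed 850 nm multimode attenuation
                  db_per_connection=0.5):  # assumed loss per mated connector pair
    loss = fiber_db_per_km * length_m / 1000.0 + mated_connections * db_per_connection
    return loss <= loss_budget_db, loss

for length_m, connections in [(300, 2), (300, 4), (150, 4)]:
    ok, loss = within_budget(length_m, connections)
    print(f"{length_m} m, {connections} connections: {loss:.2f} dB "
          f"({'within' if ok else 'over'} the assumed budget)")

With numbers like these, the same fiber supports the full reach with few low-loss connections, while adding connections trades away reach; that is the trade-off a cross-connect design has to manage.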



Regards,
Paul Kolesar
CommScope Enterprise® Solutions
1300 East Lookout Drive
Richardson, TX 75082
Phone:  972.792.3155
Fax:      972.792.3111
eMail:   pkolesar@xxxxxxxxxxxxx

"Lane Patterson" <lpatterson@xxxxxxxxxxx>

07/25/2006 04:03 PM

 


To
<hssg@xxxxxxxxxxxxxxxxxxxx>
cc
 
Subject
[hssg] CORRECTED (RE: 10GigE LR vs SR (RE: [hssg] Update of CFI Presentation to IEEE))


 


   





My apologies to John and to the list for inadvertently putting pricing data in my last post; it won't happen again :-)


I have corrected this below, so folks who wish can reply to the message.


Cheers,

-Lane

-----Original Message-----
From: Lane Patterson
Sent: Monday, July 24, 2006 5:53 PM
To: 'PKOLESAR@xxxxxxxxxxxx'; 'hssg@xxxxxxxxxxxxxxxxxxxx'
Subject: 10GigE LR vs SR (RE: [hssg] Update of CFI Presentation to IEEE)


Paul,


I very much appreciate your comments on this.  As an Internet exchange point operator, we're probably not representative of the typical single-company data center environment, but I wanted to share the reasons why SR did not make it into our operating environment.  Apologies in advance if this is a bit too off-topic for the HSSG reflector.


1.  We already had legacy 62.5 micron multimode as well as SMF pulled in conduit runs of approximately 1.5 km at our multi-building campus sites.

2.  On these campus conduit builds, MMF cost us more than 4x the price per linear foot compared to SMF.

3.  Within our data centers (sized at roughly 100K-230K sq ft), cross-connect lengths routinely hit 150m.

4.  There's tremendous OpEx involved in standardizing on a new type of fiber--I am checking now to see what's involved in supporting OM3, and it is about a 6-month process to evaluate, stock, productize, and train folks.

5.  Most of our 10GigE customers are ISPs using Cisco or Juniper routers, and they commonly request LR.

6.  Our cost for SR is only about 30% less than our cost for LR, which is not enough to justify stocking two types of parts, spares, etc. when we can standardize on LR only and simplify our OpEx, pre-provisioning, and support processes.


Cheers,

-Lane


Lane Patterson

lane@xxxxxxxxxxx
Chief Technologist

Equinix, Inc.

+1 650-513-7012 (w)

+1 408-829-6464 (c)

skype:  lane_p

sip:17476493559@xxxxxxxxxxxxxxxxxxxx


-----Original Message-----
From: PKOLESAR@xxxxxxxxxxxx [mailto:PKOLESAR@xxxxxxxxxxxx]
Sent: Friday, July 21, 2006 4:59 PM
To: Lane Patterson; hssg@xxxxxxxxxxxxxxxxxxxx
Subject: RE: [hssg] Update of CFI Presentation to IEEE



Lane,

I find it odd that Equinix has not realized the advantages of deploying SR.  While its distance capability is rather limited on legacy multimode fibers, it is rated up to 300 m on OM3 (a.k.a. 850nm laser-optimized 50um) fiber, a distance sufficient to serve the vast majority of both in-building backbones and data centers.  


According to recent presentation materials from a major Ethernet networking gear supplier, 10GbE multimode port shipments grew to equal singlemode port shipments in 2005.  


From this I conclude that multimode is providing value to a significant percentage of customers.  That value includes the fact that those who have installed OM3 cabling are able to deploy either SR or LX4 to 300 m.  This freedom allows the customer to choose from these PHYs based on several criteria, including not only cost but also availability and port-type homogeneity considerations.  

In most cases cost will be the primary factor.  While it is true that over time the cost differential between port types compresses, the differential between SR and either LR or LX4 has been, and continues to be, quite significant, easily justifying the deployment of OM3 cabling for new buildouts.  

Data center cabling must often be deployed under tight schedules.  This has led to great acceptance of solutions that provide cabling in predetermined lengths terminated with array connectors at the factory.   The array terminations are compact and allow easier deployment of the pre-terminated cables.  The arrays plug into fanout modules or hydra-cords for administration of duplex circuits.  Factory termination can provide high-quality polish, and fanouts provide worry-free transmit-to-receive signal routing (a.k.a. polarity), along with very rapid turn-up in the field because the installer simply plugs components together instead of handling the termination process on site.  Virtually all of our data center projects deploy this type of solution.


There is an additional advantage to these cabling solutions.  They protect the customer's investment by providing a migration path for support of parallel fiber applications, such as those defined by InfiniBand.  One simply removes the fanout and administers the parallel application using array patch cords, thus reusing the cables.  

TIA TR-42 has standardized these types of structured cabling solutions in TIA-568-B.1-7 "Commercial Building Telecommunications Cabling Standard, Part 1 - General Requirements, Addendum 7 - Guidelines for Maintaining Polarity Using Array Connectors".   This standard provides a useful reference for committees that develop parallel fiber applications.   The parallel methods defined within this standard support all the parallel applications of Fibre Channel, OIF, and InfiniBand.


An increasing installation rate of these solutions is building the installed base of cabling that not only fulfills the immediate demands of tight construction schedules, but also protects the customer's investment by providing the flexibility to be easily reconfigured for future parallel applications.  And while this solution offers the same benefits to both multimode and singlemode media, 850nm laser-optimized 50um fiber represents about 80% of the cabling mix in our sales.  


Given that the commonly held view regarding deployment of a higher speed Ethernet is that it will occur initially within data centers, it would be an obvious error not to define a PHY/PMD that operates over this cabling infrastructure.


Paul Kolesar
CommScope Enterprise® Solutions
1300 East Lookout Drive
Richardson, TX 75082
Phone:  972.792.3155
Fax:      972.792.3111
eMail:   pkolesar@xxxxxxxxxxxxx

"Lane Patterson" <lpatterson@xxxxxxxxxxx>

07/20/2006 05:15 AM

 


To
"David Martin" <dwmartin@xxxxxxxxxx>, <hssg@xxxxxxxxxxxxxxxxxxxx>
cc
 
Subject
RE: [hssg] Update of CFI Presentation to IEEE


 


   






As an end user, I couldn't agree more.  Our view is that 10GigE has already radically changed the economics of data center/campus (LR) and metro (ER/ZR) connectivity compared to the OC192 alternative and the somewhat limited scalability of LAG and ECMP.  I would expect that 100G would be equally successful at a 4x/2.5x benefit-to-cost ratio.

I also agree with Aaron and Bruce's comments about PMD/PHY--the 2-10km range serves data center, in-building riser fiber, and campus environments nicely.  Most early uses of 100G links will be for such aggregated trunking.  In contrast, in our 10GigE experience SR was almost completely useless, given its distance limitations and the eventually marginal price difference relative to LR.

Cheers,
-Lane


-----Original Message-----
From:   David Martin [mailto:dwmartin@xxxxxxxxxx]
Sent:   Wed Jul 19 10:58:33 2006
To:     hssg@xxxxxxxxxxxxxxxxxxxx
Subject:        RE: [hssg] Update of CFI Presentation to IEEE

John,



Several comments were made during the CFI last night that 10GigE hasn't
yet achieved the traditional "10x rate for 3x the cost" economic
feasibility, and as such it's unlikely that a higher speed Ethernet rate
would be any more successful.



Some other comments were made that since 10GigE (and quite likely the
next rate) broke new ground as network infrastructure, rather than
traditional NICs and switch ports, the "10x rate for 3x the cost" rule
of thumb should be revisited.



In carrier transport networks, the equivalent rule has been "4x rate for
2.5x the cost". Just thought I'd pass that along for reference for when
this issue is considered.
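
For reference, both rules of thumb translate directly into a cost-per-bit improvement factor; the small sketch below is just that arithmetic restated on the ratios quoted above, nothing more.

# Cost-per-bit improvement implied by the two rules of thumb above.
def cost_per_bit_gain(rate_multiple, cost_multiple):
    return rate_multiple / cost_multiple

print(cost_per_bit_gain(10, 3.0))  # Ethernet "10x rate for 3x the cost":    ~3.3x cheaper per bit
print(cost_per_bit_gain(4, 2.5))   # transport "4x rate for 2.5x the cost":   1.6x cheaper per bit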

...Dave

David W. Martin
Nortel Networks
dwmartin@xxxxxxxxxx <mailto:dwmartin@xxxxxxxxxx>
+1 613 765 2901 (esn 395)
~~~~~~~~~~~~~~~~~~~~

________________________________

From: John DAmbrosia [mailto:jdambrosia@xxxxxxxxxxxxxxxxxxx]
Sent: Wednesday, July 19, 2006 12:37 PM
To: hssg@xxxxxxxxxxxxxxxxxxxx
Subject: [hssg] Update of CFI Presentation to IEEE



All,

Last night's presentation went extremely well.  Approximately 200 to 220
people were present throughout the presentation.



After the presentation, the following straw polls were asked:



Straw Poll #1 - (For the Call-For-Interest)

Should a Study Group be formed for "Higher Speed Ethernet"?



Results

Yes - 147

No - 9

Abstain - 31



Straw Poll #2 (For Participation)

I would participate in the "Higher Speed" Study Group in IEEE 802.3.

Tally: 108



Straw Poll #3 (For Participation)

My company would support participation in the "Higher Speed" Study Group
in IEEE 802.3.

Tally: 76



Thus, the results were very positive and encouraging.  This does not
mean that the Study Group has been formed yet.



A motion will be made at the IEEE 802.3 Closing Plenary on Thursday.
Thus, for those individuals who registered and are at the IEEE Plenary
this week; please make sure you stay until the motion has been made and
the vote taken.  If the motion is successful on Thursday, then a request
will be made to the IEEE 802.3 EC for approval of the formation of the
study group.



John D'Ambrosia