
Re: [802.3_100GNGOPTX] Emerging new reach space



I'm not sure that measuring the outside of a building tells you much about link length distributions unless you also know what the data center architecture is inside the building.

I would guess that very, very large data centers would have a more modular architecture, to make build, bring-up, and operations more scalable.

Maybe we should consult an expert?



-----Original Message-----
From: Ali Ghiasi [mailto:aghiasi@xxxxxxxxxxxx] 
Sent: Friday, November 18, 2011 5:18 PM
To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_100GNGOPTX] Emerging new reach space

Jeff

In my Chicago presentation I identified several mega data centers larger than 400,000 sq-ft, one possibly as large as
1,000,000 sq-ft.
http://www.ieee802.org/3/100GNGOPTX/public/sept11/ghiasi_01_a_0911_NG100GOPTX.pdf

Thanks,
Ali

On Nov 18, 2011, at 5:04 PM, Jeffery Maki wrote:

> Scott,
>
> Was the choice to end your table at 400,000 sq. ft. arbitrary?
>
> All,
>
> I believe we need to know whether the square footage of what is known as a mega datacenter may grow larger over the coming years.  How big is a mega datacenter to be?  At some point, 100GBASE-LR4 will be the right choice just based on loss budget.  We need to know the distribution of reaches to understand where to draw the line in selecting a break in the PMD definitions.
>
> Jeff
>
>
> -----Original Message-----
> From: Scott Kipp [mailto:skipp@xxxxxxxxxxx]
> Sent: Friday, November 18, 2011 1:38 PM
> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>
> Chris and all,
>
> I have been wanting to discuss the reach objective for 100GBASE-nR4, so thanks for kicking off this discussion.
>
> You referenced the 10X10 MSA white paper that calls out a maximum distance of <500 meters.  You reference the authors Vijay and Bikash, but I was the co-author who wrote this section of the paper and did the mathematical analysis, which they agreed with.  The actual distance of 414 meters is a simple calculation based on a 400,000 sq ft data center.  Even if the data center is 550,000 sq ft, the link distance is less than 500 meters.  So I propose that 500 meters is long enough for the largest data centers that we should target.
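>
> As a rough sketch of that arithmetic (my illustration, not the white paper's exact derivation: it assumes a square floor plan, rectilinear corner-to-corner cable routing, and a slack allowance, with the 7% slack an assumption chosen to show how the numbers land near 414 meters):
>
>     import math
>
>     FT_TO_M = 0.3048
>
>     def worst_case_link_m(floor_sq_ft, slack=0.07):
>         # Square floor: a corner-to-corner rectilinear run is twice the side.
>         side_ft = math.sqrt(floor_sq_ft)
>         # Slack (assumed) covers risers, patch panels, and cable management.
>         return 2 * side_ft * FT_TO_M * (1 + slack)
>
>     for sq_ft in (400_000, 550_000):
>         print(f"{sq_ft:,} sq ft -> ~{worst_case_link_m(sq_ft):.0f} m")
>     # 400,000 sq ft -> ~413 m (close to the 414 meters above)
>     # 550,000 sq ft -> ~484 m (still under 500 meters)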
>
> The problem with a 500 meter distance lies in the way the IEEE defines the maximum link length.  The IEEE defines the reach objective for SM fibers and derives an insertion loss from 2.0dB of connector and splice loss plus the fiber attenuation loss.  Specifically, 802.3ba states this below Table 87–9:
>
> The channel insertion loss is calculated using the maximum distance 
> specified in Table 87–6 and cabled optical fiber attenuation of 0.47 dB/km at 1264.5 nm plus an allocation for connection and splice loss given in 87.11.2.1.
>
> For the 10km link of 100GBASE-LR4, the channel insertion loss is 6.7dB = 10km * 0.47dB/km + 2.0dB of connector and splice loss.
>
> If this project follows this example for a 500 meter nR4 link, then the insertion loss would be only 2.2dB = 0.5km * 0.47dB/km + 2.0dB for connector and splice loss.  Many attendees know that this could limit the applicability of the nR4 link because it won't support structured cabling environments.  With many MPO ribbon connectors in a link, it could be difficult to support a typical link in the structured cabling environments that large data centers will require.
>
> To make nR4 a success, we need to take these structured cabling environments into account and increase the connector loss allocation.  I would like to hear from some cabling vendors, and especially end users, as to the range of insertion losses they have seen and what they expect to see if ribbon fibers are used instead of the usual duplex SM fibers.
>
> Jonathan King did a great statistical analysis of the loss of 4 duplex MM connections in king_01_0508.  We should do a similar analysis for 6 SM ribbon connectors to determine the loss of a long link.
>
> If we determine the connector and splice loss to be 3.5dB, then the insertion loss for the nR4 link would be 3.7dB = 0.5km * 0.47dB/km + 3.5dB.  For parallel solutions, this allocation might be even larger, since they don't have the WDM losses in the link.
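>
> This budget arithmetic is mechanical enough to script.  A minimal sketch, encoding only the 802.3ba recipe quoted above (distance times cabled fiber attenuation plus a connector and splice allowance):
>
>     # Channel insertion loss per the 802.3ba recipe:
>     # max distance * cabled fiber attenuation + connector/splice allowance.
>     def channel_insertion_loss_db(length_km, conn_splice_db, atten_db_per_km=0.47):
>         return length_km * atten_db_per_km + conn_splice_db
>
>     print(channel_insertion_loss_db(10.0, 2.0))  # 100GBASE-LR4: 6.7 dB
>     print(channel_insertion_loss_db(0.5, 2.0))   # 500m nR4, 2.0dB allocation: ~2.2 dB
>     print(channel_insertion_loss_db(0.5, 3.5))   # 500m nR4, 3.5dB allocation: ~3.7 dB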
>
> That's my 42 cents,
> Scott
>
>
> -----Original Message-----
> From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
> Sent: Friday, November 18, 2011 11:12 AM
> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>
> Jack,
>
> Thank you for continuing to lead the discussion. I am hoping it encourages others to jump in with their perspectives, otherwise you will be stuck architecting the new standard by yourself with the rest of us sitting back and observing.
>
> Your email is also a good prompt to start discussing the specific reach objective for 100GE-nR4. Since you mention 2000m reach multiple times in your email, can you give a single example of a 2000m Ethernet IDC link?
>
> I am aware of many 150m to 600m links, with 800m mentioned as long term future proofing, so rounding up to 1000m is already conservative. I understand why several IDC operators have asked for 2km; it was the next closest existing standard reach above their 500m/600m need; see for example page 10 of Donn Lee's March 2007 presentation to the HSSG (http://www.ieee802.org/3/hssg/public/mar07/lee_01_0307.pdf). It is very clear what the need is, and why 2km is being brought up.
>
> Another example of IDC needs is in a 10x10G MSA white paper (http://www.10x10msa.org/documents/10X10%20White%20Paper%20final.pdf), where Bikash Koley and Vijay Vusirikala of Google show that their largest data center requirements are met by a <500m reach interface.
>
> In investigating the technology for 100GE-nR4, we may find, as Pete Anslow has pointed out in NG 100G SG, that the incremental cost of going from 1000m to 2000m is negligible. We may then choose to increase the standardized reach. However, to conclude today that this is in fact where the technology will end up is premature. We should state the reach objective to reflect the need, not our speculation about the capabilities of yet-to-be-defined technology.
>
> Thank you
>
> Chris
>
> -----Original Message-----
> From: Jack Jewell [mailto:jack@xxxxxxxxxxxxxx]
> Sent: Friday, November 18, 2011 9:38 AM
> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>
> Hello All,
> Thanks for all the contributions to this discussion. Here's a synopsis 
> and my current take on where it's heading (all in the context of 
> 150-2000m links).
> Starting Point: Need for significantly-lower cost/power links over 
> 150-2000m reaches has been expressed for several years. Last week in 
> Atlanta, four technical presentations on the subject all dealt with 
> parallel SMF media. Straw polls of "like to hear more about ___" 
> received 41, 48, 55, and 48 votes; the 41 was for one that additionally involved new fiber.
> The poll "to encourage more on…duplex SMF PMDs" received 35 votes. 
> Another straw poll gave strong support for the most-aggressive low-cost target.
> Impressions from discussion and Atlanta meeting: Systems users 
> (especially the largest ones) are strongly resistant to adopting 
> parallel SMF. (Not addressing reasons for that position, just stating 
> an observation.) The LR4 platform can be extended over duplex SMF via WDM 
> by at least one more "factor-4" generation, and probably another (DWDM 
> for the latter); PAM and line-rate increases may extend duplex SMF's 
> lifetime yet another generation.
> My Current Take: Given a 2-or-3-generation (factor-4; beyond 
> 100GNGOPTX) longevity of duplex SMF, I'm finding it harder to make a 
> compelling case for systems vendors to adopt parallel SMF for 
> 100GNGOPTX. My current expectation is that duplex SMF will be the 
> interconnection medium. My ongoing efforts will have more duplex-SMF 
> content. I still think parallel SMF should deliver the lowest cost/power 
> for 100GNGOPTX, and provide an additional 1-2 generations of 
> longevity; I just don't see system vendors ready to adopt it now.
> BUT: What about the Starting Point (above), and the need for 
> significantly-lower cost/power?? If a compelling case is to be made 
> for an alternative to duplex SMF, it will require a very crisp and 
> convincing argument for significantly-lower cost/power than LR4 
> ("fair" comparison such as mentioned earlier), or other duplex SMF 
> approaches. Perhaps a modified version of LR4 can be developed with 
> lower-cost/power lasers that doesn't reach 10km. If, for whatever 
> reasons, systems vendors insist on duplex SMF, but truly need 
> significantly-lower cost/power, it may require some compromise, e.g. 
> "wavelength-shifted" SMF, or something else. Would Si Photonics really 
> satisfy the needs with no compromise? Without saying it won't, it 
> seems people aren't convinced, because we're having these discussions.
> Cheers, Jack
>
>
> On 11/17/11 10:23 AM, "Arlon Martin" <amartin@xxxxxxxxxx> wrote:
>
>> Hello Jack,
>> To your first question, yes, we are very comfortable with LAN WDM 
>> spacing. That never was a challenge for the technology. We have 
>> chosen to perfect reflector gratings because of the combination of 
>> small size and great performance. I am not sure exactly what you are 
>> asking in your second question. AWGs may have a slightly lower loss 
>> than reflector gratings. That difference has decreased as we 
>> have gained more experience with gratings. For many applications like 
>> LR and mR, the much, much smaller size (cost is related to size) of 
>> reflector gratings makes them the best choice.
>>
>> Thanks, Arlon
>>
>> -----Original Message-----
>> From: Jack Jewell [mailto:jack@xxxxxxxxxxxxxx]
>> Sent: Thursday, November 17, 2011 6:42 AM
>> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
>> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>>
>> Hi Arlon,
>> Thanks very much for this. You are right; I was referring to thin 
>> film filters. My gut still tells me that greater tolerances should 
>> accompany wider wavelength spacing. So I'm guessing that your 
>> manufacturing tolerances are already "comfortable" at the LAN WDM 
>> spacing, and thus the difference is negligible to you. Is that a fair 
>> statement? Same could be true for thin film filters. At any rate, LAN 
>> WDM appears to have one
>> factor-4 generation advantage over CWDM in this discussion, and it's 
>> good to hear of its cost effectiveness. Which brings up the next 
>> question. Your data on slide 15 of Chris's presentation referenced in 
>> his message shows lower insertion loss for your arrayed waveguide grating 
>> (AWG) DWDM filter than for the reflector grating filters. Another factor-of-4 data 
>> throughput may be gained in the future via DWDM.
>> Cheers, Jack
>>
>> On 11/16/11 10:51 PM, "Arlon Martin" <amartin@xxxxxxxxxx> wrote:
>>
>>> Hello Jack,
>>> As a maker of both LAN WDM and CWDM filters, I would like to comment 
>>> on the filter discussion. WDM filters can be thin film filters (to 
>>> which you may be referring) but more likely, they are PIC-based AWGs 
>>> or PIC-based reflector gratings. In our experience at Kotura with 
>>> reflector gratings made in silicon, both CWDM and LAN WDM filters 
>>> work equally well and are roughly the same size. It is practical to 
>>> put 40 or more wavelengths on a single chip. We have done so for 
>>> other applications. There is plenty of headroom for more channels when the need arises for 400 Gb/s or 1 Tb/s.
>>> There may be other reasons to select CWDM over LAN WDM, but, in our 
>>> experience, filters do not favor one choice over the other.
>>>
>>> Arlon Martin, Kotura
>>>
>>> -----Original Message-----
>>> From: Jack Jewell [mailto:jack@xxxxxxxxxxxxxx]
>>> Sent: Wednesday, November 16, 2011 9:09 PM
>>> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
>>> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>>>
>>> Thanks Chris for your additions.
>>> 1. "CWDM leads to simpler optical filters versus "closer" WDM (LAN WDM)"
>>> -
>>> For a given throughput transmission and suppression of 
>>> adjacent-wavelength signals (assuming use of same available optical 
>>> filter materials), use of a wider wavelength spacing can be 
>>> accomplished with wider thickness tolerance and usually with fewer 
>>> layers. The wider thickness tolerance is basic physics, with which I 
>>> won't argue. In this context, I consider "wider thickness tolerance" 
>>> as "simpler."
>>> 2. "CWDM leads to lower cost versus "closer" WDM because cooling is 
>>> eliminated" - I stated no such thing, though it's a common perception.
>>> Ali
>>> Ghiasi suggested CWDM (implied by basing implementation on 
>>> 40GBASE-LR4) might be lower cost, without citing the cooling issue. 
>>> Cost is a far more complex issue than filter simplicity. You made 
>>> excellent points regarding costs in your presentation cited for 
>>> point 1, and I cited LAN WDM
>>> (100GBASE-LR4) advantages as "better-suited-for-integration, and 
>>> "clipping off" the highest-temp performance requirement." We must 
>>> recognize that at 1km vs 10km, chirp issues are considerably 
>>> reduced.
>>> 3. "CWDM is lower power than "closer" WDM power" - I stated no such 
>>> thing, though it's a common perception. I did say "More wavelengths 
>>> per fiber means more power per channel," which is an entirely 
>>> different statement, and it's darned hard to argue against the 
>>> physics of it (assuming same technological toolkit).
>>> All I stated in the previous message are the advantages of CWDM 
>>> (adopted by 40GBASE-LR4) and LAN WDM (adopted by 100GBASE-LR4), 
>>> without favoring one over the other for 100GbE (remember we're talking ~1km, not 10km).
>>> But
>>> my forward-looking (crude) analysis of 400GbE and 1.6TbE clearly 
>>> favors LAN WDM over CWDM - e.g. "CWDM does not look attractive on 
>>> duplex SMF beyond 100GbE," whereas the wavelength range for 400GbE 
>>> LAN 16WDM over duplex SMF "is realistic." Quasi-technically speaking 
>>> Chris, we're on the same wavelength (pun obviously intended) :-) 
>>> Paul Kolesar stated the gist succinctly: "that parallel fiber 
>>> technologies appear inevitable at some point in the evolution of 
>>> single-mode solutions.
>>> So the question becomes a matter of when it is best to embrace 
>>> them." [I would replace "inevitable" with "desirable."] From a 
>>> module standpoint, it's easier, cheaper, and lower-power to produce an 
>>> x-parallel solution than an x-WDM one (x is the number of channels), and 
>>> it's no surprise that last week's technical presentations (by 3 
>>> module vendors and 1 independent) had a parallel-SMF commonality for 
>>> 100GNGOPTX. There is a valid argument for initial parallel SMF 
>>> implementation, to be later supplanted by WDM, particularly LAN WDM. 
>>> With no fiber re-installations.
>>> To very recent messages: we can choose which pain to feel first, 
>>> parallel fiber or PAM, but by 10TbE we're likely to get both - in your 
>>> face or innuendo :-) Jack
>>>
>>>
>>>
>>> On 11/16/11 6:53 PM, "Chris Cole" <chris.cole@xxxxxxxxxxx> wrote:
>>>
>>>> Hello Jack,
>>>>
>>>> You really are on a roll; lots of insightful perspectives.
>>>>
>>>> Let me clarify a few items so that they don't detract from your 
>>>> broader ideas.
>>>>
>>>> 1. CWDM leads to simpler optical filters versus "closer" WDM (LAN 
>>>> WDM)
>>>>
>>>> This claim may have had some validity in the past; however, it has 
>>>> not been the case for many years. This claim received a lot of 
>>>> attention in 802.3ba TF during the 100GE-LR4 grid debate. An 
>>>> example presentation is 
>>>> http://www.ieee802.org/3/ba/public/mar08/cole_02_0308.pdf, where on 
>>>> pages 13, 14, 15, and 16 multiple companies showed there is no 
>>>> practical implementation difference between 20nm and 4.5nm spaced filters.
>>>> Further,
>>>> this has now been confirmed in practice with 4.5nm spaced LAN WDM
>>>> 100GE-LR4 filters in TFF and Si technologies manufactured with no 
>>>> significant cost difference versus 20nm spaced CWDM 40GE-LR4 filters.
>>>>
>>>> If there is specific technical information to the contrary, it 
>>>> would be helpful to see it as a presentation in NG 100G SG.
>>>>
>>>> 2. CWDM leads to lower cost versus "closer" WDM because cooling is 
>>>> eliminated
>>>>
>>>> This claim has some validity at lower rates like 1G or 2.5G, but is 
>>>> not the case at 100G. This has been discussed at multiple 802.3 
>>>> optical track meetings, including as recently as the last NG 100G 
>>>> SG meeting. We again agreed that the cost of cooling is a fraction 
>>>> of a percent of the total module cost. Even for a 40GE-LR4 module, 
>>>> the cost of cooling, if it had to be added for some reason, would 
>>>> be insignificant. Page 4 of the above
>>>> cole_02_0308 presentation discusses why that is.
>>>>
>>>> This claim to some extent distracts from half a dozen other cost 
>>>> contributors which are far more significant. Those should be at the 
>>>> top of the list instead of cooling. Further, if cooling happens to 
>>>> enable a technology which greatly reduces a significant cost 
>>>> contributor, then it becomes a big plus instead of an insignificant 
>>>> minus.
>>>>
>>>> If there is specific technical information to the contrary, a NG 
>>>> 100G SG presentation would be a great way to introduce it.
>>>>
>>>> 3. CWDM is lower power than "closer" WDM.
>>>>
>>>> The real difference between CWDM and LAN WDM is that un-cooled is 
>>>> lower power. However, how much lower depends strongly on the 
>>>> specific transmit optics and operating conditions. In a 100G module 
>>>> context it can be 10% to 30%. For some situations the savings could 
>>>> be a lot more, and for others even less. No general 
>>>> quantification of the total power savings can be made; it has to be 
>>>> done on a case-by-case basis.
>>>>
>>>> Chris
>>>>
>>>> -----Original Message-----
>>>> From: Jack Jewell [mailto:jack@xxxxxxxxxxxxxx]
>>>> Sent: Wednesday, November 16, 2011 3:20 PM
>>>> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
>>>> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>>>>
>>>> Great inputs! :-)
>>>> Yes, 40GBASE-LR4 is the first alternative to 100GBASE-LR4 that 
>>>> comes to mind for duplex SMF. Which raises the question: why are they 
>>>> different? I can see advantages to either (40G CWDM vs 100G 
>>>> closer WDM): uncooled, simple optical filters vs better-suited-for-integration 
>>>> and "clipping off" the highest-temp performance requirement.
>>>> It's constructive to look forward, and try to avoid unpleasant 
>>>> surprises of "future-proof" assumptions (think 802.3z and FDDI 
>>>> fiber - glad I wasn't there!). No one likes "forklift upgrades" 
>>>> except maybe forklift operators, who aren't well-represented here. 
>>>> Data centers are being built, so here's a chance to avoid 
>>>> short-sighted mistakes. How do we want 100GbE, 400GbE and 1.6TbE to 
>>>> look (rough guesses at the next generations)? Here are 3 basic 
>>>> likely scenarios, assuming (hate to, but must) 25G electrical 
>>>> interface and no electrical mux/demux. Considering duplex SMF,
>>>> 4+4parallel
>>>> SMF, and 16+16parallel SMF:
>>>> Generation   duplex SMF   4+4 parallel SMF   16+16 parallel SMF
>>>> 100GbE        4WDM        no WDM             dark fibers
>>>> 400GbE       16WDM         4WDM              no WDM
>>>> 1.6TbE       64WDM        16WDM               4WDM
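>>>>
>>>> The arithmetic behind the table is just total lanes divided by fiber 
>>>> pairs. A minimal sketch, assuming the 25G-lane, no-mux/demux premise 
>>>> stated above:
>>>>
>>>>     import math
>>>>
>>>>     LANE_GBPS = 25  # assumed lane rate, no electrical mux/demux
>>>>
>>>>     for total_gbps in (100, 400, 1600):
>>>>         lanes = total_gbps // LANE_GBPS
>>>>         for pairs in (1, 4, 16):  # duplex, 4+4, 16+16 parallel SMF
>>>>             wdm = math.ceil(lanes / pairs)  # wavelengths per fiber
>>>>             note = (" (dark fibers)" if lanes < pairs
>>>>                     else " (no WDM)" if wdm == 1 else "")
>>>>             print(f"{total_gbps}GbE, {pairs} pair(s): {wdm}WDM{note}")
>>>>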
>>>> The above is independent of distances in the 300+ meter range we're 
>>>> considering. Yes, there are possibilities of PAM encoding and 
>>>> electrical interface speed increases. Historically we've avoided 
>>>> the former, and the latter is expected to bring a factor of 2, at 
>>>> most, for these generations.
>>>> Together, they might bring us one factor-of-4 generation further.
>>>> For 40GbE or 100GbE, 20nm-spaced CWDM is nice for 4WDM (4 wavelengths).
>>>> At
>>>> 400GbE, 16WDM CWDM is a 1270-1590nm stretch, with 16 laser products 
>>>> (ouch!). 20nm spacing is out of the question for 64WDM (1.6TbE). 
>>>> CWDM does not look attractive on duplex SMF beyond 100GbE.
>>>> OTOH, a 100GBASE-LR4 - based evolution on duplex SMF, with ~4.5nm 
>>>> spacing, is present at 100GbE. For 400GbE, it could include the 
>>>> same 4 wavelengths, plus 4-below and 12-above - a 1277.5-1349.5nm 
>>>> wavelength span, which is realistic. The number of "laser products" 
>>>> is fuzzy, as the same epitaxial structure and process (except 
>>>> grating spacing) may be used for maybe a few, but nowhere near all, 
>>>> of the wavelengths. For 1.6TbE 64WDM, LR4's 4.5nm spacing implies a 
>>>> 288nm wavelength span and a plethora of "laser products." 
>>>> Unattractive.
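>>>>
>>>> As a quick sanity check on those spans (a sketch that counts (n-1) 
>>>> gaps across n channels, whereas the rounder figures above use 
>>>> n * spacing; the grid start wavelengths are nominal):
>>>>
>>>>     def grid_span_nm(n_channels, spacing_nm):
>>>>         # Span across n channels = (n - 1) gaps * channel spacing.
>>>>         return (n_channels - 1) * spacing_nm
>>>>
>>>>     print(grid_span_nm(16, 20.0))  # 16-ch CWDM: 300 nm, e.g. ~1270-1570 nm
>>>>     print(grid_span_nm(16, 4.5))   # 16-ch LAN WDM: 67.5 nm - realistic
>>>>     print(grid_span_nm(64, 4.5))   # 64-ch LAN WDM: 283.5 nm - unattractive
>>>>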
>>>> On a "4X / generational speed increase," 4+4parallel SMF gains one 
>>>> generation over duplex SMF and 16+16parallel SMF gains 2 
>>>> generations over duplex SMF. Other implementations, e.g. channel 
>>>> rate increase and/or encoding, may provide another generation or 
>>>> two of "future accommodation."
>>>> The larger the number of wavelengths that are multiplexed, the 
>>>> greater the loss that must be absorbed within the 
>>>> laser-to-detector (TPlaser to 
>>>> TPdetector) link budget. More wavelengths per fiber means more 
>>>> power per channel, i.e. more power/Gbps and larger faceplate area. 
>>>> While duplex SMF looks attractive to systems implementations, it 
>>>> entails significant(!!) cost implications to laser/transceiver 
>>>> vendors, who may not be able to bear "cost assumptions," and 
>>>> additional power requirements, which may not be tolerable for 
>>>> systems vendors.
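>>>>
>>>> To make the budget point concrete, here is a purely illustrative 
>>>> sketch; the per-stage mux/demux loss numbers are assumptions, not 
>>>> vendor or standard figures. Each WDM stage appears twice in the 
>>>> TPlaser-to-TPdetector budget, once at the mux and once at the demux:
>>>>
>>>>     # Assumed mux/demux insertion loss vs. wavelength count (illustrative).
>>>>     MUX_LOSS_DB = {1: 0.0, 4: 1.5, 16: 3.0, 64: 4.5}
>>>>
>>>>     def extra_link_budget_db(n_wdm):
>>>>         return 2 * MUX_LOSS_DB[n_wdm]  # one mux at TX plus one demux at RX
>>>>
>>>>     for n in (1, 4, 16, 64):
>>>>         print(f"{n}-WDM: +{extra_link_budget_db(n):.1f} dB of launch power per lane")
>>>>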
>>>> I don't claim to "have the answer," but rather attempt to frame the 
>>>> question pointedly "How do we want to architect the next few 
>>>> generations of Structured Data Center interconnects?" Insistence on 
>>>> duplex SMF works for this-and-maybe-next-generation, then may hit a 
>>>> wall. Installation of parallel SMF provides a 1-or-2-generation gap 
>>>> of "proofing," with higher initial cost but lower power 
>>>> throughout, and pushes back the need for those abominable 
>>>> "forklift upgrades."
>>>> Jack
>>>>
>>>>
>>>> On 11/16/11 1:00 PM, "Kolesar, Paul" <PKOLESAR@xxxxxxxxxxxxx> wrote:
>>>>
>>>>> Brad,
>>>>> The fiber type mix in one of my contributions in September is all 
>>>>> based on cabling that is pre-terminated with MPO (MTP) array connectors.
>>>>> Recall
>>>>> that single-mode fiber represents about 10 to 15% of those channels.
>>>>> Such cabling infrastructure provides the ability to support either 
>>>>> multiple 2-fiber or parallel applications by applying or removing 
>>>>> fan-outs from the ends of the cables at the patch panels.  The 
>>>>> fan-outs transition the MPO terminated cables to collections of LC 
>>>>> or SC connectors.  If fan-outs are not present, the cabling is 
>>>>> ready to support parallel applications by using array equipment 
>>>>> cords.  As far as I am aware this pre-terminated cabling approach 
>>>>> is the primary way data centers are built today, and has been in 
>>>>> practice for many years.  So array terminations are commonly used 
>>>>> on single-mode cabling infrastructures.  While that last statement 
>>>>> is true, it could leave a distorted impression if I also did not 
>>>>> say that virtually the entire existing infrastructure employs 
>>>>> fan-outs today simply because parallel applications have 
>>>>> not been deployed in significant numbers.  But migration to 
>>>>> parallel optic interfaces is a matter of removing the existing 
>>>>> fan-outs.  This is what I tried to describe at the microphone 
>>>>> during November's meeting.
>>>>>
>>>>> Regards,
>>>>> Paul
>>>>>
>>>>> -----Original Message-----
>>>>> From: Brad Booth [mailto:Brad_Booth@xxxxxxxx]
>>>>> Sent: Wednesday, November 16, 2011 11:34 AM
>>>>> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
>>>>> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>>>>>
>>>>> Anyone have any data on distribution of parallel vs duplex volume 
>>>>> for
>>>>> OM3/4 and OS1?
>>>>>
>>>>> Is most SMF duplex (or simplex), given the alignment requirements?
>>>>>
>>>>> It would be nice to have an MMF version of 100G that doesn't 
>>>>> require parallel fibers, but we'd need to understand the relative cost differences.
>>>>>
>>>>> Thanks,
>>>>> Brad
>>>>>
>>>>>
>>>>>
>>>>> -----Original Message-----
>>>>> From: Ali Ghiasi 
>>>>> [aghiasi@xxxxxxxxxxxx<mailto:aghiasi@xxxxxxxxxxxx>]
>>>>> Sent: Wednesday, November 16, 2011 11:04 AM Central Standard Time
>>>>> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
>>>>> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>>>>>
>>>>> Jack
>>>>>
>>>>> If there is to be another LR4 PMD out there, the best starting point 
>>>>> would be 40GBASE-LR4: look at its cost structure, and build a 
>>>>> 40G/100G-compatible PMD.
>>>>>
>>>>> We also need to understand the cost difference between parallel 
>>>>> MR4 and 40GBASE-LR4 (CWDM).  The 40GBASE-LR4 cost could, over time, be 
>>>>> assumed identical to that of the new 100G MR4 PMD.  With this baseline 
>>>>> cost, we can then compare it with 100GBASE-LR4 and parallel 
>>>>> MR4.  The next step is to take into account the higher cable and 
>>>>> connector cost associated with a parallel implementation, then 
>>>>> identify at what reach it gets to parity with 100G (CWDM) or 
>>>>> 100G (LAN-WDM).
>>>>>
>>>>> In the meantime we need to get more direct feedback from end 
>>>>> users on whether parallel SMF is even an acceptable solution for 
>>>>> reaches of 500-1000 m.
>>>>>
>>>>> Thanks,
>>>>> Ali
>>>>>
>>>>>
>>>>>
>>>>> On Nov 15, 2011, at 8:41 PM, Jack Jewell wrote:
>>>>>
>>>>> Thanks for this input Chris.
>>>>> I'm not "proposing" anything here, rather trying to frame the 
>>>>> challenge, so that we become better aligned in how cost-aggressive 
>>>>> we should be, which guides the technical approach. As for names, 
>>>>> "whatever works" :-) It would be nice to have a (whatever)R4, be 
>>>>> it nR4 or something else, and an English name to go with it. The 
>>>>> Structured Data Center (SDC) links you describe in your Nov2011 
>>>>> presentation are what I am referencing, except for the restriction 
>>>>> to "duplex SMF." My input is based on use of any interconnection 
>>>>> medium that provides the overall lowest-cost, lowest-power 
>>>>> solution, including e.g. parallel SMF.
>>>>> Cost comparisons are necessary but, I agree, tend to be dicey. 
>>>>> Present 10GbE costs are much better defined than projected 100GbE 
>>>>> NextGen costs, but there's no getting around having to estimate 
>>>>> NextGen costs, and specifying the comparison. Before the straw 
>>>>> poll, I got explicit clarification that "LR4" did NOT include 
>>>>> mux/demux IC's, and therefore did not refer to what is built 
>>>>> today. My assumption was a "fair" cost comparison between LR4 and 
>>>>> (let's call it)nR4 - at similar stage of development and market 
>>>>> maturity. A relevant stage is during delivery of high volumes 
>>>>> (prototype costs are of low relevance). This does NOT imply same 
>>>>> volumes. It wouldn't be fair to project ER costs based on SR or 
>>>>> copper volumes. I'm guessing these assumptions are mainstream in 
>>>>> this group. That would make the 25% cost target very aggressive, 
>>>>> and a 50% cost target probably sufficient to justify an optimized 
>>>>> solution. Power requirements are a part of the total cost of 
>>>>> ownership, and should be considered, but perhaps weren't.
>>>>> The kernel of this discussion is whether to pursue "optimized 
>>>>> solutions"
>>>>> vs "restricted solutions." LR4 was specified through great 
>>>>> scrutiny and is expected to be a very successful solution for 10km 
>>>>> reach over duplex SMF. Interoperability with LR4 is obviously 
>>>>> desirable, but would a 1km-spec'd-down version of LR4 provide 
>>>>> sufficient cost/power savings over
>>>>> LR4 to justify a new PMD and product development? Is there another 
>>>>> duplex SMF solution that would provide sufficient cost/power 
>>>>> savings over LR4 to justify a new PMD and product development? If 
>>>>> so, why wouldn't it be essentially a 1km-spec'd-down version of 
>>>>> LR4? There is a wide perception that SDCs will require costs/powers 
>>>>> much lower than are expected from LR4, so much lower that its 
>>>>> solution is a major topic in HSSG. So far, it looks to me like an 
>>>>> optimized solution is probably warranted. But I'm not yet 
>>>>> convinced of that, and don't see consensus on the issue in the 
>>>>> group, hence the discussion.
>>>>> Cheers, Jack
>>>>>
>>>>> From: Chris Cole
>>>>> <chris.cole@xxxxxxxxxxx<mailto:chris.cole@xxxxxxxxxxx>>
>>>>> Reply-To: Chris Cole
>>>>> <chris.cole@xxxxxxxxxxx<mailto:chris.cole@xxxxxxxxxxx>>
>>>>> Date: Tue, 15 Nov 2011 17:33:17 -0800
>>>>> To: <STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx<mailto:STDS-802-3-100GNGOPTX@LISTSERV.IEEE.ORG>>
>>>>> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>>>>>
>>>>> Hello Jack,
>>>>>
>>>>> Nice historical perspective on the new reach space.
>>>>>
>>>>> Do I interpret your email as proposing to call the new 150m to 
>>>>> 1000m standard 100GE-MR4? ☺
>>>>>
>>>>> One of the problems in using today’s 100GE-LR4 cost as a 
>>>>> comparison metric for new optics is that there is at least an 
>>>>> order of magnitude variation in the perception of what that cost 
>>>>> is. Given such a wide disparity in perception, 25% can either be impressive or inadequate.
>>>>>
>>>>> What I had proposed as reference baselines for making comparisons 
>>>>> are 10GE-SR (VCSEL-based TX), 10GE-LR (DFB-laser-based TX) and 
>>>>> 10GE-ER (EML-based TX) cost per bit/sec. This not only allows us to 
>>>>> make objective relative comparisons but also to decide if the 
>>>>> technology is suitable for widespread adoption by using rules of 
>>>>> thumb like 10x the bandwidth (i.e. 100G) at 4x the cost (i.e. 40% 
>>>>> of the 10GE-nR cost per bit) at similar high volumes.
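>>>>>
>>>>> As a worked illustration of that rule of thumb (the baseline numbers 
>>>>> below are placeholder relative costs, not actual prices):
>>>>>
>>>>>     def target_100g_module_cost(baseline_10g_cost):
>>>>>         # 10x the bandwidth at 4x the module cost = 40% of the
>>>>>         # baseline cost per bit.
>>>>>         return 4 * baseline_10g_cost
>>>>>
>>>>>     for name, cost in (("10GE-SR", 1.0), ("10GE-LR", 3.0), ("10GE-ER", 8.0)):
>>>>>         target = target_100g_module_cost(cost)
>>>>>         print(f"{name}: 100G target {target:.1f} = "
>>>>>               f"{target / (10 * cost):.0%} of baseline cost per bit")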
>>>>>
>>>>> Using these reference baselines, in order for the new reach space 
>>>>> optics to be compelling, they must have a cost structure that is 
>>>>> referenced to a fraction of 10GE-SR (VCSEL based) cost, NOT 
>>>>> referenced to a fraction of 10GE-LR (DFB laser based) cost. 
>>>>> Otherwise, the argument can be made that
>>>>> 100GE-LR4 will get to a fraction of 10GE-LR cost, at similar 
>>>>> volumes, so why propose something new?
>>>>>
>>>>> Chris
>>>>>
>>>>> From: Jack Jewell [mailto:jack@xxxxxxxxxxxxxx]
>>>>> Sent: Tuesday, November 15, 2011 3:06 PM
>>>>> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx<mailto:STDS-802-3-100GNGOPTX@LISTSERV.IEEE.ORG>
>>>>> Subject: [802.3_100GNGOPTX] Emerging new reach space
>>>>>
>>>>> Following last week's meetings, I think the following is relevant 
>>>>> to frame our discussions of satisfying data center needs for 
>>>>> low-cost low-power interconnections over reaches in the roughly 150-1000m range.
>>>>> This is a "30,000ft view,"without getting overly specific.
>>>>> Throughout GbE, 10GbE, 100GbE and into our discussions of 100GbE 
>>>>> NextGenOptics, there have been 3 distinct spaces, with solutions 
>>>>> optimized for each: Copper, MMF, and SMF. With increasing data 
>>>>> rates, both copper and MMF specs focused on maintaining minimal 
>>>>> cost, and their reach lengths decreased. E.g. MMF reach was up to 
>>>>> 550m in GbE, then 300m in 10GbE (even shorter reach defined 
>>>>> outside of IEEE), then 100-150m in 100GbE. MMF reach for 100GbE 
>>>>> NextGenOptics will be even shorter unless electronics like EQ or 
>>>>> FEC are included. Concurrently, MMF solutions have become 
>>>>> attractive over copper at shorter and shorter distances. Both 
>>>>> copper and MMF spaces have "literally" shrunk. In contrast, SMF 
>>>>> solutions have maintained a 10km reach (not worrying about the 
>>>>> initial 5km spec in GbE, or 40km solutions). To maintain the 10km 
>>>>> reach, SMF solutions evolved from FP lasers, to DFB lasers, to WDM 
>>>>> with cooled DFB lasers.
>>>>> The
>>>>> 10km solutions increasingly resemble longer-haul telecom solutions. 
>>>>> There is an increasing cost disparity between MMF and SMF solutions.
>>>>> This
>>>>> is an observation, not a questioning of the reasons behind these 
>>>>> trends.
>>>>> The increasing cost disparity between MMF and SMF solutions is 
>>>>> accompanied by rapidly-growing data center needs for links longer 
>>>>> than MMF can accommodate, at costs less than 10km SMF can 
>>>>> accommodate. This has the appearance of the emergence of a new 
>>>>> "reach space," which warrants its own optimized solution. The 
>>>>> emergence of the new reach space is the crux of this discussion.
>>>>> Last week, a straw poll showed heavy support for "a PMD supporting 
>>>>> a 500m reach at 25% the cost of 100GBASE-LR4" (heavily favored 
>>>>> over targets of 75% or 50% the cost of 100GBASE-LR4). By heavily 
>>>>> favoring the most aggressive low-cost target, this vote further 
>>>>> supports the need for an "optimized solution" for this reach 
>>>>> space. By "optimized solution" I mean one which is free from 
>>>>> constraints, e.g. interoperability with other solutions. Though 
>>>>> interoperability is desirable, an interoperable solution is 
>>>>> unlikely to achieve the cost target. In the 3 reach spaces 
>>>>> discussed so far, there is NO interoperability between copper/MMF, 
>>>>> MMF/SMF, or copper/SMF. Copper, MMF and SMF are optimized 
>>>>> solutions. It will likely take an optimized solution to satisfy this "mid-reach"
>>>>> space
>>>>> at the desired costs. To repeat: This has the appearance of the 
>>>>> emergence of a new "reach space," which warrants its own optimized 
>>>>> solution.
>>>>> Since
>>>>> the reach target lies between "short reach" and "long reach," "mid reach" is a reasonable term.
>>>>> Without discussing specific technical solutions, it is noteworthy 
>>>>> that all 4 technical presentations last week for this "mid-reach" 
>>>>> space involved parallel SMF, which would not interoperate with 
>>>>> 100GBASE-LR4, MMF, or copper. They would be optimized 
>>>>> solutions, and interest in their further work received the highest 
>>>>> support in straw polls. Given the high-density environment of 
>>>>> datacenters, a solution for the mid-reach space would have most 
>>>>> impact if its operating power was sufficiently low to be 
>>>>> implemented in a form factor compatible with MMF and copper 
>>>>> sockets.
>>>>> Cheers, Jack
>