
Re: [HSSG] BER Objective




If we think about using FEC (which would be appropriate for wide-area
or backhauling applications anyway), then this whole discussion
becomes a bit more relaxed ... :-)
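
To put a rough number on the FEC point, here is a minimal Python sketch of how strongly a hard-decision RS(255,239) code (t = 8 symbol-correcting, as in G.709) pushes down the output BER. It assumes independent random bit errors, and the final block-to-bit conversion is a crude approximation of my own, not the exact G.709 math:

    from math import comb

    def rs_post_fec_ber(p_bit, n=255, t=8, bits_per_symbol=8):
        # Probability that an 8-bit symbol contains at least one bit error.
        p_sym = 1.0 - (1.0 - p_bit) ** bits_per_symbol
        # A codeword is uncorrectable if more than t of its n symbols are bad.
        p_block = sum(comb(n, k) * p_sym ** k * (1.0 - p_sym) ** (n - k)
                      for k in range(t + 1, n + 1))
        # Crude scaling of residual block errors back to a bit error ratio.
        return p_block * (t + 1) / n

    for p in (1e-4, 1e-5, 1e-6):
        print(f"pre-FEC BER {p:.0e} -> post-FEC BER ~ {rs_post_fec_ber(p):.1e}")

Even a pre-FEC BER of 1E-4, which is quick to measure, maps to a post-FEC BER far below anything under discussion here, which is why FEC relaxes both the link budget and the testability questions.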


Roger Merel wrote:

Regarding (1): The carrier & wide-area network world already uses a worst-case BER of 1E-12 at 10G.

 

Regarding (2): Agree with your point that design verification & quality-control testing can be more extensive.

 

 


From: Marcus Duelk [mailto:duelk@xxxxxxxxxx]
Sent: Tuesday, August 29, 2006 12:20 PM
To: Roger Merel
Cc: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] BER Objective

 


Roger,

two comments:

1) To scale the BER target of 10GbE simply by a factor of ten for
100GbE may not be adequate, because it implies that you are still targeting
the same market. BER requirements in a low-cost LAN environment
might be different from BER requirements in the Carrier Ethernet world or
in wide-area applications. One reason is that the RTT is much larger,
so resending TCP packets due to errors has a much larger impact.
I think we should define a proper BER based on the targeted application,
rather than simply scaling by ten.

2) On your cost argument: component vendors sometimes have tiered
test requirements, where you specify (for example) a quick & dirty
test for every component and an extended test (like a real BER
measurement) for every 1,000th component or so. Running a BER 1E-15
test on every transceiver/transponder would be impractical.

Marcus


Roger Merel wrote:

There is nothing wrong with the idea that Howard suggests.  It is commonly used now and would certainly continue to be used for HSSG.

 

The greater concern is the one raised by Pete Anslow.

 

In any event, most HPC applications do not stress the worst-case optical link budget, although they do benefit from and require a better BER.  It is thus not inconsistent to design the product to support 1E-15 (by not having an inherent noise floor at 1E-12) while only spec’ing the worst-case corner performance at 1E-12.  Since the difference in power budget between 1E-12 and 1E-15 is <1 dB, 1E-15 would be available in any instance where the product is deployed with a path loss (cable, connectors, and patch panel) which didn’t use the full budget and instead left up to 1 dB of margin.  For instance, with an optical format which tolerates the loss of one patch panel (~1.5 dB), any application using direct connections (common in HPC) would achieve better than 1E-15 without requiring the standard to support 1E-15.  Another example is operating optical transceivers in an air-conditioned computer room (common in HPC applications) rather than in a remote closet (which can see much higher temperatures); again the link budget improves by at least 1 dB (due to laser output power as a function of temperature).
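
The "<1 dB" figure can be sanity-checked with the textbook Gaussian-noise model, BER = 0.5 * erfc(Q / sqrt(2)). A minimal sketch, under my own assumption of a thermal-noise-limited, unamplified receiver where the required optical power scales linearly with Q:

    from math import erfc, sqrt, log10

    def ber_from_q(q):
        # Textbook Gaussian-noise model: BER = 0.5 * erfc(Q / sqrt(2)).
        return 0.5 * erfc(q / sqrt(2))

    def q_for_ber(target, lo=0.0, hi=20.0):
        # BER(Q) is monotone decreasing in Q, so invert it by bisection.
        for _ in range(200):
            mid = (lo + hi) / 2
            if ber_from_q(mid) > target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    q12 = q_for_ber(1e-12)   # ~7.03
    q15 = q_for_ber(1e-15)   # ~7.94
    # Thermal-noise-limited receiver: required optical power scales with Q.
    print(f"extra power for 1e-15 vs 1e-12: {10 * log10(q15 / q12):.2f} dB")

This comes out around 0.5 dB optical, consistent with the "<1 dB" claim above.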

 

Requiring 1E-15 for the worst-case corner will add significantly to the cost of the Optical Phy for no benefit to any of the applications (including HPC).

 

Since 100G carries 10x the bits per second of 10G, the worst-case BER spec should only be tightened by a factor of 10, not by a factor of 1000.

 

 


From: Mike Bennett [mailto:mjbennett@xxxxxxx]
Sent: Tuesday, August 29, 2006 11:39 AM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] BER Objective

 

Roger,

Thank you for making a key point regarding packet loss and its impact on getting work done.  A fair amount of the CFI material referenced high-throughput flow data.  Jugnu illustrates one case in which packet loss doesn't have much of a noticeable effect: high transaction rate, low-throughput flows such as web hosting services, etc.  On the other hand, sites that transport fewer high-throughput flows will suffer greatly from an error per second.  The following link illustrates the impact of packet loss on TCP sessions (and what some folks are attempting to do about it): http://www.csm.ornl.gov/~dunigan/net100/wad.html.  There are many more examples and a multitude of research regarding TCP performance degradation due to packet loss.
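
For a feel of the numbers, here is a minimal sketch using the well-known Mathis et al. approximation for the steady-state throughput of a single TCP flow, rate ~ (MSS / RTT) * C / sqrt(p). The MSS, RTT, and loss values below are illustrative assumptions, not figures from this thread:

    from math import sqrt

    def tcp_throughput_bps(mss_bytes, rtt_s, loss_prob, c=1.22):
        # Mathis et al. approximate upper bound on one TCP flow's throughput.
        return 8 * mss_bytes / rtt_s * c / sqrt(loss_prob)

    for rtt_s in (0.001, 0.1):          # LAN-scale vs. wide-area RTT
        for p in (1e-6, 1e-8):          # packet loss probability
            gbps = tcp_throughput_bps(1460, rtt_s, p) / 1e9
            print(f"RTT {rtt_s * 1e3:6.1f} ms, loss {p:.0e}: <= {gbps:7.2f} Gb/s")

At a 100 ms wide-area RTT, even a 1E-6 packet loss rate caps a single flow well below 1 Gb/s, which is Marcus's RTT point in quantitative form.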

I've not seen a response to Howard's suggestion of using a method similar to the one found in EFM.  Can someone tell me what's wrong with that idea?

Thanks,

Mike

Roger Merel wrote:

Jugnu,

 

Indeed, some applications will not truly be adversely affected by a poorer BER, even if the errors occurred once per second or more on such a high-speed link (although no one really likes watching the errors rack up that fast).  While this requires a packet resend, in these applications it does not significantly degrade working throughput.

 

However, there are applications where such an error rate does have a serious impact (and these represent some of Ethernet’s important early adopters).  When the data is being used in a computational pipeline, such a resend stalls the pipeline and wastes all of the time until the resent data arrives.  In a multi-processor world which seeks to keep all N processors cache-sync’ed, this can effectively stall the entire system, and since the system may be composed of up to N^2 links, the effect of BER can be very significant: one error per second per link could mean that no productive work is occurring.
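
The arithmetic behind "errors once per second" is one line; a quick sketch at a 100 Gb/s line rate, assuming random, uncorrelated errors:

    def mean_seconds_between_errors(line_rate_bps, ber):
        # Expected time between bit errors on one link.
        return 1.0 / (line_rate_bps * ber)

    for ber in (1e-10, 1e-12, 1e-13, 1e-15):
        t = mean_seconds_between_errors(100e9, ber)
        print(f"BER {ber:.0e}: one error every {t:10,.1f} s")

At 100 Gb/s, BER 1E-10 means ten errors per second per link, 1E-12 means one every ten seconds, and 1E-15 means one roughly every 2.8 hours.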

 

-Roger

 


From: OJHA,JUGNU [mailto:jugnu.ojha@xxxxxxxxxxxxx]
Sent: Tuesday, August 29, 2006 10:48 AM
To: Roger Merel; STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: RE: [HSSG] BER Objective

 

Roger, 

 

I understand that test time is the issue.  The point I’m getting at (and which I’ve always wondered about) is: if the errors are so few and far between that it takes this long to find them, how much impact can they really be having on system/network performance?  I.e., are we being too demanding with the BER requirements?

 

Jugnu

 


From: Roger Merel [mailto:roger@xxxxxxxxxxx]
Sent: Tuesday, August 29, 2006 10:44 AM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] BER Objective

 

It’s not hard to measure, just time-consuming.  If one wants to keep optics affordable, one needs manufacturing test to take a few minutes, not >10 minutes.

 

My position, though, is that 1E-15 BER is not required; 1E-13 at most.

 


From: OJHA,JUGNU [mailto:jugnu.ojha@xxxxxxxxxxxxx]
Sent: Tuesday, August 29, 2006 10:37 AM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] BER Objective

 

All of this raises the following question: if this is so hard to measure, how much impact can it really have in the real world?  Why not back the BER requirement off to 1E-10?

 

Regards,

Jugnu

 


From: Petar Pepeljugoski [mailto:petarp@xxxxxxxxxx]
Sent: Monday, August 28, 2006 8:03 PM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] BER Objective

 


I agree with Howard. It is impractical and expensive to test for very low BERs. The specs should be such that the power budget is capable of achieving BER = 1E-15, yet the testing can be some kind of accelerated BER measurement at a less stringent value, with the 1E-15 figure derived by extrapolating the BER curve.

However, as with any extrapolation of test results one has to be careful, so in this case it will be the manufacturers' responsibility to guarantee BER = 1E-15.

Regards,

Petar Pepeljugoski
IBM Research
P.O.Box 218 (mail)
1101 Kitchawan Road, Rte. 134 (shipping)
Yorktown Heights, NY 10598

e-mail: petarp@xxxxxxxxxx
phone: (914)-945-3761
fax:        (914)-945-4134

From: Howard Frazier <hfrazier@xxxxxxxxxxxx>
Sent: Monday, August 28, 2006 5:39 PM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] BER Objective

For the 100 Mbps EFM fiber optic links (100BASE-LX10 and 100BASE-BX10)
we specified a BER requirement of 1E-12, consistent with the BER requirement
for gigabit links. We recognized that this would be impractical to test in a
production environment, so we defined a means to extrapolate a BER of 1E-12
by testing to a BER of 1E-10 with an additional 1 dB of attenuation.  See
58.3.2 and 58.4.2.
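
A minimal sketch of the kind of extrapolation this enables. The Gaussian model BER = 0.5 * erfc(Q / sqrt(2)) and the assumption that Q scales linearly with received optical power (thermal-noise-limited receiver) are my illustration, not the Clause 58 math itself:

    from math import erfc, sqrt

    def ber_from_q(q):
        # Textbook Gaussian-noise BER model.
        return 0.5 * erfc(q / sqrt(2))

    q_stressed = 6.36                 # Q giving roughly BER 1e-10
    # Removing the 1 dB test attenuation raises received power by 1 dB,
    # which scales Q by 10**(1/10) under this model.
    q_nominal = q_stressed * 10 ** (1.0 / 10)

    print(f"stressed (1 dB attenuated): BER ~ {ber_from_q(q_stressed):.1e}")
    print(f"nominal:                    BER ~ {ber_from_q(q_nominal):.1e}")

Under these assumptions, passing 1E-10 with 1 dB of extra attenuation implies a nominal BER well below 1E-12, so the EFM test condition is conservative.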
 
Howard Frazier
Broadcom Corporation


From: Roger Merel [mailto:roger@xxxxxxxxxxx]
Sent: Monday, August 28, 2006 1:54 PM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] BER Objective


David,
 
Prior to 10G, the BER standard for optical communications was 1E-10 (155M-2.5G).  At 10G, the BER standard was revised to 1E-12.  For unamplified links, the difference between 1E-12 and 1E-15 is only ~1 dB in power delivered to the PD.  The larger issue, however, is one of margin and testability (the duration required to reliably verify 1E-15 at 10G is impractical as a factory test on every unit), especially since we’d want to spec the worst-case product distribution at worst-case path loss (cable + connector loss) and at EOL with margin.  Thus in reality, all products would ship from the factory at BOL with a BER of 1E-15, and nearly all would continue to deliver 1E-15 for their entire life under their actual operating conditions and actual cable losses.
 
Thus, if by “design target” you mean worst case over worst case, with margin, assured at EOL on every factory unit, then this is overkill.  I might be willing to entertain a 1E-13 BER, as this would imply the same number of errors per second on an absolute basis, irrespective of the number of bits being passed; verifying it takes the same time in the factory as verifying 1E-12 at 10G (which is in fact a real cost burden that adversely affects product economics).  However, this would not substantially change the reality of the link budget, and it would make for a sensible policy for the continued future of bit error rate specs (should there be future “Still-Higher-Speed” SGs).
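
The factory-time claim can be checked with the standard zero-error confidence bound: to assert BER <= b at confidence 1 - alpha, roughly -ln(alpha) / b bits must pass error-free. A minimal sketch at 95% confidence, using the rates and targets from this discussion:

    from math import log

    def zero_error_test_seconds(ber_target, line_rate_bps, confidence=0.95):
        # Bits that must pass error-free to claim BER <= target,
        # then the wall-clock time to stream them at the line rate.
        bits_needed = -log(1.0 - confidence) / ber_target
        return bits_needed / line_rate_bps

    for rate_bps, ber in ((10e9, 1e-12), (100e9, 1e-13), (100e9, 1e-15)):
        t = zero_error_test_seconds(ber, rate_bps)
        print(f"{rate_bps / 1e9:5.0f}G at BER {ber:.0e}: {t:9,.0f} s ({t / 3600:.2f} h)")

Verifying 1E-13 at 100G takes the same ~5 minutes as 1E-12 at 10G, while verifying 1E-15 at 100G takes over 8 hours per unit, which supports the point above.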
 
-Roger
 
 


 



From: Martin, David (CAR:Q840)
Sent: Friday, August 25, 2006 12:22 PM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: BER Objective

 
During the discussion on Reach Objectives there didn’t appear to be any mention of corresponding BER.
 
Recall the comments from the floor during the July meeting CFI regarding how 10GigE has been used more as infrastructure than as typical end-user NICs, and that the application expectation for 100GigE would be similar.
 
Based on that view, I’d suggest a BER design target of (at least) 1E-15.  That has been the de facto expectation from most carriers since the introduction of OC-192 systems.
 
The need for strong FEC (e.g., G.709 RS), lighter FEC (e.g., BCH-3), or none at all would then depend on various factors, like the optical technology chosen for each of the target link lengths.

...Dave

David W. Martin
Nortel Networks

dwmartin@xxxxxxxxxx
+1 613 765 2901 (esn 395)
~~~~~~~~~~~~~~~~~~~~

 





-- 
Michael J. Bennett
Sr. Network Engineer
LBLnet Services Group
Lawrence Berkeley Laboratory
Tel. 510.486.7913



-- 
___________________________
Marcus Duelk
Bell Labs / Lucent Technologies
Data Optical Networks Research
 
Crawford Hill HOH R-237
791 Holmdel-Keyport Road
Holmdel, NJ 07733, USA
fon +1 (732) 888-7086
fax +1 (732) 888-7074
