
Re: [HSSG] BER Objective


you are right, I made a typo in the formula. If the BER on all
10G lanes is the same, then BER100G = BER10G, as I wrote
below. But I fully agree, the formula does not reflect that. The correct
formula for a 10x10G PHY would be:

BER100G = [BER10G(1) + BER10G(2) + .... + BER10G(10)] / 10

What I meant by the other sentence about the different speeds of
error accumulation was the following:

If you have a 100G MAC, for example, and you look at the errors or
the error rate at the MAC device, then it no longer matters
whether your bit errors were accumulated on one 100G interface or
on ten 10G interfaces. If you have a BER here of 1E-11, then you know
that you will have one error every second.

But we were talking about the errors that are measured at the PHY
and what kind of BER values make sense for (optical) component
vendors. Here, it depends on whether we are discussing a 1x100G PHY
or a 10x10G PHY. If we take the same example of a BER of 1E-11 at the
100G MAC and we think of a 10x10G PHY, then each 10G PHY would
need to run at a BER of 1E-12.

I think I should have been clearer on that ...



Pat Thaler wrote:
That equation isn't right. If I've got 10 parallel links, the number of bit errors on the aggregate is the sum of the bit errors on the individual links.
Send a second's worth of data on 10 10 Gb/s links with BER10G and one will get 10^10 * BER10G bit errors on each one. Therefore one will get a total of
10 * 10^10 * BER10G = 10^11 * BER10G errors during the transmission of 10^11 bits. The BER of the accumulated link is therefore
10^11 * BER10G / 10^11 = BER10G. Your equation would yield BER100G = BER10G^10/10, which isn't the same as BER10G.
It is true that one gets the same bit error rate for the 10 Gig links and the accumulated 100 Gig link, but I don't understand what you mean by:
"which means that the error accumulation time is given by the 10G speed, not the 100G speed."
Both error rates are the same, so for a given number of bits transmitted one gets the same error rate from sending on 10 parallel 10 Gig links with a given BER as one would get on one 100 Gig link with the same BER.
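The sum-of-errors argument can be sketched numerically (a minimal illustration, not from the thread; the function name is my own):

```python
# Aggregate BER of N parallel lanes carrying equal traffic, following the
# sum-of-errors argument: total errors = sum over lanes of
# (bits per lane * lane BER), total bits = N * bits per lane, so the
# aggregate BER is the arithmetic mean of the per-lane BERs
# (not a product or a geometric mean).

def aggregate_ber(lane_bers):
    """Aggregate BER of parallel lanes with equal traffic per lane."""
    return sum(lane_bers) / len(lane_bers)

# Ten 10G lanes, all at the same BER: the 100G aggregate is unchanged.
print(aggregate_ber([1e-12] * 10))        # ~1e-12

# One bad lane dominates the aggregate.
print(aggregate_ber([1e-10] + [1e-12] * 9))  # ~1.09e-11
```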

From: Marcus Duelk [mailto:duelk@xxxxxxxxxx]
Sent: Tuesday, August 29, 2006 10:53 AM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] BER Objective


I guess you also have to look at it from the "packet world". A transmission error
means that TCP does not get an acknowledgment, which means that the packet
gets resent. If we target 100 Gb/s (1E11 b/s) and you have a BER of 1E-12,
then you have an error every 10 seconds. This is a little too high and will cause
too many packets to be resent. I don't know exactly what the right or good
numbers are, but I would guess that BER goals in the range of 1E-14 to 1E-15
will be practical. A BER of 1E-15 would mean an error every 1E4 (10,000) seconds,
which should be good enough for Carrier Ethernet equipment.
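The back-of-the-envelope arithmetic above can be written out (a small sketch; the names are illustrative):

```python
# Mean time between bit errors at a given line rate and BER:
# errors per second = line_rate * BER, so the mean time between
# errors is 1 / (line_rate * BER).

def seconds_per_error(line_rate_bps, ber):
    """Average seconds between bit errors for a given rate and BER."""
    return 1.0 / (line_rate_bps * ber)

RATE_100G = 1e11  # 100 Gb/s in bits per second

print(seconds_per_error(RATE_100G, 1e-12))  # ~10 s, as in the text
print(seconds_per_error(RATE_100G, 1e-15))  # ~1e4 s (about 2.8 hours)
```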

The interesting thing about the time it takes to accumulate an error is
that it depends a lot on the PHY. If you use a 100G serial PHY, then the
numbers I have given above are correct. If you use a 10x10G PHY, then
the BER accumulates at the 10G rate, i.e.

BER100G = [BER10G(1)*BER10G(2)*....BER10G(10)]/10

where BER10G(i) is the BER on each of the ten 10G lanes. If they
are all the same, then BER10G = BER100G, which means that
the error accumulation time is given by the 10G speed, not the 100G speed.



All of this raises the following question: if this is so hard to measure, how much impact can it really have in the real world? Why not back the BER requirement off to 1E-10?



From: Petar Pepeljugoski [mailto:petarp@xxxxxxxxxx]
Sent: Monday, August 28, 2006 8:03 PM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] BER Objective

I agree with Howard. It is impractical and expensive to test for very low BERs - the specs should be such that the power budget is capable of achieving BER = 1e-15, yet the testing can be some kind of accelerated BER test at a relaxed value, with the result derived from curve interpolation.

However, as with any extrapolation of test results, one has to be careful, so in this case it will be the manufacturers' responsibility to guarantee BER = 1e-15.


Petar Pepeljugoski
IBM Research
P.O.Box 218 (mail)
1101 Kitchawan Road, Rte. 134 (shipping)
Yorktown Heights, NY 10598

e-mail: petarp@xxxxxxxxxx
phone: (914)-945-3761
fax:        (914)-945-4134

From: Howard Frazier <hfrazier@xxxxxxxxxxxx>
Sent: Monday, August 28, 2006 5:39 PM
Subject: Re: [HSSG] BER Objective

For the 100 Mbps EFM fiber optic links (100BASE-LX10 and 100BASE-BX10)
we specified a BER requirement of 1E-12, consistent with the BER requirement
for gigabit links. We recognized that this would be impractical to test in a
production environment, so we defined a means to extrapolate a BER of 1E-12
by testing to a BER of 1E-10 with an additional 1 dB of attenuation.  See
58.3.2 and 58.4.2.
Howard Frazier
Broadcom Corporation

From: Roger Merel [mailto:roger@xxxxxxxxxxx]
Sent: Monday, August 28, 2006 1:54 PM
Subject: Re: [HSSG] BER Objective

Prior to 10G, the BER standard (for optical communications) was set at 1E-10 (155M-2.5G). At 10G, the BER standard was revised to 1E-12. For unamplified links, the difference between 1E-12 and 1E-15 is only about 1 dB in power delivered to the PD. However, the larger issue is one of margin and testability (the duration required to reliably verify 1E-15 at 10G is impractical as a factory test on every unit), especially since we’d want to spec worst-case product distribution at worst-case path loss (cable + connector loss) and at EOL with margin. Thus, in reality, all products ship at BOL from the factory with a BER of 1E-15, and in fact nearly all will continue to deliver 1E-15 for their entire life under their actual operating conditions and with their actual cable losses.

Thus, if by “design target” you mean worst-case-on-worst-case with margin, assured at EOL on every factory unit, then this is overkill. I might be willing to entertain a 1E-13 BER, as this would imply the same number of errors per second (on an absolute basis, irrespective of the number of bits being passed; verifying it takes the same time in the factory as verifying 1E-12 at 10G, although this is a real cost burden which adversely affects product economics); however, this would not substantially change the reality of the link budget. It would make for a sensible policy for the continued future of bit error rate specs (should there be future “Still-Higher-Speed” SGs).
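The claim that 1E-12 versus 1E-15 is only about a dB can be checked under a simple Gaussian-noise (Q-factor) model (an assumed model, not one cited in the thread; the exact dB figure depends on the receiver noise model):

```python
import math

# Gaussian-noise model (an assumption): BER = 0.5 * erfc(Q / sqrt(2)).
# Find the Q-factor needed for a target BER by bisection, then compare
# the Q values required for 1e-12 and 1e-15.

def ber_from_q(q):
    return 0.5 * math.erfc(q / math.sqrt(2.0))

def q_from_ber(target, lo=0.0, hi=20.0):
    """Bisection on Q; ber_from_q is monotonically decreasing in Q."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if ber_from_q(mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

q12 = q_from_ber(1e-12)  # ~7.03
q15 = q_from_ber(1e-15)  # ~7.94
# If received power scales linearly with Q (thermal-noise-limited
# receiver), the optical power gap is 10*log10(q15/q12), roughly 0.5 dB;
# other noise models put it closer to 1 dB, as quoted above.
print(q12, q15, 10 * math.log10(q15 / q12))
```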


From: Martin, David (CAR:Q840)
Friday, August 25, 2006 12:22 PM
BER Objective

During the discussion on Reach Objectives, there didn’t appear to be any mention of a corresponding BER.
Recall the comments from the floor during the July meeting CFI regarding how 10GigE has been used more as infrastructure rather than as typical end-user NICs, and that the application expectation for 100GigE would be similar.
Based on that view, I’d suggest a BER design target of (at least) 1E-15. That has been the de facto expectation from most carriers since the introduction of OC-192 systems.
The need for strong FEC (e.g., G.709 RS), lighter FEC (e.g., BCH-3), or none at all would then depend on various factors, like the optical technology chosen for each of the target link lengths.
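As a rough illustration of what strong FEC buys, here is a hedged sketch for G.709's RS(255,239), which corrects up to t = 8 symbol errors per 255-symbol codeword. It computes the probability that a codeword is uncorrectable under random errors (a textbook bound, not the exact post-FEC output BER of any particular implementation):

```python
import math

# RS(255,239) over 8-bit symbols corrects up to T = 8 symbol errors
# per N = 255-symbol codeword. Under independent random bit errors,
# estimate P(more than T symbol errors), i.e. an uncorrectable codeword.

N, T, BITS_PER_SYMBOL = 255, 8, 8

def uncorrectable_prob(pre_fec_ber):
    """Probability that a codeword exceeds the correction capability."""
    p_sym = 1.0 - (1.0 - pre_fec_ber) ** BITS_PER_SYMBOL  # symbol error prob
    return sum(math.comb(N, k) * p_sym**k * (1.0 - p_sym) ** (N - k)
               for k in range(T + 1, N + 1))

for ber in (1e-3, 1e-4, 1e-5):
    print(f"pre-FEC BER {ber:.0e} -> uncorrectable codeword prob "
          f"{uncorrectable_prob(ber):.2e}")
```

The steep drop in the uncorrectable-codeword probability as the pre-FEC BER improves is what makes the FEC-versus-optical-budget trade-off mentioned above so technology dependent.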


David W. Martin
Nortel Networks

+1 613 765 2901 (esn 395)


Marcus Duelk
Bell Labs / Lucent Technologies
Data Optical Networks Research

Crawford Hill HOH R-237
791 Holmdel-Keyport Road
Holmdel, NJ 07733, USA
fon +1 (732) 888-7086
fax +1 (732) 888-7074
