I think that "bit error ratio" is commonly referred to as "bit error
rate," but strictly speaking you are correct.
Thanks for the correction, I guess I screwed up again ... my goodness.
Your numbers are, of course, correct. The BER is the same on all PHYs
but the time to accumulate a given number of errors is different ...
Hugh Barrass wrote:
BER stands for Bit Error Ratio - not rate.
If you have 10 x 10G PHYs each running with a BER of 10^-11, then your
100G link is running at a BER of 10^-11. Each PHY is seeing (on
average) one error every 10 seconds, therefore the 100G link is seeing
one error every second.
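For illustration, the arithmetic behind this example can be sketched as follows (the function name is mine, not from the thread):

```python
def seconds_per_error(line_rate_bps: float, ber: float) -> float:
    """Mean time between bit errors = 1 / (bits per second * errors per bit)."""
    return 1.0 / (line_rate_bps * ber)

# One 10G PHY at a BER of 1e-11: about one error every 10 seconds.
print(seconds_per_error(10e9, 1e-11))
# The aggregate 100G link at the same BER: about one error every second.
print(seconds_per_error(100e9, 1e-11))
```

The BER (a ratio) is unchanged by aggregation; only the error arrival rate scales with the line rate.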
A BER of 1E-12 says that you should receive approximately 125 Gbyte
(10^12 bits) of "good data" for every error that you see. This is the
same regardless of link speed.
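The "good data per error" figure follows directly from the BER; a quick sketch (helper name is my own):

```python
def good_bytes_per_error(ber: float) -> float:
    """On average 1/BER bits arrive between errors; divide by 8 for bytes."""
    return 1.0 / (8.0 * ber)

# BER of 1e-12 -> 1.25e11 bytes, i.e. about 125 Gbyte per error,
# independent of how fast the link delivers those bits.
print(good_bytes_per_error(1e-12))
```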
Marcus Duelk wrote:
If you have a 100G MAC, for example, and you look at the errors or
the error rate at the MAC device, then it doesn't matter anymore
whether your bit errors were accumulated on one 100G interface or
on ten 10G interfaces. If you have a BER here of 1E-11, then you know
that you will have one error every second.
But we were talking about the errors that are measured at the PHY
and what kind of BER values make sense for (optical) component
vendors. Here, it depends on whether we are discussing a 1x100G PHY
or a 10x10G PHY. If we take the same example of a BER of 1E-11 at the
100G MAC and we think of a 10x10G PHY, then each 10G PHY would
need to run at a BER of 1E-12.
I think I should have been clearer on that ...
Bell Labs / Lucent Technologies
Data Optical Networks Research
Crawford Hill HOH R-237
791 Holmdel-Keyport Road
Holmdel, NJ 07733, USA
phone +1 (732) 888-7086
fax +1 (732) 888-7074