
[802.3_50G] RE: [802.3_50G] CAUI-4 operating modes



Hi Chris,

 

For backward compatibility from the switch/host, it makes sense to support KR4 FEC only right now.

For LAUI-2, we should consider the concern raised by many about how to get 50G per lane from an FEC capability perspective. I assume there is no such implementation in the industry right now; if I am wrong, please correct me.

 

I am not proposing any of S1, S2, or S3 in this email. I raise this option so that we keep in mind the impact of KR4 FEC with LAUI at 1E-5 BER, if we refer to the current CDAUI-8 specification from the 802.3bs Atlanta meeting. This question was also asked by Mike Dudek after Helen Xu presented.

I think it is necessary to support interoperability between LAUI and LAUI-2 in IEEE 50GbE.

 

My primary view is that it is much better for both the 25.78125G and the 26.5625G electrical interfaces to operate at better than 1E-15, as in Gary's first email. Then the errors contributed by the electrical interface can be neglected relative to the pluggable module PMD links.

 

Thanks!

Xinyuan

 

From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
Sent: 2016/3/3 3:52
To: Wangxinyuan (Xinyuan); STDS-802-3-50G@xxxxxxxxxxxxxxxxx
Subject: RE: [802.3_50G] CAUI-4 operating modes

 

Hello Xinyuan,

 

Are you aware of any switch silicon that supports KP4 FEC on CAUI-4 or a proprietary LAUI-2 interface?

 

I am not aware of any, so in which situation would we use CAUI-4 or LAUI-2 with KP4?

 

Thank you

 

Chris

 

From: Wangxinyuan (Xinyuan) [mailto:wangxinyuan@xxxxxxxxxx]
Sent: Tuesday, March 01, 2016 7:22 PM
To: Chris Cole; STDS-802-3-50G@xxxxxxxxxxxxxxxxx
Subject: RE: [802.3_50G] CAUI-4 operating modes

 

Another option?

Just borrowing from Chris's table, with an added “S3”:

Scenario | Supported Applications | FEC Type   | 50Gb/s xAUI  | 100Gb/s xAUI    | 200Gb/s xAUI
S3       | Long-term, Near-term   | KP4 RS-544 | LAUI, LAUI-2 | CAUI-2, CAUI-4? | CCAUI-4, CCAUI-8

 

 

From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
Sent: 2016/3/2 6:48
To: STDS-802-3-50G@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_50G] CAUI-4 operating modes

 

The 50/100/200G SG will be making a choice in Macau between two objective setting scenarios for 50, 100, 200Gb/s xAUI chip-to-module interfaces.

 

Scenario | Supported Applications  | FEC Type   | 50Gb/s xAUI  | 100Gb/s xAUI | 200Gb/s xAUI
S1       | Backwards Compatibility | KR4 RS-528 | LAUI, LAUI-2 | CAUI-2       | N.A.
S1       | Long-term Mainstream    | KP4 RS-544 | LAUI         | CAUI-2       | CCAUI-4
S2       | Long-term Mainstream    | KP4 RS-544 | LAUI         | CAUI-2       | CCAUI-4

 

If the SG elects S1, there is an auxiliary question whether we will need two sets of 50G & 100G PMD objectives, for example SR and SR2 each with both KR4 and KP4 FEC.

 

If the SG elects S2, then the backwards compatibility interfaces will be defined outside of the IEEE, in an MSA or Consortium. My view is that complex Ethernet logic is best defined in the IEEE, as it has the most rigorous process and leads to the broadest industry input and review. However, it is also understandable that, if there is a desire to move quickly, S1 requires more work than S2 and puts more pressure on an aggressive schedule.

 

Chris

 

From: Mark Nowell (mnowell) [mailto:mnowell@xxxxxxxxx]
Sent: Thursday, February 25, 2016 1:23 PM
To: STDS-802-3-50G@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_50G] CAUI-4 operating modes

 

Jeff,

 

I think those are all market questions, not necessarily IEEE issues.   

 

But I agree option 2 seems to me what the market would drive to…

 

Mark 

 

On 2/25/16, 3:00 PM, "Jeffery Maki" <jmaki@xxxxxxxxxxx> wrote:

 

Mark,

 

My point was that once one considers the needs for interoperation over three major system generations, then one will probably need option (2) when one wishes to maintain density on the latest system generation and therefore skip option (1). Or said differently, no matter how hard we work to make (3) true, we’ll still find cases where a given system may need to adopt option (2). Option (2) seems inevitable, but will module integrators make such modules?

 

Today, we find use of 100GBASE-SR10 in CFP and CFP2. Nobody wishes to make 100GBASE-SR10 in QSFP28. It would be nice to migrate users to 100GBASE-SR4 that is found in QSFP28, but nobody wishes to make 100GBASE-SR4 in CFP or CFP2 with the FEC included. There is no interoperation over MMF between CFP/CFP2 and QSFP28 system generations.

 

Jeff

 

 

From: Mark Nowell (mnowell) [mailto:mnowell@xxxxxxxxx]
Sent: Thursday, February 25, 2016 7:42 AM
To: Jeffery Maki <jmaki@xxxxxxxxxxx>; STDS-802-3-50G@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_50G] CAUI-4 operating modes

 

Jeff, 

 

You are absolutely correct that interoperability is the prime purpose of standards.

 

When the technology allows us to proceed through multiple technology generations without breaking interoperability, that works well for the industry (see my recent email to Ali using 10GE as an example).

 

When the technology changes so that we wish to move to a new generation in such a way that it is no longer interoperable with the old generation, but we see some market need (density, cost, etc.), we still do it and figure out the optimal way. If the PCS and FEC architecture doesn’t change, but perhaps the number of lanes does, then it is a fairly self-contained change limited to implementation only; for example, the industry move from CAUI-10 based 100GBASE-LR4 (using CFP modules) to CAUI-4 based 100GBASE-LR4 (using CFP2/4 or CPAK modules). Both of these still interoperate with each other.

 

If the PCS/FEC architecture changes, then we again do it if it addresses a market need, and we deal with that by having multiple modes in our silicon. For example, when 100GBASE-SR10 migrated to 100GBASE-SR4, we not only changed the AUI & PMD we supported but also needed to add the RS(528,514) FEC into our silicon. This is a different PCS/FEC architecture than the original 100GE PCS/FEC architecture, and they would not interoperate if you plugged a common PMD in between them. The system therefore needed to have management capabilities to know which PCS/FEC architecture mode to put the silicon in, in order to have interoperability.
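
(Purely as an illustrative sketch of the kind of mode management described here, with hypothetical names and structure, not real driver code; the FEC requirements per PMD follow the 802.3ba/bm specs:)

    # Hypothetical sketch: host management picks a PCS/FEC mode per attached PMD
    PCS_FEC_MODE = {
        "100GBASE-SR10": "100GBASE-R PCS, no FEC",        # original 802.3ba architecture
        "100GBASE-LR4":  "100GBASE-R PCS, no FEC",        # FEC not required
        "100GBASE-SR4":  "100GBASE-R PCS + RS(528,514)",  # 802.3bm, FEC required
    }

    def configure_port(pmd_type: str) -> str:
        # Management reads the module type and programs the matching mode;
        # an unknown module has no interoperable mode and is rejected.
        if pmd_type not in PCS_FEC_MODE:
            raise ValueError(f"no known PCS/FEC mode for {pmd_type}")
        return PCS_FEC_MODE[pmd_type]

    print(configure_port("100GBASE-SR4"))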

 

So the reality is that we have really non-interoperable PCS/FEC modes for 100GE in our systems to support the various 100GE flavors out there and we live with that since it was the right thing to do for the market and it works.

 

The debate, as I’m interpreting it here, is that we as an industry are very interested in moving to PMDs based on 50Gb/s signaling because we see cost advantages in lowering the number of lanes. We’re therefore looking at the PCS/FEC architectures to see what is the best solution to run over those PMDs, and it is looking like it might, again, be a new PCS/FEC architecture in order to maximize the performance or minimize the cost of the PMDs. I believe there is a lot of consensus on this being the right long-term thing to standardize. And it will be implemented as yet another mode in the silicon, and the system management will figure out how to switch between the various modes.

 

Now we’re also seeing people saying that it would be great to be able to use these “better” PMDs with the old PCS/FEC architectures so we are able to interconnect these “new” products with these new AUIs to the "current" products with the current AUIs.

 

We have 3 choices to do that:

  1. Run the new products in the existing current mode, which will definitely be implemented in the silicon. The disadvantage is that you lose half the density on the new products, as you are running over current AUIs at half the bit rate.
  2. Push the translation into a pluggable module that would insert into the “current” product. For 100GE as an example, essentially running no-FEC over the CAUI into the module and then generating the “new” FEC in the module. No new PMDs are required to be developed, but new pluggable modules are. The burden is that these new QSFP modules that support the new PMD specs will also have some extra digital logic in them to generate/terminate the new FEC.
  3. Support the old PCS/FEC/AUI architecture and develop new PMDs based on this 50Gb/s signaling to guarantee interop. Again, new modules will need to be built. The QSFPs supporting the new PMDs and the new denser pluggable modules designed for the new AUI will need to have variants that support the potentially two sets of new PMDs. I don’t have enough information to know how different those are.

My confusion, from a standards process perspective, with option 3 is that I don’t know how to specify a new AUI that alters interoperability without defining at least one PHY that uses it. Hence my original question, which was what other objectives we would need to add to support this.

 

Mark 

 

On 2/24/16, 11:22 PM, "Jeffery Maki" <jmaki@xxxxxxxxxxx> wrote:

 

Mark,

 

Great summary. This problem of interoperation is, I believe, the number one reason we have standards. Not all system vendors will offer product on the same time horizon, so to keep things working we need interoperation over disparate generations.

 

This problem appears to persist when we move one day to define 100G electrical lanes. We will need a scheme for interoperating systems still using 50G electrical lanes with those using the new 100G electrical lanes. This interoperation problem then will not just be for 100GbE, but also for 200GbE and 400GbE. We should figure out the best scalable approach.

 

The system with 50G electrical lanes needs to interoperate with old systems using 25G electrical lanes and new systems with 100G electrical lanes. It seems at this point the new system with 100G electrical lanes may only be able to interoperate with the old system using 25G electrical lanes if an extender sublayer is put in a module on the legacy 25G electrical lane system that converts to whatever FEC is used for the 100G electrical lane system. Here we can recover interoperation over more than two system generations with the use of the extender sublayer with FEC accommodation. This would mean lots of different modules with the correct FEC and correct optics.

 

How many generations of interoperation do we need? If only two, then the use of an optional end-to-end FEC (second PHY) would appear to be sufficient for FEC accommodation, but we still need new optics (new modules) on the legacy system. Is anyone able to argue that only two generations of interoperation are required? I’m not, but two is certainly the minimum that I believe we have to achieve.

 

Jeff

 

 

From: Mark Nowell (mnowell) [mailto:mnowell@xxxxxxxxx]
Sent: Wednesday, February 24, 2016 7:03 PM
To: STDS-802-3-50G@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_50G] CAUI-4 operating modes

 

All,

 

Since I started this burst of activity with my questions on the ad hoc call today, let me re-iterate the point I was making.  This is purely coming from my chair’s perspective and looking at what the SG needs to close out in terms of objectives and making sure we all understand the implications and consequences of what we adopt so we don’t get wrapped into knots in Task Force.

 

The proposal from Ali today was to support an objective for an optional 50GAUI-2 and an optional 100GAUI-4.

 

My question was whether that was sufficient to achieve what is intended.  I think 50GE and 100GE cases are slightly different, so I’ll tackle them separately.

 

A general comment first

To try and clarify the confusion that is happening around CAUI-4 modes, let me try another way. We only have one mode of CAUI-4 defined (by 802.3bm), and we have two FECs defined, RS(528) and RS(544) (by 802.3bj). Because the RS(528) FEC runs at the same bit rate as CAUI-4, and because CAUI-4 was defined to run at a BER that doesn’t require FEC, we can run the RS(528) FEC over CAUI-4 without consequence and have the advantage of the FEC gain being available completely for the optical PMD link. The key point here is that we are not running the CAUI-4 at different bit rates.
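
(A quick back-of-the-envelope check of why those rates line up, purely illustrative and not text from the specs:)

    # 100GbE rate arithmetic: RS(528,514) "KR4" keeps the CAUI-4 lane rate
    # unchanged, while RS(544,514) "KP4" does not.
    mac_rate = 100e9                                  # 100GbE MAC rate, bits/s

    pcs_rate = mac_rate * 66 / 64                     # 64b/66b PCS only
    print(pcs_rate / 4)                               # 25.78125e9 per CAUI-4 lane

    kr4_rate = mac_rate * (257 / 256) * (528 / 514)   # 256b/257b transcode + RS(528,514)
    print(kr4_rate / 4)                               # 25.78125e9 -- same lane rate as above

    kp4_rate = mac_rate * (257 / 256) * (544 / 514)   # 256b/257b transcode + RS(544,514)
    print(kp4_rate / 4)                               # 26.5625e9 -- ~3% faster, no longer plain CAUI-4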

 

50GE

As Ali says, we do not want to sacrifice performance on the single-lane specifications, which I’m guessing will be based on an end-to-end RS(544) FEC that covers both the AUI and the PMD. This family of PHYs will be defined by the TF in line with the objectives set (which, for the PHYs with AUIs, are 100m MMF, 2km SMF and 10km SMF).

 

If an optional 50GAUI-2 is defined, I’m assuming that the interest is to use an RS(528) FEC, and therefore this is a new family of PHYs, since they won’t interoperate with the above family of PHYs from a bits-on-the-wire perspective. Further assumptions as to different PCSes reinforce this non-interoperable conclusion. Since I believe the assumption is that the PMD is still a single-lane PMD, its tx/rx specs will either be different from the single-lane PHY to achieve the same reaches as above, or the reaches will be different to use the same tx/rx as above.

 

The “simple” addition of an optional 50GAUI-2 to the 50GAUI-1 is more complex, as they will be running at different bit rates, different modulation formats and different BERs.
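
(For concreteness, a rough sketch of the two combinations being compared, assuming the single-lane AUI is PAM4 with RS(544) and the optional two-lane AUI is NRZ with RS(528), as discussed above:)

    # 50GbE AUI comparison (illustrative, under the assumptions stated above)
    mac_rate = 50e9
    kp4_total = mac_rate * (257 / 256) * (544 / 514)   # RS(544,514) path
    kr4_total = mac_rate * (257 / 256) * (528 / 514)   # RS(528,514) path
    print(kp4_total / 1)    # 53.125e9   -> 50GAUI-1: one PAM4 lane at 26.5625 GBd
    print(kr4_total / 2)    # 25.78125e9 -> 50GAUI-2: two NRZ lanes at 25.78125 GBd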

 

All of this CAN be considered by the SG/TF, BUT just adopting only an objective to support an optional 50GAUI-2 doesn’t really seem to provide any insight into what the TF needs to do. It also doesn’t enable the TF to develop more than one solution for an objective (e.g. 100m MMF). Unless there are PHYs that this proposed 50GAUI-2 is associated with, it is not clear to me that we have a way of including this 50GAUI-2 in the specification alone; we need more consideration on how to do it.

 

100GE

 

I originally thought 100GE was different, but the discussion above actually carries across almost the same. The difference we have is that with 100GE we only have one objective adopted that needs an AUI right now: 2-fiber 100m MMF.

 

My assumption again is that there is interest in this objective being met with a baseline based on end-to-end RS(544) FEC.

 

As I understand the optional AUI proposal, the goal would be to have the 100GAUI-2 end of the link run the existing PCS/RS(528) FEC (defined in 802.3ba and 802.3bj) in order to interoperate with a host at the other end that is using the CAUI-4 (and supporting RS(528)). Again, the consequence of this is that this is a different PHY, as it is running at a different bit rate. There are potentially two different 100GAUI-2 interfaces here, running at different bit rates with different FEC gain coverage. This will also obviously impact the PMD specification, so either the reach or the PMD specs will need to change.
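
(To make the “different bit rates” concrete, a quick calculation under the same illustrative assumptions as before:)

    # Two potential 100GAUI-2 lane rates (illustrative)
    mac_rate = 100e9
    rs528_total = mac_rate * (257 / 256) * (528 / 514)   # existing PCS + RS(528,514)
    rs544_total = mac_rate * (257 / 256) * (544 / 514)   # end-to-end RS(544,514)
    print(rs528_total / 2)   # 51.5625e9 per lane
    print(rs544_total / 2)   # 53.125e9 per lane -- same name, different bit rate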

 

Again, anything CAN be defined as long as we know what we are defining. I believe it is insufficient to suggest that an objective to define an optional AUI is enough. It is good in providing clarity on the intention of what people want to specify, though.

 

In summary, if these proposals are to be brought into the SG for adoption, I would hope we have some better clarity on how they would fit into the specification we would write (as that is our only goal within IEEE). I’d suggest looking at Table 80-2, as Gary pointed out, and figuring out how this table would be updated with these proposals.

 

I do recognize that it is hard to separate the implementation issues in the products we are all looking to build from the IEEE specifications that we are trying to write, but as chair, I need to remind the group of the IEEE specification aspects.

 

For what it is worth, I think we can achieve all of the intended goals that Ali and Rob Stone are trying to achieve without causing any of these specification challenges, just by selecting the other options in their slides: the bottom option on Ali’s slides 7 and 8 (http://www.ieee802.org/3/50G/public/adhoc/archive/ghiasi_022416_50GE_NGOATH_adhoc.pdf) and Rob’s “Brown Field Option B” on slide 5 of http://www.ieee802.org/3/50G/public/adhoc/archive/stone_021716_50GE_NGOATH_adhoc-v2.pdf. These all support the legacy hosts, do not require the creation of a new family of PHYs and PMDs in the industry (or the IEEE specification), and are essentially already architecturally supported.

 

Mark 

 

 

 

 

 

On 2/24/16, 6:04 PM, "Jeffery Maki" <jmaki@xxxxxxxxxxx> wrote:

 

Rob,

 

My “strictly speaking” was meant as a head nod to what you say. I was trying to narrow the subject when trying to understand Chris. Confusion is occurring from the use of the terms KR4 and KP4, and what all is meant in the context of 50G connects.

 

Below, I have a typo. “…LAUI-2 could be devised to need to coding gain…” should be “…LAUI-2 could be devised to need no coding gain…”.

 

Jeff

 

 

From: Rob Stone [mailto:rob.stone@xxxxxxxxxxxx]
Sent: Wednesday, February 24, 2016 2:53 PM
To: STDS-802-3-50G@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_50G] CAUI-4 operating modes

 

Hi Jeff

 

You are correct that there is no IEEE 50G Ethernet, but there is a 50G Ethernet standard out there based on 2 x 25G lanes (25G Consortium), and it has been put into hosts supplied by several companies. This data was shared in the Atlanta meeting; it can be seen in the Dell Oro forecast on slide 3 (http://www.ieee802.org/3/50G/public/Jan16/stone_50GE_NGOATH_02a_0116.pdf).

 

Thanks

 

Rob

 

From: Jeffery Maki [mailto:jmaki@xxxxxxxxxxx]
Sent: Wednesday, February 24, 2016 2:41 PM
To: STDS-802-3-50G@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_50G] CAUI-4 operating modes

 

Chris and others,

 

I am a bit confused. Strictly speaking, no host has 50G Ethernet today, so when one is built to have 50G Ethernet, it can also be built to have any required FEC.

 

Are you mentioning KR4 and KP4 just to give a flavor of the difference in these two potential codes to be adopted? In this way, when mentioning KR4, you mean LAUI-2 could be devised to need to coding gain itself just as CAUI-4 does not need any coding gain to operate.

 

Jeff

 

 

From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
Sent: Wednesday, February 24, 2016 1:47 PM
To: STDS-802-3-50G@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_50G] CAUI-4 operating modes

 

Mike,

 

The optics we would use with LAUI-2 with KR4 RS-528 FEC would be the same optics as those we would use with LAUI-2 with KP4 RS-544 FEC, except running at a 3% lower rate. The SG will have to decide which we define in the project, and which outside of the project, if any.
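
(For reference, the ~3% figure is just the ratio of the two FEC-encoded line rates; a quick check:)

    # Ratio of KP4 (RS-544) to KR4 (RS-528) encoded line rates, same payload
    print(544 / 528)             # ~1.0303 -> KR4-FEC optics run ~3% slower
    print(26.5625 / 25.78125)    # same ratio, expressed as per-lane rates in Gb/s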

 

Chris  

 

From: Mike Dudek [mailto:mike.dudek@xxxxxxxxxx]
Sent: Wednesday, February 24, 2016 12:05 PM
To: STDS-802-3-50G@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_50G] CAUI-4 operating modes

 

But what PMD is LAUI-2 going to support? If we don’t have an objective for a PMD that requires it, then in my opinion it would be out of scope to develop it without an explicit objective.

 

Mike Dudek 

QLogic Corporation

Director Signal Integrity

26650 Aliso Viejo Parkway

Aliso Viejo  CA 92656

949 389 6269 - office.

Mike.Dudek@xxxxxxxxxx

 

 

From: Kapil Shrikhande [mailto:kapils@xxxxxxxx]
Sent: Wednesday, February 24, 2016 11:32 AM
To: STDS-802-3-50G@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_50G] CAUI-4 operating modes

 

To match the capabilities of CAUI-4 (4x25G), the LAUI-2 (2x25G) C2M interface should operate without FEC at a BER of 1e-15 or better (Gary also points to the BER requirement for CAUI-4), so that a no-FEC PHY using LAUI-2 could operate at 1e-12. And as stated by Chris, LAUI-2 will also support an RS-FEC encoded signal (KR4 and KP4 FEC) for those PMDs that require FEC.
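
(To put the two BER targets in perspective, a rough mean-time-between-errors calculation for a single 25.78125 Gb/s lane:)

    # Mean time between bit errors on one 25.78125 Gb/s lane
    lane_rate = 25.78125e9                    # bits/s
    for ber in (1e-12, 1e-15):
        print(ber, 1 / (lane_rate * ber), "seconds between errors on average")
    # ~39 s at 1e-12 versus ~10.8 hours (~38,800 s) at 1e-15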

 

Kapil.

 

 

 

On Wed, Feb 24, 2016 at 10:53 AM, Brad Booth <bbooth@xxxxxxxx> wrote:

I like this topic as it does highlight one of the aspects previously mentioned in January about the need to have a low or zero FEC latency AUI.

 

For the 25G-based interface (CAUI-4), the task force(s) wisely provided the ability for the interface to operate with and without FEC. This has permitted flexibility in implementations. For example, the ability to use a CAUI-4 without FEC between an Ethernet adapter's ASIC and FPGA will permit a low latency interface; whereas, between the adapter's FPGA and the switch's ASIC, FEC can be used to provide end-to-end error correction.
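
(As a rough sense of the FEC latency at stake, assuming the decoder must buffer a full RS codeword before it can correct it; this is only a floor, and real decoder latency is implementation dependent:)

    # Lower bound on RS-FEC latency: time to accumulate one codeword
    # RS(528,514) and RS(544,514) both use 10-bit symbols
    for n_symbols, line_rate in ((528, 103.125e9), (544, 106.25e9)):
        codeword_bits = n_symbols * 10
        print(n_symbols, codeword_bits / line_rate * 1e9, "ns per codeword")
    # ~51 ns in both cases before any decoding, versus ~0 ns for a FEC-bypass path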

 

It would be great if we continue to provide interfaces like CAUI-4 that can transport either FEC or non-FEC data. This would provide the greatest level of flexibility for various implementations that could occur.

 

I've requested time to make a presentation in Macau to discuss these use cases in both the 50G and 100G market.

 

Thanks,
Brad

 

On Wed, Feb 24, 2016 at 10:31 AM, John D'Ambrosia <jdambrosia@xxxxxxxxx> wrote:

Chris,

You are mistaken.

The RS FEC was developed and a 4-lane solution was developed, but it was not CAUI-4. That was done in 802.3bm. There is no way I would ever try to steal that credit away from Dan Dove, who had the fortitude of a saint with that effort.

 

I think the confusion is coming from FEC and non-FEC protected interfaces.

 

From what I see

The 802.3bj RS-FEC clause was developed to support -cr4, -kr4, and -kp4.

802.3bm developed CAUI-4 and specified its operation to 10^-15 without FEC. However, the architecture itself can be done in such a way that the CAUI-4 is carrying either non-FEC or FEC-protected data. 802.3bm also developed -sr4, where FEC is mandatory.

 

So the AUI based on 25Gb/s signaling can be independent of whether there is FEC or not.

 

I think everyone is right, but it clearly points out we have to be very specific with language.

 

John

 

From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
Sent: Wednesday, February 24, 2016 1:24 PM
To: John D'Ambrosia <jdambrosia@xxxxxxxxx>; STDS-802-3-50G@xxxxxxxxxxxxxxxxx
Subject: RE: [802.3_50G] CAUI-4 operating modes

 

CAUI-4 with KR4 RS-528 FEC was developed in the P802.3bj project you led, to first support CR4. P802.3bm then defined SR4 with CAUI-4 with KR4 FEC. This enabled subsequent efforts to quickly define optical PMDs that use KR4 FEC. P802.3bm also defined CAUI-4 with no FEC to support existing PMDs: LR4 and ER4.

 

So coming out of 802.3bm we had two CAUI-4 operating modes, one without FEC for backwards compatibility, and one with FEC for new PMDs.


Chris

 

From: John D'Ambrosia [mailto:jdambrosia@xxxxxxxxx]
Sent: Wednesday, February 24, 2016 10:04 AM
To: STDS-802-3-50G@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_50G] CAUI-4 operating modes

 

Reading the spec, it looks more like the specification of CAUI-4 is done without assuming FEC, but a port type may include FEC that could go over the CAUI-4.

 

From: Rick Rabinovich [mailto:rrabinovich@xxxxxxxxxxx]
Sent: Wednesday, February 24, 2016 12:20 PM
To: STDS-802-3-50G@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_50G] CAUI-4 operating modes

 

Correct, CAUI-4 does not include FEC.

 

Rick Rabinovich

Hardware Architect - Signal Integrity


rrabinovich@xxxxxxxxxxx

Phone: +1 (818) 208-7328

26601 W. Agoura Rd.

Calabasas, CA 91302 US

visit: www.ixiacom.com

 

From: Gary Nicholl (gnicholl) [mailto:gnicholl@xxxxxxxxx]
Sent: Wednesday, February 24, 2016 9:18 AM
To: Rick Rabinovich <rrabinovich@xxxxxxxxxxx>
Cc: STDS-802-3-50G@xxxxxxxxxxxxxxxxx
Subject: Re: CAUI-4 operating modes

 

Perhaps, but 802.3bj did not define CAUI-4 and Chris’s comment was  specifically on CAUI-4.

 

Gary 

 

From: Rick Rabinovich <rrabinovich@xxxxxxxxxxx>
Date: Wednesday, February 24, 2016 at 12:15 PM
To: Gary Nicholl <gnicholl@xxxxxxxxx>
Cc: "STDS-802-3-50G@xxxxxxxxxxxxxxxxx" <STDS-802-3-50G@xxxxxxxxxxxxxxxxx>
Subject: RE: CAUI-4 operating modes

 

Hi Gary,

 

Thank you for bringing this up. CAUI-4, defined in IEEE802.3bm, was specified without FEC to eliminate the latency incurred.

 

Perhaps Chris was also referring to 4x25G as defined in IEEE802.3bj which includes RS-FEC for 100GBASE-CR4.

 

Cordially,

 

 

Rick Rabinovich

Hardware Architect - Signal Integrity


rrabinovich@xxxxxxxxxxx

Phone: +1 (818) 208-7328

26601 W. Agoura Rd.

Calabasas, CA 91302 US

visit: www.ixiacom.com

 

From: Gary Nicholl (gnicholl) [mailto:gnicholl@xxxxxxxxx]
Sent: Wednesday, February 24, 2016 9:08 AM
To: STDS-802-3-50G@xxxxxxxxxxxxxxxxx
Subject: [802.3_50G] CAUI-4 operating modes

 

Following on from the discussion this morning I checked 802.3bm and there is only a single operating mode for CAUI-4. 

 

CAUI-4 C2M is defined in Annex 83E. There is only one operating mode and that assumes no FEC.

 

 

There is no separate FEC operating mode, where some of the FEC gain is used to relax the CAUI-4 electrical specifications. 

 

In 802.3bm if RS-FEC is being used, it is  carried completely transparently over the CAUI-4 interface, and all of the FEC gain is used for the PMD (i.e. 100GBASE-SR4). The CAUI-4 specification is completely independent  of whether FEC is being used on the link or not.  Perhaps this is what Chris meant by “two CAUI-4 operating modes” on the call this morning, even though from a CAUI-4 perspective there  is only a single operating mode? 

 

Another way to state this is that the FEC requirements for the host are defined by the PMDs to be supported and not the CAUI.

 

Gary