As you are aware there has been much off-line discussion of this and we have been encouraged to post our thoughts to the email reflector for the benefit of others.
I have made a comment against draft 3.1 to request making implementation of the link fault signalling state diagram optional for 2.5G and 5G data-rates. The actual change is in 46.3.4 as follows:
"The RS shall implement the link fault signaling state diagram (see Figure 46-11) for data rates of 10 Gb/s and above. For 2.5 Gb/s and 5 Gb/s data rates implementation of the link fault signaling state diagram is optional." I would be satisfied if it is only made optional for 2.5G rates.
The benefit of making the link fault state machine optional is that it allows legacy 2.5G implementations to more easily inter-operate with 2.5GBASE-T PHYs.
I have heard the following objections to making link fault signalling optional. I will list them below with my comments:
1. Your email where you say 2.5GBASE-T needs it to recover more quickly if there are problems during LPI
a. This is a fair point, although I would point out that if the state machine starts sending Remote Fault, data will be lost and the host system will see a fault condition being reported by the link fault state machine
2. It will be difficult to configure
a. Configuration can be done through MDIO
3. You will fail compliance testing if you use speeded up SGMII
a. Seeing as the only interface specified is XGMII, you cannot fail compliance testing if XGMII is not exposed
4. 802.3cb allows you to use SGMII if you use a shim layer in the PHY
a. Using a shim layer in the PHY is something I agree with. The problem is that this is 802.3bz, not 802.3cb, and the slides I have seen so far from 802.3cb address link fault signalling itself rather than the requirement that the state machine respond to local and remote fault.
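On point 2/2a, here is a hedged sketch of what MDIO-based configuration might look like: a Clause 45 write to a vendor-specific register bit that gates the optional state machine. The register address, bit position, and helper names are hypothetical illustrations, not taken from any standard.

```python
# Hypothetical illustration of gating an optional link fault state
# machine over MDIO. MMD 30 is the Clause 45 "vendor specific 1"
# device; the register address 0x8000 and the enable bit are invented
# here purely for the sketch.

LF_SM_ENABLE_BIT = 1 << 0   # hypothetical enable bit

def set_link_fault_sm(regs, enable):
    """regs: dict modelling an MDIO register space keyed by (devad, reg)."""
    key = (30, 0x8000)       # (MMD device address, register address)
    val = regs.get(key, 0)
    if enable:
        val |= LF_SM_ENABLE_BIT
    else:
        val &= ~LF_SM_ENABLE_BIT
    regs[key] = val
    return regs
```

A real implementation would issue the equivalent Clause 45 address/write frames on the MDIO bus rather than poking a dictionary, but the point stands that the knob is a single register write.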
If the state machine is made optional, I accept you would still need a shim layer in the PHY to do the byte to four-byte alignment in the transmit path if a speeded up SGMII is used for the MAC/PHY interconnect. If it is made mandatory, you would also need to implement the link fault state machine in the shim layer.
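To make the alignment step concrete, here is a hedged sketch (function name and padding choice are illustrative, not from any standard) of grouping a one-byte-per-clock GMII-style stream into four-byte XGMII-style transfers. It deliberately ignores control-character encoding and the XGMII rule that a frame start must land in lane 0; a real shim must handle both.

```python
def pack_xgmii(gmii_bytes):
    """Group a byte stream into 4-byte transfers (four XGMII lanes),
    padding the tail with idle (0x07) control characters.
    Simplified: no lane-0 start alignment, no TXC control bits."""
    IDLE = 0x07
    transfers = []
    for i in range(0, len(gmii_bytes), 4):
        word = gmii_bytes[i:i + 4]
        word += [IDLE] * (4 - len(word))  # pad a short tail with idle
        transfers.append(word)
    return transfers
```

Five input bytes, for example, become one full transfer plus a second transfer padded out with three idles.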
I would like to address the question of whether 2.5GBASE-T needs link fault signaling.
Like 10GBASE-T, the design of 5GBASE-T and 2.5GBASE-T is based upon the XGMII and associated Reconciliation Sublayer (RS) that performs the link fault signaling specified in 46.3.4. We should not remove this piece of the system without a thorough understanding of the consequences.
Removing the requirement for link fault signaling at the RS will have undesirable consequences for 2.5GBASE-T. One example is recovery from a fault during EEE low power idle.
xGBASE-T EEE low power idle mode is asymmetric and may be entered by one or both PHY transmitters. During LPI, the transmitters and receivers are powered down except during short refresh transmit periods. The PHY must handle long periods without any signal at the receiver, but maintain extremely precise clock synchronization with the link partner as well as keep all of the adaptive equalizers and cancellers updated for changing conditions.
Should anything go wrong in the PHY receiver, link fault signaling provides the mechanism for recovery without dropping link and performing a multi-second link retrain.
A PHY with a receiver fault during LPI uses fault messaging to wake up the link partner transmitter and clear the fault without dropping the link. It sends Local Fault toward the local RS. The RS responds by sending Remote Fault toward the link partner RS, which in turn sends idle (instead of LPI) into the link partner PHY, causing its transmitter to wake up and transmit idle symbols to the near-end PHY until the fault is cleared. Without fault signaling there is no way to force the link partner to wake and transmit a continuous signal for recovery other than dropping the link and retraining.
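The recovery sequence above reduces to the RS's three-way fault-response rule (the essence of the Figure 46-11 behavior). This is a hedged sketch only; the function name and the return strings are illustrative stand-ins for the actual sequence ordered sets.

```python
# Sketch of the RS fault-response rule, reduced to its three cases.
# "link_fault_status" stands for the fault condition decoded from
# received sequence ordered sets.

LOCAL_FAULT, REMOTE_FAULT, OK = "LF", "RF", "OK"

def rs_transmit(link_fault_status, mac_data):
    """Return what the RS puts on the wire for a given fault status."""
    if link_fault_status == LOCAL_FAULT:
        # Our receiver is in trouble: tell the link partner.
        return "send Remote Fault sequence ordered sets"
    if link_fault_status == REMOTE_FAULT:
        # Partner reported trouble: suppress MAC data and send idle,
        # giving the partner PHY a continuous signal (and waking its
        # transmitter out of LPI).
        return "send Idle"
    return mac_data  # normal operation: pass MAC frames through
```

Chaining the two ends together (PHY asserts Local Fault, local RS sends Remote Fault, partner RS sends idle) is exactly the LPI wake-up path described in the paragraph above.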
I think it is necessary to keep fault signaling as a requirement in Clause 46.
If the vast majority of people agree that having 2.5G Ethernet (in all its various and future forms) support link fault signalling is the right thing to do, then fair enough. My concern is that this has slipped in unnoticed and people are not aware of the extra requirement.
Also I am not sure that 2.5GBASE-T “needs” link fault signalling. My understanding is that it is unnecessary for 2.5GBASE-T but is required for 10GBASE-T and possibly for 5GBASE-T.
I would have preferred to simply take the 1G PCS, RS, and MAC and scale them to 2.5G. However, 802.3bz chose to scale down the 10G RS and MAC for 2.5G. I don't think it is a good idea to have the RS attached to one kind of PHY not support link fault signaling while supporting it in another when both PHYs are operating at the same speed.
Note that 802.3cb could simply have scaled down 10GBASE-R to 2.5G and avoided this whole link fault signaling issue, but didn't. We are aware of the scaled-up 1000BASE-X solutions in the field for 2.5G and crafted something that will allow legacy solutions to be compatible (see Annex 127B) while still aligning with the decisions the 802.3bz task force has made.
I do not think it is justified to ask 2.5GBASE-T to disable something it needs simply because some legacy non-IEEE sped-up version of 1000BASE-X, RS, and MAC can't handle fault signaling. If there is market demand to connect this legacy interface to 2.5GBASE-T, there are vendor-specific workarounds that can be applied.