Why not for optics? What’s the fundamental difference?
We have auto-negotiation for twinax copper cables (40G, 100G, 25G, and 50G/200G in the future).
If an optical module is used, there is no auto-negotiation, and the user selects the desired port speed.
I don't see much value in adding auto-negotiation for optics.
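For illustration, speed resolution under auto-negotiation (in the spirit of Clause 73, though this is only a sketch, not the actual IEEE 802.3 state machine, and the names are invented) amounts to picking the highest rate both ends advertise:

```python
# Hypothetical sketch of auto-neg speed resolution. Speeds in Gb/s.
# Priority-ordered abilities, highest first.
PRIORITY = [100, 50, 40, 25, 10]

def resolve_speed(local_abilities, peer_abilities):
    """Pick the highest speed both ends advertise, or None if no overlap."""
    common = set(local_abilities) & set(peer_abilities)
    for speed in PRIORITY:
        if speed in common:
            return speed
    return None

# Twinax copper: both ends advertise, link trains to the highest common rate.
print(resolve_speed([100, 50, 25], [50, 25, 10]))  # -> 50

# Optics: no such exchange -- the user simply configures the port speed.
```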
From: Matt Traverso [mailto:matt.traverso@xxxxxxxxx]
Scott, & All,
"Do you want 1, 2 or 4 lanes 25, 50 or 100G? Maybe we can still support 10G on each port as well. This shows the versatility that ASICs will hopefully support and the roadmap that Fibre Channel has supported for years."
Do you intend to advocate auto-negotiation or some sort of rate negotiation on a per-lane basis? I like this idea from a user perspective. However, I worry that the cost/complexity could be daunting:
- I assume adding auto-neg/similar to each SerDes has some die area impact (cost/power)
- Co-existence of duplex fiber media with parallel fiber media (implications for alarms and monitoring), and detection of a multi-lambda signal on a fiber that is single lambda at the far end...
However, Ethernet going down a path of 2x data rate increases, as Fibre Channel has done, provides a roadmap towards this sort of interop. Fast forwarding to implementation, I can imagine a model where we support the same baud rate with higher-order modulation, though there would be many link budget implications with such an approach.
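The gain from higher-order modulation at a fixed baud rate is simple arithmetic: bit rate equals symbol rate times bits per symbol. A small sketch using the 25G NRZ lane rate as the reference point (real 50G PAM-4 lanes actually run at a somewhat higher baud to carry FEC overhead, so this only illustrates the doubling):

```python
import math

def bit_rate_gbps(baud_gbd, levels):
    """Line bit rate = symbol rate x bits per symbol (log2 of modulation levels)."""
    return baud_gbd * math.log2(levels)

# At the ~25.78 GBd symbol rate of a 25G NRZ lane:
print(bit_rate_gbps(25.78125, 2))   # NRZ (2 levels)  -> 25.78125 Gb/s
print(bit_rate_gbps(25.78125, 4))   # PAM-4 (4 levels) -> 51.5625 Gb/s
```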
On IEEE, I agree with Ali that I'd like to see the discussion take place in this forum - I'm reminded of the Churchill quote, "It has been said that democracy is the worst form of government except all the others that have been tried."
On Fri, May 8, 2015 at 8:18 AM, Scott Kipp <skipp@xxxxxxxxxxx> wrote:
I see a different and much more prolific progression for 1RU switches.
The switch Vineet mentions is based on a 64-port ASIC, while higher density switches are using 128-port ASICs today. This exceeds the port density of SFP (the first form factor standard I worked on) and pushes us towards my beloved QSFP family.
Here is a progression with 128 Port ASIC in 1RU Switch
Today = 32 x QSFP+ with 10G downlinks and 40G uplinks – end users decide the ratio of uplinks to downlinks with breakout cables.
2015/2016 = 32 x QSFP28 with 10/25G downlinks and 40/100G uplinks.
50G era – probably deployed in 2019 = 32 x QSFP56 with 10/25/50G downlinks and 40/100/200G uplinks. Do you want 1, 2 or 4 lanes at 10, 25 or 50G?
Future (dream for mid 2020s) = 32 x QSFP100 with 25/50/100G downlinks and 100/200/400G uplinks. Do you want 1, 2 or 4 lanes at 25, 50 or 100G? Maybe we can still support 10G on each port as well. This shows the versatility that ASICs will hopefully support and the roadmap that Fibre Channel has supported for years.
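As a rough sketch of the flexibility described in this progression, the set of port speeds such an ASIC could offer is just the cross product of lane counts and per-lane rates (the values below are taken from the list above; the enumeration itself is purely illustrative):

```python
from itertools import product

# Lane counts and per-lane rates (Gb/s) from the QSFP100-era dream above.
LANES = (1, 2, 4)
RATES = (10, 25, 50, 100)

# Every distinct port speed reachable as lanes x rate.
speeds = sorted({lanes * rate for lanes, rate in product(LANES, RATES)})
print(speeds)  # -> [10, 20, 25, 40, 50, 100, 200, 400]
```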
You can see a vision for the future in the 2015 Ethernet Roadmap in exquisite detail at www.ethernetalliance.org/roadmap/. The Ethernet Alliance will be giving out free printed copies of the 18" x 24" roadmap in Pittsburgh. There will also be a special gift related to the roadmap at the social on Tuesday night – don't miss it.
Are we limited to 128 port ASICs? No.
Higher port count ASICs and multi-ASIC configurations are driving COBO and other embedded solutions that will surpass the capability of the venerable QSFP. Maybe the uQSFP will be useful in matching the needs of these higher port count ASICs. The future is dense!
These are the port configurations for “1RU fixed switches” (Top of Rack) that will be enabled by 50G / 200G ports.
The uplink / downlink bandwidth ratio is 3:1 or 2:1, depending on 4 versus 6 QSFPs.
Note that this applies to any 1RU box, including Aggregation Switches, Routers (not just Server connections).
Today = 48 x SFP 10G downlinks + 6 x QSFP 40G uplinks.
Soon = 48 x SFP 25G downlinks + 6 x QSFP 100G uplinks.
Future = 48 x SFP 50G downlinks + 6 x QSFP 200G uplinks
Future (dream) = 48 x SFP 100G downlinks + 6 x QSFP 400G uplinks
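The downlink/uplink bandwidth ratios behind these configurations can be checked with simple arithmetic – a small sketch (the function name is mine):

```python
from fractions import Fraction

def oversubscription(down_ports, down_gbps, up_ports, up_gbps):
    """Downlink:uplink bandwidth ratio for a fixed 1RU switch."""
    return Fraction(down_ports * down_gbps, up_ports * up_gbps)

# 48 x 10G downlinks with 6 x 40G QSFP uplinks -> 480:240 = 2:1
print(oversubscription(48, 10, 6, 40))   # -> 2
# The same 48 downlinks with only 4 QSFP uplinks -> 480:160 = 3:1
print(oversubscription(48, 10, 4, 40))   # -> 3
```

The same 2:1 and 3:1 ratios hold at every generation in the list, since downlink and uplink rates scale together.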
I agree there is a lot of merit in standardizing 200G as a partner to 50G serial I/O and continuing the factor-of-4 downlink/uplink relationship – especially given that the SI and module challenges seem relatively doable.
One additional thought – if we agree that 50/200 makes sense, would it follow that 100/400 would also pair up? That would enable a two-lane twinax DAC server interconnect paired with a 400G uplink. The 400G would already be covered in .bs, and the 100G may "come for free" with 200G, just with fewer lanes?
So it would seem in my opinion that 50, 100 and 200G based on 50G IO would be relatively mainstream PMDs, and would merit discussion for inclusion (at the risk of project overload!).
And 50G SFP / 200G QSFP for Ethernet will have nice alignment and re-use with the Fibre Channel roadmap for 64GFC SFP / 256GFC QSFP.
These are great examples.
Standardizing 50G and 200G PMDs will continue the successful progression of single and quad channel devices for high volume datacenter applications.
Another great example of multi-lane 50G technology application was cited in your SMF Ad Hoc presentation survey of relevant papers from OFC 2015.
In this post-deadline paper Cisco authors presented a 2x50G PAM-4 (optical) 100Gb/s QSFP28 transceiver using Cisco 50G PAM-4 optics and Broadcom 50G PAM-4 (line side) PHY. Measurement results were for 10km SMF and 100m OM3.
I see an opportunity for a full spectrum of PMDs for both 50GbE and 200GbE, including the popular breakout option with a combination of QSFP56 and SFP56:
- SMF PSM4/FR/LR
On May 7, 2015, at 1:31 PM, John DAmbrosia <John_DAmbrosia@xxxxxxxx> wrote:
I would like to request clarification of your stated intent below. You state the CFI will focus on single-lane 50Gb/s Ethernet. While I realize you are initiating this effort, in my opinion the discussion I am seeing is essentially "n" x 50Gb/s per lane, with both 50GbE and 200GbE being discussed.
As this is a consensus building process, will you be allowing interested parties to bring presentations forward to state justification for why 200GbE should also be considered? Based on my conversations, I believe there are a number of individuals who would like these topics discussed together.
Could you also provide more insight into what you are proposing for single-lane 50GbE? Will this be like the .3by project – backplane, Cu Twinax, and MMF? Or is that a TBD in your mind that you hope to address during consensus building?
Thanks in advance for your answers.
I wanted to let everyone know that a number of people have started preliminary discussions that would lead towards having a Call-for-Interest on the topic of single-lane 50 Gigabit/s Ethernet at a future plenary meeting of 802.3. If anyone is interested in helping and contributing, please let me know or talk to me in Pittsburgh. As we get further along, we will be sharing some of the plans and data we are gathering to support the CFI.