Paul/Brad/Geoff,

The Ethernet network of today is not your network of a decade ago, which was driven by the Enterprise LAN. Two distinct networks have emerged:

- Cloud data center - where an Ethernet fabric is commonly used to build very large Clos networks; this segment is leading the Ethernet speed curve.
- Traditional Enterprise LAN - still exists, with greater volume than cloud data centers, but its speed requirements lag by at least 5 years.

Cloud data centers are more aligned with CMOS nodes and CPU cycles: they want to take advantage of Moore's Law efficiency to increase performance and reduce operating expenses on a shorter cycle than traditional LAN networks, where longevity is desired.

As ASIC/switch I/O migrated from 10 Gb/s to 25 Gb/s per lane, 25 GbE emerged as the natural breakout solution instead of the more complex MLG transport scheme. I expect that as we move from 25G to 50G I/O, 50 GbE will be the natural breakout, and at minimum cost.

On May 12, 2015, at 5:22 AM, Kolesar, Paul <PKOLESAR@xxxxxxxxxxxxx> wrote:

Geoff, Brad,

You raise good points. But while a trend of smaller rate increments does raise those questions, it is also true that the life duration of each rate is extending. This is because, as the Ethernet market continues to expand, user needs are becoming more spread out. ROI projections also need to consider that the higher rates composed of multiple lanes will evolve towards fewer lanes over time. All this speaks to future solution sets of increasing variety, even if IEEE is able to put standards in place before the market fragments via MSAs.
The picture is undoubtedly becoming more complex and more difficult to manage well.

Paul

Brad-

Thanks for throwing some additional real factors into the decision. There is another one that I would like to throw in. Developing both the standard and the new hardware for each speed step is not free. In order for high-speed Ethernet to remain a viable business, each speed has to have a long enough market life to recoup the up-front investment and make some profit.

Geoff

On May 11, 2015, at 5:49 PM PDT, Brad Booth <bbooth@xxxxxxxx> wrote:

This discussion is interesting in that on one hand we're conversing about 50G, which is bleeding edge today, but then we're shooting for aggregated-bandwidth links in 3-5 years that show no sign of being on the edge.

50G serial (or even 100G serial) is interesting as a server link if it provides better economics than the existing solution. The same applies to uplinks. If 200G is economically competitive compared to 100G or 400G or 1.6T, then it will gain traction in the market. But it's not just the cost of the optical module; it's the cost of the whole ecosystem. The uplink bandwidth has an impact on the FIB, which has an impact on switch memory requirements.

There are a couple of factors that I believe need to be considered: the laws of physics (how much bandwidth can we put down a single lane) and the laws of economics (how do you make sure there's sufficient market to justify the solution). When Ethernet operated at 10x speed increments, it was much simpler to ensure the laws of economics were being met. Does 200G satisfy the laws of economics? Does 800G?

All of this is directly impacted by the time it takes to create a standard. Is it two years? Three years? Or four? Or would it be wiser for the working group to reconsider how it does projects?
Should we look at a project that decouples the speed of the MAC (which takes mere seconds to change for each new project speed) from the speed of the PHY (which, as we all know, is where the lion's share of the work occurs)? This could permit the speed of the MAC to merely be an aggregate of similar-speed PHYs on a base-2 scale (1, 2, 4, 8, 16, etc.).

Just food for thought,
Brad

On Fri, May 8, 2015 at 8:18 AM, Scott Kipp <skipp@xxxxxxxxxxx> wrote:

All,

I see a different and much more prolific progression for 1RU switches. The switch Vineet mentions is based on a 64-port ASIC, while higher-density switches are using 128-port ASICs today. This exceeds the port density of the SFP (the first form-factor standard that I worked on) and pushes us towards my beloved QSFP family.

Here is a progression with a 128-port ASIC in a 1RU switch:

- Today = 32 x QSFP+ with 10G downlinks and 40G uplinks - end users decide the ratio of uplinks to downlinks with breakout cables.
- 2015/2016 = 32 x QSFP28 with 10/25G downlinks and 40/100G uplinks.
- 50G era (probably deployed in 2019) = 32 x QSFP56 with 10/25/50G downlinks and 40/100/200G uplinks. Do you want 1, 2 or 4 lanes at 10, 25 or 50G?
- Future (dream for the mid-2020s) = 32 x QSFP100 with 25/50/100G downlinks and 100/200/400G uplinks. Do you want 1, 2 or 4 lanes at 25, 50 or 100G? Maybe we can still support 10G on each port as well.

This shows the versatility that ASICs will hopefully support and the roadmap that Fibre Channel has supported for years. You can see a vision for the future in the 2015 Ethernet Roadmap in exquisite detail at www.ethernetalliance.org/roadmap/. The Ethernet Alliance will be giving out free printed copies of the 18" x 24" roadmap in Pittsburgh. There will also be a special gift related to the roadmap at the social on Tuesday night - don't miss it.

Are we limited to 128-port ASICs? No. Higher-port-count ASICs and multi-ASIC configurations are driving COBO and other embedded solutions that will surpass the capability of the venerable QSFP.
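[Editor's illustration] The "1, 2 or 4 lanes at 10, 25 or 50G" question above can be enumerated with a short sketch. This is hypothetical code, not from any 802.3 contribution; it simply lists the aggregate port speeds such a flexible quad-lane cage could offer.

```python
# Hypothetical sketch enumerating the port speeds a QSFP56-style cage could
# offer if each port runs 1, 2 or 4 lanes at 10, 25 or 50 Gb/s per lane.
from itertools import product

def port_speeds(lane_counts=(1, 2, 4), lane_rates=(10, 25, 50)):
    """Distinct aggregate port speeds in Gb/s, sorted ascending."""
    return sorted({n * r for n, r in product(lane_counts, lane_rates)})

print(port_speeds())
# -> [10, 20, 25, 40, 50, 100, 200]
# This covers today's 10/25/40/100 GbE plus the proposed 50 GbE (1x50 or
# 2x25) and 200 GbE (4x50); 20G (2x10) has no corresponding Ethernet rate.
```

Note that 50G and 100G each appear twice in the lane-count/rate grid (2x25 = 1x50, 4x25 = 2x50), which is exactly the single/multi-lane evolution discussed earlier in the thread.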
Maybe the uQSFP will be useful in matching the needs of these higher-port-count ASICs. The future is dense!

Kind regards,
Scott

These are the port configurations for "1RU fixed switches" (Top of Rack) that will be enabled by 50G / 200G ports. The downlink-to-uplink bandwidth ratio is 3:1 or 2:1, depending on 4 versus 6 QSFPs. Note that this applies to any 1RU box, including aggregation switches and routers (not just server connections).

- Today = 48 x SFP 10G downlinks + 6 x QSFP 40G uplinks.
- Soon = 48 x SFP 25G downlinks + 6 x QSFP 100G uplinks.
- Future = 48 x SFP 50G downlinks + 6 x QSFP 200G uplinks.
- Future (dream) = 48 x SFP 100G downlinks + 6 x QSFP 400G uplinks.

--vineet

I agree there is a lot of merit to standardizing 200G as a partner to 50G serial I/O and continuing the factor-of-4 downlink/uplink relationship - especially given that the SI and module challenges seem relatively doable.

One additional thought - if we agree that 50/200 makes sense, would it follow that 100/400 would also pair up? That would enable a two-lane twinax DAC server interconnect paired with a 400G uplink. The 400G would already be covered in .bs, and the 100G may "come for free" with 200G, just with fewer lanes.

So it would seem, in my opinion, that 50, 100 and 200G based on 50G I/O would be relatively mainstream PMDs and would merit discussion for inclusion (at the risk of project overload!).

Thanks,
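[Editor's illustration] The 1RU configurations vineet lists above, together with the factor-of-4 SFP/QSFP pairing, can be sanity-checked with a quick sketch. This is hypothetical code, not from the thread; it assumes each QSFP uplink runs at exactly 4x the SFP downlink rate.

```python
# A quick sketch (not from any 802.3 contribution) checking the arithmetic
# behind the 1RU configurations above: 48 SFP downlinks plus 4 or 6 QSFP
# uplinks, where each QSFP runs at 4x the SFP rate (the factor-of-4 pairing).

def oversubscription(n_down, down_gbps, n_up, qsfp_factor=4):
    """Downlink-to-uplink bandwidth ratio for an SFP/QSFP 1RU switch."""
    up_gbps = qsfp_factor * down_gbps   # e.g. 50G SFP pairs with 200G QSFP
    return (n_down * down_gbps) / (n_up * up_gbps)

for down in (10, 25, 50, 100):          # today, soon, future, dream
    print(down, oversubscription(48, down, 6), oversubscription(48, down, 4))
# With 6 QSFPs the ratio is 2:1; with 4 QSFPs it is 3:1 - and because the
# QSFP rate scales with the SFP rate, the ratio is the same at every step.
```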
Rob

And 50G SFP / 200G QSFP for Ethernet will have nice alignment and re-use with the Fibre Channel roadmap for 64GFC SFP / 256GFC QSFP....

--vineet

Ali,

These are great examples. Standardizing 50G and 200G PMDs will continue the successful progression of single- and quad-channel devices for high-volume datacenter applications.
  Per-lane rate (Gb/s)   Single-lane form factor   Quad-lane form factor   Quad data rate (Gb/s)
  10                     SFP+                      QSFP+                   40
  25                     SFP28                     QSFP28                  100
  50                     SFP56                     QSFP56                  200

Another great example of a multi-lane 50G technology application was cited in your SMF Ad Hoc presentation surveying relevant papers from OFC 2015. In this post-deadline paper, Cisco authors presented a 2x50G PAM-4 (optical) 100 Gb/s QSFP28 transceiver using Cisco 50G PAM-4 optics and a Broadcom 50G PAM-4 (line-side) PHY. Measurement results were given for 10 km SMF and 100 m OM3.

Chris

John,

I see opportunity for a full spectrum of PMDs for both 50 GbE and 200 GbE, including the popular breakout option with a combination of QSFP56 and SFP56:

- CR
- KR
- MMF
- SMF PSM4/FR/LR

On May 7, 2015, at 1:31 PM, John DAmbrosia <John_DAmbrosia@xxxxxxxx> wrote:
Mark,

I would like to request clarification of your stated intent below. You state the CFI will focus on single-lane 50 Gb/s Ethernet. While I realize you are initiating this effort, in my opinion the discussion that I am seeing is essentially "n" x 50 Gb/s per lane, with both 50GbE and 200GbE being discussed.

As this is a consensus-building process, will you be allowing interested parties to bring presentations forward to justify why 200GbE should also be considered? Based on my conversations, I believe there are a number of individuals who would like these topics discussed together.

Could you also provide more insight into what you are proposing for single-lane 50GbE? Will this be like the .3by project - backplane, Cu twin-ax, and MMF? Or is that a TBD in your mind that you hope to address during consensus building?

Thanks in advance for your answers.

Regards,
John D'Ambrosia

Dear Colleagues:

I wanted to let everyone know that a number of people have started preliminary discussions that would lead towards having a Call-for-Interest on the topic of single-lane 50 Gigabit/s Ethernet at a future plenary meeting of 802.3. If anyone is interested in helping and contributing, please let me know or talk to me in Pittsburgh. As we get further along, we will be sharing some of the plans and data we are gathering to support the CFI.

Regards,
Mark