Thanks Chris for opening the discussion.
You have expressed my views better than I would have myself. :)
The Industry Connections meeting last night missed an opportunity to substantially discuss the major issue in front of us: which next Ethernet rate(s) and associated PMD(s) are important to the industry. We heard well-prepared presentations which raised important technical and market considerations and presented two different views of what's important. Unfortunately there was no time for questions, and most of the subsequent discussion time was allocated to administrative issues. At the end of the meeting, we agreed to start the discussion on the 802.3 Dialog Reflector.
Mark's presentation proposed 50G Ethernet with 40G and 50G serial PMDs as the next important industry priority. Scott's presentation proposed 50G and 200G Ethernet together as the next important industry priority, supported by nx50G (n=1, 2, 4) PMDs. There was broad agreement before and during the meeting that the workload to standardize everything is huge. Regardless of how we end up slicing the work: by Ethernet rate, by PMD, or by functionality (logic, copper, optics, etc.), we have to prioritize.
As a contributor to Scott's presentation, my position is that what's important to the industry is standardizing 50G and 200G Ethernet together, with at a minimum the backplane, chip-to-chip, and chip-to-module interfaces (LAUI, CAUI-2, CCAUI-4) needed to enable designing next-generation mainstream data center ASICs and switches based on 50G PAM4 technology. This is followed by copper cable, MMF, and short-reach SMF to enable 50G and 100G server I/O and 200G switch uplinks. 400G is already in standardization, so it will be available as an uplink option. (As an aside, the pronunciation of CCAUI, kha-ka-wee, sounds like an espresso drink, as in "I would like a double kha-ka-wee, 2% milk.")
The need for 200G Ethernet has been repeatedly challenged as not demonstrated. That misses the point. The need for an Ethernet rate above 100G is clear: servers with 50G I/O need switch uplinks faster than 100G. The real question is whether that rate is 200G or 400G. The industry roadmap of 10G server I/O with 40G switch uplinks, followed by 25G server I/O with 100G switch uplinks, followed by 50G server I/O with 200G switch uplinks, has many strong arguments in its favor, as shown in Scott's presentation.
What has not been demonstrated is a broad industry need for 40G serial PMDs. In fact, just the opposite is the case. Once 50G Ethernet and its PMDs are standardized, 40G Ethernet will plateau and decline. That's because 50G serial will cost the same as 40G serial while offering 25% more bandwidth, and few new deployments will choose to forgo 25% free bandwidth. There will be some 40G serial applications, but their importance is insignificant compared to the need for 200G Ethernet as the next high-volume, low-cost data center switch uplink rate. This is where the need to prioritize comes into play.
Hopefully the above is enough to start a vigorous discussion on the Reflector about which next Ethernet rate(s) and PMD(s) are important to the industry, including their relative priority.