
Re: [HSSG] Topics for Consideration


Ali Ghiasi wrote:
Mike and Jugnu,

You both have some very good points. On one hand, we have high-performance computing demanding the highest bandwidth
achievable; on the other hand, Jugnu brings up the fundamental requirement for a successful Ethernet standard: "mass market potential".
I would say cost-effectively achievable.  Bell Labs demonstrated multi-terabit transmission in 2000, but that system wasn't (and isn't) a product: we can't buy it, couldn't afford it if it were a real product, nor do we need it now, for that matter.
Mike also mentions this project will take 3.5-5 years. IEEE projects traditionally have not taken this long, which means we are
too early.
I have to disagree.  If you include the time spent by study groups, several projects fall within that range.  I expect this study group to take a year.  Add 2.5-3.5 years to get through five drafts of a standard, and again we're right in the range I've stated.

Based on input I've seen from end users, taken from a survey prior to the CFI, I believe we've started just in time. 

For example, when 802.3ae was started in 1999, 10Gig lasers/modulators had existed for more than
15 years.
Then 10G should not have been so "expensive" nor slow in market uptake, since we reused existing components, right?  I'm not quite sure what your point is.

I have listed the dilemmas we are facing:
    - Implementing 100 Gig in the near term means Nx10Gig
Having not seen a single presentation regarding possible solutions to the problem, I wouldn't be so sure this is the only cost-effective way to implement 100G (if that's the speed we include in our objectives).
    - Implementing 100Gig in a few years, the right answer might be Nx25Gig
and it might be something else.  I don't see your point.
    - Carriers want to leverage their existing DWDM layer, which means baud rates in the 9.95-12.5 Gig range

    - If LAG is implemented, why not allow N to be 4?
You must have heard the numerous complaints by now from the people who actually have to live with operating and troubleshooting Link Aggregation.  Link-layer aggregation is an unacceptable option.
    - Operation with different widths
    - Backward compatibility with XAUI, LX4?
    - The greatest bandwidth demands (100+Gig) are on VSR links <50 m, but the longer-reach (>10 km) links
    may be able to live with 4x10Gig.

All this means we should either define some sort of scalable architecture or just define a LAG method and
not define any PMDs!
I think it's a bit premature to come to this conclusion, but it makes for lively discussion.

It seems more reasonable to me to consider finally decoupling the physical pipe size from the rigid hierarchy used in the past.  Why not simply define a scalable interface that allows inverse multiplexing (physical-layer aggregation, not the type of aggregation you have described, which sounds like the current LAG) of an arbitrary (within some bounds, obviously) number of physical channels (10G) into a single logical link?  The SONET/SDH and Digital Wrapper/OTN world already has mechanisms to do this (VCAT, LCAS), and dynamically to boot.
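For illustration only, here is a minimal Python sketch of the inverse-multiplexing idea: frames striped round-robin across N member channels, tagged with sequence numbers so the receiver can restore order, with capacity grown by simply adding a member (roughly what LCAS permits in SONET/SDH). All names are hypothetical; real VCAT/LCAS operates on byte-interleaved containers with differential-delay compensation, not whole frames.

```python
from collections import deque

class InverseMux:
    """Toy model of physical-layer aggregation: one logical link
    striped across N physical channels."""

    def __init__(self, num_channels):
        self.channels = [deque() for _ in range(num_channels)]
        self.tx_seq = 0        # sequence number for reordering at the far end
        self.next_channel = 0  # round-robin distribution pointer

    def transmit(self, frame):
        # Tag each frame, then stripe it onto the next physical channel.
        self.channels[self.next_channel].append((self.tx_seq, frame))
        self.tx_seq += 1
        self.next_channel = (self.next_channel + 1) % len(self.channels)

    def add_channel(self):
        # Growing the logical link's capacity is just adding a member
        # channel to the distribution (the LCAS-like dynamic piece).
        self.channels.append(deque())

    def receive_all(self):
        # Drain every channel and reassemble by sequence number.
        frames = [f for ch in self.channels for f in ch]
        for ch in self.channels:
            ch.clear()
        return [frame for _, frame in sorted(frames)]

link = InverseMux(num_channels=4)   # e.g. 4 x 10G members -> one 40G logical link
for i in range(10):
    link.transmit(f"frame-{i}")
assert link.receive_all() == [f"frame-{i}" for i in range(10)]
```

The point of the sketch is only that the logical link's width is a parameter, not a fixed hierarchy step; everything hard (skew between channels, member failure, hitless resizing) is omitted.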


This would provide a much more flexible, scalable solution to customers. 

On the surface, it seems to me that with flexibility comes complexity, which leads to higher cost.  It's also not clear to me how the physical-layer aggregation you propose translates to a port on a switch.  Would I have to buy an N x 10G transceiver?  Would it be a WDM-like transceiver?  Would this work with a single fiber pair or multiple strands?  What would the relative incremental cost be (in percentages, not monetary units) to scale up?  Also, are you proposing that this would scale beyond 100G?  If so, how far?  You mention boundaries - I'm curious what you think the upper bound would be.  I hope you're planning to present something at the interim, as it would help me understand what you're really proposing and how that compares to other ideas.



In particular, it would allow them to grow capacity on any given link as needed, instead of having to install 10x10G channels up front.  Further, when they hit 100G, they wouldn’t be stuck until some other solution is defined – they could continue to grow.  



Jugnu Ojha

Avago Technologies


From: Mike Bennett [mailto:mjbennett@xxxxxxx]
Sent: Wednesday, August 02, 2006 12:21 PM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] Topics for Consideration


John, et al.,

> During our first meeting, I anticipate spending a lot of time focusing on objectives.  At the
> closing plenary I highlighted two issues / objectives that the SG would have to consider:
>     Tradition of 10x leap in speed

I think the speed increase has to be 10x.  The standards development process will take at least 3.5 to 4 years to complete.  Anything less than 100G will force people who are currently aggregating 10G links to continue to use aggregation, only with fewer, higher-speed, and more expensive links.  End users prefer using a single link over aggregating physical-layer links into a logical link because of the complications that come with aggregation.  The data in the CFI presentation was just a sample of cases in which network operators were aggregating 10G links to accommodate the demand on their networks.  There will be many more by 2011 (when I expect there would be 'real' products on the market).

>    Multiple Reach Targets
> It was also presented that the focus of this effort wasn't for a desktop application, and
> that the cost model needs to be considered.

I believe we need to adjust the cost model in such a way that it is aligned with the ecosystem.  It is unreasonable, in my opinion, to expect a 10x/3x model to apply to systems designed for wide-area/metro-area networks.  I also think it's short-sighted to ignore the rest of the ecosystem and develop Ethernet only in the part of the ecosystem where the original cost model applies.



Michael J. Bennett
Sr. Network Engineer
LBLnet Services Group
Lawrence Berkeley Laboratory
Tel. 510.486.7913
