Hi Chris,

I agree that a good starting point is to look at what you have today, but it also makes sense to think about the type of network that will employ a higher-speed Ethernet interface. Backhauling and broadband content delivery were mentioned as key market drivers, but these applications are found in metropolitan networks and backbones, where transmission distances exceed 10 or 40 km. In any case, 10/40/80 km is, to my understanding, the dispersion limit and not the actual reach or transmission limit.

More important is that the protocol (and framing) be suitable for the type of network that is carrying the higher-speed Ethernet. Classical Ethernet framing is a LAN framing for a broadcast medium. Higher-speed Ethernet will be used in point-to-point links, and a great number of its interfaces will be infrastructure interfaces found, for example, in provider networks. This should be considered accordingly.

Marcus

Chris Cole wrote:

Hi Steve,

I agree with you that at this point we should not take anything as a given, and the distribution of applications should be open for discussion. However, a good place to start is to see what it takes to define 100Gb solutions for the existing infrastructure. If we move away from the distances used for today's Ethernet deployments (10 km, 40 km, 80 km), then we will complicate how 100Gb is deployed. If the payoff for this complication is worth it, then alternate applications will certainly be considered. You will need to make a very compelling argument to convince end users that they have to change their network configurations to accommodate a new product.
Chris

-----Original Message-----
From: Trowbridge, Stephen J (Steve) [mailto:sjtrowbridge@xxxxxxxxxx]
Sent: Thursday, September 14, 2006 12:21 PM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] Higher speed trade offs

Chris,

I am not sure it is a given that the distribution of applications for 100G will be the same as that for 10G. One of the "mistakes" sometimes quoted for 10G was the failure to recognize early enough that a significant amount of deployment of this interface was for infrastructure rather than as an end-station interface. I would guess that deployment of 100G will be even more heavily weighted toward the infrastructure space than 10G. While the short-reach, data-center or supercomputer type of interface may be the easiest problem to solve, it may also represent only a tiny fraction of the market and may not be the most important problem to solve.

Regards,
Steve

-----Original Message-----
From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
Sent: Thursday, September 14, 2006 12:36 PM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] Higher speed trade offs

Stephen,

The mainstream SMF application at 10G is 10 km (10GBASE-LR), which uses uncooled optics. Assuming that 100Gb will also use uncooled optics for low cost, this requires CWDM. A 100 nm window is one alternative that can be used for skew calculations. The dispersion across the window is approximately 10 ps/nm/km, so we get 10 ns of skew after 10 km. That is 1000 bits at 100 Gb/s, which will get split up among however many CWDM channels are proposed. An 80 km application will require the use of cooled optics. A 10-channel 200 GHz DWDM grid spans about 16 nm, so that also gives about 10 ns after 80 km. So in round numbers, you can use 1 Kb as a starting point.
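Chris Cole's skew numbers above follow from a simple product of dispersion, spectral window, and distance. Here is a minimal sketch of that back-of-the-envelope calculation, using the parameter values assumed in his email (10 ps/nm/km dispersion, a 100 nm CWDM window, and a ~16 nm span for a 10-channel 200 GHz DWDM grid):

```python
# Back-of-the-envelope worst-case inter-channel skew for multi-wavelength
# links: skew = dispersion * spectral window * distance. The parameter
# values below are the assumptions stated in the email, not measured data.

def skew_bits(dispersion_ps_nm_km, window_nm, distance_km, line_rate_gbps):
    """Return (skew in ns, skew in bit periods at the given line rate)."""
    skew_ps = dispersion_ps_nm_km * window_nm * distance_km
    skew_ns = skew_ps / 1000.0
    # ps * Gb/s = 1e-12 s * 1e9 bit/s = 1e-3 bit, hence the /1000
    bits = skew_ps * line_rate_gbps / 1000.0
    return skew_ns, bits

# 10 km CWDM case: ~10 ps/nm/km across a 100 nm window
ns_10, bits_10 = skew_bits(10, 100, 10, 100)
print(f"10 km CWDM: {ns_10:.0f} ns skew, {bits_10:.0f} bits at 100 Gb/s")
# -> 10 ns skew, 1000 bits

# 80 km DWDM case: 10-channel 200 GHz grid spanning ~16 nm
ns_80, bits_80 = skew_bits(10, 16, 80, 100)
print(f"80 km DWDM: {ns_80:.1f} ns skew, {bits_80:.0f} bits at 100 Gb/s")
```

The 80 km case comes out around 13 ns, consistent with the "about 10 ns" round number quoted in the email; the total skew budget in bits is then divided among however many wavelength channels a proposal uses.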
Chris

-----Original Message-----
From: Stephen Bates [mailto:stephen.bates@xxxxxxxxxxxxxxx]
Sent: Thursday, September 14, 2006 10:58 AM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] Higher speed trade offs

Hugh and others,

We have been thinking about 100G for a while, and there does seem to be potential for maximum-likelihood detection. This is somewhat similar to some of the PRML work done in hard drives, which also tend to use binary signaling. Also, assuming a binary signaling scheme, the constellation size is not too much larger than the 625 used in 1000BASE-T (a trellis code with 5.5 dB coding gain) and a lot smaller than the 65536 used in 10GBASE-T (LDPC with about 10 to 12 dB coding gain, depending on your decoder). The latency through the 1000BASE-T decoder was only about 20 symbol periods, so its impact on latency at 10G would be small compared to the potential bulk skew delay (I presume).

If there is interest from the group in an FEC, we would like to develop these ideas further. Also, can anyone tell me what the worst-case bulk skew is likely to be in some of these multi-wavelength schemes (as a function of cable length)?

cheers

Stephen

------------------------------------------------------------------------
Dr. Stephen Bates PhD PEng SMIEEE
High Capacity Digital Communications Laboratory
Department of Electrical and Computer Engineering
The University of Alberta, Edmonton, Canada, T6G 2V4
Phone: +1 780 492 2691  Fax: +1 780 492 1811
stephen.bates@xxxxxxxxxxxxxxx  www.ece.ualberta.ca/~sbates
------------------------------------------------------------------------

Hugh Barrass wrote:

--
___________________________
Marcus Duelk
Bell Labs / Lucent Technologies
Data Optical Networks Research
Crawford Hill HOH R-237
791 Holmdel-Keyport Road
Holmdel, NJ 07733, USA
fon +1 (732) 888-7086
fax +1 (732) 888-7074
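The constellation sizes Stephen Bates quotes are consistent with treating each four-pair line symbol as one point in a 4-dimensional constellation: 1000BASE-T sends PAM-5 on four wire pairs, and 10GBASE-T sends PAM-16 on four wire pairs. A quick sanity check:

```python
# Sanity check of the constellation sizes quoted in Stephen Bates's email:
# per-symbol constellation size = (PAM levels per pair) ** (number of pairs).
levels_1000base_t = 5    # 1000BASE-T: PAM-5 amplitude levels per pair
levels_10gbase_t = 16    # 10GBASE-T: PAM-16 amplitude levels per pair
pairs = 4                # both PHYs transmit on 4 wire pairs in parallel

size_1000base_t = levels_1000base_t ** pairs
size_10gbase_t = levels_10gbase_t ** pairs
print(size_1000base_t, size_10gbase_t)  # -> 625 65536
```

A maximum-likelihood detector for a proposed 100G scheme would search over a constellation of comparable size per symbol, which is why the 1000BASE-T and 10GBASE-T figures serve as useful complexity reference points.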