
Re: [HSSG] Higher speed trade offs




Hi Chris,

I agree that a good starting point is to look at what you have today,
but on the other hand it also makes sense to think about the type of
network in which that higher-speed Ethernet interface will be employed.
Backhauling as well as broadband content delivery were mentioned as key
market drivers, but these applications are found in metropolitan
networks and backbones where the transmission distance exceeds 10 or
40 km. In any case, 10/40/80 km is, to my understanding, the dispersion
limit and not the actual reach or transmission limit.

What is more important is that the protocol (and framing) to be used is
suitable for the type of network that will carry that higher-speed
Ethernet. Classical Ethernet framing is LAN framing for a broadcast
medium. Higher-speed Ethernet will be used on point-to-point links, and
a great number of interfaces will be infrastructure interfaces found,
for example, in provider networks. This should be considered
accordingly.

Marcus



Chris Cole wrote:
Hi Steve,

I am in agreement with you that at this point we should not take
anything as a given, and distribution of applications should be open for
discussion.

However, a good place to start is to see what it takes to define 100Gb
solutions for the existing infrastructure. If we move away from the
distances used for today's Ethernet deployments (10km, 40km, 80km),
then we will complicate how 100Gb is deployed. If the payoff justifies
that complication, then alternative applications will certainly be
considered. You will need to make a very compelling argument to convince
end users that they have to change their network configurations to
accommodate a new product.

Chris

-----Original Message-----
From: Trowbridge, Stephen J (Steve) [mailto:sjtrowbridge@xxxxxxxxxx] 
Sent: Thursday, September 14, 2006 12:21 PM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] Higher speed trade offs

Chris,
I am not sure that it is a given that the distribution of applications
for 100G will be the same as that for 10G.

One of the "mistakes" sometimes quoted for 10G was failure to recognize
early enough that a significant amount of deployment of this interface
was for infrastructure rather than as an end station interface. I would
guess that deployment of 100G will be even more heavily weighted in the
infrastructure space than 10G. While the short reach, data center or
supercomputer type interface may be the easiest problem to solve, it may
also represent only a tiny fraction of the market and may not be the
most important problem to solve.
Regards,
Steve 

-----Original Message-----
From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx] 
Sent: Thursday, September 14, 2006 12:36 PM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] Higher speed trade offs

Stephen,

The mainstream SMF application at 10G is 10km (10GBASE-LR), which uses
uncooled optics.

Assuming that 100Gb will use uncooled optics for low cost, this
requires CWDM. A 100nm window is one alternative that can be used for
skew calculations. The dispersion across the window is approximately
10ps/nm/km, so we get 10nsec after 10km. That's 1000 bits, which will
get split up among however many CWDM channels are proposed.

An 80km application will require the use of cooled optics. A 10-channel
200GHz DWDM grid spans about 16nm, so that also gives about 10nsec
after 80km.

So in round numbers, you can use 1Kb as a starting point.
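
For anyone who wants to play with those numbers, here is a rough
back-of-the-envelope sketch in Python. The function name and the input
values are just the example figures quoted above; the 80km dispersion
value in particular is an assumption chosen to reproduce the ~10nsec
figure, not a normative number.

# Back-of-the-envelope skew estimate: dispersion * optical window * distance,
# converted to bits at the aggregate line rate. All values are the example
# assumptions from this thread, not normative numbers.

def skew_bits(dispersion_ps_per_nm_km, window_nm, distance_km, line_rate_gbps):
    """Worst-case differential group delay across the window, in bits."""
    skew_ps = dispersion_ps_per_nm_km * window_nm * distance_km
    skew_ns = skew_ps / 1000.0
    return skew_ns * line_rate_gbps  # ns * Gb/s = bits

# 10 km CWDM case: ~10 ps/nm/km over a 100 nm window at 100 Gb/s
print(skew_bits(10.0, 100.0, 10.0, 100.0))  # -> 1000.0 bits, i.e. ~10 ns

# 80 km DWDM case: 10 channels on a 200 GHz grid span roughly 16 nm;
# ~8 ps/nm/km is assumed here to reproduce the ~10 ns figure quoted above.
print(skew_bits(8.0, 16.0, 80.0, 100.0))    # -> ~1024 bits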

Chris

-----Original Message-----
From: Stephen Bates [mailto:stephen.bates@xxxxxxxxxxxxxxx]
Sent: Thursday, September 14, 2006 10:58 AM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] Higher speed trade offs

Hugh and others

We have been thinking about 100G for a while and there does seem to be
potential for maximum-likelihood detection. This is somewhat similar to
some of the PRML work done in hard drives, which also tend to use binary
signaling.

Also, assuming a binary signaling scheme, the constellation size is not
too much larger than the 625 used in 1000BASE-T (trellis code with 5.5dB
coding gain) and a lot smaller than the 65536 used in 10GBASE-T (LDPC
with about 10-12dB coding gain depending on your decoder). The latency
through the 1000BASE-T decoder was only about 20 symbol periods, so its
impact on latency at 10G would be small compared to the potential bulk
skew delay (I presume).
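
As a rough sanity check on that comparison, here is a short Python
sketch; the per-lane symbol rate and the bulk skew figure are
assumptions taken from earlier in this thread, not measured numbers.

# Rough comparison of decoder latency vs. bulk skew, using assumed example
# numbers: a 1000BASE-T-like decoder latency of ~20 symbol periods applied to
# a 10-lane x 10 Gbaud binary scheme, against the ~1000-bit skew estimate.

symbol_rate_gbaud = 10.0        # per-lane symbol rate (assumption)
decoder_latency_symbols = 20    # ~20 symbol periods, as quoted for 1000BASE-T

decoder_latency_ns = decoder_latency_symbols / symbol_rate_gbaud  # -> 2 ns
bulk_skew_ns = 10.0             # ~1000 bits at 100 Gb/s, from the skew estimate

print(f"decoder latency ~{decoder_latency_ns} ns vs. bulk skew ~{bulk_skew_ns} ns")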

If there is interest from the group in an FEC we would like to develop
these ideas further. Also, can anyone tell me what the worst-case bulk
skew is likely to be in some of these multi-wavelength schemes (as a
function of cable length)?

cheers

Stephen

------------------------------------------------------------------------

Dr. Stephen Bates PhD PEng SMIEEE

High Capacity Digital Communications Laboratory
Department of Electrical and Computer Engineering Phone: +1 780 492 2691
The University of Alberta                         Fax:   +1 780 492 1811
Edmonton
Canada, T6G 2V4                            stephen.bates@xxxxxxxxxxxxxxx
 
www.ece.ualberta.ca/~sbates
------------------------------------------------------------------------

Hugh Barrass wrote:
  
Stephen,

Regarding the FEC & latency - a FEC that is designed to exploit the 
transverse dimension (i.e. correlation between channels) would not need 
to add significant latency. The FEC block size (or equivalent) need only 
be the same order as the maximum skew between the channels. This will 
govern the minimum latency for a non-FEC channel in any case. At its 
simplest, a Trellis code could be applied across the channels with an 
additional latency of ~ 1 code block. A cleverly designed maximum 
likelihood code (is anybody in Alberta or Cork working on that? :-) 
could offer similar gain with lower overhead. In particular, the optical 
channel with binary signaling offers a much smaller problem matrix than 
1000BASE-T multi-level FEXT/NEXT channels.

The processing overhead is largely dependent on the amount of state and 
to a first approximation would scale with latency.
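
To make the transverse idea concrete, here is a deliberately simplified
sketch in Python. It uses single parity across lanes instead of a
trellis or maximum-likelihood code, and the lane count and all names are
invented for illustration, so it only shows where the redundancy sits,
not the coding gain.

# Toy illustration of coding in the transverse dimension: at each bit time,
# one parity bit is computed across the data lanes and carried on an extra
# lane. Single-parity (erasure-style) only; not a real trellis or ML code.

from functools import reduce
from operator import xor

def encode_across_lanes(lanes):
    """lanes: list of equal-length bit lists. Returns data lanes + one parity lane."""
    parity = [reduce(xor, bits) for bits in zip(*lanes)]
    return lanes + [parity]

def recover_erased_lane(lanes_with_parity, erased_index):
    """Rebuild one known-bad lane from the surviving lanes plus the parity lane."""
    survivors = [lane for i, lane in enumerate(lanes_with_parity) if i != erased_index]
    return [reduce(xor, bits) for bits in zip(*survivors)]

data = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]   # 3 lanes, 4 bit times
coded = encode_across_lanes(data)
assert recover_erased_lane(coded, erased_index=1) == data[1]

Because the code runs across the lanes at each bit time rather than
along them, the encode/decode step itself adds essentially no latency
along a lane; as noted above, the latency floor is then set by the
inter-lane skew rather than by the FEC block length.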

Hugh.

Stephen Bates wrote:

    
Hi Hugh and others

I have been following this mailing list with interest and wanted to 
comment on Hugh's statement in his email.

"It strikes me that if the sources and destinations of many carriers 
are co-located and correlated then coding can eliminate inter signal 
interference."

I am trying to understand what the advantage is of merging ten 10G 
channels into one 100G channel versus keeping the 10G channels 
separate. It seems to me that all the buffering and SAR requirements 
are only of value if we do take advantage of all the 
dimensions. Obviously this is something we've done in 1000BASE-T and 
10GBASE-T by running some kind of FEC over all four dimensions.

However we've already had a discussion on how FEC adds latency and 
that may not be acceptable in short-haul applications. Also, decoding 
a 10 dimensional code would not be trivial, though the potential 
coding gain would be large, allowing dense packing of wavelengths.
Also, if there is significant correlation across the 
dimensions/wavelengths we can take advantage of that using 
maximum-likelihood detection techniques. Again the complexity and 
latency become issues. However, the maximum-likelihood approach is 
interesting in that it can be utilized without compensating for any 
bulk skew mismatch between the dimensions/wavelengths.
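
To illustrate why the bulk skew matters once ten 10G lanes carry one
striped 100G stream, here is a minimal sketch in Python. The lane
count, skew values, and helper names are made-up examples, and a real
PCS would align on larger blocks using markers rather than known
delays.

# Minimal sketch of bit-level round-robin striping across lanes with per-lane
# bulk skew (in bit times) and a receive-side deskew/realignment step.

def stripe(bits, num_lanes):
    """Distribute a serial bit stream round-robin across num_lanes lanes."""
    return [bits[i::num_lanes] for i in range(num_lanes)]

def apply_skew(lanes, skew_bits_per_lane):
    """Prepend each lane with dummy bits to model its differential delay."""
    return [[None] * skew + lane for lane, skew in zip(lanes, skew_bits_per_lane)]

def deskew_and_merge(skewed, skew_bits_per_lane):
    """Strip each lane's known delay, then re-interleave into one stream."""
    aligned = [lane[skew:] for lane, skew in zip(skewed, skew_bits_per_lane)]
    merged = []
    for group in zip(*aligned):
        merged.extend(group)
    return merged

bits = [i % 2 for i in range(40)]
skews = [0, 3, 7, 1]              # example differential delays in bit times
lanes = stripe(bits, num_lanes=4)
assert deskew_and_merge(apply_skew(lanes, skews), skews) == bits

The receive-side buffering in a scheme like this scales with the
worst-case differential skew, which is why the ~1000-bit figure
discussed above is the interesting number; a maximum-likelihood
detector that tolerates the skew directly would be one way to avoid
that explicit realignment.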

I look forward to seeing how this work develops.

Cheers

Stephen

Hugh Barrass wrote:

Andrew and others,

It often amuses me that technical principles from one field of 
invention seem to leak into other fields. The mechanism that you 
suggest strikes me as very similar to Discrete Multi-Tone modulation, 
used in DSL. There are some considerable advantages of compact 
multi-carrier systems over higher baud rate single carrier systems. I 
guess it's only a matter of time before someone comes in (or back) with 
optical multi-level signaling to make the matrix complete :-)

Not being an optical expert allows me the freedom to look at this 
from the outside and to suggest some ideas that may (or may not) be 
completely hopeless. Has anyone considered the use of FEC codes 
designed to correct errors caused by ultra-fine WDM spacing? It 
strikes me that if the sources and destinations of many carriers are 
co-located and correlated then coding can eliminate inter signal 
interference.

Isn't communications theory fun? :-)

Hugh.

-- 
___________________________
Marcus Duelk
Bell Labs / Lucent Technologies
Data Optical Networks Research

Crawford Hill HOH R-237
791 Holmdel-Keyport Road
Holmdel, NJ 07733, USA
fon +1 (732) 888-7086
fax +1 (732) 888-7074