
Re: [HSSG] Higher speed trade offs



Hi Roger,

I am in full agreement with you that we should not pre-suppose any
solution at this point. In multiple prior presentations I have advocated
caution about jumping to conclusions on what the 100Gb standard will
look like, and I will do so again in my talk at HSSG.

The specific issues you raise are also very important questions that
need to be investigated in depth. Is 20nm the right spacing for
un-cooled optics? Is CWDM the lowest cost solution? The list of such
questions is much longer. I will propose some of them at HSSG, and I
encourage everyone to propose the basic questions they think we need
to answer.

Having said this, it is also important to explore alternatives and to
investigate multiple areas in parallel. To do that, we have to make
assumptions so that we can start quantifying the problem.

A question was raised as to what skew is reasonable to assume when
investigating coding alternatives. Based on multiple proposals, 1Kb is
one possible target. It is simply a tool to make progress in one area of
investigation.

I invite you to suggest another skew number to assist those
investigating coding approaches.

Chris

-----Original Message-----
From: Roger Merel [mailto:roger@xxxxxxxxxxx] 
Sent: Thursday, September 14, 2006 11:47 AM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] Higher speed trade offs

Chris,

I don't think it is correct to pre-suppose that 20nm spaced CWDM is the
tightest uncooled optical spacing, or even the lowest-cost WDM solution
for shorter distance applications.  There are other options which will
undoubtedly be discussed in Knoxville.  

I will concede that it is likely the tightest manufacturable uncooled
spacing for independent discrete lasers; however, is the use of 10
discrete lasers really the best or lowest-cost solution?

-Roger

-----Original Message-----
From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx] 
Sent: Thursday, September 14, 2006 1:36 PM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] Higher speed trade offs

Stephen,

The mainstream SMF application at 10G is 10km (10GBASE-LR), which uses
un-cooled optics.

Assuming that 100Gb will use un-cooled optics for low cost, this
requires CWDM. A 100nm window is one alternative that can be used for
skew calculations. The dispersion across the window is approximately
10ps/nm/km, so 10ps/nm/km x 100nm x 10km gives 10nsec of skew after
10km. At 100Gb/s that's 1000 bit times, which will get split up among
however many CWDM channels are proposed.

An 80km application will require the use of cooled optics. A 10 channel
200GHz DWDM grid spans about 16nm (200GHz is roughly 1.6nm at 1550nm),
so that also gives about 10nsec after 80km.

So in round numbers, you can use 1Kb as a starting point.
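
In Python, the arithmetic above looks like this (a minimal sketch using
the round numbers in this thread; the ~8ps/nm/km dispersion in the 80km
DWDM case is back-solved from the "about 10nsec" figure rather than a
stated or measured value):

# Worst-case channel-to-channel skew from chromatic dispersion,
# expressed in bit times. All figures are round numbers, not specs.
def skew_bits(dispersion_ps_per_nm_km, window_nm, length_km, rate_gbps):
    skew_ps = dispersion_ps_per_nm_km * window_nm * length_km
    bit_time_ps = 1000.0 / rate_gbps  # 10ps per bit at 100Gb/s
    return skew_ps / bit_time_ps

# 10km un-cooled CWDM: ~10ps/nm/km across a 100nm window
print(skew_bits(10.0, 100.0, 10.0, 100.0))  # -> 1000.0 bits (10nsec)

# 80km cooled DWDM: 10 channels on a 200GHz grid span ~16nm;
# ~8ps/nm/km is an assumption back-solved from the 10nsec figure
print(skew_bits(8.0, 16.0, 80.0, 100.0))    # -> ~1024 bits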

Chris

-----Original Message-----
From: Stephen Bates [mailto:stephen.bates@xxxxxxxxxxxxxxx] 
Sent: Thursday, September 14, 2006 10:58 AM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] Higher speed trade offs

Hugh and others

We have been thinking about 100G for a while and there does seem to be 
potential for maximum-likelihood detection. This is somewhat similar to 
some of the PRML work done in hard-drives, which also tend to use
binary signaling.

Also, assuming a binary signaling scheme, the constellation size is not 
too much larger than the 625 used in 1000BASE-T (trellis code with 5.5dB
coding gain) and a lot smaller than the 65536 used in 10GBASE-T (LDPC 
with about 10-12dB coding gain depending on your decoder). The latency 
through the 1000BASE-T decoder was only about 20 symbol periods, so its 
impact on latency at 10G would be small compared to the potential bulk 
skew delay (I presume).
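
To make the constellation comparison concrete, here is a minimal sketch
in Python (the 10-wavelength binary figure assumes one bit per
wavelength per symbol period, which is my reading of the proposals
rather than a stated parameter):

# Joint constellation sizes: distinct symbols per signaling interval,
# taken across all dimensions (pairs or wavelengths) of the link.
print(5 ** 4)   # 1000BASE-T: PAM-5 on 4 pairs -> 625
print(16 ** 4)  # 10GBASE-T: 16 levels on 4 pairs -> 65536
print(2 ** 10)  # 10 binary wavelengths -> 1024 (assumed 1 bit/lambda)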

If there is interest from the group in an FEC, we would like to develop 
these ideas further. Also, can anyone tell me what the worst-case bulk 
skew is likely to be in some of these multi-wavelength schemes (as a 
function of cable length)?

cheers

Stephen

------------------------------------------------------------------------

Dr. Stephen Bates PhD PEng SMIEEE

High Capacity Digital Communications Laboratory
Department of Electrical and Computer Engineering Phone: +1 780 492 2691
The University of Alberta                         Fax:   +1 780 492 1811
Edmonton
Canada, T6G 2V4                            stephen.bates@xxxxxxxxxxxxxxx
 
www.ece.ualberta.ca/~sbates
------------------------------------------------------------------------

Hugh Barrass wrote:
> Stephen,
> 
> Regarding the FEC & latency - a FEC that is designed to exploit the
> transverse dimension (i.e. correlation between channels) would not need
> to add significant latency. The FEC block size (or equivalent) need only
> be the same order as the maximum skew between the channels. This will
> govern the minimum latency for a non-FEC channel in any case. At its
> simplest, a Trellis code could be applied across the channels with an
> additional latency of ~ 1 code block. A cleverly designed maximum
> likelihood code (is anybody in Alberta or Cork working on that? :-)
> could offer similar gain with lower overhead. In particular, the optical
> channel with binary signaling offers a much smaller problem matrix than
> 1000BASE-T multi-level FEXT/NEXT channels.
> 
> The processing overhead is largely dependent on the amount of state and
> to a first approximation would scale with latency.
> 
> Hugh.
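
To make the transverse-code idea concrete, here is a deliberately
minimal sketch in Python. It uses a plain XOR parity lane rather than
the trellis code described above, and the 10-lane count and
single-erasure model are illustrative assumptions:

# Simplest possible transverse code: one parity lane protecting N data
# lanes, applied bit-slice by bit-slice after skew alignment. This is
# not a trellis code - just the same transverse dimension, made visible.
N_LANES = 10  # assumed lane count

def encode_slice(bits):
    """Append an XOR parity bit to one bit-slice (one bit per lane)."""
    parity = 0
    for b in bits:
        parity ^= b
    return bits + [parity]

def recover_lane(coded_bits, erased_lane):
    """Rebuild one erased lane from the surviving lanes plus parity."""
    parity = 0
    for i, b in enumerate(coded_bits):
        if i != erased_lane:
            parity ^= b
    return parity

data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # one bit-slice across 10 lanes
coded = encode_slice(data)
assert recover_lane(coded, erased_lane=3) == data[3]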
> 
> Stephen Bates wrote:
> 
>> Hi Hugh and others
>>
>> I have been following this mailing list with interest and wanted to 
>> comment on Hugh's statement in his email.
>>
>> "It strikes me that if the sources and destinations of many carriers 
>> are co-located and correlated then coding can eliminate inter signal 
>> interference."
>>
>> I am trying to understand what the advantage is of merging 10 10G 
>> channels into one 100G channel versus keeping the 10G channels 
>> separate. It seems to me all the buffering and SAR requirements 
>> are only of value if we do take advantage of all the 
>> dimensions. Obviously this is something we've done in 1000BASE-T and 
>> 10GBASE-T by running some kind of FEC over all four dimensions.
>>
>> However we've already had a discussion on how FEC adds latency and 
>> that may not be acceptable in short-haul applications. Also, decoding
>> a 10 dimensional code would not be trivial, though the potential 
>> coding gain would be large, allowing dense packing of wavelengths. 
>> Also, if there is significant correlation across the 
>> dimensions/wavelengths we can take advantage of that using 
>> maximum-likelihood detection techniques. Again the complexity and 
>> latency become issues. However the maximum likelihood approach is 
>> interesting in that it can be utilized without compensating for any 
>> bulk skew mismatch between the dimensions/wavelengths.
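
A minimal sketch of this kind of joint maximum-likelihood detection
across correlated lanes (Python with numpy; the 4-lane size, the
tridiagonal noise covariance, and the unit signal levels are all
illustrative assumptions, not parameters from this thread):

import itertools
import numpy as np

# Joint ML detection: choose the binary pattern x minimizing the
# Mahalanobis distance (y - x)^T Sigma^-1 (y - x). Brute force over
# 2^N patterns, which is exactly why a 10-dimensional decoder is
# non-trivial.
N = 4  # kept small so enumeration stays cheap
rng = np.random.default_rng(0)

# Assumed covariance: noise correlated between neighboring wavelengths.
Sigma = 0.05 * (np.eye(N) + 0.5 * np.eye(N, k=1) + 0.5 * np.eye(N, k=-1))
Sigma_inv = np.linalg.inv(Sigma)

x_true = np.array([1.0, 0.0, 1.0, 1.0])  # transmitted bits, one per lane
y = x_true + rng.multivariate_normal(np.zeros(N), Sigma)  # received

best = min(
    (np.array(p, dtype=float) for p in itertools.product([0, 1], repeat=N)),
    key=lambda x: float((y - x) @ Sigma_inv @ (y - x)),
)
print(best)  # recovers x_true with high probability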
>>
>> I look forward to seeing how this work develops.
>>
>> Cheers
>>
>> Stephen
>>
>>
>> ------------------------------------------------------------------------
>>
>> Dr. Stephen Bates PhD PEng SMIEEE
>>
>> High Capacity Digital Communications Laboratory
>> Department of Electrical and Computer Engineering Phone: +1 780 492 2691
>> The University of Alberta                         Fax:   +1 780 492 1811
>> Edmonton
>> Canada, T6G 2V4                            stephen.bates@xxxxxxxxxxxxxxx
>>
>> www.ece.ualberta.ca/~sbates
>> ------------------------------------------------------------------------
>>
>> Hugh Barrass wrote:
>>
>>> Andrew and others,
>>>
>>> It often amuses me that technical principles from one field of 
>>> invention seem to leak into other fields. The mechanism that you 
>>> suggest strikes me as very similar to Discrete Multi-Tone modulation,
>>> used in DSL. There are some considerable advantages of compact multi
>>> carrier systems over higher baud rate single carrier systems. I guess
>>> it's only a matter of time before someone comes in (or back) with 
>>> optical multi-level signaling to make the matrix complete :-)
>>>
>>> Not being an optical expert allows me the freedom to look at this 
>>> from the outside and to suggest some ideas that may (or may not) be 
>>> completely hopeless. Has anyone considered the use of FEC codes 
>>> designed to correct errors caused by ultra-fine WDM spacing? It 
>>> strikes me that if the sources and destinations of many carriers are
>>> co-located and correlated then coding can eliminate inter signal 
>>> interference.
>>>
>>> Isn't communications theory fun? :-)
>>>
>>> Hugh.
>>>
>>