
[HSSG] Please choose a sensible rate(s)



All,

"a good place to start is to see what it takes to define 100Gb solutions for the existing infrastructure"

With regard to "existing infrastructure", any new kind of high-end networking scheme with aspirations to carry data outside of a campus needs to allow that data to be carried by the existing infrastructure (SONET/SDH and OTN) without excessive waste and cost.  This seems a very obvious objective.

I do not agree that "a good place to start is to see what it takes to define 100Gb solutions".  100Gb is not a solution; it's an unnecessary problem: see above.  All it is good for is doing VERY approximate costings of heat, skew, dispersion or whatever, until this group has developed its thinking on the reasons to choose a rate.

If this group really believes that "40 is not enough", it can pick a 2 x OC-768, 3 x OC-768 or 4 x OC-768 payload size.  Or just a little less.
Not 2.608 (approx.) x OC-768 - that's REALLY ugly.

Piers

> -----Original Message-----
> From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
> Sent: 14 September 2006 21:57
> To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
> Subject: Re: [HSSG] Higher speed trade offs
> 
> Hi Steve,
> 
> I am in agreement with you that at this point we should not take
> anything as a given, and distribution of applications should be open
> for discussion.
> 
> However, a good place to start is to see what it takes to define 100Gb
> solutions for the existing infrastructure. If we move away from
> distances used for today's Ethernet deployment (10km, 40km, 80km) then
> we will complicate how 100Gb is deployed. If the payoff for this
> complication is worth it, then alternate applications will certainly
> be considered. You will need to make a very compelling argument to
> convince end users that they have to change their network
> configurations to accommodate a new product.
> 
> Chris
> 
> -----Original Message-----
> From: Trowbridge, Stephen J (Steve) [mailto:sjtrowbridge@xxxxxxxxxx] 
> Sent: Thursday, September 14, 2006 12:21 PM
> To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
> Subject: Re: [HSSG] Higher speed trade offs
> 
> Chris,
> I am not sure that it is a given that the distribution of applications
> for 100G will be the same as that for 10G.
> 
> One of the "mistakes" sometimes quoted for 10G was failure to
> recognize early enough that a significant amount of deployment of this
> interface was for infrastructure rather than as an end station
> interface. I would guess that deployment of 100G will be even more
> heavily weighted in the infrastructure space than 10G. While the short
> reach, data center or supercomputer type interface may be the easiest
> problem to solve, it may also represent only a tiny fraction of the
> market and may not be the most important problem to solve.
> Regards,
> Steve 
> 
> -----Original Message-----
> From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx] 
> Sent: Thursday, September 14, 2006 12:36 PM
> To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
> Subject: Re: [HSSG] Higher speed trade offs
> 
> Stephen,
> 
> The mainstream SMF application at 10G is 10km (10GBASE-LR), which uses
> uncooled optics.
> 
> Assuming that 100Gb will use uncooled optics for low cost, this
> requires CWDM. A 100nm window is one alternative that can be used for
> skew calculations. The dispersion across the window is approximately
> 10ps/nm/km, so we get 10nsec of skew after 10km. At 100Gb/s that's
> 1000 bits, which will get split up among however many CWDM channels
> are proposed.
> 
> An 80km application will require the use of cooled optics. A 10
> channel 200GHz DWDM grid will be 16nm wide, so that also gives about
> 10nsec after 80km.
> 
> So in round numbers, you can use 1Kb as a starting point.
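> 
> (A minimal back-of-the-envelope sketch of that arithmetic in Python;
> the window width, dispersion, reach and 100Gb/s aggregate rate are the
> figures above, while the function and variable names are just
> illustrative:)
> 
>     # worst-case inter-channel skew across a WDM window, and the
>     # number of bits that skew represents at the aggregate line rate
>     def skew_bits(window_nm, dispersion_ps_nm_km, reach_km, rate_gbps):
>         skew_ps = window_nm * dispersion_ps_nm_km * reach_km
>         skew_ns = skew_ps / 1000.0
>         return skew_ns, skew_ns * rate_gbps   # ns x Gb/s = bits
> 
>     print(skew_bits(100, 10, 10, 100))  # 10km CWDM case: (10.0, 1000.0),
>                                         # i.e. 10nsec and ~1000 bits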
> 
> Chris
> 
> -----Original Message-----
> From: Stephen Bates [mailto:stephen.bates@xxxxxxxxxxxxxxx]
> Sent: Thursday, September 14, 2006 10:58 AM
> To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
> Subject: Re: [HSSG] Higher speed trade offs
> 
> Hugh and others
> 
> We have been thinking about 100G for a while and there does seem to be
> potential for maximum-likelihood detection. This is somewhat similar
> to some of the PRML work done in hard drives, which also tend to use
> binary signaling.
> 
> Also, assuming a binary signaling scheme, the constellation size is
> not too much larger than the 625 used in 1000BASE-T (trellis code with
> 5.5dB coding gain) and a lot smaller than the 65536 used in 10GBASE-T
> (LDPC with about 10-12dB coding gain depending on your decoder). The
> latency through the 1000BASE-T decoder was only about 20 symbol
> periods so its impact on latency at 10G would be small compared to the
> potential bulk skew delay (I presume).
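> 
> (To put rough numbers on that comparison - a sketch only; the
> 20-symbol decoder latency is the figure just quoted, while the
> 10Gbaud-per-lane signaling and the ~10nsec of bulk skew are
> assumptions taken from the CWDM estimate earlier in the thread:)
> 
>     symbol_period_ns = 1.0 / 10.0               # assuming 10Gbaud per lane
>     decoder_latency_ns = 20 * symbol_period_ns  # ~2nsec through the decoder
>     bulk_skew_ns = 10.0                         # worst-case skew from the CWDM estimate
>     print(decoder_latency_ns, bulk_skew_ns)     # 2.0 vs 10.0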
> 
> If there is interest from the group in an FEC we would like to develop
> these ideas further. Also, can anyone tell me what the worst case bulk
> skew is likely to be in some of these multi-wavelength schemes (as a
> function of cable length)?
> 
> cheers
> 
> Stephen
> 
> ------------------------------------------------------------------------
> Dr. Stephen Bates PhD PEng SMIEEE
> High Capacity Digital Communications Laboratory
> Department of Electrical and Computer Engineering
> The University of Alberta, Edmonton, Canada, T6G 2V4
> Phone: +1 780 492 2691    Fax: +1 780 492 1811
> stephen.bates@xxxxxxxxxxxxxxx
> www.ece.ualberta.ca/~sbates
> ------------------------------------------------------------------------
> 
> Hugh Barrass wrote:
> > Stephen,
> > 
> > Regarding the FEC & latency - a FEC that is designed to exploit the
> > transverse dimension (i.e. correlation between channels) would not
> > need to add significant latency. The FEC block size (or equivalent)
> > need only be the same order as the maximum skew between the
> > channels. This will govern the minimum latency for a non-FEC channel
> > in any case. At its simplest, a Trellis code could be applied across
> > the channels with an additional latency of ~1 code block. A cleverly
> > designed maximum likelihood code (is anybody in Alberta or Cork
> > working on that? :-) could offer similar gain with lower overhead.
> > In particular, the optical channel with binary signaling offers a
> > much smaller problem matrix than 1000BASE-T multi-level FEXT/NEXT
> > channels.
> > 
> > The processing overhead is largely dependent on the amount of state
> > and to a first approximation would scale with latency.
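> > 
> > (As a toy illustration of coding across the transverse dimension - a
> > single parity lane across 10 data lanes, far weaker than the Trellis
> > or maximum likelihood codes mentioned above, and assuming the lanes
> > have already been deskewed and bit-aligned:)
> > 
> >     # toy transverse code: one parity lane across 10 data lanes,
> >     # computed per bit-slice so the added latency is ~1 slice
> >     def encode_slice(data_bits):             # data_bits: 10 ints (0/1)
> >         return data_bits + [sum(data_bits) % 2]
> > 
> >     # recover a single lane flagged as bad (erasure correction)
> >     def correct_erasure(rx_bits, bad_lane):  # rx_bits: 11 ints (0/1)
> >         others = [b for i, b in enumerate(rx_bits) if i != bad_lane]
> >         rx_bits[bad_lane] = sum(others) % 2
> >         return rx_bits[:10]                  # drop the parity lane
> > 
> >     tx = encode_slice([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
> >     rx = tx[:]; rx[3] = 0                    # lane 3 arrives corrupted
> >     print(correct_erasure(rx, 3))            # original 10 data bits back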
> > 
> > Hugh.
> > 
> > Stephen Bates wrote:
> > 
> >> Hi Hugh and others
> >>
> >> I have been following this mailing list with interest and wanted to
> >> comment on Hugh's statement in his email.
> >>
> >> "It strikes me that if the sources and destinations of many carriers
> >> are co-located and correlated then coding can eliminate inter signal
> >> interference."
> >>
> >> I am trying to understand what the advantage is of merging 10 10G
> >> channels into one 100G channel versus keeping the 10G channels
> >> separate. It seems to me all the buffering and SAR requirements are
> >> only of value if we do take advantage of all the dimensions.
> >> Obviously this is something we've done in 1000BASE-T and 10GBASE-T
> >> by running some kind of FEC over all four dimensions.
> >>
> >> However we've already had a discussion on how FEC adds latency and
> >> that may not be acceptable in short-haul applications. Also,
> >> decoding a 10 dimensional code would not be trivial, though the
> >> potential coding gain would be large, allowing dense packing of
> >> wavelengths. Also, if there is significant correlation across the
> >> dimensions/wavelengths we can take advantage of that using
> >> maximum-likelihood detection techniques. Again the complexity and
> >> latency become issues. However the maximum likelihood approach is
> >> interesting in that it can be utilized without compensating for any
> >> bulk skew mismatch between the dimensions/wavelengths.
> >>
> >> I look forward to seeing how this work develops.
> >>
> >> Cheers
> >>
> >> Stephen
> >>
> >>
> >> ------------------------------------------------------------------------
> >> Dr. Stephen Bates PhD PEng SMIEEE
> >> High Capacity Digital Communications Laboratory
> >> Department of Electrical and Computer Engineering
> >> The University of Alberta, Edmonton, Canada, T6G 2V4
> >> Phone: +1 780 492 2691    Fax: +1 780 492 1811
> >> stephen.bates@xxxxxxxxxxxxxxx
> >> www.ece.ualberta.ca/~sbates
> >> ------------------------------------------------------------------------
> >>
> >> Hugh Barrass wrote:
> >>
> >>> Andrew and others,
> >>>
> >>> It often amuses me that technical principles from one field of
> >>> invention seem to leak into other fields. The mechanism that you
> >>> suggest strikes me as very similar to Discrete Multi-Tone
> >>> modulation, used in DSL. There are some considerable advantages of
> >>> compact multi carrier systems over higher baud rate single carrier
> >>> systems. I guess it's only a matter of time before someone comes
> >>> in (or back) with optical multi-level signaling to make the matrix
> >>> complete :-)
> >>>
> >>> Not being an optical expert allows me the freedom to look at this
> >>> from the outside and to suggest some ideas that may (or may not)
> >>> be completely hopeless. Has anyone considered the use of FEC codes
> >>> designed to correct errors caused by ultra-fine WDM spacing? It
> >>> strikes me that if the sources and destinations of many carriers
> >>> are co-located and correlated then coding can eliminate inter
> >>> signal interference.
> >>>
> >>> Isn't communications theory fun? :-)
> >>>
> >>> Hugh.
> >>>
> >>
>