Re: [10GMMF] Notes from Aug 3rd Meeting on TP3 Definition

Bob,

Some comments on your message:

First, on the testing philosophy issue: whether the "stressed eye test should be an accurate reflection of how to represent all known impairments (which could subsequently be test-reduced by the vendor, depending on their design choices) or whether the test should be made as simple as could be defended in order to support making the test practical and repeatable."

If these were mutually exclusive options, it would be like asking whether the glass is half full or half empty.  Basically, if a test suite doesn't take account of the major risks and problems, how could it be defended?  (Though I suppose we can decide that an impairment is known but minor.)  But we should not see the objective of good "fault coverage" and the objective of simplicity as contradictory; both are good.  (Since writing this, I have received Tom's message, which similarly lists three objectives.)

Second, I don't believe the standard needs to write production tests, any more than it tells manufacturers how to make things.  But the relation between the standard and procurement specifications is relevant.

A standard is not a procurement specification.  But I agree with you: if it gives a clear lead that saves each customer/vendor pair from inventing their own acceptance tests, that would make life simpler.  I do not wish to encumber either the standard or the procurement with impracticality.  I observe that the "TP3- Normative (Static) Stressed Sensitivity Test" of aronson_1_0704.pdf and the "TP3- Adaptation Speed Test" on the next page are basically the same thing.

The differences are:
- Clock dither in the first only
- Sinusoidal amplitude interferer in the first only
- Sinusoidal tap weight modulator in the second only
- Detail of the ISI-generating filter differs

Therefore an apparatus which can be used for the first needs only taps which can be modulated (as opposed to tap weights set, e.g., with a screwdriver) in order to be used for the second.  That's not a huge delta, and it leaves us contemplating two test rigs (the combination of p10 and p11, plus the "TP3- Simple Informative Sensitivity Test" from p13) instead of three.  Two sounds simpler than three to me.
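
To make the "one rig, two tests" point concrete, here is a minimal numpy sketch of a stressor with all the elements switchable.  Every parameter value in it (tap weights, modulation depths, frequencies) is an invented placeholder, not a number from Lew's slides or the draft:

    import numpy as np

    rng = np.random.default_rng(0)
    spb = 16                      # samples per bit
    bit_rate = 10.3125e9          # serial line rate, b/s
    fs = bit_rate * spb

    # NRZ pattern (stand-in for the PRBS of the real test)
    x = np.repeat(2.0 * rng.integers(0, 2, 4096) - 1.0, spb)
    t = np.arange(x.size) / fs

    def stressor(clock_dither, interferer, tap_mod):
        # ISI-generating 3-tap filter; the slides differ in filter
        # detail between the two tests, but one filter is used here
        side = 0.25 * np.ones(x.size)
        if tap_mod:
            # sinusoidal tap weight modulation (illustrative 100 kHz)
            side *= 1.0 + 0.5 * np.sin(2 * np.pi * 100e3 * t)
        y = side * np.roll(x, -spb) + x + side * np.roll(x, spb)
        if interferer:
            # sinusoidal amplitude interferer (illustrative level)
            y += 0.1 * np.sin(2 * np.pi * 1e6 * t)
        if clock_dither:
            # sinusoidal clock dither, 0.05 UI peak, applied by resampling
            shift = 0.05 * spb * np.sin(2 * np.pi * 10e6 * t)
            y = np.interp(np.arange(x.size) + shift, np.arange(x.size), y)
        return y

    # static stressed sensitivity test: dither + interferer, fixed taps
    static_signal = stressor(clock_dither=True, interferer=True, tap_mod=False)
    # adaptation speed test: modulated taps only
    dynamic_signal = stressor(clock_dither=False, interferer=False, tap_mod=True)

The only hardware delta between the two calls is whether the tap weights can move, which is the point made above.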

I have not yet heard why the ISI in the first case might not be time-variant if the ISI in the second case will be.
Nor have I heard, for the second case, why we would want to find the difference between sensitivity with tap modulation running and sensitivity with tap modulation off.  How much difference is too much?  Why?  And what about the difference between overload with tap modulation running and overload with it off?

I have not heard a reason to believe that the "temporal variance penalty" can be treated as orthogonal to the penalties of the impairments screened for in the static test.  That sounds unlikely; it would need to be proved, or else shown that the temporal variance penalty is negligible, so that we can move on.

If we think of the combination of the two tests, we can achieve good fault coverage with better rigour without answering some of these unnecessary questions.

Then, by switching off each of the three elements:
        Clock dither
        Sinusoidal interferer
        Tap weight modulator
we may be able to show, for each element, one of the following:

- the element is not a risk item: we can delete it from the normative procedure;
- the element relates wholly to, e.g., the CDR, the laser, or the PCS: the standard can then describe how to switch that item off (for clock dither, any "off" phase is as good as any other; for the tap weight modulator, define the phase to stop at: maybe whatever we think is the "hardest" phase);
- the element is separable (example: we might find that the penalty due to clock dither and the penalty due to the sinusoidal interferer have nothing to do with each other): we can give guidance on how to split out the tests, if it helps.  As Tom suggests, this might be additional (informative) material (like the "Simple Informative Sensitivity Test").  A sketch of this bookkeeping follows below.
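
As a toy illustration of that toggle-and-compare bookkeeping (every number below is invented purely to show the logic, not measured data):

    # Hypothetical penalties in dB, each measured with one element
    # running alone; the values are invented for illustration
    single = {"clock_dither": 0.1, "interferer": 0.9, "tap_mod": 0.6}
    combined = 1.6     # penalty with all elements running (also invented)

    NEGLIGIBLE = 0.2   # dB; below this an element is "not a risk item"
    ADDITIVE_TOL = 0.2 # dB; slack allowed when testing separability

    # First option: not a risk item, candidate for deletion from the
    # normative procedure
    deletable = [e for e, p in single.items() if p < NEGLIGIBLE]

    # Third option: separable, i.e. the individual penalties add up
    # (in dB) to the combined penalty, so the tests could be split out
    remaining = {e: p for e, p in single.items() if e not in deletable}
    separable = abs(sum(remaining.values()) - combined) <= ADDITIVE_TOL

    # The second option (element relates wholly to the CDR, laser or
    # PCS) is an engineering judgement, not something this arithmetic
    # can decide.
    print("delete:", deletable, "| separable:", separable)

With these invented numbers the sketch would drop clock dither and report the remaining two elements as separable; real measurements would of course drive the actual decisions.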

This is how we avoid a proliferation of acceptance tests: by anticipating what alternatives could be used and making choices, and by giving guidance to drive out ambiguity.

On your other worry, "Things will default back to unstressed receiver testing which has been shown to ignore important and primary performance issues": I think we have consensus that that would be a bad thing; Lew's p13 "TP3- Simple Informative Sensitivity Test" is there to defend us from that threat.

Hope this allays your concerns,

Piers

> -----Original Message-----
> From: owner-stds-802-3-10gmmf@listserv.ieee.org
> [mailto:owner-stds-802-3-10gmmf@listserv.ieee.org]On Behalf Of Zona,
> Robert
> Sent: 05 August 2004 22:13
> To: STDS-802-3-10GMMF@listserv.ieee.org
> Subject: Re: [10GMMF] Notes from Aug 3rd Meeting on TP3 Definition
>
> Hi All,
>
> I apologize I could not attend the meeting. I do want to
> comment on the
> testing philosophy issue. I feel very strongly that the right
> way to go
> is with the "simple as could be defended in order to support
> making the
> test practical and repeatable" methodology as proposed by Lew Aronson.
>
> I believe it is very important that the main normative tests
> be designed
> in such a way that they can easily be implemented as production tests.
> The stressed sensitivity test is the most important of these. If we
> encumber this primary normative test to the point that it is
> impractical
> for use as a production screen one of two things will happen;
>
> -       Each customer/vendor pair will invent their own acceptance
> tests, which results in chaos.
>
> -       Things will default back to unstressed receiver testing which
> has been shown to ignore important and primary performance issues.
>
> For this reason I believe it is important to break the normative tests
> down into practical, repeatable and digestible blocks, e.g.
> keeping the
> static stressed test and the dynamic channel effects tests separate.
> This way each customer/vendor can choose which tests should be 100%
> production tests and which can be sample tests or qualification only
> tests, but the tests that are performed in either case are
> well defined
> by the standard.
>
> With this in mind I suggest we adopt Lew's proposal as the starting
> point and continue to move it forward as a framework in parallel with
> the channel modeling effort with the intention of adjusting parameters
> consistent with the final channel model later. We made great progress
> leading up to and during the July plenary meeting and I
> believe this is
> the best way to keep that momentum going.
>
> Thanks,
>
> Bob Zona
> Marketing Director, Enterprise Optics
> Intel Optical Platform Division
>