
[EFM] RE: [EFM-P2MP] RE: EDMA R-PON




Glen, thanks for the comments, and I have some differences of opinion that I
try to describe below. There seems to be some confusion on the operation and
logistics for EDMA rPON, so I try to address these.  Many of the comments
you made are very broad-brush, and it seems they apply equally well to TDMA,
rather than being specifically critical of EDMA .... Comments in-line below.
--Dave Horne

-----Original Message-----
From: glen.kramer@xxxxxxxxxxxx [mailto:glen.kramer@xxxxxxxxxxxx]
Sent: Friday, November 23, 2001 4:41 PM
To: david.m.horne@xxxxxxxxx; crick@xxxxxxxxxxxxxxxxxx;
stds-802-3-efm-p2mp@ieee.org; stds-802-3-efm@ieee.org
Subject: RE: [EFM-P2MP] RE: EDMA R-PON

David,

What you propose is a hub-polling scheme where each ONU polls its next
neighbor, 

>>>Nope, there is no polling going on. The stations send a broadcast (via
the reflective coupler)
"end of transmit" event. That would be about as similar to polling as, for
example, a system reset signal. Polling implies a 2-way communication to me.
I won't get bogged down in a terminology debate though, and will adapt my
response accordingly. It seems an important distinction though, based on
some of your later comments. You also use the word "token" later in your
reply, which has implications that don't really apply to the EDMA scheme.
"Event" seems the most accurate and unambiguous to me. 

as opposed to roll-call polling where OLT (Master) polls each ONU.
Here are some reasons not to do hub polling:

1. Hub polling relies on correct operation of each ONU. One misbehaving ONU
will break the entire cycle.

>>>Though EDMA is not polling, the same generic operational statement made
above can be said for TDMA. Any shared network relies on correct operation
of end stations. One could take down hundreds of TDMA cable modem users with
a few select register settings, if one were so inclined, or by blasting the
upstream with noise--whether intentional or not. For EDMA rPON, maybe, maybe
not. Depends on the failure mode and recovery method, and that is true for
TDMA also.   


The solution may be to implement some complex timeout mechanism by which each
ONU knows that its predecessor has failed. That will make all ONUs more
expensive.

>>>I wouldn't phrase it quite that way. A timer function is fractions of a
penny in silicon cost, and I see no reason it would be complex. TDMA uses
high-precision, low-jitter timestamps, precision time bases, timeouts, and
timers that FAR outnumber any simple timing circuitry used in EDMA rPON.
More on this later. 

2. Since idle ONUs use very short transmission times (little data and the
end-of-transmission token), the polling cycle time is reduced. That allows
busy ONUs to send data more often, i.e. to have more bandwidth.

>>>Not necessarily, and that is only one possible way idle might be
implemented. It MAY allow that, depending on the scheduling and allocation
scheme used. Network operator has total control over this, via the control
message. The way this is enabled or enforced in actual operation is a
business model decision (bandwidth charging basis), and the protocol should
not mandate or preferentially accommodate favorite business models at the
peril of others. That just makes sense for broad applicability and
deployment. It should be flexible, and EDMA rPON allows that. Note that most
so-called broadband last mile services today are in a state of financial
peril, despite subscriber growth. Profitability is elusive under the current
models. This speaks for the need to allow the potential for changes in the
models, rather than viewing fiber-based EFM as simply migrating a template
of what exists today onto an optical fiber (which has no economic
justification). 


That, believe it or not, creates many problems for network operators: users
get accustomed to
higher bandwidth during slow hours and complain during peak hours.  They
also don't want to upgrade if lots of best effort bandwidth is available.
And finally, it is very difficult to charge for such best effort bandwidth.
Thus, operators must have the ability to control maximum bandwidth per ONU,
i.e., be able to control minimum cycle time. 


>>> This point applies equally well to a TDMA system also, and I can make
the same statement about a TDMA cable modem from personal experience.
Existing legacy network experiences are of little value in comparison,
unless one views EFM as the "template migration" I mention above. I don't
view it that way, but that's a discussion for another day. This sounds more
like a network overprovisioning issue anyway, not relevant as a criticism
specific to EDMA rPON, or TDMA for that matter.  The criticism belongs with
the allocation model and how it is set up, or how flexible it is. EDMA would
be flexible in this regard, via the control settings. I describe this more
later.

 
ONUs can control minimum cycle time by delaying their end-of-transmission
tokens even in the absence of data.

>>>EDMA rPON ONUs can only do what they are told by the headend, which may
or may not include this. Such operational details are at the discretion of
the network operator, via the control settings. EDMA  is flexible in this
regard. I see the "no data" case as having several simple operating modes as
a possibility. That supports several bandwidth views of the world.

That requires a timer in each ONU and a protocol for setting minimum cycle
value.

>>>Again, implementing a timer is of nano-significance. The TDMA you are
arguing FOR is replete with complex, high-precision, time-synchronous
signaling that outweighs what is necessary for EDMA by a long shot. More on
this later. I don't quite see why a *protocol* is needed for setting the
minimum. If you are viewing this as a multi-step, iterative, back and forth
negotiation, I don't envision it that way at all. 


It is much simpler, if OLT controls cycle time. 

>>>As stated above, the OLT/headend/PoP is controlling this with EDMA rPON,
so we are in agreement. But EDMA has greater flexibility because it does not
have the complex and rigid time constraints mandated by a strict time base,
nor does EDMA have any of the associated circuitry on both ends of the
connection to accomplish this. The headend gives an order in EDMA rPON, and
the end nodes carry out that order, triggered by simple received events from
end stations. The headend in EDMA can change that order at any time,
including a halt of any or all end stations. These are simple one-sided
commands, and don't require a protocol or 2-way communications dedicated to
them. 

>>>It sounds like there is a misconception on the operation. The cycle is
centrally controlled by the headend, except the mechanism of this is more
flexible than with TDMA. The intra-cycle events are between the endpoints,
but are defined (structured is probably a better word) by the headend, and
every station receives them (including the headend). Headend does not have
to schedule every endpoint for every cycle, and much flexibility is possible
here. Per-station allocations for a given cycle also do not have to be
equal, nor is there any waste if a station doesn't need most or all of its
allocation in a given cycle. Just before a cycle ends, headend might send a
new schedule for the next cycle that takes into consideration the actual
usage of the current cycle, or several recent cycles averaged out, or any
algorithm a silicon vendor chooses, to implement any bandwidth-charging
business model one could imagine. It is totally up to the silicon vendor,
rather than being mandated by the spec. There are even some fairly simple
ways the headend can remove stations from the cycle due to inactivity and
define an event position in the schedule for idle stations to re-enter. To
that end, one could even emulate request/grant TDMA, or even fixed slot
TDMA, via EDMA, if one wanted to. They would be indistinguishable one hop
away, when all the precise timing, cycles, and overhead request/grant
messaging of TDMA disappear.   

>>>The transmit allocation per user could even grow dynamically per cycle,
as well as back off dynamically, all based on utilization stats in prior
cycles collected by the headend, and/or network policy thresholds,
if they exist. It allows a lot of room for vendor differentiation in headend
silicon, since these added capabilities wouldn't be part of the baseline
spec. Yet, ONUs from different vendors would interoperate with any vendor's
headend silicon because the ONUs just follow orders based on a simple common
control message that states transmit order and max transmit size per
station. Cycle interval is controlled by the headend (as rigidly or flexibly
as is desired; this doesn't affect ONUs in any way since they just follow
the simple control message, whenever it arrives). The mechanics of this
would also not be mandated by the spec; only a baseline capability would be
required, and beyond that is at the vendor's option. The headend silicon
could be as simplistic or as flexible and all-encompassing as a vendor wants
to make it. ONU silicon from any vendor would all implement the same basic
order-following functions, based on the simple control message which
contains the cycle parameters as just the sequence and per-station max
allocation.
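The per-cycle rescheduling described above can be sketched in a few lines. This is purely illustrative: the function name, the "double the observed demand" heuristic, and the byte limits are my assumptions, not anything proposed for EDMA rPON.

```python
# Hypothetical sketch of an EDMA rPON headend scheduler. It rebuilds the
# next cycle's (station, max_bytes) schedule from the bytes each station
# actually sent in the previous cycle. All names/limits are illustrative.

def build_control_message(usage_bytes, max_alloc=12000, min_alloc=1500):
    """Return the transmit sequence with per-station max allocations."""
    schedule = []
    for station, used in sorted(usage_bytes.items()):
        if used == 0:
            continue  # idle stations can be dropped from the cycle
        # grow toward observed demand, clamped to operator-set bounds
        alloc = max(min_alloc, min(2 * used, max_alloc))
        schedule.append((station, alloc))
    return schedule

# Example: stations 1, 2, 5, 14, 16; station 5 was idle last cycle.
usage = {1: 9000, 2: 400, 5: 0, 14: 12000, 16: 3000}
print(build_control_message(usage))
# -> [(1, 12000), (2, 1500), (14, 12000), (16, 6000)]
```

Any vendor-specific policy (averaging over several cycles, bandwidth caps per business model, re-entry slots for idle stations) would slot into the same loop without the ONUs needing to know anything about it.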


It can do it either by specifying the exact transmission window for each
ONU, or by delaying the next polling cycle. Delaying the polling cycle is
inefficient because it punishes all ONUs.

>>>In a sense, you are defining a prerequisite rigid operational and
allocation scheme to support the inefficiency conclusion. That prerequisite
does not exist for EDMA rPON, and several modes of operation are possible,
as I indicated in the previous section. Under the same scenario you define,
TDMA would have empty slots. I don't view it as punishing anyone because the
capacity wasn't being used. Punishment is only perceived if a particular
business model says to interpret it that way. That comes down to one's view
of EFM's role and coupling method into the overall network picture. I guess
we have differing views here, but that too is a discussion for another day. 

Specifying individual transmission slots for each ONU provides much
more flexibility.

>>>I'm not following how a rigidly-defined timeslot allocation falls under
the category of "flexibility." If I am told to check my mail at 8:17:03
every day, because that is the only time slot when the mailbox is unlocked,
I wouldn't call that flexible. If someone calls around 8 o'clock and says
the mail is in and you can check it now if you want, or later if you prefer,
THAT is flexible. EDMA has that flexibility; TDMA is highly rigid and
inflexible by my definition. That rigidity costs a lot in gate count,
compared to EDMA, yet all that precision time alignment, and all the trouble
to achieve it, disappears one hop away in the network (once the data leaves
the EFM realm where all the timing constraints are unknown). 

3. If ONU is to detect end-of-transmission token from its neighbor, it
should have either 2 receivers (at different lambdas) or a burst-mode
receiver, since the power level of data from OLT and neighbor ONU would be
different. 

>>>The second receiver (if that is the desired implementation choice), for
the EVENT, is extremely simple. It doesn't even need a forwarding layer
(though one could be added if a case could be made for it, e.g. peer to peer
data transfer support). It just performs a simple pattern match. A few
comparators and registers. It costs pennies in silicon. If we are trading
off gates between the 2 approaches, EDMA doesn't even begin to approach the
gate count for TDMA's complex timing circuitry.  
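As a software model of the event detector just described (a shift register feeding a comparator, "a few comparators and registers"), something like the following would suffice. The 16-bit code word is a made-up placeholder, not a proposed encoding.

```python
# Software model of the end-of-transmit event detector: a shift register
# plus a comparator matching a fixed bit pattern. The pattern value is an
# illustrative assumption; in hardware this is a handful of gates.

EOT_PATTERN = 0xB5D7     # hypothetical "end-of-transmit" code word
PATTERN_BITS = 16

def detect_eot(bitstream):
    """Return the bit offset just past the first EOT pattern, or None."""
    shift_reg = 0
    for i, bit in enumerate(bitstream):
        shift_reg = ((shift_reg << 1) | bit) & ((1 << PATTERN_BITS) - 1)
        if i >= PATTERN_BITS - 1 and shift_reg == EOT_PATTERN:
            return i + 1
    return None

# Embed the pattern after some payload bits and detect it.
payload = [1, 0, 1, 1, 0, 0, 1, 0]
eot = [(EOT_PATTERN >> b) & 1 for b in range(PATTERN_BITS - 1, -1, -1)]
print(detect_eot(payload + eot))  # -> 24, right after the pattern ends
```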


The better solution is for the OLT to tell the ONU when to transmit.

>>>OLT can't rightly know the requirement of ONU's data transmission unless
ONU tells it. The statement you make is a bit too simplistic, and leaves out
the many TDMA operational details that are at the root of the complexity
that keeps it from being better-simpler, such as: circuitry for constructing
and sending the request from the ONU (possibly in a contention slot, but in
any case a precise time slot), and its transit-time delay; processing the
request type (and the associated processing circuitry) at the headend;
scheduling the request into the master schedule some cycle in the future (or
deferring it and subsequent deferral notification); creating and sending a
time-synchronized grant definition message to the ONU (and its associated
transit time); receiving, decoding, and setting up for this grant at the
ONU, then acting on it by precisely time-aligning data into a pre-designated
timeslot, all this after completing a complex synchronization operation with
the remote headend initially. This is a lot of complexity and circuitry for
TDMA, and it is distributed on both ends of the connection. EDMA has none of
this at all.
 
It becomes an even easier task if the OLT assigns an exact transmission
window to each ONU.

>>>I don't think I'd call the complex TDMA process I described above
"easier," and certainly not in comparison to NO such process at all for EDMA
rPON. Also note that these deficiencies with TDMA cannot be overcome--they
are inherent--and I qualified the penalty in an earlier message. Parts of
the penalty may vary slightly between various flavors of TDMA, but they are
still complex and create inefficiencies compared to EDMA's simple
event-based operation. 


Then it is enough to deliver a common time reference to all ONUs to allow
each ONU to transmit at right time. To deliver this time reference, all that
needs to be done is a timestamp in the slot assignment message.

>>>The TDMA common time reference that you described above as a triviality,
is part of a complex, programmable, distributed multi-step, multi-element
system with multiple messages and a dedicated protocol with various sync and
frame markers. Each end station must have timebase recovery circuitry that
tracks the headend to a very small error margin. It has to accommodate
numerous error sources with high stability and accuracy. It has to remain in
constant lock-step with the remote headend in order to accurately transmit
within the strict bounds of a time slot. EDMA rPON has none of this (none of
the circuitry, none of the operational complexity, and none of the complex
simulations to prove out this timing functionality).


4. Some operators are considering deploying EPON with a splitter located in
central office. That will allow PON to be split into two or to migrate some
users to P2P links when bandwidth demand grows. 

>>>I don't doubt that someone is talking about this, but it would make
little economic sense for a provider to purchase and deploy high-power
complex TDMA ONUs to every home, while at the same time purchase and deploy
point-to-point fiber home runs. Then later, on top of that
high per-user expense, swap out the TDMA ONU for a P2P ONU. At best this
is an infrequent niche/ vendor-financed :o) case. Maybe it is more
justifiable for businesses/institutions, or customer-owned networks, which
have different sensitivities. In any event, I addressed this issue in my
reply to Bill, as the pathological case. No matter who would deploy it, the
relative cost is far greater than either a "normal" PON (regardless of
TDMA/EDMA) or normal P2P.  EDMA does not preclude this; I just question its
viability from a deployment economics perspective. 

However, accumulation of "walk times" in such a system will make it highly
inefficient for hub-polling. 

>>>Even though EDMA is not a polling system, if walk times accumulated in
some hypothetical hub polling system it would be inefficient. That is true,
but I don't see it as applicable to EDMA. If grant denials/deferrals were
accumulated in a TDMA system it would be very highly inefficient too.

One solution may be to have each ONU look for the token from
its pre-predecessor. That may reduce the walk time, but will limit the
maximum transmission from one ONU to a minimum round-trip time. Again, that
will make the system more complicated.

>>>I don't agree that it would make it more complicated for EDMA (nothing
changes functionally due to distance; that is one advantage of using
events), but let me elaborate on the conditions behind that. Ignoring how
realistic and widespread this deployment scenario will or won't be, one thing
your "long-legged PON" example (Bill and Carlos mentioned similar things) has
pointed out to me is that the performance advantages of EDMA are
not as intuitive for the case where the drop fiber length is the full PON
length.  If the TDMA operated more as a per-packet DAMA, the 2 would be very
close, but if the TDMA operated in an unsolicited grant type mode for most
packets (not a likely scenario, but anyway...), a lack of ranging
compensation would penalize EDMA. For that case, ranging would be important
to add to EDMA. I don't think that's a big deal and I've thought of a couple
very simple ways to do this. As the legs all get shorter, however, the
significance of ranging for EDMA drops off.   I'd have to think about this
some more but if the average leg length is much less than half the full PON
length, ranging would seem to provide insignificant benefit. 
But the question remains on what the true market for this long-legged PON
scenario is, whether TDMA, EDMA, or whatever.

5. Hub polling scheme has a major limitation: it requires connectivity
(communicability) between ONUs. 

>>>Though not a polling scheme, I will adjust your question in reference to
EDMA. Exactly the opposite is true. The connectivity between ONUs via the
reflective coupler is a key enabler for EDMA rPON to be less complex, less
expensive, and more efficient than TDMA PON.


That imposes some constraints on PON
topology, namely, the network should be deployed as a ring or as a
broadcasting star. This requirement is not always satisfiable as (a) it may
require more fiber to be deployed, or (b) fiber plant with different
topology might be already pre-deployed. 

>>> The same points can be made against TDMA PON though. (I don't quite get
the ring comment, especially for residential deployments). These are all
pretty hypothetical. I can't imagine adding complexity penalties to
accommodate such one-of-a-kind legacy mistakes. The market share for those
is of no influencing significance, if they even exist at all. There aren't
any "marooned" PON first mile infrastructures collecting dust that I am
aware of. Actually I wish there were....deployment could proceed much
faster. 

In general, we want our algorithm
to be able to support whatever PON topology is given. In an access network
we can count only on connectivity from the OLT to every ONU and every ONU to
the OLT. That is true for all PON topologies. 

>>>There is only one PON topology we are deploying with: point to
multipoint, 1:N. I don't quite see where you were leading with this. It
starts out wanting to support the general case, but ends up saying there is
only one topology.

Therefore, the OLT is the only device that can arbitrate the time-divided
access to the shared channel.

>>>This is just making a defining statement about the OLT for a master/slave
TDMA PON, as I interpret it. So yes, it is true. However, the implication
that the shared channel can only be arbitrated by time division is not true,
and EDMA rPON is an alternate method. Polling and tokens would be another
couple ways.  EDMA R-PON does not have the complex, expensive, and
inefficient time division issues of TDMA, so it does not need the system-
and circuit-level complexity of time division at all. 

In general, if some intelligence is to be split between OLT and ONUs, it
must be specified in the standard to ensure device interoperability. If that
intelligence is confined to the OLT and the ONU is made as simple as
possible, that requires less standardization effort and provides a more
robust system.

>>>That is a very good argument for EDMA rPON, but I don't see it as very
applicable for TDMA. To claim TDMA has low or no intelligence in ONU,
especially compared to EDMA, is not true, as I pointed out earlier. The
large amount of timing circuitry found in TDMA is not present at all in
EDMA, for both ONU and headend. I don't know about the claim the TDMA
standardization effort is smaller. To me it seems less complexity in EDMA
leads to proportionately smaller standardization effort. Robustness does not
automatically follow from simplicity. I don't know how a robust system
argument can be made about either case at this point. 

   
Glen



-----Original Message-----
From: Horne, David M [mailto:david.m.horne@xxxxxxxxx]
Sent: Friday, November 23, 2001 10:07 AM
To: 'Bill Crick'; 'stds-802-3-efm-p2mp@ieee.org'; 'stds-802-3-efm@ieee.org'
Subject: [EFM-P2MP] RE: EDMA R-PON


Bill:
The 5 cents a foot was a hypothetical cost for the individual fibers, not
the trenching. I didn't have a ballpark trenching cost figure handy, but
know it is much higher. In any case, it's a pretty shocking number and seems
to reduce the likelihood that long drop fibers will be deployed in PONs.
 
On your reply to Carlos re: distributed splitters, I think that could still
work. The only constraint there is that only the splitter farthest from the
subscriber (at the top of the hierarchy)  is reflective; all the others that
are closer to the subscriber are transmissive. Though not as bad as the
5k/5k split, the distributed splitter method still suffers the cost
penalties for extra trenching and fiber, regardless of whether reflective or
transmissive splitters. 
-----Original Message-----
From: Bill Crick [mailto:crick@xxxxxxxxxxxxxxxxxx]
Sent: Friday, November 23, 2001 10:50 AM
To: 'Horne, David M'; 'stds-802-3-efm-p2mp@ieee.org';
'stds-802-3-efm@ieee.org'
Subject: RE: EDMA R-PON


David: I agree. That is why I labeled it the 'Pathological case'.
 
BTW at 5 cents a foot, who is doing your trenching;-) Anyone got a figure for
residential trenching?
I've heard $2000/meter for trenched, and $20/m for aerial fiber, but I've
always assumed this was for
urban core, not residential?
 
Bill
-----Original Message-----
From: Horne, David M [mailto:david.m.horne@xxxxxxxxx]
Sent: Friday, November 23, 2001 11:20 AM
To: Crick, Bill [CAR:1A00:EXCH]; 'stds-802-3-efm-p2mp@ieee.org';
'stds-802-3-efm@ieee.org'
Subject: RE: EDMA R-PON


Bill, in the pathological case you describe (distribution fiber -> 0m, drop
fiber-> home run), the EDMA R-PON degrades down to roughly equal transit
time performance of TDMA (though there is still no request/grant
processing/scheduling delay).  However, that is not a realistic deployment
scenario for a PON. The PON business case loses its attractiveness in
comparison to P2P or active as the ratio of drop fiber to distribution fiber
increases. In terms of civil works ( the most costly part of deployment),
the PON advantages of less trenching, and less total fiber, evaporate as the
N:1 PON layout converges to an N-way P2P layout. These costs seem innocent
individually, but they really add up across a deployment. 
 
For example, take a 16:1  10Km PON:
Case 1: 16 x 5 km drops + 1 x 5 km distribution = 85 km total fiber
Case 2: 16 x 0.3 km drops + 1 x 9.7 km distribution = 14.5 km total fiber
Difference = 70.5 km (or 231,299 feet).
Even when shared 16 ways, this is quite a high cost since these are
individual strands in their own trench, not bundles.
For the sake of argument, at 5 cents a foot, this is over $700 per user in
additional fiber costs for the long drop fiber case.  Add to that an
additional 231,299 feet of trenching. Fiber "pair gain" indeed!
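The arithmetic above checks out, and can be reproduced in a few lines (the 5 cents/foot figure is, as Dave says, hypothetical):

```python
# Re-running the 16:1, 10 km PON fiber-length comparison from the email.
FT_PER_KM = 3280.84
users = 16

case1 = 16 * 5.0 + 5.0      # 16 long drops + short distribution (km)
case2 = 16 * 0.3 + 9.7      # short drops + long distribution (km)
extra_km = case1 - case2
extra_ft = extra_km * FT_PER_KM
cost_per_user = extra_ft * 0.05 / users   # at 5 cents/foot (hypothetical)

print(round(case1, 1), round(case2, 1), round(extra_km, 1))  # 85.0 14.5 70.5
print(round(extra_ft))         # -> 231299 feet of extra fiber
print(round(cost_per_user))    # -> ~$723 per user
```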
 
-----Original Message-----
From: Bill Crick [mailto:crick@xxxxxxxxxxxxxxxxxx]
Sent: Friday, November 23, 2001 8:00 AM
To: 'Horne, David M'; 'Angeloni Cesare, IT'; 'stds-802-3-efm-p2mp@ieee.org';
'stds-802-3-efm@ieee.org'
Subject: RE: EDMA R-PON


How much time do you lose between when one end station stops transmitting
and the next to transmit detects 
this fact if they are max time of flight apart? 
assume a splitter 5km from Head end, T1, and T2. 
T1 stops. 10km later T2 detects this, and another 
10km of dark fiber until T2's signal gets to the head end. 
However in this case the head end only sees 10km worth of darkness 
Move the splitter closer to the head end and it gets worse? 
Pathological case is :Splitter at the head end, T1, T2 10 km each. 
Head End sees 20km worth of darkness which is the round trip time from
splitter to T2 
However if the splitters are close to the endstations, and far from the HE,
then it's not too bad.
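Bill's walk-time numbers can be reproduced as follows. The fiber propagation speed (~2e8 m/s, i.e. n = 1.5) and the 1 Gb/s line rate are my assumptions for illustration, not from his message:

```python
# "Darkness" math from Bill's reflective-PON example: after T1 stops, the
# gap the head end sees is the T1 -> splitter -> T2 round trip, i.e. twice
# the splitter-to-ONU leg length.

V_FIBER = 2.0e8      # m/s, assumed propagation speed in fiber (n ~ 1.5)
LINE_RATE = 1.0e9    # bits/s, assumed line rate

def darkness(leg_km):
    """Return (seconds of dark fiber, bits of dead air) for a given leg."""
    seconds = 2 * leg_km * 1000 / V_FIBER
    return seconds, seconds * LINE_RATE

for leg in (5, 10):   # splitter mid-span vs. splitter at the head end
    s, bits = darkness(leg)
    print(f"{leg} km legs: {s * 1e6:.0f} us dark, {bits / 8:.0f} bytes idle")
# 5 km legs:  50 us dark,  6250 bytes idle
# 10 km legs: 100 us dark, 12500 bytes idle
```

This matches Bill's observation: the closer the splitter sits to the head end (longer legs), the longer the dark gap; short legs make it small.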

Bill Crick 
Nortel Networks 
-----Original Message----- 
From: Horne, David M [mailto:david.m.horne@xxxxxxxxx] 
Sent: Friday, November 23, 2001 1:37 AM 
To: 'Angeloni Cesare, IT'; 'stds-802-3-efm-p2mp@ieee.org'; 
'stds-802-3-efm@ieee.org' 
Subject: [EFM] RE: [EFM-P2MP] Point-to-Point plus Shared Media 



Cesare, what I had in mind would be contention-free/collision-free, yet 
simple to implement. It would be CSMA-like, but no CD (collision detect) is 
needed. In other words, it wouldn't be a transmit free-for-all. There needs 
to be a predictable, bounded transmit scheme and some level of 
prioritization in order to accommodate quality of service requirements for 
the streams that pass through the EFM realm. 
Instead of being time-synchronized and filling pre-assigned time slots based
on a precise time base, a la TDMA or DAMA, it would be event-synchronized.
The event being the "end-of-transmit" for a given end station. Consequently,
there is no requirement for a separate ranging protocol, or periodic
re-ranging either. I can think of a couple ways ranging could be done 
though, if there was a reason for it. 
Unlike a pure LAN, there would be a master (e.g. the headend) that 
orchestrates the simple event-based transmit scheme. The headend sends a 
single control message downstream (broadcast to all stations since 1:N) for 
this purpose. In its simplest form, this message (e.g. for a 16:1 PON) would
include:
  
1) the transmit sequence for a transmit round (for example, say there are 
only 5 subscribers on this PON: 
station #1, then #2, then #5, then #14, then #16; repeat) and 
2) the maximum transmit size per station (for example, 10 Ethernet frames of
any size up to max, or some max number of bytes without fragmenting frames)

Once this control message is received, the stations begin transmitting, in 
sequence, based on the rules in the control message. These same rules could 
apply for seconds, minutes, hours, or days. It could potentially be days 
before the headend sends a new transmit control message, since the stations 
just cycle through the last set of rules sent. Very simple operation with 
little protocol overhead, and bounded cycle time. It is uniformly fair to 
all stations in the most basic form, but has the flexibility to allocate 
bandwidth asymmetrically, and dynamically, if desired. 
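As a minimal sketch of the control message just described: the class and field names, and the wrap-around lookup, are illustrative assumptions of mine, not a proposed wire format.

```python
# Hypothetical model of the EDMA rPON control message: just a transmit
# sequence and a per-station maximum, as in the two-item list above.
from dataclasses import dataclass

@dataclass
class EdmaControlMessage:
    sequence: list    # station IDs in transmit order, e.g. [1, 2, 5, 14, 16]
    max_frames: int   # max Ethernet frames per station per turn

    def next_station(self, current):
        """Who transmits after `current` finishes (the round wraps)."""
        i = self.sequence.index(current)
        return self.sequence[(i + 1) % len(self.sequence)]

msg = EdmaControlMessage(sequence=[1, 2, 5, 14, 16], max_frames=10)
print(msg.next_station(5))    # -> 14: station 14 goes after station 5
print(msg.next_station(16))   # -> 1: wraps, station 1 starts the next round
```

Each ONU only needs this message and the end-of-transmit event from its predecessor; it never needs a time base to know when its turn starts.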
All stations know their position in the transmit sequence for a given round,
so they know when it is about to be their turn. The "end-of-transmit" event
from the prior station is the signaling mechanism. There are a number of 
ways this could be implemented. At a high level it is similar to a station 
transmitting into an allocated time slot that the station is responsible for
hitting accurately (as in TDMA), except in this case there is no
time-base-dependence. It just detects events in this case, and reacts 
accordingly. This will only work with a reflective splitter/combiner, and is
really only practical in an optical P2MP network.
There are lots of other operational details, and many possible ways to do 
them, but this is the essence of the operation. The basic idea needs to be 
clear first, before going into those details. It has great flexibility in 
the way the transmit rounds are defined by the headend. Only the most simple
is described above. So there is great latitude for vendor-specific
value-adds. OAM could just be a control message type that drops right in to 
the overall scheme. 
So in summary, it is much lower complexity (both design and operationally), 
and has higher bandwidth efficiency, than a TDMA PON. Efficiency-wise, 
compared to fixed-slot TDMA, this has variable and dynamically-adaptive 
transmit opportunities; compared to dynamic request/grant TDMA, there is no 
processing/scheduling delay for requests/grants at the headend, and no 
round-trip transit time delay due to the distribution fiber from headend to 
splitter. 
Any comments welcome. 
PS: How about I give it a name for discussion purposes: EDMA R-PON 
(Event-Driven-Multiple-Access Reflective PON) 
--Dave Horne 


-----Original Message----- 
From: Angeloni Cesare, IT [mailto:cesare.angeloni@xxxxxxxxxxx] 
Sent: Thursday, November 22, 2001 2:01 AM 
To: 'Horne, David M'; 'stds-802-3-efm-p2mp@ieee.org' 
Subject: RE: [EFM-P2MP] Point-to-Point plus Shared Media 


Horne, good point about reflection. 
Reflection is not the only way to implement a "true" Ethernet-like PON. As a
matter of fact, other star topologies might meet the cost target in an even
better way. See the old 10BaseFP standard.
The problem is the distance and the speed.
For collisions to be detected, there can be only a limited number of bytes
out on the optical path not yet returned to the carrier sense of the
connected ONUs. This, together with the bit rate, limits the distance to a
great extent.
TDMA allows for a point to point protocol such as the GE to share a fiber 
segment. 
I'm sure you have in mind a way to solve this shortcoming. 
I'm puzzled. 
Let me know more. 
Cesare 
> -----Original Message----- 
> From: Horne, David M [SMTP:david.m.horne@xxxxxxxxx] 
> Sent: Wednesday 21 November 2001 17:27 
> To:   'John Pickens'; stds-802-3-efm@ieee.org; 
> stds-802-3-efm-p2mp@ieee.org 
> Subject:      RE: [EFM-P2MP] Point-to-Point plus Shared Media 
> 
> 
> John, have you given any thought to the use of *reflective* 
> splitter/combiners, as opposed to the transmissive variety that is being 
> assumed for TDMA PON? It would be much more LAN-like; i.e. more true to 
> Ethernet operation. 
> 
> In addition, the reflective splitter/combiner (tree coupler) would be 
> roughly half the cost of a transmissive coupler, since it has half as many
> 2x2 sections, with essentially the same loss.
> 
> Silicon costs and development time would also be much lower, since the 
> multiple access design complexity would be far lower (as would the 
> operational complexity of the overall network). The need for TDMA 
> complexity 
> essentially disappears, since the reflected signal serves essentially the 
> role of CSMA in traditional Ethernet. Variable-size frames could be 
> transmitted without any explicit size reservation, and without any of the 
> waste associated with fixed slot size.  
> 
> As well, because there would be no request/grant protocol or 2-way 
> transit-time-delay wait time of the distribution fiber, transmission 
> efficiency is higher.  About 8 full-sized Ethernet frames of additional 
> capacity can be recovered (between any 2 user transmissions) from the 
> 2-way 
> transit time of a 10km distribution fiber. This recovered capacity per 
> user 
> is on par with the *allocated* capacity per user, for TDMA with fixed 
> slots 
> size that was being discussed.  Not to mention no need for the processing 
> and scheduling delay for the request/grant at the headend, which recovers 
> even more of the capacity that is lost to the TDMA protocol overhead. 
> 
> Overall, the idea is that changing out one passive component in the 
> outside 
> plant for another lower-cost passive component with the same signal loss 
> would allow a high degree of simplification in the design and operation of
> PON, and an improvement in transmission efficiency. It would also be more
> consistent with traditional Ethernet. 
> 
> --dave horne 
> 
> 
> -----Original Message----- 
> From: John Pickens [mailto:jpickens@xxxxxxxxx] 
> Sent: Tuesday, November 20, 2001 10:49 AM 
> To: Norman Finn; stds-802-3-efm@ieee.org; stds-802-3-efm-p2mp@ieee.org 
> Subject: Re: [EFM-P2MP] Point-to-Point plus Shared Media 
> 
> 
> 
> Good clarification. 
> 
> I would like to study one additional question related to this topic. 
> 
> How can an operator offer the benefits (in the EPON link segment) of both 
> point to point AND point to multipoint to a single endpoint beyond the ONU