
RE: [RPRWG] More comments on preemption



I agree with the analysis here. My thinking is that jumbo frames on Ethernet will be used more
in SANs than in the LAN/WAN area. It may be good, though, to have a non-normative annex on
pre-emption, given that we have good expertise. This will definitely enrich the standard.
 

Regards,

Devendra Tripathi
VidyaWeb, Inc
90 Great Oaks Blvd #206
San Jose, Ca 95119
Tel: (408)226-6800,
Direct: (408)363-2375
Fax: (408)226-6862

-----Original Message-----
From: owner-stds-802-17@xxxxxxxx [mailto:owner-stds-802-17@xxxxxxxx]On Behalf Of Aybay, Gunes
Sent: Thursday, April 12, 2001 7:03 PM
To: 'stds-802-17@xxxxxxxx'
Subject: [RPRWG] More comments on preemption

Here is my recap of what has been discussed so far:
 
- There seems to be concern about low- or medium-priority traffic (especially
  when jumbo frames are used) causing excessive latency and jitter for high-priority
  traffic, since high-priority packets arriving at the transit queue have to wait
  until the current packet in progress (which may be lower priority) is sent.

- Packet pre-emption may be useful to reduce latency and jitter for high-priority
  traffic, especially at low bit rates (e.g. 155 Mb/s) when the MTU is large (e.g. 64 KB).

- Latency and jitter for high-priority traffic are minimal at high bit rates (1 Gb/s and higher).

- Latency and jitter for high-priority traffic are minimal when the MTU is limited to 1500 bytes.

- The goal is to keep end-to-end latency below 100 ms and end-to-end jitter below 10 ms.
 
A few comments:
 
- We are already living in a world where almost all traffic over the Internet
  starts and ends at Ethernet interfaces, on clients and servers. Even if jumbo
  frame support is part of RPR, in most real rings the low- and medium-class
  packets we are worried about will be limited to a 1500-byte MTU.
 
- By the time RPR is standardized, I am not sure how much interest
  there will be in building new rings operating at 155 Mb/s and below.
 
- RPR's primary competition will be Ethernet. If we want this technology
  to be successful, we should avoid adding complexity whenever possible.
 
I suggest we don't define packet pre-emption as part of the RPR standard:
 
- This will keep the definition of the standard simple (so that we can complete it on time)
 
- High-speed rings and rings operating with a 1500-byte MTU do not have to
  carry the HW complexity (i.e. cost) of packet pre-emption.
 
- However, RPR should not preclude packet pre-emption. Vendors who want to,
  or need to, implement packet pre-emption should be able to implement this feature
  without jeopardizing base-level interoperability with other vendors.
  Pre-empted packets can be signalled through CRC errors, which will make
  it possible for packet pre-empting systems to interoperate with non
  pre-empting systems.
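The CRC-based fallback can be sketched in a few lines of Python (a toy model, not the actual RPR frame format; `zlib.crc32` stands in for whatever FCS the standard would define). A non-preempting receiver simply validates the trailing CRC and drops anything that fails, so a frame truncated by preemption looks like an ordinary corrupted frame:

```python
import zlib

def frame(payload: bytes) -> bytes:
    """Append a 4-byte CRC32 as a stand-in frame check sequence."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def accept(wire: bytes) -> bool:
    """Non-preempting receiver: keep the frame only if the FCS checks out."""
    if len(wire) < 4:
        return False
    payload, fcs = wire[:-4], wire[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == fcs

intact = frame(b"low priority data" * 100)
preempted = intact[: len(intact) // 2]   # transmission cut off mid-frame

print(accept(intact))      # True  - normal frame passes
print(accept(preempted))   # False - preempted remnant is discarded as a CRC error
```

The point of the design choice is that the non-preempting node needs no new logic at all: the preempted remnant is handled by the error path it must implement anyway.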
 
Gunes
 
 -----Original Message-----
From: Harmen van As [mailto:Harmen.R.van-As@xxxxxxxxxxxx]
Sent: Thursday, April 12, 2001 12:07 PM
To: stds-802-17@xxxxxxxx
Subject: [RPRWG] Additional comments on preemption, cut-through and store-and-forward

Additional comments on preemption, cut-through and store-and-forward.
 
Cut-through means that the packet on the ring keeps moving; the insertion-buffer filling only increases while the node clocks a packet out of its transmit buffer. Moving means here that one internal word unit is written into the insertion buffer while at the same time one word unit is read out. Upon the arrival of empty words, the filling of the insertion buffer decreases until it reaches zero. In front of the insertion-buffer stage, a pipelined header-recognition stage also lets the packet move through, so all header information is available at the end of the header-recognition pipeline. If the packet is addressed to the node, it is clocked into the receive buffer; otherwise into the insertion buffer. Scheduling between the insertion and transmit buffers is done at the transmit side. The implementation complexity of cut-through and store-and-forward is similar, with cut-through being slightly more complex.
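A minimal model of this word-per-clock insertion-buffer behaviour (my own sketch, not from the proposal): the fill level grows only while the node is inserting its own traffic, and drains back to zero as empty words arrive.

```python
def tick(ins_buf, ring_word, node_transmitting):
    """One clock tick of a cut-through transit path.

    ins_buf: list acting as the insertion (transit) buffer.
    ring_word: word arriving from upstream, or None for an empty word.
    node_transmitting: True while a packet leaves the transmit buffer.
    Returns the word put on the outgoing link this tick.
    """
    if ring_word is not None:
        ins_buf.append(ring_word)      # clock the arriving word in
    if node_transmitting:
        return "tx-word"               # transit traffic waits; buffer fills
    if ins_buf:
        return ins_buf.pop(0)          # cut-through: word in, word out
    return None                        # empty word forwarded

buf = []
# Node inserts its own packet for 3 ticks while 3 transit words arrive:
for w in ("a", "b", "c"):
    tick(buf, w, node_transmitting=True)
print(len(buf))   # 3 - the backlog built up during insertion
# Empty words arrive afterwards: the buffer drains back to zero.
out = [tick(buf, None, node_transmitting=False) for _ in range(3)]
print(out, len(buf))   # ['a', 'b', 'c'] 0
```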
 
Since some companies prefer to use store-and-forward and others cut-through, we propose to allow both types, because they easily interwork. The difference lies in the end-to-end delay performance and in the required size of the transit or insertion buffer, respectively.
 
 
Packet preemption is not yet an established technique. Hence the immediate reaction to my exploder mail was: it is too complex! Or: why not use ATM in the first place? And my answer to one of the comments is: of course I am talking about preemption during transmission; nobody missed anything. It is not complex, and error handling has been included!
 
Without packet preemption by IP-telephony or IP-conferencing packets, an all-IP world will never be able to achieve the voice conversation quality that circuit switching and ATM can provide. Natural voice communication requires a maximal end-to-end delay of 80-100 ms, not more! Above that it becomes more and more cumbersome. For free or low-cost private calls, higher delays might be acceptable, but not in the business world. For conversations over larger distances, the propagation delay already takes a big part of the permitted total delay (10,000 km gives 50 ms). Packetization of a payload of only 40 bytes of 64 kbit/s voice adds a further 10 ms. This means that for this distance only 20-40 ms is left for delays in the network, including the playout buffer for delay jitter. Delays in the end systems have not been taken into account, and for larger communication distances the remaining margin becomes proportionally smaller.

IP-telephony and IP-conferencing are not yet a commodity. The circuit-switched and ATM networks still do their excellent job for voice, and they carry the bulk of that service. Network operators therefore do not yet have to worry much about the end-to-end delay issue: everything is rather new, current customers accept the inferior quality, billing is not really established, and calls are much cheaper or free, so who cares at the moment. IP-telephony and IP-conferencing are so sexy and hyped that this alone might justify their usage, even when they do not work so well all the time.
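The delay budget above can be checked with a few lines of arithmetic. This is a sketch; the 200-km-per-ms fibre propagation speed, and reading the mail's ~10 ms as packetization at both ends, are my assumptions:

```python
# End-to-end delay budget for natural voice, following the numbers in the mail.
BUDGET_MS = 100.0   # upper end of the 80-100 ms bound for natural conversation

def propagation_ms(km: float) -> float:
    """Propagation delay, assuming light covers ~200 km per ms in fibre."""
    return km / 200.0

def packetization_ms(payload_bytes: int, rate_bps: int) -> float:
    """Time to fill one packet with samples from a constant-rate voice codec."""
    return payload_bytes * 8 * 1000 / rate_bps

prop = propagation_ms(10_000)             # 50.0 ms over 10,000 km
pack = 2 * packetization_ms(40, 64_000)   # 5 ms per end; the mail's ~10 ms total
remaining = BUDGET_MS - prop - pack       # left for queueing, jitter, playout

print(prop, pack, remaining)   # 50.0 10.0 40.0
```

With the lower 80 ms bound the remaining network budget shrinks to 20 ms, which is where the mail's "20-40 ms" range comes from.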
 
The ATM cell size was chosen so small because of voice conversations; it is not at all adequate for massive data communications. Natural communication between humans is low-volume compared with data, but it is certainly the most important form of global human interactivity, and handling interactive voice adequately in packetized networks is the most difficult and most sensitive form of communications. Why should one return to walkie-talkie communication with commands like 'over' as we move towards all-IP networks? MPLS will not solve this issue either.
 
Since there is furthermore increasing pressure to use very large data packets, so that users can exploit the TCP protocol with much larger throughputs than today, packet preemption will become unavoidable. It is just a matter of time. In fact, packet preemption is already being applied inside some routers, and I am sure preemption will soon also be seen on lower-speed router links. The first company with such a feature on the market will immediately outperform all other routers in that respect, and IETF standardization will certainly follow up. It has to be said that the larger the distances and the higher the link speeds, the poorer the TCP window mechanism performs with respect to throughput. Therefore, larger packets will be required to keep the data explosion going.
 
Considering the maximum IP-packet size of 64 Kbytes, one obtains without preemption the following per-node jitter on SONET/SDH links:
 
155 Mbit/s    - 3.495 ms per node
622 Mbit/s    - 0.873 ms
2.5 Gbit/s    - 0.218 ms
10 Gbit/s     - 0.054 ms
 
For SONET/SDH at 155 Mbit/s, the per-node jitter delay as a function of packet size is
 
1 Kbytes     - 0.054 ms per node
5 Kbytes     - 0.273 ms
10 Kbytes    - 0.546 ms
20 Kbytes    - 1.092 ms
40 Kbytes    - 2.184 ms
60 Kbytes    - 3.277 ms
 
Multiplied by the number of ring nodes passed, these figures in fact determine the playout buffer size, and that is caused by the RPR alone. Not included are the additional delays in the other network nodes of the connection.
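Both tables are simply the serialization time of the largest packet, MTU x 8 / line rate. They can be reproduced with the sketch below; note that the figures in the mail match approximate SONET/SDH payload rates (about 150, 600, 2400, and 9600 Mbit/s) rather than the nominal line rates, which is my assumption:

```python
def jitter_ms(mtu_bytes: int, rate_bps: float) -> float:
    """Worst-case per-node blocking: serializing one maximum-size packet."""
    return mtu_bytes * 8 / rate_bps * 1000

# 64 KB MTU across the SONET/SDH hierarchy (approximate payload rates assumed)
for name, rate in [("155 Mbit/s", 150e6), ("622 Mbit/s", 600e6),
                   ("2.5 Gbit/s", 2.4e9), ("10 Gbit/s", 9.6e9)]:
    print(f"{name:>10}: {jitter_ms(64 * 1024, rate):.3f} ms per node")

# Packet-size sweep on the 155 Mbit/s link
for kb in (1, 5, 10, 20, 40, 60):
    print(f"{kb:>2} Kbytes: {jitter_ms(kb * 1024, 150e6):.3f} ms per node")
```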
 
For high-bit-rate rings, the figures are perhaps not so impressive. However, for lower-bit-rate rings in manufacturing plants, public access, campus, or in-building areas, which constitute a huge market, it really adds up. The more areas in which RPR can be used, the more successful the IEEE 802.17 standard will be.
 
 
Since the preemptive mechanism raised some questions, here are some details.
 
- Three ring classes are considered:
  Class 1: Premium class (circuit emulation): guaranteed throughput, tight delay jitter
  Class 2: High-priority packet switching: guaranteed throughput, bounded delay jitter
  Class 3: Low-priority packet switching: best-effort
 
- Class 1 may preempt classes 2 and 3
- Class 2 may preempt class 3
- Class 1 uses cut-through
- Classes 2 and 3 must be store-and-forward to keep it simple
- The preemption mechanism holds both for packets clocked out of the insertion buffer and those leaving the transmit buffer
 
- At the ring receive side, all packet embeddings are resolved by forwarding the packets of the different priority classes into their corresponding receive or insertion buffers
- Resolving means that within the received packet under consideration, a new packet start of a higher class may appear, indicating an embedded packet that lasts until its end-of-packet delimiter shows up. This may happen more than once within a packet.
 
- Packets of the highest class are immediately forwarded, thereby possibly preempting a packet of a lower class, either from the insertion or the transmit buffer
- Due to the store-and-forward operation of the insertion buffers for the two lower classes, all holes left behind by embedded packets shrink together before the packets are forwarded onto the next transmission hop.
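As an illustration of the receive-side resolving step, here is a toy Python model (my own construction, not part of the proposal): the wire is a sequence of start/data/end events, an embedded higher-class packet interrupts the byte run of a lower-class one, and a stack tracks which open packet each data unit belongs to, so every class ends up reassembled in its own buffer with the holes shrunk away.

```python
def resolve(stream):
    """Demultiplex an embedded-packet stream into per-class buffers.

    stream: sequence of ("start", cls), ("data", chunk), ("end", None) events.
    A "start" seen while a packet is open must carry a higher class (lower
    number): it preempts, and the outer packet resumes after its "end".
    """
    stack = []                       # currently open packets, outermost first
    buffers = {1: [], 2: [], 3: []}  # completed packets per class
    partial = {}                     # cls -> chunks collected so far
    for event, arg in stream:
        if event == "start":
            assert not stack or arg < stack[-1], "only a higher class may preempt"
            stack.append(arg)
            partial[arg] = []
        elif event == "data":
            partial[stack[-1]].append(arg)   # data belongs to innermost packet
        elif event == "end":
            cls = stack.pop()
            buffers[cls].append("".join(partial.pop(cls)))
    return buffers

# A class-3 packet preempted mid-flight by a class-1 (premium) packet:
stream = [("start", 3), ("data", "lo"),
          ("start", 1), ("data", "VOICE"), ("end", None),
          ("data", "w"), ("end", None)]
out = resolve(stream)
print(out[1])   # ['VOICE'] - the premium packet is available immediately
print(out[3])   # ['low']   - the hole left by the embedding has shrunk away
```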
 
This operation ensures that the time-sensitive packets of IP-telephony and IP-conferencing practically shoot through the network with minimal delay. Note that the ring might only be a small part of the global connection and, as previously explained, every delay saving counts towards achieving the required end-to-end delay bound of 80-100 ms.
 
A mail on implementation complexity will follow.
It can also already be said that detecting the start/end of packets occurs in the same way as in operation without preemption. The MAC is thus agnostic.
 
best regards
Harmen
------------------------------------------------------------------
Prof.Dr. Harmen R. van As       Institute of Communication Networks
Head of Institute                      Vienna University of Technology
Tel  +43-1-58801-38800           Favoritenstrasse 9/388
Fax  +43-1-58801-38898          A-1040 Vienna, Austria
http://www.ikn.tuwien.ac.at      email: Harmen.R.van-As@xxxxxxxxxxxx
------------------------------------------------------------------