
Re: [8023-CMSG] Questions



Brad,

I agree that the way to go is to standardize provisions within the
protocol that allow L2 to pass information mirroring the congestion
conditions from one end of the link to the other. Let the switch /
bridge / router implementer make the best use of the capabilities the
standard offers.
Hence, in the worst case, if one end does not support the new CM
provisioning, congestion handling will skip the Distinctive
Backpressure stage and move directly to the link level, as the PAUSE
frame does today.
I would add to that "Distinctive Backpressure" type of Xoff the ability
to Xoff traffic from a specific MAC address.
We could also consider conveying buffer load state (useful mainly for
routers) and/or a request for rate reduction, as mentioned in some
other emails.
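
For illustration only - none of this is from any draft - a "Distinctive
Backpressure" MAC Control payload along these lines could carry the
targeted Xoff, the buffer load state and the rate-reduction request in
one frame (the opcode value, field names and sizes are all assumptions):

#include <stdint.h>

/* Hypothetical payload for a "Distinctive Backpressure" Xoff.  Nothing
 * here comes from a draft; opcode, fields and sizes are assumptions made
 * only to keep the discussion concrete. */
#define OPCODE_DBP_XOFF 0x0101   /* made-up opcode (real PAUSE is 0x0001) */

struct dbp_xoff {
    uint16_t opcode;         /* OPCODE_DBP_XOFF                            */
    uint8_t  target_mac[6];  /* station whose traffic should be held off   */
    uint16_t pause_quanta;   /* 0 = resume (Xon), >0 = hold for N quanta   */
    uint8_t  buffer_fill;    /* optional: receiver buffer occupancy, in %  */
    uint8_t  rate_request;   /* optional: requested rate cap, % of line    */
};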


Thanks

Gadi




________________________________

From: Booth, Bradley [mailto:bradley.booth@INTEL.COM]
Sent: Wednesday, May 19, 2004 19:37
To: STDS-802-3-CM@LISTSERV.IEEE.ORG
Subject: Re: [8023-CMSG] Questions



Hugh,

That's where I'm having a problem.  I see some of this as not increasing
the intelligence of the MAC, but rather affecting the net bandwidth
available on the link.  I believe in some discussions there was the
concept of XUP and XDOWN to augment XON and XOFF, so that instead of
halting the traffic completely the upper layer could decrement or
increment the available bandwidth.  It would be the same MAC Control
frame, but with finer granularity.
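
Purely as a sketch of that idea (XUP/XDOWN do not exist in 802.3; the
opcode names, the 10% step and the state variable are all assumptions),
the sender's reaction could be as simple as:

/* Hypothetical reaction of a sender's shaper to XUP/XDOWN MAC Control
 * frames.  XON/XOFF map to 100% / 0% of the link; XUP/XDOWN step the
 * allowed rate by a fixed amount.  Names and the 10% step are assumed. */
enum cm_opcode { CM_XOFF, CM_XON, CM_XDOWN, CM_XUP };

static unsigned allowed_rate_pct = 100;    /* share of link rate we may use */

void on_cm_frame(enum cm_opcode op)
{
    const unsigned step = 10;              /* assumed granularity: 10% */

    switch (op) {
    case CM_XOFF:  allowed_rate_pct = 0;   break;   /* classic PAUSE */
    case CM_XON:   allowed_rate_pct = 100; break;
    case CM_XDOWN: allowed_rate_pct = (allowed_rate_pct > step)
                                      ? allowed_rate_pct - step : 0;   break;
    case CM_XUP:   allowed_rate_pct = (allowed_rate_pct + step < 100)
                                      ? allowed_rate_pct + step : 100; break;
    }
}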


That concept is not radically different (at least to me as a PHY guy)
from using the XON/XOFF messaging with an indication of which traffic
priority to halt.  This way, the upper layer can control which
priorities should be paused, which again reduces the net bandwidth on
the link.  No "intelligence" is added to the MAC.  The group just
creates extensions in MAC Control for the upper layers to have better
control of the bandwidth on the link.
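
To make the per-priority XON/XOFF picture concrete, a minimal sketch
might look like the following; the frame layout and the eight-entry
timer vector are assumptions, not anything defined in MAC Control today:

#include <stdint.h>

/* Hypothetical per-priority pause payload: one enable bit and one timer
 * per 802.1p priority.  The layout is an assumption for discussion only. */
struct prio_pause {
    uint16_t opcode;        /* made-up "priority pause" opcode             */
    uint8_t  prio_mask;     /* bit i set => priority i is being paused     */
    uint16_t quanta[8];     /* hold-off time per priority, in pause quanta */
};

/* Sender side: gate each priority queue on its own timer instead of
 * halting the whole link. */
void on_prio_pause(const struct prio_pause *pp, uint32_t pause_timer[8])
{
    for (int prio = 0; prio < 8; prio++)
        if (pp->prio_mask & (1u << prio))
            pause_timer[prio] = pp->quanta[prio];
}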


Thoughts?

Thanks,
Brad

-----Original Message-----
From: Hugh Barrass [mailto:hbarrass@cisco.com]
Sent: Tuesday, May 18, 2004 1:41 PM
To: Booth, Bradley
Cc: STDS-802-3-CM@listserv.ieee.org
Subject: Re: [8023-CMSG] Questions


Brad,

If you step back & think about it - what PAUSE does is reduce the net
bandwidth used on a link. If the devices at both ends of the link were
smart, they would somehow negotiate what that net rate should be (so
that the output of one drains at a rate that doesn't mess up the input
of the other). Making this work using an XOFF/XON mechanism operating
across a (potentially very long) link is far from optimal. Perhaps
there is scope to change the PAUSE definition to say "send me no more
than X % of bandwidth on this link."
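
One way to picture that "no more than X %" variant is a token-bucket
shaper on the sending side, refilled at the requested fraction of line
rate.  This is only an illustrative sketch; the structure, names and
refill policy are assumptions, not part of any PAUSE definition:

#include <stdint.h>

/* Illustrative only: enforce a receiver-requested cap of rate_pct percent
 * of line_rate_bps with a token bucket. */
struct shaper {
    uint64_t line_rate_bps;  /* raw link speed                            */
    unsigned rate_pct;       /* last cap requested by the link partner    */
    int64_t  tokens_bits;    /* available credit, capped at burst_bits    */
    int64_t  burst_bits;     /* allowed burst above the sustained rate    */
};

/* Called periodically; dt_ns is the time elapsed since the last refill. */
void shaper_refill(struct shaper *s, uint64_t dt_ns)
{
    s->tokens_bits += (int64_t)(s->line_rate_bps * s->rate_pct / 100
                                * dt_ns / 1000000000ull);
    if (s->tokens_bits > s->burst_bits)
        s->tokens_bits = s->burst_bits;
}

/* Transmit a frame only if enough credit is available for all of it. */
int shaper_may_send(struct shaper *s, uint32_t frame_bits)
{
    if (s->tokens_bits < (int64_t)frame_bits)
        return 0;            /* hold the frame: a "soft" PAUSE */
    s->tokens_bits -= frame_bits;
    return 1;
}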

From the perspective of a higher layer, setting the effective width of a
pipe should be perfectly acceptable; other changes that increase the
"intelligence" of the MAC layer would require much more scrutiny.

Hugh.

Booth, Bradley wrote:

JT,

I got the answer I needed, which is that there is a base assumption that
an 802.1 layer needs to exist above the 802.3 MAC if there is going to
be any use of priorities.  It was the interaction between the MAC's
queues and 802.1 queues that I didn't understand, as I spend most of my
time at the physical layer.

I'm still mulling over the statement by Matt that PAUSE makes a bigger
pipe into a smaller pipe.  Over a long period of time and if it were
implemented correctly, I could understand that analogy.  The trouble I'm
having with that statement is that it seems to me that PAUSE is
performed because of back pressure from upper layers (memory has passed
a watermark).  If upper layers can handle QoS/CoS, then surely they'd be
able to handle making a big pipe run like a small pipe.  If they cannot,
then it seems that PAUSE would want some finer granularity rather than
XON/XOFF.

Thanks,
Brad

-----Original Message-----
From: owner-stds-802-3-cm@listserv.ieee.org
[mailto:owner-stds-802-3-cm@listserv.ieee.org] On Behalf Of Jonathan Thatcher
Sent: Monday, May 17, 2004 8:49 AM
To: STDS-802-3-CM@listserv.ieee.org
Subject: Re: [8023-CMSG] Questions


Brad,

The way you choose to ask the question sends the response in a
particular direction that you may or may not be intending.

If I were to ask you whether 10GBASE-T knows how to forward packets from
the MAC-Client interface, one could respond in two different ways where
both, depending on perspective, are technically correct:

1. 10GBASE-T does not know anything about the MAC-Client interface, as
that is exposed only in layers above 10GBASE-T.
2. Of course it does. By definition, 10GBASE-T references the upper
layers. These are, therefore, explicitly included in the 10GBASE-T
specification.

Now someone might argue with each of these. For instance, the argument
to the second might be, "you don't understand, the MAC is common across
multiple port types." This argument is true, but misses the point. The
fact is, that is the beauty of the layered architecture.

Ethernet is not just the PMD. Ethernet is the PMD and all layers above
the PMD that provide a complete solution, whether those layers are
shared or not.

Just because 802.1 is shared with other 802 "dots" does not mean that
when it is integrated with Ethernet it isn't part of Ethernet.

Some in 802.1 would argue that all of 802.1 is part of the MAC. 802.1 is
part of Layer 2. 802.1 is part of an Ethernet solution.

There are any number of ways that you could modify your question to get
opposite responses.

Example: Is it understood or implied that 802.3 knows how to direct to
and from multiple queues? Answer: Absolutely. See EPON. But, even
without EPON, MAC-Control knows how to deal with packets to/from control
and data queues.

Etc.

My response to 1) would therefore be: 802.1 knows. Therefore, by
definition Ethernet knows.**

jonathan

** Exception: if there is no 802.1, then there are no queues and
Ethernet doesn't know because there is nothing to know. In this case,
the question is moot. :-)


-----Original Message-----
From: owner-stds-802-3-cm@LISTSERV.IEEE.ORG
[mailto:owner-stds-802-3-cm@LISTSERV.IEEE.ORG] On Behalf Of Booth, Bradley
Sent: Sunday, May 16, 2004 6:50 PM
To: STDS-802-3-CM@LISTSERV.IEEE.ORG
Subject: Re: [8023-CMSG] Questions


Norm,

Thanks for the response.  Two follow-up questions:
1) Is it understood or implied that Ethernet knows how to direct frames
to and from these 8 queues?
2) What if the device does not use a bridge, as in an adapter?

Thanks,
Brad

-----Original Message-----
From: owner-stds-802-3-cm@LISTSERV.IEEE.ORG
[mailto:owner-stds-802-3-cm@LISTSERV.IEEE.ORG] On Behalf Of Norman Finn
Sent: Sunday, May 16, 2004 11:11 AM
To: STDS-802-3-CM@LISTSERV.IEEE.ORG
Subject: Re: [8023-CMSG] Questions


Brad,

I think you did miss the mark, particularly with:

  "Considering that Ethernet doesn't know in advance about the
provisioning
   of the network and does not care about which packets it delays or
drops,
   then it is likely that 802.1 and the upper layers can do all the
   priorities or differentiated services that they want but will see
   diminishing returns as the load on the network increases."

I would agree with, "Ethernet doesn't know in advance about the
provisioning of the network", but 802.1D bridges certainly do care about
which frames are delayed or dropped.  Bridges define the use of 8 queues
per output port, and frames are marked with 8 levels of priority.
Although strict priority scheduling is the only queue draining algorithm
specified in the standard, others are explicitly allowed, and most
vendors implement varieties that provide very good latency and bandwidth
guarantees.  Furthermore, a great many bridges are able to assign
priorities to 802.3 frames based on criteria such as IP DSCP code
points.
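
A bare-bones sketch of the strict priority draining described above; the
queue representation and function name are invented for illustration,
not taken from 802.1D:

#include <stddef.h>

/* Minimal stand-in types; a real bridge has its own frame/queue structures. */
struct frame { struct frame *next; /* payload omitted */ };
struct queue { struct frame *head; };

/* Strict priority draining across the 8 per-port queues: always transmit
 * from the highest-numbered non-empty queue. */
struct frame *strict_priority_dequeue(struct queue q[8])
{
    for (int prio = 7; prio >= 0; prio--) {    /* 7 = highest priority */
        if (q[prio].head != NULL) {
            struct frame *f = q[prio].head;
            q[prio].head = f->next;            /* pop the head frame */
            return f;
        }
    }
    return NULL;                               /* all queues empty */
}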

In short, Ethernet is *far* from "best effort".

-- Norm

Booth, Bradley wrote:

My apologies in advance if the answers are obvious, but I've been so
focused on cabling and the physical layer the last couple of weeks, so
I'm a bit brain dead to upper layer stuff.

There has been some talk about differentiated services and priorities
associated with 802.1 and the upper layers.  Here are my questions:
1) If the network is overprovisioned (available bandwidth >= maximum
instantaneous throughput), then am I correct in assuming that these
differentiated services and priorities operate just fine because the
upper layer protocols within the switches have sufficient bandwidth?
Should I also assume that the available bandwidth is based upon what the
end stations (adapters, servers, etc.) can handle?
2) If the network is not overprovisioned (either in the switches or
adapters), then is it fair to assume that these differentiated services
and priorities will provide diminishing returns as throughput increases
over the available bandwidth?

I keep coming back to the statement others have made that 802.1 or the
upper layers can handle this, but I cannot help thinking that would only
be true for an overprovisioned network.  Considering that Ethernet
doesn't know in advance about the provisioning of the network and does
not care about which packets it delays or drops, it is likely that 802.1
and the upper layers can do all the priorities or differentiated
services that they want but will see diminishing returns as the load on
the network increases.

This would seem to me like going out and buying a Formula 1 race car to
use to drive to work in Silicon Valley.  A lot of money in fuel and
equipment only to sit on 101 during rush hour(s).

Am I off the mark here?

Thanks,
Brad