
Re: [8023-CMSG] Proposed Upper Layer Compatibility Objective




Matt,

 

Thanks for the reply.

 

I'm trying to stay away from the organizational "where to do it" angle of .3 vs .1 and consider whether there is some form of backpressure that could be applied at a finer granularity, such as per-VLAN ID or per-CoS.

 

WG locale aside, are you saying you're not a fan of either a per-VLAN ID or per-CoS backpressure mechanism?
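
For concreteness, here is a rough sketch of what such a frame might carry. Everything in it is invented for discussion (the opcode value, the field layout, and the Python encoding used just to make the layout explicit); only the destination address and EtherType are borrowed from today's 802.3x PAUSE. A per-VLAN variant would presumably carry a VLAN ID plus a single pause time instead.

# Hypothetical per-CoS pause frame: one pause time per CoS value 0..7
# instead of a single time for the whole link.  Illustrative only.
import struct

PAUSE_DA = bytes.fromhex("0180c2000001")   # 802.3x PAUSE destination address
MAC_CONTROL_ETHERTYPE = 0x8808             # MAC Control EtherType
HYPOTHETICAL_OPCODE = 0x00FF               # placeholder; a real opcode would
                                           # have to be assigned

def per_cos_pause(src_mac, quanta_per_cos):
    """Build a frame carrying one pause time (in quanta) per CoS 0..7."""
    assert len(src_mac) == 6 and len(quanta_per_cos) == 8
    payload = struct.pack("!H8H", HYPOTHETICAL_OPCODE, *quanta_per_cos)
    frame = PAUSE_DA + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE) + payload
    return frame.ljust(60, b"\x00")        # pad to minimum frame size (pre-FCS)

# Example: pause CoS 6 for the maximum time, leave the other classes running.
frame = per_cos_pause(bytes(6), [0, 0, 0, 0, 0, 0, 0xFFFF, 0])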

 

...Dave

David W. Martin
Nortel Networks
dwmartin@ieee.org
+1 613 765-2901 (esn 395)
~~~~~~~~~~~~~~~~~~~~

-----Original Message-----
From: Matt Squire [mailto:MSquire@HATTERASNETWORKS.COM]
Sent: Thursday, May 20, 2004 3:43 PM
To: STDS-802-3-CM@listserv.ieee.org
Subject: Re: [8023-CMSG] Proposed Upper Layer Compatibility Objective

 

Hi David -

 

In my head, the bottom line is that this is an unsupportable problem at the Ethernet layer.  Congestion is an end-to-end problem - telling someone to slow down their transmissions is an application/host-level issue, not a link-layer issue.

 

Within a system, as Hugh pointed out, congestion starts with too much traffic going out the egress ports, which is then fed back somehow to the ingress ports, which then do intelligent discard based on some parameter (p-bits, DSCP, something).  Every device I've ever worked on hits the same problems and follows the same basic paradigms, and it seems like Hugh has run into the same thing.  This is not specific to a given bridge implementation.  For feedback to work properly, you have to know where the traffic is going.  It isn't based on the classification algorithms allowed by the switch (i.e., it doesn't matter if they're using different classification algorithms), but on what they do to each packet.  For example, in a bridge, you'd want to know which packets are destined for which egress ports on the bridge you're forwarding to, so that you could selectively pause traffic for the congested egress port.  That's just not a problem .3 can or should solve.
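
To make that paradigm concrete, here is a toy model of the feedback loop described above - no particular bridge; all names and thresholds are invented for illustration:

# Egress congestion is fed back to the ingress side, which then discards
# selectively based on what it knows about each frame: where it is going
# and its p-bits/DSCP.  Toy model only.

egress_congested = set()            # egress ports currently over threshold

def egress_feedback(port, queue_depth, threshold=1000):
    """Egress side: flag or clear congestion for one port."""
    if queue_depth > threshold:
        egress_congested.add(port)
    else:
        egress_congested.discard(port)

def ingress_admit(dest_port, priority):
    """Ingress side: the local forwarding lookup supplies dest_port and the
    header supplies the priority; only then can the discard be intelligent."""
    if dest_port not in egress_congested:
        return True                 # target egress is fine, forward
    return priority >= 5            # congested: keep only the high classes

# The key point: dest_port comes from *this* device's own forwarding
# decision.  A neighbor can't make that call without knowing this device's
# forwarding state, which is why a link-level pause can't selectively
# protect one egress port on the device downstream.
egress_feedback(port=7, queue_depth=5000)
print(ingress_admit(dest_port=7, priority=2))   # False: dropped at ingress
print(ingress_admit(dest_port=3, priority=2))   # True: other egress is fine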

 

IMHO, .3 helps best by maintaining a reliable pipe for some higher-layer QoS mechanism to use in whatever manner it has been configured.  Creating multiple, different-class channels within a single .3 link is not a good direction for us to go.

 

- Matt

 

 

-----Original Message-----
From: David Martin [mailto:dwmartin@NORTELNETWORKS.COM]
Sent: Thursday, May 20, 2004 2:31 PM
To: STDS-802-3-CM@LISTSERV.IEEE.ORG
Subject: Re: [8023-CMSG] Proposed Upper Layer Compatibility Objective

 

Hugh,

For the sake of discussion, take it as a given that all bridges have 8 output queues per port, with frames classified based on CoS (MEF terminology), aka user_priority (802.1 terminology).  If a downstream bridge could send a Pause_CoS_n (where n = 0..7) backwards/upstream, wouldn't that alleviate the scalability issue you mentioned?

In arithmetic terms, Device 1 would need 8*P queues, where P is the number of ports it has.
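
To put rough numbers on both claims (the port counts below are invented purely for illustration):

# Back-of-the-envelope comparison of the two scaling arguments.
P = 24                                    # ports on Device 1 (assumed)

# Per-CoS pause: 8 CoS queues behind each of Device 1's output ports,
# independent of what the neighboring bridges look like.
per_cos_queues = 8 * P                    # 8 * 24 = 192

# Per-egress-port pause (the scenario you described): each Device 1 output
# port must mirror every output queue of the bridge attached to it, so the
# total is the sum of the neighbors' queue counts.
neighbor_queues = [24] * P                # assume 24-port neighbors throughout
per_egress_queues = sum(neighbor_queues)  # 24 * 24 = 576, and it grows with
                                          # the neighbors, not with Device 1
print(per_cos_queues, per_egress_queues)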

Doesn't the scalability issue you mentioned only arise when different bridges use different output-queue classification approaches? Then I could imagine a squared-type relationship.

Just trying to follow the line of reasoning here.

Thanks.

...Dave

David W. Martin
Nortel Networks
dwmartin@ieee.org
+1 613 765-2901 (esn 395)
~~~~~~~~~~~~~~~~~~~~

 

-----Original Message-----
From: Hugh Barrass [mailto:hbarrass@CISCO.COM]
Sent: Thursday, May 20, 2004 9:23 AM
To: STDS-802-3-CM@listserv.ieee.org
Subject: Re: [8023-CMSG] Proposed Upper Layer Compatibility Objective

Brad,

I've answered below:

Booth, Bradley wrote:

>Hugh,
>
>I want to lock in on one paragraph you mentioned.  It is listed below:
>
>"The only use of PAUSE that would (might) work would be if device 2
>could
>signal to device 1 that only frames destined for port K should be
>paused. This would require that device 1 must understand how device 2 is
>going to classify and direct the traffic and device 1 must maintain
>separate queues on its output port corresponding to the output ports of
>device 2. This means that device 1 will wind up with a total number of
>queues equal to the sum of all the queues on all of the devices
>connected to it. Scaling for more than 2 devices is left as an exercise
>for the reader."
>
>You said that this would require device 1 to understand how device 2
>classifies and directs traffic.  When you say that, are you referring to
>the 802.3 portion of device 1?  If you are, then I would agree that we
>have an issue of adding "intelligence" to the 802.3 MAC.  If not, would
>not 802.1 know how to classify this traffic?  Maybe this has to do with
>the definition of port, priority and queue.  It might help if you could
>explain your use of the terminology to a layman.
>
>
>
If device 2 wants to send a message that says, "pause the traffic that
will be directed to my output queue K," then device 1 must understand the
criteria that device 2 will use to forward traffic to its output K. That
may be a MAC address, an IP address, or something else entirely.

Note that the "output queues" of device 2 may be physical ports, virtual
ports or even s/w queues for an edge device.
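
A small sketch of what that implies in practice (the forwarding table below is invented; in reality the criteria are whatever device 2 actually uses, MAC, IP or otherwise):

# To honor "pause traffic for my output K," Device 1 effectively needs a
# copy of Device 2's forwarding decision, plus separate queues to match.

device2_forwarding = {                 # Device 2's decision, normally
    "00:aa:bb:cc:dd:01": "K",          # unknown to Device 1
    "00:aa:bb:cc:dd:02": "L",
}

paused_outputs = {"K"}                 # Device 2 asked: pause my output K

def device1_may_send(dest_mac):
    """Device 1 can only comply if it can predict where Device 2 will
    forward the frame, i.e. if it mirrors Device 2's forwarding state."""
    return device2_forwarding.get(dest_mac) not in paused_outputs

print(device1_may_send("00:aa:bb:cc:dd:01"))   # False: would land on output K
print(device1_may_send("00:aa:bb:cc:dd:02"))   # True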

Hugh.