
Re: [8023-CMSG] Server/NIC analogy




Jonathan,

Your presumptions are the same as mine. That doesn't mean that
someone more clever than I won't come up with a justification for
expanding this idea, but this is what I could come up with.

I suppose this could easily be mapped to any "edge" switch that
is connected directly to servers. Since the source of all traffic is
only 1 hop away, sending feedback to the source is a straightforward
effort. As I mentioned before, receivers that are not
sources of data can ignore the messages and nothing changes.

Regards,
Ben

Jonathan Thatcher wrote:
Ben,

If I get this right, you are painting a picture where the switch/bridge and
the server(s)/NIC(s) are integrated into a single system.

The feedback from the switch/bridge would extend back solely to the
server(s)/NIC(s).

It is presumed that the NIC already has an implementation-specific means to
throttle the processor. It is presumed that an implementation-specific means
will be created to tie the feedback mechanism to the throttle.

It is further presumed that the switch/bridge/line-card can readily identify
the source(s) of the traffic that are causing congestion.

Finally, it is presumed that the congestion is, in Bob Grow's words,
transitory. As Hugh implies below, if this problem is a subscription
problem, then rate limiting is an adequate, if not ideal, solution.

Did I capture this correctly?

Presuming so, you have defined the problem as local to the specific system.

You have also taken an interesting twist on Hugh's point of moving the
"choke point" by putting it back to where it would have gone anyway, to the
source. In short, as there is only one hop, there is no other place for it
to migrate to.

This is a curious concept in that there is no communication between bridges,
nor is there an implied bridge in the NIC. If so, there is no question about
ownership of the problem :-)

Hmmmmmmmmm.

jonathan

-----Original Message-----
From: owner-stds-802-3-cm@listserv.ieee.org
[mailto:owner-stds-802-3-cm@listserv.ieee.org]On Behalf Of Benjamin
Brown
Sent: Friday, June 04, 2004 5:45 PM
To: STDS-802-3-CM@listserv.ieee.org
Subject: Re: [8023-CMSG] Server/NIC analogy


Hugh,

For the BPE link between the server and the port interface line
card, static rate control is a good idea. Depending upon the speed
of that link (1G, 2.5G, 10G, other) the server should absolutely
be rate limited to that speed. However, since that "NIC" is on the
same card as the server, I view that as capable of using the same
proprietary mechanism that the stand-alone server/NIC solution
uses.

The more interesting situation is the one where the port interface
line card is a switch serving multiple servers.
If the output port(s) of the switch can't support the bandwidth
destined to it (them), it may be reasonable to throttle back the
source server(s).

The reason I think this works is the (very) small number of hops
back to the source. In a multiple-hop, multiple-switch, poorly
constrained network, it might be difficult, if not impossible, to
push feedback back to the source where it can be used effectively.
It may not work at all if the number of hops is greater than 1,
unless the protocol is such that the returned message somehow
reflects enough information (is DA enough?) to tell the source which
path is congested. Even then, the number of packets already in the
pipe all the way back to the source may simply be too high to make
this feasible at all.

To me, it seems the question is whether a protocol that may be
limited to only 1 hop (if the receiver is not the source, the message
is ignored and the old packet drop mechanism is used) can
satisfy any set of 5 criteria. If the protocol is not limited to a
single hop, then I would be curious to know how it gets back
to the source, with what latency (round trip time in terms of
more packets to receive), and how it doesn't move the point
of congestion.
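As an aside, the one-hop receive rule described above ("if the receiver is not the source, the message is ignored") can be sketched roughly as follows. This is purely illustrative: the message fields (target_sa, congested_da) and class names are hypothetical, not from any standard, and the actual throttle would be implementation specific.

```python
# Hypothetical sketch of the one-hop congestion feedback rule.
# All field and class names here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class CongestionMsg:
    target_sa: str      # source address the switch wants throttled
    congested_da: str   # destination whose path is congested (is DA enough?)

class Nic:
    def __init__(self, mac: str):
        self.mac = mac
        self.throttled = False

    def on_congestion_msg(self, msg: CongestionMsg) -> str:
        # One-hop rule: a receiver that is not the named source ignores
        # the message, and the usual packet-drop behavior is unchanged.
        if msg.target_sa != self.mac:
            return "ignored"
        # The named source ties the feedback to its (implementation-
        # specific) transmit throttle.
        self.throttled = True
        return "throttled"
```

Since only the directly attached source ever acts on the message, nothing needs to propagate past the first hop.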

Regards,
Ben

Hugh Barrass wrote:

Ben,

I think that the situation described is a good justification for rate
limiting.

In general, if a device cannot service its input queue at line rate then
it will be unable to implement the prioritization policy decisions. In
that case it is much better to limit the rate of the link so that the
sender can make the appropriate decision regarding re-ordering,
buffering, discarding etc.

I think that the question remains whether this rate limiting mechanism
needs to be dynamic or whether it is sufficient to be pseudo-static. In
your example, I assume that the server that is unable to cope with line
rate traffic will always be uniformly incapable. There is no reason for
it to tell the sender on a packet by packet basis that it cannot handle
line rate traffic.

Hugh.
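The pseudo-static rate limiting Hugh describes amounts to capping the link once, at configuration time, rather than signaling packet by packet. A minimal token-bucket sketch (class name, rates, and burst size are all hypothetical) of that idea:

```python
# Illustrative token-bucket sketch of a pseudo-static rate limit:
# the rate is fixed at configuration time, not adjusted dynamically.
# Names and units here are assumptions, not from any standard.

class StaticRateLimiter:
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps          # configured once, never renegotiated
        self.capacity = burst_bits    # maximum burst allowance
        self.tokens = burst_bits
        self.last = 0.0

    def allow(self, frame_bits: int, now: float) -> bool:
        # Refill tokens for the elapsed time, up to the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= frame_bits:
            self.tokens -= frame_bits
            return True
        # The sender holds the frame, so it can re-order, buffer, or
        # discard on its own terms instead of having the link drop it.
        return False
```

Because the receiver is assumed uniformly incapable of line rate, a fixed rate and burst setting suffices; no per-packet feedback channel is needed.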


Benjamin Brown wrote:

All,

During a private discussion this afternoon regarding the results of
last week's meeting, the concept of feedback came up - whether
it was necessary or not. There was some level of discussion about
this during the meeting but no one seemed to be able to provide an
adequate justification for providing congestion feedback and why
the more common approach of packet drop wasn't adequate.

During this afternoon's discussion, I came up with something that
I think might be justification. I'm probably just painting a big target
on my chest but let's see how this goes.

Consider a stand-alone server with a 1G Ethernet NIC. Today's
CPUs could easily generate enough traffic to swamp the 1G
Ethernet link (okay this is a bit of an assumption on my part
but if they can't today they will be able to tomorrow). I don't
build these things, nor have I looked at their architecture all
that closely in a number of years, but I'll step out on a limb and
state that there's a (most likely proprietary) mechanism for the NIC
to tell the CPU that the link is too slow to handle all the packets
that it is trying to transmit. I'll step even farther out on that same
limb and state that the mechanism is not packet drop.

Now, let's use this analogy to consider a server card in a backplane
that communicates to the world via a port interface line card. The
server card communicates to the port interface line card using a
link compliant with the newly emerging Backplane Ethernet standard.
(Okay, so I'm looking a little into the future.) If you consider the
entire chassis analogous to the server/NIC in my initial example
then it would seem plausible that you would want to communicate
buffer congestion on the port interface line card back to the
server card using a mechanism other than packet drop.

I'll just close my eyes now. Fire at will.

Ben

--
-----------------------------------------
Benjamin Brown
178 Bear Hill Road
Chichester, NH 03258
603-491-0296 - Cell
603-798-4115 - Office
benjamin-dot-brown-at-ieee-dot-org
(Will this cut down on my spam???)
-----------------------------------------