Good points ... Thanks, Harry, for the input. We'll need to look into these.
Khaled Amer
President, AmerNet
Architecture Analysis and Performance Modeling Specialists
13711 Solitaire Way, Irvine, CA 92620
Phone: (949)552-1114
Fax: (949)552-1116
e-mail: khaledamer@xxxxxxx
----- Original Message -----
Sent: Wednesday, August 16, 2000 7:47 AM
Subject: Re: RPR Perf. Metrics Discussions
Khaled,
From an operational point of view, you will have to define what your metric points are. This is what users like Excite, Sprint, or the Navy can use to help them 1) engineer their network and 2) measure its performance. That is how I see the EXTERNAL metrics, regardless of the scenario.
For each scenario you may have to define what is acceptable. This includes full mesh, partially meshed, hubbed, over-subscribed, and under-subscribed networks, etc.
From my point of view, you have captured the basic EXTERNAL metrics: delay and goodput (end to end). If you want to oversubscribe the network, the stat gain factor is another candidate metric, but it can be derived from the goodput.
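(For concreteness only, here is one way these external numbers could be pulled out of per-packet simulation records. This is a minimal sketch; the record fields and the offered-load input are placeholders, not anything the group has defined.)

def external_metrics(delivered_packets, sim_time_s, offered_bits):
    # delivered_packets: records for packets that made it end to end
    # (placeholder fields: send_time, recv_time, payload_bytes)
    delays = [p["recv_time"] - p["send_time"] for p in delivered_packets]
    delivered_bits = sum(p["payload_bytes"] * 8 for p in delivered_packets)

    mean_delay_s = sum(delays) / len(delays)      # end-to-end delay
    goodput_bps = delivered_bits / sim_time_s     # delivered payload rate

    # In an over-subscribed run, comparing goodput against the offered
    # load is one possible way to read the "stat gain" question.
    delivery_ratio = delivered_bits / offered_bits
    return mean_delay_s, goodput_bps, delivery_ratio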
I agree with Raj and Taylor that we need to look at the scenarios, and that other INTERNAL metrics may come up.
Harry
Taylor Salman wrote:
While I agree that it is probably still a little premature to be finalizing the metrics (they certainly can't be finalized in advance of the PAR), it is not premature to discuss them. The process should proceed from the PAR criteria to specific questions supporting those criteria. These questions should then evolve into specific metrics. The metrics are then grouped by like issues, and scenarios are developed that adequately test proposals against these criteria. Once the scenarios are developed, models are built, etc.
This is also an iterative process. In the development of the
scenarios, we will come across issues that weren't originally considered and
we'll need to update the metrics...
I think once the PAR is finalized, we will see additional discussion on
the metrics question. Of course, if we had some existing models
available for general consumption to provide a common understanding of what
people intended, we might also generate some significant discussion... :-)
Taylor
At 01:35 PM 8/15/00 -0700, Raj Sharma wrote:
Khaled,

Don't lose hope yet!
My take on the silence is that it may still be premature to discuss this, although it needs to be launched now. Without a good reference model it may be difficult to scope the performance. Not to be pedantic, but Taylor's tutorial suggested that we define a model before establishing requirements or metrics for performance.
I personally feel we are biting off more than we can chew, given the broad nature of the performance metrics we are going after. It may be time to approach this from a reference model point of view and then establish the performance metrics.
Comments?
raj
-----Original Message-----
From: Khaled Amer [mailto:khaledamer@xxxxxxx]
Sent: Wednesday, August 16, 2000 12:25 AM
To: Reflector RPRSG
Subject: Fw: RPR Perf. Metrics Discussions
RPR'ers,

Based on the 'underwhelming' responses that I got back on this, we'll consider the performance metrics discussion closed for now, and move on.

As I mentioned before, I want us to work on identifying the scenarios and metrics that we want to use as a starting point. I know that some of you have already put extensive effort into producing simulation results, but we all want to run a consistent subset for the various proposals. I've already posted some preliminary thoughts and suggestions (and got the same underwhelming response!).

Please share with us any thoughts that you have on this subset, and any suggestions for more details. In the meantime, I'll work on putting together more details too. I'd like to see us go to the Interim meeting with some of these discussions already started, so that we just work on ironing out any points of disagreement that we may have. Hopefully, we can reach some consensus on a starting subset that we all agree to, start running some coherent simulations after the Interim, and share the results at the November plenary. As you know, we're going to have a lot of simulation work to do, so getting a head start on it can help us avoid having it become a gating factor that delays the standard.

Thanks.

Khaled Amer
President, AmerNet
Architecture Analysis and Performance Modeling Specialists
13711 Solitaire Way, Irvine, CA 92620
Phone: (949)552-1114
Fax: (949)552-1116
e-mail: khaledamer@xxxxxxx

----- Original Message -----
From: Khaled Amer
To: Reflector RPRSG
Sent: Saturday, August 05, 2000 8:48 PM
Subject: RPR Perf. Metrics Discussions
All,

Attached is the latest version of the presentation that I gave at the La Jolla plenary meeting in July. It includes the suggestions that were raised in the discussions we had during the meeting. Please have a look and make sure that it covers everything we discussed. If not, please point out anything that I may have missed.

I'd like for us to reach agreement on this before the interim meeting that we're having in a few weeks in Santa Clara. Please provide any input or concerns that you may have by Friday 8/11. I'll plan on posting the 'final' version the following Monday for your review. After that, we'll consider this closed (at least for now).

Another couple of points:

- I would like to request that we begin thinking about some simulation scenarios that we can use as a starting point. We'll have to start with some subset and then add on as we decide is appropriate. Along with this, we should also identify in these scenarios a subset of the output results or metrics for these simulation runs. We can start by addressing one or more of the metrics that we have in the attached presentation.

For example, I'll go ahead and make a strawman attempt at this. We can use this strawman to open up the discussion by bashing it! How about starting with the following subset of the goals that we identified for this work (a rough computation sketch follows the list below):

Ring performance:
  Metrics: ring utilization under heavy loads, global throughput and goodput

Fairness:
  Metrics: per-class and per-node throughput, end-to-end packet delay for scenarios that demonstrate fairness (TBD)

Congestion control:
  Metrics: per-class and per-node throughput in response to a congestion condition occurring (TBD)
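As a very rough illustration of the per-node and per-class bookkeeping behind the fairness and congestion-control items above, something like the following could be computed from each run. The field names are placeholders, and Jain's index is just one possible fairness summary, not something we have agreed on:

from collections import defaultdict

def per_node_class_throughput(delivered_packets, sim_time_s):
    # (node, class) -> delivered payload bits; field names are placeholders
    bits = defaultdict(int)
    for p in delivered_packets:
        bits[(p["src_node"], p["traffic_class"])] += p["payload_bytes"] * 8
    return {key: b / sim_time_s for key, b in bits.items()}   # bits/s

def jain_fairness_index(throughputs_bps):
    # 1.0 means perfectly even shares across the (node, class) flows
    xs = list(throughputs_bps.values())
    return sum(xs) ** 2 / (len(xs) * sum(x * x for x in xs))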
I know that this is a rough strawman, and most of the items need to be defined in more detail. But we can use this as a starting point to trigger some thoughts and discussion on the reflector. If you have any thoughts on this, or just any thoughts on your mind (well ... related to this!), please speak up. Let's start discussing this on the reflector in the coming couple of weeks so that we can make good progress in the meeting, instead of starting the discussion then.

- Several of you indicated availability of traffic patterns that were taken off the Internet or other networking environments and that would be useful for us to use in the simulations. In my mind, we need things like packet size distributions, interarrival times, burstiness, application and protocol distributions, and any other relevant characteristics of network traffic that we may want to consider in the simulations (a rough example of what I mean is sketched below). If you have information that would help with this, please let us know on the reflector; we would appreciate you making it available and discussing it during the next meeting.
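As an example of the kind of traffic descriptor I have in mind, here is a strictly placeholder sketch; the size mix and mean interarrival time below are invented numbers, and the real distributions should come from the measured traces mentioned above:

import random

# Invented size mix (bytes, weight) -- to be replaced by measured data.
PACKET_SIZE_MIX = [(64, 0.55), (576, 0.20), (1518, 0.25)]

def next_packet(mean_interarrival_s=0.001):
    size = random.choices([s for s, _ in PACKET_SIZE_MIX],
                          weights=[w for _, w in PACKET_SIZE_MIX])[0]
    # Exponential gaps give Poisson arrivals; capturing burstiness would
    # need something richer (e.g. ON/OFF or self-similar sources).
    gap_s = random.expovariate(1.0 / mean_interarrival_s)
    return size, gap_s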
Any other suggestions are most
welcome.
Khaled Amer
President, AmerNet
Architecture Analysis and Performance Modeling Specialists
13711 Solitaire Way, Irvine, CA 92620
Phone: (949)552-1114
Fax: (949)552-1116
e-mail: khaledamer@xxxxxxx
**********************************************************************
P. Taylor Salman                 Phone: 202-364-4700 x2297
Manager, Market Research         Mobile: 202-427-3319
OPNET Technologies, Inc.         E-mail: tsalman@xxxxxxxxxxxxx
International Drive, NW          Web: www.opnet.com
Washington, DC 20008
**********************************************************************

--
Harry Peng
------------------------------------------------------------------
Dept: 1E11
Email: hpeng@xxxxxxxxxxxxxxxxxx
ESN: 39-52277
Phone: 613-765-2277
Fax: 613-768-4904
Web: http://skywww/~hpeng/
-------------------------------------------------------------------