Commit graph

5 commits

Author SHA1 Message Date
Erik Boasson
d700657cb7 ddsperf latency should include median, 90% and 99%
Signed-off-by: Erik Boasson <eb@ilities.com>
2019-05-02 20:53:20 +08:00
Erik Boasson
6011422566 ddsperf: fix calculation of data rate in Mb/s
Multiplying time-in-ns since previous output line by 1e9 instead of
dividing it by 1e9 resulted in bit rate showing up as 0Mb/s.
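
As a minimal sketch of the corrected arithmetic (invented helper names, not
the actual ddsperf code): the nanosecond delta has to be divided by 1e9 to
obtain seconds before the byte count is turned into a bit rate.

    /* Hypothetical sketch, not the ddsperf implementation: converting a
       byte count and a nanosecond interval into a rate in Mb/s.  The bug
       was multiplying the delta by 1e9 instead of dividing, which makes
       the divisor ~1e18 too large and rounds the reported rate to 0. */
    #include <stdint.h>
    #include <stdio.h>

    static double rate_mbps (uint64_t nbytes, int64_t delta_ns)
    {
      const double delta_s = delta_ns / 1e9;   /* ns -> s: divide (the fix) */
      /* buggy version: delta_ns * 1e9 */
      return (8.0 * (double) nbytes) / delta_s / 1e6;  /* bits/s -> Mb/s */
    }

    int main (void)
    {
      /* 125 MB transferred in one second is 1e9 bits/s, i.e. 1000 Mb/s */
      printf ("%.2f Mb/s\n", rate_mbps (125000000, 1000000000));
      return 0;
    }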

Signed-off-by: Erik Boasson <eb@ilities.com>
2019-05-02 20:53:20 +08:00
Jeroen Koekkoek
c9d827e420 Fix warnings related to fixed type integers
Signed-off-by: Jeroen Koekkoek <jeroen@koekkoek.nl>
2019-04-29 19:22:11 +02:00
Erik Boasson
d2ebbbc880 address a handful of compiler warnings in ddsperf
These are fortunately all false positives.

Signed-off-by: Erik Boasson <eb@ilities.com>
2019-04-29 11:15:41 +02:00
Erik Boasson
06245d0d4a initial version of performance/network check tool
The current situation for performance measurements and checking network
behaviour is rather unsatisfactory, as the only tools available are
``pubsub`` and the ``roundtrip`` and ``throughput`` examples.  The first
can do many things thanks to its thousand-and-one options, but its
purpose really is to be able to read/write arbitrary data with arbitrary
QoS -- though the arbitrary data bit was lost in the hacked conversion
from the original code.  The latter two have a terrible user interface,
don't perform any verification that the measurement was successful, and
don't provide the results in a convenient form.

Furthermore, the abuse of the two examples as the primary means for
measuring performance has resulted in a reduction of their value as an
example, e.g., they can do waitset- or listener-based reading (and the
throughput one also polling-based), but that kind of complication does
not help a new user understand what is going on, especially not given
that these features were simply hacked in.

Hence the need for a new tool, one that integrates the common
measurements and can be used to verify that the results make sense.  It
is not quite done yet, in particular it is lacking in a number of
aspects:

* no measurement of CPU- and network load, memory usage and context
  switches yet;
* very limited statistics (min/max/average, if you're lucky; no
  interesting things such as jitter on a throughput test yet);
* it can't yet gather the data from all participants in the network
  using DDS;
* it doesn't output the data in a convenient file format yet;
* it doesn't allow specifying boundaries within which the results
  must fall for the run to be successful.

What it does verify is that all the endpoint matches that should exist
given the discovered participants do in fact come into existence,
reporting an error (and exiting with status code 1) if they don't, as
well as checking the number of participants.  With the way the DDSI
protocol works, this is a pretty decent network connectivity check.
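
The shape of such a check, as a rough sketch only: the function and counts
below are invented for illustration and are not the ddsperf implementation;
the point is merely comparing the matches actually seen against what the
discovered participants imply, and failing with exit status 1 on a mismatch.

    /* Illustrative sketch only (invented names, not the ddsperf code):
       given the number of remote participants discovered and the number
       of endpoint matches actually observed, report an error and exit
       with status 1 if the expected matches never materialise. */
    #include <stdio.h>
    #include <stdlib.h>

    static void check_matches (int discovered_participants,
                               int matches_per_participant,
                               int actual_matches)
    {
      const int expected = discovered_participants * matches_per_participant;
      if (actual_matches != expected)
      {
        fprintf (stderr, "match check failed: expected %d, got %d\n",
                 expected, actual_matches);
        exit (1);
      }
    }

    int main (void)
    {
      check_matches (3, 2, 6);   /* all expected matches present: ok */
      check_matches (3, 2, 5);   /* prints an error, exits with status 1 */
      return 0;
    }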

The raw measurements needed for the desired statistics (apart from
system-level measurements) are pretty much made, so the main thing that
still needs to be done is to exploit them and output them.  It can
already replace the examples for most benchmarks (only the 50%/90%/99%
percentiles are still missing for a complete replacement).
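
A minimal sketch of that still-missing percentile step, assuming the raw
per-sample latencies are kept in an array (names invented, not the ddsperf
code): sort the samples and read off nearest-rank median/90%/99% values.

    /* Hypothetical sketch of computing median/90%/99% latency percentiles
       from an array of raw samples; not the ddsperf implementation. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_u64 (const void *a, const void *b)
    {
      const uint64_t x = *(const uint64_t *) a, y = *(const uint64_t *) b;
      return (x > y) - (x < y);
    }

    /* nearest-rank percentile; assumes lat[] is sorted ascending, n > 0 */
    static uint64_t percentile (const uint64_t *lat, size_t n, double p)
    {
      const size_t idx = (size_t) (p * (double) (n - 1) + 0.5);
      return lat[idx];
    }

    int main (void)
    {
      uint64_t lat[] = { 120, 95, 240, 110, 98, 101, 5000, 97, 99, 130 };
      const size_t n = sizeof (lat) / sizeof (lat[0]);
      qsort (lat, n, sizeof (lat[0]), cmp_u64);
      printf ("median %llu ns, 90%% %llu ns, 99%% %llu ns\n",
              (unsigned long long) percentile (lat, n, 0.50),
              (unsigned long long) percentile (lat, n, 0.90),
              (unsigned long long) percentile (lat, n, 0.99));
      return 0;
    }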

Signed-off-by: Erik Boasson <eb@ilities.com>
2019-04-24 14:09:30 +02:00