Deleting a writer causes unregisters (and possibly disposes) in the rest
of the network, and these updates to the instances should trigger
DATA_AVAILABLE.
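For illustration, a minimal sketch of observing this at the API level
(assuming an IDL-generated type Foo with descriptor Foo_desc; error
handling omitted): deleting the writer should now raise DATA_AVAILABLE
again, with the instance no longer in the ALIVE state.

    #include "dds/dds.h"
    #include <stdio.h>

    static void on_data_available (dds_entity_t rd, void *arg)
    {
      void *raw = NULL;
      dds_sample_info_t si;
      (void) arg;
      /* also takes the invalid sample that carries the unregister/dispose */
      if (dds_take (rd, &raw, &si, 1, 1) == 1)
      {
        if (si.instance_state != DDS_IST_ALIVE)
          printf ("instance unregistered/disposed by writer deletion\n");
        dds_return_loan (rd, &raw, 1);
      }
    }

    int main (void)
    {
      dds_entity_t pp = dds_create_participant (DDS_DOMAIN_DEFAULT, NULL, NULL);
      dds_entity_t tp = dds_create_topic (pp, &Foo_desc, "Foo", NULL, NULL);
      dds_listener_t *l = dds_create_listener (NULL);
      dds_lset_data_available (l, on_data_available);
      dds_entity_t rd = dds_create_reader (pp, tp, NULL, l);
      dds_entity_t wr = dds_create_writer (pp, tp, NULL, NULL);
      Foo sample = { 0 };
      (void) rd;
      dds_write (wr, &sample);
      dds_sleepfor (DDS_MSECS (100));
      dds_delete (wr);                /* unregisters the instance ...          */
      dds_sleepfor (DDS_MSECS (100)); /* ... and DATA_AVAILABLE fires again    */
      dds_delete (pp);
      dds_delete_listener (l);
      return 0;
    }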
Signed-off-by: Erik Boasson <eb@ilities.com>
When data arrives before a heartbeat has been received, it is impossible
to know whether this is a new "live" sample or a retransmit, and for
this reason the requesting of historical data is delayed until a
heartbeat arrives that informs the readers of the range of sequence
numbers to request as historical data.
However, by this time, and without this new condition in place, the
reader may have already received some data directly, and may
consequently request some data twice. That's not right.
Requiring a heartbeat to have been received before delivering the data
avoids this problem, but potentially delays receiving data after a new
writer/reader pair has been matched. The delay caused by a full
handshake at that point seems less bad than the odd case of stuttering
where that isn't expected. There are almost certainly some tricks
possible to avoid that delay in the common cases, but there are more
important things to do ...
Best-effort readers on a reliable proxy writer are a bit special: if
there are only best-effort readers, there is no guarantee that a
heartbeat will be received, and so the condition does not apply. This
commit attempts to deal with that by only requiring a heartbeat if some
reliable readers exist, but that doesn't allow a smooth transition from
"only best-effort readers" to "some reliable readers".
One could moreover argue that this condition should not be imposed on
volatile readers (at worst you get a little bit of data from before the
match), but equally well that it should (there's no guarantee that no
sample would be skipped in the case of a keep-all writer, if the first
sample happened to be a retransmit).
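A minimal sketch of the resulting acceptance rule (names and structure
hypothetical, not the actual ddsi code): samples from a reliable proxy
writer are held back until a heartbeat has been seen, unless only
best-effort readers are matched.

    #include <stdbool.h>
    #include <stdint.h>

    struct proxy_writer_sketch {
      bool have_seen_heartbeat;     /* set when the first HEARTBEAT arrives */
      uint32_t n_reliable_readers;  /* number of matched local reliable readers */
    };

    /* May a freshly received (non-historical) sample be delivered now, or
       must delivery wait until a heartbeat has established which sequence
       numbers are to be requested as historical data? */
    static bool may_deliver_now (const struct proxy_writer_sketch *pwr)
    {
      if (pwr->n_reliable_readers == 0)
        return true;  /* only best-effort readers: a heartbeat may never come */
      return pwr->have_seen_heartbeat;
    }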
Signed-off-by: Erik Boasson <eb@ilities.com>
For compatibility with TwinOaks CoreDX, ignore an all-zero durability
service QoS received over SEDP for volatile and transient-local
endpoints.
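Schematically (simplified, hypothetical types; not the actual SEDP
code), the workaround treats such a QoS as if it were absent:

    #include <stdbool.h>
    #include <stdint.h>

    enum durability_kind { D_VOLATILE, D_TRANSIENT_LOCAL, D_TRANSIENT, D_PERSISTENT };

    struct durability_service_sketch {   /* simplified */
      int64_t service_cleanup_delay;
      int32_t history_kind, history_depth;
      int32_t max_samples, max_instances, max_samples_per_instance;
    };

    /* An all-zero durability service QoS (as sent by TwinOaks CoreDX) is
       never valid -- the history depth would have to be at least 1 -- so on
       volatile and transient-local endpoints, where it is irrelevant anyway,
       it is simply ignored instead of being rejected. */
    static bool ignore_durability_service (enum durability_kind dk,
                                           const struct durability_service_sketch *ds)
    {
      return (dk == D_VOLATILE || dk == D_TRANSIENT_LOCAL) &&
        ds->service_cleanup_delay == 0 && ds->history_kind == 0 &&
        ds->history_depth == 0 && ds->max_samples == 0 &&
        ds->max_instances == 0 && ds->max_samples_per_instance == 0;
    }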
Signed-off-by: Erik Boasson <eb@ilities.com>
The introduction of multiple receive threads could trigger the assertion
because a set of samples ready for delivery may have been received by
multiple threads (the problem manifests itself most easily with
fragmented samples). This is actually a non-issue:
* while synchronously processing a packet, there is a bias of 2**31
added to the refcount, to prevent any thread under any circumstance
from ever freeing the data;
* while data lives in the defragment buffers or reorder buffer of the
proxy writer, a bias of 2**20 is added to it until this particular
function is called, after delivery of the data to the readers, and
(if needed) after inserting the samples in the reorder buffer of
any readers that are out-of-sync with the proxy writer;
* the relevant refcount is updated atomically in such a manner that this
particular operation atomically removes the bias and performs the
delayed increment of the refcount to account for the data being stored
in any of the defragmenting or reorder buffers;
* the only ordinary decrementing of the refcount happens either
synchronously (if synchronous delivery is chosen), or asynchronously
in a delivery queue thread, and so the entire mechanism exists to
avoid premature freeing of the underlying data because the data is
delivered very quickly (possibly synchronously);
* as the biases are removed after all the delayed refcount increments
are taken into account and there are no increments following the call
to rmbias_and_adjust, the "ordinary" decrements can do no harm.
* the case of data from multiple writers being combined in a single
packet is dealt with by the 2**20 bias, and so there is potentially a
problem if there are more than 2**20 out-of-sync readers attached to
a single proxy writer, or data submessages from more than 2**11
writers in a single packet. The minimum possible data message is 32
bytes (headers, encoding, data, padding), so packets up to 64kB are
safe.
None of this is in any way related to which threads originally accepted
the packets, and therefore I see no argument for the existence of the
assertion.
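A minimal sketch of the biased refcount update described above (using
C11 atomics purely for illustration; the actual code uses Cyclone's own
atomics and data structures):

    #include <stdatomic.h>
    #include <stdint.h>

    #define SYNC_BIAS   (UINT32_C (1) << 31)  /* held while a packet is being processed        */
    #define BUFFER_BIAS (UINT32_C (1) << 20)  /* held while data sits in defrag/reorder buffers */

    /* Remove the buffer bias and, in the same atomic update, add the number
       of references actually retained in the defragmenting and reordering
       buffers, so the count can never transiently drop to zero even though
       "ordinary" decrements may already be happening concurrently. */
    static void rmbias_and_adjust_sketch (atomic_uint_fast32_t *refc, uint32_t refs_retained)
    {
      /* requires refs_retained < 2**20, cf. the limits discussed above */
      atomic_fetch_sub_explicit (refc, BUFFER_BIAS - refs_retained,
                                 memory_order_acq_rel);
    }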
That said, it is a rather complicated mechanism of unknown benefit, and
a major simplification is definitely something to be considered. In UDP
mode I see no chance of abuse, but there may be network protocols (TCP,
for sure) where there might be packets larger than 64kB and those could,
under worst-case assumptions, cause trouble. That, too, is a reason to
rethink it.
rmbias_and_adjust was sometimes called with the proxy writer locked and
sometimes after unlocking it. This commit changes the code to call it
consistently with the lock held.
Signed-off-by: Erik Boasson <eb@ilities.com>
Sizing/ReceiveBufferSize must be >= Sizing/ReceiveBufferChunkSize + N
for some small N, and if it is not, Cyclone will crash reading beyond
allocated memory in a nasty way. Ordinarily this should be handled by
the configuration validation, but that would put the burden of knowing
the details of computing N upon the user, an unreasonable requirement.
The old approach of an assertion presupposes such a check, and so brings
back that same requirement.
Thus, this change ensures that the effective ReceiveBufferSize is never
smaller than the minimum value determined by ChunkSize and whatever N
happens to be, regardless of the configured value.
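A sketch of the resulting clamping (field and parameter names here are
hypothetical; the actual configuration processing differs in detail):

    #include <stdint.h>

    struct config_sketch {
      uint32_t rbuf_size;         /* Sizing/ReceiveBufferSize      */
      uint32_t rmsg_chunk_size;   /* Sizing/ReceiveBufferChunkSize */
    };

    /* overhead_N is the small, implementation-defined amount by which a
       receive buffer must exceed its chunk size */
    static void clamp_rbuf_size (struct config_sketch *cfg, uint32_t overhead_N)
    {
      const uint32_t min_size = cfg->rmsg_chunk_size + overhead_N;
      if (cfg->rbuf_size < min_size)
        cfg->rbuf_size = min_size;  /* silently raise to the required minimum */
    }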
Signed-off-by: Erik Boasson <eb@ilities.com>
During shutdown, the optional "debmon" thread for getting some
information about the internal state of the DDSI stack had a tendency to
run into errors from calling write on a connection that had already been
closed immediately after successfully connecting to wake the thread.
Instead of blindly writing into the connection, it now checks whether it
is supposed to shut down before doing anything, avoiding this particular
problem.
Signed-off-by: Erik Boasson <eb@ilities.com>
The rewrite of the abstraction layer changed some details in the thread
ids used in tracing and in the functions to get those ids, with the
result that create_thread always logged the parent thread's id rather
than the newly created thread's id. As all supported platforms use
thread names in the trace, it is a rather insignificant matter, and so
this provides the trivial fix by letting the new thread log the message.
Signed-off-by: Erik Boasson <eb@ilities.com>
The rtps_init function used to initialize all data structures and start
all threads used by the protocol stack, allowing discovery of remote
entities before the built-in topic data structures were initialized.
(Very) early discovery of a remote participant thus led to a crash.
This commit splits the initialisation, providing a separate function for
starting, in particular, the threads receiving data from the network.
In terms of threads created, it matches exactly the rtps_stop /
rtps_fini split that already existed to address the exact same problem
on termination.
Signed-off-by: Erik Boasson <eb@ilities.com>
The primary reason is that this allows the implementator of the sertopic
to freely select an allocation strategy, instead of being forced to
allocate the sertopic itself and the names it contains in the common
header with ddsrt_malloc. The secondary reason is that it brings it in
line with the serdata.
Signed-off-by: Erik Boasson <eb@ilities.com>
The name parameter and the name in the sertopic parameter had to match
because it used the one as a key in a lookup checking whether the topic
exists already, and the other as key for the nodes in that index. As
the name is (currently) included in the sertopic, it shouldn't be passed
in separately as well.
Signed-off-by: Erik Boasson <eb@ilities.com>
The thread state management used by the GC and the liveliness monitoring
lazily creates entries for application threads that call (certain)
Cyclone API functions. It arranges for the entry allocated to such a
thread to be cleared once the thread terminates.
This means that if such a thread still exists once the last participant
is deleted (and Cyclone deinitializes), the corresponding thread entry
still exists as well. The assertion that all (known) threads must have
stopped on shutting down Cyclone is therefore incorrect.
This commit introduces a special state for thread entries that were
created lazily. It does monitor that they do not stay in the "awake"
state for too long, but otherwise doesn't care much about them.
Signed-off-by: Erik Boasson <eb@ilities.com>
The Cyclone DDS configuration is in principle an XML document, but it is
possible to write configuration fragments directly in the CYCLONEDDS_URI
environment variable. In that case, it is quite annoying to have to
enter the full closing tags all the time, and so it now allows closing
elements with a simple </> when not reading them from a file.
While it would be trivial to also allow this when reading the
configuration from a file, it seems that promulgating invalid XML would
be bad form ... and besides, in that case editors can help keep
everything in order.
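For example (using Tracing/Verbosity merely as an example setting), the
following two are equivalent, the second form being accepted only
because the fragment comes from the environment variable rather than a
file:

    export CYCLONEDDS_URI='<CycloneDDS><Tracing><Verbosity>finest</Verbosity></Tracing></CycloneDDS>'
    export CYCLONEDDS_URI='<CycloneDDS><Tracing><Verbosity>finest</></></>'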
Signed-off-by: Erik Boasson <eb@ilities.com>
The XML parser has two modes: it can parse a file or parse a
caller-owned string. In the former case, it owns its buffer and shifts
the contents to make room for more data read in from the file. This
shifting may not happen when parsing a string.
Signed-off-by: Erik Boasson <eb@ilities.com>
The entirely historical "DDSI2E" element within the CycloneDDS
configuration element is herewith eliminated. All settings contained in
that element (such as General, Discovery, Tracing) are now subelements
of the CycloneDDS top-level element. Old configurations continue to
work but will print a deprecation warning:
//CycloneDDS/DDSI2E: settings moved to //CycloneDDS
Any warnings/errors related to an element //CycloneDDS/DDSI2E/x will be
reported as errors for the new location, that is, for //CycloneDDS/x.
As the "settings moved" warning always precedes any other such warning,
confusion will hopefully be avoided.
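For example (using General/AllowMulticast purely for illustration), the
deprecated form and its replacement:

    <!-- deprecated, still accepted with a "settings moved" warning -->
    <CycloneDDS>
      <DDSI2E>
        <General>
          <AllowMulticast>spdp</AllowMulticast>
        </General>
      </DDSI2E>
    </CycloneDDS>

    <!-- new form: the settings sit directly under the CycloneDDS element -->
    <CycloneDDS>
      <General>
        <AllowMulticast>spdp</AllowMulticast>
      </General>
    </CycloneDDS>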
Signed-off-by: Erik Boasson <eb@ilities.com>
The Internal/SuppressSPDPMulticast setting was one of several ways to
prevent the sending of participant discovery multicast messages while
still allowing multicast to be used for data communications. That
functionality has long since been subsumed by the AllowMulticast setting
(AllowMulticast = spdp,asm & Internal/SuppressSPDPMulticast is
equivalent to AllowMulticast = asm).
Signed-off-by: Erik Boasson <eb@ilities.com>
These settings all date from the long history of the DDSI stack
predating Eclipse Cyclone DDS and can reasonably be presumed never to
have been used in Cyclone. Their removal is therefore not expected to
break backwards compatibility (which would anyway be limited to Cyclone
complaining about undefined settings at startup):
* Tracing/Timestamps[@absolute]: has always been ignored
* Tracing/Timestamps: has always been ignored
* General/EnableLoopback: ignored for quite some time, before that
changing it from the default resulted in crashes.
* General/StartupModeDuration: it did what it advertised (retain data in
the history caches of volatile writers as-if they were transient-local
with a durability history setting of keep-last 1 for the first few
seconds after startup of the DDSI stack) but had no purpose other than
complicating things as the volatile readers ignored the data anyway.
* General/StartupModeCoversTransient: see previous -- and besides,
transient data is not supported yet in Cyclone.
* Compatibility/RespondToRtiInitZeroAckWithInvalidHeartbeat: arguably a
good setting given that DDSI < 2.3 explicitly requires that all
HEARTBEAT messages sent by a writer advertise the existence of at least
1 sample, but this has been fixed in DDSI 2.3. As this requirement was
never respected by most DDSI implementations, there is no point in
retaining the setting, and removing it also does away with a rather
tricky problem immediately after writer startup: the conjuring up of a
sample that is annihilated immediately before it could have been
observed.
That conjuring up (as it turns out) can cause a malformed message to go
out (one that is harmless in itself). Fixing the generation of that
malformed message while the entire point of the trick is moot in DDSI
2.3 is a bit silly.
Note that full DDSI 2.3 compliance needs a bit more work, so not
bumping the DDSI protocol version number yet.
* Compatibility/AckNackNumbitsEmptySet: changing it from 0 breaks
compatibility with (at least) RTI Connext, and its reason for
existence disappears with a fix in DDSI 2.3.
* Internal/AggressiveKeepLastWhc: changing the setting from the default
made no sense whatsoever in Cyclone -- it would only add flow-control
and potentially block a keep-last writer where the spec forbids that.
* Internal/LegacyFragmentation: a left-over from almost a decade ago when
it was discovered that the specification was inconsistent in the use
of the message header flags for fragmented data, and this stack for a
while used the less common interpretation. There is no reasonable way of
making the two modes compatible, and this setting merely existed to
deal with the compatibility issue with some ancient OpenSplice DDS
version.
* Durability/Encoding: historical junk.
* WatchDog and Lease: never had any function in Cyclone.
Signed-off-by: Erik Boasson <eb@ilities.com>
High sample rates require rather high rates of allocating and freeing
WHC nodes, serialised samples (serdata), and RTPS message fragments
(xmsg). A bunch of dedicated parallel allocators help take some
pressure off the regular malloc/free calls. However, these used to
gobble up memory like crazy, in part because of rather generous limits,
and in part because there was no restriction on the size of the samples
that would be cached, and it could end up caching large numbers of
multi-MB samples. It should be noted that there is no benefit to
caching large samples anyway, because the sample rate will be that much
lower.
This commit reduces the maximum number of entries for all three cases;
it furthermore limits the maximum size of a serdata or xmsg that can be
cached; and finally, instead of instantiating a separate allocator for
WHC nodes per WHC, it now shares one across all WHCs. Total memory use
should now be limited to a couple of MB.
The caching can be disabled by setting ``FREELIST_TYPE`` to
``FREELIST_NONE`` in ``q_freelist.h``.
Signed-off-by: Erik Boasson <eb@ilities.com>
Multiplying time-in-ns since previous output line by 1e9 instead of
dividing it by 1e9 resulted in bit rate showing up as 0Mb/s.
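In other words (sketch of the corrected computation, names
illustrative):

    #include <stdint.h>

    /* bytes transferred and elapsed time in nanoseconds since the previous line */
    static double rate_in_Mbps (uint64_t bytes, int64_t delta_ns)
    {
      const double delta_s = (double) delta_ns / 1e9;  /* divide, don't multiply */
      return (8.0 * (double) bytes / delta_s) / 1e6;
    }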
Signed-off-by: Erik Boasson <eb@ilities.com>
nn_bitset_one sets the specified number of bits by first memset'ing the
words covered, then clearing the excess bits in the final partial word.
It mishandled the case where the number of bits is a multiple of 32,
clearing the entire word following the last one it was to touch.
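An illustrative reimplementation of the corrected behaviour (not the
actual nn_bitset_one code): when the number of bits is an exact multiple
of 32 there is no partial word, and in particular the word following the
last one covered must not be touched.

    #include <stdint.h>
    #include <string.h>

    /* Set the first numbits bits in an array of 32-bit words (bit 0 being
       the most significant bit of the first word, as in DDSI bitmaps). */
    static void bitset_one_sketch (uint32_t numbits, uint32_t *bits)
    {
      memset (bits, 0xff, 4 * ((numbits + 31) / 32));
      /* clear the excess bits of the final, partially covered word; when
         numbits is a multiple of 32 there is no such word and nothing to
         clear -- the old code would here clear bits[numbits / 32], i.e.,
         the word just past the area it was supposed to touch */
      if ((numbits % 32) != 0)
        bits[numbits / 32] &= ~(~UINT32_C (0) >> (numbits % 32));
    }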
Signed-off-by: Erik Boasson <eb@ilities.com>
Generally one doesn't need to include any internal header files in an
application, but the (unstable) interface for application-defined sample
representation and serialization does require including some. It turns
out a keyword clash had to be resolved (typename => type_name) and that
a whole bunch of them were missing the #ifdef __cplusplus / extern "C"
bit.
It further turned out that one had to pull in nearly all of the type
definitions, including some typedefs that are illegal in C++, e.g.,
  typedef struct os_sockWaitset *os_sockWaitset;
C++ is right to forbid this, but Cyclone's header files were wrong to
force inclusion of so much irrelevant stuff. This commit leaves these
typedefs in place, but eliminates a few header file inclusions to avoid
the problem.
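For reference, the guard that was missing from a number of these headers
follows the usual pattern (illustrative header, not one of the actual
files):

    /* example.h -- sketch of the C/C++ inclusion guard added to the headers */
    #ifndef EXAMPLE_H
    #define EXAMPLE_H

    #if defined (__cplusplus)
    extern "C" {
    #endif

    /* note the renamed parameter: "typename" is a keyword in C++ */
    int example_function (const char *type_name);

    #if defined (__cplusplus)
    }
    #endif

    #endif /* EXAMPLE_H */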
Signed-off-by: Erik Boasson <eb@ilities.com>
The current situation for performance measurements and checking network
behaviour is rather unsatisfactory, as the only tools available are
``pubsub`` and the ``roundtrip`` and ``throughput`` examples. The first
can do many things thanks to its thousand-and-one options, but its
purpose really is to be able to read/write arbitrary data with arbitrary
QoS -- though the arbitrary data bit was lost in the hacked conversion
from the original code. The latter two have a terrible user interface,
don't perform any verification that the measurement was successful and
do not provide the results in a convenient form.
Furthermore, the abuse of the two examples as the primary means for
measuring performance has resulted in a reduction of their value as an
example, e.g., they can do waitset- or listener-based reading (and the
throughput one also polling-based), but that kind of complication does
not help a new user understand what is going on. Especially not given
that these features were simply hacked in.
Hence the need for a new tool, one that integrates the common
measurements and can be used to verify that the results make sense. It
is not quite done yet; in particular, it is lacking in a number of
aspects:
* no measurement of CPU- and network load, memory usage and context
switches yet;
* very limited statistics (min/max/average, if you're lucky; no
interesting things such as jitter on a throughput test yet);
* it can't yet gather the data from all participants in the network
using DDS;
* it doesn't output the data in a convenient file format yet;
* it doesn't allow specifying boundaries within which the results
must fall for the run to be successful.
What it does verify is that all the endpoint matches that should exist
given the discovered participants do in fact come into existence,
reporting an error (and exiting with an exit status of 1) if they
don't, as well as checking the number of participants. With the way the
DDSI protocol works, this is a pretty decent network connectivity check.
The raw measurements needed for the desired statistics (apart from
system-level measurements) are pretty much made, so the main thing that
still needs to be done is to exploit them and output them. It can already
replace the examples for most benchmarks (only the 50%/90%/99%
percentiles are still missing for a complete replacement).
Signed-off-by: Erik Boasson <eb@ilities.com>
In most of the places where the status flags were reset, this happened
without holding the m_observer_lock that protects them. Most of these
statuses are only ever set/reset while also holding the entity lock, but
this is not true for all of them (DATA_AVAILABLE, for example), and so
there are cases where retrieving the status could cause the raising of a
status (at least DATA_AVAILABLE) to be lost.
The problem was introduced in ba46cb1140.
Signed-off-by: Erik Boasson <eb@ilities.com>
The DATA_AVAILABLE status was reset by read and take while holding the
upper-layer reader lock, but after completing the read/take operation on
the RHC. As data can be written into the RHC without holding the
upper-layer reader lock, new data could arrive in between the
reading/taking and the resetting of the DATA_AVAILABLE status, leading
to a missed detection. Resetting DATA_AVAILABLE prior to accessing the
RHC solves this.
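Schematically (with stand-in helpers, purely to make the ordering
explicit):

    #include <stdio.h>

    /* hypothetical stand-ins for the real reader internals */
    static void reset_data_available (void) { printf ("reset DATA_AVAILABLE\n"); }
    static void access_rhc (void)           { printf ("read/take from the RHC\n"); }

    static void read_or_take_sketch (void)
    {
      /* reset *before* touching the RHC: a sample arriving after this point
         raises DATA_AVAILABLE again instead of having its notification wiped
         out by a reset performed after the RHC access */
      reset_data_available ();
      access_rhc ();
    }

    int main (void) { read_or_take_sketch (); return 0; }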
Signed-off-by: Erik Boasson <eb@ilities.com>
Changes the semantics of dds_qget_{user,group,topic}data to always
append a 0 byte to any non-empty value without counting it in the size.
(An empty value is always represented by a null pointer and a size of
0). The advantage is that any code treating the data as the octet
sequence it formally is will do exactly the same, but any code written
with the knowledge that it should be a string can safely interpret it as
one.
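For example (sketch; error handling omitted), code that knows the user
data is textual can now safely treat the value as a C string:

    #include "dds/dds.h"
    #include <stdio.h>
    #include <string.h>

    int main (void)
    {
      dds_qos_t *qos = dds_create_qos ();
      const char *ud = "hello";
      /* the size passed to the setter does not include a terminating 0 ... */
      dds_qset_userdata (qos, ud, strlen (ud));

      void *value;
      size_t sz;
      if (dds_qget_userdata (qos, &value, &sz) && sz > 0)
      {
        /* ... but the value returned by the getter is followed by a 0 byte
           that is not counted in sz, so printing it as a string is safe */
        printf ("user data (%zu bytes): %s\n", sz, (const char *) value);
        dds_free (value);
      }
      dds_delete_qos (qos);
      return 0;
    }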
Signed-off-by: Erik Boasson <eb@ilities.com>
As was the plan with the introduction of ddsrt: this includes renaming
the identifiers to match the capitalization style and removing old junk.
Signed-off-by: Erik Boasson <eb@ilities.com>
Extend the endpoint built-in topic data with the participant instance
handle (the GUID was already present). Having the instance handle
available makes it trivial to look up the participant, whereas a lookup
of the GUID is rather impractical.
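For example (a sketch assuming the handle is exposed as a
participant_instance_handle field of dds_builtintopic_endpoint_t; error
handling omitted), the participant for a discovered subscription can be
fetched with a single dds_read_instance call:

    #include "dds/dds.h"
    #include <stdio.h>

    int main (void)
    {
      dds_entity_t pp = dds_create_participant (DDS_DOMAIN_DEFAULT, NULL, NULL);
      dds_entity_t rd_sub = dds_create_reader (pp, DDS_BUILTIN_TOPIC_DCPSSUBSCRIPTION, NULL, NULL);
      dds_entity_t rd_par = dds_create_reader (pp, DDS_BUILTIN_TOPIC_DCPSPARTICIPANT, NULL, NULL);
      void *raw = NULL;
      dds_sample_info_t si;

      if (dds_take (rd_sub, &raw, &si, 1, 1) == 1 && si.valid_data)
      {
        const dds_builtintopic_endpoint_t *ep = raw;
        void *praw = NULL;
        dds_sample_info_t psi;
        /* the instance handle from the endpoint sample directly identifies
           the participant instance in the DCPSParticipant reader */
        if (dds_read_instance (rd_par, &praw, &psi, 1, 1, ep->participant_instance_handle) == 1)
        {
          printf ("found the participant of the subscription to %s\n", ep->topic_name);
          dds_return_loan (rd_par, &praw, 1);
        }
        dds_return_loan (rd_sub, &raw, 1);
      }
      dds_delete (pp);
      return 0;
    }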
Signed-off-by: Erik Boasson <eb@ilities.com>
The built-in topics for readers and writers should be published before a
subscription or publication matched listener is invoked, otherwise the
instance handle provided to the listener is not yet available in a
reader for the corresponding topic.
Signed-off-by: Erik Boasson <eb@ilities.com>
Adds a new "ignorelocal" QoS to the readers/writers to ignore local
matching readers/writers, with three settings:
* DDS_IGNORELOCAL_NONE: default
* DDS_IGNORELOCAL_PARTICIPANT: ignores readers/writers in the same
participant
* DDS_IGNORELOCAL_PROCESS: ignores readers/writers in the same process
These can be set/got using dds_qset_ignorelocal and
dds_qget_ignorelocal.
If a matching reader or writer is ignored because of this setting, it is
as-if that reader or writer doesn't exist. No traffic will be generated
or data retained on its behalf.
There are no consequences for interoperability as this is (by
definition) a local affair.
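For example (sketch, assuming an IDL-generated type Foo with descriptor
Foo_desc), a reader that never sees data from writers created in the
same participant:

    #include "dds/dds.h"

    int main (void)
    {
      dds_entity_t pp = dds_create_participant (DDS_DOMAIN_DEFAULT, NULL, NULL);
      dds_entity_t tp = dds_create_topic (pp, &Foo_desc, "Foo", NULL, NULL);

      dds_qos_t *qos = dds_create_qos ();
      /* writers in the same participant are treated as if they don't exist:
         no match, no traffic, no data retained on this reader's behalf */
      dds_qset_ignorelocal (qos, DDS_IGNORELOCAL_PARTICIPANT);
      dds_entity_t rd = dds_create_reader (pp, tp, qos, NULL);

      dds_ignorelocal_kind_t kind;
      dds_qget_ignorelocal (qos, &kind);  /* reads back DDS_IGNORELOCAL_PARTICIPANT */
      (void) rd;
      (void) kind;

      dds_delete_qos (qos);
      dds_delete (pp);  /* also deletes the topic and the reader */
      return 0;
    }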
Signed-off-by: Erik Boasson <eb@ilities.com>