Dropping a match from a data reader generated an unregister and
signalled a SUBSCRIPTION_MATCHED event even when another thread had
raced it and already done so. The consequence is possible undercounting
of the number of matched writers.
Signed-off-by: Erik Boasson <eb@ilities.com>
The only place the cached topic object was used for more than printing
the name (which is available via the QoS object) was in deserializing
the sample, but there is never a need to deserialize a sample if there
is no local reader. So
instead of caching the topic object in the proxy endpoint, take it from
a reader to which it is to be delivered and avoid having to keep things
in sync.
Signed-off-by: Erik Boasson <eb@ilities.com>
Submit to the tyranny of the majority:
The DCPS specification makes no distinction between topics with and
topics without key fields: in the latter case, there is simply a single
instance but that instance obeys all the normal rules. In particular,
this implies dispose/unregister work regardless of whether a topic has
key fields.
The DDSI specification makes a distinction between WITH_KEY endpoints
and NO_KEY endpoints and does not support dispose/unregister operations
on NO_KEY endpoints. This implies that a DCPS implementation must limit
itself to the use of WITH_KEY endpoints.
Most implementations nonetheless map topics without key fields to the
NO_KEY type, and as the DDSI specification also states that a WITH_KEY
reader/writer does not match a NO_KEY writer/reader, Cyclone's correctly
mapping everything to WITH_KEY means there are interoperability problems
for topics without key fields.
This commit changes Cyclone to use NO_KEY like the others, but without
changing any other part of its behaviour: it continues to support
dispose/unregister operations regardless of whether a topic has key
fields or not. That is the lesser of the two evils.
Signed-off-by: Erik Boasson <eb@ilities.com>
The pre-emptive ACKNACK messages from FastRTPS have a base sequence
number of 0, which is malformed and must be rejected according to DDSI
8.3.7.1 / 8.3.5.5.
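For reference, a minimal sketch of the check those sections imply
(illustrative only, not Cyclone's actual validation code): the
readerSNState.bitmapBase carried by an ACKNACK must be a valid sequence
number, i.e. at least 1, so a base of 0 makes the submessage malformed.

    #include <stdbool.h>
    #include <stdint.h>

    /* illustrative only: an ACKNACK whose base sequence number is 0 is malformed */
    static bool acknack_base_is_valid (int64_t bitmap_base)
    {
      return bitmap_base >= 1;
    }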
Signed-off-by: Erik Boasson <eb@ilities.com>
Change the structure of the configuration file (in a backwards
compatible manner) to allow specifying configurations for multiple
domains in a single file. (Listing multiple files in CYCLONEDDS_URI was
already supported.) A configuration specifies a domain id, with a
default of "any"; configurations for an incompatible id are ignored.
If the application specifies an id other than DDS_DOMAIN_DEFAULT in the
call to create_participant, then only configuration specifications for
Domain elements with that id or with id "any" will be used. If the
application does specify DDS_DOMAIN_DEFAULT, then the id will be taken
from the first Domain element that specifies an id. If none do, the
domain id defaults to 0. Each applicable domain specification is taken
as a separate source and may override settings made previously.
All settings moved from the top-level CycloneDDS element to the
CycloneDDS/Domain element. The CycloneDDS/Domain/Id element moved to
become the "id" attribute of CycloneDDS/Domain. The old locations still
work, with appropriate deprecation warnings.
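For illustration, a hypothetical configuration using the new structure
(the id value 17 and the comments are made up; the actual settings go
inside the Domain elements):

    <CycloneDDS>
      <Domain id="any">
        <!-- settings applied to every domain -->
      </Domain>
      <Domain id="17">
        <!-- settings applied only to domain 17, overriding the ones above -->
      </Domain>
    </CycloneDDS>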
Signed-off-by: Erik Boasson <eb@ilities.com>
The default participant QoS/plist that is used for defaulting received
QoS and for determining which QoS/plist entries to send in discovery
data was mixed up with the one that contains local process information
such as hostname and process id.
Moreover, it was modified after starting up the protocol stack, and
hence after discovery of remote participants. While unlikely, this
could lead to an assertion failure in plist_or_xqos_mergein_missing.
Signed-off-by: Erik Boasson <eb@ilities.com>
A QoS change can happen at the same time that a new reader for a
built-in topic is provisioned with historical data, and so cause
reading of an inconsistent QoS, a use-after-free or other fun things.
During QoS matching it is also necessary to guarantee that the QoS
doesn't change (QoS changes affecting matching will be supported at
some point), and manipulating complex data structures where bitmasks
determine which parts are defined, while concurrently reading the same
data, is a recipe for disaster.
Signed-off-by: Erik Boasson <eb@ilities.com>
Historical data can be processed after new data, effectively going backward
in time, so only process data that is newer than the current state.
Signed-off-by: Erik Boasson <eb@ilities.com>
* constness in ternary expressions
* removal of OS_MAX_INTEGER
* inclusion of dds/ddsrt/attributes.h everywhere DDS_EXPORT inline
occurs
* _POSIX_PTHREAD_SEMANTICS in ddsperf
Signed-off-by: Erik Boasson <eb@ilities.com>
The big issue is that there is still only a single log output that gets
opened on creating a domain and closed on deleting one, but otherwise at
least this minimal test works.
The other issue is that the GC waits until threads in all domains have
made sufficient progress, rather than just the threads in its own
domain.
Signed-off-by: Erik Boasson <eb@ilities.com>
This commit moves all but a handful of the global variables into the
domain object, in particular including the DDSI configuration, globals
and all transport internal state.
The goal of this commit is not to produce the nicest code possible, but
to get a working version that can support multiple simultaneous domains.
Various choices are driven by this desire and it is expected that some
of the changes will have to be undone. (E.g., passing the DDSI globals
into address set operations and locator printing because there is no
other way to figure out what transport to use for a given locator;
storing the transport pointer inside the locator would solve that.)
Signed-off-by: Erik Boasson <eb@ilities.com>
Thread liveliness monitoring moves to dds_global and there is one
monitor running if there is at least one domain that requests it. The
need to synchronize freeing the thread name when reaping the thread
state is gone, because the thread name is no longer dynamically
allocated.
Signed-off-by: Erik Boasson <eb@ilities.com>
This moves DDSI stack initialisation and finalisation to the creating
and deleting of a domain, and modifies the related code to trigger all
that from creating/deleting participants.
Built-in topic generation is partially domain-dependent, so that moves
as well. The underlying ddsi_sertopics are domain independent and can
be created without initialising DDSI, which necessitates moving the IID
generation (and thus its init/fini) out of the DDSI stack and into what
will remain global data.
Signed-off-by: Erik Boasson <eb@ilities.com>
This makes it possible to use different RHC implementations for
different readers and removes the need for the RHC interface to be part
of the global state.
Signed-off-by: Erik Boasson <eb@ilities.com>
Commit 3afce30c37 introduced an error in
the calculation of the size of these submessages, making them larger
than correct (and requiring them to be so) and putting the "count" field
at the wrong offset, breaking interoperability.
Signed-off-by: Erik Boasson <eb@ilities.com>
Caused by the changes in a652ecb78e; the
sample that matters is the first in what may now be a chain of samples,
which requires some overlooked adjustments.
Signed-off-by: Erik Boasson <eb@ilities.com>
* Move the project top-level CMakeLists.txt to the root of the project;
this allows building Cyclone as part of ROS2 without any special
tricks;
* Clean up the build options:
  ENABLE_SSL:    whether to check for and include OpenSSL support if a
                 library can be found (default = ON); this used to be
                 called DDSC_ENABLE_OPENSSL, the old name is deprecated
                 but still works
  BUILD_DOCS:    whether to build docs (default = OFF)
  BUILD_TESTING: whether to build tests (default = OFF)
* Collect all documentation into top-level "docs" directory;
* Move the examples to the top-level directory;
* Remove the unused and somewhat misleading pseudo-default
cyclonedds.xml;
* Remove unused cmake files
Signed-off-by: Erik Boasson <eb@ilities.com>
* use multicast only for participant discovery if using a WiFi network
* default to using unicast for retransmits
Signed-off-by: Erik Boasson <eb@ilities.com>
It is an excellent platform for catching bugs: big-endian, slow enough
that a context switch in the middle of an operation becomes a regular
occurrence, and all that on an SMP box. Or: I just wanted to see if it
would work.
Signed-off-by: Erik Boasson <eb@ilities.com>
Creating a reader/writer in a listener for a built-in topic (as ddsperf
does) recursively ends up in reader/writer matching, recursively
read-locking qoslock. As nobody ever takes a write-lock, it is a
non-issue provided the rwlock really is an rwlock. On Solaris 2.6 those
don't exist, and mapping it onto a mutex deadlocks.
This commit removes the lock in its entirety; the fact that it is
currently only ever locked for reading is a hint that perhaps it is not
that valuable a thing. The way it was used in the code would in any
case not have helped with re-matching on QoS changes (save for
duplicating all the matching code), and it is doubtful that serializing
the matching in that case would be necessary in the first place.
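A minimal sketch of the deadlock, under made-up names (not the actual
Cyclone code): with a true rwlock the nested read-lock simply succeeds,
but when the "rwlock" is really a mutex, the second lock attempt from
the same thread blocks forever.

    #include <pthread.h>

    static pthread_mutex_t qoslock = PTHREAD_MUTEX_INITIALIZER; /* stand-in for the missing rwlock */

    static void create_reader_from_listener (void)
    {
      pthread_mutex_lock (&qoslock);   /* nested "read lock": deadlocks on a plain mutex */
      /* ... match the new reader ... */
      pthread_mutex_unlock (&qoslock);
    }

    static void match_new_endpoint (void)
    {
      pthread_mutex_lock (&qoslock);   /* "read lock" taken during matching */
      create_reader_from_listener ();  /* listener on a built-in topic creates a reader */
      pthread_mutex_unlock (&qoslock);
    }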
Signed-off-by: Erik Boasson <eb@ilities.com>
The parameter list table indices ought to be as small as possible to
avoid wasting space, and that means the index size is dependent on
whether or not DDSI_INCLUDE_SSM is set.
Signed-off-by: Erik Boasson <eb@ilities.com>
The payload in a struct serdata_default is assumed to be at a 64-bit
offset for conversion to/from a dds_{i,o}stream_t and getting padding
calculations in the serialised representation correct. The definition
did not guarantee this and got it wrong on a 32-bit release build.
This commit computes the required padding at compile time and verifies
that the assumption holds where it matters.
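A reduced sketch of the pattern (field names are made up; the real
serdata_default layout differs): force the payload to an 8-byte offset
regardless of pointer size and check it at compile time.

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    struct serdata_hdr {
      uint32_t size;
      uint32_t kind;
      void *sertopic;    /* 4 bytes on a 32-bit build, 8 on a 64-bit build */
      uint32_t keyhash;
    };

    struct serdata_example {
      union {
        struct serdata_hdr hdr;
        /* rounds the header up to a multiple of 8 and forces 8-byte alignment,
           so the payload offset is the same on 32-bit and 64-bit builds */
        uint64_t align8[(sizeof (struct serdata_hdr) + 7) / 8];
      } u;
      unsigned char payload[];   /* converted to/from dds_{i,o}stream_t */
    };

    static_assert (offsetof (struct serdata_example, payload) % 8 == 0,
                   "payload must sit at a 64-bit offset");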
Signed-off-by: Erik Boasson <eb@ilities.com>
Signed-off-by: Thijs Sassen <thijs.sassen@adlinktech.com>
Adjusted the close method so that it is not expanded by the lwIP close macro, and added a check for DDSI_INCLUDE_SSM to match the correct pid table size.
Signed-off-by: Thijs Sassen <thijs.sassen@adlinktech.com>
Currently each DDSC (not DDSI) writer has its own "xpack" for packing
submessages into larger messages, but that is a bit wasteful, especially
when a lot of samples are being generated that never need to go onto the
wire. Lazily allocating them and only pushing messages into them when
they have a destination address saves memory and improves speed for
local communications.
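A sketch of the lazy-allocation idea under assumed names (struct
writer, xpack and the helper below are illustrative, not Cyclone's
actual API):

    #include <stdlib.h>

    struct xpack { int dummy; };   /* stands in for the real submessage packer state */

    struct writer {
      struct xpack *xp;            /* NULL until the first sample that must go on the wire */
    };

    static struct xpack *writer_get_xpack (struct writer *wr)
    {
      if (wr->xp == NULL)
        wr->xp = calloc (1, sizeof (*wr->xp));   /* allocated only when a destination exists */
      return wr->xp;
    }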
Signed-off-by: Erik Boasson <eb@ilities.com>
Having multiple writers for a single instance is pretty rare, so it makes sense
to lazily allocate the tables for keeping track of them. The more
elegant solution would be to have a single lock-free table.
Signed-off-by: Erik Boasson <eb@ilities.com>
Rather than allocate a HH_HOP_RANGE large array of buckets, allocate
just 1 if the initial size is 1, then jump to HH_HOP_RANGE as soon as a
second element is added to the table. There are quite a few cases where
hash tables are created where there will never be more than 1 (or even 0)
elements in the table (e.g., a writer without readers, a reader for a
keyless topic).
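A sketch of the creation path under assumed names (the real hopscotch
table differs; HH_HOP_RANGE here is just an illustrative constant):

    #include <stdlib.h>

    #define HH_HOP_RANGE 32   /* illustrative value */

    struct hh {
      size_t nbuckets;        /* 1 initially, HH_HOP_RANGE once a second element is added */
      void **buckets;
      size_t nelems;
    };

    static struct hh *hh_new (size_t init_size)
    {
      struct hh *h = malloc (sizeof (*h));
      h->nbuckets = (init_size <= 1) ? 1 : HH_HOP_RANGE;
      h->buckets = calloc (h->nbuckets, sizeof (*h->buckets));
      h->nelems = 0;
      return h;
    }

    /* on the second insert, the add path grows the bucket array to HH_HOP_RANGE
       and re-inserts the single existing element (omitted here) */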
Signed-off-by: Erik Boasson <eb@ilities.com>
There were inconsistencies in the order in which entity locks were taken
when multiple entities needed to be locked at the same time. In most
cases, the order was first locking entity X, then locking the parent
entity of X. However, in some cases the order was reversed, a likely
cause of deadlocks.
This commit sorts out these problems, in particular the propagation of
operations into children. The entity refcount is now part of the handle
administration so that it is no longer necessary to lock an entity to
determine whether it is still allowed to be used (previously it had to
check the CLOSED flag afterward). This allows recursing into the
children while holding handles and the underlying objects alive, but
without violating lock order.
Attendant changes that would warrant their own commits but are too hard
to split off:
* Children are now no longer in a singly linked list, but in an AVL
tree; this was necessary at some intermediate stage to allow unlocking
an entity and restarting iteration over all children at the "next"
child (all thanks to the eternally unique instance handle);
* Waitsets shifted to using arrays of attached entities instead of
linked lists; this was a consequence of dealing with some locking
issues in reading triggers and considering which operations on the
"triggered" and "observed" sets are actually needed.
* Entity status flags and waitset/condition trigger counts are now
handled using atomic operations. Entities are now classified as
having a "status" with a corresponding mask, or as having a "trigger
count" (conditions). As there are fewer than 16 status bits, the
status and its mask can squeeze into the same 32-bits as the trigger
count. These atomic updates avoid the need for a separate lock just
for the trigger/status values and result in a significant speedup with
waitsets (a minimal sketch of this packing follows after this list).
* Create topic now has a more rational behaviour when multiple
participants attempt to create the same topic: each participant now
gets its own topic definition, but the underlying type representation
is shared.
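The sketch referred to above, under assumed names (not the actual
dds_entity fields): the low half of one 32-bit atomic word holds the
status bits and the high half the enabled-status mask, while condition
entities use the same word as a plain trigger count.

    #include <stdatomic.h>
    #include <stdint.h>

    #define STATUS_MASK   0x0000ffffu   /* fewer than 16 status bits fit in the low half */
    #define ENABLED_SHIFT 16            /* enabled-status mask lives in the high half */

    /* returns nonzero if the bit was newly raised and is enabled, i.e. the
       waitsets attached to the entity should be signalled */
    static int set_status_bit (_Atomic uint32_t *word, uint32_t bit)
    {
      uint32_t old = atomic_fetch_or (word, bit);
      uint32_t enabled = (old >> ENABLED_SHIFT) & STATUS_MASK;
      return (bit & enabled) != 0 && (old & bit) == 0;
    }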
Signed-off-by: Erik Boasson <eb@ilities.com>
Add the instance handle to the DDSC entity type, initialize it properly
for all types, and remove the per-type handling of
dds_get_instance_handle. Those entities that have a DDSI variant take
the instance handle from DDSI (which plays tricks to get the instance
handles of the entities matching the built-in topics). For those that
do not have a DDSI variant, just generate a unique identifier using the
same generator that DDSI uses.
Signed-off-by: Erik Boasson <eb@ilities.com>
Lease_renew is the key one, and that one only ever shifts the lease
expiry to the future, provided the lease hasn't expired already. All
other operations work within leaseheap_lock.
Other updates to lease end time are in set_expiry (which is used in some
special cases). So ... the number of ways this can go wrong is rather
limited.
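A sketch of that property under assumed names (not the actual lease
code): renewing only ever moves the expiry forward, and only while the
lease has not yet expired.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    struct lease {
      _Atomic int64_t tend;   /* expiry time in nanoseconds */
    };

    static bool lease_renew_sketch (struct lease *l, int64_t now, int64_t duration)
    {
      int64_t tend = atomic_load (&l->tend);
      const int64_t new_tend = now + duration;
      do {
        if (tend < now)        /* already expired: leave that to the expiry handling */
          return false;
        if (new_tend <= tend)  /* never move the expiry backwards */
          return true;
      } while (!atomic_compare_exchange_weak (&l->tend, &tend, new_tend));
      return true;
    }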