Stopping and restarting the DDSI stack in a single process would not re-initialise the TCP support code properly
Signed-off-by: Erik Boasson <eb@ilities.com>
Listener/status management was rather expensive; in particular, the
cost of checking listeners, then setting status flags and triggering
waitsets ran into severe lock contention.
A major cost was the repeated use of dds_entity_lock and
dds_entity_unlock; these have been eliminated. Another cost was that
each time an event occurred (with DATA_AVAILABLE the most problematic
one) it would walk the chain of ancestors to see if any had a relevant
listener, and only if none of them did would it set the status flags.
The locking/unlocking of the entity has been eliminated by moving the
listener/status flag manipulation from the general entity lock to its
m_observers_lock. That lock has a much smaller scope, and consequently
contention has been significantly reduced.
Instead of walking the entity hierarchy looking for listeners, an entity
now inherits its ancestors' listeners. The set_listener operation has
been made a little more complicated by the need to not only set the
listeners for the specified entity, but also to update any inherited
listeners of its descendants.
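A minimal sketch of the inheritance idea (hypothetical types and names,
not the actual Cyclone DDS code; locking with m_observers_lock is
omitted for brevity): a callback slot not set on an entity itself is
filled in from its parent, and set_listener pushes updates down to
descendants that have not overridden the slot themselves.

    #include <stdint.h>

    typedef void (*callback_t) (void *entity);

    enum { CB_DATA_AVAILABLE, CB_PUBLICATION_MATCHED, CB_COUNT };

    struct entity {
      struct entity *first_child, *next_sibling;
      callback_t cb[CB_COUNT]; /* effective callbacks, incl. inherited ones */
      uint32_t set_on_self;    /* bit i set => cb[i] set on this entity itself */
    };

    /* push listener updates down the tree without overriding callbacks a
       descendant set explicitly */
    static void propagate (struct entity *e, const callback_t *cb, uint32_t mask)
    {
      for (struct entity *c = e->first_child; c; c = c->next_sibling)
      {
        const uint32_t inherit = mask & ~c->set_on_self;
        for (int i = 0; i < CB_COUNT; i++)
          if (inherit & (1u << i))
            c->cb[i] = cb[i];
        propagate (c, cb, inherit);
      }
    }

    void set_listener (struct entity *e, const callback_t *cb, uint32_t mask)
    {
      e->set_on_self = mask;
      for (int i = 0; i < CB_COUNT; i++)
        if (mask & (1u << i))
          e->cb[i] = cb[i];
      propagate (e, cb, mask);
    }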
The commit is a bit larger than strictly needed ... I've started
reformatting the code to reduce the variety of styles, and as I haven't
been able to find a single tool that does what I want, it may well end
up as manual work.
Signed-off-by: Erik Boasson <eb@ilities.com>
dds_init generates an entity name for the participant, derived from the
process name and process id, but the name is currently left
uninitialised (a temporary regression caused by a recent update to the
OS abstraction layer).
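For illustration, a sketch of the intended name generation (the format
and the OS-layer helpers below are assumptions, not the actual
interface):

    #include <stdio.h>
    #include <stddef.h>

    /* hypothetical stand-ins for the OS abstraction layer's calls */
    extern const char *get_process_name (void);
    extern int get_process_id (void);

    void make_participant_name (char *buf, size_t size)
    {
      (void) snprintf (buf, size, "%s:%d", get_process_name (), get_process_id ());
    }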
Signed-off-by: Erik Boasson <eb@ilities.com>
When a remote writer is discovered, a proxy_writer object representing
that writer is created without yet having any knowledge of what the
current sequence number for that writer is. If a local reader is
matched with that proxy writer before a Heartbeat has been received and
this sequence number information becomes known, all historical data will
be made available to that reader, even if it is volatile.
By treating the first Heartbeat specially, advancing the next sequence
number to be delivered as fresh data past the last sequence number it
advertises, retrieval of historical data is avoided. Transient-local
readers have a separate ("out-of-sync") route for requesting it anyway.
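A minimal sketch of that special case (hypothetical field and function
names, not the actual DDSI code):

    #include <stdbool.h>
    #include <stdint.h>

    typedef int64_t seqno_t;

    struct proxy_writer {
      bool have_seen_heartbeat;
      seqno_t next_deliv_seq; /* next sequence number delivered as fresh data */
    };

    /* called on receipt of a Heartbeat advertising [first_seq, last_seq] */
    void handle_heartbeat (struct proxy_writer *pwr, seqno_t first_seq, seqno_t last_seq)
    {
      (void) first_seq;
      if (!pwr->have_seen_heartbeat)
      {
        /* first Heartbeat: skip everything published before this point, so
           a freshly matched volatile reader receives no historical data */
        if (last_seq + 1 > pwr->next_deliv_seq)
          pwr->next_deliv_seq = last_seq + 1;
        pwr->have_seen_heartbeat = true;
      }
      /* ... normal Heartbeat processing follows ... */
    }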
Signed-off-by: Erik Boasson <eb@ilities.com>
Move details of built-in topics out of the DDSI core (so that only the
hooks remain). For this, rtps_term had to be split, so now it is "stop"
followed by "fini".
Add a notion of local writers that are not bound to a participant ("local
orphans"), so that the local built-in topic writers can be created during
initialization. This eliminates the "builtin" participant. This
uncovered an inconsistency in the unit tests: on the one hand, a newly
created participant is expected to have no child entities; on the other
hand, the built-in topics were expected to be returned by find_topic ...
This inconsistency has been resolved by creating them lazily and
accepting that find_topic can't return them until they have been
created. Special code was in place in dds_create_reader anyway, so it
is not expected to have any real consequence for applications.
Use a special WHC implementation that regenerates the data on the fly
using the internal discovery tables of DDSI, so that the samples are only
stored by readers. This eliminates the memory overhead that existed
previously, when the WHCs of the writers stored the data.
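A sketch of such a regenerating WHC (the interface shown is an
assumption for illustration, not Cyclone DDS's actual WHC interface):

    #include <stddef.h>
    #include <stdint.h>

    typedef int64_t seqno_t;

    /* hypothetical stand-ins for the discovery tables and the serializer */
    struct serdata;
    struct entity_info;
    extern struct entity_info *lookup_entity_by_seq (seqno_t seq);
    extern struct serdata *serialize_builtin_sample (const struct entity_info *ei);

    /* the WHC stores nothing but the seq-to-entity mapping: a sample is
       re-serialized from the discovery tables whenever it is requested */
    struct serdata *builtin_whc_borrow_sample (seqno_t seq)
    {
      const struct entity_info *ei = lookup_entity_by_seq (seq);
      return ei ? serialize_builtin_sample (ei) : NULL;
    }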
No longer return the topic name and type name in the built-in topics:
they have already been extracted and are not accessible through the
normal interface, but they do cause problems when comparing QoS.
Signed-off-by: Erik Boasson <eb@ilities.com>
The only consequence is that the tkmap would probably map the same topic to a different iid each time one was written, or that a different topic would get mapped to the same iid. The latter would cause the WHC to overwrite the older topic. Actual damage is minimal, as it would only result in incomplete topic discovery by OpenSplice. That it is mostly harmless today does not mean it couldn't cause any number of interesting surprises in the future.
Signed-off-by: Erik Boasson <eb@ilities.com>
Only release/acquire ordering is required, and that is implied by the x86/x64 memory model ... avoiding the mfence instruction is a significant win
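For illustration with C11 atomics (not the code in this commit): on
x86/x64 a release store compiles to a plain MOV, whereas a sequentially
consistent store needs an XCHG or a MOV followed by MFENCE.

    #include <stdatomic.h>

    atomic_int flag;

    void publish_release (void)
    {
      /* plain store on x86/x64: release ordering comes for free */
      atomic_store_explicit (&flag, 1, memory_order_release);
    }

    void publish_seq_cst (void)
    {
      /* sequential consistency: compilers emit XCHG or MOV+MFENCE here */
      atomic_store (&flag, 1);
    }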
Signed-off-by: Erik Boasson <eb@ilities.com>
Simply switching from dds_alloc to os_malloc in alloc_sample removes a redundant memset, which gives a 5% improvement in a throughput test (on my laptop); other analogous changes were made for consistency.
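A sketch of the difference (assuming dds_alloc zero-initialises the
memory, as the redundant memset suggests; the definitions below are
illustrative only):

    #include <stdlib.h>

    /* illustrative stand-ins: dds_alloc zeroes, os_malloc does not */
    static void *dds_alloc (size_t n) { return calloc (1, n); }
    static void *os_malloc (size_t n) { return malloc (n); }

    void *alloc_sample (size_t size)
    {
      /* the sample is fully overwritten by the deserialiser right after
         allocation, so zeroing it here is wasted work */
      return os_malloc (size);
    }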
Signed-off-by: Erik Boasson <eb@ilities.com>
Without this, deleting the last reader/writer that references the topic results in a dangling pointer ... but there is another intriguing solution: erase the topic from the proxy reader/writer when the last matching local one disappears, so that the topic disappears completely. I rather like this second solution, but I am not yet sure of the consequences, and the first (the one implemented) is such a simple fix for a real problem that it is a no-brainer.
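A minimal sketch of the implemented fix (hypothetical names; atomic
reference counting omitted for brevity): the topic is kept alive by a
reference count, so deleting the last reader/writer that references it
cannot leave a dangling pointer.

    #include <stdint.h>
    #include <stdlib.h>

    struct topic {
      uint32_t refc;
      /* ... name, type support, QoS ... */
    };

    struct topic *topic_ref (struct topic *tp) { tp->refc++; return tp; }

    void topic_unref (struct topic *tp)
    {
      if (--tp->refc == 0)
        free (tp); /* freed only once nothing references it any more */
    }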
Signed-off-by: Erik Boasson <eb@ilities.com>
This addresses the deadlock of #41 but leaves another issue open: the sequencing of listener invocations for publication/subscription matched events. There is a risk that the "unmatch" event precedes the "match" event from the application's perspective, even though this is quite unlikely in practice. Various ways of addressing it exist, but it looks like sequencing at the level of the "dds" entities suffers from similar risks. So better to just avoid the deadlock for now.
Signed-off-by: Erik Boasson <eb@ilities.com>
Previously it would fall through and assert in a debug build or return an error in a release build. The behaviour in a release build was almost correct, as the flag means the entity should be completely ignored if the parameter is not understood by the implementation, but I don't believe it should result in a warning, and certainly not one that claims the parameter list is invalid. A specific return code is now used to indicate a parameter list that was rejected because of this flag, and that suppresses the warning.
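A sketch of the distinction (the flag value and all identifiers are
assumptions for illustration, not Cyclone DDS's actual definitions):

    #include <stdint.h>

    #define PID_FLAG_INCOMPATIBLE 0x4000u /* "ignore entity if not understood" */

    #define ERR_OK             0
    #define ERR_INVALID      (-1) /* malformed list: triggers the warning */
    #define ERR_INCOMPATIBLE (-2) /* rejected due to the flag: no warning */

    extern int pid_is_known (uint16_t pid);

    int check_pid (uint16_t pid)
    {
      if (pid_is_known (pid))
        return ERR_OK;
      if (pid & PID_FLAG_INCOMPATIBLE)
        return ERR_INCOMPATIBLE; /* well-formed input, entity must be ignored */
      return ERR_OK; /* unknown but ignorable parameter: skip it */
    }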
Signed-off-by: Erik Boasson <eb@ilities.com>