The "rhc" test runs a random sequence of operations (writes, reads, &c.)
through an RHC with conditions attached to it. All possible state masks
are used, and query conditions are tried in two variants: one that tests
only the key value and one that tests attribute values. It depends on
the internal checking logic of the RHC, which is currently enabled only
in Debug builds because of the associated run-time overhead.
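For illustration, the shape of the test is roughly the following; the
operation and check names below are made-up stand-ins, not the actual
test code:

    #include <stdio.h>
    #include <stdlib.h>

    typedef enum { OP_WRITE, OP_READ, OP_TAKE, OP_DISPOSE, OP_COUNT } op_t;

    /* Hypothetical stand-ins for the real RHC operations and for the
       internal consistency check that only exists in Debug builds. */
    static void rhc_write (int key, int attr) { (void) key; (void) attr; }
    static void rhc_read (unsigned statemask) { (void) statemask; }
    static void rhc_take (unsigned statemask) { (void) statemask; }
    static void rhc_dispose (int key) { (void) key; }
    static void rhc_check_invariants (void) { /* Debug-only check */ }

    int main (void)
    {
      srand (1); /* fixed seed: reproducible sequence of operations */
      for (int i = 0; i < 10000; i++)
      {
        const unsigned statemask = (unsigned) (rand () % 16); /* all masks */
        const int key = rand () % 5, attr = rand () % 3;
        switch ((op_t) (rand () % OP_COUNT))
        {
          case OP_WRITE:   rhc_write (key, attr); break;
          case OP_READ:    rhc_read (statemask); break;
          case OP_TAKE:    rhc_take (statemask); break;
          case OP_DISPOSE: rhc_dispose (key); break;
          default: break;
        }
        rhc_check_invariants (); /* verify consistency after every step */
      }
      printf ("done\n");
      return 0;
    }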
Signed-off-by: Erik Boasson <eb@ilities.com>
The RHC tracing produces a huge amount of output that is hardly ever
useful, so a normal trace should definitely not include it.
Signed-off-by: Erik Boasson <eb@ilities.com>
The name change missed the uses of the macro, with the result that
datagram truncation on reception does not result in a warning (though in
the default configuration truncation cannot occur), and that the message
flags are left undefined when sending datagrams; judging by the man page,
the likelihood of the latter causing problems in practice is also small.
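For reference, the check that the rename disabled amounts to inspecting
the msg_flags field that recvmsg fills in; a standalone POSIX
illustration (not the project's code, error handling omitted):

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>

    int main (void)
    {
      int sock = socket (AF_INET, SOCK_DGRAM, 0);
      struct sockaddr_in addr;
      memset (&addr, 0, sizeof (addr));
      addr.sin_family = AF_INET;
      addr.sin_addr.s_addr = htonl (INADDR_LOOPBACK);
      addr.sin_port = 0;               /* let the kernel pick a port */
      bind (sock, (struct sockaddr *) &addr, sizeof (addr));
      socklen_t alen = sizeof (addr);
      getsockname (sock, (struct sockaddr *) &addr, &alen);

      char big[256];
      memset (big, 'x', sizeof (big));
      sendto (sock, big, sizeof (big), 0, (struct sockaddr *) &addr, sizeof (addr));

      char buf[64];                    /* deliberately too small */
      struct iovec iov = { .iov_base = buf, .iov_len = sizeof (buf) };
      struct msghdr msg;
      memset (&msg, 0, sizeof (msg));
      msg.msg_iov = &iov;
      msg.msg_iovlen = 1;
      ssize_t n = recvmsg (sock, &msg, 0);
      /* recvmsg sets MSG_TRUNC in msg_flags when the datagram did not fit */
      if (n >= 0 && (msg.msg_flags & MSG_TRUNC))
        fprintf (stderr, "warning: datagram truncated (only %zd bytes fit in the buffer)\n", n);
      close (sock);
      return 0;
    }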
Signed-off-by: Erik Boasson <eb@ilities.com>
Fallback to unicast should set options for unicast discovery (#104)
A very simple change that addresses a real usability issue, does not rely on any platform-specific changes and moreover builds cleanly on the source branch. So I am not going to wait until the AppVeyor build completes.
The default behaviour is to allow multicast, with a fallback of
disabling multicast altogether if the selected interface doesn't support
it. The trouble is that the default discovery configuration assumes that
multicast is available, avoiding the "well-known" port numbers and
avoiding sending any participant discovery messages via unicast. The
result is that the process will run in isolation, which is typically not
the desired outcome. (It is quite an annoying problem because it happens
on Linux when only a loopback interface is available: multicast over
loopback appears to work fine if you just try it, but the kernel doesn't
advertise the capability and so it doesn't get used.)
This commit changes two things: firstly, in this case it forces the
allocation of "well-known" unicast ports, and secondly, it automatically
adds the local interface's address to the discovery address set. This
way, at least communication inside the machine works.
(Note: AssumeMulticastCapable can still be used to force it to treat a
Linux loopback interface as multicast capable. It is only the default
behaviour that has changed.)
With thanks to @jwcesign who did all the work except writing the commit
message.
Signed-off-by: Erik Boasson <eb@ilities.com>
One is reasonable (a difference between size_t and the type used for a blob in an iovec); the others are SOCKET-to-int conversions caused by the OpenSSL API. Since I'm not going to fix OpenSSL and every indication is that the conversion is safe in practice, silencing the compiler is a sensible thing to do.
Signed-off-by: Erik Boasson <eb@ilities.com>
A configuration setting to allow negotiating TLS1.2 is included. If I
got things right, OpenSSL pre-1.1 can still be used, but it requires
explicitly lowering the minimum allowed TLS version in the
configuration file.
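For the record, a sketch (not the actual implementation) of how a
configured minimum might be applied through the OpenSSL API, depending
on the OpenSSL version being built against:

    #include <openssl/ssl.h>

    /* Sketch only: OpenSSL >= 1.1.1 knows TLS 1.3, 1.1.0 can still set a
       minimum version, and pre-1.1 can only exclude individual protocol
       versions via SSL_CTX_set_options. */
    static void set_min_tls_version (SSL_CTX *ctx, int allow_tls1_2)
    {
    #if OPENSSL_VERSION_NUMBER >= 0x10101000L
      SSL_CTX_set_min_proto_version (ctx, allow_tls1_2 ? TLS1_2_VERSION : TLS1_3_VERSION);
    #elif OPENSSL_VERSION_NUMBER >= 0x10100000L
      (void) allow_tls1_2; /* no TLS 1.3 in 1.1.0, so TLS 1.2 is the best there is */
      SSL_CTX_set_min_proto_version (ctx, TLS1_2_VERSION);
    #else
      (void) allow_tls1_2; /* pre-1.1: no set_min_proto_version, mask out old versions */
      SSL_CTX_set_options (ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3 | SSL_OP_NO_TLSv1 | SSL_OP_NO_TLSv1_1);
    #endif
    }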
Signed-off-by: Erik Boasson <eb@ilities.com>
With TLS1.3 the socket can indicate data is available even when there is no application data. This commit ignores a 0-byte read when no data is required. Long-term, short reads/writes in TCP mode need to be handled completely differently, but that is a story for another day.
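Roughly the case being tolerated, as a sketch rather than the actual
code:

    #include <sys/types.h>
    #include <openssl/ssl.h>

    /* Sketch only: with TLS 1.3, post-handshake messages (e.g. session
       tickets) can make the socket readable without yielding application
       data, so a 0-byte result is treated as "nothing to do yet". */
    static ssize_t tls_read_some (SSL *ssl, void *buf, size_t len)
    {
      const int n = SSL_read (ssl, buf, (int) len);
      if (n > 0)
        return n;
      switch (SSL_get_error (ssl, n))
      {
        case SSL_ERROR_WANT_READ:
        case SSL_ERROR_WANT_WRITE:
          return 0;   /* no application data this time; try again later */
        case SSL_ERROR_ZERO_RETURN:
          return -1;  /* orderly shutdown by the peer */
        default:
          return -1;  /* real error */
      }
    }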
Signed-off-by: Erik Boasson <eb@ilities.com>
Fixing that produces a lot of noise on stderr because of inappropriate use of the "info" category in various places and, on macOS, because of a rather stupid way of messing with thread scheduling priorities even when none have been specified explicitly in the configuration.
Signed-off-by: Erik Boasson <eb@ilities.com>
Stopping and restarting the DDSI stack in a single process would not re-initialise the TCP support code properly
Signed-off-by: Erik Boasson <eb@ilities.com>
When a remote writer is discovered, a proxy_writer object representing
that writer is created without yet having any knowledge of what the
current sequence number for that writer is. If a local reader is
matched with that proxy writer before a Heartbeat has been received and
this sequence-number information has become known, all historical data
will be made available to that reader, even if it is volatile.
Treating the first Heartbeat specially, by moving the sequence number of
the next sample to be delivered as fresh data forward to the writer's
next sequence number, avoids this retrieval of historical data.
Transient-local readers have a separate ("out-of-sync") route to request
it anyway.
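In outline (the structure and field names below are made up for
illustration, not the actual proxy_writer/reader-match code):

    #include <stdbool.h>
    #include <stdint.h>

    struct volatile_reader_match {
      uint64_t next_seq;    /* next sequence number to deliver as fresh data */
      bool seen_heartbeat;  /* has any Heartbeat from this writer been seen? */
    };

    static void handle_heartbeat (struct volatile_reader_match *m,
                                  uint64_t hb_first_seq, uint64_t hb_last_seq)
    {
      (void) hb_first_seq;
      if (!m->seen_heartbeat)
      {
        /* First Heartbeat: skip everything the writer published before we
           matched, so no historical data is delivered to a volatile reader. */
        m->next_seq = hb_last_seq + 1;
        m->seen_heartbeat = true;
      }
      /* Subsequent Heartbeats are handled normally (ACKNACKs for gaps, &c.). */
    }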
Signed-off-by: Erik Boasson <eb@ilities.com>
Move details of built-in topics out of the DDSI core (so that only the
hooks remain). For this, rtps_term had to be split, so now it is "stop"
followed by "fini".
Add a notion of local writers that are not bound to a participant ("local
orphans"), so that the local built-in topic writers can be created during
initialization. This eliminates the "builtin" participant. This
uncovered an inconsistency in the unit tests: on the one hand, a newly
created participant is expected to have no child entities; on the other
hand, the built-in topics were expected to be returned by find_topic ...
This inconsistency has been resolved by creating them lazily and
accepting that find_topic can't return them until they have been
created. Special code was in place in dds_create_reader anyway, so it
is not expected to have any real consequence for applications.
Use a special WHC implementation that regenerates the data on the fly
using the internal discovery tables of DDSI, so that the samples are only
stored by readers. This eliminates the memory overhead that existed
previously when the WHC of the writers stored the data.
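Conceptually it is a function-pointer table like the following sketch
(names are hypothetical, not the actual WHC interface), where the
built-in-topic variant synthesizes the sample from the discovery tables
on request instead of returning a stored copy:

    #include <stddef.h>
    #include <stdint.h>

    struct whc;
    struct sample { const void *blob; size_t size; };

    struct whc_ops {
      /* borrow a sample by sequence number; the built-in-topic variant
         regenerates it from the discovery data rather than storing it */
      int  (*borrow_sample) (struct whc *whc, uint64_t seq, struct sample *out);
      void (*return_sample) (struct whc *whc, struct sample *s);
    };

    struct whc {
      const struct whc_ops *ops;
    };

    static int whc_borrow_sample (struct whc *whc, uint64_t seq, struct sample *out)
    {
      return whc->ops->borrow_sample (whc, seq, out);
    }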
No longer return topic name and type name in the built-in topics: they
have already been extracted and are not accessible through the normal
interface, but do cause problems when comparing QoS.
Signed-off-by: Erik Boasson <eb@ilities.com>
The only consequence is that the tkmap would probably map the same topic to a different iid each time one was written, or that a different topic would get mapped to some other iid. The latter would cause the WHC to overwrite the older topic. Actual damage is minimal as it would only result in incomplete topic discovery by OpenSplice. That it is mostly harmless today does not mean it couldn't cause any number of interesting surprises in the future.
Signed-off-by: Erik Boasson <eb@ilities.com>
Without this, deleting the last reader/writer that references the topic results in a dangling pointer ... but there is another intriguing solution: erase the topic from the proxy reader/writer when the last matching local one disappears, so that the topic disappears completely. I rather like this second solution, but I am not yet sure of its consequences, and the first (implemented) one is such a simple change fixing a real problem that it is a no-brainer.
Signed-off-by: Erik Boasson <eb@ilities.com>
This addresses the deadlock of #41 but leaves another issue open: the sequencing of listener invocations on publication/subscription matched events. There is a risk that the "unmatch" event precedes the "match" event from the application perspective, even though that is quite unlikely in practice. Various ways of addressing it exist, but it looks like sequencing at the level of the "dds" entities suffers from similar risks, so better to just avoid the deadlock for now.
Signed-off-by: Erik Boasson <eb@ilities.com>
Previously it would fall through and assert in a debug build or return an error in a release build. The behaviour in a release build was almost correct, as the flag means the entity should be completely ignored if the parameter is not understood by the implementation, but I don't believe it should result in a warning, and certainly not one that claims the parameter list is invalid. A specific return code is now used to indicate a parameter list that was rejected because of this flag, and that suppresses the warning.
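Roughly the shape of the change, with made-up constants and names rather
than the actual ones:

    #include <stdint.h>

    #define RC_OK             0
    #define RC_BAD_PARAM     (-1)  /* malformed parameter list: warn */
    #define RC_INCOMPATIBLE  (-2)  /* valid, but entity must be ignored: no warning */

    /* Illustrative value for the "incompatible if not understood" bit in a
       parameter id; see the RTPS specification for the real encoding. */
    #define PID_FLAG_INCOMPATIBLE_IF_UNRECOGNIZED 0x4000

    static int handle_unrecognized_pid (uint16_t pid)
    {
      /* unrecognized parameter: either skip it, or report the distinct
         "ignore this entity" code that suppresses the warning */
      return (pid & PID_FLAG_INCOMPATIBLE_IF_UNRECOGNIZED) ? RC_INCOMPATIBLE : RC_OK;
    }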
Signed-off-by: Erik Boasson <eb@ilities.com>
This is just an intermediate step: firstly, the iid-to-key lookup is a linear scan; secondly, the lookup should happen on the RHC or the WHC and not on some internal map that associates key values with instance handles and is independent of entities.
Signed-off-by: Erik Boasson <eb@ilities.com>