Fix conversion of {sec,nsec} to msec in timedwait on Windows
Internally time stamps and durations are all in nanoseconds, but the platform abstraction uses {sec,nsec} (essentially a struct timespec) and Windows uses milliseconds. The conversion to milliseconds with upwards rounding was broken, adding ~1s to each timeout.

In most of the handful of uses the effect is minor in practice, but it does matter a lot in the scheduling of Heartbeat and AckNack messages, e.g., by causing a simple throughput test to exhibit periodic drops in throughput.

Signed-off-by: Erik Boasson <eb@ilities.com>
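For illustration, a minimal self-contained C sketch of the two conversions (the struct and function names here are made up for the example; only the two expressions come from the change below). The old expression adds 999999999 ns before dividing by 1000000, inflating nearly every timeout by ~999 ms; the fixed one is a proper ceiling division of nanoseconds to milliseconds, rounding up by at most 1 ms.

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the platform's {sec,nsec} pair (essentially a struct timespec);
 * the type and helper names are illustrative, not from the commit. */
struct sec_nsec { int64_t tv_sec; int64_t tv_nsec; };

/* Old conversion: adds 999999999 before dividing by 1000000, which tacks
 * roughly 999 ms onto nearly every timeout. */
static int64_t to_msec_old (struct sec_nsec t) {
  return t.tv_sec * 1000 + (t.tv_nsec + 999999999) / 1000000;
}

/* Fixed conversion: ceiling division from nanoseconds to milliseconds,
 * rounding up by at most 1 ms. */
static int64_t to_msec_fixed (struct sec_nsec t) {
  return t.tv_sec * 1000 + (t.tv_nsec + 999999) / 1000000;
}

int main (void) {
  struct sec_nsec timeout = { 0, 100000000 }; /* a 100 ms timeout */
  printf ("old:   %lld ms\n", (long long) to_msec_old (timeout));   /* 1099 ms */
  printf ("fixed: %lld ms\n", (long long) to_msec_fixed (timeout)); /*  100 ms */
  return 0;
}

With the old formula a requested 100 ms wait becomes a ~1.1 s wait, which is the ~1 s inflation described above.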
parent 2e9685221a
commit aa6a6442c2
1 changed file with 1 addition and 1 deletion
@@ -97,7 +97,7 @@ os_result os_condTimedWait(os_cond *cond, os_mutex *mutex, const os_time *time)
   assert(cond != NULL);
   assert(mutex != NULL);

-  timems = time->tv_sec * 1000 + (time->tv_nsec + 999999999) / 1000000;
+  timems = time->tv_sec * 1000 + (time->tv_nsec + 999999) / 1000000;
   if (SleepConditionVariableSRW(&cond->cond, &mutex->lock, timems, 0)) {
     return os_resultSuccess;
   } else if (GetLastError() != ERROR_TIMEOUT) {
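For context, a rough sketch of how the fixed path is exercised. This is hedged: only os_condTimedWait, os_cond, os_mutex, os_time, os_result, os_resultSuccess, and the tv_sec/tv_nsec fields appear in this change; the lock/unlock helpers and the initialization of the condition variable and mutex are assumed to be provided elsewhere by the same os abstraction.

/* Assumed to be initialized elsewhere via the os abstraction's init calls. */
static os_cond cond;
static os_mutex mutex;

static void wait_briefly (void)
{
  os_time timeout = { .tv_sec = 0, .tv_nsec = 100 * 1000 * 1000 }; /* 100 ms */
  os_mutexLock (&mutex); /* assumed helper from the same abstraction */
  /* Before this fix, the Windows implementation converted this request to
   * ~1099 ms; with the corrected rounding it becomes 100 ms. */
  os_result r = os_condTimedWait (&cond, &mutex, &timeout);
  (void) r; /* os_resultSuccess, a timeout, or an error result */
  os_mutexUnlock (&mutex);
}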