windows Sun Apr 13 21:02:39 WEDT 2025

This commit is contained in:
Niklas Halle 2025-04-13 21:02:39 +02:00
parent 4fe95bfd4c
commit 4e7c63701a
18 changed files with 115050 additions and 12974 deletions


@ -70,7 +70,9 @@
"typeindex": "cpp", "typeindex": "cpp",
"typeinfo": "cpp", "typeinfo": "cpp",
"variant": "cpp", "variant": "cpp",
"codecvt": "cpp" "codecvt": "cpp",
"forward_list": "cpp",
"valarray": "cpp"
}, },
"cmake.ignoreCMakeListsMissing": true, "cmake.ignoreCMakeListsMissing": true,
"python.autoComplete.extraPaths": [ "python.autoComplete.extraPaths": [

README.md

@ -1,124 +1,159 @@
# ROS 2 Dynamic Priority Executor with Performance Monitoring
This repository contains a ROS 2 executor implementation with priority-based and deadline-based scheduling capabilities, including comprehensive performance monitoring.
## Performance Monitoring Features
- High-resolution event tracking for callbacks
- Chain-aware monitoring for multi-callback sequences
- Deadline tracking and violation detection
- Thread-aware monitoring in multi-threaded executors
- Automatic JSON log generation
- Configurable buffer sizes and auto-dumping
## Quick Start
1. Clone the repository:
```bash
git clone --recurse-submodules https://github.com/user/ROS-Dynamic-Executor-Experiments.git
```
2. Build the project:
```bash
colcon build
```
3. Source the workspace:
```bash
source install/setup.bash
```
## Using the Performance Monitor
### Basic Setup
```cpp
#include "priority_executor/priority_executor.hpp"
// Create executor with monitoring enabled
auto executor = std::make_shared<priority_executor::TimedExecutor>(options, "my_executor");
// Configure monitoring options
executor->setMonitoringOptions(
    10000,               // Buffer size
    5000,                // Auto-dump threshold
    "performance_logs"   // Output directory
);
```
### Monitoring Configuration
- **Buffer Size**: Maximum number of events to hold in memory
- **Auto-dump Threshold**: Number of events that triggers automatic file dump
- **Output Directory**: Where performance logs are saved
### Event Types Tracked
- `CALLBACK_READY`: Callback is ready for execution
- `CALLBACK_START`: Callback execution started
- `CALLBACK_END`: Callback execution completed
- `DEADLINE_MISSED`: A deadline was missed
- `DEADLINE_MET`: A deadline was successfully met
- `CHAIN_START`: Start of a callback chain
- `CHAIN_END`: End of a callback chain
### Performance Data
Each event captures:
- High-resolution timestamps
- Node and callback names
- Chain IDs and positions
- Deadlines and processing times
- Thread IDs (for multi-threaded executors)
- Additional context data
### Output Format
Performance logs are saved as JSON files with the following structure:
```json
[
  {
    "timestamp": 1234567890,
    "type": "callback_start",
    "node_name": "example_node",
    "callback_name": "timer_callback",
    "chain_id": 1,
    "is_first_in_chain": true,
    "deadline": 1000000,
    "processing_time": 500,
    "executor_id": 0,
    "additional_data": {
      "thread_id": 1,
      "cpu_affinity": 1
    }
  }
]
```
### Multi-threaded Monitoring
The monitoring system automatically handles multi-threaded executors; an illustrative sketch follows the list below:
- Tracks per-thread execution
- Records CPU affinity
- Thread-safe event recording
- Maintains event ordering
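For intuition, thread-safe recording can be as simple as a mutex-guarded buffer of timestamped events. The sketch below is purely illustrative and is not the actual `PerformanceMonitor` implementation; as the Performance Impact section below notes, the real monitor aims to be largely lock-free:
```cpp
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

// Illustrative stand-in only; the real PerformanceMonitor's internals differ.
struct MonitorEvent {
    std::uint64_t timestamp_ns;  // steady-clock timestamp
    std::size_t thread_hash;     // identifies the recording thread
    std::string type;            // e.g. "callback_start"
};

class EventBuffer {
public:
    void record(std::string type) {
        const auto now = std::chrono::steady_clock::now().time_since_epoch();
        MonitorEvent ev{
            static_cast<std::uint64_t>(
                std::chrono::duration_cast<std::chrono::nanoseconds>(now).count()),
            std::hash<std::thread::id>{}(std::this_thread::get_id()),
            std::move(type)};
        std::lock_guard<std::mutex> lock(mutex_);  // serializes writers, preserving order
        events_.push_back(std::move(ev));
    }

private:
    std::mutex mutex_;
    std::vector<MonitorEvent> events_;
};
```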
### Best Practices
1. Set appropriate buffer sizes based on your system's memory constraints
2. Enable auto-dumping for long-running systems
3. Use meaningful executor names for better log analysis
4. Monitor deadline compliance in real-time systems
5. Track callback chains for end-to-end latency analysis
### Analyzing Results
1. Performance logs are saved in the configured output directory
2. Use the provided Jupyter notebook for analysis:
```bash
jupyter notebook analysis/analysis.ipynb
```
## Advanced Usage
### Manual Log Dumping
```cpp
auto& monitor = PerformanceMonitor::getInstance();
monitor.dumpToFile("custom_log_name.json");
```
### Temporary Monitoring Disable
```cpp
executor->enableMonitoring(false);
// ... execute some callbacks ...
executor->enableMonitoring(true);
```
### Custom Event Context
Additional context can be added to events through the `additional_data` field in JSON format.
## Performance Impact
The monitoring system is designed to be lightweight:
- Lock-free recording for most operations
- Efficient event buffering
- Configurable buffer sizes
- Optional auto-dumping
- Minimal overhead during normal operation
## Contributing
Contributions are welcome! Please follow these steps:
1. Fork the repository
2. Create a feature branch
3. Commit your changes
4. Create a pull request
## License
Apache License 2.0 - See LICENSE file for details
---
The following earlier README content was removed by this commit:
# Dynamic Priority Scheduling for ROS-EDF Executor
This repository contains experiments for the ROS-EDF executor with priority-based and deadline-based scheduling capabilities.
## Usage
Clone with submodules enabled:
```
git clone --recurse-submodules https://github.com/user/ROS-Dynamic-Executor-Experiments.git
```
Make sure you have sourced your ROS 2 environment.
Run the example:
```
ros2 run casestudy casestudy_example
```
Then run the analysis notebook:
```
jupyter notebook analysis/analysis.ipynb
```
---
## Dynamically Adjusting Scheduler Approaches
The PriorityMemoryStrategy supports different scheduling policies for ROS callbacks. To extend it with dynamically adjusting scheduling that reacts to semantic events, there are several viable approaches:
### Approach A: Custom PriorityExecutableComparator
Replace the existing comparator with a custom implementation; a sketch follows the lists below:
**Pros:**
- Direct control over scheduling logic
- Clean separation of concerns
- Can implement complex scheduling policies
- Doesn't require modifying deadline values
**Cons:**
- Requires understanding and maintaining the comparison logic
- May need to add new fields to PriorityExecutable to track dynamic priorities
- Could become complex if multiple factors affect priority
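For orientation, here is a minimal sketch of what such a comparator could look like, assuming a `PriorityExecutable` with `priority`, `dynamic_priority_boost`, and `deadline` fields; these names are illustrative stand-ins, not the repository's actual API:
```cpp
#include <cstdint>

// Illustrative stand-in for the real PriorityExecutable; field names are assumptions.
struct PriorityExecutable {
    int priority = 0;
    int dynamic_priority_boost = 0;  // updated at runtime by semantic events
    uint64_t deadline = 0;           // absolute deadline in microseconds
};

// Comparator in the std::priority_queue style: returns true when e1 should run *after* e2.
struct DynamicPriorityComparator {
    bool operator()(const PriorityExecutable* e1, const PriorityExecutable* e2) const {
        const int score1 = e1->priority + e1->dynamic_priority_boost;
        const int score2 = e2->priority + e2->dynamic_priority_boost;
        if (score1 != score2) {
            return score1 < score2;          // higher combined score runs first
        }
        return e1->deadline > e2->deadline;  // tie-break: earlier deadline first
    }
};
```
A comparator of this shape slots into the existing ready-queue sorting without touching any deadline bookkeeping.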
### Approach B: Dynamic Deadline Adjustment
Keep the EDF comparator but adjust deadlines based on priority logic; a sketch follows the lists below:
**Pros:**
- Works within the existing EDF framework
- Conceptually simple - just manipulate deadline values
- Doesn't require changing the core sorting mechanism
- Easier to debug (you can log deadline changes)
**Cons:**
- Potentially confusing semantics (using deadlines to represent non-deadline priorities)
- May interfere with actual deadline-based requirements
- Could lead to instability if not carefully managed
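As a rough sketch of this idea, using assumed field names rather than the real PriorityMemoryStrategy API:
```cpp
#include <cstdint>

// Illustrative only: the real strategy stores and updates deadlines differently.
struct ExecutableDeadline {
    uint64_t base_deadline_us;       // deadline derived from the chain period
    uint64_t effective_deadline_us;  // value the EDF comparator actually sorts by
};

// Tighten the effective deadline while a semantic event raises urgency.
inline void apply_urgency_boost(ExecutableDeadline& d, uint64_t boost_us) {
    d.effective_deadline_us =
        d.base_deadline_us > boost_us ? d.base_deadline_us - boost_us : 0;
}

// Restore the original deadline once the event has been handled.
inline void clear_urgency_boost(ExecutableDeadline& d) {
    d.effective_deadline_us = d.base_deadline_us;
}
```
Because only `effective_deadline_us` is touched, the base deadline remains available to restore, which limits the instability risk noted above.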
### Approach C: Event-Driven Priority Field
Add a dynamic boost factor field to PriorityExecutable:
```cpp
int dynamic_priority_boost = 0;
```
Then modify the comparator to consider this value:
```cpp
if (p1->sched_type == DEADLINE) {
    // First compare deadline as usual
    // Then use dynamic_priority_boost as a tiebreaker
    if (p1_deadline == p2_deadline) {
        return p1->dynamic_priority_boost > p2->dynamic_priority_boost;
    }
    return p1_deadline < p2_deadline;
}
```
**Pros:**
- Clearer semantics than manipulating deadlines
- Keeps deadline scheduling intact for real-time guarantees
- Easy to adjust at runtime
**Cons:**
- Requires modifying both PriorityExecutable and comparison logic
### Approach D: Priority Multiplier System
Implement a system where chains can have priority multipliers applied:
```cpp
float priority_multiplier = 1.0f;
```
And then in the comparator:
```cpp
// For priority-based scheduling
int effective_p1 = p1->priority * p1->priority_multiplier;
int effective_p2 = p2->priority * p2->priority_multiplier;
return effective_p1 < effective_p2;
```
**Pros:**
- Scales existing priorities rather than replacing them
- Preserves relative importance within chains
- Intuitive model for temporary priority boosts
**Cons:**
- May need to handle overflow/boundary cases
- Requires careful tuning of multiplier ranges
## Recommendation
Approach C (Event-Driven Priority Field) offers the best balance of:
1. Clean semantics
2. Minimal interference with existing scheduling logic
3. Clear separation between baseline priorities and dynamic adjustments
4. Straightforward implementation
This approach maintains real-time guarantees while enabling dynamic behaviors based on semantic events, as sketched below.
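For example, a semantic event handler could temporarily raise the boost of another chain's executables; the names below mirror Approach C but are hypothetical, not a verified API of this repository:
```cpp
#include <memory>
#include <vector>

// Hypothetical sketch; dynamic_priority_boost mirrors Approach C above.
struct PriorityExecutable {
    int priority = 0;
    int dynamic_priority_boost = 0;
};

// Called from, e.g., an obstacle-detection callback: temporarily favor one chain.
void boost_chain(std::vector<std::shared_ptr<PriorityExecutable>>& chain, int boost) {
    for (auto& executable : chain) {
        executable->dynamic_priority_boost += boost;
    }
}

// Reset once the event has been handled, so baseline priorities apply again.
void reset_chain_boost(std::vector<std::shared_ptr<PriorityExecutable>>& chain) {
    for (auto& executable : chain) {
        executable->dynamic_priority_boost = 0;
    }
}
```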

File diff suppressed because it is too large.


@ -0,0 +1,46 @@
===========================
Performance Monitor Report
===========================
Executor: priority_executor (ID: 51052544)
----------------------------------------
Callback Execution Report:
=========================
experiment_parameters//node_0 (chain:0 end)
Calls: 499
Avg time: 19848.2 µs
Min time: 16832 µs
Max time: 40185 µs
Total time: 9904264 µs
experiment_parameters//node_2 (chain:1 end)
Calls: 499
Avg time: 24434 µs
Min time: 20683 µs
Max time: 59658 µs
Total time: 12192560 µs
experiment_parameters//parameter_events (chain:0)
Calls: 1
Avg time: 185 µs
Min time: 185 µs
Max time: 185 µs
Total time: 185 µs
experiment_parameters/timer_100ms (chain:0 start)
Calls: 500
Avg time: 11759.7 µs
Min time: 8683 µs
Max time: 23513 µs
Total time: 5879840 µs
experiment_parameters/timer_100ms (chain:1 start)
Calls: 500
Avg time: 16964.2 µs
Min time: 13176 µs
Max time: 32020 µs
Total time: 8482083 µs

File diff suppressed because it is too large.


@ -1,43 +0,0 @@
[
{"entry": {"operation": "start_work", "chain": 0, "node": "CustomListenerNode/CustomListenerNode-wildfire_talk", "count": 0}, "time": 18545986},
{"entry": {"operation": "end_work", "chain": 0, "node": "CustomListenerNode/CustomListenerNode-wildfire_talk", "count": 0, "duration_us": 19}, "time": 18545986},
{"entry": {"operation": "start_work", "chain": 0, "node": "CustomListenerNode/CustomListenerNode-wildfire_talk", "count": 0}, "time": 18546487},
{"entry": {"operation": "end_work", "chain": 0, "node": "CustomListenerNode/CustomListenerNode-wildfire_talk", "count": 0, "duration_us": 91}, "time": 18546487},
{"entry": {"operation": "start_work", "chain": 0, "node": "CustomListenerNode/CustomListenerNode-wildfire_talk", "count": 0}, "time": 18546987},
{"entry": {"operation": "end_work", "chain": 0, "node": "CustomListenerNode/CustomListenerNode-wildfire_talk", "count": 0, "duration_us": 30}, "time": 18546987},
{"entry": {"operation": "start_work", "chain": 0, "node": "CustomListenerNode/CustomListenerNode-wildfire_talk", "count": 0}, "time": 18547487},
{"entry": {"operation": "end_work", "chain": 0, "node": "CustomListenerNode/CustomListenerNode-wildfire_talk", "count": 0, "duration_us": 76}, "time": 18547487},
{"entry": {"operation": "start_work", "chain": 0, "node": "CustomListenerNode/CustomListenerNode-wildfire_talk", "count": 0}, "time": 18547986},
{"entry": {"operation": "end_work", "chain": 0, "node": "CustomListenerNode/CustomListenerNode-wildfire_talk", "count": 0, "duration_us": 46}, "time": 18547986},
{"entry": {"operation": "start_work", "chain": 0, "node": "CustomListenerNode/CustomListenerNode-wildfire_talk", "count": 0}, "time": 18548488},
{"entry": {"operation": "end_work", "chain": 0, "node": "CustomListenerNode/CustomListenerNode-wildfire_talk", "count": 0, "duration_us": 61}, "time": 18548488},
{"entry": {"operation": "start_work", "chain": 0, "node": "CustomListenerNode/CustomListenerNode-wildfire_talk", "count": 0}, "time": 18548986},
{"entry": {"operation": "end_work", "chain": 0, "node": "CustomListenerNode/CustomListenerNode-wildfire_talk", "count": 0, "duration_us": 23}, "time": 18548986},
{"entry": {"operation": "start_work", "chain": 0, "node": "CustomListenerNode/CustomListenerNode-wildfire_talk", "count": 0}, "time": 18549487},
{"entry": {"operation": "end_work", "chain": 0, "node": "CustomListenerNode/CustomListenerNode-wildfire_talk", "count": 0, "duration_us": 73}, "time": 18549487},
{"entry": {"operation": "start_work", "chain": 0, "node": "CustomListenerNode/CustomListenerNode-wildfire_talk", "count": 0}, "time": 18549987},
{"entry": {"operation": "end_work", "chain": 0, "node": "CustomListenerNode/CustomListenerNode-wildfire_talk", "count": 0, "duration_us": 13}, "time": 18549987},
{"entry": {"operation": "start_work", "chain": 0, "node": "CustomListenerNode/CustomListenerNode-wildfire_talk", "count": 0}, "time": 18550486},
{"entry": {"operation": "end_work", "chain": 0, "node": "CustomListenerNode/CustomListenerNode-wildfire_talk", "count": 0, "duration_us": 141}, "time": 18550487},
{"entry": {"operation": "start_work", "chain": 0, "node": "CustomTalkerNode/CustomTalkerNode-timer", "count": 0}, "time": 18545986},
{"entry": {"operation": "end_work", "chain": 0, "node": "CustomTalkerNode/CustomTalkerNode-timer", "count": 0, "duration_us": 255}, "time": 18545986},
{"entry": {"operation": "start_work", "chain": 0, "node": "CustomTalkerNode/CustomTalkerNode-timer", "count": 0}, "time": 18546486},
{"entry": {"operation": "end_work", "chain": 0, "node": "CustomTalkerNode/CustomTalkerNode-timer", "count": 0, "duration_us": 459}, "time": 18546486},
{"entry": {"operation": "start_work", "chain": 0, "node": "CustomTalkerNode/CustomTalkerNode-timer", "count": 0}, "time": 18546986},
{"entry": {"operation": "end_work", "chain": 0, "node": "CustomTalkerNode/CustomTalkerNode-timer", "count": 0, "duration_us": 264}, "time": 18546986},
{"entry": {"operation": "start_work", "chain": 0, "node": "CustomTalkerNode/CustomTalkerNode-timer", "count": 0}, "time": 18547486},
{"entry": {"operation": "end_work", "chain": 0, "node": "CustomTalkerNode/CustomTalkerNode-timer", "count": 0, "duration_us": 338}, "time": 18547486},
{"entry": {"operation": "start_work", "chain": 0, "node": "CustomTalkerNode/CustomTalkerNode-timer", "count": 0}, "time": 18547986},
{"entry": {"operation": "end_work", "chain": 0, "node": "CustomTalkerNode/CustomTalkerNode-timer", "count": 0, "duration_us": 153}, "time": 18547986},
{"entry": {"operation": "start_work", "chain": 0, "node": "CustomTalkerNode/CustomTalkerNode-timer", "count": 0}, "time": 18548487},
{"entry": {"operation": "end_work", "chain": 0, "node": "CustomTalkerNode/CustomTalkerNode-timer", "count": 0, "duration_us": 227}, "time": 18548488},
{"entry": {"operation": "start_work", "chain": 0, "node": "CustomTalkerNode/CustomTalkerNode-timer", "count": 0}, "time": 18548986},
{"entry": {"operation": "end_work", "chain": 0, "node": "CustomTalkerNode/CustomTalkerNode-timer", "count": 0, "duration_us": 78}, "time": 18548986},
{"entry": {"operation": "start_work", "chain": 0, "node": "CustomTalkerNode/CustomTalkerNode-timer", "count": 0}, "time": 18549486},
{"entry": {"operation": "end_work", "chain": 0, "node": "CustomTalkerNode/CustomTalkerNode-timer", "count": 0, "duration_us": 294}, "time": 18549486},
{"entry": {"operation": "start_work", "chain": 0, "node": "CustomTalkerNode/CustomTalkerNode-timer", "count": 0}, "time": 18549986},
{"entry": {"operation": "end_work", "chain": 0, "node": "CustomTalkerNode/CustomTalkerNode-timer", "count": 0, "duration_us": 260}, "time": 18549986},
{"entry": {"operation": "start_work", "chain": 0, "node": "CustomTalkerNode/CustomTalkerNode-timer", "count": 0}, "time": 18550486},
{"entry": {"operation": "end_work", "chain": 0, "node": "CustomTalkerNode/CustomTalkerNode-timer", "count": 0, "duration_us": 183}, "time": 18550486},
{"entry": {"operation": "start_work", "chain": 0, "node": "CustomTalkerNode/CustomTalkerNode-timer", "count": 0}, "time": 18550986},
{"entry": {"operation": "end_work", "chain": 0, "node": "CustomTalkerNode/CustomTalkerNode-timer", "count": 0, "duration_us": 395}, "time": 18550986}]

File diff suppressed because it is too large.


@ -25,6 +25,7 @@ set(EXPERIMENTS
casestudy_2024ours_latency
casestudy_2024ours_executor2executor
casestudy_example
casestudy_monitored_example
casestudy_fire_drone
)


@ -0,0 +1,175 @@
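// Case study: two two-callback chains with 100 ms periods run on monitored
// TimedExecutor instances. Monitoring is configured per executor via
// setMonitoringOptions(); a textual performance report is written to
// performance_logs/monitored_example, followed by a final JSON dump of the
// recorded events.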
#include <cmath>
#include <chrono>
#include <string>
#include <fstream>
#include <iostream>
#include <filesystem>
#include <unistd.h>
#include <sys/prctl.h>
#include <rclcpp/rclcpp.hpp>
#include <std_msgs/msg/string.hpp>
#include "casestudy_tools/test_nodes.hpp"
#include "casestudy_tools/experiment.hpp"
#include "casestudy_tools/primes_workload.hpp"
#include "priority_executor/priority_executor.hpp"
#include "priority_executor/priority_memory_strategy.hpp"
std::atomic<bool> should_do_task(true);
// Global performance monitoring configuration
constexpr size_t MONITOR_BUFFER_SIZE = 10000;
constexpr size_t MONITOR_AUTO_DUMP_THRESHOLD = 5000;
const std::string MONITOR_OUTPUT_DIR = "performance_logs/monitored_example";
void configure_executor_monitoring(std::shared_ptr<priority_executor::TimedExecutor> executor,
const std::string& name) {
executor->setMonitoringOptions(
MONITOR_BUFFER_SIZE,
MONITOR_AUTO_DUMP_THRESHOLD,
MONITOR_OUTPUT_DIR + "/" + name
);
// Enable monitoring by default
executor->enableMonitoring(true);
}
void run_one_executor(
std::function<void(std::atomic<bool>&)> const& exec_fun,
int const idx) {
// Set up process scheduling
struct sched_param param;
param.sched_priority = sched_get_priority_max(SCHED_FIFO) - 2;
int const result = sched_setscheduler(0, SCHED_FIFO, &param);
if (result != 0) {
std::cerr << "ros_experiment: sched_setscheduler failed: " << result
<< ": " << strerror(errno) << std::endl;
std::cerr << "This operation requires root privileges. Please run the program with sufficient permissions."
<< std::endl;
#ifndef WIN_DOCKER_IS_BROKEN
exit(EXIT_FAILURE);
#else
std::cerr << "Continuing without setting scheduler, fuck Windows" << std::endl;
#endif
}
prctl(PR_SET_NAME, "ros_experiment", 0, 0, 0);
exec_fun(should_do_task);
}
int ros_experiment(
int const argc,
char** const argv,
std::string const& file_name,
ExperimentConfig config) {
rclcpp::init(argc, argv);
auto const node = rclcpp::Node::make_shared("experiment_parameters");
node->declare_parameter("executor_type", "edf");
std::string executor_type;
node->get_parameter("executor_type", executor_type);
std::cout << "using executor type: " << executor_type << std::endl;
config.executor_type = executor_type;
config.parallel_mode = true;
config.nodes.push_back(node);
Experiment experiment(config);
auto const exec_funs = experiment.getRunFunctions();
std::cout << "got " << exec_funs.size() << " executors" << std::endl;
// Configure monitoring for each executor
int executor_id = 0;
std::vector<std::shared_ptr<priority_executor::TimedExecutor>> monitored_executors;
for (const auto& executor : experiment.getExecutors()) {
if (auto timed_exec = std::dynamic_pointer_cast<priority_executor::TimedExecutor>(executor.executor)) {
std::string exec_name = "executor_" + std::to_string(executor_id++);
configure_executor_monitoring(timed_exec, exec_name);
monitored_executors.push_back(timed_exec);
}
}
std::vector<std::thread> exec_threads;
int i = 0;
experiment.resetTimers();
for (auto const& exec_fun : exec_funs) {
exec_threads.emplace_back(run_one_executor, exec_fun, i++);
}
for (auto& t : exec_threads) {
t.join();
}
// Write performance reports for each executor
std::ofstream report_file(MONITOR_OUTPUT_DIR + "/performance_report.txt");
if (report_file.is_open()) {
report_file << "===========================\n";
report_file << " Performance Monitor Report\n";
report_file << "===========================\n\n";
for (const auto& executor : monitored_executors) {
report_file << "Executor: " << executor->getExecutorName()
<< " (ID: " << executor->getExecutorId() << ")\n";
report_file << "----------------------------------------\n";
report_file << executor->getCallbackExecutionReport() << "\n\n";
}
report_file.close();
std::cout << "Performance report written to " << MONITOR_OUTPUT_DIR
<< "/performance_report.txt" << std::endl;
} else {
std::cerr << "Failed to open performance report file" << std::endl;
}
// Ensure all performance logs are written before experiment logs
auto& monitor = priority_executor::PerformanceMonitor::getInstance();
monitor.dumpToFile("final_dump.json");
std::string const outputname = "monitored_example";
experiment.writeLogsToFile(file_name, outputname);
return 0;
}
int main(int argc, char* argv[]) {
// Create the performance logs directory
std::filesystem::create_directories(MONITOR_OUTPUT_DIR);
// calibrate the dummy load for the current system
calibrate_dummy_load();
ExperimentConfig config;
// Set up two chains of callbacks to demonstrate monitoring
config.chain_lengths = {2, 2};
config.callback_ids = {{0, 1}, {2, 3}};
config.callback_priorities = {1, 0, 3, 2};
config.chain_timer_control = {0, 1};
// Set different runtimes to see varying processing times in the logs
config.callback_runtimes = {10, 20, 15, 25};
config.chain_periods = {100, 100};
config.callback_executor_assignments = {};
config.parallel_mode = true;
config.cores = 2;
sanity_check_config(config);
std::thread ros_task([&]() {
try {
ros_experiment(argc, argv, "monitored_example", config);
} catch (std::exception const& e) {
std::cerr << "Exception in ros_experiment: " << e.what() << std::endl;
} catch (...) {
std::cerr << "Unknown exception in ros_experiment." << std::endl;
}
});
std::cout << "tasks started" << std::endl;
ros_task.join();
std::cout << "tasks joined" << std::endl;
return 0;
}


@ -88,6 +88,12 @@ public:
*/
experiment_executor getExecutor(int executor_idx = 0);
/**
 * @brief Get all executors created for the experiment
 * @return std::vector<experiment_executor>
 */
std::vector<experiment_executor> getExecutors();
node_time_logger getInternalLogger();
/**


@ -142,6 +142,10 @@ experiment_executor Experiment::getExecutor(int const executor_idx) {
return executors[executor_idx];
}
std::vector<experiment_executor> Experiment::getExecutors() {
return executors;
}
void Experiment::createExecutors() {
if (config.callback_executor_assignments.empty()) {
// Create a single executor


@ -1,69 +0,0 @@
cmake_minimum_required(VERSION 3.5)
project(existing_system_monitor VERSION 0.1.0)
# Set C++ standards
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)
# Compiler options
if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
add_compile_options(-Wall -Wextra -Wpedantic)
endif()
# Find dependencies
find_package(ament_cmake REQUIRED)
find_package(rclcpp REQUIRED)
find_package(casestudy_tools REQUIRED)
find_package(simple_timer REQUIRED)
# Library targets
add_library(${PROJECT_NAME} src/instrumented_node.cpp)
target_include_directories(${PROJECT_NAME} PUBLIC
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
$<INSTALL_INTERFACE:include>)
ament_target_dependencies(${PROJECT_NAME}
rclcpp
casestudy_tools
simple_timer
)
# Executable targets
add_executable(uas src/uas.cpp)
target_include_directories(uas PUBLIC
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
$<INSTALL_INTERFACE:include>
)
ament_target_dependencies(uas
simple_timer
casestudy_tools
rclcpp
)
target_link_libraries(uas ${PROJECT_NAME})
# Testing
if(BUILD_TESTING)
find_package(ament_lint_auto REQUIRED)
ament_lint_auto_find_test_dependencies()
endif()
# Installation
install(
DIRECTORY include/
DESTINATION include
)
install(
TARGETS ${PROJECT_NAME} uas
EXPORT export_${PROJECT_NAME}
LIBRARY DESTINATION lib
ARCHIVE DESTINATION lib
RUNTIME DESTINATION lib/${PROJECT_NAME}
INCLUDES DESTINATION include
)
# Export and package configuration
ament_export_include_directories(include)
ament_export_targets(export_${PROJECT_NAME} HAS_LIBRARY_TARGET)
ament_export_dependencies(rclcpp casestudy_tools simple_timer)
ament_package()


@ -1,127 +0,0 @@
#ifndef EXISTING_SYSTEM_MONITOR_INSTRUMENTED_NODE_HPP
#define EXISTING_SYSTEM_MONITOR_INSTRUMENTED_NODE_HPP
#include <string>
#include <memory>
#include <functional>
#include <unordered_map>
#include <chrono>
#include "rclcpp/rclcpp.hpp"
#include "simple_timer/rt-sched.hpp"
namespace existing_system_monitor {
// Base class for instrumented callbacks
class InstrumentedCallback {
public:
InstrumentedCallback(std::string const &node_name, std::string const &callback_name, int chain_id)
: node_name_(node_name)
, callback_name_(callback_name)
, chain_id_(chain_id) {
logger_ = create_logger();
}
virtual ~InstrumentedCallback() = default;
template<typename... Args>
void before_callback(Args&&... args);
template<typename... Args>
void after_callback(Args&&... args);
friend class MetricsCollector;
protected:
node_time_logger logger_;
private:
std::string node_name_;
std::string callback_name_;
int chain_id_;
std::chrono::high_resolution_clock::time_point start_time_;
};
// Wrapper for a subscription callback
template<typename MessageT>
class InstrumentedSubscriptionCallback : public InstrumentedCallback {
public:
using CallbackT = std::function<void(typename MessageT::SharedPtr)>;
InstrumentedSubscriptionCallback(
const std::string& node_name,
const std::string& topic_name,
int chain_id,
CallbackT original_callback);
void callback(typename MessageT::SharedPtr msg) {
before_callback();
original_callback_(msg);
after_callback();
}
private:
CallbackT original_callback_;
};
// Wrapper for a timer callback
class InstrumentedTimerCallback : public InstrumentedCallback {
public:
using CallbackT = std::function<void()>;
InstrumentedTimerCallback(
const std::string& node_name,
const std::string& timer_name,
int chain_id,
CallbackT original_callback);
void callback();
private:
CallbackT original_callback_;
};
// Factory class for creating instrumented nodes
class NodeInstrumenter {
public:
static std::shared_ptr<rclcpp::Node> instrument_node(
std::shared_ptr<rclcpp::Node> original_node,
int chain_id = 0);
template<typename MessageT>
static typename rclcpp::Subscription<MessageT>::SharedPtr instrument_subscription(
std::shared_ptr<rclcpp::Node> node,
typename rclcpp::Subscription<MessageT>::SharedPtr original_subscription,
int chain_id = 0);
static rclcpp::TimerBase::SharedPtr instrument_timer(
std::shared_ptr<rclcpp::Node> node,
rclcpp::TimerBase::SharedPtr original_timer,
int chain_id = 0);
};
// Class to hold metrics collection and analysis
class MetricsCollector {
public:
struct Config {
std::string executor_type = "default";
};
static MetricsCollector& get_instance();
void register_callback(std::shared_ptr<InstrumentedCallback> callback);
void write_logs_to_file(const std::string& file_name, const std::string& experiment_name);
void set_config(Config config) { config_ = config; }
private:
MetricsCollector() {
logger_ = create_logger();
}
std::unordered_map<std::string, std::shared_ptr<InstrumentedCallback>> callbacks_;
node_time_logger logger_;
Config config_;
std::string buildLogs(std::vector<node_time_logger> node_logs = {});
};
} // namespace existing_system_monitor
#endif // EXISTING_SYSTEM_MONITOR_INSTRUMENTED_NODE_HPP


@ -1,101 +0,0 @@
#ifndef EXISTING_SYSTEM_MONITOR_NODE_ADAPTER_HPP
#define EXISTING_SYSTEM_MONITOR_NODE_ADAPTER_HPP
#include "rclcpp/rclcpp.hpp"
#include "existing_system_monitor/instrumented_node.hpp"
namespace existing_system_monitor {
/**
* Adapter class that wraps an existing ROS2 node with instrumentation
* Usage: instead of inheriting from rclcpp::Node, inherit from InstrumentedNode
*/
template <typename NodeT>
class InstrumentedNode : public NodeT {
public:
struct NodeIdentifier {
int chain_id;
std::string node_name;
};
template <typename... Args>
InstrumentedNode(NodeIdentifier id, Args&&... args)
: NodeT(std::forward<Args>(args)...), chain_id_(id.chain_id), node_name_(id.node_name) {
}
// Override subscription creation
template <typename MessageT, typename CallbackT>
typename rclcpp::Subscription<MessageT>::SharedPtr create_subscription(
const std::string& topic_name,
const rclcpp::QoS& qos,
CallbackT&& callback,
const rclcpp::SubscriptionOptions& options = rclcpp::SubscriptionOptions())
{
// Create a wrapper for the callback
auto instrumented_callback =
std::make_shared<InstrumentedSubscriptionCallback<MessageT>>(
this->get_name(),
node_name_ + "-" + topic_name,
chain_id_,
std::forward<CallbackT>(callback)
);
// Register the callback with the MetricsCollector
MetricsCollector::get_instance().register_callback(instrumented_callback);
// Create the subscription with the instrumented callback
auto subscription =
NodeT::template create_subscription<MessageT>(
topic_name,
qos,
[instrumented_callback](typename MessageT::SharedPtr msg) {
instrumented_callback->callback(msg);
},
options
);
return subscription;
}
// Override timer creation
template <typename DurationT, typename CallbackT>
rclcpp::TimerBase::SharedPtr create_wall_timer(
DurationT period,
CallbackT&& callback,
rclcpp::CallbackGroup::SharedPtr group = nullptr)
{
// Create a wrapper for the callback
auto instrumented_callback =
std::make_shared<InstrumentedTimerCallback>(
this->get_name(),
node_name_ + "-timer",
chain_id_,
std::forward<CallbackT>(callback)
);
// Register the callback with the MetricsCollector
MetricsCollector::get_instance().register_callback(instrumented_callback);
// Create the timer with the instrumented callback
auto timer =
NodeT::create_wall_timer(
period,
[instrumented_callback]() {
instrumented_callback->callback();
},
group
);
return timer;
}
private:
int chain_id_;
std::string node_name_;
};
} // namespace existing_system_monitor
#endif // EXISTING_SYSTEM_MONITOR_NODE_ADAPTER_HPP


@ -1,22 +0,0 @@
<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>existing_system_monitor</name>
<version>0.1.0</version>
<description>Tools for monitoring existing ROS2 systems</description>
<maintainer email="niklas@niklashalle.net">niklas</maintainer>
<license>Apache License 2.0</license>
<buildtool_depend>ament_cmake</buildtool_depend>
<depend>rclcpp</depend>
<depend>casestudy_tools</depend>
<depend>simple_timer</depend>
<test_depend>ament_lint_auto</test_depend>
<test_depend>ament_lint_common</test_depend>
<export>
<build_type>ament_cmake</build_type>
</export>
</package>


@ -1,161 +0,0 @@
#include <sstream>
#include <fstream>
#include <filesystem>
#include "std_msgs/msg/string.hpp"
#include "existing_system_monitor/instrumented_node.hpp"
namespace existing_system_monitor {
template<typename... Args>
void InstrumentedCallback::before_callback(Args&&... args) {
// Record start time
start_time_ = std::chrono::high_resolution_clock::now();
// Log start of callback execution
std::ostringstream ss;
ss << "{\"operation\": \"start_work\", \"chain\": " << chain_id_
<< ", \"node\": \"" << node_name_ << "/" << callback_name_ << "\", \"count\": 0}";
log_entry(logger_, ss.str());
}
template<typename... Args>
void InstrumentedCallback::after_callback(Args&&... args) {
// Calculate execution time
auto end_time = std::chrono::high_resolution_clock::now();
auto duration = std::chrono::duration_cast<std::chrono::microseconds>(
end_time - start_time_).count();
// Log end of callback execution
std::ostringstream ss;
ss << "{\"operation\": \"end_work\", \"chain\": " << chain_id_
<< ", \"node\": \"" << node_name_ << "/" << callback_name_
<< "\", \"count\": 0, \"duration_us\": " << duration << "}";
log_entry(logger_, ss.str());
}
template<typename MessageT>
InstrumentedSubscriptionCallback<MessageT>::InstrumentedSubscriptionCallback(
const std::string& node_name,
const std::string& topic_name,
int chain_id,
CallbackT original_callback)
: InstrumentedCallback(node_name, topic_name, chain_id),
original_callback_(original_callback) {
}
InstrumentedTimerCallback::InstrumentedTimerCallback(
const std::string& node_name,
const std::string& timer_name,
int chain_id,
CallbackT original_callback)
: InstrumentedCallback(node_name, timer_name, chain_id),
original_callback_(original_callback) {
}
void InstrumentedTimerCallback::callback() {
before_callback();
original_callback_();
after_callback();
}
std::shared_ptr<rclcpp::Node> NodeInstrumenter::instrument_node(
std::shared_ptr<rclcpp::Node> original_node,
int chain_id) {
// For now, we just return the original node
// In a more advanced implementation, we might create a proxy node
return original_node;
}
template<typename MessageT>
typename rclcpp::Subscription<MessageT>::SharedPtr NodeInstrumenter::instrument_subscription(
std::shared_ptr<rclcpp::Node> node,
typename rclcpp::Subscription<MessageT>::SharedPtr original_subscription,
int chain_id) {
// To be implemented
return original_subscription;
}
rclcpp::TimerBase::SharedPtr NodeInstrumenter::instrument_timer(
std::shared_ptr<rclcpp::Node> node,
rclcpp::TimerBase::SharedPtr original_timer,
int chain_id) {
// To be implemented
return original_timer;
}
MetricsCollector& MetricsCollector::get_instance() {
static MetricsCollector instance;
return instance;
}
void MetricsCollector::register_callback(std::shared_ptr<InstrumentedCallback> callback) {
// Create a unique key for the callback using its node name and callback name
std::string key = std::to_string(callback->chain_id_) + "_" +
callback->node_name_ + "_" +
callback->callback_name_;
callbacks_[key] = callback;
}
std::string MetricsCollector::buildLogs(std::vector<node_time_logger> node_logs) {
std::stringstream output_file;
output_file << "[" << std::endl;
// Add logs from MetricsCollector's logger
for (auto const& log : *(logger_.recorded_times)) {
output_file << "{\"entry\": " << log.first
<< ", \"time\": " << log.second << "}," << std::endl;
}
// Add logs from registered callbacks
for (auto const& [name, callback] : callbacks_) {
for (auto const& log : *(callback->logger_.recorded_times)) {
output_file << "{\"entry\": " << log.first
<< ", \"time\": " << log.second << "}," << std::endl;
}
}
// Add logs from additional node loggers passed in
for (auto const& logger : node_logs) {
for (auto const& log : *(logger.recorded_times)) {
output_file << "{\"entry\": " << log.first
<< ", \"time\": " << log.second << "}," << std::endl;
}
}
// Remove the last comma if there are entries
if (!logger_.recorded_times->empty() || !callbacks_.empty() || !node_logs.empty()) {
output_file.seekp(-2, std::ios_base::end);
}
output_file << "]" << std::endl;
return output_file.str();
}
void MetricsCollector::write_logs_to_file(const std::string& file_name, const std::string& experiment_name) {
std::filesystem::path results_dir = "results";
std::filesystem::path exp_dir = results_dir / experiment_name;
// Create directories if they don't exist
try {
std::filesystem::create_directories(exp_dir);
} catch (std::filesystem::filesystem_error const &e) {
std::cout << "Could not create results directory: " << e.what() << std::endl;
return;
}
std::filesystem::path output_path = exp_dir / (file_name + "_" + config_.executor_type + ".json");
std::ofstream output_file(output_path);
output_file << buildLogs();
output_file.close();
std::cout << "results written to " << output_path.string() << std::endl;
}
template class InstrumentedSubscriptionCallback<std_msgs::msg::String>;
} // namespace existing_system_monitor


@ -1,125 +0,0 @@
#include <chrono>
#include <memory>
#include <string>
#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/string.hpp"
#include "casestudy_tools/primes_workload.hpp"
#include "existing_system_monitor/node_adapter.hpp"
#include "priority_executor/priority_executor.hpp"
#include "priority_executor/priority_memory_strategy.hpp"
using namespace std::chrono_literals;
// the topic
constexpr char const* TOPIC_NAME = "wildfire_talk";
// Modified Node - Inherit from InstrumentedNode instead of rclcpp::Node
class CustomTalkerNode : public existing_system_monitor::InstrumentedNode<rclcpp::Node> {
public:
// Pass chain_id to InstrumentedNode constructor
CustomTalkerNode(int chain_id, std::string const &node_name) : InstrumentedNode({chain_id, node_name}, node_name) {
// The rest of the code stays the same!
publisher_ = this->create_publisher<std_msgs::msg::String>(TOPIC_NAME, 10);
timer_ = this->create_wall_timer(
500ms, std::bind(&CustomTalkerNode::timer_callback, this));
}
rclcpp::TimerBase::SharedPtr timer_;
private:
void timer_callback() {
auto message = std_msgs::msg::String();
message.data = "Hello, world! " + std::to_string(count_++);
RCLCPP_INFO(this->get_logger(), "Publishing: '%s'", message.data.c_str());
publisher_->publish(message);
}
rclcpp::Publisher<std_msgs::msg::String>::SharedPtr publisher_;
int count_ = 0;
};
// Modified Node - Inherit from InstrumentedNode instead of rclcpp::Node
class CustomListenerNode : public existing_system_monitor::InstrumentedNode<rclcpp::Node> {
public:
// Pass chain_id to InstrumentedNode constructor
CustomListenerNode(int chain_id, const std::string& node_name) : InstrumentedNode({chain_id, node_name}, node_name) {
sub_ = this->create_subscription<std_msgs::msg::String>(
TOPIC_NAME, 10, std::bind(&CustomListenerNode::topic_callback, this, std::placeholders::_1));
}
rclcpp::Subscription<std_msgs::msg::String>::SharedPtr sub_;
private:
void topic_callback(const std_msgs::msg::String::SharedPtr msg) const {
RCLCPP_INFO(this->get_logger(), "I heard: '%s'", msg->data.c_str());
}
};
int main(int argc, char* argv[]) {
// calibrate the dummy load for the current system
calibrate_dummy_load();
existing_system_monitor::MetricsCollector::Config config;
config.executor_type = "edf";
auto& collector = existing_system_monitor::MetricsCollector::get_instance();
collector.set_config(config);
rclcpp::init(argc, argv);
// Create nodes for the experiment
/*
auto talker = std::make_shared<existing_system_monitor::InstrumentedNode<CustomTalkerNode>>(0, "CustomTalkerNode");
auto listener1 = std::make_shared<existing_system_monitor::InstrumentedNode<CustomListenerNode>>(0, "CustomListenerNode");
auto listener2 = std::make_shared<existing_system_monitor::InstrumentedNode<CustomListenerNode>>(0, "CustomListenerNode");
*/
auto talker = std::make_shared<CustomTalkerNode>(0, "CustomTalkerNode");
auto listener1 = std::make_shared<CustomListenerNode>(0, "CustomListenerNode");
auto listener2 = std::make_shared<CustomListenerNode>(0, "CustomListenerNode");
rclcpp::ExecutorOptions options;
auto strategy = std::make_shared<priority_executor::PriorityMemoryStrategy<>>();
options.memory_strategy = strategy;
auto executor = new priority_executor::TimedExecutor(options);
// must be set so that post_execute can set new deadlines
executor->prio_memory_strategy_ = strategy;
// the new functions in PriorityMemoryStrategy accept the handle of the
// timer/subscription as the first argument
strategy->set_executable_deadline(talker->timer_->get_timer_handle(), 1000, priority_executor::ExecutableType::TIMER, 0);
// you _must_ set the timer_handle for each chain
strategy->get_priority_settings(talker->timer_->get_timer_handle())->timer_handle = talker->timer_;
// you _must_ mark the first executable in the chain
strategy->set_first_in_chain(talker->timer_->get_timer_handle());
// set the same period and chain_id for each callback in the chain
strategy->set_executable_deadline(listener1->sub_->get_subscription_handle(), 1000, priority_executor::ExecutableType::SUBSCRIPTION, 0);
strategy->set_executable_deadline(listener2->sub_->get_subscription_handle(), 1000, priority_executor::ExecutableType::SUBSCRIPTION, 0);
// you _must_ mark the last executable in the chain (used to keep track of different instances of the same chain)
strategy->set_last_in_chain(listener2->sub_->get_subscription_handle());
// add all the nodes to the executor
executor->add_node(talker);
executor->add_node(listener1);
executor->add_node(listener2);
// if the executor behaves unexpectedly, you can print the priority settings to make sure they are correct
std::cout << *strategy->get_priority_settings(talker->timer_->get_timer_handle()) << std::endl;
std::cout << *strategy->get_priority_settings(listener1->sub_->get_subscription_handle()) << std::endl;
std::cout << *strategy->get_priority_settings(listener2->sub_->get_subscription_handle()) << std::endl;
executor->spin();
rclcpp::shutdown();
// Get all collected metrics and write them to a file
collector.write_logs_to_file("uas", "existing_system_monitor");
return 0;
}

@ -1 +1 @@
Subproject commit f53b7047f1a936cc8f5a718df6134ca8af27fd73
Subproject commit d7ecdf0108d8decdda33b6dd7fb1bbc5f1d3ac43