# ROS 2 Dynamic Priority Executor with Performance Monitoring
This repository contains a ROS 2 executor implementation with priority-based and deadline-based scheduling capabilities, including comprehensive performance monitoring.
## Performance Monitoring Features
- High-resolution event tracking for callbacks
- Chain-aware monitoring for multi-callback sequences
- Deadline tracking and violation detection
- Thread-aware monitoring in multi-threaded executors
- Automatic JSON log generation
- Configurable buffer sizes and auto-dumping
## Quick Start
1. Clone the repository:
```bash
git clone --recurse-submodules https://github.com/user/ROS-Dynamic-Executor-Experiments.git
```
2. Build the project:
```bash
colcon build
```
3. Source the workspace:
```bash
source install/setup.bash
```
## Using the Performance Monitor
### Basic Setup
```cpp
#include "priority_executor/priority_executor.hpp"

// Create executor with monitoring enabled
auto executor = std::make_shared<priority_executor::TimedExecutor>(options, "my_executor");

// Configure monitoring options
executor->setMonitoringOptions(
    10000,              // Buffer size
    5000,               // Auto-dump threshold
    "performance_logs"  // Output directory
);
```
### Monitoring Configuration
- **Buffer Size**: Maximum number of events to hold in memory
- **Auto-dump Threshold**: Number of events that triggers automatic file dump
- **Output Directory**: Where performance logs are saved
### Event Types Tracked
- `CALLBACK_READY`: Callback is ready for execution
- `CALLBACK_START`: Callback execution started
- `CALLBACK_END`: Callback execution completed
- `DEADLINE_MISSED`: A deadline was missed
- `DEADLINE_MET`: A deadline was successfully met
- `CHAIN_START`: Start of a callback chain
- `CHAIN_END`: End of a callback chain
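The event kinds above can be mirrored as a plain enum with a string mapping matching the lowercase `type` field in the JSON logs. This is an illustrative sketch only; the actual enum name, values, and helper live in the executor's monitoring headers and may differ:

```cpp
#include <cassert>
#include <string>

// Hypothetical mirror of the tracked event kinds (names assumed from the list above).
enum class EventType {
    CALLBACK_READY,
    CALLBACK_START,
    CALLBACK_END,
    DEADLINE_MISSED,
    DEADLINE_MET,
    CHAIN_START,
    CHAIN_END
};

// Map an event kind to the lowercase string form used in the JSON output.
std::string to_string(EventType t) {
    switch (t) {
        case EventType::CALLBACK_READY:  return "callback_ready";
        case EventType::CALLBACK_START:  return "callback_start";
        case EventType::CALLBACK_END:    return "callback_end";
        case EventType::DEADLINE_MISSED: return "deadline_missed";
        case EventType::DEADLINE_MET:    return "deadline_met";
        case EventType::CHAIN_START:     return "chain_start";
        case EventType::CHAIN_END:       return "chain_end";
    }
    return "unknown";
}
```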
### Performance Data
Each event captures:
- High-resolution timestamps
- Node and callback names
- Chain IDs and positions
- Deadlines and processing times
- Thread IDs (for multi-threaded executors)
- Additional context data
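Taken together, each event record roughly corresponds to a struct like the following. This is a sketch inferred from the captured fields and the JSON layout in the Output Format section; the real record type in the executor may be named and laid out differently:

```cpp
#include <cstdint>
#include <map>
#include <string>

// Illustrative event record (field names assumed from the JSON output format).
struct PerformanceEvent {
    uint64_t timestamp;        // high-resolution timestamp
    std::string type;          // event kind, e.g. "callback_start"
    std::string node_name;
    std::string callback_name;
    int chain_id;              // which callback chain this event belongs to
    bool is_first_in_chain;
    int64_t deadline;          // deadline associated with the callback, if any
    int64_t processing_time;   // time spent executing the callback
    int executor_id;
    std::map<std::string, std::string> additional_data;  // e.g. thread_id, cpu_affinity
};
```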
### Output Format
Performance logs are saved as JSON files with the following structure:
```json
[
  {
    "timestamp": 1234567890,
    "type": "callback_start",
    "node_name": "example_node",
    "callback_name": "timer_callback",
    "chain_id": 1,
    "is_first_in_chain": true,
    "deadline": 1000000,
    "processing_time": 500,
    "executor_id": 0,
    "additional_data": {
      "thread_id": 1,
      "cpu_affinity": 1
    }
  }
]
```
### Multi-threaded Monitoring
The monitoring system automatically handles multi-threaded executors:
- Tracks per-thread execution
- Records CPU affinity
- Thread-safe event recording
- Maintains event ordering
### Best Practices
1. Set appropriate buffer sizes based on your system's memory constraints
2. Enable auto-dumping for long-running systems
3. Use meaningful executor names for better log analysis
4. Monitor deadline compliance in real-time systems
5. Track callback chains for end-to-end latency analysis
### Analyzing Results
1. Performance logs are saved in the configured output directory
2. Use the provided Jupyter notebook for analysis:
```bash
jupyter notebook analysis/analysis.ipynb
```
## Advanced Usage
### Manual Log Dumping
```cpp
auto& monitor = PerformanceMonitor::getInstance();
monitor.dumpToFile("custom_log_name.json");
```
### Temporary Monitoring Disable
```cpp
executor->enableMonitoring(false);
// ... execute some callbacks ...
executor->enableMonitoring(true);
```
### Custom Event Context
Additional context can be attached to individual events via the `additional_data` field, which appears as a nested JSON object in the output logs.
## Performance Impact
The monitoring system is designed to be lightweight:
- Lock-free recording for most operations
- Efficient event buffering
- Configurable buffer sizes
- Optional auto-dumping
- Minimal overhead during normal operation
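The lock-free recording approach can be sketched as a fixed-size ring buffer whose write slot is claimed with a single atomic increment, so recording an event never blocks on a mutex. This is a simplified illustration of the technique, not the executor's actual implementation:

```cpp
#include <array>
#include <atomic>
#include <cstddef>

// Minimal lock-free event buffer sketch: each writer claims a slot with one
// atomic fetch_add, then copies its event into that slot. Once the buffer
// wraps, the oldest entries are overwritten (a real system would dump to
// disk before reaching that point).
template <typename Event, std::size_t N>
class EventRingBuffer {
public:
    void record(const Event& e) {
        // Claim a unique slot index without taking a lock.
        const std::size_t slot = head_.fetch_add(1, std::memory_order_relaxed) % N;
        events_[slot] = e;
    }

    // Number of valid entries currently in the buffer (caps at capacity N).
    std::size_t count() const {
        const std::size_t h = head_.load(std::memory_order_relaxed);
        return h < N ? h : N;
    }

private:
    std::array<Event, N> events_{};
    std::atomic<std::size_t> head_{0};
};
```

Note that in this simplified form the slot claim and the event copy are separate steps, so a concurrent reader could observe a partially written slot; production designs typically add per-slot sequence numbers to detect that.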
## Contributing
Contributions are welcome! Please follow these steps:
1. Fork the repository
2. Create a feature branch
3. Commit your changes
4. Create a pull request
## License
Apache License 2.0 - See LICENSE file for details