Commit 4e7c63701a (parent: 4fe95bfd4c): "windows"
Date: Sun Apr 13 21:02:39 WEDT 2025
18 changed files with 115050 additions and 12974 deletions

README.md: 233 lines changed (@@ -1,124 +1,159 @@)

# ROS 2 Dynamic Priority Executor with Performance Monitoring

This repository contains the ROS-EDF executor experiments: a ROS 2 executor implementation with priority-based and deadline-based scheduling capabilities, including comprehensive performance monitoring.

## Performance Monitoring Features

- High-resolution event tracking for callbacks
- Chain-aware monitoring for multi-callback sequences
- Deadline tracking and violation detection
- Thread-aware monitoring in multi-threaded executors
- Automatic JSON log generation
- Configurable buffer sizes and auto-dumping

## Quick Start

1. Clone the repository with submodules enabled:
```bash
git clone --recurse-submodules https://github.com/user/ROS-Dynamic-Executor-Experiments.git
```

2. Build the project (make sure you have sourced your ROS 2 environment first):
```bash
colcon build
```

3. Source the workspace:
```bash
source install/setup.bash
```

4. Run the example:
```bash
ros2 run casestudy casestudy_example
```

## Using the Performance Monitor

### Basic Setup

```cpp
#include "priority_executor/priority_executor.hpp"

// Create executor with monitoring enabled
auto executor = std::make_shared<priority_executor::TimedExecutor>(options, "my_executor");

// Configure monitoring options
executor->setMonitoringOptions(
    10000,              // Buffer size
    5000,               // Auto-dump threshold
    "performance_logs"  // Output directory
);
```

### Monitoring Configuration

- **Buffer Size**: Maximum number of events to hold in memory
- **Auto-dump Threshold**: Number of events that triggers automatic file dump
- **Output Directory**: Where performance logs are saved

### Event Types Tracked

- `CALLBACK_READY`: Callback is ready for execution
- `CALLBACK_START`: Callback execution started
- `CALLBACK_END`: Callback execution completed
- `DEADLINE_MISSED`: A deadline was missed
- `DEADLINE_MET`: A deadline was successfully met
- `CHAIN_START`: Start of a callback chain
- `CHAIN_END`: End of a callback chain

### Performance Data

Each event captures:

- High-resolution timestamps
- Node and callback names
- Chain IDs and positions
- Deadlines and processing times
- Thread IDs (for multi-threaded executors)
- Additional context data

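Taken together, the tracked event types and the captured fields suggest a record along the lines of the sketch below. This is purely illustrative: the enum and struct names are assumptions made for explanation, not the executor's actual definitions.

```cpp
#include <cstdint>
#include <map>
#include <string>

// Illustrative event categories matching the tracked event types listed above.
enum class EventType {
  CALLBACK_READY,
  CALLBACK_START,
  CALLBACK_END,
  DEADLINE_MISSED,
  DEADLINE_MET,
  CHAIN_START,
  CHAIN_END
};

// Hypothetical record mirroring the per-event fields described in this section
// and the JSON schema shown below.
struct PerformanceEvent {
  uint64_t timestamp;                              // high-resolution timestamp
  EventType type;                                  // which event occurred
  std::string node_name;                           // owning node
  std::string callback_name;                       // callback identifier
  int chain_id;                                    // callback chain this event belongs to
  bool is_first_in_chain;                          // position within the chain
  uint64_t deadline;                               // associated deadline, if any
  uint64_t processing_time;                        // measured execution time
  int executor_id;                                 // executor that recorded the event
  std::map<std::string, int64_t> additional_data;  // e.g. thread_id, cpu_affinity
};
```
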
### Output Format

Performance logs are saved as JSON files with the following structure:

```json
[
  {
    "timestamp": 1234567890,
    "type": "callback_start",
    "node_name": "example_node",
    "callback_name": "timer_callback",
    "chain_id": 1,
    "is_first_in_chain": true,
    "deadline": 1000000,
    "processing_time": 500,
    "executor_id": 0,
    "additional_data": {
      "thread_id": 1,
      "cpu_affinity": 1
    }
  }
]
```

### Multi-threaded Monitoring

The monitoring system automatically handles multi-threaded executors:

- Tracks per-thread execution
- Records CPU affinity
- Thread-safe event recording
- Maintains event ordering

### Best Practices

1. Set appropriate buffer sizes based on your system's memory constraints
2. Enable auto-dumping for long-running systems
3. Use meaningful executor names for better log analysis
4. Monitor deadline compliance in real-time systems
5. Track callback chains for end-to-end latency analysis

### Analyzing Results

1. Performance logs are saved in the configured output directory
2. Use the provided Jupyter notebook for analysis:
```bash
jupyter notebook analysis/analysis.ipynb
```

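The logs can also be consumed directly from C++ if you prefer not to go through the notebook. Below is a minimal sketch, assuming the nlohmann/json library is available and using an illustrative log file name; it averages `processing_time` per callback based on the schema shown above, and the choice of `callback_end` events is an assumption about the log contents.

```cpp
#include <fstream>
#include <iostream>
#include <map>
#include <string>
#include <utility>

#include <nlohmann/json.hpp>

int main() {
  // Illustrative path; point this at whatever file the executor wrote to the
  // configured output directory.
  std::ifstream in("performance_logs/my_executor.json");
  const nlohmann::json events = nlohmann::json::parse(in);

  // Sum and count processing_time per callback name.
  std::map<std::string, std::pair<long long, long long>> totals;  // name -> {sum, count}
  for (const auto& e : events) {
    if (e["type"] == "callback_end") {
      auto& t = totals[e["callback_name"].get<std::string>()];
      t.first += e["processing_time"].get<long long>();
      t.second += 1;
    }
  }
  for (const auto& [name, t] : totals) {
    std::cout << name << ": " << (t.first / t.second) << " average processing_time\n";
  }
  return 0;
}
```
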
## Advanced Usage

### Manual Log Dumping

```cpp
auto& monitor = PerformanceMonitor::getInstance();
monitor.dumpToFile("custom_log_name.json");
```

### Temporary Monitoring Disable

```cpp
executor->enableMonitoring(false);
// ... execute some callbacks ...
executor->enableMonitoring(true);
```

### Custom Event Context

Additional context can be added to events through the `additional_data` field in JSON format.

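The README does not spell out the call used to attach that context, so the snippet below only illustrates the kind of payload `additional_data` carries: a hypothetical helper that packs a few values into the JSON shape seen in the log excerpt above.

```cpp
#include <sstream>
#include <string>

// Hypothetical helper (not part of the executor's API): format context values
// the way they appear under "additional_data" in the example log.
inline std::string makeAdditionalData(int thread_id, int cpu_affinity) {
  std::ostringstream out;
  out << "{\"thread_id\": " << thread_id
      << ", \"cpu_affinity\": " << cpu_affinity << "}";
  return out.str();
}
```
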
## Performance Impact

The monitoring system is designed to be lightweight:

- Lock-free recording for most operations
- Efficient event buffering
- Configurable buffer sizes
- Optional auto-dumping
- Minimal overhead during normal operation

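As a rough illustration of how recording can stay lock-free with a fixed-size buffer, consider the sketch below. It shows one way such a buffer can be structured under these constraints, not the executor's actual implementation; the `Event` type is a stand-in.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <string>
#include <utility>
#include <vector>

// Stand-in event record; see "Performance Data" above.
struct Event {
  uint64_t timestamp = 0;
  std::string callback_name;
};

// One way a lock-free bounded event buffer can be structured: all slots are
// pre-allocated and a writer only performs an atomic fetch_add to claim one,
// so recording never takes a lock.
class EventBuffer {
public:
  explicit EventBuffer(std::size_t capacity) : slots_(capacity), next_(0) {}

  // Returns false and drops the event when the buffer is full; a real system
  // might trigger an auto-dump at that point instead.
  bool record(Event ev) {
    const std::size_t idx = next_.fetch_add(1, std::memory_order_relaxed);
    if (idx >= slots_.size()) {
      return false;
    }
    slots_[idx] = std::move(ev);
    return true;
  }

  std::size_t size() const {
    const std::size_t n = next_.load(std::memory_order_relaxed);
    return n < slots_.size() ? n : slots_.size();
  }

private:
  std::vector<Event> slots_;
  std::atomic<std::size_t> next_;
};
```
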
---

## Dynamically Adjusting Scheduler Approaches

The PriorityMemoryStrategy supports different scheduling policies for ROS callbacks. To extend it with dynamically adjusting scheduling that reacts to semantic events, there are several viable approaches:

### Approach A: Custom PriorityExecutableComparator

Replace the existing comparator with a custom implementation (a sketch follows below):

**Pros:**
- Direct control over scheduling logic
- Clean separation of concerns
- Can implement complex scheduling policies
- Doesn't require modifying deadline values

**Cons:**
- Requires understanding and maintaining the comparison logic
- May need to add new fields to PriorityExecutable to track dynamic priorities
- Could become complex if multiple factors affect priority

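A minimal sketch of what Approach A could look like. The `Executable` struct is only a stand-in for the real PriorityExecutable, and the ordering convention (returning true when the first argument should run later) is an assumption; the field names echo the snippets used for Approaches C and D below.

```cpp
// Stand-in for PriorityExecutable; the real class lives in the
// priority_executor package and has more fields than shown here.
struct Executable {
  int priority = 0;          // static base priority
  int dynamic_priority = 0;  // adjusted at runtime by semantic events
};

// Custom comparator (Approach A): rank by a runtime-adjusted priority first,
// falling back to the static priority on ties.
struct DynamicPriorityComparator {
  bool operator()(const Executable* p1, const Executable* p2) const {
    const int e1 = p1->priority + p1->dynamic_priority;
    const int e2 = p2->priority + p2->dynamic_priority;
    if (e1 != e2) {
      return e1 < e2;  // assumed convention: "true" means p1 runs after p2
    }
    return p1->priority < p2->priority;
  }
};
```
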
### Approach B: Dynamic Deadline Adjustment

Keep the EDF comparator but adjust deadlines based on priority logic (a sketch follows below):

**Pros:**
- Works within the existing EDF framework
- Conceptually simple: just manipulate deadline values
- Doesn't require changing the core sorting mechanism
- Easier to debug (you can log deadline changes)

**Cons:**
- Potentially confusing semantics (using deadlines to represent non-deadline priorities)
- May interfere with actual deadline-based requirements
- Could lead to instability if not carefully managed

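A minimal sketch of Approach B, assuming each executable carries a relative deadline derived from its period (the `Executable` stand-in and the nanosecond units are assumptions): the EDF comparator is left untouched and only the stored deadline is scaled when a semantic event arrives.

```cpp
#include <cstdint>

// Stand-in for the deadline-related part of PriorityExecutable.
struct Executable {
  uint64_t period_ns = 0;    // nominal period the deadline is derived from
  uint64_t deadline_ns = 0;  // value the unchanged EDF comparator sorts on
};

// Shrinking the relative deadline makes the callback look more urgent to EDF;
// a factor of 1.0 restores the nominal deadline once the event is handled.
inline void applySemanticBoost(Executable& e, double factor) {
  e.deadline_ns = static_cast<uint64_t>(static_cast<double>(e.period_ns) * factor);
}
```
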
### Approach C: Event-Driven Priority Field

Add a dynamic boost factor field to PriorityExecutable:

```cpp
int dynamic_priority_boost = 0;
```

Then modify the comparator to consider this value:

```cpp
if (p1->sched_type == DEADLINE) {
    // Compare deadlines as usual, using dynamic_priority_boost as a tiebreaker
    if (p1_deadline == p2_deadline) {
        return p1->dynamic_priority_boost > p2->dynamic_priority_boost;
    }
    return p1_deadline < p2_deadline;
}
```

**Pros:**
- Clearer semantics than manipulating deadlines
- Keeps deadline scheduling intact for real-time guarantees
- Easy to adjust at runtime

**Cons:**
- Requires modifying both PriorityExecutable and the comparison logic

### Approach D: Priority Multiplier System

Implement a system where chains can have priority multipliers applied:

```cpp
float priority_multiplier = 1.0f;
```

And then in the comparator:

```cpp
// For priority-based scheduling (the float product is truncated back to int)
int effective_p1 = static_cast<int>(p1->priority * p1->priority_multiplier);
int effective_p2 = static_cast<int>(p2->priority * p2->priority_multiplier);
return effective_p1 < effective_p2;
```

**Pros:**
- Scales existing priorities rather than replacing them
- Preserves relative importance within chains
- Intuitive model for temporary priority boosts

**Cons:**
- May need to handle overflow/boundary cases (see the sketch below)
- Requires careful tuning of multiplier ranges

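One way to contain the overflow and tuning concerns above is to clamp the multiplier and widen the intermediate type before converting back to an int priority; a small sketch, where the range limits are assumptions:

```cpp
#include <algorithm>
#include <limits>

// Compute an effective priority while guarding against overflow and
// runaway multipliers. The [0.1, 10.0] range is an assumed sane default.
inline int effectivePriority(int priority, float multiplier) {
  const double clamped = std::clamp(static_cast<double>(multiplier), 0.1, 10.0);
  const double scaled = static_cast<double>(priority) * clamped;
  const double lo = std::numeric_limits<int>::min();
  const double hi = std::numeric_limits<int>::max();
  return static_cast<int>(std::clamp(scaled, lo, hi));
}
```
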
## Recommendation

Approach C (Event-Driven Priority Field) offers the best balance of:

1. Clean semantics
2. Minimal interference with existing scheduling logic
3. Clear separation between baseline priorities and dynamic adjustments
4. Straightforward implementation

This approach maintains real-time guarantees while enabling dynamic behaviors based on semantic events.

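To make the recommendation concrete, the sketch below shows one way a semantic event could feed the `dynamic_priority_boost` field from Approach C: a plain ROS 2 subscription reacts to the event and raises the boost for the affected chain. The topic name and the `setChainBoost` hook are illustrative assumptions, not existing APIs in this repository.

```cpp
#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/empty.hpp"

// Hypothetical node that reacts to a semantic event by boosting a chain's
// dynamic priority; the actual hook into PriorityMemoryStrategy is left as a
// comment because its interface is not defined in this README.
class BoostOnEvent : public rclcpp::Node {
public:
  BoostOnEvent() : rclcpp::Node("boost_on_event") {
    sub_ = create_subscription<std_msgs::msg::Empty>(
        "/collision_detected", 10,
        [this](std_msgs::msg::Empty::SharedPtr) {
          RCLCPP_INFO(get_logger(), "semantic event received, boosting chain");
          // e.g. strategy_->setChainBoost(/*chain_id=*/1, /*boost=*/100);
          // and reset the boost once the event has been handled.
        });
  }

private:
  rclcpp::Subscription<std_msgs::msg::Empty>::SharedPtr sub_;
};
```
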
## Contributing

Contributions are welcome! Please follow these steps:

1. Fork the repository
2. Create a feature branch
3. Commit your changes
4. Create a pull request

## License

Apache License 2.0 - See LICENSE file for details
|
Loading…
Add table
Add a link
Reference in a new issue