# ROS 2 Dynamic Priority Executor with Performance Monitoring
This repository contains a ROS 2 executor implementation with priority-based and deadline-based scheduling capabilities, including comprehensive performance monitoring.
## Performance Monitoring Features
- High-resolution event tracking for callbacks
- Chain-aware monitoring for multi-callback sequences
- Deadline tracking and violation detection
- Thread-aware monitoring in multi-threaded executors
- Automatic JSON log generation
- Configurable buffer sizes and auto-dumping
## Quick Start

1. Clone the repository:

   ```bash
   git clone --recurse-submodules https://github.com/user/ROS-Dynamic-Executor-Experiments.git
   ```

2. Build the project:

   ```bash
   colcon build
   ```

3. Source the workspace:

   ```bash
   source install/setup.bash
   ```
## Using the Performance Monitor

### Basic Setup

```cpp
#include "priority_executor/priority_executor.hpp"

// Create an executor with monitoring enabled
auto executor = std::make_shared<priority_executor::TimedExecutor>(options, "my_executor");

// Configure monitoring options
executor->setMonitoringOptions(
    10000,              // Buffer size
    5000,               // Auto-dump threshold
    "performance_logs"  // Output directory
);
```
### Monitoring Configuration

- **Buffer Size**: Maximum number of events held in memory
- **Auto-dump Threshold**: Number of buffered events that triggers an automatic dump to file
- **Output Directory**: Where performance logs are written
### Event Types Tracked

- `CALLBACK_READY`: Callback is ready for execution
- `CALLBACK_START`: Callback execution started
- `CALLBACK_END`: Callback execution completed
- `DEADLINE_MISSED`: A deadline was missed
- `DEADLINE_MET`: A deadline was successfully met
- `CHAIN_START`: Start of a callback chain
- `CHAIN_END`: End of a callback chain
## Performance Data
Each event captures:
- High-resolution timestamps
- Node and callback names
- Chain IDs and positions
- Deadlines and processing times
- Thread IDs (for multi-threaded executors)
- Additional context data
## Output Format

Performance logs are saved as JSON files with the following structure:

```json
[
  {
    "timestamp": 1234567890,
    "type": "callback_start",
    "node_name": "example_node",
    "callback_name": "timer_callback",
    "chain_id": 1,
    "is_first_in_chain": true,
    "deadline": 1000000,
    "processing_time": 500,
    "executor_id": 0,
    "additional_data": {
      "thread_id": 1,
      "cpu_affinity": 1
    }
  }
]
```
## Multi-threaded Monitoring
The monitoring system automatically handles multi-threaded executors:
- Tracks per-thread execution
- Records CPU affinity
- Thread-safe event recording
- Maintains event ordering
## Best Practices
- Set appropriate buffer sizes based on your system's memory constraints
- Enable auto-dumping for long-running systems
- Use meaningful executor names for better log analysis
- Monitor deadline compliance in real-time systems
- Track callback chains for end-to-end latency analysis
## Analyzing Results

- Performance logs are saved in the configured output directory
- Use the provided Jupyter notebook for analysis:

  ```bash
  jupyter notebook analysis/analysis.ipynb
  ```
## Advanced Usage

### Manual Log Dumping

```cpp
auto& monitor = PerformanceMonitor::getInstance();
monitor.dumpToFile("custom_log_name.json");
```

### Temporarily Disabling Monitoring

```cpp
executor->enableMonitoring(false);
// ... execute some callbacks ...
executor->enableMonitoring(true);
```
### Custom Event Context

Additional context can be attached to events through the `additional_data` field, which is stored as nested JSON.
## Performance Impact
The monitoring system is designed to be lightweight:
- Lock-free recording for most operations
- Efficient event buffering
- Configurable buffer sizes
- Optional auto-dumping
- Minimal overhead during normal operation
## Contributing

Contributions are welcome! Please follow these steps:

1. Fork the repository
2. Create a feature branch
3. Commit your changes
4. Create a pull request
## License

Apache License 2.0 - See LICENSE file for details
## Build and Tracing Notes

```bash
colcon build
```

Later, it is enough to rebuild only the target package and run the tracing workflow:

```bash
colcon build --packages-select full_topology
source install/setup.bash
ros2 launch full_topology trace_full_topology.launch.py
ros2 trace-analysis convert ./analysis/tracing/full_topology_tracing
ros2 trace-analysis process ./analysis/tracing/full_topology_tracing
```