Update ReadMe for submission

Niklas Halle 2025-08-08 09:56:55 +02:00
parent ed6209939c
commit 690265863f
4 changed files with 9278 additions and 169 deletions

README.md (169 lines removed)

@@ -1,169 +0,0 @@
# ROS 2 Dynamic Priority Executor with Performance Monitoring
This repository contains a ROS 2 executor implementation with priority-based and deadline-based scheduling capabilities, including comprehensive performance monitoring.
## Performance Monitoring Features
- High-resolution event tracking for callbacks
- Chain-aware monitoring for multi-callback sequences
- Deadline tracking and violation detection
- Thread-aware monitoring in multi-threaded executors
- Automatic JSON log generation
- Configurable buffer sizes and auto-dumping
## Quick Start
1. Clone the repository:
```bash
git clone --recurse-submodules https://github.com/user/ROS-Dynamic-Executor-Experiments.git
```
2. Build the project:
```bash
colcon build
```
3. Source the workspace:
```bash
source install/setup.bash
```
## Using the Performance Monitor
### Basic Setup
```cpp
#include "priority_executor/priority_executor.hpp"
// Create executor with monitoring enabled
auto executor = std::make_shared<priority_executor::TimedExecutor>(options, "my_executor");
// Configure monitoring options
executor->setMonitoringOptions(
    10000,               // Buffer size
    5000,                // Auto-dump threshold
    "performance_logs"   // Output directory
);
```
### Monitoring Configuration
- **Buffer Size**: Maximum number of events to hold in memory
- **Auto-dump Threshold**: Number of events that triggers automatic file dump
- **Output Directory**: Where performance logs are saved
### Event Types Tracked
- `CALLBACK_READY`: Callback is ready for execution
- `CALLBACK_START`: Callback execution started
- `CALLBACK_END`: Callback execution completed
- `DEADLINE_MISSED`: A deadline was missed
- `DEADLINE_MET`: A deadline was successfully met
- `CHAIN_START`: Start of a callback chain
- `CHAIN_END`: End of a callback chain
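For orientation, these types can be thought of as a simple enumeration; the sketch below is illustrative only (the enum name is hypothetical and not taken from the repository, only the value names come from the list above):
```cpp
// Illustrative sketch: the tracked event types as an enumeration.
enum class PerformanceEventType {
  CALLBACK_READY,   // callback is ready for execution
  CALLBACK_START,   // callback execution started
  CALLBACK_END,     // callback execution completed
  DEADLINE_MISSED,  // a deadline was missed
  DEADLINE_MET,     // a deadline was successfully met
  CHAIN_START,      // start of a callback chain
  CHAIN_END         // end of a callback chain
};
```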
### Performance Data
Each event captures:
- High-resolution timestamps
- Node and callback names
- Chain IDs and positions
- Deadlines and processing times
- Thread IDs (for multi-threaded executors)
- Additional context data
### Output Format
Performance logs are saved as JSON files with the following structure:
```json
[
  {
    "timestamp": 1234567890,
    "type": "callback_start",
    "node_name": "example_node",
    "callback_name": "timer_callback",
    "chain_id": 1,
    "is_first_in_chain": true,
    "deadline": 1000000,
    "processing_time": 500,
    "executor_id": 0,
    "additional_data": {
      "thread_id": 1,
      "cpu_affinity": 1
    }
  }
]
```
### Multi-threaded Monitoring
The monitoring system automatically handles multi-threaded executors:
- Tracks per-thread execution
- Records CPU affinity
- Thread-safe event recording
- Maintains event ordering
### Best Practices
1. Set appropriate buffer sizes based on your system's memory constraints
2. Enable auto-dumping for long-running systems
3. Use meaningful executor names for better log analysis
4. Monitor deadline compliance in real-time systems
5. Track callback chains for end-to-end latency analysis
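As a rough illustration of points 1–3, the snippet below combines the calls already shown in "Basic Setup"; the executor name, buffer values, and output directory are placeholders, not recommended defaults:
```cpp
#include <memory>
#include <rclcpp/rclcpp.hpp>
#include "priority_executor/priority_executor.hpp"

// Sketch only: placeholder values, reusing the API shown in "Basic Setup".
rclcpp::ExecutorOptions options;

// A descriptive executor name makes the resulting log files easier to attribute.
auto executor = std::make_shared<priority_executor::TimedExecutor>(
    options, "fusion_chain_executor");

// Size the buffer for the expected event rate and available memory,
// and keep auto-dumping enabled for long-running systems.
executor->setMonitoringOptions(
    50000,              // buffer size (events held in memory)
    25000,              // auto-dump threshold
    "performance_logs"  // output directory
);
```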
### Analyzing Results
1. Performance logs are saved in the configured output directory
2. Use the provided Jupyter notebook for analysis:
```bash
jupyter notebook analysis/analysis.ipynb
```
## Advanced Usage
### Manual Log Dumping
```cpp
auto& monitor = PerformanceMonitor::getInstance();
monitor.dumpToFile("custom_log_name.json");
```
### Temporary Monitoring Disable
```cpp
executor->enableMonitoring(false);
// ... execute some callbacks ...
executor->enableMonitoring(true);
```
### Custom Event Context
Additional context can be added to events through the `additional_data` field in JSON format.
## Performance Impact
The monitoring system is designed to be lightweight:
- Lock-free recording for most operations
- Efficient event buffering
- Configurable buffer sizes
- Optional auto-dumping
- Minimal overhead during normal operation
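For intuition only, a lock-free, fixed-size event buffer along the lines described above could be sketched as follows; this is a simplified illustration with hypothetical names, not the code used in this repository:
```cpp
#include <array>
#include <atomic>
#include <cstddef>

// Illustrative only: each recording thread claims a slot with a single atomic
// fetch_add, so no lock is taken on the hot path. Events beyond the capacity
// are dropped (a real implementation might trigger an auto-dump instead).
template <typename Event, std::size_t Capacity>
class LockFreeEventBuffer {
public:
  bool record(const Event &event) {
    const std::size_t index = next_.fetch_add(1, std::memory_order_relaxed);
    if (index >= Capacity) {
      return false;  // buffer full
    }
    events_[index] = event;
    return true;
  }

  std::size_t size() const {
    const std::size_t n = next_.load(std::memory_order_relaxed);
    return n < Capacity ? n : Capacity;
  }

private:
  std::array<Event, Capacity> events_{};
  std::atomic<std::size_t> next_{0};
};
```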
## Contributing
Contributions are welcome! Please follow these steps:
1. Fork the repository
2. Create a feature branch
3. Commit your changes
4. Create a pull request
## License
Apache License 2.0 - See LICENSE file for details
---
```bash
colcon build
# later also just:
colcon build --packages-select full_topology
source install/setup.bash
ros2 launch full_topology trace_full_topology.launch.py
ros2 trace-analysis convert ./analysis/tracing/full_topology_tracing
ros2 trace-analysis process ./analysis/tracing/full_topology_tracing
```

ReadMe.md (new file, 162 lines added)

@@ -0,0 +1,162 @@
# Full Topology Experimentation Framework
This repository contains the experimentation code used for my master's thesis on *semantic scheduling for multi-modal sensor data in ROS 2*.
It implements and evaluates multiple executor configurations (ROS 2 default, dynamic EDF) on a fixed test topology, with automated tracing and batch experiment execution.
> **Audience:** This version is intended for thesis reviewers.
> For the eventual open-source release, setup and dependency instructions will be expanded.
---
## Repository structure
Key files and directories:
```
src/full_topology/
├── CMakeLists.txt # CMake config, passes macro switches
├── package.xml # ROS 2 package manifest
├── launch/
│ └── trace_full_topology.launch.py # Starts topology with tracing
├── src/
│ ├── full_topology.cpp # Main implementation with macro-controlled behavior
│ ├── simple.cpp # Minimal setup to explore one-chain starvation
│ └── nodes.hpp # Node type definitions
src/cyclonedds/ # Submodule: Cyclone DDS
src/rcl/ # Submodule: ROS 2 rcl
src/rclcpp/ # Submodule: ROS 2 rclcpp
src/rmw_cyclonedds/ # Submodule: RMW layer
src/ros2_tracing/ # Submodule: ros2_tracing framework
src/ros_edf/ # Submodule: EDF executor implementation (Arafat et al.)
src/tracetools_analysis/ # Submodule: trace analysis tools
init.sh
build.sh
build_run_copy.sh
batch_run.sh
setup-venv.sh
```
---
## Platform & prerequisites
* **ROS 2 distribution:** Foxy (required)
* **Test environment:** Raspberry Pi 4 (4 GB) with Ubuntu 20.04 LTS, PREEMPT\_RT kernel, LTTng 2.13, 2 isolated CPU cores at fixed frequency (as in [ros-realtime/reference-system](https://github.com/ros-realtime/reference-system))
* **Python:** 3.8 (used via `setup-venv.sh`)
* **RMW:** `rmw_cyclonedds_cpp` (mandatory; it is the only implementation that includes our modifications)
* **Build concurrency:** limited to `-j1` to avoid overloading the Pi's non-isolated cores.
System dependencies were installed via `apt`.
---
## Macro options and their flow
Experiment configurations are set at **build time** via CMake options, which `build_run_copy.sh` sets based on its arguments:
| CMake Option | Compile Definition | Meaning |
| --------------------------- | --------------------------- | ------------------------------------------------------- |
| `ROS_DEFAULT_EXECUTOR` | `ROS_DEFAULT_EXECUTOR` | Use ROS 2 default executor |
| `EDF_PRIORITY_EXECUTOR` | `EDF_PRIORITY_EXECUTOR` | Use EDF executor (Arafat et al.) |
| `MULTI_THREADED` | `MULTI_THREADED` | Multi-threaded executor |
| `USE_TIMER_IN_FUSION_NODES` | `USE_TIMER_IN_FUSION_NODES` | Enable internal timers in fusion nodes |
| `BOOSTED=<ms>` | `BOOSTED=<ms>` | Reduce chain deadlines to this value; `1000` = disabled |
**Flow:**
`batch_run.sh` → calls `build_run_copy.sh` with args → sets CMake options → `CMakeLists.txt` adds compile definitions → `full_topology.cpp` selects behavior via `#ifdef`.
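As a sketch of the final step in that flow, the macro-controlled selection inside `full_topology.cpp` could look roughly like this (simplified; the EDF executor type and header names are placeholders borrowed from the executor's own examples, not necessarily the exact classes used here):
```cpp
#include <chrono>
#include <memory>
#include <rclcpp/rclcpp.hpp>
#if defined(EDF_PRIORITY_EXECUTOR)
#include "priority_executor/priority_executor.hpp"  // placeholder header name
#endif

int main(int argc, char **argv) {
  rclcpp::init(argc, argv);

#if defined(ROS_DEFAULT_EXECUTOR) && defined(MULTI_THREADED)
  auto executor = std::make_shared<rclcpp::executors::MultiThreadedExecutor>();
#elif defined(ROS_DEFAULT_EXECUTOR)
  auto executor = std::make_shared<rclcpp::executors::SingleThreadedExecutor>();
#elif defined(EDF_PRIORITY_EXECUTOR)
  // EDF executor from the ros_edf submodule (Arafat et al.); placeholder name.
  auto executor = std::make_shared<priority_executor::TimedExecutor>(
      rclcpp::ExecutorOptions(), "full_topology");
#endif

#ifdef BOOSTED
  // BOOSTED carries the boosted chain deadline in milliseconds (1000 = disabled);
  // it would be forwarded to the chain/deadline configuration (not shown).
  constexpr std::chrono::milliseconds chain_deadline{BOOSTED};
#endif

  // ... instantiate the nodes from nodes.hpp, register them with the executor ...
  executor->spin();
  rclcpp::shutdown();
  return 0;
}
```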
---
## Quick start
```bash
# 1. Source ROS 2 Foxy environment
source /opt/ros/foxy/setup.bash
# 2. Source init script (sets RMW, build flags, venv, etc.)
source init.sh
# 3. Build & run a single timed, single-threaded ROS 2 default executor run
./build_run_copy.sh ros single timed 1000
# or: run the full batch (takes multiple hours)
./batch_run.sh
```
---
## Launch file: `trace_full_topology.launch.py`
* Declares a `length` argument (seconds; default: 60.0).
* Starts an LTTng tracing session `trace_full_topology` in `analysis/tracing/`.
* Traces UST events from DDS, ROS 2 layers, RMW, rclcpp callbacks & executors.
* Launches `full_topology` node executable.
* Automatically shuts down after `length` seconds.
---
## Running experiments
### `build_run_copy.sh` arguments
```bash
./build_run_copy.sh <scheduler> <threading> <timer> [boost] [no_rebuild]
```
* **scheduler:** `ros` (ROS 2 default) or `edf` (EDF executor)
* **threading:** `single` or `multi`
* **timer:** `timed` or `direct` (thesis uses `timed`)
* **boost:** integer (ms) deadline; `1000` = disabled boost
* **no\_rebuild:** skip rebuilding (useful for repeated runs in same config)
Each run takes \~30–45 s (20 s experiment time + launch/stop/convert/copy).
---
### `batch_run.sh` defaults
Runs a full matrix:
* Timer modes: `direct`, `timed`
* Threading: `single`, `multi`
* Schedulers:
* ROS default (`ros`)
* EDF with boosts `{1000, 500, 100, 50, 10}`
* Iterations: `ITERATIONS=49` per configuration; together with the initial run that performs the rebuild, this gives 50 runs per configuration in total
---
## Trace files
Traces are renamed after each run completes (the boost part is included only when applicable):
```
<scheduler>_<threading>_<timer>_<runtime>_<boost>-<timestamp>
```
Example:
```
edf_multi_timed_20_boosted_500-20250301T101530
```
---
## Remote copy
After renaming, traces are optionally copied to a remote location via `scp`:
```bash
REMOTE_USER=user
REMOTE_HOST=host
REMOTE_PATH=/path/to/store/traces
```
> **Warning:** Update these in `build_run_copy.sh` before use.
> If unset or inaccessible, copying will fail (local trace is still kept).
---
## Disclaimer
Research code, provided as is. See thesis for methodology, hardware setup, and complete experiment design.
This repository reflects the full development history of the thesis work. Some commits represent intermediate or WIP states. For traceability, commit hashes referenced in the thesis remain valid.

batch_run.log (new file, 4541 lines added)

File diff suppressed because it is too large.

trace_ids.txt (new file, 4575 lines added)

File diff suppressed because it is too large.