feat(eval): export rollout video timing and ee trajectory
docs/superpowers/plans/2026-03-31-rollout-artifacts.md (new file)
@@ -0,0 +1,44 @@
# Rollout Artifacts Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** Extend rollout evaluation so one selected checkpoint can be run once with video capture, timing breakdown, and saved EE trajectory artifacts.
**Architecture:** Keep the implementation centered in `eval_vla.py` so existing training-time rollout validation remains compatible. Add config-gated artifact capture helpers, serialize outputs under the eval run directory, and add lightweight tests for helper behavior and summary wiring; default eval behavior must remain unchanged when artifact capture is off.
**Tech Stack:** Python, Hydra/OmegaConf, NumPy, OpenCV, JSON, PyTorch, and Python `unittest`/`unittest.mock` for testing.
---
### Task 1: Add artifact capture configuration and helper wiring
**Files:**
- Modify: `roboimi/demos/vla_scripts/eval_vla.py`
- Modify: `roboimi/vla/conf/eval/eval.yaml`
- Test: `tests/test_eval_vla_rollout_artifacts.py`
- [ ] **Step 1: Write failing tests for optional artifact config / summary wiring**
- [ ] **Step 2: Implement config-backed artifact flags and output paths with defaults that write nothing**
- [ ] **Step 3: Verify existing eval call sites still work with defaults**
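
The config-gated defaults above can be sketched roughly as follows. This is a hypothetical shape (the flag names, `ArtifactCfg` dataclass, and `resolve_artifact_paths` helper are illustrative, not the actual `eval_vla.py` API); the key property is that defaults produce no paths and no side effects:

```python
# Sketch of config-gated artifact capture (hypothetical names; the real
# keys would live in roboimi/vla/conf/eval/eval.yaml).
from dataclasses import dataclass
from pathlib import Path
from typing import Optional

@dataclass
class ArtifactCfg:
    enabled: bool = False          # default off: eval behavior unchanged
    record_video: bool = False
    camera: str = "top"            # which stream to encode, if enabled
    out_dir: Optional[str] = None  # defaults under the eval run directory

def resolve_artifact_paths(cfg: ArtifactCfg, run_dir: str) -> dict:
    """Return output paths, or an empty dict when capture is disabled."""
    if not cfg.enabled:
        return {}
    base = Path(cfg.out_dir or run_dir) / "eval_artifacts"
    base.mkdir(parents=True, exist_ok=True)
    return {
        "video": base / "video.mp4",
        "trajectory": base / "trajectory.npz",
        "timing": base / "timing.json",
    }
```

With all flags at their defaults the helper returns `{}`, which is what the Step 1 failing tests would assert first.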
### Task 2: Add timing breakdown, video recording, and trajectory export
**Files:**
- Modify: `roboimi/demos/vla_scripts/eval_vla.py`
- Test: `tests/test_eval_vla_rollout_artifacts.py`
- [ ] **Step 1: Write failing tests for timing aggregation, trajectory serialization, and summary schema**
- [ ] **Step 2: Implement per-step timing capture for `obs_read_ms`, `preprocess_ms`, `inference_ms`, `env_step_ms`, `loop_total_ms`**
- [ ] **Step 3: Implement MP4 recording from a chosen camera stream and canonical `trajectory.npz` export using `left_link7/right_link7` executed poses after `env.step`**
- [ ] **Step 4: Run focused tests and fix issues**
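
The per-step timing capture in Step 2 can be sketched with a small lap timer around each phase of the rollout loop. The `StepTimer` class and `summarize` helper below are illustrative (not the actual `eval_vla.py` structure); the phase names match the contract above:

```python
# Sketch of per-step timing capture for the rollout loop (hypothetical
# structure; the real loop in eval_vla.py will differ).
import time
from collections import defaultdict

class StepTimer:
    """Collect millisecond durations per named phase, one sample per step."""
    def __init__(self):
        self.samples = defaultdict(list)
        self._t0 = None

    def start(self):
        self._t0 = time.perf_counter()

    def lap(self, name):
        now = time.perf_counter()
        self.samples[name].append((now - self._t0) * 1000.0)
        self._t0 = now  # next phase starts where this one ended

def summarize(samples):
    """Aggregate count/mean/min/max per phase, in milliseconds."""
    out = {}
    for name, vals in samples.items():
        out[name] = {
            "count": len(vals),
            "mean_ms": sum(vals) / len(vals),
            "min_ms": min(vals),
            "max_ms": max(vals),
        }
    return out
```

In the loop this would look like `timer.start(); obs = read_obs(); timer.lap("obs_read_ms"); ...` through `env_step_ms`, with `loop_total_ms` recorded as the sum of the phases (or a separate whole-loop measurement), and `summarize(timer.samples)` feeding `timing.json`.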
### Task 3: Stop training safely and execute one real rollout
**Files:**
- Use: `roboimi/demos/vla_scripts/eval_vla.py`
- Output: `runs/.../eval_artifacts/...`
- [ ] **Step 1: Stop the active training process, wait for exit, and confirm the target checkpoint is readable**
- [ ] **Step 2: Select the latest completed checkpoint if an explicit one is not provided; fall back to prior completed / best checkpoint if needed**
- [ ] **Step 3: Run one headless rollout with artifact capture enabled**
- [ ] **Step 4: Verify the MP4 / timing summary / trajectory files exist and summarize findings**
@@ -0,0 +1,16 @@
# Rollout Artifacts Design
**Goal:** Add a one-off evaluation path that can record rollout video, export per-step timing breakdowns, and save executed end-effector trajectories for a selected checkpoint while preserving default eval behavior when artifact capture is disabled.
**Approach:** Extend `roboimi/demos/vla_scripts/eval_vla.py` with optional evaluation-time artifact capture that stays backward compatible when disabled. Reuse existing environment observation and camera streams, record one camera stream to MP4, collect per-step timing around observation read / preprocessing / model inference / env step / total loop, and save per-step raw predicted EE actions plus executed EE poses after stepping.
**Artifact contract:**
- `video.mp4`: optional MP4 encoded from a selected camera stream (`r_vis`, `top`, `front`, etc.), written only when recording is enabled.
- `trajectory.npz`: canonical trajectory export containing at minimum `step`, `reward`, `raw_action`, `executed_left_link7_pos`, `executed_left_link7_quat`, `executed_right_link7_pos`, `executed_right_link7_quat`, and optional duplicated tool-body poses if captured.
- `timing.json`: JSON-serializable per-episode timing summary with millisecond units for `obs_read_ms`, `preprocess_ms`, `inference_ms`, `env_step_ms`, `loop_total_ms`, plus aggregate mean/std/min/max and counts. Raw per-step timing arrays should also be persisted in the NPZ for later analysis.
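
The two serialized artifacts above can be written with small helpers along these lines. The field names follow the contract; the function signatures and collection buffers are illustrative assumptions, not the actual implementation:

```python
# Sketch of the artifact writers (NPZ field names follow the contract
# above; the helper signatures are hypothetical).
import json
import numpy as np

def save_trajectory(path, steps, rewards, raw_actions, left_pos, left_quat,
                    right_pos, right_quat):
    """Write the canonical trajectory.npz with per-step arrays."""
    np.savez(
        path,
        step=np.asarray(steps),
        reward=np.asarray(rewards),
        raw_action=np.asarray(raw_actions),
        executed_left_link7_pos=np.asarray(left_pos),
        executed_left_link7_quat=np.asarray(left_quat),
        executed_right_link7_pos=np.asarray(right_pos),
        executed_right_link7_quat=np.asarray(right_quat),
    )

def save_timing(path, summary):
    """Write the JSON-serializable per-episode timing summary."""
    with open(path, "w") as f:
        json.dump(summary, f, indent=2)
```

Keeping raw per-step timing arrays in the NPZ alongside the aggregates in `timing.json` means later analysis never needs to re-run the rollout.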
**Checkpoint selection:** Prefer an explicitly requested checkpoint path. If the caller asks for “latest” or omits a path in the execution helper, select the newest fully written checkpoint file by mtime/name and fail clearly if none exists.
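
The selection rule can be sketched as below. The `*.pt` glob pattern and helper name are assumptions to adjust to the run's actual checkpoint naming; the behavior (explicit path wins, else newest by mtime, else a clear failure) matches the rule above:

```python
# Sketch of "latest completed checkpoint" selection (hypothetical helper;
# the *.pt pattern is an assumption about the run's checkpoint names).
from pathlib import Path

def select_checkpoint(run_dir, explicit=None, pattern="*.pt"):
    """Prefer an explicit path; otherwise pick the newest match by mtime."""
    if explicit is not None:
        ckpt = Path(explicit)
        if not ckpt.is_file():
            raise FileNotFoundError(f"requested checkpoint missing: {ckpt}")
        return ckpt
    candidates = sorted(Path(run_dir).glob(pattern),
                        key=lambda p: p.stat().st_mtime)
    if not candidates:
        raise FileNotFoundError(f"no checkpoints under {run_dir}")
    return candidates[-1]
```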
**Stop-training / execution safety:** Before rollout, stop any active training process using the target run, wait for process exit, then verify the chosen checkpoint exists and is readable. If the most recent checkpoint is missing or mid-write, fall back to the previous completed checkpoint or `vla_model_best.pt` with the decision logged.
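
The fallback step can be sketched as a readability probe over an ordered candidate list. The zero-size check as a stand-in for "mid-write" detection is a simplifying assumption, and the helper name is hypothetical; `vla_model_best.pt` is the fallback named above:

```python
# Sketch of checkpoint fallback (hypothetical helper; treating a
# zero-byte file as "mid-write" is a simplifying assumption).
from pathlib import Path

def pick_readable_checkpoint(latest, fallbacks):
    """Return the first candidate that exists and is non-empty."""
    for cand in [latest, *fallbacks]:
        p = Path(cand)
        if p.is_file() and p.stat().st_size > 0:
            return p
    raise FileNotFoundError("no readable checkpoint among candidates")
```

In the execution helper, `fallbacks` would be the previous completed checkpoint followed by `vla_model_best.pt`, with the chosen path logged.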
**Backward compatibility:** With all new eval flags left at default values, `_run_eval` return shape must remain compatible with existing callers, training-time rollout validation should continue to work without passing new options, and no artifact files should be written.