feat(eval): export rollout video timing and ee trajectory
44
docs/superpowers/plans/2026-03-31-rollout-artifacts.md
Normal file
@@ -0,0 +1,44 @@
# Rollout Artifacts Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Extend rollout evaluation so one selected checkpoint can be run once with video capture, a timing breakdown, and saved EE trajectory artifacts.

**Architecture:** Keep the implementation centered in `eval_vla.py` so existing training-time rollout validation remains compatible. Add config-gated artifact capture helpers, serialize outputs under the eval run directory, and add lightweight tests for helper behavior and summary wiring; default eval behavior must remain unchanged when artifact capture is off.

**Tech Stack:** Python, Hydra/OmegaConf, NumPy, OpenCV, JSON, PyTorch, unittest/mock.

---

### Task 1: Add artifact capture configuration and helper wiring

**Files:**

- Modify: `roboimi/demos/vla_scripts/eval_vla.py`
- Modify: `roboimi/vla/conf/eval/eval.yaml`
- Test: `tests/test_eval_vla_rollout_artifacts.py`

- [ ] **Step 1: Write failing tests for optional artifact config / summary wiring**
- [ ] **Step 2: Implement config-backed artifact flags and output paths with defaults that write nothing** (see the sketch after this list)
- [ ] **Step 3: Verify existing eval call sites still work with defaults**
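A minimal sketch (pytest-style for brevity, where the repo's tests use unittest) of the default-off behavior Step 2 targets, assuming the `_resolve_artifact_paths` helper this plan introduces:

```python
from omegaconf import OmegaConf

from roboimi.demos.vla_scripts import eval_vla


def test_defaults_write_nothing():
    # Only ckpt_path is required; every artifact flag keeps its default.
    eval_cfg = OmegaConf.create({'ckpt_path': 'checkpoints/vla_model_best.pt'})
    paths = eval_vla._resolve_artifact_paths(eval_cfg)
    # No output directory is created and every artifact path stays None.
    assert paths['output_dir'] is None
    assert paths['video_mp4'] is None
    assert paths['trajectory_npz'] is None
    assert paths['timing_json'] is None
```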
### Task 2: Add timing breakdown, video recording, and trajectory export

**Files:**

- Modify: `roboimi/demos/vla_scripts/eval_vla.py`
- Test: `tests/test_eval_vla_rollout_artifacts.py`

- [ ] **Step 1: Write failing tests for timing aggregation, trajectory serialization, and summary schema**
- [ ] **Step 2: Implement per-step timing capture for `obs_read_ms`, `preprocess_ms`, `inference_ms`, `env_step_ms`, `loop_total_ms`** (see the sketch after this list)
- [ ] **Step 3: Implement MP4 recording from a chosen camera stream and canonical `trajectory.npz` export using `left_link7/right_link7` executed poses after `env.step`**
- [ ] **Step 4: Run focused tests and fix issues**
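A minimal sketch of the per-step timing capture named in Step 2; the `read_obs`/`preprocess`/`infer`/`env_step` callables are hypothetical stand-ins for the real loop stages:

```python
import time


def timed_step(read_obs, preprocess, infer, env_step):
    """One control-loop iteration returning a per-step timing dict in ms."""
    t0 = time.perf_counter()
    obs = read_obs()
    t1 = time.perf_counter()
    batch = preprocess(obs)
    t2 = time.perf_counter()
    action = infer(batch)
    t3 = time.perf_counter()
    env_step(action)
    t4 = time.perf_counter()
    return {
        'obs_read_ms': (t1 - t0) * 1000.0,
        'preprocess_ms': (t2 - t1) * 1000.0,
        'inference_ms': (t3 - t2) * 1000.0,
        'env_step_ms': (t4 - t3) * 1000.0,
        'loop_total_ms': (t4 - t0) * 1000.0,
    }
```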
### Task 3: Stop training safely and execute one real rollout

**Files:**

- Use: `roboimi/demos/vla_scripts/eval_vla.py`
- Output: `runs/.../eval_artifacts/...`

- [ ] **Step 1: Stop the active training process, wait for exit, and confirm the target checkpoint is readable**
- [ ] **Step 2: Select the latest completed checkpoint if an explicit one is not provided; fall back to the prior completed / best checkpoint if needed**
- [ ] **Step 3: Run one headless rollout with artifact capture enabled**
- [ ] **Step 4: Verify that the MP4 / timing summary / trajectory files exist and summarize findings**
@@ -0,0 +1,16 @@
# Rollout Artifacts Design

**Goal:** Add a one-off evaluation path that can record rollout video, export per-step timing breakdowns, and save executed end-effector trajectories for a selected checkpoint, while preserving default eval behavior when artifact capture is disabled.

**Approach:** Extend `roboimi/demos/vla_scripts/eval_vla.py` with optional evaluation-time artifact capture that stays backward compatible when disabled. Reuse the existing environment observation and camera streams, record one camera stream to MP4, collect per-step timing around observation read / preprocessing / model inference / env step / total loop, and save per-step raw predicted EE actions plus executed EE poses after stepping.

**Artifact contract:**

- `video.mp4`: optional MP4 encoded from a selected camera stream (`r_vis`, `top`, `front`, etc.), written only when recording is enabled.
- `trajectory.npz`: canonical trajectory export containing at minimum `step`, `reward`, `raw_action`, `executed_left_link7_pos`, `executed_left_link7_quat`, `executed_right_link7_pos`, `executed_right_link7_quat`, and optional duplicated tool-body poses if captured.
- `timing.json`: JSON-serializable per-episode timing summary in millisecond units for `obs_read_ms`, `preprocess_ms`, `inference_ms`, `env_step_ms`, `loop_total_ms`, plus aggregate mean/std/min/max and counts. Raw per-step timing arrays should also be persisted in the NPZ for later analysis.
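A consumer-side sketch of this contract; the directory path is hypothetical, and the timing keys follow the `_summarize_timing_breakdown` shape used later in this commit:

```python
import json

import numpy as np

# Hypothetical artifact directory produced by one rollout.
run_dir = 'runs/example/eval_artifacts'

traj = np.load(f'{run_dir}/trajectory.npz')
print(traj['step'].shape, traj['raw_action'].shape)
print(traj['executed_left_link7_pos'][:3])  # (N, 3) executed positions

with open(f'{run_dir}/timing.json', encoding='utf-8') as f:
    timing = json.load(f)
print(timing['all_steps_ms']['inference'])  # mean/std/min/max in milliseconds
```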
**Checkpoint selection:** Prefer an explicitly requested checkpoint path. If the caller asks for “latest” or omits a path in the execution helper, select the newest fully written checkpoint file by mtime/name and fail clearly if none exists.
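A sketch of that selection rule, assuming checkpoints are `*.pt` files in one directory (helper name and glob are illustrative):

```python
from pathlib import Path


def select_latest_checkpoint(ckpt_dir: str) -> Path:
    # Newest fully written checkpoint by modification time, name as tiebreak.
    candidates = sorted(
        Path(ckpt_dir).glob('*.pt'),
        key=lambda p: (p.stat().st_mtime, p.name),
    )
    if not candidates:
        raise FileNotFoundError(f'no checkpoint found under {ckpt_dir}')
    return candidates[-1]
```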
**Stop-training / execution safety:** Before rollout, stop any active training process using the target run, wait for process exit, then verify that the chosen checkpoint exists and is readable. If the most recent checkpoint is missing or mid-write, fall back to the previous completed checkpoint or `vla_model_best.pt`, logging the decision.

**Backward compatibility:** With all new eval flags left at default values, the `_run_eval` return shape must remain compatible with existing callers, training-time rollout validation must continue to work without passing new options, and no artifact files may be written.
roboimi/demos/vla_scripts/eval_vla.py
@@ -19,7 +19,7 @@ import torch
import numpy as np
import hydra
from pathlib import Path
from typing import Dict
from typing import Any, Dict, Optional
from tqdm import tqdm
from omegaconf import DictConfig, OmegaConf
from hydra.utils import instantiate
@@ -27,6 +27,7 @@ from einops import rearrange

from roboimi.envs.double_pos_ctrl_env import make_sim_env
from roboimi.utils.act_ex_utils import sample_transfer_pose
from roboimi.vla.eval_utils import execute_policy_action

sys.path.append(os.getcwd())
@@ -121,6 +122,317 @@ def prepare_observation(obs: Dict, camera_names: list) -> Dict:
    return {'qpos': qpos, 'images': images}


def _to_numpy_action(action: Any) -> np.ndarray:
    if isinstance(action, torch.Tensor):
        return action.detach().cpu().numpy().astype(np.float32, copy=True)
    return np.asarray(action, dtype=np.float32).copy()


def _mean_or_zero(values: list[float]) -> float:
    return float(np.mean(values)) if values else 0.0


def _stats_or_zero(values: list[float]) -> dict[str, float]:
    if not values:
        return {
            'mean': 0.0,
            'std': 0.0,
            'min': 0.0,
            'max': 0.0,
        }
    array = np.asarray(values, dtype=np.float64)
    return {
        'mean': float(array.mean()),
        'std': float(array.std()),
        'min': float(array.min()),
        'max': float(array.max()),
    }
def _summarize_timing_breakdown(
    all_timings: dict[str, list[float]],
    model_forward_flags: list[bool],
) -> dict[str, Any]:
    model_forward_flags = [bool(flag) for flag in model_forward_flags]
    return {
        'count': int(len(model_forward_flags)),
        'model_forward_count': int(sum(model_forward_flags)),
        'all_steps_ms': {
            stage: _stats_or_zero(values)
            for stage, values in all_timings.items()
        },
        'model_forward_steps_ms': {
            stage: _stats_or_zero(
                [value for value, should_keep in zip(values, model_forward_flags) if should_keep]
            )
            for stage, values in all_timings.items()
        },
    }
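# A worked example of the summary shape (illustrative, not part of this diff):
# two steps where only the first triggered a model forward pass:
#
#   _summarize_timing_breakdown({'inference': [12.0, 0.1]}, [True, False])
#   -> {
#          'count': 2,
#          'model_forward_count': 1,
#          'all_steps_ms': {'inference': {'mean': 6.05, 'std': 5.95, 'min': 0.1, 'max': 12.0}},
#          'model_forward_steps_ms': {'inference': {'mean': 12.0, 'std': 0.0, 'min': 12.0, 'max': 12.0}},
#      }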
def _json_friendly(value: Any) -> Any:
    if isinstance(value, dict):
        return {str(key): _json_friendly(item) for key, item in value.items()}
    if isinstance(value, (list, tuple)):
        return [_json_friendly(item) for item in value]
    if isinstance(value, Path):
        return str(value)
    if isinstance(value, np.ndarray):
        return value.tolist()
    if isinstance(value, (np.integer, np.floating)):
        return value.item()
    return value
def _resolve_artifact_paths(eval_cfg: DictConfig) -> dict[str, Optional[str]]:
    save_timing = bool(eval_cfg.get('save_timing', False))
    save_trajectory = bool(
        eval_cfg.get('save_trajectory', False) or eval_cfg.get('save_trajectory_npz', False)
    )
    wants_artifacts = any([
        bool(eval_cfg.get('save_artifacts', False)),
        save_timing,
        save_trajectory,
        bool(eval_cfg.get('record_video', False)),
    ])
    output_dir: Optional[Path] = None
    if wants_artifacts:
        artifact_dir = eval_cfg.get('artifact_dir', None)
        if artifact_dir:
            output_dir = Path(str(artifact_dir)).expanduser().resolve()
        else:
            ckpt_stem = Path(str(eval_cfg.ckpt_path)).stem or 'rollout'
            timestamp = time.strftime('%Y%m%d-%H%M%S')
            output_dir = (Path.cwd() / 'rollout_artifacts' / f'{ckpt_stem}-{timestamp}').resolve()
        output_dir.mkdir(parents=True, exist_ok=True)

    video_camera_name = None
    if bool(eval_cfg.get('record_video', False)):
        configured_camera_name = eval_cfg.get('video_camera_name', None)
        if configured_camera_name is None:
            configured_camera_name = eval_cfg.get('video_camera', None)
        if configured_camera_name is not None:
            video_camera_name = str(configured_camera_name)
        elif eval_cfg.get('camera_names'):
            video_camera_name = str(eval_cfg.camera_names[0])
        else:
            raise ValueError('record_video=true requires eval.video_camera_name or a non-empty eval.camera_names')

    return {
        'output_dir': str(output_dir) if output_dir is not None else None,
        'summary_json': (
            str(output_dir / 'rollout_summary.json')
            if output_dir is not None and bool(eval_cfg.get('save_summary_json', False))
            else None
        ),
        'timing_json': (
            str(output_dir / 'timing.json')
            if output_dir is not None and save_timing
            else None
        ),
        'trajectory_npz': (
            str(output_dir / 'trajectory.npz')
            if output_dir is not None and save_trajectory
            else None
        ),
        'video_mp4': (
            str(output_dir / f'rollout_{video_camera_name}.mp4')
            if output_dir is not None and bool(eval_cfg.get('record_video', False))
            and video_camera_name is not None
            else None
        ),
        'video_camera_name': video_camera_name,
    }
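# With every flag at its default, the helper resolves to all-None paths and
# creates no directories (illustrative call, not part of this diff):
#
#   _resolve_artifact_paths(OmegaConf.create({'ckpt_path': 'x.pt'}))
#   -> {'output_dir': None, 'summary_json': None, 'timing_json': None,
#       'trajectory_npz': None, 'video_mp4': None, 'video_camera_name': None}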
def _get_video_frame(obs: Dict, camera_name: Optional[str]) -> Optional[np.ndarray]:
    if camera_name is None:
        return None
    frame = obs['images'][camera_name]
    frame = np.asarray(frame)
    if frame.ndim != 3 or frame.shape[2] != 3:
        raise ValueError(
            f'Video frame for camera {camera_name} must have shape (H, W, 3), got {frame.shape}'
        )
    if frame.dtype != np.uint8:
        frame = np.clip(frame, 0, 255).astype(np.uint8)
    return frame
def _open_video_writer(output_path: str, frame_size: tuple[int, int], fps: int):
    import cv2

    output_path = str(output_path)
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    writer = cv2.VideoWriter(output_path, fourcc, float(fps), frame_size)
    if not writer.isOpened():
        raise RuntimeError(f'Failed to open video output: {output_path}')
    return writer
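# Note (editor's, not part of this diff): cv2.VideoWriter expects BGR uint8
# frames. Frames taken straight from the simulator observation may be RGB; if
# colors in the exported MP4 look channel-swapped, convert first with
# cv2.cvtColor(frame, cv2.COLOR_RGB2BGR) before writing.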
class _RolloutVideoRecorder:
    def __init__(self, output_path: Optional[str], fps: int):
        self.output_path = output_path
        self.fps = int(fps)
        self.writer = None

    def write(self, frame: Optional[np.ndarray]):
        if self.output_path is None or frame is None:
            return
        if self.writer is None:
            frame_size = (int(frame.shape[1]), int(frame.shape[0]))
            self.writer = _open_video_writer(self.output_path, frame_size, self.fps)
        self.writer.write(frame)

    def close(self):
        if self.writer is not None:
            self.writer.release()
            self.writer = None
def _read_body_pose(env, body_name: str):
    try:
        if callable(getattr(env, 'getBodyPos', None)) and callable(getattr(env, 'getBodyQuat', None)):
            pos = env.getBodyPos(body_name)
            quat = env.getBodyQuat(body_name)
        else:
            body = env.mj_data.body(body_name)
            pos = body.xpos
            quat = body.xquat
    except Exception:
        return None

    return {
        'pos': np.asarray(pos, dtype=np.float32).copy(),
        'quat': np.asarray(quat, dtype=np.float32).copy(),
    }
def _get_executed_ee_poses(env) -> dict[str, np.ndarray]:
    candidates = {
        'left_link7': ('left_link7', 'eef_left'),
        'right_link7': ('right_link7', 'eef_right'),
        'eef_left': ('eef_left', 'left_link7'),
        'eef_right': ('eef_right', 'right_link7'),
    }
    poses = {}
    for body_key, body_names in candidates.items():
        pose = None
        for body_name in body_names:
            pose = _read_body_pose(env, body_name)
            if pose is not None:
                break
        if pose is None:
            pose = {
                'pos': np.full(3, np.nan, dtype=np.float32),
                'quat': np.full(4, np.nan, dtype=np.float32),
            }
        poses[f'{body_key}_pos'] = pose['pos']
        poses[f'{body_key}_quat'] = pose['quat']
    return poses
def _empty_rollout_trajectory() -> dict[str, list]:
    return {
        'episode_index': [],
        'step': [],
        'reward': [],
        'raw_action': [],
        'applied_action': [],
        'executed_left_link7_pos': [],
        'executed_left_link7_quat': [],
        'executed_right_link7_pos': [],
        'executed_right_link7_quat': [],
        'executed_eef_left_pos': [],
        'executed_eef_left_quat': [],
        'executed_eef_right_pos': [],
        'executed_eef_right_quat': [],
        'model_inference_triggered': [],
        'obs_read_time_ms': [],
        'preprocess_time_ms': [],
        'inference_time_ms': [],
        'env_step_time_ms': [],
        'total_time_ms': [],
    }
def _append_rollout_step(
    storage: dict[str, list],
    episode_index: int,
    timestep: int,
    reward: Optional[float],
    raw_action: np.ndarray,
    executed_action: np.ndarray,
    executed_poses: dict[str, np.ndarray],
    timing_ms: dict[str, float],
    model_inference_triggered: bool,
):
    storage['episode_index'].append(int(episode_index))
    storage['step'].append(int(timestep))
    storage['reward'].append(float(reward) if reward is not None else np.nan)
    storage['raw_action'].append(raw_action.astype(np.float32, copy=True))
    storage['applied_action'].append(executed_action.astype(np.float32, copy=True))
    storage['executed_left_link7_pos'].append(executed_poses['left_link7_pos'])
    storage['executed_left_link7_quat'].append(executed_poses['left_link7_quat'])
    storage['executed_right_link7_pos'].append(executed_poses['right_link7_pos'])
    storage['executed_right_link7_quat'].append(executed_poses['right_link7_quat'])
    storage['executed_eef_left_pos'].append(executed_poses['eef_left_pos'])
    storage['executed_eef_left_quat'].append(executed_poses['eef_left_quat'])
    storage['executed_eef_right_pos'].append(executed_poses['eef_right_pos'])
    storage['executed_eef_right_quat'].append(executed_poses['eef_right_quat'])
    storage['model_inference_triggered'].append(bool(model_inference_triggered))
    for key, value in timing_ms.items():
        storage[key].append(float(value))
def _save_rollout_trajectory_npz(output_path: str, storage: dict[str, list]):
    step = np.asarray(storage['step'], dtype=np.int32)
    raw_action = np.asarray(storage['raw_action'], dtype=np.float32)
    applied_action = np.asarray(storage['applied_action'], dtype=np.float32)
    executed_left_link7_pos = np.asarray(storage['executed_left_link7_pos'], dtype=np.float32)
    executed_left_link7_quat = np.asarray(storage['executed_left_link7_quat'], dtype=np.float32)
    executed_right_link7_pos = np.asarray(storage['executed_right_link7_pos'], dtype=np.float32)
    executed_right_link7_quat = np.asarray(storage['executed_right_link7_quat'], dtype=np.float32)
    executed_eef_left_pos = np.asarray(storage['executed_eef_left_pos'], dtype=np.float32)
    executed_eef_left_quat = np.asarray(storage['executed_eef_left_quat'], dtype=np.float32)
    executed_eef_right_pos = np.asarray(storage['executed_eef_right_pos'], dtype=np.float32)
    executed_eef_right_quat = np.asarray(storage['executed_eef_right_quat'], dtype=np.float32)
    np.savez_compressed(
        output_path,
        episode_index=np.asarray(storage['episode_index'], dtype=np.int32),
        step=step,
        timestep=step,
        reward=np.asarray(storage['reward'], dtype=np.float32),
        raw_action=raw_action,
        raw_predicted_ee_action=raw_action,
        applied_action=applied_action,
        executed_ee_action=applied_action,
        executed_left_link7_pos=executed_left_link7_pos,
        executed_left_link7_quat=executed_left_link7_quat,
        executed_right_link7_pos=executed_right_link7_pos,
        executed_right_link7_quat=executed_right_link7_quat,
        executed_eef_left_pos=executed_eef_left_pos,
        executed_eef_left_quat=executed_eef_left_quat,
        executed_eef_right_pos=executed_eef_right_pos,
        executed_eef_right_quat=executed_eef_right_quat,
        left_ee_pos=executed_eef_left_pos,
        left_ee_quat=executed_eef_left_quat,
        right_ee_pos=executed_eef_right_pos,
        right_ee_quat=executed_eef_right_quat,
        model_inference_triggered=np.asarray(storage['model_inference_triggered'], dtype=bool),
        obs_read_time_ms=np.asarray(storage['obs_read_time_ms'], dtype=np.float32),
        preprocess_time_ms=np.asarray(storage['preprocess_time_ms'], dtype=np.float32),
        inference_time_ms=np.asarray(storage['inference_time_ms'], dtype=np.float32),
        env_step_time_ms=np.asarray(storage['env_step_time_ms'], dtype=np.float32),
        total_time_ms=np.asarray(storage['total_time_ms'], dtype=np.float32),
    )
def _save_summary_json(output_path: str, summary: dict[str, Any]):
    with open(output_path, 'w', encoding='utf-8') as f:
        json.dump(_json_friendly(summary), f, ensure_ascii=False, indent=2)
class ActionSmoother:
    """
    Action smoother (exponential moving average)
@@ -157,8 +469,23 @@ class ActionSmoother:
        self.prev_action = None
@hydra.main(version_base=None, config_path="../../vla/conf", config_name="config")
def main(cfg: DictConfig):
def _close_env(env):
    if env is None:
        return

    if hasattr(env, 'exit_flag'):
        env.exit_flag = True

    cam_thread = getattr(env, 'cam_thread', None)
    if cam_thread is not None and hasattr(cam_thread, 'join'):
        cam_thread.join(timeout=1.0)

    viewer = getattr(env, 'viewer', None)
    if viewer is not None and hasattr(viewer, 'close'):
        viewer.close()


def _run_eval(cfg: DictConfig):
    """
    Simplified VLA evaluation using the agent's built-in action-queue management
@@ -176,6 +503,18 @@ def main(cfg: DictConfig):
    eval_cfg = cfg.eval
    device = eval_cfg.device
    camera_names = list(eval_cfg.camera_names)
    artifact_paths = _resolve_artifact_paths(eval_cfg)
    video_recorder = _RolloutVideoRecorder(
        output_path=artifact_paths['video_mp4'],
        fps=int(eval_cfg.get('video_fps', 30)),
    )
    rollout_trajectory = _empty_rollout_trajectory()
    global_obs_read_times_ms = []
    global_preprocess_times_ms = []
    global_inference_times_ms = []
    global_env_step_times_ms = []
    global_total_times_ms = []
    global_model_forward_flags = []

    # =========================================================================
    # Load model
@@ -196,116 +535,261 @@
    # =========================================================================
    # Create environment
    # =========================================================================
    env = make_sim_env(eval_cfg.task_name)
    env = make_sim_env(eval_cfg.task_name, headless=eval_cfg.headless)

    # =========================================================================
    # Run evaluation episodes
    # =========================================================================
    all_stats = []
    episode_rewards = []
    episode_max_rewards = []
    try:
        for episode_idx in range(eval_cfg.num_episodes):
            print(f"\n{'='*60}")
            print(f"Episode {episode_idx + 1}/{eval_cfg.num_episodes}")
            print(f"{'='*60}\n")
        for episode_idx in range(eval_cfg.num_episodes):
            box_pos = sample_transfer_pose()
            env.reset(box_pos)

            # Reset agent queues for the new episode
            agent.reset()
            if smoother:
                smoother.reset()

            # Timing statistics
            obs_read_times_ms = []
            preprocess_times_ms = []
            inference_times_ms = []
            env_step_times_ms = []
            total_times_ms = []
            model_forward_flags = []
            episode_reward = 0.0
            episode_max_reward = float('-inf')
            with torch.inference_mode():
                for t in tqdm(range(eval_cfg.max_timesteps), desc=f"Episode {episode_idx + 1}"):
                    start_total = time.perf_counter()

                    # Get observations from the environment
                    obs = env._get_image_obs()
                    qpos_obs = env._get_qpos_obs()
                    obs['qpos'] = qpos_obs['qpos']
                    end_obs_read = time.perf_counter()

                    video_frame = _get_video_frame(obs, artifact_paths['video_camera_name'])
                    video_recorder.write(video_frame)

                    # Prepare the observation for the agent
                    observation = prepare_observation(obs, camera_names)
                    end_preprocess = time.perf_counter()

                    # Select an action (the agent manages its queue internally)
                    action_queue = getattr(agent, '_queues', {}).get('action', None)
                    model_inference_triggered = len(action_queue) == 0 if action_queue is not None else True
                    start_inference = time.perf_counter()
                    action = agent.select_action(observation)

                    if str(device).startswith('cuda') and torch.cuda.is_available():
                        torch.cuda.synchronize()
                    end_inference = time.perf_counter()

                    # Convert to numpy
                    raw_action = _to_numpy_action(action)

                    # Debug: print the action at the current timestep (config-controlled)
                    if eval_cfg.get('verbose_action', False):
                        print(f"\n[Step {t:3d}] Predicted action: {raw_action}")
                        print(f" - Action shape: {raw_action.shape}")
                        print(f" - Action range: [{raw_action.min():.4f}, {raw_action.max():.4f}]")
                        print(f" - Action mean: {raw_action.mean():.4f}, std: {raw_action.std():.4f}")

                    # Optional: smooth the action
                    executed_action = raw_action.copy()
                    if smoother:
                        executed_action = smoother.smooth(executed_action)

                    # Execute the action
                    start_env_step = time.perf_counter()
                    execute_policy_action(env, executed_action)
                    end_env_step = time.perf_counter()
                    executed_poses = _get_executed_ee_poses(env)
                    reward = getattr(env, 'rew', None)
                    if reward is not None:
                        reward = float(reward)
                        episode_reward += reward
                        episode_max_reward = max(episode_max_reward, reward)
                    if not eval_cfg.headless:
                        env.render()

                    end_total = time.perf_counter()

                    step_timing_ms = {
                        'obs_read_time_ms': (end_obs_read - start_total) * 1000.0,
                        'preprocess_time_ms': (end_preprocess - end_obs_read) * 1000.0,
                        'inference_time_ms': (end_inference - start_inference) * 1000.0,
                        'env_step_time_ms': (end_env_step - start_env_step) * 1000.0,
                        'total_time_ms': (end_total - start_total) * 1000.0,
                    }
                    # Record timing
                    obs_read_times_ms.append(step_timing_ms['obs_read_time_ms'])
                    preprocess_times_ms.append(step_timing_ms['preprocess_time_ms'])
                    inference_times_ms.append(step_timing_ms['inference_time_ms'])
                    env_step_times_ms.append(step_timing_ms['env_step_time_ms'])
                    total_times_ms.append(step_timing_ms['total_time_ms'])
                    model_forward_flags.append(bool(model_inference_triggered))
                    global_obs_read_times_ms.append(step_timing_ms['obs_read_time_ms'])
                    global_preprocess_times_ms.append(step_timing_ms['preprocess_time_ms'])
                    global_inference_times_ms.append(step_timing_ms['inference_time_ms'])
                    global_env_step_times_ms.append(step_timing_ms['env_step_time_ms'])
                    global_total_times_ms.append(step_timing_ms['total_time_ms'])
                    global_model_forward_flags.append(bool(model_inference_triggered))

                    if artifact_paths['trajectory_npz'] is not None:
                        _append_rollout_step(
                            rollout_trajectory,
                            episode_index=episode_idx,
                            timestep=t,
                            reward=reward,
                            raw_action=raw_action,
                            executed_action=executed_action,
                            executed_poses=executed_poses,
                            timing_ms=step_timing_ms,
                            model_inference_triggered=model_inference_triggered,
                        )
            # =========================================================================
            # Print episode statistics
            # =========================================================================
            avg_obs_read_time_ms = _mean_or_zero(obs_read_times_ms)
            avg_preprocess_time_ms = _mean_or_zero(preprocess_times_ms)
            avg_inference_time_ms = _mean_or_zero(inference_times_ms)
            avg_env_step_time_ms = _mean_or_zero(env_step_times_ms)
            avg_total_time_ms = _mean_or_zero(total_times_ms)
            timing_breakdown = _summarize_timing_breakdown(
                {
                    'obs_read': obs_read_times_ms,
                    'preprocess': preprocess_times_ms,
                    'inference': inference_times_ms,
                    'env_step': env_step_times_ms,
                    'loop_total': total_times_ms,
                },
                model_forward_flags,
            )
            episode_artifact_paths = {
                'video': artifact_paths['video_mp4'],
                'trajectory': artifact_paths['trajectory_npz'],
                'timing': artifact_paths['timing_json'] or artifact_paths['summary_json'],
            }

            stats = {
                'inference_fps': 1000.0 / avg_inference_time_ms if avg_inference_time_ms > 0 else 0.0,
                'control_fps': 1000.0 / avg_total_time_ms if avg_total_time_ms > 0 else 0.0,
                'avg_obs_read_time_ms': avg_obs_read_time_ms,
                'avg_preprocess_time_ms': avg_preprocess_time_ms,
                'avg_inference_time_ms': avg_inference_time_ms,
                'avg_env_step_time_ms': avg_env_step_time_ms,
                'avg_total_time_ms': avg_total_time_ms,
                'num_inferences': int(sum(model_forward_flags)),
                'num_model_forwards': int(sum(model_forward_flags)),
                'num_steps': len(total_times_ms),
                'episode_reward': float(episode_reward),
                'episode_max_reward': (
                    float(episode_max_reward) if episode_max_reward != float('-inf') else None
                ),
                'artifact_paths': episode_artifact_paths,
                'timing_breakdown_ms': timing_breakdown['all_steps_ms'],
                'timing_summary': timing_breakdown,
            }
            all_stats.append(stats)
            episode_rewards.append(float(episode_reward))
            if episode_max_reward != float('-inf'):
                episode_max_rewards.append(float(episode_max_reward))

            print(f"\nEpisode {episode_idx + 1} finished ({eval_cfg.max_timesteps} timesteps)")
            print(f"  Model inference FPS: {stats['inference_fps']:.2f} Hz")
            print(f"  Control loop FPS: {stats['control_fps']:.2f} Hz")
            print(f"  Avg observation read time: {stats['avg_obs_read_time_ms']:.2f} ms")
            print(f"  Avg preprocessing time: {stats['avg_preprocess_time_ms']:.2f} ms")
            print(f"  Avg inference time: {stats['avg_inference_time_ms']:.2f} ms")
            print(f"  Avg env step time: {stats['avg_env_step_time_ms']:.2f} ms")
            print(f"  Avg total time: {stats['avg_total_time_ms']:.2f} ms")
            print(f"  Total inference count: {stats['num_inferences']}")
            print(f"  Episode cumulative reward: {stats['episode_reward']:.2f}")
    # =========================================================================
    # Overall statistics
    # =========================================================================
        print(f"\n{'='*60}")
        print(f"Episode {episode_idx + 1}/{eval_cfg.num_episodes}")
        print(f"{'='*60}\n")
    print("Evaluation complete!")
    print(f"{'='*60}")
        box_pos = sample_transfer_pose()
        env.reset(box_pos)

        # Reset agent queues for the new episode
        agent.reset()
        if smoother:
            smoother.reset()

        # Timing statistics
        inference_times = []
        total_times = []

        with torch.inference_mode():
            for t in tqdm(range(eval_cfg.max_timesteps), desc=f"Episode {episode_idx + 1}"):
                start_total = time.time()

                # Get observations from the environment
                obs = env._get_image_obs()
                qpos_obs = env._get_qpos_obs()
                obs['qpos'] = qpos_obs['qpos']

                # Prepare the observation for the agent
                observation = prepare_observation(obs, camera_names)

                # Select an action (the agent manages its queue internally)
                start_inference = time.time()
                action = agent.select_action(observation)

                if device == 'cuda':
                    torch.cuda.synchronize()
                end_inference = time.time()

                # Convert to numpy
                action = action.cpu().numpy()

                # Debug: print the action at the current timestep (config-controlled)
                if eval_cfg.get('verbose_action', False):
                    print(f"\n[Step {t:3d}] Predicted action: {action}")
                    print(f" - Action shape: {action.shape}")
                    print(f" - Action range: [{action.min():.4f}, {action.max():.4f}]")
                    print(f" - Action mean: {action.mean():.4f}, std: {action.std():.4f}")

                # Optional: smooth the action
                if smoother:
                    action = smoother.smooth(action)

                # Execute the action
                env.step_jnt(action)
                env.render()

                end_total = time.time()

                # Record timing
                inference_times.append(end_inference - start_inference)
                total_times.append(end_total - start_total)

        # =========================================================================
        # Print episode statistics
        # =========================================================================
        avg_inference_time = np.mean(inference_times)
        avg_total_time = np.mean(total_times)

        stats = {
            'inference_fps': 1.0 / avg_inference_time if avg_inference_time > 0 else 0.0,
            'control_fps': 1.0 / avg_total_time if avg_total_time > 0 else 0.0,
            'avg_inference_time_ms': avg_inference_time * 1000,
            'avg_total_time_ms': avg_total_time * 1000,
            'num_inferences': len([t for t in inference_times if t > 0.001]),  # count actual inference calls
            'num_steps': len(total_times)
        summary = {
            'num_episodes': int(eval_cfg.num_episodes),
            'episode_rewards': episode_rewards,
            'episode_max_rewards': episode_max_rewards,
            'avg_reward': float(np.mean(episode_rewards)) if episode_rewards else 0.0,
            'avg_max_reward': float(np.mean(episode_max_rewards)) if episode_max_rewards else 0.0,
            'episodes': all_stats,
            'artifact_dir': artifact_paths['output_dir'],
            'artifacts': artifact_paths,
        }
        all_stats.append(stats)

        print(f"\nEpisode {episode_idx + 1} finished ({eval_cfg.max_timesteps} timesteps)")
        print(f"  Model inference FPS: {stats['inference_fps']:.2f} Hz")
        print(f"  Control loop FPS: {stats['control_fps']:.2f} Hz")
        print(f"  Avg inference time: {stats['avg_inference_time_ms']:.2f} ms")
        print(f"  Avg total time: {stats['avg_total_time_ms']:.2f} ms")
        print(f"  Total inference count: {stats['num_inferences']}")
        if all_stats:
            avg_inference_fps = np.mean([s['inference_fps'] for s in all_stats])
            avg_control_fps = np.mean([s['control_fps'] for s in all_stats])
            avg_obs_read_time = _mean_or_zero(global_obs_read_times_ms)
            avg_preprocess_time = _mean_or_zero(global_preprocess_times_ms)
            avg_inference_time = _mean_or_zero(global_inference_times_ms)
            avg_env_step_time = _mean_or_zero(global_env_step_times_ms)
            avg_total_time = _mean_or_zero(global_total_times_ms)
            summary.update({
                'avg_inference_fps': float(avg_inference_fps),
                'avg_control_fps': float(avg_control_fps),
                'avg_obs_read_time_ms': float(avg_obs_read_time),
                'avg_preprocess_time_ms': float(avg_preprocess_time),
                'avg_inference_time_ms': float(avg_inference_time),
                'avg_env_step_time_ms': float(avg_env_step_time),
                'avg_total_time_ms': float(avg_total_time),
                'timing_summary': _summarize_timing_breakdown(
                    {
                        'obs_read': global_obs_read_times_ms,
                        'preprocess': global_preprocess_times_ms,
                        'inference': global_inference_times_ms,
                        'env_step': global_env_step_times_ms,
                        'loop_total': global_total_times_ms,
                    },
                    global_model_forward_flags,
                ),
            })
            # =========================================================================
            # Overall statistics
            # =========================================================================
            print(f"\n{'='*60}")
            print("Evaluation complete!")
            print(f"{'='*60}")
            print(f"\nOverall statistics ({eval_cfg.num_episodes} episodes):")
            print(f"  Avg model inference FPS: {avg_inference_fps:.2f} Hz")
            print(f"  Avg control loop FPS: {avg_control_fps:.2f} Hz")
            print(f"  Avg observation read time: {avg_obs_read_time:.2f} ms")
            print(f"  Avg preprocessing time: {avg_preprocess_time:.2f} ms")
            print(f"  Avg inference time: {avg_inference_time:.2f} ms")
            print(f"  Avg env step time: {avg_env_step_time:.2f} ms")
            print(f"  Avg total time: {avg_total_time:.2f} ms")
            print(f"  Avg cumulative reward: {summary['avg_reward']:.2f}")

    if all_stats:
        avg_inference_fps = np.mean([s['inference_fps'] for s in all_stats])
        avg_control_fps = np.mean([s['control_fps'] for s in all_stats])
        avg_inference_time = np.mean([s['avg_inference_time_ms'] for s in all_stats])
        avg_total_time = np.mean([s['avg_total_time_ms'] for s in all_stats])
        if artifact_paths['trajectory_npz'] is not None:
            _save_rollout_trajectory_npz(artifact_paths['trajectory_npz'], rollout_trajectory)
        if artifact_paths['summary_json'] is not None:
            _save_summary_json(artifact_paths['summary_json'], summary)
        if artifact_paths['timing_json'] is not None:
            _save_summary_json(artifact_paths['timing_json'], summary.get('timing_summary', {}))
        print()
        return _json_friendly(summary)
    finally:
        video_recorder.close()
        _close_env(env)
    print(f"\nOverall statistics ({eval_cfg.num_episodes} episodes):")
    print(f"  Avg model inference FPS: {avg_inference_fps:.2f} Hz")
    print(f"  Avg control loop FPS: {avg_control_fps:.2f} Hz")
    print(f"  Avg inference time: {avg_inference_time:.2f} ms")
    print(f"  Avg total time: {avg_total_time:.2f} ms")
    print()


@hydra.main(version_base=None, config_path="../../vla/conf", config_name="config")
def main(cfg: DictConfig):
    return _run_eval(cfg)


if __name__ == '__main__':
roboimi/vla/conf/eval/eval.yaml
@@ -29,6 +29,19 @@ smooth_alpha: 0.3
# ====================
# Debug options
# ====================
headless: false  # whether to disable MuJoCo / OpenCV GUI rendering
verbose_action: true  # whether to print per-timestep action info


# ====================
# Rollout artifact export
# ====================
artifact_dir: null  # optional output directory; auto-created when export is enabled and this is empty
save_artifacts: false  # master switch; still requires the specific export flags below
save_timing: false  # whether to save timing.json (per-stage timing statistics)
save_trajectory: false  # whether to save trajectory.npz (raw EE actions + executed EE poses)
save_summary_json: false  # whether to save a JSON-friendly rollout summary
save_trajectory_npz: false  # whether to save per-step trajectory/timing/EE poses as NPZ
record_video: false  # whether to record a rollout mp4 from a single camera stream
video_camera: null  # alias for video_camera_name
video_camera_name: null  # camera used for video recording; defaults to camera_names[0] when empty
video_fps: 30  # target frame rate of the exported mp4
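# Example (editor's sketch, not part of this diff): a one-off artifact-capturing
# run could override these defaults from the Hydra CLI, e.g.
#   python roboimi/demos/vla_scripts/eval_vla.py \
#     eval.record_video=true eval.video_camera_name=front \
#     eval.save_trajectory_npz=true eval.save_summary_json=true eval.headless=true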
228
tests/test_eval_vla_rollout_artifacts.py
Normal file
@@ -0,0 +1,228 @@
import json
import tempfile
import unittest
from pathlib import Path
from unittest import mock

import numpy as np
import torch
from omegaconf import OmegaConf

from roboimi.demos.vla_scripts import eval_vla


class _FakeAgent:
    def __init__(self, actions):
        self._actions = [torch.tensor(action, dtype=torch.float32) for action in actions]
        self.reset_calls = 0

    def eval(self):
        return self

    def to(self, _device):
        return self

    def reset(self):
        self.reset_calls += 1

    def select_action(self, observation):
        del observation
        return self._actions.pop(0)
class _FakeEnv:
    def __init__(self):
        self.step_count = 0
        self.rew = 0.0
        self.render_calls = 0
        self.reset_calls = []

    def reset(self, box_pos):
        self.reset_calls.append(np.array(box_pos, copy=True))
        self.step_count = 0
        self.rew = 0.0

    def _get_image_obs(self):
        frame_value = self.step_count
        front = np.full((6, 8, 3), fill_value=frame_value, dtype=np.uint8)
        top = np.full((6, 8, 3), fill_value=frame_value + 20, dtype=np.uint8)
        return {"images": {"front": front, "top": top}}

    def _get_qpos_obs(self):
        return {"qpos": np.arange(16, dtype=np.float32)}

    def step(self, action):
        del action
        self.step_count += 1
        self.rew = float(self.step_count)

    def render(self):
        self.render_calls += 1

    def getBodyPos(self, name):
        base = float(self.step_count)
        if name == 'eef_left':
            return np.array([base, base + 0.1, base + 0.2], dtype=np.float32)
        if name == 'eef_right':
            return np.array([base + 1.0, base + 1.1, base + 1.2], dtype=np.float32)
        raise KeyError(name)

    def getBodyQuat(self, name):
        base = float(self.step_count)
        if name == 'eef_left':
            return np.array([1.0, base, 0.0, 0.0], dtype=np.float32)
        if name == 'eef_right':
            return np.array([1.0, 0.0, base, 0.0], dtype=np.float32)
        raise KeyError(name)
class _FakeVideoWriter:
    def __init__(self, output_path):
        self.output_path = Path(output_path)
        self.output_path.parent.mkdir(parents=True, exist_ok=True)
        self.output_path.write_bytes(b'')
        self.frames = []
        self.released = False

    def isOpened(self):
        return True

    def write(self, frame):
        self.frames.append(np.array(frame, copy=True))

    def release(self):
        self.released = True
        self.output_path.write_bytes(b'fake-mp4')
class EvalVLARolloutArtifactsTest(unittest.TestCase):
    def test_eval_config_exposes_rollout_artifact_defaults(self):
        eval_cfg = OmegaConf.load(Path('roboimi/vla/conf/eval/eval.yaml'))

        self.assertIn('artifact_dir', eval_cfg)
        self.assertFalse(eval_cfg.save_summary_json)
        self.assertFalse(eval_cfg.save_trajectory_npz)
        self.assertFalse(eval_cfg.record_video)
        self.assertIsNone(eval_cfg.artifact_dir)
        self.assertIsNone(eval_cfg.video_camera_name)
        self.assertEqual(eval_cfg.video_fps, 30)
    def test_run_eval_exports_npz_summary_and_video_artifacts(self):
        actions = [
            np.arange(16, dtype=np.float32),
            np.arange(16, dtype=np.float32) + 10.0,
        ]
        fake_agent = _FakeAgent(actions)
        fake_env = _FakeEnv()

        with tempfile.TemporaryDirectory() as tmpdir:
            cfg = OmegaConf.create(
                {
                    'agent': {},
                    'eval': {
                        'ckpt_path': 'checkpoints/vla_model_best.pt',
                        'num_episodes': 1,
                        'max_timesteps': 2,
                        'device': 'cpu',
                        'task_name': 'sim_transfer',
                        'camera_names': ['front', 'top'],
                        'use_smoothing': True,
                        'smooth_alpha': 0.5,
                        'verbose_action': False,
                        'headless': True,
                        'artifact_dir': tmpdir,
                        'save_summary_json': True,
                        'save_trajectory_npz': True,
                        'record_video': True,
                        'video_camera_name': 'front',
                        'video_fps': 12,
                    },
                }
            )

            writer_holder = {}

            def fake_open_video_writer(output_path, frame_size, fps):
                self.assertEqual(frame_size, (8, 6))
                self.assertEqual(fps, 12)
                writer = _FakeVideoWriter(output_path)
                writer_holder['writer'] = writer
                return writer

            with mock.patch.object(
                eval_vla,
                'load_checkpoint',
                return_value=(fake_agent, None),
            ), mock.patch.object(
                eval_vla,
                'make_sim_env',
                return_value=fake_env,
            ), mock.patch.object(
                eval_vla,
                'sample_transfer_pose',
                return_value=np.array([0.1, 0.2, 0.3], dtype=np.float32),
            ), mock.patch.object(
                eval_vla,
                'tqdm',
                side_effect=lambda iterable, **kwargs: iterable,
            ), mock.patch.object(
                eval_vla,
                '_open_video_writer',
                side_effect=fake_open_video_writer,
            ):
                summary = eval_vla._run_eval(cfg)
            artifacts = summary['artifacts']
            trajectory_path = Path(artifacts['trajectory_npz'])
            summary_path = Path(artifacts['summary_json'])
            video_path = Path(artifacts['video_mp4'])

            self.assertEqual(Path(artifacts['output_dir']), Path(tmpdir))
            self.assertEqual(artifacts['video_camera_name'], 'front')
            self.assertTrue(trajectory_path.exists())
            self.assertTrue(summary_path.exists())
            self.assertTrue(video_path.exists())

            rollout_npz = np.load(trajectory_path)
            np.testing.assert_array_equal(rollout_npz['episode_index'], np.array([0, 0]))
            np.testing.assert_array_equal(rollout_npz['timestep'], np.array([0, 1]))
            np.testing.assert_array_equal(rollout_npz['reward'], np.array([1.0, 2.0], dtype=np.float32))
            np.testing.assert_array_equal(rollout_npz['raw_predicted_ee_action'][0], actions[0])
            np.testing.assert_array_equal(rollout_npz['raw_predicted_ee_action'][1], actions[1])
            np.testing.assert_array_equal(rollout_npz['executed_ee_action'][0], actions[0])
            np.testing.assert_array_equal(
                rollout_npz['executed_ee_action'][1],
                (actions[0] + actions[1]) / 2.0,
            )
            np.testing.assert_array_equal(
                rollout_npz['left_ee_pos'],
                np.array([[1.0, 1.1, 1.2], [2.0, 2.1, 2.2]], dtype=np.float32),
            )
            np.testing.assert_array_equal(
                rollout_npz['right_ee_pos'],
                np.array([[2.0, 2.1, 2.2], [3.0, 3.1, 3.2]], dtype=np.float32),
            )
            self.assertEqual(rollout_npz['obs_read_time_ms'].shape, (2,))
            self.assertEqual(rollout_npz['preprocess_time_ms'].shape, (2,))
            self.assertEqual(rollout_npz['inference_time_ms'].shape, (2,))
            self.assertEqual(rollout_npz['env_step_time_ms'].shape, (2,))
            self.assertEqual(rollout_npz['total_time_ms'].shape, (2,))

            writer = writer_holder['writer']
            self.assertTrue(writer.released)
            self.assertEqual(len(writer.frames), 2)
            np.testing.assert_array_equal(writer.frames[0], np.zeros((6, 8, 3), dtype=np.uint8))
            np.testing.assert_array_equal(writer.frames[1], np.full((6, 8, 3), 1, dtype=np.uint8))

            with summary_path.open('r', encoding='utf-8') as fh:
                saved_summary = json.load(fh)
            self.assertEqual(saved_summary['artifacts']['trajectory_npz'], str(trajectory_path))
            self.assertEqual(saved_summary['artifacts']['video_mp4'], str(video_path))
            self.assertEqual(saved_summary['episode_rewards'], [3.0])
            self.assertAlmostEqual(summary['avg_reward'], 3.0)
            self.assertIn('avg_obs_read_time_ms', summary)
            self.assertIn('avg_env_step_time_ms', summary)


if __name__ == '__main__':
    unittest.main()