code release

This commit is contained in:
Cheng Chi
2023-03-07 16:07:15 -05:00
parent 7b8d98f0c2
commit 4bf419aa5a
364 changed files with 49460 additions and 0 deletions

140
.gitignore vendored Normal file
View File

@@ -0,0 +1,140 @@
bin
logs
wandb
outputs
data
data_local
.vscode
_wandb
**/.DS_Store
fuse.cfg
*.ai
# Generation results
results/
ray/auth.json
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/

374
README.md Normal file
View File

@@ -0,0 +1,374 @@
# Diffusion Policy
[[Project page]](https://diffusion-policy.cs.columbia.edu/)
[[Paper]](https://diffusion-policy.cs.columbia.edu/#paper)
[[Data]](https://diffusion-policy.cs.columbia.edu/data/)
[[Colab (state)]](https://colab.research.google.com/drive/1gxdkgRVfM55zihY9TFLja97cSVZOZq2B?usp=sharing)
[[Colab (vision)]](https://colab.research.google.com/drive/18GIHeOQ5DyjMN8iIRZL2EKZ0745NLIpg?usp=sharing)
[Cheng Chi](http://cheng-chi.github.io/)<sup>1</sup>,
[Siyuan Feng](https://www.cs.cmu.edu/~sfeng/)<sup>2</sup>,
[Yilun Du](https://yilundu.github.io/)<sup>3</sup>,
[Zhenjia Xu](https://www.zhenjiaxu.com/)<sup>1</sup>,
[Eric Cousineau](https://www.eacousineau.com/)<sup>2</sup>,
[Benjamin Burchfiel](http://www.benburchfiel.com/)<sup>2</sup>,
[Shuran Song](https://www.cs.columbia.edu/~shurans/)<sup>1</sup>
<sup>1</sup>Columbia University,
<sup>2</sup>Toyota Research Institute,
<sup>3</sup>MIT
<img src="media/teaser.png" alt="drawing" width="100%"/>
<img src="media/multimodal_sim.png" alt="drawing" width="100%"/>
## 🛝 Try it out!
Our self-contained Google Colab notebooks are the easiest way to play with Diffusion Policy. We provide separate notebooks for the [state-based environment](https://colab.research.google.com/drive/1gxdkgRVfM55zihY9TFLja97cSVZOZq2B?usp=sharing) and the [vision-based environment](https://colab.research.google.com/drive/18GIHeOQ5DyjMN8iIRZL2EKZ0745NLIpg?usp=sharing).
## 🧾 Checkout our experiment logs!
For each experiment used to generate Tables I, II and IV in the [paper](https://diffusion-policy.cs.columbia.edu/#paper), we provide:
1. A `config.yaml` that contains all parameters needed to reproduce the experiment.
2. Detailed training/eval `logs.json.txt` for every training step.
3. Checkpoints for the best (`epoch=*-test_mean_score=*.ckpt`) and last (`latest.ckpt`) epochs of each run.
Experiment logs are hosted on our website as nested directories in the following format:
`https://diffusion-policy.cs.columbia.edu/data/experiments/<image|low_dim>/<task>/<method>/`
Within each experiment directory you may find:
```
.
├── config.yaml
├── metrics
│   └── logs.json.txt
├── train_0
│   ├── checkpoints
│   │   ├── epoch=0300-test_mean_score=1.000.ckpt
│   │   └── latest.ckpt
│   └── logs.json.txt
├── train_1
│   ├── checkpoints
│   │   ├── epoch=0250-test_mean_score=1.000.ckpt
│   │   └── latest.ckpt
│   └── logs.json.txt
└── train_2
├── checkpoints
│   ├── epoch=0250-test_mean_score=1.000.ckpt
│   └── latest.ckpt
└── logs.json.txt
```
The `metrics/logs.json.txt` file aggregates evaluation metrics from all 3 training runs every 50 epochs using `multirun_metrics.py`. The numbers reported in the paper correspond to `max` and `k_min_train_loss` aggregation keys.
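The `logs.json.txt` files are JSON lines: one JSON dictionary of metrics per line. A minimal parsing sketch — the exact key names below are an assumption for illustration, so adapt them to the file you download:

```python
import json

def load_metric(log_text, key):
    """Collect every value of `key` from a JSON-lines log."""
    values = []
    for line in log_text.splitlines():
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        if key in record:
            values.append(record[key])
    return values

# Tiny made-up example in the same json-lines shape.
example_log = '\n'.join([
    '{"epoch": 0, "train_loss": 0.9}',
    '{"epoch": 50, "train_loss": 0.2, "test/mean_score": 0.81}',
])
best = max(load_metric(example_log, 'test/mean_score'))
print(best)  # 0.81
```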
To download all files in a subdirectory, use:
```console
$ wget --recursive --no-parent --no-host-directories --relative --reject="index.html*" https://diffusion-policy.cs.columbia.edu/data/experiments/low_dim/square_ph/diffusion_policy_cnn/
```
## 🛠️ Installation
### 🖥️ Simulation
To reproduce our simulation benchmark results, install our conda environment on a Linux machine with an Nvidia GPU. On Ubuntu 20.04 you need to install the following apt packages for mujoco:
```console
sudo apt install -y libosmesa6-dev libgl1-mesa-glx libglfw3 patchelf
```
We recommend [Mambaforge](https://github.com/conda-forge/miniforge#mambaforge) over the standard Anaconda distribution for faster installation:
```console
mamba env create -f conda_environment.yaml
```
but you can use conda as well:
```console
conda env create -f conda_environment.yaml
```
The `conda_environment_macos.yaml` file is only for development on MacOS and does not have full support for benchmarks.
### 🦾 Real Robot
Hardware (for Push-T):
* 1x [UR5-CB3](https://www.universal-robots.com/cb3) or [UR5e](https://www.universal-robots.com/products/ur5-robot/) ([RTDE Interface](https://www.universal-robots.com/articles/ur/interface-communication/real-time-data-exchange-rtde-guide/) is required)
* 2x [RealSense D415](https://www.intelrealsense.com/depth-camera-d415/)
* 1x [3Dconnexion SpaceMouse](https://3dconnexion.com/us/product/spacemouse-wireless/) (for teleop)
* 1x [Millibar Robotics Manual Tool Changer](https://www.millibar.com/manual-tool-changer/) (only need robot side)
* 1x 3D printed [End effector](https://cad.onshape.com/documents/a818888644a15afa6cc68ee5/w/2885b48b018cda84f425beca/e/3e8771c2124cee024edd2fed?renderMode=0&uiState=63ffcba6631ca919895e64e5)
* 1x 3D printed [T-block](https://cad.onshape.com/documents/f1140134e38f6ed6902648d5/w/a78cf81827600e4ff4058d03/e/f35f57fb7589f72e05c76caf?renderMode=0&uiState=63ffcbc9af4a881b344898ee)
* USB-C cables and screws for RealSense
Software:
* Ubuntu 20.04.3 (tested)
* Mujoco dependencies:
`sudo apt install libosmesa6-dev libgl1-mesa-glx libglfw3 patchelf`
* [RealSense SDK](https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md)
* Spacemouse dependencies:
`sudo apt install libspnav-dev spacenavd; sudo systemctl start spacenavd`
* Conda environment `mamba env create -f conda_environment_real.yaml`
## 🖥️ Reproducing Simulation Benchmark Results
### Download Training Data
Under the repo root, create a data subdirectory:
```console
[diffusion_policy]$ mkdir data && cd data
```
Download the corresponding zip file from [https://diffusion-policy.cs.columbia.edu/data/training/](https://diffusion-policy.cs.columbia.edu/data/training/)
```console
[data]$ wget https://diffusion-policy.cs.columbia.edu/data/training/pusht.zip
```
Extract training data:
```console
[data]$ unzip pusht.zip && rm -f pusht.zip && cd ..
```
Grab the config file for the corresponding experiment:
```console
[diffusion_policy]$ wget -O image_pusht_diffusion_policy_cnn.yaml https://diffusion-policy.cs.columbia.edu/data/experiments/image/pusht/diffusion_policy_cnn/config.yaml
```
### Running for a single seed
Activate the conda environment and log in to [wandb](https://wandb.ai) (if you haven't already).
```console
[diffusion_policy]$ conda activate robodiff
(robodiff)[diffusion_policy]$ wandb login
```
Launch training with seed 42 on GPU 0.
```console
(robodiff)[diffusion_policy]$ python train.py --config-dir=. --config-name=image_pusht_diffusion_policy_cnn.yaml training.seed=42 training.device=cuda:0 hydra.run.dir='data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}'
```
This will create a directory in the format `data/outputs/yyyy.mm.dd/hh.mm.ss_<method_name>_<task_name>` where configs, logs and checkpoints are written. The policy will be evaluated every 50 epochs, with the success rate logged as `test/mean_score` on wandb along with videos of some rollouts.
```console
(robodiff)[diffusion_policy]$ tree data/outputs/2023.03.01/20.02.03_train_diffusion_unet_hybrid_pusht_image -I wandb
data/outputs/2023.03.01/20.02.03_train_diffusion_unet_hybrid_pusht_image
├── checkpoints
│ ├── epoch=0000-test_mean_score=0.134.ckpt
│ └── latest.ckpt
├── .hydra
│ ├── config.yaml
│ ├── hydra.yaml
│ └── overrides.yaml
├── logs.json.txt
├── media
│ ├── 2k5u6wli.mp4
│ ├── 2kvovxms.mp4
│ ├── 2pxd9f6b.mp4
│ ├── 2q5gjt5f.mp4
│ ├── 2sawbf6m.mp4
│ └── 538ubl79.mp4
└── train.log
3 directories, 13 files
```
### Running for multiple seeds
Launch a local ray cluster. For large-scale experiments, you might want to set up an [AWS cluster with autoscaling](https://docs.ray.io/en/master/cluster/vms/user-guides/launching-clusters/aws.html). All other commands remain the same.
```console
(robodiff)[diffusion_policy]$ export CUDA_VISIBLE_DEVICES=0,1,2 # select GPUs to be managed by the ray cluster
(robodiff)[diffusion_policy]$ ray start --head --num-gpus=3
```
Launch a ray client which will start 3 training workers (3 seeds) and 1 metrics monitor worker.
```console
(robodiff)[diffusion_policy]$ python ray_train_multirun.py --config-dir=. --config-name=image_pusht_diffusion_policy_cnn.yaml --seeds=42,43,44 --monitor_key=test/mean_score -- multi_run.run_dir='data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}' multi_run.wandb_name_base='${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}'
```
In addition to the wandb logs written by each training worker individually, the metrics monitor worker will log metrics aggregated from all 3 training runs to the wandb project `diffusion_policy_metrics`. Local configs, logs and checkpoints will be written to `data/outputs/yyyy.mm.dd/hh.mm.ss_<method_name>_<task_name>` in a directory structure identical to our [training logs](https://diffusion-policy.cs.columbia.edu/data/experiments/):
```console
(robodiff)[diffusion_policy]$ tree data/outputs/2023.03.01/22.13.58_train_diffusion_unet_hybrid_pusht_image -I 'wandb|media'
data/outputs/2023.03.01/22.13.58_train_diffusion_unet_hybrid_pusht_image
├── config.yaml
├── metrics
│ ├── logs.json.txt
│ ├── metrics.json
│ └── metrics.log
├── train_0
│ ├── checkpoints
│ │ ├── epoch=0000-test_mean_score=0.174.ckpt
│ │ └── latest.ckpt
│ ├── logs.json.txt
│ └── train.log
├── train_1
│ ├── checkpoints
│ │ ├── epoch=0000-test_mean_score=0.131.ckpt
│ │ └── latest.ckpt
│ ├── logs.json.txt
│ └── train.log
└── train_2
├── checkpoints
│ ├── epoch=0000-test_mean_score=0.105.ckpt
│ └── latest.ckpt
├── logs.json.txt
└── train.log
7 directories, 16 files
```
## 🦾 Demo, Training and Eval on a Real Robot
Make sure your UR5 robot is running and accepting commands from its network interface (keep the emergency stop button within reach at all times), your RealSense cameras are plugged into your workstation (tested with `realsense-viewer`) and your SpaceMouse is connected with the `spacenavd` daemon running (verify with `systemctl status spacenavd`).
Start the demonstration collection script. Press "C" to start recording. Use SpaceMouse to move the robot. Press "S" to stop recording.
```console
(robodiff)[diffusion_policy]$ python demo_real_robot.py -o data/demo_pusht_real --robot_ip 192.168.0.204
```
This should result in a demonstration dataset in `data/demo_pusht_real` with the same structure as our example [real Push-T training dataset](https://diffusion-policy.cs.columbia.edu/data/training/pusht_real.zip).
To train a Diffusion Policy, launch training with config:
```console
(robodiff)[diffusion_policy]$ python train.py --config-name=train_diffusion_unet_real_image_workspace task.dataset_path=data/demo_pusht_real
```
Edit [`diffusion_policy/config/task/real_pusht_image.yaml`](./diffusion_policy/config/task/real_pusht_image.yaml) if your camera setup is different.
Assuming the training has finished and you have a checkpoint at `data/outputs/blah/checkpoints/latest.ckpt`, launch the evaluation script with:
```console
python eval_real_robot.py -i data/outputs/blah/checkpoints/latest.ckpt -o data/eval_pusht_real --robot_ip 192.168.0.204
```
Press "C" to start evaluation (handing control over to the policy). Press "S" to stop the current episode.
## 🗺️ Codebase Tutorial
This codebase is structured around two requirements:
1. implementing `N` tasks and `M` methods should require only `O(N+M)` amount of code instead of `O(N*M)`,
2. while retaining maximum flexibility.
To achieve this, we:
1. maintain a simple unified interface between tasks and methods, and
2. make the implementations of tasks and methods independent of each other.
These design decisions come at the cost of some code repetition between tasks and methods. However, we believe the benefit of being able to add or modify tasks/methods without affecting the rest, and of being able to understand a task/method by reading the code linearly, outweighs the cost of copying and pasting 😊.
### The Split
On the task side, we have:
* `Dataset`: adapts a (third-party) dataset to the interface.
* `EnvRunner`: executes a `Policy` that accepts the interface and produces logs and metrics.
* `config/task/<task_name>.yaml`: contains all information needed to construct `Dataset` and `EnvRunner`.
* (optional) `Env`: a `gym==0.21.0`-compatible class that encapsulates the task environment.
On the policy side, we have:
* `Policy`: implements inference according to the interface and part of the training process.
* `Workspace`: manages the life-cycle of training and evaluation (interleaved) of a method.
* `config/<workspace_name>.yaml`: contains all information needed to construct `Policy` and `Workspace`.
### The Interface
#### Low Dim
A [`LowdimPolicy`](./diffusion_policy/policy/base_lowdim_policy.py) takes an observation dictionary:
- `"obs":` Tensor of shape `(B,To,Do)`
and predicts an action dictionary:
- `"action":` Tensor of shape `(B,Ta,Da)`
A [`LowdimDataset`](./diffusion_policy/dataset/base_dataset.py) returns a sample dictionary:
- `"obs":` Tensor of shape `(To, Do)`
- `"action":` Tensor of shape `(Ta, Da)`
Its `get_normalizer` method returns a [`LinearNormalizer`](./diffusion_policy/model/common/normalizer.py) with keys `"obs","action"`.
The `Policy` handles normalization on GPU with its own copy of the `LinearNormalizer`. The parameters of the `LinearNormalizer` are saved as part of the `Policy`'s weights checkpoint.
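As a toy illustration of this shape contract (a stand-in, not the actual `LowdimPolicy` implementation):

```python
import numpy as np

B, To, Do = 8, 3, 20  # batch size, observation horizon, observation dim
Ta, Da = 4, 2         # action horizon, action dim

class RandomLowdimPolicy:
    """Illustrative only: returns random actions with the contract's shapes."""
    def predict_action(self, obs_dict):
        obs = obs_dict['obs']            # (B, To, Do)
        assert obs.shape[1:] == (To, Do)
        return {'action': np.random.randn(obs.shape[0], Ta, Da)}

action = RandomLowdimPolicy().predict_action({'obs': np.zeros((B, To, Do))})['action']
print(action.shape)  # (8, 4, 2)
```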
#### Image
An [`ImagePolicy`](./diffusion_policy/policy/base_image_policy.py) takes an observation dictionary:
- `"key0":` Tensor of shape `(B,To,*)`
- `"key1":` Tensor of shape e.g. `(B,To,H,W,3)` ([0,1] float32)
and predicts an action dictionary:
- `"action":` Tensor of shape `(B,Ta,Da)`
An [`ImageDataset`](./diffusion_policy/dataset/base_dataset.py) returns a sample dictionary:
- `"obs":` Dict of
    - `"key0":` Tensor of shape `(To, *)`
    - `"key1":` Tensor of shape `(To,H,W,3)`
- `"action":` Tensor of shape `(Ta, Da)`
Its `get_normalizer` method returns a [`LinearNormalizer`](./diffusion_policy/model/common/normalizer.py) with keys `"key0","key1","action"`.
#### Example
```
To = 3
Ta = 4
T = 6
|o|o|o|
| | |a|a|a|a|
|o|o|
| |a|a|a|a|a|
| | | | |a|a|
```
Terminology in the paper vs. `varname` in the codebase:
- Observation Horizon: `To|n_obs_steps`
- Action Horizon: `Ta|n_action_steps`
- Prediction Horizon: `T|horizon`
The classical (e.g. MDP) single step observation/action formulation is included as a special case where `To=1` and `Ta=1`.
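The diagram above translates into simple index arithmetic: the executed action window starts at the last observed step of the length-`T` prediction. A sketch of the indexing, matching the first two rows of the diagram:

```python
To, Ta, T = 3, 4, 6   # n_obs_steps, n_action_steps, horizon

# Observation covers steps [0, To); the executed action window starts
# at the last observed step (index To-1) and spans Ta steps.
start = To - 1
end = start + Ta
assert end <= T

action_pred = list(range(T))          # stand-in for a (T, Da) prediction
action_exec = action_pred[start:end]  # the steps actually sent to the env
print(action_exec)  # [2, 3, 4, 5]
```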
## 🔩 Key Components
### `Workspace`
A `Workspace` object encapsulates all states and code needed to run an experiment.
* Inherits from [`BaseWorkspace`](./diffusion_policy/workspace/base_workspace.py).
* A single `OmegaConf` config object generated by `hydra` should contain all information needed to construct the Workspace object and run experiments. This config corresponds to `config/<workspace_name>.yaml` + hydra overrides.
* The `run` method contains the entire pipeline for the experiment.
* Checkpoints happen at the `Workspace` level. All training states implemented as object attributes are automatically saved by the `save_checkpoint` method.
* All other states for the experiment should be implemented as local variables in the `run` method.
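A minimal sketch of the attribute-level checkpointing idea (a toy illustration, not `BaseWorkspace`'s actual implementation, which also serializes model weights and supports exclusion lists):

```python
class MiniWorkspace:
    """Toy workspace: every plain attribute is training state."""
    def __init__(self):
        self.epoch = 0
        self.global_step = 0

    def save_checkpoint(self):
        # attributes are captured automatically, no per-field bookkeeping
        return dict(self.__dict__)

    def load_checkpoint(self, payload):
        self.__dict__.update(payload)

ws = MiniWorkspace()
ws.epoch, ws.global_step = 7, 3500
ckpt = ws.save_checkpoint()

resumed = MiniWorkspace()
resumed.load_checkpoint(ckpt)
print(resumed.epoch, resumed.global_step)  # 7 3500
```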
The entrypoint for training is `train.py`, which uses the `@hydra.main` decorator. Read [hydra](https://hydra.cc/)'s official documentation for command line arguments and config overrides. For example, the argument `task=<task_name>` will replace the `task` subtree of the config with the content of `config/task/<task_name>.yaml`, thereby selecting the task to run for this experiment.
### `Dataset`
A `Dataset` object:
* Inherits from `torch.utils.data.Dataset`.
* Returns a sample conforming to [the interface](#the-interface) depending on whether the task has Low Dim or Image observations.
* Has a method `get_normalizer` that returns a `LinearNormalizer` conforming to [the interface](#the-interface).
Normalization is a very common source of bugs during project development. It is sometimes helpful to print out the specific `scale` and `bias` vectors used for each key in the `LinearNormalizer`.
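For intuition, a linear normalizer stores per-dimension `scale` and `bias` such that `x * scale + bias` maps the data range to a fixed interval. A minimal numpy sketch of the idea (the real `LinearNormalizer` lives in `diffusion_policy/model/common/normalizer.py` and differs in detail):

```python
import numpy as np

def fit_linear(x, out_min=-1.0, out_max=1.0):
    """Fit per-dimension scale/bias mapping the range of x to [out_min, out_max]."""
    lo, hi = x.min(axis=0), x.max(axis=0)
    scale = (out_max - out_min) / (hi - lo)
    bias = out_min - lo * scale
    return scale, bias

data = np.array([[0.0, 10.0],
                 [5.0, 20.0],
                 [10.0, 30.0]])
scale, bias = fit_linear(data)
normalized = data * scale + bias
print(normalized.min(axis=0), normalized.max(axis=0))  # [-1. -1.] [1. 1.]
```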
Most of our implementations of `Dataset` use a combination of [`ReplayBuffer`](#replaybuffer) and [`SequenceSampler`](./diffusion_policy/common/sampler.py) to generate samples. Correctly handling padding at the beginning and end of each demonstration episode according to `To` and `Ta` is important for good performance. Please read our [`SequenceSampler`](./diffusion_policy/common/sampler.py) before implementing your own sampling method.
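The padding behavior can be sketched in a few lines (illustrative only; `SequenceSampler` is more general): windows that extend past an episode boundary repeat the episode's first or last step.

```python
import numpy as np

def sample_sequence(episode, start, horizon):
    """Length-`horizon` window starting at `start` (may be negative);
    out-of-range indices are clamped, i.e. padded by edge repetition."""
    idxs = np.clip(np.arange(start, start + horizon), 0, len(episode) - 1)
    return episode[idxs]

episode = np.arange(5)                   # stand-in for (N, ...) episode data
print(sample_sequence(episode, -2, 6))   # [0 0 0 1 2 3]  front padding
print(sample_sequence(episode, 3, 4))    # [3 4 4 4]      back padding
```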
### `Policy`
A `Policy` object:
* Inherits from `BaseLowdimPolicy` or `BaseImagePolicy`.
* Has a method `predict_action` that, given an observation dict, predicts actions conforming to [the interface](#the-interface).
* Has a method `set_normalizer` that takes in a `LinearNormalizer` and handles observation/action normalization internally in the policy.
* (optional) Might have a method `compute_loss` that takes in a batch and returns the loss to be optimized.
* (optional) Usually each `Policy` class corresponds to a `Workspace` class, due to differences in the training and evaluation process between methods.
### `EnvRunner`
An `EnvRunner` object abstracts away the subtle differences between task environments.
* Has a method `run` that takes a `Policy` object for evaluation, and returns a dict of logs and metrics. Each value should be compatible with `wandb.log`.
To maximize evaluation speed, we usually vectorize environments using our modification of [`gym.vector.AsyncVectorEnv`](./diffusion_policy/gym_util/async_vector_env.py), which runs each environment in a separate process (a workaround for the Python GIL).
⚠️ Since subprocesses are launched using `fork` on Linux, you need to be especially careful with environments that create their OpenGL context during initialization (e.g. robosuite): once inherited into the child process's memory space, the context often causes obscure bugs like segmentation faults. As a workaround, you can provide a `dummy_env_fn` that constructs an environment without initializing OpenGL.
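The subprocess pattern can be sketched with a toy environment (`CounterEnv` and `worker` are made up for this illustration, not part of the codebase): each worker constructs its own environment after the fork, so heavyweight state is never inherited from the parent.

```python
import multiprocessing as mp

class CounterEnv:
    """Toy stand-in for a task environment."""
    def __init__(self):
        self.t = 0
    def step(self, action):
        self.t += action
        return self.t

def worker(conn, env_fn):
    # The env is created *inside* the child process, after the fork,
    # so state like an OpenGL context is never inherited from the parent.
    env = env_fn()
    while True:
        cmd, data = conn.recv()
        if cmd == 'step':
            conn.send(env.step(data))
        else:  # 'close'
            conn.close()
            return

parent_conn, child_conn = mp.Pipe()
proc = mp.Process(target=worker, args=(child_conn, CounterEnv))
proc.start()
parent_conn.send(('step', 2))
r1 = parent_conn.recv()
parent_conn.send(('step', 3))
r2 = parent_conn.recv()
parent_conn.send(('close', None))
proc.join()
print(r1, r2)  # 2 5
```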
### `ReplayBuffer`
The [`ReplayBuffer`](./diffusion_policy/common/replay_buffer.py) is a key data structure for storing a demonstration dataset both in-memory and on-disk with chunking and compression. It makes heavy use of the [`zarr`](https://zarr.readthedocs.io/en/stable/index.html) format but also has a `numpy` backend for lower access overhead.
On disk, it can be stored as a nested directory (e.g. `data/pusht_cchi_v7_replay.zarr`) or a zip file (e.g. `data/robomimic/datasets/square/mh/image_abs.hdf5.zarr.zip`).
Due to the relatively small size of our datasets, it's often possible to store the entire image-based dataset in RAM with [`Jpeg2000` compression](./diffusion_policy/codecs/imagecodecs_numcodecs.py), which eliminates disk IO during training at the expense of increased CPU workload.
Example:
```
data/pusht_cchi_v7_replay.zarr
├── data
│ ├── action (25650, 2) float32
│ ├── img (25650, 96, 96, 3) float32
│ ├── keypoint (25650, 9, 2) float32
│ ├── n_contacts (25650, 1) float32
│ └── state (25650, 5) float32
└── meta
└── episode_ends (206,) int64
```
Each array in `data` stores one data field from all episodes, concatenated along the first dimension (time). The `meta/episode_ends` array stores the end index of each episode along the first dimension.
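For example, episodes can be recovered from the flat arrays by slicing at `episode_ends` (a plain-numpy sketch; `ReplayBuffer` provides helpers for this):

```python
import numpy as np

data = np.arange(10)                 # stand-in for a flat field, e.g. `action`
episode_ends = np.array([3, 7, 10])  # end index (exclusive) of each episode

starts = np.concatenate(([0], episode_ends[:-1]))
episodes = [data[s:e] for s, e in zip(starts, episode_ends)]
print([ep.tolist() for ep in episodes])  # [[0, 1, 2], [3, 4, 5, 6], [7, 8, 9]]
```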
## 🩹 Adding a Task
Read and imitate:
* `diffusion_policy/dataset/pusht_image_dataset.py`
* `diffusion_policy/env_runner/pusht_image_runner.py`
* `diffusion_policy/config/task/pusht_image.yaml`
Make sure that `shape_meta` corresponds to the input and output shapes of your task. Make sure `env_runner._target_` and `dataset._target_` point to the new classes you have added. When training, add `task=<your_task_name>` to `train.py`'s arguments.
## 🩹 Adding a Method
Read and imitate:
* `diffusion_policy/workspace/train_diffusion_unet_image_workspace.py`
* `diffusion_policy/policy/diffusion_unet_image_policy.py`
* `diffusion_policy/config/train_diffusion_unet_image_workspace.yaml`
Make sure your workspace yaml's `_target_` points to the new workspace class you created.

63
conda_environment.yaml Normal file
View File

@@ -0,0 +1,63 @@
name: robodiff
channels:
- pytorch
- pytorch3d
- nvidia
- conda-forge
dependencies:
- python=3.9
- pip=22.2.2
- cudatoolkit=11.6
- pytorch=1.12.1
- torchvision=0.13.1
- pytorch3d=0.7.0
- numpy=1.23.3
- numba==0.56.4
- scipy==1.9.1
- py-opencv=4.6.0
- cffi=1.15.1
- ipykernel=6.16
- matplotlib=3.6.1
- zarr=2.12.0
- numcodecs=0.10.2
- h5py=3.7.0
- hydra-core=1.2.0
- einops=0.4.1
- tqdm=4.64.1
- dill=0.3.5.1
- scikit-video=1.1.11
- scikit-image=0.19.3
- gym=0.21.0
- pymunk=6.2.1
- wandb=0.13.3
- threadpoolctl=3.1.0
- shapely=1.8.4
- cython=0.29.32
- imageio=2.22.0
- imageio-ffmpeg=0.4.7
- termcolor=2.0.1
- tensorboard=2.10.1
- tensorboardx=2.5.1
- psutil=5.9.2
- click=8.0.4
- boto3=1.24.96
- accelerate=0.13.2
- datasets=2.6.1
- diffusers=0.11.1
- av=10.0.0
- cmake=3.24.3
# trick to force reinstall imagecodecs via pip
- imagecodecs==2022.8.8
- pip:
- ray[default,tune]==2.2.0
# requires mujoco py dependencies libosmesa6-dev libgl1-mesa-glx libglfw3 patchelf
- free-mujoco-py==2.1.6
- pygame==2.1.2
- pybullet-svl==3.1.6.4
- robosuite @ https://github.com/cheng-chi/robosuite/archive/277ab9588ad7a4f4b55cf75508b44aa67ec171f0.tar.gz
- robomimic==0.2.0
- pytorchvideo==0.1.5
# pip package required for jpeg-xl
- imagecodecs==2022.9.26
- r3m @ https://github.com/facebookresearch/r3m/archive/b2334e726887fa0206962d7984c69c5fb09cceab.tar.gz
- dm-control==1.0.9

View File

@@ -0,0 +1,55 @@
name: robodiff
channels:
- pytorch
- conda-forge
dependencies:
- python=3.9
- pip=22.2.2
- pytorch=1.12.1
- torchvision=0.13.1
- numpy=1.23.3
- numba==0.56.4
- scipy==1.9.1
- py-opencv=4.6.0
- cffi=1.15.1
- ipykernel=6.16
- matplotlib=3.6.1
- zarr=2.12.0
- numcodecs=0.10.2
- h5py=3.7.0
- hydra-core=1.2.0
- einops=0.4.1
- tqdm=4.64.1
- dill=0.3.5.1
- scikit-video=1.1.11
- scikit-image=0.19.3
- gym=0.21.0
- pymunk=6.2.1
- wandb=0.13.3
- threadpoolctl=3.1.0
- shapely=1.8.4
- cython=0.29.32
- imageio=2.22.0
- imageio-ffmpeg=0.4.7
- termcolor=2.0.1
- tensorboard=2.10.1
- tensorboardx=2.5.1
- psutil=5.9.2
- click=8.0.4
- boto3=1.24.96
- accelerate=0.13.2
- datasets=2.6.1
- diffusers=0.11.1
- av=10.0.0
- cmake=3.24.3
# trick to force reinstall imagecodecs via pip
- imagecodecs==2022.8.8
- pip:
- ray[default,tune]==2.2.0
- pygame==2.1.2
- robomimic==0.2.0
- pytorchvideo==0.1.5
- atomics==1.0.2
# No support for jpeg-xl for MacOS
- imagecodecs==2022.9.26
- r3m @ https://github.com/facebookresearch/r3m/archive/b2334e726887fa0206962d7984c69c5fb09cceab.tar.gz

View File

@@ -0,0 +1,71 @@
name: robodiff
channels:
- pytorch
- pytorch3d
- nvidia
- conda-forge
dependencies:
- python=3.9
- pip=22.2.2
- cudatoolkit=11.6
- pytorch=1.12.1
- torchvision=0.13.1
- pytorch3d=0.7.0
- numpy=1.23.3
- numba==0.56.4
- scipy==1.9.1
- py-opencv=4.6.0
- cffi=1.15.1
- ipykernel=6.16
- matplotlib=3.6.1
- zarr=2.12.0
- numcodecs=0.10.2
- h5py=3.7.0
- hydra-core=1.2.0
- einops=0.4.1
- tqdm=4.64.1
- dill=0.3.5.1
- scikit-video=1.1.11
- scikit-image=0.19.3
- gym=0.21.0
- pymunk=6.2.1
- wandb=0.13.3
- threadpoolctl=3.1.0
- shapely=1.8.4
- cython=0.29.32
- imageio=2.22.0
- imageio-ffmpeg=0.4.7
- termcolor=2.0.1
- tensorboard=2.10.1
- tensorboardx=2.5.1
- psutil=5.9.2
- click=8.0.4
- boto3=1.24.96
- accelerate=0.13.2
- datasets=2.6.1
- diffusers=0.11.1
- av=10.0.0
- cmake=3.24.3
# trick to force reinstall imagecodecs via pip
- imagecodecs==2022.8.8
- pip:
- ray[default,tune]==2.2.0
# requires mujoco py dependencies libosmesa6-dev libgl1-mesa-glx libglfw3 patchelf
- free-mujoco-py==2.1.6
- pygame==2.1.2
- pybullet-svl==3.1.6.4
- robosuite @ https://github.com/cheng-chi/robosuite/archive/277ab9588ad7a4f4b55cf75508b44aa67ec171f0.tar.gz
- robomimic==0.2.0
- pytorchvideo==0.1.5
# requires librealsense https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md
- pyrealsense2==2.51.1.4348
# requires apt install libspnav-dev spacenavd; systemctl start spacenavd
- spnav @ https://github.com/cheng-chi/spnav/archive/c1c938ebe3cc542db4685e0d13850ff1abfdb943.tar.gz
- ur-rtde==1.5.5
- atomics==1.0.2
# pip package required for jpeg-xl
- imagecodecs==2022.9.26
- r3m @ https://github.com/facebookresearch/r3m/archive/b2334e726887fa0206962d7984c69c5fb09cceab.tar.gz
- dm-control==1.0.9
- pynput==1.7.6

160
demo_real_robot.py Normal file
View File

@@ -0,0 +1,160 @@
"""
Usage:
(robodiff)$ python demo_real_robot.py -o <demo_save_dir> --robot_ip <ip_of_ur5>
Robot movement:
Move your SpaceMouse to move the robot EEF (locked in xy plane).
Press SpaceMouse right button to unlock z axis.
Press SpaceMouse left button to enable rotation axes.
Recording control:
Click the opencv window (make sure it's in focus).
Press "C" to start recording.
Press "S" to stop recording.
Press "Q" to exit program.
Press "Backspace" to delete the previously recorded episode.
"""
# %%
import time
from multiprocessing.managers import SharedMemoryManager
import click
import cv2
import numpy as np
import scipy.spatial.transform as st
from diffusion_policy.real_world.real_env import RealEnv
from diffusion_policy.real_world.spacemouse_shared_memory import Spacemouse
from diffusion_policy.common.precise_sleep import precise_wait
from diffusion_policy.real_world.keystroke_counter import (
KeystrokeCounter, Key, KeyCode
)
@click.command()
@click.option('--output', '-o', required=True, help="Directory to save demonstration dataset.")
@click.option('--robot_ip', '-ri', required=True, help="UR5's IP address e.g. 192.168.0.204")
@click.option('--vis_camera_idx', default=0, type=int, help="Which RealSense camera to visualize.")
@click.option('--init_joints', '-j', is_flag=True, default=False, help="Whether to initialize robot joint configuration in the beginning.")
@click.option('--frequency', '-f', default=10, type=float, help="Control frequency in Hz.")
@click.option('--command_latency', '-cl', default=0.01, type=float, help="Latency between receiving a SpaceMouse command and executing it on the robot, in seconds.")
def main(output, robot_ip, vis_camera_idx, init_joints, frequency, command_latency):
dt = 1/frequency
with SharedMemoryManager() as shm_manager:
with KeystrokeCounter() as key_counter, \
Spacemouse(shm_manager=shm_manager) as sm, \
RealEnv(
output_dir=output,
robot_ip=robot_ip,
# recording resolution
obs_image_resolution=(1280,720),
frequency=frequency,
init_joints=init_joints,
enable_multi_cam_vis=True,
record_raw_video=True,
# number of threads per camera view for video recording (H.264)
thread_per_video=3,
# video recording quality, lower is better (but slower).
video_crf=21,
shm_manager=shm_manager
) as env:
cv2.setNumThreads(1)
# realsense exposure
env.realsense.set_exposure(exposure=120, gain=0)
# realsense white balance
env.realsense.set_white_balance(white_balance=5900)
time.sleep(1.0)
print('Ready!')
state = env.get_robot_state()
target_pose = state['TargetTCPPose']
t_start = time.monotonic()
iter_idx = 0
stop = False
is_recording = False
while not stop:
# calculate timing
t_cycle_end = t_start + (iter_idx + 1) * dt
t_sample = t_cycle_end - command_latency
t_command_target = t_cycle_end + dt
# pump obs
obs = env.get_obs()
# handle key presses
press_events = key_counter.get_press_events()
for key_stroke in press_events:
if key_stroke == KeyCode(char='q'):
# Exit program
stop = True
elif key_stroke == KeyCode(char='c'):
# Start recording
env.start_episode(t_start + (iter_idx + 2) * dt - time.monotonic() + time.time())
key_counter.clear()
is_recording = True
print('Recording!')
elif key_stroke == KeyCode(char='s'):
# Stop recording
env.end_episode()
key_counter.clear()
is_recording = False
print('Stopped.')
elif key_stroke == Key.backspace:
# Delete the most recent recorded episode
if click.confirm('Are you sure you want to drop an episode?'):
env.drop_episode()
key_counter.clear()
is_recording = False
# delete
stage = key_counter[Key.space]
# visualize
vis_img = obs[f'camera_{vis_camera_idx}'][-1,:,:,::-1].copy()
episode_id = env.replay_buffer.n_episodes
text = f'Episode: {episode_id}, Stage: {stage}'
if is_recording:
text += ', Recording!'
cv2.putText(
vis_img,
text,
(10,30),
fontFace=cv2.FONT_HERSHEY_SIMPLEX,
fontScale=1,
thickness=2,
color=(255,255,255)
)
cv2.imshow('default', vis_img)
cv2.pollKey()
precise_wait(t_sample)
# get teleop command
sm_state = sm.get_motion_state_transformed()
# print(sm_state)
dpos = sm_state[:3] * (env.max_pos_speed / frequency)
drot_xyz = sm_state[3:] * (env.max_rot_speed / frequency)
if not sm.is_button_pressed(0):
# translation mode
drot_xyz[:] = 0
else:
dpos[:] = 0
if not sm.is_button_pressed(1):
# 2D translation mode
dpos[2] = 0
drot = st.Rotation.from_euler('xyz', drot_xyz)
target_pose[:3] += dpos
target_pose[3:] = (drot * st.Rotation.from_rotvec(
target_pose[3:])).as_rotvec()
# execute teleop command
env.exec_actions(
actions=[target_pose],
timestamps=[t_command_target-time.monotonic()+time.time()],
stages=[stage])
precise_wait(t_cycle_end)
iter_idx += 1
# %%
if __name__ == '__main__':
main()

File diff suppressed because it is too large

View File

@@ -0,0 +1,59 @@
from typing import Optional, Dict
import os
class TopKCheckpointManager:
def __init__(self,
save_dir,
monitor_key: str,
mode='min',
k=1,
format_str='epoch={epoch:03d}-train_loss={train_loss:.3f}.ckpt'
):
assert mode in ['max', 'min']
assert k >= 0
self.save_dir = save_dir
self.monitor_key = monitor_key
self.mode = mode
self.k = k
self.format_str = format_str
self.path_value_map = dict()
def get_ckpt_path(self, data: Dict[str, float]) -> Optional[str]:
if self.k == 0:
return None
value = data[self.monitor_key]
ckpt_path = os.path.join(
self.save_dir, self.format_str.format(**data))
if len(self.path_value_map) < self.k:
# under-capacity
self.path_value_map[ckpt_path] = value
return ckpt_path
# at capacity
sorted_map = sorted(self.path_value_map.items(), key=lambda x: x[1])
min_path, min_value = sorted_map[0]
max_path, max_value = sorted_map[-1]
delete_path = None
if self.mode == 'max':
if value > min_value:
delete_path = min_path
else:
if value < max_value:
delete_path = max_path
if delete_path is None:
return None
else:
del self.path_value_map[delete_path]
self.path_value_map[ckpt_path] = value
if not os.path.exists(self.save_dir):
os.mkdir(self.save_dir)
if os.path.exists(delete_path):
os.remove(delete_path)
return ckpt_path
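The eviction rule above keeps the k best checkpoints by monitored value. The same bookkeeping can be sketched as a standalone function (names here are illustrative, not part of the class):

```python
def topk_update(path_value_map, k, mode, ckpt_path, value):
    """Return (should_save, delete_path) following the top-k rule above."""
    if len(path_value_map) < k:
        # under-capacity: always save
        path_value_map[ckpt_path] = value
        return True, None
    sorted_map = sorted(path_value_map.items(), key=lambda x: x[1])
    min_path, min_value = sorted_map[0]
    max_path, max_value = sorted_map[-1]
    delete_path = None
    if mode == 'max' and value > min_value:
        delete_path = min_path       # evict the current worst (smallest)
    elif mode == 'min' and value < max_value:
        delete_path = max_path       # evict the current worst (largest)
    if delete_path is None:
        return False, None
    del path_value_map[delete_path]
    path_value_map[ckpt_path] = value
    return True, delete_path

m = {}
assert topk_update(m, 2, 'min', 'a.ckpt', 1.0) == (True, None)
assert topk_update(m, 2, 'min', 'b.ckpt', 2.0) == (True, None)
# at capacity: 1.5 beats the current worst (2.0), so b.ckpt is evicted
assert topk_update(m, 2, 'min', 'c.ckpt', 1.5) == (True, 'b.ckpt')
```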

View File

@@ -0,0 +1,150 @@
from typing import Tuple
import math
import cv2
import numpy as np
def draw_reticle(img, u, v, label_color):
"""
Draws a reticle (cross-hair) on the image at the given position on top of
the original image.
@param img (In/Out) uint8 3 channel image
@param u X coordinate (width)
@param v Y coordinate (height)
@param label_color tuple of 3 ints for the color (OpenCV BGR order) used for drawing.
"""
# Cast to int.
u = int(u)
v = int(v)
white = (255, 255, 255)
cv2.circle(img, (u, v), 10, label_color, 1)
cv2.circle(img, (u, v), 11, white, 1)
cv2.circle(img, (u, v), 12, label_color, 1)
cv2.line(img, (u, v + 1), (u, v + 3), white, 1)
cv2.line(img, (u + 1, v), (u + 3, v), white, 1)
cv2.line(img, (u, v - 1), (u, v - 3), white, 1)
cv2.line(img, (u - 1, v), (u - 3, v), white, 1)
def draw_text(
img,
*,
text,
uv_top_left,
color=(255, 255, 255),
fontScale=0.5,
thickness=1,
fontFace=cv2.FONT_HERSHEY_SIMPLEX,
outline_color=(0, 0, 0),
line_spacing=1.5,
):
"""
Draws multiline text with an outline.
"""
assert isinstance(text, str)
uv_top_left = np.array(uv_top_left, dtype=float)
assert uv_top_left.shape == (2,)
for line in text.splitlines():
(w, h), _ = cv2.getTextSize(
text=line,
fontFace=fontFace,
fontScale=fontScale,
thickness=thickness,
)
uv_bottom_left_i = uv_top_left + [0, h]
org = tuple(uv_bottom_left_i.astype(int))
if outline_color is not None:
cv2.putText(
img,
text=line,
org=org,
fontFace=fontFace,
fontScale=fontScale,
color=outline_color,
thickness=thickness * 3,
lineType=cv2.LINE_AA,
)
cv2.putText(
img,
text=line,
org=org,
fontFace=fontFace,
fontScale=fontScale,
color=color,
thickness=thickness,
lineType=cv2.LINE_AA,
)
uv_top_left += [0, h * line_spacing]
def get_image_transform(
input_res: Tuple[int,int]=(1280,720),
output_res: Tuple[int,int]=(640,480),
bgr_to_rgb: bool=False):
iw, ih = input_res
ow, oh = output_res
rw, rh = None, None
interp_method = cv2.INTER_AREA
if (iw/ih) >= (ow/oh):
# input is wider
rh = oh
rw = math.ceil(rh / ih * iw)
if oh > ih:
interp_method = cv2.INTER_LINEAR
else:
rw = ow
rh = math.ceil(rw / iw * ih)
if ow > iw:
interp_method = cv2.INTER_LINEAR
w_slice_start = (rw - ow) // 2
w_slice = slice(w_slice_start, w_slice_start + ow)
h_slice_start = (rh - oh) // 2
h_slice = slice(h_slice_start, h_slice_start + oh)
c_slice = slice(None)
if bgr_to_rgb:
c_slice = slice(None, None, -1)
def transform(img: np.ndarray):
assert img.shape == (ih, iw, 3)
# resize
img = cv2.resize(img, (rw, rh), interpolation=interp_method)
# crop
img = img[h_slice, w_slice, c_slice]
return img
return transform
def optimal_row_cols(
n_cameras,
in_wh_ratio,
max_resolution=(1920, 1080)
):
out_w, out_h = max_resolution
out_wh_ratio = out_w / out_h
n_rows = np.arange(n_cameras,dtype=np.int64) + 1
n_cols = np.ceil(n_cameras / n_rows).astype(np.int64)
cat_wh_ratio = in_wh_ratio * (n_cols / n_rows)
ratio_diff = np.abs(out_wh_ratio - cat_wh_ratio)
best_idx = np.argmin(ratio_diff)
best_n_row = n_rows[best_idx]
best_n_col = n_cols[best_idx]
best_cat_wh_ratio = cat_wh_ratio[best_idx]
rw, rh = None, None
if best_cat_wh_ratio >= out_wh_ratio:
# cat is wider
rw = math.floor(out_w / best_n_col)
rh = math.floor(rw / in_wh_ratio)
else:
rh = math.floor(out_h / best_n_row)
rw = math.floor(rh * in_wh_ratio)
# crop_resolution = (rw, rh)
return rw, rh, best_n_col, best_n_row
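`optimal_row_cols` searches over row counts for the grid whose concatenated aspect ratio best matches the output canvas. A quick self-contained check (the function is restated here since the module isn't importable in isolation; the 2.0 input ratio is an arbitrary example):

```python
import math
import numpy as np

def optimal_row_cols(n_cameras, in_wh_ratio, max_resolution=(1920, 1080)):
    # restated from the source: pick the row/col split whose concatenated
    # width/height ratio is closest to the output canvas ratio
    out_w, out_h = max_resolution
    out_wh_ratio = out_w / out_h
    n_rows = np.arange(n_cameras, dtype=np.int64) + 1
    n_cols = np.ceil(n_cameras / n_rows).astype(np.int64)
    cat_wh_ratio = in_wh_ratio * (n_cols / n_rows)
    best_idx = np.argmin(np.abs(out_wh_ratio - cat_wh_ratio))
    best_n_row = int(n_rows[best_idx])
    best_n_col = int(n_cols[best_idx])
    if cat_wh_ratio[best_idx] >= out_wh_ratio:
        # concatenated grid is wider than the canvas: width-limited
        rw = math.floor(out_w / best_n_col)
        rh = math.floor(rw / in_wh_ratio)
    else:
        rh = math.floor(out_h / best_n_row)
        rw = math.floor(rh * in_wh_ratio)
    return rw, rh, best_n_col, best_n_row

# four 2:1 cameras on a 1920x1080 canvas tile as a 2x2 grid of 960x480 cells
assert optimal_row_cols(4, 2.0) == (960, 480, 2, 2)
```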

View File

@@ -0,0 +1,23 @@
import cv2
import numpy as np
def render_env_video(env, states, actions=None):
observations = states
imgs = list()
for i in range(len(observations)):
state = observations[i]
env.set_state(state)
if i == 0:
# set the state twice on the first frame so the env is fully reset
env.set_state(state)
img = env.render()
# draw action
if actions is not None:
action = actions[i]
coord = (action / 512 * 96).astype(np.int32)
cv2.drawMarker(img, coord,
color=(255,0,0), markerType=cv2.MARKER_CROSS,
markerSize=8, thickness=1)
imgs.append(img)
imgs = np.array(imgs)
return imgs

View File

@@ -0,0 +1,117 @@
from typing import Optional, Callable, Any, Sequence
import os
import copy
import json
import numbers
import pandas as pd
def read_json_log(path: str,
required_keys: Sequence[str]=tuple(),
**kwargs) -> pd.DataFrame:
"""
Read json-per-line file, with potentially incomplete lines.
kwargs passed to pd.read_json
"""
lines = list()
with open(path, 'r') as f:
while True:
# one json per line
line = f.readline()
if len(line) == 0:
# EOF
break
elif not line.endswith('\n'):
# incomplete line
break
is_relevant = False
for k in required_keys:
if k in line:
is_relevant = True
break
if is_relevant:
lines.append(line)
if len(lines) < 1:
return pd.DataFrame()
json_buf = f'[{",".join([line for line in (line.strip() for line in lines) if line])}]'
df = pd.read_json(json_buf, **kwargs)
return df
class JsonLogger:
def __init__(self, path: str,
filter_fn: Optional[Callable[[str,Any],bool]]=None):
if filter_fn is None:
filter_fn = lambda k,v: isinstance(v, numbers.Number)
# default to append mode
self.path = path
self.filter_fn = filter_fn
self.file = None
self.last_log = None
def start(self):
# use line buffering
try:
self.file = file = open(self.path, 'r+', buffering=1)
except FileNotFoundError:
self.file = file = open(self.path, 'w+', buffering=1)
# Move the pointer (similar to a cursor in a text editor) to the end of the file
pos = file.seek(0, os.SEEK_END)
# Read each character in the file one at a time from the last
# character going backwards, searching for a newline character
# If we find a new line, exit the search
while pos > 0 and file.read(1) != "\n":
pos -= 1
file.seek(pos, os.SEEK_SET)
# now the file pointer is at one past the last '\n'
# and pos is at the last '\n'.
last_line_end = file.tell()
# find the start of second last line
pos = max(0, pos-1)
file.seek(pos, os.SEEK_SET)
while pos > 0 and file.read(1) != "\n":
pos -= 1
file.seek(pos, os.SEEK_SET)
# now the file pointer is at one past the second last '\n'
last_line_start = file.tell()
if last_line_start < last_line_end:
# has last line of json
last_line = file.readline()
self.last_log = json.loads(last_line)
# remove the last incomplete line
file.seek(last_line_end)
file.truncate()
def stop(self):
self.file.close()
self.file = None
def __enter__(self):
self.start()
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.stop()
def log(self, data: dict):
filtered_data = dict(
filter(lambda x: self.filter_fn(*x), data.items()))
# save current as last log
self.last_log = filtered_data
for k, v in filtered_data.items():
if isinstance(v, numbers.Integral):
filtered_data[k] = int(v)
elif isinstance(v, numbers.Number):
filtered_data[k] = float(v)
buf = json.dumps(filtered_data)
# ensure one line per json
buf = buf.replace('\n','') + '\n'
self.file.write(buf)
def get_last_log(self):
return copy.deepcopy(self.last_log)
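`start()` deliberately truncates a trailing partial line so a crashed run can be resumed cleanly. The recovery idea can be sketched with plain file I/O (a standalone illustration, not the class itself):

```python
import json
import tempfile

# a log with two complete records and one torn write at the end
with tempfile.NamedTemporaryFile('w', suffix='.json.txt', delete=False) as f:
    f.write(json.dumps({'epoch': 0, 'loss': 1.0}) + '\n')
    f.write(json.dumps({'epoch': 1, 'loss': 0.5}) + '\n')
    f.write('{"epoch": 2, "lo')  # incomplete line, no trailing newline
    path = f.name

# keep only lines terminated by '\n'; the torn tail is dropped
with open(path, 'r') as f:
    complete = [line for line in f.readlines() if line.endswith('\n')]
last_log = json.loads(complete[-1])
```

After recovery, `last_log` holds the last fully-written record, and appending can safely resume from there.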

View File

@@ -0,0 +1,32 @@
import functools
def nested_dict_map(f, x):
"""
Map f over all leaf of nested dict x
"""
if not isinstance(x, dict):
return f(x)
y = dict()
for key, value in x.items():
y[key] = nested_dict_map(f, value)
return y
def nested_dict_reduce(f, x):
"""
Map f over all values of nested dict x, and reduce to a single value
"""
if not isinstance(x, dict):
return x
reduced_values = list()
for value in x.values():
reduced_values.append(nested_dict_reduce(f, value))
y = functools.reduce(f, reduced_values)
return y
def nested_dict_check(f, x):
bool_dict = nested_dict_map(f, x)
result = nested_dict_reduce(lambda x, y: x and y, bool_dict)
return result
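The three helpers compose: `nested_dict_check` maps a predicate over every leaf, then and-reduces the results. A runnable sketch (logic restated from the module so the block is self-contained):

```python
import functools

def nested_dict_map(f, x):
    # apply f to every leaf of a nested dict (restated from the module)
    if not isinstance(x, dict):
        return f(x)
    return {k: nested_dict_map(f, v) for k, v in x.items()}

def nested_dict_check(f, x):
    # True iff the predicate f holds at every leaf
    def reduce_all(d):
        if not isinstance(d, dict):
            return d
        return functools.reduce(lambda a, b: a and b,
                                (reduce_all(v) for v in d.values()))
    return reduce_all(nested_dict_map(f, x))

cfg = {'lr': 1e-4, 'model': {'layers': 4, 'dropout': 0.1}}
assert nested_dict_check(lambda v: isinstance(v, (int, float)), cfg)
assert not nested_dict_check(lambda v: isinstance(v, int), cfg)
```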

View File

@@ -0,0 +1,223 @@
from diffusion_policy.model.common.normalizer import SingleFieldLinearNormalizer
from diffusion_policy.common.pytorch_util import dict_apply, dict_apply_reduce, dict_apply_split
import numpy as np
def get_range_normalizer_from_stat(stat, output_max=1, output_min=-1, range_eps=1e-7):
# -1, 1 normalization
input_max = stat['max']
input_min = stat['min']
input_range = input_max - input_min
ignore_dim = input_range < range_eps
input_range[ignore_dim] = output_max - output_min
scale = (output_max - output_min) / input_range
offset = output_min - scale * input_min
offset[ignore_dim] = (output_max + output_min) / 2 - input_min[ignore_dim]
return SingleFieldLinearNormalizer.create_manual(
scale=scale,
offset=offset,
input_stats_dict=stat
)
def get_image_range_normalizer():
scale = np.array([2], dtype=np.float32)
offset = np.array([-1], dtype=np.float32)
stat = {
'min': np.array([0], dtype=np.float32),
'max': np.array([1], dtype=np.float32),
'mean': np.array([0.5], dtype=np.float32),
'std': np.array([np.sqrt(1/12)], dtype=np.float32)
}
return SingleFieldLinearNormalizer.create_manual(
scale=scale,
offset=offset,
input_stats_dict=stat
)
def get_identity_normalizer_from_stat(stat):
scale = np.ones_like(stat['min'])
offset = np.zeros_like(stat['min'])
return SingleFieldLinearNormalizer.create_manual(
scale=scale,
offset=offset,
input_stats_dict=stat
)
def robomimic_abs_action_normalizer_from_stat(stat, rotation_transformer):
result = dict_apply_split(
stat, lambda x: {
'pos': x[...,:3],
'rot': x[...,3:6],
'gripper': x[...,6:]
})
def get_pos_param_info(stat, output_max=1, output_min=-1, range_eps=1e-7):
# -1, 1 normalization
input_max = stat['max']
input_min = stat['min']
input_range = input_max - input_min
ignore_dim = input_range < range_eps
input_range[ignore_dim] = output_max - output_min
scale = (output_max - output_min) / input_range
offset = output_min - scale * input_min
offset[ignore_dim] = (output_max + output_min) / 2 - input_min[ignore_dim]
return {'scale': scale, 'offset': offset}, stat
def get_rot_param_info(stat):
example = rotation_transformer.forward(stat['mean'])
scale = np.ones_like(example)
offset = np.zeros_like(example)
info = {
'max': np.ones_like(example),
'min': np.full_like(example, -1),
'mean': np.zeros_like(example),
'std': np.ones_like(example)
}
return {'scale': scale, 'offset': offset}, info
def get_gripper_param_info(stat):
example = stat['max']
scale = np.ones_like(example)
offset = np.zeros_like(example)
info = {
'max': np.ones_like(example),
'min': np.full_like(example, -1),
'mean': np.zeros_like(example),
'std': np.ones_like(example)
}
return {'scale': scale, 'offset': offset}, info
pos_param, pos_info = get_pos_param_info(result['pos'])
rot_param, rot_info = get_rot_param_info(result['rot'])
gripper_param, gripper_info = get_gripper_param_info(result['gripper'])
param = dict_apply_reduce(
[pos_param, rot_param, gripper_param],
lambda x: np.concatenate(x,axis=-1))
info = dict_apply_reduce(
[pos_info, rot_info, gripper_info],
lambda x: np.concatenate(x,axis=-1))
return SingleFieldLinearNormalizer.create_manual(
scale=param['scale'],
offset=param['offset'],
input_stats_dict=info
)
def robomimic_abs_action_only_normalizer_from_stat(stat):
result = dict_apply_split(
stat, lambda x: {
'pos': x[...,:3],
'other': x[...,3:]
})
def get_pos_param_info(stat, output_max=1, output_min=-1, range_eps=1e-7):
# -1, 1 normalization
input_max = stat['max']
input_min = stat['min']
input_range = input_max - input_min
ignore_dim = input_range < range_eps
input_range[ignore_dim] = output_max - output_min
scale = (output_max - output_min) / input_range
offset = output_min - scale * input_min
offset[ignore_dim] = (output_max + output_min) / 2 - input_min[ignore_dim]
return {'scale': scale, 'offset': offset}, stat
def get_other_param_info(stat):
example = stat['max']
scale = np.ones_like(example)
offset = np.zeros_like(example)
info = {
'max': np.ones_like(example),
'min': np.full_like(example, -1),
'mean': np.zeros_like(example),
'std': np.ones_like(example)
}
return {'scale': scale, 'offset': offset}, info
pos_param, pos_info = get_pos_param_info(result['pos'])
other_param, other_info = get_other_param_info(result['other'])
param = dict_apply_reduce(
[pos_param, other_param],
lambda x: np.concatenate(x,axis=-1))
info = dict_apply_reduce(
[pos_info, other_info],
lambda x: np.concatenate(x,axis=-1))
return SingleFieldLinearNormalizer.create_manual(
scale=param['scale'],
offset=param['offset'],
input_stats_dict=info
)
def robomimic_abs_action_only_dual_arm_normalizer_from_stat(stat):
Da = stat['max'].shape[-1]
Dah = Da // 2
result = dict_apply_split(
stat, lambda x: {
'pos0': x[...,:3],
'other0': x[...,3:Dah],
'pos1': x[...,Dah:Dah+3],
'other1': x[...,Dah+3:]
})
def get_pos_param_info(stat, output_max=1, output_min=-1, range_eps=1e-7):
# -1, 1 normalization
input_max = stat['max']
input_min = stat['min']
input_range = input_max - input_min
ignore_dim = input_range < range_eps
input_range[ignore_dim] = output_max - output_min
scale = (output_max - output_min) / input_range
offset = output_min - scale * input_min
offset[ignore_dim] = (output_max + output_min) / 2 - input_min[ignore_dim]
return {'scale': scale, 'offset': offset}, stat
def get_other_param_info(stat):
example = stat['max']
scale = np.ones_like(example)
offset = np.zeros_like(example)
info = {
'max': np.ones_like(example),
'min': np.full_like(example, -1),
'mean': np.zeros_like(example),
'std': np.ones_like(example)
}
return {'scale': scale, 'offset': offset}, info
pos0_param, pos0_info = get_pos_param_info(result['pos0'])
pos1_param, pos1_info = get_pos_param_info(result['pos1'])
other0_param, other0_info = get_other_param_info(result['other0'])
other1_param, other1_info = get_other_param_info(result['other1'])
param = dict_apply_reduce(
[pos0_param, other0_param, pos1_param, other1_param],
lambda x: np.concatenate(x,axis=-1))
info = dict_apply_reduce(
[pos0_info, other0_info, pos1_info, other1_info],
lambda x: np.concatenate(x,axis=-1))
return SingleFieldLinearNormalizer.create_manual(
scale=param['scale'],
offset=param['offset'],
input_stats_dict=info
)
def array_to_stats(arr: np.ndarray):
stat = {
'min': np.min(arr, axis=0),
'max': np.max(arr, axis=0),
'mean': np.mean(arr, axis=0),
'std': np.std(arr, axis=0)
}
return stat
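The scale/offset algebra used throughout this file maps [min, max] to [-1, 1] per dimension, with degenerate (constant) dimensions pinned to the output midpoint. Checking the formulas directly with numpy, outside the normalizer class:

```python
import numpy as np

stat_min = np.array([0.0, -2.0, 5.0])
stat_max = np.array([10.0, 2.0, 5.0])   # last dim is constant (degenerate)
output_max, output_min, range_eps = 1, -1, 1e-7

input_range = stat_max - stat_min
ignore_dim = input_range < range_eps
input_range[ignore_dim] = output_max - output_min
scale = (output_max - output_min) / input_range
offset = output_min - scale * stat_min
# degenerate dims: map the constant value to the output midpoint
offset[ignore_dim] = (output_max + output_min) / 2 - stat_min[ignore_dim]

assert np.allclose(stat_min * scale + offset, [-1, -1, 0])
assert np.allclose(stat_max * scale + offset, [1, 1, 0])
```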

View File

@@ -0,0 +1,208 @@
from typing import Union
import numbers
import numpy as np
import scipy.interpolate as si
import scipy.spatial.transform as st
def rotation_distance(a: st.Rotation, b: st.Rotation) -> float:
return (b * a.inv()).magnitude()
def pose_distance(start_pose, end_pose):
start_pose = np.array(start_pose)
end_pose = np.array(end_pose)
start_pos = start_pose[:3]
end_pos = end_pose[:3]
start_rot = st.Rotation.from_rotvec(start_pose[3:])
end_rot = st.Rotation.from_rotvec(end_pose[3:])
pos_dist = np.linalg.norm(end_pos - start_pos)
rot_dist = rotation_distance(start_rot, end_rot)
return pos_dist, rot_dist
class PoseTrajectoryInterpolator:
def __init__(self, times: np.ndarray, poses: np.ndarray):
assert len(times) >= 1
assert len(poses) == len(times)
if not isinstance(times, np.ndarray):
times = np.array(times)
if not isinstance(poses, np.ndarray):
poses = np.array(poses)
if len(times) == 1:
# special treatment for single-step interpolation
self.single_step = True
self._times = times
self._poses = poses
else:
self.single_step = False
assert np.all(times[1:] >= times[:-1])
pos = poses[:,:3]
rot = st.Rotation.from_rotvec(poses[:,3:])
self.pos_interp = si.interp1d(times, pos,
axis=0, assume_sorted=True)
self.rot_interp = st.Slerp(times, rot)
@property
def times(self) -> np.ndarray:
if self.single_step:
return self._times
else:
return self.pos_interp.x
@property
def poses(self) -> np.ndarray:
if self.single_step:
return self._poses
else:
n = len(self.times)
poses = np.zeros((n, 6))
poses[:,:3] = self.pos_interp.y
poses[:,3:] = self.rot_interp(self.times).as_rotvec()
return poses
def trim(self,
start_t: float, end_t: float
) -> "PoseTrajectoryInterpolator":
assert start_t <= end_t
times = self.times
should_keep = (start_t < times) & (times < end_t)
keep_times = times[should_keep]
all_times = np.concatenate([[start_t], keep_times, [end_t]])
# remove duplicates, Slerp requires strictly increasing x
all_times = np.unique(all_times)
# interpolate
all_poses = self(all_times)
return PoseTrajectoryInterpolator(times=all_times, poses=all_poses)
def drive_to_waypoint(self,
pose, time, curr_time,
max_pos_speed=np.inf,
max_rot_speed=np.inf
) -> "PoseTrajectoryInterpolator":
assert(max_pos_speed > 0)
assert(max_rot_speed > 0)
time = max(time, curr_time)
curr_pose = self(curr_time)
pos_dist, rot_dist = pose_distance(curr_pose, pose)
pos_min_duration = pos_dist / max_pos_speed
rot_min_duration = rot_dist / max_rot_speed
duration = time - curr_time
duration = max(duration, max(pos_min_duration, rot_min_duration))
assert duration >= 0
last_waypoint_time = curr_time + duration
# insert new pose
trimmed_interp = self.trim(curr_time, curr_time)
times = np.append(trimmed_interp.times, [last_waypoint_time], axis=0)
poses = np.append(trimmed_interp.poses, [pose], axis=0)
# create new interpolator
final_interp = PoseTrajectoryInterpolator(times, poses)
return final_interp
def schedule_waypoint(self,
pose, time,
max_pos_speed=np.inf,
max_rot_speed=np.inf,
curr_time=None,
last_waypoint_time=None
) -> "PoseTrajectoryInterpolator":
assert(max_pos_speed > 0)
assert(max_rot_speed > 0)
if last_waypoint_time is not None:
assert curr_time is not None
# trim current interpolator to between curr_time and last_waypoint_time
start_time = self.times[0]
end_time = self.times[-1]
assert start_time <= end_time
if curr_time is not None:
if time <= curr_time:
# if insert time is earlier than current time
# no effect should be done to the interpolator
return self
# now, curr_time < time
start_time = max(curr_time, start_time)
if last_waypoint_time is not None:
# if last_waypoint_time is earlier than start_time
# use start_time
if time <= last_waypoint_time:
end_time = curr_time
else:
end_time = max(last_waypoint_time, curr_time)
else:
end_time = curr_time
end_time = min(end_time, time)
start_time = min(start_time, end_time)
# end time should be the latest of all times except time
# after this we can assume order (proven by zhenjia, due to the 2 min operations)
# Constraints:
# start_time <= end_time <= time (proven by zhenjia)
# curr_time <= start_time (proven by zhenjia)
# curr_time <= time (proven by zhenjia)
# time can't change
# last_waypoint_time can't change
# curr_time can't change
assert start_time <= end_time
assert end_time <= time
if last_waypoint_time is not None:
if time <= last_waypoint_time:
assert end_time == curr_time
else:
assert end_time == max(last_waypoint_time, curr_time)
if curr_time is not None:
assert curr_time <= start_time
assert curr_time <= time
trimmed_interp = self.trim(start_time, end_time)
# after this, all waypoints in trimmed_interp are within start_time and end_time
# and are earlier than time
# determine speed
duration = time - end_time
end_pose = trimmed_interp(end_time)
pos_dist, rot_dist = pose_distance(pose, end_pose)
pos_min_duration = pos_dist / max_pos_speed
rot_min_duration = rot_dist / max_rot_speed
duration = max(duration, max(pos_min_duration, rot_min_duration))
assert duration >= 0
last_waypoint_time = end_time + duration
# insert new pose
times = np.append(trimmed_interp.times, [last_waypoint_time], axis=0)
poses = np.append(trimmed_interp.poses, [pose], axis=0)
# create new interpolator
final_interp = PoseTrajectoryInterpolator(times, poses)
return final_interp
def __call__(self, t: Union[numbers.Number, np.ndarray]) -> np.ndarray:
is_single = False
if isinstance(t, numbers.Number):
is_single = True
t = np.array([t])
pose = np.zeros((len(t), 6))
if self.single_step:
pose[:] = self._poses[0]
else:
start_time = self.times[0]
end_time = self.times[-1]
t = np.clip(t, start_time, end_time)
pose = np.zeros((len(t), 6))
pose[:,:3] = self.pos_interp(t)
pose[:,3:] = self.rot_interp(t).as_rotvec()
if is_single:
pose = pose[0]
return pose
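The interpolator pairs linear interpolation for position with Slerp for orientation. Its core, in a few standalone lines (not the class itself; the two waypoints are illustrative):

```python
import numpy as np
import scipy.interpolate as si
import scipy.spatial.transform as st

times = np.array([0.0, 1.0])
poses = np.array([
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],          # identity orientation
    [1.0, 0.0, 0.0, 0.0, 0.0, np.pi / 2],    # 90 deg about z
])
# position: linear interpolation; orientation: spherical linear interpolation
pos_interp = si.interp1d(times, poses[:, :3], axis=0, assume_sorted=True)
rot_interp = st.Slerp(times, st.Rotation.from_rotvec(poses[:, 3:]))

t = 0.5
pose = np.concatenate([pos_interp(t), rot_interp(t).as_rotvec()])
assert np.allclose(pose[:3], [0.5, 0.0, 0.0])
assert np.allclose(pose[3:], [0.0, 0.0, np.pi / 4])
```

Slerp keeps the angular velocity constant between waypoints, which is why the midpoint comes out at exactly 45 degrees rather than the rotation-vector average.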

View File

@@ -0,0 +1,25 @@
import time
def precise_sleep(dt: float, slack_time: float=0.001, time_func=time.monotonic):
"""
Use hybrid of time.sleep and spinning to minimize jitter.
Sleep dt - slack_time seconds first, then spin for the rest.
"""
t_start = time_func()
if dt > slack_time:
time.sleep(dt - slack_time)
t_end = t_start + dt
while time_func() < t_end:
pass
return
def precise_wait(t_end: float, slack_time: float=0.001, time_func=time.monotonic):
t_start = time_func()
t_wait = t_end - t_start
if t_wait > 0:
t_sleep = t_wait - slack_time
if t_sleep > 0:
time.sleep(t_sleep)
while time_func() < t_end:
pass
return
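Because the spin loop runs until `t_end`, the function can return late but never early. A quick check (function restated from the source so the block runs standalone):

```python
import time

def precise_sleep(dt, slack_time=0.001, time_func=time.monotonic):
    # restated from the source: sleep most of the interval, spin the rest
    t_start = time_func()
    if dt > slack_time:
        time.sleep(dt - slack_time)
    t_end = t_start + dt
    while time_func() < t_end:
        pass

t0 = time.monotonic()
precise_sleep(0.02)
elapsed = time.monotonic() - t0
assert elapsed >= 0.02   # the spin guarantees we never return early
```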

View File

@@ -0,0 +1,248 @@
# ----------------------------------------------------------------------------
# pymunk
# Copyright (c) 2007-2016 Victor Blomqvist
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ----------------------------------------------------------------------------
"""This submodule contains helper functions to help with quick prototyping
using pymunk together with pygame.
Intended to help with debugging and prototyping, not for actual production use
in a full application. The methods contained in this module are opinionated
about your coordinate system and are not in any way optimized.
"""
__docformat__ = "reStructuredText"
__all__ = [
"DrawOptions",
"get_mouse_pos",
"to_pygame",
"from_pygame",
"light_color",
"positive_y_is_up",
]
from typing import List, Sequence, Tuple
import pygame
import numpy as np
import pymunk
from pymunk.space_debug_draw_options import SpaceDebugColor
from pymunk.vec2d import Vec2d
positive_y_is_up: bool = False
"""Make increasing values of y point upwards.
When True::
y
^
| . (3, 3)
|
| . (2, 2)
|
+------ > x
When False::
+------ > x
|
| . (2, 2)
|
| . (3, 3)
v
y
"""
class DrawOptions(pymunk.SpaceDebugDrawOptions):
def __init__(self, surface: pygame.Surface) -> None:
"""Draw a pymunk.Space on a pygame.Surface object.
Typical usage::
>>> import pymunk
>>> surface = pygame.Surface((10,10))
>>> space = pymunk.Space()
>>> options = pymunk.pygame_util.DrawOptions(surface)
>>> space.debug_draw(options)
You can control the color of a shape by setting shape.color to the color
you want it drawn in::
>>> c = pymunk.Circle(None, 10)
>>> c.color = pygame.Color("pink")
See pygame_util.demo.py for a full example
Since pygame uses a coordinate system where y points down (in contrast
to many other cases), you either have to make the physics simulation
with Pymunk also behave in that way, or flip everything when you draw.
The easiest is probably to just make the simulation behave the same
way as Pygame does. In that way all coordinates used are in the same
orientation and easy to reason about::
>>> space = pymunk.Space()
>>> space.gravity = (0, -1000)
>>> body = pymunk.Body()
>>> body.position = (0, 0) # will be positioned in the top left corner
>>> space.debug_draw(options)
To flip the drawing it's possible to set the module property
:py:data:`positive_y_is_up` to True. Then the pygame drawing will flip
the simulation upside down before drawing::
>>> positive_y_is_up = True
>>> body = pymunk.Body()
>>> body.position = (0, 0)
>>> # Body will be positioned in the bottom left corner
:Parameters:
surface : pygame.Surface
Surface that the objects will be drawn on
"""
self.surface = surface
super(DrawOptions, self).__init__()
def draw_circle(
self,
pos: Vec2d,
angle: float,
radius: float,
outline_color: SpaceDebugColor,
fill_color: SpaceDebugColor,
) -> None:
p = to_pygame(pos, self.surface)
pygame.draw.circle(self.surface, fill_color.as_int(), p, round(radius), 0)
pygame.draw.circle(self.surface, light_color(fill_color).as_int(), p, round(radius-4), 0)
circle_edge = pos + Vec2d(radius, 0).rotated(angle)
p2 = to_pygame(circle_edge, self.surface)
line_r = 2 if radius > 20 else 1
# pygame.draw.lines(self.surface, outline_color.as_int(), False, [p, p2], line_r)
def draw_segment(self, a: Vec2d, b: Vec2d, color: SpaceDebugColor) -> None:
p1 = to_pygame(a, self.surface)
p2 = to_pygame(b, self.surface)
pygame.draw.aalines(self.surface, color.as_int(), False, [p1, p2])
def draw_fat_segment(
self,
a: Tuple[float, float],
b: Tuple[float, float],
radius: float,
outline_color: SpaceDebugColor,
fill_color: SpaceDebugColor,
) -> None:
p1 = to_pygame(a, self.surface)
p2 = to_pygame(b, self.surface)
r = round(max(1, radius * 2))
pygame.draw.lines(self.surface, fill_color.as_int(), False, [p1, p2], r)
if r > 2:
orthog = [abs(p2[1] - p1[1]), abs(p2[0] - p1[0])]
if orthog[0] == 0 and orthog[1] == 0:
return
scale = radius / (orthog[0] * orthog[0] + orthog[1] * orthog[1]) ** 0.5
orthog[0] = round(orthog[0] * scale)
orthog[1] = round(orthog[1] * scale)
points = [
(p1[0] - orthog[0], p1[1] - orthog[1]),
(p1[0] + orthog[0], p1[1] + orthog[1]),
(p2[0] + orthog[0], p2[1] + orthog[1]),
(p2[0] - orthog[0], p2[1] - orthog[1]),
]
pygame.draw.polygon(self.surface, fill_color.as_int(), points)
pygame.draw.circle(
self.surface,
fill_color.as_int(),
(round(p1[0]), round(p1[1])),
round(radius),
)
pygame.draw.circle(
self.surface,
fill_color.as_int(),
(round(p2[0]), round(p2[1])),
round(radius),
)
def draw_polygon(
self,
verts: Sequence[Tuple[float, float]],
radius: float,
outline_color: SpaceDebugColor,
fill_color: SpaceDebugColor,
) -> None:
ps = [to_pygame(v, self.surface) for v in verts]
ps += [ps[0]]
radius = 2
pygame.draw.polygon(self.surface, light_color(fill_color).as_int(), ps)
if radius > 0:
for i in range(len(verts)):
a = verts[i]
b = verts[(i + 1) % len(verts)]
self.draw_fat_segment(a, b, radius, fill_color, fill_color)
def draw_dot(
self, size: float, pos: Tuple[float, float], color: SpaceDebugColor
) -> None:
p = to_pygame(pos, self.surface)
pygame.draw.circle(self.surface, color.as_int(), p, round(size), 0)
def get_mouse_pos(surface: pygame.Surface) -> Tuple[int, int]:
"""Get position of the mouse pointer in pymunk coordinates."""
p = pygame.mouse.get_pos()
return from_pygame(p, surface)
def to_pygame(p: Tuple[float, float], surface: pygame.Surface) -> Tuple[int, int]:
"""Convenience method to convert pymunk coordinates to pygame surface
local coordinates.
Note that in case positive_y_is_up is False, this function won't actually do
anything except converting the point to integers.
"""
if positive_y_is_up:
return round(p[0]), surface.get_height() - round(p[1])
else:
return round(p[0]), round(p[1])
def from_pygame(p: Tuple[float, float], surface: pygame.Surface) -> Tuple[int, int]:
"""Convenience method to convert pygame surface local coordinates to
pymunk coordinates
"""
return to_pygame(p, surface)
def light_color(color: SpaceDebugColor):
color = np.minimum(1.2 * np.float32([color.r, color.g, color.b, color.a]), np.float32([255]))
color = SpaceDebugColor(r=color[0], g=color[1], b=color[2], a=color[3])
return color
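`to_pygame` only flips y when `positive_y_is_up` is set; otherwise it just rounds to integers. The flip can be checked without pygame using a stub surface (the stub and explicit flag are illustrative, not part of the module):

```python
class StubSurface:
    # minimal stand-in for pygame.Surface: only what to_pygame needs
    def __init__(self, h):
        self._h = h
    def get_height(self):
        return self._h

def to_pygame(p, surface, positive_y_is_up):
    # restated from the module, with the flag passed explicitly
    if positive_y_is_up:
        return round(p[0]), surface.get_height() - round(p[1])
    return round(p[0]), round(p[1])

s = StubSurface(480)
assert to_pygame((10.4, 20.6), s, positive_y_is_up=False) == (10, 21)
assert to_pygame((10.4, 20.6), s, positive_y_is_up=True) == (10, 459)
```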

View File

@@ -0,0 +1,52 @@
import pygame
import pymunk
import pymunk.pygame_util
import numpy as np
COLLTYPE_DEFAULT = 0
COLLTYPE_MOUSE = 1
COLLTYPE_BALL = 2
def get_body_type(static=False):
body_type = pymunk.Body.DYNAMIC
if static:
body_type = pymunk.Body.STATIC
return body_type
def create_rectangle(space,
pos_x,pos_y,width,height,
density=3,static=False):
body = pymunk.Body(body_type=get_body_type(static))
body.position = (pos_x,pos_y)
shape = pymunk.Poly.create_box(body,(width,height))
shape.density = density
space.add(body,shape)
return body, shape
def create_rectangle_bb(space,
left, bottom, right, top,
**kwargs):
pos_x = (left + right) / 2
pos_y = (top + bottom) / 2
height = top - bottom
width = right - left
return create_rectangle(space, pos_x, pos_y, width, height, **kwargs)
def create_circle(space, pos_x, pos_y, radius, density=3, static=False):
body = pymunk.Body(body_type=get_body_type(static))
body.position = (pos_x, pos_y)
shape = pymunk.Circle(body, radius=radius)
shape.density = density
shape.collision_type = COLLTYPE_BALL
space.add(body, shape)
return body, shape
def get_body_state(body):
state = np.zeros(6, dtype=np.float32)
state[:2] = body.position
state[2] = body.angle
state[3:5] = body.velocity
state[5] = body.angular_velocity
return state

View File

@@ -0,0 +1,82 @@
from typing import Dict, Callable, List
import collections
import torch
import torch.nn as nn
def dict_apply(
x: Dict[str, torch.Tensor],
func: Callable[[torch.Tensor], torch.Tensor]
) -> Dict[str, torch.Tensor]:
result = dict()
for key, value in x.items():
if isinstance(value, dict):
result[key] = dict_apply(value, func)
else:
result[key] = func(value)
return result
def pad_remaining_dims(x, target):
assert x.shape == target.shape[:len(x.shape)]
return x.reshape(x.shape + (1,)*(len(target.shape) - len(x.shape)))
def dict_apply_split(
x: Dict[str, torch.Tensor],
split_func: Callable[[torch.Tensor], Dict[str, torch.Tensor]]
) -> Dict[str, torch.Tensor]:
results = collections.defaultdict(dict)
for key, value in x.items():
result = split_func(value)
for k, v in result.items():
results[k][key] = v
return results
def dict_apply_reduce(
x: List[Dict[str, torch.Tensor]],
reduce_func: Callable[[List[torch.Tensor]], torch.Tensor]
) -> Dict[str, torch.Tensor]:
result = dict()
for key in x[0].keys():
result[key] = reduce_func([x_[key] for x_ in x])
return result
def replace_submodules(
root_module: nn.Module,
predicate: Callable[[nn.Module], bool],
func: Callable[[nn.Module], nn.Module]) -> nn.Module:
"""
predicate: Return true if the module is to be replaced.
func: Return new module to use.
"""
if predicate(root_module):
return func(root_module)
bn_list = [k.split('.') for k, m
in root_module.named_modules(remove_duplicate=True)
if predicate(m)]
for *parent, k in bn_list:
parent_module = root_module
if len(parent) > 0:
parent_module = root_module.get_submodule('.'.join(parent))
if isinstance(parent_module, nn.Sequential):
src_module = parent_module[int(k)]
else:
src_module = getattr(parent_module, k)
tgt_module = func(src_module)
if isinstance(parent_module, nn.Sequential):
parent_module[int(k)] = tgt_module
else:
setattr(parent_module, k, tgt_module)
# verify that all BN are replaced
bn_list = [k.split('.') for k, m
in root_module.named_modules(remove_duplicate=True)
if predicate(m)]
assert len(bn_list) == 0
return root_module
def optimizer_to(optimizer, device):
for state in optimizer.state.values():
for k, v in state.items():
if isinstance(v, torch.Tensor):
state[k] = v.to(device=device)
return optimizer
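`dict_apply` recurses into nested dicts and applies `func` to every leaf; the traversal itself is independent of torch, so it can be demonstrated on plain values:

```python
def dict_apply(x, func):
    # restated from the module: recurse into sub-dicts, apply func to leaves
    result = dict()
    for key, value in x.items():
        if isinstance(value, dict):
            result[key] = dict_apply(value, func)
        else:
            result[key] = func(value)
    return result

# in the real codebase the leaves are tensors, e.g. func=lambda t: t.to(device)
batch = {'obs': {'image': 1, 'agent_pos': 2}, 'action': 3}
doubled = dict_apply(batch, lambda v: v * 2)
assert doubled == {'obs': {'image': 2, 'agent_pos': 4}, 'action': 6}
```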

View File

@@ -0,0 +1,588 @@
from typing import Union, Dict, Optional
import os
import math
import numbers
import zarr
import numcodecs
import numpy as np
from functools import cached_property
def check_chunks_compatible(chunks: tuple, shape: tuple):
assert len(shape) == len(chunks)
for c in chunks:
assert isinstance(c, numbers.Integral)
assert c > 0
def rechunk_recompress_array(group, name,
chunks=None, chunk_length=None,
compressor=None, tmp_key='_temp'):
old_arr = group[name]
if chunks is None:
if chunk_length is not None:
chunks = (chunk_length,) + old_arr.chunks[1:]
else:
chunks = old_arr.chunks
check_chunks_compatible(chunks, old_arr.shape)
if compressor is None:
compressor = old_arr.compressor
if (chunks == old_arr.chunks) and (compressor == old_arr.compressor):
# no change
return old_arr
# rechunk recompress
group.move(name, tmp_key)
old_arr = group[tmp_key]
n_copied, n_skipped, n_bytes_copied = zarr.copy(
source=old_arr,
dest=group,
name=name,
chunks=chunks,
compressor=compressor,
)
del group[tmp_key]
arr = group[name]
return arr
def get_optimal_chunks(shape, dtype,
target_chunk_bytes=2e6,
max_chunk_length=None):
"""
Common shapes
T,D
T,N,D
T,H,W,C
T,N,H,W,C
"""
itemsize = np.dtype(dtype).itemsize
# reversed
rshape = list(shape[::-1])
if max_chunk_length is not None:
rshape[-1] = int(max_chunk_length)
split_idx = len(shape)-1
for i in range(len(shape)-1):
this_chunk_bytes = itemsize * np.prod(rshape[:i])
next_chunk_bytes = itemsize * np.prod(rshape[:i+1])
if this_chunk_bytes <= target_chunk_bytes \
and next_chunk_bytes > target_chunk_bytes:
split_idx = i
rchunks = rshape[:split_idx]
item_chunk_bytes = itemsize * np.prod(rshape[:split_idx])
this_max_chunk_length = rshape[split_idx]
next_chunk_length = min(this_max_chunk_length, math.ceil(
target_chunk_bytes / item_chunk_bytes))
rchunks.append(next_chunk_length)
len_diff = len(shape) - len(rchunks)
rchunks.extend([1] * len_diff)
chunks = tuple(rchunks[::-1])
# print(np.prod(chunks) * itemsize / target_chunk_bytes)
return chunks
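For the common case where a single timestep fits under the byte target, the heuristic above keeps all per-timestep dimensions whole and picks a chunk length along the time axis that lands near `target_chunk_bytes`. A minimal standalone sketch of that arithmetic (toy shape and values, not from the source; it matches `get_optimal_chunks` only for this common case):

```python
import math
import numpy as np

# example: 96x96x3 float32 frames, ~2 MB target chunk size
shape = (1000, 96, 96, 3)
itemsize = np.dtype(np.float32).itemsize       # 4 bytes
target_chunk_bytes = 2e6

# bytes of one timestep (all non-time dims kept whole)
frame_bytes = itemsize * np.prod(shape[1:])    # 110592
# chunk length along time that lands near the target
chunk_len = math.ceil(target_chunk_bytes / frame_bytes)
chunks = (chunk_len,) + shape[1:]
print(chunks)  # (19, 96, 96, 3)
```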
class ReplayBuffer:
"""
Zarr-based temporal datastructure.
Assumes the first dimension is time. Only chunks along the time dimension.
"""
def __init__(self,
root: Union[zarr.Group,
Dict[str,dict]]):
"""
Dummy constructor. Use copy_from* and create_from* class methods instead.
"""
assert('data' in root)
assert('meta' in root)
assert('episode_ends' in root['meta'])
for key, value in root['data'].items():
assert(value.shape[0] == root['meta']['episode_ends'][-1])
self.root = root
# ============= create constructors ===============
@classmethod
def create_empty_zarr(cls, storage=None, root=None):
if root is None:
if storage is None:
storage = zarr.MemoryStore()
root = zarr.group(store=storage)
data = root.require_group('data', overwrite=False)
meta = root.require_group('meta', overwrite=False)
if 'episode_ends' not in meta:
episode_ends = meta.zeros('episode_ends', shape=(0,), dtype=np.int64,
compressor=None, overwrite=False)
return cls(root=root)
@classmethod
def create_empty_numpy(cls):
root = {
'data': dict(),
'meta': {
'episode_ends': np.zeros((0,), dtype=np.int64)
}
}
return cls(root=root)
@classmethod
def create_from_group(cls, group, **kwargs):
if 'data' not in group:
# create from scratch
buffer = cls.create_empty_zarr(root=group, **kwargs)
else:
# already exists
buffer = cls(root=group, **kwargs)
return buffer
@classmethod
def create_from_path(cls, zarr_path, mode='r', **kwargs):
"""
Open an on-disk zarr directly (for datasets larger than memory).
Slower.
"""
group = zarr.open(os.path.expanduser(zarr_path), mode)
return cls.create_from_group(group, **kwargs)
# ============= copy constructors ===============
@classmethod
def copy_from_store(cls, src_store, store=None, keys=None,
chunks: Dict[str,tuple]=dict(),
compressors: Union[dict, str, numcodecs.abc.Codec]=dict(),
if_exists='replace',
**kwargs):
"""
Load to memory.
"""
src_root = zarr.group(src_store)
root = None
if store is None:
# numpy backend
meta = dict()
for key, value in src_root['meta'].items():
if len(value.shape) == 0:
meta[key] = np.array(value)
else:
meta[key] = value[:]
if keys is None:
keys = src_root['data'].keys()
data = dict()
for key in keys:
arr = src_root['data'][key]
data[key] = arr[:]
root = {
'meta': meta,
'data': data
}
else:
root = zarr.group(store=store)
# copy without recompression
n_copied, n_skipped, n_bytes_copied = zarr.copy_store(source=src_store, dest=store,
source_path='/meta', dest_path='/meta', if_exists=if_exists)
data_group = root.create_group('data', overwrite=True)
if keys is None:
keys = src_root['data'].keys()
for key in keys:
value = src_root['data'][key]
cks = cls._resolve_array_chunks(
chunks=chunks, key=key, array=value)
cpr = cls._resolve_array_compressor(
compressors=compressors, key=key, array=value)
if cks == value.chunks and cpr == value.compressor:
# copy without recompression
this_path = '/data/' + key
n_copied, n_skipped, n_bytes_copied = zarr.copy_store(
source=src_store, dest=store,
source_path=this_path, dest_path=this_path,
if_exists=if_exists
)
else:
# copy with recompression
n_copied, n_skipped, n_bytes_copied = zarr.copy(
source=value, dest=data_group, name=key,
chunks=cks, compressor=cpr, if_exists=if_exists
)
buffer = cls(root=root)
return buffer
@classmethod
def copy_from_path(cls, zarr_path, backend=None, store=None, keys=None,
chunks: Dict[str,tuple]=dict(),
compressors: Union[dict, str, numcodecs.abc.Codec]=dict(),
if_exists='replace',
**kwargs):
"""
Copy an on-disk zarr into an in-memory compressed store.
Recommended.
"""
if backend == 'numpy':
print('backend argument is deprecated!')
store = None
group = zarr.open(os.path.expanduser(zarr_path), 'r')
return cls.copy_from_store(src_store=group.store, store=store,
keys=keys, chunks=chunks, compressors=compressors,
if_exists=if_exists, **kwargs)
# ============= save methods ===============
def save_to_store(self, store,
chunks: Optional[Dict[str,tuple]]=dict(),
compressors: Union[str, numcodecs.abc.Codec, dict]=dict(),
if_exists='replace',
**kwargs):
root = zarr.group(store)
if self.backend == 'zarr':
# recompression free copy
n_copied, n_skipped, n_bytes_copied = zarr.copy_store(
source=self.root.store, dest=store,
source_path='/meta', dest_path='/meta', if_exists=if_exists)
else:
meta_group = root.create_group('meta', overwrite=True)
# save meta, no chunking
for key, value in self.root['meta'].items():
_ = meta_group.array(
name=key,
data=value,
shape=value.shape,
chunks=value.shape)
# save data, chunk
data_group = root.create_group('data', overwrite=True)
for key, value in self.root['data'].items():
cks = self._resolve_array_chunks(
chunks=chunks, key=key, array=value)
cpr = self._resolve_array_compressor(
compressors=compressors, key=key, array=value)
if isinstance(value, zarr.Array):
if cks == value.chunks and cpr == value.compressor:
# copy without recompression
this_path = '/data/' + key
n_copied, n_skipped, n_bytes_copied = zarr.copy_store(
source=self.root.store, dest=store,
source_path=this_path, dest_path=this_path, if_exists=if_exists)
else:
# copy with recompression
n_copied, n_skipped, n_bytes_copied = zarr.copy(
source=value, dest=data_group, name=key,
chunks=cks, compressor=cpr, if_exists=if_exists
)
else:
# numpy
_ = data_group.array(
name=key,
data=value,
chunks=cks,
compressor=cpr
)
return store
def save_to_path(self, zarr_path,
chunks: Optional[Dict[str,tuple]]=dict(),
compressors: Union[str, numcodecs.abc.Codec, dict]=dict(),
if_exists='replace',
**kwargs):
store = zarr.DirectoryStore(os.path.expanduser(zarr_path))
return self.save_to_store(store, chunks=chunks,
compressors=compressors, if_exists=if_exists, **kwargs)
@staticmethod
def resolve_compressor(compressor='default'):
if compressor == 'default':
compressor = numcodecs.Blosc(cname='lz4', clevel=5,
shuffle=numcodecs.Blosc.NOSHUFFLE)
elif compressor == 'disk':
compressor = numcodecs.Blosc('zstd', clevel=5,
shuffle=numcodecs.Blosc.BITSHUFFLE)
return compressor
@classmethod
def _resolve_array_compressor(cls,
compressors: Union[dict, str, numcodecs.abc.Codec], key, array):
# allows compressor to be explicitly set to None
cpr = 'nil'
if isinstance(compressors, dict):
if key in compressors:
cpr = cls.resolve_compressor(compressors[key])
elif isinstance(array, zarr.Array):
cpr = array.compressor
else:
cpr = cls.resolve_compressor(compressors)
# backup default
if cpr == 'nil':
cpr = cls.resolve_compressor('default')
return cpr
@classmethod
def _resolve_array_chunks(cls,
chunks: Union[dict, tuple], key, array):
cks = None
if isinstance(chunks, dict):
if key in chunks:
cks = chunks[key]
elif isinstance(array, zarr.Array):
cks = array.chunks
elif isinstance(chunks, tuple):
cks = chunks
else:
raise TypeError(f"Unsupported chunks type {type(chunks)}")
# backup default
if cks is None:
cks = get_optimal_chunks(shape=array.shape, dtype=array.dtype)
# check
check_chunks_compatible(chunks=cks, shape=array.shape)
return cks
# ============= properties =================
@cached_property
def data(self):
return self.root['data']
@cached_property
def meta(self):
return self.root['meta']
def update_meta(self, data):
# sanitize data
np_data = dict()
for key, value in data.items():
if isinstance(value, np.ndarray):
np_data[key] = value
else:
arr = np.array(value)
if arr.dtype == object:
raise TypeError(f"Invalid value type {type(value)}")
np_data[key] = arr
meta_group = self.meta
if self.backend == 'zarr':
for key, value in np_data.items():
_ = meta_group.array(
name=key,
data=value,
shape=value.shape,
chunks=value.shape,
overwrite=True)
else:
meta_group.update(np_data)
return meta_group
@property
def episode_ends(self):
return self.meta['episode_ends']
def get_episode_idxs(self):
import numba
@numba.jit(nopython=True)
def _get_episode_idxs(episode_ends):
result = np.zeros((episode_ends[-1],), dtype=np.int64)
for i in range(len(episode_ends)):
start = 0
if i > 0:
start = episode_ends[i-1]
end = episode_ends[i]
for idx in range(start, end):
result[idx] = i
return result
return _get_episode_idxs(self.episode_ends)
@property
def backend(self):
backend = 'numpy'
if isinstance(self.root, zarr.Group):
backend = 'zarr'
return backend
# =========== dict-like API ==============
def __repr__(self) -> str:
if self.backend == 'zarr':
return str(self.root.tree())
else:
return super().__repr__()
def keys(self):
return self.data.keys()
def values(self):
return self.data.values()
def items(self):
return self.data.items()
def __getitem__(self, key):
return self.data[key]
def __contains__(self, key):
return key in self.data
# =========== our API ==============
@property
def n_steps(self):
if len(self.episode_ends) == 0:
return 0
return self.episode_ends[-1]
@property
def n_episodes(self):
return len(self.episode_ends)
@property
def chunk_size(self):
if self.backend == 'zarr':
return next(iter(self.data.arrays()))[-1].chunks[0]
return None
@property
def episode_lengths(self):
ends = self.episode_ends[:]
ends = np.insert(ends, 0, 0)
lengths = np.diff(ends)
return lengths
def add_episode(self,
data: Dict[str, np.ndarray],
chunks: Optional[Dict[str,tuple]]=dict(),
compressors: Union[str, numcodecs.abc.Codec, dict]=dict()):
assert(len(data) > 0)
is_zarr = (self.backend == 'zarr')
curr_len = self.n_steps
episode_length = None
for key, value in data.items():
assert(len(value.shape) >= 1)
if episode_length is None:
episode_length = len(value)
else:
assert(episode_length == len(value))
new_len = curr_len + episode_length
for key, value in data.items():
new_shape = (new_len,) + value.shape[1:]
# create array
if key not in self.data:
if is_zarr:
cks = self._resolve_array_chunks(
chunks=chunks, key=key, array=value)
cpr = self._resolve_array_compressor(
compressors=compressors, key=key, array=value)
arr = self.data.zeros(name=key,
shape=new_shape,
chunks=cks,
dtype=value.dtype,
compressor=cpr)
else:
# copy data to prevent modify
arr = np.zeros(shape=new_shape, dtype=value.dtype)
self.data[key] = arr
else:
arr = self.data[key]
assert(value.shape[1:] == arr.shape[1:])
# same method for both zarr and numpy
if is_zarr:
arr.resize(new_shape)
else:
arr.resize(new_shape, refcheck=False)
# copy data
arr[-value.shape[0]:] = value
# append to episode ends
episode_ends = self.episode_ends
if is_zarr:
episode_ends.resize(episode_ends.shape[0] + 1)
else:
episode_ends.resize(episode_ends.shape[0] + 1, refcheck=False)
episode_ends[-1] = new_len
# rechunk
if is_zarr:
if episode_ends.chunks[0] < episode_ends.shape[0]:
rechunk_recompress_array(self.meta, 'episode_ends',
chunk_length=int(episode_ends.shape[0] * 1.5))
def drop_episode(self):
is_zarr = (self.backend == 'zarr')
episode_ends = self.episode_ends[:].copy()
assert(len(episode_ends) > 0)
start_idx = 0
if len(episode_ends) > 1:
start_idx = episode_ends[-2]
for key, value in self.data.items():
new_shape = (start_idx,) + value.shape[1:]
if is_zarr:
value.resize(new_shape)
else:
value.resize(new_shape, refcheck=False)
if is_zarr:
self.episode_ends.resize(len(episode_ends)-1)
else:
self.episode_ends.resize(len(episode_ends)-1, refcheck=False)
def pop_episode(self):
assert(self.n_episodes > 0)
episode = self.get_episode(self.n_episodes-1, copy=True)
self.drop_episode()
return episode
def extend(self, data):
self.add_episode(data)
def get_episode(self, idx, copy=False):
idx = list(range(len(self.episode_ends)))[idx]
start_idx = 0
if idx > 0:
start_idx = self.episode_ends[idx-1]
end_idx = self.episode_ends[idx]
result = self.get_steps_slice(start_idx, end_idx, copy=copy)
return result
def get_episode_slice(self, idx):
start_idx = 0
if idx > 0:
start_idx = self.episode_ends[idx-1]
end_idx = self.episode_ends[idx]
return slice(start_idx, end_idx)
def get_steps_slice(self, start, stop, step=None, copy=False):
_slice = slice(start, stop, step)
result = dict()
for key, value in self.data.items():
x = value[_slice]
if copy and isinstance(value, np.ndarray):
x = x.copy()
result[key] = x
return result
# =========== chunking =============
def get_chunks(self) -> dict:
assert self.backend == 'zarr'
chunks = dict()
for key, value in self.data.items():
chunks[key] = value.chunks
return chunks
def set_chunks(self, chunks: dict):
assert self.backend == 'zarr'
for key, value in chunks.items():
if key in self.data:
arr = self.data[key]
if value != arr.chunks:
check_chunks_compatible(chunks=value, shape=arr.shape)
rechunk_recompress_array(self.data, key, chunks=value)
def get_compressors(self) -> dict:
assert self.backend == 'zarr'
compressors = dict()
for key, value in self.data.items():
compressors[key] = value.compressor
return compressors
def set_compressors(self, compressors: dict):
assert self.backend == 'zarr'
for key, value in compressors.items():
if key in self.data:
arr = self.data[key]
compressor = self.resolve_compressor(value)
if compressor != arr.compressor:
rechunk_recompress_array(self.data, key, compressor=compressor)
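The `episode_ends` convention used throughout the class (cumulative end indices into flat, time-major data arrays) can be sketched standalone with toy values, independent of the class itself:

```python
import numpy as np

# cumulative end index of each episode, as stored in meta/episode_ends
episode_ends = np.array([3, 7, 12], dtype=np.int64)

n_steps = episode_ends[-1]                        # 12 total timesteps
lengths = np.diff(np.insert(episode_ends, 0, 0))  # [3, 4, 5]

# slice of the flat data arrays holding episode i (cf. get_episode_slice)
def episode_slice(i):
    start = 0 if i == 0 else episode_ends[i - 1]
    return slice(start, episode_ends[i])

print(episode_slice(1))  # slice(3, 7)
```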


@@ -0,0 +1,47 @@
from omegaconf import OmegaConf
from robomimic.config import config_factory
import robomimic.scripts.generate_paper_configs as gpc
from robomimic.scripts.generate_paper_configs import (
modify_config_for_default_image_exp,
modify_config_for_default_low_dim_exp,
modify_config_for_dataset,
)
def get_robomimic_config(
algo_name='bc_rnn',
hdf5_type='low_dim',
task_name='square',
dataset_type='ph'
):
base_dataset_dir = '/tmp/null'
filter_key = None
# decide whether to use low-dim or image training defaults
modifier_for_obs = modify_config_for_default_image_exp
if hdf5_type in ["low_dim", "low_dim_sparse", "low_dim_dense"]:
modifier_for_obs = modify_config_for_default_low_dim_exp
algo_config_name = "bc" if algo_name == "bc_rnn" else algo_name
config = config_factory(algo_name=algo_config_name)
# turn into default config for observation modalities (e.g.: low-dim or rgb)
config = modifier_for_obs(config)
# add in config based on the dataset
config = modify_config_for_dataset(
config=config,
task_name=task_name,
dataset_type=dataset_type,
hdf5_type=hdf5_type,
base_dataset_dir=base_dataset_dir,
filter_key=filter_key,
)
# add in algo hypers based on dataset
algo_config_modifier = getattr(gpc, f'modify_{algo_name}_config_for_dataset')
config = algo_config_modifier(
config=config,
task_name=task_name,
dataset_type=dataset_type,
hdf5_type=hdf5_type,
)
return config


@@ -0,0 +1,177 @@
import numpy as np
import copy
import h5py
import robomimic.utils.obs_utils as ObsUtils
import robomimic.utils.file_utils as FileUtils
import robomimic.utils.env_utils as EnvUtils
from scipy.spatial.transform import Rotation
from robomimic.config import config_factory
class RobomimicAbsoluteActionConverter:
def __init__(self, dataset_path, algo_name='bc'):
# default BC config
config = config_factory(algo_name=algo_name)
# read config to set up metadata for observation modalities (e.g. detecting rgb observations)
# must run before creating the dataset
ObsUtils.initialize_obs_utils_with_config(config)
env_meta = FileUtils.get_env_metadata_from_dataset(dataset_path)
abs_env_meta = copy.deepcopy(env_meta)
abs_env_meta['env_kwargs']['controller_configs']['control_delta'] = False
env = EnvUtils.create_env_from_metadata(
env_meta=env_meta,
render=False,
render_offscreen=False,
use_image_obs=False,
)
assert len(env.env.robots) in (1, 2)
abs_env = EnvUtils.create_env_from_metadata(
env_meta=abs_env_meta,
render=False,
render_offscreen=False,
use_image_obs=False,
)
assert not abs_env.env.robots[0].controller.use_delta
self.env = env
self.abs_env = abs_env
self.file = h5py.File(dataset_path, 'r')
def __len__(self):
return len(self.file['data'])
def convert_actions(self,
states: np.ndarray,
actions: np.ndarray) -> np.ndarray:
"""
Given a state and delta action sequence,
generate the equivalent goal position and orientation for each step,
keeping the original gripper action intact.
"""
# in case of multi robot
# reshape (N,14) to (N,2,7)
# or (N,7) to (N,1,7)
stacked_actions = actions.reshape(*actions.shape[:-1],-1,7)
env = self.env
# generate abs actions
action_goal_pos = np.zeros(
stacked_actions.shape[:-1]+(3,),
dtype=stacked_actions.dtype)
action_goal_ori = np.zeros(
stacked_actions.shape[:-1]+(3,),
dtype=stacked_actions.dtype)
action_gripper = stacked_actions[...,[-1]]
for i in range(len(states)):
_ = env.reset_to({'states': states[i]})
# taken from robot_env.py L#454
for idx, robot in enumerate(env.env.robots):
# run controller goal generator
robot.control(stacked_actions[i,idx], policy_step=True)
# read pos and ori from robots
controller = robot.controller
action_goal_pos[i,idx] = controller.goal_pos
action_goal_ori[i,idx] = Rotation.from_matrix(
controller.goal_ori).as_rotvec()
stacked_abs_actions = np.concatenate([
action_goal_pos,
action_goal_ori,
action_gripper
], axis=-1)
abs_actions = stacked_abs_actions.reshape(actions.shape)
return abs_actions
def convert_idx(self, idx):
file = self.file
demo = file[f'data/demo_{idx}']
# input
states = demo['states'][:]
actions = demo['actions'][:]
# generate abs actions
abs_actions = self.convert_actions(states, actions)
return abs_actions
def convert_and_eval_idx(self, idx):
env = self.env
abs_env = self.abs_env
file = self.file
# first step has high error for some reason, not representative
eval_skip_steps = 1
demo = file[f'data/demo_{idx}']
# input
states = demo['states'][:]
actions = demo['actions'][:]
# generate abs actions
abs_actions = self.convert_actions(states, actions)
# verify
robot0_eef_pos = demo['obs']['robot0_eef_pos'][:]
robot0_eef_quat = demo['obs']['robot0_eef_quat'][:]
delta_error_info = self.evaluate_rollout_error(
env, states, actions, robot0_eef_pos, robot0_eef_quat,
metric_skip_steps=eval_skip_steps)
abs_error_info = self.evaluate_rollout_error(
abs_env, states, abs_actions, robot0_eef_pos, robot0_eef_quat,
metric_skip_steps=eval_skip_steps)
info = {
'delta_max_error': delta_error_info,
'abs_max_error': abs_error_info
}
return abs_actions, info
@staticmethod
def evaluate_rollout_error(env,
states, actions,
robot0_eef_pos,
robot0_eef_quat,
metric_skip_steps=1):
# first step has high error for some reason, not representative
# evaluate abs actions
rollout_next_states = list()
rollout_next_eef_pos = list()
rollout_next_eef_quat = list()
obs = env.reset_to({'states': states[0]})
for i in range(len(states)):
obs = env.reset_to({'states': states[i]})
obs, reward, done, info = env.step(actions[i])
obs = env.get_observation()
rollout_next_states.append(env.get_state()['states'])
rollout_next_eef_pos.append(obs['robot0_eef_pos'])
rollout_next_eef_quat.append(obs['robot0_eef_quat'])
rollout_next_states = np.array(rollout_next_states)
rollout_next_eef_pos = np.array(rollout_next_eef_pos)
rollout_next_eef_quat = np.array(rollout_next_eef_quat)
next_state_diff = states[1:] - rollout_next_states[:-1]
max_next_state_diff = np.max(np.abs(next_state_diff[metric_skip_steps:]))
next_eef_pos_diff = robot0_eef_pos[1:] - rollout_next_eef_pos[:-1]
next_eef_pos_dist = np.linalg.norm(next_eef_pos_diff, axis=-1)
max_next_eef_pos_dist = next_eef_pos_dist[metric_skip_steps:].max()
next_eef_rot_diff = Rotation.from_quat(robot0_eef_quat[1:]) \
* Rotation.from_quat(rollout_next_eef_quat[:-1]).inv()
next_eef_rot_dist = next_eef_rot_diff.magnitude()
max_next_eef_rot_dist = next_eef_rot_dist[metric_skip_steps:].max()
info = {
'state': max_next_state_diff,
'pos': max_next_eef_pos_dist,
'rot': max_next_eef_rot_dist
}
return info
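The `(N, 14) -> (N, 2, 7)` reshape used in `convert_actions` generalizes to any robot count, since each arm contributes a 7-vector (position 3 + axis-angle 3 + gripper 1). A quick standalone check with toy data:

```python
import numpy as np

# two-robot delta actions: 7 DoF per robot, flattened to 14 per step
actions = np.arange(5 * 14, dtype=np.float64).reshape(5, 14)

# same reshape as convert_actions: (N, 14) -> (N, 2, 7)
stacked = actions.reshape(*actions.shape[:-1], -1, 7)
assert stacked.shape == (5, 2, 7)

# gripper command is the last channel of each robot's 7-vector
gripper = stacked[..., [-1]]
assert gripper.shape == (5, 2, 1)

# round-trips back to the flat layout
assert (stacked.reshape(actions.shape) == actions).all()
```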


@@ -0,0 +1,153 @@
from typing import Optional
import numpy as np
import numba
from diffusion_policy.common.replay_buffer import ReplayBuffer
@numba.jit(nopython=True)
def create_indices(
episode_ends:np.ndarray, sequence_length:int,
episode_mask: np.ndarray,
pad_before: int=0, pad_after: int=0,
debug:bool=True) -> np.ndarray:
assert episode_mask.shape == episode_ends.shape
pad_before = min(max(pad_before, 0), sequence_length-1)
pad_after = min(max(pad_after, 0), sequence_length-1)
indices = list()
for i in range(len(episode_ends)):
if not episode_mask[i]:
# skip episode
continue
start_idx = 0
if i > 0:
start_idx = episode_ends[i-1]
end_idx = episode_ends[i]
episode_length = end_idx - start_idx
min_start = -pad_before
max_start = episode_length - sequence_length + pad_after
# range stops one idx before end
for idx in range(min_start, max_start+1):
buffer_start_idx = max(idx, 0) + start_idx
buffer_end_idx = min(idx+sequence_length, episode_length) + start_idx
start_offset = buffer_start_idx - (idx+start_idx)
end_offset = (idx+sequence_length+start_idx) - buffer_end_idx
sample_start_idx = 0 + start_offset
sample_end_idx = sequence_length - end_offset
if debug:
assert(start_offset >= 0)
assert(end_offset >= 0)
assert (sample_end_idx - sample_start_idx) == (buffer_end_idx - buffer_start_idx)
indices.append([
buffer_start_idx, buffer_end_idx,
sample_start_idx, sample_end_idx])
indices = np.array(indices)
return indices
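The window arithmetic above can be traced by hand. A plain-Python walk-through (no numba) for one episode starting at index 0, with toy numbers: episode length 5, `sequence_length` 3, and symmetric padding of 2 (already clipped to `sequence_length - 1`):

```python
sequence_length = 3
episode_length = 5
pad_before, pad_after = 2, 2  # already clipped to sequence_length - 1

windows = []
min_start = -pad_before
max_start = episode_length - sequence_length + pad_after
for idx in range(min_start, max_start + 1):
    # clip the window to the episode; the remainder becomes padding
    buffer_start = max(idx, 0)
    buffer_end = min(idx + sequence_length, episode_length)
    sample_start = buffer_start - idx
    sample_end = sequence_length - ((idx + sequence_length) - buffer_end)
    windows.append((buffer_start, buffer_end, sample_start, sample_end))

print(len(windows))  # 7 windows
print(windows[0])    # (0, 1, 2, 3): first window needs 2 padded steps up front
```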
def get_val_mask(n_episodes, val_ratio, seed=0):
val_mask = np.zeros(n_episodes, dtype=bool)
if val_ratio <= 0:
return val_mask
# have at least 1 episode for validation, and at least 1 episode for train
n_val = min(max(1, round(n_episodes * val_ratio)), n_episodes-1)
rng = np.random.default_rng(seed=seed)
val_idxs = rng.choice(n_episodes, size=n_val, replace=False)
val_mask[val_idxs] = True
return val_mask
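`get_val_mask` clamps the validation count so that both splits are non-empty whenever `val_ratio > 0`. A standalone sketch of just that clamp (`n_val_episodes` is an illustrative helper name, not from the source):

```python
# clamp used by get_val_mask: at least 1 val episode, at least 1 train episode
def n_val_episodes(n_episodes, val_ratio):
    return min(max(1, round(n_episodes * val_ratio)), n_episodes - 1)

print(n_val_episodes(10, 0.02))  # 1  (rounds to 0, clamped up)
print(n_val_episodes(10, 1.0))   # 9  (always leaves one training episode)
```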
def downsample_mask(mask, max_n, seed=0):
# subsample training data
train_mask = mask
if (max_n is not None) and (np.sum(train_mask) > max_n):
n_train = int(max_n)
curr_train_idxs = np.nonzero(train_mask)[0]
rng = np.random.default_rng(seed=seed)
train_idxs_idx = rng.choice(len(curr_train_idxs), size=n_train, replace=False)
train_idxs = curr_train_idxs[train_idxs_idx]
train_mask = np.zeros_like(train_mask)
train_mask[train_idxs] = True
assert np.sum(train_mask) == n_train
return train_mask
class SequenceSampler:
def __init__(self,
replay_buffer: ReplayBuffer,
sequence_length:int,
pad_before:int=0,
pad_after:int=0,
keys=None,
key_first_k=dict(),
episode_mask: Optional[np.ndarray]=None,
):
"""
key_first_k: Dict[str, int]
Only take the first k steps from these keys (to improve perf)
"""
super().__init__()
assert(sequence_length >= 1)
if keys is None:
keys = list(replay_buffer.keys())
episode_ends = replay_buffer.episode_ends[:]
if episode_mask is None:
episode_mask = np.ones(episode_ends.shape, dtype=bool)
if np.any(episode_mask):
indices = create_indices(episode_ends,
sequence_length=sequence_length,
pad_before=pad_before,
pad_after=pad_after,
episode_mask=episode_mask
)
else:
indices = np.zeros((0,4), dtype=np.int64)
# (buffer_start_idx, buffer_end_idx, sample_start_idx, sample_end_idx)
self.indices = indices
self.keys = list(keys) # prevent OmegaConf list performance problem
self.sequence_length = sequence_length
self.replay_buffer = replay_buffer
self.key_first_k = key_first_k
def __len__(self):
return len(self.indices)
def sample_sequence(self, idx):
buffer_start_idx, buffer_end_idx, sample_start_idx, sample_end_idx \
= self.indices[idx]
result = dict()
for key in self.keys:
input_arr = self.replay_buffer[key]
# performance optimization, avoid small allocation if possible
if key not in self.key_first_k:
sample = input_arr[buffer_start_idx:buffer_end_idx]
else:
# performance optimization, only load used obs steps
n_data = buffer_end_idx - buffer_start_idx
k_data = min(self.key_first_k[key], n_data)
# fill value with Nan to catch bugs
# the non-loaded region should never be used
sample = np.full((n_data,) + input_arr.shape[1:],
fill_value=np.nan, dtype=input_arr.dtype)
try:
sample[:k_data] = input_arr[buffer_start_idx:buffer_start_idx+k_data]
except Exception as e:
import pdb; pdb.set_trace()
data = sample
if (sample_start_idx > 0) or (sample_end_idx < self.sequence_length):
data = np.zeros(
shape=(self.sequence_length,) + input_arr.shape[1:],
dtype=input_arr.dtype)
if sample_start_idx > 0:
data[:sample_start_idx] = sample[0]
if sample_end_idx < self.sequence_length:
data[sample_end_idx:] = sample[-1]
data[sample_start_idx:sample_end_idx] = sample
result[key] = data
return result


@@ -0,0 +1,222 @@
from typing import List, Tuple, Optional, Dict
import math
import numpy as np
def get_accumulate_timestamp_idxs(
timestamps: List[float],
start_time: float,
dt: float,
eps:float=1e-5,
next_global_idx: Optional[int]=0,
allow_negative=False
) -> Tuple[List[int], List[int], int]:
"""
For each dt window, choose the first timestamp in the window.
Assumes timestamps are sorted. One timestamp might be chosen multiple times due to dropped frames.
next_global_idx should start at 0 normally; on subsequent calls, pass in the returned next_global_idx.
However, when overwriting previous values is desired, set next_global_idx to None.
Returns:
local_idxs: which indices in the given timestamps array to choose from
global_idxs: the global index of each chosen timestamp
next_global_idx: used for the next call.
"""
local_idxs = list()
global_idxs = list()
for local_idx, ts in enumerate(timestamps):
# add eps * dt to timestamps so that when ts == start_time + k * dt
# is always recorded as kth element (avoiding floating point errors)
global_idx = math.floor((ts - start_time) / dt + eps)
if (not allow_negative) and (global_idx < 0):
continue
if next_global_idx is None:
next_global_idx = global_idx
n_repeats = max(0, global_idx - next_global_idx + 1)
for i in range(n_repeats):
local_idxs.append(local_idx)
global_idxs.append(next_global_idx + i)
next_global_idx += n_repeats
return local_idxs, global_idxs, next_global_idx
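The bucketing above can be traced with a minimal standalone reimplementation (same arithmetic as the function, minus the `allow_negative` / `None`-restart options) on toy 10 Hz timestamps with one dropped frame:

```python
import math

def accumulate_idxs(timestamps, start_time, dt, next_global_idx=0, eps=1e-5):
    # same per-timestamp arithmetic as get_accumulate_timestamp_idxs
    local_idxs, global_idxs = [], []
    for local_idx, ts in enumerate(timestamps):
        global_idx = math.floor((ts - start_time) / dt + eps)
        if global_idx < 0:
            continue
        # repeat the same frame to fill any dropped dt windows
        n_repeats = max(0, global_idx - next_global_idx + 1)
        for i in range(n_repeats):
            local_idxs.append(local_idx)
            global_idxs.append(next_global_idx + i)
        next_global_idx += n_repeats
    return local_idxs, global_idxs, next_global_idx

# 10 Hz stream with the frame at t=0.2 dropped
local_idxs, global_idxs, nxt = accumulate_idxs(
    [0.0, 0.1, 0.3], start_time=0.0, dt=0.1)
print(local_idxs)   # [0, 1, 2, 2]: frame 2 is reused for the dropped slot
print(global_idxs)  # [0, 1, 2, 3]
```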
def align_timestamps(
timestamps: List[float],
target_global_idxs: List[int],
start_time: float,
dt: float,
eps:float=1e-5):
if isinstance(target_global_idxs, np.ndarray):
target_global_idxs = target_global_idxs.tolist()
assert len(target_global_idxs) > 0
local_idxs, global_idxs, _ = get_accumulate_timestamp_idxs(
timestamps=timestamps,
start_time=start_time,
dt=dt,
eps=eps,
next_global_idx=target_global_idxs[0],
allow_negative=True
)
if len(global_idxs) > len(target_global_idxs):
# if more steps available, truncate
global_idxs = global_idxs[:len(target_global_idxs)]
local_idxs = local_idxs[:len(target_global_idxs)]
if len(global_idxs) == 0:
import pdb; pdb.set_trace()
for i in range(len(target_global_idxs) - len(global_idxs)):
# if missing, repeat
local_idxs.append(len(timestamps)-1)
global_idxs.append(global_idxs[-1] + 1)
assert global_idxs == target_global_idxs
assert len(local_idxs) == len(global_idxs)
return local_idxs
class TimestampObsAccumulator:
def __init__(self,
start_time: float,
dt: float,
eps: float=1e-5):
self.start_time = start_time
self.dt = dt
self.eps = eps
self.obs_buffer = dict()
self.timestamp_buffer = None
self.next_global_idx = 0
def __len__(self):
return self.next_global_idx
@property
def data(self):
if self.timestamp_buffer is None:
return dict()
result = dict()
for key, value in self.obs_buffer.items():
result[key] = value[:len(self)]
return result
@property
def actual_timestamps(self):
if self.timestamp_buffer is None:
return np.array([])
return self.timestamp_buffer[:len(self)]
@property
def timestamps(self):
if self.timestamp_buffer is None:
return np.array([])
return self.start_time + np.arange(len(self)) * self.dt
def put(self, data: Dict[str, np.ndarray], timestamps: np.ndarray):
"""
data: dict mapping key -> array of shape (T, ...)
"""
local_idxs, global_idxs, self.next_global_idx = get_accumulate_timestamp_idxs(
timestamps=timestamps,
start_time=self.start_time,
dt=self.dt,
eps=self.eps,
next_global_idx=self.next_global_idx
)
if len(global_idxs) > 0:
if self.timestamp_buffer is None:
# first allocation
self.obs_buffer = dict()
for key, value in data.items():
self.obs_buffer[key] = np.zeros_like(value)
self.timestamp_buffer = np.zeros(
(len(timestamps),), dtype=np.float64)
this_max_size = global_idxs[-1] + 1
if this_max_size > len(self.timestamp_buffer):
# reallocate
new_size = max(this_max_size, len(self.timestamp_buffer) * 2)
for key in list(self.obs_buffer.keys()):
new_shape = (new_size,) + self.obs_buffer[key].shape[1:]
self.obs_buffer[key] = np.resize(self.obs_buffer[key], new_shape)
self.timestamp_buffer = np.resize(self.timestamp_buffer, (new_size,))
# write data
for key, value in self.obs_buffer.items():
value[global_idxs] = data[key][local_idxs]
self.timestamp_buffer[global_idxs] = timestamps[local_idxs]
class TimestampActionAccumulator:
def __init__(self,
start_time: float,
dt: float,
eps: float=1e-5):
"""
Different from the obs accumulator, the action
accumulator allows overwriting previous values.
"""
self.start_time = start_time
self.dt = dt
self.eps = eps
self.action_buffer = None
self.timestamp_buffer = None
self.size = 0
def __len__(self):
return self.size
@property
def actions(self):
if self.action_buffer is None:
return np.array([])
return self.action_buffer[:len(self)]
@property
def actual_timestamps(self):
if self.timestamp_buffer is None:
return np.array([])
return self.timestamp_buffer[:len(self)]
@property
def timestamps(self):
if self.timestamp_buffer is None:
return np.array([])
return self.start_time + np.arange(len(self)) * self.dt
def put(self, actions: np.ndarray, timestamps: np.ndarray):
"""
Note: timestamps is the time when the action will be issued,
not when the action will be completed (target_timestamp)
"""
local_idxs, global_idxs, _ = get_accumulate_timestamp_idxs(
timestamps=timestamps,
start_time=self.start_time,
dt=self.dt,
eps=self.eps,
# allows overwriting previous actions
next_global_idx=None
)
if len(global_idxs) > 0:
if self.timestamp_buffer is None:
# first allocation
self.action_buffer = np.zeros_like(actions)
self.timestamp_buffer = np.zeros((len(actions),), dtype=np.float64)
this_max_size = global_idxs[-1] + 1
if this_max_size > len(self.timestamp_buffer):
# reallocate
new_size = max(this_max_size, len(self.timestamp_buffer) * 2)
new_shape = (new_size,) + self.action_buffer.shape[1:]
self.action_buffer = np.resize(self.action_buffer, new_shape)
self.timestamp_buffer = np.resize(self.timestamp_buffer, (new_size,))
# potentially rewrites old data (as expected)
self.action_buffer[global_idxs] = actions[local_idxs]
self.timestamp_buffer[global_idxs] = timestamps[local_idxs]
self.size = max(self.size, this_max_size)


@@ -0,0 +1,34 @@
name: blockpush_lowdim_seed
obs_dim: 16
action_dim: 2
keypoint_dim: 2
obs_eef_target: True
env_runner:
_target_: diffusion_policy.env_runner.blockpush_lowdim_runner.BlockPushLowdimRunner
n_train: 6
n_train_vis: 2
train_start_seed: 0
n_test: 50
n_test_vis: 4
test_start_seed: 100000
max_steps: 350
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
fps: 5
past_action: ${past_action_visible}
abs_action: False
obs_eef_target: ${task.obs_eef_target}
n_envs: null
dataset:
_target_: diffusion_policy.dataset.blockpush_lowdim_dataset.BlockPushLowdimDataset
zarr_path: data/block_pushing/multimodal_push_seed.zarr
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1'}
pad_after: ${eval:'${n_action_steps}-1'}
obs_eef_target: ${task.obs_eef_target}
use_manual_normalizer: False
seed: 42
val_ratio: 0.02


@@ -0,0 +1,34 @@
name: blockpush_lowdim_seed_abs
obs_dim: 16
action_dim: 2
keypoint_dim: 2
obs_eef_target: True
env_runner:
_target_: diffusion_policy.env_runner.blockpush_lowdim_runner.BlockPushLowdimRunner
n_train: 6
n_train_vis: 2
train_start_seed: 0
n_test: 50
n_test_vis: 4
test_start_seed: 100000
max_steps: 350
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
fps: 5
past_action: ${past_action_visible}
abs_action: True
obs_eef_target: ${task.obs_eef_target}
n_envs: null
dataset:
_target_: diffusion_policy.dataset.blockpush_lowdim_dataset.BlockPushLowdimDataset
zarr_path: data/block_pushing/multimodal_push_seed_abs.zarr
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1'}
pad_after: ${eval:'${n_action_steps}-1'}
obs_eef_target: ${task.obs_eef_target}
use_manual_normalizer: False
seed: 42
val_ratio: 0.02


@@ -0,0 +1,64 @@
name: can_image
shape_meta: &shape_meta
# acceptable types: rgb, low_dim
obs:
agentview_image:
shape: [3, 84, 84]
type: rgb
robot0_eye_in_hand_image:
shape: [3, 84, 84]
type: rgb
robot0_eef_pos:
shape: [3]
# type default: low_dim
robot0_eef_quat:
shape: [4]
robot0_gripper_qpos:
shape: [2]
action:
shape: [7]
task_name: &task_name can
dataset_type: &dataset_type ph
dataset_path: &dataset_path data/robomimic/datasets/${task.task_name}/${task.dataset_type}/image.hdf5
abs_action: &abs_action False
env_runner:
_target_: diffusion_policy.env_runner.robomimic_image_runner.RobomimicImageRunner
dataset_path: *dataset_path
shape_meta: *shape_meta
# costs 1GB per env
n_train: 6
n_train_vis: 2
train_start_idx: 0
n_test: 50
n_test_vis: 4
test_start_seed: 100000
# use Python's eval function as resolver; pass the expression as a single-quoted string
max_steps: ${eval:'500 if "${task.dataset_type}" == "mh" else 400'}
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
render_obs_key: 'agentview_image'
fps: 10
crf: 22
past_action: ${past_action_visible}
abs_action: *abs_action
tqdm_interval_sec: 1.0
n_envs: 28
# evaluation at this config requires a 16 core 64GB instance.
dataset:
_target_: diffusion_policy.dataset.robomimic_replay_image_dataset.RobomimicReplayImageDataset
shape_meta: *shape_meta
dataset_path: *dataset_path
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1+${n_latency_steps}'}
pad_after: ${eval:'${n_action_steps}-1'}
n_obs_steps: ${dataset_obs_steps}
abs_action: *abs_action
rotation_rep: 'rotation_6d'
use_legacy_normalizer: False
use_cache: True
seed: 42
val_ratio: 0.02
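These configs lean on YAML anchors (`&shape_meta`, `&dataset_path`) and aliases (`*shape_meta`, `*dataset_path`) so the observation spec and dataset path are defined once and shared between `env_runner` and `dataset`. A minimal round trip, assuming PyYAML is available (the snippet's `doc` is a condensed stand-in, not the real file):

```python
import yaml  # PyYAML; assumed available

doc = """
shape_meta: &shape_meta
  obs:
    agentview_image: {shape: [3, 84, 84], type: rgb}
dataset:
  shape_meta: *shape_meta
"""
loaded = yaml.safe_load(doc)
# the alias resolves to the same mapping the anchor marked
print(loaded["dataset"]["shape_meta"]["obs"]["agentview_image"]["shape"])  # [3, 84, 84]
```

Anchors are resolved at YAML-load time, so both consumers see an identical spec without any `${...}` interpolation involved.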


@@ -0,0 +1,64 @@
name: can_image
shape_meta: &shape_meta
# acceptable types: rgb, low_dim
obs:
agentview_image:
shape: [3, 84, 84]
type: rgb
robot0_eye_in_hand_image:
shape: [3, 84, 84]
type: rgb
robot0_eef_pos:
shape: [3]
# type default: low_dim
robot0_eef_quat:
shape: [4]
robot0_gripper_qpos:
shape: [2]
action:
shape: [10]
task_name: &task_name can
dataset_type: &dataset_type ph
dataset_path: &dataset_path data/robomimic/datasets/${task.task_name}/${task.dataset_type}/image_abs.hdf5
abs_action: &abs_action True
env_runner:
_target_: diffusion_policy.env_runner.robomimic_image_runner.RobomimicImageRunner
dataset_path: *dataset_path
shape_meta: *shape_meta
# costs 1GB per env
n_train: 6
n_train_vis: 2
train_start_idx: 0
n_test: 50
n_test_vis: 4
test_start_seed: 100000
# use Python's eval function as resolver; pass the expression as a single-quoted string
max_steps: ${eval:'500 if "${task.dataset_type}" == "mh" else 400'}
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
render_obs_key: 'agentview_image'
fps: 10
crf: 22
past_action: ${past_action_visible}
abs_action: *abs_action
tqdm_interval_sec: 1.0
n_envs: 28
# evaluation at this config requires a 16 core 64GB instance.
dataset:
_target_: diffusion_policy.dataset.robomimic_replay_image_dataset.RobomimicReplayImageDataset
shape_meta: *shape_meta
dataset_path: *dataset_path
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1+${n_latency_steps}'}
pad_after: ${eval:'${n_action_steps}-1'}
n_obs_steps: ${dataset_obs_steps}
abs_action: *abs_action
rotation_rep: 'rotation_6d'
use_legacy_normalizer: False
use_cache: True
seed: 42
val_ratio: 0.02


@@ -0,0 +1,45 @@
name: can_lowdim
obs_dim: 23
action_dim: 7
keypoint_dim: 3
obs_keys: &obs_keys ['object', 'robot0_eef_pos', 'robot0_eef_quat', 'robot0_gripper_qpos']
task_name: &task_name can
dataset_type: &dataset_type ph
dataset_path: &dataset_path data/robomimic/datasets/${task.task_name}/${task.dataset_type}/low_dim.hdf5
abs_action: &abs_action False
env_runner:
_target_: diffusion_policy.env_runner.robomimic_lowdim_runner.RobomimicLowdimRunner
dataset_path: *dataset_path
obs_keys: *obs_keys
n_train: 6
n_train_vis: 2
train_start_idx: 0
n_test: 50
n_test_vis: 4
test_start_seed: 100000
# use Python's eval function as resolver; pass the expression as a single-quoted string
max_steps: ${eval:'500 if "${task.dataset_type}" == "mh" else 400'}
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
n_latency_steps: ${n_latency_steps}
render_hw: [128,128]
fps: 10
crf: 22
past_action: ${past_action_visible}
abs_action: *abs_action
n_envs: 28
dataset:
_target_: diffusion_policy.dataset.robomimic_replay_lowdim_dataset.RobomimicReplayLowdimDataset
dataset_path: *dataset_path
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1+${n_latency_steps}'}
pad_after: ${eval:'${n_action_steps}-1'}
obs_keys: *obs_keys
abs_action: *abs_action
use_legacy_normalizer: False
seed: 42
val_ratio: 0.02


@@ -0,0 +1,46 @@
name: can_lowdim
obs_dim: 23
action_dim: 10
keypoint_dim: 3
obs_keys: &obs_keys ['object', 'robot0_eef_pos', 'robot0_eef_quat', 'robot0_gripper_qpos']
task_name: &task_name can
dataset_type: &dataset_type ph
dataset_path: &dataset_path data/robomimic/datasets/${task.task_name}/${task.dataset_type}/low_dim_abs.hdf5
abs_action: &abs_action True
env_runner:
_target_: diffusion_policy.env_runner.robomimic_lowdim_runner.RobomimicLowdimRunner
dataset_path: *dataset_path
obs_keys: *obs_keys
n_train: 6
n_train_vis: 2
train_start_idx: 0
n_test: 50
n_test_vis: 4
test_start_seed: 100000
# use Python's eval function as resolver; pass the expression as a single-quoted string
max_steps: ${eval:'500 if "${task.dataset_type}" == "mh" else 400'}
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
n_latency_steps: ${n_latency_steps}
render_hw: [128,128]
fps: 10
crf: 22
past_action: ${past_action_visible}
abs_action: *abs_action
n_envs: 28
dataset:
_target_: diffusion_policy.dataset.robomimic_replay_lowdim_dataset.RobomimicReplayLowdimDataset
dataset_path: *dataset_path
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1+${n_latency_steps}'}
pad_after: ${eval:'${n_action_steps}-1'}
obs_keys: *obs_keys
abs_action: *abs_action
use_legacy_normalizer: False
rotation_rep: rotation_6d
seed: 42
val_ratio: 0.02
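Across these configs, `pad_before` is `n_obs_steps - 1` (plus latency steps) and `pad_after` is `n_action_steps - 1`, so a training window can be anchored at any timestep of an episode, with padding filling what falls outside. A small sketch of the assumed windowing arithmetic (`window_bounds` is illustrative, not the repo's actual sampler):

```python
def window_bounds(t, episode_len, horizon, pad_before, pad_after):
    """Return (lo, hi, n_pad_before, n_pad_after) for a horizon-length window at t."""
    start = t - pad_before
    end = start + horizon
    # clamp to the episode; padding covers what falls outside
    lo, hi = max(start, 0), min(end, episode_len)
    return lo, hi, lo - start, end - hi

# horizon=16, n_obs_steps=2 -> pad_before=1; n_action_steps=8 -> pad_after=7
print(window_bounds(0, 100, 16, 1, 7))   # (0, 15, 1, 0)
print(window_bounds(99, 100, 16, 1, 7))  # (98, 100, 0, 14)
```

With this padding, even the first and last timesteps yield valid samples, which is why both values track the observation and action horizons rather than being fixed constants.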


@@ -0,0 +1,33 @@
name: kitchen_lowdim
obs_dim: 60
action_dim: 9
keypoint_dim: 3
dataset_dir: &dataset_dir data/kitchen
env_runner:
_target_: diffusion_policy.env_runner.kitchen_lowdim_runner.KitchenLowdimRunner
dataset_dir: *dataset_dir
n_train: 6
n_train_vis: 2
train_start_seed: 0
n_test: 50
n_test_vis: 4
test_start_seed: 100000
max_steps: 280
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
render_hw: [240, 360]
fps: 12.5
past_action: ${past_action_visible}
n_envs: null
dataset:
_target_: diffusion_policy.dataset.kitchen_lowdim_dataset.KitchenLowdimDataset
dataset_dir: *dataset_dir
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1'}
pad_after: ${eval:'${n_action_steps}-1'}
seed: 42
val_ratio: 0.02


@@ -0,0 +1,38 @@
name: kitchen_lowdim
obs_dim: 60
action_dim: 9
keypoint_dim: 3
abs_action: True
robot_noise_ratio: 0.1
env_runner:
_target_: diffusion_policy.env_runner.kitchen_lowdim_runner.KitchenLowdimRunner
dataset_dir: data/kitchen
n_train: 6
n_train_vis: 2
train_start_seed: 0
n_test: 50
n_test_vis: 4
test_start_seed: 100000
max_steps: 280
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
render_hw: [240, 360]
fps: 12.5
past_action: ${past_action_visible}
abs_action: ${task.abs_action}
robot_noise_ratio: ${task.robot_noise_ratio}
n_envs: null
dataset:
_target_: diffusion_policy.dataset.kitchen_mjl_lowdim_dataset.KitchenMjlLowdimDataset
dataset_dir: data/kitchen/kitchen_demos_multitask
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1'}
pad_after: ${eval:'${n_action_steps}-1'}
abs_action: ${task.abs_action}
robot_noise_ratio: ${task.robot_noise_ratio}
seed: 42
val_ratio: 0.02


@@ -0,0 +1,64 @@
name: lift_image
shape_meta: &shape_meta
# acceptable types: rgb, low_dim
obs:
agentview_image:
shape: [3, 84, 84]
type: rgb
robot0_eye_in_hand_image:
shape: [3, 84, 84]
type: rgb
robot0_eef_pos:
shape: [3]
# type default: low_dim
robot0_eef_quat:
shape: [4]
robot0_gripper_qpos:
shape: [2]
action:
shape: [7]
task_name: &task_name lift
dataset_type: &dataset_type ph
dataset_path: &dataset_path data/robomimic/datasets/${task.task_name}/${task.dataset_type}/image.hdf5
abs_action: &abs_action False
env_runner:
_target_: diffusion_policy.env_runner.robomimic_image_runner.RobomimicImageRunner
dataset_path: *dataset_path
shape_meta: *shape_meta
# costs 1GB per env
n_train: 6
n_train_vis: 1
train_start_idx: 0
n_test: 50
n_test_vis: 3
test_start_seed: 100000
# use Python's eval function as resolver; pass the expression as a single-quoted string
max_steps: ${eval:'500 if "${task.dataset_type}" == "mh" else 400'}
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
render_obs_key: 'agentview_image'
fps: 10
crf: 22
past_action: ${past_action_visible}
abs_action: *abs_action
tqdm_interval_sec: 1.0
n_envs: 28
# evaluation at this config requires a 16 core 64GB instance.
dataset:
_target_: diffusion_policy.dataset.robomimic_replay_image_dataset.RobomimicReplayImageDataset
shape_meta: *shape_meta
dataset_path: *dataset_path
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1+${n_latency_steps}'}
pad_after: ${eval:'${n_action_steps}-1'}
n_obs_steps: ${dataset_obs_steps}
abs_action: *abs_action
rotation_rep: 'rotation_6d'
use_legacy_normalizer: False
use_cache: True
seed: 42
val_ratio: 0.02


@@ -0,0 +1,63 @@
name: lift_image
shape_meta: &shape_meta
# acceptable types: rgb, low_dim
obs:
agentview_image:
shape: [3, 84, 84]
type: rgb
robot0_eye_in_hand_image:
shape: [3, 84, 84]
type: rgb
robot0_eef_pos:
shape: [3]
# type default: low_dim
robot0_eef_quat:
shape: [4]
robot0_gripper_qpos:
shape: [2]
action:
shape: [10]
task_name: &task_name lift
dataset_type: &dataset_type ph
dataset_path: &dataset_path data/robomimic/datasets/${task.task_name}/${task.dataset_type}/image_abs.hdf5
abs_action: &abs_action True
env_runner:
_target_: diffusion_policy.env_runner.robomimic_image_runner.RobomimicImageRunner
dataset_path: *dataset_path
shape_meta: *shape_meta
n_train: 6
n_train_vis: 2
train_start_idx: 0
n_test: 50
n_test_vis: 4
test_start_seed: 100000
# use Python's eval function as resolver; pass the expression as a single-quoted string
max_steps: ${eval:'500 if "${task.dataset_type}" == "mh" else 400'}
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
render_obs_key: 'agentview_image'
fps: 10
crf: 22
past_action: ${past_action_visible}
abs_action: *abs_action
tqdm_interval_sec: 1.0
n_envs: 28
# evaluation at this config requires a 16 core 64GB instance.
dataset:
_target_: diffusion_policy.dataset.robomimic_replay_image_dataset.RobomimicReplayImageDataset
shape_meta: *shape_meta
dataset_path: *dataset_path
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1+${n_latency_steps}'}
pad_after: ${eval:'${n_action_steps}-1'}
n_obs_steps: ${dataset_obs_steps}
abs_action: *abs_action
rotation_rep: 'rotation_6d'
use_legacy_normalizer: False
use_cache: True
seed: 42
val_ratio: 0.02


@@ -0,0 +1,46 @@
name: lift_lowdim
obs_dim: 19
action_dim: 7
keypoint_dim: 3
obs_keys: &obs_keys ['object', 'robot0_eef_pos', 'robot0_eef_quat', 'robot0_gripper_qpos']
task_name: &task_name lift
dataset_type: &dataset_type ph
dataset_path: &dataset_path data/robomimic/datasets/${task.task_name}/${task.dataset_type}/low_dim.hdf5
abs_action: &abs_action False
env_runner:
_target_: diffusion_policy.env_runner.robomimic_lowdim_runner.RobomimicLowdimRunner
dataset_path: *dataset_path
obs_keys: *obs_keys
n_train: 6
n_train_vis: 2
train_start_idx: 0
n_test: 50
n_test_vis: 4
test_start_seed: 100000
# use Python's eval function as resolver; pass the expression as a single-quoted string
max_steps: ${eval:'500 if "${task.dataset_type}" == "mh" else 400'}
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
n_latency_steps: ${n_latency_steps}
render_hw: [128,128]
fps: 10
crf: 22
past_action: ${past_action_visible}
abs_action: *abs_action
tqdm_interval_sec: 1.0
n_envs: 28
dataset:
_target_: diffusion_policy.dataset.robomimic_replay_lowdim_dataset.RobomimicReplayLowdimDataset
dataset_path: *dataset_path
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1+${n_latency_steps}'}
pad_after: ${eval:'${n_action_steps}-1'}
obs_keys: *obs_keys
abs_action: *abs_action
use_legacy_normalizer: False
seed: 42
val_ratio: 0.02


@@ -0,0 +1,47 @@
name: lift_lowdim
obs_dim: 19
action_dim: 10
keypoint_dim: 3
obs_keys: &obs_keys ['object', 'robot0_eef_pos', 'robot0_eef_quat', 'robot0_gripper_qpos']
task_name: &task_name lift
dataset_type: &dataset_type ph
dataset_path: &dataset_path data/robomimic/datasets/${task.task_name}/${task.dataset_type}/low_dim_abs.hdf5
abs_action: &abs_action True
env_runner:
_target_: diffusion_policy.env_runner.robomimic_lowdim_runner.RobomimicLowdimRunner
dataset_path: *dataset_path
obs_keys: *obs_keys
n_train: 6
n_train_vis: 2
train_start_idx: 0
n_test: 50
n_test_vis: 3
test_start_seed: 100000
# use Python's eval function as resolver; pass the expression as a single-quoted string
max_steps: ${eval:'500 if "${task.dataset_type}" == "mh" else 400'}
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
n_latency_steps: ${n_latency_steps}
render_hw: [128,128]
fps: 10
crf: 22
past_action: ${past_action_visible}
abs_action: *abs_action
tqdm_interval_sec: 1.0
n_envs: 28
dataset:
_target_: diffusion_policy.dataset.robomimic_replay_lowdim_dataset.RobomimicReplayLowdimDataset
dataset_path: *dataset_path
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1+${n_latency_steps}'}
pad_after: ${eval:'${n_action_steps}-1'}
obs_keys: *obs_keys
abs_action: *abs_action
use_legacy_normalizer: False
rotation_rep: rotation_6d
seed: 42
val_ratio: 0.02
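`rotation_rep: rotation_6d` in the absolute-action configs selects the continuous 6D rotation parameterization: the first two columns of the rotation matrix, with the third recovered by Gram-Schmidt (this also explains the jump from `action_dim: 7` to `10`, i.e. 3 position + 6 rotation + 1 gripper). A hedged sketch of that round trip; the function names are ours, not the repo's:

```python
import numpy as np

def mat_to_6d(R):
    # first two columns of the rotation matrix, packed column-by-column
    return R[:, :2].T.reshape(-1)

def sixd_to_mat(d6):
    a1, a2 = d6[:3], d6[3:]
    b1 = a1 / np.linalg.norm(a1)
    b2 = a2 - (b1 @ a2) * b1          # Gram-Schmidt: remove the b1 component
    b2 = b2 / np.linalg.norm(b2)
    b3 = np.cross(b1, b2)             # third column from the first two
    return np.stack([b1, b2, b3], axis=1)

Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90 degrees about z
print(np.allclose(sixd_to_mat(mat_to_6d(Rz)), Rz))  # True
```

Unlike quaternions or Euler angles, this representation has no discontinuities, which is why it is a common choice for regressing absolute end-effector poses.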


@@ -0,0 +1,40 @@
name: pusht_image
image_shape: &image_shape [3, 96, 96]
shape_meta: &shape_meta
# acceptable types: rgb, low_dim
obs:
image:
shape: *image_shape
type: rgb
agent_pos:
shape: [2]
type: low_dim
action:
shape: [2]
env_runner:
_target_: diffusion_policy.env_runner.pusht_image_runner.PushTImageRunner
n_train: 6
n_train_vis: 2
train_start_seed: 0
n_test: 50
n_test_vis: 4
legacy_test: True
test_start_seed: 100000
max_steps: 300
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
fps: 10
past_action: ${past_action_visible}
n_envs: null
dataset:
_target_: diffusion_policy.dataset.pusht_image_dataset.PushTImageDataset
zarr_path: data/pusht/pusht_cchi_v7_replay.zarr
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1'}
pad_after: ${eval:'${n_action_steps}-1'}
seed: 42
val_ratio: 0.02
max_train_episodes: 90


@@ -0,0 +1,34 @@
name: pusht_lowdim
obs_dim: 20 # 9*2 keypoints + 2 state
action_dim: 2
keypoint_dim: 2
env_runner:
_target_: diffusion_policy.env_runner.pusht_keypoints_runner.PushTKeypointsRunner
keypoint_visible_rate: ${keypoint_visible_rate}
n_train: 6
n_train_vis: 2
train_start_seed: 0
n_test: 50
n_test_vis: 4
legacy_test: True
test_start_seed: 100000
max_steps: 300
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
n_latency_steps: ${n_latency_steps}
fps: 10
agent_keypoints: False
past_action: ${past_action_visible}
n_envs: null
dataset:
_target_: diffusion_policy.dataset.pusht_dataset.PushTLowdimDataset
zarr_path: data/pusht/pusht_cchi_v7_replay.zarr
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1+${n_latency_steps}'}
pad_after: ${eval:'${n_action_steps}-1'}
seed: 42
val_ratio: 0.02
max_train_episodes: 90


@@ -0,0 +1,47 @@
name: real_image
image_shape: [3, 240, 320]
dataset_path: data/pusht_real/real_pusht_20230105
shape_meta: &shape_meta
# acceptable types: rgb, low_dim
obs:
# camera_0:
# shape: ${task.image_shape}
# type: rgb
camera_1:
shape: ${task.image_shape}
type: rgb
# camera_2:
# shape: ${task.image_shape}
# type: rgb
camera_3:
shape: ${task.image_shape}
type: rgb
# camera_4:
# shape: ${task.image_shape}
# type: rgb
robot_eef_pose:
shape: [2]
type: low_dim
action:
shape: [2]
env_runner:
_target_: diffusion_policy.env_runner.real_pusht_image_runner.RealPushTImageRunner
dataset:
_target_: diffusion_policy.dataset.real_pusht_image_dataset.RealPushTImageDataset
shape_meta: *shape_meta
dataset_path: ${task.dataset_path}
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1+${n_latency_steps}'}
pad_after: ${eval:'${n_action_steps}-1'}
n_obs_steps: ${dataset_obs_steps}
n_latency_steps: ${n_latency_steps}
use_cache: True
seed: 42
val_ratio: 0.00
max_train_episodes: null
delta_action: False


@@ -0,0 +1,64 @@
name: square_image
shape_meta: &shape_meta
# acceptable types: rgb, low_dim
obs:
agentview_image:
shape: [3, 84, 84]
type: rgb
robot0_eye_in_hand_image:
shape: [3, 84, 84]
type: rgb
robot0_eef_pos:
shape: [3]
# type default: low_dim
robot0_eef_quat:
shape: [4]
robot0_gripper_qpos:
shape: [2]
action:
shape: [7]
task_name: &task_name square
dataset_type: &dataset_type ph
dataset_path: &dataset_path data/robomimic/datasets/${task.task_name}/${task.dataset_type}/image.hdf5
abs_action: &abs_action False
env_runner:
_target_: diffusion_policy.env_runner.robomimic_image_runner.RobomimicImageRunner
dataset_path: *dataset_path
shape_meta: *shape_meta
# costs 1GB per env
n_train: 6
n_train_vis: 2
train_start_idx: 0
n_test: 50
n_test_vis: 4
test_start_seed: 100000
# use Python's eval function as resolver; pass the expression as a single-quoted string
max_steps: ${eval:'500 if "${task.dataset_type}" == "mh" else 400'}
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
render_obs_key: 'agentview_image'
fps: 10
crf: 22
past_action: ${past_action_visible}
abs_action: *abs_action
tqdm_interval_sec: 1.0
n_envs: 28
# evaluation at this config requires a 16 core 64GB instance.
dataset:
_target_: diffusion_policy.dataset.robomimic_replay_image_dataset.RobomimicReplayImageDataset
shape_meta: *shape_meta
dataset_path: *dataset_path
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1+${n_latency_steps}'}
pad_after: ${eval:'${n_action_steps}-1'}
n_obs_steps: ${dataset_obs_steps}
abs_action: *abs_action
rotation_rep: 'rotation_6d'
use_legacy_normalizer: False
use_cache: True
seed: 42
val_ratio: 0.02


@@ -0,0 +1,64 @@
name: square_image
shape_meta: &shape_meta
# acceptable types: rgb, low_dim
obs:
agentview_image:
shape: [3, 84, 84]
type: rgb
robot0_eye_in_hand_image:
shape: [3, 84, 84]
type: rgb
robot0_eef_pos:
shape: [3]
# type default: low_dim
robot0_eef_quat:
shape: [4]
robot0_gripper_qpos:
shape: [2]
action:
shape: [10]
task_name: &task_name square
dataset_type: &dataset_type ph
dataset_path: &dataset_path data/robomimic/datasets/${task.task_name}/${task.dataset_type}/image_abs.hdf5
abs_action: &abs_action True
env_runner:
_target_: diffusion_policy.env_runner.robomimic_image_runner.RobomimicImageRunner
dataset_path: *dataset_path
shape_meta: *shape_meta
# costs 1GB per env
n_train: 6
n_train_vis: 2
train_start_idx: 0
n_test: 50
n_test_vis: 4
test_start_seed: 100000
# use Python's eval function as resolver; pass the expression as a single-quoted string
max_steps: ${eval:'500 if "${task.dataset_type}" == "mh" else 400'}
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
render_obs_key: 'agentview_image'
fps: 10
crf: 22
past_action: ${past_action_visible}
abs_action: *abs_action
tqdm_interval_sec: 1.0
n_envs: 28
# evaluation at this config requires a 16 core 64GB instance.
dataset:
_target_: diffusion_policy.dataset.robomimic_replay_image_dataset.RobomimicReplayImageDataset
shape_meta: *shape_meta
dataset_path: *dataset_path
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1+${n_latency_steps}'}
pad_after: ${eval:'${n_action_steps}-1'}
n_obs_steps: ${dataset_obs_steps}
abs_action: *abs_action
rotation_rep: 'rotation_6d'
use_legacy_normalizer: False
use_cache: True
seed: 42
val_ratio: 0.02


@@ -0,0 +1,46 @@
name: square_lowdim
obs_dim: 23
action_dim: 7
keypoint_dim: 3
obs_keys: &obs_keys ['object', 'robot0_eef_pos', 'robot0_eef_quat', 'robot0_gripper_qpos']
task_name: &task_name square
dataset_type: &dataset_type ph
dataset_path: &dataset_path data/robomimic/datasets/${task.task_name}/${task.dataset_type}/low_dim.hdf5
abs_action: &abs_action False
env_runner:
_target_: diffusion_policy.env_runner.robomimic_lowdim_runner.RobomimicLowdimRunner
dataset_path: *dataset_path
obs_keys: *obs_keys
n_train: 6
n_train_vis: 2
train_start_idx: 0
n_test: 50
n_test_vis: 4
test_start_seed: 100000
# use Python's eval function as resolver; pass the expression as a single-quoted string
max_steps: ${eval:'500 if "${task.dataset_type}" == "mh" else 400'}
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
n_latency_steps: ${n_latency_steps}
render_hw: [128,128]
fps: 10
crf: 22
past_action: ${past_action_visible}
abs_action: *abs_action
n_envs: 28
dataset:
_target_: diffusion_policy.dataset.robomimic_replay_lowdim_dataset.RobomimicReplayLowdimDataset
dataset_path: *dataset_path
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1+${n_latency_steps}'}
pad_after: ${eval:'${n_action_steps}-1'}
obs_keys: *obs_keys
abs_action: *abs_action
use_legacy_normalizer: False
seed: 42
val_ratio: 0.02
max_train_episodes: null


@@ -0,0 +1,47 @@
name: square_lowdim
obs_dim: 23
action_dim: 10
keypoint_dim: 3
obs_keys: &obs_keys ['object', 'robot0_eef_pos', 'robot0_eef_quat', 'robot0_gripper_qpos']
task_name: &task_name square
dataset_type: &dataset_type ph
abs_action: &abs_action True
dataset_path: &dataset_path data/robomimic/datasets/${task.task_name}/${task.dataset_type}/low_dim_abs.hdf5
env_runner:
_target_: diffusion_policy.env_runner.robomimic_lowdim_runner.RobomimicLowdimRunner
dataset_path: *dataset_path
obs_keys: *obs_keys
n_train: 6
n_train_vis: 2
train_start_idx: 0
n_test: 50
n_test_vis: 4
test_start_seed: 100000
# use Python's eval function as resolver; pass the expression as a single-quoted string
max_steps: ${eval:'500 if "${task.dataset_type}" == "mh" else 400'}
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
n_latency_steps: ${n_latency_steps}
render_hw: [128,128]
fps: 10
crf: 22
past_action: ${past_action_visible}
abs_action: *abs_action
n_envs: 28
dataset:
_target_: diffusion_policy.dataset.robomimic_replay_lowdim_dataset.RobomimicReplayLowdimDataset
dataset_path: *dataset_path
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1+${n_latency_steps}'}
pad_after: ${eval:'${n_action_steps}-1'}
obs_keys: *obs_keys
abs_action: *abs_action
use_legacy_normalizer: False
seed: 42
val_ratio: 0.02
max_train_episodes: null


@@ -0,0 +1,63 @@
name: tool_hang_image
shape_meta: &shape_meta
# acceptable types: rgb, low_dim
obs:
sideview_image:
shape: [3, 240, 240]
type: rgb
robot0_eye_in_hand_image:
shape: [3, 240, 240]
type: rgb
robot0_eef_pos:
shape: [3]
# type default: low_dim
robot0_eef_quat:
shape: [4]
robot0_gripper_qpos:
shape: [2]
action:
shape: [7]
task_name: &task_name tool_hang
dataset_type: &dataset_type ph
dataset_path: &dataset_path data/robomimic/datasets/${task.task_name}/${task.dataset_type}/image.hdf5
abs_action: &abs_action False
env_runner:
_target_: diffusion_policy.env_runner.robomimic_image_runner.RobomimicImageRunner
dataset_path: *dataset_path
shape_meta: *shape_meta
# costs 1GB per env
n_train: 6
n_train_vis: 2
train_start_idx: 0
n_test: 50
n_test_vis: 4
test_start_seed: 100000
max_steps: 700
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
render_obs_key: 'sideview_image'
fps: 10
crf: 22
past_action: ${past_action_visible}
abs_action: *abs_action
tqdm_interval_sec: 1.0
n_envs: 28
# evaluation at this config requires a 16 core 64GB instance.
dataset:
_target_: diffusion_policy.dataset.robomimic_replay_image_dataset.RobomimicReplayImageDataset
shape_meta: *shape_meta
dataset_path: *dataset_path
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1+${n_latency_steps}'}
pad_after: ${eval:'${n_action_steps}-1'}
n_obs_steps: ${dataset_obs_steps}
abs_action: *abs_action
rotation_rep: 'rotation_6d'
use_legacy_normalizer: False
use_cache: True
seed: 42
val_ratio: 0.02


@@ -0,0 +1,63 @@
name: tool_hang_image_abs
shape_meta: &shape_meta
# acceptable types: rgb, low_dim
obs:
sideview_image:
shape: [3, 240, 240]
type: rgb
robot0_eye_in_hand_image:
shape: [3, 240, 240]
type: rgb
robot0_eef_pos:
shape: [3]
# type default: low_dim
robot0_eef_quat:
shape: [4]
robot0_gripper_qpos:
shape: [2]
action:
shape: [10]
task_name: &task_name tool_hang
dataset_type: &dataset_type ph
dataset_path: &dataset_path data/robomimic/datasets/${task.task_name}/${task.dataset_type}/image_abs.hdf5
abs_action: &abs_action True
env_runner:
_target_: diffusion_policy.env_runner.robomimic_image_runner.RobomimicImageRunner
dataset_path: *dataset_path
shape_meta: *shape_meta
# costs 1GB per env
n_train: 6
n_train_vis: 2
train_start_idx: 0
n_test: 50
n_test_vis: 4
test_start_seed: 100000
max_steps: 700
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
render_obs_key: 'sideview_image'
fps: 10
crf: 22
past_action: ${past_action_visible}
abs_action: *abs_action
tqdm_interval_sec: 1.0
n_envs: 28
# evaluation at this config requires a 16 core 64GB instance.
dataset:
_target_: diffusion_policy.dataset.robomimic_replay_image_dataset.RobomimicReplayImageDataset
shape_meta: *shape_meta
dataset_path: *dataset_path
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1+${n_latency_steps}'}
pad_after: ${eval:'${n_action_steps}-1'}
n_obs_steps: ${dataset_obs_steps}
abs_action: *abs_action
rotation_rep: 'rotation_6d'
use_legacy_normalizer: False
use_cache: True
seed: 42
val_ratio: 0.02


@@ -0,0 +1,45 @@
name: tool_hang_lowdim
obs_dim: 53
action_dim: 7
keypoint_dim: 3
obs_keys: &obs_keys ['object', 'robot0_eef_pos', 'robot0_eef_quat', 'robot0_gripper_qpos']
task_name: &task_name tool_hang
dataset_type: &dataset_type ph
dataset_path: &dataset_path data/robomimic/datasets/${task.task_name}/${task.dataset_type}/low_dim.hdf5
abs_action: &abs_action False
env_runner:
_target_: diffusion_policy.env_runner.robomimic_lowdim_runner.RobomimicLowdimRunner
dataset_path: *dataset_path
obs_keys: *obs_keys
n_train: 6
n_train_vis: 2
train_start_idx: 0
n_test: 50
n_test_vis: 4
test_start_seed: 100000
max_steps: 700
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
n_latency_steps: ${n_latency_steps}
render_hw: [128,128]
fps: 10
crf: 22
past_action: ${past_action_visible}
abs_action: *abs_action
n_envs: 28
# seed 42 will crash MuJoCo for some reason.
dataset:
_target_: diffusion_policy.dataset.robomimic_replay_lowdim_dataset.RobomimicReplayLowdimDataset
dataset_path: *dataset_path
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1+${n_latency_steps}'}
pad_after: ${eval:'${n_action_steps}-1'}
obs_keys: *obs_keys
abs_action: *abs_action
use_legacy_normalizer: False
seed: 42
val_ratio: 0.02


@@ -0,0 +1,46 @@
name: tool_hang_lowdim
obs_dim: 53
action_dim: 10
keypoint_dim: 3
obs_keys: &obs_keys ['object', 'robot0_eef_pos', 'robot0_eef_quat', 'robot0_gripper_qpos']
task_name: &task_name tool_hang
dataset_type: &dataset_type ph
dataset_path: &dataset_path data/robomimic/datasets/${task.task_name}/${task.dataset_type}/low_dim_abs.hdf5
abs_action: &abs_action True
env_runner:
_target_: diffusion_policy.env_runner.robomimic_lowdim_runner.RobomimicLowdimRunner
dataset_path: *dataset_path
obs_keys: *obs_keys
n_train: 6
n_train_vis: 2
train_start_idx: 0
n_test: 50
n_test_vis: 4
test_start_seed: 100000
max_steps: 700
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
n_latency_steps: ${n_latency_steps}
render_hw: [128,128]
fps: 10
crf: 22
past_action: ${past_action_visible}
abs_action: *abs_action
n_envs: 28
# seed 42 will crash MuJoCo for some reason.
dataset:
_target_: diffusion_policy.dataset.robomimic_replay_lowdim_dataset.RobomimicReplayLowdimDataset
dataset_path: *dataset_path
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1+${n_latency_steps}'}
pad_after: ${eval:'${n_action_steps}-1'}
obs_keys: *obs_keys
abs_action: *abs_action
use_legacy_normalizer: False
rotation_rep: rotation_6d
seed: 42
val_ratio: 0.02


@@ -0,0 +1,75 @@
name: transport_image
shape_meta: &shape_meta
# acceptable types: rgb, low_dim
obs:
shouldercamera0_image:
shape: [3, 84, 84]
type: rgb
robot0_eye_in_hand_image:
shape: [3, 84, 84]
type: rgb
robot0_eef_pos:
shape: [3]
# type default: low_dim
robot0_eef_quat:
shape: [4]
robot0_gripper_qpos:
shape: [2]
shouldercamera1_image:
shape: [3, 84, 84]
type: rgb
robot1_eye_in_hand_image:
shape: [3, 84, 84]
type: rgb
robot1_eef_pos:
shape: [3]
# type default: low_dim
robot1_eef_quat:
shape: [4]
robot1_gripper_qpos:
shape: [2]
action:
shape: [14]
task_name: &task_name transport
dataset_type: &dataset_type ph
dataset_path: &dataset_path data/robomimic/datasets/${task.task_name}/${task.dataset_type}/image.hdf5
abs_action: &abs_action False
env_runner:
_target_: diffusion_policy.env_runner.robomimic_image_runner.RobomimicImageRunner
dataset_path: *dataset_path
shape_meta: *shape_meta
n_train: 6
n_train_vis: 2
train_start_idx: 0
n_test: 50
n_test_vis: 4
test_start_seed: 100000
max_steps: 700
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
render_obs_key: 'shouldercamera0_image'
fps: 10
crf: 22
past_action: ${past_action_visible}
abs_action: *abs_action
tqdm_interval_sec: 1.0
n_envs: 28
# evaluation at this config requires a 16 core 64GB instance.
dataset:
_target_: diffusion_policy.dataset.robomimic_replay_image_dataset.RobomimicReplayImageDataset
shape_meta: *shape_meta
dataset_path: *dataset_path
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1+${n_latency_steps}'}
pad_after: ${eval:'${n_action_steps}-1'}
n_obs_steps: ${dataset_obs_steps}
abs_action: *abs_action
rotation_rep: 'rotation_6d'
use_legacy_normalizer: False
use_cache: True
seed: 42
val_ratio: 0.02


@@ -0,0 +1,75 @@
name: transport_image
shape_meta: &shape_meta
# acceptable types: rgb, low_dim
obs:
shouldercamera0_image:
shape: [3, 84, 84]
type: rgb
robot0_eye_in_hand_image:
shape: [3, 84, 84]
type: rgb
robot0_eef_pos:
shape: [3]
# type default: low_dim
robot0_eef_quat:
shape: [4]
robot0_gripper_qpos:
shape: [2]
shouldercamera1_image:
shape: [3, 84, 84]
type: rgb
robot1_eye_in_hand_image:
shape: [3, 84, 84]
type: rgb
robot1_eef_pos:
shape: [3]
# type default: low_dim
robot1_eef_quat:
shape: [4]
robot1_gripper_qpos:
shape: [2]
action:
shape: [20]
task_name: &task_name transport
dataset_type: &dataset_type ph
dataset_path: &dataset_path data/robomimic/datasets/${task.task_name}/${task.dataset_type}/image_abs.hdf5
abs_action: &abs_action True
env_runner:
_target_: diffusion_policy.env_runner.robomimic_image_runner.RobomimicImageRunner
dataset_path: *dataset_path
shape_meta: *shape_meta
n_train: 6
n_train_vis: 2
train_start_idx: 0
n_test: 50
n_test_vis: 4
test_start_seed: 100000
max_steps: 700
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
render_obs_key: 'shouldercamera0_image'
fps: 10
crf: 22
past_action: ${past_action_visible}
abs_action: *abs_action
tqdm_interval_sec: 1.0
n_envs: 28
# evaluation at this config requires a 16 core 64GB instance.
dataset:
_target_: diffusion_policy.dataset.robomimic_replay_image_dataset.RobomimicReplayImageDataset
shape_meta: *shape_meta
dataset_path: *dataset_path
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1+${n_latency_steps}'}
pad_after: ${eval:'${n_action_steps}-1'}
n_obs_steps: ${dataset_obs_steps}
abs_action: *abs_action
rotation_rep: 'rotation_6d'
use_legacy_normalizer: False
use_cache: True
seed: 42
val_ratio: 0.02


@@ -0,0 +1,49 @@
name: transport_lowdim
obs_dim: 59 # 41+(3+4+2)*2
action_dim: 14 # 7*2
keypoint_dim: 3
obs_keys: &obs_keys [
'object',
'robot0_eef_pos', 'robot0_eef_quat', 'robot0_gripper_qpos',
'robot1_eef_pos', 'robot1_eef_quat', 'robot1_gripper_qpos'
]
task_name: &task_name transport
dataset_type: &dataset_type ph
dataset_path: &dataset_path data/robomimic/datasets/${task.task_name}/${task.dataset_type}/low_dim.hdf5
abs_action: &abs_action False
env_runner:
_target_: diffusion_policy.env_runner.robomimic_lowdim_runner.RobomimicLowdimRunner
dataset_path: *dataset_path
obs_keys: *obs_keys
n_train: 6
n_train_vis: 2
train_start_idx: 0
n_test: 50
n_test_vis: 5
test_start_seed: 100000
max_steps: 700
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
n_latency_steps: ${n_latency_steps}
render_hw: [128,128]
fps: 10
crf: 22
past_action: ${past_action_visible}
abs_action: *abs_action
n_envs: 28
# evaluation with this config requires a 16-core, 64 GB instance.
dataset:
_target_: diffusion_policy.dataset.robomimic_replay_lowdim_dataset.RobomimicReplayLowdimDataset
dataset_path: *dataset_path
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1+${n_latency_steps}'}
pad_after: ${eval:'${n_action_steps}-1'}
obs_keys: *obs_keys
abs_action: *abs_action
use_legacy_normalizer: False
seed: 42
val_ratio: 0.02
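The `&name`/`*name` pairs above are plain YAML anchors and aliases: `dataset_path`, `obs_keys`, and `abs_action` are defined once and reused by `env_runner` and `dataset`. A small sketch of the mechanism:

```python
import yaml  # PyYAML

snippet = """
abs_action: &abs_action false
obs_keys: &obs_keys [object, robot0_eef_pos]
env_runner:
  abs_action: *abs_action
  obs_keys: *obs_keys
dataset:
  abs_action: *abs_action
"""
cfg = yaml.safe_load(snippet)
print(cfg["env_runner"]["abs_action"])  # False
print(cfg["dataset"]["abs_action"])     # False
print(cfg["env_runner"]["obs_keys"])    # ['object', 'robot0_eef_pos']
```

Flipping the single anchored value therefore switches every consumer at once, which is why the abs/non-abs variants of these task configs differ only in the anchor definitions.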


@@ -0,0 +1,49 @@
name: transport_lowdim
obs_dim: 59 # 41+(3+4+2)*2
action_dim: 20 # 10*2
keypoint_dim: 3
obs_keys: &obs_keys [
'object',
'robot0_eef_pos', 'robot0_eef_quat', 'robot0_gripper_qpos',
'robot1_eef_pos', 'robot1_eef_quat', 'robot1_gripper_qpos'
]
task_name: &task_name transport
dataset_type: &dataset_type ph
dataset_path: &dataset_path data/robomimic/datasets/${task.task_name}/${task.dataset_type}/low_dim_abs.hdf5
abs_action: &abs_action True
env_runner:
_target_: diffusion_policy.env_runner.robomimic_lowdim_runner.RobomimicLowdimRunner
dataset_path: *dataset_path
obs_keys: *obs_keys
n_train: 6
n_train_vis: 2
train_start_idx: 0
n_test: 50
n_test_vis: 4
test_start_seed: 100000
max_steps: 700
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
n_latency_steps: ${n_latency_steps}
render_hw: [128,128]
fps: 10
crf: 22
past_action: ${past_action_visible}
abs_action: *abs_action
n_envs: 28
# evaluation with this config requires a 16-core, 64 GB instance.
dataset:
_target_: diffusion_policy.dataset.robomimic_replay_lowdim_dataset.RobomimicReplayLowdimDataset
dataset_path: *dataset_path
horizon: ${horizon}
pad_before: ${eval:'${n_obs_steps}-1+${n_latency_steps}'}
pad_after: ${eval:'${n_action_steps}-1'}
obs_keys: *obs_keys
abs_action: *abs_action
use_legacy_normalizer: False
seed: 42
val_ratio: 0.02


@@ -0,0 +1,132 @@
defaults:
- _self_
- task: blockpush_lowdim_seed
name: train_bet_lowdim
_target_: diffusion_policy.workspace.train_bet_lowdim_workspace.TrainBETLowdimWorkspace
obs_dim: ${task.obs_dim}
action_dim: ${task.action_dim}
keypoint_dim: ${task.keypoint_dim}
task_name: ${task.name}
exp_name: "default"
horizon: 3
n_obs_steps: 3
n_action_steps: 1
n_latency_steps: 0
past_action_visible: False
keypoint_visible_rate: 1.0
obs_as_local_cond: False
obs_as_global_cond: False
pred_action_steps_only: False
policy:
_target_: diffusion_policy.policy.bet_lowdim_policy.BETLowdimPolicy
action_ae:
_target_: diffusion_policy.model.bet.action_ae.discretizers.k_means.KMeansDiscretizer
num_bins: 24
action_dim: ${action_dim}
predict_offsets: True
obs_encoding_net:
_target_: torch.nn.Identity
output_dim: ${obs_dim}
state_prior:
_target_: diffusion_policy.model.bet.latent_generators.mingpt.MinGPT
discrete_input: false
input_dim: ${obs_dim}
vocab_size: ${policy.action_ae.num_bins}
# Architecture details
n_layer: 4
n_head: 4
n_embd: 72
block_size: ${horizon} # Length of history/context
predict_offsets: True
offset_loss_scale: 1000.0 # actions are very small
focal_loss_gamma: 2.0
action_dim: ${action_dim}
horizon: ${horizon}
n_obs_steps: ${n_obs_steps}
n_action_steps: ${n_action_steps}
dataloader:
batch_size: 256
num_workers: 1
shuffle: True
pin_memory: True
persistent_workers: False
val_dataloader:
batch_size: 256
num_workers: 1
shuffle: False
pin_memory: True
persistent_workers: False
optimizer:
learning_rate: 0.0001 # 1e-4
weight_decay: 0.1
betas: [0.9, 0.95]
training:
device: "cuda:0"
seed: 42
debug: False
resume: True
# optimization
lr_scheduler: cosine
lr_warmup_steps: 500
num_epochs: 5000
gradient_accumulate_every: 1
grad_norm_clip: 1.0
enable_normalizer: True
# training loop control
# in epochs
rollout_every: 50
checkpoint_every: 50
val_every: 1
sample_every: 5
# steps per epoch
max_train_steps: null
max_val_steps: null
# misc
tqdm_interval_sec: 1.0
logging:
project: diffusion_policy_debug
resume: True
mode: online
name: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
tags: ["${name}", "${task_name}", "${exp_name}"]
id: null
group: null
checkpoint:
topk:
monitor_key: test_mean_score
mode: max
k: 5
format_str: 'epoch={epoch:04d}-test_mean_score={test_mean_score:.3f}.ckpt'
save_last_ckpt: True
save_last_snapshot: False
multi_run:
run_dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
wandb_name_base: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
hydra:
job:
override_dirname: ${name}
run:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
sweep:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
subdir: ${hydra.job.num}
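The `KMeansDiscretizer` above (`num_bins: 24`, `predict_offsets: True`) clusters continuous actions into discrete bins and keeps the residual offsets so actions can be reconstructed exactly. A conceptual pure-NumPy sketch of that idea (the actual class lives in `diffusion_policy.model.bet.action_ae.discretizers.k_means`):

```python
import numpy as np

rng = np.random.default_rng(0)
actions = rng.normal(size=(1000, 2))  # (N, action_dim), stand-in data

# Tiny Lloyd's k-means standing in for the discretizer (num_bins: 24):
# each action maps to a bin index plus a residual offset.
k = 24
centers = actions[rng.choice(len(actions), size=k, replace=False)]
for _ in range(20):
    bins = np.argmin(((actions[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    for j in range(k):
        if np.any(bins == j):
            centers[j] = actions[bins == j].mean(axis=0)

offsets = actions - centers[bins]   # predict_offsets: True keeps these
recon = centers[bins] + offsets     # exact reconstruction
print(np.allclose(recon, actions))  # True
```

The MinGPT state prior then predicts the discrete bin (a `vocab_size: 24` classification) plus the continuous offset, whose loss is upweighted by `offset_loss_scale` because the offsets are numerically tiny.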


@@ -0,0 +1,141 @@
defaults:
- _self_
- task: lift_image_abs
name: train_diffusion_transformer_hybrid
_target_: diffusion_policy.workspace.train_diffusion_transformer_hybrid_workspace.TrainDiffusionTransformerHybridWorkspace
task_name: ${task.name}
shape_meta: ${task.shape_meta}
exp_name: "default"
horizon: 10
n_obs_steps: 2
n_action_steps: 8
n_latency_steps: 0
dataset_obs_steps: ${n_obs_steps}
past_action_visible: False
keypoint_visible_rate: 1.0
obs_as_cond: True
policy:
_target_: diffusion_policy.policy.diffusion_transformer_hybrid_image_policy.DiffusionTransformerHybridImagePolicy
shape_meta: ${shape_meta}
noise_scheduler:
_target_: diffusers.schedulers.scheduling_ddpm.DDPMScheduler
num_train_timesteps: 100
beta_start: 0.0001
beta_end: 0.02
beta_schedule: squaredcos_cap_v2
variance_type: fixed_small # Yilun's paper uses fixed_small_log instead, but it easily causes NaN
clip_sample: True # required when predict_epsilon=False
prediction_type: epsilon # or sample
horizon: ${horizon}
n_action_steps: ${eval:'${n_action_steps}+${n_latency_steps}'}
n_obs_steps: ${n_obs_steps}
num_inference_steps: 100
crop_shape: [76, 76]
obs_encoder_group_norm: True
eval_fixed_crop: True
n_layer: 8
n_cond_layers: 0 # >0: use a transformer encoder for conditioning; otherwise use an MLP
n_head: 4
n_emb: 256
p_drop_emb: 0.0
p_drop_attn: 0.3
causal_attn: True
time_as_cond: True # if False, use a BERT-like encoder-only architecture with time as an input
obs_as_cond: ${obs_as_cond}
# scheduler.step params
# predict_epsilon: True
ema:
_target_: diffusion_policy.model.diffusion.ema_model.EMAModel
update_after_step: 0
inv_gamma: 1.0
power: 0.75
min_value: 0.0
max_value: 0.9999
dataloader:
batch_size: 64
num_workers: 8
shuffle: True
pin_memory: True
persistent_workers: False
val_dataloader:
batch_size: 64
num_workers: 8
shuffle: False
pin_memory: True
persistent_workers: False
optimizer:
transformer_weight_decay: 1.0e-3
obs_encoder_weight_decay: 1.0e-6
learning_rate: 1.0e-4
betas: [0.9, 0.95]
training:
device: "cuda:0"
seed: 42
debug: False
resume: True
# optimization
lr_scheduler: cosine
# Transformer needs LR warmup
lr_warmup_steps: 1000
num_epochs: 3050
gradient_accumulate_every: 1
# EMA destroys performance when used with BatchNorm
# replace BatchNorm with GroupNorm.
use_ema: True
# training loop control
# in epochs
rollout_every: 50
checkpoint_every: 50
val_every: 1
sample_every: 5
# steps per epoch
max_train_steps: null
max_val_steps: null
# misc
tqdm_interval_sec: 1.0
logging:
project: diffusion_policy_debug
resume: True
mode: online
name: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
tags: ["${name}", "${task_name}", "${exp_name}"]
id: null
group: null
checkpoint:
topk:
monitor_key: test_mean_score
mode: max
k: 5
format_str: 'epoch={epoch:04d}-test_mean_score={test_mean_score:.3f}.ckpt'
save_last_ckpt: True
save_last_snapshot: False
multi_run:
run_dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
wandb_name_base: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
hydra:
job:
override_dirname: ${name}
run:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
sweep:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
subdir: ${hydra.job.num}
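The `EMAModel` block repeated in these configs (`inv_gamma`, `power`, `min_value`, `max_value`) parameterizes a warmup schedule for the EMA decay rate. A sketch of the shape these hyperparameters typically imply (assumed to match common EMA implementations; check `diffusion_policy.model.diffusion.ema_model` for the exact form):

```python
def ema_decay(step, update_after_step=0, inv_gamma=1.0, power=0.75,
              min_value=0.0, max_value=0.9999):
    # Decay ramps from 0 toward max_value as training progresses
    # (assumed schedule: 1 - (1 + step/inv_gamma)^(-power), clipped).
    step = max(0, step - update_after_step - 1)
    value = 1 - (1 + step / inv_gamma) ** (-power)
    return max(min_value, min(value, max_value))

print(ema_decay(1))               # 0.0 (warmup just started)
print(round(ema_decay(1000), 4))  # 0.9944
print(ema_decay(10**9))           # 0.9999 (clipped at max_value)
```

Early in training the EMA weights track the raw weights closely; late in training the decay saturates at `max_value` and the EMA becomes a slow-moving average.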


@@ -0,0 +1,146 @@
defaults:
- _self_
- task: kitchen_lowdim_abs
name: train_diffusion_transformer_lowdim
_target_: diffusion_policy.workspace.train_diffusion_transformer_lowdim_workspace.TrainDiffusionTransformerLowdimWorkspace
obs_dim: ${task.obs_dim}
action_dim: ${task.action_dim}
task_name: ${task.name}
exp_name: "default"
horizon: 16
n_obs_steps: 4
n_action_steps: 8
n_latency_steps: 0
past_action_visible: False
keypoint_visible_rate: 1.0
obs_as_cond: True
pred_action_steps_only: False
policy:
_target_: diffusion_policy.policy.diffusion_transformer_lowdim_policy.DiffusionTransformerLowdimPolicy
model:
_target_: diffusion_policy.model.diffusion.transformer_for_diffusion.TransformerForDiffusion
input_dim: ${eval:'${action_dim} if ${obs_as_cond} else ${obs_dim} + ${action_dim}'}
output_dim: ${policy.model.input_dim}
horizon: ${horizon}
n_obs_steps: ${n_obs_steps}
cond_dim: ${eval:'${obs_dim} if ${obs_as_cond} else 0'}
n_layer: 8
n_head: 4
n_emb: 768
p_drop_emb: 0.0
p_drop_attn: 0.1
causal_attn: True
time_as_cond: True # if False, use a BERT-like encoder-only architecture with time as an input
obs_as_cond: ${obs_as_cond}
n_cond_layers: 0 # >0: use a transformer encoder for conditioning; otherwise use an MLP
noise_scheduler:
_target_: diffusers.schedulers.scheduling_ddpm.DDPMScheduler
num_train_timesteps: 100
beta_start: 0.0001
beta_end: 0.02
beta_schedule: squaredcos_cap_v2
variance_type: fixed_small # Yilun's paper uses fixed_small_log instead, but it easily causes NaN
clip_sample: True # required when predict_epsilon=False
prediction_type: epsilon # or sample
horizon: ${horizon}
obs_dim: ${obs_dim}
action_dim: ${action_dim}
n_action_steps: ${n_action_steps}
n_obs_steps: ${n_obs_steps}
num_inference_steps: 100
obs_as_cond: ${obs_as_cond}
pred_action_steps_only: ${pred_action_steps_only}
# scheduler.step params
# predict_epsilon: True
ema:
_target_: diffusion_policy.model.diffusion.ema_model.EMAModel
update_after_step: 0
inv_gamma: 1.0
power: 0.75
min_value: 0.0
max_value: 0.9999
dataloader:
batch_size: 256
num_workers: 1
shuffle: True
pin_memory: True
persistent_workers: False
val_dataloader:
batch_size: 256
num_workers: 1
shuffle: False
pin_memory: True
persistent_workers: False
optimizer:
learning_rate: 1.0e-4
weight_decay: 1.0e-3
betas: [0.9, 0.95]
training:
device: "cuda:0"
seed: 42
debug: False
resume: True
# optimization
lr_scheduler: cosine
# Transformer needs LR warmup
lr_warmup_steps: 1000
num_epochs: 5000
gradient_accumulate_every: 1
use_ema: True
# training loop control
# in epochs
rollout_every: 50
checkpoint_every: 50
val_every: 1
sample_every: 5
# steps per epoch
max_train_steps: null
max_val_steps: null
# misc
tqdm_interval_sec: 1.0
logging:
project: diffusion_policy_debug
resume: True
mode: online
name: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
tags: ["${name}", "${task_name}", "${exp_name}"]
id: null
group: null
checkpoint:
topk:
monitor_key: test_mean_score
mode: max
k: 5
format_str: 'epoch={epoch:04d}-test_mean_score={test_mean_score:.3f}.ckpt'
save_last_ckpt: True
save_last_snapshot: False
multi_run:
run_dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
wandb_name_base: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
hydra:
job:
override_dirname: ${name}
run:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
sweep:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
subdir: ${hydra.job.num}


@@ -0,0 +1,146 @@
defaults:
- _self_
- task: pusht_lowdim
name: train_diffusion_transformer_lowdim
_target_: diffusion_policy.workspace.train_diffusion_transformer_lowdim_workspace.TrainDiffusionTransformerLowdimWorkspace
obs_dim: ${task.obs_dim}
action_dim: ${task.action_dim}
task_name: ${task.name}
exp_name: "default"
horizon: 16
n_obs_steps: 2
n_action_steps: 8
n_latency_steps: 0
past_action_visible: False
keypoint_visible_rate: 1.0
obs_as_cond: True
pred_action_steps_only: False
policy:
_target_: diffusion_policy.policy.diffusion_transformer_lowdim_policy.DiffusionTransformerLowdimPolicy
model:
_target_: diffusion_policy.model.diffusion.transformer_for_diffusion.TransformerForDiffusion
input_dim: ${eval:'${action_dim} if ${obs_as_cond} else ${obs_dim} + ${action_dim}'}
output_dim: ${policy.model.input_dim}
horizon: ${horizon}
n_obs_steps: ${n_obs_steps}
cond_dim: ${eval:'${obs_dim} if ${obs_as_cond} else 0'}
n_layer: 8
n_head: 4
n_emb: 256
p_drop_emb: 0.0
p_drop_attn: 0.01
causal_attn: True
time_as_cond: True # if False, use a BERT-like encoder-only architecture with time as an input
obs_as_cond: ${obs_as_cond}
n_cond_layers: 0 # >0: use a transformer encoder for conditioning; otherwise use an MLP
noise_scheduler:
_target_: diffusers.schedulers.scheduling_ddpm.DDPMScheduler
num_train_timesteps: 100
beta_start: 0.0001
beta_end: 0.02
beta_schedule: squaredcos_cap_v2
variance_type: fixed_small # Yilun's paper uses fixed_small_log instead, but it easily causes NaN
clip_sample: True # required when predict_epsilon=False
prediction_type: epsilon # or sample
horizon: ${horizon}
obs_dim: ${obs_dim}
action_dim: ${action_dim}
n_action_steps: ${n_action_steps}
n_obs_steps: ${n_obs_steps}
num_inference_steps: 100
obs_as_cond: ${obs_as_cond}
pred_action_steps_only: ${pred_action_steps_only}
# scheduler.step params
# predict_epsilon: True
ema:
_target_: diffusion_policy.model.diffusion.ema_model.EMAModel
update_after_step: 0
inv_gamma: 1.0
power: 0.75
min_value: 0.0
max_value: 0.9999
dataloader:
batch_size: 256
num_workers: 1
shuffle: True
pin_memory: True
persistent_workers: False
val_dataloader:
batch_size: 256
num_workers: 1
shuffle: False
pin_memory: True
persistent_workers: False
optimizer:
learning_rate: 1.0e-4
weight_decay: 1.0e-1
betas: [0.9, 0.95]
training:
device: "cuda:0"
seed: 42
debug: False
resume: True
# optimization
lr_scheduler: cosine
# Transformer needs LR warmup
lr_warmup_steps: 1000
num_epochs: 8000
gradient_accumulate_every: 1
use_ema: True
# training loop control
# in epochs
rollout_every: 50
checkpoint_every: 50
val_every: 1
sample_every: 5
# steps per epoch
max_train_steps: null
max_val_steps: null
# misc
tqdm_interval_sec: 1.0
logging:
project: diffusion_policy_debug
resume: True
mode: online
name: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
tags: ["${name}", "${task_name}", "${exp_name}"]
id: null
group: null
checkpoint:
topk:
monitor_key: test_mean_score
mode: max
k: 5
format_str: 'epoch={epoch:04d}-test_mean_score={test_mean_score:.3f}.ckpt'
save_last_ckpt: True
save_last_snapshot: False
multi_run:
run_dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
wandb_name_base: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
hydra:
job:
override_dirname: ${name}
run:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
sweep:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
subdir: ${hydra.job.num}


@@ -0,0 +1,146 @@
defaults:
- _self_
- task: blockpush_lowdim_seed
name: train_diffusion_transformer_lowdim
_target_: diffusion_policy.workspace.train_diffusion_transformer_lowdim_workspace.TrainDiffusionTransformerLowdimWorkspace
obs_dim: ${task.obs_dim}
action_dim: ${task.action_dim}
task_name: ${task.name}
exp_name: "default"
horizon: 5
n_obs_steps: 3
n_action_steps: 1
n_latency_steps: 0
past_action_visible: False
keypoint_visible_rate: 1.0
obs_as_cond: True
pred_action_steps_only: False
policy:
_target_: diffusion_policy.policy.diffusion_transformer_lowdim_policy.DiffusionTransformerLowdimPolicy
model:
_target_: diffusion_policy.model.diffusion.transformer_for_diffusion.TransformerForDiffusion
input_dim: ${eval:'${action_dim} if ${obs_as_cond} else ${obs_dim} + ${action_dim}'}
output_dim: ${policy.model.input_dim}
horizon: ${horizon}
n_obs_steps: ${n_obs_steps}
cond_dim: ${eval:'${obs_dim} if ${obs_as_cond} else 0'}
n_layer: 8
n_head: 4
n_emb: 256
p_drop_emb: 0.0
p_drop_attn: 0.3
causal_attn: True
time_as_cond: True # if False, use a BERT-like encoder-only architecture with time as an input
obs_as_cond: ${obs_as_cond}
n_cond_layers: 0 # >0: use a transformer encoder for conditioning; otherwise use an MLP
noise_scheduler:
_target_: diffusers.schedulers.scheduling_ddpm.DDPMScheduler
num_train_timesteps: 100
beta_start: 0.0001
beta_end: 0.02
beta_schedule: squaredcos_cap_v2
variance_type: fixed_small # Yilun's paper uses fixed_small_log instead, but it easily causes NaN
clip_sample: True # required when predict_epsilon=False
prediction_type: epsilon # or sample
horizon: ${horizon}
obs_dim: ${obs_dim}
action_dim: ${action_dim}
n_action_steps: ${n_action_steps}
n_obs_steps: ${n_obs_steps}
num_inference_steps: 100
obs_as_cond: ${obs_as_cond}
pred_action_steps_only: ${pred_action_steps_only}
# scheduler.step params
# predict_epsilon: True
ema:
_target_: diffusion_policy.model.diffusion.ema_model.EMAModel
update_after_step: 0
inv_gamma: 1.0
power: 0.75
min_value: 0.0
max_value: 0.9999
dataloader:
batch_size: 256
num_workers: 1
shuffle: True
pin_memory: True
persistent_workers: False
val_dataloader:
batch_size: 256
num_workers: 1
shuffle: False
pin_memory: True
persistent_workers: False
optimizer:
learning_rate: 1.0e-4
weight_decay: 1.0e-3
betas: [0.9, 0.95]
training:
device: "cuda:0"
seed: 42
debug: False
resume: True
# optimization
lr_scheduler: cosine
# Transformer needs LR warmup
lr_warmup_steps: 1000
num_epochs: 5000
gradient_accumulate_every: 1
use_ema: True
# training loop control
# in epochs
rollout_every: 50
checkpoint_every: 50
val_every: 1
sample_every: 5
# steps per epoch
max_train_steps: null
max_val_steps: null
# misc
tqdm_interval_sec: 1.0
logging:
project: diffusion_policy_debug
resume: True
mode: online
name: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
tags: ["${name}", "${task_name}", "${exp_name}"]
id: null
group: null
checkpoint:
topk:
monitor_key: test_mean_score
mode: max
k: 5
format_str: 'epoch={epoch:04d}-test_mean_score={test_mean_score:.3f}.ckpt'
save_last_ckpt: True
save_last_snapshot: False
multi_run:
run_dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
wandb_name_base: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
hydra:
job:
override_dirname: ${name}
run:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
sweep:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
subdir: ${hydra.job.num}


@@ -0,0 +1,144 @@
defaults:
- _self_
- task: real_pusht_image
name: train_diffusion_transformer_hybrid
_target_: diffusion_policy.workspace.train_diffusion_transformer_hybrid_workspace.TrainDiffusionTransformerHybridWorkspace
task_name: ${task.name}
shape_meta: ${task.shape_meta}
exp_name: "default"
horizon: 16
n_obs_steps: 2
n_action_steps: 8
n_latency_steps: 0
dataset_obs_steps: ${n_obs_steps}
past_action_visible: False
keypoint_visible_rate: 1.0
obs_as_cond: True
policy:
_target_: diffusion_policy.policy.diffusion_transformer_hybrid_image_policy.DiffusionTransformerHybridImagePolicy
shape_meta: ${shape_meta}
noise_scheduler:
_target_: diffusers.schedulers.scheduling_ddim.DDIMScheduler
num_train_timesteps: 100
beta_start: 0.0001
beta_end: 0.02
# beta_schedule is important
# this is the best we found
beta_schedule: squaredcos_cap_v2
clip_sample: True
set_alpha_to_one: True
steps_offset: 0
prediction_type: epsilon # or sample
horizon: ${horizon}
n_action_steps: ${eval:'${n_action_steps}+${n_latency_steps}'}
n_obs_steps: ${n_obs_steps}
num_inference_steps: 8
crop_shape: [216, 288] # (crop_h, crop_w); 90% of the 320x240 input
obs_encoder_group_norm: True
eval_fixed_crop: True
n_layer: 8
n_cond_layers: 0 # >0: use a transformer encoder for conditioning; otherwise use an MLP
n_head: 4
n_emb: 256
p_drop_emb: 0.0
p_drop_attn: 0.3
causal_attn: True
time_as_cond: True # if False, use a BERT-like encoder-only architecture with time as an input
obs_as_cond: ${obs_as_cond}
# scheduler.step params
# predict_epsilon: True
ema:
_target_: diffusion_policy.model.diffusion.ema_model.EMAModel
update_after_step: 0
inv_gamma: 1.0
power: 0.75
min_value: 0.0
max_value: 0.9999
dataloader:
batch_size: 64
num_workers: 8
shuffle: True
pin_memory: True
persistent_workers: True
val_dataloader:
batch_size: 64
num_workers: 8
shuffle: False
pin_memory: True
persistent_workers: True
optimizer:
transformer_weight_decay: 1.0e-3
obs_encoder_weight_decay: 1.0e-6
learning_rate: 1.0e-4
betas: [0.9, 0.95]
training:
device: "cuda:0"
seed: 42
debug: False
resume: True
# optimization
lr_scheduler: cosine
# Transformer needs LR warmup
lr_warmup_steps: 500
num_epochs: 600
gradient_accumulate_every: 1
# EMA destroys performance when used with BatchNorm
# replace BatchNorm with GroupNorm.
use_ema: True
# training loop control
# in epochs
rollout_every: 50
checkpoint_every: 50
val_every: 1
sample_every: 5
# steps per epoch
max_train_steps: null
max_val_steps: null
# misc
tqdm_interval_sec: 1.0
logging:
project: diffusion_policy_debug
resume: True
mode: online
name: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
tags: ["${name}", "${task_name}", "${exp_name}"]
id: null
group: null
checkpoint:
topk:
monitor_key: train_loss
mode: min
k: 5
format_str: 'epoch={epoch:04d}-train_loss={train_loss:.3f}.ckpt'
save_last_ckpt: True
save_last_snapshot: False
multi_run:
run_dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
wandb_name_base: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
hydra:
job:
override_dirname: ${name}
run:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
sweep:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
subdir: ${hydra.job.num}
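This config trains the DDIM scheduler with `num_train_timesteps: 100` but samples with only `num_inference_steps: 8`, which is what makes real-time control feasible. A rough sketch of the even-stride timestep selection this implies (rounding details may differ across diffusers versions):

```python
import numpy as np

num_train_timesteps = 100  # num_train_timesteps above
num_inference_steps = 8    # num_inference_steps above
steps_offset = 0           # steps_offset above

# Pick 8 evenly spaced timesteps out of 100, in descending order.
step_ratio = num_train_timesteps // num_inference_steps
timesteps = (np.arange(0, num_inference_steps) * step_ratio)[::-1] + steps_offset
print(timesteps)  # [84 72 60 48 36 24 12  0]
```

DDIM's deterministic updates stay accurate under this coarse stride, so inference cost drops by roughly 12x relative to running all 100 DDPM steps.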


@@ -0,0 +1,140 @@
defaults:
- _self_
- task: lift_image_abs
name: train_diffusion_unet_hybrid
_target_: diffusion_policy.workspace.train_diffusion_unet_hybrid_workspace.TrainDiffusionUnetHybridWorkspace
task_name: ${task.name}
shape_meta: ${task.shape_meta}
exp_name: "default"
horizon: 16
n_obs_steps: 2
n_action_steps: 8
n_latency_steps: 0
dataset_obs_steps: ${n_obs_steps}
past_action_visible: False
keypoint_visible_rate: 1.0
obs_as_global_cond: True
policy:
_target_: diffusion_policy.policy.diffusion_unet_hybrid_image_policy.DiffusionUnetHybridImagePolicy
shape_meta: ${shape_meta}
noise_scheduler:
_target_: diffusers.schedulers.scheduling_ddim.DDIMScheduler
num_train_timesteps: 100
beta_start: 0.0001
beta_end: 0.02
# beta_schedule is important
# this is the best we found
beta_schedule: squaredcos_cap_v2
clip_sample: True
set_alpha_to_one: True
steps_offset: 0
prediction_type: epsilon # or sample
horizon: ${horizon}
n_action_steps: ${eval:'${n_action_steps}+${n_latency_steps}'}
n_obs_steps: ${n_obs_steps}
num_inference_steps: 8
obs_as_global_cond: ${obs_as_global_cond}
# crop_shape: [76, 76]
crop_shape: null
diffusion_step_embed_dim: 128
down_dims: [256,512,1024]
kernel_size: 5
n_groups: 8
cond_predict_scale: True
obs_encoder_group_norm: True
eval_fixed_crop: True
# scheduler.step params
# predict_epsilon: True
ema:
_target_: diffusion_policy.model.diffusion.ema_model.EMAModel
update_after_step: 0
inv_gamma: 1.0
power: 0.75
min_value: 0.0
max_value: 0.9999
dataloader:
batch_size: 64
num_workers: 8
shuffle: True
pin_memory: True
persistent_workers: False
val_dataloader:
batch_size: 64
num_workers: 8
shuffle: False
pin_memory: True
persistent_workers: False
optimizer:
_target_: torch.optim.AdamW
lr: 1.0e-4
betas: [0.95, 0.999]
eps: 1.0e-8
weight_decay: 1.0e-6
training:
device: "cuda:0"
seed: 42
debug: False
resume: True
# optimization
lr_scheduler: cosine
lr_warmup_steps: 500
num_epochs: 3000
gradient_accumulate_every: 1
# EMA destroys performance when used with BatchNorm
# replace BatchNorm with GroupNorm.
use_ema: True
# training loop control
# in epochs
rollout_every: 50
checkpoint_every: 50
val_every: 1
sample_every: 5
# steps per epoch
max_train_steps: null
max_val_steps: null
# misc
tqdm_interval_sec: 1.0
logging:
project: diffusion_policy_debug
resume: True
mode: online
name: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
tags: ["${name}", "${task_name}", "${exp_name}"]
id: null
group: null
checkpoint:
topk:
monitor_key: test_mean_score
mode: max
k: 5
format_str: 'epoch={epoch:04d}-test_mean_score={test_mean_score:.3f}.ckpt'
save_last_ckpt: True
save_last_snapshot: False
multi_run:
run_dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
wandb_name_base: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
hydra:
job:
override_dirname: ${name}
run:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
sweep:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
subdir: ${hydra.job.num}
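`lr_scheduler: cosine` with `lr_warmup_steps: 500` describes the usual linear-warmup-then-cosine-decay multiplier on the base learning rate. A sketch of the assumed shape (the concrete implementation may differ slightly):

```python
import math

def lr_multiplier(step, warmup_steps=500, total_steps=100_000):
    # Linear warmup to 1.0, then cosine decay to 0.0 (assumed shape;
    # total_steps is a hypothetical stand-in for epochs * steps_per_epoch).
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * min(1.0, progress)))

print(lr_multiplier(250))      # 0.5 (halfway through warmup)
print(lr_multiplier(500))      # 1.0 (warmup done)
print(lr_multiplier(100_000))  # 0.0 (fully decayed)
```

The warmup matters most for the transformer variants in this release, which is why their configs carry the "Transformer needs LR warmup" comment.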


@@ -0,0 +1,146 @@
defaults:
- _self_
- task: pusht_lowdim
name: train_diffusion_unet_lowdim
_target_: diffusion_policy.workspace.train_diffusion_unet_lowdim_workspace.TrainDiffusionUnetLowdimWorkspace
obs_dim: ${task.obs_dim}
action_dim: ${task.action_dim}
keypoint_dim: ${task.keypoint_dim}
task_name: ${task.name}
exp_name: "default"
horizon: 16
n_obs_steps: 2
n_action_steps: 8
n_latency_steps: 0
past_action_visible: False
keypoint_visible_rate: 1.0
obs_as_local_cond: False
obs_as_global_cond: False
pred_action_steps_only: False
policy:
_target_: diffusion_policy.policy.diffusion_unet_lowdim_policy.DiffusionUnetLowdimPolicy
model:
_target_: diffusion_policy.model.diffusion.conditional_unet1d.ConditionalUnet1D
input_dim: "${eval: ${task.action_dim} if ${obs_as_local_cond} or ${obs_as_global_cond} else ${task.obs_dim} + ${task.action_dim}}"
local_cond_dim: "${eval: ${task.obs_dim} if ${obs_as_local_cond} else None}"
global_cond_dim: "${eval: ${task.obs_dim}*${n_obs_steps} if ${obs_as_global_cond} else None}"
diffusion_step_embed_dim: 256
down_dims: [256, 512, 1024]
kernel_size: 5
n_groups: 8
cond_predict_scale: True
noise_scheduler:
_target_: diffusers.schedulers.scheduling_ddim.DDIMScheduler
num_train_timesteps: 100
beta_start: 0.0001
beta_end: 0.02
# beta_schedule is important
# this is the best we found
beta_schedule: squaredcos_cap_v2
clip_sample: True
set_alpha_to_one: True
steps_offset: 0
prediction_type: epsilon # or sample
horizon: ${horizon}
obs_dim: ${obs_dim}
action_dim: ${action_dim}
n_action_steps: ${eval:'${n_action_steps}+${n_latency_steps}'}
n_obs_steps: ${n_obs_steps}
num_inference_steps: 8
obs_as_local_cond: ${obs_as_local_cond}
obs_as_global_cond: ${obs_as_global_cond}
pred_action_steps_only: ${pred_action_steps_only}
oa_step_convention: True
# scheduler.step params
# predict_epsilon: True
ema:
_target_: diffusion_policy.model.diffusion.ema_model.EMAModel
update_after_step: 0
inv_gamma: 1.0
power: 0.75
min_value: 0.0
max_value: 0.9999
dataloader:
batch_size: 256
num_workers: 1
shuffle: True
pin_memory: True
persistent_workers: False
val_dataloader:
batch_size: 256
num_workers: 1
shuffle: False
pin_memory: True
persistent_workers: False
optimizer:
_target_: torch.optim.AdamW
lr: 1.0e-4
betas: [0.95, 0.999]
eps: 1.0e-8
weight_decay: 1.0e-6
training:
device: "cuda:0"
seed: 42
debug: False
resume: True
# optimization
lr_scheduler: cosine
lr_warmup_steps: 500
num_epochs: 8000
gradient_accumulate_every: 1
use_ema: True
# training loop control
# in epochs
rollout_every: 50
checkpoint_every: 50
val_every: 1
sample_every: 5
# steps per epoch
max_train_steps: null
max_val_steps: null
# misc
tqdm_interval_sec: 1.0
logging:
project: diffusion_policy_debug
resume: True
mode: online
name: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
tags: ["${name}", "${task_name}", "${exp_name}"]
id: null
group: null
checkpoint:
topk:
monitor_key: test_mean_score
mode: max
k: 5
format_str: 'epoch={epoch:04d}-test_mean_score={test_mean_score:.3f}.ckpt'
save_last_ckpt: True
save_last_snapshot: False
multi_run:
run_dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
wandb_name_base: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
hydra:
job:
override_dirname: ${name}
run:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
sweep:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
subdir: ${hydra.job.num}


@@ -0,0 +1,137 @@
defaults:
- _self_
- task: lift_image_abs
name: train_diffusion_unet_hybrid
_target_: diffusion_policy.workspace.train_diffusion_unet_hybrid_workspace.TrainDiffusionUnetHybridWorkspace
task_name: ${task.name}
shape_meta: ${task.shape_meta}
exp_name: "default"
horizon: 16
n_obs_steps: 2
n_action_steps: 8
n_latency_steps: 0
dataset_obs_steps: ${n_obs_steps}
past_action_visible: False
keypoint_visible_rate: 1.0
obs_as_global_cond: True
policy:
_target_: diffusion_policy.policy.diffusion_unet_hybrid_image_policy.DiffusionUnetHybridImagePolicy
shape_meta: ${shape_meta}
noise_scheduler:
_target_: diffusers.schedulers.scheduling_ddpm.DDPMScheduler
num_train_timesteps: 100
beta_start: 0.0001
beta_end: 0.02
beta_schedule: squaredcos_cap_v2
variance_type: fixed_small # Yilun's paper uses fixed_small_log instead, but it easily causes NaN
clip_sample: True # required when predict_epsilon=False
prediction_type: epsilon # or sample
horizon: ${horizon}
n_action_steps: ${eval:'${n_action_steps}+${n_latency_steps}'}
n_obs_steps: ${n_obs_steps}
num_inference_steps: 100
obs_as_global_cond: ${obs_as_global_cond}
crop_shape: [76, 76]
# crop_shape: null
diffusion_step_embed_dim: 128
down_dims: [512, 1024, 2048]
kernel_size: 5
n_groups: 8
cond_predict_scale: True
obs_encoder_group_norm: True
eval_fixed_crop: True
# scheduler.step params
# predict_epsilon: True
ema:
_target_: diffusion_policy.model.diffusion.ema_model.EMAModel
update_after_step: 0
inv_gamma: 1.0
power: 0.75
min_value: 0.0
max_value: 0.9999
dataloader:
batch_size: 64
num_workers: 8
shuffle: True
pin_memory: True
persistent_workers: False
val_dataloader:
batch_size: 64
num_workers: 8
shuffle: False
pin_memory: True
persistent_workers: False
optimizer:
_target_: torch.optim.AdamW
lr: 1.0e-4
betas: [0.95, 0.999]
eps: 1.0e-8
weight_decay: 1.0e-6
training:
device: "cuda:0"
seed: 42
debug: False
resume: True
# optimization
lr_scheduler: cosine
lr_warmup_steps: 500
num_epochs: 3050
gradient_accumulate_every: 1
# EMA destroys performance when used with BatchNorm
# replace BatchNorm with GroupNorm.
use_ema: True
# training loop control
# in epochs
rollout_every: 50
checkpoint_every: 50
val_every: 1
sample_every: 5
# steps per epoch
max_train_steps: null
max_val_steps: null
# misc
tqdm_interval_sec: 1.0
logging:
project: diffusion_policy_debug
resume: True
mode: online
name: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
tags: ["${name}", "${task_name}", "${exp_name}"]
id: null
group: null
checkpoint:
topk:
monitor_key: test_mean_score
mode: max
k: 5
format_str: 'epoch={epoch:04d}-test_mean_score={test_mean_score:.3f}.ckpt'
save_last_ckpt: True
save_last_snapshot: False
multi_run:
run_dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
wandb_name_base: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
hydra:
job:
override_dirname: ${name}
run:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
sweep:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
subdir: ${hydra.job.num}


@@ -0,0 +1,149 @@
defaults:
- _self_
- task: lift_image_abs
name: train_diffusion_unet_image
_target_: diffusion_policy.workspace.train_diffusion_unet_image_workspace.TrainDiffusionUnetImageWorkspace
task_name: ${task.name}
shape_meta: ${task.shape_meta}
exp_name: "default"
horizon: 16
n_obs_steps: 2
n_action_steps: 8
n_latency_steps: 0
dataset_obs_steps: ${n_obs_steps}
past_action_visible: False
keypoint_visible_rate: 1.0
obs_as_global_cond: True
policy:
_target_: diffusion_policy.policy.diffusion_unet_image_policy.DiffusionUnetImagePolicy
shape_meta: ${shape_meta}
noise_scheduler:
_target_: diffusers.schedulers.scheduling_ddpm.DDPMScheduler
num_train_timesteps: 100
beta_start: 0.0001
beta_end: 0.02
beta_schedule: squaredcos_cap_v2
variance_type: fixed_small # Yilun's paper uses fixed_small_log instead, but it easily causes NaN
clip_sample: True # required when predict_epsilon=False
prediction_type: epsilon # or sample
obs_encoder:
_target_: diffusion_policy.model.vision.multi_image_obs_encoder.MultiImageObsEncoder
shape_meta: ${shape_meta}
rgb_model:
_target_: diffusion_policy.model.vision.model_getter.get_resnet
name: resnet18
weights: IMAGENET1K_V1 # or r3m
resize_shape: [256, 256]
crop_shape: [224, 224]
random_crop: False
use_group_norm: False
share_rgb_model: True
imagenet_norm: True
horizon: ${horizon}
n_action_steps: ${eval:'${n_action_steps}+${n_latency_steps}'}
n_obs_steps: ${n_obs_steps}
num_inference_steps: 100
obs_as_global_cond: ${obs_as_global_cond}
# crop_shape: null
diffusion_step_embed_dim: 128
down_dims: [512, 1024, 2048]
kernel_size: 5
n_groups: 8
cond_predict_scale: True
# scheduler.step params
# predict_epsilon: True
ema:
_target_: diffusion_policy.model.diffusion.ema_model.EMAModel
update_after_step: 0
inv_gamma: 1.0
power: 0.75
min_value: 0.0
max_value: 0.9999
dataloader:
batch_size: 64
num_workers: 4
shuffle: True
pin_memory: True
persistent_workers: False
val_dataloader:
batch_size: 64
num_workers: 4
shuffle: False
pin_memory: True
persistent_workers: False
optimizer:
_target_: torch.optim.AdamW
lr: 1.0e-4
betas: [0.95, 0.999]
eps: 1.0e-8
weight_decay: 1.0e-6
training:
device: "cuda:0"
seed: 42
debug: False
resume: True
# optimization
lr_scheduler: cosine
lr_warmup_steps: 500
num_epochs: 8000
gradient_accumulate_every: 1
# EMA destroys performance when used with BatchNorm
# replace BatchNorm with GroupNorm.
use_ema: True
freeze_encoder: True
# training loop control
# in epochs
rollout_every: 50
checkpoint_every: 50
val_every: 1
sample_every: 5
# steps per epoch
max_train_steps: null
max_val_steps: null
# misc
tqdm_interval_sec: 1.0
logging:
project: diffusion_policy_debug
resume: True
mode: online
name: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
tags: ["${name}", "${task_name}", "${exp_name}"]
id: null
group: null
checkpoint:
topk:
monitor_key: test_mean_score
mode: max
k: 5
format_str: 'epoch={epoch:04d}-test_mean_score={test_mean_score:.3f}.ckpt'
save_last_ckpt: True
save_last_snapshot: False
multi_run:
run_dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
wandb_name_base: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
hydra:
job:
override_dirname: ${name}
run:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
sweep:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
subdir: ${hydra.job.num}
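Several fields above use a custom `${eval:...}` interpolation (e.g. `policy.n_action_steps`). How the training entry point registers that resolver with OmegaConf is an assumption, so this sketch only reproduces the substitute-then-evaluate behavior in plain Python:

```python
import re

# Reproduces what "${eval:'${n_action_steps}+${n_latency_steps}'}" computes:
# substitute ${key} references from the config, then evaluate the expression.
def resolve_eval(expr: str, values: dict):
    substituted = re.sub(r'\$\{(\w+)\}', lambda m: str(values[m.group(1)]), expr)
    return eval(substituted)  # trusted config-file expressions only

print(resolve_eval('${n_action_steps}+${n_latency_steps}',
                   {'n_action_steps': 8, 'n_latency_steps': 0}))  # 8
```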


@@ -0,0 +1,150 @@
defaults:
- _self_
- task: lift_image_abs
name: train_diffusion_unet_image
_target_: diffusion_policy.workspace.train_diffusion_unet_image_workspace.TrainDiffusionUnetImageWorkspace
task_name: ${task.name}
shape_meta: ${task.shape_meta}
exp_name: "default"
horizon: 16
n_obs_steps: 2
n_action_steps: 8
n_latency_steps: 0
dataset_obs_steps: ${n_obs_steps}
past_action_visible: False
keypoint_visible_rate: 1.0
obs_as_global_cond: True
policy:
_target_: diffusion_policy.policy.diffusion_unet_image_policy.DiffusionUnetImagePolicy
shape_meta: ${shape_meta}
noise_scheduler:
_target_: diffusers.schedulers.scheduling_ddpm.DDPMScheduler
num_train_timesteps: 100
beta_start: 0.0001
beta_end: 0.02
beta_schedule: squaredcos_cap_v2
variance_type: fixed_small # Yilun's paper uses fixed_small_log instead, but it easily causes NaN
clip_sample: True # required when predict_epsilon=False
prediction_type: epsilon # or sample
obs_encoder:
_target_: diffusion_policy.model.vision.multi_image_obs_encoder.MultiImageObsEncoder
shape_meta: ${shape_meta}
rgb_model:
_target_: diffusion_policy.model.vision.model_getter.get_resnet
name: resnet18
weights: null
resize_shape: null
crop_shape: [76, 76]
# constant center crop
random_crop: True
use_group_norm: True
share_rgb_model: False
imagenet_norm: True
horizon: ${horizon}
n_action_steps: ${eval:'${n_action_steps}+${n_latency_steps}'}
n_obs_steps: ${n_obs_steps}
num_inference_steps: 100
obs_as_global_cond: ${obs_as_global_cond}
# crop_shape: null
diffusion_step_embed_dim: 128
down_dims: [512, 1024, 2048]
kernel_size: 5
n_groups: 8
cond_predict_scale: True
# scheduler.step params
# predict_epsilon: True
ema:
_target_: diffusion_policy.model.diffusion.ema_model.EMAModel
update_after_step: 0
inv_gamma: 1.0
power: 0.75
min_value: 0.0
max_value: 0.9999
dataloader:
batch_size: 64
num_workers: 4
shuffle: True
pin_memory: True
persistent_workers: False
val_dataloader:
batch_size: 64
num_workers: 4
shuffle: False
pin_memory: True
persistent_workers: False
optimizer:
_target_: torch.optim.AdamW
lr: 1.0e-4
betas: [0.95, 0.999]
eps: 1.0e-8
weight_decay: 1.0e-6
training:
device: "cuda:0"
seed: 42
debug: False
resume: True
# optimization
lr_scheduler: cosine
lr_warmup_steps: 500
num_epochs: 8000
gradient_accumulate_every: 1
# EMA destroys performance when used with BatchNorm
# replace BatchNorm with GroupNorm.
use_ema: True
freeze_encoder: False
# training loop control
# in epochs
rollout_every: 50
checkpoint_every: 50
val_every: 1
sample_every: 5
# steps per epoch
max_train_steps: null
max_val_steps: null
# misc
tqdm_interval_sec: 1.0
logging:
project: diffusion_policy_debug
resume: True
mode: online
name: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
tags: ["${name}", "${task_name}", "${exp_name}"]
id: null
group: null
checkpoint:
topk:
monitor_key: test_mean_score
mode: max
k: 5
format_str: 'epoch={epoch:04d}-test_mean_score={test_mean_score:.3f}.ckpt'
save_last_ckpt: True
save_last_snapshot: False
multi_run:
run_dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
wandb_name_base: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
hydra:
job:
override_dirname: ${name}
run:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
sweep:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
subdir: ${hydra.job.num}
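The `ema` block's `inv_gamma`, `power`, `min_value`, and `max_value` parameterize a warm-up decay schedule. A sketch assuming the common `1 - (1 + step/inv_gamma)^(-power)` form used by diffusers-style `EMAModel`s; the project's exact clamping may differ:

```python
# Warm-up decay implied by the ema block above, clamped to
# [min_value, max_value]; the formula is an assumption.
def ema_decay(step, inv_gamma=1.0, power=0.75, min_value=0.0, max_value=0.9999):
    value = 1.0 - (1.0 + step / inv_gamma) ** -power
    return max(min_value, min(value, max_value))

print(ema_decay(0))          # 0.0: the EMA copies the raw model at step 0
print(ema_decay(1_000_000))  # clamped at max_value = 0.9999
```

The schedule keeps the EMA responsive early in training and asymptotically approaches `max_value`.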


@@ -0,0 +1,143 @@
defaults:
- _self_
- task: pusht_lowdim
name: train_diffusion_unet_lowdim
_target_: diffusion_policy.workspace.train_diffusion_unet_lowdim_workspace.TrainDiffusionUnetLowdimWorkspace
obs_dim: ${task.obs_dim}
action_dim: ${task.action_dim}
keypoint_dim: ${task.keypoint_dim}
task_name: ${task.name}
exp_name: "default"
horizon: 16
n_obs_steps: 2
n_action_steps: 8
n_latency_steps: 0
past_action_visible: False
keypoint_visible_rate: 1.0
obs_as_local_cond: False
obs_as_global_cond: True
pred_action_steps_only: False
policy:
_target_: diffusion_policy.policy.diffusion_unet_lowdim_policy.DiffusionUnetLowdimPolicy
model:
_target_: diffusion_policy.model.diffusion.conditional_unet1d.ConditionalUnet1D
input_dim: "${eval: ${task.action_dim} if ${obs_as_local_cond} or ${obs_as_global_cond} else ${task.obs_dim} + ${task.action_dim}}"
local_cond_dim: "${eval: ${task.obs_dim} if ${obs_as_local_cond} else None}"
global_cond_dim: "${eval: ${task.obs_dim}*${n_obs_steps} if ${obs_as_global_cond} else None}"
diffusion_step_embed_dim: 256
down_dims: [256, 512, 1024]
kernel_size: 5
n_groups: 8
cond_predict_scale: True
noise_scheduler:
_target_: diffusers.schedulers.scheduling_ddpm.DDPMScheduler
num_train_timesteps: 100
beta_start: 0.0001
beta_end: 0.02
beta_schedule: squaredcos_cap_v2
variance_type: fixed_small # Yilun's paper uses fixed_small_log instead, but it easily causes NaN
clip_sample: True # required when predict_epsilon=False
prediction_type: epsilon # or sample
horizon: ${horizon}
obs_dim: ${obs_dim}
action_dim: ${action_dim}
n_action_steps: ${eval:'${n_action_steps}+${n_latency_steps}'}
n_obs_steps: ${n_obs_steps}
num_inference_steps: 100
obs_as_local_cond: ${obs_as_local_cond}
obs_as_global_cond: ${obs_as_global_cond}
pred_action_steps_only: ${pred_action_steps_only}
oa_step_convention: True
# scheduler.step params
# predict_epsilon: True
ema:
_target_: diffusion_policy.model.diffusion.ema_model.EMAModel
update_after_step: 0
inv_gamma: 1.0
power: 0.75
min_value: 0.0
max_value: 0.9999
dataloader:
batch_size: 256
num_workers: 1
shuffle: True
pin_memory: True
persistent_workers: False
val_dataloader:
batch_size: 256
num_workers: 1
shuffle: False
pin_memory: True
persistent_workers: False
optimizer:
_target_: torch.optim.AdamW
lr: 1.0e-4
betas: [0.95, 0.999]
eps: 1.0e-8
weight_decay: 1.0e-6
training:
device: "cuda:0"
seed: 42
debug: False
resume: True
# optimization
lr_scheduler: cosine
lr_warmup_steps: 500
num_epochs: 5000
gradient_accumulate_every: 1
use_ema: True
# training loop control
# in epochs
rollout_every: 50
checkpoint_every: 50
val_every: 1
sample_every: 5
# steps per epoch
max_train_steps: null
max_val_steps: null
# misc
tqdm_interval_sec: 1.0
logging:
project: diffusion_policy_debug
resume: True
mode: online
name: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
tags: ["${name}", "${task_name}", "${exp_name}"]
id: null
group: null
checkpoint:
topk:
monitor_key: test_mean_score
mode: max
k: 5
format_str: 'epoch={epoch:04d}-test_mean_score={test_mean_score:.3f}.ckpt'
save_last_ckpt: True
save_last_snapshot: False
multi_run:
run_dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
wandb_name_base: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
hydra:
job:
override_dirname: ${name}
run:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
sweep:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
subdir: ${hydra.job.num}
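The three `${eval:...}` expressions in `policy.model` above select the UNet's input and conditioning dimensions. Restated in plain Python (the dimension numbers in the usage line are illustrative):

```python
# Plain-Python restatement of the input_dim / local_cond_dim /
# global_cond_dim expressions above.
def unet_dims(obs_dim, action_dim, n_obs_steps,
              obs_as_local_cond, obs_as_global_cond):
    if obs_as_local_cond or obs_as_global_cond:
        input_dim = action_dim              # obs enter only via conditioning
    else:
        input_dim = obs_dim + action_dim    # obs concatenated per timestep
    local_cond_dim = obs_dim if obs_as_local_cond else None
    global_cond_dim = obs_dim * n_obs_steps if obs_as_global_cond else None
    return input_dim, local_cond_dim, global_cond_dim

# config defaults: obs_as_local_cond=False, obs_as_global_cond=True
print(unet_dims(20, 2, 2, False, True))  # (2, None, 40)
```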


@@ -0,0 +1,141 @@
defaults:
- _self_
- task: real_pusht_image
name: train_diffusion_unet_hybrid
_target_: diffusion_policy.workspace.train_diffusion_unet_hybrid_workspace.TrainDiffusionUnetHybridWorkspace
task_name: ${task.name}
shape_meta: ${task.shape_meta}
exp_name: "default"
horizon: 16
n_obs_steps: 2
n_action_steps: 8
n_latency_steps: 0
dataset_obs_steps: ${n_obs_steps}
past_action_visible: False
keypoint_visible_rate: 1.0
obs_as_global_cond: True
policy:
_target_: diffusion_policy.policy.diffusion_unet_hybrid_image_policy.DiffusionUnetHybridImagePolicy
shape_meta: ${shape_meta}
noise_scheduler:
_target_: diffusers.schedulers.scheduling_ddim.DDIMScheduler
num_train_timesteps: 100
beta_start: 0.0001
beta_end: 0.02
# beta_schedule is important
# this is the best we found
beta_schedule: squaredcos_cap_v2
clip_sample: True
set_alpha_to_one: True
steps_offset: 0
prediction_type: epsilon # or sample
horizon: ${horizon}
n_action_steps: ${eval:'${n_action_steps}+${n_latency_steps}'}
n_obs_steps: ${n_obs_steps}
num_inference_steps: 8
obs_as_global_cond: ${obs_as_global_cond}
# crop_shape: [76, 76] # 84x84 90%
crop_shape: [216, 288] # ch, cw 320x240 90%
# crop_shape: null
diffusion_step_embed_dim: 128
down_dims: [256,512,1024]
kernel_size: 5
n_groups: 8
cond_predict_scale: True
obs_encoder_group_norm: True
eval_fixed_crop: True
# scheduler.step params
# predict_epsilon: True
ema:
_target_: diffusion_policy.model.diffusion.ema_model.EMAModel
update_after_step: 0
inv_gamma: 1.0
power: 0.75
min_value: 0.0
max_value: 0.9999
dataloader:
batch_size: 64
num_workers: 8
shuffle: True
pin_memory: True
persistent_workers: True
val_dataloader:
batch_size: 64
num_workers: 8
shuffle: False
pin_memory: True
persistent_workers: True
optimizer:
_target_: torch.optim.AdamW
lr: 1.0e-4
betas: [0.95, 0.999]
eps: 1.0e-8
weight_decay: 1.0e-6
training:
device: "cuda:0"
seed: 42
debug: False
resume: True
# optimization
lr_scheduler: cosine
lr_warmup_steps: 500
num_epochs: 600
gradient_accumulate_every: 1
# EMA destroys performance when used with BatchNorm
# replace BatchNorm with GroupNorm.
use_ema: True
# training loop control
# in epochs
rollout_every: 50
checkpoint_every: 50
val_every: 1
sample_every: 5
# steps per epoch
max_train_steps: null
max_val_steps: null
# misc
tqdm_interval_sec: 1.0
logging:
project: diffusion_policy_debug
resume: True
mode: online
name: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
tags: ["${name}", "${task_name}", "${exp_name}"]
id: null
group: null
checkpoint:
topk:
monitor_key: train_loss
mode: min
k: 5
format_str: 'epoch={epoch:04d}-train_loss={train_loss:.3f}.ckpt'
save_last_ckpt: True
save_last_snapshot: False
multi_run:
run_dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
wandb_name_base: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
hydra:
job:
override_dirname: ${name}
run:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
sweep:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
subdir: ${hydra.job.num}
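`lr_scheduler: cosine` with `lr_warmup_steps: 500` suggests the usual linear-warmup-then-cosine-decay multiplier (as in diffusers' `get_scheduler`); a sketch, with the project's exact curve unverified:

```python
import math

# Learning-rate multiplier: linear warmup over `warmup_steps`,
# then cosine decay to zero over the remaining steps.
def cosine_with_warmup(step, warmup_steps, total_steps):
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

print(cosine_with_warmup(250, 500, 10_000))     # 0.5, halfway through warmup
print(cosine_with_warmup(10_000, 500, 10_000))  # 0.0 at the end of training
```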


@@ -0,0 +1,152 @@
defaults:
- _self_
- task: real_pusht_image
name: train_diffusion_unet_image
_target_: diffusion_policy.workspace.train_diffusion_unet_image_workspace.TrainDiffusionUnetImageWorkspace
task_name: ${task.name}
shape_meta: ${task.shape_meta}
exp_name: "default"
horizon: 16
n_obs_steps: 2
n_action_steps: 8
n_latency_steps: 0
dataset_obs_steps: ${n_obs_steps}
past_action_visible: False
keypoint_visible_rate: 1.0
obs_as_global_cond: True
policy:
_target_: diffusion_policy.policy.diffusion_unet_image_policy.DiffusionUnetImagePolicy
shape_meta: ${shape_meta}
noise_scheduler:
_target_: diffusers.schedulers.scheduling_ddim.DDIMScheduler
num_train_timesteps: 100
beta_start: 0.0001
beta_end: 0.02
# beta_schedule is important
# this is the best we found
beta_schedule: squaredcos_cap_v2
clip_sample: True
set_alpha_to_one: True
steps_offset: 0
prediction_type: epsilon # or sample
obs_encoder:
_target_: diffusion_policy.model.vision.multi_image_obs_encoder.MultiImageObsEncoder
shape_meta: ${shape_meta}
rgb_model:
_target_: diffusion_policy.model.vision.model_getter.get_resnet
name: resnet18
weights: null
resize_shape: [240, 320]
crop_shape: [216, 288] # ch, cw 240x320 90%
random_crop: True
use_group_norm: True
share_rgb_model: False
imagenet_norm: True
horizon: ${horizon}
n_action_steps: ${eval:'${n_action_steps}+${n_latency_steps}'}
n_obs_steps: ${n_obs_steps}
num_inference_steps: 100
obs_as_global_cond: ${obs_as_global_cond}
# crop_shape: null
diffusion_step_embed_dim: 128
down_dims: [512, 1024, 2048]
kernel_size: 5
n_groups: 8
cond_predict_scale: True
# scheduler.step params
# predict_epsilon: True
ema:
_target_: diffusion_policy.model.diffusion.ema_model.EMAModel
update_after_step: 0
inv_gamma: 1.0
power: 0.75
min_value: 0.0
max_value: 0.9999
dataloader:
batch_size: 64
num_workers: 8
shuffle: True
pin_memory: True
persistent_workers: True
val_dataloader:
batch_size: 64
num_workers: 8
shuffle: False
pin_memory: True
persistent_workers: True
optimizer:
_target_: torch.optim.AdamW
lr: 1.0e-4
betas: [0.95, 0.999]
eps: 1.0e-8
weight_decay: 1.0e-6
training:
device: "cuda:0"
seed: 42
debug: False
resume: True
# optimization
lr_scheduler: cosine
lr_warmup_steps: 500
num_epochs: 600
gradient_accumulate_every: 1
# EMA destroys performance when used with BatchNorm
# replace BatchNorm with GroupNorm.
use_ema: True
freeze_encoder: False
# training loop control
# in epochs
rollout_every: 50
checkpoint_every: 50
val_every: 1
sample_every: 5
# steps per epoch
max_train_steps: null
max_val_steps: null
# misc
tqdm_interval_sec: 1.0
logging:
project: diffusion_policy_debug
resume: True
mode: online
name: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
tags: ["${name}", "${task_name}", "${exp_name}"]
id: null
group: null
checkpoint:
topk:
monitor_key: train_loss
mode: min
k: 5
format_str: 'epoch={epoch:04d}-train_loss={train_loss:.3f}.ckpt'
save_last_ckpt: True
save_last_snapshot: False
multi_run:
run_dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
wandb_name_base: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
hydra:
job:
override_dirname: ${name}
run:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
sweep:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
subdir: ${hydra.job.num}
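The `crop_shape` comments above ("ch, cw 240x320 90%") come from taking a 90% crop of each image dimension. A one-line helper reproduces the numbers used across these configs:

```python
# 90% spatial crop used throughout these configs, rounded to integers.
def crop_90(h, w, frac=0.9):
    return round(h * frac), round(w * frac)

print(crop_90(240, 320))  # (216, 288), the real_pusht_image crop
print(crop_90(84, 84))    # (76, 76), i.e. 90% of an 84x84 image
```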


@@ -0,0 +1,152 @@
defaults:
- _self_
- task: real_pusht_image
name: train_diffusion_unet_image
_target_: diffusion_policy.workspace.train_diffusion_unet_image_workspace.TrainDiffusionUnetImageWorkspace
task_name: ${task.name}
shape_meta: ${task.shape_meta}
exp_name: "default"
horizon: 16
n_obs_steps: 2
n_action_steps: 8
n_latency_steps: 0
dataset_obs_steps: ${n_obs_steps}
past_action_visible: False
keypoint_visible_rate: 1.0
obs_as_global_cond: True
policy:
_target_: diffusion_policy.policy.diffusion_unet_image_policy.DiffusionUnetImagePolicy
shape_meta: ${shape_meta}
noise_scheduler:
_target_: diffusers.schedulers.scheduling_ddim.DDIMScheduler
num_train_timesteps: 100
beta_start: 0.0001
beta_end: 0.02
# beta_schedule is important
# this is the best we found
beta_schedule: squaredcos_cap_v2
clip_sample: True
set_alpha_to_one: True
steps_offset: 0
prediction_type: epsilon # or sample
obs_encoder:
_target_: diffusion_policy.model.vision.multi_image_obs_encoder.MultiImageObsEncoder
shape_meta: ${shape_meta}
rgb_model:
_target_: diffusion_policy.model.vision.model_getter.get_resnet
name: resnet18
weights: IMAGENET1K_V1 # or r3m
resize_shape: [224,224]
crop_shape: null
random_crop: False
use_group_norm: False
share_rgb_model: True
imagenet_norm: True
horizon: ${horizon}
n_action_steps: ${eval:'${n_action_steps}+${n_latency_steps}'}
n_obs_steps: ${n_obs_steps}
num_inference_steps: 100
obs_as_global_cond: ${obs_as_global_cond}
# crop_shape: null
diffusion_step_embed_dim: 128
down_dims: [512, 1024, 2048]
kernel_size: 5
n_groups: 8
cond_predict_scale: True
# scheduler.step params
# predict_epsilon: True
ema:
_target_: diffusion_policy.model.diffusion.ema_model.EMAModel
update_after_step: 0
inv_gamma: 1.0
power: 0.75
min_value: 0.0
max_value: 0.9999
dataloader:
batch_size: 64
num_workers: 8
shuffle: True
pin_memory: True
persistent_workers: True
val_dataloader:
batch_size: 64
num_workers: 8
shuffle: False
pin_memory: True
persistent_workers: True
optimizer:
_target_: torch.optim.AdamW
lr: 1.0e-4
betas: [0.95, 0.999]
eps: 1.0e-8
weight_decay: 1.0e-6
training:
device: "cuda:0"
seed: 42
debug: False
resume: True
# optimization
lr_scheduler: cosine
lr_warmup_steps: 500
num_epochs: 600
gradient_accumulate_every: 1
# EMA destroys performance when used with BatchNorm
# replace BatchNorm with GroupNorm.
use_ema: True
freeze_encoder: True
# training loop control
# in epochs
rollout_every: 50
checkpoint_every: 50
val_every: 1
sample_every: 5
# steps per epoch
max_train_steps: null
max_val_steps: null
# misc
tqdm_interval_sec: 1.0
logging:
project: diffusion_policy_debug
resume: True
mode: online
name: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
tags: ["${name}", "${task_name}", "${exp_name}"]
id: null
group: null
checkpoint:
topk:
monitor_key: train_loss
mode: min
k: 5
format_str: 'epoch={epoch:04d}-train_loss={train_loss:.3f}.ckpt'
save_last_ckpt: True
save_last_snapshot: False
multi_run:
run_dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
wandb_name_base: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
hydra:
job:
override_dirname: ${name}
run:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
sweep:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
subdir: ${hydra.job.num}


@@ -0,0 +1,128 @@
defaults:
- _self_
- task: lift_image_abs
name: train_diffusion_unet_video
_target_: diffusion_policy.workspace.train_diffusion_unet_video_workspace.TrainDiffusionUnetVideoWorkspace
task_name: ${task.name}
shape_meta: ${task.shape_meta}
exp_name: "default"
horizon: 16
n_obs_steps: 4
n_action_steps: 8
past_action_visible: False
keypoint_visible_rate: 1.0
lowdim_as_global_cond: True
policy:
_target_: diffusion_policy.policy.diffusion_unet_video_policy.DiffusionUnetVideoPolicy
shape_meta: ${shape_meta}
noise_scheduler:
_target_: diffusers.schedulers.scheduling_ddpm.DDPMScheduler
num_train_timesteps: 100
beta_start: 0.0001
beta_end: 0.02
beta_schedule: squaredcos_cap_v2
variance_type: fixed_small # Yilun's paper uses fixed_small_log instead, but it easily causes NaN
clip_sample: True # required when predict_epsilon=False
prediction_type: epsilon # or sample
rgb_net:
_target_: diffusion_policy.model.obs_encoder.video_core.VideoCore
backbone:
_target_: diffusion_policy.model.obs_encoder.video_core.VideoResNet
norm_groups: 8
input_channel: 3
model_depth: 50 # ResNet-50 (depths 18 and 34 not yet available)
pool:
_target_: diffusion_policy.model.ibc.global_avgpool.GlobalAvgpool
dim: [2,3,4]
horizon: ${horizon}
n_action_steps: ${n_action_steps}
n_obs_steps: ${n_obs_steps}
num_inference_steps: 100
lowdim_as_global_cond: ${lowdim_as_global_cond}
diffusion_step_embed_dim: 128
down_dims: [512, 1024, 2048]
kernel_size: 5
n_groups: 8
cond_predict_scale: True
# TemporalAggregator Parameters
channel_mults: [1,1]
n_blocks_per_level: 1
ta_kernel_size: 3
ta_n_groups: 1
# scheduler.step params
# predict_epsilon: True
ema:
_target_: diffusion_policy.model.diffusion.ema_model.EMAModel
update_after_step: 0
inv_gamma: 1.0
power: 0.75
min_value: 0.0
max_value: 0.9999
dataloader:
batch_size: 32
num_workers: 1
shuffle: True
pin_memory: True
optimizer:
_target_: torch.optim.AdamW
lr: 0.0001 # 1e-4
betas: [0.95, 0.999]
eps: 0.00000001 # 1e-8
weight_decay: 0.000001 # 1e-6
training:
device: "cuda:0"
lr_scheduler: cosine
lr_warmup_steps: 500
num_epochs: 1500
gradient_accumulate_every: 1
eval_every: 5000
eval_first: False
val_every: 300
use_ema: False
tqdm_interval_sec: 1.0
seed: 42
logging:
project: diffusion_policy_debug
resume: True
mode: online
name: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
tags: ["${name}", "${task_name}", "${exp_name}"]
id: null
group: null
checkpoint:
topk:
monitor_key: test_score
mode: max
k: 5
format_str: 'epoch={epoch:03d}-test_score={test_score:.3f}.ckpt'
save_last_ckpt: False
save_last_snapshot: False
hydra:
job:
override_dirname: ${name}
run:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
sweep:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
subdir: ${hydra.job.num}
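The optimizer block above writes `0.0001 # 1e-4` in decimal while the other configs use `1.0e-4`. The difference likely matters because PyYAML's YAML 1.1 loader only recognizes scientific notation as a float when it has both a decimal point and a signed exponent:

```python
import yaml

# PyYAML quirk: '1e-4' fails the YAML 1.1 float regex (no dot), so it
# loads as a string; '1.0e-4' loads as a float.
print(type(yaml.safe_load('lr: 1e-4')['lr']))    # <class 'str'>
print(type(yaml.safe_load('lr: 1.0e-4')['lr']))  # <class 'float'>
```

OmegaConf (used by Hydra) may patch this resolver so both spellings load as floats, but decimal literals or the `1.0e-4` form are the loader-agnostic choices.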


@@ -0,0 +1,111 @@
defaults:
- _self_
- task: pusht_image
name: train_ibc_dfo_hybrid
_target_: diffusion_policy.workspace.train_ibc_dfo_hybrid_workspace.TrainIbcDfoHybridWorkspace
task_name: ${task.name}
shape_meta: ${task.shape_meta}
exp_name: "default"
horizon: 2
n_obs_steps: 2
n_action_steps: 1
n_latency_steps: 0
dataset_obs_steps: ${n_obs_steps}
past_action_visible: False
keypoint_visible_rate: 1.0
policy:
_target_: diffusion_policy.policy.ibc_dfo_hybrid_image_policy.IbcDfoHybridImagePolicy
shape_meta: ${shape_meta}
horizon: ${horizon}
n_action_steps: ${eval:'${n_action_steps}+${n_latency_steps}'}
n_obs_steps: ${n_obs_steps}
dropout: 0.1
train_n_neg: 1024
pred_n_iter: 5
pred_n_samples: 1024
kevin_inference: False
andy_train: False
obs_encoder_group_norm: True
eval_fixed_crop: True
crop_shape: [84, 84]
dataloader:
batch_size: 128
num_workers: 8
shuffle: True
pin_memory: True
persistent_workers: False
val_dataloader:
batch_size: 128
num_workers: 8
shuffle: False
pin_memory: True
persistent_workers: False
optimizer:
_target_: torch.optim.AdamW
lr: 1.0e-4
betas: [0.95, 0.999]
eps: 1.0e-8
weight_decay: 1.0e-6
training:
device: "cuda:0"
seed: 42
debug: False
resume: True
# optimization
lr_scheduler: cosine
lr_warmup_steps: 500
num_epochs: 3050
gradient_accumulate_every: 1
# training loop control
# in epochs
rollout_every: 50
checkpoint_every: 50
val_every: 1
sample_every: 5
sample_max_batch: 128
# steps per epoch
max_train_steps: null
max_val_steps: null
# misc
tqdm_interval_sec: 1.0
logging:
project: diffusion_policy_debug
resume: True
mode: online
name: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
tags: ["${name}", "${task_name}", "${exp_name}"]
id: null
group: null
checkpoint:
topk:
monitor_key: test_mean_score
mode: max
k: 5
format_str: 'epoch={epoch:04d}-test_mean_score={test_mean_score:.3f}.ckpt'
save_last_ckpt: True
save_last_snapshot: False
multi_run:
run_dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
wandb_name_base: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
hydra:
job:
override_dirname: ${name}
run:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
sweep:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
subdir: ${hydra.job.num}
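`pred_n_iter` and `pred_n_samples` above control a derivative-free optimizer at inference time. A hedged sketch loosely following IBC's iterated sampling scheme; the project's actual noise schedule and resampling rule may differ:

```python
import numpy as np

# Iterated sampling: draw candidate actions, resample in proportion to
# softmax(-energy), perturb with shrinking noise, return the best sample.
def dfo_infer(energy_fn, low, high, n_iter=5, n_samples=1024,
              sigma=0.33, shrink=0.5, seed=0):
    rng = np.random.default_rng(seed)
    samples = rng.uniform(low, high, size=(n_samples, len(low)))
    for _ in range(n_iter):
        energies = energy_fn(samples)
        probs = np.exp(-(energies - energies.min()))
        probs /= probs.sum()                      # softmax over -energy
        idx = rng.choice(n_samples, size=n_samples, p=probs)
        samples = np.clip(samples[idx] + rng.normal(0, sigma, samples.shape),
                          low, high)
        sigma *= shrink
    return samples[np.argmin(energy_fn(samples))]

# toy quadratic energy with minimum at (0.2, -0.1)
target = np.array([0.2, -0.1])
best = dfo_infer(lambda a: ((a - target) ** 2).sum(axis=1),
                 low=np.array([-1.0, -1.0]), high=np.array([1.0, 1.0]))
print(np.round(best, 2))
```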


@@ -0,0 +1,109 @@
defaults:
- _self_
- task: pusht_lowdim
name: train_ibc_dfo_lowdim
_target_: diffusion_policy.workspace.train_ibc_dfo_lowdim_workspace.TrainIbcDfoLowdimWorkspace
obs_dim: ${task.obs_dim}
action_dim: ${task.action_dim}
keypoint_dim: ${task.keypoint_dim}
task_name: ${task.name}
exp_name: "default"
horizon: 2
n_obs_steps: 2
n_action_steps: 1
n_latency_steps: 0
past_action_visible: False
keypoint_visible_rate: 1.0
policy:
_target_: diffusion_policy.policy.ibc_dfo_lowdim_policy.IbcDfoLowdimPolicy
horizon: ${horizon}
obs_dim: ${obs_dim}
action_dim: ${action_dim}
n_action_steps: ${eval:'${n_action_steps}+${n_latency_steps}'}
n_obs_steps: ${n_obs_steps}
dropout: 0.1
train_n_neg: 1024
pred_n_iter: 5
pred_n_samples: 1024
kevin_inference: False
andy_train: False
dataloader:
batch_size: 256
num_workers: 1
shuffle: True
pin_memory: True
persistent_workers: False
val_dataloader:
batch_size: 256
num_workers: 1
shuffle: False
pin_memory: True
persistent_workers: False
optimizer:
_target_: torch.optim.AdamW
lr: 1.0e-4
betas: [0.95, 0.999]
eps: 1.0e-8
weight_decay: 1.0e-6
training:
device: "cuda:0"
seed: 42
debug: False
resume: True
# optimization
lr_scheduler: cosine
lr_warmup_steps: 500
num_epochs: 5000
gradient_accumulate_every: 1
# training loop control
# in epochs
rollout_every: 50
checkpoint_every: 50
val_every: 1
sample_every: 5
sample_max_batch: 128
# steps per epoch
max_train_steps: null
max_val_steps: null
# misc
tqdm_interval_sec: 1.0
logging:
project: diffusion_policy_debug
resume: True
mode: online
name: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
tags: ["${name}", "${task_name}", "${exp_name}"]
id: null
group: null
checkpoint:
topk:
monitor_key: test_mean_score
mode: max
k: 5
format_str: 'epoch={epoch:04d}-test_mean_score={test_mean_score:.3f}.ckpt'
save_last_ckpt: True
save_last_snapshot: False
multi_run:
run_dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
wandb_name_base: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
hydra:
job:
override_dirname: ${name}
run:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
sweep:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
subdir: ${hydra.job.num}


@@ -0,0 +1,111 @@
defaults:
- _self_
- task: real_pusht_image
name: train_ibc_dfo_hybrid
_target_: diffusion_policy.workspace.train_ibc_dfo_hybrid_workspace.TrainIbcDfoHybridWorkspace
task_name: ${task.name}
shape_meta: ${task.shape_meta}
exp_name: "default"
horizon: 2
n_obs_steps: 2
n_action_steps: 1
n_latency_steps: 1
dataset_obs_steps: ${n_obs_steps}
past_action_visible: False
keypoint_visible_rate: 1.0
policy:
_target_: diffusion_policy.policy.ibc_dfo_hybrid_image_policy.IbcDfoHybridImagePolicy
shape_meta: ${shape_meta}
horizon: ${horizon}
n_action_steps: ${n_action_steps}
n_obs_steps: ${n_obs_steps}
dropout: 0.1
train_n_neg: 256
pred_n_iter: 3
pred_n_samples: 1024
kevin_inference: False
andy_train: False
obs_encoder_group_norm: True
eval_fixed_crop: True
crop_shape: [216, 288] # ch, cw 320x240 90%
dataloader:
batch_size: 128
num_workers: 8
shuffle: True
pin_memory: True
persistent_workers: False
val_dataloader:
batch_size: 128
num_workers: 1
shuffle: False
pin_memory: True
persistent_workers: False
optimizer:
_target_: torch.optim.AdamW
lr: 1.0e-4
betas: [0.95, 0.999]
eps: 1.0e-8
weight_decay: 1.0e-6
training:
device: "cuda:0"
seed: 42
debug: False
resume: True
# optimization
lr_scheduler: cosine
lr_warmup_steps: 500
num_epochs: 1000
gradient_accumulate_every: 1
# training loop control
# in epochs
rollout_every: 50
checkpoint_every: 5
val_every: 1
sample_every: 5
sample_max_batch: 128
# steps per epoch
max_train_steps: null
max_val_steps: null
# misc
tqdm_interval_sec: 1.0
logging:
project: diffusion_policy_debug
resume: True
mode: online
name: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
tags: ["${name}", "${task_name}", "${exp_name}"]
id: null
group: null
checkpoint:
topk:
monitor_key: train_action_mse_error
mode: min
k: 5
format_str: 'epoch={epoch:04d}-train_action_mse_error={train_action_mse_error:.3f}.ckpt'
save_last_ckpt: True
save_last_snapshot: False
multi_run:
run_dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
wandb_name_base: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
hydra:
job:
override_dirname: ${name}
run:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
sweep:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
subdir: ${hydra.job.num}


@@ -0,0 +1,92 @@
defaults:
- _self_
- task: lift_image
name: train_robomimic_image
_target_: diffusion_policy.workspace.train_robomimic_image_workspace.TrainRobomimicImageWorkspace
task_name: ${task.name}
shape_meta: ${task.shape_meta}
exp_name: "default"
horizon: &horizon 10
n_obs_steps: 1
n_action_steps: 1
n_latency_steps: 0
dataset_obs_steps: *horizon
past_action_visible: False
keypoint_visible_rate: 1.0
policy:
_target_: diffusion_policy.policy.robomimic_image_policy.RobomimicImagePolicy
shape_meta: ${shape_meta}
algo_name: bc_rnn
obs_type: image
# oc.select resolver: key, default
task_name: ${oc.select:task.task_name,lift}
dataset_type: ${oc.select:task.dataset_type,ph}
crop_shape: [76,76]
dataloader:
batch_size: 64
num_workers: 16
shuffle: True
pin_memory: True
persistent_workers: False
val_dataloader:
batch_size: 64
num_workers: 16
shuffle: False
pin_memory: True
persistent_workers: False
training:
device: "cuda:0"
seed: 42
debug: False
resume: True
# optimization
num_epochs: 3050
# training loop control
# in epochs
rollout_every: 50
checkpoint_every: 50
val_every: 1
sample_every: 5
# steps per epoch
max_train_steps: null
max_val_steps: null
# misc
tqdm_interval_sec: 1.0
logging:
project: diffusion_policy_debug
resume: True
mode: online
name: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
tags: ["${name}", "${task_name}", "${exp_name}"]
id: null
group: null
checkpoint:
topk:
monitor_key: test_mean_score
mode: max
k: 5
format_str: 'epoch={epoch:04d}-test_mean_score={test_mean_score:.3f}.ckpt'
save_last_ckpt: True
save_last_snapshot: False
multi_run:
run_dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
wandb_name_base: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
hydra:
job:
override_dirname: ${name}
run:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
sweep:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
subdir: ${hydra.job.num}
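`horizon: &horizon 10` together with `dataset_obs_steps: *horizon` above is a plain YAML anchor/alias pair (distinct from Hydra's `${...}` interpolation): both keys load as the same value.

```python
import yaml

# YAML anchor (&horizon) and alias (*horizon), as used in the config above.
doc = yaml.safe_load("horizon: &horizon 10\ndataset_obs_steps: *horizon\n")
print(doc)  # {'horizon': 10, 'dataset_obs_steps': 10}
```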


@@ -0,0 +1,92 @@
defaults:
- _self_
- task: pusht_lowdim
name: train_robomimic_lowdim
_target_: diffusion_policy.workspace.train_robomimic_lowdim_workspace.TrainRobomimicLowdimWorkspace
obs_dim: ${task.obs_dim}
action_dim: ${task.action_dim}
transition_dim: "${eval: ${task.obs_dim} + ${task.action_dim}}"
task_name: ${task.name}
exp_name: "default"
horizon: 10
n_obs_steps: 1
n_action_steps: 1
n_latency_steps: 0
past_action_visible: False
keypoint_visible_rate: 1.0
policy:
_target_: diffusion_policy.policy.robomimic_lowdim_policy.RobomimicLowdimPolicy
action_dim: ${action_dim}
obs_dim: ${obs_dim}
algo_name: bc_rnn
obs_type: low_dim
# oc.select resolver: key, default
task_name: ${oc.select:task.task_name,lift}
dataset_type: ${oc.select:task.dataset_type,ph}
dataloader:
batch_size: 256
num_workers: 1
shuffle: True
pin_memory: True
persistent_workers: False
val_dataloader:
batch_size: 256
num_workers: 1
shuffle: False
pin_memory: True
persistent_workers: False
training:
device: "cuda:0"
seed: 42
debug: False
resume: True
# optimization
num_epochs: 5000
# training loop control
# in epochs
rollout_every: 50
checkpoint_every: 50
val_every: 1
# steps per epoch
max_train_steps: null
max_val_steps: null
# misc
tqdm_interval_sec: 1.0
logging:
project: diffusion_policy_debug
resume: True
mode: online
name: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
tags: ["${name}", "${task_name}", "${exp_name}"]
id: null
group: null
checkpoint:
topk:
monitor_key: test_mean_score
mode: max
k: 5
format_str: 'epoch={epoch:04d}-test_mean_score={test_mean_score:.3f}.ckpt'
save_last_ckpt: True
save_last_snapshot: False
multi_run:
run_dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
wandb_name_base: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
hydra:
job:
override_dirname: ${name}
run:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
sweep:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
subdir: ${hydra.job.num}


@@ -0,0 +1,92 @@
defaults:
- _self_
- task: real_pusht_image
name: train_robomimic_image
_target_: diffusion_policy.workspace.train_robomimic_image_workspace.TrainRobomimicImageWorkspace
task_name: ${task.name}
shape_meta: ${task.shape_meta}
exp_name: "default"
horizon: &horizon 10
n_obs_steps: 1
n_action_steps: 1
n_latency_steps: 1
dataset_obs_steps: *horizon
past_action_visible: False
keypoint_visible_rate: 1.0
policy:
_target_: diffusion_policy.policy.robomimic_image_policy.RobomimicImagePolicy
shape_meta: ${shape_meta}
algo_name: bc_rnn
obs_type: image
# oc.select resolver: ${oc.select:key,default} selects the key, falling back to the default if missing
task_name: ${oc.select:task.task_name,tool_hang}
dataset_type: ${oc.select:task.dataset_type,ph}
crop_shape: [216, 288] # (crop_h, crop_w): 90% of the 240x320 input images
dataloader:
batch_size: 32
num_workers: 8
shuffle: True
pin_memory: True
persistent_workers: True
val_dataloader:
batch_size: 32
num_workers: 1
shuffle: False
pin_memory: True
persistent_workers: False
training:
device: "cuda:0"
seed: 42
debug: False
resume: True
# optimization
num_epochs: 1000
# training loop control
# in epochs
rollout_every: 50
checkpoint_every: 50
val_every: 1
sample_every: 5
# steps per epoch
max_train_steps: null
max_val_steps: null
# misc
tqdm_interval_sec: 1.0
logging:
project: diffusion_policy_debug
resume: True
mode: online
name: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
tags: ["${name}", "${task_name}", "${exp_name}"]
id: null
group: null
checkpoint:
topk:
monitor_key: train_loss
mode: min
k: 5
format_str: 'epoch={epoch:04d}-train_loss={train_loss:.3f}.ckpt'
save_last_ckpt: True
save_last_snapshot: False
multi_run:
run_dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
wandb_name_base: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
hydra:
job:
override_dirname: ${name}
run:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
sweep:
dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
subdir: ${hydra.job.num}
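The `&horizon` / `*horizon` pair above is a plain YAML anchor and alias: `dataset_obs_steps` receives the same value as `horizon` at parse time, before Hydra or OmegaConf interpolation runs. A minimal sketch with PyYAML:

```python
import yaml

doc = """
horizon: &horizon 10
dataset_obs_steps: *horizon
"""
cfg = yaml.safe_load(doc)  # the alias is resolved during parsing
print(cfg["dataset_obs_steps"])  # 10
```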


@@ -0,0 +1,51 @@
from typing import Dict
import torch
import torch.nn
from diffusion_policy.model.common.normalizer import LinearNormalizer
class BaseLowdimDataset(torch.utils.data.Dataset):
def get_validation_dataset(self) -> 'BaseLowdimDataset':
# return an empty dataset by default
return BaseLowdimDataset()
def get_normalizer(self, **kwargs) -> LinearNormalizer:
raise NotImplementedError()
def get_all_actions(self) -> torch.Tensor:
raise NotImplementedError()
def __len__(self) -> int:
return 0
def __getitem__(self, idx: int) -> Dict[str, torch.Tensor]:
"""
output:
obs: T, Do
action: T, Da
"""
raise NotImplementedError()
class BaseImageDataset(torch.utils.data.Dataset):
def get_validation_dataset(self) -> 'BaseImageDataset':
# return an empty dataset by default
return BaseImageDataset()
def get_normalizer(self, **kwargs) -> LinearNormalizer:
raise NotImplementedError()
def get_all_actions(self) -> torch.Tensor:
raise NotImplementedError()
def __len__(self) -> int:
return 0
def __getitem__(self, idx: int) -> Dict[str, torch.Tensor]:
"""
output:
obs:
key: T, *
action: T, Da
"""
raise NotImplementedError()


@@ -0,0 +1,126 @@
from typing import Dict
import torch
import numpy as np
import copy
from diffusion_policy.common.pytorch_util import dict_apply
from diffusion_policy.common.replay_buffer import ReplayBuffer
from diffusion_policy.common.sampler import SequenceSampler, get_val_mask
from diffusion_policy.model.common.normalizer import LinearNormalizer, SingleFieldLinearNormalizer
from diffusion_policy.dataset.base_dataset import BaseLowdimDataset
class BlockPushLowdimDataset(BaseLowdimDataset):
def __init__(self,
zarr_path,
horizon=1,
pad_before=0,
pad_after=0,
obs_key='obs',
action_key='action',
obs_eef_target=True,
use_manual_normalizer=False,
seed=42,
val_ratio=0.0
):
super().__init__()
self.replay_buffer = ReplayBuffer.copy_from_path(
zarr_path, keys=[obs_key, action_key])
val_mask = get_val_mask(
n_episodes=self.replay_buffer.n_episodes,
val_ratio=val_ratio,
seed=seed)
train_mask = ~val_mask
self.sampler = SequenceSampler(
replay_buffer=self.replay_buffer,
sequence_length=horizon,
pad_before=pad_before,
pad_after=pad_after,
episode_mask=train_mask)
self.obs_key = obs_key
self.action_key = action_key
self.obs_eef_target = obs_eef_target
self.use_manual_normalizer = use_manual_normalizer
self.train_mask = train_mask
self.horizon = horizon
self.pad_before = pad_before
self.pad_after = pad_after
def get_validation_dataset(self):
val_set = copy.copy(self)
val_set.sampler = SequenceSampler(
replay_buffer=self.replay_buffer,
sequence_length=self.horizon,
pad_before=self.pad_before,
pad_after=self.pad_after,
episode_mask=~self.train_mask
)
val_set.train_mask = ~self.train_mask
return val_set
def get_normalizer(self, mode='limits', **kwargs):
data = self._sample_to_data(self.replay_buffer)
normalizer = LinearNormalizer()
if not self.use_manual_normalizer:
normalizer.fit(data=data, last_n_dims=1, mode=mode, **kwargs)
else:
x = data['obs']
stat = {
'max': np.max(x, axis=0),
'min': np.min(x, axis=0),
'mean': np.mean(x, axis=0),
'std': np.std(x, axis=0)
}
is_x = np.zeros(stat['max'].shape, dtype=bool)
is_y = np.zeros_like(is_x)
is_x[[0,3,6,8,10,13]] = True
is_y[[1,4,7,9,11,14]] = True
is_rot = ~(is_x|is_y)
def normalizer_with_masks(stat, masks):
global_scale = np.ones_like(stat['max'])
global_offset = np.zeros_like(stat['max'])
for mask in masks:
output_max = 1
output_min = -1
input_max = stat['max'][mask].max()
input_min = stat['min'][mask].min()
input_range = input_max - input_min
scale = (output_max - output_min) / input_range
offset = output_min - scale * input_min
global_scale[mask] = scale
global_offset[mask] = offset
return SingleFieldLinearNormalizer.create_manual(
scale=global_scale,
offset=global_offset,
input_stats_dict=stat
)
normalizer['obs'] = normalizer_with_masks(stat, [is_x, is_y, is_rot])
normalizer['action'] = SingleFieldLinearNormalizer.create_fit(
data['action'], last_n_dims=1, mode=mode, **kwargs)
return normalizer
def get_all_actions(self) -> torch.Tensor:
return torch.from_numpy(self.replay_buffer['action'])
def __len__(self) -> int:
return len(self.sampler)
def _sample_to_data(self, sample):
obs = sample[self.obs_key] # T, D_o
if not self.obs_eef_target:
obs[:,8:10] = 0
data = {
'obs': obs,
'action': sample[self.action_key], # T, D_a
}
return data
def __getitem__(self, idx: int) -> Dict[str, torch.Tensor]:
sample = self.sampler.sample_sequence(idx)
data = self._sample_to_data(sample)
torch_data = dict_apply(data, torch.from_numpy)
return torch_data
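The manual normalizer above maps each masked group of dimensions into [-1, 1] with a shared scale and offset. The arithmetic it relies on can be sketched with plain NumPy (toy data, not the block-push statistics):

```python
import numpy as np

x = np.array([2.0, 5.0, 8.0])            # toy input covering [2, 8]
output_min, output_max = -1.0, 1.0
input_min, input_max = x.min(), x.max()
scale = (output_max - output_min) / (input_max - input_min)
offset = output_min - scale * input_min  # anchors input_min at output_min
print(scale * x + offset)  # [-1.  0.  1.]
```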


@@ -0,0 +1,91 @@
from typing import Dict
import torch
import numpy as np
import copy
import pathlib
from diffusion_policy.common.pytorch_util import dict_apply
from diffusion_policy.common.replay_buffer import ReplayBuffer
from diffusion_policy.common.sampler import SequenceSampler, get_val_mask
from diffusion_policy.model.common.normalizer import LinearNormalizer, SingleFieldLinearNormalizer
from diffusion_policy.dataset.base_dataset import BaseLowdimDataset
class KitchenLowdimDataset(BaseLowdimDataset):
def __init__(self,
dataset_dir,
horizon=1,
pad_before=0,
pad_after=0,
seed=42,
val_ratio=0.0
):
super().__init__()
data_directory = pathlib.Path(dataset_dir)
observations = np.load(data_directory / "observations_seq.npy")
actions = np.load(data_directory / "actions_seq.npy")
masks = np.load(data_directory / "existence_mask.npy")
self.replay_buffer = ReplayBuffer.create_empty_numpy()
for i in range(len(masks)):
eps_len = int(masks[i].sum())
obs = observations[i,:eps_len].astype(np.float32)
action = actions[i,:eps_len].astype(np.float32)
data = {
'obs': obs,
'action': action
}
self.replay_buffer.add_episode(data)
val_mask = get_val_mask(
n_episodes=self.replay_buffer.n_episodes,
val_ratio=val_ratio,
seed=seed)
train_mask = ~val_mask
self.sampler = SequenceSampler(
replay_buffer=self.replay_buffer,
sequence_length=horizon,
pad_before=pad_before,
pad_after=pad_after,
episode_mask=train_mask)
self.train_mask = train_mask
self.horizon = horizon
self.pad_before = pad_before
self.pad_after = pad_after
def get_validation_dataset(self):
val_set = copy.copy(self)
val_set.sampler = SequenceSampler(
replay_buffer=self.replay_buffer,
sequence_length=self.horizon,
pad_before=self.pad_before,
pad_after=self.pad_after,
episode_mask=~self.train_mask
)
val_set.train_mask = ~self.train_mask
return val_set
def get_normalizer(self, mode='limits', **kwargs):
data = {
'obs': self.replay_buffer['obs'],
'action': self.replay_buffer['action']
}
if 'range_eps' not in kwargs:
# to prevent blowing up dims that barely change
kwargs['range_eps'] = 5e-2
normalizer = LinearNormalizer()
normalizer.fit(data=data, last_n_dims=1, mode=mode, **kwargs)
return normalizer
def get_all_actions(self) -> torch.Tensor:
return torch.from_numpy(self.replay_buffer['action'])
def __len__(self) -> int:
return len(self.sampler)
def __getitem__(self, idx: int) -> Dict[str, torch.Tensor]:
sample = self.sampler.sample_sequence(idx)
data = sample
torch_data = dict_apply(data, torch.from_numpy)
return torch_data
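The `range_eps` override in `get_normalizer` guards against near-constant dimensions: with a `limits` normalizer, a dimension whose observed range is tiny would otherwise get a huge scale. A toy illustration of the effect (the exact clamping inside `LinearNormalizer.fit` may differ in detail):

```python
import numpy as np

x = np.array([[0.0, 1.000],
              [10.0, 1.001]])              # second dim barely changes
span = x.max(axis=0) - x.min(axis=0)       # [10.0, ~0.001]
naive_scale = 2.0 / span                   # second dim blows up to ~2000
safe_scale = 2.0 / np.maximum(span, 5e-2)  # range_eps=5e-2 caps it at ~40
print(naive_scale, safe_scale)
```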


@@ -0,0 +1,112 @@
from typing import Dict
import torch
import numpy as np
import copy
import pathlib
from tqdm import tqdm
from diffusion_policy.common.pytorch_util import dict_apply
from diffusion_policy.common.replay_buffer import ReplayBuffer
from diffusion_policy.common.sampler import SequenceSampler, get_val_mask
from diffusion_policy.model.common.normalizer import LinearNormalizer, SingleFieldLinearNormalizer
from diffusion_policy.dataset.base_dataset import BaseLowdimDataset
from diffusion_policy.env.kitchen.kitchen_util import parse_mjl_logs
class KitchenMjlLowdimDataset(BaseLowdimDataset):
def __init__(self,
dataset_dir,
horizon=1,
pad_before=0,
pad_after=0,
abs_action=True,
robot_noise_ratio=0.0,
seed=42,
val_ratio=0.0
):
super().__init__()
if not abs_action:
raise NotImplementedError()
robot_pos_noise_amp = np.array([0.1 , 0.1 , 0.1 , 0.1 , 0.1 , 0.1 , 0.1 , 0.1 ,
0.1 , 0.005 , 0.005 , 0.0005, 0.0005, 0.0005, 0.0005, 0.0005,
0.0005, 0.005 , 0.005 , 0.005 , 0.1 , 0.1 , 0.1 , 0.005 ,
0.005 , 0.005 , 0.1 , 0.1 , 0.1 , 0.005 ], dtype=np.float32)
rng = np.random.default_rng(seed=seed)
data_directory = pathlib.Path(dataset_dir)
self.replay_buffer = ReplayBuffer.create_empty_numpy()
for i, mjl_path in enumerate(tqdm(list(data_directory.glob('*/*.mjl')))):
try:
data = parse_mjl_logs(str(mjl_path.absolute()), skipamount=40)
qpos = data['qpos'].astype(np.float32)
obs = np.concatenate([
qpos[:,:9],
qpos[:,-21:],
np.zeros((len(qpos),30),dtype=np.float32)
], axis=-1)
if robot_noise_ratio > 0:
# add observation noise to match real robot
noise = robot_noise_ratio * robot_pos_noise_amp * rng.uniform(
low=-1., high=1., size=(obs.shape[0], 30))
obs[:,:30] += noise
episode = {
'obs': obs,
'action': data['ctrl'].astype(np.float32)
}
self.replay_buffer.add_episode(episode)
except Exception as e:
print(i, e)
val_mask = get_val_mask(
n_episodes=self.replay_buffer.n_episodes,
val_ratio=val_ratio,
seed=seed)
train_mask = ~val_mask
self.sampler = SequenceSampler(
replay_buffer=self.replay_buffer,
sequence_length=horizon,
pad_before=pad_before,
pad_after=pad_after,
episode_mask=train_mask)
self.train_mask = train_mask
self.horizon = horizon
self.pad_before = pad_before
self.pad_after = pad_after
def get_validation_dataset(self):
val_set = copy.copy(self)
val_set.sampler = SequenceSampler(
replay_buffer=self.replay_buffer,
sequence_length=self.horizon,
pad_before=self.pad_before,
pad_after=self.pad_after,
episode_mask=~self.train_mask
)
val_set.train_mask = ~self.train_mask
return val_set
def get_normalizer(self, mode='limits', **kwargs):
data = {
'obs': self.replay_buffer['obs'],
'action': self.replay_buffer['action']
}
if 'range_eps' not in kwargs:
# to prevent blowing up dims that barely change
kwargs['range_eps'] = 5e-2
normalizer = LinearNormalizer()
normalizer.fit(data=data, last_n_dims=1, mode=mode, **kwargs)
return normalizer
def get_all_actions(self) -> torch.Tensor:
return torch.from_numpy(self.replay_buffer['action'])
def __len__(self) -> int:
return len(self.sampler)
def __getitem__(self, idx: int) -> Dict[str, torch.Tensor]:
sample = self.sampler.sample_sequence(idx)
data = sample
torch_data = dict_apply(data, torch.from_numpy)
return torch_data
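The observation-noise term in `__init__` above scales a uniform sample per dimension, so each joint receives bounded noise of magnitude at most `robot_noise_ratio * robot_pos_noise_amp[j]`. A toy sketch of the same broadcasting (two dimensions instead of thirty):

```python
import numpy as np

rng = np.random.default_rng(seed=42)
robot_noise_ratio = 0.1
amp = np.array([0.1, 0.005], dtype=np.float32)  # per-dimension amplitude
obs = np.zeros((100, 2), dtype=np.float32)
noise = robot_noise_ratio * amp * rng.uniform(low=-1., high=1., size=obs.shape)
obs += noise  # per-dimension bound: |noise[:, j]| <= ratio * amp[j]
print(np.abs(noise).max(axis=0) <= robot_noise_ratio * amp)  # [ True  True]
```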


@@ -0,0 +1,97 @@
from typing import Dict
import torch
import numpy as np
import copy
from diffusion_policy.common.pytorch_util import dict_apply
from diffusion_policy.common.replay_buffer import ReplayBuffer
from diffusion_policy.common.sampler import (
SequenceSampler, get_val_mask, downsample_mask)
from diffusion_policy.model.common.normalizer import LinearNormalizer
from diffusion_policy.dataset.base_dataset import BaseLowdimDataset
class PushTLowdimDataset(BaseLowdimDataset):
def __init__(self,
zarr_path,
horizon=1,
pad_before=0,
pad_after=0,
obs_key='keypoint',
state_key='state',
action_key='action',
seed=42,
val_ratio=0.0,
max_train_episodes=None
):
super().__init__()
self.replay_buffer = ReplayBuffer.copy_from_path(
zarr_path, keys=[obs_key, state_key, action_key])
val_mask = get_val_mask(
n_episodes=self.replay_buffer.n_episodes,
val_ratio=val_ratio,
seed=seed)
train_mask = ~val_mask
train_mask = downsample_mask(
mask=train_mask,
max_n=max_train_episodes,
seed=seed)
self.sampler = SequenceSampler(
replay_buffer=self.replay_buffer,
sequence_length=horizon,
pad_before=pad_before,
pad_after=pad_after,
episode_mask=train_mask
)
self.obs_key = obs_key
self.state_key = state_key
self.action_key = action_key
self.train_mask = train_mask
self.horizon = horizon
self.pad_before = pad_before
self.pad_after = pad_after
def get_validation_dataset(self):
val_set = copy.copy(self)
val_set.sampler = SequenceSampler(
replay_buffer=self.replay_buffer,
sequence_length=self.horizon,
pad_before=self.pad_before,
pad_after=self.pad_after,
episode_mask=~self.train_mask
)
val_set.train_mask = ~self.train_mask
return val_set
def get_normalizer(self, mode='limits', **kwargs):
data = self._sample_to_data(self.replay_buffer)
normalizer = LinearNormalizer()
normalizer.fit(data=data, last_n_dims=1, mode=mode, **kwargs)
return normalizer
def get_all_actions(self) -> torch.Tensor:
return torch.from_numpy(self.replay_buffer[self.action_key])
def __len__(self) -> int:
return len(self.sampler)
def _sample_to_data(self, sample):
keypoint = sample[self.obs_key]
state = sample[self.state_key]
agent_pos = state[:,:2]
obs = np.concatenate([
keypoint.reshape(keypoint.shape[0], -1),
agent_pos], axis=-1)
data = {
'obs': obs, # T, D_o
'action': sample[self.action_key], # T, D_a
}
return data
def __getitem__(self, idx: int) -> Dict[str, torch.Tensor]:
sample = self.sampler.sample_sequence(idx)
data = self._sample_to_data(sample)
torch_data = dict_apply(data, torch.from_numpy)
return torch_data
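`_sample_to_data` above flattens the keypoints per timestep and appends the 2-D agent position, so the observation width is `n_keypoints * 2 + 2`; with 9 keypoints that is 20, matching the PushT lowdim `obs_dim`. A shape-only sketch (the state layout beyond the first two entries is assumed):

```python
import numpy as np

T = 4
keypoint = np.zeros((T, 9, 2))   # T steps, 9 keypoints, (x, y) each
state = np.zeros((T, 5))         # agent pos (2) + block pose (3), assumed layout
agent_pos = state[:, :2]
obs = np.concatenate([keypoint.reshape(T, -1), agent_pos], axis=-1)
print(obs.shape)  # (4, 20)
```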


@@ -0,0 +1,102 @@
from typing import Dict
import torch
import numpy as np
import copy
from diffusion_policy.common.pytorch_util import dict_apply
from diffusion_policy.common.replay_buffer import ReplayBuffer
from diffusion_policy.common.sampler import (
SequenceSampler, get_val_mask, downsample_mask)
from diffusion_policy.model.common.normalizer import LinearNormalizer
from diffusion_policy.dataset.base_dataset import BaseImageDataset
from diffusion_policy.common.normalize_util import get_image_range_normalizer
class PushTImageDataset(BaseImageDataset):
def __init__(self,
zarr_path,
horizon=1,
pad_before=0,
pad_after=0,
seed=42,
val_ratio=0.0,
max_train_episodes=None
):
super().__init__()
self.replay_buffer = ReplayBuffer.copy_from_path(
zarr_path, keys=['img', 'state', 'action'])
val_mask = get_val_mask(
n_episodes=self.replay_buffer.n_episodes,
val_ratio=val_ratio,
seed=seed)
train_mask = ~val_mask
train_mask = downsample_mask(
mask=train_mask,
max_n=max_train_episodes,
seed=seed)
self.sampler = SequenceSampler(
replay_buffer=self.replay_buffer,
sequence_length=horizon,
pad_before=pad_before,
pad_after=pad_after,
episode_mask=train_mask)
self.train_mask = train_mask
self.horizon = horizon
self.pad_before = pad_before
self.pad_after = pad_after
def get_validation_dataset(self):
val_set = copy.copy(self)
val_set.sampler = SequenceSampler(
replay_buffer=self.replay_buffer,
sequence_length=self.horizon,
pad_before=self.pad_before,
pad_after=self.pad_after,
episode_mask=~self.train_mask
)
val_set.train_mask = ~self.train_mask
return val_set
def get_normalizer(self, mode='limits', **kwargs):
data = {
'action': self.replay_buffer['action'],
'agent_pos': self.replay_buffer['state'][...,:2]
}
normalizer = LinearNormalizer()
normalizer.fit(data=data, last_n_dims=1, mode=mode, **kwargs)
normalizer['image'] = get_image_range_normalizer()
return normalizer
def __len__(self) -> int:
return len(self.sampler)
def _sample_to_data(self, sample):
agent_pos = sample['state'][:,:2].astype(np.float32) # state layout: agent_pos (2) + block_pose (3)
image = np.moveaxis(sample['img'],-1,1)/255
data = {
'obs': {
'image': image, # T, 3, 96, 96
'agent_pos': agent_pos, # T, 2
},
'action': sample['action'].astype(np.float32) # T, 2
}
return data
def __getitem__(self, idx: int) -> Dict[str, torch.Tensor]:
sample = self.sampler.sample_sequence(idx)
data = self._sample_to_data(sample)
torch_data = dict_apply(data, torch.from_numpy)
return torch_data
def test():
import os
zarr_path = os.path.expanduser('~/dev/diffusion_policy/data/pusht/pusht_cchi_v7_replay.zarr')
dataset = PushTImageDataset(zarr_path, horizon=16)
# from matplotlib import pyplot as plt
# normalizer = dataset.get_normalizer()
# nactions = normalizer['action'].normalize(dataset.replay_buffer['action'])
# diff = np.diff(nactions, axis=0)
# dists = np.linalg.norm(np.diff(nactions, axis=0), axis=-1)
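The image path in `_sample_to_data` above moves the channel axis to position 1 and rescales uint8 pixels into [0, 1]. A minimal shape/dtype sketch of that transform:

```python
import numpy as np

img = np.full((2, 96, 96, 3), 255, dtype=np.uint8)  # T,H,W,C as stored in zarr
out = np.moveaxis(img, -1, 1) / 255                 # T,C,H,W float in [0, 1]
print(out.shape, out.max())  # (2, 3, 96, 96) 1.0
```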


@@ -0,0 +1,291 @@
from typing import Dict, List
import torch
import numpy as np
import zarr
import os
import shutil
from filelock import FileLock
from threadpoolctl import threadpool_limits
from omegaconf import OmegaConf
import cv2
import json
import hashlib
import copy
from diffusion_policy.common.pytorch_util import dict_apply
from diffusion_policy.dataset.base_dataset import BaseImageDataset
from diffusion_policy.model.common.normalizer import LinearNormalizer, SingleFieldLinearNormalizer
from diffusion_policy.common.replay_buffer import ReplayBuffer
from diffusion_policy.common.sampler import (
SequenceSampler, get_val_mask, downsample_mask)
from diffusion_policy.real_world.real_data_conversion import real_data_to_replay_buffer
from diffusion_policy.common.normalize_util import (
get_range_normalizer_from_stat,
get_image_range_normalizer,
get_identity_normalizer_from_stat,
array_to_stats
)
class RealPushTImageDataset(BaseImageDataset):
def __init__(self,
shape_meta: dict,
dataset_path: str,
horizon=1,
pad_before=0,
pad_after=0,
n_obs_steps=None,
n_latency_steps=0,
use_cache=False,
seed=42,
val_ratio=0.0,
max_train_episodes=None,
delta_action=False,
):
assert os.path.isdir(dataset_path)
replay_buffer = None
if use_cache:
# fingerprint shape_meta
shape_meta_json = json.dumps(OmegaConf.to_container(shape_meta), sort_keys=True)
shape_meta_hash = hashlib.md5(shape_meta_json.encode('utf-8')).hexdigest()
cache_zarr_path = os.path.join(dataset_path, shape_meta_hash + '.zarr.zip')
cache_lock_path = cache_zarr_path + '.lock'
print('Acquiring lock on cache.')
with FileLock(cache_lock_path):
if not os.path.exists(cache_zarr_path):
# cache does not exist yet
try:
print('Cache does not exist. Creating!')
replay_buffer = _get_replay_buffer(
dataset_path=dataset_path,
shape_meta=shape_meta,
store=zarr.MemoryStore()
)
print('Saving cache to disk.')
with zarr.ZipStore(cache_zarr_path) as zip_store:
replay_buffer.save_to_store(
store=zip_store
)
except Exception as e:
if os.path.exists(cache_zarr_path):
os.remove(cache_zarr_path) # the cache is a single zip file, not a directory
raise e
else:
print('Loading cached ReplayBuffer from Disk.')
with zarr.ZipStore(cache_zarr_path, mode='r') as zip_store:
replay_buffer = ReplayBuffer.copy_from_store(
src_store=zip_store, store=zarr.MemoryStore())
print('Loaded!')
else:
replay_buffer = _get_replay_buffer(
dataset_path=dataset_path,
shape_meta=shape_meta,
store=zarr.MemoryStore()
)
if delta_action:
# replace action as relative to previous frame
actions = replay_buffer['action'][:]
# support positions only at this time
assert actions.shape[1] <= 3
actions_diff = np.zeros_like(actions)
episode_ends = replay_buffer.episode_ends[:]
for i in range(len(episode_ends)):
start = 0
if i > 0:
start = episode_ends[i-1]
end = episode_ends[i]
# delta action is the difference between the previous desired position and the current one
# it should be scheduled at the previous timestep for the current timestep
# to ensure consistency with positional mode
actions_diff[start+1:end] = np.diff(actions[start:end], axis=0)
replay_buffer['action'][:] = actions_diff
rgb_keys = list()
lowdim_keys = list()
obs_shape_meta = shape_meta['obs']
for key, attr in obs_shape_meta.items():
type = attr.get('type', 'low_dim')
if type == 'rgb':
rgb_keys.append(key)
elif type == 'low_dim':
lowdim_keys.append(key)
key_first_k = dict()
if n_obs_steps is not None:
# only take first k obs from images
for key in rgb_keys + lowdim_keys:
key_first_k[key] = n_obs_steps
val_mask = get_val_mask(
n_episodes=replay_buffer.n_episodes,
val_ratio=val_ratio,
seed=seed)
train_mask = ~val_mask
train_mask = downsample_mask(
mask=train_mask,
max_n=max_train_episodes,
seed=seed)
sampler = SequenceSampler(
replay_buffer=replay_buffer,
sequence_length=horizon+n_latency_steps,
pad_before=pad_before,
pad_after=pad_after,
episode_mask=train_mask,
key_first_k=key_first_k)
self.replay_buffer = replay_buffer
self.sampler = sampler
self.shape_meta = shape_meta
self.rgb_keys = rgb_keys
self.lowdim_keys = lowdim_keys
self.n_obs_steps = n_obs_steps
self.val_mask = val_mask
self.horizon = horizon
self.n_latency_steps = n_latency_steps
self.pad_before = pad_before
self.pad_after = pad_after
def get_validation_dataset(self):
val_set = copy.copy(self)
val_set.sampler = SequenceSampler(
replay_buffer=self.replay_buffer,
sequence_length=self.horizon+self.n_latency_steps,
pad_before=self.pad_before,
pad_after=self.pad_after,
episode_mask=self.val_mask
)
val_set.val_mask = ~self.val_mask
return val_set
def get_normalizer(self, **kwargs) -> LinearNormalizer:
normalizer = LinearNormalizer()
# action
normalizer['action'] = SingleFieldLinearNormalizer.create_fit(
self.replay_buffer['action'])
# obs
for key in self.lowdim_keys:
normalizer[key] = SingleFieldLinearNormalizer.create_fit(
self.replay_buffer[key])
# image
for key in self.rgb_keys:
normalizer[key] = get_image_range_normalizer()
return normalizer
def get_all_actions(self) -> torch.Tensor:
return torch.from_numpy(self.replay_buffer['action'])
def __len__(self):
return len(self.sampler)
def __getitem__(self, idx: int) -> Dict[str, torch.Tensor]:
threadpool_limits(1)
data = self.sampler.sample_sequence(idx)
# to save RAM, only return first n_obs_steps of OBS
# since the rest will be discarded anyway.
# when self.n_obs_steps is None
# this slice does nothing (takes all)
T_slice = slice(self.n_obs_steps)
obs_dict = dict()
for key in self.rgb_keys:
# move channel last to channel first
# T,H,W,C
# convert uint8 image to float32
obs_dict[key] = np.moveaxis(data[key][T_slice],-1,1
).astype(np.float32) / 255.
# T,C,H,W
# save ram
del data[key]
for key in self.lowdim_keys:
obs_dict[key] = data[key][T_slice].astype(np.float32)
# save ram
del data[key]
action = data['action'].astype(np.float32)
# handle latency by dropping first n_latency_steps action
# observations are already taken care of by T_slice
if self.n_latency_steps > 0:
action = action[self.n_latency_steps:]
torch_data = {
'obs': dict_apply(obs_dict, torch.from_numpy),
'action': torch.from_numpy(action)
}
return torch_data
def zarr_resize_index_last_dim(zarr_arr, idxs):
actions = zarr_arr[:]
actions = actions[...,idxs]
zarr_arr.resize(zarr_arr.shape[:-1] + (len(idxs),))
zarr_arr[:] = actions
return zarr_arr
def _get_replay_buffer(dataset_path, shape_meta, store):
# parse shape meta
rgb_keys = list()
lowdim_keys = list()
out_resolutions = dict()
lowdim_shapes = dict()
obs_shape_meta = shape_meta['obs']
for key, attr in obs_shape_meta.items():
type = attr.get('type', 'low_dim')
shape = tuple(attr.get('shape'))
if type == 'rgb':
rgb_keys.append(key)
c,h,w = shape
out_resolutions[key] = (w,h)
elif type == 'low_dim':
lowdim_keys.append(key)
lowdim_shapes[key] = tuple(shape)
if 'pose' in key:
assert tuple(shape) in [(2,),(6,)]
action_shape = tuple(shape_meta['action']['shape'])
assert action_shape in [(2,),(6,)]
# load data
cv2.setNumThreads(1)
with threadpool_limits(1):
replay_buffer = real_data_to_replay_buffer(
dataset_path=dataset_path,
out_store=store,
out_resolutions=out_resolutions,
lowdim_keys=lowdim_keys + ['action'],
image_keys=rgb_keys
)
# transform lowdim dimensions
if action_shape == (2,):
# 2D action space, only controls X and Y
zarr_arr = replay_buffer['action']
zarr_resize_index_last_dim(zarr_arr, idxs=[0,1])
for key, shape in lowdim_shapes.items():
if 'pose' in key and shape == (2,):
# only take X and Y
zarr_arr = replay_buffer[key]
zarr_resize_index_last_dim(zarr_arr, idxs=[0,1])
return replay_buffer
def test():
import hydra
from omegaconf import OmegaConf
OmegaConf.register_new_resolver("eval", eval, replace=True)
with hydra.initialize('../diffusion_policy/config'):
cfg = hydra.compose('train_robomimic_real_image_workspace')
OmegaConf.resolve(cfg)
dataset = hydra.utils.instantiate(cfg.task.dataset)
from matplotlib import pyplot as plt
normalizer = dataset.get_normalizer()
nactions = normalizer['action'].normalize(dataset.replay_buffer['action'][:])
diff = np.diff(nactions, axis=0)
dists = np.linalg.norm(np.diff(nactions, axis=0), axis=-1)
_ = plt.hist(dists, bins=100); plt.title('real action velocity')
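The `delta_action` branch in `__init__` above converts absolute actions to per-step differences episode by episode, leaving the first step of each episode at zero. A toy sketch of the same bookkeeping (hypothetical two-episode data):

```python
import numpy as np

actions = np.array([[0., 0.], [1., 0.], [1., 2.], [4., 2.]])
episode_ends = np.array([2, 4])        # episodes are [0:2) and [2:4)
deltas = np.zeros_like(actions)
starts = np.concatenate([[0], episode_ends[:-1]])
for start, end in zip(starts, episode_ends):
    # first step of each episode stays zero; the rest are frame-to-frame diffs
    deltas[start + 1:end] = np.diff(actions[start:end], axis=0)
print(deltas)  # rows: [0 0], [1 0], [0 0], [3 0]
```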


@@ -0,0 +1,373 @@
from typing import Dict, List
import torch
import numpy as np
import h5py
from tqdm import tqdm
import zarr
import os
import shutil
import copy
import json
import hashlib
from filelock import FileLock
from threadpoolctl import threadpool_limits
import concurrent.futures
import multiprocessing
from omegaconf import OmegaConf
from diffusion_policy.common.pytorch_util import dict_apply
from diffusion_policy.dataset.base_dataset import BaseImageDataset
from diffusion_policy.model.common.normalizer import LinearNormalizer, SingleFieldLinearNormalizer
from diffusion_policy.model.common.rotation_transformer import RotationTransformer
from diffusion_policy.codecs.imagecodecs_numcodecs import register_codecs, Jpeg2k
from diffusion_policy.common.replay_buffer import ReplayBuffer
from diffusion_policy.common.sampler import SequenceSampler, get_val_mask
from diffusion_policy.common.normalize_util import (
robomimic_abs_action_only_normalizer_from_stat,
robomimic_abs_action_only_dual_arm_normalizer_from_stat,
get_range_normalizer_from_stat,
get_image_range_normalizer,
get_identity_normalizer_from_stat,
array_to_stats
)
register_codecs()
class RobomimicReplayImageDataset(BaseImageDataset):
def __init__(self,
shape_meta: dict,
dataset_path: str,
horizon=1,
pad_before=0,
pad_after=0,
n_obs_steps=None,
abs_action=False,
rotation_rep='rotation_6d', # ignored when abs_action=False
use_legacy_normalizer=False,
use_cache=False,
seed=42,
val_ratio=0.0
):
rotation_transformer = RotationTransformer(
from_rep='axis_angle', to_rep=rotation_rep)
replay_buffer = None
if use_cache:
cache_zarr_path = dataset_path + '.zarr.zip'
cache_lock_path = cache_zarr_path + '.lock'
print('Acquiring lock on cache.')
with FileLock(cache_lock_path):
if not os.path.exists(cache_zarr_path):
# cache does not exist yet
try:
print('Cache does not exist. Creating!')
# store = zarr.DirectoryStore(cache_zarr_path)
replay_buffer = _convert_robomimic_to_replay(
store=zarr.MemoryStore(),
shape_meta=shape_meta,
dataset_path=dataset_path,
abs_action=abs_action,
rotation_transformer=rotation_transformer)
print('Saving cache to disk.')
with zarr.ZipStore(cache_zarr_path) as zip_store:
replay_buffer.save_to_store(
store=zip_store
)
except Exception as e:
if os.path.exists(cache_zarr_path):
os.remove(cache_zarr_path) # the cache is a single zip file, not a directory
raise e
else:
print('Loading cached ReplayBuffer from Disk.')
with zarr.ZipStore(cache_zarr_path, mode='r') as zip_store:
replay_buffer = ReplayBuffer.copy_from_store(
src_store=zip_store, store=zarr.MemoryStore())
print('Loaded!')
else:
replay_buffer = _convert_robomimic_to_replay(
store=zarr.MemoryStore(),
shape_meta=shape_meta,
dataset_path=dataset_path,
abs_action=abs_action,
rotation_transformer=rotation_transformer)
rgb_keys = list()
lowdim_keys = list()
obs_shape_meta = shape_meta['obs']
for key, attr in obs_shape_meta.items():
type = attr.get('type', 'low_dim')
if type == 'rgb':
rgb_keys.append(key)
elif type == 'low_dim':
lowdim_keys.append(key)
# for key in rgb_keys:
# replay_buffer[key].compressor.numthreads=1
key_first_k = dict()
if n_obs_steps is not None:
# only take first k obs from images
for key in rgb_keys + lowdim_keys:
key_first_k[key] = n_obs_steps
val_mask = get_val_mask(
n_episodes=replay_buffer.n_episodes,
val_ratio=val_ratio,
seed=seed)
train_mask = ~val_mask
sampler = SequenceSampler(
replay_buffer=replay_buffer,
sequence_length=horizon,
pad_before=pad_before,
pad_after=pad_after,
episode_mask=train_mask,
key_first_k=key_first_k)
self.replay_buffer = replay_buffer
self.sampler = sampler
self.shape_meta = shape_meta
self.rgb_keys = rgb_keys
self.lowdim_keys = lowdim_keys
self.abs_action = abs_action
self.n_obs_steps = n_obs_steps
self.train_mask = train_mask
self.horizon = horizon
self.pad_before = pad_before
self.pad_after = pad_after
self.use_legacy_normalizer = use_legacy_normalizer
def get_validation_dataset(self):
val_set = copy.copy(self)
val_set.sampler = SequenceSampler(
replay_buffer=self.replay_buffer,
sequence_length=self.horizon,
pad_before=self.pad_before,
pad_after=self.pad_after,
episode_mask=~self.train_mask
)
val_set.train_mask = ~self.train_mask
return val_set
def get_normalizer(self, **kwargs) -> LinearNormalizer:
normalizer = LinearNormalizer()
# action
stat = array_to_stats(self.replay_buffer['action'])
if self.abs_action:
if stat['mean'].shape[-1] > 10:
# dual arm
this_normalizer = robomimic_abs_action_only_dual_arm_normalizer_from_stat(stat)
else:
this_normalizer = robomimic_abs_action_only_normalizer_from_stat(stat)
if self.use_legacy_normalizer:
this_normalizer = normalizer_from_stat(stat)
else:
# already normalized
this_normalizer = get_identity_normalizer_from_stat(stat)
normalizer['action'] = this_normalizer
# obs
for key in self.lowdim_keys:
stat = array_to_stats(self.replay_buffer[key])
if key.endswith('pos'):
this_normalizer = get_range_normalizer_from_stat(stat)
elif key.endswith('quat'):
# quaternion is in [-1,1] already
this_normalizer = get_identity_normalizer_from_stat(stat)
elif key.endswith('qpos'):
this_normalizer = get_range_normalizer_from_stat(stat)
else:
raise RuntimeError(f'unsupported lowdim key: {key}')
normalizer[key] = this_normalizer
# image
for key in self.rgb_keys:
normalizer[key] = get_image_range_normalizer()
return normalizer
def get_all_actions(self) -> torch.Tensor:
return torch.from_numpy(self.replay_buffer['action'])
def __len__(self):
return len(self.sampler)
def __getitem__(self, idx: int) -> Dict[str, torch.Tensor]:
threadpool_limits(1)
data = self.sampler.sample_sequence(idx)
# to save RAM, only return first n_obs_steps of OBS
# since the rest will be discarded anyway.
# when self.n_obs_steps is None
# this slice does nothing (takes all)
T_slice = slice(self.n_obs_steps)
obs_dict = dict()
for key in self.rgb_keys:
# move channel last to channel first
# T,H,W,C
# convert uint8 image to float32
obs_dict[key] = np.moveaxis(data[key][T_slice],-1,1
).astype(np.float32) / 255.
# T,C,H,W
del data[key]
for key in self.lowdim_keys:
obs_dict[key] = data[key][T_slice].astype(np.float32)
del data[key]
torch_data = {
'obs': dict_apply(obs_dict, torch.from_numpy),
'action': torch.from_numpy(data['action'].astype(np.float32))
}
return torch_data
def _convert_actions(raw_actions, abs_action, rotation_transformer):
actions = raw_actions
if abs_action:
is_dual_arm = False
if raw_actions.shape[-1] == 14:
# dual arm
raw_actions = raw_actions.reshape(-1,2,7)
is_dual_arm = True
pos = raw_actions[...,:3]
rot = raw_actions[...,3:6]
gripper = raw_actions[...,6:]
rot = rotation_transformer.forward(rot)
raw_actions = np.concatenate([
pos, rot, gripper
], axis=-1).astype(np.float32)
if is_dual_arm:
raw_actions = raw_actions.reshape(-1,20)
actions = raw_actions
return actions
def _convert_robomimic_to_replay(store, shape_meta, dataset_path, abs_action, rotation_transformer,
n_workers=None, max_inflight_tasks=None):
if n_workers is None:
n_workers = multiprocessing.cpu_count()
if max_inflight_tasks is None:
max_inflight_tasks = n_workers * 5
# parse shape_meta
rgb_keys = list()
lowdim_keys = list()
# construct compressors and chunks
obs_shape_meta = shape_meta['obs']
for key, attr in obs_shape_meta.items():
shape = attr['shape']
type = attr.get('type', 'low_dim')
if type == 'rgb':
rgb_keys.append(key)
elif type == 'low_dim':
lowdim_keys.append(key)
root = zarr.group(store)
data_group = root.require_group('data', overwrite=True)
meta_group = root.require_group('meta', overwrite=True)
with h5py.File(dataset_path) as file:
# count total steps
demos = file['data']
episode_ends = list()
prev_end = 0
for i in range(len(demos)):
demo = demos[f'demo_{i}']
episode_length = demo['actions'].shape[0]
episode_end = prev_end + episode_length
prev_end = episode_end
episode_ends.append(episode_end)
n_steps = episode_ends[-1]
episode_starts = [0] + episode_ends[:-1]
_ = meta_group.array('episode_ends', episode_ends,
dtype=np.int64, compressor=None, overwrite=True)
# save lowdim data
for key in tqdm(lowdim_keys + ['action'], desc="Loading lowdim data"):
data_key = 'obs/' + key
if key == 'action':
data_key = 'actions'
this_data = list()
for i in range(len(demos)):
demo = demos[f'demo_{i}']
this_data.append(demo[data_key][:].astype(np.float32))
this_data = np.concatenate(this_data, axis=0)
if key == 'action':
this_data = _convert_actions(
raw_actions=this_data,
abs_action=abs_action,
rotation_transformer=rotation_transformer
)
assert this_data.shape == (n_steps,) + tuple(shape_meta['action']['shape'])
else:
assert this_data.shape == (n_steps,) + tuple(shape_meta['obs'][key]['shape'])
_ = data_group.array(
name=key,
data=this_data,
shape=this_data.shape,
chunks=this_data.shape,
compressor=None,
dtype=this_data.dtype
)
def img_copy(zarr_arr, zarr_idx, hdf5_arr, hdf5_idx):
try:
zarr_arr[zarr_idx] = hdf5_arr[hdf5_idx]
# make sure we can successfully decode
_ = zarr_arr[zarr_idx]
return True
except Exception:
# any failure to write or decode is reported to the caller via False
return False
with tqdm(total=n_steps*len(rgb_keys), desc="Loading image data", mininterval=1.0) as pbar:
# one chunk per thread, therefore no synchronization needed
with concurrent.futures.ThreadPoolExecutor(max_workers=n_workers) as executor:
futures = set()
for key in rgb_keys:
data_key = 'obs/' + key
shape = tuple(shape_meta['obs'][key]['shape'])
c,h,w = shape
this_compressor = Jpeg2k(level=50)
img_arr = data_group.require_dataset(
name=key,
shape=(n_steps,h,w,c),
chunks=(1,h,w,c),
compressor=this_compressor,
dtype=np.uint8
)
for episode_idx in range(len(demos)):
demo = demos[f'demo_{episode_idx}']
hdf5_arr = demo['obs'][key]
for hdf5_idx in range(hdf5_arr.shape[0]):
if len(futures) >= max_inflight_tasks:
# limit number of inflight tasks
completed, futures = concurrent.futures.wait(futures,
return_when=concurrent.futures.FIRST_COMPLETED)
for f in completed:
if not f.result():
raise RuntimeError('Failed to encode image!')
pbar.update(len(completed))
zarr_idx = episode_starts[episode_idx] + hdf5_idx
futures.add(
executor.submit(img_copy,
img_arr, zarr_idx, hdf5_arr, hdf5_idx))
completed, futures = concurrent.futures.wait(futures)
for f in completed:
if not f.result():
raise RuntimeError('Failed to encode image!')
pbar.update(len(completed))
replay_buffer = ReplayBuffer(root)
return replay_buffer
def normalizer_from_stat(stat):
max_abs = np.maximum(stat['max'].max(), np.abs(stat['min']).max())
scale = np.full_like(stat['max'], fill_value=1/max_abs)
offset = np.zeros_like(stat['max'])
return SingleFieldLinearNormalizer.create_manual(
scale=scale,
offset=offset,
input_stats_dict=stat
)
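The legacy `normalizer_from_stat` above maps data into [-1, 1] with a single shared scale (the largest absolute bound) and zero offset, so signs are preserved. A minimal self-contained NumPy sketch of the same computation (the example stats values are hypothetical, in the shape `array_to_stats` would produce):

```python
import numpy as np

# hypothetical per-dimension stats, as array_to_stats would produce
stat = {
    'min': np.array([-2.0, -0.5, 0.0]),
    'max': np.array([1.0, 4.0, 0.25]),
}

# symmetric scaling: one shared scale, zero offset, so the sign of the
# data is preserved and the largest-magnitude bound maps to +/-1
max_abs = np.maximum(stat['max'].max(), np.abs(stat['min']).max())
scale = np.full_like(stat['max'], fill_value=1 / max_abs)
offset = np.zeros_like(stat['max'])

x = np.array([4.0, -2.0, 0.25])
normalized = x * scale + offset
print(normalized)  # [1.0, -0.5, 0.0625]: the largest bound (4.0) maps to 1.0
```

Note this differs from a per-dimension range normalizer: dimensions with small ranges stay small after scaling, which is why it is kept only behind `use_legacy_normalizer`.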


@@ -0,0 +1,168 @@
from typing import Dict, List
import torch
import numpy as np
import h5py
from tqdm import tqdm
import copy
from diffusion_policy.common.pytorch_util import dict_apply
from diffusion_policy.dataset.base_dataset import BaseLowdimDataset
from diffusion_policy.model.common.normalizer import LinearNormalizer, SingleFieldLinearNormalizer
from diffusion_policy.model.common.rotation_transformer import RotationTransformer
from diffusion_policy.common.replay_buffer import ReplayBuffer
from diffusion_policy.common.sampler import (
SequenceSampler, get_val_mask, downsample_mask)
from diffusion_policy.common.normalize_util import (
robomimic_abs_action_only_normalizer_from_stat,
robomimic_abs_action_only_dual_arm_normalizer_from_stat,
get_identity_normalizer_from_stat,
array_to_stats
)
class RobomimicReplayLowdimDataset(BaseLowdimDataset):
def __init__(self,
dataset_path: str,
horizon=1,
pad_before=0,
pad_after=0,
obs_keys: List[str]=[
'object',
'robot0_eef_pos',
'robot0_eef_quat',
'robot0_gripper_qpos'],
abs_action=False,
rotation_rep='rotation_6d',
use_legacy_normalizer=False,
seed=42,
val_ratio=0.0,
max_train_episodes=None
):
obs_keys = list(obs_keys)
rotation_transformer = RotationTransformer(
from_rep='axis_angle', to_rep=rotation_rep)
replay_buffer = ReplayBuffer.create_empty_numpy()
with h5py.File(dataset_path) as file:
demos = file['data']
for i in tqdm(range(len(demos)), desc="Loading hdf5 to ReplayBuffer"):
demo = demos[f'demo_{i}']
episode = _data_to_obs(
raw_obs=demo['obs'],
raw_actions=demo['actions'][:].astype(np.float32),
obs_keys=obs_keys,
abs_action=abs_action,
rotation_transformer=rotation_transformer)
replay_buffer.add_episode(episode)
val_mask = get_val_mask(
n_episodes=replay_buffer.n_episodes,
val_ratio=val_ratio,
seed=seed)
train_mask = ~val_mask
train_mask = downsample_mask(
mask=train_mask,
max_n=max_train_episodes,
seed=seed)
sampler = SequenceSampler(
replay_buffer=replay_buffer,
sequence_length=horizon,
pad_before=pad_before,
pad_after=pad_after,
episode_mask=train_mask)
self.replay_buffer = replay_buffer
self.sampler = sampler
self.abs_action = abs_action
self.train_mask = train_mask
self.horizon = horizon
self.pad_before = pad_before
self.pad_after = pad_after
self.use_legacy_normalizer = use_legacy_normalizer
def get_validation_dataset(self):
val_set = copy.copy(self)
val_set.sampler = SequenceSampler(
replay_buffer=self.replay_buffer,
sequence_length=self.horizon,
pad_before=self.pad_before,
pad_after=self.pad_after,
episode_mask=~self.train_mask
)
val_set.train_mask = ~self.train_mask
return val_set
def get_normalizer(self, **kwargs) -> LinearNormalizer:
normalizer = LinearNormalizer()
# action
stat = array_to_stats(self.replay_buffer['action'])
if self.abs_action:
if stat['mean'].shape[-1] > 10:
# dual arm
this_normalizer = robomimic_abs_action_only_dual_arm_normalizer_from_stat(stat)
else:
this_normalizer = robomimic_abs_action_only_normalizer_from_stat(stat)
if self.use_legacy_normalizer:
this_normalizer = normalizer_from_stat(stat)
else:
# already normalized
this_normalizer = get_identity_normalizer_from_stat(stat)
normalizer['action'] = this_normalizer
# aggregate obs stats
obs_stat = array_to_stats(self.replay_buffer['obs'])
normalizer['obs'] = normalizer_from_stat(obs_stat)
return normalizer
def get_all_actions(self) -> torch.Tensor:
return torch.from_numpy(self.replay_buffer['action'])
def __len__(self):
return len(self.sampler)
def __getitem__(self, idx: int) -> Dict[str, torch.Tensor]:
data = self.sampler.sample_sequence(idx)
torch_data = dict_apply(data, torch.from_numpy)
return torch_data
def normalizer_from_stat(stat):
max_abs = np.maximum(stat['max'].max(), np.abs(stat['min']).max())
scale = np.full_like(stat['max'], fill_value=1/max_abs)
offset = np.zeros_like(stat['max'])
return SingleFieldLinearNormalizer.create_manual(
scale=scale,
offset=offset,
input_stats_dict=stat
)
def _data_to_obs(raw_obs, raw_actions, obs_keys, abs_action, rotation_transformer):
obs = np.concatenate([
raw_obs[key] for key in obs_keys
], axis=-1).astype(np.float32)
if abs_action:
is_dual_arm = False
if raw_actions.shape[-1] == 14:
# dual arm
raw_actions = raw_actions.reshape(-1,2,7)
is_dual_arm = True
pos = raw_actions[...,:3]
rot = raw_actions[...,3:6]
gripper = raw_actions[...,6:]
rot = rotation_transformer.forward(rot)
raw_actions = np.concatenate([
pos, rot, gripper
], axis=-1).astype(np.float32)
if is_dual_arm:
raw_actions = raw_actions.reshape(-1,20)
data = {
'obs': obs,
'action': raw_actions
}
return data
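The `abs_action` branch above widens each 7-dof arm action (position 3 + axis-angle 3 + gripper 1) to 10 dims by swapping axis-angle for a 6d rotation representation, so a dual-arm batch goes 14 -> 20. A minimal NumPy sketch of just that reshape; `axis_angle_to_rot6d` here is a hypothetical stand-in for `RotationTransformer`, kept only for its `(..., 3) -> (..., 6)` shape change, not a real rotation conversion:

```python
import numpy as np

def axis_angle_to_rot6d(rot):
    # stand-in for RotationTransformer(from_rep='axis_angle',
    # to_rep='rotation_6d'); only the (..., 3) -> (..., 6) shape
    # change is mimicked here, not the actual rotation math
    return np.concatenate([rot, rot], axis=-1)

# dual-arm batch: 14 dims = 2 arms x (pos 3 + axis-angle 3 + gripper 1)
raw_actions = np.zeros((5, 14), dtype=np.float32)
raw_actions = raw_actions.reshape(-1, 2, 7)
pos = raw_actions[..., :3]
rot = axis_angle_to_rot6d(raw_actions[..., 3:6])
gripper = raw_actions[..., 6:]
actions = np.concatenate([pos, rot, gripper], axis=-1).reshape(-1, 20)
print(actions.shape)  # (5, 20): each arm is now pos 3 + rot6d 6 + gripper 1
```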


@@ -0,0 +1,31 @@
<?xml version="1.0" ?>
<robot name="box.urdf">
<link name="baseLink">
<contact>
<lateral_friction value="1.0"/>
<rolling_friction value="0.0001"/>
<inertia_scaling value="3.0"/>
</contact>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0"/>
<mass value=".01"/>
<inertia ixx="1" ixy="0" ixz="0" iyy="1" iyz="0" izz="1"/>
</inertial>
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<box size="0.04 0.04 0.04"/>
</geometry>
<material name="red">
<color rgba="1 0.3412 0.3490 1"/>
</material>
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<box size="0.04 0.04 0.04"/>
</geometry>
</collision>
</link>
</robot>


@@ -0,0 +1,31 @@
<?xml version="1.0" ?>
<robot name="box2.urdf">
<link name="baseLink">
<contact>
<lateral_friction value="1.0"/>
<rolling_friction value="0.0001"/>
<inertia_scaling value="3.0"/>
</contact>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0"/>
<mass value=".01"/>
<inertia ixx="1" ixy="0" ixz="0" iyy="1" iyz="0" izz="1"/>
</inertial>
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<box size="0.04 0.04 0.04"/>
</geometry>
<material name="red">
<color rgba="0.3412 1 0.3490 1"/>
</material>
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<box size="0.04 0.04 0.04"/>
</geometry>
</collision>
</link>
</robot>


@@ -0,0 +1,30 @@
<?xml version="1.0" ?>
<robot name="blue_cube.urdf">
<link name="baseLink">
<contact>
<lateral_friction value="0.5"/>
<rolling_friction value="0.0001"/>
<inertia_scaling value="1.0"/>
</contact>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0"/>
<mass value=".01"/>
<inertia ixx="1" ixy="0" ixz="0" iyy="1" iyz="0" izz="1"/>
</inertial>
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="cube.obj" scale="1.0 1.0 1.0"/>
</geometry>
<material name="blue">
<color rgba="0.4 0.4 1.0 1"/>
</material>
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="cube.obj" scale="1.0 1.0 1.0"/>
</geometry>
</collision>
</link>
</robot>


@@ -0,0 +1,446 @@
# Blender v2.92.0 OBJ File: ''
# www.blender.org
mtllib cube.mtl
o square_blue_block_Cube.001
v 0.000000 0.000000 -0.000000
v 0.000000 0.038100 -0.000000
v -0.014865 0.000000 0.018106
v -0.016277 0.002032 0.019826
v -0.015863 0.000595 0.019322
v -0.019826 0.002032 0.016277
v -0.018106 0.000000 0.014865
v -0.019322 0.000595 0.015863
v -0.018052 0.002032 0.019351
v -0.016494 0.000000 0.017681
v -0.017595 0.000595 0.018862
v -0.019351 0.002032 0.018052
v -0.017681 0.000000 0.016494
v -0.018862 0.000595 0.017595
v -0.018106 0.038100 0.014865
v -0.019826 0.036068 0.016277
v -0.019322 0.037505 0.015863
v -0.016277 0.036068 0.019826
v -0.014865 0.038100 0.018106
v -0.015863 0.037505 0.019322
v -0.017681 0.038100 0.016494
v -0.019351 0.036068 0.018052
v -0.018862 0.037505 0.017595
v -0.016494 0.038100 0.017681
v -0.018052 0.036068 0.019351
v -0.017595 0.037505 0.018862
v -0.018106 0.000000 -0.014865
v -0.019826 0.002032 -0.016277
v -0.019322 0.000595 -0.015863
v -0.016277 0.002032 -0.019826
v -0.014865 0.000000 -0.018106
v -0.015863 0.000595 -0.019322
v -0.017681 0.000000 -0.016494
v -0.019351 0.002032 -0.018052
v -0.018862 0.000595 -0.017595
v -0.016494 0.000000 -0.017681
v -0.018052 0.002032 -0.019351
v -0.017595 0.000595 -0.018862
v -0.014865 0.038100 -0.018106
v -0.016277 0.036068 -0.019826
v -0.015863 0.037505 -0.019322
v -0.019826 0.036068 -0.016277
v -0.018106 0.038100 -0.014865
v -0.019322 0.037505 -0.015863
v -0.018052 0.036068 -0.019351
v -0.016494 0.038100 -0.017681
v -0.017595 0.037505 -0.018862
v -0.019351 0.036068 -0.018052
v -0.017681 0.038100 -0.016494
v -0.018862 0.037505 -0.017595
v 0.018106 0.000000 0.014865
v 0.019826 0.002032 0.016277
v 0.019322 0.000595 0.015863
v 0.016277 0.002032 0.019826
v 0.014865 0.000000 0.018106
v 0.015863 0.000595 0.019322
v 0.017681 0.000000 0.016494
v 0.019351 0.002032 0.018052
v 0.018862 0.000595 0.017595
v 0.016494 0.000000 0.017681
v 0.018052 0.002032 0.019351
v 0.017595 0.000595 0.018862
v 0.014865 0.038100 0.018106
v 0.016277 0.036068 0.019826
v 0.015863 0.037505 0.019322
v 0.019826 0.036068 0.016277
v 0.018106 0.038100 0.014865
v 0.019322 0.037505 0.015863
v 0.018052 0.036068 0.019351
v 0.016494 0.038100 0.017681
v 0.017595 0.037505 0.018862
v 0.019351 0.036068 0.018052
v 0.017681 0.038100 0.016494
v 0.018862 0.037505 0.017595
v 0.014865 0.000000 -0.018106
v 0.016277 0.002032 -0.019826
v 0.015863 0.000595 -0.019322
v 0.019826 0.002032 -0.016277
v 0.018106 0.000000 -0.014865
v 0.019322 0.000595 -0.015863
v 0.016494 0.000000 -0.017681
v 0.018052 0.002032 -0.019351
v 0.017595 0.000595 -0.018862
v 0.017681 0.000000 -0.016494
v 0.019351 0.002032 -0.018052
v 0.018862 0.000595 -0.017595
v 0.018106 0.038100 -0.014865
v 0.019826 0.036068 -0.016277
v 0.019322 0.037505 -0.015863
v 0.016277 0.036068 -0.019826
v 0.014865 0.038100 -0.018106
v 0.015863 0.037505 -0.019322
v 0.019351 0.036068 -0.018052
v 0.017681 0.038100 -0.016494
v 0.018862 0.037505 -0.017595
v 0.018052 0.036068 -0.019351
v 0.016494 0.038100 -0.017681
v 0.017595 0.037505 -0.018862
vt 0.811987 0.285513
vt 0.811986 0.268118
vt 0.822561 0.276819
vt 0.831255 0.287393
vt 0.813860 0.287394
vt 0.836073 0.319870
vt 0.856069 0.319870
vt 0.856069 0.320950
vt 0.836073 0.320950
vt 0.813867 0.266245
vt 0.831262 0.266244
vt 0.832202 0.266496
vt 0.832884 0.267184
vt 0.856069 0.297507
vt 0.836074 0.297507
vt 0.836074 0.278381
vt 0.856070 0.278381
vt 0.812238 0.267178
vt 0.812926 0.266496
vt 0.856070 0.275143
vt 0.836074 0.275143
vt 0.836074 0.256018
vt 0.856070 0.256018
vt 0.833135 0.268125
vt 0.833136 0.285520
vt 0.833136 0.260613
vt 0.832884 0.261553
vt 0.822561 0.251912
vt 0.836074 0.342234
vt 0.856070 0.342234
vt 0.856070 0.343313
vt 0.836074 0.343313
vt 0.832196 0.287142
vt 0.831262 0.241337
vt 0.832202 0.241589
vt 0.832884 0.286460
vt 0.812238 0.286454
vt 0.811986 0.243211
vt 0.812238 0.242271
vt 0.836073 0.300745
vt 0.856069 0.300745
vt 0.836073 0.298586
vt 0.856069 0.298586
vt 0.856069 0.299665
vt 0.836073 0.299665
vt 0.856070 0.276222
vt 0.836074 0.276222
vt 0.836074 0.277302
vt 0.856070 0.277302
vt 0.836074 0.253859
vt 0.856070 0.253859
vt 0.856070 0.254938
vt 0.836074 0.254938
vt 0.812920 0.287142
vt 0.856069 0.322029
vt 0.836073 0.322029
vt 0.813860 0.262487
vt 0.812920 0.262235
vt 0.836073 0.323108
vt 0.856069 0.323108
vt 0.832884 0.242277
vt 0.833135 0.243218
vt 0.812238 0.261547
vt 0.811987 0.260606
vt 0.813868 0.241338
vt 0.812926 0.241589
vt 0.831255 0.262486
vt 0.832196 0.262235
vt 0.811196 0.267507
vt 0.811506 0.266500
vt 0.810258 0.267085
vt 0.810726 0.265896
vt 0.811523 0.262225
vt 0.811189 0.261224
vt 0.810732 0.262868
vt 0.810258 0.261658
vt 0.812248 0.265781
vt 0.811605 0.264990
vt 0.812242 0.262967
vt 0.811638 0.263747
vt 0.813249 0.265447
vt 0.812815 0.264516
vt 0.813249 0.263277
vt 0.812827 0.264215
vt 0.833926 0.261224
vt 0.833616 0.262231
vt 0.834864 0.261646
vt 0.834396 0.262835
vt 0.833599 0.266506
vt 0.833933 0.267507
vt 0.834390 0.265863
vt 0.834864 0.267073
vt 0.832874 0.262950
vt 0.833517 0.263741
vt 0.832880 0.265764
vt 0.833484 0.264984
vt 0.831873 0.263284
vt 0.832307 0.264215
vt 0.831873 0.265453
vt 0.832295 0.264516
vt 0.831873 0.240547
vt 0.832880 0.240857
vt 0.832295 0.239609
vt 0.833484 0.240077
vt 0.832874 0.287857
vt 0.831873 0.288191
vt 0.833517 0.288648
vt 0.832307 0.289122
vt 0.833599 0.241599
vt 0.834390 0.240956
vt 0.833616 0.287138
vt 0.834396 0.287742
vt 0.833933 0.242600
vt 0.834864 0.242166
vt 0.833926 0.286131
vt 0.834864 0.286553
vt 0.811196 0.242600
vt 0.811506 0.241593
vt 0.810258 0.242178
vt 0.810726 0.240989
vt 0.811523 0.287132
vt 0.811189 0.286131
vt 0.810732 0.287775
vt 0.810258 0.286565
vt 0.812248 0.240874
vt 0.811605 0.240083
vt 0.812242 0.287874
vt 0.811638 0.288654
vt 0.813249 0.240540
vt 0.812815 0.239609
vt 0.813249 0.288184
vt 0.812827 0.289122
vn 0.0000 -1.0000 0.0000
vn 0.9739 0.1816 -0.1363
vn 0.9739 -0.1816 -0.1363
vn 0.8444 -0.2052 -0.4948
vn 0.8444 0.2052 -0.4948
vn 0.1363 -0.1816 0.9739
vn 0.1363 0.1816 0.9739
vn -0.1363 0.1816 0.9739
vn -0.1363 -0.1816 0.9739
vn -0.9739 -0.1816 0.1362
vn -0.9739 0.1816 0.1362
vn -0.9739 0.1816 -0.1362
vn -0.9739 -0.1816 -0.1362
vn -0.0000 1.0000 -0.0000
vn -0.1363 0.1816 -0.9739
vn -0.1363 -0.1816 -0.9739
vn -0.4948 -0.2052 -0.8444
vn -0.4948 0.2052 -0.8444
vn 0.9739 0.1816 0.1363
vn 0.9739 -0.1816 0.1363
vn 0.4948 0.2052 0.8444
vn 0.4948 -0.2052 0.8444
vn 0.8444 -0.2052 0.4948
vn 0.8444 0.2052 0.4948
vn -0.8444 -0.2052 0.4948
vn -0.8444 0.2052 0.4948
vn -0.4948 0.2052 0.8444
vn -0.4948 -0.2052 0.8444
vn -0.8444 -0.2052 -0.4948
vn -0.8444 0.2052 -0.4948
vn 0.4948 -0.2052 -0.8444
vn 0.4948 0.2052 -0.8444
vn 0.1363 0.1816 -0.9739
vn 0.1363 -0.1816 -0.9739
vn -0.0965 -0.9775 0.1874
vn -0.0187 -0.9731 0.2298
vn -0.0935 -0.6743 0.7325
vn -0.3529 -0.6994 0.6215
vn -0.0187 0.9731 0.2298
vn -0.0965 0.9775 0.1874
vn -0.3529 0.6994 0.6215
vn -0.0935 0.6743 0.7325
vn -0.1874 -0.9775 0.0965
vn -0.6215 -0.6994 0.3529
vn -0.1874 0.9775 0.0965
vn -0.6215 0.6994 0.3529
vn -0.2298 -0.9731 0.0187
vn -0.7325 -0.6743 0.0935
vn -0.2298 0.9731 0.0187
vn -0.7325 0.6743 0.0935
vn -0.0965 0.9775 -0.1874
vn -0.0187 0.9731 -0.2298
vn -0.0935 0.6743 -0.7325
vn -0.3529 0.6994 -0.6215
vn -0.0187 -0.9731 -0.2298
vn -0.0965 -0.9775 -0.1874
vn -0.3529 -0.6994 -0.6215
vn -0.0935 -0.6743 -0.7325
vn -0.1874 0.9775 -0.0965
vn -0.6215 0.6994 -0.3529
vn -0.1874 -0.9775 -0.0965
vn -0.6215 -0.6994 -0.3529
vn -0.2298 0.9731 -0.0187
vn -0.7325 0.6743 -0.0935
vn -0.2298 -0.9731 -0.0187
vn -0.7325 -0.6743 -0.0935
vn 0.1874 0.9775 -0.0965
vn 0.2298 0.9731 -0.0187
vn 0.7325 0.6743 -0.0935
vn 0.6215 0.6994 -0.3529
vn 0.2298 -0.9731 -0.0187
vn 0.1874 -0.9775 -0.0965
vn 0.6215 -0.6994 -0.3529
vn 0.7325 -0.6743 -0.0935
vn 0.0965 0.9775 -0.1874
vn 0.3529 0.6994 -0.6215
vn 0.0965 -0.9775 -0.1874
vn 0.3529 -0.6994 -0.6215
vn 0.0187 0.9731 -0.2298
vn 0.0935 0.6743 -0.7325
vn 0.0187 -0.9731 -0.2298
vn 0.0935 -0.6743 -0.7325
vn 0.0965 0.9775 0.1874
vn 0.0187 0.9731 0.2298
vn 0.0935 0.6743 0.7325
vn 0.3529 0.6994 0.6215
vn 0.0187 -0.9731 0.2298
vn 0.0965 -0.9775 0.1874
vn 0.3529 -0.6994 0.6215
vn 0.0935 -0.6743 0.7325
vn 0.1874 0.9775 0.0965
vn 0.6215 0.6994 0.3529
vn 0.1874 -0.9775 0.0965
vn 0.6215 -0.6994 0.3529
vn 0.2298 0.9731 0.0187
vn 0.7325 0.6743 0.0935
vn 0.2298 -0.9731 0.0187
vn 0.7325 -0.6743 0.0935
usemtl toybox.001
s 1
f 55/1/1 3/2/1 1/3/1
f 79/4/1 51/5/1 1/3/1
f 88/6/2 78/7/3 85/8/4 93/9/5
f 7/10/1 27/11/1 1/3/1
f 33/12/1 36/13/1 1/3/1
f 54/14/6 64/15/7 18/16/8 4/17/9
f 10/18/1 13/19/1 1/3/1
f 6/20/10 16/21/11 42/22/12 28/23/13
f 31/24/1 75/25/1 1/3/1
f 39/26/14 46/27/14 2/28/14
f 40/29/15 30/30/16 37/31/17 45/32/18
f 84/33/1 79/4/1 1/3/1
f 87/34/14 94/35/14 2/28/14
f 75/25/1 81/36/1 1/3/1
f 60/37/1 55/1/1 1/3/1
f 63/38/14 70/39/14 2/28/14
f 78/7/3 88/6/2 66/40/19 52/41/20
f 69/42/21 61/43/22 58/44/23 72/45/24
f 13/19/1 7/10/1 1/3/1
f 12/46/25 22/47/26 16/21/11 6/20/10
f 81/36/1 84/33/1 1/3/1
f 4/17/9 18/16/8 25/48/27 9/49/28
f 45/50/18 37/51/17 34/52/29 48/53/30
f 72/45/24 58/44/23 52/41/20 66/40/19
f 57/54/1 60/37/1 1/3/1
f 93/9/5 85/8/4 82/55/31 96/56/32
f 9/49/28 25/48/27 22/47/26 12/46/25
f 48/53/30 34/52/29 28/23/13 42/22/12
f 36/13/1 31/24/1 1/3/1
f 15/57/14 21/58/14 2/28/14
f 30/30/16 40/29/15 90/59/33 76/60/34
f 64/15/7 54/14/6 61/43/22 69/42/21
f 97/61/14 91/62/14 2/28/14
f 51/5/1 57/54/1 1/3/1
f 24/63/14 19/64/14 2/28/14
f 67/65/14 87/34/14 2/28/14
f 73/66/14 67/65/14 2/28/14
f 21/58/14 24/63/14 2/28/14
f 43/67/14 15/57/14 2/28/14
f 70/39/14 73/66/14 2/28/14
f 19/64/14 63/38/14 2/28/14
f 46/27/14 49/68/14 2/28/14
f 49/68/14 43/67/14 2/28/14
f 91/62/14 39/26/14 2/28/14
f 96/56/32 82/55/31 76/60/34 90/59/33
f 94/35/14 97/61/14 2/28/14
f 27/11/1 33/12/1 1/3/1
f 10/18/35 3/2/36 5/69/37 11/70/38
f 11/70/38 5/69/37 4/71/9 9/72/28
f 19/64/39 24/63/40 26/73/41 20/74/42
f 20/74/42 26/73/41 25/75/27 18/76/8
f 13/19/43 10/18/35 11/70/38 14/77/44
f 14/77/44 11/70/38 9/72/28 12/78/25
f 24/63/40 21/58/45 23/79/46 26/73/41
f 26/73/41 23/79/46 22/80/26 25/75/27
f 7/10/47 13/19/43 14/77/44 8/81/48
f 8/81/48 14/77/44 12/78/25 6/82/10
f 21/58/45 15/57/49 17/83/50 23/79/46
f 23/79/46 17/83/50 16/84/11 22/80/26
f 46/27/51 39/26/52 41/85/53 47/86/54
f 47/86/54 41/85/53 40/87/15 45/88/18
f 31/24/55 36/13/56 38/89/57 32/90/58
f 32/90/58 38/89/57 37/91/17 30/92/16
f 49/68/59 46/27/51 47/86/54 50/93/60
f 50/93/60 47/86/54 45/88/18 48/94/30
f 36/13/56 33/12/61 35/95/62 38/89/57
f 38/89/57 35/95/62 34/96/29 37/91/17
f 43/67/63 49/68/59 50/93/60 44/97/64
f 44/97/64 50/93/60 48/94/30 42/98/12
f 33/12/61 27/11/65 29/99/66 35/95/62
f 35/95/62 29/99/66 28/100/13 34/96/29
f 94/35/67 87/34/68 89/101/69 95/102/70
f 95/102/70 89/101/69 88/103/2 93/104/5
f 79/4/71 84/33/72 86/105/73 80/106/74
f 80/106/74 86/105/73 85/107/4 78/108/3
f 97/61/75 94/35/67 95/102/70 98/109/76
f 98/109/76 95/102/70 93/104/5 96/110/32
f 84/33/72 81/36/77 83/111/78 86/105/73
f 86/105/73 83/111/78 82/112/31 85/107/4
f 91/62/79 97/61/75 98/109/76 92/113/80
f 92/113/80 98/109/76 96/110/32 90/114/33
f 81/36/77 75/25/81 77/115/82 83/111/78
f 83/111/78 77/115/82 76/116/34 82/112/31
f 70/39/83 63/38/84 65/117/85 71/118/86
f 71/118/86 65/117/85 64/119/7 69/120/21
f 55/1/87 60/37/88 62/121/89 56/122/90
f 56/122/90 62/121/89 61/123/22 54/124/6
f 73/66/91 70/39/83 71/118/86 74/125/92
f 74/125/92 71/118/86 69/120/21 72/126/24
f 60/37/88 57/54/93 59/127/94 62/121/89
f 62/121/89 59/127/94 58/128/23 61/123/22
f 67/65/95 73/66/91 74/125/92 68/129/96
f 68/129/96 74/125/92 72/126/24 66/130/19
f 57/54/93 51/5/97 53/131/98 59/127/94
f 59/127/94 53/131/98 52/132/20 58/128/23
f 6/82/10 28/100/13 29/99/66 8/81/48
f 8/81/48 29/99/66 27/11/65 7/10/47
f 30/92/16 76/116/34 77/115/82 32/90/58
f 32/90/58 77/115/82 75/25/81 31/24/55
f 78/108/3 52/132/20 53/131/98 80/106/74
f 80/106/74 53/131/98 51/5/97 79/4/71
f 54/124/6 4/71/9 5/69/37 56/122/90
f 56/122/90 5/69/37 3/2/36 55/1/87
f 15/57/49 43/67/63 44/97/64 17/83/50
f 17/83/50 44/97/64 42/98/12 16/84/11
f 63/38/84 19/64/39 20/74/42 65/117/85
f 65/117/85 20/74/42 18/76/8 64/119/7
f 66/130/19 88/103/2 89/101/69 68/129/96
f 68/129/96 89/101/69 87/34/68 67/65/95
f 90/114/33 40/87/15 41/85/53 92/113/80
f 92/113/80 41/85/53 39/26/52 91/62/79
f 3/2/1 10/18/1 1/3/1


@@ -0,0 +1,30 @@
<?xml version="1.0" ?>
<robot name="green_star.urdf">
<link name="baseLink">
<contact>
<lateral_friction value="0.5"/>
<rolling_friction value="0.0001"/>
<inertia_scaling value="1.0"/>
</contact>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0"/>
<mass value=".01"/>
<inertia ixx="1" ixy="0" ixz="0" iyy="1" iyz="0" izz="1"/>
</inertial>
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="star.obj" scale="1.0 1.0 1.0"/>
</geometry>
<material name="green">
<color rgba="0.4 1.0 0.4 1"/>
</material>
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="star.obj" scale="1.0 1.0 1.0"/>
</geometry>
</collision>
</link>
</robot>


@@ -0,0 +1,446 @@
# Blender v2.92.0 OBJ File: ''
# www.blender.org
mtllib block.mtl
o moon_red_block_Cylinder.012
v -0.009181 0.000000 -0.006901
v -0.012205 0.002032 -0.009549
v -0.011319 0.000595 -0.008773
v -0.021856 0.000000 -0.011560
v -0.021140 0.002032 -0.012614
v -0.021633 0.000595 -0.012305
v -0.021140 0.036068 -0.012614
v -0.021856 0.038100 -0.011560
v -0.021633 0.037505 -0.012305
v -0.014039 0.002032 0.009151
v -0.012620 0.000000 0.008341
v -0.013624 0.000595 0.008914
v -0.012620 0.038100 0.008341
v -0.014039 0.036068 0.009151
v -0.013624 0.037505 0.008914
v -0.024317 0.002032 -0.000986
v -0.021966 0.000000 -0.000780
v -0.023628 0.000595 -0.000925
v -0.012205 0.036068 -0.009549
v -0.009181 0.038100 -0.006901
v -0.011319 0.037505 -0.008773
v -0.021966 0.038100 -0.000780
v -0.024317 0.036068 -0.000986
v -0.023628 0.037505 -0.000925
v -0.023768 0.000000 -0.007207
v -0.026243 0.002032 -0.008076
v -0.025518 0.000595 -0.007821
v -0.023815 0.000000 -0.010019
v -0.026267 0.002032 -0.011168
v -0.025549 0.000595 -0.010832
v -0.024498 0.002032 -0.012861
v -0.023166 0.000000 -0.011548
v -0.023825 0.000595 -0.012476
v -0.026243 0.036068 -0.008076
v -0.023768 0.038100 -0.007207
v -0.025518 0.037505 -0.007821
v -0.023166 0.038100 -0.011548
v -0.024498 0.036068 -0.012861
v -0.023825 0.037505 -0.012476
v -0.026267 0.036068 -0.011168
v -0.023815 0.038100 -0.010019
v -0.025549 0.037505 -0.010832
v -0.000000 0.038100 0.001138
v -0.000000 0.000000 0.001138
v -0.000000 0.000000 -0.006070
v -0.000000 0.002032 -0.008427
v -0.000000 0.000595 -0.007737
v -0.000000 0.036068 -0.008427
v -0.000000 0.038100 -0.006070
v -0.000000 0.037505 -0.007737
v 0.009181 0.000000 -0.006901
v 0.012205 0.002032 -0.009549
v 0.011319 0.000595 -0.008773
v 0.021856 0.000000 -0.011560
v 0.021140 0.002032 -0.012614
v 0.021633 0.000595 -0.012305
v 0.021140 0.036068 -0.012614
v 0.021856 0.038100 -0.011560
v 0.021633 0.037505 -0.012305
v -0.000000 0.002032 0.012861
v -0.000000 0.000000 0.011641
v -0.000000 0.000595 0.012504
v -0.000000 0.038100 0.011640
v -0.000000 0.036068 0.012861
v -0.000000 0.037505 0.012503
v 0.014039 0.002032 0.009151
v 0.012620 0.000000 0.008341
v 0.013624 0.000595 0.008914
v 0.012620 0.038100 0.008341
v 0.014039 0.036068 0.009151
v 0.013624 0.037505 0.008914
v 0.024317 0.002032 -0.000986
v 0.021966 0.000000 -0.000780
v 0.023628 0.000595 -0.000925
v 0.012205 0.036068 -0.009549
v 0.009181 0.038100 -0.006901
v 0.011319 0.037505 -0.008773
v 0.021966 0.038100 -0.000780
v 0.024317 0.036068 -0.000986
v 0.023628 0.037505 -0.000925
v 0.023768 0.000000 -0.007207
v 0.026243 0.002032 -0.008076
v 0.025518 0.000595 -0.007821
v 0.023815 0.000000 -0.010019
v 0.026267 0.002032 -0.011168
v 0.025549 0.000595 -0.010832
v 0.024498 0.002032 -0.012861
v 0.023166 0.000000 -0.011548
v 0.023825 0.000595 -0.012476
v 0.026243 0.036068 -0.008076
v 0.023768 0.038100 -0.007207
v 0.025518 0.037505 -0.007821
v 0.023166 0.038100 -0.011548
v 0.024498 0.036068 -0.012861
v 0.023825 0.037505 -0.012476
v 0.026267 0.036068 -0.011168
v 0.023815 0.038100 -0.010019
v 0.025549 0.037505 -0.010832
vt 0.710579 0.167405
vt 0.710864 0.161788
vt 0.714864 0.161788
vt 0.686288 0.163021
vt 0.706306 0.163021
vt 0.706306 0.165002
vt 0.686288 0.165002
vt 0.708556 0.175784
vt 0.708475 0.175088
vt 0.721180 0.161788
vt 0.719512 0.169043
vt 0.714650 0.174731
vt 0.711007 0.176048
vt 0.720186 0.145021
vt 0.718589 0.144931
vt 0.714732 0.130671
vt 0.709411 0.176139
vt 0.721121 0.143971
vt 0.721041 0.144667
vt 0.686288 0.138411
vt 0.706306 0.138411
vt 0.706306 0.146951
vt 0.686288 0.146951
vt 0.706306 0.155441
vt 0.686288 0.155441
vt 0.706306 0.159762
vt 0.686288 0.159762
vt 0.714946 0.143614
vt 0.686288 0.177766
vt 0.686288 0.170557
vt 0.706306 0.170557
vt 0.706306 0.177766
vt 0.686288 0.161581
vt 0.706306 0.161581
vt 0.718732 0.130671
vt 0.719018 0.136287
vt 0.710084 0.137925
vt 0.708416 0.130671
vt 0.709447 0.168778
vt 0.709791 0.161788
vt 0.708408 0.169256
vt 0.708838 0.161788
vt 0.707932 0.175020
vt 0.707041 0.174762
vt 0.722722 0.161788
vt 0.720733 0.170191
vt 0.719979 0.169645
vt 0.721789 0.161788
vt 0.708863 0.139073
vt 0.706874 0.130671
vt 0.707807 0.130671
vt 0.709618 0.138527
vt 0.715051 0.176717
vt 0.714702 0.175778
vt 0.719806 0.130670
vt 0.720149 0.137661
vt 0.720758 0.130670
vt 0.721188 0.138138
vt 0.721665 0.143903
vt 0.722555 0.143645
vt 0.714546 0.145599
vt 0.714894 0.144661
vt 0.721564 0.145099
vt 0.720661 0.146046
vt 0.722312 0.145803
vt 0.721098 0.146965
vt 0.708935 0.177163
vt 0.708032 0.176217
vt 0.708498 0.178082
vt 0.707284 0.176921
vt 0.718902 0.146070
vt 0.719002 0.147136
vt 0.710695 0.177187
vt 0.710594 0.178254
vt 0.710579 0.156171
vt 0.686288 0.192510
vt 0.686288 0.190530
vt 0.706306 0.190530
vt 0.706306 0.192510
vt 0.708556 0.147792
vt 0.708475 0.148488
vt 0.719512 0.154534
vt 0.714650 0.148845
vt 0.711007 0.147528
vt 0.720186 0.116320
vt 0.718589 0.116410
vt 0.709411 0.147437
vt 0.721121 0.117370
vt 0.721041 0.116674
vt 0.686288 0.129870
vt 0.706306 0.129870
vt 0.686288 0.121380
vt 0.706306 0.121380
vt 0.686288 0.117059
vt 0.706306 0.117059
vt 0.714946 0.117727
vt 0.706306 0.184974
vt 0.686288 0.184974
vt 0.706306 0.193950
vt 0.686288 0.193950
vt 0.719018 0.125054
vt 0.710084 0.123416
vt 0.706306 0.115240
vt 0.686288 0.115240
vt 0.709447 0.154798
vt 0.708408 0.154321
vt 0.707932 0.148556
vt 0.707041 0.148814
vt 0.719979 0.153931
vt 0.720733 0.153385
vt 0.708863 0.122268
vt 0.709618 0.122814
vt 0.714702 0.147798
vt 0.715051 0.146860
vt 0.720149 0.123680
vt 0.721188 0.123203
vt 0.721665 0.117438
vt 0.722555 0.117696
vt 0.714546 0.115742
vt 0.714894 0.116680
vt 0.720661 0.115295
vt 0.721564 0.116241
vt 0.721098 0.114376
vt 0.722312 0.115538
vt 0.708032 0.147359
vt 0.708935 0.146413
vt 0.707284 0.146655
vt 0.708498 0.145494
vt 0.718901 0.115271
vt 0.719002 0.114205
vt 0.710695 0.146389
vt 0.710595 0.145323
vn 0.0000 -1.0000 -0.0000
vn -0.3609 -0.1941 -0.9122
vn -0.3609 0.1941 -0.9122
vn 0.2092 0.1359 -0.9684
vn 0.2092 -0.1359 -0.9684
vn 0.0000 1.0000 -0.0000
vn -0.0000 -0.1176 0.9931
vn -0.0000 0.1176 0.9931
vn -0.5010 0.1369 0.8546
vn -0.5010 -0.1369 0.8546
vn -0.8572 0.1847 0.4806
vn -0.8572 -0.1847 0.4806
vn -0.9678 0.2183 0.1257
vn -0.9678 -0.2183 0.1257
vn 0.0000 -0.2255 -0.9742
vn 0.2279 -0.1771 -0.9575
vn 0.2279 0.1771 -0.9575
vn 0.0000 0.2255 -0.9742
vn -0.8945 -0.2361 -0.3797
vn -0.8945 0.2361 -0.3797
vn 0.0000 -0.9870 -0.1606
vn 0.0247 -0.9852 -0.1694
vn 0.1624 -0.6740 -0.7206
vn 0.0000 -0.7551 -0.6556
vn 0.0583 -0.9460 -0.3190
vn 0.1517 -0.5680 -0.8089
vn -0.4024 -0.5836 0.7053
vn 0.0000 -0.5393 0.8421
vn -0.1318 -0.9470 0.2929
vn 0.0000 -0.9390 0.3440
vn 0.0000 0.5393 0.8421
vn -0.4024 0.5836 0.7053
vn 0.0000 0.9390 0.3440
vn -0.1318 0.9470 0.2929
vn -0.6387 -0.6655 0.3863
vn -0.2030 -0.9669 0.1548
vn 0.0247 0.9852 -0.1694
vn -0.0000 0.9870 -0.1606
vn 0.0000 0.7551 -0.6556
vn 0.1624 0.6740 -0.7206
vn 0.0583 0.9460 -0.3190
vn 0.1517 0.5680 -0.8089
vn -0.6387 0.6655 0.3863
vn -0.2030 0.9669 0.1548
vn -0.1804 0.9821 -0.0543
vn -0.0951 0.9531 -0.2874
vn -0.2234 0.6362 -0.7384
vn -0.6431 0.7203 -0.2600
vn -0.0951 -0.9531 -0.2874
vn -0.1804 -0.9821 -0.0543
vn -0.6431 -0.7203 -0.2600
vn -0.2234 -0.6362 -0.7384
vn -0.1858 0.9820 0.0343
vn -0.6801 0.7265 0.0987
vn -0.1858 -0.9820 0.0343
vn -0.6801 -0.7265 0.0987
vn 0.3609 -0.1941 -0.9122
vn -0.2092 -0.1359 -0.9684
vn -0.2092 0.1359 -0.9684
vn 0.3609 0.1941 -0.9122
vn 0.5010 -0.1369 0.8546
vn 0.5010 0.1369 0.8546
vn 0.8572 -0.1847 0.4806
vn 0.8572 0.1847 0.4806
vn 0.9678 -0.2183 0.1257
vn 0.9678 0.2183 0.1257
vn -0.2279 0.1771 -0.9575
vn -0.2279 -0.1771 -0.9575
vn 0.8945 0.2361 -0.3797
vn 0.8945 -0.2361 -0.3797
vn -0.1624 -0.6740 -0.7206
vn -0.0247 -0.9852 -0.1694
vn -0.1517 -0.5680 -0.8089
vn -0.0583 -0.9460 -0.3190
vn 0.4024 -0.5836 0.7053
vn 0.1318 -0.9470 0.2929
vn 0.4024 0.5836 0.7053
vn 0.1318 0.9470 0.2929
vn 0.6387 -0.6655 0.3863
vn 0.2030 -0.9669 0.1548
vn -0.0247 0.9852 -0.1694
vn -0.1624 0.6740 -0.7206
vn -0.0583 0.9460 -0.3190
vn -0.1517 0.5680 -0.8089
vn 0.6387 0.6655 0.3863
vn 0.2030 0.9669 0.1548
vn 0.1804 0.9821 -0.0543
vn 0.6431 0.7203 -0.2600
vn 0.2234 0.6362 -0.7384
vn 0.0951 0.9531 -0.2874
vn 0.0951 -0.9531 -0.2874
vn 0.2234 -0.6362 -0.7384
vn 0.6431 -0.7203 -0.2600
vn 0.1804 -0.9821 -0.0543
vn 0.1858 0.9820 0.0343
vn 0.6801 0.7265 0.0987
vn 0.6801 -0.7265 0.0987
vn 0.1858 -0.9820 0.0343
usemtl toybox
s 1
f 1/1/1 45/2/1 44/3/1
f 31/4/2 38/5/3 7/6/4 5/7/5
f 32/8/1 4/9/1 44/3/1
f 61/10/1 11/11/1 44/3/1
f 11/11/1 17/12/1 44/3/1
f 17/12/1 25/13/1 44/3/1
f 41/14/6 35/15/6 43/16/6
f 28/17/1 32/8/1 44/3/1
f 8/18/6 37/19/6 43/16/6
f 37/19/6 41/14/6 43/16/6
f 25/13/1 28/17/1 44/3/1
f 60/20/7 64/21/8 14/22/9 10/23/10
f 10/23/10 14/22/9 23/24/11 16/25/12
f 16/25/12 23/24/11 34/26/13 26/27/14
f 35/15/6 22/28/6 43/16/6
f 4/9/1 1/1/1 44/3/1
f 46/29/15 2/30/16 19/31/17 48/32/18
f 38/5/3 31/4/2 29/33/19 40/34/20
f 49/35/6 20/36/6 43/16/6
f 13/37/6 63/38/6 43/16/6
f 2/30/16 5/7/5 7/6/4 19/31/17
f 40/34/20 29/33/19 26/27/14 34/26/13
f 20/36/6 8/18/6 43/16/6
f 22/28/6 13/37/6 43/16/6
f 45/2/21 1/1/22 3/39/23 47/40/24
f 47/40/24 3/39/23 2/41/16 46/42/15
f 1/1/22 4/9/25 6/43/26 3/39/23
f 3/39/23 6/43/26 5/44/5 2/41/16
f 60/45/7 10/46/10 12/47/27 62/48/28
f 62/48/28 12/47/27 11/11/29 61/10/30
f 14/49/9 64/50/8 65/51/31 15/52/32
f 15/52/32 65/51/31 63/38/33 13/37/34
f 10/46/10 16/53/12 18/54/35 12/47/27
f 12/47/27 18/54/35 17/12/36 11/11/29
f 20/36/37 49/35/38 50/55/39 21/56/40
f 21/56/40 50/55/39 48/57/18 19/58/17
f 8/18/41 20/36/37 21/56/40 9/59/42
f 9/59/42 21/56/40 19/58/17 7/60/4
f 23/61/11 14/49/9 15/52/32 24/62/43
f 24/62/43 15/52/32 13/37/34 22/28/44
f 41/14/45 37/19/46 39/63/47 42/64/48
f 42/64/48 39/63/47 38/65/3 40/66/20
f 32/8/49 28/17/50 30/67/51 33/68/52
f 33/68/52 30/67/51 29/69/19 31/70/2
f 35/15/53 41/14/45 42/64/48 36/71/54
f 36/71/54 42/64/48 40/66/20 34/72/13
f 28/17/50 25/13/55 27/73/56 30/67/51
f 30/67/51 27/73/56 26/74/14 29/69/19
f 25/13/55 17/12/36 18/54/35 27/73/56
f 27/73/56 18/54/35 16/53/12 26/74/14
f 22/28/44 35/15/53 36/71/54 24/62/43
f 24/62/43 36/71/54 34/72/13 23/61/11
f 4/9/25 32/8/49 33/68/52 6/43/26
f 6/43/26 33/68/52 31/70/2 5/44/5
f 37/19/46 8/18/41 9/59/42 39/63/47
f 39/63/47 9/59/42 7/60/4 38/65/3
f 51/75/1 44/3/1 45/2/1
f 87/76/57 55/77/58 57/78/59 94/79/60
f 88/80/1 44/3/1 54/81/1
f 61/10/1 44/3/1 67/82/1
f 67/82/1 44/3/1 73/83/1
f 73/83/1 44/3/1 81/84/1
f 97/85/6 43/16/6 91/86/6
f 84/87/1 44/3/1 88/80/1
f 58/88/6 43/16/6 93/89/6
f 93/89/6 43/16/6 97/85/6
f 81/84/1 44/3/1 84/87/1
f 60/20/7 66/90/61 70/91/62 64/21/8
f 66/90/61 72/92/63 79/93/64 70/91/62
f 72/92/63 82/94/65 90/95/66 79/93/64
f 91/86/6 43/16/6 78/96/6
f 54/81/1 44/3/1 51/75/1
f 46/29/15 48/32/18 75/97/67 52/98/68
f 94/79/60 96/99/69 85/100/70 87/76/57
f 49/35/6 43/16/6 76/101/6
f 69/102/6 43/16/6 63/38/6
f 52/98/68 75/97/67 57/78/59 55/77/58
f 96/103/69 90/95/66 82/94/65 85/104/70
f 76/101/6 43/16/6 58/88/6
f 78/96/6 43/16/6 69/102/6
f 45/2/21 47/40/24 53/105/71 51/75/72
f 47/40/24 46/42/15 52/106/68 53/105/71
f 51/75/72 53/105/71 56/107/73 54/81/74
f 53/105/71 52/106/68 55/108/58 56/107/73
f 60/45/7 62/48/28 68/109/75 66/110/61
f 62/48/28 61/10/30 67/82/76 68/109/75
f 70/111/62 71/112/77 65/51/31 64/50/8
f 71/112/77 69/102/78 63/38/33 65/51/31
f 66/110/61 68/109/75 74/113/79 72/114/63
f 68/109/75 67/82/76 73/83/80 74/113/79
f 76/101/81 77/115/82 50/55/39 49/35/38
f 77/115/82 75/116/67 48/57/18 50/55/39
f 58/88/83 59/117/84 77/115/82 76/101/81
f 59/117/84 57/118/59 75/116/67 77/115/82
f 79/119/64 80/120/85 71/112/77 70/111/62
f 80/120/85 78/96/86 69/102/78 71/112/77
f 97/85/87 98/121/88 95/122/89 93/89/90
f 98/121/88 96/123/69 94/124/60 95/122/89
f 88/80/91 89/125/92 86/126/93 84/87/94
f 89/125/92 87/127/57 85/128/70 86/126/93
f 91/86/95 92/129/96 98/121/88 97/85/87
f 92/129/96 90/130/66 96/123/69 98/121/88
f 84/87/94 86/126/93 83/131/97 81/84/98
f 86/126/93 85/128/70 82/132/65 83/131/97
f 81/84/98 83/131/97 74/113/79 73/83/80
f 83/131/97 82/132/65 72/114/63 74/113/79
f 78/96/86 80/120/85 92/129/96 91/86/95
f 80/120/85 79/119/64 90/130/66 92/129/96
f 54/81/74 56/107/73 89/125/92 88/80/91
f 56/107/73 55/108/58 87/127/57 89/125/92
f 93/89/90 95/122/89 59/117/84 58/88/83
f 95/122/89 94/124/60 57/118/59 59/117/84


@@ -0,0 +1,419 @@
# Blender v2.92.0 OBJ File: ''
# www.blender.org
mtllib pentagon.mtl
o pentagon_yellow_block_Cube.003
v -0.000000 0.000000 -0.001329
v -0.000000 0.038100 -0.001329
v -0.010789 0.000000 0.015873
v -0.011940 0.002032 0.017708
v -0.011603 0.000595 0.017170
v -0.014201 0.002032 0.016092
v -0.012833 0.000000 0.014413
v -0.013800 0.000595 0.015600
v -0.013310 0.002032 0.017235
v -0.012029 0.000000 0.015448
v -0.012935 0.000595 0.016711
v -0.012833 0.038100 0.014413
v -0.014201 0.036068 0.016092
v -0.013800 0.037505 0.015600
v -0.011940 0.036068 0.017708
v -0.010789 0.038100 0.015873
v -0.011603 0.037505 0.017170
v -0.012029 0.038100 0.015448
v -0.013310 0.036068 0.017235
v -0.012935 0.037505 0.016711
v -0.020074 0.000000 -0.006755
v -0.022213 0.002032 -0.007333
v -0.021586 0.000595 -0.007164
v -0.021345 0.002032 -0.009898
v -0.019288 0.000000 -0.009072
v -0.020743 0.000595 -0.009656
v -0.022188 0.002032 -0.008754
v -0.020054 0.000000 -0.008040
v -0.021563 0.000595 -0.008545
v -0.019288 0.038100 -0.009072
v -0.021345 0.036068 -0.009898
v -0.020743 0.037505 -0.009656
v -0.022213 0.036068 -0.007333
v -0.020074 0.038100 -0.006755
v -0.021586 0.037505 -0.007164
v -0.022188 0.036068 -0.008754
v -0.020054 0.038100 -0.008040
v -0.021563 0.037505 -0.008545
v 0.012833 0.000000 0.014413
v 0.014201 0.002032 0.016092
v 0.013800 0.000595 0.015600
v 0.011940 0.002032 0.017708
v 0.010789 0.000000 0.015873
v 0.011603 0.000595 0.017170
v 0.012029 0.000000 0.015448
v 0.013310 0.002032 0.017235
v 0.012935 0.000595 0.016711
v 0.010789 0.038100 0.015873
v 0.011940 0.036068 0.017708
v 0.011603 0.037505 0.017170
v 0.014201 0.036068 0.016092
v 0.012833 0.038100 0.014413
v 0.013800 0.037505 0.015600
v 0.013310 0.036068 0.017235
v 0.012029 0.038100 0.015448
v 0.012935 0.037505 0.016711
v 0.019288 0.000000 -0.009072
v 0.021345 0.002032 -0.009898
v 0.020743 0.000595 -0.009656
v 0.022213 0.002032 -0.007333
v 0.020074 0.000000 -0.006755
v 0.021586 0.000595 -0.007164
v 0.022188 0.002032 -0.008754
v 0.020054 0.000000 -0.008040
v 0.021563 0.000595 -0.008545
v 0.020074 0.038100 -0.006755
v 0.022213 0.036068 -0.007333
v 0.021586 0.037505 -0.007164
v 0.021345 0.036068 -0.009898
v 0.019288 0.038100 -0.009072
v 0.020743 0.037505 -0.009656
v 0.022188 0.036068 -0.008754
v 0.020054 0.038100 -0.008040
v 0.021563 0.037505 -0.008545
v 0.001283 0.038100 -0.021099
v 0.001420 0.036068 -0.023214
v 0.001380 0.037505 -0.022595
v -0.001420 0.036068 -0.023214
v -0.001283 0.038100 -0.021099
v -0.001380 0.037505 -0.022595
v 0.000000 0.036068 -0.023607
v -0.000000 0.038100 -0.021456
v -0.000000 0.037505 -0.022977
v -0.001283 0.000000 -0.021099
v -0.001420 0.002032 -0.023214
v -0.001380 0.000595 -0.022595
v 0.001420 0.002032 -0.023214
v 0.001283 0.000000 -0.021099
v 0.001380 0.000595 -0.022595
v -0.000000 0.000000 -0.021456
v 0.000000 0.002032 -0.023607
v -0.000000 0.000595 -0.022977
vt 0.718822 0.029303
vt 0.730852 0.033346
vt 0.721614 0.040891
vt 0.709480 0.040139
vt 0.717419 0.029729
vt 0.717181 0.051685
vt 0.709481 0.041565
vt 0.686374 0.033371
vt 0.706477 0.033371
vt 0.706477 0.034211
vt 0.686374 0.034211
vt 0.731711 0.034535
vt 0.731731 0.047644
vt 0.732155 0.013538
vt 0.732386 0.014251
vt 0.720023 0.014212
vt 0.724218 0.025374
vt 0.723581 0.025796
vt 0.731459 0.033813
vt 0.686374 0.065488
vt 0.706477 0.065488
vt 0.706477 0.066344
vt 0.686374 0.066344
vt 0.730865 0.048779
vt 0.718604 0.052165
vt 0.717826 0.052123
vt 0.710784 0.021757
vt 0.710178 0.021290
vt 0.706477 0.050528
vt 0.686374 0.050528
vt 0.686374 0.049672
vt 0.706477 0.049672
vt 0.709250 0.040852
vt 0.718055 0.029306
vt 0.686374 0.051384
vt 0.706477 0.051384
vt 0.686374 0.035050
vt 0.706477 0.035050
vt 0.686374 0.019217
vt 0.706477 0.019217
vt 0.731482 0.048351
vt 0.709906 0.007458
vt 0.710155 0.006751
vt 0.686374 0.003322
vt 0.706477 0.003322
vt 0.706477 0.017476
vt 0.686374 0.017476
vt 0.706477 0.081821
vt 0.686374 0.081821
vt 0.686374 0.067199
vt 0.706477 0.067199
vt 0.723811 0.002980
vt 0.724455 0.003418
vt 0.706477 0.082661
vt 0.686374 0.082661
vt 0.686374 0.002483
vt 0.706477 0.002483
vt 0.732157 0.014964
vt 0.723033 0.002938
vt 0.722814 0.025800
vt 0.686374 0.018347
vt 0.706477 0.018347
vt 0.709926 0.020568
vt 0.710771 0.006324
vt 0.733509 0.048629
vt 0.733072 0.049555
vt 0.732273 0.048949
vt 0.732572 0.048190
vt 0.732296 0.050225
vt 0.731620 0.049441
vt 0.734168 0.013254
vt 0.734379 0.014256
vt 0.733377 0.014254
vt 0.733156 0.013468
vt 0.734166 0.015257
vt 0.733156 0.015039
vt 0.707468 0.041849
vt 0.707257 0.040847
vt 0.708259 0.040849
vt 0.708480 0.041634
vt 0.707471 0.039846
vt 0.708480 0.040063
vt 0.708128 0.006474
vt 0.708564 0.005547
vt 0.709364 0.006153
vt 0.709064 0.006913
vt 0.709341 0.004877
vt 0.710016 0.005662
vt 0.731599 0.032714
vt 0.732251 0.033244
vt 0.732301 0.031933
vt 0.733073 0.032652
vt 0.723869 0.026728
vt 0.723033 0.026764
vt 0.724166 0.027687
vt 0.723129 0.027790
vt 0.732546 0.034029
vt 0.733509 0.033612
vt 0.724576 0.026281
vt 0.725089 0.027190
vt 0.710038 0.022388
vt 0.709386 0.021859
vt 0.709336 0.023170
vt 0.708563 0.022451
vt 0.717767 0.028375
vt 0.718604 0.028339
vt 0.717470 0.027416
vt 0.718508 0.027313
vt 0.709090 0.021074
vt 0.708128 0.021490
vt 0.717060 0.028822
vt 0.716547 0.027913
vt 0.723265 0.002005
vt 0.724118 0.002071
vt 0.723385 0.000970
vt 0.724440 0.001117
vt 0.717519 0.053032
vt 0.716801 0.052568
vt 0.717197 0.053986
vt 0.716269 0.053464
vt 0.724836 0.002535
vt 0.725367 0.001639
vt 0.718372 0.053098
vt 0.718251 0.054133
vn 0.0000 -1.0000 0.0000
vn -0.6839 0.1913 -0.7040
vn -0.6839 -0.1913 -0.7040
vn -0.9271 -0.2054 -0.3134
vn -0.9271 0.2054 -0.3134
vn 0.0000 1.0000 -0.0000
vn 0.1696 0.1897 0.9671
vn 0.1696 -0.1897 0.9671
vn 0.5696 -0.2019 0.7968
vn 0.5696 0.2019 0.7968
vn -0.5696 -0.2019 0.7968
vn -0.5696 0.2019 0.7968
vn -0.8602 0.1898 0.4734
vn -0.8602 -0.1898 0.4734
vn -0.1696 0.1897 0.9671
vn -0.1696 -0.1897 0.9671
vn -0.9708 0.1916 0.1446
vn -0.9708 -0.1916 0.1446
vn -0.4034 0.1878 -0.8955
vn -0.4034 -0.1878 -0.8955
vn 0.6839 0.1913 -0.7040
vn 0.6839 -0.1913 -0.7040
vn 0.4034 -0.1878 -0.8955
vn 0.4034 0.1878 -0.8955
vn 0.9708 -0.1916 0.1446
vn 0.9708 0.1916 0.1446
vn 0.8602 0.1898 0.4734
vn 0.8602 -0.1898 0.4734
vn 0.9271 -0.2054 -0.3134
vn 0.9271 0.2054 -0.3134
vn 0.0000 0.1985 -0.9801
vn 0.0000 -0.1985 -0.9801
vn -0.6793 -0.6969 -0.2300
vn -0.7180 -0.6862 0.1166
vn -0.2003 -0.9774 -0.0681
vn -0.2163 -0.9753 0.0454
vn -0.5001 -0.6857 -0.5289
vn -0.1444 -0.9752 -0.1679
vn -0.6793 0.6969 -0.2300
vn -0.5001 0.6857 -0.5289
vn -0.2003 0.9774 -0.0681
vn -0.1444 0.9752 -0.1679
vn -0.7180 0.6862 0.1166
vn -0.2163 0.9753 0.0454
vn 0.6793 -0.6969 -0.2300
vn 0.5001 -0.6857 -0.5289
vn 0.2003 -0.9774 -0.0681
vn 0.1444 -0.9752 -0.1679
vn 0.7180 -0.6862 0.1166
vn 0.2163 -0.9753 0.0454
vn 0.6793 0.6969 -0.2300
vn 0.7180 0.6862 0.1166
vn 0.2003 0.9774 -0.0681
vn 0.2163 0.9753 0.0454
vn 0.5001 0.6857 -0.5289
vn 0.1444 0.9752 -0.1679
vn -0.1241 -0.9769 0.1739
vn -0.0267 -0.9752 0.2199
vn -0.1178 -0.6841 0.7198
vn -0.4189 -0.6935 0.5862
vn -0.0267 0.9752 0.2199
vn -0.1242 0.9769 0.1739
vn -0.4189 0.6935 0.5862
vn -0.1178 0.6841 0.7198
vn -0.1992 -0.9752 0.0965
vn -0.6427 -0.6843 0.3444
vn -0.1992 0.9752 0.0965
vn -0.6427 0.6843 0.3444
vn 0.1242 0.9769 0.1739
vn 0.0267 0.9752 0.2199
vn 0.1178 0.6841 0.7198
vn 0.4189 0.6935 0.5862
vn 0.0267 -0.9752 0.2199
vn 0.1241 -0.9769 0.1739
vn 0.4189 -0.6935 0.5862
vn 0.1178 -0.6841 0.7198
vn 0.1992 0.9752 0.0965
vn 0.6427 0.6843 0.3444
vn 0.1992 -0.9752 0.0965
vn 0.6427 -0.6843 0.3444
vn -0.0000 0.9764 -0.2159
vn 0.1005 0.9750 -0.1981
vn 0.3067 0.6821 -0.6638
vn -0.0000 0.6900 -0.7238
vn 0.1005 -0.9750 -0.1981
vn 0.0000 -0.9764 -0.2159
vn 0.0000 -0.6900 -0.7238
vn 0.3067 -0.6821 -0.6638
vn -0.1005 0.9750 -0.1981
vn -0.3067 0.6821 -0.6638
vn -0.1005 -0.9750 -0.1981
vn -0.3067 -0.6821 -0.6638
usemtl toybox
s 1
f 43/1/1 3/2/1 1/3/1
f 61/4/1 39/5/1 1/3/1
f 88/6/1 57/7/1 1/3/1
f 31/8/2 24/9/3 27/10/4 36/11/5
f 7/12/1 21/13/1 1/3/1
f 30/14/6 37/15/6 2/16/6
f 12/17/6 18/18/6 2/16/6
f 10/19/1 7/12/1 1/3/1
f 49/20/7 42/21/8 46/22/9 54/23/10
f 25/24/1 84/25/1 1/3/1
f 90/26/1 88/6/1 1/3/1
f 3/2/1 10/19/1 1/3/1
f 48/27/6 55/28/6 2/16/6
f 9/29/11 19/30/12 13/31/13 6/32/14
f 64/33/1 61/4/1 1/3/1
f 39/5/1 45/34/1 1/3/1
f 42/21/8 49/20/7 15/35/15 4/36/16
f 6/32/14 13/31/13 33/37/17 22/38/18
f 24/9/3 31/8/2 78/39/19 85/40/20
f 21/13/1 28/41/1 1/3/1
f 66/42/6 73/43/6 2/16/6
f 69/44/21 58/45/22 87/46/23 76/47/24
f 45/34/1 43/1/1 1/3/1
f 60/48/25 67/49/26 51/50/27 40/51/28
f 82/52/6 79/53/6 2/16/6
f 84/25/1 90/26/1 1/3/1
f 67/49/26 60/48/25 63/54/29 72/55/30
f 28/41/1 25/24/1 1/3/1
f 72/56/30 63/57/29 58/45/22 69/44/21
f 37/15/6 34/58/6 2/16/6
f 79/53/6 30/14/6 2/16/6
f 75/59/6 82/52/6 2/16/6
f 18/18/6 16/60/6 2/16/6
f 54/23/10 46/22/9 40/51/28 51/50/27
f 81/61/31 91/62/32 85/40/20 78/39/19
f 16/60/6 48/27/6 2/16/6
f 52/63/6 66/42/6 2/16/6
f 36/11/5 27/10/4 22/38/18 33/37/17
f 73/43/6 70/64/6 2/16/6
f 57/7/1 64/33/1 1/3/1
f 70/64/6 75/59/6 2/16/6
f 34/58/6 12/17/6 2/16/6
f 76/47/24 87/46/23 91/62/32 81/61/31
f 55/28/6 52/63/6 2/16/6
f 22/65/18 27/66/4 29/67/33 23/68/34
f 23/68/34 29/67/33 28/41/35 21/13/36
f 27/66/4 24/69/3 26/70/37 29/67/33
f 29/67/33 26/70/37 25/24/38 28/41/35
f 31/71/2 36/72/5 38/73/39 32/74/40
f 32/74/40 38/73/39 37/15/41 30/14/42
f 36/72/5 33/75/17 35/76/43 38/73/39
f 38/73/39 35/76/43 34/58/44 37/15/41
f 58/77/22 63/78/29 65/79/45 59/80/46
f 59/80/46 65/79/45 64/33/47 57/7/48
f 63/78/29 60/81/25 62/82/49 65/79/45
f 65/79/45 62/82/49 61/4/50 64/33/47
f 67/83/26 72/84/30 74/85/51 68/86/52
f 68/86/52 74/85/51 73/43/53 66/42/54
f 72/84/30 69/87/21 71/88/55 74/85/51
f 74/85/51 71/88/55 70/64/56 73/43/53
f 10/19/57 3/2/58 5/89/59 11/90/60
f 11/90/60 5/89/59 4/91/16 9/92/11
f 16/60/61 18/18/62 20/93/63 17/94/64
f 17/94/64 20/93/63 19/95/12 15/96/15
f 7/12/65 10/19/57 11/90/60 8/97/66
f 8/97/66 11/90/60 9/92/11 6/98/14
f 18/18/62 12/17/67 14/99/68 20/93/63
f 20/93/63 14/99/68 13/100/13 19/95/12
f 55/28/69 48/27/70 50/101/71 56/102/72
f 56/102/72 50/101/71 49/103/7 54/104/10
f 43/1/73 45/34/74 47/105/75 44/106/76
f 44/106/76 47/105/75 46/107/9 42/108/8
f 52/63/77 55/28/69 56/102/72 53/109/78
f 53/109/78 56/102/72 54/104/10 51/110/27
f 45/34/74 39/5/79 41/111/80 47/105/75
f 47/105/75 41/111/80 40/112/28 46/107/9
f 82/52/81 75/59/82 77/113/83 83/114/84
f 83/114/84 77/113/83 76/115/24 81/116/31
f 88/6/85 90/26/86 92/117/87 89/118/88
f 89/118/88 92/117/87 91/119/32 87/120/23
f 79/53/89 82/52/81 83/114/84 80/121/90
f 80/121/90 83/114/84 81/116/31 78/122/19
f 90/26/86 84/25/91 86/123/92 92/117/87
f 92/117/87 86/123/92 85/124/20 91/119/32
f 6/98/14 22/65/18 23/68/34 8/97/66
f 8/97/66 23/68/34 21/13/36 7/12/65
f 60/81/25 40/112/28 41/111/80 62/82/49
f 62/82/49 41/111/80 39/5/79 61/4/50
f 42/108/8 4/91/16 5/89/59 44/106/76
f 44/106/76 5/89/59 3/2/58 43/1/73
f 12/17/67 34/58/44 35/76/43 14/99/68
f 14/99/68 35/76/43 33/75/17 13/100/13
f 48/27/70 16/60/61 17/94/64 50/101/71
f 50/101/71 17/94/64 15/96/15 49/103/7
f 51/110/27 67/83/26 68/86/52 53/109/78
f 53/109/78 68/86/52 66/42/54 52/63/77
f 84/25/91 25/24/38 26/70/37 86/123/92
f 86/123/92 26/70/37 24/69/3 85/124/20
f 30/14/42 79/53/89 80/121/90 32/74/40
f 32/74/40 80/121/90 78/122/19 31/71/2
f 87/120/23 58/77/22 59/80/46 89/118/88
f 89/118/88 59/80/46 57/7/48 88/6/85
f 69/87/21 76/115/24 77/113/83 71/88/55
f 71/88/55 77/113/83 75/59/82 70/64/56
f 4/36/16 15/35/15 19/30/12 9/29/11


@@ -0,0 +1,30 @@
<?xml version="1.0" ?>
<robot name="red_moon.urdf">
<link name="baseLink">
<contact>
<lateral_friction value="0.5"/>
<rolling_friction value="0.0001"/>
<inertia_scaling value="1.0"/>
</contact>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0"/>
<mass value=".01"/>
<inertia ixx="1" ixy="0" ixz="0" iyy="1" iyz="0" izz="1"/>
</inertial>
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="moon.obj" scale="1.0 1.0 1.0"/>
</geometry>
<material name="red">
<color rgba="1 0.4 0.4 1"/>
</material>
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="moon.obj" scale="1.0 1.0 1.0"/>
</geometry>
</collision>
</link>
</robot>


@@ -0,0 +1,689 @@
# Blender v2.92.0 OBJ File: ''
# www.blender.org
mtllib star.mtl
o star_green_block_star0_block
v -0.000030 0.000000 0.001549
v -0.000030 0.038100 0.001549
v 0.006429 0.000000 -0.009092
v 0.007380 0.002032 -0.010659
v 0.007101 0.000595 -0.010200
v 0.009909 0.002032 -0.008940
v 0.008636 0.000000 -0.007597
v 0.009536 0.000595 -0.008547
v 0.008636 0.038100 -0.007597
v 0.009909 0.036068 -0.008940
v 0.009536 0.037505 -0.008547
v 0.007380 0.036068 -0.010659
v 0.006429 0.038100 -0.009092
v 0.007101 0.037505 -0.010200
v -0.006436 0.038100 -0.009091
v -0.007380 0.036068 -0.010659
v -0.007103 0.037505 -0.010200
v -0.009909 0.036068 -0.008940
v -0.008642 0.038100 -0.007595
v -0.009538 0.037505 -0.008546
v -0.008642 0.000000 -0.007595
v -0.009909 0.002032 -0.008940
v -0.009538 0.000595 -0.008546
v -0.007380 0.002032 -0.010659
v -0.006436 0.000000 -0.009091
v -0.007103 0.000595 -0.010200
v -0.011672 0.000000 0.006854
v -0.013354 0.002032 0.007620
v -0.012861 0.000595 0.007396
v -0.014336 0.002032 0.004714
v -0.012526 0.000000 0.004313
v -0.013806 0.000595 0.004596
v -0.012526 0.038100 0.004313
v -0.014336 0.036068 0.004714
v -0.013806 0.037505 0.004596
v -0.013354 0.036068 0.007620
v -0.011672 0.038100 0.006854
v -0.012861 0.037505 0.007396
v 0.012521 0.000000 0.004314
v 0.014336 0.002032 0.004714
v 0.013805 0.000595 0.004597
v 0.013354 0.002032 0.007620
v 0.011668 0.000000 0.006855
v 0.012860 0.000595 0.007396
v 0.011668 0.038100 0.006855
v 0.013354 0.036068 0.007620
v 0.012860 0.037505 0.007396
v 0.014336 0.036068 0.004714
v 0.012521 0.038100 0.004314
v 0.013805 0.037505 0.004597
v -0.010002 0.000000 0.018088
v -0.011405 0.002032 0.020414
v -0.010994 0.000595 0.019733
v -0.013985 0.002032 0.018583
v -0.012256 0.000000 0.016473
v -0.013478 0.000595 0.017965
v -0.013300 0.002032 0.020351
v -0.011669 0.000000 0.018040
v -0.012822 0.000595 0.019674
v -0.012256 0.038100 0.016473
v -0.013985 0.036068 0.018583
v -0.013478 0.037505 0.017965
v -0.011405 0.036068 0.020414
v -0.010002 0.038100 0.018088
v -0.010994 0.037505 0.019733
v -0.011669 0.038100 0.018040
v -0.013300 0.036068 0.020351
v -0.012822 0.037505 0.019674
v -0.018964 0.000000 -0.003158
v -0.021672 0.002032 -0.003831
v -0.020879 0.000595 -0.003634
v -0.020696 0.002032 -0.006698
v -0.018097 0.000000 -0.005661
v -0.019935 0.000595 -0.006394
v -0.019426 0.000000 -0.004715
v -0.022190 0.002032 -0.005607
v -0.021380 0.000595 -0.005346
v -0.018097 0.038100 -0.005661
v -0.020696 0.036068 -0.006698
v -0.019935 0.037505 -0.006394
v -0.021672 0.036068 -0.003831
v -0.018964 0.038100 -0.003158
v -0.020879 0.037505 -0.003634
v -0.022190 0.036068 -0.005607
v -0.019426 0.038100 -0.004715
v -0.021380 0.037505 -0.005346
v 0.012251 0.000000 0.016476
v 0.013985 0.002032 0.018583
v 0.013477 0.000595 0.017966
v 0.011405 0.002032 0.020414
v 0.009996 0.000000 0.018089
v 0.010992 0.000595 0.019733
v 0.011663 0.000000 0.018043
v 0.013300 0.002032 0.020351
v 0.012821 0.000595 0.019675
v 0.009996 0.038100 0.018089
v 0.011405 0.036068 0.020414
v 0.010992 0.037505 0.019733
v 0.013985 0.036068 0.018583
v 0.012251 0.038100 0.016476
v 0.013477 0.037505 0.017966
v 0.013300 0.036068 0.020351
v 0.011663 0.038100 0.018043
v 0.012821 0.037505 0.019675
v 0.018094 0.000000 -0.005663
v 0.020696 0.002032 -0.006698
v 0.019934 0.000595 -0.006395
v 0.021672 0.002032 -0.003831
v 0.018961 0.000000 -0.003159
v 0.020878 0.000595 -0.003635
v 0.019423 0.000000 -0.004716
v 0.022190 0.002032 -0.005607
v 0.021380 0.000595 -0.005346
v 0.018961 0.038100 -0.003159
v 0.021672 0.036068 -0.003831
v 0.020878 0.037505 -0.003635
v 0.020696 0.036068 -0.006698
v 0.018094 0.038100 -0.005663
v 0.019934 0.037505 -0.006395
v 0.022190 0.036068 -0.005607
v 0.019423 0.038100 -0.004716
v 0.021380 0.037505 -0.005346
v 0.001426 0.038100 -0.016730
v 0.001638 0.036068 -0.019383
v 0.001576 0.037505 -0.018606
v -0.001638 0.036068 -0.019383
v -0.001434 0.038100 -0.016729
v -0.001578 0.037505 -0.018606
v 0.000000 0.036068 -0.020414
v -0.000004 0.038100 -0.017644
v -0.000001 0.037505 -0.019603
v -0.001434 0.000000 -0.016729
v -0.001638 0.002032 -0.019383
v -0.001578 0.000595 -0.018606
v 0.001638 0.002032 -0.019383
v 0.001426 0.000000 -0.016730
v 0.001576 0.000595 -0.018606
v -0.000004 0.000000 -0.017644
v 0.000000 0.002032 -0.020414
v -0.000001 0.000595 -0.019603
v 0.001733 0.000000 0.015081
v 0.001984 0.002032 0.017005
v 0.001910 0.000595 0.016441
v -0.001984 0.002032 0.017005
v -0.001741 0.000000 0.015080
v -0.001912 0.000595 0.016441
v -0.001741 0.038100 0.015080
v -0.001984 0.036068 0.017005
v -0.001912 0.037505 0.016441
v 0.001984 0.036068 0.017005
v 0.001733 0.038100 0.015081
v 0.001910 0.037505 0.016441
vt 0.884326 0.143054
vt 0.881637 0.147199
vt 0.874485 0.145636
vt 0.870501 0.139220
vt 0.873534 0.134807
vt 0.837942 0.112414
vt 0.837942 0.131533
vt 0.832314 0.131533
vt 0.832314 0.112414
vt 0.883572 0.123603
vt 0.883611 0.122735
vt 0.892897 0.129050
vt 0.881240 0.148682
vt 0.881539 0.153913
vt 0.898370 0.134009
vt 0.897030 0.139070
vt 0.873770 0.153188
vt 0.868788 0.154800
vt 0.903052 0.129620
vt 0.899208 0.132723
vt 0.862147 0.112414
vt 0.862147 0.131533
vt 0.855821 0.131533
vt 0.855821 0.112414
vt 0.830184 0.112414
vt 0.830184 0.131533
vt 0.824015 0.131533
vt 0.824015 0.112414
vt 0.874942 0.134807
vt 0.877932 0.139104
vt 0.880430 0.154782
vt 0.875317 0.153178
vt 0.866663 0.147297
vt 0.863946 0.143378
vt 0.884646 0.135960
vt 0.883837 0.135640
vt 0.874236 0.134337
vt 0.813888 0.131533
vt 0.813888 0.112414
vt 0.814927 0.112414
vt 0.814927 0.131533
vt 0.881273 0.154719
vt 0.863186 0.112414
vt 0.863186 0.131533
vt 0.867919 0.154747
vt 0.867585 0.153944
vt 0.845799 0.112414
vt 0.845799 0.131533
vt 0.840171 0.131533
vt 0.840171 0.112414
vt 0.903484 0.128858
vt 0.883832 0.141627
vt 0.884500 0.142196
vt 0.864368 0.141965
vt 0.869278 0.140166
vt 0.854097 0.112414
vt 0.854097 0.131533
vt 0.847929 0.131533
vt 0.847929 0.112414
vt 0.896527 0.139753
vt 0.867312 0.149178
vt 0.822292 0.112414
vt 0.822292 0.131533
vt 0.872132 0.131533
vt 0.872132 0.112414
vt 0.877999 0.112414
vt 0.877999 0.131533
vt 0.864225 0.131533
vt 0.864225 0.112414
vt 0.870414 0.112414
vt 0.870414 0.131533
vt 0.815966 0.112414
vt 0.815966 0.131533
vt 0.891348 0.136441
vt 0.889875 0.135971
vt 0.846864 0.131533
vt 0.846864 0.112414
vt 0.831249 0.131533
vt 0.831249 0.112414
vt 0.879159 0.140026
vt 0.896695 0.118860
vt 0.898202 0.123877
vt 0.895355 0.118422
vt 0.896169 0.118193
vt 0.863717 0.142542
vt 0.889642 0.122199
vt 0.891100 0.121679
vt 0.899083 0.125135
vt 0.805981 0.112414
vt 0.805981 0.131533
vt 0.800114 0.131533
vt 0.800114 0.112414
vt 0.807699 0.112414
vt 0.807699 0.131533
vt 0.884410 0.122387
vt 0.895707 0.139551
vt 0.879086 0.112414
vt 0.879086 0.131533
vt 0.883766 0.134776
vt 0.884979 0.130164
vt 0.903027 0.128111
vt 0.799027 0.131533
vt 0.799027 0.112414
vt 0.884945 0.128175
vt 0.882437 0.147351
vt 0.881976 0.149022
vt 0.883261 0.147612
vt 0.882837 0.149234
vt 0.898778 0.123306
vt 0.899783 0.124719
vt 0.899515 0.122813
vt 0.900468 0.124192
vt 0.899922 0.133113
vt 0.898965 0.134558
vt 0.900624 0.133616
vt 0.899718 0.135026
vt 0.878303 0.138385
vt 0.879696 0.139416
vt 0.878850 0.137688
vt 0.880184 0.138702
vt 0.889274 0.121462
vt 0.890920 0.120885
vt 0.889004 0.120636
vt 0.890592 0.120060
vt 0.875420 0.153985
vt 0.873676 0.154006
vt 0.875394 0.154873
vt 0.873704 0.154874
vt 0.891195 0.137238
vt 0.889532 0.136718
vt 0.890894 0.138073
vt 0.889289 0.137551
vt 0.868700 0.139582
vt 0.870085 0.138524
vt 0.868187 0.138882
vt 0.869517 0.137842
vt 0.862942 0.143128
vt 0.862743 0.142252
vt 0.862000 0.143160
vt 0.861880 0.141993
vt 0.883020 0.136246
vt 0.882908 0.135355
vt 0.882298 0.136785
vt 0.882012 0.135647
vt 0.863391 0.141636
vt 0.862626 0.141085
vt 0.883840 0.136603
vt 0.883310 0.137383
vt 0.897417 0.140057
vt 0.896867 0.140740
vt 0.898025 0.140801
vt 0.897174 0.141610
vt 0.874217 0.133293
vt 0.874968 0.133747
vt 0.874208 0.132371
vt 0.875284 0.132840
vt 0.896006 0.140567
vt 0.896002 0.141529
vt 0.873468 0.133750
vt 0.873134 0.132848
vt 0.895621 0.117396
vt 0.896476 0.117194
vt 0.895584 0.116434
vt 0.896753 0.116313
vt 0.881903 0.155553
vt 0.881031 0.155656
vt 0.882465 0.156286
vt 0.881325 0.156572
vt 0.897048 0.117859
vt 0.897631 0.117095
vt 0.882214 0.154732
vt 0.883023 0.155251
vt 0.882691 0.123051
vt 0.882773 0.122155
vt 0.881783 0.122789
vt 0.882031 0.121638
vt 0.867329 0.155578
vt 0.866946 0.154764
vt 0.866810 0.156317
vt 0.866182 0.155321
vt 0.883582 0.121769
vt 0.883025 0.121005
vt 0.868222 0.155664
vt 0.867960 0.156572
vt 0.904030 0.128049
vt 0.904478 0.128840
vt 0.904906 0.127731
vt 0.905368 0.128825
vt 0.885439 0.141872
vt 0.885289 0.142769
vt 0.886281 0.141582
vt 0.886221 0.142768
vt 0.904056 0.129646
vt 0.904943 0.129934
vt 0.884767 0.141260
vt 0.885502 0.140686
vt 0.865721 0.149838
vt 0.865007 0.147749
vt 0.865827 0.147457
vt 0.866565 0.149562
vt 0.883268 0.130306
vt 0.883225 0.128099
vt 0.884112 0.128070
vt 0.884138 0.130300
vn 0.0000 -1.0000 0.0000
vn 0.1760 -0.1821 0.9674
vn 0.1760 0.1821 0.9674
vn 0.1373 0.1995 0.9702
vn 0.1373 -0.1995 0.9702
vn -0.0000 1.0000 -0.0000
vn 0.8706 -0.1992 0.4499
vn 0.8706 0.1992 0.4499
vn 0.8522 0.1751 0.4930
vn 0.8522 -0.1751 0.4930
vn -0.9656 -0.1964 0.1706
vn -0.9656 0.1964 0.1706
vn -0.9767 0.1745 0.1252
vn -0.9767 -0.1745 0.1252
vn -0.4160 0.1947 -0.8883
vn -0.4160 -0.1947 -0.8883
vn -0.9185 -0.2428 -0.3119
vn -0.9185 0.2428 -0.3119
vn 0.9185 -0.2429 -0.3119
vn 0.9185 0.2429 -0.3119
vn -0.1372 -0.1993 0.9703
vn -0.1372 0.1993 0.9703
vn -0.1761 0.1819 0.9674
vn -0.1761 -0.1819 0.9674
vn 0.9766 -0.1749 0.1252
vn 0.9766 0.1749 0.1252
vn 0.9655 0.1968 0.1705
vn 0.9655 -0.1968 0.1705
vn -0.8523 -0.1748 0.4930
vn -0.8523 0.1748 0.4930
vn 0.7051 0.1756 -0.6870
vn 0.7051 -0.1756 -0.6870
vn 0.6736 -0.1964 -0.7125
vn 0.6736 0.1964 -0.7125
vn 0.4161 0.1947 -0.8883
vn 0.4161 -0.1947 -0.8883
vn 0.3770 -0.1709 -0.9103
vn 0.3770 0.1709 -0.9103
vn -0.8706 -0.1990 0.4499
vn -0.8706 0.1990 0.4499
vn 0.5621 0.2389 0.7918
vn 0.5621 -0.2389 0.7918
vn -0.5620 0.2388 0.7919
vn -0.5620 -0.2388 0.7919
vn -0.7052 -0.1752 -0.6870
vn -0.7052 0.1752 -0.6870
vn -0.6736 0.1961 -0.7126
vn -0.6736 -0.1961 -0.7126
vn -0.3770 -0.1709 -0.9103
vn -0.3770 0.1709 -0.9103
vn 0.0001 -0.2357 -0.9718
vn 0.0001 0.2357 -0.9718
vn 0.0822 -0.9807 -0.1776
vn 0.1339 -0.9813 -0.1382
vn 0.5148 -0.6952 -0.5016
vn 0.2776 -0.6869 -0.6716
vn 0.1339 0.9813 -0.1382
vn 0.0822 0.9807 -0.1776
vn 0.2776 0.6869 -0.6716
vn 0.5148 0.6952 -0.5016
vn -0.0822 0.9807 -0.1777
vn -0.1341 0.9812 -0.1385
vn -0.5153 0.6946 -0.5020
vn -0.2777 0.6868 -0.6717
vn -0.1341 -0.9812 -0.1385
vn -0.0822 -0.9807 -0.1777
vn -0.2777 -0.6868 -0.6717
vn -0.5153 -0.6946 -0.5020
vn 0.1701 0.9813 0.0905
vn 0.1906 0.9812 0.0307
vn 0.7146 0.6935 0.0914
vn 0.6235 0.6939 0.3603
vn 0.1906 -0.9812 0.0307
vn 0.1701 -0.9813 0.0905
vn 0.6235 -0.6939 0.3603
vn 0.7146 -0.6935 0.0914
vn -0.1911 0.9811 0.0308
vn -0.1703 0.9812 0.0907
vn -0.6239 0.6934 0.3605
vn -0.7153 0.6928 0.0914
vn -0.1703 -0.9812 0.0907
vn -0.1911 -0.9811 0.0308
vn -0.7153 -0.6928 0.0914
vn -0.6239 -0.6934 0.3605
vn -0.1140 -0.9807 0.1590
vn 0.0574 -0.9732 0.2225
vn 0.1230 -0.6920 0.7113
vn -0.3997 -0.7242 0.5619
vn 0.0574 0.9732 0.2225
vn -0.1140 0.9807 0.1590
vn -0.3997 0.7242 0.5619
vn 0.1230 0.6920 0.7113
vn -0.2413 -0.9704 0.0099
vn -0.7222 -0.6844 0.1004
vn -0.2413 0.9704 0.0099
vn -0.7222 0.6844 0.1004
vn -0.1821 0.9811 -0.0651
vn -0.0726 0.9686 -0.2377
vn -0.2844 0.6806 -0.6752
vn -0.6483 0.7279 -0.2233
vn -0.0726 -0.9686 -0.2377
vn -0.1821 -0.9811 -0.0651
vn -0.6483 -0.7279 -0.2233
vn -0.2844 -0.6806 -0.6752
vn -0.1966 0.9707 0.1379
vn -0.6321 0.6875 0.3576
vn -0.1966 -0.9707 0.1379
vn -0.6321 -0.6875 0.3576
vn 0.1820 0.9811 -0.0652
vn 0.1964 0.9708 0.1377
vn 0.6318 0.6878 0.3574
vn 0.6481 0.7280 -0.2234
vn 0.1964 -0.9708 0.1377
vn 0.1820 -0.9811 -0.0652
vn 0.6481 -0.7280 -0.2234
vn 0.6318 -0.6878 0.3574
vn 0.0726 0.9686 -0.2378
vn 0.2843 0.6806 -0.6752
vn 0.0726 -0.9686 -0.2378
vn 0.2843 -0.6806 -0.6752
vn 0.1137 0.9807 0.1592
vn -0.0576 0.9731 0.2229
vn -0.1231 0.6917 0.7116
vn 0.3994 0.7243 0.5620
vn -0.0576 -0.9731 0.2229
vn 0.1137 -0.9807 0.1592
vn 0.3994 -0.7243 0.5620
vn -0.1231 -0.6917 0.7116
vn 0.2406 0.9706 0.0100
vn 0.7217 0.6849 0.1003
vn 0.2407 -0.9706 0.0100
vn 0.7217 -0.6849 0.1003
vn -0.0002 0.9803 -0.1975
vn 0.1845 0.9711 -0.1512
vn 0.5181 0.6852 -0.5119
vn -0.0002 0.7214 -0.6926
vn 0.1845 -0.9711 -0.1512
vn -0.0002 -0.9803 -0.1975
vn -0.0002 -0.7214 -0.6926
vn 0.5181 -0.6852 -0.5119
vn -0.1850 0.9710 -0.1515
vn -0.5185 0.6847 -0.5122
vn -0.1850 -0.9710 -0.1515
vn -0.5185 -0.6847 -0.5122
vn 0.1263 -0.7034 0.6995
vn -0.1265 -0.7031 0.6997
vn 0.0277 -0.9822 0.1859
vn -0.0277 -0.9821 0.1861
vn -0.1265 0.7031 0.6997
vn 0.1263 0.7034 0.6995
vn -0.0277 0.9821 0.1861
vn 0.0277 0.9822 0.1859
usemtl toybox.002
s 1
f 136/1/1 3/2/1 1/3/1
f 31/4/1 69/5/1 1/3/1
f 144/6/2 148/7/3 63/8/4 52/9/5
f 96/10/6 103/11/6 2/12/6
f 7/13/1 105/14/1 1/3/1
f 19/15/6 78/16/6 2/12/6
f 43/17/1 87/18/1 1/3/1
f 127/19/6 15/20/6 2/12/6
f 15/20/6 19/15/6 2/12/6
f 108/21/7 115/22/8 48/23/9 40/24/10
f 54/25/11 61/26/12 36/27/13 28/28/14
f 73/29/1 21/30/1 1/3/1
f 109/31/1 39/32/1 1/3/1
f 1/3/1 145/33/1 51/34/1
f 60/35/6 66/36/6 2/12/6
f 75/37/1 73/29/1 1/3/1
f 79/38/15 72/39/16 76/40/17 84/41/18
f 111/42/1 109/31/1 1/3/1
f 115/22/8 108/21/7 112/43/19 120/44/20
f 93/45/1 91/46/1 1/3/1
f 90/47/21 97/48/22 150/49/23 142/50/24
f 130/51/6 127/19/6 2/12/6
f 132/52/1 138/53/1 1/3/1
f 55/54/1 27/55/1 1/3/1
f 42/56/25 46/57/26 99/58/27 88/59/28
f 78/16/6 85/60/6 2/12/6
f 91/46/1 141/61/1 1/3/1
f 69/5/1 75/37/1 1/3/1
f 30/62/29 28/28/14 36/27/13 34/63/30
f 12/64/31 4/65/32 135/66/33 124/67/34
f 117/68/35 106/69/36 6/70/37 10/71/38
f 84/41/18 76/40/17 70/72/39 81/73/40
f 33/74/6 37/75/6 2/12/6
f 42/56/25 40/24/10 48/23/9 46/57/26
f 102/76/41 94/77/42 88/59/28 99/58/27
f 31/4/1 1/3/1 27/55/1
f 52/9/5 63/8/4 67/78/43 57/79/44
f 25/80/1 132/52/1 1/3/1
f 118/81/6 9/82/6 2/12/6
f 114/83/6 121/84/6 2/12/6
f 58/85/1 55/54/1 1/3/1
f 148/7/3 144/6/2 142/50/24 150/49/23
f 57/79/44 67/78/43 61/26/12 54/25/11
f 45/86/6 49/87/6 2/12/6
f 120/44/20 112/43/19 106/69/36 117/68/35
f 138/53/1 136/1/1 1/3/1
f 9/82/6 13/88/6 2/12/6
f 30/62/29 34/63/30 81/73/40 70/72/39
f 39/32/1 43/17/1 1/3/1
f 51/34/1 58/85/1 1/3/1
f 24/89/45 16/90/46 126/91/47 133/92/48
f 24/89/45 22/93/49 18/94/50 16/90/46
f 87/18/1 93/45/1 1/3/1
f 103/11/6 100/95/6 2/12/6
f 105/14/1 111/42/1 1/3/1
f 82/96/6 33/74/6 2/12/6
f 37/75/6 60/35/6 2/12/6
f 72/39/16 79/38/15 18/94/50 22/93/49
f 121/84/6 118/81/6 2/12/6
f 100/95/6 45/86/6 2/12/6
f 124/67/34 135/66/33 139/97/51 129/98/52
f 25/80/1 1/3/1 21/30/1
f 64/99/6 147/100/6 2/12/6
f 123/101/6 130/51/6 2/12/6
f 66/36/6 64/99/6 2/12/6
f 129/102/52 139/103/51 133/92/48 126/91/47
f 49/87/6 114/83/6 2/12/6
f 13/88/6 123/101/6 2/12/6
f 85/60/6 82/96/6 2/12/6
f 2/12/6 151/104/6 96/10/6
f 3/2/1 7/13/1 1/3/1
f 147/100/6 151/104/6 2/12/6
f 141/61/1 145/33/1 1/3/1
f 97/48/22 90/47/21 94/77/42 102/76/41
f 7/13/53 3/2/54 5/105/55 8/106/56
f 8/106/56 5/105/55 4/107/32 6/108/37
f 13/88/57 9/82/58 11/109/59 14/110/60
f 14/110/60 11/109/59 10/111/38 12/112/31
f 19/15/61 15/20/62 17/113/63 20/114/64
f 20/114/64 17/113/63 16/115/46 18/116/50
f 25/80/65 21/30/66 23/117/67 26/118/68
f 26/118/68 23/117/67 22/119/49 24/120/45
f 49/87/69 45/86/70 47/121/71 50/122/72
f 50/122/72 47/121/71 46/123/26 48/124/9
f 43/17/73 39/32/74 41/125/75 44/126/76
f 44/126/76 41/125/75 40/127/10 42/128/25
f 37/75/77 33/74/78 35/129/79 38/130/80
f 38/130/80 35/129/79 34/131/30 36/132/13
f 31/4/81 27/55/82 29/133/83 32/134/84
f 32/134/84 29/133/83 28/135/14 30/136/29
f 58/85/85 51/34/86 53/137/87 59/138/88
f 59/138/88 53/137/87 52/139/5 57/140/44
f 64/99/89 66/36/90 68/141/91 65/142/92
f 65/142/92 68/141/91 67/143/43 63/144/4
f 55/54/93 58/85/85 59/138/88 56/145/94
f 56/145/94 59/138/88 57/140/44 54/146/11
f 66/36/90 60/35/95 62/147/96 68/141/91
f 68/141/91 62/147/96 61/148/12 67/143/43
f 85/60/97 78/16/98 80/149/99 86/150/100
f 86/150/100 80/149/99 79/151/15 84/152/18
f 73/29/101 75/37/102 77/153/103 74/154/104
f 74/154/104 77/153/103 76/155/17 72/156/16
f 82/96/105 85/60/97 86/150/100 83/157/106
f 83/157/106 86/150/100 84/152/18 81/158/40
f 75/37/102 69/5/107 71/159/108 77/153/103
f 77/153/103 71/159/108 70/160/39 76/155/17
f 121/84/109 114/83/110 116/161/111 122/162/112
f 122/162/112 116/161/111 115/163/8 120/164/20
f 109/31/113 111/42/114 113/165/115 110/166/116
f 110/166/116 113/165/115 112/167/19 108/168/7
f 118/81/117 121/84/109 122/162/112 119/169/118
f 119/169/118 122/162/112 120/164/20 117/170/35
f 111/42/114 105/14/119 107/171/120 113/165/115
f 113/165/115 107/171/120 106/172/36 112/167/19
f 103/11/121 96/10/122 98/173/123 104/174/124
f 104/174/124 98/173/123 97/175/22 102/176/41
f 91/46/125 93/45/126 95/177/127 92/178/128
f 92/178/128 95/177/127 94/179/42 90/180/21
f 100/95/129 103/11/121 104/174/124 101/181/130
f 101/181/130 104/174/124 102/176/41 99/182/27
f 93/45/126 87/18/131 89/183/132 95/177/127
f 95/177/127 89/183/132 88/184/28 94/179/42
f 130/51/133 123/101/134 125/185/135 131/186/136
f 131/186/136 125/185/135 124/187/34 129/188/52
f 136/1/137 138/53/138 140/189/139 137/190/140
f 137/190/140 140/189/139 139/191/51 135/192/33
f 127/19/141 130/51/133 131/186/136 128/193/142
f 128/193/142 131/186/136 129/188/52 126/194/47
f 138/53/138 132/52/143 134/195/144 140/189/139
f 140/189/139 134/195/144 133/196/48 139/191/51
f 27/55/82 55/54/93 56/145/94 29/133/83
f 29/133/83 56/145/94 54/146/11 28/135/14
f 60/35/95 37/75/77 38/130/80 62/147/96
f 62/147/96 38/130/80 36/132/13 61/148/12
f 69/5/107 31/4/81 32/134/84 71/159/108
f 71/159/108 32/134/84 30/136/29 70/160/39
f 33/74/78 82/96/105 83/157/106 35/129/79
f 35/129/79 83/157/106 81/158/40 34/131/30
f 21/30/66 73/29/101 74/154/104 23/117/67
f 23/117/67 74/154/104 72/156/16 22/119/49
f 78/16/98 19/15/61 20/114/64 80/149/99
f 80/149/99 20/114/64 18/116/50 79/151/15
f 87/18/131 43/17/73 44/126/76 89/183/132
f 89/183/132 44/126/76 42/128/25 88/184/28
f 45/86/70 100/95/129 101/181/130 47/121/71
f 47/121/71 101/181/130 99/182/27 46/123/26
f 9/82/58 118/81/117 119/169/118 11/109/59
f 11/109/59 119/169/118 117/170/35 10/111/38
f 105/14/119 7/13/53 8/106/56 107/171/120
f 107/171/120 8/106/56 6/108/37 106/172/36
f 39/32/74 109/31/113 110/166/116 41/125/75
f 41/125/75 110/166/116 108/168/7 40/127/10
f 114/83/110 49/87/69 50/122/72 116/161/111
f 116/161/111 50/122/72 48/124/9 115/163/8
f 123/101/134 13/88/57 14/110/60 125/185/135
f 125/185/135 14/110/60 12/112/31 124/187/34
f 3/2/54 136/1/137 137/190/140 5/105/55
f 5/105/55 137/190/140 135/192/33 4/107/32
f 132/52/143 25/80/65 26/118/68 134/195/144
f 134/195/144 26/118/68 24/120/45 133/196/48
f 15/20/62 127/19/141 128/193/142 17/113/63
f 17/113/63 128/193/142 126/194/47 16/115/46
f 142/197/24 144/198/2 146/199/145 143/200/146
f 143/200/146 146/199/145 145/33/147 141/61/148
f 148/201/3 150/202/23 152/203/149 149/204/150
f 149/204/150 152/203/149 151/104/151 147/100/152
f 141/61/148 91/46/125 92/178/128 143/200/146
f 143/200/146 92/178/128 90/180/21 142/197/24
f 96/10/122 151/104/151 152/203/149 98/173/123
f 98/173/123 152/203/149 150/202/23 97/175/22
f 51/34/86 145/33/147 146/199/145 53/137/87
f 53/137/87 146/199/145 144/198/2 52/139/5
f 147/100/152 64/99/89 65/142/92 149/204/150
f 149/204/150 65/142/92 63/144/4 148/201/3
f 12/64/31 10/71/38 6/70/37 4/65/32


@@ -0,0 +1,30 @@
<?xml version="1.0" ?>
<robot name="yellow_pentagon.urdf">
<link name="baseLink">
<contact>
<lateral_friction value="0.5"/>
<rolling_friction value="0.0001"/>
<inertia_scaling value="1.0"/>
</contact>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0"/>
<mass value=".01"/>
<inertia ixx="1" ixy="0" ixz="0" iyy="1" iyz="0" izz="1"/>
</inertial>
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="pentagon.obj" scale="1.0 1.0 1.0"/>
</geometry>
<material name="yellow">
<color rgba="1 1 0.4 1"/>
</material>
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="pentagon.obj" scale="1.0 1.0 1.0"/>
</geometry>
</collision>
</link>
</robot>


@@ -0,0 +1,66 @@
<?xml version="1.0" ?>
<robot name="ell.urdf">
<link name="baseLink">
<contact>
<lateral_friction value="0.3"/>
<rolling_friction value="0.0001"/>
<inertia_scaling value="3.0"/>
</contact>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0"/>
<mass value=".1"/>
<inertia ixx="1" ixy="0" ixz="0" iyy="1" iyz="0" izz="1"/>
</inertial>
<visual>
<origin rpy="0 0 0" xyz="0.025 0 0"/>
<geometry>
<box size=".08 .03 .04"/>
</geometry>
<material name="dark">
<color rgba="0. 0.3412 0.3490 1"/>
</material>
</visual>
<collision>
<origin rpy="0 0 0" xyz="0.025 0 0"/>
<geometry>
<box size=".08 .03 .04"/>
</geometry>
</collision>
<visual>
<origin rpy="0 0 0" xyz="0 0.025 0"/>
<geometry>
<box size=".03 .08 .0399"/>
</geometry>
<material name="dark">
<color rgba="0. 0.3412 0.3490 1"/>
</material>
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0.025 0"/>
<geometry>
<box size=".03 .08 .0399"/>
</geometry>
</collision>
<visual>
<origin rpy="0 0 0" xyz="0.075 0.025 0"/>
<geometry>
<box size=".03 .08 .0399"/>
</geometry>
<material name="dark">
<color rgba="0. 0.3412 0.3490 1"/>
</material>
</visual>
<collision>
<origin rpy="0 0 0" xyz="0.075 0.025 0"/>
<geometry>
<box size=".03 .08 .0399"/>
</geometry>
</collision>
</link>
</robot>


@@ -0,0 +1,18 @@
# Blender v2.66 (sub 1) OBJ File: ''
# www.blender.org
mtllib plane.mtl
o Plane
v 15.000000 -15.000000 0.000000
v 15.000000 15.000000 0.000000
v -15.000000 15.000000 0.000000
v -15.000000 -15.000000 0.000000
vt 15.000000 0.000000
vt 15.000000 15.000000
vt 0.000000 15.000000
vt 0.000000 0.000000
usemtl Material
s off
f 1/1 2/2 3/3
f 1/1 3/3 4/4
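A minimal OBJ like the plane above can be read with only the Python standard library. The sketch below is illustrative (not code from this commit): it parses `v` and `f` records, converts OBJ's 1-based face indices to 0-based, and ignores the `vt`/`usemtl`/`s` lines.

```python
# Illustrative OBJ reader for simple meshes like plane.obj (hypothetical helper).
PLANE_OBJ = """\
mtllib plane.mtl
o Plane
v 15.000000 -15.000000 0.000000
v 15.000000 15.000000 0.000000
v -15.000000 15.000000 0.000000
v -15.000000 -15.000000 0.000000
usemtl Material
s off
f 1/1 2/2 3/3
f 1/1 3/3 4/4
"""

def parse_obj(text):
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            # geometric vertex: x y z
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":
            # face entries are v, v/vt, or v/vt/vn; OBJ indices are 1-based
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

verts, faces = parse_obj(PLANE_OBJ)
print(len(verts), faces)  # 4 [(0, 1, 2), (0, 2, 3)]
```

The two triangles share the diagonal 1–3, which is the standard two-triangle decomposition of a quad.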


@@ -0,0 +1,396 @@
# Object Export From Tinkercad Server 2015
mtllib obj.mtl
o obj_0
v 7.413 37.27 25
v 7.413 37.27 0
v 33.513 17.913 25
v 35.107 14.542 25
v 35.107 14.542 0
v 33.513 17.913 0
v -31.596 21.112 25
v -31.596 21.112 0
v -33.513 17.913 0
v -33.513 17.913 25
v 3.725 37.817 0
v 3.725 37.817 25
v -29.374 24.107 25
v -29.374 24.107 0
v 0 38 0
v 11.031 36.364 25
v 11.031 36.364 0
v 14.542 35.107 25
v 14.542 35.107 0
v -37.27 -7.413 25
v 17.913 33.513 25
v -37.27 -7.413 0
v -11.031 -36.364 25
v 17.913 33.513 0
v -36.364 -11.031 0
v -14.542 -35.107 25
v -26.87 26.87 25
v -36.364 -11.031 25
v -14.542 -35.107 0
v -26.87 26.87 0
v -3.725 37.817 0
v -11.031 -36.364 0
v 24.107 -29.374 25
v -17.913 -33.513 25
v -17.913 -33.513 0
v 21.112 -31.596 25
v 21.112 -31.596 0
v 24.107 -29.374 0
v -3.725 37.817 25
v 0 38 25
v 17.913 -33.513 25
v 17.913 -33.513 0
v 37.817 -3.725 25
v -37.817 -3.725 25
v -37.817 -3.725 0
v 37.817 -3.725 0
v 38 0 0
v -21.112 -31.596 25
v 38 0 25
v 14.542 -35.107 25
v -21.112 -31.596 0
v 36.364 11.031 25
v 14.542 -35.107 0
v 36.364 11.031 0
v 37.27 -7.413 25
v 37.27 -7.413 0
v -24.107 29.374 25
v 21.112 31.596 25
v -24.107 29.374 0
v 36.364 -11.031 25
v 21.112 31.596 0
v 36.364 -11.031 0
v -38 0 25
v 11.031 -36.364 25
v 11.031 -36.364 0
v -21.112 31.596 25
v -21.112 31.596 0
v 7.413 -37.27 25
v 7.413 -37.27 0
v -17.913 33.513 25
v -17.913 33.513 0
v 24.107 29.374 0
v -38 0 0
v 26.87 26.87 0
v 35.107 -14.542 25
v 29.374 24.107 0
v 35.107 -14.542 0
v 31.596 21.112 0
v 3.725 -37.817 25
v 37.27 7.413 25
v 3.725 -37.817 0
v 24.107 29.374 25
v -24.107 -29.374 25
v -14.542 35.107 25
v -24.107 -29.374 0
v -14.542 35.107 0
v 37.27 7.413 0
v 33.513 -17.913 25
v 37.817 3.725 0
v 33.513 -17.913 0
v -11.031 36.364 25
v -11.031 36.364 0
v -7.413 37.27 25
v 26.87 26.87 25
v -7.413 37.27 0
v -26.87 -26.87 25
v -26.87 -26.87 0
v 37.817 3.725 25
v 0 -38 25
v 31.596 -21.112 25
v 0 -38 0
v 31.596 -21.112 0
v 29.374 24.107 25
v -29.374 -24.107 25
v -29.374 -24.107 0
v 29.374 -24.107 25
v 31.596 21.112 25
v 29.374 -24.107 0
v -3.725 -37.817 25
v -31.596 -21.112 25
v -3.725 -37.817 0
v -31.596 -21.112 0
v -37.817 3.725 25
v -37.817 3.725 0
v 26.87 -26.87 25
v 26.87 -26.87 0
v -37.27 7.413 0
v -37.27 7.413 25
v -7.413 -37.27 25
v -33.513 -17.913 25
v -33.513 -17.913 0
v -7.413 -37.27 0
v -36.364 11.031 25
v -36.364 11.031 0
v -35.107 -14.542 25
v -35.107 -14.542 0
v -35.107 14.542 25
v -35.107 14.542 0
# 128 vertices
g group_0_2829873
usemtl color_2829873
s 0
f 3 4 5
f 3 5 6
f 7 8 9
f 7 9 10
f 1 2 11
f 1 11 12
f 16 17 2
f 16 2 1
f 13 14 8
f 13 8 7
f 18 19 17
f 18 17 16
f 23 26 29
f 23 29 32
f 26 34 35
f 26 35 29
f 33 36 37
f 33 37 38
f 15 31 39
f 15 39 40
f 20 22 25
f 20 25 28
f 36 41 42
f 36 42 37
f 44 45 22
f 44 22 20
f 21 24 19
f 21 19 18
f 43 46 47
f 43 47 49
f 27 30 14
f 27 14 13
f 41 50 53
f 41 53 42
f 43 55 56
f 43 56 46
f 34 48 51
f 34 51 35
f 4 52 54
f 4 54 5
f 50 64 65
f 50 65 53
f 57 59 30
f 57 30 27
f 55 60 62
f 55 62 56
f 66 67 59
f 66 59 57
f 64 68 69
f 64 69 65
f 58 61 24
f 58 24 21
f 63 73 45
f 63 45 44
f 70 71 67
f 70 67 66
f 60 75 77
f 60 77 62
f 82 72 61
f 82 61 58
f 68 79 81
f 68 81 69
f 48 83 85
f 48 85 51
f 84 86 71
f 84 71 70
f 75 88 90
f 75 90 77
f 91 92 86
f 91 86 84
f 15 11 2
f 15 2 17
f 15 17 19
f 15 19 24
f 15 24 61
f 15 61 72
f 15 72 74
f 15 74 76
f 15 76 78
f 15 78 6
f 15 6 5
f 15 5 54
f 15 54 87
f 15 87 89
f 15 89 47
f 52 80 87
f 52 87 54
f 94 74 72
f 94 72 82
f 93 95 92
f 93 92 91
f 83 96 97
f 83 97 85
f 80 98 89
f 80 89 87
f 79 99 101
f 79 101 81
f 88 100 102
f 88 102 90
f 94 103 76
f 94 76 74
f 39 31 95
f 39 95 93
f 104 105 97
f 104 97 96
f 98 49 47
f 98 47 89
f 100 106 108
f 100 108 102
f 103 107 78
f 103 78 76
f 101 99 109
f 101 109 111
f 110 112 105
f 110 105 104
f 113 114 73
f 113 73 63
f 106 115 116
f 106 116 108
f 107 3 6
f 107 6 78
f 118 117 114
f 118 114 113
f 120 121 112
f 120 112 110
f 109 119 122
f 109 122 111
f 123 124 117
f 123 117 118
f 125 126 121
f 125 121 120
f 127 128 124
f 127 124 123
f 115 33 38
f 115 38 116
f 10 9 128
f 10 128 127
f 28 25 126
f 28 126 125
f 119 23 32
f 119 32 122
f 73 114 117
f 73 117 124
f 73 124 128
f 73 128 9
f 73 9 8
f 73 8 14
f 73 14 30
f 73 30 59
f 73 59 67
f 73 67 71
f 73 71 86
f 73 86 92
f 73 92 95
f 73 95 31
f 73 31 15
f 81 101 15
f 47 46 15
f 46 56 15
f 56 62 15
f 62 77 15
f 77 90 15
f 90 102 15
f 102 108 15
f 108 116 15
f 116 38 15
f 38 37 15
f 37 42 15
f 42 53 15
f 53 65 15
f 69 15 65
f 81 15 69
f 45 73 15
f 101 111 15
f 111 122 15
f 122 32 15
f 32 29 15
f 29 35 15
f 35 51 15
f 51 85 15
f 85 97 15
f 97 105 15
f 105 112 15
f 112 121 15
f 121 126 15
f 126 25 15
f 22 15 25
f 45 15 22
f 49 98 80
f 49 80 52
f 49 52 4
f 49 4 3
f 49 3 107
f 49 107 103
f 49 103 94
f 49 94 82
f 49 82 58
f 49 58 21
f 49 21 18
f 49 18 16
f 49 16 1
f 49 1 12
f 49 12 40
f 40 39 93
f 40 93 91
f 40 91 84
f 40 84 70
f 40 70 66
f 40 66 57
f 40 57 27
f 40 27 13
f 40 13 7
f 40 7 10
f 40 10 127
f 40 127 123
f 40 123 118
f 40 118 113
f 63 40 113
f 68 40 79
f 64 40 68
f 50 40 64
f 41 40 50
f 36 40 41
f 33 40 36
f 115 40 33
f 106 40 115
f 100 40 106
f 88 40 100
f 75 40 88
f 60 40 75
f 55 40 60
f 43 40 55
f 49 40 43
f 99 79 40
f 20 40 44
f 28 40 20
f 125 40 28
f 120 40 125
f 110 40 120
f 104 40 110
f 96 40 104
f 83 40 96
f 48 40 83
f 34 40 48
f 26 40 34
f 23 40 26
f 119 40 23
f 109 40 119
f 99 40 109
f 63 44 40
f 12 11 15
f 12 15 40
# 252 faces
#end of obj_0


@@ -0,0 +1,98 @@
<?xml version="1.0" ?>
<robot name="cylinder.urdf">
<link name="headLink">
<contact>
<lateral_friction value="1.0"/>
<rolling_friction value="0.0001"/>
<inertia_scaling value="3.0"/>
</contact>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0"/>
<mass value=".1"/>
<inertia ixx="1" ixy="0" ixz="0" iyy="1" iyz="0" izz="1"/>
</inertial>
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="head.obj" scale="0.001 0.001 0.001"/>
</geometry>
<material name="darkgrey">
<color rgba="0.2 0.2 0.2 1"/>
</material>
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="head.obj" scale="0.001 0.001 0.001"/>
</geometry>
</collision>
</link>
<joint name="tipJoint" type="fixed">
<parent link="headLink"/>
<child link="tipLink"/>
<origin rpy="0.0 0.0 0.0" xyz="0.0 0.0 0.029"/>
<axis xyz="0 0 1"/>
<limit effort="150.0" lower="-6.28318530718" upper="6.28318530718" velocity="3.15"/>
<dynamics damping="10.0" friction="0.0"/>
</joint>
<link name="tipLink">
<contact>
<lateral_friction value="1.0"/>
<rolling_friction value="0.0001"/>
<inertia_scaling value="3.0"/>
</contact>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0"/>
<mass value=".1"/>
<inertia ixx="1" ixy="0" ixz="0" iyy="1" iyz="0" izz="1"/>
</inertial>
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<cylinder length="0.05" radius="0.005"/>
</geometry>
<material name="blue">
<color rgba="0.18039216 0.50588235 0.77254902 1"/>
</material>
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<cylinder length="0.05" radius="0.005"/>
</geometry>
</collision>
</link>
<!-- <link name="asdfLink">
<contact>
<lateral_friction value="1.0"/>
<rolling_friction value="0.0001"/>
<inertia_scaling value="3.0"/>
</contact>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0"/>
<mass value=".1"/>
<inertia ixx="1" ixy="0" ixz="0" iyy="1" iyz="0" izz="1"/>
</inertial>
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<cylinder length="0.028" radius="0.001"/>
</geometry>
</visual>
</link>
<joint name="asdfJoint" type="fixed">
<parent link="tipLink"/>
<child link="asdfLink"/>
<origin rpy="0.0 0.0 0.0" xyz="0.0 0.0 0.0"/>
<axis xyz="0 0 1"/>
<limit effort="150.0" lower="-6.28318530718" upper="6.28318530718" velocity="3.15"/>
<dynamics damping="10.0" friction="0.0"/>
</joint> -->
</robot>


@@ -0,0 +1,98 @@
<?xml version="1.0" ?>
<robot name="cylinder_real.urdf">
<link name="headLink">
<contact>
<lateral_friction value="1.0"/>
<rolling_friction value="0.0001"/>
<inertia_scaling value="3.0"/>
</contact>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0"/>
<mass value=".1"/>
<inertia ixx="1" ixy="0" ixz="0" iyy="1" iyz="0" izz="1"/>
</inertial>
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="head.obj" scale="0.001 0.001 0.001"/>
</geometry>
<material name="darkgrey">
<color rgba="0.2 0.2 0.2 1"/>
</material>
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="head.obj" scale="0.001 0.001 0.001"/>
</geometry>
</collision>
</link>
<joint name="tipJoint" type="fixed">
<parent link="headLink"/>
<child link="tipLink"/>
<origin rpy="0.0 0.0 0.0" xyz="0.0 0.0 0.029"/>
<axis xyz="0 0 1"/>
<limit effort="150.0" lower="-6.28318530718" upper="6.28318530718" velocity="3.15"/>
<dynamics damping="10.0" friction="0.0"/>
</joint>
<link name="tipLink">
<contact>
<lateral_friction value="1.0"/>
<rolling_friction value="0.0001"/>
<inertia_scaling value="3.0"/>
</contact>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0"/>
<mass value=".1"/>
<inertia ixx="1" ixy="0" ixz="0" iyy="1" iyz="0" izz="1"/>
</inertial>
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<cylinder length="0.135" radius="0.0127"/>
</geometry>
<material name="grey">
<color rgba="0.5 0.5 0.5 1"/>
</material>
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<cylinder length="0.135" radius="0.0127"/>
</geometry>
</collision>
</link>
<!-- <link name="asdfLink">
<contact>
<lateral_friction value="1.0"/>
<rolling_friction value="0.0001"/>
<inertia_scaling value="3.0"/>
</contact>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0"/>
<mass value=".1"/>
<inertia ixx="1" ixy="0" ixz="0" iyy="1" iyz="0" izz="1"/>
</inertial>
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<cylinder length="0.028" radius="0.001"/>
</geometry>
</visual>
</link>
<joint name="asdfJoint" type="fixed">
<parent link="tipLink"/>
<child link="asdfLink"/>
<origin rpy="0.0 0.0 0.0" xyz="0.0 0.0 0.0"/>
<axis xyz="0 0 1"/>
<limit effort="150.0" lower="-6.28318530718" upper="6.28318530718" velocity="3.15"/>
<dynamics damping="10.0" friction="0.0"/>
</joint> -->
</robot>


@@ -0,0 +1,396 @@
# Object Export From Tinkercad Server 2015
mtllib obj.mtl
o obj_0
v 0.001 5.001 0
v 0.492 4.977 0
v 0.977 4.905 0
v 1.453 4.786 0
v 1.915 4.621 0
v 2.358 4.411 0
v 2.779 4.159 0
v 3.173 3.866 0
v 3.537 3.537 0
v 3.866 3.173 0
v 4.159 2.779 0
v 4.411 2.358 0
v 4.621 1.915 0
v 4.786 1.453 0
v 4.905 0.977 0
v 4.977 0.492 0
v 5.001 0.001 0
v -2.776 4.159 0
v 4.159 -2.776 30
v -2.356 4.411 0
v -1.912 4.621 0
v -1.45 4.786 0
v -0.974 4.905 0
v -0.489 4.977 0
v 4.411 -2.356 30
v 4.621 -1.912 30
v 4.786 -1.45 30
v -4.999 0.001 0
v -4.974 0.492 0
v -4.903 0.977 0
v -4.783 1.453 0
v -4.618 1.915 0
v -4.408 2.358 0
v 4.905 -0.974 30
v -4.156 2.779 0
v -3.864 3.173 0
v -3.534 3.537 0
v -3.171 3.866 0
v 4.977 -0.489 30
v 4.977 -0.489 0
v 4.905 -0.974 0
v 4.786 -1.45 0
v 4.621 -1.912 0
v 4.411 -2.356 0
v 4.159 -2.776 0
v 0.001 -4.999 30
v 3.866 -3.171 0
v 3.537 -3.534 0
v 3.173 -3.864 0
v 2.779 -4.156 0
v 2.358 -4.408 0
v 1.915 -4.618 0
v 0.492 -4.974 30
v 1.453 -4.783 0
v 0.977 -4.902 0
v 0.492 -4.974 0
v 0.001 -4.999 0
v 0.977 -4.902 30
v 1.453 -4.783 30
v 1.915 -4.618 30
v -3.171 -3.864 0
v -3.534 -3.534 0
v -3.864 -3.171 0
v -4.156 -2.776 0
v -4.408 -2.356 0
v -4.618 -1.912 0
v -4.783 -1.45 0
v -4.903 -0.974 0
v -4.974 -0.489 0
v 2.358 -4.408 30
v 2.779 -4.156 30
v -0.489 -4.974 0
v -0.974 -4.902 0
v -1.45 -4.783 0
v -1.912 -4.618 0
v -2.356 -4.408 0
v -2.776 -4.156 0
v 3.173 -3.864 30
v 3.537 -3.534 30
v 0.001 5.001 30
v 2.779 4.159 30
v 2.358 4.411 30
v 1.915 4.621 30
v 3.866 -3.171 30
v 1.453 4.786 30
v 0.977 4.905 30
v 0.492 4.977 30
v 5.001 0.001 30
v 4.977 0.492 30
v 4.905 0.977 30
v 4.786 1.453 30
v 4.621 1.915 30
v 4.411 2.358 30
v 4.159 2.779 30
v 3.866 3.173 30
v 3.537 3.537 30
v 3.173 3.866 30
v -0.489 4.977 30
v -0.974 4.905 30
v -1.45 4.786 30
v -1.912 4.621 30
v -2.356 4.411 30
v -2.776 4.159 30
v -3.171 3.866 30
v -3.534 3.537 30
v -3.864 3.173 30
v -4.156 2.779 30
v -4.408 2.358 30
v -4.618 1.915 30
v -4.783 1.453 30
v -4.903 0.977 30
v -4.974 0.492 30
v -4.999 0.001 30
v -4.974 -0.489 30
v -4.903 -0.974 30
v -4.783 -1.45 30
v -4.618 -1.912 30
v -4.408 -2.356 30
v -4.156 -2.776 30
v -3.864 -3.171 30
v -3.534 -3.534 30
v -3.171 -3.864 30
v -0.489 -4.974 30
v -2.776 -4.156 30
v -2.356 -4.408 30
v -1.912 -4.618 30
v -1.45 -4.783 30
v -0.974 -4.902 30
# 128 vertices
g group_0_16089887
usemtl color_16089887
s 0
f 2 3 1
f 1 3 4
f 1 4 5
f 1 5 6
f 1 6 7
f 1 7 8
f 9 10 8
f 8 10 11
f 8 11 12
f 8 12 13
f 8 13 14
f 8 14 15
f 8 15 16
f 8 16 17
f 8 17 40
f 22 8 21
f 20 21 8
f 23 8 22
f 24 8 23
f 1 8 24
f 18 20 8
f 31 8 30
f 32 8 31
f 33 8 32
f 35 8 33
f 29 30 8
f 36 8 35
f 37 8 36
f 38 8 37
f 18 8 38
f 28 29 8
f 44 8 43
f 42 43 8
f 41 42 8
f 45 8 44
f 47 8 45
f 50 8 49
f 51 8 50
f 48 49 8
f 52 8 51
f 54 8 52
f 55 8 54
f 56 8 55
f 57 8 56
f 72 8 57
f 47 48 8
f 40 41 8
f 64 8 63
f 65 8 64
f 66 8 65
f 62 63 8
f 67 8 66
f 68 8 67
f 69 8 68
f 28 8 69
f 61 62 8
f 72 73 8
f 75 8 74
f 73 74 8
f 76 8 75
f 77 8 76
f 61 8 77
f 98 82 80
f 80 83 85
f 80 85 86
f 80 86 87
f 19 45 44
f 19 44 25
f 43 26 44
f 43 42 27
f 42 41 27
f 89 90 88
f 88 90 91
f 88 91 92
f 88 92 93
f 88 93 94
f 88 94 95
f 96 97 88
f 97 81 88
f 82 88 81
f 95 96 88
f 114 115 82
f 25 44 26
f 34 27 41
f 43 27 26
f 34 41 40
f 99 82 98
f 100 82 99
f 101 82 100
f 102 82 101
f 103 82 102
f 83 80 82
f 55 58 53
f 34 40 39
f 58 55 54
f 82 103 104
f 82 104 105
f 82 105 106
f 82 106 107
f 82 107 108
f 82 108 109
f 82 109 110
f 82 110 111
f 82 111 112
f 82 112 113
f 82 113 114
f 56 53 57
f 53 56 55
f 39 88 82
f 19 25 82
f 25 26 82
f 27 82 26
f 34 82 27
f 39 82 34
f 57 53 46
f 70 82 60
f 59 60 82
f 58 59 82
f 71 82 70
f 53 58 82
f 78 82 71
f 79 82 78
f 84 82 79
f 19 82 84
f 46 53 82
f 122 124 82
f 71 50 78
f 54 59 58
f 5 83 82
f 71 51 50
f 48 78 49
f 59 54 60
f 52 60 54
f 5 85 83
f 49 78 50
f 52 51 60
f 70 60 51
f 116 82 115
f 117 82 116
f 118 82 117
f 119 82 118
f 120 82 119
f 121 82 120
f 122 82 121
f 85 4 3
f 71 70 51
f 46 82 123
f 125 126 82
f 123 82 128
f 127 82 126
f 128 82 127
f 124 125 82
f 48 79 78
f 24 98 80
f 48 47 79
f 84 79 47
f 16 89 88
f 84 47 19
f 16 90 89
f 95 94 11
f 14 91 90
f 82 7 6
f 92 91 13
f 12 92 13
f 80 1 24
f 8 97 96
f 76 75 125
f 57 46 72
f 126 75 127
f 72 123 128
f 46 123 72
f 81 97 7
f 7 82 81
f 69 68 114
f 85 3 86
f 115 67 116
f 117 116 66
f 86 3 87
f 2 87 3
f 37 105 104
f 116 67 66
f 2 80 87
f 121 120 62
f 66 65 118
f 66 118 117
f 121 62 122
f 61 122 62
f 65 119 118
f 120 119 63
f 107 35 33
f 115 114 68
f 122 61 124
f 120 63 62
f 68 67 115
f 14 13 91
f 77 124 61
f 113 29 28
f 77 76 124
f 125 124 76
f 12 93 92
f 126 125 75
f 119 65 64
f 113 28 114
f 69 114 28
f 127 75 74
f 112 111 29
f 94 93 12
f 74 73 127
f 64 63 119
f 128 73 72
f 112 29 113
f 104 18 38
f 104 38 37
f 98 24 23
f 8 7 97
f 35 107 106
f 40 88 39
f 110 109 31
f 101 21 20
f 128 127 73
f 98 23 99
f 23 100 99
f 6 5 82
f 101 20 102
f 85 5 4
f 20 18 103
f 20 103 102
f 18 104 103
f 1 80 2
f 37 106 105
f 107 33 108
f 33 109 108
f 88 17 16
f 15 90 16
f 15 14 90
f 94 12 11
f 10 95 11
f 95 10 96
f 9 96 10
f 9 8 96
f 40 17 88
f 100 22 101
f 109 33 32
f 32 31 109
f 111 110 30
f 31 30 110
f 111 30 29
f 100 23 22
f 21 101 22
f 37 36 106
f 36 35 106
f 19 47 45
# 252 faces
#end of obj_0

File diff suppressed because it is too large


@@ -0,0 +1,69 @@
<?xml version="1.0" ?>
<robot name="suction-base.urdf">
<material name="DarkGrey">
<color rgba="0.2 0.2 0.2 1.0"/>
</material>
<link name="baseLink">
<contact>
<lateral_friction value="1.0"/>
<rolling_friction value="0.0001"/>
<inertia_scaling value="3.0"/>
</contact>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0"/>
<mass value=".1"/>
<inertia ixx="1" ixy="0" ixz="0" iyy="1" iyz="0" izz="1"/>
</inertial>
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="base.obj" scale="0.001 0.001 0.001"/>
</geometry>
<material name="DarkGrey"/>
</visual>
<!-- <collision>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="base.obj" scale="0.001 0.001 0.001"/>
</geometry>
</collision> -->
</link>
<joint name="headJoint" type="fixed">
<parent link="baseLink"/>
<child link="midLink"/>
<origin rpy="0.0 0.0 0.0" xyz="0.0 0.0 0.015"/>
<axis xyz="0 0 1"/>
<limit effort="150.0" lower="-6.28318530718" upper="6.28318530718" velocity="3.15"/>
<dynamics damping="10.0" friction="0.0"/>
</joint>
<link name="midLink">
<contact>
<lateral_friction value="1.0"/>
<rolling_friction value="0.0001"/>
<inertia_scaling value="3.0"/>
</contact>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0"/>
<mass value=".1"/>
<inertia ixx="1" ixy="0" ixz="0" iyy="1" iyz="0" izz="1"/>
</inertial>
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="mid.obj" scale="0.001 0.001 0.001"/>
</geometry>
<material name="DarkGrey"/>
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="mid.obj" scale="0.001 0.001 0.001"/>
</geometry>
</collision>
</link>
</robot>


@@ -0,0 +1,101 @@
<?xml version="1.0" ?>
<robot name="suction-head.urdf">
<link name="headLink">
<contact>
<lateral_friction value="1.0"/>
<rolling_friction value="0.0001"/>
<inertia_scaling value="3.0"/>
</contact>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0"/>
<mass value=".1"/>
<inertia ixx="1" ixy="0" ixz="0" iyy="1" iyz="0" izz="1"/>
</inertial>
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="head.obj" scale="0.001 0.001 0.001"/>
</geometry>
<material name="darkgrey">
<color rgba="0.2 0.2 0.2 1"/>
</material>
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="head.obj" scale="0.001 0.001 0.001"/>
</geometry>
</collision>
</link>
<joint name="tipJoint" type="fixed">
<parent link="headLink"/>
<child link="tipLink"/>
<origin rpy="0.0 0.0 0.0" xyz="0.0 0.0 0.029"/>
<axis xyz="0 0 1"/>
<limit effort="150.0" lower="-6.28318530718" upper="6.28318530718" velocity="3.15"/>
<dynamics damping="10.0" friction="0.0"/>
</joint>
<link name="tipLink">
<contact>
<lateral_friction value="1.0"/>
<rolling_friction value="0.0001"/>
<inertia_scaling value="3.0"/>
</contact>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0"/>
<mass value=".1"/>
<inertia ixx="1" ixy="0" ixz="0" iyy="1" iyz="0" izz="1"/>
</inertial>
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="tip.obj" scale="0.001 0.001 0.001"/>
</geometry>
<material name="blue">
<color rgba="0.18039216 0.50588235 0.77254902 1"/>
</material>
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 0"/>
<!-- <geometry>
<mesh filename="tip.obj" scale="0.001 0.001 0.001"/>
</geometry> -->
<geometry>
<cylinder length="0.028" radius="0.001"/>
</geometry>
</collision>
</link>
<!-- <link name="asdfLink">
<contact>
<lateral_friction value="1.0"/>
<rolling_friction value="0.0001"/>
<inertia_scaling value="3.0"/>
</contact>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0"/>
<mass value=".1"/>
<inertia ixx="1" ixy="0" ixz="0" iyy="1" iyz="0" izz="1"/>
</inertial>
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<cylinder length="0.028" radius="0.001"/>
</geometry>
</visual>
</link>
<joint name="asdfJoint" type="fixed">
<parent link="tipLink"/>
<child link="asdfLink"/>
<origin rpy="0.0 0.0 0.0" xyz="0.0 0.0 0.0"/>
<axis xyz="0 0 1"/>
<limit effort="150.0" lower="-6.28318530718" upper="6.28318530718" velocity="3.15"/>
<dynamics damping="10.0" friction="0.0"/>
</joint> -->
</robot>
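Since URDF files like the ones above are plain XML, their link/joint topology can be inspected with the standard library. The snippet below is an illustrative sketch (not part of this commit) that parses a trimmed copy of the suction-head structure and extracts the kinematic tree.

```python
import xml.etree.ElementTree as ET

# Trimmed-down copy of the suction-head URDF topology (geometry omitted).
URDF = """<?xml version="1.0" ?>
<robot name="suction-head.urdf">
  <link name="headLink"/>
  <link name="tipLink"/>
  <joint name="tipJoint" type="fixed">
    <parent link="headLink"/>
    <child link="tipLink"/>
    <origin rpy="0.0 0.0 0.0" xyz="0.0 0.0 0.029"/>
  </joint>
</robot>"""

root = ET.fromstring(URDF)
# Link names in document order.
links = [link.get("name") for link in root.findall("link")]
# Map each joint name to its (parent, child) link pair.
joints = {
    joint.get("name"): (joint.find("parent").get("link"),
                        joint.find("child").get("link"))
    for joint in root.findall("joint")
}
print(links, joints)  # ['headLink', 'tipLink'] {'tipJoint': ('headLink', 'tipLink')}
```

A check like this can catch dangling `<child link="..."/>` references before handing the file to a physics engine, which typically reports such errors less clearly.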

Some files were not shown because too many files have changed in this diff