Per-robot embodiments
reflex serve --embodiment <name> reads a per-robot config from configs/embodiments/<name>.json. Three presets ship: franka, so100, ur5. Bring your own with --custom-embodiment-config <path>.
Quick start
```sh
reflex serve ./my-export/ --embodiment franka
reflex serve ./my-export/ --custom-embodiment-config ./my-robot.json
```

When --embodiment is unset, reflex serve returns raw, unscaled actions. Fine for a smoke test; never for production hardware.
Top-level structure
```json
{
  "schema_version": 1,
  "embodiment": "franka",
  "action_space": { ... },
  "normalization": { ... },
  "gripper": { ... },
  "cameras": [ ... ],
  "control": { ... },
  "constraints": { ... }
}
```

schema_version bumps only on removed or renamed fields. Additive changes don't bump it.
action_space
Section titled “action_space”The robot’s commanded action vector.
| Field | Type | Range | Notes |
|---|---|---|---|
| type | enum | continuous | discrete is not supported in v1 |
| dim | int | 1–32 | Number of action dimensions (e.g. 7 for a Franka 6-DOF arm + gripper) |
| ranges | array of [min, max] | length = dim | Per-dimension hard limits |
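As an illustration, an action_space block for a 7-dim command could look like the fragment below. The range values here are hypothetical placeholders, not the shipped franka preset:

```json
"action_space": {
  "type": "continuous",
  "dim": 7,
  "ranges": [
    [-2.9, 2.9], [-1.8, 1.8], [-2.9, 2.9], [-3.0, 0.1],
    [-2.9, 2.9], [-0.02, 3.8], [0.0, 1.0]
  ]
}
```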
normalization
Action and state denormalization stats. The model is trained on normalized inputs; these stats undo the normalization at runtime.
| Field | Type | Constraint |
|---|---|---|
| mean_action | array of floats | length == action_space.dim |
| std_action | array of floats | same length; each element > 0 and ≤ 100 |
| mean_state | array of floats | length defines the state vector |
| std_state | array of floats | length == mean_state length |
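Conceptually, denormalization is just the inverse affine map. A minimal sketch, assuming per-dimension stats (the helper name is illustrative, not the reflex API):

```python
def denormalize_action(norm_action, mean_action, std_action):
    # The model emits normalized values; recover raw commands with
    # x = x_norm * std + mean, applied per dimension.
    assert len(norm_action) == len(mean_action) == len(std_action)
    return [a * s + m for a, m, s in zip(norm_action, mean_action, std_action)]

# denormalize_action([0.5, -1.0], [0.1, 0.2], [2.0, 0.5]) ≈ [1.1, -0.3]
```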
gripper
| Field | Type | Range / Notes |
|---|---|---|
| component_idx | int | 0 to action_space.dim − 1 |
| close_threshold | float | 0.0–1.0 |
| inverted | bool | If true, low values close the gripper |
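Reading these three fields together, a hypothetical helper (not the reflex API) that decides whether a commanded action closes the gripper, assuming gripper commands are normalized to [0, 1]:

```python
def gripper_is_closed(action, component_idx, close_threshold, inverted=False):
    # Pull the gripper component out of the full action vector.
    value = action[component_idx]
    # By default, high values close the gripper; with inverted=True,
    # low values close it instead.
    return value <= close_threshold if inverted else value >= close_threshold
```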
cameras
List of camera streams the model expects, 1–8 cameras. Each entry:

| Field | Type | Notes |
|---|---|---|
| name | string | Logical name (wrist, front, etc.). Must be unique. |
| resolution | [int, int] | 32–4096 px |
| fps | float | 0–240 |
| color_space | enum | rgb8, bgr8, mono8, rgba8 |
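For example, a two-camera setup could be declared like this (illustrative values, not a shipped preset):

```json
"cameras": [
  { "name": "wrist", "resolution": [224, 224], "fps": 30, "color_space": "rgb8" },
  { "name": "front", "resolution": [640, 480], "fps": 30, "color_space": "rgb8" }
]
```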
control
| Field | Type | Range | Notes |
|---|---|---|---|
| frequency_hz | float | 0–1000 | Robot control-loop rate |
| chunk_size | int | 1–200 | Actions in a single inference chunk |
| rtc_execution_horizon | int | 1–chunk_size | Number of actions to execute before requesting the next chunk |
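The interplay of chunk_size and rtc_execution_horizon can be sketched as a toy loop (the function is hypothetical; the real overlap logic lives in the RTC adapter):

```python
def plan_inference_calls(total_steps, chunk_size, rtc_execution_horizon):
    # Each inference returns chunk_size actions, but only the first
    # rtc_execution_horizon of them are executed before re-planning.
    assert 1 <= rtc_execution_horizon <= chunk_size
    calls = 0
    executed = 0
    while executed < total_steps:
        calls += 1
        executed += rtc_execution_horizon
    return calls

# 100 steps with chunk_size=50, horizon=25 -> 4 inference calls
```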
constraints
| Field | Type | Range / Notes |
|---|---|---|
| max_ee_velocity | float | 0–10 m/s |
| max_gripper_velocity | float | 0–10 m/s |
| collision_check | bool | Run a per-step collision check |
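A velocity limit like max_ee_velocity is typically enforced by rescaling the commanded velocity vector; a minimal sketch (hypothetical helper, not the reflex implementation):

```python
import math

def clamp_ee_velocity(velocity, max_ee_velocity):
    # Scale the end-effector velocity vector down if its magnitude
    # exceeds the configured limit; direction is preserved.
    speed = math.sqrt(sum(v * v for v in velocity))
    if speed <= max_ee_velocity:
        return velocity
    scale = max_ee_velocity / speed
    return [v * scale for v in velocity]

# clamp_ee_velocity([3.0, 4.0], 1.0) ≈ [0.6, 0.8]  (speed 5.0 -> 1.0)
```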
Validation
Two layers run on every config load: JSON Schema (types, enums, ranges) plus Python cross-field rules (length parity, gripper-idx bounds, RTC horizon sanity, camera uniqueness). Each failure carries a stable error slug (for example action-ranges-length-mismatch or gripper-idx-out-of-range) that downstream tools can map to docs or GitHub issues.
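For intuition, two of those cross-field rules could be sketched like this. This is a simplified stand-in; the real validator and its error-object shape live in reflex.embodiments.validate:

```python
def cross_field_errors(cfg: dict) -> list:
    # Returns stable error slugs, mirroring the two examples above.
    errors = []
    dim = cfg["action_space"]["dim"]
    if len(cfg["action_space"]["ranges"]) != dim:
        errors.append("action-ranges-length-mismatch")
    if not (0 <= cfg["gripper"]["component_idx"] < dim):
        errors.append("gripper-idx-out-of-range")
    return errors
```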
```python
from reflex.embodiments import EmbodimentConfig
from reflex.embodiments.validate import validate_embodiment_config, format_errors

cfg = EmbodimentConfig.load_preset("franka")
ok, errors = validate_embodiment_config(cfg)
if not ok:
    print(format_errors(errors))
```

Adding a new preset
- Add a new dict to scripts/emit_embodiment_presets.py (copy a similar robot's preset; adjust DOFs, gripper index, normalization stats)
- Add the slug to the embodiment enum in src/reflex/embodiments/schema.json
- Add the slug to ALL_PRESETS in tests/test_embodiments.py
- Run python scripts/emit_embodiment_presets.py and pytest tests/test_embodiments.py -v
- Commit JSON + script + schema + test changes in one commit
Downstream consumers
- The RTC adapter uses control.frequency_hz, control.rtc_execution_horizon, control.chunk_size
- Action denormalization uses constraints.max_ee_velocity, gripper.*
- reflex doctor validates gripper.inverted, control.chunk_size