Static and Plugged: Make Embodied Evaluation Simple
Paper • 2508.06553 • Published
Each row of the benchmark TSV uses the following schema: `index` (int64), `question` (string), `hint`, option columns `A` through `G` (string; unused options are empty), `answer` (the correct option letter), `category` (e.g. Micro Perception), `l2-category` (Exterior or Wrist in the preview rows), `image` (a base64-encoded JPEG), `source` (e.g. Droid), `comment`, and `split` (test). The preview shows ten Micro Perception rows; for example, the first asks "Please choose the option that best describes the position of the white rope relative to the black bar." with options such as "The rope is draped over the bar." and answer C.
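Given that row layout, a minimal loading sketch might look as follows (assuming pandas and Pillow are available, and that the `image` cell holds a raw base64-encoded JPEG with no data-URI prefix, as in the preview rows):

```python
import base64
import io

import pandas as pd
from PIL import Image


def load_samples(tsv_path) -> pd.DataFrame:
    # Read one benchmark TSV; columns follow the schema shown above
    # (index, question, hint, A-G, answer, category, l2-category,
    # image, source, comment, split).
    return pd.read_csv(tsv_path, sep="\t")


def decode_image(b64_jpeg: str) -> Image.Image:
    # The `image` cell is a base64-encoded JPEG without a data-URI prefix.
    return Image.open(io.BytesIO(base64.b64decode(b64_jpeg)))
```

Note that VLMEvalKit handles this loading internally when you run the evaluation commands below; the sketch is only for inspecting the raw files yourself.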
StaticEmbodiedBench is a dataset for evaluating vision-language models on embodied intelligence tasks, as featured in the OpenCompass leaderboard.
It covers three key capabilities, and each sample is additionally labeled with a visual perspective.
This release includes 200 open-source samples from the full dataset, provided for public research and benchmarking purposes.
This dataset is fully supported by VLMEvalKit.
Registered dataset names:
- StaticEmbodiedBench - for standard evaluation
- StaticEmbodiedBench_circular - for circular evaluation (multi-round)

To run evaluation in VLMEvalKit:
python run.py --data StaticEmbodiedBench --model <your_model_name> --verbose
For circular evaluation, simply use:
python run.py --data StaticEmbodiedBench_circular --model <your_model_name> --verbose
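For intuition, "circular" evaluation follows a CircularEval-style protocol: each multiple-choice item is presented several times with its options rotated, and the model is credited only if it answers every rotation correctly. The sketch below illustrates generating the rotated variants; it is an illustration of the protocol, not VLMEvalKit's internal code:

```python
from typing import List, Tuple

LETTERS = "ABCDEFG"  # option columns available in the benchmark TSV


def circular_variants(options: List[str], answer_idx: int) -> List[Tuple[List[str], str]]:
    # Produce one rotated copy of the item per option: rotation k shifts the
    # option list left by k, so the option contents stay fixed while the
    # correct letter moves. Under circular scoring, a model earns credit for
    # the item only if it answers all rotations correctly.
    n = len(options)
    variants = []
    for shift in range(n):
        rotated = options[shift:] + options[:shift]
        new_answer = LETTERS[(answer_idx - shift) % n]
        variants.append((rotated, new_answer))
    return variants
```

For a four-option item whose correct option sits at index 2 (letter C), the rotations move the correct letter through C, B, A, D while the underlying answer text never changes.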
If you use this dataset in your research, please cite it as follows:
@misc{xiao2025staticpluggedmakeembodied,
  title={Static and Plugged: Make Embodied Evaluation Simple},
  author={Jiahao Xiao and Jianbo Zhang and BoWen Yan and Shengyu Guo and Tongrui Ye and Kaiwei Zhang and Zicheng Zhang and Xiaohong Liu and Zhengxue Cheng and Lei Fan and Chuyi Li and Guangtao Zhai},
  year={2025},
  eprint={2508.06553},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2508.06553},
}