# CM-EVS: A Coverage-Curated Panoramic RGB-D Dataset for Indoor Scene Understanding
CM-EVS is a curated panoramic RGB-D dataset built under a single principle: maximize the geometric coverage of a 3D scene with the fewest equirectangular (ERP) frames possible. The release is structured as one redistributable Blender indoor data archive plus four license-aware adapter packages that regenerate matched frames locally from upstream sources whose terms forbid redistribution.
v1.0 status: this version stages the full Blender indoor data drop (374 scene instances, 13,631 ERP RGB-depth-pose frames; 201 scenes from the round1+2 sampling and 173 from round2). The paper's headline 326 scenes / 11,583 frames is the curator-selected subset that will be derived from this drop after the §5 evaluation experiments finalize. See `TODO.md` for items still in flight.
## Dataset summary
| Source | License | Released here | Scenes / frames |
|---|---|---|---|
| Blender indoor | CC-BY 4.0 | Full data (`blender_indoor/`) | 374 / 13,631 |
| HM3D | upstream EULA | Adapter only (`adapters/hm3d/`) | 401 rooms / regen-only |
| ScanNet++ | upstream ToS | Adapter only (`adapters/scannetpp/`) | 500 scans / regen-only |
| OB3D (outdoor) | upstream license | Adapter only (`adapters/ob3d/`) | 24 / regen-only |
| TartanGround (outdoor) | upstream license | Adapter only (`adapters/tartanground/`) | 762 parts / regen-only |
The Blender indoor frames are the only redistributable RGB-D data. For the four restricted sources, this dataset ships the per-source adapter (config + pipeline script + scene-id metadata); users obtain upstream data themselves and run the adapter locally to reproduce matching ERP frames under the unified schema.
## Output schema
Every released ERP frame follows a single coordinate convention:

- World frame: right-handed, `+X` right, `+Y` up, `+Z` forward
- Camera frame: OpenCV (`+x` image right, `+y` image down, `+z` camera forward)
- Pose: scalar-first quaternion `q_wc = [w, x, y, z]` plus position `C_w − C_{w,0}` relative to the scene's first selected frame (the absolute first-frame center is recorded once per scene in `meta.json` when present)
- ERP pixel coords: longitude `(u/W − 0.5) · 2π`, latitude `(0.5 − v/H) · π`
- Range depth: each pixel stores the radial distance from camera center to surface (not perspective `z`-depth). NaN or 0 marks invalid pixels.
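Because the lon/lat mapping fixes a viewing ray per pixel and the stored depth is radial, back-projection is a one-liner. A minimal sketch (the function name and the hard-coded 2048×1024 resolution are illustrative, not part of the release API):

```python
import numpy as np

W, H = 2048, 1024  # ERP resolution of the Blender indoor frames

def erp_pixel_to_dir(u, v):
    """Unit viewing direction for ERP pixel (u, v) in the right-handed
    +X-right, +Y-up, +Z-forward frame, per the lon/lat formulas above."""
    lon = (u / W - 0.5) * 2.0 * np.pi
    lat = (0.5 - v / H) * np.pi
    return np.array([np.cos(lat) * np.sin(lon),   # +X right
                     np.sin(lat),                 # +Y up
                     np.cos(lat) * np.cos(lon)])  # +Z forward

# Range depth is radial, so a pixel back-projects to center + depth * dir.
d = erp_pixel_to_dir(W / 2, H / 2)  # image center looks along +Z (forward)
```

Sanity checks under this convention: the image center looks along `+Z`, the pixel at `u = 3W/4` looks along `+X`, and the top row looks along `+Y`.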
| File | Format | Description |
|---|---|---|
| `panorama_{NNNN}.png` | PNG, 2048×1024 | ERP RGB image |
| `panorama_{NNNN}_depth.npy` | float32 array | ERP range depth (m); NaN or 0 if invalid; absent for some frames where depth was not produced |
| `pose_{NNNN}.json` | JSON | `q_wc`, position, `camera_type` |
Per-scene `meta.json`, `metadata/selected_viewpoints.json`, `metadata/candidates.jsonl`, and `metadata/per_step_log.jsonl` (curator-only) will land here once the curator runs on the merged 374-scene set; see `TODO.md`.
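The NaN-or-0 invalid-depth convention reduces to a single boolean mask when loading a depth array. A minimal sketch (the 2×2 toy array stands in for a loaded `panorama_{NNNN}_depth.npy`):

```python
import numpy as np

def valid_depth_mask(depth):
    """Boolean mask of valid range-depth pixels; NaN or 0 mark invalid."""
    return np.isfinite(depth) & (depth > 0)

# Toy values for illustration only.
depth = np.array([[1.5, 0.0],
                  [np.nan, 3.2]], dtype=np.float32)
mask = valid_depth_mask(depth)  # [[True, False], [False, True]]
```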
## Directory layout

```
cmevs_hf_release/
├── README.md            (this file — HF dataset card)
├── LICENSE.md           (mixed-license matrix)
├── CHANGELOG.md
├── croissant.json       (MLCommons Croissant v1.0; passes mlcroissant 1.1 validator)
├── SHA256SUMS           (top-level checksums, excluding blender_indoor/scenes/)
├── TODO.md              (pre-push checklist)
├── blender_indoor/
│   ├── README.md
│   ├── scenes/sence_indoor_{0001..0374}/{panorama,pose}_{NNNN}.{png,npy,json}
│   ├── SHA256SUMS       (39,896 lines for 13,631 frames × ~3 files)
│   └── metadata/{source_manifest.json, splits.json, frame_manifest.csv,
│                scene_id_mapping.csv, frame_id_mapping.csv}
├── adapters/{hm3d, scannetpp, ob3d, tartanground}/
│   ├── README.md
│   ├── config.yaml
│   ├── pipeline.py / reencoding_script.md
│   └── metadata/source_manifest.json
├── code/                (curator core modules + scripts; reviewer reference)
└── results/             (paper §5 result CSVs; placeholders to be filled)
```
## Datasheet
Following Gebru et al. 2021. (Source: main.tex Appendix A — content here is a faithful markdown rendering; cross-check against the paper PDF.)
### Motivation

**Purpose.** Evaluates fixed-budget panoramic viewpoint curation policies on existing 3D assets, and provides reproducible ERP RGB-D-pose samples for panoramic perception experiments.

**Creators / funding.** Anonymized during double-blind review; finalized in camera-ready.
### Composition

**Instances.** Each instance is an ERP frame triple (RGB image + range-depth array + camera pose), plus per-scene `meta.json` and curator-only provenance metadata.

**Counts.** v1.0 stages 13,631 ERP frames across 374 Blender indoor scene instances (CC-BY 4.0). The four restricted sources (HM3D / ScanNet++ / OB3D / TartanGround) ship adapters only; users regenerate matching frames locally.

**Sampling.** Indoor (Blender) frames are produced offline by Cycles ERP rendering. The 374 scene instances comprise 201 from round1+2 (`Blender_indoor_FOU_threshold-0.2`, rounds 1+2 merge) and 173 from round2 (independent extraction). 48 original `sence_indoor_XXXX` ids appear in both rounds with different sampling outcomes; both versions are kept and renumbered (see `blender_indoor/metadata/scene_id_mapping.csv` for traceability). Outdoor source trajectories (TartanGround, OB3D) are re-encoded into the unified schema by the `adapters/{tartanground,ob3d}/` pipelines; the curator does not run on outdoor sources in v1.0.

**Fields.** RGB PNG (2048×1024 for Blender indoor; native source resolution otherwise), float32 range depth (`.npy`), pose JSON with scalar-first `q_wc`, `meta.json`, candidate / viewpoint / per-step-log metadata (curator-produced frames only), source / scene / split ids.

**Missing values.** Invalid depth pixels are NaN or 0 by source convention; per-frame invalid-depth ratio statistics will land in `results/frame_quality.csv` (see TODO).

**Splits.** Default scene-level 70 / 15 / 15 split via `sha256(new_scene_id) % 100`. See `blender_indoor/metadata/splits.json`. The downstream panoramic-depth experiment (paper §4.10) uses a separate 94-scene Blender-indoor subset under its own scene-level split (84 / 10 / 10 = 3,400 / 362 / 423 frames).
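The hash-based split is deterministic and reproducible without the manifest. A sketch of one plausible reading (the 70/85 bucket boundaries are an assumed mapping of the 70/15/15 ratio; `splits.json` remains the authoritative record):

```python
import hashlib

def split_for(new_scene_id):
    """Scene-level split via sha256(new_scene_id) % 100.
    Bucket boundaries (70, 85) are an assumption, not the release's
    verified convention; cross-check against splits.json."""
    bucket = int(hashlib.sha256(new_scene_id.encode()).hexdigest(), 16) % 100
    if bucket < 70:
        return "train"
    if bucket < 85:
        return "val"
    return "test"
```

Because the hash is taken over the scene id, every frame in a scene lands in the same split, which is the point of a scene-level protocol.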
### Collection
Indoor data is produced by the CM-EVS pipeline (asset loading, coordinate normalization, candidate generation, 26-direction geometric-validity filtering, conflict-aware greedy selection, 2048×1024 high-resolution Cycles ERP rendering, export under the unified schema). Outdoor data (TartanGround, OB3D) is re-encoded into the unified schema; the curator does not run on outdoor sources in v1.0. HM3D and ScanNet++ frames are not redistributed: the release ships adapter regeneration scripts that produce matched frames locally after the user accepts upstream license terms. No new human-subject data is collected.
### Preprocessing

Coordinate normalization to a right-handed +X-right +Y-up +Z-forward world frame with the OpenCV-style camera frame; pose stored as scalar-first `q_wc = [w, x, y, z]` plus position relative to the scene's first selected frame. AABB computation; source-specific candidate generation; 26-direction geometric-validity filter. Cubemap-to-ERP re-encoding at native resolution for outdoor sources; optional exposure adjustment for Blender; output schema conversion. Raw upstream 3D assets are not redistributed unless upstream licenses allow.
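The scalar-first pose can be consumed without external dependencies via the standard quaternion-to-rotation conversion. A minimal sketch (assumes a unit-norm `q_wc`; the function name is illustrative):

```python
import numpy as np

def quat_wxyz_to_rotation(q):
    """Rotation matrix from a scalar-first quaternion [w, x, y, z],
    matching the q_wc convention above (assumes unit norm)."""
    w, x, y, z = q
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

R = quat_wxyz_to_rotation([1.0, 0.0, 0.0, 0.0])  # identity quaternion
```

Note the `[w, x, y, z]` ordering: libraries such as SciPy default to scalar-last, so reorder before mixing conventions.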
### Uses

**Recommended:** panoramic depth estimation, ERP novel-view synthesis, data-centric viewpoint policy comparison, view-planning research, panoramic Gaussian-splatting reconstruction, panoramic world-model pretraining.

**Avoid:** identity-sensitive inference, safety-critical deployment, claims about private indoor spaces, treating synthetic-only results as real-world evidence without further validation.
### Distribution
Blender indoor frames (CC-BY 4.0), curator code (MIT), documentation (CC-BY 4.0), Datasheet, and Croissant metadata are released here. The four restricted sources ship metadata + regeneration scripts only.
### Maintenance
Versioned releases on a 6-month cadence. Errata are tracked via the project repository; checksum manifests are refreshed at every release; regeneration scripts are updated when upstream APIs, file layouts, or access terms change.
## Code

The full curator source code, adapters, and reproduction scripts are released as a separate, anonymized repository: huggingface.co/anon-cmevs-2026/cmevs-code

The `code/` subtree mirrored inside this dataset repository is provided for offline reviewer convenience; the linked code repository is the canonical source.
## Citation

```bibtex
@inproceedings{cmevs2026,
  title={{CM-EVS}: A Coverage-Curated Panoramic {RGB-D} Dataset for Indoor Scene Understanding},
  author={Anonymous Author(s)},
  booktitle={NeurIPS 2026 Datasets and Benchmarks Track (under review)},
  year={2026}
}
```
## Reviewer quick sample (NeurIPS 2026 D&B "large dataset URL" requirement)

The full Blender indoor archive is ~109 GB. To support reviewer-time inspection without a full download, scene `sence_indoor_0001` is provided as a representative sample at:

This single scene contains 33 RGB panoramas (`panorama_NNNN.png`, 2048×1024 ERP), 33 range-depth arrays (`panorama_NNNN_depth.npy`, float32 metres), and 33 pose JSON files (`pose_NNNN.json`, scalar-first quaternion + position) — 99 files in total, ~250 MB.
### Sampling methodology

The scene was produced by the same end-to-end CM-EVS pipeline as every other Blender indoor scene in this release: asset loading → coordinate normalization to the right-handed +X/+Y/+Z world frame → grid-based candidate generation with the 26-direction geometric-validity filter → conflict-aware greedy viewpoint selection → 2048×1024 Cycles ERP rendering → unified-schema export. No special preprocessing distinguishes the sample from the rest of the release; it was selected only because (i) it is the first scene id in lexical order and (ii) it represents the round1+2 sampling subset (the 201-scene half of the 374-scene v1.0 release; the other 173 scenes come from the round2 independent extraction). Reviewers can therefore use this scene to verify file format, coordinate convention, depth validity statistics, and image-depth alignment for the full release. The complete provenance — including the per-step coverage gain $G_t$, conflict ratio $L_t$, and viewpoint score $s_t$ — is in `metadata/per_step_log.jsonl` (curator-only fields, populated for all curator-produced frames once the §5 evaluation experiments are finalized).
## Notes on directory naming

Scene directories under `blender_indoor/scenes/` use the legacy id pattern `sence_indoor_NNNN` (note: *sence*, not *scene*). This is a typo inherited from the upstream Blender source pipeline used to produce the v1.0 build, and it is preserved verbatim so that scene ids match the production-side run logs and the entries in `metadata/scene_id_mapping.csv`. The misspelling does not affect file content, ERP coordinate convention, depth validity, pose schema, frame indexing, or downstream parsing — only the directory name string. Directories will be renamed to `scene_indoor_NNNN` in v1.1; the rename will be reflected in a new `scene_id_mapping.csv` row pointing each new id to its v1.0 `sence_indoor_NNNN` predecessor so existing consumers continue to resolve.
## Verifying integrity

```shell
# top-level files + adapter packages + code + metadata
shasum -a 256 -c SHA256SUMS

# Blender indoor frames (39,896 entries: 13,631 panorama + 12,634 depth + 13,631 pose)
cd blender_indoor && shasum -a 256 -c SHA256SUMS
```
## License
See LICENSE.md for the per-component license matrix.