SynWTS: Synthetic Woven Traffic Safety Dataset
SynWTS is a high-fidelity synthetic dataset built as a Digital Twin of the Woven Traffic Safety (WTS) dataset. It is developed for the 2026 AI City Challenge (Track 2) to advance Sim2Real research in transportation safety understanding.
Dataset Summary
Participants in the Sim2Real challenge must train models exclusively on this synthetic data and evaluate performance on real-world video. SynWTS provides a geometric match to real-world test locations, focusing on pedestrian-involved incidents with multi-view 1080p video, structured temporal captions, and complex Visual Question Answering (VQA) pairs.
Dataset Owner(s)
Santa Clara University
Key Features
- Sim2Real Benchmark: Specifically designed to bridge the gap between NVIDIA Isaac Sim environments and real-world traffic scenarios.
- Multi-View Perception: Synchronized views from overhead infrastructure cameras and vehicle-ego perspectives.
- Temporal Segmentation: Scenarios are partitioned into five safety-critical phases: Pre-recognition, Recognition, Judgment, Action, and Avoidance.
- Structured Annotations: Descriptions cover four pillars: Location, Attention, Behavior, and Context.
Dataset Creation Date
Dataset creation started in May 2025. The first section of the dataset was released on May 1, 2026, and a second section on May 11, 2026; the remaining scenarios are scheduled for release by the end of May 2026.
Dataset Characterization
- Data Collection Method: Synthetic
- Labeling Method: Bounding boxes were generated automatically with NVIDIA Isaac Sim; text labels were adapted from the original WTS scenario labels to account for scenario modifications.
Video Format
- Video Standard: MP4 (H.264)
- Video Resolution: 1080p
- Video Frame Rate: 30 FPS
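As a quick sanity check, a clip can be decoded with OpenCV (a minimal sketch; the file path is hypothetical and follows the directory layout under Dataset Structure below):

import cv2

# Hypothetical path following the layout in the Dataset Structure section.
cap = cv2.VideoCapture("data/videos/train/20230707_12_SN17_T1/overhead_view/example.mp4")
print(cap.get(cv2.CAP_PROP_FRAME_WIDTH),   # expected: 1920.0
      cap.get(cv2.CAP_PROP_FRAME_HEIGHT),  # expected: 1080.0
      cap.get(cv2.CAP_PROP_FPS))           # expected: 30.0
ok, frame = cap.read()  # frame is a 1080x1920x3 BGR uint8 array when ok is True
cap.release()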
Evaluation
- 2026 Edition: Evaluation is based on the average of BLEU-4, METEOR, ROUGE-L, and CIDEr scores for sub-task 1 (anomaly description/captioning) and on accuracy (correct answers / total questions) for sub-task 2 (video question answering), computed on the 2026 AI City Challenge evaluation server. The winner is determined by the mean score of sub-task 1 and sub-task 2.
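For reference, a minimal sketch of how the ranking score could be assembled, assuming each captioning metric has already been normalized to [0, 1] (the server's exact handling of CIDEr, which natively ranges beyond 1, may differ):

def subtask1_score(bleu4: float, meteor: float, rouge_l: float, cider: float) -> float:
    """Average of the four captioning metrics (assumed normalized to [0, 1])."""
    return (bleu4 + meteor + rouge_l + cider) / 4.0

def subtask2_score(num_correct: int, num_questions: int) -> float:
    """VQA accuracy: correct answers / total questions."""
    return num_correct / num_questions

def final_score(s1: float, s2: float) -> float:
    """Mean of the two sub-task scores, used to rank submissions."""
    return (s1 + s2) / 2.0

# Example with made-up numbers: captioning metrics plus 180 of 240 VQA answers correct.
print(final_score(subtask1_score(0.32, 0.45, 0.55, 0.60), subtask2_score(180, 240)))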
Dataset Structure
Directory Layout
data/
├── videos/
│   └── {split}/{scenario}/{view}/*.mp4
└── annotations/
    ├── caption/
    │   └── {split}/{scenario}/{view}/{scenario}_caption.json
    ├── bbox_annotated/
    │   ├── pedestrian/{split}/{scenario}/{view}/{scenario}_{camera_id}_bbox.json
    │   └── vehicle/{split}/{scenario}/overhead_view/{scenario}_{camera_id}_bbox.json
    └── vqa/
        └── {split}/{scenario}/{view}/{scenario}.json
{split} = train | val | test
{view} = overhead_view | vehicle_view | environment
{camera_id} = {camera_ip_address}_{direction_id} | vehicle_view
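A minimal sketch of walking this layout with pathlib, pairing each scenario's caption file with its videos (assumes the data/ root above; the event_phase field follows the caption sample below):

from pathlib import Path
import json

root = Path("data")  # dataset root from the layout above

# Iterate caption annotations for the train split and locate the matching videos.
for caption_file in sorted(root.glob("annotations/caption/train/*/*/*_caption.json")):
    view = caption_file.parent.name              # overhead_view | vehicle_view | environment
    scenario = caption_file.parent.parent.name   # {scenario}
    videos = sorted((root / "videos" / "train" / scenario / view).glob("*.mp4"))
    phases = json.loads(caption_file.read_text())["event_phase"]
    print(f"{scenario}/{view}: {len(videos)} video(s), {len(phases)} phase(s)")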
Data Fields & Samples
1. Fine-Grained Captions
Captions are generated from a checklist of 170+ traffic items. Each event phase contains a distinct caption for the pedestrian and the vehicle. We reuse the annotations from the WTS dataset, updating only details that could not be reproduced in the current version of the simulation.
Sample (from overhead_view_caption.json):
{
  "id": 765,
  "event_phase": [
    {
      "labels": ["4"],
      "caption_pedestrian": "The pedestrian was a male in his 30s walking slowly... He was standing close behind a vehicle... Although he almost noticed the vehicle, he seemed unaware of it.",
      "caption_vehicle": "The vehicle was on the left side of the pedestrian and was close to them... The vehicle slightly collided with the pedestrian while moving at a speed of 0 km/h.",
      "start_time": "8.993",
      "end_time": "14.903"
    }
  ]
}
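Note that start_time and end_time are strings giving seconds; at the 30 FPS video rate they map to frame indices as in this sketch (file name taken from the sample above):

import json

FPS = 30.0  # per the Video Format section

with open("overhead_view_caption.json") as f:
    annotation = json.load(f)

for phase in annotation["event_phase"]:
    start_frame = int(float(phase["start_time"]) * FPS)
    end_frame = int(float(phase["end_time"]) * FPS)
    print(f"phase {phase['labels'][0]}: frames {start_frame}-{end_frame}")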
2. Visual Question Answering (VQA)
Includes multiple-choice questions covering position, distance, visibility, and actions.
Sample (from vqa-vehicle_view.json):
{
  "question": "What is the action taken by vehicle?",
  "a": "Swerved to the left to avoid",
  "b": "Swerved to the right, but could not avoid",
  "c": "Tried sudden braking but could not avoid",
  "d": "Collided with the pedestrian",
  "correct": "d"
}
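Since the correct field holds the gold option key, sub-task 2 accuracy reduces to key matching. A sketch, assuming each question also carries the UUID-style id used in the submission format below:

def vqa_accuracy(questions: list[dict], predictions: dict[str, str]) -> float:
    """Fraction of questions whose predicted option key ("a"-"d") matches "correct".

    `predictions` maps a question id to the predicted option key.
    """
    hits = sum(1 for q in questions if predictions.get(q["id"]) == q["correct"])
    return hits / len(questions)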
Submission Format
Sub-Task 1: Captioning
Results must be provided per scenario. For multi-view scenarios, use the scenario index as the key. For scenarios in the normal_trimmed folder, caption results are required for every video in the test set.
Note: Unlike the training data, the segment timestamps are not required for the test data submission. The segment labels (e.g., "4", "3") are known and will be provided.
{
  "20230707_12_SN17_T1": [
    {
      "labels": ["4"],
      "caption_pedestrian": "The pedestrian stands still on the left, looking toward the approaching traffic...",
      "caption_vehicle": "The vehicle was positioned diagonally to the intersection, slowing down..."
    },
    {
      "labels": ["3"],
      "caption_pedestrian": "The pedestrian begins to step off the curb...",
      "caption_vehicle": "The vehicle continues its approach without significant deceleration..."
    },
    {
      "labels": ["2"],
      "caption_pedestrian": "...",
      "caption_vehicle": "..."
    },
    {
      "labels": ["1"],
      "caption_pedestrian": "...",
      "caption_vehicle": "..."
    },
    {
      "labels": ["0"],
      "caption_pedestrian": "...",
      "caption_vehicle": "..."
    }
  ]
}
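A sketch of assembling this structure programmatically; caption_model is a hypothetical stand-in for a trained captioning pipeline, and the phase labels follow the ordering in the sample above:

import json

PHASE_LABELS = ["4", "3", "2", "1", "0"]  # the five phases, as ordered in the sample

def build_subtask1_submission(scenarios, caption_model):
    """Build the sub-task 1 JSON: one entry per scenario, one dict per phase."""
    return {
        scenario: [
            {
                "labels": [label],
                "caption_pedestrian": caption_model(scenario, label, "pedestrian"),
                "caption_vehicle": caption_model(scenario, label, "vehicle"),
            }
            for label in PHASE_LABELS
        ]
        for scenario in scenarios
    }

# Example (hypothetical scenario list and model):
# submission = build_subtask1_submission(test_scenarios, caption_model)
# with open("subtask1_submission.json", "w") as f:
#     json.dump(submission, f, indent=2)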
Sub-Task 2: VQA
Results are required per scenario/question. Participants must provide the predicted option label as follows:
[
  {
    "id": "3c8c80e3-33f1-4133-a86c-1192c8a26159",
    "correct": "a"
  },
  {
    "id": "be2f113a-c387-4987-befd-32a9c6dc488a",
    "correct": "b"
  }
]
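And a matching sketch for serializing sub-task 2 predictions to the required list-of-dicts form (the ids below are the illustrative ones from the sample):

import json

# Map of question id -> predicted option key, e.g. the `predictions` dict above.
predicted = {
    "3c8c80e3-33f1-4133-a86c-1192c8a26159": "a",
    "be2f113a-c387-4987-befd-32a9c6dc488a": "b",
}

entries = [{"id": qid, "correct": option} for qid, option in predicted.items()]
with open("subtask2_submission.json", "w") as f:
    json.dump(entries, f, indent=2)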
Technical Specifications & Limitations
Digital Twin Characteristics
- Environmental Fidelity: Roads and buildings are a close geometric match to real-world WTS locations.
- No 3D Gaze: Unlike the original WTS, 3D gaze and head bounding boxes are not included due to simulation constraints.
- Character Dynamics: Poses are simulated and may not perfectly replicate real-world physics.
- Object Limitations: Characters do not hold hand-held objects (umbrellas, phones) that may appear in the real-world test set. Labels/VQA have been adjusted accordingly.
Test Set
This release includes only the train and val splits. The test set will be the "internal" ("main") subset of the WTS dataset. Note that the WTS dataset also contains a BDD_PC_5K subset in its train/val/test splits; it will not be used for this challenge, since synthetic versions of those scenarios are not included in our training and validation sets.
Release Schedule
- Initial Release: 80 scenarios (May 1, 2026)
- Mid-May Update: 144 scenarios (May 11, 2026)
- Final Dataset: ~249 scenarios total (expected May 25, 2026)
Team & Credits
Santa Clara University
Dhanishtha Patil, Ridham Kachhadiya, Andrew Vattuone, and David C. Anastasiu
NVIDIA
Haoquan Liang, Jiajun Li, Yuxing Wang, and Thomas Tang
Woven by Toyota
Ashutosh Kumar and Quan Kong
Point of Contact:
For questions regarding the SynWTS dataset or the AI City Challenge Track 2, please contact:
David C. Anastasiu
Email: danastasiu@scu.edu
Citation
Please cite the original WTS paper and the 2026 AI City Challenge:
@article{kong2024wts,
  title={WTS: A Pedestrian-Centric Traffic Video Dataset for Fine-grained Spatial-Temporal Understanding},
  author={Kong, Quan and Kumar, Ashutosh and others},
  journal={arXiv preprint arXiv:2407.15350},
  year={2024}
}
Stay tuned for an updated citation to our dataset paper.
Changelog
- 2026-05-01: Initial release of SynWTS (80 scenarios).
- 2026-05-11: Added 64 scenarios, for a total of 144 scenarios.