Abstract
SciCoQA is a dataset for identifying mismatches between scientific publications and their code implementations. It contains 611 discrepancies across multiple disciplines, and detecting these mismatches remains challenging even for advanced language models.
We present SciCoQA, a dataset for detecting discrepancies between scientific publications and their codebases, with the goal of ensuring faithful implementations. We construct SciCoQA from GitHub issues and reproducibility papers, and to scale the dataset we propose a synthetic data generation method for constructing paper-code discrepancies. We analyze the paper-code discrepancies in detail and propose discrepancy types and categories to better characterize the mismatches that occur. In total, our dataset consists of 611 paper-code discrepancies (81 real, 530 synthetic) spanning diverse computational science disciplines, including AI, Physics, Quantitative Biology, and others. Our evaluation of 21 LLMs highlights the difficulty of SciCoQA, particularly for instances involving omitted paper details, long-context inputs, and data outside the models' pre-training corpora. The best-performing model in our evaluation, GPT-5, detects only 45.7% of real-world paper-code discrepancies.
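To make the detection task concrete, below is a minimal sketch of prompting an LLM with a paper excerpt and a code snippet and asking whether the implementation is faithful. The prompt format, the example discrepancy, and the model name are illustrative assumptions, not the paper's actual evaluation setup.

```python
# Sketch of the paper-code discrepancy detection task (assumptions, not the
# authors' pipeline): give a model a paper excerpt plus code and ask whether
# the code matches the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

paper_excerpt = (
    "We apply dropout with probability 0.5 after every attention layer "
    "and train for 100 epochs with a learning rate of 1e-4."
)
code_snippet = """
model = TransformerEncoder(num_layers=6, dropout=0.1)   # dropout differs from the paper
optimizer = Adam(model.parameters(), lr=1e-4)
train(model, optimizer, epochs=100)
"""

prompt = (
    "Paper excerpt:\n" + paper_excerpt + "\n\n"
    "Code:\n" + code_snippet + "\n\n"
    "Does the code faithfully implement the paper? "
    "Answer 'consistent' or 'discrepancy' and briefly explain."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model; the paper evaluates 21 LLMs, including GPT-5
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

In this toy instance the dropout probability (0.5 in the paper vs. 0.1 in the code) is the planted discrepancy; real instances in SciCoQA involve full papers and repositories, which is what makes the long-context setting hard.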
Community
We introduce the SciCoQA dataset for evaluating models on detecting discrepancies between papers and their code. Find all resources here:
- Paper: arXiv
- Data: Hugging Face Dataset (see the loading sketch after this list)
- Code: GitHub
- Demo: Hugging Face Space
- Project Page: UKPLab/scicoqa
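If the dataset is hosted on the Hub under an ID matching the project page, it can be loaded with the `datasets` library. The dataset ID, split name, and field access below are assumptions; consult the dataset card for the actual schema.

```python
# Sketch of loading SciCoQA from the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset("UKPLab/scicoqa")  # hypothetical ID, inferred from the project page
print(ds)

example = ds["train"][0]  # split name is an assumption
print(example.keys())     # inspect the fields (paper text, code, discrepancy labels, etc.)
```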
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- FLAWS: A Benchmark for Error Identification and Localization in Scientific Papers (2025)
- VeriSciQA: An Auto-Verified Dataset for Scientific Visual Question Answering (2025)
- Sphinx: Benchmarking and Modeling for LLM-Driven Pull Request Review (2026)
- CodeFuse-CommitEval: Towards Benchmarking LLM's Power on Commit Message and Code Change Inconsistency Detection (2025)
- CodeSimpleQA: Scaling Factuality in Code Large Language Models (2025)
- SWE-Bench++: A Framework for the Scalable Generation of Software Engineering Benchmarks from Open-Source Repositories (2025)
- pdfQA: Diverse, Challenging, and Realistic Question Answering over PDFs (2026)