Model-tuning Via Prompts Makes NLP Models Adversarially Robust • Paper • arXiv:2303.07320 • Published Mar 13, 2023
Scaling Laws for Data Filtering -- Data Curation cannot be Compute Agnostic • Paper • arXiv:2404.07177 • Published Apr 10, 2024
Rethinking LLM Memorization through the Lens of Adversarial Compression • Paper • arXiv:2404.15146 • Published Apr 23, 2024
OpenUnlearning: Accelerating LLM Unlearning via Unified Benchmarking of Methods and Metrics • Paper • arXiv:2506.12618 • Published Jun 14, 2025
BeyondWeb: Lessons from Scaling Synthetic Data for Trillion-scale Pretraining • Paper • arXiv:2508.10975 • Published Aug 14, 2025
TOFU Models w & w/o Knowledge • Collection • 60 items • Updated Jun 30, 2025 • Models with and without TOFU author knowledge, used to evaluate metric faithfulness (see: https://arxiv.org/pdf/2506.12618).