palpit/MLLM-Safety-Study
This is the collection for the CVPR 2025 paper "Do we really need curated malicious data for safety alignment in multi-modal large language models?"