palpit's Collections

MLLM-Safety-Study

This is the collection for the CVPR 2025 paper "Do we really need curated malicious data for safety alignment in multi-modal large language models?"