Active filters: dpo
zake7749/Qwen3-Coder-Next-Open-Code-SFT • Updated • 49.4k • 925 • 12
jondurbin/gutenberg-dpo-v0.1 • Updated • 918 • 452 • 162
llamafactory/DPO-En-Zh-20k • Updated • 20k • 329 • 99
inclusionAI/Ling-Coder-DPO • Updated • 253k • 47 • 10
alexshynkarenk0/UA-Safety-Align-Sample • Updated • 50 • 13 • 1
Crownelius/Creative-Writing-KimiK2.5-Cleaned • Updated • 655 • 68 • 1
datapointai/text-2-image-dpo-human-preferences-full • Updated • 20.8k • 57 • 1
datapointai/text-2-image-dpo-human-preferences • Updated • 5k • 33 • 1
datapointai/text-2-image-dpo-human-preferences-small • Updated • 2.5k • 42 • 1
datapointai/text-2-video-ranking-human-preferences • Updated • 2.01k • 50 • 1
OptiRefine-Official/python-optimization-dpo-sample • Updated • 4 • 6 • 1
animasuri/Ai_ethics_dataset • Updated • 95 • 1
d0rj/synthetic-instruct-gptj-pairwise-ru • Updated • 33.1k • 23 • 2
d0rj/rlhf-reward-datasets-ru • Updated • 81.4k • 40 • 4
d0rj/oasst1_pairwise_rlhf_reward-ru • Updated • 18.9k • 14 • 1
xzuyn/mmlu-auxilary-train-dpo • Updated • 101k • 54 • 2
AlexHung29629/stack-exchange-paired-128K • Updated • 128k • 12 • 1
flyingfishinwater/ultrafeedback_clean • Updated • 175k • 6 • 2
efederici/alpaca-vs-alpaca-orpo-dpo • Updated • 49.2k • 84 • 7
mlabonne/chatml_dpo_pairs • Updated • 12.9k • 58 • 55
argilla/ultrafeedback-binarized-preferences-cleaned • Updated • 60.9k • 8.45k • 162
ThWu/dpo_highest_n_random • Updated • 182k • 8 • 2
BramVanroy/orca_dpo_pairs_dutch • Updated • 11k • 26 • 6
argilla/ultrafeedback-multi-binarized-preferences-cleaned • Updated • 158k • 78 • 7
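The datasets above are preference (DPO) datasets. Many of them share a common record shape: a prompt plus a preferred and a dispreferred completion. The sketch below illustrates that shape and a basic sanity check; the field names (`prompt`/`chosen`/`rejected`) are an assumption about the typical convention, and individual datasets on the Hub do vary, so check each dataset card before relying on them.

```python
# Minimal sketch of a DPO-style preference-pair record.
# Field names are the common convention, not guaranteed for every
# dataset listed above -- consult each dataset card.
from dataclasses import dataclass


@dataclass
class PreferencePair:
    prompt: str    # instruction or conversation prefix
    chosen: str    # response the annotator/judge preferred
    rejected: str  # dispreferred response


def validate(pair: PreferencePair) -> bool:
    """A pair is usable for DPO only if both completions exist and
    actually differ; identical completions carry no preference signal."""
    return bool(pair.prompt) and bool(pair.chosen) \
        and bool(pair.rejected) and pair.chosen != pair.rejected


example = PreferencePair(
    prompt="Write a haiku about the sea.",
    chosen="Salt wind on grey waves,\ngulls trace the morning's first light,\ntide erases names.",
    rejected="The sea is big and wet.",
)
print(validate(example))  # True
```

Filtering out degenerate pairs like this (empty or duplicated completions) is a cheap preprocessing step before handing any of these datasets to a DPO trainer.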