Update README.md

---
license: apache-2.0
---

<p align="center">
<img src="https://raw.githubusercontent.com/VectorSpaceLab/EditScore/refs/heads/main/assets/logo.png" width="65%">
</p>

<p align="center">
<a href="https://arxiv.org/abs/2509.23909"><img src="https://img.shields.io/badge/arXiv%20paper-2509.23909-b31b1b.svg" alt="arxiv"></a>
<a href="https://huggingface.co/collections/EditScore/editscore-68d8e27ee676981221db3cfe"><img src="https://img.shields.io/badge/EditScore-🤗-yellow" alt="model"></a>
<a href="https://huggingface.co/datasets/EditScore/EditReward-Bench"><img src="https://img.shields.io/badge/EditReward--Bench-🤗-yellow" alt="dataset"></a>
<a href="https://huggingface.co/datasets/EditScore/EditScore-Reward-Data"><img src="https://img.shields.io/badge/EditScore--Reward--Data-🤗-yellow" alt="dataset"></a>
<a href="https://huggingface.co/datasets/EditScore/EditScore-RL-Data"><img src="https://img.shields.io/badge/EditScore--RL--Data-🤗-yellow" alt="dataset"></a>
</p>
- **Versatile Applications**: Ready to use as a best-in-class reranker to improve editing outputs, or as a high-fidelity reward signal for **stable and effective Reinforcement Learning (RL) fine-tuning**.

## 🔥 News
- **2025-10-16**: Training datasets [EditScore-Reward-Data](https://huggingface.co/datasets/EditScore/EditScore-Reward-Data) and [EditScore-RL-Data](https://huggingface.co/datasets/EditScore/EditScore-RL-Data) are now available.
- **2025-10-15**: **EditScore** is now available on PyPI — install it easily with `pip install editscore`.
- **2025-10-15**: Best-of-N inference scripts for OmniGen2, Flux-dev-Kontext, and Qwen-Image-Edit are now available! See [Apply EditScore to Image Editing](#apply-editscore-to-image-editing) for details.
- **2025-09-30**: We release **OmniGen2-EditScore7B**, unlocking online RL for image editing via the high-fidelity EditScore reward. LoRA weights are available on [Hugging Face](https://huggingface.co/OmniGen2/OmniGen2-EditScore7B) and [ModelScope](https://www.modelscope.cn/models/OmniGen2/OmniGen2-EditScore7B).
- **2025-09-30**: We are excited to release **EditScore** and **EditReward-Bench**! Model weights and the benchmark dataset are now publicly available on Hugging Face ([Models Collection](https://huggingface.co/collections/EditScore/editscore-68d8e27ee676981221db3cfe), [Benchmark Dataset](https://huggingface.co/datasets/EditScore/EditReward-Bench)) and on ModelScope ([Models Collection](https://www.modelscope.cn/collections/EditScore-8b0d53aa945d4e), [Benchmark Dataset](https://www.modelscope.cn/datasets/EditScore/EditReward-Bench)).

## 📖 Introduction
While Reinforcement Learning (RL) holds immense potential for this domain, its progress has been severely hindered by the absence of a high-fidelity, efficient reward signal.

To overcome this barrier, we provide a systematic, two-part solution:

- **A Powerful & Versatile Tool**: Guided by our benchmark, we developed the **EditScore** model series. Through meticulous data curation and an effective self-ensembling strategy, EditScore sets a new state of the art for open-source reward models, even surpassing the accuracy of leading proprietary VLMs.

<p align="center">
<img src="https://raw.githubusercontent.com/VectorSpaceLab/EditScore/refs/heads/main/assets/table_reward_model_results.png" width="95%">
<br>
<em>Benchmark results on EditReward-Bench.</em>
</p>

This repository releases both the **EditScore** models and the **EditReward-Bench** dataset to facilitate future research in reward modeling, policy optimization, and AI-driven model improvement.

<p align="center">
<img src="https://raw.githubusercontent.com/VectorSpaceLab/EditScore/refs/heads/main/assets/figure_edit_results.png" width="95%">
<br>
<em>EditScore as a superior reward signal for image editing.</em>
</p>

## 📌 TODO
We are actively working on improving EditScore and expanding its capabilities. Here's what's next:
- [x] Release training data for the reward model and online RL.
- [ ] Release RL training code applying EditScore to OmniGen2.
- [x] Provide Best-of-N inference scripts for OmniGen2, Flux-dev-Kontext, and Qwen-Image-Edit.
## 🚀 Quick Start

### 🛠️ Environment Setup
We offer two ways to install EditScore. Choose the one that best fits your needs.

**Method 1: Install from PyPI (Recommended for Users)**: choose this if you want to use EditScore as a library in your own project.

**Method 2: Install from Source (For Developers)**: choose this if you plan to contribute to the code, modify it, or run the examples in this repository.

#### Prerequisites: Installing PyTorch
Both installation methods require PyTorch to be installed first, as its version depends on your system's CUDA setup.
```bash
# (Optional) Create a clean Python environment
conda create -n editscore python=3.12
conda activate editscore

# Choose the command that matches your CUDA version.
# This example is for CUDA 12.6.
pip install torch==2.7.1 torchvision --extra-index-url https://download.pytorch.org/whl/cu126
```

<details>
<summary>🌏 For users in Mainland China</summary>

```bash
# Install PyTorch from a domestic mirror
pip install torch==2.7.1 torchvision --index-url https://mirror.sjtu.edu.cn/pytorch-wheels/cu126
```

</details>
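If you want to confirm that the installed PyTorch build can see your GPU before continuing, a minimal check (assuming the CUDA 12.6 wheel installed above):

```python
# Sanity check: verify the installed PyTorch version and CUDA availability.
import torch

print(torch.__version__)          # e.g. 2.7.1+cu126
print(torch.cuda.is_available())  # should print True on a CUDA-enabled machine
```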
#### Method 1: Install from PyPI (Recommended for Users)
```bash
pip install -U editscore
```

#### Method 2: Install from Source (For Developers)
This method gives you a local, editable version of the project.

1. Clone the repository:
```bash
git clone https://github.com/VectorSpaceLab/EditScore.git
cd EditScore
```

2. Install EditScore in editable mode:
```bash
pip install -e .
```

#### ✅ (Recommended) Install Optional High-Performance Dependencies
For the best performance, especially during inference, we highly recommend installing vLLM.
```bash
pip install vllm
```

---
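With the package installed, EditScore can be called from Python to score an edit. The snippet below is only a rough sketch of such a call: the import path, class name, constructor arguments, and method name are assumptions made for illustration, and only the `final_score` result key is taken from this README; see the repository's usage example for the actual API.

```python
# Rough sketch only: the import path, class name, and method signature below
# are assumptions, not the documented EditScore API. Only the "final_score"
# result key is taken from this README.
from PIL import Image

from editscore import EditScore  # assumed import path

scorer = EditScore(model_path="EditScore/EditScore-7B")  # hypothetical arguments

result = scorer.evaluate(
    input_image=Image.open("input.png"),
    output_image=Image.open("edited.png"),
    instruction="Replace the cat with a corgi.",
)
print(f"Edit Score: {result['final_score']}")
```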
---

## 📊 Benchmark Your Image-Editing Reward Model

#### Install benchmark dependencies
To run the benchmark example code, first install its dependencies:
```bash
pip install -r requirements.txt
```

We provide an evaluation script to benchmark reward models on **EditReward-Bench**. To evaluate your own custom reward model, simply create a scorer class with a similar interface (see the sketch below) and update the script.
```bash
# This script will evaluate the default EditScore model on the benchmark
bash evaluate.sh

# Evaluate with the vLLM backend for faster inference
bash evaluate_vllm.sh
```
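A custom scorer might look roughly like the sketch below. The class and method names are illustrative assumptions rather than a required interface, and only the `final_score` key mirrors what the rest of this README prints; adapt the signature to whatever the evaluation script actually calls.

```python
from PIL import Image


class MyRewardScorer:
    """Illustrative custom scorer for EditReward-Bench (names are assumptions)."""

    def __init__(self, model_path: str):
        # Load your own reward model here.
        self.model_path = model_path

    def evaluate(self, input_image: Image.Image, output_image: Image.Image, instruction: str) -> dict:
        # Score how well `output_image` realizes `instruction` applied to `input_image`.
        score = 0.0  # placeholder: replace with your model's prediction
        return {"final_score": score}
```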

## Apply EditScore to Image Editing
We offer two example use cases for your exploration:
- **Best-of-N selection**: Use EditScore to automatically pick the most preferred image among multiple candidates (see the sketch below).
- **Reinforcement fine-tuning**: Use EditScore as a reward model to guide RL-based optimization.

For detailed instructions and examples, please refer to the [documentation](examples/OmniGen2-RL/README.md).
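As a concrete illustration of the Best-of-N pattern, the sketch below scores each candidate edit and keeps the one the reward model ranks highest. The `score_edit` callable is a stand-in for an actual EditScore call (e.g. something that returns `result["final_score"]`); it is illustrative, not code from this repository.

```python
from typing import Callable, List

from PIL import Image


def best_of_n(
    source_image: Image.Image,
    instruction: str,
    candidates: List[Image.Image],
    score_edit: Callable[[Image.Image, Image.Image, str], float],
) -> Image.Image:
    """Return the candidate edit that the reward model scores highest.

    `score_edit(source, edited, instruction)` is a stand-in for a real
    EditScore call, e.g. one that returns result["final_score"].
    """
    scores = [score_edit(source_image, img, instruction) for img in candidates]
    best_idx = max(range(len(candidates)), key=scores.__getitem__)
    return candidates[best_idx]
```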

## ❤️ Citing Us
If you find this repository or our work useful, please consider giving a star ⭐ and a citation 🦖, which would be greatly appreciated:

```
  journal={arXiv preprint arXiv:2509.23909},
  year={2025}
}
```