# UniRef50 (Processed, ESM-valid as Validation)

## Dataset Summary

This dataset is a **preprocessed UniRef50** snapshot tailored for **unsupervised protein representation learning**. It:

* Normalizes sequences (uppercase, `*` removed), filters by length and ambiguity, and deduplicates by MD5.
* Splits by **UniRef50 cluster ID** to prevent leakage.
* Uses the **official ESM validation headers** as the entire `valid` split (no sampling).
* Provides **JSONL.zst shards** for efficient streaming with 🤗 `datasets`.

> If you need the exact preprocessing steps, see **Preprocessing & Filters** below.

---

## Source

* **Upstream data:** UniProt / UniRef50 (2018_03 snapshot).
* **Evaluation headers:** `uniref201803_ur50_valid_headers.txt` from the ESM paper.

Please respect UniProt terms when using or redistributing this derivative dataset.

---

## Splits

| Split   | Definition                                                               | Notes                                     |
| ------- | ------------------------------------------------------------------------ | ----------------------------------------- |
| `train` | All clusters **not** in the ESM validation set and not hashed into `test` | Majority of UniRef50                      |
| `valid` | **Only** clusters in ESM's validation header list                         | `is_esm_valid=true` for all records       |
| `test`  | Hash-based holdout by cluster: `xxhash64(cluster_id) % 100 == 2`          | Small random holdout                      |

> Splitting by **cluster_id** avoids train/val/test contamination across cluster members.

---

## Features (Schema)

| Field          | Type    | Description                                                           |
| -------------- | ------- | --------------------------------------------------------------------- |
| `id`           | string  | Stable ID: `cluster_id\|md5[:8]`                                      |
| `sequence`     | string  | Normalized AA sequence (uppercase; `*` removed)                       |
| `length`       | int32   | Sequence length after normalization                                   |
| `cluster_id`   | string  | UniRef50 cluster ID (e.g., `UniRef50_Q8WZ42-5`)                       |
| `description`  | string? | Optional description parsed from the FASTA header (after `Cluster:`)  |
| `seq_md5`      | string  | MD5 of the normalized sequence                                        |
| `is_esm_valid` | bool    | `true` iff the record belongs to the ESM validation header set        |

> Ambiguous residues: records with an ambiguity fraction > 5% (non-canonical AAs) are filtered out by default.

---
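The `id` and `seq_md5` fields can be reproduced from a normalized sequence. A minimal sketch, assuming a literal `|` separator with no surrounding spaces (the helper name is ours, not from the release script):

```python
import hashlib

def make_id(cluster_id: str, sequence: str) -> tuple[str, str]:
    # seq_md5: MD5 hex digest of the normalized sequence.
    seq_md5 = hashlib.md5(sequence.encode("ascii")).hexdigest()
    # id: cluster ID joined with the first 8 hex chars of the digest.
    return seq_md5, f"{cluster_id}|{seq_md5[:8]}"

seq_md5, rec_id = make_id("UniRef50_Q8WZ42-5", "MKTAYIAKQR")
```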

## Preprocessing & Filters

* **Normalization:** uppercase; remove terminal/internal `*`.
* **Length filter:** keep `30 ≤ L ≤ 1024`.
* **Ambiguity filter:** keep sequences with ≤ **5%** non-canonical residues (`ACDEFGHIKLMNPQRSTVWY` are canonical).
* **Deduplication:** exact dedup by MD5 of the normalized sequence (global).
* **Splitting:** by `cluster_id` as described above.
* **Headers:** FASTA lines like
  `>UniRef50_Q8WZ42-5 Cluster: Isoform 5 of Titin` → `cluster_id="UniRef50_Q8WZ42-5"`, `description="Isoform 5 of Titin"`.

---
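The rules above can be sketched as follows; the function names and the exact header regex are our assumptions, not the release script:

```python
import re

CANONICAL = set("ACDEFGHIKLMNPQRSTVWY")

def normalize(seq: str) -> str:
    # Uppercase and strip '*' (terminal or internal stop symbols).
    return seq.upper().replace("*", "")

def passes_filters(seq: str, min_len=30, max_len=1024, max_ambig=0.05) -> bool:
    # Length window, then ambiguity fraction (share of non-canonical residues).
    if not (min_len <= len(seq) <= max_len):
        return False
    ambig = sum(1 for aa in seq if aa not in CANONICAL)
    return ambig / len(seq) <= max_ambig

HEADER_RE = re.compile(r"^>(\S+)(?:\s+Cluster:\s*(.*))?$")

def parse_header(line: str):
    # '>UniRef50_Q8WZ42-5 Cluster: Isoform 5 of Titin'
    #   -> ('UniRef50_Q8WZ42-5', 'Isoform 5 of Titin')
    m = HEADER_RE.match(line.strip())
    return (m.group(1), m.group(2)) if m else (None, None)
```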

## Intended Use

* **Self-supervised training** of protein LMs/encoders that must be robust to substitutions and indels (e.g., OT/UOT objectives).
* **Evaluation** aligned with the ESM paper by using the official validation header set for `valid`.

Not intended for clinical use. No personal data.

---
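As one concrete (purely illustrative) example of the self-supervised setup this data targets, here is a minimal BERT-style residue-masking step; the 15% rate and `<mask>` token are our illustrative choices, not part of the dataset or any particular objective:

```python
import random

def mask_residues(seq: str, rate: float = 0.15, seed: int = 0):
    # Replace a random subset of residues with a mask token; a model is then
    # trained to recover the originals (illustrative objective only).
    rng = random.Random(seed)
    tokens, targets = [], []
    for i, aa in enumerate(seq):
        if rng.random() < rate:
            tokens.append("<mask>")
            targets.append((i, aa))
        else:
            tokens.append(aa)
    return tokens, targets

tokens, targets = mask_residues("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
```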

## How to Load (Streaming & Local)

### Streaming (recommended for large shards)

```python
from datasets import load_dataset

repo = "DeepFoldProtein/uniref50_processed"  # replace with your namespace

ds_train = load_dataset(repo, split="train", streaming=True)
row = next(iter(ds_train))
print(row["cluster_id"], row["length"])
```

### Extract ESM-valid subset (within `valid`)

```python
from datasets import load_dataset

repo = "DeepFoldProtein/uniref50_processed"  # replace with your namespace

ds_valid = load_dataset(repo, split="valid", streaming=True)
esm_valid = ds_valid.filter(lambda x: x["is_esm_valid"])
print(next(iter(esm_valid)))
```

### Non-streaming load (small splits only)

```python
from datasets import load_dataset

repo = "DeepFoldProtein/uniref50_processed"  # replace with your namespace

ds_test = load_dataset(repo, split="test")  # materializes locally
print(len(ds_test))
```

---

## Quick Stats Helper

Use this helper to print length statistics per split:

```python
from datasets import load_dataset
import math

def stats(split):
    ds = load_dataset("DeepFoldProtein/uniref50_processed", split=split, streaming=True)
    n = s = s2 = 0
    mn, mx = 10**9, 0
    for r in ds:
        L = int(r.get("length", len(r["sequence"])))
        n += 1; s += L; s2 += L * L
        mn = min(mn, L); mx = max(mx, L)
    mean = s / n if n else float("nan")
    std = math.sqrt(max(0.0, s2 / n - mean * mean)) if n else float("nan")
    return {"count": n, "min": mn, "max": mx, "mean": mean, "std": std}

print(stats("train"))
print(stats("valid"))
print(stats("test"))
```

---

## Licensing

* **Data source:** UniProt / UniRef50. Follow the UniProt license and attribution requirements: [https://www.uniprot.org/help/license](https://www.uniprot.org/help/license)
* **Derivative dataset:** You must attribute UniProt and include a link to their license when redistributing.
* **Code (preprocessing):** Provide your own license for the script if you distribute it.

---

## Citation

If you use this dataset, please cite UniProt and (optionally) ESM:

**UniProt:**

> The UniProt Consortium. *UniProt: the universal protein knowledgebase.* Nucleic Acids Res. (2018)

**ESM:**

> Lin et al. *Evolutionary-scale prediction of atomic-level protein structure with a language model.* Science (2023).

---

## Known Limitations

* **Snapshot drift:** This mirrors UniRef50 (2018_03) conventions; later UniRef releases may differ.
* **Non-random validation:** `valid` is defined by ESM's curated header list (by design).
* **Ambiguity handling:** Sequences with >5% ambiguous residues are dropped; adjust if you need broader coverage.
* **Dedup scope:** Deduplication is by normalized sequence only (not by cluster consensus).

---

## Changelog / Versioning

* **v1.0:** Initial release: ESM-valid set defines `valid`; hash-based `test`; JSONL.zst shards; manifest schema above.
* Future updates will be tagged with semantic versions and described here.

---

## Contact

* **Issues:** Please open a GitHub issue or HF discussion on this dataset repo.