EuroBERT

Activity Feed

tomaarsen posted an update 3 days ago
🤖 I've just published Sentence Transformers v5.5.0, headlined by a new train-sentence-transformers Agent Skill that lets your AI coding agent (Claude Code, Codex, Cursor, Gemini CLI, ...) train and finetune embedding, reranker, and sparse encoder models for you. Plus training losses & fixes. Details:

The skill bundles curated guidance for the whole training workflow across all three model types: base model selection, loss and evaluator choice, hard-negative mining, distillation, LoRA, Matryoshka, multilingual training, static embeddings, etc. It also ships production-ready training template scripts the agent can adapt. Install it with hf skills add train-sentence-transformers, then just describe what you want, e.g. "finetune a reranker on my (question, answer) pairs, mine hard negatives, and push it to the Hub".

On the loss side: EmbedDistillLoss is a new embedding-level distillation loss for SentenceTransformer. Instead of distilling teacher scores like MarginMSELoss, it aligns the student's embeddings directly with pre-computed teacher embeddings, with an optional learnable projection for when the student and teacher dimensions differ. Second, ADRMSELoss is a new listwise learning-to-rank loss for CrossEncoder from the Rank-DistilLLM paper, aimed at the LLM-distillation reranking setting.
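Here's a minimal sketch of that workflow, assuming EmbedDistillLoss follows the library's usual pattern of being constructed from the model it trains, and that the training data pairs each sentence with a pre-computed teacher embedding in a label column (as with existing embedding-distillation setups). The column names, constructor signature, and model choices below are illustrative assumptions, not taken from the release notes.

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

# Teacher (larger, 1024-dim) and student (smaller, 384-dim); placeholder model choices.
teacher = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")
student = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

sentences = ["A cat sits on the mat.", "The weather is sunny today."]

# Pre-compute teacher embeddings once; the student is then trained to match them directly.
teacher_embeddings = teacher.encode(sentences)
train_dataset = Dataset.from_dict({
    "sentence": sentences,
    "label": [embedding.tolist() for embedding in teacher_embeddings],
})

# Assumed signature: constructed from the student like other losses; per the post,
# a learnable projection can bridge the 384-vs-1024 dimension mismatch.
loss = losses.EmbedDistillLoss(model=student)

trainer = SentenceTransformerTrainer(model=student, train_dataset=train_dataset, loss=loss)
trainer.train()
```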

encode() and predict() also gained a per-call processing_kwargs override, so you can change processor settings like max_length, a vision-language model's image resolution, or a video's fps for a single call, without rebuilding the model.
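A short sketch of the per-call override; which keys actually apply depends on the chosen model's processor or tokenizer, so treat the max_length key here as an assumption for a plain text model.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

document = "a fairly long document " * 100

# Default call: uses the processor/tokenizer settings the model was loaded with.
full_embedding = model.encode([document])

# Per-call override: cap tokenization at 64 tokens for this one call,
# without reloading or reconfiguring the model.
short_embedding = model.encode([document], processing_kwargs={"max_length": 64})
```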

The Agent Skill is the part of this release I'm most keen for people to try. Curious to hear how it works for you. I've been using it myself a lot to quickly set up some training runs that immediately use a bunch of best practices.

> pip install sentence-transformers==5.5.0
> hf skills add train-sentence-transformers

The full release notes: https://github.com/huggingface/sentence-transformers/releases/tag/v5.5.0
tomaarsen posted an update about 1 month ago
🌐 I've just published Sentence Transformers v5.4 to make the project fully multimodal for embeddings and reranking. The release also includes a modular CrossEncoder, and automatic Flash Attention 2 input flattening. Details:

You can now use SentenceTransformer and CrossEncoder with text, images, audio, and video, with the same familiar API. That means you can compute embeddings for an image and a text query using model.encode(), compare them with model.similarity(), and it just works. Models like Qwen3-VL-Embedding-2B and jinaai/jina-reranker-m0 are supported out of the box.
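Here's a minimal sketch of that flow, assuming the Qwen3-VL-Embedding-2B checkpoint lives under the Qwen organization (the post only gives the short name) and that a PIL image can be passed straight to encode(); see the blog post linked below for the exact input formats.

```python
from PIL import Image
from sentence_transformers import SentenceTransformer

# Repo id assumed from the model name mentioned above.
model = SentenceTransformer("Qwen/Qwen3-VL-Embedding-2B")

# Embed an image and a text query with the same encode() call ...
image_embedding = model.encode([Image.open("cat.jpg")])
query_embedding = model.encode(["a photo of a cat sleeping on a sofa"])

# ... then compare them with the usual similarity API.
print(model.similarity(query_embedding, image_embedding))
```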

Beyond multimodal, I also fully modularized the CrossEncoder class. It's now a torch.nn.Sequential of composable modules, just like SentenceTransformer has been. This unlocked support for generative rerankers (CausalLM-based models like mxbai-rerank-v2 and the Qwen3 rerankers) via a new LogitScore module, which wasn't possible before without custom code.
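For example, a CausalLM-based reranker can now be loaded and used through the familiar CrossEncoder API. This is a sketch: the mixedbread-ai/mxbai-rerank-base-v2 repo id and its out-of-the-box compatibility are assumptions based on the model family named above.

```python
from sentence_transformers import CrossEncoder

# Generative (CausalLM-based) reranker, scored via the new LogitScore module.
model = CrossEncoder("mixedbread-ai/mxbai-rerank-base-v2")

query = "How do I install Sentence Transformers?"
documents = [
    "Run pip install sentence-transformers to install the library.",
    "The capital of France is Paris.",
]

# Same CrossEncoder API as for classic classifier-head rerankers.
print(model.rank(query, documents))
```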

Also, Flash Attention 2 now automatically skips padding for text-only inputs. If your batch has a mix of short and long texts, this gives you a nice speedup and lower VRAM usage for free.
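Flash Attention 2 itself still has to be requested when loading the model; the padding-free flattening then applies automatically to text-only batches. A sketch, assuming a checkpoint whose architecture has Flash Attention 2 support in Transformers (the model name below is a placeholder) and that the flash-attn package is installed:

```python
import torch
from sentence_transformers import SentenceTransformer

# Any embedding model whose architecture supports Flash Attention 2 in Transformers.
model = SentenceTransformer(
    "nomic-ai/modernbert-embed-base",
    model_kwargs={"attn_implementation": "flash_attention_2", "torch_dtype": torch.float16},
    device="cuda",
)

# Mixed short/long batch: padding is skipped automatically, saving time and VRAM.
texts = ["a short query", "a much longer passage " * 200]
embeddings = model.encode(texts)
```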

I wrote a blog post walking through the multimodal features with practical examples. Check it out if you want to get started, or just point your Agent to the URL: https://huggingface.co/blog/multimodal-sentence-transformers

This release has set up the groundwork for more easily introducing late-interaction models (both text-only and multimodal) into Sentence Transformers in the next major release. I'm looking forward to it!
tomaarsen posted an update 5 months ago
🐦‍🔥 I've just published Sentence Transformers v5.2.0! It introduces multi-processing for CrossEncoder (rerankers), multilingual NanoBEIR evaluators, similarity score outputs in mine_hard_negatives, Transformers v5 support and more. Details:

- CrossEncoder multi-processing: Similar to SentenceTransformer and SparseEncoder, you can now use multi-processing with CrossEncoder rerankers. Useful for multi-GPU and CPU settings, and simple to configure: just device=["cuda:0", "cuda:1"] or device=["cpu"]*4 on the model.predict or model.rank calls (see the sketch after this list).

- Multilingual NanoBEIR Support: You can now use community translations of the tiny NanoBEIR retrieval benchmark instead of only the English one, by passing dataset_id, e.g. dataset_id="lightonai/NanoBEIR-de" for the German benchmark.

- Similarity scores in Hard Negatives Mining: When mining for hard negatives to create a strong training dataset, you can now pass output_scores=True to get similarity scores returned. This can be useful for some distillation losses!

- Transformers v5: This release works with both Transformers v4 and the upcoming v5. In the future, Sentence Transformers will only work with Transformers v5, but not yet!

- Python 3.9 deprecation: Now that Python 3.9 has lost security support, Sentence Transformers no longer supports it.
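Here's a combined sketch of the first two items; the device list and dataset_id arguments follow the descriptions above, while the model and dataset choices are placeholders.

```python
from sentence_transformers import CrossEncoder, SentenceTransformer
from sentence_transformers.evaluation import NanoBEIREvaluator

# CrossEncoder multi-processing: spread predict()/rank() across several devices.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
pairs = [
    ("what is python?", "Python is a high-level programming language."),
    ("what is python?", "A python is a large non-venomous snake."),
]
scores = reranker.predict(pairs, device=["cpu"] * 2)  # or ["cuda:0", "cuda:1"]

# Multilingual NanoBEIR: point the evaluator at a community translation via dataset_id.
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")
evaluator = NanoBEIREvaluator(dataset_names=["msmarco"], dataset_id="lightonai/NanoBEIR-de")
print(evaluator(model))
```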

Check out the full changelog for more details: https://github.com/huggingface/sentence-transformers/releases/tag/v5.2.0

I'm quite excited about what's coming. There's a huge draft PR with a notable refactor in the works that should bring some exciting support. Specifically, better multimodality, rerankers, and perhaps some late interaction in the future!
tomaarsen posted an update 7 months ago
🤗 Sentence Transformers is joining Hugging Face! 🤗 This formalizes the existing maintenance structure, as I've personally led the project for the past two years on behalf of Hugging Face! Details:

Today, the Ubiquitous Knowledge Processing (UKP) Lab is transferring the project to Hugging Face. Sentence Transformers will remain a community-driven, open-source project, with the same open-source license (Apache 2.0) as before. Contributions from researchers, developers, and enthusiasts are welcome and encouraged. The project will continue to prioritize transparency, collaboration, and broad accessibility.

Read our full announcement for more details and quotes from UKP and Hugging Face leadership: https://huggingface.co/blog/sentence-transformers-joins-hf

We see an increasing wish from companies to move from large LLM APIs to local models for better control and privacy, reflected in the library's growth: in just the last 30 days, Sentence Transformer models have been downloaded >270 million times, second only to transformers.

I would like to thank the UKP Lab, and especially Nils Reimers and Iryna Gurevych, both for their dedication to the project and for their trust in me, both now and two years ago. Back then, neither of you knew me well, yet you trusted me to take the project to new heights. That choice ended up being very valuable for the embedding & Information Retrieval community, and I think the decision to grant Hugging Face stewardship will be similarly successful.

I'm very excited about the future of the project, and for the world of embeddings and retrieval at large!

Add QA head · #17 opened about 1 year ago by manu