Unsloth
Activity Feed

danielhanchen posted an update 3 days ago
We collaborated with NVIDIA to teach you how we made LLM training ~25% faster! 🚀

Learn how three optimizations speed up model training on your home GPU:
1. Packed-sequence metadata caching (sketched below)
2. Double-buffered checkpoint reloads
3. Faster MoE routing

Guide: https://unsloth.ai/blog/nvidia-collab
GitHub: https://github.com/unslothai/unsloth
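
The blog walks through all three; as a rough illustration of the first idea, here is a minimal, hypothetical sketch (mine, not Unsloth's implementation) of memoizing the cumulative-sequence-length metadata that variable-length attention kernels consume, keyed on the pattern of sequence lengths:

# Hypothetical sketch of packed-sequence metadata caching; not Unsloth's code.
# Packing concatenates variable-length sequences into one buffer and records
# cumulative boundaries (cu_seqlens) for the attention kernel. The same
# length pattern often recurs across steps, so the metadata is memoized.
from functools import lru_cache
import torch

@lru_cache(maxsize=1024)
def packed_metadata(seq_lens: tuple[int, ...]):
    """Return (cu_seqlens, max_seqlen) for one pack of sequence lengths."""
    cu = torch.zeros(len(seq_lens) + 1, dtype=torch.int32)
    cu[1:] = torch.cumsum(torch.tensor(seq_lens, dtype=torch.int32), dim=0)
    return cu, max(seq_lens)

cu, max_len = packed_metadata((128, 256, 512))   # computed once
cu2, _ = packed_metadata((128, 256, 512))        # served from the cache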
danielhanchen posted an update 6 days ago
We made a guide on how to run open LLMs in Claude Code, Codex and OpenClaw.

Use Gemma 4 and Qwen3.6 GGUFs for local agentic coding on 24GB RAM

Get self-healing tool calls, code execution, and web search via the Unsloth API endpoint and llama.cpp.

Guide: https://unsloth.ai/docs/basics/api
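
Once llama.cpp's llama-server is up, any OpenAI-compatible client can drive it. A minimal sketch, assuming a local server on port 8080 (the model name is a placeholder; the guide covers the agent-specific wiring):

# Minimal sketch: talk to a local llama.cpp server via its OpenAI-compatible
# API. Assumes something like `llama-server -m model.gguf --port 8080` is
# already running; the model name below is a placeholder.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="local-gguf",  # a single-model llama-server accepts any name
    messages=[{"role": "user", "content": "List files changed in the last commit."}],
)
print(resp.choices[0].message.content)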
danielhanchen posted an update 13 days ago
Unsloth is now one of the top 10 most followed organizations on Hugging Face. 🤗🦥

Thanks so much for all the support!
Our HF page: https://huggingface.co/unsloth
danielhanchen posted an update about 1 month ago
A new way to use Unsloth.

Coming soon...
danielhanchen posted an update about 2 months ago
You don't need to set LLM parameters anymore! 🚀

llama.cpp now uses only the context length and compute your local setup needs, and Unsloth auto-applies the correct settings for each model.

Try it in Unsloth Studio, now with precompiled llama.cpp binaries.

GitHub: https://github.com/unslothai/unsloth
danielhanchen posted an update about 2 months ago
Introducing Unsloth Studio ✨
A new open-source web UI to train and run LLMs.

• Run models locally on Mac, Windows, Linux
• Train 500+ models 2x faster with 70% less VRAM
• Supports GGUF, vision, audio, embedding models
• Auto-create datasets from PDF, CSV, DOCX
• Self-healing tool calling and code execution
• Compare models side by side + export to GGUF

GitHub: https://github.com/unslothai/unsloth
Blog and Guide: https://unsloth.ai/docs/new/studio

Available now on Hugging Face, NVIDIA, Docker and Colab.
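
Studio drives the same training stack as the unsloth Python library. For a sense of what a Studio training run does under the hood, here is a minimal library-side sketch (the model name and hyperparameters are illustrative, not recommendations):

# Minimal sketch with the unsloth library, which Studio builds on.
# The model name and hyperparameters are illustrative only.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # any supported model
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit quantization to cut VRAM
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)
# From here, train with TRL's SFTTrainer as in the Unsloth notebooks.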
danielhanchen posted an update about 2 months ago
We collaborated with NVIDIA to teach you about Reinforcement Learning and RL environments. 💚 Learn:

• Why RL environments matter + how to build them
• When RL is better than SFT
• GRPO and RL best practices
• How verifiable rewards and RLVR work (a reward sketch follows below)

Blog: https://unsloth.ai/blog/rl-environments
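
For intuition on verifiable rewards: the reward is computed by a program from the model's output, not by a learned reward model, so it cannot be gamed by sounding confident. A toy sketch (mine, not from the blog):

# Toy verifiable reward for RLVR-style training; illustrative only.
import re

def math_reward(completion: str, answer: str) -> float:
    """1.0 if the last number in the completion matches the answer, else 0.0."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
    return 1.0 if numbers and numbers[-1] == answer else 0.0

# In GRPO, several completions are sampled per prompt and rewards are
# compared within the group, so only relative correctness matters.
print(math_reward("... so the result is 42", "42"))  # 1.0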
danielhanchen posted an update 3 months ago
100,000+ models trained with Unsloth have now been open-sourced on 🤗 Hugging Face! 🦥

Here are the most popular ones you can run locally:
1. TeichAI - GLM-4.7-Flash distilled from Claude 4.5 Opus (high)
2. Zed - Qwen Coder 7B fine-tuned for stronger coding
3. DavidAU - Llama-3.3-8B distilled from Claude 4.5 Opus (high)
4. huihui - gpt-oss made "abliterated"

Links to models:
1. TeichAI: TeichAI/GLM-4.7-Flash-Claude-Opus-4.5-High-Reasoning-Distill-GGUF
2. Zed: zed-industries/zeta
3. DavidAU: DavidAU/Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning
4. huihui: huihui-ai/Huihui-gpt-oss-20b-BF16-abliterated

See all 100K+ models fine-tuned with Unsloth here: https://huggingface.co/models?other=u
danielhanchen posted an update 3 months ago
We collaborated with Hugging Face to enable you to train MoE models 12× faster with 35% less VRAM via our new Triton kernels (no accuracy loss). 🤗

Train gpt-oss locally on 12.8GB VRAM with our free notebooks: https://unsloth.ai/docs/new/faster-moe
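
For intuition about what those kernels accelerate, here is a rough PyTorch sketch (mine, not Unsloth's kernels) of MoE dispatch: a router picks top-k experts per token, then each expert processes its slice. The naive per-expert Python loop below is what fused grouped-GEMM Triton kernels replace:

# Naive MoE routing/dispatch in PyTorch, for intuition only.
import torch

def moe_forward(x, router_w, expert_ws, top_k=2):
    # x: (tokens, d); router_w: (d, n_experts); expert_ws: (n_experts, d, d)
    probs = (x @ router_w).softmax(-1)               # routing probabilities
    weights, experts = torch.topk(probs, top_k)      # per-token expert picks
    out = torch.zeros_like(x)
    for e in range(expert_ws.shape[0]):              # slow per-expert loop
        tok, slot = (experts == e).nonzero(as_tuple=True)
        if tok.numel():
            out[tok] += weights[tok, slot, None] * (x[tok] @ expert_ws[e])
    return out

y = moe_forward(torch.randn(8, 16), torch.randn(16, 4), torch.randn(4, 16, 16))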
danielhanchen posted an update 4 months ago
You can now fine-tune embedding models in our free Unsloth notebook! 🤗

Fine-tuning embedding models improves retrieval & RAG by aligning vectors to your domain-specific notion of similarity, improving search, clustering, and recommendations on your data.

⭐ Blog + Notebooks: https://unsloth.ai/docs/new/embedding-finetuning

Unsloth trains embedding models 1.8-3.3x faster with 20% less VRAM, 2x longer context & no accuracy loss vs. FA2 setups.

We'd like to thank Hugging Face and Unsloth contributor electroglyph for making this possible!
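
Conceptually, embedding fine-tuning is contrastive training on (anchor, positive) pairs; a minimal Sentence Transformers sketch (the model name and toy pairs are placeholders; the notebooks use Unsloth's accelerated path):

# Conceptual sketch of contrastive embedding fine-tuning; the Unsloth
# notebooks wrap this with faster kernels. Names and data are placeholders.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# (anchor, positive) pairs; other items in the batch act as negatives.
pairs = [
    InputExample(texts=["How do I reset my password?",
                        "Steps to change your account password"]),
    InputExample(texts=["Refund policy",
                        "How to get your money back after a purchase"]),
]
loader = DataLoader(pairs, shuffle=True, batch_size=2)
loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)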
danielhanchen posted an update 4 months ago
You can now run reinforcement learning training with 7× longer context and no accuracy loss, via our new batching algorithms.

Long reasoning chains in RL are costly, but now we enable you to train gpt-oss with GRPO & reach 380K context on a 192GB GPU.

Blog: https://unsloth.ai/docs/new/grpo-long-context
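
For a back-of-envelope sense of why long-context RL is memory-bound (my rough numbers, not figures from the blog): materializing the full logits for one 380K-token rollout at a ~200K vocabulary in bf16 already takes 380,000 × 200,000 × 2 bytes ≈ 152 GB, which is why chunked batching is what makes a 192GB GPU viable.

# Back-of-envelope arithmetic; vocab size is an assumption, not from the blog.
ctx, vocab, bytes_per = 380_000, 200_000, 2   # tokens, vocab size, bf16 bytes
print(ctx * vocab * bytes_per / 1e9)          # ~152 GB if materialized at once
chunk = 4_096
print(chunk * vocab * bytes_per / 1e9)        # ~1.6 GB per chunked step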
danielhanchen posted an update 5 months ago
You can now run GLM-4.7, the new 355B-parameter SOTA model, on your local device (128GB RAM). ✨

The model achieves SOTA performance on coding, agentic, and chat benchmarks.

GGUF: unsloth/GLM-4.7-GGUF
Guide: https://docs.unsloth.ai/models/glm-4.7
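
A minimal way to try it from Python, via llama-cpp-python (the quant filename pattern is a placeholder; check the repo for the actual file names):

# Minimal sketch with llama-cpp-python; the filename glob is a placeholder,
# pick a quant per the guide and your available RAM.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="unsloth/GLM-4.7-GGUF",
    filename="*Q2_K*",   # glob for one of the smaller quants
    n_ctx=8192,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
)
print(out["choices"][0]["message"]["content"])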