Recycling Pretrained Checkpoints: Orthogonal Growth of Mixture-of-Experts for Efficient Large Language Model Pre-Training Paper • 2510.08008 • Published Oct 9, 2025 • 5
Optimizing Large Language Model Training Using FP4 Quantization Paper • 2501.17116 • Published Jan 28, 2025 • 36
RetroInfer: A Vector-Storage Approach for Scalable Long-Context LLM Inference Paper • 2505.02922 • Published May 5, 2025 • 28
The Ultra-Scale Playbook 🌌 Space • The ultimate guide to training LLMs on large GPU clusters • 3.65k