Instructions for using moondream/moondream3-preview with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use moondream/moondream3-preview with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="moondream/moondream3-preview", trust_remote_code=True)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("moondream/moondream3-preview", trust_remote_code=True, dtype="auto")
```
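A minimal sketch of calling the pipeline, assuming the image-text-to-text pipeline accepts chat-style messages with an image for this model (the image URL and prompt below are placeholders, not from the model card):

```python
# Hypothetical usage sketch: pass an image plus a prompt as a chat message.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/photo.jpg"},  # placeholder URL
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
result = pipe(text=messages, max_new_tokens=128)
print(result)
```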
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use moondream/moondream3-preview with vLLM:
Install from pip and serve the model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "moondream/moondream3-preview"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "moondream/moondream3-preview",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
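Because the server exposes an OpenAI-compatible API, you can also call it with the openai Python client instead of curl; a minimal sketch (the api_key value is a placeholder, which vLLM ignores unless authentication is configured):

```python
from openai import OpenAI

# Point the client at the local vLLM server; the key is a placeholder.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="moondream/moondream3-preview",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```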
Use Docker

```sh
docker model run hf.co/moondream/moondream3-preview
```
- SGLang
How to use moondream/moondream3-preview with SGLang:
Install from pip and serve the model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "moondream/moondream3-preview" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "moondream/moondream3-preview",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
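Since moondream3-preview is a vision-language model, the more useful request is a chat completion with an image attached. A minimal sketch using the openai client against the SGLang server, assuming its OpenAI-compatible chat endpoint accepts image_url content parts for this model (the image URL and api_key are placeholders):

```python
from openai import OpenAI

# Point the client at the local SGLang server; the key is a placeholder.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="moondream/moondream3-preview",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```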
Use Docker images

```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "moondream/moondream3-preview" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "moondream/moondream3-preview",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use moondream/moondream3-preview with Docker Model Runner:
```sh
docker model run hf.co/moondream/moondream3-preview
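```

Docker Model Runner also exposes an OpenAI-compatible endpoint once its TCP access is enabled (for example via Docker Desktop settings); a hedged sketch, assuming the default port 12434, since the exact path can vary across Docker Desktop versions:

```sh
# Assumption: Model Runner's TCP endpoint is enabled; the port and path
# below may differ in your installation.
curl -X POST "http://localhost:12434/engines/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "hf.co/moondream/moondream3-preview",
    "messages": [{"role": "user", "content": "Once upon a time,"}]
  }'
```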
Rotary position embedding (RoPE) helpers from the model code:

```python
# Ethically sourced from https://github.com/xjdr-alt/entropix
import torch


def precompute_freqs_cis(
    dim: int,
    end: int,
    theta: float = 1500000.0,
    dtype: torch.dtype = torch.float32,
) -> torch.Tensor:
    # Per-pair rotary frequencies: theta^(-2i/dim) for i in [0, dim // 2).
    freqs = 1.0 / (theta ** (torch.arange(0, dim, 2, dtype=dtype)[: (dim // 2)] / dim))
    t = torch.arange(end, dtype=dtype).unsqueeze(1)
    freqs = t * freqs.unsqueeze(0)
    freqs = torch.exp(1j * freqs)  # e^(i * position * freq) = cos + i*sin
    # Shape: (end, dim // 2, 2), holding (cos, sin) per position and frequency.
    return torch.stack([freqs.real, freqs.imag], dim=-1)


def apply_rotary_emb(
    x: torch.Tensor,
    freqs_cis: torch.Tensor,
    position_ids: torch.Tensor,
    num_heads: int,
    rot_dim: int = 32,
    interleave: bool = False,
) -> torch.Tensor:
    assert rot_dim == freqs_cis.shape[-2] * 2
    assert num_heads == x.shape[1]

    # Only the first rot_dim channels are rotated; the rest pass through.
    x_rot, x_pass = x[..., :rot_dim], x[..., rot_dim:]

    if interleave:
        # Even channels hold the real parts, odd channels the imaginary parts.
        xq_r = x_rot.float().reshape(*x_rot.shape[:-1], -1, 2)[..., 0]
        xq_i = x_rot.float().reshape(*x_rot.shape[:-1], -1, 2)[..., 1]
    else:
        # First half of the channels are real parts, second half imaginary.
        d_q = x_rot.shape[-1] // 2
        xq_r, xq_i = x_rot[..., :d_q], x_rot[..., d_q:]

    # Gather (cos, sin) for the requested positions; broadcast over batch/heads.
    freqs_cos = freqs_cis[..., 0][position_ids, :].unsqueeze(0).unsqueeze(0)
    freqs_sin = freqs_cis[..., 1][position_ids, :].unsqueeze(0).unsqueeze(0)

    # Complex multiplication: (a + bi) * (c + di) = (ac - bd) + (ad + bc)i
    xq_out_r = xq_r * freqs_cos - xq_i * freqs_sin
    xq_out_i = xq_r * freqs_sin + xq_i * freqs_cos
    xq_out = torch.stack((xq_out_r, xq_out_i), dim=-1).flatten(-2)

    return torch.cat([xq_out.to(x.dtype), x_pass], dim=-1)
```
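A quick shape sanity-check for these helpers; a minimal sketch with made-up dimensions (the batch size, head count, sequence length, and head dimension are arbitrary placeholders):

```python
# Hypothetical shapes for illustration only.
batch, heads, seq, head_dim = 2, 8, 16, 64

x = torch.randn(batch, heads, seq, head_dim)
# Table of (cos, sin) values for up to 4096 positions over 32 rotary dims.
freqs_cis = precompute_freqs_cis(dim=32, end=4096)  # (4096, 16, 2)
position_ids = torch.arange(seq)

out = apply_rotary_emb(x, freqs_cis, position_ids, num_heads=heads, rot_dim=32)
assert out.shape == x.shape  # only the first 32 channels were rotated
```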