---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-Next-80B-A3B-Instruct
library_name: mlx-lm
pipeline_tag: text-generation
tags:
- mlx
- text-generation
- qwen
- mxfp4
- libraxisai
- MoE
- apple-silicon
- quantized
inference: false
widget:
- text: Summarize the operational risks in this deployment plan.
  example_title: Reasoning prompt
---
# Qwen3-Next-80B-A3B-Instruct-MLX-MXFP4
Qwen3-Next-80B-A3B-Instruct-MLX-MXFP4 is an MLX MXFP4 checkpoint derived from Qwen/Qwen3-Next-80B-A3B-Instruct, intended for local text generation on Apple Silicon.
## Intended use
- Local text generation and chat-style prompting on Apple Silicon
- MLX-LM experimentation with the declared upstream model family
- Offline or operator-controlled inference workflows
## Out of scope
- Safety-critical decisions without domain expert review
- Claims of benchmark superiority not backed by published evaluation data
- Non-MLX runtime guarantees; this card documents the shipped HF checkpoint, not every possible serving stack
## Training and conversion metadata
| Parameter | Value |
|---|---|
| Repository | LibraxisAI/Qwen3-Next-80B-A3B-Instruct-MLX-MXFP4 |
| Base model | Qwen/Qwen3-Next-80B-A3B-Instruct |
| Task | text-generation |
| Library | mlx-lm |
| Format | MLX / Apple Silicon checkpoint |
| Quantization | MXFP4 |
| Architecture | Qwen3NextForCausalLM |
| Model files | 9 |
| Config model_type | qwen3_next |
This card only reports metadata present in the Hugging Face repository, existing card frontmatter, or public config files. Missing benchmark, dataset, or training-run details are left explicit rather than reconstructed.
## Usage

### CLI

```bash
pip install mlx-lm
mlx_lm.generate \
  --model LibraxisAI/Qwen3-Next-80B-A3B-Instruct-MLX-MXFP4 \
  --prompt "Summarize the key signals in this document and list the next action items." \
  --max-tokens 400
```
### Python

```python
from mlx_lm import load, generate

model, tokenizer = load("LibraxisAI/Qwen3-Next-80B-A3B-Instruct-MLX-MXFP4")
prompt = "Summarize the key signals in this document and list the next action items."
response = generate(model, tokenizer, prompt=prompt, max_tokens=400)
print(response)
```
### Multi-turn with the chat template

This checkpoint follows the tokenizer/chat-template contract inherited from Qwen/Qwen3-Next-80B-A3B-Instruct when the template is present in the repository:

```python
from mlx_lm import load, generate

model, tokenizer = load("LibraxisAI/Qwen3-Next-80B-A3B-Instruct-MLX-MXFP4")
messages = [
    {"role": "user", "content": "Summarize the key signals in this document and list the next action items."},
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
response = generate(model, tokenizer, prompt=prompt, max_tokens=400)
print(response)
```
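For reference, Qwen-family chat templates follow the ChatML convention. The sketch below builds an equivalent prompt string by hand so you can see roughly what `apply_chat_template` returns; the authoritative template (including any default system message) lives in the repository's tokenizer config, so treat this as an approximation rather than the exact output.

```python
def chatml_prompt(messages):
    """Build a ChatML-style prompt string by hand, mirroring what
    tokenizer.apply_chat_template(..., add_generation_prompt=True,
    tokenize=False) produces for Qwen-family templates."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    # The generation prompt: the model continues from the assistant turn.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [{"role": "user", "content": "List three risks."}]
print(chatml_prompt(messages))
```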
## Example output

No public sample output is currently declared for this checkpoint. Run the usage examples above against your own prompts to inspect behavior.
## Quantization notes

| Aspect | Original/base checkpoint | This checkpoint |
|---|---|---|
| Lineage | Qwen/Qwen3-Next-80B-A3B-Instruct | LibraxisAI/Qwen3-Next-80B-A3B-Instruct-MLX-MXFP4 |
| Runtime target | Upstream runtime format | MLX on Apple Silicon |
| Quantization | Base precision or upstream-declared format | MXFP4 |
| Published quality delta | Not declared in public metadata | Not declared in public metadata |
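For intuition about the format: MXFP4 stores weights as 4-bit FP4 (E2M1) elements that share one power-of-two scale per small block (32 elements in the OCP Microscaling spec). The toy quantizer below illustrates the rounding behavior on a single block. It is an illustrative sketch of the scheme, not the exact kernel the MLX runtime uses.

```python
import math

# Representable magnitudes of an FP4 E2M1 element (sign stored separately).
FP4_MAGNITUDES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_block_mxfp4(block):
    """Quantize one block of floats with a shared power-of-two scale
    plus one FP4 code per element; returns the dequantized values so
    the rounding error is easy to inspect."""
    amax = max(abs(x) for x in block)
    if amax == 0.0:
        return [0.0] * len(block)
    # Shared scale: smallest power of two that brings the block
    # maximum within FP4 range (largest magnitude is 6.0).
    scale = 2.0 ** math.ceil(math.log2(amax / 6.0))
    out = []
    for x in block:
        mag = min(FP4_MAGNITUDES, key=lambda v: abs(abs(x) / scale - v))
        out.append(math.copysign(mag * scale, x))
    return out

# Values already on the FP4 grid survive exactly; others round.
print(quantize_block_mxfp4([1.5, -3.0, 6.0, 0.0]))   # -> [1.5, -3.0, 6.0, 0.0]
print(quantize_block_mxfp4([0.9, -0.1, 0.25, 0.7]))  # -> [1.0, -0.125, 0.25, 0.75]
```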
## Limitations
- No public benchmarks for this checkpoint are declared in the model metadata.
- No public benchmark claims are made by this card unless listed in the frontmatter.
- Validate outputs on your own domain data before relying on this checkpoint.
- Memory use and speed depend heavily on the exact Apple Silicon generation, unified-memory size, and prompt length.
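As a rough sizing aid (an estimate, not a measured figure): MXFP4 costs about 4.25 bits per weight once the shared scales are counted (4-bit elements plus an 8-bit scale per 32-weight block), so the weights alone for ~80B parameters need on the order of 40 GiB of unified memory. Activations, KV cache, and any tensors kept at higher precision (e.g. embeddings or norms) add to that.

```python
# Back-of-the-envelope weight footprint for MXFP4 (estimate only).
params = 80e9                   # ~80B total parameters (MoE; ~3B active per token)
bits_per_weight = 4 + 8 / 32    # 4-bit elements + one 8-bit scale per 32-weight block
weight_bytes = params * bits_per_weight / 8
print(f"~{weight_bytes / 1024**3:.0f} GiB for the quantized weights alone")
```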
## License

apache-2.0. A base model is declared, so check the upstream Qwen/Qwen3-Next-80B-A3B-Instruct license as well.
## Citation

```bibtex
@misc{libraxisai-qwen3-next-80b-a3b-instruct-mlx-mxfp4,
  title = {Qwen3-Next-80B-A3B-Instruct-MLX-MXFP4},
  author = {LibraxisAI},
  year = {2026},
  howpublished = {\url{https://huggingface.co/LibraxisAI/Qwen3-Next-80B-A3B-Instruct-MLX-MXFP4}},
  note = {MLX checkpoint published by LibraxisAI}
}
```
## Inference tested on

No inference hardware or test results are declared for this checkpoint yet.

## Related

- Base model: Qwen/Qwen3-Next-80B-A3B-Instruct
With AI Agents by VetCoders (c) 2024-2026 LibraxisAI