---
pipeline_tag: image-text-to-text
base_model:
- Qwen/Qwen3-VL-235B-A22B-Thinking
---

This is an MXFP4_MOE quantization of the model Qwen3-VL-235B-A22B-Thinking.

Original model: https://huggingface.co/unsloth/Qwen3-VL-235B-A22B-Thinking-1M

This is the version from unsloth that expands the context size from 256k to 1M.

Download the latest llama.cpp to use it; a usage sketch follows below.
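
A minimal sketch of running the quantized GGUF with llama.cpp's `llama-cli`. The GGUF file name and the parameter values are assumptions for illustration; substitute the actual file name(s) from this repository and settings that fit your hardware.

```bash
# Minimal text-only sketch using llama.cpp's llama-cli.
# The GGUF file name below is an assumption; replace it with the actual
# file from this repository (it may be split into multiple parts).
./llama-cli \
  -m Qwen3-VL-235B-A22B-Thinking-1M-MXFP4_MOE.gguf \
  -c 32768 \
  -ngl 99 \
  -p "Summarize the key ideas behind mixture-of-experts models."
```

Here `-c` sets the context window (the 1M variant supports much larger values if you have the memory) and `-ngl` controls how many layers are offloaded to the GPU. For image inputs, llama.cpp provides multimodal tooling (`llama-mtmd-cli` with a matching `--mmproj` projector file); whether a projector file is included here is not stated in this card.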