Instructions to use techwithsergiu/Qwen3.5-text-9B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use techwithsergiu/Qwen3.5-text-9B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="techwithsergiu/Qwen3.5-text-9B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("techwithsergiu/Qwen3.5-text-9B")
model = AutoModelForCausalLM.from_pretrained("techwithsergiu/Qwen3.5-text-9B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use techwithsergiu/Qwen3.5-text-9B with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "techwithsergiu/Qwen3.5-text-9B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "techwithsergiu/Qwen3.5-text-9B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
- SGLang
How to use techwithsergiu/Qwen3.5-text-9B with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "techwithsergiu/Qwen3.5-text-9B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "techwithsergiu/Qwen3.5-text-9B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "techwithsergiu/Qwen3.5-text-9B" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "techwithsergiu/Qwen3.5-text-9B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

- Docker Model Runner
How to use techwithsergiu/Qwen3.5-text-9B with Docker Model Runner:
```shell
docker model run hf.co/techwithsergiu/Qwen3.5-text-9B
```
Qwen3.5-text-9B
Text-only bf16 derivative of Qwen/Qwen3.5-9B.
The visual tower (vision encoder, image merger, video preprocessor) has been removed. All text-backbone weights are identical to the original — no retraining, no weight changes, no quality loss for text tasks.
Primary use case: an intermediate model for GGUF conversion, or for a CPU-side f16 merge after LoRA training. For direct fine-tuning, use techwithsergiu/Qwen3.5-text-9B-bnb-4bit.
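As a sketch of the GGUF path mentioned above (commands and paths are illustrative, not from this card; `convert_hf_to_gguf.py` ships with llama.cpp, and exact flags may vary by version):

```shell
# Download the text-only checkpoint, then convert with llama.cpp's converter.
# Adjust the paths to your local llama.cpp checkout.
huggingface-cli download techwithsergiu/Qwen3.5-text-9B --local-dir Qwen3.5-text-9B
python llama.cpp/convert_hf_to_gguf.py Qwen3.5-text-9B \
    --outtype bf16 \
    --outfile qwen3.5-text-9b-bf16.gguf
```

Further quantization (e.g. to Q4_K_M) would then be done with llama.cpp's `llama-quantize` on the resulting bf16 GGUF.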
What was changed
- Visual tower removed: `visual`, `image_newline`, `patch_embed`, and related keys stripped from the safetensors shards
- `config.json` updated: `architectures` → `Qwen3_5ForCausalLM`; `vision_config` removed
- `tokenizer_config.json` and `chat_template.jinja`: image/video branches stripped from the Jinja2 chat template, preventing tokenizer errors when no image is provided
- Vision-specific sidecar files omitted (`preprocessor_config.json`, `processor_config.json`, `video_preprocessor_config.json`)
- All text weights remain at bf16
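The key-stripping step can be sketched on a toy state dict as follows (a minimal illustration only; the actual conversion walks the safetensors shards and is handled by the toolkit linked below, with the prefixes taken from the list above):

```python
# Minimal sketch of visual-tower removal: drop any weight whose key
# starts with a vision-related prefix, keep everything else untouched.
VISION_PREFIXES = ("visual", "image_newline", "patch_embed")

def strip_vision_keys(state_dict):
    """Return a copy of the state dict without vision-tower weights."""
    return {
        key: value
        for key, value in state_dict.items()
        if not key.startswith(VISION_PREFIXES)
    }

toy = {
    "model.embed_tokens.weight": 1,
    "visual.blocks.0.attn.qkv.weight": 2,
    "patch_embed.proj.weight": 3,
    "lm_head.weight": 4,
}
print(sorted(strip_vision_keys(toy)))
```

Because only keys are filtered, the surviving text-backbone tensors are byte-identical to the originals, which is what makes the "no retraining, no quality loss" claim hold.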
Model family
| Model | Type | Base model |
|---|---|---|
| Qwen/Qwen3.5-9B | f16 · VLM · source | — |
| techwithsergiu/Qwen3.5-9B-bnb-4bit | BNB NF4 · VLM | Qwen/Qwen3.5-9B |
| techwithsergiu/Qwen3.5-text-9B | bf16 · text-only | Qwen/Qwen3.5-9B |
| techwithsergiu/Qwen3.5-text-9B-bnb-4bit | BNB NF4 · text-only | Qwen3.5-text-9B |
| techwithsergiu/Qwen3.5-text-9B-GGUF | GGUF quants | Qwen3.5-text-9B |
Removing the visual tower saves ~0.19 GB (0.8B), ~0.62 GB (2B / 4B), or ~0.85 GB (9B). The relative saving is larger for smaller models.
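As a back-of-the-envelope check (my arithmetic, not from the card, and assuming decimal gigabytes): bf16 stores 2 bytes per parameter, so each quoted saving implies a vision tower of roughly half that many billion parameters:

```python
# bf16 = 2 bytes/param, so saving S GB ~ S/2 billion vision-tower params.
BYTES_PER_BF16_PARAM = 2

def vision_params_billions(saved_gb):
    """Approximate vision-tower parameter count (in billions) from GB saved."""
    return saved_gb * 1e9 / BYTES_PER_BF16_PARAM / 1e9

for size, saved_gb in [("0.8B", 0.19), ("2B/4B", 0.62), ("9B", 0.85)]:
    print(f"{size}: ~{vision_params_billions(saved_gb):.2f}B vision params")
```

This also illustrates why the relative saving shrinks with model size: the vision tower grows much more slowly than the text backbone across the family.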
Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "techwithsergiu/Qwen3.5-text-9B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "What is the capital of Romania?"}]

# Thinking OFF — direct answer
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(response)

# Thinking ON — chain-of-thought before the answer
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024)
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(response)
```
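With thinking enabled, Qwen-style models emit their reasoning in a `<think>…</think>` block before the final answer. If the tags survive decoding (they may be stripped when `skip_special_tokens=True`), a small helper (a sketch, not part of this card) can separate reasoning from answer:

```python
# Sketch: split a response into (thinking, answer), assuming the
# reasoning is delimited by <think>...</think> ahead of the answer.
def split_thinking(response: str):
    head, sep, tail = response.partition("</think>")
    if not sep:
        return "", response.strip()  # no thinking block present
    thinking = head.replace("<think>", "").strip()
    return thinking, tail.strip()

demo = "<think>The user asks about Romania.</think>The capital of Romania is Bucharest."
thinking, answer = split_thinking(demo)
print(answer)
```

To keep the tags in the decoded text, decode with `skip_special_tokens=False` and strip the remaining special tokens yourself.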
Fine-tuning
This model is an intermediate artifact, not a direct training target. For fine-tuning, use techwithsergiu/Qwen3.5-text-9B-bnb-4bit, the BNB-quantized version of this model.
Training pipeline (QLoRA · Unsloth · TRL): github.com/techwithsergiu/qwen-qlora-train
Pipeline diagram
Conversion
Converted using qwen35-toolkit — a Python toolkit for BNB quantization, visual tower removal, verification and HF Hub publishing of Qwen3.5 models.
Acknowledgements
Based on Qwen/Qwen3.5-9B by the Qwen Team. If you use this model in research, please cite the original:
```bibtex
@misc{qwen3.5,
    title  = {{Qwen3.5}: Towards Native Multimodal Agents},
    author = {{Qwen Team}},
    month  = {February},
    year   = {2026},
    url    = {https://qwen.ai/blog?id=qwen3.5}
}
```