
Qwen3-VisionCaption-2B

Qwen3-VisionCaption-2B is an abliterated (uncensored) v1.0 variant built on Qwen3-VL-2B-Instruct-abliterated-v1, optimized for high-precision image captioning and unrestricted visual analysis. It is engineered for robust caption generation, detailed reasoning, and unrestricted descriptive understanding across diverse visual and multimodal contexts.

Key Highlights

  • Abliterated, uncensored captioning for descriptive and reasoning-rich outputs.
  • High-fidelity captions for general, artistic, technical, synthetic, abstract, and low-context images.
  • Consistent performance across wide, tall, square, panoramic, and irregular visual formats.
  • Adjustable detail control, from brief summaries to fine-grained reasoning (see the example prompts after this list).
  • Built on the Qwen3-VL-2B architecture, with strong multimodal reasoning and instruction following.
  • Multilingual output through prompt engineering, for example by requesting the caption in a target language.
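
Detail level and output language are steered purely through the text prompt rather than dedicated parameters. The phrasings below are illustrative assumptions, not fixed control strings, and reuse the chat message format shown in the Quick Start section.

# Illustrative prompt variants; detail level and language are requested in plain text.
brief_prompt    = {"type": "text", "text": "Give a one-sentence caption for this image."}
detailed_prompt = {"type": "text", "text": "Describe this image in fine-grained detail and explain your reasoning."}
french_prompt   = {"type": "text", "text": "Caption this image in French."}

def build_messages(image_url, text_block):
    # One user turn containing the image plus the chosen instruction
    return [{"role": "user", "content": [{"type": "image", "image": image_url}, text_block]}]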

Datasets

This model was fine-tuned on the following datasets:

  • prithivMLmods/blip3o-caption-mini-arrow: a curated, high-quality dataset with multi-style captions oriented toward descriptive, reasoning-rich visual interpretation.
  • prithivMLmods/Caption3o-Opt-v2: an optimized caption dataset targeting precision, context understanding, and descriptive generalization across diverse visual categories.
  • Private and unlisted datasets curated for uncensored, domain-specific image captioning, focused on unrestricted visual understanding beyond standard filtered datasets.

The training objective focused on improving unconstrained descriptive image captioning, particularly for edge cases and visual categories that are typically filtered out of standard captioning benchmarks.
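
The two public datasets can be pulled from the Hub for inspection; a minimal sketch using the datasets library (split names and column layout are assumptions and may differ per dataset):

from datasets import load_dataset

# Download the public caption data and print its splits, columns, and row counts
ds = load_dataset("prithivMLmods/blip3o-caption-mini-arrow")
print(ds)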

Quick Start with Transformers

from transformers import Qwen3VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch

# Load the checkpoint in its native precision and shard it across available devices
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Qwen3-VisionCaption-2B", torch_dtype="auto", device_map="auto"
)

processor = AutoProcessor.from_pretrained("prithivMLmods/Qwen3-VisionCaption-2B")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Provide a detailed caption and reasoning for this image."},
        ],
    }
]

# Render the chat template and extract the image/video inputs from the messages
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)

inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Generate the caption, then strip the prompt tokens from the output before decoding
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
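
For interactive use, the same inputs can be decoded incrementally with transformers' TextStreamer; a minimal sketch (the generation length is illustrative):

from transformers import TextStreamer

# Stream tokens to stdout as they are generated, skipping the echoed prompt
streamer = TextStreamer(processor.tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=256)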

Run with llama.cpp and llama.cpp-based platforms such as Jan, Ollama, and LM Studio.


Find the GGUF quantizations here: https://huggingface.co/prithivMLmods/Qwen3-VisionCaption-2B-GGUF

Intended Use

  • High-precision captioning and reasoning for general-purpose or non-standard visual data.
  • Uncensored analytical captioning for research, red-teaming, and moderation evaluation.
  • Creative and narrative-oriented multimodal tasks.
  • Understanding stylized, synthetic, or complex images with challenging aspect ratios.

Limitations

  • May produce explicit, sensitive, or offensive descriptions depending on visual content.
  • Not recommended for production use where strict safety controls are required.
  • Performance may vary for heavily abstract or synthetic content.
  • Output tone depends on prompt phrasing and detail level requests.