
Qwen3.6-35B-A3B-Uncensored-Aggressive

Qwen3.6-35B-A3B-Uncensored-Aggressive is an abliterated evolution built on top of Qwen/Qwen3.6-35B-A3B. This model applies advanced refusal direction analysis and ablation-based training strategies to reduce internal refusal behaviors while preserving the reasoning and instruction-following strengths of the original architecture. The result is a powerful 35B parameter Mixture-of-Experts language model optimized for detailed responses and improved instruction adherence.

GGUF: https://huggingface.co/prithivMLmods/Qwen3.6-35B-A3B-Uncensored-Aggressive-GGUF

This model is intended for research and learning purposes only. It reduces internal refusal behaviors, and any content generated by it is used at the user’s own risk. The authors and hosting page disclaim any liability for outputs produced by this model. Users are responsible for ensuring safe, ethical, and lawful usage.

Evaluation

| Metric                     | Result                     |
|----------------------------|----------------------------|
| Refusal Rate (harm_bench)  | 0 / 500                    |
| Test Setup                 | 500 random harmful prompts |
| Inference Pipeline         | Transformers               |
| Inference Type             | text-generation            |
| Dataset                    | harm_bench                 |

Note: This model was tested on 500 harmful prompts randomly sampled from the harm_bench dataset, and it refused 0 of the 500. For more details, refer to the harm_bench dataset page.
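A refusal rate like the one above can be measured with a simple heuristic classifier over model completions. The marker list and helper names below are illustrative only, not the actual evaluation harness used for this model:

```python
# Hypothetical refusal-rate helper: flag completions that open with a
# canned refusal phrase, then compute the fraction refused.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "i am unable", "as an ai")

def is_refusal(completion: str) -> bool:
    # Case-insensitive prefix match against common refusal openers.
    return completion.strip().lower().startswith(REFUSAL_MARKERS)

def refusal_rate(completions: list[str]) -> float:
    if not completions:
        return 0.0
    return sum(is_refusal(c) for c in completions) / len(completions)
```

Real evaluations typically use a stronger judge (keyword lists miss soft refusals), but this captures the basic counting scheme behind a "0 / 500" result.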

Key Highlights

  • Advanced Refusal Direction Analysis: Uses targeted activation analysis to identify and mitigate refusal directions within the model’s latent space.

  • Abliterated Aggressive Training: Fine-tuned to significantly reduce refusal patterns while maintaining coherent and detailed outputs.

  • 35B MoE Architecture (A3B): Built on Qwen/Qwen3.6-35B-A3B, offering strong reasoning, scalability, and efficiency through a Mixture-of-Experts design.

  • Improved Instruction Adherence: Optimized to follow complex prompts with minimal unnecessary refusals.

  • High-Capability Deployment: Suitable for advanced research experimentation, powerful local inference setups, and large-scale AI applications.
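As a rough sketch of what refusal-direction ablation means in general (not this model's exact training recipe): estimate the "refusal direction" as the difference of mean activations between harmful and harmless prompts, normalize it, and project it out of each activation vector. Plain Python lists stand in for real hidden states here:

```python
# Illustrative refusal-direction ablation on toy activation vectors.

def mean_vec(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    norm = dot(v, v) ** 0.5
    return [x / norm for x in v]

def refusal_direction(harmful_acts, harmless_acts):
    # Difference-of-means direction between the two prompt sets.
    diff = [h - s for h, s in zip(mean_vec(harmful_acts), mean_vec(harmless_acts))]
    return normalize(diff)

def ablate(v, direction):
    # Remove the component of v along the refusal direction: v - (v . d) d
    proj = dot(v, direction)
    return [x - proj * d for x, d in zip(v, direction)]
```

After `ablate`, the activation has zero component along the estimated direction, which is the core idea behind "abliteration"-style edits.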

Quick Start with Transformers

```shell
pip install transformers==5.2.0
# or install the latest development version from source
pip install git+https://github.com/huggingface/transformers.git
```
```python
from transformers import Qwen3_5MoeForConditionalGeneration, AutoProcessor

# Load the model with automatic dtype selection and device placement
# (device_map="auto" requires the accelerate package).
model = Qwen3_5MoeForConditionalGeneration.from_pretrained(
    "prithivMLmods/Qwen3.6-35B-A3B-Uncensored-Aggressive",
    torch_dtype="auto",
    device_map="auto"
)

processor = AutoProcessor.from_pretrained(
    "prithivMLmods/Qwen3.6-35B-A3B-Uncensored-Aggressive"
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Explain how transformer models work in simple terms."}
        ],
    }
]

# Render the chat messages into a single prompt string.
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Tokenize and move the inputs to the GPU.
inputs = processor(
    text=[text],
    padding=True,
    return_tensors="pt"
).to("cuda")

generated_ids = model.generate(**inputs, max_new_tokens=256)

# generate() returns prompt + completion; strip the prompt tokens
# so only the newly generated text is decoded.
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]

output_text = processor.batch_decode(
    generated_ids_trimmed,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)

print(output_text)
```
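The trimming step in the code above can be seen on toy token IDs: `generate()` returns the prompt concatenated with the completion, so slicing off `len(in_ids)` leaves only the new tokens. The IDs below are made up purely for illustration:

```python
# Toy demonstration of stripping prompt tokens from generate() output.
prompt_ids = [[101, 7592], [101, 2129, 2024]]                  # two prompts of different lengths
full_outputs = [[101, 7592, 2003, 2204], [101, 2129, 2024, 2017, 102]]  # prompt + completion

completions = [out[len(inp):] for inp, out in zip(prompt_ids, full_outputs)]
# completions == [[2003, 2204], [2017, 102]]
```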

Intended Use

  • Alignment & Refusal Research: Studying refusal behaviors and the impact of activation-level modifications.

  • Red-Teaming Experiments: Evaluating robustness across adversarial or edge-case prompts.

  • High-Capability Local AI Deployment: Running powerful instruction models on high-memory GPUs or multi-GPU setups.

  • Research Prototyping: Experimentation with large transformer architectures and alignment techniques.

Limitations & Risks

Important Note: This model intentionally reduces built-in refusal mechanisms.

  • Sensitive Output Possibility: The model may generate controversial or explicit responses depending on prompts.

  • User Responsibility: Outputs must be handled responsibly and within legal and ethical boundaries.

  • Compute Requirements: A 35B MoE model requires substantial GPU memory or optimized inference techniques such as quantization or tensor parallelism.
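The memory requirement is easy to sanity-check with arithmetic: weights alone take roughly parameters × bytes-per-parameter, before any KV cache or activation overhead. A quick sketch:

```python
# Back-of-the-envelope weight-memory estimate (weights only; the KV cache
# and activations add more on top).

def weight_memory_gib(n_params: float, bits_per_param: float) -> float:
    return n_params * bits_per_param / 8 / 1024**3

bf16_gib = weight_memory_gib(35e9, 16)  # BF16: roughly 65 GiB of weights
int4_gib = weight_memory_gib(35e9, 4)   # 4-bit quantized: roughly 16 GiB
```

This is why BF16 inference generally needs a multi-GPU or very-high-memory setup, while 4-bit quantization (e.g. the GGUF builds linked above) fits on a single large consumer GPU.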

Dataset & Acknowledgements

Refusal evaluation prompts were drawn from the harm_bench dataset. The model is built on the base model Qwen/Qwen3.6-35B-A3B.
