Gemma3 Turkish SFT
This is an experimental Turkish instruction-tuned model based on google/gemma-3-270m.
It was fine-tuned for 1 epoch on the merve/turkish_instructions dataset.
This model was trained as part of ongoing experiments and is not intended as a production-ready release.
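For context, SFT on an instruction dataset usually starts by flattening each instruction/response pair into a single training string before tokenization. The sketch below shows one common way to do that; the field names (instruction, input, response) and the Turkish section headers are illustrative assumptions, not the exact preprocessing used for this checkpoint:

```python
# Hypothetical preprocessing step: flatten one dataset record into a single
# training string. Field names and the template are illustrative assumptions,
# not the exact recipe used to train this model.
def build_training_text(example: dict) -> str:
    parts = [f"### Talimat:\n{example['instruction']}"]  # "Instruction"
    if example.get("input"):  # optional extra context, skipped when empty
        parts.append(f"### Girdi:\n{example['input']}")  # "Input"
    parts.append(f"### Yanıt:\n{example['response']}")  # "Response"
    return "\n\n".join(parts)

sample = {
    "instruction": "Türkiye'nin başkenti neresidir?",  # "What is the capital of Turkey?"
    "input": "",
    "response": "Ankara",
}
print(build_training_text(sample))
```

A trainer would then tokenize these strings and run standard causal-language-modeling loss over them for one epoch.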
If you use this model in your work, consider citing the base model and the dataset:

- google/gemma-3-1b-it
- merve/turkish_instructions

Below is a minimal example using 🤗 Transformers:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "canbingol/gemma3_1B_it-tr-sft-1epoch"
device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the fine-tuned model and its tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = model.to(device)

# Turkish prompt: "how do I cook a meal?"
prompt = "nasıl yemek yaparım?"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# Greedy decoding, capped at 50 new tokens
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=False,
)

generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
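Because the checkpoint is instruction-tuned, wrapping the prompt in Gemma's chat turn markup may give better responses than feeding raw text. The helper below writes that markup out by hand so the format is visible; whether this checkpoint's tokenizer kept Gemma's chat template is an assumption:

```python
# Build a Gemma-style chat prompt by hand. Gemma chat turns are wrapped in
# <start_of_turn>/<end_of_turn> markers; whether this fine-tuned checkpoint
# still expects this format is an assumption.
def gemma_chat_prompt(user_message: str) -> str:
    return (
        f"<start_of_turn>user\n{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(gemma_chat_prompt("nasıl yemek yaparım?"))
```

If the tokenizer does ship a chat template, the roughly equivalent Transformers call is tokenizer.apply_chat_template([{"role": "user", "content": prompt}], add_generation_prompt=True, tokenize=False).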