Model fine-tuned for morph resolution and morph explanation
The task is selected by a prefix on the input sentence: <纠正> for morph resolution, <解释> for morph explanation.
Example
from transformers import T5Tokenizer, T5ForConditionalGeneration
# Load the tokenizer and model
model_path = "dawang123/t5-chineseAMR-base-multitask" # or the path to your own fine-tuned checkpoint
tokenizer = T5Tokenizer.from_pretrained("Langboat/mengzi-t5-base", legacy=False)
model = T5ForConditionalGeneration.from_pretrained(model_path)
# Input text (format should match the training format)
input_text = "<纠正>小糖人都是可以吃的。"  # morph resolution input; 小糖人 is a morph for 糖尿病患者 (diabetes patients)
# input_text = "<解释>小糖人都是可以吃的。"  # morph explanation input for the same sentence
# Tokenize the input
inputs = tokenizer(
    input_text,
    return_tensors="pt",
    max_length=512,
    truncation=True,
    padding=True
)
# Generate output
outputs = model.generate(
    inputs["input_ids"],
    max_length=256,
    num_beams=4,
    early_stopping=True
)
# Decode and print the result
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Generated output:", generated_text)
# Expected output: 糖尿病患者都是可以吃的。 (the morph 小糖人 resolved to 糖尿病患者, "diabetes patients")
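Because this is a multitask checkpoint, switching between resolution and explanation only changes the input prefix. Below is a minimal sketch that runs both prefixes over the same sentence; the run_task helper is illustrative and not part of the released code, and it reuses the tokenizer and model loaded above.

# Hypothetical helper: run either task by prepending its prefix to the sentence.
def run_task(prefix, sentence):
    text = prefix + sentence
    enc = tokenizer(text, return_tensors="pt", max_length=512, truncation=True)
    out = model.generate(enc["input_ids"], max_length=256, num_beams=4, early_stopping=True)
    return tokenizer.decode(out[0], skip_special_tokens=True)

sentence = "小糖人都是可以吃的。"
print("Resolution :", run_task("<纠正>", sentence))
print("Explanation:", run_task("<解释>", sentence))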
Model tree for dawang123/t5-chineseAMR-base-multitask
Base model: Langboat/mengzi-t5-base