# Qwen 7B Maritime (Marine) Adapters
This repository contains LoRA adapters for the Maritime-LLM project, trained via progressive continual pretraining on maritime domain data.
## Available Checkpoints
All checkpoints are stored in subfolders within this repository.
| Description | Subfolder Name | Training Steps |
|---|---|---|
| Phase 1a Short Context (2,157 steps) | `phase1a-short-ckpt2157` | 2157 |
| Phase 1a Short Context (4,314 steps) | `phase1a-short-ckpt4314` | 4314 |
| Phase 1a Short Context (6,471 steps) | `phase1a-short-ckpt6471` | 6471 |
| Phase 1a Short Context (8,628 steps, final) | `phase1a-short-ckpt8628` | 8628 |
| Phase 1b Medium Context (872 steps) | `phase1b-medium-ckpt872` | 872 |
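The subfolder names above follow a predictable `{phase}-{context}-ckpt{steps}` pattern, so the table can be mirrored in code. The helper below is a hypothetical convenience sketch (not shipped with this repository) for resolving a subfolder name from a phase and step count:

```python
# Hypothetical helper (not part of this repo): map a training phase and
# step count to the checkpoint subfolders listed in the table above.
CHECKPOINTS = {
    ("phase1a-short", 2157): "phase1a-short-ckpt2157",
    ("phase1a-short", 4314): "phase1a-short-ckpt4314",
    ("phase1a-short", 6471): "phase1a-short-ckpt6471",
    ("phase1a-short", 8628): "phase1a-short-ckpt8628",  # final Phase 1a
    ("phase1b-medium", 872): "phase1b-medium-ckpt872",
}

def subfolder_for(phase: str, steps: int) -> str:
    """Return the repository subfolder for a given phase and step count."""
    try:
        return CHECKPOINTS[(phase, steps)]
    except KeyError:
        known = ", ".join(f"{p}@{s}" for p, s in sorted(CHECKPOINTS))
        raise ValueError(f"No checkpoint for {phase}@{steps}; known: {known}")

print(subfolder_for("phase1a-short", 8628))  # phase1a-short-ckpt8628
```

The resolved name can then be passed as the `subfolder` argument shown in the usage example below.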
## Usage

You can load any specific checkpoint by passing the `subfolder` argument to `PeftModel.from_pretrained`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# 1. Load the base model
base_model_id = "Qwen/Qwen2.5-7B-Instruct"
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    device_map="auto",
    torch_dtype="auto",
)

# 2. Load a specific adapter checkpoint
# Example: the final Phase 1a checkpoint
adapter_id = "naga080898/qwen7b-marine"
subfolder = "phase1a-short-ckpt8628"
model = PeftModel.from_pretrained(
    base_model,
    adapter_id,
    subfolder=subfolder,
)

# 3. Run inference
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
prompt = "Explain the safety procedure for enclosed space entry:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Phases
- **Phase 1a (Short Context):** Foundational maritime-knowledge training with a short context window.
- **Phase 1b (Medium Context):** Continued training on longer documents to extend the context window.