This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

# Formula1Model 🏎️

An expert Formula 1 assistant fine-tuned on the **2024 Formula 1 Championship dataset** ([vibingshu/2024_formula1_championship_dataset](https://huggingface.co/datasets/vibingshu/2024_formula1_championship_dataset)).

This model was fine-tuned using **Unsloth** and exported in 8-bit (`Q8_0`) GGUF format for efficient local inference with Ollama.

---

## 🧠 Model Details

- **Base Model:** LLaMA 3.2 (fine-tuned with Unsloth)
- **Dataset:** 2024 F1 results, drivers, constructors, and races
- **Format:** GGUF (`Q8_0`)
- **Task:** Question answering & expert analysis on Formula 1
- **Use Case:** F1 trivia, race insights, driver/team history, strategy-style Q&A

---

## 🏁 Training

- **Hardware:** Google Colab (T4 / A100, depending on availability)
- **Tools Used:** Unsloth, Hugging Face `datasets`, LoRA adapters (see the sketch below)
- **Precision:** 8-bit (`Q8_0`) for efficient inference
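
The exact training script isn't part of this card, but a typical Unsloth + TRL LoRA run over this dataset looks roughly like the sketch below. The base checkpoint name, LoRA hyperparameters, and the dataset's text column are illustrative assumptions, not the recorded settings for this model.

```python
# Hypothetical sketch of an Unsloth + TRL LoRA fine-tune; checkpoint name
# and hyperparameters are assumptions, not this model's recorded settings.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load a LLaMA 3.2 base checkpoint with Unsloth's patched loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",  # assumed base checkpoint
    max_seq_length=2048,
    load_in_4bit=True,  # quantized weights keep Colab T4 memory in budget
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# 2024 F1 championship data from the Hugging Face Hub.
dataset = load_dataset("vibingshu/2024_formula1_championship_dataset",
                       split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed column name
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()

# Unsloth helper that converts the merged model to GGUF at Q8_0.
model.save_pretrained_gguf("Formula1Model", tokenizer,
                           quantization_method="q8_0")
```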

---

## 🚀 Usage

Pull and run the model locally with [Ollama](https://ollama.com):

    ollama pull zayedansari/Formula1Model
    ollama run zayedansari/Formula1Model
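
Beyond the CLI, the model can also be queried programmatically while `ollama serve` is running. A minimal Python sketch against Ollama's local REST API, assuming the default `http://localhost:11434` endpoint (the prompt is just an example):

```python
# Minimal sketch: ask the model a question via Ollama's local REST API.
# Assumes `ollama serve` is listening on the default port 11434.
import json
import urllib.request

payload = {
    "model": "zayedansari/Formula1Model",
    "prompt": "Who won the 2024 Bahrain Grand Prix?",
    "stream": False,  # return one JSON object instead of a token stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```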

---

## Example

**Who won the 2024 Bahrain Grand Prix?**

> Max Verstappen won the 2024 Bahrain Grand Prix driving for Red Bull Racing Honda RBPT.

---

## 📜 License

This model is released under the Apache 2.0 license.
You are free to use, modify, and distribute it with proper attribution.