---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prefix-tuning
- generated_from_trainer
model-index:
- name: train_siqa_1754652168
  results: []
---
| | |
| | <!-- This model card has been generated automatically according to the information the Trainer had access to. You |
| | should probably proofread and complete it, then remove this comment. --> |
| |
|

# train_siqa_1754652168

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the siqa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5498
- Num Input Tokens Seen: 29840264

## Model description

This is a prefix-tuning adapter (PEFT) for meta-llama/Meta-Llama-3-8B-Instruct, trained with LLaMA-Factory. Prefix tuning learns a small set of continuous prompt vectors that are prepended to the attention keys and values at each layer, while the base model weights stay frozen.

## Intended uses & limitations

This is a lightweight adapter rather than a standalone model: it must be loaded on top of the Meta-Llama-3-8B-Instruct base model and is subject to the Llama 3 license. Detailed guidance on intended downstream uses and known limitations is still needed.
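
A minimal loading and inference sketch, assuming the adapter weights are in a local directory named `train_siqa_1754652168` (substitute the Hub repository id if the adapter is published there):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Attach the prefix-tuning adapter on top of the frozen base model.
# "train_siqa_1754652168" is a placeholder path for the adapter weights.
model = PeftModel.from_pretrained(base_model, "train_siqa_1754652168")
model.eval()

messages = [{"role": "user", "content": "Tracy arrived late to the meeting. Why might Tracy apologize?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```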

## Training and evaluation data

The adapter was trained and evaluated on the siqa dataset as configured in LLaMA-Factory; the exact preprocessing and split sizes are not recorded in this card.
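
To inspect the data, one option is the sketch below; it assumes `siqa` refers to the Social IQa dataset, available on the Hugging Face Hub as `allenai/social_i_qa` (both the identification and the Hub id are assumptions, not recorded in this card):

```python
from datasets import load_dataset

# Assumption: "siqa" is Social IQa; the Hub id below is the usual source.
dataset = load_dataset("allenai/social_i_qa")

# Each example pairs a social context and question with three answer candidates.
example = dataset["train"][0]
print(example["context"], example["question"])
print(example["answerA"], example["answerB"], example["answerC"], "label:", example["label"])
```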

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
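
The model was trained with LLaMA-Factory, so the script below is not the original training entry point; it is a rough Transformers/PEFT equivalent of the configuration above. The prefix length (`num_virtual_tokens`) is an assumption, since it is not recorded in this card:

```python
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import PrefixTuningConfig, TaskType, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Wrap the base model so only the prefix parameters are trainable.
peft_config = PrefixTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=10,  # assumed value; not recorded in this card
)
model = get_peft_model(model, peft_config)

# Mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="train_siqa_1754652168",
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```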

### Training results

| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.5633 | 0.5 | 3759 | 0.5894 | 1495072 |
| 0.5615 | 1.0 | 7518 | 0.5582 | 2984720 |
| 0.5898 | 1.5 | 11277 | 0.5572 | 4477104 |
| 0.5474 | 2.0 | 15036 | 0.5530 | 5970384 |
| 0.5483 | 2.5 | 18795 | 0.5526 | 7462384 |
| 0.5508 | 3.0 | 22554 | 0.5522 | 8954176 |
| 0.5541 | 3.5 | 26313 | 0.5542 | 10445088 |
| 0.5407 | 4.0 | 30072 | 0.5514 | 11937344 |
| 0.5503 | 4.5 | 33831 | 0.5523 | 13430048 |
| 0.5525 | 5.0 | 37590 | 0.5522 | 14920992 |
| 0.5362 | 5.5 | 41349 | 0.5513 | 16412032 |
| 0.5476 | 6.0 | 45108 | 0.5503 | 17904680 |
| 0.5594 | 6.5 | 48867 | 0.5507 | 19397416 |
| 0.5403 | 7.0 | 52626 | 0.5506 | 20888856 |
| 0.5436 | 7.5 | 56385 | 0.5498 | 22381080 |
| 0.5452 | 8.0 | 60144 | 0.5519 | 23872880 |
| 0.5470 | 8.5 | 63903 | 0.5513 | 25363344 |
| 0.5609 | 9.0 | 67662 | 0.5512 | 26855848 |
| 0.5374 | 9.5 | 71421 | 0.5504 | 28348712 |
| 0.5473 | 10.0 | 75180 | 0.5506 | 29840264 |

The evaluation loss reported at the top of this card (0.5498) matches the best checkpoint in this table (epoch 7.5, step 56385) rather than the final epoch, while the input-token count matches the end of training.

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- PyTorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1