# Llama-3-8B Bitcoin Price Predictor (LoRA)

This is a Llama-3-8B model fine-tuned with LoRA to predict the next 4 days' closing prices of Bitcoin (BTC) from the previous 60 days of data. Its inputs combine historical price data, technical-analysis indicators, macroeconomic context (Gold, Oil, and S&P 500 prices), and social-media sentiment derived from tweets. This model was fine-tuned for educational and demonstration purposes.
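The card does not document the exact serialization of these inputs, so the following is only a sketch of how the instruction/input pair for the prompt might be assembled; the field names, formatting, and `build_prompt_fields` helper are all assumptions, not the model's documented schema.

```python
# Hypothetical helper for assembling the prompt fields.
# Field names and formatting are assumptions for illustration only.
def build_prompt_fields(closes, gold, oil, spx, sentiment):
    """Format 60 daily BTC closes plus context into instruction/input strings."""
    assert len(closes) == 60, "the model expects 60 daily closes"
    instruction = (
        "Predict the next 4 days' BTC closing prices given the "
        "past 60 days: " + ", ".join(f"{p:.2f}" for p in closes)
    )
    input_text = (
        f"Gold: {gold:.2f}; Oil: {oil:.2f}; S&P 500: {spx:.2f}; "
        f"Tweet sentiment: {sentiment:+.2f}"
    )
    return instruction, input_text

instr, inp = build_prompt_fields(
    [30000.0 + i for i in range(60)], 1950.0, 78.5, 4500.0, 0.12
)
print(inp)  # → Gold: 1950.00; Oil: 78.50; S&P 500: 4500.00; Tweet sentiment: +0.12
```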

## Training Details

The model was fine-tuned for 2 epochs. The training loss, logged every 10 steps, progressed as follows:

| Step | Training Loss |
|------|---------------|
| 10   | 2.428700 |
| 20   | 1.060700 |
| 30   | 1.004200 |
| 40   | 1.868500 |
| 50   | 1.000100 |
| 60   | 0.985000 |
| 70   | 1.660100 |
| 80   | 0.981700 |
| 90   | 0.982000 |
| 100  | 1.580500 |
| 110  | 0.971200 |
| 120  | 0.966200 |
| 130  | 1.658700 |
| 140  | 0.952700 |
| 150  | 0.941700 |
| 160  | 1.385400 |
| 170  | 0.904400 |
| 180  | 0.884700 |
| 190  | 1.645000 |
| 200  | 0.905000 |
| 210  | 0.853500 |
| 220  | 1.495900 |
| 230  | 0.848800 |
| 240  | 0.848000 |
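As a quick sanity check, the logged values can be parsed and summarized with a few lines of Python (the data below is copied verbatim from the table):

```python
# Parse the (step, loss) pairs logged during fine-tuning and summarize the run.
log = """10 2.428700
20 1.060700
30 1.004200
40 1.868500
50 1.000100
60 0.985000
70 1.660100
80 0.981700
90 0.982000
100 1.580500
110 0.971200
120 0.966200
130 1.658700
140 0.952700
150 0.941700
160 1.385400
170 0.904400
180 0.884700
190 1.645000
200 0.905000
210 0.853500
220 1.495900
230 0.848800
240 0.848000"""

rows = [(int(s), float(l)) for s, l in
        (line.split() for line in log.splitlines())]
final_step, final_loss = rows[-1]
best_loss = min(l for _, l in rows)
print(f"final step={final_step}, loss={final_loss:.4f}, best={best_loss:.4f}")
# → final step=240, loss=0.8480, best=0.8480
```

The loss trends downward overall, with periodic spikes in the log that settle back down as training continues.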

## How to Use

To use this model, load the base model (`{base_model_name}`) and then apply the LoRA adapters from this repository:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Specify the base model and the LoRA adapter path
base_model_name = "{base_model_name}"
adapter_path = "{hf_username}/{hub_model_name}"

# Load the base model in 4-bit
model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    load_in_4bit=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# Apply the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, adapter_path)

# --- Prepare your prompt ---
instruction = "..."  # Your 60-day price series
input_text = "..."   # Your contextual input
prompt = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
    f"Instruction: {instruction}\n\nInput: {input_text}<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)
inputs = tokenizer(prompt, return_tensors="pt", truncation=True).to("cuda")

# Generate the prediction
with torch.no_grad():
    outputs = model.generate(
        input_ids=inputs["input_ids"],
        max_new_tokens=50,
        eos_token_id=tokenizer.eos_token_id,
    )

# Decode only the newly generated tokens. Note that skip_special_tokens
# strips the <|...|> header markers, so splitting the full decoded text
# on them would not work.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
prediction = tokenizer.decode(new_tokens, skip_special_tokens=True)
print(f"Predicted Prices: {prediction}")
```
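The card does not specify the exact shape of the generated text, but assuming the model emits the four prices as numbers in plain text, a small post-processing step can turn the decoded string into floats (the `parse_prices` helper and the sample string below are illustrative assumptions):

```python
import re

# Hedged sketch: extract the first n numeric values from the generated text.
# The expected output format (comma-separated prices) is an assumption.
def parse_prices(prediction: str, n: int = 4):
    """Return the first n numbers found in the model's decoded output."""
    values = [float(m) for m in re.findall(r"\d+(?:\.\d+)?", prediction)]
    return values[:n]

print(parse_prices("67350.12, 67890.55, 68010.00, 67500.40"))
# → [67350.12, 67890.55, 68010.0, 67500.4]
```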