Instructions for using taoki/Mistral-7B-Instruct-v0.3_lora_jmultiwoz-dolly-amenokaku-alpaca_jp_python with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use taoki/Mistral-7B-Instruct-v0.3_lora_jmultiwoz-dolly-amenokaku-alpaca_jp_python with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="taoki/Mistral-7B-Instruct-v0.3_lora_jmultiwoz-dolly-amenokaku-alpaca_jp_python")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("taoki/Mistral-7B-Instruct-v0.3_lora_jmultiwoz-dolly-amenokaku-alpaca_jp_python")
model = AutoModelForCausalLM.from_pretrained("taoki/Mistral-7B-Instruct-v0.3_lora_jmultiwoz-dolly-amenokaku-alpaca_jp_python")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
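The Transformers snippets above delegate prompt construction to the tokenizer's chat template via `apply_chat_template`. As a rough illustration only (the authoritative template ships with the tokenizer, so prefer `apply_chat_template` in real code), Mistral-Instruct-style templates wrap each user turn in `[INST] ... [/INST]` markers. The helper below is a hypothetical sketch of that convention, not the model's exact template:

```python
# Hypothetical sketch of Mistral-Instruct-style prompt formatting.
# The real template is defined by the model's tokenizer; use
# tokenizer.apply_chat_template() in practice.

def format_mistral_prompt(messages):
    """Render a chat message list into a single prompt string."""
    parts = ["<s>"]
    for msg in messages:
        if msg["role"] == "user":
            parts.append(f"[INST] {msg['content']} [/INST]")
        elif msg["role"] == "assistant":
            # Assistant turns are appended verbatim and closed with </s>.
            parts.append(f"{msg['content']}</s>")
    return "".join(parts)

prompt = format_mistral_prompt([{"role": "user", "content": "Who are you?"}])
print(prompt)  # <s>[INST] Who are you? [/INST]
```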
- Local Apps
- vLLM
How to use taoki/Mistral-7B-Instruct-v0.3_lora_jmultiwoz-dolly-amenokaku-alpaca_jp_python with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "taoki/Mistral-7B-Instruct-v0.3_lora_jmultiwoz-dolly-amenokaku-alpaca_jp_python"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "taoki/Mistral-7B-Instruct-v0.3_lora_jmultiwoz-dolly-amenokaku-alpaca_jp_python",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/taoki/Mistral-7B-Instruct-v0.3_lora_jmultiwoz-dolly-amenokaku-alpaca_jp_python
```
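Any OpenAI-compatible client sends the same JSON body as the curl call above. A minimal Python sketch that builds that request (the endpoint URL and model name follow the vLLM example above; the actual POST is left commented out so the snippet runs without a live server):

```python
import json

# Assumes the vLLM server from the example above is running locally.
BASE_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "taoki/Mistral-7B-Instruct-v0.3_lora_jmultiwoz-dolly-amenokaku-alpaca_jp_python"

def build_chat_request(user_content):
    """Build an OpenAI-compatible chat-completions payload."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": user_content}],
    }

payload = build_chat_request("What is the capital of France?")
print(json.dumps(payload, indent=2))

# To actually call the server (requires `pip install requests` and a running server):
# import requests
# resp = requests.post(BASE_URL, json=payload, timeout=60)
# print(resp.json()["choices"][0]["message"]["content"])
```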
- SGLang
How to use taoki/Mistral-7B-Instruct-v0.3_lora_jmultiwoz-dolly-amenokaku-alpaca_jp_python with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "taoki/Mistral-7B-Instruct-v0.3_lora_jmultiwoz-dolly-amenokaku-alpaca_jp_python" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "taoki/Mistral-7B-Instruct-v0.3_lora_jmultiwoz-dolly-amenokaku-alpaca_jp_python",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "taoki/Mistral-7B-Instruct-v0.3_lora_jmultiwoz-dolly-amenokaku-alpaca_jp_python" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "taoki/Mistral-7B-Instruct-v0.3_lora_jmultiwoz-dolly-amenokaku-alpaca_jp_python",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Docker Model Runner
How to use taoki/Mistral-7B-Instruct-v0.3_lora_jmultiwoz-dolly-amenokaku-alpaca_jp_python with Docker Model Runner:
```shell
docker model run hf.co/taoki/Mistral-7B-Instruct-v0.3_lora_jmultiwoz-dolly-amenokaku-alpaca_jp_python
```
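The OpenAI-compatible endpoints shown above all return the same response shape, with the generated text nested under `choices[0].message.content`. A sketch of extracting the assistant's reply (the sample dict below is illustrative, not real model output):

```python
# Illustrative response in the OpenAI chat-completions format returned by
# the servers above; the content string here is made up for the example.
sample_response = {
    "id": "chatcmpl-0",
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "The capital of France is Paris."},
            "finish_reason": "stop",
        }
    ],
}

def extract_reply(response):
    """Pull the assistant's text out of a chat-completions response dict."""
    return response["choices"][0]["message"]["content"]

print(extract_reply(sample_response))
```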
Script Sharing
Hello Taoki,
I want to fine-tune this model but am stuck. Could you please share your training script, or at least give me some insight? Just the script, not the data. Thank you.
Best Regards
Fayaz Ali
Hello, Fayaz Ali
Thank you for your interest.
I'm sorry I haven't updated it much recently, but you can download the training code at the following URL.
https://github.com/to-aoki/lora_finetuning
I hope it will be useful to you.
Best Regards
taoki
Thank you, much appreciated.
Best Regards
Fayaz Ali