Instructions to use NOYOUllm2/fortinet-lora-gguf with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use NOYOUllm2/fortinet-lora-gguf with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="NOYOUllm2/fortinet-lora-gguf",
    filename="fortinet-lora-q4k.gguf",
)

# The stock snippet had a placeholder here ("No input example has been
# defined..."); this illustrative chat call uses a prompt from the
# Example Usage section below:
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the FortiGate CLI command to show interface status?"}
    ]
)
print(response["choices"][0]["message"]["content"])
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use NOYOUllm2/fortinet-lora-gguf with llama.cpp:
Install with Homebrew (macOS/Linux)
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf NOYOUllm2/fortinet-lora-gguf

# Run inference directly in the terminal:
llama-cli -hf NOYOUllm2/fortinet-lora-gguf
Install with WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf NOYOUllm2/fortinet-lora-gguf

# Run inference directly in the terminal:
llama-cli -hf NOYOUllm2/fortinet-lora-gguf
Use a pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf NOYOUllm2/fortinet-lora-gguf

# Run inference directly in the terminal:
./llama-cli -hf NOYOUllm2/fortinet-lora-gguf
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf NOYOUllm2/fortinet-lora-gguf

# Run inference directly in the terminal:
./build/bin/llama-cli -hf NOYOUllm2/fortinet-lora-gguf
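Whichever install route you choose, llama-server exposes an OpenAI-compatible HTTP API (port 8080 by default). A minimal sketch of querying it with curl, assuming the server is running locally; the prompt text is illustrative:

# Send a chat completion request to the local llama-server instance
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "What is the FortiGate CLI command to show interface status?"}]}'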
Use Docker
docker model run hf.co/NOYOUllm2/fortinet-lora-gguf
- LM Studio
- Jan
- Ollama
How to use NOYOUllm2/fortinet-lora-gguf with Ollama:
ollama run hf.co/NOYOUllm2/fortinet-lora-gguf
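Once pulled, Ollama also serves the model through its local REST API (port 11434 by default); a minimal sketch with curl, using an illustrative prompt:

# Query the Ollama chat endpoint; stream=false returns a single JSON response
curl http://localhost:11434/api/chat -d '{
  "model": "hf.co/NOYOUllm2/fortinet-lora-gguf",
  "messages": [{"role": "user", "content": "What does the policyid field mean in FortiOS logs?"}],
  "stream": false
}'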
- Unsloth Studio
How to use NOYOUllm2/fortinet-lora-gguf with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for NOYOUllm2/fortinet-lora-gguf to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for NOYOUllm2/fortinet-lora-gguf to start chatting
Use Hugging Face Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for NOYOUllm2/fortinet-lora-gguf to start chatting
- Docker Model Runner
How to use NOYOUllm2/fortinet-lora-gguf with Docker Model Runner:
docker model run hf.co/NOYOUllm2/fortinet-lora-gguf
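Docker Model Runner can also fetch the model ahead of time and confirm it is available locally; a short sketch, assuming a Docker version that ships the Model Runner (`docker model`) plugin:

# Pull the model without starting a chat session
docker model pull hf.co/NOYOUllm2/fortinet-lora-gguf

# Verify it appears in the local model list
docker model list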
- Lemonade
How to use NOYOUllm2/fortinet-lora-gguf with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull NOYOUllm2/fortinet-lora-gguf
Run and chat with the model
lemonade run user.fortinet-lora-gguf-{{QUANT_TAG}}

List all available models
lemonade list
Fortinet LoRA GGUF (Q4_K_M)

Author: NOYOUllm2
License: MIT
Model type: DeepSeek-LM 7B, fine-tuned with Fortinet CLI/troubleshooting data, merged and quantized to GGUF (Q4_K_M)
Quantization: Q4_K_M
Format: GGUF (for llama.cpp, LM Studio, and compatible tools)
Status: Experimental / Community
Model Description

This model is a Fortinet CLI and troubleshooting assistant, built by fine-tuning DeepSeek-LM 7B on a custom dataset of FortiGate commands, log messages, and admin Q&A pairs.
Base model: DeepSeek-LM 7B
Fine-tuning: Axolotl (QLoRA/LoRA)
Merge: LoRA adapter merged into the base model and exported in Hugging Face format
Quantization: Q4_K_M using llama.cpp (see the sketch after this list)
Format: GGUF for use with llama.cpp, LM Studio, Ollama, OpenWebUI, and other GGUF-compatible tools
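A rough sketch of the convert-then-quantize step using llama.cpp's tooling; the directory and file names here are hypothetical, and the convert script ships in the llama.cpp repository:

# Hypothetical paths: convert the merged Hugging Face checkpoint to GGUF, then quantize
python convert_hf_to_gguf.py ./fortinet-merged --outfile fortinet-lora-f16.gguf
./build/bin/llama-quantize fortinet-lora-f16.gguf fortinet-lora-q4k.gguf Q4_K_M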
Intended Use

Network admins or security engineers working with FortiGate firewalls and Fortinet devices.
Quick lookup of CLI commands, log field meanings, troubleshooting steps, and configuration advice.
Can be run locally, offline, on consumer hardware (with llama.cpp or similar).
Example Usage

Prompt:
What is the FortiGate CLI command to show interface status?

Response:
To show interface status on FortiGate, use:
show system interface

Prompt:
What does the policyid field mean in FortiOS logs?

Response:
The policyid field indicates the firewall policy ID that matched and processed the traffic.

Files Included

fortinet-lora-q4k.gguf — The quantized GGUF model (Q4_K_M)
tokenizer.json and tokenizer_config.json
config.json (optional, for reference)
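To fetch just the quantized file for local use, one option is the huggingface-cli download command (the target directory is your choice):

# Download only the GGUF file into the current directory
huggingface-cli download NOYOUllm2/fortinet-lora-gguf fortinet-lora-q4k.gguf --local-dir .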
How to Use

With llama.cpp:
./llama-cli -m fortinet-lora-q4k.gguf
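To try one of the example prompts above non-interactively, llama-cli accepts a prompt via -p; the prompt text is illustrative:

# Run a single prompt against the local GGUF file
./llama-cli -m fortinet-lora-q4k.gguf -p "What is the FortiGate CLI command to show interface status?"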
With LM Studio: Simply select and load the fortinet-lora-q4k.gguf file.
Limitations & Warnings This model is not affiliated with or endorsed by Fortinet, Inc.
Responses are based on training data and may not reflect the latest FortiOS versions or official best practices.
For critical configurations, always consult official Fortinet documentation.
Citation

If you use this model or dataset, please cite:
@misc{noyoullm2_fortinet-lora-gguf,
  author = {NOYOUllm2},
  title = {Fortinet LoRA GGUF (Q4_K_M)},
  year = {2024},
  howpublished = {Hugging Face: https://huggingface.co/NOYOUllm2/fortinet-lora-gguf}
}

Questions? Open an issue or reach out via Hugging Face.
This model was created with ❤️ using open-source tools, for the Fortinet admin and cybersecurity community.
- Downloads last month
- 40
Model tree for NOYOUllm2/fortinet-lora-gguf
Base model
deepseek-ai/DeepSeek-R1-0528