# Nethena-20B-GGUF

## How to use with llama.cpp

The `-hf` flag accepts a quantization tag after the trailing `:`; append the tag of the quant file you want to use.

### Install with Homebrew

```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf NeverSleep/Nethena-20B-GGUF:

# Run inference directly in the terminal:
llama-cli -hf NeverSleep/Nethena-20B-GGUF:
```
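Once `llama-server` is running (via any of the install methods here), you can query its OpenAI-compatible API over HTTP. The sketch below assumes the server's default port 8080 and the `/v1/chat/completions` endpoint; the prompt text is just a placeholder:

```shell
# Build a chat-completion request payload (the message content is a placeholder):
PAYLOAD='{"messages":[{"role":"user","content":"Write a one-line greeting."}],"temperature":0.7,"max_tokens":64}'

# Send it to the locally running llama-server (default port 8080);
# prints a friendly message instead of failing if the server is not up:
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || echo "could not reach llama-server on port 8080"
```

The response follows the OpenAI chat-completion JSON shape, so existing OpenAI client libraries can be pointed at this server by changing their base URL.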
### Install with WinGet (Windows)

```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf NeverSleep/Nethena-20B-GGUF:

# Run inference directly in the terminal:
llama-cli -hf NeverSleep/Nethena-20B-GGUF:
```
### Use a pre-built binary

```shell
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf NeverSleep/Nethena-20B-GGUF:

# Run inference directly in the terminal:
./llama-cli -hf NeverSleep/Nethena-20B-GGUF:
```
### Build from source

```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf NeverSleep/Nethena-20B-GGUF:

# Run inference directly in the terminal:
./build/bin/llama-cli -hf NeverSleep/Nethena-20B-GGUF:
```
### Use Docker

```shell
docker model run hf.co/NeverSleep/Nethena-20B-GGUF:
```
This model is a collaboration between IkariDev and Undi!

Nethena-20B uses the Alpaca prompt format and is suitable for RP, ERP, and general use.

What would happen if we combined all of our best models? Well... here it is, the holy grail: Echidna v0.3 + Athena v3 + Nete.

This model also has a 13B version; you can check it out right here.

[Recommended settings - No settings yet (please suggest some over in the Community tab!)]

## Description

This repo contains GGUF files of Nethena-20B.

- FP16 - by IkariDev and Undi
- GGUF - by IkariDev and Undi

## Ratings

Note: we have permission from all users to upload their ratings; we don't screenshot random reviews without asking first!

No ratings yet!

If you want your rating to appear here, send us a message over on Discord and we'll put up a screenshot of it. Discord names are "ikaridev" and "undi".

## Models + LoRAs used and recipe

- NeverSleep/Echidna-13b-v0.3
- IkariDev/Athena-v3
- Undi95/Nete-13B

## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
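For terminal use, a filled-in copy of the template above can be saved to a file and passed to `llama-cli` with `-f`. This is a sketch: the file name, the instruction text, and the model file name are all placeholders, not files from this repo:

```shell
# Write an Alpaca-format prompt to a file (instruction text is a placeholder):
cat > prompt.txt <<'EOF'
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Summarize the Alpaca prompt format in one sentence.

### Response:
EOF

# Then feed it to llama-cli: -f reads the prompt from the file, -n caps the
# number of generated tokens. The model path below is a placeholder for
# whichever quant file you downloaded:
# llama-cli -m ./Nethena-20B.Q4_K_M.gguf -f prompt.txt -n 256
```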

## Others

Undi: If you want to support me, you can here.

IkariDev: Visit my retro/neocities-style website please kek

## Model details

- Format: GGUF
- Model size: 20B params
- Architecture: llama
- Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit