Instructions for using davidberenstein1957/SmolLM2-360M-Instruct-smashed with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use davidberenstein1957/SmolLM2-360M-Instruct-smashed with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="davidberenstein1957/SmolLM2-360M-Instruct-smashed")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("davidberenstein1957/SmolLM2-360M-Instruct-smashed")
model = AutoModelForCausalLM.from_pretrained("davidberenstein1957/SmolLM2-360M-Instruct-smashed")
```
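As a usage sketch, the pipeline created above can then be called with a chat-style message list; this assumes a recent transformers release where text-generation pipelines accept chat messages:

```python
# Builds on the `pipe` created above; chat-style input assumes a recent
# transformers version with chat support in text-generation pipelines.
messages = [{"role": "user", "content": "What is gravity?"}]
out = pipe(messages, max_new_tokens=64)
# For chat input, generated_text holds the message list; the reply is last.
print(out[0]["generated_text"][-1]["content"])
```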
- Transformers.js
How to use davidberenstein1957/SmolLM2-360M-Instruct-smashed with Transformers.js:
```js
// npm i @huggingface/transformers
import { pipeline } from '@huggingface/transformers';

// Allocate pipeline
const pipe = await pipeline('text-generation', 'davidberenstein1957/SmolLM2-360M-Instruct-smashed');
```
- Pruna AI
How to use davidberenstein1957/SmolLM2-360M-Instruct-smashed with Pruna AI:
```python
# Load the smashed model with Pruna
from pruna import PrunaModel

pipe = PrunaModel.from_pretrained("davidberenstein1957/SmolLM2-360M-Instruct-smashed")
```

```python
# Load model and tokenizer directly
from pruna import PrunaModel
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("davidberenstein1957/SmolLM2-360M-Instruct-smashed")
model = PrunaModel.from_pretrained("davidberenstein1957/SmolLM2-360M-Instruct-smashed")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use davidberenstein1957/SmolLM2-360M-Instruct-smashed with vLLM:
Install from pip and serve the model:
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "davidberenstein1957/SmolLM2-360M-Instruct-smashed"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "davidberenstein1957/SmolLM2-360M-Instruct-smashed",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker

```sh
docker model run hf.co/davidberenstein1957/SmolLM2-360M-Instruct-smashed
```
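Since the server exposes an OpenAI-compatible API, you can also call it from Python. A minimal sketch, assuming the openai client package is installed and the server above is running on port 8000:

```python
# Minimal Python client sketch for the OpenAI-compatible vLLM server above.
# Assumes `pip install openai`; the api_key value is unused by local servers.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="davidberenstein1957/SmolLM2-360M-Instruct-smashed",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```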
- SGLang
How to use davidberenstein1957/SmolLM2-360M-Instruct-smashed with SGLang:
Install from pip and serve the model:
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "davidberenstein1957/SmolLM2-360M-Instruct-smashed" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "davidberenstein1957/SmolLM2-360M-Instruct-smashed",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "davidberenstein1957/SmolLM2-360M-Instruct-smashed" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "davidberenstein1957/SmolLM2-360M-Instruct-smashed",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
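The same completions endpoint can be called from Python with nothing more than the requests package; a minimal sketch mirroring the curl call above:

```python
# Minimal sketch of the curl call above using the `requests` package,
# against the OpenAI-compatible SGLang server on port 30000.
import requests

response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "davidberenstein1957/SmolLM2-360M-Instruct-smashed",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])
```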
- Docker Model Runner
How to use davidberenstein1957/SmolLM2-360M-Instruct-smashed with Docker Model Runner:
```sh
docker model run hf.co/davidberenstein1957/SmolLM2-360M-Instruct-smashed
```
Model Card for davidberenstein1957/SmolLM2-360M-Instruct-smashed
This model was created using the pruna library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead.
Usage
First things first, you need to install the pruna library:
```sh
pip install pruna
```
You can load the model with the transformers library, but this may not include all optimizations by default.
To ensure that all optimizations are applied, load the model with the pruna library using the following code:
```python
from pruna import PrunaModel

loaded_model = PrunaModel.from_pretrained(
    "davidberenstein1957/SmolLM2-360M-Instruct-smashed"
)
```
For inference, you can use the inference methods of the original model, as shown in the original model card. Alternatively, see the full Pruna documentation for more information.
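For example, a minimal generation sketch, assuming the loaded PrunaModel forwards generate() to the underlying transformers model as described above:

```python
# Minimal inference sketch; assumes PrunaModel exposes the original model's
# generate() method, as the card describes.
from transformers import AutoTokenizer
from pruna import PrunaModel

model_id = "davidberenstein1957/SmolLM2-360M-Instruct-smashed"
tokenizer = AutoTokenizer.from_pretrained(model_id)
loaded_model = PrunaModel.from_pretrained(model_id)

messages = [{"role": "user", "content": "What is model quantization?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = loaded_model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```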
Smash Configuration
The compression configuration of the model is stored in the smash_config.json file, which describes the optimization methods that were applied to the model.
```json
{
    "batcher": null,
    "cacher": null,
    "compiler": null,
    "factorizer": null,
    "pruner": null,
    "quantizer": "torchao",
    "torchao_excluded_modules": "none",
    "torchao_quant_type": "int8dq",
    "batch_size": 1,
    "device": "cpu",
    "device_map": null,
    "save_fns": [
        "save_before_apply"
    ],
    "load_fns": [
        "transformers"
    ],
    "reapply_after_load": {
        "factorizer": null,
        "pruner": null,
        "quantizer": "torchao",
        "cacher": null,
        "compiler": null,
        "batcher": null
    }
}
```
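To inspect this configuration programmatically, you can fetch the file with huggingface_hub; a small sketch:

```python
# Small sketch: fetch and read smash_config.json with huggingface_hub.
import json
from huggingface_hub import hf_hub_download

config_path = hf_hub_download(
    repo_id="davidberenstein1957/SmolLM2-360M-Instruct-smashed",
    filename="smash_config.json",
)
with open(config_path) as f:
    smash_config = json.load(f)

print(smash_config["quantizer"])  # "torchao"
print(smash_config["torchao_quant_type"])  # "int8dq"
```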
Join the Pruna AI community!
Model tree for davidberenstein1957/SmolLM2-360M-Instruct-smashed
Base model: HuggingFaceTB/SmolLM2-360M