Instructions for using gokaygokay/Low-Poly-Kontext-Dev-LoRA with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Diffusers
How to use gokaygokay/Low-Poly-Kontext-Dev-LoRA with Diffusers:
```bash
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# Switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    dtype=torch.bfloat16,
    device_map="cuda",
)
pipe.load_lora_weights("gokaygokay/Low-Poly-Kontext-Dev-LoRA")

prompt = "Convert this image to low poly version"
input_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png"
)
image = pipe(image=input_image, prompt=prompt).images[0]
```
- Inference
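The snippet above hard-codes `device_map="cuda"`. A minimal sketch of selecting the device at runtime instead, assuming only that `torch` is installed (the comment in the original snippet suggests "mps" for Apple devices):

```python
import torch

# Pick the best available accelerator, falling back to CPU.
if torch.cuda.is_available():
    device = "cuda"
elif getattr(torch.backends, "mps", None) is not None and torch.backends.mps.is_available():
    device = "mps"  # Apple Silicon
else:
    device = "cpu"

print(device)
```

The resulting string can then be passed as `device_map=device` when constructing the pipeline.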
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
Low Poly Kontext Dev LoRA

[Example gallery: three sample generations, each produced with the prompt "Convert this image to low poly version"]
Model description
Trigger phrase
Convert this image to low poly version
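The trigger phrase can be combined with further instructions in a single prompt string. A small illustrative helper (the extra style instruction below is a hypothetical example, not from the model card):

```python
# The LoRA's trigger phrase, as stated in the model card.
TRIGGER = "Convert this image to low poly version"

def build_prompt(extra: str = "") -> str:
    """Prepend the trigger phrase to any additional instructions."""
    return f"{TRIGGER}, {extra}" if extra else TRIGGER

print(build_prompt())
print(build_prompt("keep the original color palette"))
```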
Download model
Weights for this model are available in Safetensors format.
Training at fal.ai
Training was done with the fal.ai Flux Kontext trainer (fal.ai/models/fal-ai/flux-kontext-trainer/playground).
Model tree for gokaygokay/Low-Poly-Kontext-Dev-LoRA
Base model
black-forest-labs/FLUX.1-Kontext-dev