Nishino Shou Lora Flux NF4

- Prompt (training, with QLoRA): 45 degree view of Nishino Shou smiles with striking short wavy black hair and a vibrant red dress. She's accessorized with a delicate necklace of white beads. The scene is set against a pink wall, which includes white writing, creating a visually appealing backdrop for the subject.
- Prompt (training, without QLoRA): 45 degree view of Nishino Shou smiles with striking short wavy black hair and a vibrant red dress. She's accessorized with a delicate necklace of white beads. The scene is set against a pink wall, which includes white writing, creating a visually appealing backdrop for the subject.
- Prompt (testing, with QLoRA): Nishino Shou with feathers and jewelry at night on the beach by the fire in the style of magali villeneuve, eve ventrue, anna dittmann, realistic depiction of light, luminous pointillism, daz3d, the stars art group (xing xing), sultan mohammed, burned/charred
- Prompt (testing, without QLoRA): Nishino Shou with feathers and jewelry at night on the beach by the fire in the style of magali villeneuve, eve ventrue, anna dittmann, realistic depiction of light, luminous pointillism, daz3d, the stars art group (xing xing), sultan mohammed, burned/charred
西野翔 / にしのしょう / Nishino Shou
All files are also archived in https://github.com/je-suis-tm/huggingface-archive in case this gets censored.
The QLoRA fine-tuning of nishino_shou_lora_flux_nf4 takes inspiration from this post (https://huggingface.co/blog/flux-qlora). Training ran on a local machine for 1200 steps with the same parameters as the post above, taking around 8 hours on an RTX 4060 with 8GB of VRAM; peak VRAM usage was around 7.7GB. To avoid running out of VRAM, both the transformer and the text encoder were quantized.

The biggest challenge in training on Japanese actresses is that their photos use heavy filters to whiten and smooth the skin. This practice severely distorts the training images, making the results less convincing than those for Hollywood actresses. The training dataset contains many face close-ups, which keeps the results aligned with her actual face; the tradeoff is QLoRA's tendency to overfit, which makes the model more likely to ignore the prompt.

All the images shown here were generated with the parameters below:
- Height: 512
- Width: 512
- Guidance scale: 5
- Num inference steps: 20
- Max sequence length: 512
- Seed: 0
Usage
```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel
from transformers import T5EncoderModel

# Pre-quantized NF4 text encoder and transformer
text_encoder_4bit = T5EncoderModel.from_pretrained(
    "hf-internal-testing/flux.1-dev-nf4-pkg",
    subfolder="text_encoder_2",
    torch_dtype=torch.float16,
)
transformer_4bit = FluxTransformer2DModel.from_pretrained(
    "hf-internal-testing/flux.1-dev-nf4-pkg",
    subfolder="transformer",
    torch_dtype=torch.float16,
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer_4bit,
    text_encoder_2=text_encoder_4bit,
    torch_dtype=torch.float16,
)

# Load the LoRA weights on top of the quantized base model
pipe.load_lora_weights(
    "je-suis-tm/nishino_shou_lora_flux_nf4",
    weight_name="pytorch_lora_weights.safetensors",
)

prompt = (
    "Nishino Shou with feathers and jewelry at night on the beach by the fire "
    "in the style of magali villeneuve, eve ventrue, anna dittmann, realistic "
    "depiction of light, luminous pointillism, daz3d, the stars art group "
    "(xing xing), sultan mohammed, burned/charred"
)

image = pipe(
    prompt,
    height=512,
    width=512,
    guidance_scale=5,
    num_inference_steps=20,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("nishino_shou_lora_flux_nf4.png")
```
Trigger words
You should use Nishino Shou to trigger the image generation.
Download model
Download them in the Files & versions tab.
Model tree for je-suis-tm/nishino_shou_lora_flux_nf4
Base model
black-forest-labs/FLUX.1-dev