NetaYume Lumina Image 2.0 GGUF

This repository contains quantized versions of NetaYume-Lumina-Image-2.0 in GGUF format.

Model Description

NetaYume Lumina Image 2.0 is a text-to-image diffusion model. The quantized GGUF files here reduce memory usage and file size, at a modest cost in output fidelity, so the model can run on hardware with less memory.

Usage

This is my first attempt at quantization, so if I made a mistake or something seems amiss, please let me know.
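As a quick sanity check after downloading, a file's GGUF header can be inspected with the Python standard library alone. This is a minimal sketch following the GGUF header layout (4-byte magic "GGUF", then little-endian version, tensor count, and metadata key-value count); the filename in the example is hypothetical, so substitute the quant you actually downloaded.

```python
import struct


def read_gguf_header(path: str) -> dict:
    """Read the GGUF magic, version, tensor count, and metadata KV count.

    Header layout per the GGUF spec: 4-byte magic b"GGUF", then
    little-endian uint32 version, uint64 tensor count, uint64 KV count.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file: magic={magic!r}")
        version, n_tensors, n_kv = struct.unpack("<IQQ", f.read(20))
    return {"version": version, "tensors": n_tensors, "kv_pairs": n_kv}


# Hypothetical filename -- replace with your downloaded quant:
# print(read_gguf_header("netayume-lumina-image-2.0-Q4_K_M.gguf"))
```

If the magic check fails, the download is likely truncated or corrupted and should be re-fetched.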

Model Details

  • Base Model: duongve/NetaYume-Lumina-Image-2.0
  • Format: GGUF
  • License: Apache 2.0
  • Task: Text-to-Image Generation

Requirements

  • A runtime that supports GGUF-format diffusion models (for example, ComfyUI with the ComfyUI-GGUF custom node)
  • Enough system memory to load your chosen quantization (lower bit widths need less memory)
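A rough weight-only memory estimate is parameters × bits ÷ 8 bytes; for this 3B-parameter model that works out to roughly 0.7 GiB at 2-bit up to about 5.6 GiB at 16-bit. The sketch below deliberately ignores activations, runtime buffers, and per-block quantization overhead, so treat the numbers as a lower bound.

```python
def approx_weight_gib(n_params: float, bits_per_weight: float) -> float:
    """Weight-only memory estimate in GiB: n_params * bits / 8 bytes.

    Ignores activations, runtime buffers, and per-block quantization
    overhead, so the real footprint will be somewhat higher.
    """
    return n_params * bits_per_weight / 8 / 1024**3


if __name__ == "__main__":
    for bits in (2, 4, 8, 16):
        print(f"{bits:>2}-bit: ~{approx_weight_gib(3e9, bits):.1f} GiB")
```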

Citation

If you use this model, please cite the original NetaYume-Lumina-Image-2.0 model.

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

GGUF Details

  • Model size: 3B parameters
  • Architecture: lumina2
  • Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
