---
license: apache-2.0
pipeline_tag: text-generation
arxiv: 2512.24873
tags:
- agent
- moe
- mlx
library_name: mlx
base_model: FutureLivingLab/iFlow-ROME
---

# dokterbob/iFlow-ROME-mlx-mxfp4

This model [dokterbob/iFlow-ROME-mlx-mxfp4](https://huggingface.co/dokterbob/iFlow-ROME-mlx-mxfp4) was converted to MLX format from [FutureLivingLab/iFlow-ROME](https://huggingface.co/FutureLivingLab/iFlow-ROME) using mlx-lm version **0.31.0**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("dokterbob/iFlow-ROME-mlx-mxfp4")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
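For a quick smoke test of the converted weights without writing any Python, mlx-lm also installs a command-line generator. A minimal sketch is below; the exact entry point has shifted between mlx-lm releases (`mlx_lm.generate` in older versions, `mlx_lm generate` in newer ones), so check `mlx_lm.generate --help` for your installed version:

```bash
# One-off generation from the CLI; --model accepts a Hugging Face
# repo id (downloaded on first use) or a local path.
mlx_lm.generate --model dokterbob/iFlow-ROME-mlx-mxfp4 \
  --prompt "hello" \
  --max-tokens 256
```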
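For interactive use it is often nicer to stream tokens as they are produced instead of waiting for the full completion. A minimal sketch using `stream_generate`, assuming a recent mlx-lm API where each yielded chunk carries the decoded text in a `.text` field:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("dokterbob/iFlow-ROME-mlx-mxfp4")

messages = [{"role": "user", "content": "hello"}]
# apply_chat_template returns token ids here; mlx-lm's generation
# functions accept either a string or a pre-tokenized prompt.
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print each chunk as soon as it is generated.
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=256):
    print(chunk.text, end="", flush=True)
print()
```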