Serving with vLLM

Install vLLM from pip, start a server for the model, and call its OpenAI-compatible API:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "SenseLLM/FIM-SE-CL-7B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "SenseLLM/FIM-SE-CL-7B",
		"prompt": "Once upon a time,",
		"max_tokens": 512,
		"temperature": 0.5
	}'
```
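The same endpoint can also be queried from Python with the OpenAI client. The snippet below is a minimal sketch that assumes the server started above is reachable at http://localhost:8000 and that the client is installed (`pip install openai`):

```python
# Minimal sketch: query the local vLLM server through its OpenAI-compatible API.
# Assumes the server started above is running at http://localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.completions.create(
    model="SenseLLM/FIM-SE-CL-7B",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].text)
```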
Serving with Docker

Alternatively, run the model with Docker Model Runner:

```shell
docker model run hf.co/SenseLLM/FIM-SE-CL-7B
```
Quick Links

Empowering Character-level Text Infilling by Eliminating Sub-Tokens

📄 Paper • 🏠 Repo • 🤖 Models

Introduction

FIM-SE stands for Fill-In-the-Middle with both Starting and Ending character constraints. The method addresses character-level infilling by recasting the task in a line-level format, so that no sub-token ever has to be predicted at inference time.


Models

| Model | Checkpoint | Size | License |
| --- | --- | --- | --- |
| FIM-SE-CL-7B | 🤗 HF Link | 7B | Llama2 |
| FIM-SE-CL-34B | 🤗 HF Link | 34B | Llama2 |
| FIM-SE-SC-1B | 🤗 HF Link | 1B | StarCoder |
| FIM-SE-SC-15B | 🤗 HF Link | 15B | StarCoder |

How to Use

Prompt Format

The prompt is organized as follows, where R-Prefix is the prefix up to its last complete line, L-Prefix is the trailing (incomplete) line of the prefix, F-Suffix is the leading (incomplete) line of the suffix, and R-Suffix is the rest of the suffix:

```
<PRE>R-Prefix<SUF>R-Suffix<START>L-Prefix<END>F-Suffix<MID>
```
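For illustration, the sketch below shows one way such a prompt could be assembled from a character-level prefix/suffix pair. The helper name, the line-splitting rules, and the literal special-token strings are assumptions for illustration only; please consult the repo for the reference implementation.

```python
def build_fim_se_prompt(prefix: str, suffix: str) -> str:
    """Assemble a FIM-SE style prompt (illustrative sketch, not the official code)."""
    # R-Prefix: the complete lines of the prefix; L-Prefix: its trailing partial line.
    head, sep, l_prefix = prefix.rpartition("\n")
    r_prefix = head + sep
    # F-Suffix: the leading partial line of the suffix; R-Suffix: the remaining lines.
    f_suffix, sep, tail = suffix.partition("\n")
    r_suffix = sep + tail
    return (
        f"<PRE>{r_prefix}<SUF>{r_suffix}"
        f"<START>{l_prefix}<END>{f_suffix}<MID>"
    )

# Example: ask the model to infill the middle of a factorial function.
print(build_fim_se_prompt("def fact(n):\n    ret", "rn n * fact(n - 1)\n"))
```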

Inference Code

Please refer to our GitHub Repo for more technical details.
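As a rough sketch of local inference with Hugging Face Transformers (the example prompt, special-token handling, and generation settings below are assumptions rather than the official inference code):

```python
# Sketch only: load the model and generate an infill for a FIM-SE style prompt.
# Prompt construction and stopping behavior are assumptions; see the official repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SenseLLM/FIM-SE-CL-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "<PRE>def fact(n):\n<SUF>\n<START>    ret<END>rn n * fact(n - 1)<MID>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Decode only the newly generated tokens (the predicted middle).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```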

Citation

If you find this repo useful for your research, please kindly cite our paper:

```bibtex
@misc{ren2024empowering,
    title={Empowering Character-level Text Infilling by Eliminating Sub-Tokens},
    author={Houxing Ren and Mingjie Zhan and Zhongyuan Wu and Hongsheng Li},
    year={2024},
    eprint={2405.17103},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

Acknowledgments

We thank the amazing projects that truly inspired this work.
