ReligionBERT

ReligionBERT is a domain-adapted BERT model produced by continued masked language modelling (MLM) pre-training on the English Bible corpus. Starting from bert-base-uncased, the model was trained for 30,000 steps on 62,197 verses drawn from the King James Version and World English Bible translations. It is designed for downstream NLP tasks involving religious and biblical text, where general-purpose BERT models underperform due to archaic language, theological vocabulary, and long-range intertextual dependencies.

A companion multilingual model, MultiReligionBERT, covers 12 languages and supports cross-lingual zero-shot transfer for African-language religious NLP.


Model Details

| Field                  | Details                                    |
|------------------------|--------------------------------------------|
| Model type             | BERT (encoder-only, masked language model) |
| Base model             | bert-base-uncased (109M parameters)        |
| Pre-training objective | Continued MLM (15% token masking)          |
| Pre-training corpus    | English Bible (KJV + WEB), 62,197 verses   |
| Training steps         | 30,000                                     |
| Final validation loss  | 1.164                                      |
| Language               | English                                    |
| License                | Apache 2.0                                 |
| Developed by           | Lucas Licht                                |
| Institution            | Koforidua Technical University, Ghana      |

Intended Use

ReligionBERT is intended for NLP tasks on religious and biblical text, including:

  • Semantic similarity between Bible verses or passages
  • Book and section classification of biblical text
  • Extractive question answering over religious passages
  • Feature extraction for downstream religious NLP pipelines
  • Masked language modelling for biblical text generation and completion

It is not recommended for general-domain NLP tasks where bert-base-uncased is likely a stronger baseline.


How to Get Started

from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch

tokenizer = AutoTokenizer.from_pretrained("LucasLicht/religion-bert")
model = AutoModelForMaskedLM.from_pretrained("LucasLicht/religion-bert")

text = "For God so loved the [MASK] that he gave his only begotten Son."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Locate the [MASK] position and decode the highest-scoring prediction
masked_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
logits = outputs.logits[0, masked_index]
predicted_token = tokenizer.decode(torch.argmax(logits, dim=-1))
print(predicted_token)  # "world"
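
To look at more than the single best completion, the same logits can be ranked; a small continuation of the snippet above using torch.topk:

# Rank the top 5 candidate tokens for the masked position
probs = torch.softmax(logits, dim=-1)
top5 = torch.topk(probs, k=5, dim=-1)
for token_id, prob in zip(top5.indices[0], top5.values[0]):
    print(f"{tokenizer.decode([int(token_id)])}: {float(prob):.3f}")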

For sentence embeddings or downstream fine-tuning:

from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("LucasLicht/religion-bert")
model = AutoModel.from_pretrained("LucasLicht/religion-bert")

inputs = tokenizer("The Lord is my shepherd; I shall not want.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Use CLS token as sentence representation
cls_embedding = outputs.last_hidden_state[:, 0, :]
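
For verse-level similarity, one simple pattern is to encode two passages this way and compare them with cosine similarity. The sketch below assumes CLS pooling as in the snippet above; mean pooling over last_hidden_state is a common alternative.

import torch.nn.functional as F

def embed(text):
    # Encode a passage and use its CLS embedding as the sentence vector
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[:, 0, :]

verse_a = embed("The Lord is my shepherd; I shall not want.")
verse_b = embed("He maketh me to lie down in green pastures.")
print(f"cosine similarity: {F.cosine_similarity(verse_a, verse_b).item():.3f}")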

Training Details

Pre-Training Corpus

The corpus was sourced from the christos-c/bible-corpus repository. All 66 books of the King James Version and World English Bible translations were extracted via XML parsing, yielding 62,197 verses.
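
The extraction script is not included in the card; a minimal sketch is shown below, assuming the bible-corpus XML layout in which each verse sits in a <seg> element (file names are placeholders, adjust to the actual repository paths):

import xml.etree.ElementTree as ET

def extract_verses(xml_path):
    # Collect non-empty verse strings from one Bible XML file.
    # Assumes verse text lives in <seg> elements, as in christos-c/bible-corpus;
    # adjust the tag name if the file layout differs.
    tree = ET.parse(xml_path)
    return [seg.text.strip() for seg in tree.iter("seg") if seg.text and seg.text.strip()]

verses = []
for path in ["English.xml", "English-WEB.xml"]:  # placeholder file names
    verses.extend(extract_verses(path))
print(len(verses))  # expected: 62,197 for KJV + WEB combined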

Training Procedure

| Hyperparameter          | Value                                             |
|-------------------------|---------------------------------------------------|
| Base model              | bert-base-uncased                                 |
| Training steps          | 30,000                                            |
| Effective batch size    | 32 (16 per device, 2 gradient accumulation steps) |
| Learning rate           | 3e-5 (linear warmup, 500 steps)                   |
| Weight decay            | 0.01 (AdamW)                                      |
| MLM masking probability | 15%                                               |
| Max sequence length     | 128 tokens                                        |
| Precision               | FP16 mixed precision                              |
| Hardware                | NVIDIA Tesla T4 / A100 (Google Colab)             |
| Framework               | HuggingFace Transformers 5.0.0                    |

Training was conducted across multiple sessions with checkpoint recovery. A custom PermanentDeleteCallback retained only the two most recent checkpoints to prevent storage exhaustion, and all metrics were logged to Weights & Biases.
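
The exact training script is not part of the card; a minimal sketch using the hyperparameters above is shown below. The custom PermanentDeleteCallback is omitted (save_total_limit=2 achieves the same two-checkpoint retention), and `dataset` is assumed to be a datasets.DatasetDict of raw verse strings with train/validation splits.

from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Verses are short, so 128 tokens comfortably covers them
    return tokenizer(batch["text"], truncation=True, max_length=128)

# `dataset` is an assumed DatasetDict built from the extracted corpus above
tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="religion-bert",
    max_steps=30_000,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=2,   # effective batch size 32
    learning_rate=3e-5,
    warmup_steps=500,
    weight_decay=0.01,
    fp16=True,
    eval_strategy="steps",
    eval_steps=500,
    save_total_limit=2,              # keep only the two most recent checkpoints
    report_to="wandb",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=collator,
)
trainer.train()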

Training Loss Curve

| Step   | Validation loss |
|--------|-----------------|
| 500    | 1.806           |
| 5,000  | 1.451           |
| 10,000 | 1.339           |
| 15,000 | 1.272           |
| 20,000 | 1.204           |
| 25,000 | 1.156           |
| 28,500 | 1.129 (best)    |
| 30,000 | 1.164 (final)   |

Evaluation

Perplexity on Held-out Religious Text

Perplexity was computed on 500 held-out English Bible verses not seen during pre-training.

| Model             | Perplexity (lower is better) |
|-------------------|------------------------------|
| bert-base-uncased | 15.48                        |
| ReligionBERT      | 3.73                         |

ReligionBERT achieves a 75.9% reduction in perplexity relative to bert-base-uncased, indicating successful domain adaptation.
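
The evaluation script is not included in the card; a rough way to reproduce an MLM perplexity figure of this kind is to exponentiate the mean masked-LM loss over the held-out verses (the 15% masking is random, so the exact value varies slightly between runs). `held_out_verses` below is assumed to be the list of 500 unseen verse strings.

import math
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("LucasLicht/religion-bert")
model = AutoModelForMaskedLM.from_pretrained("LucasLicht/religion-bert")
model.eval()

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

losses = []
for verse in held_out_verses:  # assumed: 500 held-out verse strings
    enc = tokenizer(verse, truncation=True, max_length=128, return_tensors="pt")
    batch = collator([{"input_ids": enc["input_ids"][0]}])  # applies random 15% masking
    if not (batch["labels"] != -100).any():
        continue  # skip verses where no token happened to be masked
    with torch.no_grad():
        out = model(input_ids=batch["input_ids"], labels=batch["labels"])
    losses.append(out.loss.item())

print(f"perplexity: {math.exp(sum(losses) / len(losses)):.2f}")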


Downstream Task Results

Three fine-tuning tasks were evaluated using three automatically constructed datasets. All results are on held-out test sets. ReligionBERT is compared to its generic baseline (bert-base-uncased).

Semantic Similarity (21,994 verse pairs)

| Model             | Pearson | Spearman |
|-------------------|---------|----------|
| bert-base-uncased | 0.9569  | 0.6556   |
| ReligionBERT      | 0.9623  | 0.6591   |

Book Classification (7,726 samples, 66 classes)

| Model             | Accuracy | Macro F1 |
|-------------------|----------|----------|
| bert-base-uncased | 0.4075   | 0.3144   |
| ReligionBERT      | 0.4347   | 0.3381   |

Extractive Question Answering (1,199 LLM-assisted examples)

| Model             | Exact Match (%) | Token F1 (%) |
|-------------------|-----------------|--------------|
| bert-base-uncased | 32.50           | 58.91        |
| ReligionBERT      | 40.00           | 60.48        |

ReligionBERT outperforms bert-base-uncased on all six reported metrics, with the strongest gain of +7.50 Exact Match points on extractive QA.
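
The downstream numbers come from standard task-specific fine-tuning. A minimal sketch of the book-classification setup (66 labels) is shown below; `books` is assumed to be a datasets.DatasetDict with "text" and "label" columns, and the epoch count and learning rate are illustrative values rather than the exact settings behind the table above.

from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("LucasLicht/religion-bert")
model = AutoModelForSequenceClassification.from_pretrained(
    "LucasLicht/religion-bert", num_labels=66
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = books.map(tokenize, batched=True)  # `books`: assumed DatasetDict with text/label columns

args = TrainingArguments(
    output_dir="religion-bert-books",
    num_train_epochs=3,              # illustrative
    per_device_train_batch_size=16,
    learning_rate=2e-5,              # illustrative
    eval_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorWithPadding(tokenizer=tokenizer),
)
trainer.train()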


Datasets

The three fine-tuning datasets used in this study are summarized below. All were derived automatically from the Bible corpus without manual annotation, apart from a human verification step for the QA examples.

| Dataset                   | Task                      | Size           | Notes                                                                  |
|---------------------------|---------------------------|----------------|------------------------------------------------------------------------|
| Verse Similarity          | Semantic similarity (STS) | 21,994 pairs   | Cross-translation and intra-corpus pairs; balanced subset of 6,392 pairs |
| Bible Book Classification | Text classification      | 7,726 samples  | 66 classes (all Bible books); 80/10/10 split                           |
| Bible QA                  | Extractive QA             | 1,199 examples | LLM-assisted via Llama 3.3 70B; SQuAD v2 format; 100% human-verified   |
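
For reference, a hypothetical QA entry in the flattened HuggingFace-datasets SQuAD style (illustrative content, not copied from the released dataset; in SQuAD v2, unanswerable questions simply carry empty answers lists) looks like this:

qa_example = {
    "id": "bible-qa-0001",  # hypothetical identifier
    "title": "John",
    "context": ("For God so loved the world, that he gave his only begotten Son, "
                "that whosoever believeth in him should not perish, "
                "but have everlasting life."),
    "question": "What did God give because he loved the world?",
    "answers": {
        "text": ["his only begotten Son"],
        "answer_start": [41],  # character offset of the answer span in the context
    },
}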

Limitations

  • Pre-training is limited to the English Bible (KJV and WEB translations). Performance on other religious traditions (Quran, Vedas, Buddhist sutras) has not been evaluated.
  • The model inherits biases present in bert-base-uncased and may reflect theological perspectives embedded in the King James Version.
  • The QA fine-tuning dataset is relatively small (959 training examples); downstream QA performance may benefit from larger domain-specific QA datasets.
  • The model is not intended for general-domain tasks.

Citation

If you use this model, please cite:

@misc{licht2025religionbert,
  title     = {ReligionBERT: Domain-Adaptive Pre-Training of BERT on Biblical Corpora for Religious NLP Tasks},
  author    = {Licht, Lucas},
  year      = {2025},
  note      = {Koforidua Technical University, Ghana. Model available at https://huggingface.co/LucasLicht/religion-bert}
}

Contact

For questions or collaboration, reach out via HuggingFace or GitHub: @Licht005
