> ⚠️ **Note:** This model has been re-uploaded to a new organization as part of a consolidated collection.
> The updated version is available at [https://huggingface.co/aieng-lab/bert-base-cased-mamut](https://huggingface.co/aieng-lab/bert-base-cased-mamut).
> Please refer to the new repository for future updates, documentation, and related models.
# MAMUT BERT (Mathematical Structure-Aware BERT)

A pretrained model based on [bert-base-cased](https://huggingface.co/bert-base-cased) with further mathematical pre-training, introduced in [MAMUT: A Novel Framework for Modifying Mathematical Formulas for the Generation of Specialized Datasets for Language Model Training](https://arxiv.org/abs/2502.20855).
## Model Details

### Model Description
This model has been mathematically pretrained on four tasks/datasets:

- **[Mathematical Formulas (MF)](https://huggingface.co/datasets/ddrg/math_formulas):** Masked Language Modeling (MLM) on mathematical formulas written in LaTeX
- **[Mathematical Texts (MT)](https://huggingface.co/datasets/ddrg/math_text):** MLM on mathematical texts (i.e., texts containing LaTeX formulas), where the masked tokens are more likely to be formula tokens or *mathematical words* (e.g., *sum*, *one*, ...)
- **[Named Math Formulas (NMF)](https://huggingface.co/datasets/ddrg/named_math_formulas):** Next-Sentence-Prediction (NSP)-like task pairing the name of a well-known mathematical identity (e.g., the Pythagorean Theorem) with a formula representation; the task is to classify whether the formula matches the identity described by the name
- **[Math Formula Retrieval (MFR)](https://huggingface.co/datasets/ddrg/math_formula_retrieval):** NSP-like task pairing two formulas; the task is to decide whether both describe the same mathematical concept (identity)
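
All four datasets are available on the Hugging Face Hub. A minimal sketch for inspecting one of them with the `datasets` library (the split name is an assumption):

```python
from datasets import load_dataset

# Load the corpus of LaTeX formulas used for the MLM task (split name assumed).
math_formulas = load_dataset("ddrg/math_formulas", split="train")
print(math_formulas[0])
```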
Compared to bert-base-cased, 300 additional mathematical [LaTeX tokens](added_tokens.json) have been added before the mathematical pre-training.
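
A quick way to see the effect of the extended vocabulary (the example LaTeX command is an illustrative assumption; see [added_tokens.json](added_tokens.json) for the actual token list):

```python
from transformers import AutoTokenizer

base = AutoTokenizer.from_pretrained("bert-base-cased")
mamut = AutoTokenizer.from_pretrained("aieng-lab/bert-base-cased-mamut")

print(len(mamut) - len(base))          # expected: 300 additional LaTeX tokens
print(base.tokenize(r"\frac{a}{b}"))   # base BERT fragments LaTeX commands
print(mamut.tokenize(r"\frac{a}{b}"))  # the extended vocabulary keeps them intact
```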
- **Further pretrained from model:** [bert-base-cased](https://huggingface.co/google-bert/bert-base-cased)

### Model Sources
- **Repository:** [aieng-lab/transformer-math-pretraining](https://github.com/aieng-lab/transformer-math-pretraining)
- **Paper:** [MAMUT: A Novel Framework for Modifying Mathematical Formulas for the Generation of Specialized Datasets for Language Model Training](https://arxiv.org/abs/2502.20855)
## Uses

The model is intended as a base model for fine-tuning on downstream tasks involving mathematical language and LaTeX formulas, such as formula retrieval or math-aware classification.
## How to Get Started with the Model

Use the code below to get started with the model.
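
A minimal sketch using 🤗 Transformers, assuming the checkpoint exposes the masked-language-modeling head (the example sentence is illustrative):

```python
from transformers import pipeline

# Load the model for masked language modeling; the repository ID follows
# the consolidated collection referenced at the top of this card.
fill_mask = pipeline("fill-mask", model="aieng-lab/bert-base-cased-mamut")

# Predict a masked token inside a LaTeX formula.
for pred in fill_mask(r"The Pythagorean theorem states that $a^2 + b^2 = [MASK]^2$."):
    print(f"{pred['token_str']}: {pred['score']:.3f}")
```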
## Training Details

### Training Data

See the four datasets listed under [Model Description](#model-description): Mathematical Formulas (MF), Mathematical Texts (MT), Named Math Formulas (NMF), and Math Formula Retrieval (MFR).

### Training Procedure

The full training procedure is described in the [MAMUT paper](https://arxiv.org/abs/2502.20855).
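
As an illustration only, a single step combining MLM with an NSP-like pair classification could look like the following hypothetical sketch; it reuses BERT's standard pre-training heads and is not the authors' code, and the pairing and label convention are assumptions:

```python
import torch
from transformers import BertTokenizerFast, BertForPreTraining

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = BertForPreTraining.from_pretrained("bert-base-cased")

# An NSP-like pair from Named Math Formulas: does the formula match the name?
enc = tokenizer("Pythagorean Theorem", r"a^2 + b^2 = c^2", return_tensors="pt")
mlm_labels = enc["input_ids"].clone()  # MLM labels (token masking omitted here)
nsp_label = torch.tensor([0])          # 0 = matching pair, 1 = mismatched pair

out = model(**enc, labels=mlm_labels, next_sentence_label=nsp_label)
out.loss.backward()                    # combined MLM + pair-classification loss
```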
## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]
## Environmental Impact

- **Hardware Type:** 8× NVIDIA A100 GPUs
- **Hours used:** 48
- **Compute Region:** Germany
## Citation

**BibTeX:**
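
A minimal entry derived from the paper's arXiv listing (authors omitted here; see the [arXiv page](https://arxiv.org/abs/2502.20855) for the full list):

```bibtex
@misc{mamut2025,
  title         = {MAMUT: A Novel Framework for Modifying Mathematical Formulas for the Generation of Specialized Datasets for Language Model Training},
  year          = {2025},
  eprint        = {2502.20855},
  archivePrefix = {arXiv},
  url           = {https://arxiv.org/abs/2502.20855},
}
```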