Collections
Collections including paper arxiv:2005.14165

- Attention Is All You Need
  Paper • 1706.03762 • Published • 121
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 20
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
  Paper • 2201.11903 • Published • 15
- Orca 2: Teaching Small Language Models How to Reason
  Paper • 2311.11045 • Published • 77

- Attention Is All You Need
  Paper • 1706.03762 • Published • 121
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 27
- Universal Language Model Fine-tuning for Text Classification
  Paper • 1801.06146 • Published • 8
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 20

- The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only
  Paper • 2306.01116 • Published • 44
- FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
  Paper • 2205.14135 • Published • 15
- RoFormer: Enhanced Transformer with Rotary Position Embedding
  Paper • 2104.09864 • Published • 17
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 20

- Lost in the Middle: How Language Models Use Long Contexts
  Paper • 2307.03172 • Published • 44
- Efficient Estimation of Word Representations in Vector Space
  Paper • 1301.3781 • Published • 8
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 27
- Attention Is All You Need
  Paper • 1706.03762 • Published • 121

- Attention Is All You Need
  Paper • 1706.03762 • Published • 121
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 20
- Learning to summarize from human feedback
  Paper • 2009.01325 • Published • 4
- Training language models to follow instructions with human feedback
  Paper • 2203.02155 • Published • 24

- Retentive Network: A Successor to Transformer for Large Language Models
  Paper • 2307.08621 • Published • 173
- Sparks of Artificial General Intelligence: Early experiments with GPT-4
  Paper • 2303.12712 • Published • 5
- GPT-4 Technical Report
  Paper • 2303.08774 • Published • 7
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
  Paper • 2201.11903 • Published • 15

- Mistral 7B
  Paper • 2310.06825 • Published • 58
- BloombergGPT: A Large Language Model for Finance
  Paper • 2303.17564 • Published • 32
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 27
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 22