Dataset preview (top rows by frequency; `token` is a string of length 1–687, `frequency` and `document_frequency` are int64):

| token | frequency | document_frequency |
|---|---:|---:|
| . | 220,373,313 | 14,840,661 |
| the | 207,397,407 | 14,567,551 |
| , | 200,859,483 | 14,534,043 |
| and | 118,500,132 | 14,417,441 |
| to | 111,384,770 | 14,255,763 |
| of | 99,998,213 | 13,978,837 |
| a | 91,785,542 | 14,012,756 |
| in | 71,784,231 | 13,633,597 |
| - | 63,895,226 | 11,850,610 |
| is | 46,724,328 | 12,292,265 |

(Preview truncated; the full dataset continues in the same three-column layout.)

FineWeb 10BT sample vocabulary counts

This is the vocabulary of the 10BT sample of FineWeb. It was obtained by normalizing and pretokenizing the corpus text with the bert-base-uncased tokenizer. You can use this vocabulary to:

  1. Obtain probabilities of subparts of your corpus.
  2. Define useful tokenizer extensions without fitting a new tokenizer.
  3. Analyze the semantic content of the corpus.
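For use (1), the `frequency` column yields a unigram language model directly. A minimal sketch, using a hand-typed slice of the top rows in place of the real columns (in practice, load the full `token`/`frequency` columns with the `datasets` library):

```python
import math

# Toy stand-in for the dataset's (token, frequency) columns;
# these three rows are copied from the top of the preview table.
freq = {".": 220_373_313, "the": 207_397_407, ",": 200_859_483}

# Unigram probabilities: each frequency normalized by the total token count.
total = sum(freq.values())
prob = {tok: f / total for tok, f in freq.items()}

def unigram_logprob(tokens, prob, floor=1e-12):
    """Log-probability of a pretokenized sequence, flooring unseen tokens."""
    return sum(math.log(prob.get(tok, floor)) for tok in tokens)

print(unigram_logprob(["the", "."], prob))
```

Note that the probabilities are only meaningful for text pretokenized the same way the vocabulary was built, i.e. with the bert-base-uncased normalizer and pretokenizer.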

The dataset contains 1.85 million tokens with their associated frequency and document frequency. It is sorted by frequency in descending order, so the top N rows are the N most frequent tokens.
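The `document_frequency` column also supports IDF-style weighting for use (3). A sketch under the assumption that the corpus holds roughly 14.84 million documents (the document frequency of ".", which appears in nearly every document, so this is strictly a lower bound):

```python
import math

N_DOCS = 14_840_661  # assumed corpus size: the document_frequency of "."

# Document frequencies copied from the preview table above.
df = {"the": 14_567_551, "people": 3_087_253}

# Inverse document frequency: tokens spread across fewer documents score higher.
idf = {tok: math.log(N_DOCS / d) for tok, d in df.items()}

print(idf)
```

Under this weighting, content words like "people" outrank function words like "the", which is what makes the raw counts usable for rough semantic analysis.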

Acknowledgments

Thanks to Mixedbread AI for a GPU grant supporting research into small retrieval models.
