Common Crawl

Nonprofit web crawling and archive organization

Common Crawl is a nonprofit 501(c)(3) organization that crawls the web and freely provides its archives and datasets to the public.[1][2] Common Crawl's web archive consists of petabytes of data collected since 2008.[3] It completes crawls approximately once a month.[4]

Common Crawl
Type of business: 501(c)(3) non-profit
Founded: 2007
Headquarters: San Francisco, California; Los Angeles, California, United States
Founder(s): Gil Elbaz
Key people: Peter Norvig, Rich Skrenta, Eva Ho
URL: commoncrawl.org
Content license: Apache 2.0 (software)[clarification needed]

Common Crawl was founded by Gil Elbaz.[5] Advisors to the non-profit include Peter Norvig and Joi Ito.[6] The organization's crawlers respect nofollow and robots.txt policies. Open source code for processing Common Crawl's data set is publicly available.
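As an illustration of the robots.txt policies the crawlers respect, the sketch below uses Python's standard-library `urllib.robotparser` to evaluate a hand-written robots.txt against `CCBot`, Common Crawl's crawler user agent; the robots.txt content and URLs are hypothetical examples, not taken from any real site.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for illustration: CCBot is barred from
# /private/ but may fetch everything else.
robots_txt = """\
User-agent: CCBot
Disallow: /private/

User-agent: *
Disallow:
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant crawler checks each URL before fetching it.
print(parser.can_fetch("CCBot", "https://example.com/private/page.html"))  # False
print(parser.can_fetch("CCBot", "https://example.com/public/page.html"))   # True
```

A site owner who wants to opt out of Common Crawl's archive can therefore add a `User-agent: CCBot` group to their robots.txt, and the crawler will skip the disallowed paths.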

The Common Crawl dataset includes copyrighted work and is distributed from the US under fair use claims. Researchers in other countries have used techniques such as shuffling sentences or referencing the Common Crawl dataset to work around copyright restrictions in their own legal jurisdictions.[7]

English is the primary language for 46% of documents in the March 2023 version of the Common Crawl dataset. The next most common primary languages are German, Russian, Japanese, French, Spanish and Chinese, each with less than 6% of documents.[8]

History

Amazon Web Services began hosting Common Crawl's archive through its Public Data Sets program in 2012.[9]

The organization began releasing metadata files and the text output of the crawlers alongside .arc files in July 2012.[10] Common Crawl's archives had only included .arc files previously.[10]

In December 2012, blekko donated to Common Crawl search engine metadata blekko had gathered from crawls it conducted from February to October 2012.[11] The donated data helped Common Crawl "improve its crawl while avoiding spam, porn and the influence of excessive SEO."[11]

In 2013, Common Crawl began using the Apache Software Foundation's Nutch webcrawler instead of a custom crawler.[12] Common Crawl switched from using .arc files to .warc files with its November 2013 crawl.[13]
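The WARC format that Common Crawl adopted with its November 2013 crawl (standardized as ISO 28500) stores each capture as a record: a version line, `Name: Value` headers, a blank line, then the payload. The sketch below parses a minimal hand-written record with only the standard library; the record content is an invented example, not real crawl data.

```python
# A minimal, hand-written WARC record for illustration. Real Common Crawl
# .warc files are gzip-compressed streams of many such records.
sample = (
    "WARC/1.0\r\n"
    "WARC-Type: response\r\n"
    "WARC-Target-URI: https://example.com/\r\n"
    "WARC-Date: 2013-11-15T00:00:00Z\r\n"
    "Content-Length: 13\r\n"
    "\r\n"
    "Hello, world!"
)

def parse_warc_record(record: str):
    """Split one WARC record into (version, header dict, body)."""
    head, _, body = record.partition("\r\n\r\n")
    lines = head.split("\r\n")
    version = lines[0]                                    # e.g. "WARC/1.0"
    headers = dict(line.split(": ", 1) for line in lines[1:])
    return version, headers, body

version, headers, body = parse_warc_record(sample)
print(version)                     # WARC/1.0
print(headers["WARC-Target-URI"])  # https://example.com/
print(body)                        # Hello, world!
```

In practice, a dedicated library such as `warcio` is the usual way to read these files; the sketch only shows the record layout that distinguishes WARC from the older ARC format.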

A filtered version of Common Crawl was used to train OpenAI's GPT-3 language model, announced in 2020.[14]

Timeline of Common Crawl data

The following data have been collected from the official Common Crawl Blog[15] and Common Crawl's API.[16]

Crawl date | Size in TiB | Billions of pages | Comments
April 2024 | 386 | 2.7 | Crawl conducted from April 12 to April 24, 2024
February/March 2024 | 425 | 3.16 | Crawl conducted from February 20 to March 5, 2024
December 2023 | 454 | 3.35 | Crawl conducted from November 28 to December 12, 2023
June 2023 | 390 | 3.1 | Crawl conducted from May 27 to June 11, 2023
April 2023 | 400 | 3.1 | Crawl conducted from March 20 to April 2, 2023
February 2023 | 400 | 3.15 | Crawl conducted from January 26 to February 9, 2023
December 2022 | 420 | 3.35 | Crawl conducted from November 26 to December 10, 2022
October 2022 | 380 | 3.15 | Crawl conducted in September and October 2022
April 2021 | 320 | 3.1 |
November 2018 | 220 | 2.6 |
October 2018 | 240 | 3.0 |
September 2018 | 220 | 2.8 |
August 2018 | 220 | 2.65 |
July 2018 | 255 | 3.25 |
June 2018 | 235 | 3.05 |
May 2018 | 215 | 2.75 |
April 2018 | 230 | 3.1 |
March 2018 | 250 | 3.2 |
February 2018 | 270 | 3.4 |
January 2018 | 270 | 3.4 |
December 2017 | 240 | 2.9 |
November 2017 | 260 | 3.2 |
October 2017 | 300 | 3.65 |
September 2017 | 250 | 3.01 |
August 2017 | 280 | 3.28 |
July 2017 | 240 | 2.89 |
June 2017 | 260 | 3.16 |
May 2017 | 250 | 2.96 |
April 2017 | 250 | 2.94 |
March 2017 | 250 | 3.07 |
February 2017 | 250 | 3.08 |
January 2017 | 250 | 3.14 |
December 2016 | ? | 2.85 |
October 2016 | ? | 3.25 |
September 2016 | ? | 1.72 |
August 2016 | ? | 1.61 |
July 2016 | ? | 1.73 |
June 2016 | ? | 1.23 |
May 2016 | ? | 1.46 |
April 2016 | ? | 1.33 |
February 2016 | ? | 1.73 |
November 2015 | 151 | 1.82 |
September 2015 | 106 | 1.32 |
August 2015 | 149 | 1.84 |
July 2015 | 145 | 1.81 |
June 2015 | 131 | 1.67 |
May 2015 | 159 | 2.05 |
April 2015 | 168 | 2.11 |
March 2015 | 124 | 1.64 |
February 2015 | 145 | 1.9 |
January 2015 | 139 | 1.82 |
December 2014 | 160 | 2.08 |
November 2014 | 135 | 1.95 |
October 2014 | 254 | 3.7 |
September 2014 | 220 | 2.8 |
August 2014 | 200 | 2.8 |
July 2014 | 266 | 3.6 |
April 2014 | 183 | 2.6 |
March 2014 | 223 | 2.8 | First Nutch crawl
Winter 2013 | 148 | 2.3 | Crawl conducted from December 4 through December 22, 2013
Summer 2013 | ? | ? | Crawl conducted from May 2013 through June 2013. First WARC crawl
2012 | ? | ? | Crawl conducted from January 2012 through June 2012. Final ARC crawl
2009–2010 | ? | ? | Crawl conducted from July 2009 through September 2010
2008–2009 | ? | ? | Crawl conducted from May 2008 through January 2009

Norvig Web Data Science Award

In collaboration with SURFsara, Common Crawl sponsors the Norvig Web Data Science Award, a competition open to students and researchers in Benelux.[17][18] The award is named for Peter Norvig, who also chairs the judging committee for the award.[17]

Colossal Clean Crawled Corpus

Google's version of the Common Crawl is called the Colossal Clean Crawled Corpus, or C4 for short. It was constructed for the training of the T5 language model series in 2019.[19] There are some concerns over copyrighted content in the C4.[20]

References
