Nonprofit web crawling and archive organization
Common Crawl is a 501(c)(3) nonprofit organization that crawls the web and freely provides its archives and datasets to the public.[1][2] Common Crawl's web archive consists of petabytes of data collected since 2008.[3] It generally completes a new crawl every month.[4]
Type of business | 501(c)(3) non-profit |
---|---|
Founded | 2007 |
Headquarters | San Francisco, California; Los Angeles, California, United States |
Founder(s) | Gil Elbaz |
Key people | Peter Norvig, Rich Skrenta, Eva Ho |
URL | commoncrawl.org |
Content license | Apache 2.0 (software) |
Common Crawl was founded by Gil Elbaz.[5] Advisors to the non-profit include Peter Norvig and Joi Ito.[6] The organization's crawlers respect nofollow and robots.txt policies. Open source code for processing Common Crawl's data set is publicly available.
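As a sketch of what respecting robots.txt involves, the Python standard library's parser can check whether a site's policy allows a given URL to be fetched. The rules below are hand-written for illustration (they are not Common Crawl's actual configuration), and "CCBot" is used as the crawler's user-agent name:

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt rules; a real crawler would fetch
# https://example.com/robots.txt before requesting any page on the site.
rules = """
User-agent: *
Disallow: /private/

User-agent: CCBot
Disallow: /no-ccbot/
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# A polite crawler skips any URL the site's policy disallows.
print(rp.can_fetch("CCBot", "https://example.com/page.html"))   # allowed
print(rp.can_fetch("CCBot", "https://example.com/no-ccbot/x"))  # disallowed
```

The same check would be applied before every request; nofollow handling works analogously but per-link, based on each page's HTML rather than a site-wide file.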
The Common Crawl dataset includes copyrighted work and is distributed from the US under fair-use claims. Researchers in other countries have used techniques such as shuffling sentences or referencing the Common Crawl dataset to work around copyright law in their own jurisdictions.[7]
English is the primary language of 46% of documents in the March 2023 version of the Common Crawl dataset. The next most common primary languages are German, Russian, Japanese, French, Spanish, and Chinese, each accounting for less than 6% of documents.[8]
Amazon Web Services began hosting Common Crawl's archive through its Public Data Sets program in 2012.[9]
The organization began releasing metadata files and the text output of the crawlers alongside .arc files in July 2012.[10] Common Crawl's archives had only included .arc files previously.[10]
In December 2012, blekko donated to Common Crawl the search engine metadata it had gathered from crawls conducted from February to October 2012.[11] The donated data helped Common Crawl "improve its crawl while avoiding spam, porn and the influence of excessive SEO."[11]
In 2013, Common Crawl began using the Apache Software Foundation's Nutch webcrawler instead of a custom crawler.[12] Common Crawl switched from using .arc files to .warc files with its November 2013 crawl.[13]
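The WARC format that replaced ARC wraps each capture in a plain-text header block followed by the payload. A minimal parsing sketch is below; the record itself is hand-written for illustration, and real WARC files are typically gzip-compressed per record:

```python
# A hand-written WARC-style record for illustration only.
record = (
    "WARC/1.0\r\n"
    "WARC-Type: response\r\n"
    "WARC-Target-URI: http://example.com/\r\n"
    "WARC-Date: 2013-11-01T00:00:00Z\r\n"
    "Content-Length: 13\r\n"
    "\r\n"
    "Hello, crawl!"
)

# Headers and payload are separated by a blank CRLF line.
header_block, _, payload = record.partition("\r\n\r\n")
lines = header_block.split("\r\n")
version = lines[0]                                     # "WARC/1.0"
fields = dict(line.split(": ", 1) for line in lines[1:])

# Content-Length tells a reader exactly how many payload bytes follow.
assert len(payload) == int(fields["Content-Length"])
print(version, fields["WARC-Type"], fields["WARC-Target-URI"])
```

In practice, libraries such as warcio handle compression and record iteration; the point here is only the self-describing header/payload layout that distinguishes WARC from the older ARC format.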
A filtered version of Common Crawl was used to train OpenAI's GPT-3 language model, announced in 2020.[14]
The following data have been collected from the official Common Crawl Blog[15] and Common Crawl's API.[16]
Crawl date | Size in TiB | Billions of pages | Comments |
---|---|---|---|
April 2024 | 386 | 2.7 | Crawl conducted from April 12 to April 24, 2024 |
February/March 2024 | 425 | 3.16 | Crawl conducted from February 20 to March 5, 2024 |
December 2023 | 454 | 3.35 | Crawl conducted from November 28 to December 12, 2023 |
June 2023 | 390 | 3.1 | Crawl conducted from May 27 to June 11, 2023 |
April 2023 | 400 | 3.1 | Crawl conducted from March 20 to April 2, 2023 |
February 2023 | 400 | 3.15 | Crawl conducted from January 26 to February 9, 2023 |
December 2022 | 420 | 3.35 | Crawl conducted from November 26 to December 10, 2022 |
October 2022 | 380 | 3.15 | Crawl conducted in September and October 2022 |
April 2021 | 320 | 3.1 | |
November 2018 | 220 | 2.6 | |
October 2018 | 240 | 3.0 | |
September 2018 | 220 | 2.8 | |
August 2018 | 220 | 2.65 | |
July 2018 | 255 | 3.25 | |
June 2018 | 235 | 3.05 | |
May 2018 | 215 | 2.75 | |
April 2018 | 230 | 3.1 | |
March 2018 | 250 | 3.2 | |
February 2018 | 270 | 3.4 | |
January 2018 | 270 | 3.4 | |
December 2017 | 240 | 2.9 | |
November 2017 | 260 | 3.2 | |
October 2017 | 300 | 3.65 | |
September 2017 | 250 | 3.01 | |
August 2017 | 280 | 3.28 | |
July 2017 | 240 | 2.89 | |
June 2017 | 260 | 3.16 | |
May 2017 | 250 | 2.96 | |
April 2017 | 250 | 2.94 | |
March 2017 | 250 | 3.07 | |
February 2017 | 250 | 3.08 | |
January 2017 | 250 | 3.14 | |
December 2016 | — | 2.85 | |
October 2016 | — | 3.25 | |
September 2016 | — | 1.72 | |
August 2016 | — | 1.61 | |
July 2016 | — | 1.73 | |
June 2016 | — | 1.23 | |
May 2016 | — | 1.46 | |
April 2016 | — | 1.33 | |
February 2016 | — | 1.73 | |
November 2015 | 151 | 1.82 | |
September 2015 | 106 | 1.32 | |
August 2015 | 149 | 1.84 | |
July 2015 | 145 | 1.81 | |
June 2015 | 131 | 1.67 | |
May 2015 | 159 | 2.05 | |
April 2015 | 168 | 2.11 | |
March 2015 | 124 | 1.64 | |
February 2015 | 145 | 1.9 | |
January 2015 | 139 | 1.82 | |
December 2014 | 160 | 2.08 | |
November 2014 | 135 | 1.95 | |
October 2014 | 254 | 3.7 | |
September 2014 | 220 | 2.8 | |
August 2014 | 200 | 2.8 | |
July 2014 | 266 | 3.6 | |
April 2014 | 183 | 2.6 | |
March 2014 | 223 | 2.8 | First Nutch crawl |
Winter 2013 | 148 | 2.3 | Crawl conducted from December 4 through December 22, 2013 |
Summer 2013 | ? | ? | Crawl conducted from May 2013 through June 2013. First WARC crawl |
2012 | ? | ? | Crawl conducted from January 2012 through June 2012. Final ARC crawl |
2009-2010 | ? | ? | Crawl conducted from July 2009 through September 2010 |
2008-2009 | ? | ? | Crawl conducted from May 2008 through January 2009 |
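Crawl listings like those above can also be queried programmatically through Common Crawl's index API, which returns one JSON object per matching capture. The line below is hand-written to illustrate the general record shape; the field names follow the CDX-style records served by index.commoncrawl.org, but the values are made up:

```python
import json

# Illustrative index-API response line; values are invented.
line = ('{"urlkey": "com,example)/", "timestamp": "20240418000000", '
        '"url": "http://example.com/", "status": "200", '
        '"filename": "crawl-data/CC-MAIN-2024-18/segments/example.warc.gz", '
        '"offset": "12345", "length": "6789"}')

rec = json.loads(line)

# The offset/length pair locates one record inside a large .warc.gz file,
# so a single capture can be fetched with an HTTP Range request.
end = int(rec["offset"]) + int(rec["length"]) - 1
byte_range = f"bytes={rec['offset']}-{end}"
print(rec["url"], rec["filename"], byte_range)
```

This is why the archive can be consumed selectively: rather than downloading whole monthly crawls, a consumer can look up captures of interest in the index and retrieve only those byte ranges.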
In collaboration with SURFsara, Common Crawl sponsors the Norvig Web Data Science Award, a competition open to students and researchers in Benelux.[17][18] The award is named for Peter Norvig, who also chairs its judging committee.[17]
Google's version of the Common Crawl dataset is the Colossal Clean Crawled Corpus, or C4 for short. It was constructed in 2019 to train the T5 series of language models.[19] Some concerns have been raised over copyrighted content in C4.[20]