This article is about the part of the World Wide Web not indexed by traditional search engines. For other uses, see Deep web (disambiguation).
The deep web,[1] invisible web,[2] or hidden web[3] are parts of the World Wide Web whose contents are not indexed by standard web search-engine programs.[4] This is in contrast to the "surface web", which is accessible to anyone using the Internet.[5] Computer scientist Michael K. Bergman is credited with coining the term in 2001 as a search-indexing term.[6]
Deep web sites can be accessed by a direct URL or IP address, but may require entering a password or other security information to access actual content.[7][8] Uses of deep web sites include web mail, online banking, cloud storage, restricted-access social-media pages and profiles, and web forums that require registration for viewing content. It also includes paywalled services such as video on demand and some online magazines and newspapers.
The terms "deep web" and "dark web" were first conflated in 2009, when deep web search terminology was discussed alongside illegal activities taking place on Freenet and other darknets.[9] Those criminal activities include the commerce of personal passwords, false identity documents, drugs, firearms, and child pornography.[10]
Since then, after their use in the media's reporting on the black-market website Silk Road, media outlets have generally used "deep web" synonymously with the dark web or darknet, a conflation some reject as inaccurate[11] and that has consequently become an ongoing source of confusion.[12] Wired reporters Kim Zetter[13] and Andy Greenberg[14] recommend that the terms be used in distinct ways. While the deep web refers to any site that cannot be accessed by a traditional search engine, the dark web is a portion of the deep web that has been intentionally hidden and is inaccessible through standard browsers and methods.[15][16][17][18][19]
Bergman, in a paper on the deep web published in The Journal of Electronic Publishing, mentioned that Jill Ellsworth used the term Invisible Web in 1994 to refer to websites that were not registered with any search engine.[20] Bergman cited a January 1996 article by Frank Garcia:[21]
It would be a site that's possibly reasonably designed, but they didn't bother to register it with any of the search engines. So, no one can find them! You're hidden. I call that the invisible Web.
Another early use of the term Invisible Web was by Bruce Mount and Matthew B. Koll of Personal Library Software, in a description of the No. 1 Deep Web program found in a December 1996 press release.[22]
The first use of the specific term deep web, now generally accepted, occurred in the aforementioned 2001 Bergman study.[20]
Methods that prevent web pages from being indexed by traditional search engines may be categorized as one or more of the following:
- Contextual web: pages with content varying for different access contexts (e.g., ranges of client IP addresses or previous navigation sequence).
- Dynamic content: dynamic pages, which are returned in response to a submitted query or accessed only through a form, especially if open-domain input elements (such as text fields) are used; such fields are hard to navigate without domain knowledge.
- Limited access content: sites that limit access to their pages in a technical manner (e.g., using the Robots Exclusion Standard, CAPTCHAs, or a no-store directive), which prohibit search engines from browsing them and creating cached copies[23] (see the sketch after this list). Sites may feature an internal search engine for exploring such pages.[24][25]
- Non-HTML/text content: textual content encoded in multimedia (image or video) files or specific file formats not recognised by search engines.
- Private web: sites that require registration and login (password-protected resources).
- Scripted content: pages that are accessible only by links produced by JavaScript as well as content dynamically downloaded from Web servers via Flash or Ajax solutions.
- Software: certain content is hidden intentionally from the regular Internet, accessible only with special software, such as Tor, I2P, or other darknet software. For example, Tor allows users to access websites using the .onion server address anonymously, hiding their IP address.
- Unlinked content: pages that are not linked to by other pages, which may prevent web-crawling programs from reaching them. Such content is referred to as pages without backlinks (also known as inlinks). In addition, search engines do not always detect all backlinks from searched web pages.
- Web archives: Web archival services such as the Wayback Machine enable users to see archived versions of web pages across time, including websites that have become inaccessible and are not indexed by search engines such as Google.[6] The Wayback Machine can be regarded as a tool for viewing the deep web, because past versions of websites cannot be reached through a standard search and survive only in such archives; since all websites are updated over time, web archives are considered deep web content.[26]
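As an illustration of the "limited access content" item above, the following minimal sketch (using Python's standard urllib.robotparser module) shows how a well-behaved crawler consults a site's Robots Exclusion Standard file before fetching pages; the domain, paths, and user-agent name are hypothetical.

```python
# Minimal sketch: a well-behaved crawler consults robots.txt (the Robots
# Exclusion Standard) before fetching a page. The domain, paths, and the
# "MyCrawler" user-agent are hypothetical examples.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.org/robots.txt")
rp.read()  # download and parse the robots.txt file

for path in ("/public/article.html", "/private/archive/"):
    url = f"https://example.org{path}"
    allowed = rp.can_fetch("MyCrawler", url)
    print(url, "->", "crawl" if allowed else "skip (excluded from indexing)")
```

Pages excluded in this way remain reachable by direct URL but are never crawled, so they stay outside standard search indexes.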
While it is not always possible to directly discover a specific web server's content so that it may be indexed, a site can potentially still be accessed indirectly (due to computer vulnerabilities).
To discover content on the web, search engines use web crawlers that follow hyperlinks over known protocols and port numbers. This technique is ideal for discovering content on the surface web but is often ineffective at finding deep web content. For example, these crawlers do not attempt to find dynamic pages that are the result of database queries, because the number of possible queries is indeterminate.[6] It has been noted that this can be partially overcome by providing links to query results, but doing so could unintentionally inflate the popularity of a deep web site.
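The limitation described above can be made concrete with a minimal, standard-library-only sketch of a surface-web crawler: it follows only anchor (a href) hyperlinks, so any page reachable solely through a form submission or database query is never visited. The seed URL and crawl limit are hypothetical.

```python
# Minimal sketch of surface-web crawling: follow <a href> hyperlinks only.
# Pages reachable solely via form submissions or database queries are never
# visited, which is why such content remains in the deep web.
from html.parser import HTMLParser
from urllib.request import urlopen
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":                      # only anchor tags are followed;
            for name, value in attrs:       # <form> targets are ignored
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, limit=20):
    seen, frontier = set(), [seed]
    while frontier and len(seen) < limit:
        url = frontier.pop()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue
        parser = LinkExtractor()
        parser.feed(html)
        frontier.extend(urljoin(url, link) for link in parser.links)
    return seen

print(crawl("https://example.org/"))   # hypothetical seed URL
```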
DeepPeep, Intute, Deep Web Technologies, Scirus, and Ahmia.fi are a few search engines that have accessed the deep web. Intute ran out of funding and, as of July 2011, is a temporary static archive.[27] Scirus was retired near the end of January 2014.[28]
Researchers have been exploring how the deep web can be crawled in an automatic fashion, including content that can be accessed only through special software such as Tor. In 2001, Sriram Raghavan and Hector Garcia-Molina (Stanford Computer Science Department, Stanford University)[29][30] presented an architectural model for a hidden-web crawler that used important terms provided by users or collected from the query interfaces to query a web form and crawl deep web content. Alexandros Ntoulas, Petros Zerfos, and Junghoo Cho of UCLA created a hidden-web crawler that automatically generated meaningful queries to issue against search forms.[31] Several form query languages (e.g., DEQUEL[32]) have been proposed that, besides issuing a query, also allow extraction of structured data from result pages. Another effort is DeepPeep, a project of the University of Utah sponsored by the National Science Foundation, which gathered hidden-web sources (web forms) in different domains based on novel focused-crawler techniques.[33][34]
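A minimal sketch of the general idea behind such hidden-web crawlers (not the actual systems described in the cited papers) is given below: candidate keywords are submitted to a site's search form, and the returned result pages are harvested for later indexing. The form URL, parameter name, and seed keywords are assumptions made for illustration.

```python
# Minimal sketch in the spirit of hidden-web crawlers: issue candidate keyword
# queries against a site's search form and harvest the result pages for
# indexing. The form URL, field name, and seed keywords are hypothetical.
from urllib.parse import urlencode
from urllib.request import urlopen

SEARCH_FORM = "https://example.org/search"   # hypothetical GET-based form
seed_keywords = ["census", "genome", "patent"]

def surface(keywords, max_pages=50):
    harvested = {}
    for kw in keywords:
        query_url = f"{SEARCH_FORM}?{urlencode({'q': kw})}"
        try:
            page = urlopen(query_url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue
        harvested[query_url] = page          # hand result pages to the indexer
        if len(harvested) >= max_pages:
            break
    return harvested

pages = surface(seed_keywords)
print(f"harvested {len(pages)} result pages for indexing")
```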
Commercial search engines have begun exploring alternative methods to crawl the deep web. The Sitemap Protocol (first developed and introduced by Google in 2005) and OAI-PMH are mechanisms that allow search engines and other interested parties to discover deep web resources on particular web servers. Both mechanisms allow web servers to advertise the URLs that are accessible on them, thereby allowing automatic discovery of resources that are not linked directly to the surface web. Google's deep web surfacing system computes submissions for each HTML form and adds the resulting HTML pages into the Google search engine index. The surfaced results account for a thousand queries per second to deep web content.[35] In this system, the pre-computation of submissions is done using three algorithms:
- selecting input values for text search inputs that accept keywords,
- identifying inputs that accept only values of a specific type (e.g., dates), and
- selecting a small number of input combinations that generate URLs suitable for inclusion into the Web search index.
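A minimal sketch of this kind of pre-computation, under the assumption of a hypothetical GET-based form with a keyword field and a date-typed field (not Google's actual system), could enumerate a small set of input combinations and emit them as crawlable URLs:

```python
# Minimal sketch of pre-computing form submissions as indexable URLs, in the
# spirit of the three steps above. The form URL, field names, and candidate
# values are hypothetical examples.
from itertools import product
from urllib.parse import urlencode

FORM_ACTION = "https://example.org/listings"          # hypothetical GET form

candidate_values = {
    "q": ["apartment", "house"],                      # keyword text input
    "date": ["2008-01-01", "2008-06-01"],             # input typed as a date
}

def surfaced_urls(values, max_urls=10):
    """Emit a small number of input combinations as crawlable URLs."""
    urls = []
    keys = list(values)
    for combo in product(*(values[k] for k in keys)):
        urls.append(f"{FORM_ACTION}?{urlencode(dict(zip(keys, combo)))}")
        if len(urls) >= max_urls:
            break
    return urls

for url in surfaced_urls(candidate_values):
    print(url)    # these URLs can be fetched and added to the search index
```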
In 2008, to make it easier for users of Tor hidden services to access and search sites with the hidden .onion suffix, Aaron Swartz designed Tor2web, a proxy application able to provide access by means of common web browsers.[36] Using this application, deep web links appear as a random sequence of letters followed by the .onion top-level domain.
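The rewriting idea behind such gateways can be sketched as follows; the gateway domain and onion address below are hypothetical examples rather than any live service.

```python
# Minimal sketch of the rewriting idea behind Tor2web-style gateways: an
# .onion address is exposed through an ordinary clearnet hostname so that a
# common browser can reach it via the proxy. The gateway domain and onion
# address are hypothetical.
from urllib.parse import urlsplit, urlunsplit

GATEWAY = "tor-gateway.example"     # hypothetical Tor2web-style gateway domain

def to_gateway_url(onion_url: str) -> str:
    parts = urlsplit(onion_url)
    host = parts.hostname or ""
    if not host.endswith(".onion"):
        raise ValueError("not an .onion address")
    # e.g. http://abcdefghij234567.onion/index.html
    #   -> https://abcdefghij234567.onion.tor-gateway.example/index.html
    return urlunsplit(("https", f"{host}.{GATEWAY}", parts.path, parts.query, ""))

print(to_gateway_url("http://abcdefghij234567.onion/index.html"))
```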
Devine, Jane; Egger-Sider, Francine (August 2004). "Beyond Google: the invisible web in the academic library". The Journal of Academic Librarianship. 30 (4): 265–269. doi:10.1016/j.acalib.2004.04.010.
Raghavan, Sriram; Garcia-Molina, Hector (September 11–14, 2001). "Crawling the Hidden Web". 27th International Conference on Very Large Data Bases.
Madhavan, Jayant; Ko, David; Kot, Łucja; Ganapathy, Vignesh; Rasmussen, Alex; Halevy, Alon (2008). "Google's Deep Web Crawl". Proceedings of the VLDB Endowment. 1 (2): 1241–52.
Lam, Kwok-Yan; Chi, Chi-Hung; Qing, Sihan (November 23, 2016). Information and Communications Security: 18th International Conference, ICICS 2016, Singapore, Singapore, November 29 – December 2, 2016, Proceedings. Springer. ISBN 9783319500119. Retrieved January 15, 2017.
"Elsevier to Retire Popular Science Search Engine". library.bldrdoc.gov. December 2013. Archived from the original on June 23, 2015. Retrieved June 22, 2015. by end of January 2014, Elsevier will be discontinuing Scirus, its free science search engine. Scirus has been a wide-ranging research tool, with over 575 million items indexed for searching, including webpages, pre-print articles, patents, and repositories.
Raghavan, Sriram; Garcia-Molina, Hector (2000). "Crawling the Hidden Web" (PDF). Stanford Digital Libraries Technical Report. Archived from the original (PDF) on May 8, 2018. Retrieved December 27, 2008.
Raghavan, Sriram; Garcia-Molina, Hector (2001). "Crawling the Hidden Web" (PDF). Proceedings of the 27th International Conference on Very Large Data Bases (VLDB). pp. 129–38.
Madhavan, Jayant; Ko, David; Kot, Łucja; Ganapathy, Vignesh; Rasmussen, Alex; Halevy, Alon (2008). Google's Deep-Web Crawl (PDF). PVLDB '08, August 23-28, 2008, Auckland, New Zealand. VLDB Endowment, ACM. Archived from the original (PDF) on September 16, 2012. Retrieved April 17, 2009.
- Barker, Joe (January 2004). "Invisible Web: What it is, Why it exists, How to find it, and its inherent ambiguity". University of California, Berkeley, Teaching Library Internet Workshops. Archived from the original on July 29, 2005. Retrieved July 26, 2011.
- Basu, Saikat (March 14, 2010). "10 Search Engines to Explore the Invisible Web". MakeUseOf.com.
- Ozkan, Akin (November 2014). "Deep Web / Derin İnternet". Archived from the original on November 8, 2014. Retrieved November 6, 2014.
- Gruchawka, Steve (June 2006). "How-To Guide to the Deep Web". Archived from the original on January 5, 2014. Retrieved February 28, 2007.
- Hamilton, Nigel (2003). "The Mechanics of a Deep Net Metasearch Engine". 12th World Wide Web Conference.
- He, Bin; Chang, Kevin Chen-Chuan (2003). "Statistical Schema Matching across Web Query Interfaces" (PDF). Proceedings of the 2003 ACM SIGMOD International Conference on Management of Data. Archived from the original (PDF) on July 20, 2011.
- Howell O'Neill, Patrick (October 2013). "How to search the Deep Web". The Daily Dot.
- Ipeirotis, Panagiotis G.; Gravano, Luis; Sahami, Mehran (2001). "Probe, Count, and Classify: Categorizing Hidden-Web Databases" (PDF). Proceedings of the 2001 ACM SIGMOD International Conference on Management of Data. pp. 67–78. Archived from the original (PDF) on September 12, 2006. Retrieved September 26, 2006.
- King, John D.; Li, Yuefeng; Tao, Daniel; Nayak, Richi (November 2007). "Mining World Knowledge for Analysis of Search Engine Content" (PDF). Web Intelligence and Agent Systems. 5 (3): 233–53. Archived from the original (PDF) on December 3, 2008. Retrieved July 26, 2011.
- McCown, Frank; Liu, Xiaoming; Nelson, Michael L.; Zubair, Mohammad (March–April 2006). "Search Engine Coverage of the OAI-PMH Corpus" (PDF). IEEE Internet Computing. 10 (2): 66–73. doi:10.1109/MIC.2006.41. S2CID 15511914.
- Price, Gary; Sherman, Chris (July 2001). The Invisible Web: Uncovering Information Sources Search Engines Can't See. CyberAge Books. ISBN 978-0-910965-51-4.
- Shestakov, Denis (June 2008). Search Interfaces on the Web: Querying and Characterizing. TUCS Doctoral Dissertations 104, University of Turku.
- Whoriskey, Peter (December 11, 2008). "Firms Push for a More Searchable Federal Web". The Washington Post. p. D01.
- Wright, Alex (March 2004). "In Search of the Deep Web". Salon. Archived from the original on March 9, 2007.
- Scientists, Naked (December 2014). "The Internet: the good, the bad and the ugly – In-depth exploration of the Internet and the Dark Web by Cambridge University's Naked Scientists" (Podcast).
- King, John D. (July 2009). Search Engine Content Analysis (PDF) (Thesis). Queensland University of Technology.
- Media related to Deep web at Wikimedia Commons
- The dictionary definition of deep web at Wiktionary