Google data centers are the large data center facilities Google uses to provide its services. They combine large amounts of disk storage, compute nodes organized in aisles of racks, internal and external networking, environmental controls (mainly cooling and humidity control), and operations software (particularly for load balancing and fault tolerance).
External video: Google Data Center 360° Tour
There is no official data on how many servers are in Google data centers, but Gartner estimated in a July 2016 report that Google at the time had 2.5 million servers. This number is changing as the company expands capacity and refreshes its hardware.[1]
The locations of Google's various data centers by continent are as follows:[2][3]
Continent | Location | Geo | Products Location | Cloud Location | Timeline | Description |
---|---|---|---|---|---|---|
North America | Arcola (VA), USA | 38°56′35.99″N 77°31′27.61″W | Loudoun County | N. Virginia (us-east4) | 2017 - announced[4][5] | |
North America | Atlanta (GA), USA | 33°44′59.04″N 84°35′5.33″W | Douglas County | - | 2003 - launched | 350 employees |
South America | Cerrillos, Santiago, Chile | 33°31′14″S 70°43′18″W[6] | - | Santiago (southamerica-west1) | 2020 - announced;[7] 2021 - launched[8] | |
Asia | Changhua County, Taiwan | 24°08′18.6″N 120°25′32.6″E | Changhua County | Taiwan (asia-east1) | 2011 - announced; 2013 - launched | 60 employees |
North America | Clarksville (TN), USA | 36°37′16″N 87°15′47″W | Montgomery County | - | 2015 - announced | |
North America | Columbus (OH), USA | - | - | Columbus (us-east5) | 2022 - launched[9] | |
North America | Council Bluffs (IA), USA | 41°13′17.7″N 95°51′49.92″W | Council Bluffs | - | 2007 - announced; 2009 - first phase completed; 2012, 2015 - expanded | 130 employees |
North America | Council Bluffs (IA), USA | 41°10′06″N 95°47′46″W | - | Iowa (us-central1) | | |
Asia | Delhi, India | - | - | Delhi (asia-south2) | 2020 - announced; 2021 - launched[10] | |
Middle East | Doha, Qatar | - | - | Doha (me-central1) | 2023 - launched[11] | |
Europe | Dublin, Ireland | 53°19′12.39″N 6°26′31.43″W | Dublin | - | 2011 - announced; 2012 - launched | 150 employees[12] |
Europe | Eemshaven, Netherlands | 53.4252171°N 6.8622574°E | Eemshaven | Netherlands (europe-west4) | 2014 - announced; 2016 - launched; 2018, 2019 - expanded | 200 employees |
Europe | Frankfurt, Germany | 50°07′21″N 8°58′27″E[13] | - | Frankfurt (europe-west3) | 2022 - expanded[14] | |
Europe | Fredericia, Denmark | 55°33′29.5″N 9°39′20.8″E | Fredericia | - | 2018 - announced;[15] 2020 - launched | €600M building costs |
Europe | Ghlin, Hainaut, Belgium | 50°28′09.6″N 3°51′55.7″E | Saint-Ghislain | Belgium (europe-west1) | 2007 - announced; 2010 - launched | 12 employees |
Europe | Hamina, Finland | 60°32′11.68″N 27°7′1.21″E | Hamina | Finland (europe-north1) | 2009 - announced; 2011 - first phase completed; 2022 - expanded | 6 buildings, 400 employees[16] |
North America | Henderson (NV), USA | 36°03′20″N 115°00′37″W | Henderson | Las Vegas (us-west4) | 2019 - announced;[17] 2020 - launched | 64 acres; $1.2B building costs[18][19] |
Asia | Hong Kong | - | - | Hong Kong (asia-east2) | 2017 - announced;[20] 2018 - launched[21] | |
Asia | Inzai, Japan | 35°49′04″N 140°07′57″E | Inzai | - | 2023 - launched | |
Asia | Jakarta, Indonesia | - | - | Jakarta (asia-southeast2) | 2020 - launched[22] | |
Asia | Koto-ku, Tokyo, Japan | - | - | Tokyo (asia-northeast1) | 2016 - launched[23] | |
North America | Leesburg (VA), USA | 39°3′6.38″N 77°32′20.38″W | Loudoun County | N. Virginia (us-east4) | 2017 - announced[4][5] | |
North America | Lenoir (NC), USA | 35°53′54.78″N 81°32′50.58″W | Lenoir | - | 2007 - announced; 2009 - launched | over 110 employees |
Asia | Lok Yang Way, Pioneer, Singapore | 1°19′26″N 103°41′36″E[24] | Singapore | Singapore (asia-southeast1) | 2022 - launched | |
Europe | London, UK | - | - | London (europe-west2) | 2017 - launched[25] | |
North America | Los Angeles (CA), USA | - | - | Los Angeles (us-west2) | | |
Europe | Madrid, Spain | 40°31′10″N 3°20′27″W | - | Madrid (europe-southwest1) | 2022 - launched[26] | |
Pacific | Melbourne, Australia | - | - | Melbourne (australia-southeast2) | 2021 - launched[27] | |
Europe | Middenmeer, Noord-Holland, Netherlands | 52°47′24″N 5°01′45″E[28] | Middenmeer | Netherlands (europe-west4) | 2019 - announced[29] | |
North America | Midlothian (TX), USA | 32°26′35″N 97°03′44″W | Midlothian | Dallas (us-south1) | 2019 - announced; 2022 - launched[30] | 375 acres; $600M building costs[31] |
Europe | Milan, Italy | - | - | Milan (europe-west8) | 2022 - launched[32] | |
North America | Moncks Corner (SC), USA | 33°03′50.8″N 80°02′36.1″W | Berkeley County | South Carolina (us-east1) | 2007 - launched; 2013 - expanded | 150 employees |
North America | Montreal, Quebec, Canada[33] | - | - | Montréal (northamerica-northeast1) | 2018 - launched[34] | 62.4 hectares; $600M building costs[35] |
Asia | Mumbai, India | - | - | Mumbai (asia-south1) | 2017 - launched[36] | |
North America | New Albany (OH), USA | 40°03′41″N 82°45′31″W | New Albany | - | 2019 - announced | 400 acres; $600M building costs[37][38] |
Asia | Osaka, Japan | - | - | Osaka (asia-northeast2) | 2019 - launched[39] | |
South America | Osasco, São Paulo, Brazil | - | - | São Paulo (southamerica-east1) | 2017 - launched[40] | |
North America | Papillion (NE), USA | 41°08′00″N 96°08′39″W | Papillion | - | 2019 - announced | 275 acres; $600M building costs[41][42] |
Europe | Paris, France | - | - | Paris (europe-west9) | 2022 - launched[43] | |
North America | Pryor Creek (OK), USA | 36°14′28.1″N 95°19′48.22″W | Mayes County | - | 2007 - announced; 2012 - expanded | over 400 employees;[44] land at MidAmerica Industrial Park |
South America | Quilicura, Santiago, Chile | 33°21′30.5″S 70°41′50.4″W | Quilicura | - | 2012 - announced; 2015 - launched | up to 20 employees expected; a million-dollar investment plan to increase capacity at Quilicura was announced in 2018[45] |
North America | Reno (NV), USA | 39°30′04″N 119°25′46″W | Storey County | - | 2017 - 1,210 acres of land bought in the Tahoe Reno Industrial Center;[46] 2018 - announced; 2018 November - project approved by the state of Nevada[47][48] | |
North America | Salt Lake City (UT), USA | - | - | Salt Lake City (us-west3) | 2020 - launched[49] | |
Asia | Seoul, South Korea | - | - | Seoul (asia-northeast3) | 2020 - launched[50] | |
Pacific | Sydney, Australia | - | - | Sydney (australia-southeast1) | 2017 - launched[51] | |
Middle East | Tel Aviv, Israel[52] | - | - | Tel Aviv (me-west1) | 2022 - launched[53] | |
North America | The Dalles (OR), USA | 45°37′57.04″N 121°12′8.16″W | The Dalles | Oregon (us-west1) | 2006 - launched | 80 full-time employees |
North America | Toronto, Canada | - | - | Toronto (northamerica-northeast2) | 2021 - launched[54] | |
Europe | Turin, Italy | 45°08′48″N 7°44′32″E | - | Turin (europe-west12) | 2023 - launched[55] | |
South America | Vinhedo, São Paulo, Brazil | - | - | São Paulo (southamerica-east1) | | |
Europe | Warsaw, Poland | - | - | Warsaw (europe-central2) | 2019 - announced; 2021 - launched[56] | |
Asia | Wenya, Jurong West, Singapore | 1°21′04.8″N 103°42′35.2″E | Singapore | Singapore (asia-southeast1) | 2011 - announced; 2013 - launched; 2015 - expanded | |
North America | Widows Creek (Bridgeport) (AL), USA | 34°54′48.4″N 85°44′53.1″W[57] | Jackson County | - | 2018 - broke ground | |
Europe | Zürich, Switzerland | 47°26′45″N 8°12′39″E[58] | - | Zurich (europe-west6) | 2018 - announced; 2019 - launched[59] | |
Europe | Austria | - | - | - | 2022 - announced[60] | |
Europe | Berlin, Germany[61] | - | - | Berlin (europe-west10) | 2021 - announced;[62] 2023 August - launched[63] | |
Middle East | Dammam, Saudi Arabia | - | - | - | 2021 - announced[64] | |
Europe | Athens, Greece | - | - | - | 2022 - announced[60] | |
North America | Kansas City (MO), USA | - | - | - | 2019 - announced[65] | |
Middle East | Kuwait | - | - | - | 2023 - announced[66] | |
Asia | Malaysia | - | - | - | 2022 - announced[67] | |
Pacific | Auckland, New Zealand | - | - | - | 2022 - announced[67] | |
Europe | Oslo, Norway | - | - | - | 2022 - announced[60] | |
North America | Querétaro, Mexico | - | - | - | 2022 - announced[68] | |
Africa | Johannesburg, South Africa | - | - | Johannesburg (africa-south1) | 2022 - announced;[60] 2024 - launched | |
Europe | Sweden | - | - | - | 2022 - announced[60] | |
Asia | Tainan City, Taiwan | - | - | Taiwan (asia-east1) | 2019 September - announced[69][70][71] | |
Asia | Thailand | - | - | - | 2022 - announced[67] | |
Asia | Yunlin County, Taiwan | - | - | Taiwan (asia-east1) | 2020 September - announced[72] | |
North America | Mesa (AZ), USA | - | - | - | 2023 - construction started[73] | |
Europe | Waltham Cross, Hertfordshire, UK | 51°41′44″N 0°02′55″W | - | - | 2024 January - announced[74] | |
South America | Canelones, Uruguay | 34°48′56″S 55°59′44″W | - | - | 2024 - construction started;[75] 2026 - inauguration expected[76] | |
The original hardware (circa 1998) that was used by Google when it was located at Stanford University included:[77]
The state of Google infrastructure in 2003 was described in a report by Luiz André Barroso, Jeff Dean, and Urs Hölzle as a "reliable computing infrastructure from clusters of unreliable commodity PCs".[78]
On average, a single search query reads ~100 MB of data and consumes tens of billions of CPU cycles. During peak times, Google served ~1,000 queries per second. To handle this peak load, Google built a compute cluster of ~15,000 commodity-class PCs rather than expensive supercomputer hardware, and compensated for the lower hardware reliability with fault-tolerant software.
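A rough consistency check of these figures, assuming the query load is spread evenly across the machines (an assumption, not stated in the source), shows why a commodity cluster suffices:

```latex
\begin{align*}
  1000~\text{queries/s} \times 100~\text{MB/query} &\approx 100~\text{GB/s aggregate read bandwidth at peak}\\
  \frac{100~\text{GB/s}}{15{,}000~\text{machines}} &\approx 7~\text{MB/s per machine}
\end{align*}
```

A few megabytes per second per machine is well within what a commodity PC of that era could sustain.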
The cluster consists of five parts. Central Google Web servers (GWS) face the public Internet. Upon receiving a user request, a Google Web server queries a spell checker, an advertisement server, many index servers, and many document servers. Each of these four back-end services handles its part of the request, and the GWS assembles their responses and serves the final result to the user.
The raw documents totaled ~100 TB, and the index files ~10 TB. The index files are sharded, with each shard served by a "pool" of index servers; the raw documents are sharded similarly. Each query against the index yields a list of document IDs, which are then sent to the document servers to retrieve titles and keyword-in-context snippets.
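The flow just described can be illustrated with a minimal sketch (not Google's actual code; the toy corpus, shard count, and function names are invented for illustration):

```python
# Sketch of the fan-out/assemble flow: a front-end queries sharded index
# servers in parallel, then asks document servers for snippets.
import concurrent.futures

NUM_SHARDS = 4

# Toy corpus: doc_id -> text.  Documents are sharded by doc_id % NUM_SHARDS.
DOCS = {0: "google data center", 1: "commodity pc cluster",
        2: "fault tolerant software", 3: "data center cooling"}

def index_shard_lookup(shard_id, word):
    """One index-shard 'pool': return doc_ids in this shard containing word."""
    return [d for d, text in DOCS.items()
            if d % NUM_SHARDS == shard_id and word in text]

def doc_server_lookup(doc_id, word):
    """Document server: return the doc ID plus a keyword-in-context snippet."""
    text = DOCS[doc_id]
    pos = text.find(word)
    return {"doc": doc_id, "snippet": text[max(0, pos - 10):pos + len(word) + 10]}

def google_web_server(word):
    """Front-end (GWS): fan the query out to all shards, then assemble."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        shard_hits = list(pool.map(index_shard_lookup, range(NUM_SHARDS),
                                   [word] * NUM_SHARDS))
    doc_ids = [d for hits in shard_hits for d in hits]
    return [doc_server_lookup(d, word) for d in doc_ids]

print(google_web_server("data"))   # -> snippets from docs 0 and 3
```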
Several CPU generations were in use, ranging from single-processor 533 MHz Intel Celeron servers to dual 1.4 GHz Intel Pentium III machines. Each server contained one or more 80 GB hard drives; index servers had less disk space than document servers. Each rack had two Ethernet switches, one per side, and the servers on each side interconnected via 100 Mbps Ethernet. Each switch had a ~250 MB/s uplink to a central switch that connected all racks.
The design objectives include:
Due to the massive parallelism, adding hardware scales throughput linearly: doubling the compute cluster doubles the number of queries that can be served per second.
The cluster is built from server racks in two configurations: 40 × 1U servers per side (two sides), or 20 × 2U servers per side (two sides). Each rack consumes about 10 kW at a power density of about 400 W/ft², amounting to roughly 10 MWh per month and costing about $1,500 per rack per month.
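The quoted monthly cost is consistent with an electricity price of roughly $0.15 per kilowatt-hour (the price itself is an assumption, not given above):

```latex
10~\text{MWh} = 10{,}000~\text{kWh}, \qquad
10{,}000~\text{kWh} \times \$0.15/\text{kWh} = \$1{,}500~\text{per rack per month}
```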
As of 2014, Google used a heavily customized version of Debian Linux, having migrated incrementally from a Red Hat-based system in 2013.[79]
The hardware purchasing goal is to buy CPU generations that offer the best performance per dollar, not the best absolute performance. How this is measured is unclear, but it likely incorporates the running costs of the entire server, with CPU power consumption a significant factor.[80] Servers as of 2009–2010 consisted of custom-made open-top systems containing two processors (each with several cores[81]), a considerable amount of RAM spread over 8 DIMM slots housing double-height DIMMs, and at least two SATA hard disk drives connected through a non-standard ATX-sized power supply unit.[82] The servers were open top so more servers could fit into a rack. According to CNET and a book by John Hennessy, each server had a novel 12-volt battery to reduce costs and improve power efficiency.[81][83]
According to Google, their global data center operation electrical power ranges between 500 and 681 megawatts.[84][85] The combined processing power of these servers might have reached from 20 to 100 petaflops in 2008.[86]
Details of the Google worldwide private networks are not publicly available, but Google publications[87][88] make references to the "Atlas Top 10" report that ranks Google as the third largest ISP behind Level 3.
In order to run such a large network, with direct connections to as many ISPs as possible at the lowest possible cost, Google has a very open peering policy.[89]
According to its peering page, the Google network can be reached from 67 public exchange points and 69 different locations across the world. As of May 2012, Google had 882 Gbit/s of public connectivity (not counting private peering agreements with the largest ISPs). This public network is used both to distribute content to Google users and to crawl the Internet to build its search indexes. The private side of the network is secret, but a disclosure from Google[90] indicates that it uses custom-built high-radix switch-routers (with a capacity of 128 × 10 Gigabit Ethernet ports) for the wide area network. With no fewer than two routers per data center (for redundancy), the Google network scales into the terabit-per-second range; two fully loaded routers provide a bi-sectional bandwidth of 1,280 Gbit/s.
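The terabit-scale estimate follows directly from the port count of the disclosed switch-routers:

```latex
128~\text{ports} \times 10~\text{Gbit/s} = 1{,}280~\text{Gbit/s of aggregate port capacity per router}
```

which matches the 1,280 Gbit/s bi-sectional bandwidth quoted above.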
These custom switch-routers are connected to DWDM devices to interconnect data centers and points of presence (PoPs) via dark fiber.
At the data center level, the network starts at the rack: custom-made 19-inch racks contain 40 to 80 servers (20 to 40 1U servers on each side; newer servers are 2U rackmount systems[91]), and each rack has an Ethernet switch. Servers connect via 1 Gbit/s Ethernet links to the top-of-rack switch (TOR). TOR switches are then connected to a cluster switch using multiple gigabit or ten-gigabit uplinks.[92] The cluster switches themselves are interconnected and form the data center interconnect fabric (most likely using a dragonfly design rather than a classic butterfly or flattened-butterfly layout[93]).
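As an illustration of the rack-level bandwidth budget (the uplink count here is assumed for the example, not taken from the sources):

```latex
40~\text{servers} \times 1~\text{Gbit/s} = 40~\text{Gbit/s toward the TOR switch}, \qquad
4 \times 10~\text{Gbit/s uplinks} = 40~\text{Gbit/s} \;\Rightarrow\; \text{oversubscription} \approx 1{:}1
```

Fewer or slower uplinks raise the oversubscription ratio, which is the usual trade-off in such fabrics.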
Operationally, when a client computer attempts to connect to Google, DNS servers resolve www.google.com to multiple IP addresses in a round-robin fashion. This acts as the first level of load balancing, directing clients to different Google clusters. Each cluster contains thousands of servers; once the client has connected, additional load balancing sends the query to the least-loaded web server. This makes Google one of the largest and most complex content delivery networks.[94]
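This first, DNS-based level of load balancing is visible from any client. The short sketch below simply prints the set of IPv4 addresses a resolver currently returns for www.google.com (the set and its order vary by location and over time):

```python
import socket

# Resolve www.google.com and print each IPv4 address returned.  The set of
# addresses (and their order) varies by resolver, location, and time, which
# is what spreads clients across different Google clusters.
addresses = {info[4][0]
             for info in socket.getaddrinfo("www.google.com", 443,
                                            family=socket.AF_INET,
                                            type=socket.SOCK_STREAM)}
for ip in sorted(addresses):
    print(ip)
```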
Google has numerous data centers scattered around the world. At least 12 significant Google data center installations are located in the United States. The largest known centers are located in The Dalles, Oregon; Atlanta, Georgia; Reston, Virginia; Lenoir, North Carolina; and Moncks Corner, South Carolina.[95] In Europe, the largest known centers are in Eemshaven and Groningen in the Netherlands and Mons, Belgium.[95] Google's Oceania Data Center is located in Sydney, Australia.[96]
To support fault tolerance, increase the scale of data centers and accommodate low-radix switches, Google has adopted various modified Clos topologies in the past.[97]
One of the largest Google data centers is located in the town of The Dalles, Oregon, on the Columbia River, approximately 80 miles (129 km) from Portland. Codenamed "Project 02", the complex was built in 2006 and is approximately the size of two American football fields, with cooling towers four stories high.[98][99] The site was chosen to take advantage of inexpensive hydroelectric power, and to tap into the region's large surplus of fiber optic cable, a remnant of the dot-com boom. A blueprint of the site appeared in 2008.[100]
In February 2009, Stora Enso announced that it had sold the Summa paper mill in Hamina, Finland, to Google for €40 million.[101][102] Google invested €200 million in the site to build a data center and announced an additional €150 million investment in 2012.[103][104] Google chose this location due to the availability and proximity of renewable energy sources.[105]
In 2013, the press revealed the existence of Google's floating data centers along the coasts of the states of California (Treasure Island's Building 3) and Maine. The development project was maintained under tight secrecy. The data centers are 250 feet long, 72 feet wide, 16 feet deep. The patent for an in-ocean data center cooling technology was bought by Google in 2009[106][107] (along with a wave-powered ship-based data center patent in 2008[108][109]). Shortly thereafter, Google declared that the two massive and secretly-built infrastructures were merely "interactive learning centers, [...] a space where people can learn about new technology."[110]
Google halted work on the barges in late 2013 and began selling off the barges in 2014.[111][112]
Most of the software stack that Google uses on their servers was developed in-house.[113] According to a well-known former Google employee in 2006, C++, Java, Python and (more recently) Go are favored over other programming languages.[114] For example, the back end of Gmail is written in Java and the back end of Google Search is written in C++.[115] Google has acknowledged that Python has played an important role from the beginning, and that it continues to do so as the system grows and evolves.[116]
The software that runs the Google infrastructure includes:[117]
Google has developed several abstractions which it uses for storing most of its data:[125]
Most operations are read-only. When an update is required, queries are redirected to other servers to simplify consistency issues. Queries are divided into sub-queries, which may be sent to different servers in parallel, reducing latency.[91]
To lessen the effects of unavoidable hardware failure, software is designed to be fault tolerant. Thus, when a system goes down, data is still available on other servers, which increases reliability.
Like most search engines, Google indexes documents by building a data structure known as an inverted index, which maps each query word to a list of the documents containing it. The index is very large because of the number of documents stored on the servers.[94]
The index is partitioned by document ID into many pieces called shards, and each shard is replicated onto multiple servers. Initially, the index was served from hard disk drives, as in traditional information retrieval (IR) systems. Google handled increasing query volume by increasing the number of replicas of each shard, and thus the number of servers. Eventually there were enough servers to keep a copy of the whole index in main memory (although with low replication or no replication at all), and in early 2001 Google switched to an in-memory index system. This switch "radically changed many design parameters" of the search system, allowing a significant increase in throughput and a large decrease in query latency.[130]
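A toy version of such a document-sharded inverted index is sketched below (purely illustrative; the corpus and shard count are invented, and real shards hold many millions of documents, replicated across servers and, after 2001, held in main memory):

```python
# Build a tiny inverted index and partition it into shards by document ID,
# as described above.
from collections import defaultdict

NUM_SHARDS = 3

docs = {
    0: "google serves search queries from large clusters",
    1: "the index is partitioned into shards by document id",
    2: "each shard is replicated onto multiple servers",
    3: "the whole index can be kept in main memory",
}

# shard_id -> {word -> list of doc_ids (a posting list)}
shards = [defaultdict(list) for _ in range(NUM_SHARDS)]
for doc_id, text in docs.items():
    for word in set(text.split()):
        shards[doc_id % NUM_SHARDS][word].append(doc_id)

def search(word):
    """Query every shard and merge the posting lists."""
    return sorted(d for shard in shards for d in shard.get(word, []))

print(search("index"))   # -> [1, 3]
print(search("shard"))   # -> [2]
```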
In June 2010, Google rolled out a next-generation indexing and serving system called "Caffeine", which can continuously crawl and update the search index. Previously, Google updated its search index in batches using a series of MapReduce jobs. The index was separated into several layers, some of which were updated faster than others, and the main layer would not be updated for as long as two weeks. With Caffeine, the entire index is updated incrementally on a continuous basis. Google later revealed a distributed data processing system called "Percolator",[131] said to be the basis of the Caffeine indexing system.[123][132]
Google's server infrastructure is divided into several types, each assigned to a different purpose:[91][94][133][134][135]
There are also "canary requests", whereby a request is first sent to one or two leaf servers to see if the response time is reasonable. If not, then the request fails. This provides security.[136]
External video: Google Data Center Security: 6 Layers Deep
In October 2013, The Washington Post reported that the U.S. National Security Agency intercepted communications between Google's data centers, as part of a program named MUSCULAR.[137][138] This wiretapping was made possible because, at the time, Google did not encrypt data passed inside its own network.[139] This was rectified when Google began encrypting data sent between data centers in 2013.[140]
Google's most efficient data center runs at 35 °C (95 °F) using only fresh air cooling, requiring no electrically powered air conditioning.[141]
In December 2016, Google announced that—starting in 2017—it would purchase enough renewable energy to match 100% of the energy usage of its data centers and offices. The commitment will make Google "the world's largest corporate buyer of renewable power, with commitments reaching 2.6 gigawatts (2,600 megawatts) of wind and solar energy".[142][143][144]