Density is a measure of the quantity of information bits that can be stored in a given physical space of a computer storage medium. There are three types of density: linear density (bits per unit length of track), areal density (bits per unit of surface area), and volumetric density (bits per unit of volume).
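As a simple illustration of how the three measures relate, the following sketch multiplies a linear density by a track density and a stacking density; all of the figures are hypothetical and do not describe any particular product.

```python
# Illustrative relationship between the three density measures.
# All figures are hypothetical; they are chosen only to show the arithmetic.

linear_density = 1_000_000      # bits along one inch of a single track (bit/in)
tracks_per_inch = 100_000       # tracks packed side by side per inch
surfaces_per_inch = 100         # recording surfaces stacked per inch of height

areal_density = linear_density * tracks_per_inch        # bit/in^2
volumetric_density = areal_density * surfaces_per_inch  # bit/in^3

print(f"Areal density:      {areal_density:.1e} bit/in^2")
print(f"Volumetric density: {volumetric_density:.1e} bit/in^3")
```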
Higher density is generally more desirable, as it allows more data to be stored in the same physical space. Density therefore has a direct relationship to the storage capacity of a given medium. Density also generally affects the performance within a particular medium, as well as its price.
Solid-state drives use flash memory to store data in non-volatile memory. They are the latest form of mass-produced storage and rival magnetic disk media. Solid-state data is saved to a pool of NAND flash, which is itself made up of floating-gate transistors. Unlike the transistor designs used in DRAM, which must be refreshed many times per second, NAND flash is designed to retain its charge state even when not powered. The highest-capacity drives commercially available are the Nimbus Data ExaDrive DC series, which come in capacities ranging from 16 TB to 100 TB. Nimbus states that, for its size, the 100 TB SSD offers a 6:1 space saving over a nearline HDD.[1]
Hard disk drives store data in the magnetic polarization of small patches of the surface coating on a disk. The maximum areal density is defined by the size of the magnetic particles in the surface, as well as the size of the "head" used to read and write the data. In 1956 the first hard drive, the IBM 350, had an areal density of 2,000 bit/in². Since then, the increase in density has matched Moore's Law, reaching 1 Tbit/in² in 2014.[2] In 2015, Seagate introduced a hard drive with a density of 1.34 Tbit/in²,[3] more than 600 million times that of the IBM 350. It is expected that current recording technology can "feasibly" scale to at least 5 Tbit/in² in the near future.[3][4] New technologies such as heat-assisted magnetic recording (HAMR) and microwave-assisted magnetic recording (MAMR) are under development and are expected to allow increases in magnetic areal density to continue.[5]
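The growth factor quoted above can be checked directly from the two densities; a minimal back-of-the-envelope sketch:

```python
# Rough check of the quoted density increase from the IBM 350 (1956)
# to Seagate's 2015 drive.
ibm_350 = 2_000          # bit/in^2
seagate_2015 = 1.34e12   # bit/in^2 (1.34 Tbit/in^2)

print(f"Increase: about {seagate_2015 / ibm_350:,.0f} times")  # roughly 670 million
```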
Optical discs store data in small pits in a plastic surface that is then covered with a thin layer of reflective metal. Compact discs (CDs) offer a density of about 0.90 Gbit/in², using pits that are 0.83 micrometers long and 0.5 micrometers wide, arranged in tracks spaced 1.6 micrometers apart. A DVD is essentially a higher-density CD, using more of the disc surface, smaller pits (0.64 micrometers), and tighter tracks (0.74 micrometers), offering a density of about 2.2 Gbit/in². Single-layer HD DVD and Blu-ray discs offer densities of around 7.5 Gbit/in² and 12.5 Gbit/in², respectively.
When introduced in 1982, CDs had considerably higher densities than hard disk drives, but hard disk drives have since advanced much more quickly and have eclipsed optical media in both areal density and capacity per device.
The first magnetic tape drive, the Univac Uniservo, recorded at a density of 128 bit/in on half-inch magnetic tape, giving an areal density of 256 bit/in².[6] In 2015, IBM and Fujifilm claimed a new record for magnetic tape areal density of 123 Gbit/in²,[7] while LTO-6, the highest-density production tape shipping in 2015, provided an areal density of 0.84 Gbit/in².[8]
A number of technologies are attempting to surpass the densities of existing media.
IBM aimed to commercialize its Millipede memory system at 1 Tbit/in² in 2007, but development appears to be moribund. A newer IBM technology, racetrack memory, uses an array of nanoscopic wires arranged in 3D, each holding numerous bits to improve density.[9] Although exact figures have not been published, IBM news articles speak of "100 times" increases.
Holographic storage technologies are also attempting to leapfrog existing systems, but they too have been losing the race, and are estimated to offer about 1 Tbit/in² as well, with roughly 250 GB/in² being the best demonstrated to date for non-quantum holographic systems.
Other experimental technologies offer even higher densities. Molecular polymer storage has been shown to store 10 Tbit/in².[10] By far the densest type of memory storage demonstrated experimentally to date is electronic quantum holography. By superimposing images of different wavelengths onto the same hologram, a Stanford research team achieved in 2009 a bit density of 35 bit/electron (approximately 3 exabytes/in²) using electron microscopes and a copper medium.[11]
In 2012, DNA was successfully used as an experimental data storage medium, but it required a DNA synthesizer and DNA microchips for the transcoding. As of 2012, DNA holds the record for the highest-density storage medium.[12] In March 2017, scientists at Columbia University and the New York Genome Center published a method known as DNA Fountain that allows perfect retrieval of information from a density of 215 petabytes per gram of DNA, 85% of the theoretical limit.[13][14]
With the notable exception of NAND Flash memory, increasing storage density of a medium typically improves the transfer speed at which that medium can operate. This is most obvious when considering various disk-based media, where the storage elements are spread over the surface of the disk and must be physically rotated under the "head" in order to be read or written. Higher density means more data moves under the head for any given mechanical movement.
For example, we can calculate the effective transfer speed of a floppy disk by determining how fast the bits move under the head. A standard 3½-inch floppy disk spins at 300 rpm, and the innermost track is about 66 mm long (10.5 mm radius). At 300 rpm the linear speed of the media under the head is thus about 66 mm × 300 rpm = 19,800 mm/minute, or 330 mm/s. Along that track the bits are stored at a density of 686 bit/mm, which means that the head sees 686 bit/mm × 330 mm/s = 226,380 bit/s (about 28.3 kB/s).
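The same arithmetic can be written out explicitly; a small sketch that reproduces the figures above (minor differences arise from rounding):

```python
import math

# Effective transfer rate of a standard 3.5-inch floppy disk, following the
# figures in the text above.
rpm = 300
inner_track_radius_mm = 10.5
linear_density_bits_per_mm = 686

track_length_mm = 2 * math.pi * inner_track_radius_mm   # about 66 mm
linear_speed_mm_per_s = track_length_mm * rpm / 60       # about 330 mm/s
transfer_rate_bit_per_s = linear_density_bits_per_mm * linear_speed_mm_per_s

print(f"Track length:  {track_length_mm:.1f} mm")
print(f"Linear speed:  {linear_speed_mm_per_s:.0f} mm/s")
print(f"Transfer rate: {transfer_rate_bit_per_s:,.0f} bit/s "
      f"({transfer_rate_bit_per_s / 8 / 1000:.1f} kB/s)")

# Doubling the linear density at the same rotation rate doubles the number of
# bits passing under the head per second, and hence the transfer rate:
print(f"At double density: {2 * transfer_rate_bit_per_s:,.0f} bit/s")
```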
Now consider an improvement to the design that doubles the density of the bits by halving their length along the track while keeping the same track spacing. This would double the transfer speed, because twice as many bits would pass under the head in the same time. Early floppy disk interfaces were designed for 250 kbit/s transfer speeds, but were rapidly outperformed with the introduction of the "high density" 1.44 MB (1,440 KB) floppies in the 1980s. The vast majority of PCs included interfaces designed for high-density drives that ran at 500 kbit/s instead. These, too, were completely overwhelmed by newer devices such as the LS-120, which were forced to use higher-speed interfaces such as IDE.
Although the effect on performance is most obvious on rotating media, similar effects come into play even for solid-state media like Flash RAM or DRAM. In this case the performance is generally defined by the time it takes for the electrical signals to travel through the computer bus to the chips, and then through the chips to the individual "cells" used to store data (each cell holds one bit).
One defining electrical property is the resistance of the wires inside the chips. As cell size decreases, through the improvements in semiconductor fabrication described by Moore's Law, resistance is reduced and less power is needed to operate the cells. This, in turn, means that less electric current is needed for operation, and thus less time is needed to deliver the required amount of electrical charge into the system. In DRAM in particular, the amount of charge that needs to be stored in a cell's capacitor also directly affects this time.
As fabrication has improved, solid-state memory has improved dramatically in terms of performance. Modern DRAM chips have operational speeds on the order of 10 ns or less. A less obvious effect is that as density improves, the number of DIMMs needed to supply any particular amount of memory decreases, which in turn means fewer DIMMs overall in any particular computer. This often leads to improved performance as well, as there is less bus traffic. However, this effect is generally not linear.
Storage density also has a strong effect on the price of memory, although in this case, the reasons are not so obvious.
In the case of disk-based media, the primary cost is the moving parts inside the drive. This sets a fixed lower limit, which is why the average selling price for both of the major HDD manufacturers has been US$45–75 since 2007.[15] That said, the price of high-capacity drives has fallen rapidly, and this is indeed an effect of density. The highest-capacity drives use several platters, each essentially an individual hard disk within the case. As density increases, the number of platters can be reduced, leading to lower costs.
Hard drives are often measured in terms of cost per bit. For example, the first commercial hard drive, IBM's RAMAC in 1957, supplied 3.75 MB for $34,500, or $9,200 per megabyte. In 1989, a 40 MB hard drive cost $1,200, or $30/MB. And in 2018, 4 TB drives sold for $75, or 1.9¢/GB, an improvement of about 1.5 million times since 1989 and nearly 500 million times since the RAMAC. This is without adjusting for inflation, which increased prices about nine-fold from 1956 to 2018. The arithmetic behind these figures is sketched after the table below.
Date | Capacity | Cost | Cost per GB |
---|---|---|---|
1957 | 3.75 MB | $34,500 | $9.2 million/GB |
1989 | 40 MB | $1,200 | $30,000/GB |
1995 | 1 GB | $850 | $850/GB |
2004 | 250 GB | $250 | $1/GB |
2011 | 2 TB | $70 | $0.035/GB |
2018 | 4 TB | $75 | $0.019/GB |
2023 | 8 TB | $175 | $0.022/GB |
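As an illustration, the cost-per-gigabyte figures in the table and the improvement factors quoted above can be reproduced from the listed prices and capacities; this sketch assumes decimal units (1 GB = 10⁹ bytes), as is conventional for drive capacities.

```python
# Cost-per-gigabyte arithmetic behind the table above.
# Capacities are in decimal gigabytes (1 GB = 1e9 bytes).
drives = [
    ("1957 IBM RAMAC", 3.75e-3, 34_500),   # 3.75 MB for $34,500
    ("1989 40 MB drive", 40e-3, 1_200),
    ("2018 4 TB drive", 4_000, 75),
]

costs = {}
for name, capacity_gb, price_usd in drives:
    costs[name] = price_usd / capacity_gb
    print(f"{name}: ${costs[name]:,.3f}/GB")

# Improvement factors quoted in the text (about 1.5 million and nearly 500 million):
print(f"{costs['1989 40 MB drive'] / costs['2018 4 TB drive']:,.0f}")
print(f"{costs['1957 IBM RAMAC'] / costs['2018 4 TB drive']:,.0f}")
```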
Solid-state storage has seen a similar drop in cost per bit. In this case the cost is determined largely by the yield, the number of working chips produced from each wafer. Chips are produced in batches printed on the surface of a single large silicon wafer, which is cut up, and the non-working samples are discarded. Fabrication has improved yields over time by using larger wafers and by producing wafers with fewer failures. The lower limit on this process is about $1 per completed chip, due to packaging and other costs.[16]
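A minimal sketch of the yield arithmetic described above, using purely hypothetical numbers for wafer cost, die count, yield, and packaging:

```python
# Illustrative cost-per-working-chip calculation. All numbers are hypothetical.
wafer_cost_usd = 5_000        # cost of one processed wafer
dies_per_wafer = 2_000        # candidate chips printed on the wafer
yield_fraction = 0.85         # fraction that work after test
packaging_cost_usd = 0.30     # per-chip packaging, test, and handling

good_dies = dies_per_wafer * yield_fraction
cost_per_chip = wafer_cost_usd / good_dies + packaging_cost_usd
print(f"Cost per working chip: ${cost_per_chip:.2f}")
```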
The relationship between information density and cost per bit can be illustrated as follows: a memory chip that is half the physical size means that twice as many units can be produced on the same wafer, thus roughly halving the price of each one. As a comparison, DRAM was first introduced commercially in 1971, as a 1 kbit part that cost about $50 in large batches, or about 5 cents per bit. 64 Mbit parts were common in 1999, and these cost about 0.00002 cents per bit (20 microcents per bit).[16]
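The per-bit figures quoted above can be reproduced directly; in this sketch the 1971 part is assumed to hold 1,024 bits, and the 1999 part price is derived from the quoted 20 microcents per bit rather than taken from the source.

```python
# Reproducing the per-bit cost figures quoted above.
price_1971_usd = 50
bits_1971 = 1024                           # 1 kbit part (assumed 1,024 bits)
cents_per_bit_1971 = price_1971_usd * 100 / bits_1971
print(f"1971: about {cents_per_bit_1971:.1f} cents per bit")   # "about 5 cents"

cents_per_bit_1999 = 0.00002               # 20 microcents per bit, as quoted
improvement = cents_per_bit_1971 / cents_per_bit_1999
print(f"Improvement 1971 -> 1999: about {improvement:,.0f} times")

# The quoted per-bit cost implies a 64 Mbit part price of roughly:
implied_price_usd = cents_per_bit_1999 * 64 * 2**20 / 100
print(f"Implied 64 Mbit part price: about ${implied_price_usd:.0f}")
```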