Units of information

Unit of measure for digital data

A unit of information is any unit of measure of digital data size. In digital computing, a unit of information is used to describe the capacity of a digital data storage device. In telecommunications, a unit of information is used to describe the throughput of a communication channel. In information theory, a unit of information is used to measure information contained in messages and the entropy of random variables.

Due to the need to work with data sizes that range from very small to very large, units of information cover a wide range of data sizes. Units are defined as multiples of a smaller unit except for the smallest unit which is based on convention and hardware design. Multiplier prefixes are used to describe relatively large sizes.

For binary hardware, by far the most common hardware today, the smallest unit is the bit, a portmanteau of binary digit,[1] which represents a value that is one of two possible values, typically shown as 0 and 1. The nibble, 4 bits, represents the value of a single hexadecimal digit. The byte, 8 bits or 2 nibbles, is probably the most widely known and used base unit for describing data size. The word is a size that varies with, and has special importance for, a particular hardware context. On modern hardware, a word is typically 2, 4 or 8 bytes, but the size varies dramatically on older hardware. Larger sizes can be expressed as multiples of a base unit via SI metric prefixes (powers of ten) or the newer and generally more accurate IEC binary prefixes (powers of two).

Information theory

Comparison of units of information: bit, trit, nat, ban. The quantity of information is the height of each bar; the dark green level marks the nat unit.

In 1928, Ralph Hartley observed a fundamental storage principle,[2] which was further formalized by Claude Shannon in 1945: the information that can be stored in a system is proportional to the logarithm of the number N of possible states of that system, denoted log_b N. Changing the base of the logarithm from b to a different number c multiplies the value of the logarithm by a fixed constant: log_c N = (log_c b) log_b N. Therefore, the choice of the base b determines the unit used to measure information. In particular, if b is a positive integer, then the unit is the amount of information that can be stored in a system with b possible states.

When b is 2, the unit is the shannon, equal to the information content of one "bit". A system with 8 possible states, for example, can store up to log_2 8 = 3 bits of information. Other units that have been named include:

Base b = 3
the unit is called the trit, and is equal to log_2 3 (≈ 1.585) bits.[3]
Base b = 10
the unit is called the decimal digit, hartley, ban, decit, or dit, and is equal to log_2 10 (≈ 3.322) bits.[2][4][5][6]
Base b = e, the base of natural logarithms
the unit is called the nat, nit, or nepit (from Neperian), and is equal to log_2 e (≈ 1.443) bits.[2]

The trit, ban, and nat are rarely used to measure storage capacity; but the nat, in particular, is often used in information theory, because natural logarithms are mathematically more convenient than logarithms in other bases.
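These conversions follow directly from the change-of-base identity above. A minimal Python sketch (the helper name `information` is illustrative, not from the article):

```python
import math

def information(states: int, base: float = 2) -> float:
    """Information stored by a system with `states` equally likely states,
    in the unit defined by `base`: 2 = bits (shannons), 3 = trits,
    math.e = nats, 10 = hartleys."""
    return math.log(states, base)

print(information(8, 2))     # a system with 8 states stores 3 bits
print(math.log2(3))          # one trit ≈ 1.585 bits
print(math.log2(10))         # one hartley ≈ 3.322 bits
print(math.log2(math.e))     # one nat ≈ 1.443 bits
```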

Units derived from bit

Several conventional names are used for collections or groups of bits.

Byte

Historically, a byte was the number of bits used to encode a character of text in the computer, which depended on computer hardware architecture, but today it almost always means eight bits – that is, an octet. An 8-bit byte can represent 256 (2^8) distinct values, such as non-negative integers from 0 to 255, or signed integers from −128 to 127. The IEEE 1541-2002 standard specifies "B" (upper case) as the symbol for byte (IEC 80000-13 uses "o" for octet in French, but also allows "B" in English). Bytes, or multiples thereof, are almost always used to specify the sizes of computer files and the capacity of storage units. Most modern computers and peripheral devices are designed to manipulate data in whole bytes or groups of bytes, rather than individual bits.
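The 256-value range of a byte can be checked directly. A small Python illustration, assuming the `struct` module's one-byte format codes `B` (unsigned) and `b` (signed):

```python
import struct

# One byte distinguishes 2^8 = 256 values.
assert 2 ** 8 == 256

# Unsigned byte: 0..255; signed byte: -128..127. The same bit pattern
# 0xFF is 255 unsigned but -1 in two's-complement signed form.
assert struct.pack("B", 255) == struct.pack("b", -1) == b"\xff"

# Values outside the range do not fit in one byte:
try:
    struct.pack("B", 256)
except struct.error:
    print("256 does not fit in one unsigned byte")
```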

Nibble

A group of four bits, or half a byte, is sometimes called a nibble, nybble or nyble. This unit is most often used in the context of hexadecimal number representations, since a nibble has the same number of possible values as one hexadecimal digit has.[7]
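Since each nibble corresponds to one hexadecimal digit, a byte splits cleanly into two hex digits. A short Python sketch (the helper `nibbles` is illustrative):

```python
def nibbles(byte: int) -> tuple[int, int]:
    """Split one byte into its high and low 4-bit nibbles."""
    high = (byte >> 4) & 0xF   # upper four bits
    low = byte & 0xF           # lower four bits
    return high, low

value = 0xA7                       # 167 in decimal
assert nibbles(value) == (0xA, 0x7)
assert f"{value:02X}" == "A7"      # each hex digit is one nibble
```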

Word, block, and page

Computers usually manipulate bits in groups of a fixed size, conventionally called words. The number of bits in a word is usually defined by the size of the registers in the computer's CPU, or by the number of data bits that are fetched from its main memory in a single operation. In the IA-32 architecture, more commonly known as x86-32, a word is 32 bits, but other past and current architectures use words with 4, 8, 9, 12, 13, 16, 18, 20, 21, 22, 24, 25, 29, 30, 31, 32, 33, 35, 36, 38, 39, 40, 42, 44, 48, 50, 52, 54, 56, 60, 64, or 72 bits,[8] among others.

Some machine instructions and computer number formats use two words (a "double word" or "dword"), or four words (a "quad word" or "quad").
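On common platforms these sizes can be observed with Python's struct module. A sketch, assuming the mainstream mapping of format codes `H`, `I`, `Q` to 16-, 32-, and 64-bit integers and `P` to a native pointer:

```python
import struct

# Typical fixed word sizes on modern hardware, in bytes:
assert struct.calcsize("H") == 2   # 16-bit word
assert struct.calcsize("I") == 4   # 32-bit word (a "dword" where a word is 16 bits)
assert struct.calcsize("Q") == 8   # 64-bit "quad word"

# The native pointer size reflects the machine word of the running
# platform, typically 8 bytes on a 64-bit system:
print(struct.calcsize("P"))
```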

Computer memory caches usually operate on blocks of memory that consist of several consecutive words. These units are customarily called cache blocks, or, in CPU caches, cache lines.

Virtual memory systems partition the computer's main storage into even larger units, traditionally called pages.
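The page size of the running system is exposed in Python via `mmap.PAGESIZE`. A sketch; 4 KiB is the classic x86 value, while some ARM systems use 16 KiB:

```python
import mmap

page = mmap.PAGESIZE
print(f"page size: {page} bytes")

# Page sizes are always powers of two.
assert page >= 512 and page & (page - 1) == 0
```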

Multiplicative prefixes

A unit for a large amount of data can be formed using either a metric or binary prefix with a base unit. For storage, the base unit is typically byte. For communication throughput, a base unit of bit is common. For example, using the metric kilo prefix, a kilobyte is 1000 bytes and a kilobit is 1000 bits.

Use of metric prefixes is common, but often inaccurate, since binary storage hardware is organized with capacities that are powers of 2, not powers of 10 as the metric prefixes are. In the context of computing, the metric prefixes are often intended to mean something other than their standard meaning. For example, 'kilobyte' often refers to 1024 bytes even though the standard meaning of kilo is 1000. Similarly, 'mega' normally means one million, but in computing is often used to mean 2^20 = 1,048,576. The table below illustrates the differences between the standard metric sizes and the intended binary sizes.

Symbol  Prefix  Metric size  Binary size  Size difference
k       kilo    1000         1024          2.40%
M       mega    1000^2       1024^2        4.86%
G       giga    1000^3       1024^3        7.37%
T       tera    1000^4       1024^4        9.95%
P       peta    1000^5       1024^5       12.59%
E       exa     1000^6       1024^6       15.29%
Z       zetta   1000^7       1024^7       18.06%
Y       yotta   1000^8       1024^8       20.89%
R       ronna   1000^9       1024^9       23.79%
Q       quetta  1000^10      1024^10      26.77%

The International Electrotechnical Commission (IEC) issued a standard that introduces binary prefixes to accurately represent binary sizes without changing the meaning of the standard metric terms. Rather than being based on powers of 1000, these are based on powers of 1024, which is itself a power of 2 (2^10).[9]
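The "Size difference" column of the metric-versus-binary table above can be reproduced with a few lines of Python:

```python
prefixes = ["kilo", "mega", "giga", "tera", "peta", "exa",
            "zetta", "yotta", "ronna", "quetta"]
for n, prefix in enumerate(prefixes, start=1):
    metric, binary = 1000 ** n, 1024 ** n
    diff = (binary - metric) / metric * 100
    print(f"{prefix}: {diff:.2f}%")   # kilo: 2.40%, mega: 4.86%, ...
```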

Symbol  Prefix  Example            Size
Ki      kibi    kibibyte (KiB)     2^10 = 1024
Mi      mebi    mebibyte (MiB)     2^20 = 1024^2
Gi      gibi    gibibyte (GiB)     2^30 = 1024^3
Ti      tebi    tebibyte (TiB)     2^40 = 1024^4
Pi      pebi    pebibyte (PiB)     2^50 = 1024^5
Ei      exbi    exbibyte (EiB)     2^60 = 1024^6
Zi      zebi    zebibyte (ZiB)     2^70 = 1024^7
Yi      yobi    yobibyte (YiB)     2^80 = 1024^8
Ri      robi    robibyte (RiB)     2^90 = 1024^9
Qi      quebi   quebibyte (QiB)    2^100 = 1024^10

The JEDEC memory standard JESD88F notes that the definitions of kilo (K), giga (G), and mega (M) based on powers of two are included only to reflect common usage, but are otherwise deprecated.[10]
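A common application of these prefixes is formatting raw byte counts for display. A minimal, illustrative Python formatter (the function name is an assumption, not a standard API):

```python
def format_binary(n_bytes: int) -> str:
    """Format a byte count using IEC binary prefixes."""
    prefixes = ["", "Ki", "Mi", "Gi", "Ti", "Pi", "Ei", "Zi", "Yi", "Ri", "Qi"]
    value, i = float(n_bytes), 0
    while value >= 1024 and i < len(prefixes) - 1:
        value /= 1024
        i += 1
    return f"{value:.1f} {prefixes[i]}B"

print(format_binary(1024))        # 1.0 KiB
print(format_binary(1_500_000))   # 1.4 MiB
print(format_binary(2 ** 40))     # 1.0 TiB
```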

Size examples

  • 1 bit: Answer to a yes/no question
  • 1 byte: A number from 0 to 255
  • 90 bytes: Enough to store a typical line of text from a book
  • 512 bytes = 0.5 KiB: The typical sector size of an old style hard disk drive (modern Advanced Format sectors are 4096 bytes).
  • 1024 bytes = 1 KiB: A block size in some older UNIX filesystems
  • 2048 bytes = 2 KiB: A CD-ROM sector
  • 4096 bytes = 4 KiB: A memory page in x86 (since Intel 80386) and many other architectures, also the modern Advanced Format hard disk drive sector size.
  • 4 kB: About one page of text from a novel
  • 120 kB: The text of a typical pocket book
  • 1 MiB: A 1024×1024 pixel bitmap image with 256 colors (8 bpp color depth)
  • 3 MB: A three-minute song (133 kbit/s)
  • 650–900 MB: A CD-ROM
  • 1 GB: 114 minutes of uncompressed CD-quality audio at 1.4 Mbit/s
  • 16 GB: DDR5 DRAM laptop memory under $40 (as of early 2024)
  • 32/64/128 GB: Three common sizes of USB flash drives
  • 1 TB: The size of a $30 hard disk (as of early 2024)
  • 6 TB: The size of a $100 hard disk (as of early 2022)
  • 16 TB: The size of a small, inexpensive ($130 as of early 2024) enterprise SAS hard disk drive
  • 24 TB: The size of a $440 (as of early 2024) "video" hard disk drive
  • 32 TB: Largest hard disk drive (as of mid-2024)
  • 100 TB: Largest commercially available solid-state drive (as of mid-2024)
  • 200 TB: Largest solid-state drive constructed (predicted for mid-2022)
  • 1.6 PB (1600 TB): Amount of possible storage in one 2U server (world record as of 2021, using 100 TB solid-state drives).[11]
  • 1.3 ZB: Prediction of the volume of the whole internet in 2016

Obsolete and unusual units


Some notable unit names are today obsolete or used only in limited contexts:

  • 5 bits: pentad, pentade[23]
  • 7 bits: heptad, heptade[23]
  • 9 bits: nonet,[27] rarely used
  • 18 bits: chomp, chawmp (on a 36-bit machine)[38]
  • 256 bytes: page (on Intel 4004,[44] 8080 and 8086 processors,[42] also many other 8-bit processors – typically much larger on many 16-bit/32-bit processors)
