In computing, a normal number is a non-zero number in a floating-point representation which is within the balanced range supported by a given floating-point format: it is a floating-point number that can be represented without leading zeros in its significand.
The magnitude of the smallest normal number in a format is given by $b^{E_\text{min}}$, where $b$ is the base (radix) of the format (the common values are 2 and 10, for binary and decimal number systems) and $E_\text{min}$ depends on the size and layout of the format.
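As a concrete check (a minimal sketch, assuming CPython on a platform whose float is IEEE 754 binary64, which is nearly universal), Python's sys.float_info exposes these parameters directly:

    import sys

    # For binary64: b = 2 and E_min = -1022.
    # sys.float_info.min_exp follows C's DBL_MIN_EXP convention, which
    # places the significand in [0.5, 1), so E_min = min_exp - 1.
    b = sys.float_info.radix            # 2
    e_min = sys.float_info.min_exp - 1  # -1022

    smallest_normal = float(b) ** e_min
    print(smallest_normal)                        # 2.2250738585072014e-308
    print(smallest_normal == sys.float_info.min)  # True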
Similarly, the magnitude of the largest normal number in a format is given by $b^{E_\text{max}} \times (b - b^{1-p})$, where $p$ is the precision of the format in digits and $E_\text{max}$ is related to $E_\text{min}$ as $E_\text{max} = 1 - E_\text{min}$.
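Continuing the same binary64 sketch ($b = 2$, $p = 53$, $E_\text{max} = 1023$), the formula reproduces the largest finite float:

    import sys

    b = sys.float_info.radix            # 2
    p = sys.float_info.mant_dig         # 53
    e_max = sys.float_info.max_exp - 1  # 1023 (same [0.5, 1) convention as min_exp)

    largest_normal = (b - float(b) ** (1 - p)) * float(b) ** e_max
    print(largest_normal)                        # 1.7976931348623157e+308
    print(largest_normal == sys.float_info.max)  # True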
In the IEEE 754 binary and decimal formats, $b$, $p$, $E_\text{min}$, and $E_\text{max}$ have the following values:[1]

    Format      b    p    E_min    E_max
    binary16    2    11   −14      15
    binary32    2    24   −126     127
    binary64    2    53   −1022    1023
    binary128   2    113  −16382   16383
    decimal32   10   7    −95      96
    decimal64   10   16   −383     384
    decimal128  10   34   −6143    6144
For example, in the smallest decimal format in the table (decimal32), the range of positive normal numbers is $10^{-95}$ through $9.999999 \times 10^{96}$.
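That range calculation can be replayed with Python's decimal module (a small sketch; the endpoints are written with explicit powers of ten so every intermediate step is exact under the module's default 28-digit context):

    from decimal import Decimal

    # b**E_max * (b - b**(1 - p)) with b = 10, p = 7, E_max = 96 (decimal32).
    largest = Decimal('1E+96') * (Decimal(10) - Decimal('1E-6'))
    print(largest)   # 9.999999E+96

    # b**E_min with E_min = -95.
    smallest = Decimal('1E-95')
    print(smallest)  # 1E-95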
Non-zero numbers smaller in magnitude than the smallest normal number are called subnormal numbers (or denormal numbers).
Zero is considered neither normal nor subnormal.
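As an illustration of gradual underflow (a binary64 sketch again, assuming the runtime does not flush subnormals to zero, which CPython does not by default), dividing the smallest normal number by two yields a subnormal rather than zero:

    import sys

    smallest_normal = sys.float_info.min  # 2**-1022 == 2.2250738585072014e-308

    # Halving does not flush to zero; the result is subnormal, i.e. it is
    # stored with leading zeros in the significand and fewer significant bits.
    x = smallest_normal / 2
    print(x)                          # 1.1125369292536007e-308
    print(0.0 < x < smallest_normal)  # True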