NVM Express (NVMe) or Non-Volatile Memory Host Controller Interface Specification (NVMHCIS) is an open, logical-device interface specification for accessing a computer's non-volatile storage media usually attached via the PCI Express bus. The initial NVM stands for non-volatile memory, which is often NAND flash memory that comes in several physical form factors, including solid-state drives (SSDs), PCIe add-in cards, and M.2 cards, the successor to mSATA cards. NVM Express, as a logical-device interface, has been designed to capitalize on the low latency and internal parallelism of solid-state storage devices.[2]
Non-Volatile Memory Host Controller Interface Specification | |
---|---|
Abbreviation | NVMe |
Status | Published |
Year started | 2011 |
Latest version | 2.1 (August 5, 2024)[1] |
Organization | NVM Express, Inc. (since 2014); NVM Express Work Group (before 2014) |
Website | nvmexpress.org |
Architecturally, the logic for NVMe is stored within and executed by the NVMe controller chip, which is physically co-located with the storage media, usually an SSD. Version changes for NVMe, e.g., 1.3 to 1.4, are incorporated within the storage media and do not affect PCIe-compatible components such as motherboards and CPUs.[3]
By its design, NVM Express allows host hardware and software to fully exploit the levels of parallelism possible in modern SSDs. As a result, NVM Express reduces I/O overhead and brings various performance improvements relative to previous logical-device interfaces, including support for multiple long command queues and reduced latency. Previous interface protocols like AHCI were developed for use with far slower hard disk drives (HDDs), where a very lengthy delay (relative to CPU operations) exists between a request and data transfer, where data speeds are much slower than RAM speeds, and where disk rotation and seek time give rise to further optimization requirements.
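To make the queuing model concrete, the following C sketch outlines the general shape of one NVMe submission/completion queue pair. It is an illustrative simplification rather than the specification's literal register layout: the 64-byte submission entries, 16-byte completion entries, and per-queue doorbell writes come from the NVMe base specification, while the struct and function names are invented for the example.

```c
#include <stdint.h>

/* Illustrative sketch of one NVMe I/O queue pair. A controller can
 * expose up to 65535 such pairs, each up to 65536 entries deep, so
 * every CPU core can own its own queue and submit I/O without locks. */
struct nvme_sq_entry { uint8_t bytes[64]; };  /* 64-byte submission command */
struct nvme_cq_entry { uint8_t bytes[16]; };  /* 16-byte completion entry   */

struct nvme_queue_pair {
    struct nvme_sq_entry *sq;        /* submission ring in host memory    */
    struct nvme_cq_entry *cq;        /* completion ring in host memory    */
    uint16_t sq_tail;                /* next free submission slot         */
    uint16_t cq_head;                /* next completion to consume        */
    volatile uint32_t *sq_doorbell;  /* memory-mapped controller register */
};

/* Queue one command and ring the doorbell so the controller fetches it. */
static void submit(struct nvme_queue_pair *qp, uint16_t depth,
                   const struct nvme_sq_entry *cmd)
{
    qp->sq[qp->sq_tail] = *cmd;
    qp->sq_tail = (uint16_t)((qp->sq_tail + 1) % depth);
    *qp->sq_doorbell = qp->sq_tail;  /* single uncacheable register write */
}
```

Because each core can own a queue pair outright, commands can be issued without cross-core locking, which is the parallelism advantage quantified in the comparison with AHCI later in this article.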
NVM Express devices are chiefly available as standard-sized PCI Express expansion cards[4] and as 2.5-inch form-factor devices that provide a four-lane PCI Express interface through the U.2 connector (formerly known as SFF-8639).[5][6] Storage devices using the SATA Express and M.2 specifications that support NVM Express as the logical-device interface are a popular use case for NVMe and have become the dominant form of solid-state storage for servers, desktops, and laptops alike.[7][8]
Specifications
Specifications for NVMe released to date include:[9]
- 1.0e (January 2013)
- 1.1b (July 2014), which adds standardized command sets for better compatibility across different NVMe devices, a Management Interface that provides standardized tools for managing NVMe devices and simplifies administration, and transport specifications that define how NVMe commands are carried over various physical interfaces, enhancing interoperability.[10]
- 1.2 (November 2014)
- 1.2a (October 2015)
- 1.2b (June 2016)
- 1.2.1 (June 2016), which introduces the following new features over version 1.1b: Multi-Queue support for multiple I/O queues, enhancing data throughput and performance; Namespace Management, which allows dynamic creation, deletion, and resizing of namespaces for greater flexibility; and Endurance Management, which monitors and manages SSD wear levels to optimize performance and extend drive life.[11]
- 1.3 (May 2017)
- 1.3a (October 2017)
- 1.3b (May 2018)
- 1.3c (May 2018)
- 1.3d (March 2019), which since version 1.2.1 added Namespace Sharing, allowing multiple hosts to access a single namespace and facilitating shared storage environments; Namespace Reservation, which provides mechanisms for hosts to reserve namespaces, preventing conflicts and ensuring data integrity; and Namespace Priority, which sets priority levels for different namespaces to optimize performance for critical workloads.[12][13]
- 1.4 (June 2019)
- 1.4a (March 2020)
- 1.4b (September 2020)
- 1.4c (June 2021), which has the following new features compared to 1.3d: I/O Determinism, which ensures consistent latency and performance by isolating workloads; Namespace Write Protect, which prevents data corruption or unauthorized modification; Persistent Event Log, which stores event logs in non-volatile memory to aid diagnostics and troubleshooting; and the Verify command, which checks the integrity of data.[14][15]
- 2.0 (May 2021)[16]
- 2.0a (July 2021)
- 2.0b (January 2022)
- 2.0c (October 2022)
- 2.0d (January 2024),[17] which, compared to 1.4c, introduces Zoned Namespaces (ZNS), which organizes data into zones for efficient write operations, reducing write amplification and improving SSD longevity; Key Value (KV), which enables efficient storage and retrieval of key-value pairs directly on the NVMe device, bypassing traditional file systems; and Endurance Group Management, which manages groups of SSDs based on their endurance, optimizing usage and extending lifespan.[18][17][19]
- 2.1 (August 2024),[1] which introduces Live Migration, which maintains service availability during migration; Key Per I/O, which applies encryption keys at a per-operation level; NVMe-MI High Availability Out-of-Band Management, for managing NVMe devices outside of regular data paths; and NVMe Network Boot / UEFI, for booting NVMe devices over a network.[20]
Background
Historically, most SSDs used buses such as SATA, SAS, or Fibre Channel for interfacing with the rest of a computer system. Since SSDs became available in mass markets, SATA has become the most typical way of connecting SSDs in personal computers; however, SATA was designed primarily for interfacing with mechanical hard disk drives (HDDs), and it became increasingly inadequate for SSDs, which improved in speed over time.[21] For example, within about five years of mass-market adoption (2005–2010), many SSDs were already held back by the comparatively slow data rates available for hard drives; unlike hard disk drives, some SSDs are limited by the maximum throughput of SATA.
High-end SSDs had been made using the PCI Express bus before NVMe, but using non-standard specification interfaces, or by emulating a hardware RAID controller.[22] By standardizing the interface of SSDs, operating systems only need one common device driver to work with all SSDs adhering to the specification. It also means that each SSD manufacturer does not have to design specific interface drivers. This is similar to how USB mass storage devices are built to follow the USB mass-storage device class specification and work with all computers, with no per-device drivers needed.[23]
NVM Express devices are also used as the building blocks of burst buffer storage in many leading supercomputers, such as the Fugaku, Summit, and Sierra supercomputers.[24][25]
History
The first details of a new standard for accessing non-volatile memory emerged at the Intel Developer Forum 2007, when NVMHCI was shown as the host-side protocol of a proposed architectural design that had the Open NAND Flash Interface Working Group (ONFI) on the memory (flash) chip side.[26] An NVMHCI working group led by Intel was formed that year. The NVMHCI 1.0 specification was completed in April 2008 and released on Intel's website.[27][28][29]
Technical work on NVMe began in the second half of 2009.[30] The NVMe specifications were developed by the NVM Express Workgroup, which consists of more than 90 companies; Amber Huffman of Intel was the working group's chair. Version 1.0 of the specification was released on 1 March 2011,[31] while version 1.1 of the specification was released on 11 October 2012.[32] Major features added in version 1.1 are multi-path I/O (with namespace sharing) and arbitrary-length scatter-gather I/O. It is expected that future revisions will significantly enhance namespace management.[30] Because of its feature focus, NVMe 1.1 was initially called "Enterprise NVMHCI".[33] An update for the base NVMe specification, called version 1.0e, was released in January 2013.[34] In June 2011, a Promoter Group led by seven companies was formed.
The first commercially available NVMe chipsets were released by Integrated Device Technology (89HF16P04AG3 and 89HF32P08AG3) in August 2012.[35][36] The first NVMe drive, Samsung's XS1715 enterprise drive, was announced in July 2013; according to Samsung, this drive supported 3 GB/s read speeds, six times faster than their previous enterprise offerings.[37] The LSI SandForce SF3700 controller family, released in November 2013, also supports NVMe.[38][39] A Kingston HyperX "prosumer" product using this controller was showcased at the Consumer Electronics Show 2014 and promised similar performance.[40][41] In June 2014, Intel announced their first NVM Express products, the Intel SSD data center family that interfaces with the host through the PCI Express bus, which includes the DC P3700 series, the DC P3600 series, and the DC P3500 series.[42] As of November 2014, NVMe drives are commercially available.
In March 2014, the group incorporated to become NVM Express, Inc., which as of November 2014 consists of more than 65 companies from across the industry. NVM Express specifications are owned and maintained by NVM Express, Inc., which also promotes industry awareness of NVM Express as an industry-wide standard. NVM Express, Inc. is directed by a thirteen-member board of directors selected from the Promoter Group, which includes Cisco, Dell, EMC, HGST, Intel, Micron, Microsoft, NetApp, Oracle, PMC, Samsung, SanDisk and Seagate.[43]
In September 2016, the CompactFlash Association announced that it would be releasing a new memory card specification, CFexpress, which uses NVMe.[citation needed]
The NVMe Host Memory Buffer (HMB) feature was added in version 1.2 of the NVMe specification.[44] HMB allows an SSD to use part of the host's DRAM, which can improve I/O performance for DRAM-less SSDs.[45] For example, the SSD controller can use the HMB to cache the FTL (flash translation layer) table, improving I/O performance.[46] NVMe 2.0 added the optional Zoned Namespaces (ZNS) and Key-Value (KV) features, as well as support for rotating media such as hard drives. ZNS and KV allow data to be mapped directly to its physical location in flash memory, so that data on an SSD can be accessed directly.[47] ZNS and KV can also decrease the write amplification of flash media.
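How much host memory a drive would like for its HMB is advertised in its Identify Controller data: per the NVMe 1.2 and later specifications, the preferred size (HMPRE) and minimum size (HMMIN) are 32-bit little-endian fields at byte offsets 272 and 276, counted in 4 KiB units. The C sketch below, with invented function names, parses those two fields from an identify buffer assumed to have been fetched already (for example, with the passthrough ioctl shown under "Management tools" later in this article).

```c
#include <stdint.h>
#include <stdio.h>

/* Read a 32-bit little-endian value from a byte buffer. */
static uint32_t le32(const uint8_t *p)
{
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

/* Parse the Host Memory Buffer fields from a 4096-byte Identify
 * Controller data structure (NVMe 1.2+): HMPRE at offset 272,
 * HMMIN at offset 276, both in units of 4 KiB. */
void print_hmb_info(const uint8_t identify_data[4096])
{
    uint32_t hmpre = le32(identify_data + 272); /* preferred, 4 KiB units */
    uint32_t hmmin = le32(identify_data + 276); /* minimum,   4 KiB units */

    if (hmpre == 0) {
        printf("Drive does not request a host memory buffer\n");
        return;
    }
    printf("HMB preferred: %u KiB, minimum: %u KiB\n",
           hmpre * 4, hmmin * 4);
}
```

A drive reporting HMPRE of zero does not use the feature; the host operating system decides how much, if any, of the requested memory to grant.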
Form factors
NVMe solid-state drives come in many form factors, such as AIC (add-in card), U.2, U.3, and M.2.
AIC (add-in card)
Almost all early NVMe solid-state drives were HHHL (half height, half length) or FHHL (full height, half length) add-in cards with a PCIe 2.0 or 3.0 interface. An HHHL NVMe solid-state drive card is easy to insert into a PCIe slot of a server.
SATA Express, U.2 and U.3 (SFF-8639)
SATA Express allows the use of two PCI Express 2.0 or 3.0 lanes and two SATA 3.0 (6 Gbit/s) ports through the same host-side SATA Express connector (but not both at the same time). SATA Express supports NVMe as the logical device interface for attached PCI Express storage devices. It is electrically compatible with MultiLink SAS, so a backplane can support both at the same time.
U.2, formerly known as SFF-8639, uses the same physical port as SATA Express but allows up to four PCI Express lanes. Servers are available that combine up to 48 U.2 NVMe solid-state drives.[48]
U.3 (SFF-TA-1001) is built on the U.2 spec and uses the same SFF-8639 connector. Unlike with U.2, where separate controllers are needed for SATA/SAS and NVMe, a single U.3 "tri-mode" (PCIe/SATA/SAS) backplane receptacle can handle all three types of connections, with the controller automatically detecting the type of connection used. U.3 devices are required to be backwards-compatible with U.2 hosts, and U.2 devices can be used with U.3 hosts.[49]
M.2
M.2, formerly known as the Next Generation Form Factor (NGFF), is a compact form factor widely used for NVMe solid-state drives. Interfaces provided through the M.2 connector include PCI Express 3.0 or higher (up to four lanes).
EDSFF
EDSFF (Enterprise and Datacenter Standard Form Factor) is a family of form factors for NVMe SSDs in servers and data centers, including the "ruler" style E1.S and E1.L and the E3 variants.
NVMe-oF
NVM Express over Fabrics (NVMe-oF) is the concept of using a transport protocol over a network to connect remote NVMe devices, in contrast to regular NVMe, where physical NVMe devices are connected to a PCIe bus either directly or through a PCIe switch. In August 2017, a standard for using NVMe over Fibre Channel (FC) was submitted by the standards organization InterNational Committee for Information Technology Standards (INCITS); this combination is often referred to as FC-NVMe or sometimes NVMe/FC.[50]
As of May 2021, supported NVMe transport protocols are:
- FC, FC-NVMe[50][51]
- TCP, NVMe/TCP[52]
- Ethernet, RoCE v1/v2 (RDMA over converged Ethernet)[53]
- InfiniBand, NVMe over InfiniBand or NVMe/IB[54]
The standard for NVMe over Fabrics was published by NVM Express, Inc. in 2016.[55][56]
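As an example of how a fabric transport frames NVMe traffic, every NVMe/TCP protocol data unit (PDU) begins with an 8-byte common header carried over an ordinary TCP stream. The C sketch below mirrors that header as described in the NVMe/TCP transport specification; the struct and field names themselves are illustrative rather than taken from any particular implementation.

```c
#include <stdint.h>

/* Minimal sketch of the 8-byte common header that begins every
 * NVMe/TCP PDU. Command capsules, response capsules, and data
 * transfers are all carried as PDUs framed by this header. */
struct nvme_tcp_common_hdr {
    uint8_t  pdu_type;  /* e.g. connection setup, command/response capsule */
    uint8_t  flags;
    uint8_t  hlen;      /* length of the PDU header                        */
    uint8_t  pdo;       /* PDU data offset                                 */
    uint32_t plen;      /* total PDU length, little-endian on the wire     */
};
```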
The following software implements the NVMe-oF protocol:
- Linux NVMe-oF initiator and target.[57] RoCE transport was supported initially, and with Linux kernel 5.x, native support for TCP was added.[58]
- Storage Performance Development Kit (SPDK) NVMe-oF initiator and target drivers.[59] Both RoCE and TCP transports are supported.[60][61]
- StarWind NVMe-oF initiator[62] and target for Linux and Microsoft Windows, supporting RoCE, TCP, and Fibre Channel transports.[63]
- Lightbits Labs NVMe over TCP target[64] for various Linux distributions[65] and public clouds.
- Bloombase StoreSafe Intelligent Storage Firewall supports NVMe over RoCE, TCP, and Fibre Channel for transparent storage security protection.
Comparison with AHCI
The Advanced Host Controller Interface (AHCI) has the benefit of wide software compatibility, but has the downside of not delivering optimal performance when used with SSDs connected via the PCI Express bus. As a logical-device interface, AHCI was developed when the purpose of a host bus adapter (HBA) in a system was to connect the CPU/memory subsystem with a much slower storage subsystem based on rotating magnetic media. As a result, AHCI introduces certain inefficiencies when used with SSD devices, which behave much more like RAM than like spinning media.[7]
The NVMe device interface has been designed from the ground up, capitalizing on the lower latency and parallelism of PCI Express SSDs, and complementing the parallelism of contemporary CPUs, platforms and applications. At a high level, the basic advantages of NVMe over AHCI relate to its ability to exploit parallelism in host hardware and software, manifested by the differences in command queue depths, efficiency of interrupt processing, the number of uncacheable register accesses, etc., resulting in various performance improvements.[7][66]: 17–18
The table below summarizes high-level differences between the NVMe and AHCI logical-device interfaces.
| AHCI | NVMe |
---|---|---|
Maximum queue depth | One command queue; up to 32 commands per queue | Up to 65535 queues;[67] up to 65536 commands per queue |
Uncacheable register accesses (2000 cycles each) | Up to six per non-queued command; up to nine per queued command | Up to two per command |
Interrupt | A single interrupt | Up to 2048 MSI-X interrupts |
Parallelism and multiple threads | Requires a synchronization lock to issue a command | No locking |
Efficiency for 4 KB commands | Command parameters require two serialized host DRAM fetches | Command parameters fetched in one 64-byte fetch |
Data transmission | Usually half-duplex | Full-duplex |
Host Memory Buffer (HMB) | No | Yes |
Operating system support
- ChromeOS
- On February 24, 2015, support for booting from NVM Express devices was added to ChromeOS.[69][70]
- DragonFly BSD
- The first release of DragonFly BSD with NVMe support is version 4.6.[71]
- FreeBSD
- Intel sponsored an NVM Express driver for FreeBSD's head and stable/9 branches.[72][73] The nvd(4) and nvme(4) drivers have been included in the GENERIC kernel configuration by default since FreeBSD version 10.2 in 2015.[74]
- Genode
- Support for consumer-grade NVMe was added to the Genode framework as part of the 18.05[75] release.
- iOS
- With the release of the iPhone 6S and 6S Plus, Apple introduced the first mobile deployment of NVMe over PCIe in smartphones.[79] Apple followed these releases with the release of the first-generation iPad Pro and first-generation iPhone SE that also use NVMe over PCIe.[80]
- Linux
- Intel published an NVM Express driver for Linux on 3 March 2011,[81][82][83] which was merged into the Linux kernel mainline on 18 January 2012 and released as part of version 3.3 of the Linux kernel on 19 March 2012.[84] The Linux kernel has supported the NVMe Host Memory Buffer[85] since version 4.13.1,[86] with a default maximum size of 128 MB.[87] NVMe Zoned Namespaces have been supported since version 5.9.
- macOS
- Apple introduced software support for NVM Express in Yosemite 10.10.3. The NVMe hardware interface was introduced in the 2016 MacBook and MacBook Pro.[88]
- NetBSD
- NetBSD added support for NVMe in NetBSD 8.0.[89] The implementation is derived from OpenBSD 6.0.
- OpenBSD
- Development work to support NVMe in OpenBSD was started in April 2014 by a senior developer formerly responsible for USB 2.0 and AHCI support.[90] Support for NVMe was enabled in the OpenBSD 6.0 release.[91]
- OS/2
- Arca Noae provides an NVMe driver for ArcaOS, as of April 2021. The driver requires advanced interrupts as provided by the ACPI PSD running in advanced interrupt mode (mode 2), and thus also requires the SMP kernel.[92]
- VMware
- Intel has provided an NVMe driver for VMware,[94] which is included in vSphere 6.0 and later builds, supporting various NVMe devices.[95] As of vSphere 6 update 1, VMware's VSAN software-defined storage subsystem also supports NVMe devices.[96]
- Windows
- Microsoft added native support for NVMe to Windows 8.1 and Windows Server 2012 R2.[66][97] Native drivers for Windows 7 and Windows Server 2008 R2 have been added in updates.[98] Many vendors have released their own Windows drivers for their devices as well. There are also manually customized installer files available to install a specific vendor's driver to any NVMe card, such as using a Samsung NVMe driver with a non-Samsung NVMe device, which may be needed for additional features, performance, and stability.[99]
- Support for NVMe HMB was added in the Windows 10 Anniversary Update (version 1607) in 2016.[44] In Microsoft Windows from Windows 10 1607 to Windows 11 23H2, the maximum HMB size is 64 MB. Windows 11 24H2 raises the maximum HMB size to 1/64 of system RAM (for example, 256 MB on a system with 16 GB of RAM).[100]
- Support for NVMe ZNS and KV was added in Windows 10 version 21H2 and Windows 11 in 2021.[101] The OpenFabrics Alliance maintains an open-source NVMe Windows Driver for Windows 7/8/8.1 and Windows Server 2008R2/2012/2012R2, developed from the baseline code submitted by several promoter companies in the NVMe workgroup, specifically IDT, Intel, and LSI.[102] The current release is 1.5 from December 2016.[103]
Software support
Management tools
nvmecontrol
The nvmecontrol tool is used to control an NVMe disk from the command line on FreeBSD. It was added in FreeBSD 9.2.[106]
nvme-cli
NVM-Express user space tooling for Linux.[107]
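On Linux, tools like nvme-cli are built on the kernel's NVMe passthrough ioctl interface, which allows user space to send admin commands to a controller's character device. The following minimal sketch issues an Identify Controller command (opcode 0x06, CNS=1) to /dev/nvme0 and prints the drive's model string; it assumes a Linux system with the standard <linux/nvme_ioctl.h> header and a drive present at that path.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/nvme_ioctl.h>

int main(void)
{
    uint8_t data[4096] = {0};            /* Identify Controller payload */
    struct nvme_admin_cmd cmd = {
        .opcode   = 0x06,                /* Identify admin command      */
        .addr     = (uint64_t)(uintptr_t)data,
        .data_len = sizeof(data),
        .cdw10    = 1,                   /* CNS=1: Identify Controller  */
    };

    int fd = open("/dev/nvme0", O_RDONLY);
    if (fd < 0) { perror("open /dev/nvme0"); return 1; }

    int err = ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd);
    if (err) {  /* negative: ioctl failure; positive: NVMe status code */
        fprintf(stderr, "identify failed (%d)\n", err);
        close(fd);
        return 1;
    }

    /* Model number: bytes 24..63 of the Identify Controller structure. */
    char model[41] = {0};
    memcpy(model, data + 24, 40);
    printf("Model: %s\n", model);

    close(fd);
    return 0;
}
```

Run as root, this prints roughly the same model string that nvme-cli's id-ctrl command reports for the same device.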