It may have escaped the notice of many IT professionals, but Intel recently sold its NAND flash and solid-state drive (SSD) manufacturing assets to South Korean firm SK Hynix for $9bn. The interesting part, however, is what it didn’t sell.
Intel is holding tight to its 3D XPoint-based Optane products and sees a wide gap opening up for the performance characteristics they offer, which sit somewhere between RAM and storage and are well-suited to multiple emerging use cases. These include databases, simulation and analytics, virtual infrastructure and artificial intelligence (AI).
But what is Optane and what use cases is it best suited to?
In terms of performance, Optane sits between expensive but fast DRAM memory and less costly but much slower-to-access storage media such as SSD, disk and tape.
In a recent webcast, Intel senior director Kristie Mann put Optane latency at between 1µs (microsecond) for its Optane Persistent Memory (PMem) product and 10µs for the SSD version. That compares with about 100ns (nanoseconds) for DRAM, while standard SSD latency runs up to around 100µs. Hard disk drives and tape are, of course, far slower still.
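The latency hierarchy described above can be laid out in a short sketch. The figures are the approximate ones cited in the article, expressed in nanoseconds, and the comparison against DRAM is simple arithmetic:

```python
# Approximate access latencies cited in the article, in nanoseconds.
latency_ns = {
    "DRAM": 100,
    "Optane PMem": 1_000,    # ~1 microsecond
    "Optane SSD": 10_000,    # ~10 microseconds
    "NAND SSD": 100_000,     # ~100 microseconds
}

# Print each tier from fastest to slowest, with its multiple of DRAM latency.
for tier, ns in sorted(latency_ns.items(), key=lambda kv: kv[1]):
    print(f"{tier:12s} {ns:>8,} ns  ({ns // latency_ns['DRAM']}x DRAM)")
```

The numbers make the "gap" concrete: Optane PMem is roughly 10x slower than DRAM but roughly 100x faster than a standard NAND SSD.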
3D XPoint is architected differently from other flash products. It is based on phase-change memory technology.
It has no transistors and gets its name from its “cross-point” architecture that puts memory cells and selectors at the intersection of perpendicular wires in grids that can be stacked and connected in three dimensions to increase the density of storage.
Cells are persistent and hold their values indefinitely, even through a power loss. Reads and writes occur by varying the amount of voltage sent to each selector.
Mann identified three key trends Intel sees as driving Optane adoption. These are:
- Increased demand for more cores and threads in compute, which in turn drives…
- Increased demand for more server memory. She cited research (unspecified source) that said most enterprise servers would want more than 0.5TB of memory by 2025. And finally…
- Dynamic random access memory (DRAM) density, which cannot keep pace with these needs.
“Optane fills the gap between memory and storage in latency terms,” said Mann. “It sits on the memory bus but data is persistent, with large capacity and low latency.”
Optane is also byte-addressable. That means the media can be written to and read from at the granularity of a single byte. That is a key characteristic of memory, and what distinguishes it from storage, which is accessed in blocks.
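Byte-addressability can be illustrated with an ordinary memory-mapped file, a rough stand-in for how persistent memory is exposed to software. On a real system the mapped file would live on a DAX-mounted PMem filesystem; the temporary file used here is purely illustrative:

```python
import mmap
import os
import tempfile

# Create a small scratch file to stand in for a persistent-memory region.
# (On real hardware this would be a file on a DAX-mounted PMem filesystem.)
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)

with mmap.mmap(fd, 4096) as region:
    # Write a single byte directly into the mapped region -- no block I/O,
    # no read-modify-write of a 4KB sector, just one byte at one address.
    region[0] = 0x41
    assert region[0:1] == b"A"

os.close(fd)
os.remove(path)
```

A block device, by contrast, would force the application (or the filesystem beneath it) to read, modify and rewrite an entire sector to change that one byte.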
Optane PMem can work in two different modes. These are app-direct and memory. In app-direct mode, applications access Optane as a tier of non-volatile persistent memory to store data permanently while DRAM is used as a tier of volatile memory. In memory mode, applications access DRAM and Optane PMem as a single pool of volatile memory.
Mann believes app-direct will eventually be the most-used mode of operation, but that it will take time to catch on because of the application refactoring it requires. Meanwhile, memory mode offers large memory capacity without the need to optimise applications, but offers no persistence.
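The distinction between the two modes can be sketched in Python. The class names and the file-backed "persistent tier" are illustrative assumptions only: real app-direct programming would typically go through something like Intel's PMDK libraries, and memory mode is invisible to software entirely:

```python
import os
import tempfile

class VolatilePool:
    """Memory mode (sketch): DRAM and PMem appear as one large volatile
    pool. No application changes needed, but contents vanish on restart."""
    def __init__(self):
        self.data = {}

class PersistentTier:
    """App-direct mode (sketch): the application explicitly places data
    on the persistent tier so it survives power loss. A plain file stands
    in for byte-addressable PMem here."""
    def __init__(self, path):
        self.path = path

    def put(self, value: bytes):
        with open(self.path, "wb") as f:
            f.write(value)
            f.flush()
            os.fsync(f.fileno())  # analogous to flushing CPU caches to media

    def get(self) -> bytes:
        with open(self.path, "rb") as f:
            return f.read()

path = os.path.join(tempfile.gettempdir(), "pmem_sketch.bin")
tier = PersistentTier(path)
tier.put(b"survives restart")

# A "restarted" application re-opens the same region and finds its data:
assert PersistentTier(path).get() == b"survives restart"
os.remove(path)
```

The sketch shows why app-direct requires refactoring: the application itself must decide what belongs on the persistent tier and when to flush it, whereas memory mode asks nothing of the code at all.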
She recently estimated that 60% of Optane proofs-of-concept (PoC) deployments – which number nearly 600 – use Optane PMem in memory mode and 40% have modified systems to use it in app-direct mode.
Those PoC deployments break down like this:
- SAP Hana – 23%
- Virtual infrastructure – 22%
- High-performance computing (HPC) – 18%
- AI and analytics – 13%.
Intel defines HPC to include visualisation and simulation, as well as credit card fraud detection and trading operations in financial services – a much wider definition than the supercomputing-style workloads the term has generally been associated with.