The Flash Storage market continues to innovate, with the release of bigger and cheaper models. To some extent, the market has two categories, the Enterprise market and the PC market.
It is well known that flash drives have a lifespan that depends on the number of times data is written to them. Manufacturers usually express this endurance with two different numbers: Drive Writes Per Day (DWPD) and Terabytes Written (TBW). They also quote a warranty lifetime in years. How these two numbers relate depends on the size of the drive. Say you have a 2TB flash drive with a 5 year warranty, a DWPD of 2 and a TBW of 7300.
DWPD is simply the number of times the entire drive can be overwritten every day, for the drive's planned lifetime. So, if you multiply out the DWPD value over 5 years, you get 2 * 365 * 5 = 3650 full drive writes. Basically, the higher the DWPD the better quality the flash drive will be, but at a cost of course.
Terabytes Written (TBW) is just the total amount of data that you can write to the drive in its lifetime. Multiplying those 3650 drive writes by the 2TB capacity gives the TBW of 7300 in the example above. However the two figures do not always line up so neatly, especially for larger drives. This becomes important now that 96-layer triple-level cell (TLC) 3D NAND solid-state drives (SSDs) are coming onstream. These drives have a very high capacity for their footprint, which will (or should) lower the cost per gigabyte, but maybe at an endurance cost.
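The relationship between the two ratings is simple arithmetic, and can be sketched as a quick calculation (the figures below are just the example drive, not any real product):

```python
def tbw(dwpd, capacity_tb, warranty_years):
    """Total Terabytes Written implied by a DWPD rating.

    DWPD says the whole drive can be rewritten that many times per
    day for the warranty period, so the lifetime write budget is
    DWPD x capacity x 365 days x warranty years.
    """
    return dwpd * capacity_tb * 365 * warranty_years

# The example drive: 2 DWPD on 2 TB over a 5 year warranty
# is 3650 full drive writes, or 7300 TB written in total.
print(tbw(2, 2.0, 5))  # 7300.0
```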
For example, Western Digital is expected to start shipping its new NVMe-based SSDs in volume toward the end of this year. These are the Ultrastar DC SN640 and SN340 models: a long, ruler-like device with a maximum capacity of 30.72 TB, and a smaller 22-mm-by-110-mm device storing up to 3.84 TB. The Ultrastar DC SN640 can support up to 128 NVMe namespaces, which lets you subdivide a drive and give individual users their own storage space. However, the SN640 comes with two endurance options: 0.8 DWPD with a capacity up to 7.68 TB, and 2.0 DWPD with a capacity up to 6.4 TB. The Ultrastar SN340 is designed for read-intensive workloads and has a DWPD of just 0.3, with capacities of between 3.84 TB and 7.68 TB.
At the other end of the scale, Toshiba is to release a high performance single-level cell (SLC) XL-Flash chip designed to fill the performance gap between conventional NAND flash and more expensive DRAM. XL-Flash is intended for enterprise and datacenter storage applications, competing with Intel's Optane 3D XPoint technology. Samsung is also working on a similar SLC-based Z-NAND technology, designed for the same niche, and we could see it in 2020.
Toshiba is making XL-Flash with the same basic process as its 96-layer 3D NAND, but it uses shorter bit lines and word lines to build a flash memory die with many more planes than the two or four usually seen on current NAND devices. Toshiba claims that XL-Flash will have one tenth the read latency of TLC NAND, providing better random read IOPS, especially at low queue depths. Initial capacities are planned at 256GB, 512GB and 1TB, so sizes are significantly smaller than the 96 layer devices.
It supports NVMe 1.3 and PCI Express (PCIe) 3.0 to enable up to 4 GB per second of bandwidth over four lanes, and future releases will support up to 8 GBps with PCIe 4.0. The 16-plane architecture promises more efficient parallelism in future.
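The quoted bandwidth figures can be sanity-checked from the link parameters: PCIe 3.0 signals at 8 GT/s per lane with 128b/130b line coding, and PCIe 4.0 doubles the transfer rate. A rough sketch (ignoring protocol overheads above the physical layer):

```python
def pcie_bandwidth_gbps(gt_per_s, lanes, payload_bits=128, line_bits=130):
    """Approximate usable bandwidth in GB/s for a PCIe link.

    Each lane signals at gt_per_s gigatransfers per second; 128b/130b
    line coding means 128 of every 130 bits carry data. Divide by 8
    to convert gigabits to gigabytes.
    """
    return gt_per_s * lanes * (payload_bits / line_bits) / 8

print(round(pcie_bandwidth_gbps(8, 4), 2))   # PCIe 3.0 x4 -> 3.94 GB/s
print(round(pcie_bandwidth_gbps(16, 4), 2))  # PCIe 4.0 x4 -> 7.88 GB/s
```

So "up to 4 GB per second over four lanes" is the raw ceiling; real transfers will be a little lower once transaction-layer overheads are included.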
While XL-Flash is expected to be in mass production in 2020, there are still standardisation issues to be resolved, and the market for the device is unclear. It could well be that this is currently a solution looking for a problem, though it could serve as a higher tier cache for TLC NAND. The technology also needs to be endorsed by a standards committee, such as SNIA.
The 'QLC' stands for quad-level cell, meaning four bits stored per cell, and V-NAND is Vertical NAND. V-NAND stacks memory cells vertically, which means more capacity for a given surface area. A V-NAND cell is a hole cut into a polysilicon substrate, and this hole is filled with various cylindrical layers, one of which is the charge storage layer, a silicon nitride cylinder that traps the charge. Individual cells are placed up the cylinder and a central rod of polysilicon acts as the conducting channel. The data in the vertical stack is read by sensing how much current flows, rather than simply its presence or absence. The charges cannot move vertically through the silicon nitride storage medium, and the electric fields associated with the gates are closely confined within each layer, so memory cells in different vertical layers do not interfere with each other. Also, because the cylindrical layers are thicker than those in planar NAND, there is no read disturb. This 'simple' V-NAND architecture is used for 2-, 3- and 4-bit-per-cell devices.
As of 2019, V-NAND can stack cells up to 96 layers high. These high capacity V-NAND chips will most likely be aimed at the Enterprise market. Once you start building high NAND stacks, the 'simple' V-NAND architecture is not able to access the cells low down in the structure. Instead the device is built up rather like a step pyramid, and connections are made by sinking vertical holes (called vias) down from each step to connect with each layer. Each step must have a specific height, so a 96 layer chip will be about twice as tall as a 48 layer chip. These devices are expected to connect with NVMe rather than SATA, to get the access speeds required by Enterprise systems.
Flash Memory is used extensively in enterprise storage subsystems, either as an 'All-Flash' subsystem or less commonly as a top level storage tier in a hybrid system. As well as big iron, flash storage is also found in personal computers, PDAs, digital audio players, digital cameras, mobile phones, synthesizers, video games, scientific instrumentation, industrial robotics, medical electronics and more. Flash has become so popular because it has fast read access times, it is more resistant to mechanical shock than hard disks in portable devices, and it can withstand high pressure, temperature and immersion in water.
Flash memory was developed from EEPROM (electrically erasable programmable read-only memory) by Toshiba in 1984. However, while EEPROM has to be completely erased before it can be overwritten, bytes, blocks or pages of flash memory can be re-written.
Two main types of flash memory exist, NOR and NAND. The names are derived from the way the individual memory cells are coupled together, resembling NOR and NAND gates. NOR flash cells are connected in parallel to the bit lines, so each cell can be addressed individually. NAND flash cells are connected in series just like a NAND gate and this means that the cells cannot be programmed individually but must be read in series.
NOR-based flash has relatively long erase and write times, but its big advantage is that it is possible to read and update data at bit level, just like random-access memory. This means that it can be used as a direct replacement for those older ROM chips that are used to store data that is frequently referenced, but not updated very often. In fact, programs stored in NOR flash can be executed directly from the NOR flash without needing to be copied into RAM first.
The data bits in an unused NOR flash device are all initially set to binary '1'. When the device is programmed, all the bits that need to be changed are set to binary '0'. If the data needs to be re-written, then all the data in an entire block must first be reset back to '1'. Typical block sizes are 64, 128, or 256 KB. This means that write times are relatively slow, compared to NAND flash.
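This one-way programming behaviour can be modelled with simple bit arithmetic: programming can only clear bits (1 to 0), so any write that needs a bit to go back to 1 forces a block erase first. A minimal sketch:

```python
ERASED = 0xFF  # an erased flash byte has all bits set to binary '1'

def program(old, new):
    """Programming can only clear bits (1 -> 0); result is old AND new."""
    return old & new

def needs_erase(old, new):
    """True if writing `new` over `old` would need any bit to go 0 -> 1."""
    return (old & new) != new

assert program(ERASED, 0xA5) == 0xA5  # fresh byte: program works directly
assert not needs_erase(0xFF, 0x0F)    # only 1 -> 0 transitions needed
assert needs_erase(0x0F, 0xF0)        # needs 0 -> 1: erase the block first
```

Because the erase unit is a whole 64, 128 or 256 KB block, a single stray byte that needs re-writing can trigger a large erase cycle, which is why NOR write times look slow.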
A NOR chip needs a separate metal contact for each cell, but NAND chips do not, so the packing density of NAND cells is typically 2.5 times higher than on a NOR chip.
NAND flash can write and erase data faster than NOR and it does not have those metal contacts, which means it needs less chip area per cell. This means it can store more cells in a given area so NAND is cheaper than NOR for a given capacity. Also, NOR flash storage must be error free, whereas NAND flash can tolerate some faulty cells, which also reduces the price as fewer faulty chips need to be discarded. So why has NAND flash not replaced NOR flash? The problem with NAND is that it is made up of pages that are grouped into blocks; reading or updating data happens at page level, while erasing data happens at block level.
So is this really a problem? Well, NAND storage is not suitable for replacing byte-level random access storage as used by most microprocessors and microcontrollers; this is where NOR flash is still required. However it is very suitable for replacing storage that accesses data at block level, like magnetic disks! NAND pages are typically 512, 2,048 or 4,096 bytes in size and these are reasonable block sizes for disk storage. NAND flash storage is eating up the data traditionally stored on magnetic disk.
NAND cells can go faulty, so NAND flash manages bad cells in a couple of ways. Each page contains an error correcting code (ECC) checksum area, typically a few bytes, and this is used to compensate for those bits that might go faulty during normal device operation. When the device is performing an erase or update, it can detect blocks that fail to program or update and then it can write the data to a good block and mark the bad block in the bad block map. This means that the overall memory capacity gradually shrinks as more blocks are marked as bad.
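The bad block handling described above can be sketched as a toy model (the class and block counts are purely illustrative, not any real controller's design): a failed program marks the block in the bad block map, the data is relocated to a good block, and the usable capacity shrinks.

```python
class NandDevice:
    """Toy model of NAND bad block management (illustrative only)."""

    def __init__(self, num_blocks, failing_blocks=()):
        self.blocks = {}                     # block number -> stored data
        self.bad_blocks = set()              # the bad block map
        self.num_blocks = num_blocks
        self._failing = set(failing_blocks)  # blocks that fail to program

    def good_blocks(self):
        return [b for b in range(self.num_blocks) if b not in self.bad_blocks]

    def write(self, block, data):
        """Write data, remapping to the next free good block on failure."""
        if block in self._failing:
            self.bad_blocks.add(block)       # mark bad: capacity shrinks
            block = next(b for b in self.good_blocks() if b not in self.blocks)
        self.blocks[block] = data
        return block

dev = NandDevice(8, failing_blocks={3})
assert dev.write(3, "payload") != 3  # data relocated to a good block
assert 3 in dev.bad_blocks           # failure recorded in the map
assert len(dev.good_blocks()) == 7   # overall capacity has shrunk
```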
NAND flash memory cards are much faster at reading than writing, and intensive write and read patterns can wear out a flash device.
Read disturb is the term used to describe the fact that every time a NAND flash memory cell is read, nearby cells in the same memory block are affected. If data is continually read from the same cell, that cell itself will not fail, but the cells around it might, though it can take hundreds of thousands of reads before this happens. To prevent this, the flash controller will count the number of reads to a NAND block and if a threshold is exceeded it will write that data to a new block. The original block is then erased, and the erasure fixes any disturbance caused by the reads.
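The controller's counting logic can be sketched as follows (the threshold value and function names are illustrative assumptions, not taken from any real firmware):

```python
READ_DISTURB_THRESHOLD = 100_000  # illustrative; real thresholds vary by device

def read_page(block, counters, relocate):
    """Count reads per block; once past the threshold, rewrite the data
    to a fresh block and erase the original, resetting the counter."""
    counters[block] = counters.get(block, 0) + 1
    if counters[block] >= READ_DISTURB_THRESHOLD:
        relocate(block)        # copy the data out and erase the worn block
        counters[block] = 0

moves, counters = [], {}
for _ in range(READ_DISTURB_THRESHOLD):
    read_page(7, counters, moves.append)

assert moves == [7]       # exactly one relocation, triggered at the threshold
assert counters[7] == 0   # erase resets the read count
```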
Flash memory has a limit on how many times a block can be erased or programmed, typically about 100,000. The chip or driver firmware can count the writes to a block, then dynamically relocate a block if it is starting to exceed a write threshold. This is called Wear Levelling. If a write fails, then the data can be written to another block and the failing block marked as unusable. This is called Bad Block Management.
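In its simplest form, wear levelling just steers each new write to the least-worn free block, so erase cycles even out across the device. A minimal sketch (the erase counts below are made up for illustration):

```python
def pick_block_for_write(erase_counts, free_blocks):
    """Simple wear levelling: choose the free block with the fewest
    erase cycles, spreading wear evenly across the device."""
    return min(free_blocks, key=lambda b: erase_counts[b])

# Block 1 has seen by far the least wear, so it takes the next write.
erase_counts = {0: 9000, 1: 120, 2: 50000, 3: 300}
assert pick_block_for_write(erase_counts, [0, 1, 2, 3]) == 1
```

Real controllers are far more elaborate, distinguishing static from dynamic data, but the principle is the same: never let one block absorb all the writes.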
A flash memory cell is effectively a transistor. Data is stored in a floating gate that is sandwiched between two insulating silicon dioxide layers, which in turn sit between a control gate and the transistor base. Because the floating gate is electrically isolated, electrons placed on it will stay there until they are removed by the application of an electric field. Trapped electrons in the floating gate represent a logical '0', while their absence represents a logical '1'. The trapped electrons raise the voltage needed to switch the cell on, so to read the cell a reference voltage is applied to the control gate: if the cell conducts, no charge is present and the cell is logical '1'; if it does not conduct, a charge is present and the cell is logical '0'. To set a cell back to '1' by removing the charge, or to erase a whole block of cells, a higher voltage must be applied to the cell.
Most flash devices have only a single supply voltage, so they use an on-chip 'charge pump' to generate that higher voltage.
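The sensing logic described above can be captured in a few lines (all the voltage values here are invented for illustration, not real device parameters):

```python
def read_cell(cell_threshold_v, read_v=3.0):
    """Sense a floating gate cell: trapped electrons raise the cell's
    threshold voltage, so a charged cell does not conduct at the read
    voltage and is sensed as '0'; an uncharged cell conducts ('1')."""
    conducts = read_v > cell_threshold_v
    return 1 if conducts else 0

assert read_cell(1.0) == 1  # no trapped charge: cell conducts -> '1'
assert read_cell(5.0) == 0  # trapped charge raised the threshold -> '0'
```

Multi-level cells extend this idea by sensing against several reference voltages, dividing the threshold range into four, eight or sixteen bands.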
Phase change or PRAM could replace NAND. At a very simplistic level, it works by changing the state of chalcogenide glass, as the two states have different electrical resistance. The amorphous state has a high resistance and represents a binary 0, while the crystalline phase has a low resistance and represents a 1. The advantages of PRAM are that it is more stable than NAND, it can write data much faster, and it does not suffer as much write degradation as NAND does. Also, PRAM has the capability to change a single bit rather than a whole block, and it can change that bit faster. In other words, it is getting closer to RAM performance and usability.
Watch out for 3D XPoint. The Optane 900P is now available in 280 or 480 GB devices. It has been shown to be 10 times faster and 10 times denser than NAND, and was invented by Intel and Micron. Intel is not pitching it as a replacement for either flash storage or RAM, but as something that comes in between the two. They claim that the technology is not phase change, but I have seen a very persuasive argument that it could be just that. The name 3D XPoint (pronounced cross point) relates to the crossbar structure of wiring, with layers of parallel wires, where each layer runs at right angles to the one above. At each intersection of the wires there is a very small column which consists of a memory cell and a selector cell, which is used to allow read and write access to the memory cell. Access is controlled by varying the amount of voltage it receives via the wires. As 3D XPoint does not require transistors to store data, it avoids most of the issues found in NAND chips.
The important part of processing data is getting it to the processor, so that numbers can be crunched, and data can be filtered, sorted and combined, with the results presented back on the customer's screen, or maybe stored in a file. The point is that there is little value in producing storage devices that provide superfast access to data if the comms channels that connect those devices to the processor do not run at the same speed. This problem has been addressed in a number of different ways.
PCIe flash provides faster access over the PCIe bus and does not need SATA or SAS protocol conversion. It uses the standard NVMe driver, which means that you don't need PCIe manufacturer and device-specific drivers. It is becoming the standard way to connect flash devices on an internal bus. However direct attached flash sacrifices all of the benefits of shared storage.
SAN or network attached shared flash storage has issues with connectivity, channel speeds and throughput.
NVMe over fabrics provides PCIe bus-class access speeds between servers and external flash arrays by using RDMA (Remote Direct Memory Access).
Rack-scale flash, mainly introduced by EMC, is designed to bring flash physically closer to the processors. It uses low latency protocols and data path improvements to speed up performance.
Flash NAND memory is relatively slow compared to the DRAM memory used in server and PC memory. DRAM is fast, but the data is volatile, and is lost when the power drops. Imagine the result if you combined the two technologies, very fast access combined with non-volatile data.
Well that is exactly what DDRdrive have done with their DDRdrive X1, released in early May, 2009. The initial device contains 4 GB of DRAM backed up by 4 GB of NAND flash memory. Your applications interface with the very fast DRAM storage, while updates are written out to the NAND flash memory for safekeeping. DDRdrive quotes that the entire drive can be backed up from DRAM to NAND in 60 seconds, with a restore taking the same time. The DDRdrive package is connected to a motherboard via a PCI Express slot.
Christopher George, the CEO and founder of DDRdrive, states that the DDRdrive X1 is 'The drive for speed' (a trademarked description) and should be targeted at very IO intensive applications like databases, while applications that do not need blinding performance can be kept on spinning disk.
For more details, check out http://www.ddrdrive.com/