Why Flash/SSD is useful in today’s enterprise data centers.
While the rest of the data center benefits from continuing breakthroughs in silicon technology, storage remains stuck in a mechanical world, where each access to data waits for the disk head to seek and platter to rotate, inserting millisecond pauses between microsecond data transfers. Virtualization and data consolidation are driving up the demand for random input/output (I/O) at the same time that disk drive performance is falling. The result is an I/O crisis in storage.
Flash memory has transformed the market for consumer devices, allowing manufacturers to deliver revolutionary products in form factors not possible with rotating disk. Flash is poised to have an equally disruptive effect on the enterprise storage market due in part to its random I/O performance, predictable low latency and dramatic power and space savings.
Flash excels at random I/O performance, offering greater than 10x gains in I/Os per second (IOPS); flash has no seeking, no rotational latency, and performs as well on random workloads as on sequential ones. Flash can accelerate virtual server and desktop deployments while affording higher consolidation and greater efficiency, and can accelerate SQL and NoSQL workloads without partitioning or changes to the application. Flash can also substantially reduce the need to overprovision DRAM and can sidestep cache consistency issues. Because flash is like memory (any block of data can be fetched in nearly constant time), applications can be designed to expect sub-millisecond latency no matter what the I/O stream (random or sequential) or data distribution. And because flash storage uses dramatically less power and space than rotational hard drives, businesses can reduce their footprint and substantially expand capacity in place.
Flash should be a #1 priority for Tier 1 data created by random I/O-intensive applications such as server virtualization, desktop virtualization (VDI), database (OLTP, rich analytics/OLAP, SQL, NoSQL) and cloud computing.
The biggest challenges for data center and IT managers.
One of the challenges IT managers face when considering flash storage is understanding which applications benefit from leveraging flash and which don’t. The storage industry has been touting flash as the “cure-all pill” for application performance problems, but folks who have thrown flash blindly at their applications have found mixed results at best. Why? It turns out that some applications are very well-suited for flash, while others may see little performance benefit from it. Some applications will respond well to a flash caching strategy, while for others an “all flash” approach is required to realize the benefit. It all comes down to understanding the I/O profile of your application, something we’ve taken to calling an application’s “I/O fingerprint.” Applications vary greatly in their I/O fingerprints, and a fingerprint can change over time as the workload changes or the architecture evolves.
Overcoming those challenges.
IT managers can assess each application to determine its “I/O fingerprint” and decide how that application should leverage flash storage.
The first thing to understand is how much I/O your application is doing. How many IOPS (I/Os per Second), and how consistent is that I/O load? Understanding this gives a rough idea of how much performance your storage architecture needs to deliver for your application, and how it is changing over time.
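As a minimal sketch of that first measurement, IOPS can be derived by sampling a cumulative I/O counter (such as the completed-I/O counts that OS tools like Linux’s /proc/diskstats expose) at two points in time; the counter values and interval below are hypothetical:

```python
def iops(ios_start: int, ios_end: int, interval_s: float) -> float:
    """Average I/Os per second between two samples of a cumulative
    completed-I/O counter taken interval_s seconds apart."""
    return (ios_end - ios_start) / interval_s

# Hypothetical samples taken 10 seconds apart:
print(iops(1_000_000, 1_085_000, 10.0))  # 8500.0 IOPS
```

Sampling repeatedly (say, every few seconds over a business day) also reveals how consistent the load is and how it trends over time.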
The second thing to understand is I/O size, which is key to interpreting whether given measures of IOPS, latency, and bandwidth are good or bad; when comparing two different applications, you must normalize these measures to a common I/O size.
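To illustrate why normalization matters, here is a rough sketch that converts an IOPS figure at one I/O size into its equivalent at a common size by holding bandwidth constant (an approximation, since real devices are not purely bandwidth-bound; the workload numbers are made up):

```python
def bandwidth_mb_s(iops: float, io_size_kb: float) -> float:
    """Bandwidth implied by an IOPS figure at a given I/O size."""
    return iops * io_size_kb / 1024

def normalized_iops(iops: float, io_size_kb: float, common_kb: float) -> float:
    """Re-express IOPS at a common I/O size, holding bandwidth constant."""
    return iops * io_size_kb / common_kb

# App A: 20,000 IOPS at 4 KB; App B: 5,000 IOPS at 64 KB.
print(bandwidth_mb_s(20_000, 4))      # 78.125 MB/s
print(bandwidth_mb_s(5_000, 64))      # 312.5 MB/s
print(normalized_iops(5_000, 64, 4))  # 80000.0 (4 KB-equivalent IOPS)
```

App B’s raw IOPS number looks smaller, yet at a common 4 KB size it is moving far more data, which is exactly the comparison a raw IOPS figure hides.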
Different storage architectures behave very differently in how they handle read workloads, write workloads, and mixed read/write workloads, and much of the configuration tweaking one does on a storage architecture involves optimization of the device for these access patterns. Understanding the read/write mix of your application can help determine if caching will help your application or not, and will determine a suitable cache size.
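A simple way to capture the read/write mix is to compute read and write fractions from the same I/O counters; the counts below are hypothetical:

```python
def read_write_mix(reads: int, writes: int) -> tuple[float, float]:
    """Fraction of I/Os that are reads vs. writes over a sampling window."""
    total = reads + writes
    return reads / total, writes / total

r, w = read_write_mix(reads=170_000, writes=30_000)
print(f"{r:.0%} read / {w:.0%} write")  # 85% read / 15% write
```

A read-dominated mix like this one is the classic candidate for a flash read cache, whereas a write-heavy mix tends to need flash in the primary data path.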
Finally, you should understand the latency sensitivity of your application. Simply put, some applications will greatly benefit from faster I/O response times, while others will not. If your application is gated by complex application logic and storage I/O times aren’t the bottleneck, then improving storage performance won’t help much. On the other hand, if storage I/O transfers dominate your transaction completion time, then improving them will have a dramatic effect on transaction rate and scalability.
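That trade-off can be quantified with an Amdahl’s-law-style estimate: the overall transaction speedup depends on what fraction of transaction time is spent in storage I/O. The fractions and 10x flash speedup below are illustrative assumptions:

```python
def transaction_speedup(io_fraction: float, io_speedup: float) -> float:
    """Overall transaction speedup (Amdahl's law) when the I/O portion
    of transaction time becomes io_speedup times faster."""
    return 1.0 / ((1.0 - io_fraction) + io_fraction / io_speedup)

# I/O-bound app: 90% of transaction time in storage I/O, I/O 10x faster.
print(round(transaction_speedup(0.90, 10.0), 2))  # 5.26
# Logic-bound app: only 10% of transaction time in storage I/O.
print(round(transaction_speedup(0.10, 10.0), 2))  # 1.1
```

The contrast is the point: the same 10x storage improvement yields a dramatic gain for the I/O-bound application and barely moves the logic-bound one.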