Can I replace HDDs with SSDs to support in-memory storage?
Replacing all your HDDs with SSDs won't solve the storage issues associated with in-memory databases. Look to hyper-convergence and NVDIMMs instead.
Using in-memory storage can boost performance, but it's important to have adequate persistent storage on the back end. Designing this architecture is difficult, but IT administrators have several options.
In-memory databases come in two flavors: those that never persist anything, and those that write every update through to persistent storage.
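To make the distinction concrete, here is a minimal Python sketch of the two flavors. The class names and the log file are hypothetical stand-ins for illustration, not any particular product's API.

```python
import json
import os

class PureInMemoryStore:
    """Flavor 1: never saves anything; contents vanish on restart."""
    def __init__(self):
        self.data = {}

    def put(self, key, value):
        self.data[key] = value  # memory only, no durability

class WriteThroughStore:
    """Flavor 2: every update is also written to persistent storage."""
    def __init__(self, log_path="updates.log"):
        self.data = {}
        self.log = open(log_path, "a")

    def put(self, key, value):
        self.data[key] = value
        # Persist before acknowledging; fsync pushes the record
        # through the OS cache to stable media.
        self.log.write(json.dumps({"k": key, "v": value}) + "\n")
        self.log.flush()
        os.fsync(self.log.fileno())
```

The second flavor is the one that stresses the back end: every transaction now carries a synchronous write, and that is where slow disk arrays become the bottleneck.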
Cluster architectures that once lived comfortably on RAID arrays of 60 or 120 hard drives can't keep up with the accelerated transaction rates of in-memory storage. What's an admin to do?
All of the alternatives cost more than old disk drives, but they reduce server counts, and the overall economics favor ultra-fast architectures.
Back ends to support in-memory storage
Simply replacing HDDs with SSDs doesn't work: RAID arrays have bandwidth limits, and 60 SSDs, even lower-speed units, would saturate them. Hyper-converged infrastructure works well if it has ultra-fast NVMe drives that can be accessed both locally within their host server and via remote direct memory access (RDMA) across the cluster.
Resiliency and the desire for nonstop operation dictate that the persistent copy reside in another node in case the in-memory node dies. This requires a write via RDMA to another server's drives. But using remote storage as a journal -- and accepting a small window of exposure for transactions that might need to be recovered -- can simplify the architecture and avoid long transaction times.
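As a sketch of that journaling pattern, the following Python fragment commits updates to memory immediately while a background writer appends them to a journal. A local file stands in here for the remote server's drives reached over RDMA; anything still queued when a node dies is the small window of exposure that must be replayed on recovery.

```python
import os
import queue
import threading

class JournaledStore:
    """Updates land in memory at once; a background thread appends
    them to a journal. Queued-but-unflushed updates are the exposure
    window; after a crash they are recovered by replaying the log."""
    def __init__(self, journal_path="journal.log"):
        self.data = {}
        self.pending = queue.Queue()
        self.journal = open(journal_path, "a")
        threading.Thread(target=self._writer, daemon=True).start()

    def put(self, key, value):
        self.data[key] = value          # fast in-memory commit
        self.pending.put((key, value))  # persisted asynchronously

    def _writer(self):
        while True:
            key, value = self.pending.get()
            self.journal.write(f"{key}\t{value}\n")
            self.journal.flush()
            os.fsync(self.journal.fileno())
```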
NVDIMMs extend memory and deliver faster performance than NVMe, but they also increase costs. Like dynamic RAM, NVDIMM capacity can be exposed to the whole cluster as a pooled resource. And byte-addressable I/O across RDMA is still significantly faster than 4 KB block I/O over RDMA.
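The byte-addressable difference is easy to see in a sketch. Below, an ordinary memory-mapped file stands in for an NVDIMM (real persistent-memory programming would typically go through a library such as PMDK): updating eight bytes touches eight bytes, where a block device would force a full 4 KB read-modify-write.

```python
import mmap
import os

# "pmem.img" is a hypothetical stand-in for byte-addressable
# persistent memory exposed as a memory-mapped region.
FILE, SIZE = "pmem.img", 16 * 4096

fd = os.open(FILE, os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, SIZE)
buf = mmap.mmap(fd, SIZE)

# Byte-addressable update: write 8 bytes in place...
buf[128:136] = (42).to_bytes(8, "little")
# ...and flush only the page containing them.
buf.flush(0, mmap.PAGESIZE)

# On a block device, the same 8-byte change would mean reading,
# modifying and rewriting an entire 4 KB block.
```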