In AI, No Data Is Ever Cold

Posted by Thomas Coughlin, Contributor


So said David Flynn, CEO of Hammerspace, at the 2025 SNIA SDC in Santa Clara, CA. Among his other accomplishments, David was CEO of FusionIO, the company that pioneered the SSD interface that eventually became NVMe.

He gave a clear, well-thought-out talk about the Open Flash Platform (OFP), which is based on Linux NFS and a metadata server that handles data orchestration in a control plane separate from the data path. OFP would dramatically improve system performance by avoiding system controllers and moving data directly from the storage devices to processors.

The image below compares the OFP storage architecture on the right with traditional NAS architecture on the left. By separating the data path from the metadata and data orchestration, OFP eliminates the controller used in conventional storage systems and makes linear scaling possible, particularly to provide the higher performance needed by AI workflows.

By eliminating the controller and other potential bottlenecks, he says OFP can reduce cost by 60%, increase capacity by 10X, and use just 10% of the power of conventional SSD storage.

He introduced the idea of an OFP sled that basically consists of banks of flash memory. It could use conventional SSDs, but he also suggested that denser packages could be built for these sleds, such as the E2 512TB Embedded NVMe SSD shown below.

These could be stacked into the sled configuration shown below with a DPU and a fiber interface to form a storage NIC.

Rows of these sleds could be packed next to each other in a 1U shelf, and these shelves stacked in a rack. The resulting configuration provides much denser storage than conventional all-flash parallel file systems. The figure below shows two fully packed 1U OFP shelves in one rack compared to two fully populated racks of conventional flash storage. Both configurations have about the same capacity, roughly 50PB, but the OFP has 40X greater storage density and 98% lower power consumption.

A fully populated OFP rack with 252 sleds could provide an exabyte of storage and 200 Tbps data rates. Idle power would be just 25 kW, working out to 40 PB/kW of capacity and 8 Tbps/kW of throughput. The table below compares the OFP 1EB storage platform with conventional parallel file flash storage: one rack achieves this versus 10 racks, with 10X lower power consumption, 10X less service time per year, a longer service life, and 10X lower operating costs per year.
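The quoted rack-level figures are internally consistent, which a quick back-of-the-envelope calculation shows. The sketch below uses only the numbers from the talk (1 EB, 252 sleds, 25 kW idle, 200 Tbps); the per-sled drive count is my own estimate assuming the 512TB E2 SSDs mentioned earlier.

```python
# Back-of-the-envelope check of the OFP rack figures quoted above.
# Rack totals come from the talk; the per-sled breakdown is an estimate.

capacity_pb = 1000       # 1 EB expressed in PB (decimal units)
sleds = 252
idle_power_kw = 25
throughput_tbps = 200

pb_per_sled = capacity_pb / sleds            # ~3.97 PB per sled
pb_per_kw = capacity_pb / idle_power_kw      # 40 PB/kW, matching the quoted figure
tbps_per_kw = throughput_tbps / idle_power_kw  # 8 Tbps/kW, matching

# Assuming 512TB (0.512 PB) E2 SSDs, each sled would hold roughly
# eight drives: 3.97 / 0.512 is about 7.8.
drives_per_sled = pb_per_sled / 0.512

print(f"{pb_per_sled:.2f} PB/sled, {pb_per_kw:.0f} PB/kW, "
      f"{tbps_per_kw:.0f} Tbps/kW, ~{drives_per_sled:.1f} drives/sled")
```

The ~4 PB per sled implied by these numbers is what makes the high-density embedded flash packaging, rather than conventional SSD form factors, attractive for this design.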

David also said that such a system could tolerate many more failures and degrade in place, part of why he estimated a longer service life. The sled components most likely to fail, such as the DPU, would be modular and easily replaced in house.

Hammerspace is working with several organizations to make OFP happen, including Los Alamos National Laboratory, the Linux community, SK hynix, ScaleFlux and Xsight. It was also clear from the discussion after his talk that other prominent storage companies are working on OFP. He pointed out that many of these ideas, such as connecting storage directly to an Ethernet fabric, have been around for years, but until the rise of AI there was not the demand for storage performance that there is today.

At the 2025 SNIA SDC Hammerspace’s David Flynn presented the concept for a Linux NFS-based Open Flash Platform that can provide the capacity and performance needed for modern AI workflows.



Forbes
