
Hammerspace CEO: Storage flexibility is now crucial

In this Q&A, David Flynn, CEO of Hammerspace, reflects on how COVID-19 and remote work have affected storage vendors that specialize in data storage management.

Software-defined storage vendors are pitching themselves as a means for enterprises to do more with existing technology, especially as COVID-19 continues to disrupt supply chains and hamper hardware distribution.

David Flynn, CEO at Hammerspace, said data storage management tools, such as his company's Global Data Environment (GDE), can bring data from on-premises and cloud products together to help enterprises support a remote workforce and avoid a hardware refresh in the near future.

GDE, which entered general availability last year, is a NAS management service used to bring structured and unstructured data together to appear as a unified file system. Hammerspace's software creates metadata tags over existing files to enable visibility across sites, generate data storage and transfer policies, and dictate file movement.
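
To make that idea concrete, here is a minimal sketch of such a metadata layer, under assumed names and a deliberately simplified data model rather than Hammerspace's actual schema or API: files that physically live on different NAS and cloud back ends appear under one logical namespace, with tags layered over each entry.

```python
from dataclasses import dataclass, field

@dataclass
class Entry:
    logical_path: str   # path users see in the unified file system
    backend: str        # where the bytes actually live today (hypothetical URIs)
    tags: dict = field(default_factory=dict)  # metadata that drives policy

namespace = [
    Entry("/projects/sceneA/frame_0001.exr", "nas://london-filer/vol1",
          {"project": "sceneA", "stage": "render"}),
    Entry("/projects/sceneA/frame_0001_proxy.mov", "s3://studio-bucket/proxies",
          {"project": "sceneA", "stage": "review"}),
]

# Users and policies work against logical paths and tags; which silo holds the
# bytes is a detail the layer can change without touching the namespace.
for e in namespace:
    print(f"{e.logical_path}  [{e.tags['stage']}]  <- {e.backend}")
```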

SearchStorage spoke with Flynn about how GDE differs from other cloud file services, use cases among customers and its future relevance.

As the pandemic wears on and continues to strain resources, how will data storage management tools and SaaS alternatives benefit enterprises over traditional purchases?


David Flynn: Supply chain issues, especially around IT, have been a major deal. That has emphasized the need for flexibility in using [a variety of] resources across the cloud, across a different cloud. … If you can't get the stuff for your own data center, you need to flexibly use many other data centers. The ability to tap into distributed resources has been driven to the forefront.

But there's another trend that also ties into this, and that's the work-from-home [shift], allowing people to move back to their home country or to a different state or wherever.

At such distances, latency matters, especially when these are data-intensive workflows. It's about distributed resources, but in this case it's human resources: people and brain power.

So, whether it's getting more servers operating against your data, or getting more people operating against your data, the thing you have to do is be able to lift the data up and have the data go to wherever those resources are.

This flips the conventional wisdom in the IT world, [where you] move your compute to your data. Data is so massive, you need to do your compute around your data. We need to fix this so data is a fluid resource and can be easily redistributed to where your resources are.

How?

Flynn: We need to make more optimal use of that [data], and of the electric grids where power is generated more cheaply.

We need to fix it so that data is a fluid resource that can easily be distributed to where your resources are. That is a fundamental inversion of the general wisdom in the industry.

It's being driven by these megatrends: the explosive growth in our use of IT running headlong into the constraints of supply chains and COVID, or just natural growth. We would have to get more efficient even if it weren't for COVID.

We can't sustain this growth while inefficiently using the silicon that's being printed across the world. We need to use it more optimally, and to make more optimal use of the electricity grids and of power where it's generated cheaply.

We can move your compute workloads to data centers in Canada or Iceland, where energy is cheap and cooling is easy. The key to all of this is making data global.

How does Hammerspace's product solve this distributed workload issue over other vendors, such as cloud file services?

Flynn: There are those that have been attempting to stretch data geographically -- the cloud gateway [vendors], the Cteras, the Nasunis. They've been in the business of trying to stretch data that's been sitting in one silo out to other endpoints.

We're fundamentally different than any of them. They're propagating the infrastructure-centric view and caching the data. … What we do is completely decouple the data from any infrastructure.

You can set up [automated management] policies so the data is where you need it in advance of when you need it so you can tap into cheap electricity, you can tap into servers, you can tap into people in far-flung distributed spaces.
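
As an illustration of the kind of pre-positioning policy Flynn describes, the sketch below expresses "have the data there before the work starts" as plain data. The policy shape, tag names, site names, and helper function are assumptions made for the example, not product syntax.

```python
# Files whose tags match the policy get staged at the target site ahead of
# time, so the remote compute never waits on a bulk transfer.
policy = {
    "match": {"project": "sceneA", "stage": "render"},
    "place_at": "montreal-render-farm",   # cheaper power and servers
}

files = [
    {"path": "/projects/sceneA/frame_0001.exr",
     "tags": {"project": "sceneA", "stage": "render"}},
    {"path": "/projects/sceneA/notes.txt",
     "tags": {"project": "sceneA", "stage": "archive"}},
]

def matches(tags: dict, match: dict) -> bool:
    """A file qualifies only if every key in the policy's match clause agrees."""
    return all(tags.get(k) == v for k, v in match.items())

to_stage = [f["path"] for f in files if matches(f["tags"], policy["match"])]
print(to_stage, "->", policy["place_at"])
```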

What are some examples of Hammerspace customers taking advantage of 'decoupling from infrastructure'?

Flynn: We have a customer that is doing … special effects and animation. The specific problem they have isn't so much sharing data with the main studio; it's sharing data across the world with artists who are in India or Australia, [as well as] sharing data into other data centers where electricity and servers are cheap, such as Canada, to do the rendering.

If they do the rendering in their local data center -- one in London -- that power is more expensive, and those servers are more premium. It will cost them roughly $1 million to render out a movie scene. If, on the other hand, they can do that render out in Canada on a different part of the Azure Cloud, it costs $700,000.

They've saved $300,000 right there out of the gate, simply by being able to shift their workload to somewhere else. That has not really been feasible before.

We automate the distribution of data across storage infrastructure and can load balance it for performance and capacity. So we are, I would argue for the very first time, offering a way to automate the process of mapping data to storage.
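
A rough sketch of that data-to-storage mapping idea, with invented site names and a deliberately simple placement rule rather than Hammerspace's actual algorithm: hot data is restricted to faster tiers, and among the eligible targets the one with the most free capacity wins.

```python
# Invented storage targets with free capacity (TB) and a crude performance weight.
targets = {
    "london-nas":    {"free_tb": 40,  "perf_weight": 1.0},
    "montreal-nas":  {"free_tb": 220, "perf_weight": 0.8},
    "azure-blob-ca": {"free_tb": 900, "perf_weight": 0.5},
}

def choose_target(size_tb: float, need_fast: bool) -> str:
    """Limit hot data to faster tiers, then pick the target with the most headroom."""
    eligible = [name for name, t in targets.items() if t["free_tb"] >= size_tb]
    if need_fast:
        eligible = [name for name in eligible if targets[name]["perf_weight"] >= 0.8]
    return max(eligible, key=lambda name: targets[name]["free_tb"])

print(choose_target(5.0, need_fast=True))     # 'montreal-nas' with these numbers
print(choose_target(300.0, need_fast=False))  # 'azure-blob-ca' (only target that fits)
```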
