Like Water from the Tap: Allocating Storage with a Composable Data Fabric
I live in Colorado, where our water comes mostly from snowmelt. We see a lot of snow, which means water is plentiful. When I turn on the tap, I don’t think about where the water comes from or what pumping stations or pipes it passed through. It’s just there, on demand.
Delivering IT services would be much easier if IT could get the same kind of on-demand experience from storage resources. Unfortunately, data storage is a complex endeavor. It’s no longer just a choice of SAN or NAS from one of a few established vendors. Today’s data center runs a multitude of workloads, each with its own unique set of requirements, and vendors have sprung up with a multitude of hardware and software options to meet them. Storage teams face an almost paralyzing number of choices: structured vs. unstructured data solutions, hyper-converged vs. software-defined vs. external storage, bare metal vs. virtualized vs. containers, on-premises vs. hosted/cloud. The permutations are almost endless.
To meet these needs, your data center probably runs the gamut with a half dozen or more separate storage architectures, from DAS to internal software-defined storage to external flash arrays. Traditionally, this has meant siloed storage deployments, with each application stack owning its own set of resources and management tools. In a world where your business has to move fast to remain competitive, that’s an unaffordable level of complexity.
The alternative is the “composable data fabric.”
We’ve been using the term “fabric” for years in networking: multiple switches and host adapters are physically connected to create a consistent SAN or Ethernet fabric across the infrastructure. A data fabric is a similar concept: clusters of software-defined storage (SDS) deployed on x86 servers and federated together to provide a consistent layer of data services across otherwise disparate devices.
The data fabric layer also provides the data services needed by the broad set of applications that make your business run. It’s a universal software-defined storage resource that can span storage arrays, x86 servers, hyper-converged appliances, and composable infrastructure, and can be deployed on demand. A data fabric that provides a single storage stack consolidates structured and unstructured data in support of both traditional and cloud-native applications. It enables workload mobility through storage federation, and it is intelligently managed by a single infrastructure management tool with programmatic API access.
Ready for the future
The obvious benefits of eliminating storage silos are reduced cost and lower day-to-day operational complexity, but there is a deeper set of benefits as well. As a data fabric stitches together different strands of servers and storage, your data center becomes seamless, agile, and ready for the future.
Because a composable data fabric operates at the software level, it gives you agility and adaptability. Software can be updated easily and improved over time with new features and functionality. It adapts to any hardware and storage media: you can fold in flash, NVMe, or whatever technology comes next, while keeping the continuity of resource availability that keeps your business running as things change.
This also makes application data more mobile, independent of the underlying storage hardware. As applications evolve and data grows, workloads can be re-platformed without painful migrations. You can scale into the future without having to know what your future requirements will be.
A composable data fabric brings simplicity to your storage infrastructure, making storage like the water from your tap: there when you need it, for any application or workload you want to run.
Kate Davis is manager, software-defined storage technologies, at HPE.