
2019 Data Center Drivers: Kubernetes, Cloud, ML, File System Storage 

If 2019 is anything like the previous year, there will be a renewed push for modernization in the data center to alleviate management complexity, handle growing scalability requirements and achieve an improved ROI. We will see storage systems evolve to handle the rise of containerized environments, an uptick in software-defined storage solutions and a return to a tried-and-true approach that has been marginalized in recent years.

  1. 2019: the year of Kubernetes

Containers have become an integral part of today’s IT environments for their ability to reduce costs and complexity while enabling the rapid testing and deployment of applications. While containerized infrastructures are not new in and of themselves, Kubernetes is still considered the new kid on the block. Nevertheless, its ability to simplify the management of the complex IT infrastructure required to run business-critical applications makes it a rising star. 2019 will be the year we see broader adoption of Kubernetes in the enterprise.

To ensure the functionality of the Kubernetes platform, expect to see a rise in technologies supporting containerized workloads – from application-level services to data-processing frameworks to storage systems.
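One such building block is persistent storage that pods can claim on demand. As a minimal sketch, the following uses the official kubernetes Python client to request a shared volume; the claim name, namespace and storage class are hypothetical examples, and the storage class must map to whatever provisioner a given cluster actually runs.

    # Minimal sketch: requesting persistent storage for a containerized
    # workload with the official "kubernetes" Python client.
    # The claim name, storage class and namespace are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a pod

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="app-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteMany"],      # shared across many pods
            storage_class_name="fast-shared",    # hypothetical storage class
            resources=client.V1ResourceRequirements(
                requests={"storage": "100Gi"}
            ),
        ),
    )

    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc
    )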

  2. Latency and costs will inhibit workloads moving to the cloud

The cloud has been hailed as a panacea for what ails the data center, with companies large and small “moving” to the public cloud. While there are cases where a cloud-based approach can be beneficial, such as cold storage, it is simply too limited to be all things in all situations. Cloud economics may work for storing seldom-used data offsite, but does the pricing remain attractive when you have to get it back? And how long does it take to restore a substantial volume of information?
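To make those two questions concrete, here is a back-of-the-envelope calculation. The per-GiB egress price and the sustained bandwidth below are illustrative assumptions, not quotes from any provider.

    # Rough cost and time to restore a large archive from a public cloud.
    # Both the egress price and the link speed are assumed, not quoted.
    dataset_tib = 500                   # size of the archive to restore
    egress_per_gib_usd = 0.09           # assumed per-GiB egress price
    link_gbps = 10                      # assumed sustained bandwidth

    size_gib = dataset_tib * 1024
    cost_usd = size_gib * egress_per_gib_usd
    seconds = size_gib * 8 / link_gbps  # GiB -> gigabits, then seconds

    print(f"Restore cost: ${cost_usd:,.0f}")
    print(f"Restore time: {seconds / 3600:.1f} hours at {link_gbps} Gb/s")

Under these assumptions, pulling back 500 TiB costs roughly $46,000 and takes almost five days of sustained transfer, which is why retrieval, not ingest, tends to dominate the economics.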

For high-performance computing environments, such as in life sciences, media and entertainment, a cloud environment is simply too costly and too slow to be effective. Storing information in the cloud is one thing, but leveraging it as part of a mission-critical workload is not a set-and-forget operation. As enterprises continue to realize that their workloads are HPC in nature, they will recognize that they need a scalable, fault-tolerant, high-performance file system to support their commercial applications.

  3. Machine learning will gain ground in the move from analytics to AI

Knowing that your data has value and knowing how to extract that value are two entirely different things. As enterprises look to better leverage their business-critical information for increased profit opportunities, the need for solutions that can handle modern workloads will dramatically increase. Analytics has been popular with organizations looking to understand their data; AI and ML are critical for helping organizations do more with their data. Look for a rise in solutions that can satisfy the processing and speed requirements of these workloads as the volume of data continues to grow.

  4. Software-defined storage in Kubernetes, DevOps and private cloud

Likewise, other applications making their presence felt in the data center are changing the way IT approaches data storage. The aforementioned containerized environments, testing infrastructures and elastic storage require scalability, performance and efficient management at scale. Software-defined storage systems that deliver excellent performance on all workloads will see increased adoption in the upcoming year. SDS infrastructures that provide higher IOPS, linear scalability, zero downtime and the ability to change on the fly to suit shifting business needs will show a marked improvement over other software and hardware options deployed in traditional enterprise IT environments today.
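Vendor IOPS figures are easy to quote and hard to compare, so it is worth measuring on your own storage. The sketch below is a crude single-threaded random-read test; the file path is hypothetical, and because plain Python cannot easily bypass the page cache, treat the result as a rough upper bound rather than a rigorous benchmark.

    # Crude random-read IOPS check against a file on the storage under test.
    # Path, block size and duration are example choices; results include
    # page-cache effects, so this is an estimate, not a rigorous benchmark.
    import os
    import random
    import time

    PATH = "/mnt/sds/testfile"   # hypothetical file on the storage under test
    BLOCK = 4096                 # 4 KiB reads, a common IOPS block size
    DURATION = 10                # seconds to run

    fd = os.open(PATH, os.O_RDONLY)
    blocks = os.fstat(fd).st_size // BLOCK

    ops = 0
    deadline = time.time() + DURATION
    while time.time() < deadline:
        os.pread(fd, BLOCK, random.randrange(blocks) * BLOCK)
        ops += 1
    os.close(fd)

    print(f"~{ops / DURATION:,.0f} read IOPS (single thread)")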

  5. Smart turns into genius

These increasingly complex, varied infrastructures place high demands on IT administrators, but the good news is enterprise IT products get smarter all the time. 2019 will see more capable products, including more AI and more automation, that “learn” and adjust their operations based on their environment, and require less human intervention and hands-on management.

Nearly any element in an infrastructure can be controlled and managed with intelligence: move a file or workload based on traffic patterns, give a user access to a particular server, ensure legal or performance isolation of systems, shift data to a cloud, meet an SLA, fail over to another business location, and more. We are only seeing the beginning of how enterprise products, particularly software, can optimize themselves for performance, workload, governance, or other requirements.
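As a toy illustration of the first item on that list, the sketch below moves files that have gone unread for 90 days from a fast tier to a capacity tier. The paths and the idle threshold are invented for the example; real products apply policies like this transparently inside the storage layer.

    # Toy policy: demote files not read for 90 days to a cheaper tier.
    # The tier paths and the idle threshold are invented for this example.
    import os
    import shutil
    import time

    HOT_TIER = "/mnt/fast"       # hypothetical high-performance tier
    COLD_TIER = "/mnt/archive"   # hypothetical capacity tier
    MAX_IDLE = 90 * 24 * 3600    # 90 days, in seconds

    now = time.time()
    for root, _dirs, files in os.walk(HOT_TIER):
        for name in files:
            path = os.path.join(root, name)
            if now - os.stat(path).st_atime > MAX_IDLE:
                dest = os.path.join(COLD_TIER, os.path.relpath(path, HOT_TIER))
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                shutil.move(path, dest)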

  6. File systems will be rediscovered

Finally, what’s old becomes new again. The trend in recent years has been to deploy different storage types in an attempt to “optimize” each for its corresponding workload. As a result, enterprises were left with storage islands requiring dedicated resources and management. A file system is a more flexible and easier way to manage and use data than raw block or object storage. Within a single, distributed file system, enterprises can get high-performance storage for block workloads, service provider-grade object storage and ultra-low-latency handling of small-file workloads. This isn’t your father’s NAS but a next-generation approach that lets IT combine the speed and convenience of the rediscovered technology with the benefits of today’s modern approaches.
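Part of that convenience is the interface itself: with a file system, applications use ordinary POSIX I/O, while object storage requires a storage-specific API. A brief comparison, with boto3 standing in for a generic object store and all paths, bucket and key names made up:

    # With a (distributed) file system, applications use plain POSIX I/O;
    # any existing tool or program works unchanged. The path is made up.
    with open("/mnt/shared/results/run-42.csv", "w") as f:
        f.write("sample,value\n")

    # With object storage, the same write goes through a dedicated API
    # (boto3 here as a stand-in; bucket and key names are made up).
    import boto3

    s3 = boto3.client("s3")
    s3.put_object(Bucket="results", Key="run-42.csv", Body=b"sample,value\n")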

Björn Kolbeck is co-founder and CEO of Quobyte.
