So Long Single-Protocol Storage Network; Hello Intelligent Storage Infrastructure
IT departments everywhere are looking for new ways to improve the business impact of their storage networks, such as adopting cloud or hyper-converged architectures. Some industry watchers have interpreted this as marking the end of dedicated storage networks and of established protocols such as Fibre Channel, which remains common in most storage environments.
I say, “Not so fast!”
This is not a short-term revolution but a welcome evolution in storage networking, propelled by the increasing quantity and variety of application workloads generated by the digital economy. It will see multiple storage protocols traversing a single fabric. To support the organization efficiently today and in the future, these fabrics must embrace both old and new storage approaches and technologies. And they must be able to manage this mixture of protocols to optimize application performance without increasing complexity or cost, while providing a reliable, secure, agile and intelligent infrastructure.
Digital Adoption Creates Divergent Demands
The accelerated uptake of digital technologies has given rise to new digitally enabled business models and new routes to market, placing demands on the network that legacy technologies were not engineered to support.
The nature of IP compute networks means that while some convergence on the corporate shared LAN is possible, using them for critical storage traffic is inadvisable. Shared networks often become congested, and managing them can be complicated. Ensuring the right levels of protection and prioritization is challenging, especially during unexpected peaks in demand. This means a dedicated storage network – be it Fibre Channel or IP – still offers the best option.
Meanwhile, a significant increase in new applications that enable digital services has blurred the line between applications designated as “mission-critical” and those deemed “business-critical.” Applications such as email, collaboration, data analytics and the web are central to customer satisfaction, revenue and productivity, and are considered “mission-critical” by the business regardless of legacy technology perspectives. The specific needs of these applications, and the explosion in the volume of data they create, have placed new demands on the storage environment. In a world where a slow-loading webpage can create a meaningful barrier to new customers and revenue streams, the systems and support required by these applications must be reconsidered.
In an effort to create the “right” storage network to address this challenge, organizations have embraced offsite cloud services, hyper-convergence offerings, and a mix of storage network protocols. But this can increase complexity, make management and monitoring more time-consuming, and lead to higher spending.
It’s About Application Workloads, not Protocols
Debates on how to build an improved storage network often focus on which protocols are “best” for storage. But as anyone working in the storage environment knows, this is the wrong starting point. Every protocol has specific strengths and weaknesses that make it the “best” choice only for applications whose needs match. That is why IT has to consider how to deploy and support a variety of protocols while providing the agility, availability, security and speed demanded by digital operations.
Before settling on a specific approach, or creating new storage systems that may be quickly superseded by the demands of new applications or storage innovations, a critical starting point is to adopt a modern storage infrastructure.
A Dedicated Storage Network
The right storage network makes running multi-protocol, complex and data-intensive systems simpler, faster, lower cost and future-proof. Network resilience means more than eliminating downtime; it also means preventing performance slowdowns before they occur. Network security likewise needs to evolve to be built in, while raising the level of protection. And to deliver all of this, the infrastructure must be highly instrumented, more automated, adaptive and intelligent.
This is not an either/or choice between Fibre Channel and IP. The new business-critical plus mission-critical paradigm requires both, so the storage infrastructure selected must fully support both and make management, monitoring and maintenance simple and seamless across the two environments.
The only architecture and solution that can support this range of requirements is a purpose-built storage fabric. Fabrics are:
- Intelligent and automated for fast, simple agility and scale
- More efficient, requiring fewer devices to deliver better performance
- Highly interoperable and device-aware for almost autonomous integration
- Secure, offering hardware-based encryption within and between data centers
- Application-aware and self-healing to optimize application and services access, availability and performance
- Software-enabled for better utilization, ease-of-management, and lower cost of ownership
Fabrics can be implemented as any combination of Fibre Channel and IP, so they can support both mission-critical and business-critical application requirements simultaneously.
Their low-latency resilience and easy interconnectivity also provide the foundation for end-to-end network visibility and for analytics, monitoring and resolution tools. Fabrics further enable capabilities such as predictive and proactive management, ensuring service performance is protected and optimized in balance with other tasks and processes.
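To make the predictive-management idea concrete, here is a minimal sketch that watches per-port latency samples and flags ports trending toward congestion before applications feel it. The port names, sample values, threshold and data feed are hypothetical placeholders, not any vendor's telemetry API.

```python
# Minimal sketch of predictive fabric monitoring: flag ports whose latency is
# trending upward. All port names, values and thresholds are hypothetical.

from statistics import mean
from typing import Dict, List

# Hypothetical telemetry: rolling latency samples (microseconds) per port.
telemetry: Dict[str, List[float]] = {
    "port1/1": [12.0, 12.4, 12.1, 12.3, 12.2, 12.5],   # steady
    "port1/2": [14.0, 16.5, 19.2, 23.8, 29.6, 37.1],   # degrading
}

def flag_degrading_ports(samples: Dict[str, List[float]],
                         growth_threshold: float = 1.5) -> List[str]:
    """Return ports whose recent average latency has grown past the threshold."""
    flagged = []
    for port, values in samples.items():
        if len(values) < 4:
            continue  # not enough history to judge a trend
        half = len(values) // 2
        older, recent = mean(values[:half]), mean(values[half:])
        if older > 0 and recent / older >= growth_threshold:
            flagged.append(port)
    return flagged

if __name__ == "__main__":
    for port in flag_degrading_ports(telemetry):
        print(f"Warning: {port} latency is trending upward; "
              f"investigate before it affects applications")
```

In practice a fabric analytics tool would gather this telemetry automatically and act on it; the point of the sketch is simply that trend data, not just up/down status, is what enables proactive intervention.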
Future-ready with Gen 6 Fibre Channel Fabrics
When adopting fabrics you should consider:
- Gen 6 Fibre Channel for a flash-friendly, performance-rich network with management features for agility and visibility, and the robust reliability demanded by mission-critical systems. Latency can be reduced by up to 20 percent, while speeds of up to 128Gbps ensure flash-based storage workloads are fully supported (see the quick throughput sketch after this list).
- A storage networking platform engineered for the future, providing Gen 7 compatibility and support for the upcoming NVMe over Fabrics protocol, currently in development, which promises the lowest latency for large storage environments.
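As a rough illustration of what the 128Gbps figure above means for flash workloads, the back-of-the-envelope sketch below converts that nominal link speed into an approximate I/O rate. The I/O size is an assumption, and real-world throughput will be lower once encoding, framing and protocol overhead are accounted for.

```python
# Back-of-the-envelope calculation: flash-sized I/Os per second on a nominal
# 128Gbps link. Illustrative only; the I/O size is an assumed placeholder and
# real throughput is reduced by encoding and protocol overhead.

NOMINAL_LINK_GBPS = 128   # nominal Gen 6 parallel link speed quoted above
IO_SIZE_KIB = 8           # assumed typical flash I/O size

link_bytes_per_sec = NOMINAL_LINK_GBPS * 1e9 / 8   # bits/s -> bytes/s
io_bytes = IO_SIZE_KIB * 1024
theoretical_iops = link_bytes_per_sec / io_bytes   # ~1.95 million at these settings

print(f"~{theoretical_iops / 1e6:.2f} million {IO_SIZE_KIB} KiB IOPS "
      f"at a nominal {NOMINAL_LINK_GBPS} Gbps, before protocol overhead")
```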
You should also look to deploy secure end-to-end visibility and advanced network analytics. Improved network insight and analysis puts you in a better position to deliver reliable services and availability, automated network discovery and recovery, and advanced VM visibility, monitoring and diagnostics that make complex environments more manageable. Such functionality also reduces operational effort and expense while delivering the performance and management capabilities required.
Begin the Evolution to a More Intelligent Infrastructure
Organizations face a plethora of options as they look to increase the agility and scale of their networks. But they also need to mitigate the risk of adopting solutions that may quickly be made redundant as new technologies come to market.
Success in the digital economy will challenge everyone and reward the few who get it right. With this in mind, it is wise to start building a modern storage network today, so the organization can scale unhindered into the future.
AJ Casamento is global solutions architect at Brocade Communications Systems.