HPE’s Memory-Driven Vision Takes Tangible Form
![](https://www.aiwire.net/wp-content/uploads/2018/06/HPE-The-Machine-370x290.jpg)
Source: HPE
HPE’s ambitious quest to overturn the processor-centric architectural conventions that have governed classical computing for more than 60 years, with what the company calls Memory-Driven Computing (MDC), has taken further steps toward maturity.
HPE’s MDC research and development has been conducted by Hewlett Packard Labs under the rubric of “The Machine,” a multi-year effort announced in 2014 that extends back more than two decades in the thinking of the company’s technology strategists. Today, as MDC emerges in more tangible form, The Machine research project has expanded into several efforts to bring MDC to multiple markets.
In the run-up to next week’s ISC conference in Frankfurt, HPE issued project updates on The Machine at the HPE Discover conference in Las Vegas. Announcements include the first commercial customer – e-commerce platform Travelport – along with an “incubation” practice through the HPE Pointnext services group to help customers adopt MDC, and the MDC Sandbox, a development environment for experimentation and proofs-of-concept.
In fact, Memory-Driven Computing was the primary theme throughout Discover. HPE executives, starting with CEO Antonio Neri, openly declared that in the coming years HPE’s product portfolio will increasingly incorporate MDC technologies, with Neri going so far as to state that “there will come a time when you can run the entire enterprise in memory for real time analytics.”
Closely related to the MDC news was HPE’s announcement yesterday that it plans to invest $4 billion in “intelligent edge” technologies and services over the next four years. The aim is to help organizations turn their data – from the edge to the cloud – into intelligence that “drives seamless interactions between people and things, delivers personalized user experiences, and employs AI and machine learning to continuously adapt to changes in real time.”
This objective, said Steve Conway, COO and SVP of research at HPC industry analyst firm Hyperion Research, will require “very, very powerful, dense nodes” at the edge.
“This is a direction that we started talking about four to five years ago, the fact that HPC-like resources are going to be needed at the edge, or in the Internet of Things,” Conway told EnterpriseTech. “You can’t have all these devices and sensors producing massive amounts of data, and move all of that data. It really needs to be dealt with, for the most part, locally.”
Conway said MDC is a reaction against existing high performance systems whose architectures are heavily weighted toward compute while under-provisioned for data movement.
“They’re pushing back, and it’s because of this big data phenomenon,” he said. “It’s just too much to handle with those (existing) kinds of architectures, the architectures have gotten extremely compute-centric, and what HPE is talking about is kind of a return to a more balanced architecture, but extremely different from vector computers of the past, because in this brave new world it’s massively scaled out.”
At the Discover conference, HPE touted its Superdome Flex system, announced last November and adopted by Travelport, which has the capacity for 160 terabytes of memory on its memory fabric.
The cloud-based Memory-Driven Computing Sandbox will feature HPE Superdome Flex (see picture) with Software-Defined Scalable Memory, a new capability that lets the system compose memory across its fabric and scale the memory pool to 96 terabytes.
Why does MDC offer the potential for better, more efficient performance? According to HPE, MDC gives every processor in a system access to a giant shared pool of memory — a departure from conventional systems in which relatively small amounts of memory are tethered to each processor.
“The resulting inefficiencies limit performance,” HPE explained in a recent blog post. “For one processor to access data not held in its own memory, the computer must play an inefficient game of ‘Mother May I,’ so to speak. One processor must request access from another processor to get anything accomplished. What’s worse, the relationship between storage and memory is also inefficient. In fact, in today’s computers it’s estimated that 90 percent of work is devoted to moving information between tiers of memory and storage.”
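For a concrete, if simplified, picture of what a shared pool buys, the sketch below uses ordinary POSIX shared memory as a stand-in for fabric-attached memory: any process that maps the region can reach any byte of it with plain loads and stores, rather than requesting copies from whichever processor “owns” the data. This is an illustrative analogy only, not HPE’s API; the pool name and size are invented for the demo.

```c
/*
 * Minimal sketch (not HPE's API): a process maps one large shared region
 * and works on it with ordinary loads and stores. POSIX shared memory
 * stands in here for a fabric-attached memory pool; any other process
 * that maps the same object sees the same bytes without copying data.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define POOL_NAME "/mdc_demo_pool"          /* hypothetical name for the demo */
#define POOL_SIZE (64UL * 1024 * 1024)      /* 64 MiB stand-in for a big pool */

int main(void)
{
    /* Create (or open) the shared pool and size it. */
    int fd = shm_open(POOL_NAME, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, POOL_SIZE) != 0) { perror("ftruncate"); return 1; }

    /* Map it; from here on, access is ordinary loads and stores. */
    uint64_t *pool = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
    if (pool == MAP_FAILED) { perror("mmap"); return 1; }

    pool[0] += 1;   /* a store, visible to every process mapping the pool */
    printf("slot 0 has been touched %llu time(s)\n",
           (unsigned long long)pool[0]);

    /* Left in place so other processes can map the same pool;
     * clean up with shm_unlink(POOL_NAME) when finished. */
    munmap(pool, POOL_SIZE);
    close(fd);
    return 0;
}
```

Compiled with gcc (adding -lrt on older glibc) and run a few times, each run sees the count left by the previous one, because the data stays put in the shared pool. Fabric-attached memory extends the same load/store model across many processors and memory types.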
Another key to MDC is the interconnect fabric, the communication vehicle for data transfer between the different elements of the system. As explained in HPE’s blog, today’s computer components are connected using different types of interconnects: memory via DDR, hard drives via SATA, flash drives and graphics processing units via PCIe, and so on.
HPE, as a charter member of the Gen-Z Consortium, is helping to build an open systems interconnect with “memory-semantic” access to data and devices. “Every component is connected using the same high-performance interconnect protocol. This is a much simpler and more flexible way to build a computer. One key reason it’s faster is that data is accessed one byte at a time using the same simple commands used to access memory: just ‘load’ and ‘store.’ This eliminates (moving) many large blocks of data around and is much more efficient. The fabric is what ties physical packages of memory together to form the vast pool of memory at the heart of (MDC).”
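The “load and store” point is easy to see in code. The toy program below (an illustration under invented assumptions, not Gen-Z software; the file path and sizes are made up) fetches a single byte two ways: first by pulling an entire 4 KB block through the block-I/O path, then by mapping the data and issuing a one-byte load.

```c
/*
 * Illustrative sketch only: contrasts block-oriented access (pread of a
 * whole 4 KB block) with memory-semantic access (a single-byte load from
 * a mapped address). An ordinary file stands in for fabric-attached
 * memory; no Gen-Z hardware is modeled here.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char  *path   = "/tmp/memory_semantic_demo"; /* demo file, not an HPE path */
    const size_t size   = 1 << 20;                     /* 1 MiB of data */
    const off_t  offset = 123456;                      /* the one byte we care about */

    /* Set up a data file to play the role of a large memory/storage tier. */
    int fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0600);
    if (fd < 0) { perror("open"); return 1; }
    char *buf = calloc(1, size);
    if (!buf) return 1;
    buf[offset] = 'Z';
    if (write(fd, buf, size) != (ssize_t)size) { perror("write"); return 1; }
    free(buf);

    /* Block-style access: fetch the whole 4 KB block containing the byte. */
    char block[4096];
    pread(fd, block, sizeof block, offset & ~(off_t)4095);
    printf("block path -> '%c' (moved %zu bytes)\n",
           block[offset & 4095], sizeof block);

    /* Memory-semantic access: map once, then just load the byte. */
    char *mem = mmap(NULL, size, PROT_READ, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }
    printf("load path  -> '%c' (one load instruction)\n", mem[offset]);

    munmap(mem, size);
    close(fd);
    unlink(path);
    return 0;
}
```

Both paths retrieve the same byte; the difference is how much data has to move to get it, which is precisely the overhead a uniform memory-semantic fabric is meant to remove.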
Kirk Bresniker, VP, HPE Fellow and chief MDC architect at Hewlett Packard Labs, told EnterpriseTech last week, “What we’ve been driving toward in the GenZ industry consortium is a way… to connect up many heterogeneous amounts of computations, not just general purpose x86 but task-specific accelerators, on a memory fabric. And various types of memory – storage class memory, long term storage memory, high performance-high bandwidth memory – all uniformly available on a memory fabric.”
The Superdome Flex platform used by Travelport, Bresniker said, “gives us that great memory fabric, that great low latency coherent memory fabric. And then what we’ve done with the software-defined coherent memory is the ability to still have that fabric, and then to turn off that coherency that limits the scalability. So we can transcend the amount of memory that any one processor can address, but be able to have all that large pool of memory sources available on the fabric, so that they can directly load and store more memory… So it takes what is a scalability limit and removes it.”
In addition, HPE’s MDC utilizes photonics – i.e., light – to transmit data between compute components, rather than sending electrons through copper wire. “Using microscopic lasers, we can funnel hundreds of times more data down an optic fiber ribbon using much less energy,” the blog stated.
In April, Travelport, a $2.5 billion company based in London, installed a Superdome Flex system to support its commerce platform that provides distribution, technology, payment and other solutions for the global travel industry.
Travelport’s Matt Minetola, EVP of technology and global CIO, said speed and accuracy are critical success factors in travel search. The volume of shopping requests doubles every 18 months and will reach an average of three shopping requests per month for every person on the planet by 2020, Minetola said, which translates into moving more than 125 terabytes of data in and out of Travelport’s data centers each day.
“We have about five seconds to get the attention of a consumer. We need to know who they are, where they want to go, how they want to do it and we need to capture all of that information and deliver it to them in the palm of their hands,” he said.
Currently, Travelport handles about 5,000 transactions per second, and it has set a goal of reducing response times to one second. The company is working with HPE’s Pointnext services group to rearchitect Travelport’s algorithms using MDC programming techniques and to identify a performance baseline for infrastructure upgrades.
Travelport’s need for nearly instant turnaround of answers from massive amounts of data is an apt early test of HPE’s MDC strategy.
“I think most of the vendors are paying a lot more attention to data-intensive computing than they were a few years ago,” Conway said. “But what we’re talking about here is a vision that’s going to be realized fully over time, and I think the vision is very articulate here. As always, we’ll have to see how it plays out in reality. But as far as painting a picture of the future that’s fairly detailed and bold, I think HPE is at the forefront.”