EnterpriseHPC Summit: How and Why HPC Is Burgeoning in the Commercial Sphere
The annual EnterpriseHPC Summit, produced by EnterpriseTech and HPCwire and held in San Diego this week, featured presentations and participation from some of the major thought leaders at the forefront of bringing advanced scale computing into commercial environments. With delegates from leading vendors (Dell, Intel, DDN and EMC, among others) and from end user organizations such as Gulfstream, GE, PayPal and John Deere, the growth of the conference reflects the quickening migration of HPC into the broader enterprise.
Beyond the presentations and panel discussions, much of the value of the conference, held at the Paradise Point Resort & Spa in San Diego, lies in the opportunities for attendees to meet, exchange ideas, and compare notes with peers actively engaged in adopting advanced scale technology.
Here’s a summary of the conference:
The proceedings kicked off with a keynote address from Lynn DeRose, principal investigator, systems engineering, GE Global Research, who offered an interesting perspective on IoT as well as on recruiting for the technology industry. Her comments reflected the flavor of GE’s new corporate advertising campaign, which presents GE as a cutting-edge tech company that hires young, ambitious people. DeRose described GE’s recruiting effort at SXSW (South by Southwest), the annual set of film, interactive media, and music festivals and conferences in Austin, where GE offered barbecue to hundreds of attendees. As part of this effort, GE built a 15-foot-high meat smoker controlled with sensors, demonstrating GE’s “smart machines” capabilities. The demonstration underscored GE’s adoption of IoT to measure, model, and analyze its own factory assets, such as large-scale turbines and jet engines, in a company-wide initiative called The Brilliant Factory.
Molly Rector, chief marketing officer and EVP of product management at DDN Storage, discussed the rapid evolution of HPC from what was, a decade ago, a set of highly customized technologies usable only by experts and computer scientists at government labs and research facilities. Today, Rector said, HPC has permeated the fabric of daily life in a broad range of consumer applications, along with the enterprise, in most internet-enabled environments globally. She also discussed DDN’s broadening focus to include both the traditional HPC and enterprise markets, and the need to focus on ease of use, ease of adoption and pre-engineered, converged solutions that include compute, fabric, interconnect and I/O. Rector also discussed how intelligent storage tiering, from cache to NVMe, SSD, spinning disk, and tape, will require a highly innovative software model to accelerate both legacy and newly developed applications, such as Hadoop and OpenStack.
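Rector’s tiering point can be illustrated with a minimal sketch. The tier names, activity thresholds, and placement policy below are illustrative assumptions, not a description of DDN’s software:

```python
# Minimal sketch of policy-based storage tiering: place each object on
# the fastest tier whose activity threshold it meets.
# Tier names and thresholds are hypothetical, not DDN's actual policy.

TIERS = [
    ("nvme",          1000),  # accesses/day needed to earn this tier
    ("ssd",            100),
    ("spinning_disk",    1),
    ("tape",             0),  # archival catch-all
]

def place(accesses_per_day: int) -> str:
    """Return the fastest tier whose threshold the object meets."""
    for tier, threshold in TIERS:
        if accesses_per_day >= threshold:
            return tier
    return "tape"

print(place(5000))  # -> nvme
print(place(3))     # -> spinning_disk
```

A production tiering engine would also weigh migration cost, capacity pressure and QoS, but the policy-table shape is the essential idea.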
Larry Patterson, senior manager, high performance computing, at Gulfstream Aerospace, gave the company’s first public presentation in 25 years on its R&D efforts. Patterson explored the HPC deployment techniques Gulfstream uses to solve challenges facing its different departments, examined the challenges organizations face as they expand into this next frontier of computing, and underscored the ROI that can come from building new supercomputing technology platforms for one of the world’s leading business jet manufacturers.
Dell’s Chief HPC Technology Strategist, Jay Boisseau, spoke about the growing scope and broad adoption of HPC, noting that 15 years ago Dell generated no HPC-related revenue from the enterprise, whereas today the enterprise accounts for about half of the company’s HPC revenue and will soon eclipse the traditional HPC market.
Boisseau said this has brought on a major shift in the way the company delivers systems: for traditional HPC customers, systems were built from the ground up, starting with a server or other base technology, for a given use case. Today, for the enterprise, that approach is not feasible because of the lack of computer science expertise at most businesses. The result is a top-down approach, with systems optimized in advance for the customer’s vertical industry. He also noted the changing processor landscape, with newer entrants (ARM, IBM POWER, etc.) challenging Intel’s x86 dominance, while also stating that demand for machine learning solutions is skyrocketing, that cloud computing is growing in importance and that OpenStack is steadily gaining market traction.
Next came a panel discussion on advanced scale technologies that accelerate business operations and analytics, which also covered some of the challenges of introducing advanced scale computing within the enterprise. Arno Kolster, senior database analyst at PayPal, discussed the company’s strategy in implementing HPC to combat fraud, noting that its system processes 3 million events per second and must detect anomalies in real time.
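To make the real-time requirement concrete, here is a minimal sketch of streaming anomaly detection using a rolling z-score. The window size and threshold are illustrative assumptions; PayPal’s actual fraud models are not public and are certainly far more sophisticated:

```python
from collections import deque
from statistics import mean, stdev
import random

# Minimal sketch: flag events that sit far outside the rolling baseline.
# WINDOW and THRESHOLD are hypothetical tuning knobs, not PayPal's values.
WINDOW, THRESHOLD = 1000, 4.0
recent = deque(maxlen=WINDOW)

def is_anomalous(value: float) -> bool:
    """Return True if value is a >THRESHOLD-sigma outlier vs. recent history."""
    anomalous = False
    if len(recent) >= 30:  # wait for a minimally stable baseline
        mu, sigma = mean(recent), stdev(recent)
        anomalous = sigma > 0 and abs(value - mu) / sigma > THRESHOLD
    recent.append(value)
    return anomalous

random.seed(0)
for v in (random.gauss(100, 5) for _ in range(500)):  # normal traffic
    is_anomalous(v)
print(is_anomalous(101.0), is_anomalous(250.0))  # False True
```

At 3 million events per second, the arithmetic itself is trivial; the engineering challenge Kolster described lies in moving that volume of data through the pipeline with consistently low latency.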
Ari Berman, vice president, general manager of consulting at BioTeam, a technology consulting company for the life sciences industry, said HPC-driven deep modeling and development represents a paradigm change in the pharmaceutical industry, an important advance over previous work in the area of mathematical biology. He also said that the wide diversity of data formats involved in life sciences poses a major bottleneck. Berman added that, unlike in other industry verticals, there are few finished, production-model systems in life sciences, because the science changes so quickly.
This sparked discussion among other panelists, including Kolster, who noted that PayPal requires “five nines” reliability, as did Jamal Uddin, senior HPC administrator at Dana Holding Corporation, a manufacturer of powertrain components. Uddin noted that HPC is both necessary and ubiquitous in his industry and that Dana Holding runs a three-year HPC refresh cycle that typically doubles the company’s compute capacity.
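For context, “five nines” means 99.999 percent availability, a downtime budget that quick arithmetic makes concrete:

```python
# "Five nines" availability leaves about five minutes of downtime a year.
minutes_per_year = 365.25 * 24 * 60          # ~525,960 minutes
downtime = minutes_per_year * (1 - 0.99999)  # unavailable fraction
print(f"{downtime:.2f} minutes of downtime per year")  # ~5.26
```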
John Deere’s Manager, Advanced Materials and Mechanics, Mohamad El-Zein, delivered an interesting and pertinent talk that challenged conventional HPC thinking and practices, which generally focus on adding more compute power. He said that adding more compute resources can be counterproductive unless the resources are suited to the applications being run. A common problem, he said, is that many commercial CFD and FEA applications do not scale beyond a few hundred cores, so adding more cores simply increases cost without improving performance.
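El-Zein’s observation is essentially Amdahl’s law: once some fraction of a solver’s work is serial or communication-bound, speedup flattens no matter how many cores are added. A quick sketch, using an assumed serial fraction rather than a measured figure for any particular CFD or FEA code:

```python
# Amdahl's law: speedup on n cores when a fraction s of the work is serial.
def speedup(n: int, s: float) -> float:
    return 1 / (s + (1 - s) / n)

for n in (64, 256, 1024, 4096):
    print(n, round(speedup(n, s=0.005), 1))
# 64 -> 48.7, 256 -> 112.5, 1024 -> 167.5, 4096 -> 190.7
# Even with only 0.5% serial work, going from 1,024 to 4,096 cores
# (4x the cost) yields less than a 14% gain in speedup.
```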
A panel discussion hosted by HPCwire Editor John Russell looked at on-ramping HPC, focusing on paths – and barriers – to adopting advanced scale solutions. Gaétan Didier, head of computational fluid dynamics at the Sahara Force India Formula One Team, emphasized that for his organization, HPC is not an option, it’s a “must have.” He said HPC is so widely used in Formula One that limits are placed on the compute power the various teams are allowed to use, to maintain competitive balance.
Fred Streitz, director of the HPC Innovation Center at Lawrence Livermore National Laboratory, said there can be a disconnect between the HPC technology available to end users and their actual needs. The key to successful adoption of HPC, he said, is a thorough knowledge of the job and the workload requirements, and a strong sense of what users are trying to achieve. He also said that in the world at large there is not enough HPC awareness, and that wider sectors of the scientific and business worlds still need to be educated about the technology.
Anthony Galassi, deputy division chief at the National Geospatial-Intelligence Agency, discussed his experience implementing cybersecurity and Amazon Web Services, how Linux evolved into a widely used, robust system, and whether OpenHPC will follow the same model.
Finally, Rob Farber, CEO of TechEnablement, a consulting and training firm that provides technology education, planning, analysis and code tutorial services, delivered a presentation on the major trends driving HPC development, including the competition between the IBM-led OpenPOWER platform, which combines technologies from multiple architectures and vendors, and Intel’s integrated, everything-in-silicon approach. He also noted that a major demand driver for exascale computing is virtual reality, which will require a 7X increase in compute throughput.