Covering Scientific & Technical AI | Friday, December 27, 2024

Cerebras Systems CS-2 Selected by TotalEnergies for Multi-Energy Research 

SUNNYVALE, Calif., March 2, 2022 — Cerebras Systems, the pioneer in high performance artificial intelligence (AI) compute, today announced that TotalEnergies Research & Technology USA has selected the world’s fastest AI computer, the Cerebras CS-2 system, to accelerate its multi-energy research. This continues the rapid adoption of Cerebras by leading enterprises around the world and is the first publicly announced deployment of the Cerebras CS-2 in the energy sector.

Cerebras CS-2 Wafer-Scale Engine

“TotalEnergies’ roadmap is crystal clear: more energy, less emissions. To achieve this, we need to combine our strengths with those who enable us to go faster, higher, and … stronger,” quipped Dr. Vincent Saubestre, CEO and President, TotalEnergies Research & Technology USA, with a thinly veiled reference to the Olympic motto Citius, Altius, Fortius. “Cerebras Systems offers one of the highest performance AI accelerators. We count on the CS-2 system to boost our multi-energy research and give our research ‘athletes’ that extra competitive advantage.”

Thanks to the CS-2’s leading AI compute, modeling and advanced analytics will enable fast and accurate simulations across a wide range of challenges tackled by TotalEnergies: from batteries and biofuels to wind flows, drilling, and CO2 storage.

“We are thrilled to partner with TotalEnergies and bring our industry-leading AI performance to the multi-energy market,” said Andrew Feldman, CEO and co-founder of Cerebras Systems. “The energy sector has a long history of leading the way in using compute to generate insight. AI and AI’s integration with simulation can accelerate TotalEnergies’ mission to deliver affordable, cleaner, and more reliable access to energy. We are proud to participate in this important endeavor.”

Predictive modeling requires massive computing resources and high bandwidth data communication. Using traditional general-purpose hardware for this work typically requires large clusters of GPUs or CPUs and frequent data movement between individual processors. Limited chip-to-chip bandwidth causes a communications bottleneck, which slows down the modeling workload and delays time to insight.

This challenge can be remedied with a single Cerebras CS-2 system, the fastest AI computer in existence. A single CS-2 delivers not only cluster-scale computing power, but communication and memory bandwidth orders of magnitude greater than traditional clusters. This translates into extraordinary performance on workloads like predictive modeling that are central to efficient energy development and production.

In recent work with TotalEnergies, Cerebras demonstrated a more than 100x improvement on a finite difference benchmark for seismic modeling versus traditional architectures. TotalEnergies and Cerebras engineers wrote the benchmark code using the new Cerebras Software Language (CSL). CSL is part of the Cerebras SDK, which allows developers to take advantage of the strengths of the CS-2 system.
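To give a sense of the workload class behind that benchmark: finite difference seismic modeling repeatedly applies a local stencil to a grid, which is exactly the communication-heavy, memory-bandwidth-bound pattern described above. The actual benchmark was written in CSL and is not public; the sketch below is only an illustrative NumPy version of a second-order finite-difference time step for the 2D acoustic wave equation, with hypothetical grid sizes and coefficients chosen for the demo.

```python
import numpy as np

def step_wave_2d(u_prev, u_curr, c, dt, dx):
    """One explicit time step of the 2D acoustic wave equation
    using a second-order central finite-difference stencil.
    Illustrative sketch only -- not the Cerebras/TotalEnergies CSL code."""
    lap = np.zeros_like(u_curr)
    # Five-point Laplacian on interior points (edges held at zero).
    lap[1:-1, 1:-1] = (
        u_curr[2:, 1:-1] + u_curr[:-2, 1:-1]
        + u_curr[1:-1, 2:] + u_curr[1:-1, :-2]
        - 4.0 * u_curr[1:-1, 1:-1]
    ) / dx**2
    # Leapfrog update: u_next = 2*u_curr - u_prev + (c*dt)^2 * Laplacian(u_curr).
    return 2.0 * u_curr - u_prev + (c * dt) ** 2 * lap

# Tiny demo: a point disturbance spreading across a 64x64 grid.
# dt is chosen to satisfy the 2D CFL stability condition c*dt/dx <= 1/sqrt(2).
n, dx, dt, c = 64, 1.0, 0.2, 1.0
u_prev = np.zeros((n, n))
u_curr = np.zeros((n, n))
u_curr[n // 2, n // 2] = 1.0
for _ in range(50):
    u_prev, u_curr = u_curr, step_wave_2d(u_prev, u_curr, c, dt, dx)
```

Every grid point reads its four neighbors each step, so on a cluster the halo exchange between processors dominates; on a single wafer-scale chip those neighbor reads stay on-die, which is the advantage the benchmark exercised.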

With customers and partners in North America, Asia, Europe and the Middle East, Cerebras is delivering industry leading AI solutions to a growing roster of customers in the enterprise, military, and high performance computing segments, including Argonne National Laboratory, Lawrence Livermore National Laboratory, Pittsburgh Supercomputing Center, EPCC, Tokyo Electron Devices, and GlaxoSmithKline.

For more information about the Cerebras CS-2 system and its application in energy, please visit https://cerebras.net/industries/energy.

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to build a new class of computer system, designed for the singular purpose of accelerating AI and changing the future of AI work forever. Our flagship product, the CS-2 system, is powered by the world’s largest processor, the 850,000-core Cerebras WSE-2, and enables customers to accelerate their deep learning work by orders of magnitude over general-purpose compute.


Source: Cerebras Systems

AIwire