Accelerating and Reconfiguring Data Centers with eFPGA
Over the last several years, the focus in the data center has been the massive build-out of new infrastructure and the upgrading of older facilities with newer technologies. While this has driven significant increases in performance and available bandwidth, it has also introduced a new problem. Data center operators find themselves in the hot seat whenever a new protocol or standard emerges, because supporting it typically requires replacing every chip in their facility. While that was feasible in the past, it is no longer an option as the industry moves to more advanced process nodes, where a mask set costs several million dollars or more.
As a result, data center operators are torn. They need products built on advanced process nodes because that is the only way to achieve lower power, smaller form factors, higher density and higher performance. However, they can't afford to replace those products every time a new standard or protocol emerges.
That is where embedded FPGA (eFPGA) comes in. eFPGA allows chips to be reconfigured and reprogrammed after they have been deployed in the data center, and that same reconfigurability lets a single chip accelerate more than one task as workloads change, or address different customers' needs for their specific applications. In fact, our research shows that eFPGA can accelerate processor performance by 40-100X.
eFPGA vs. Traditional FPGA
Contrary to what most people believe, eFPGA is not like standard FPGAs, such as those offered by Xilinx and others. Surprisingly, embedded FPGAs don’t compete with FPGA chips. FPGA chips are used for rapid prototyping and lower-volume products that can’t justify the increasing cost of ASIC development. When systems with FPGAs hit high volume, FPGAs are generally converted to ASICs for cost-reduction.
While traditional FPGAs have brought a new level of programmability to the data center, their performance has been bottlenecked by the PCIe bus. In contrast, eFPGA, which is essentially a complete FPGA integrated directly onto the chip, removes that bottleneck. When integrated into an ASIC, data transfers can use buses as wide as needed, running at full on-chip clock rates.
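To put the PCIe bottleneck in perspective, the short C sketch below compares rough bandwidth figures; the PCIe generation, lane count, overhead factor, bus width and clock rate are illustrative assumptions, not measurements of any specific device.

```c
/* Rough bandwidth comparison: PCIe-attached FPGA vs. an on-chip bus
 * feeding an eFPGA block. All parameters below are illustrative
 * assumptions, not figures from any particular product. */
#include <stdio.h>

int main(void)
{
    /* PCIe Gen3 x8: 8 GT/s per lane, 128b/130b encoding, 8 lanes. */
    double pcie_gbps = 8.0 * (128.0 / 130.0) * 8;   /* ~63 Gb/s raw */
    double pcie_effective = pcie_gbps * 0.80;        /* assume ~80% left after protocol overhead */

    /* On-chip bus into the eFPGA: e.g. 512 bits wide at a 1 GHz core clock. */
    double onchip_gbps = 512.0 * 1.0;                /* 512 Gb/s */

    printf("PCIe Gen3 x8 (effective): ~%.0f Gb/s\n", pcie_effective);
    printf("512-bit on-chip bus @ 1 GHz: ~%.0f Gb/s\n", onchip_gbps);
    printf("Ratio: ~%.0fx\n", onchip_gbps / pcie_effective);
    return 0;
}
```

Under these assumptions the on-chip path offers roughly an order of magnitude more bandwidth, and it avoids the latency of crossing the PCIe link entirely.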
Currently, there are a number of trends driving the proliferation of eFPGA into the data center:
- Mask costs are increasing rapidly: ~$1M for 40nm, ~$2M for 28nm, ~$4M for 16nm.
- The size of the design teams required for advanced nodes is increasing. Fewer chips are being designed, yet each one is expected to cover the same range of functions as before.
- Standards are constantly changing
- Data centers require programmable protocols
- AI and machine learning algorithms are evolving rapidly
Winning Combination
The combination of reconfigurability and acceleration is a game changer for data centers, and companies clearly recognize this opportunity. In the last two years, a handful of eFPGA players have come into the market. That recognition is also likely why Intel paid $16.7 billion for Altera. Through that acquisition, Intel can first integrate Xeon processors and FPGA chips in multi-chip packages and then eventually integrate the FPGA onto the same die as the CPU.
Embedded FPGA is now available, with silicon proven in multiple popular process nodes. The technology is quickly finding a home in microcontrollers, IoT, deep learning, SoCs/ASICs and wireless base stations, as well as in networking and data centers. Below are just a few of the volume applications expected to emerge in the next few years:
- SmartNICs (a la Microsoft)
- Reconfigurable switches with programmable packet parsers (a sketch of the idea follows this list)
- Deep learning
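To make the "programmable packet parser" item concrete, here is a minimal sketch of the concept in C. In an eFPGA-based switch the equivalent match/extract rules would be implemented in reconfigurable fabric and updated by loading a new bitstream rather than editing a table; the rule set below is purely illustrative.

```c
/* Minimal sketch of a programmable packet parser. "Reprogramming" the
 * parser means replacing the rule table (or, in eFPGA, the logic derived
 * from it) -- no new silicon required. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

struct parse_rule {
    uint16_t ethertype;   /* value to match in the Ethernet header */
    size_t   header_len;  /* bytes to skip to reach the next header */
    const char *name;
};

static const struct parse_rule rules[] = {
    { 0x0800, 20, "IPv4" },
    { 0x86DD, 40, "IPv6" },
    { 0x8847,  4, "MPLS" },
};

static const struct parse_rule *parse_ethertype(const uint8_t *frame)
{
    uint16_t et = (uint16_t)(frame[12] << 8) | frame[13];   /* EtherType field */
    for (size_t i = 0; i < sizeof(rules) / sizeof(rules[0]); i++)
        if (rules[i].ethertype == et)
            return &rules[i];
    return NULL;                                             /* unknown protocol */
}

int main(void)
{
    uint8_t frame[64] = {0};
    frame[12] = 0x08; frame[13] = 0x00;                      /* pretend IPv4 frame */
    const struct parse_rule *r = parse_ethertype(frame);
    printf("parsed: %s\n", r ? r->name : "unknown");
    return 0;
}
```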
2017 marked the beginning of eFPGA's takeoff, with a slew of companies announcing plans to use it. And for every company that announced, there were many more that stayed quiet for fear of tipping off the competition. Harvard University announced it is integrating eFPGA into its 16nm deep learning chip, which is now in fabrication. The reason is that deep learning algorithms are evolving rapidly; with embedded FPGA, the algorithms can evolve in real time, not once a year when a new chip tapes out. Other organizations that announced plans to use eFPGA include Sandia National Labs, SiFive and DARPA.
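As a rough illustration of what "evolving in real time" could look like from the host side, the sketch below pushes a new configuration bitstream into an eFPGA region of a deployed chip. The file name and the efpga_load_bitstream() helper are hypothetical; actual configuration interfaces are vendor-specific.

```c
/* Hypothetical host-side flow for updating an eFPGA-resident deep learning
 * kernel in the field. The helper below is a stand-in for a vendor
 * configuration call, invented purely for illustration. */
#include <stdio.h>
#include <stdlib.h>

static int efpga_load_bitstream(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    /* ... stream the file into the eFPGA configuration port here ... */
    fclose(f);
    return 0;
}

int main(void)
{
    /* An updated kernel ships as a new bitstream, not a new chip. */
    if (efpga_load_bitstream("conv_kernel_v2.bit") != 0) {
        perror("efpga_load_bitstream");
        return EXIT_FAILURE;
    }
    puts("eFPGA region reconfigured; new algorithm active");
    return 0;
}
```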
Conclusion
Using eFPGA, data center operators can now future-proof their offerings by ensuring the chips they deploy can be reconfigured or reprogrammed over time. They can also use the same technology to increase performance as needed by offloading tasks from the processor onto the embedded FPGA. This provides much-needed flexibility, something data centers desperately need to keep increasing bandwidth while keeping costs in check.
Geoff Tate is co-founder and CEO of Flex Logix Technologies.