
Nvidia Claims 6000x Speed-Up for Stock Trading Backtest Benchmark 

A stock trading backtesting algorithm used by hedge funds to simulate trading variants has received a massive, GPU-based performance boost, according to Nvidia, which has announced a 6,250x acceleration to the STAC-A3 “parameter sweep” benchmark.

Using an Nvidia DGX-2 system to run accelerated Python libraries, Nvidia said that in one case the system ran 20 million STAC-A3 simulations on a basket of 50 financial instruments in 60 minutes, breaking the previous record of 3,200 simulations over the same period (20 million against 3,200 is the source of the 6,250x figure).

The results have been validated by the Securities Technology Analysis Center (STAC), whose international membership includes more than 390 banks, hedge funds and financial services technology companies. In a pre-announcement press briefing, STAC Director Peter Lankford said that in an exercise using 48 instruments, increasing the number of simulations from 1,000 to 10,000 only added 346 milliseconds, “suggesting that a quant can significantly expand the parameter space without significant cost using this platform.”

“The ability to run many simulations on a given set of historical data is often important to trading and investment firms,” said Michel Debiche, a former Wall Street quantitative analyst who is now STAC’s director of analytics research. “Exploring more combinations of parameters in an algorithm can lead to more optimized models and thus more profitable strategies.”
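STAC does not publish the A3 strategy logic, but the shape of a parameter sweep is straightforward to illustrate. The following sketch is a hypothetical stand-in, a toy moving-average crossover scored over synthetic prices in plain NumPy; the strategy and every name in it are invented, and the point is only that each parameter combination is an independent simulation, which is what makes the workload so parallelizable:

    # Illustrative only: a toy parameter sweep, not the STAC-A3 workload.
    # The strategy (moving-average crossover) and the prices are invented.
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 2500)))  # synthetic price path

    def backtest(prices, fast, slow):
        """Total log-return P&L of a long-only moving-average crossover."""
        fast_ma = np.convolve(prices, np.ones(fast) / fast, mode="valid")
        slow_ma = np.convolve(prices, np.ones(slow) / slow, mode="valid")
        n = min(len(fast_ma), len(slow_ma))
        signal = (fast_ma[-n:] > slow_ma[-n:]).astype(float)  # 1 = long, 0 = flat
        returns = np.diff(np.log(prices[-n:]))
        return float(np.sum(signal[:-1] * returns))            # act on prior bar's signal

    # The sweep itself: every (fast, slow) pair is an independent simulation,
    # so the whole grid can be scored in parallel, which is where GPUs come in.
    grid = [(f, s) for f, s in itertools.product(range(5, 50, 5), range(20, 200, 20)) if f < s]
    results = {(f, s): backtest(prices, f, s) for (f, s) in grid}
    best = max(results, key=results.get)
    print(f"best (fast, slow) = {best}, P&L = {results[best]:.4f}")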

Financial trading algorithms make up about 90 percent of public trading today, according to the Global Algorithmic Trading Market 2016–2020 report, and quants now control about a third of all trading on the U.S. stock markets, according to the Wall Street Journal.

“The workload in this case is a big data and big compute kind of workload,” Lankford said. “…a great deal of the trading…these days is automated, using robots, that’s true on the trading side and increasingly so on the investment side. A consequence of that competition is that there is a lot of pressure on firms to come up with clever algorithms for those robots, and the half-life of a given trading strategy gets shorter all the time. So a firm will come out with a strategy and make money with it for a while, and then the rest of the market catches on or counteracts it, and the firm has to go back to the drawing board. So this is about the drawing board.”

Beyond the throughput power of its GPUs, Nvidia attributed the benchmark record to advances in its software, specifically around Python, that reduce the complexity of GPU programming. The results were achieved with 16 Nvidia V100 GPUs in a DGX-2 system (alongside Intel Xeon processors and NVMe-based SSD storage), running Python with Nvidia CUDA-X AI and Nvidia RAPIDS, software libraries designed to simplify GPU acceleration of common Python data science tasks. Also in the stack: Numba, an open-source compiler that translates a subset of Python into machine code, letting data scientists write Python that is compiled to the GPU’s native CUDA and extending the capabilities of RAPIDS, according to Nvidia.
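Nvidia’s benchmark code is not public, but the Python-to-GPU path described above can be sketched. The kernel below is a made-up example, a per-thread profit-and-loss loop over an invented threshold rule, of the kind of plain Python that Numba compiles to native CUDA; it needs a CUDA-capable GPU to run:

    # A made-up example of Numba compiling plain Python to a CUDA kernel;
    # the threshold strategy is invented and is not part of STAC-A3.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def sweep_kernel(prices, thresholds, pnl):
        i = cuda.grid(1)                   # one GPU thread per parameter value
        if i < thresholds.size:
            total = 0.0
            for t in range(prices.size - 1):
                if prices[t] < thresholds[i]:         # toy rule: long while below threshold
                    total += prices[t + 1] - prices[t]
            pnl[i] = total

    prices = 100.0 + np.cumsum(np.random.default_rng(1).normal(0, 1, 10_000))
    thresholds = np.linspace(80.0, 120.0, 4096)       # 4,096 simulations in one launch
    pnl = np.zeros_like(thresholds)

    threads = 256
    blocks = (thresholds.size + threads - 1) // threads
    sweep_kernel[blocks, threads](prices, thresholds, pnl)  # Numba handles host/GPU copies
    print("best threshold:", thresholds[np.argmax(pnl)])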

John Ashley, Nvidia’s director of global financial services strategy, said that while Nvidia has worked for several years with hedge funds on backtesting simulation in C/C++, the move to Python on the DGX-2 lets the company apply “our flagship deep learning server optimized for deep learning training, optimized for this kind of hyper-parameter tuning.”

“The key point is we’re able to do this in Python,” said Ashley. “We could have done this at almost any time with CUDA, but Python makes this accessible to a huge community of data scientists who aren’t comfortable in C++, who don’t feel maximally productive writing their algorithms in C, but who are used to day-in, day-out working in Python. And because of our investments in AI and, under the RAPIDS umbrella, in machine learning, and specifically in working with open source technologies like the Apache Arrow Project on the CUDA dataframe, that is an open source way to leverage this with the Python environment…

“That’s really the driver for now. We're on a journey at Nvidia around accelerating data science in general and the open source libraries have gotten to the point where we can do the whole thing in Python.”
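The “CUDA dataframe” Ashley refers to is cuDF, the RAPIDS dataframe library built on Apache Arrow’s columnar memory format, which mirrors the pandas API while keeping data resident in GPU memory. A minimal sketch, with the file and column names invented for illustration:

    # Hypothetical example of cuDF, the RAPIDS GPU dataframe; the file
    # name and column names are invented for illustration.
    import cudf

    trades = cudf.read_csv("trades.csv")              # loads straight into GPU memory
    trades["notional"] = trades["price"] * trades["size"]

    # groupby/aggregate executes on the GPU; to_pandas() copies results back to host
    summary = trades.groupby("symbol").agg({"notional": "sum", "price": "mean"})
    print(summary.to_pandas())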

AIwire