
Advanced Technology on Wall Street – Rumors from the Trade Show Floor 

New York is always an exciting, energetic city to visit, and ... was made even more so as I attended the recent ‘HPC & AI on Wall Street’ conference, which HPCWire are now championing. It was well worth the train ride from Boston, and interesting to see the varied mix of attendees present and to hear how HPC and AI are evolving in the world of finance.

All sectors of the financial services industry (FSI) were represented at the conference, from traditional, old-school banks to venture capital firms, fintechs and hedge funds. AI adoption varied from integrating machine learning into existing applications and processes to mining unearthly amounts of market data for a trading edge.

My first lesson was that there is a dichotomy within this community between too much data volume and insufficient data context. Don’t worry: the bank account and market value data is as solid as always. It’s the contextual data relating to the root causes of market shifts that is trickier – a government change, a regulation change, a merger or acquisition, a corporate management change. Often these nuances have a significant impact.

Verne Global's Bob Fletcher

After spending my summer being thoroughly impressed by deep neural network (DNN) translation portals, machine vision systems and the like, I assumed DNNs would save the day. Bloomberg’s Head of Quantitative Risk Analysis, Harvey Stein, explained at length where DNNs do and don’t fit in the Wall Street ecosystem. In his experience much of the available market data comes as a series of data pools which can be trained against, but which rarely tell the whole story without some external context such as a change of government. Additionally, DNN training on biased data, such as a history of successful consumer loans, can lead to very biased results - perhaps leading to a fast-track early retirement, or a spell in the local prison! Compliance is very important in this industry.
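
As an aside, the consumer-loan bias point is easy to demonstrate. The toy sketch below is my own illustration (invented features, invented data, nothing Bloomberg presented): a model trained only on loans that a skewed historical approval policy let through ends up scoring one group lower, even though both groups repay at the same rate in the full population.

```python
# Toy illustration of the biased-training-data trap: fitting a model only
# on historically approved ("successful") loans can penalise a group even
# though true repayment rates are identical. All data here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000

income = rng.normal(50, 15, n)                  # true driver of repayment ($k)
group = rng.integers(0, 2, n)                   # a proxy attribute, e.g. postcode
credit_score = income + rng.normal(0, 10, n)    # noisy signal the lender observes

repaid = rng.random(n) < 1 / (1 + np.exp(-(income - 45) / 8))

# The historical approval policy used different income cut-offs per group,
# so the "successful loans" training set is selection-biased.
approved = np.where(group == 0, income > 55, income > 30)

X = np.column_stack([credit_score, group])
biased = LogisticRegression().fit(X[approved], repaid[approved])
full = LogisticRegression().fit(X, repaid)      # counterfactual we rarely have

for g in (0, 1):
    print(f"group {g}: true repayment rate = {repaid[group == g].mean():.3f}")

for name, model in (("biased", biased), ("full-population", full)):
    scores = model.predict_proba(X)[:, 1]
    gap = scores[group == 0].mean() - scores[group == 1].mean()
    print(f"{name:>16} model: mean score gap (group 0 minus group 1) = {gap:+.3f}")
```

The model trained only on approved loans shows a clear score gap between the two groups; the (usually unavailable) full-population model does not - exactly the sort of result a compliance team will want explained.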

There are few restrictions on quant funds, venture capitalists and internal risk teams exploiting all the available data, but the commercial banking, investment advisory and insurance communities need to be extra careful. So the former group are leading the exploitation of DNNs and GPUs, and many of the trading research companies considering Verne Global are asking about hosting GPUs, and even NVIDIA DGX-2s, or using our reserve bare-metal GPU cloud for proof-of-concept tests, while the latter group are being rightly conservative. One large insurance company suggested that they had fewer than 50 GPUs in the whole company (and many people coming to our booth didn’t even know what a GPU was…).

While the New York financial community was obviously the focus of this conference, one of the best presentations came from out of town. Thomas Thurston, CTO at San Francisco-based WR Hambrecht Ventures, shared an amazing use case for DNN training. Most VCs spend their time networking and digesting hundreds of interview-style start-up pitches every quarter, finally filtering them down to a limited number of investment candidates. It’s an incredibly time-intensive process, and the chances of either missing a good investment candidate or plumping for the wrong one are high.

W.R. Hambrecht's Thomas Thurston speaks on machine learning in venture capital

Thomas’s team of nine data scientists have turned this process on its head: they mine a wide variety of data sources – industry, social media, ‘dark data’, etc – which are then pushed through the machine learning blender to determine the best start-ups to invest in (a purely hypothetical sketch of what such a pipeline might look like appears after the results below). Unfortunately, though not unexpectedly, he didn’t share too many specifics on the data or the DNN models, but he did share some compelling results. Compared with the average start-up success rates of traditional VC and corporate investment teams, WR Hambrecht achieved:

  • Typical VC portfolios – 3X improvement
  • Corporate investment portfolios – 3X improvement
  • Internal corporate projects cancelled due to internal issues – 3X improvement
  • Picking successful internal corporate technical projects – 8X improvement

Pretty impressive! Perhaps the days of the Harvard MBA VC are numbered? I’m not betting against them yet - they will likely hire data science teams of their own to replicate Thomas’s excellent approach.
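
For the curious, here is roughly what the ‘mine the data, train a model, rank the deal flow’ idea can look like in code. To be clear, this is a purely hypothetical sketch: the feature names, the invented data and the choice of a gradient-boosted classifier are all my own stand-ins, since Thurston did not disclose his actual pipeline.

```python
# Purely hypothetical sketch of "mine features, train a model, rank the
# deal flow". Feature names, data and model choice are invented here;
# this is not WR Hambrecht's actual (undisclosed) pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)
n = 5_000  # past start-ups with a known outcome

# Invented features a team might mine from industry and social data.
features = np.column_stack([
    rng.normal(0, 1, n),    # market growth signal
    rng.normal(0, 1, n),    # founder track-record score
    rng.normal(0, 1, n),    # hiring velocity
    rng.normal(0, 1, n),    # web / social traction
])
# Invented outcome label with some dependence on the features.
logit = 0.9 * features[:, 0] + 0.7 * features[:, 1] + 0.4 * features[:, 2]
succeeded = rng.random(n) < 1 / (1 + np.exp(-(logit - 1.5)))

model = GradientBoostingClassifier().fit(features, succeeded)

# Score this quarter's (invented) pile of pitches and surface the top ten.
pitches = rng.normal(0, 1, (300, 4))
scores = model.predict_proba(pitches)[:, 1]
shortlist = np.argsort(scores)[::-1][:10]
print("candidates to look at first:", shortlist, scores[shortlist].round(2))
```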

Almost every organisation at the event talked about their use of machine learning, and some indicated what would make them extend it into full-scale deep learning. The most important criterion was the appropriateness of DNN training techniques to the problem at hand: DNN training has worked wonders on non-FSI applications such as natural language processing and machine vision, but in many cases traditional Bayesian data science techniques may have offered similar results more quickly and at lower cost (a minimal sketch of that baseline-first sanity check follows the list). The top 10 stepping-stones that caught my attention were:

  1. Data scientists with subject matter experience
  2. DNN explainability to meet compliance demands
  3. Large volume data ingest – connectivity/transport, tools and cost (typical datasets run from 100 TB to over 1 PB)
  4. Data pre-training conditioning – baked images
  5. Effective noisy data filtering
  6. Orthogonal influence data
  7. Improved data gravity – the data being close to the compute resources
  8. Increasingly robust open source applications and tools – hyperscale cloud use is really helping here
  9. Data scientists working in the business units versus the research lab
  10. Research experienced data scientists
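
On the Bayesian point above, a cheap sanity check is to run a traditional probabilistic baseline before committing to a DNN. Below is a minimal sketch on invented tabular data, with scikit-learn’s Gaussian naive Bayes standing in for the ‘traditional Bayesian techniques’ and a small multi-layer perceptron standing in for the DNN:

```python
# Sanity-check sketch: does a cheap Bayesian baseline already match the
# neural network on this problem? Invented tabular data for illustration.
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

# Invented classification problem standing in for an FSI dataset.
X, y = make_classification(n_samples=20_000, n_features=30,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [
    ("Gaussian naive Bayes", GaussianNB()),
    ("small MLP (DNN stand-in)", MLPClassifier(hidden_layer_sizes=(64, 64),
                                               max_iter=300, random_state=0)),
]

for name, model in models:
    start = time.perf_counter()
    model.fit(X_tr, y_tr)                       # training cost differs hugely
    elapsed = time.perf_counter() - start
    acc = model.score(X_te, y_te)               # held-out accuracy
    print(f"{name:<26} accuracy={acc:.3f}  train_time={elapsed:.2f}s")
```

If the baseline lands within a whisker of the network at a fraction of the training cost, the compliance and explainability story usually favours the simpler model.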

After lunch it was interesting to watch a slick presentation from Scott Aylor, AMD’s CVP and GM of Data Center Solutions, about their new EPYC 7002 Series processors, code-named ‘Rome’. It was hard not to be impressed by what AMD have put together, and I could see a number of employees from other processor manufacturers squirming in their seats somewhat as the benchmark stats and world records rolled up onto the screen.

Competition in the HPC space is much needed – it drives innovation, performance and choice for end users, and AMD have certainly ruffled a few HPC feathers. We’re looking forward to including their EPYC processors in our hpcDIRECT cloud platform, giving our users a choice between Intel Xeon, AMD EPYC and Arm-based Marvell ThunderX2.

Back to the show: no financial services event would be complete without some perspective on the evolving cryptocurrency market, and the last session, expertly moderated by Addison Snell of Intersect360 Research, who has unbelievable insight into the evolving HPC technology markets, covered exactly that.

Unfortunately, the panel had no idea where the Bitcoin price was going next, but they did have fabulous insight into other aspects of the industry. The current crypto coins are sub-optimal for the coming IoT world of 100,000,000,000 devices, from toasters to jet aircraft, where the typical transaction value may be as little as $0.0001. The coins intended for these applications will likely be democratised and won’t require huge investments to mine. The concept I liked best was that your own transaction gets validated in return for you validating two other, unrelated transactions. It appears that cryptocurrencies are closest to mainstream money in Thailand, Singapore, Switzerland and Brazil, and it was speculated that within the next year or so a central bank would issue a crypto coin. My money is on Switzerland. Who do you think it will be?
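
That ‘validate two others’ idea maps onto a directed acyclic graph rather than a linear blockchain – the best-known design along these lines is IOTA’s Tangle, although the panel’s description isn’t tied to any particular coin. The sketch below is my own minimal, hypothetical illustration of the mechanic, not a real protocol: each new transaction is accepted only after it has approved two earlier transactions.

```python
# Minimal, hypothetical sketch of the "validate two others" mechanic:
# each new transaction approves two earlier transactions before it is
# accepted into a DAG. An illustration only, not any real coin's protocol.
import random
from dataclasses import dataclass, field

@dataclass
class Tx:
    tx_id: int
    value: float
    approves: tuple = ()                       # earlier transactions this one validated
    approved_by: list = field(default_factory=list)

class Tangle:
    def __init__(self):
        self.txs = [Tx(tx_id=0, value=0.0)]    # genesis transaction

    def pick_two(self):
        """Prefer unapproved 'tips'; top up from older transactions if needed."""
        tips = [t for t in self.txs if not t.approved_by]
        pool = tips if len(tips) >= 2 else self.txs
        return random.sample(pool, k=min(2, len(pool)))

    def submit(self, value):
        # In a real system the node would also verify the two chosen
        # transactions and do a small proof-of-work; here we just link them.
        chosen = self.pick_two()
        tx = Tx(tx_id=len(self.txs), value=value,
                approves=tuple(c.tx_id for c in chosen))
        for c in chosen:
            c.approved_by.append(tx.tx_id)
        self.txs.append(tx)
        return tx

tangle = Tangle()
for _ in range(10):
    tx = tangle.submit(value=0.0001)           # tiny IoT-scale payment
    print(f"tx {tx.tx_id} validated {tx.approves}")
```

Because the work of validation is spread across every participant rather than a separate miner class, designs like this are the ones being pitched at $0.0001-scale IoT payments.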

After heading to the giant SuperComputing 19 in Denver, I’ll be back in New York in December for the AI Summit, at which we will be hosting an HPC & AI meetup (11th December). If you’re working in the fields of HPC and AI, please come along for what will no doubt be some healthy debate, excellent presentations, and hopefully some smashing Icelandic beer.

Thanks for reading, and appreciate any feedback or thoughts - Bob Fletcher, VP Strategy, Verne Global (Email: bob.fletcher@verneglobal.com)

This blog post originally appeared on the Verne Global website and is reprinted with permission of Bob Fletcher.
