Technology Partners Make Developing Cloud-Based, GPU-Accelerated AI Recommender Systems Easier 
Sponsored Content by MICROSOFT/NVIDIA

Financial services organizations have large volumes of customer data, including account balances, payment transactions, FICO scores, and credit history. Organizations are increasingly using cloud-based, GPU-accelerated artificial intelligence (AI) and machine learning (ML) recommender systems for predictive analysis, making personalized suggestions to customers. However, creating and maintaining recommender systems is a complex and time-consuming task. As technology partners, Microsoft and NVIDIA have customized tools to help develop and maintain recommender systems.

What is a recommender system?

Recommender systems, also called recommendation engines, are AI systems used to suggest a product, service, or piece of information to a user. Because recommendations are derived from a user's characteristics, preferences, history, and data, each suggestion is personalized for a particular customer or user.
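To make the idea concrete, here is a minimal, purely illustrative sketch of the personalization logic a recommender system applies: score the products a customer has not yet used from the behavior of similar customers. The toy interaction matrix and the recommend function below are invented for illustration and are not tied to any Microsoft or NVIDIA product.

    import numpy as np

    # Toy user-item interaction matrix: rows = customers, columns = financial products
    # (e.g. savings account, credit card, mortgage, brokerage). 1 = customer uses product.
    interactions = np.array([
        [1, 1, 0, 0],   # customer 0
        [1, 0, 1, 0],   # customer 1
        [0, 1, 0, 1],   # customer 2
        [1, 1, 1, 0],   # customer 3
    ], dtype=float)

    def recommend(user_idx, interactions, top_k=2):
        """Score unseen products for one user from the behavior of similar users."""
        # Cosine similarity between the target user and every other user.
        norms = np.linalg.norm(interactions, axis=1, keepdims=True) + 1e-9
        normalized = interactions / norms
        similarity = normalized @ normalized[user_idx]

        # Weight every user's interactions by their similarity; ignore the user itself.
        similarity[user_idx] = 0.0
        scores = similarity @ interactions

        # Mask products the user already has, then return the best remaining ones.
        scores[interactions[user_idx] > 0] = -np.inf
        return np.argsort(scores)[::-1][:top_k]

    print(recommend(0, interactions))  # product indices suggested for customer 0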

Creating and maintaining recommender systems is complex

Historically, developing and maintaining recommender systems has required financial services staff with specialized skills, such as data scientists or developers. Finding and maintaining the right recommender algorithms can be a daunting task. Many financial organizations have legacy infrastructure, limited budgets for AI development, and staff that lack the data science skills needed to implement AI recommender algorithms. Forrester research shows that “roughly two-thirds (64%) of technical decision-makers are not fully confident in their ability to meet their organization’s AI goals based on current resources.”

There are a number of tasks required to set up and test a recommender system ML model that meets the specific needs of an organization: preparing the data, building or selecting a recommender algorithm, tuning and training to optimize the model, and finally implementing the model. A first step is collecting, retrieving, and organizing data on customers and the financial products or services they use. Once the data is located, it must be collated into a standardized format for use in AI or ML algorithms.
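The following sketch walks through those stages end to end on invented transaction data, using a simple popularity baseline as a stand-in for a tuned recommender algorithm; all column names and values are placeholders.

    import pandas as pd

    # --- 1. Data preparation: collate raw transaction records into one standard format.
    transactions = pd.DataFrame({
        "customer_id": [10, 10, 11, 11, 12, 12, 12],
        "product_id":  ["card", "loan", "card", "savings", "loan", "savings", "card"],
        "amount":      [120.0, 5000.0, 80.0, 300.0, 7500.0, 150.0, 60.0],
    })
    interactions = (transactions
                    .groupby(["customer_id", "product_id"])
                    .size()
                    .rename("count")
                    .reset_index())

    # --- 2. Split: hold out one interaction per customer for evaluation.
    holdout = interactions.groupby("customer_id").tail(1)
    train = interactions.drop(holdout.index)

    # --- 3. Build or select a model. Here: a popularity baseline standing in for a
    #        tuned recommender algorithm.
    popularity = train.groupby("product_id")["count"].sum().sort_values(ascending=False)

    def recommend(customer_id, k=2):
        seen = set(train.loc[train.customer_id == customer_id, "product_id"])
        return [p for p in popularity.index if p not in seen][:k]

    # --- 4. Evaluate: hit rate at k on the held-out interaction per customer.
    hits = sum(row.product_id in recommend(row.customer_id)
               for row in holdout.itertuples())
    print(f"hit rate@2: {hits / len(holdout):.2f}")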

There are a number of existing recommender algorithms available in repositories such as GitHub. As described in this Microsoft article, “When asked to build a recommender system, data scientists will often turn to more commonly known algorithms to alleviate the time and costs needed to choose and test more state-of-the-art algorithms. Selecting the right recommender algorithm from scratch and implementing new models for recommender systems can be costly as they require ample time for training and testing as well as large amounts of compute power.”

Building an effective AI recommender solution using cloud-based GPU-accelerated solutions

Training ML recommender models requires substantial computational resources, and legacy CPU-based infrastructure often cannot deliver the processing speed required. Moving to GPU-based infrastructure provides much faster data processing, model training, and inference, and can help increase an organization's return on investment (ROI).

According to the Forrester survey, “What organizations need are prebuilt, configurable AI cloud services. Cloud AI services allow developers to access a depth of AI capabilities via APIs for fueling application innovation without requiring data science experience.” Moving to a cloud-based AI solution that includes pre-built AI models results in faster deployment and gives organizations access to AI models that have been responsibly built and tested.

Using cloud-based, GPU-accelerated AI and ML solutions removes barriers financial services institutions face in developing AI and ML recommender algorithms. NVIDIA’s “State of AI in Financial Services” survey found that “companies are experiencing significant financial benefit from enabling AI across the enterprise. Over 30 percent of respondents stated that AI increases annual revenues by more than 10 percent, while over 25 percent stated that AI is reducing annual costs by more than 10 percent.”

Technology partners provide tools to help develop cloud-based, GPU-accelerated AI recommender solutions

Microsoft and NVIDIA have a long history of working together and providing technology to support financial institutions in creating and implementing AI recommender systems. Using the Microsoft Azure cloud and the NVIDIA AI platform provides the scalable, accelerated resources needed to run AI/ML algorithms, routines, and libraries.

The partnership between Microsoft and NVIDIA makes powerful GPU acceleration available to financial institutions. The Azure Machine Learning service integrates NVIDIA’s open-source RAPIDS software libraries, which allow machine learning users to accelerate their pipelines with NVIDIA GPUs. The NVIDIA TensorRT acceleration library was added to ONNX Runtime to speed up deep learning inference. Azure supports NVIDIA T4 Tensor Core GPUs, which are optimized for the cost-effective deployment of machine learning inference and analytics workloads, as well as the NVIDIA DGX H100 system.
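As a rough illustration of how these integrations are typically used from Python, the sketch below prepares features with RAPIDS cuDF and runs inference through ONNX Runtime's TensorRT execution provider. It assumes a CUDA-capable GPU with cudf and a TensorRT-enabled onnxruntime-gpu build installed; the file paths, column names, and model are placeholders.

    # Sketch only: assumes RAPIDS (cudf) and a TensorRT-enabled onnxruntime-gpu build.
    import cudf
    import numpy as np
    import onnxruntime as ort

    # RAPIDS cuDF: pandas-like feature preparation that runs on the GPU.
    df = cudf.read_parquet("transactions.parquet")            # placeholder path
    features = (df.groupby("customer_id")
                  .agg({"amount": "sum", "product_id": "count"})
                  .reset_index())

    # ONNX Runtime with the TensorRT execution provider: accelerated inference for an
    # exported recommender model. Providers are tried in order, falling back to CUDA/CPU.
    session = ort.InferenceSession(
        "recommender.onnx",                                    # placeholder model file
        providers=["TensorrtExecutionProvider",
                   "CUDAExecutionProvider",
                   "CPUExecutionProvider"],
    )
    inputs = {session.get_inputs()[0].name:
              features.drop(columns="customer_id").to_numpy().astype(np.float32)}
    scores = session.run(None, inputs)[0]
    print(scores.shape)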

NVIDIA Merlin framework designed for recommender workflows

NVIDIA Merlin provides tools to build high-performing recommender systems at scale. Merlin includes libraries, methods, and tools that streamline the building of recommender systems. Merlin components and capabilities are optimized to support the retrieval, filtering, scoring, and ordering of hundreds of terabytes of data, all accessible through APIs. NVIDIA Merlin’s open-source components simplify building and deploying a production-quality recommender system pipeline. Capital One, for example, developed a state-of-the-art personalized recommendation architecture, powered by the ALBERT transformer model and NVIDIA Merlin’s Transformers4Rec, that achieves superior performance in serving relevant ads to repeat visitors of the Capital One homepage. Register for NVIDIA GTC (https://www.nvidia.com/gtc/) to learn more.
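As one concrete, hedged example of the Merlin stack, the sketch below uses NVTabular, Merlin's open-source feature-engineering library, to build a GPU-accelerated preprocessing workflow whose output can feed downstream Merlin training components such as Transformers4Rec. Column names and file paths are illustrative, and the exact API may vary between Merlin releases.

    # Sketch only: assumes the Merlin NVTabular package is installed on a GPU machine.
    # Column names and paths are illustrative; consult the Merlin docs for current APIs.
    import nvtabular as nvt
    from nvtabular import ops

    # Define a preprocessing graph: categorify IDs, normalize continuous features.
    cat_features = ["customer_id", "product_id"] >> ops.Categorify()
    cont_features = ["amount"] >> ops.FillMissing() >> ops.Normalize()
    workflow = nvt.Workflow(cat_features + cont_features)

    # Fit the workflow statistics on the training data and write transformed Parquet
    # files that downstream Merlin components (e.g. Transformers4Rec) can train on.
    train_ds = nvt.Dataset("train_interactions.parquet")      # placeholder path
    workflow.fit(train_ds)
    workflow.transform(train_ds).to_parquet("processed/train/")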

Microsoft cloud-based solutions for financial recommender systems

Moving to the Microsoft Azure cloud provides financial institutions with a complete set of computing, networking, and storage resources, integrated with workload services that can handle the demands of recommender algorithm processing. Microsoft Azure allows developers to build and train new AI models faster with automated machine learning, autoscaling cloud compute, and built-in DevOps.
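For orientation, the following sketch shows how a recommender training job might be submitted to autoscaling GPU compute with the Azure Machine Learning Python SDK (v2). The workspace identifiers, environment, and compute cluster names are placeholders and would need to match an existing Azure ML workspace.

    # Sketch only: assumes the azure-ai-ml SDK (v2) and an existing Azure ML workspace
    # with a GPU compute cluster. All identifiers below are placeholders.
    from azure.ai.ml import MLClient, command
    from azure.identity import DefaultAzureCredential

    ml_client = MLClient(
        credential=DefaultAzureCredential(),
        subscription_id="<subscription-id>",
        resource_group_name="<resource-group>",
        workspace_name="<workspace>",
    )

    # A command job that trains a recommender model on an autoscaling GPU cluster.
    job = command(
        code="./src",                                   # folder containing train.py
        command="python train.py --epochs 10",
        environment="azureml-gpu-env@latest",           # placeholder environment name
        compute="gpu-cluster",                          # placeholder GPU compute target
        display_name="recommender-training",
    )
    returned_job = ml_client.jobs.create_or_update(job)
    print(returned_job.studio_url)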

To help create and implement recommender algorithms, Microsoft provides a GitHub repository with Python best-practice examples that facilitate the building and evaluation of recommendation systems using Azure Machine Learning services.
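For example, the repository's utilities can be used roughly as follows to train and evaluate a simple item-similarity model (SAR) on a public demo dataset. This sketch assumes the recommenders package from that repository is installed; column and parameter names follow its example notebooks and may differ between releases.

    # Sketch only: assumes `pip install recommenders` (Microsoft's GitHub repository).
    from recommenders.datasets import movielens
    from recommenders.datasets.python_splitters import python_stratified_split
    from recommenders.models.sar import SAR

    # Load a small public dataset the repository provides for demonstrations.
    data = movielens.load_pandas_df(size="100k",
                                    header=["userID", "itemID", "rating", "timestamp"])
    train, test = python_stratified_split(data, ratio=0.75,
                                          col_user="userID", col_item="itemID")

    # SAR: a simple item-similarity recommender included in the repository.
    model = SAR(col_user="userID", col_item="itemID",
                col_rating="rating", col_timestamp="timestamp")
    model.fit(train)

    # Top-10 recommendations per user, excluding items already seen in training.
    top_k = model.recommend_k_items(test, top_k=10, remove_seen=True)
    print(top_k.head())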

Summary

Financial services organizations are increasingly implementing AI recommender systems to personalize offers of products or services to individual customers. Predictive analysis supported by cloud-based GPU-accelerated AI and ML recommender systems can enhance customer experience and provide a new source of revenue for the organization.

But developing an ML recommender system is time-consuming and complex, and it requires staff with specialized skills such as data science or programming. Microsoft and NVIDIA provide tools to streamline the process of developing, testing, training, and implementing recommender systems.
