How AI Is Transforming the Technology Workplace: Lessons from JPMorgan Chase
AI holds great potential for improving productivity in the technology workplace, Salwa Alamir of JPMorgan Chase said in her plenary session at PEARC21. A data-driven ML approach can help identify the worker skills necessary for a job or a role in a given project, determine how projects can be sped up via improved task management, and boost understanding of how code can be better maintained. ML- and data-driven analyses of three areas – people, tooling and process – have shown promise in improving the productivity of software developers, with further applications across that vast banking enterprise.
The PEARC conference series provides a forum for discussing challenges, opportunities and solutions among the broad range of participants in the research computing community. This community-driven effort builds on successes of the past, and aims to grow and be more inclusive by involving additional local, regional, national and international cyberinfrastructure and research computing partners spanning academia, government and industry. PEARC21, with the theme “Evolution Across All Dimensions,” was offered this year as a virtual event (July 19-22).
Tooling: Data-Driven Approach
JPMorgan Chase employs more than 50,000 technologists, with an annual budget of more than $11 billion, Alamir said. The AI team began by asking whether the company’s productivity-to-cost balance could be improved via machine learning. They devised a three-pronged approach that split the problem into three “productivity enablers”: people, process and tooling.
For the tooling analysis, the team analyzed the correlation between productivity-tool adoption and industry-standard metrics such as lead time for changes, change failure rate, deployment frequency and mean time to recovery after failures.
“We decided to use some of these metrics to decide what are the most useful tools developers can use,” she said. “We decided that unique incidents are not as bad as … incidents that are happening over and over,” given that the latter imply developers may be addressing the symptoms rather than the root causes of a given incident. They identified a number of tools associated with better metrics – including continuous integration/continuous delivery (CI/CD) tools, cloud tools, training portals and task-management tools – whose use the company should prioritize among its developers.
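Alamir described the tooling analysis only at a high level. As a rough illustration of the idea – correlating per-team tool adoption with DORA-style delivery metrics – the minimal sketch below is not the team’s pipeline; the CSV export and all column names are hypothetical.

```python
# Minimal sketch: correlate tool-adoption flags with delivery metrics.
# "team_metrics.csv" and every column name here are hypothetical placeholders.
import pandas as pd

teams = pd.read_csv("team_metrics.csv")  # one row per development team

tool_flags = ["uses_cicd", "uses_cloud", "uses_training_portal", "uses_task_mgmt"]
metrics = ["lead_time_days", "change_failure_rate",
           "deployments_per_week", "mttr_hours"]

# Pearson correlation between each 0/1 adoption flag and each metric.
# A strong negative correlation with lead time or MTTR (or a positive one
# with deployment frequency) would mark a tool as worth prioritizing.
corr = teams[tool_flags + metrics].corr().loc[tool_flags, metrics]
print(corr.round(2))
```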
People: AI for Skills Understanding
“For people, we were interested in skill sets,” she said. “How can we identify skill sets and retain the top talent we have in the firm?” The focus of the exercise was automated review of resumes.
At any time, JPMorgan Chase has numerous job positions open globally. The resumes it receives for these positions number on the order of a million, and reviewing them manually is often time-prohibitive. Aids devised to date to automate the process, such as keyword searches, have proved inadequate to the task.
Resume formats also proved a challenge, Alamir reported. For example, a resume written in columns can’t be read line by line, or the AI risks conflating information pertaining to different topics. Skills are changing over time, so no static list of required skills will remain relevant for long – the AI would have to detect new skills not present in the training data.
The team hit upon a four-dimensional rubric designed to extract skills from any free text, featuring a deep soft-skill extractor, a sectional hard-skill extractor, a project-delivery extractor and an experience extractor. While the approach was intended for situations of applicant surplus, the team found that the algorithms could also be applied to the opposite problem of too few applicants by searching for skills relevant to other open positions.
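The session did not detail the extractors’ internals. As a minimal sketch of the general idea behind a sectional hard-skill extractor – with a made-up skill vocabulary and section headers, and none of the production system’s ability to detect skills outside a fixed list – one might write:

```python
# Toy sectional hard-skill extractor; SKILL_VOCAB and the section names
# are invented for illustration, not drawn from the JPMorgan Chase system.
import re

SKILL_VOCAB = {"python", "java", "kubernetes", "spark", "sql", "react"}
SECTION_HEADERS = re.compile(r"^(skills|experience|projects)\b", re.I | re.M)

def extract_hard_skills(resume_text: str) -> dict:
    """Split free text into labeled sections and match known skills in each."""
    spans = [(m.start(), m.group(1).lower()) for m in SECTION_HEADERS.finditer(resume_text)]
    spans.append((len(resume_text), None))
    sections = {}
    for (start, name), (end, _) in zip(spans, spans[1:]):
        words = set(re.findall(r"[a-z+#.]+", resume_text[start:end].lower()))
        sections[name] = sorted(words & SKILL_VOCAB)
    return sections

sample = "Skills\nPython, SQL, Kubernetes\nExperience\nBuilt Spark pipelines in Java"
print(extract_hard_skills(sample))
# {'skills': ['kubernetes', 'python', 'sql'], 'experience': ['java', 'spark']}
```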
Process: AI for Task Management
The “process” productivity-enabler portion of the effort focused in part on project management. The team developed a technique able to detect patterns of mismanagement among projects. These patterns include “the cliff,” in which a project languishes for long periods and the work is then performed at the last minute; “the gap,” in which the project proceeds according to schedule for a time but then enters a phase in which the remaining work isn’t completed; and “the wave,” in which successive bursts of task completion and the addition of new tasks keep the project from reducing its outstanding work. Notably, employees reported that task planning and management were time-consuming, with sudden workloads posed by unexpected events.
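Alamir did not describe the detector itself. Purely to make the three shapes concrete, a toy heuristic (not the team’s method) might classify a project’s history of outstanding-task counts like this:

```python
# Toy classifier for the burndown shapes described above; thresholds and
# sample data are invented, not taken from the JPMorgan Chase detector.
def classify_burndown(open_tasks):
    """open_tasks: counts of outstanding tasks sampled over the project's life."""
    n = len(open_tasks)
    first_drop = next((i for i in range(1, n) if open_tasks[i] < open_tasks[i - 1]), n)
    rises = sum(1 for i in range(1, n) if open_tasks[i] > open_tasks[i - 1])
    if first_drop >= 0.8 * n:
        return "cliff"   # work languishes, then a last-minute scramble
    if rises >= 3:
        return "wave"    # new tasks keep arriving as old ones are closed
    if open_tasks[-1] > 0 and open_tasks[-1] == min(open_tasks[n // 2:]):
        return "gap"     # progress stalls and the remaining work never closes
    return "steady"

print(classify_burndown([20, 20, 20, 20, 20, 20, 20, 20, 5, 0]))   # cliff
print(classify_burndown([20, 16, 12, 10, 8, 8, 8, 8, 8, 8]))       # gap
print(classify_burndown([20, 15, 18, 12, 16, 10, 14, 8, 12, 6]))   # wave
```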
To improve on the current, manual planning method, Alamir and her colleagues leveraged historical records kept at JPMorgan Chase that tracked every development task ever created at the company. Learning from these data, their algorithm considers task difficulty, priority, precedence, duration, developer skills and additional constraints specific to a particular project to assign tasks optimally and minimize time to completion.
A paper on that work has been accepted for publication. Moreover, an informal survey of the company’s development teams elicited positive reviews of the manageability of the AI-generated project plans.
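The underlying optimization was not spelled out in the talk. As a hedged sketch of the kinds of inputs involved – duration, priority, precedence and developer skills – a simple greedy scheduler (not the published algorithm) could look like the following, with all task and developer data invented:

```python
# Greedy task-assignment sketch: respects precedence and skill requirements
# and tries to shorten the overall finish time. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    duration: float                 # estimated days
    skill: str                      # required skill
    priority: int                   # higher runs earlier among ready tasks
    depends_on: list = field(default_factory=list)

def schedule(tasks, developers):
    """developers: dict of name -> set of skills. Returns (task, dev, start, end)."""
    finish = {}                              # task name -> finish time
    free_at = {d: 0.0 for d in developers}   # developer -> next free time
    plan, pending = [], list(tasks)
    while pending:
        ready = [t for t in pending if all(d in finish for d in t.depends_on)]
        ready.sort(key=lambda t: (-t.priority, t.duration))
        task = ready[0]
        # earliest-available developer holding the required skill
        dev = min((d for d, s in developers.items() if task.skill in s),
                  key=lambda d: free_at[d])
        start = max(free_at[dev], max((finish[d] for d in task.depends_on), default=0.0))
        end = start + task.duration
        free_at[dev], finish[task.name] = end, end
        plan.append((task.name, dev, start, end))
        pending.remove(task)
    return plan

devs = {"alice": {"backend", "data"}, "bob": {"frontend"}}
work = [Task("api", 3, "backend", 2),
        Task("ui", 2, "frontend", 1, depends_on=["api"]),
        Task("etl", 4, "data", 3)]
for row in schedule(work, devs):
    print(row)
```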
Process: AI for Code Maintenance
Another aspect of the process study concerned the vast network of scripts underlying the company’s systems. Some of its code repositories are huge, containing more than 10 million lines of code created by thousands of developers. Development can be accelerated when common tasks are performed by the same piece of code in the repository. But this approach poses risks.
“It causes a dependency on scripts that developers may not be aware of,” Alamir said. “If a developer decides to change a function, [they] don’t know how many people are using it. It may happen that ten teams’ [code] fail because of that change.” Uncovering and correcting those failures manually can take months.
The AI team developed a multistep approach to solve the issue. By generating a network graph through static analysis of the code repository, they were able to visualize the directional code dependencies in a given program. By parsing release notes, they were able to find deprecated functions requiring updates. The AI can automate that process when an updated script is available with a high degree of confidence, or alert developers when it is not, in which case the code must be corrected manually. Finally, unit tests of the updated scripts and of the scripts that depend on them can confirm a successful update.
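For flavor only – this is not JPMorgan Chase’s tooling – a minimal sketch of the first two steps (a static call-dependency graph plus a deprecated-function check) could use Python’s standard ast module; the repository path and the deprecated name below are made up:

```python
# Sketch: build a call-dependency graph via static analysis, then flag
# callers of functions listed as deprecated in (hypothetical) release notes.
import ast
from collections import defaultdict
from pathlib import Path

def call_graph(repo_dir):
    """Map each function 'module.func' to the plain names it calls directly."""
    graph = defaultdict(set)
    for path in Path(repo_dir).rglob("*.py"):
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                caller = f"{path.stem}.{node.name}"
                for sub in ast.walk(node):
                    if isinstance(sub, ast.Call) and isinstance(sub.func, ast.Name):
                        graph[caller].add(sub.func.id)
    return graph

def impacted_callers(graph, deprecated):
    """Return callers that depend on any deprecated function name."""
    return {caller: sorted(calls & deprecated)
            for caller, calls in graph.items() if calls & deprecated}

if __name__ == "__main__":
    # "src" and "legacy_hash" are placeholders for a repo path and a
    # deprecated function parsed from release notes.
    for caller, hits in impacted_callers(call_graph("src"), {"legacy_hash"}).items():
        print(f"{caller} depends on deprecated: {', '.join(hits)}")
```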
The JPMorgan Chase team has not yet implemented its algorithms at a scale large enough to demonstrate improvement across the three productivity enablers. Alamir is confident, though, that the size of the enterprise will greatly multiply even modest gains.
“One hour more productivity per week per developer” in such a large company will add up, she said. “We’re applying economies of scale; even just a little bit of improvement could end up being huge.”
Read additional coverage of PEARC21 on our sister site HPCwire.