
Stanford HAI’s 2025 AI Index Reveals Record Growth in AI Capabilities, Investment, and Regulation 

STANFORD, Calif., April 7, 2025 -- Today, the Stanford Institute for Human-Centered AI (HAI) released its 2025 AI Index report, which provides a comprehensive look at the global state of artificial intelligence. Now in its eighth edition, the AI Index tracks, distills, and visualizes data across technical performance, economic impact, education, policy, and responsible AI, offering an empirical foundation for understanding AI’s rapid evolution.

“AI is a civilization-changing technology — not confined to any one sector, but transforming every industry it touches,” said Russell Wald, Executive Director at Stanford HAI and member of the AI Index Steering Committee. “Last year we saw AI adoption accelerate at an unprecedented pace, and its reach and impact will only continue to grow. The AI Index equips policymakers, researchers, and the public with the data they need to make informed decisions — and to ensure AI is developed with human-centered values at its core.”

The 2025 AI Index highlights key developments over the past year, including major gains in model performance, record levels of private investment, new regulatory action, and growing real-world adoption. The report also underscores enduring challenges in reasoning, safety, and equitable access — areas that remain critical as AI systems become more advanced and widely deployed. Top takeaways include:

  1. AI performance on demanding benchmarks continues to improve: In 2023, researchers introduced new benchmarks—MMMU, GPQA, and SWE-bench—to test the limits of advanced AI systems. Just a year later, performance sharply increased: scores rose by 18.8, 48.9, and 67.3 percentage points on MMMU, GPQA, and SWE-bench, respectively. Beyond benchmarks, AI systems made major strides in generating high-quality video, and in some settings, agentic AI models even outperformed humans.
  2. AI is increasingly embedded in everyday life: From healthcare to transportation, AI is rapidly moving from the lab to daily life. As of August 2024, the FDA had approved 950 AI-enabled medical devices—a sharp rise from just six in 2015 and 221 in 2023. On the roads, self-driving cars are no longer experimental: Waymo, one of the largest U.S. operators, now provides over 150,000 autonomous rides each week.
  3. Business is all-in on AI, fueling record investment and adoption, as research continues to show strong productivity impacts: In 2024, U.S. private AI investment grew to $109.1 billion—nearly 12 times China’s $9.3 billion and 24 times the U.K.’s $4.5 billion. Generative AI saw particularly strong momentum, attracting $33.9 billion globally in private investment—an 18.7% increase from 2023. AI business adoption is also accelerating: 78% of organizations reported using AI in 2024, up from 55% the year before. Meanwhile, a growing body of research confirms that AI boosts productivity and, in most cases, helps narrow skill gaps across the workforce.
  4. The U.S. still leads in producing top AI models—but China is closing the performance gap: In 2024, U.S. institutions produced 40 notable AI models, significantly outpacing China’s 15 and Europe’s three. While the U.S. maintains its lead in quantity, Chinese models have rapidly closed the quality gap: performance differences on major benchmarks such as MMLU and HumanEval shrank from double digits in 2023 to near parity in 2024. Meanwhile, China continues to lead in AI publications and patents. At the same time, model development is increasingly global, with notable launches from regions such as the Middle East, Latin America, and Southeast Asia.
  5. The responsible AI (RAI) ecosystem evolves unevenly: AI-related incidents are rising sharply, yet standardized RAI evaluations remain rare among major industrial model developers. However, new benchmarks like HELM Safety, AIR-Bench, and FACTS offer promising tools for assessing factuality and safety. Among companies, a gap persists between recognizing RAI risks and taking meaningful action. In contrast, governments are showing increased urgency: in 2024, global cooperation on AI governance intensified, with organizations including the OECD, EU, UN, and African Union releasing frameworks focused on transparency, trustworthiness, and other core RAI principles.
  6. Global AI optimism is rising—but deep regional divides remain: In countries like China (83%), Indonesia (80%), and Thailand (77%), strong majorities see AI products and services as more beneficial than harmful. In contrast, optimism remains far lower in places like Canada (40%), the United States (39%), and the Netherlands (36%). Still, sentiment is shifting: since 2022, optimism has grown significantly in several previously skeptical countries—including Germany (+10%), France (+10%), Canada (+8%), Great Britain (+8%), and the United States (+4%).
  7. AI becomes more efficient, affordable, and accessible: Driven by increasingly capable small models, the inference cost for a system performing at the level of GPT-3.5 dropped over 280-fold between November 2022 and October 2024. At the hardware level, costs have declined by 30% annually, while energy efficiency has improved by 40% each year. Open-weight models are also closing the gap with closed models, reducing the performance difference from 8% to just 1.7% on some benchmarks in a single year. Together, these trends are rapidly lowering the barriers to advanced AI.
  8. Governments are stepping up on AI—with regulation and investment: In 2024, U.S. federal agencies introduced 59 AI-related regulations, more than double the number in 2023, issued by twice as many agencies. Globally, legislative mentions of AI rose 21.3% across 75 countries, continuing a ninefold increase since 2016. Alongside rising attention, governments are investing at scale: Canada pledged $2.4 billion, China launched a $47.5 billion semiconductor fund, France committed €109 billion, India pledged $1.25 billion, and Saudi Arabia’s Project Transcendence represents a $100 billion initiative.
  9. AI and computer science education are growing—but gaps in access and readiness persist: Two-thirds of countries now offer or plan to offer K–12 CS education—twice as many as in 2019—with Africa and Latin America making the most progress. Yet access remains limited in many African countries due to basic infrastructure gaps like electricity. In the U.S., 81% of CS teachers say AI should be part of foundational CS education, but less than half feel equipped to teach it.
  10. Industry is racing ahead in AI—but the frontier is tightening: Nearly 90% of notable AI models in 2024 came from industry, up from 60% in 2023, while academia remains the top source of highly cited research. Model scale continues to grow rapidly—training compute doubles every five months, datasets every eight, and power use annually. Yet performance gaps are shrinking: the score difference between the top and 10th-ranked models fell from 11.9% to 5.4% in a year, and the top two are now separated by just 0.7%. The frontier is increasingly competitive—and increasingly crowded.
  11. AI earns top honors for its impact on science: AI’s growing importance is reflected in major scientific awards: two Nobel Prizes recognized foundational work on deep learning (physics) and its application to protein folding (chemistry), while the Turing Award honored groundbreaking contributions to reinforcement learning.
  12. Reasoning remains a challenge: Learning-based systems that generate and verify hypotheses using symbolic methods perform well—though not superhumanly—on tasks like International Math Olympiad problems. LLMs, however, still lag on complex reasoning benchmarks like MMMU and struggle to reliably solve logic-heavy tasks such as arithmetic and planning, even when provably correct solutions exist. This limits their use in high-stakes, accuracy-critical settings.

The AI Index is used by decision-makers across sectors to better understand the pace and direction of AI development. Over the past eight years, it has become a foundational resource for government agencies, industry leaders, and civil society, cited by policymakers in nearly every major country and used to brief global enterprises such as Accenture, Wells Fargo, IBM, and Fidelity. As artificial intelligence continues to evolve at speed, the Index remains a vital tool for those seeking timely, trustworthy insights into where the field stands—and where it is headed.

The AI Index is available now at https://hai.stanford.edu/ai-index/2025-ai-index-report.

About the AI Index

The AI Index report tracks, collates, distills, and visualizes data related to artificial intelligence (AI). Our mission is to provide unbiased, rigorously vetted, broadly sourced data so that policymakers, researchers, executives, journalists, and the general public can develop a more thorough and nuanced understanding of the complex field of AI. The AI Index is recognized globally as one of the most credible and authoritative sources for data and insights on artificial intelligence.

About the Stanford Institute for Human-Centered AI

The Stanford Institute for Human-Centered AI (HAI) is an interdisciplinary institute established in 2019 to advance AI research, education, policy, and practice in order to improve the human condition. Stanford HAI brings together thought leaders from academia, industry, government, and civil society to shape the development and responsible deployment of AI. We believe AI should be guided by its human impact, inspired by human intelligence, and designed to augment, not replace, people. Our interdisciplinary faculty conducts research that guides the development of AI technologies to enhance human capabilities while ensuring their ethical, fair, and transparent use.


Source: Stanford Institute for Human-Centered AI
