2021: An Incredible Year of AI in Review
From the still-menacing COVID-19 pandemic to new virus variants and overwhelmed hospitals, to extreme weather events, continuing global supply chain issues and a world clamoring for some semblance of recovery and normalcy, 2021 sure has been exhausting.
It was a turbulent year in the world of AI as well, with a constant string of eye-opening investment deals, ongoing corporate dramas, compelling new uses of AI, intriguing corporate product and road map strategies and a huge influx of noteworthy research and development.
To put the most important 2021 happenings in AI into perspective, here is our EnterpriseAI top 5 list of the biggest AI news stories for the year. The events and trends below exemplify the extraordinary activity and growth in the field of AI in the last 12 months as well as the evolving importance of this technology and the possibilities that it inspires in manufacturing, banking, retail, business, insurance and every other industry.
- The Nvidia-Arm Acquisition Saga
It all began with optimism, at least from Nvidia’s side, when the company first announced in September 2020 that it would acquire British chip IP vendor Arm Ltd. for $40 billion. But since then, the bad news has been piling up for the potential deal, as the British government, the U.S. Federal Trade Commission and the European Commission have all launched deeper investigations into it, citing a myriad of competition, national security and related concerns.
One of the regulators’ main concerns, according to the FTC’s complaint, is that the acquisition would harm competition by giving Nvidia access to the competitively sensitive information of Arm’s licensees, some of whom are Nvidia’s rivals. And because Arm works with many smaller licensees, the effects on the industry could be severe, the regulators said.
Even back in 2020 when the eye-bulging deal was unveiled, some critics said it would run into roadblocks. That turned out to be an understatement. Several major tech companies, including Google and Microsoft, quickly and vocally opposed the deal, voicing repeated concerns about its effects on competition and pricing, while three other chip companies – Broadcom, Marvell and MediaTek – publicly backed it, saying the move could ultimately benefit their own businesses.
The situation remains fluid, but a growing number of industry analysts say they do not see the deal closing the way Nvidia had hoped.
Interestingly, Nvidia CEO Jensen Huang said in an interview in June that his company does not have to complete this acquisition to continue to be successful.
“We don’t have to buy Arm,” he said. “Nvidia is doing well and it has a great strategy. We want to buy Arm because it will expand the reach of the Nvidia ecosystem and help Arm get into new markets — I’m incredibly excited about sharing our technology with a company that enables millions of devices a day.”
The Nvidia acquisition of Arm arose when Japanese technology investment company SoftBank, which bought Arm in July of 2016 in a $32.25 billion all-cash deal, chose to sell the company after hemorrhaging cash since the first quarter of 2020. SoftBank had been looking to sell off assets to raise money after the company’s earlier bets on the rise of connected devices failed to pay off. The company’s Vision Fund, its AI investment fund, suffered a $13 billion annual loss in its fiscal year ending in March 2020.
Now, some analysts say, SoftBank would love nothing more than for the current acquisition proposal to fail, so that it can sell a now more highly valued Arm to a bidder willing to pay more than the $40 billion Nvidia offered.
For 2022, more intrigue awaits in this saga, as uncertainty surrounds every aspect of the deal.
It all raises the question – what was Nvidia thinking when it proposed a deal for a company whose tentacles reach seemingly everywhere?
- GPT-3 Continues to Mature
Since its 2020 debut, OpenAI’s massive GPT-3 natural language model has gained new features and capabilities and has helped draw more interest to language modeling around the world.
GPT-3, which stands for Generative Pre-trained Transformer 3, is an autoregressive language model with 175 billion parameters, ten times more than any previous non-sparse language model, according to OpenAI. The first version, GPT-1, arrived in 2018, while the second version, GPT-2, debuted in 2019. With the release of GPT-3 in 2020, natural language processing (NLP) gained more power and use cases in the enterprise than ever before.
In December, GPT-3 got faster and less expensive to use, thanks to a new API feature that integrates fine-tuning so users can customize their models to produce better results for their workloads. With fine-tuning built into the API, developers can now create versions of GPT-3 tailored to their enterprise applications. Fine-tuning can be started with a single command in the OpenAI command line tool, according to the independent AI research and deployment company, and the custom version begins training and is then immediately available through the API.
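To make that concrete, here is a minimal sketch of the late-2021 fine-tuning flow. The training file, API key and model names are placeholders for illustration, not details from OpenAI’s announcement.

```python
# A minimal sketch of the GPT-3 fine-tuning flow, circa late 2021.
# File names, the API key and model names below are placeholders.

# 1) Prepare a JSONL file of prompt/completion pairs, e.g. train_data.jsonl:
#    {"prompt": "Customer asks about refunds ->", "completion": " Route to billing."}
#
# 2) Kick off fine-tuning with a single command in the OpenAI CLI:
#    openai api fine_tunes.create -t train_data.jsonl -m davinci
#
# 3) When training finishes, call the customized model through the API:
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    model="davinci:ft-your-org-2021-12-15",  # hypothetical fine-tuned model name
    prompt="Customer asks about refunds ->",
    max_tokens=20,
)
print(response["choices"][0]["text"])
```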
The fine-tuning update followed the news in November that GPT-3’s waitlist was being dropped, making the technology broadly available for the first time to anyone who wants to use it.
- Soaring AI Investment Goes Crazy
In April, when AI platform vendor SambaNova Systems announced that it secured a fresh war chest of $676 million in Series D funding to put AI market leader Nvidia directly in its crosshairs, the company made some headlines.
That was just the beginning as the flow of investment cash continued to pour into the AI marketplace all year.
Also in April, Scale AI brought in a $325 million Series E round, while Chinese startup Enflame had raised $278.5 million back in January.
AI conversation intelligence platform Chorus.ai was acquired by marketing software vendor ZoomInfo in July for $575 million to expand ZoomInfo’s go-to-market services for enterprises, while AI edge chip vendor Blaize took in $71 million to expand its edge and automotive products.
In August, data-centric AI platform vendor Snorkel AI secured $85 million in Series C funding to continue growing its Snorkel Flow platform, which helps data scientists and non-technical experts greatly reduce the time spent on AI modeling by automating data labeling and grouping programmatically, as sketched below.
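For context on the technique: programmatic labeling replaces much of the hand-annotation that model building normally requires. Here is a toy sketch using the open source snorkel library, which popularized the approach; it illustrates the labeling-function idea and is not Snorkel Flow’s actual API.

```python
# A toy sketch of programmatic labeling with the open source snorkel library.
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel
import pandas as pd

SPAM, HAM, ABSTAIN = 1, 0, -1

@labeling_function()
def lf_contains_link(x):
    # Heuristic: messages containing URLs are often spam.
    return SPAM if "http" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_short_reply(x):
    # Heuristic: very short messages are usually legitimate replies.
    return HAM if len(x.text.split()) < 5 else ABSTAIN

df = pd.DataFrame({"text": ["check out http://spam.example", "thanks, see you"]})

# Apply every labeling function to every row, producing a label matrix.
applier = PandasLFApplier([lf_contains_link, lf_short_reply])
L_train = applier.apply(df)

# The label model reconciles the noisy, conflicting votes into training labels.
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train)
print(label_model.predict(L_train))
```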
But there was even more.
In July, inference chip vendor Untether AI brought in $125 million in oversubscribed Series B funding to further develop and market its AI inferencing chips for neural networks and cloud infrastructure, while Mythic AI raised a $70 million Series C round in May to bring its M1108 Analog Matrix Processor (AMP) AI inferencing chips into mass production and to develop its next hardware and software products.
There were plenty of other smaller AI investments unveiled in 2021 as well, as the number of companies in the space continued to grow. It likely will not stay this way, of course: the best ideas will continue to get funding while less successful companies melt away or are acquired. But the pattern is clear – AI technologies are prime targets for investment and development, and the market for AI appears limitless as 2022 approaches.
- Nvidia’s Earth Digital Twin Moonshot
AI is powering voice bots, conversational AI and improvements in a wide range of tasks in business and society, but it is also being envisioned on a far grander scale: helping to solve a myriad of environmental and other challenges facing the Earth.
That’s the idea behind Nvidia’s moonshot Earth digital twin project, announced in November at the GPU maker’s fall GTC21 virtual conference. Using the company’s Nvidia Omniverse 3D virtual world design platform, Nvidia wants to go beyond helping enterprises model products, factories and assembly lines, and build the grandest digital twin so far: one of planet Earth itself, which would continuously model, predict and track climate change in real time so scientists can seek ways to slow or reverse its destructive effects.
The work will be done on a powerful new supercomputer the company is now building, called Earth-2, which will run AI physics created by the new Nvidia Modulus AI framework at million-x speeds on the Nvidia Omniverse platform, the company said.
“All the technologies we have invented up to this moment are needed to make Earth-2 possible,” said CEO Jensen Huang as he announced the project. “I cannot imagine a greater and more important use.”
The Nvidia Modulus AI framework trains physics-informed machine learning models: neural networks that learn to respect physical laws and can power industrial digital twins, helping enterprises with a wide range of development and business tasks as well as climate science, protein engineering and more.
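To illustrate the core idea, here is a minimal sketch of physics-informed training in PyTorch, applied to a toy one-dimensional equation. It shows the general technique that frameworks like Modulus build on, not Modulus’s actual API.

```python
# A toy physics-informed neural network (PINN): train a small network so its
# output u(x) satisfies the ODE du/dx = -u with u(0) = 1 (true solution: e^-x).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    # Random collocation points in [0, 1]; no labeled data is needed.
    x = torch.rand(128, 1, requires_grad=True)
    u = net(x)
    # Differentiate the network output w.r.t. its input to form the residual.
    du_dx = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    physics_loss = ((du_dx + u) ** 2).mean()                      # enforce du/dx = -u
    boundary_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # enforce u(0) = 1
    loss = physics_loss + boundary_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print(net(torch.tensor([[1.0]])).item())  # should approach e^-1 ≈ 0.368
```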
So, Nvidia is working to create an Earth digital twin? Think about that for a moment. This is huge. This is impressive. This has meaning for our planet. This is a true moonshot. This is what technology created to help humankind is supposed to look like. Now let us see where it goes and what happens.
- Nvidia Debuts Its Enterprise-Focused Megatron Large Language Model
Bringing large language model (LLM) capabilities directly to enterprises, to help them expand their business strategies, is a growing and important need in the world of business technology. With this market in mind, Nvidia in November unveiled its NeMo Megatron large language framework and its latest customizable 530 billion parameter Megatron-Turing language model at the company’s fall GTC21 conference. Both technologies aim to give enterprises promising capabilities to do more with their data and business planning. The large language models are meant to let enterprises build their own domain-specific chatbots, personal assistants and other AI applications that understand human language with greater subtlety, context and nuance.
The Megatron framework and model build on Nvidia’s work with the open source Megatron project, which is led by Nvidia researchers who study the training of large transformer language models at scale. Megatron 530B is the world’s largest customizable language model, according to the company.
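At the heart of Megatron-style training at scale is tensor (model) parallelism: splitting a layer’s weight matrices across GPUs so models too large for any single device can still be trained. The sketch below illustrates the column-splitting idea on plain tensors; it is a conceptual illustration, not Nvidia’s actual implementation.

```python
# Conceptual sketch of Megatron-style column parallelism for a linear layer.
import torch

torch.manual_seed(0)
batch, d_in, d_out = 4, 8, 6

x = torch.randn(batch, d_in)   # activations, replicated on every worker
W = torch.randn(d_in, d_out)   # full weight matrix of one linear layer

# Full (single-device) result for reference.
y_full = x @ W

# Split the weight columns across two workers; each computes its own slice.
W0, W1 = W[:, : d_out // 2], W[:, d_out // 2 :]
y0 = x @ W0  # would run on GPU 0
y1 = x @ W1  # would run on GPU 1

# Gathering the slices reproduces the full output exactly.
y_parallel = torch.cat([y0, y1], dim=1)
print(torch.allclose(y_full, y_parallel))  # True
```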
Nvidia’s large language model work amplified the interest in similar models from other organizations in 2021 as well.
In December, Alphabet’s DeepMind division unveiled its new 280 billion parameter language model named Gopher, as well as several smaller models, aiming to deliver further insights in this fast-growing area of AI and machine learning. In its experiments, DeepMind evaluated six Transformer-based language models, ranging in size from 44 million parameters up to the 280 billion parameter Gopher, on 152 diverse tasks, comparing their performance against the results of other language models in use.
And in October, China-based Inspur AI Research revealed the availability of its Yuan 1.0 language model, which has 245.7 billion parameters and was trained on 5TB of data. Yuan 1.0 was built from the ground up as a model for the Chinese language, which is complex and required a unique development approach compared to English, according to Inspur AI Research.
As 2021 winds down, 2022 approaches as a year in which AI will likely take on even greater importance in enterprise computing and business processes around the world. So, join us here at EnterpriseAI throughout 2022 as we observe, marvel at and navigate the changes and evolution heading our way in the world of enterprise AI.