Baidu Brain 7.0 AI Platform Announced; Baidu Kunlun II AI Chips Now in Mass Production
Baidu has launched Baidu Brain 7.0, the latest version of its open AI platform, and announced that its Kunlun II AI chips are now in mass production.
The updates were unveiled at the company’s flagship Baidu World 2021 technology conference, which was live-streamed from Beijing. The conference was held virtually due to the continuing COVID-19 pandemic.
Baidu Brain 7.0 is one of the world’s largest open AI platforms, according to the company, and is used by Baidu to support industrial applications of AI and Baidu Cloud. The new version more tightly integrates a wide array of knowledge sources with deep learning, including language comprehension and reasoning, and can generate output across language, voice and visual formats. Baidu provided no further details or specifications of the new release at the conference.
The previous version, Baidu Brain 6.0, developed more than 270 core AI capabilities and created over 310,000 models for developers, becoming a key driver of intelligent transformation across a wide range of industries, according to the company. The Baidu Brain platform debuted in 2016.
The mass availability of the new Kunlun II AI chip from Baidu means that customers will be able to use the latest chips along with the dramatically improved Baidu Brain 7.0 software to enable a new generation of AI applications, the company said in a statement.
The Kunlun II chips, which Baidu developed independently, promise two to three times more processing power than the first-generation Kunlun chips. The Kunlun II is built on a 7nm process and is equipped with Baidu's homegrown second-generation XPU neural processor architecture for cloud and edge AI, according to the company.
The Kunlun II chips provide up to 256 TOPS (INT8) and 128 TFLOPS (FP16), with a maximum power consumption of 120W, according to Baidu. The chips feature Arm CPUs, high-speed interconnects, security and virtualization, as well as an upgraded compiler engine and development kit. The company is not releasing any additional specifications or performance figures for the Kunlun II chips at this time.
The first-generation Kunlun chips were introduced in 2018 and entered mass production in late 2019. More than 20,000 first-gen Kunlun chips have been manufactured so far and have been deployed for search, smart assistant and cloud business needs, according to Baidu.
The original first-gen Kunlun chips used a 14nm process and offered 512 gigabytes per second (GB/s) of memory bandwidth. They delivered up to 260 TOPS at 150 watts and allowed Baidu's ERNIE pre-trained models for natural language processing to run inference three times faster than on conventional GPU/FPGA accelerators, according to Baidu.
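As a rough illustration of what the published peak ratings imply, the back-of-the-envelope calculation below compares INT8 throughput per watt for the two generations using only the figures quoted in this article. It is not a measured benchmark, and Baidu's two-to-three-times claim presumably reflects real-workload performance rather than these peak numbers.

```python
# Back-of-the-envelope INT8 efficiency from the peak figures cited in this article.
# These are vendor peak ratings, not measured workload results.
kunlun1_tops, kunlun1_watts = 260, 150   # first-gen: 260 TOPS at 150 W
kunlun2_tops, kunlun2_watts = 256, 120   # Kunlun II: 256 TOPS at up to 120 W

eff1 = kunlun1_tops / kunlun1_watts      # ~1.73 TOPS/W
eff2 = kunlun2_tops / kunlun2_watts      # ~2.13 TOPS/W

print(f"Kunlun I : {eff1:.2f} TOPS/W")
print(f"Kunlun II: {eff2:.2f} TOPS/W (~{eff2 / eff1:.2f}x on this crude peak metric)")
```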
The new Kunlun II chips can be used in cloud, terminal and edge computing scenarios for a wide variety of needs, including high-performance computing clusters, biocomputing, intelligent transportation and autonomous driving, according to Baidu. The Kunlun II chips are optimized for AI workloads such as voice, natural language processing and images, and support deep learning frameworks such as PaddlePaddle, Baidu's open source deep learning platform.
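For readers curious what targeting a Kunlun device looks like from the software side, the sketch below shows how a developer might select an XPU device and run a small model through PaddlePaddle's Python API. It assumes a PaddlePaddle build compiled with XPU support; the toy network is purely illustrative and is not Baidu sample code.

```python
import paddle
import paddle.nn as nn

# Select the accelerator. The "xpu" device string targets Baidu Kunlun chips in
# PaddlePaddle builds compiled with XPU support; fall back to CPU otherwise.
paddle.set_device("xpu" if paddle.is_compiled_with_xpu() else "cpu")

# A toy two-layer network, purely for illustration.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# Run a single forward pass on a random batch on the selected device.
x = paddle.randn([32, 128])
logits = model(x)
print(logits.shape)  # [32, 10]
```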
In March, Baidu’s Kunlun chip unit completed a funding round that reportedly valued it at about $2 billion, according to Reuters, and in a bigger move in June, Baidu spun off the Kunlun chipmaking unit as a standalone company.
Several AI analysts told EnterpriseAI that Baidu’s progress with its AI chips and platform is notable.
“Baidu Kunlun I was quite impressive at 256 TOPS, and so was the traction it engendered for Baidu’s PaddlePaddle framework” and its use by what the company claims is more than 3.6 million developers around the world, said Karl Freund, the founder and principal analyst of Cambrian AI Research. “That is very impressive. I suspect a lot of that was done on Nvidia GPUs [since PaddlePaddle also supports Nvidia GPUs], but the performance of Kunlun certainly captured a significant share.”
At the same time, though, Freund said he does not think that Baidu has adequate channels to penetrate markets outside of China. “But there is plenty of AI business in the mainland [of China] to go around, and Baidu is very well positioned to grab a lot of that for Kunlun II. At two to three times the performance of its predecessor, Kunlun II will be very competitive with Nvidia A100, the current leader in data center AI.”
Dan Olds, chief research officer at Intersect360 Research, called the announcements by Baidu “a big salvo in the race to get better and more optimized processors for AI workloads. Baidu has been working on AI from both the software and hardware angles for a long time and they have really advanced the state of the art in recent years.”
Olds said that the company’s statement about providing two to three times more performance on the new 7nm chip compared to the original Kunlun chips is impressive, but “I’d like to see some more numbers on how it performs on real world AI benchmarks and common tasks. I also would like to know how typical HPC/AI clusters can take advantage of the processor.”
Olds also noted that with the Kunlun chip manufacturing unit spun off from Baidu in June, the company could make the new Kunlun II processors available for sale to all comers. “We will see,” he said. “This certainly means more competition for AI accelerator companies, particularly Nvidia. It will be interesting to see how this all shakes out over time.”
Another analyst, Tony Baer of dbInsight, said that the rapid rise of specialized hardware for compute-intensive tasks reflects new tiering for data and analytics.
“On one hand, the ability to process what we used to call ‘big data’ was made possible by compute frameworks that could linearly scale commodity hardware,” said Baer. “But with AI being more compute-intensive, a need for specialized hardware emerged, and the result is the competition that you see with challengers such as Baidu coming up with their responses to Google, AWS and others and spawning a new arms race. Commodity hardware is still important for scaling analytics, but specialized hardware will complement it for specific tasks taking advantage of that scale.”
Kevin Krewell, principal analyst with TIRIAS Research, called Baidu’s latest chip “another example of a hyperscaler building custom silicon optimized for their particular workloads and software ecosystem. The goal is to be as efficient in power and cost as possible.”
Rob Enderle, principal analyst of Enderle Group, said that on paper Baidu looks to be doing well, but he added that it is not the chip that makes the difference. “It is training that has been the costly and time-consuming side of AI development,” said Enderle. “We are still at the early stages of AI development, so the partner and developer interest show promise. Still, we need to assess actual deployments before honestly telling if this effort is a game-changer.”
Meanwhile, AI market leader Nvidia is far ahead on training and technology use, making them the company to beat today, said Enderle. “But the market is young and dominance early on can be fleeting. I would use Nvidia as the gold standard until some other company passes them. Nvidia is already in trial or deployment in cars, robots and drones.”
But Baidu’s Kunlun II chips could gain traction in other markets, said Enderle. “I would expect them to make inroads in conversational computing, and with recommendation engines, where they should be able to gain a competitive advantage over others given their overall business focus,” he said. “Baidu does have a strong reputation for solid work, so they are clearly in the game.”