
SambaNova Launches Second-Gen DataScale System

SambaNova Systems is announcing – and shipping – its second-generation DataScale system, the DataScale SN30. Powered by the eponymous Cardinal SN30 RDU (Reconfigurable Dataflow Unit), SambaNova claims that the ...Full Article

Nvidia, Qualcomm Shine in MLPerf Inference; Intel’s Sapphire Rapids Makes an Appearance

The steady maturation of MLCommons/MLPerf as an AI benchmarking tool was apparent in today’s release of MLPerf v2.1 Inference results. Twenty-one organizations submitted 5,300 performance results and 2,400 power measurements. While ...Full Article

Intel Gives a Name to Data Center GPUs for Cloud Gaming and AI

Intel has been hyping up its media delivery and cloud gaming GPUs, codenamed Arctic Sound-M, and has now given them a formal name: Flex Series GPUs. The Flex Series GPUs will ...Full Article

Tesla Expands Its GPU-Powered AI Supercomputer – Is Dojo Next?

Tesla has revealed that its biggest in-house AI supercomputer – which we wrote about last year – now has a total of 7,360 A100 GPUs, a nearly 28 percent uplift ...Full Article

CXL Brings Datacenter-sized Computing with 3.0 Standard, Thinks Ahead to 4.0

A new version of a standard backed by major cloud providers and chip companies could change the way some of the world’s largest datacenters and fastest supercomputers are built. ...Full Article

IBM Research Open-Sources Deep Search Tools

IBM Research’s Deep Search product uses natural language processing (NLP) to “ingest and analyze massive amounts of data—structured and unstructured.” Over the years, Deep Search has seen a wide ...Full Article

Samsung Announces 24Gbps GDDR6 DRAM for 30% Faster Speeds

Samsung Electronics announced Thursday it has begun sampling a 16Gb (gigabit) Graphics Double Data Rate 6 (GDDR6) DRAM featuring 24-gigabit-per-second processing speeds. The new high-speed memory chips for graphics cards ...Full Article

To Sell More Hardware, Chipmakers Step Up to Make AI Straightforward

Hardware makers have spent years and billions of dollars building up wares for AI, but are now asking themselves: how do we make AI straightforward for small and large ...Full Article

HPE to Ship First Inference Server with Qualcomm Chip in August

HPE will ship its first server aimed at AI inferencing in August, featuring a chip from Qualcomm. The server, which will be part of the Edgeline 8000 platform, will ...Full Article

HPE Pushing IT as a Utility Model, with Hardware as a Facilitator

HPE is positioning itself to be a utility company for IT, with more revenue derived from services, and the hardware and infrastructure playing a background role as a facilitator. ...Full Article