Covering Scientific & Technical AI | Friday, February 21, 2025


MLCommons Releases AILuminate v1.1, Adding French Language Capabilities to AI Safety Benchmark

PARIS, Feb. 11, 2025 -- MLCommons, in partnership with the AI Verify Foundation, today released v1.1 of AILuminate, incorporating new French language capabilities into its first-of-its-kind AI safety benchmark. ...

MLCommons Introduces MLPerf Client v0.5

SAN FRANCISCO, Dec. 11, 2024 -- MLCommons, the leading open engineering consortium dedicated to advancing machine learning (ML), is excited to announce the public release of the MLPerf Client v0.5 ...

Shining a Light on AI Risks: Inside MLCommons’ AILuminate Benchmark

As the world continues to navigate new pathways brought about by generative AI, the need for tools that can illuminate the risk and reliability of these systems has never ...

MLCommons Launches AILuminate Benchmark to Measure Safety of LLMs

SAN FRANCISCO, Dec. 4, 2024 -- MLCommons today released AILuminate, a first-of-its-kind safety test for large language models (LLMs). The v1.0 benchmark, which provides a series of safety grades ...

NVIDIA: Blackwell Delivers Next-Level MLPerf Training Performance

Nov. 13, 2024 -- Generative AI applications that use text, computer code, protein chains, summaries, video and even 3D graphics require data-center-scale accelerated computing to efficiently train the large language ...

New MLPerf Training v4.1 Benchmarks Highlight Industry’s Focus on New Systems and GenAI Applications

SAN FRANCISCO, Nov. 13, 2024 -- Today, MLCommons announced new results for the MLPerf Training v4.1 benchmark suite, including several preview category submissions using the next generation of accelerator hardware. ...

New MLPerf Storage v1.0 Benchmark Results Show Storage Systems Play a Critical Role in AI Model Training Performance

SAN FRANCISCO, Sept. 25, 2024 -- Today, MLCommons announced results for its industry-standard MLPerf Storage v1.0 benchmark suite, which is designed to measure the performance of storage systems for ...

New MLPerf Inference v4.1 Benchmark Results Highlight Rapid Innovations in GenAI Systems

Aug. 29, 2024 -- MLCommons has announced new results for its industry-standard MLPerf Inference v4.1 benchmark suite, which delivers machine learning (ML) system performance benchmarking in an architecture-neutral, representative, ...

AMD Achieves Strong MLPerf Inference Results with Instinct MI300X GPUs

Aug. 28, 2024 -- AMD Instinct MI300X GPUs, powered by one of the latest versions of open-source ROCm, achieved impressive results in the MLPerf Inference v4.1 round, highlighting the strength of ...

Intel Xeon 6 Demonstrates Enhanced AI Inference Capabilities in MLPerf Testing

Aug. 28, 2024 -- Today, MLCommons published results of its industry-standard AI performance benchmark suite, MLPerf Inference v4.1. Intel submitted results across six MLPerf benchmarks for 5th Gen Intel ...
AIwire