Author Archives: Doug Eadline
Nvidia Releasing Open-Source Optimized TensorRT-LLM Runtime with Commercial Foundational AI Models to Follow Later This Year
September 14th, 2023
Nvidia's large language models will become generally available later this year, the company confirmed. Organizations widely rely on Nvidia's graphics processors to write AI applications. The company has also created proprietary pre-trained models similar to OpenAI's GPT-4 and Google's PaLM-2. ...
MLPerf Releases Latest Inference Results and New Storage Benchmark
September 14th, 2023
MLCommons this week issued the results of its latest MLPerf Inference (v3.1) benchmark exercise. Nvidia was again the top-performing accelerator, but Intel (Xeon CPU) and Habana (Gaudi1 and 2) performed well. Google provided a peek at its new ...
Nvidia H100: Are 550,000 GPUs Enough for This Year?
August 21st, 2023
The GPU squeeze continues to place a premium on Nvidia H100 GPUs. In a recent Financial Times article, Nvidia reported that it expects to ship 550,000 of its latest H100 GPUs worldwide in 2023. The appetite for GPUs is obviously coming ...
GigaIO’s New SuperNode Takes Off with Record-Breaking AMD GPU Performance
August 11th, 2023
The HPC user's dream is to keep stuffing GPUs into a rack-mount box and make everything go faster. Some servers offer up to eight GPUs, but the standard server usually provides four GPU slots. Fair ...