
Untether AI Releases Latest Version of imAIgine Software Development Kit 

TORONTO, Jan. 17, 2023 -- Untether AI, the leader in at-memory computation for artificial intelligence (AI) workloads, today announced the availability of the imAIgine Software Development Kit (SDK) version 22.12. The imAIgine SDK provides an automated path to running neural networks on Untether AI’s runAI devices and tsunAImi accelerator cards, with push-button quantization, optimization, physical allocation, and multi-chip partitioning. This release dramatically improves the speed with which developers can create and deploy neural networks or high-performance compute workloads, saving months of development time.

Increasing Developer Velocity for Custom Neural Networks

“There has been an explosion of neural networks over the last several years,” said Arun Iyengar, CEO of Untether AI. “Keeping up with support for these new, innovative networks requires an open, flexible tool flow, and with the 22.12 release of the imAIgine SDK we’ve made the necessary improvements to allow customers to quickly and easily add support without requiring Untether AI assistance.”

A key innovation in this release is the introduction of flexible kernels, which can automatically adapt to different input and output shapes of neural network layers. Additionally, Untether AI is giving customers the source code for these kernels as examples of code optimized for at-memory compute. Developers can modify these kernels and register them with the imAIgine compiler so that the compiler can select them during its automatic lowering process. In this manner, customers are free to self-support their neural network development. The imAIgine SDK includes a low-level kernel compiler, code profiler, and cycle-accurate simulator that give developers instant feedback on the performance of their custom kernels.
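To make the register-and-select workflow concrete, the Python sketch below models how a custom flexible kernel could be declared and matched to layer shapes during lowering. Every name here (the KernelSpec class, register_kernel, select_kernel) is a hypothetical placeholder for illustration only, not the actual imAIgine SDK API, which is documented on the customer portal.

```python
# Hypothetical sketch only: these names are illustrative placeholders,
# not the real imAIgine SDK API.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class KernelSpec:
    """Describes a custom kernel and the layer shapes it can handle."""
    name: str
    op_type: str                                  # e.g. "Conv2D", "MatMul"
    supports: Callable[[tuple, tuple], bool]      # predicate over (in_shape, out_shape)
    source_file: str                              # modified kernel source shipped with the SDK

# A registry a compiler could consult during automatic lowering.
KERNEL_REGISTRY: list[KernelSpec] = []

def register_kernel(spec: KernelSpec) -> None:
    """Make a custom kernel visible to the compiler's lowering pass."""
    KERNEL_REGISTRY.append(spec)

def select_kernel(op_type: str, in_shape: tuple, out_shape: tuple) -> Optional[KernelSpec]:
    """Pick the first registered kernel that can handle the given shapes,
    mimicking how flexible kernels might be matched to layers."""
    for spec in KERNEL_REGISTRY:
        if spec.op_type == op_type and spec.supports(in_shape, out_shape):
            return spec
    return None

# Example: a depthwise convolution kernel that adapts to any spatial size
# but requires the channel count to be divisible by 8.
register_kernel(KernelSpec(
    name="dwconv_flex",
    op_type="DepthwiseConv2D",
    supports=lambda i, o: i[-1] % 8 == 0,
    source_file="kernels/dwconv_flex.c",
))

print(select_kernel("DepthwiseConv2D", (1, 224, 224, 32), (1, 224, 224, 32)))
```

In this toy model, the profiler and cycle-accurate simulator mentioned above would then be used to measure the selected kernel before deployment.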

Introducing the High-Performance Compute Flow

“Customers are seeing the energy-centric benefits of Untether AI’s at-memory compute architecture in other, non-AI applications,” said Mr. Iyengar. “High-performance simulation, signal processing and linear algebra acceleration are a few of the applications that our customers are requesting.”

In response, the 22.12 release introduces a high-performance compute (HPC) design flow in the imAIgine SDK for runAI200 devices. The runAI200 devices contain 511 memory banks, each with its own RISC processor and a two-dimensional array of 512 at-memory processing elements arranged in a single-instruction, multiple-data (SIMD) architecture. With the HPC flow, customers can directly develop “bare metal” kernels for the RISC processors and processing elements in the runAI200 devices. Users can then manually place the kernels in any topology on the memory banks and use pre-defined code for bank-to-bank data transmission. The code profiler tool within the imAIgine SDK shows exactly how the code is running, identifying compute bottlenecks and data transmission congestion, which can then be rectified by duplicating kernels and re-placing them within the runAI200 spatial architecture.
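As a purely conceptual illustration of the manual-placement idea described above, the following Python sketch models the 511 memory banks as a flat index space, assigns kernels to specific banks, and replicates a kernel that a profiler might flag as a bottleneck. The class and method names are hypothetical and do not represent the imAIgine HPC flow's actual interface.

```python
# Hypothetical model of manual kernel placement on runAI200 memory banks.
# Structure and names are illustrative only, not the imAIgine HPC flow API.
NUM_BANKS = 511          # each bank: one RISC processor + 512 SIMD processing elements

class Placement:
    """Tracks which kernel occupies which memory bank."""

    def __init__(self) -> None:
        self.banks: dict[int, str] = {}   # bank index -> kernel name

    def place(self, kernel: str, bank: int) -> None:
        if not 0 <= bank < NUM_BANKS:
            raise ValueError(f"bank {bank} out of range")
        if bank in self.banks:
            raise ValueError(f"bank {bank} already holds {self.banks[bank]}")
        self.banks[bank] = kernel

    def duplicate(self, kernel: str, banks: list[int]) -> None:
        """Replicate a hot kernel across several banks to relieve a
        compute bottleneck reported by the profiler."""
        for b in banks:
            self.place(kernel, b)

# Place a simple signal-processing pipeline: an FFT stage feeding a filter
# stage, with the filter duplicated because it was flagged as the bottleneck.
layout = Placement()
layout.place("fft_stage", 0)
layout.duplicate("fir_filter", [1, 2, 3])
print(layout.banks)
```

The sketch only captures the bookkeeping; in the actual flow, bank-to-bank data transmission would use the pre-defined transfer code supplied with the SDK.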

Reducing the Learning Curve

Whether using the neural network flow or the HPC flow, Untether AI provides online and downloadable documentation for all of the imAIgine SDK’s tools and procedures to create, quantize, compile, and run neural networks or low-level kernel code on the runAI200 devices. Untether AI also offers a live, instructor-led training program that includes numerous tutorials and coding examples.

Availability

The latest version of the imAIgine SDK, version 22.12, is available today and can be downloaded from the Untether AI customer portal. To gain access, please visit www.untether.ai and request download privileges.

About Untether AI

Untether AI provides ultra-efficient, high-performance AI chips to enable new frontiers in AI applications. By combining the power efficiency of at-memory computation with the robustness of digital processing, Untether AI has developed a groundbreaking new chip architecture for neural net inference that eliminates the data movement bottleneck that costs energy and performance in traditional architectures. Founded in Toronto in 2018, Untether AI is funded by CPPIB, GM Ventures, Intel Capital, Radical Ventures, and Tracker Capital.


Source: Untether AI
