Exploring cutting-edge chip design architectures built specifically for artificial intelligence and machine learning applications

Authors

Botlagunta Preethish Nandan
SAP Delivery Analytics, ASML, Wilton, CT, United States

Synopsis

The most important driver of AI is the knowledge explosion produced by the internet. There is increasing demand for AI with stronger cognitive capabilities to assist human endeavors such as scientific discovery and the analysis of massive data sets that cannot be understood intuitively. There is an urgent need to bring AI hardware and algorithms together to deliver brain-like, real-time intelligence. Neuromorphic computers would greatly expand both the variety and the efficiency of AI applications. Brain-like devices powering AI algorithms could be several orders of magnitude smaller while consuming several orders of magnitude less energy. Advanced synthesis tools could also be developed to assist scientists in generating hypotheses (Krishnamoorthy et al., 2023; Miller et al., 2023; Nagar et al., 2024).

Such systems must be able to learn complex relationships from structured data, continuously update in response to new experience, and generalize from that data to situations for which they have no previous examples. Nascent efforts in biologically plausible AI have thrown a spotlight on a much wider range of resources, from non-ideal silicon devices at the nanometer scale to large-scale nanotechnology, NEMS-based sensors, microfluidics, and ultra-miniaturized chips that orchestrate interactions among vast ensembles of states of matter. However, the non-ideal behavior of these devices makes real-time learning a nontrivial task and implies the need for smarter ways to train, program, and implement such algorithms.
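
As a toy illustration of the continual-update requirement, and of why device non-idealities complicate it, the sketch below performs streaming (online) least-squares updates while injecting random perturbations into the stored weights as a stand-in for non-ideal analog hardware. Everything here (the noise model, the learning rate, the data) is an illustrative assumption, not a method from this chapter.

```python
import numpy as np

# Toy illustration: online learning of a linear map y = W x under
# simulated weight-write noise, standing in for non-ideal analog devices.
# All settings here are illustrative assumptions.

rng = np.random.default_rng(0)
true_W = rng.normal(size=(1, 8))          # ground-truth relationship to be learned
W = np.zeros((1, 8))                      # weights stored "on device"
lr = 0.05                                 # learning rate
write_noise = 0.01                        # std-dev of noise added on every weight write

for step in range(2000):
    x = rng.normal(size=(8, 1))           # one new streaming example
    y = true_W @ x                        # target produced by the environment
    y_hat = W @ x                         # prediction with current (noisy) weights
    grad = (y_hat - y) @ x.T              # gradient of the squared error
    W -= lr * grad                        # continual, per-example update
    W += rng.normal(scale=write_noise, size=W.shape)  # imperfect weight write

print("final mean-squared weight error:", float(np.mean((W - true_W) ** 2)))
```

In an analog or neuromorphic substrate the write-noise term would come from device physics rather than a random-number generator, which is why training procedures have to be designed together with the hardware.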

Existing efforts to bring hardware and algorithms closer together are generally limited to AI chips that are variations of the NVIDIA GPU. Because existing AI chips capture few biophysical constraints, a biophysically inspired AI SoC architecture, such as one based on CMOS continuous-time reservoir computing, cannot be readily explored. Digital neural chips are the most established AI chip implementations for practical use: they can realize densely connected networks with very high throughput at an energy cost of roughly 250 pJ per MAC operation. By comparison, it is difficult, and in some cases appears infeasible, to implement recurrent neural networks (RNNs) or spiking neural networks (SNNs) in VLSI circuits (Patra, 2024; Rane et al., 2024; Wang et al., 2024).
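
To make the digital baseline concrete, the back-of-envelope sketch below turns the roughly 250 pJ-per-MAC figure cited above into energy per inference. The MAC counts for the example workloads are illustrative assumptions, not measurements from this chapter.

```python
# Back-of-envelope energy estimate for a digital neural chip,
# assuming ~250 pJ per multiply-accumulate (MAC) operation as cited above.
# The workload MAC counts below are illustrative assumptions.

ENERGY_PER_MAC_J = 250e-12  # 250 pJ per MAC

# Hypothetical workloads: name -> MAC operations per inference
workloads = {
    "small CNN (~0.1 GMAC/inference)": 0.1e9,
    "mid-size CNN (~4 GMAC/inference)": 4e9,
    "large model (~100 GMAC/inference)": 100e9,
}

for name, macs in workloads.items():
    energy_j = macs * ENERGY_PER_MAC_J
    print(f"{name}: {energy_j * 1e3:.1f} mJ per inference at 250 pJ/MAC")

# Example: 4e9 MACs * 250e-12 J = 1.0 J per inference, which is why
# neuromorphic and analog approaches target orders-of-magnitude lower energy per MAC.
```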

Published

7 May 2025

How to Cite

Nandan, B. P. (2025). Exploring cutting-edge chip design architectures built specifically for artificial intelligence and machine learning applications. In Artificial Intelligence Chips and Data: Engineering the Semiconductor Revolution for the Next Technological Era (pp. 16-33). Deep Science Publishing. https://doi.org/10.70593/978-93-49910-47-8_2