Covering Scientific & Technical AI | Wednesday, December 4, 2024

Artificial Super-Intelligence: 3 Roadblocks and the End of Moore’s Law 

Debate over AI’s ultimate impact on humanity, utopic vs. dystopic, has swung back and forth this year like a Hegelian dialectic on steroids. This publication has given voice to the dystopics, led by Elon Musk, who held sway in late spring and early summer. Now, as we enter autumn, we’re seeing thoughtful counterarguments from the, if not utopic, then at least anti-dystopic, side of the discourse.

Here’s one from Richard Windsor, analyst at Edison Investment Research, who recently wrote:

“There is considerable disagreement over the dangers presented to the human race by AI, but it will be the laws of physics that prevent dystopian predictions from coming true. At the Tech Crunch conference, Google’s head of AI was quick to dismiss Elon Musk’s concerns that AI could present an existential threat to humans or cause a third world war. Artificial super-intelligence is when machines become more intelligent than humans. To achieve this, computers need to continue evolving at an exponential rate for the next 23 years and three huge AI problems need to be solved.

“We have identified these problems as first, the ability to train AIs using much less data than today. Second, the creation of an AI that can take what it has learned from one task and apply it to another, and third the creation of AI that can build its own models rather than relying on humans to do it.

“Progress against these three goals is incredibly slow and only the very best companies are making any real progress at all. Everyone else claims to be working on AI but in reality is using advanced statistics to make predictions that have an improved probability of being correct. Even with the best minds working on these, it will be decades before these problems are even close to being solved. However, the real reason why AI will not overtake the human race comes down to Moore’s Law. If one extrapolates the exponential pace of computer capability over the last 40 years, one can predict that computer intelligence will overtake that of humans by 2040. This is what most of the predictions of artificial super-intelligence are based on and where much of the fear comes from. However, we do not think that the current breakneck pace of Moore’s Law can continue.

“10nm is currently the cutting-edge geometry for semiconductors and beyond around 5nm the laws of physics start to misbehave. This means that doubling the number of transistors in the same area of silicon every 18 months will no longer be possible using the transistors we know. It is this doubling that has underpinned the exponential improvement in computer capability over the last 40 years and without it, this improvement will slow to a crawl. In order to continue beyond this point a new form of transistor is required which could prove as fundamental a change as the shift from triode vacuum tubes to silicon transistors. Alternatives to silicon transistors are at such an early stage of development that it seems inevitable that Moore’s Law will grind to a halt long before a viable alternative is found. This will mean that the pace of improvement of computer capability will also slow down to the point where artificial super-intelligence drops way below the visible horizon.

“Hence, while Elon Musk is right to think that humans are in trouble if machines ever become more intelligent than man, it is so far away in time that Google is also right not to be worried about it.”
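The doubling arithmetic behind Windsor's 2040 projection is easy to make concrete. The sketch below takes the figures quoted above (an 18-month doubling period sustained over 40 years) at face value; the functions and the slower hypothetical doubling period are illustrative assumptions, not anything from the analyst note.

```python
import math

# "Doubling the number of transistors in the same area of silicon
# every 18 months" -- the cadence the article quotes.
DOUBLING_PERIOD_YEARS = 1.5

def growth_factor(years: float) -> float:
    """Capability multiplier after `years` of uninterrupted doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

def years_to_reach(multiplier: float) -> float:
    """Years of uninterrupted doubling needed to hit a given multiplier."""
    return DOUBLING_PERIOD_YEARS * math.log2(multiplier)

# Over the 40-year window the article cites, uninterrupted 18-month
# doubling compounds to roughly a hundred-million-fold increase:
factor_40y = growth_factor(40)  # about 1.07e8

# The flip side of Windsor's argument: if doubling slows, the same
# multiplier takes proportionally longer. At a hypothetical 6-year
# doubling period, the 40-year gain would instead take 160 years.
slowed_years = (6 / DOUBLING_PERIOD_YEARS) * 40
```

The point of the exercise is that extrapolations like "super-intelligence by 2040" are exponentials resting entirely on the doubling period; stretch that period even modestly and the crossover date recedes far past any forecastable horizon, which is the substance of the "drops way below the visible horizon" claim.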

AIwire