Is Exponential Growth in Computational Power Slowing Down?

With advancements in technology, computational power has historically grown at an exponential rate, as predicted by Moore's Law. However, some experts suggest that this growth may be slowing down, marking a potential end to an era of rapid technological progress. Is this true, or are there still ways to continue enjoying the gains of the past half-century?

The Challenges Faced

The exponential growth in computational power faces several physical limits. One is quantum tunneling: as transistors shrink, electrons increasingly pass through the thin insulating barriers meant to contain them, causing leakage and making it difficult to keep shrinking transistors and raising processor speeds indefinitely. Another hurdle is the wavelength of light used in photolithography, a critical process in semiconductor fabrication: features cannot be etched much smaller than the wavelength of the light that patterns them, which puts a floor under how small transistors can get with a given light source.

Moore's Law: Definitions and Reality

Moore's Law, often cited as a guideline for the exponential increase in transistor density, was never a solid, unbreakable law. Rather, it was an empirical observation by Gordon Moore, who noted in 1965 that the number of transistors in a dense integrated circuit doubled roughly every year, a period he later revised to about every two years. It was never a set rule, only a generalized trend that the industry worked hard to sustain.
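To see what that trend implies numerically, here is a minimal sketch (our own illustration, not from Moore's papers) of how transistor counts compound under an idealized two-year doubling period:

```python
# Illustrative only: project transistor counts under an idealized
# Moore's Law doubling period of two years.
def projected_count(initial_count: float, years: float, doubling_years: float = 2.0) -> float:
    """Transistor count after `years`, assuming steady exponential doubling."""
    return initial_count * 2 ** (years / doubling_years)

# A chip with 1 billion transistors doubles five times in a decade.
print(projected_count(1_000_000_000, 10))  # 32000000000.0
```

Ten years of two-year doublings is a 32x increase, which is why even a modest slip in the doubling period compounds into a large shortfall over a decade.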

Despite the slowdown in the rate of doubling, computing power continues to increase. Chips such as Intel's Core i9-12900K show multi-core and single-core performance nearly doubling relative to previous generations. This suggests that the spirit of Moore's Law, steadily rising performance rather than a strict two-year doubling of transistor density, has not been entirely broken.

Moore's Law is Over, According to Experts

According to Professor Charles Leiserson of MIT, Moore's Law has been over since at least 2016. The typical doubling time of two years has been broken, as it took Intel five years to go from 14-nanometer technology in 2014 to 10-nanometer technology in 2019. The implications of this are significant, especially in the context of generative AI and large language models (LLMs).
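A back-of-the-envelope calculation shows how far that five-year node transition falls short of the old cadence. The assumption below is ours, not the article's or Intel's: node names are treated as literal feature sizes, with transistor density scaling as the inverse square of feature size. Real node names are closer to marketing labels, so this is illustrative only.

```python
import math

def implied_doubling_years(old_nm: float, new_nm: float, elapsed_years: float) -> float:
    """Doubling time implied by a node shrink, assuming density ~ 1/(feature size)^2."""
    density_gain = (old_nm / new_nm) ** 2  # ~1.96x for 14 nm -> 10 nm
    return elapsed_years * math.log(2) / math.log(density_gain)

# Intel's 14 nm (2014) to 10 nm (2019) transition took five years.
print(round(implied_doubling_years(14, 10, 5), 2))  # 5.15
```

Under this rough model, the 14 nm to 10 nm shrink delivered just under one density doubling in five years, an implied doubling time of roughly five years rather than the historical two.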

Professor Leiserson argues that, with transistor scaling stalled, the main way to achieve more computing capacity today is to build bigger machines, which consume more energy. This could have substantial negative impacts on the climate, especially in the event of an AI arms race.

Tackling the Post-Moore Era

Despite the challenges, there are still ways to continue gaining computational power. Professor Leiserson, along with several MIT colleagues, published a paper titled "There's Plenty of Room at the Top: What Will Drive Computer Performance After Moore's Law" in 2020. The paper argues that future gains in computer performance will come from the "Top" of the computing stack: software, algorithms, and hardware architecture, rather than from the "Bottom" of ever-smaller transistors.

Software performance engineering, together with better algorithms and streamlined hardware architecture, offers promising ways to improve system efficiency and speed. Professor Leiserson uses the analogy of retirement to describe this transition: during the Moore's Law era, performance gains arrived automatically with each hardware generation, whereas the post-Moore era requires earning them deliberately through better coding and optimization.

In their research, the team achieved significant speedups through software optimization alone, although not as dramatic or as sustained as the historical gains from hardware. Software performance engineering thus offers a promising path to continued substantial gains in computing capacity, even as hardware advances slow.
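The flavor of these software-level gains can be seen in the paper's well-known opening example, a naive matrix multiplication rewritten in progressively faster forms. The sketch below is a simplified version of that idea (our construction, not the authors' benchmark code): a pure-Python triple loop compared against a vectorized library multiply.

```python
import time
import numpy as np

n = 150  # small enough that the naive version finishes quickly

A = np.random.rand(n, n)
B = np.random.rand(n, n)

def naive_matmul(A, B):
    """Textbook triple-loop matrix multiply on plain Python lists."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

t0 = time.perf_counter()
naive = naive_matmul(A.tolist(), B.tolist())
t_naive = time.perf_counter() - t0

t0 = time.perf_counter()
fast = A @ B  # vectorized, BLAS-backed multiply
t_fast = time.perf_counter() - t0

print(f"naive: {t_naive:.3f}s, numpy: {t_fast:.5f}s, speedup ~{t_naive / t_fast:.0f}x")
```

The two results are numerically identical; only the implementation changes. The exact speedup depends on the machine, but moving from interpreted loops to cache-friendly, vectorized code routinely buys orders of magnitude, which is precisely the "room at the Top" the paper describes.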

While the exponential growth in computational power is slowing down, the potential for continued progress through innovative software and architecture design remains strong. As we enter the post-Moore era, the emphasis will shift from hardware-driven improvements to performance engineering.