Debate Over Moore's Law: Interviews with Intel, AMD, Nvidia, and Qualcomm Reveal a Consensus That Progress Remains Constant
In a groundbreaking series of exclusive interviews, executives from tech giants AMD, Apple, Arm, Intel, MediaTek, Nvidia, and Qualcomm have shared their insights on the future of CPUs and GPUs. These in-depth conversations will be published throughout the week as part of our Silicon Survey special issue.
Performance Gains and the Quest for Efficiency
The interviewees agree that performance gains will remain a driving force in the tech industry for years to come, though the means of achieving them vary among chipmakers.
Nvidia's RTX 50-series GPUs, for instance, still use the same 4N (4-nanometer) node as the RTX 40-series, a combination the company says provides the best balance of performance, power, and price. Nvidia emphasizes optimization and AI as the keys to continued performance gains, and has built an AI software platform to maximize the potential of its GPUs.
Similarly, AMD's Director of Product Marketing, Adam Kozak, states that even with older architectures and nodes, major performance improvements can be achieved through software optimization.
The Debate over Moore's Law
Whether Moore's Law still holds has long been debated among chipmakers. Nvidia CEO Jensen Huang has repeatedly declared it dead, while former Intel CEO Pat Gelsinger argues otherwise. With the rise of artificial intelligence, Nvidia's argument has shifted from "Moore's Law is dead" to "Our systems are progressing way faster than Moore's Law."
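Moore's original observation is quantitative: transistor counts roughly double every two years. As a back-of-the-envelope illustration (the only hard number assumed below is the roughly 2,300-transistor count of Intel's 1971 4004), the projection can be sketched as:

```python
def moores_law_projection(base_count, base_year, target_year, doubling_period=2):
    """Project a transistor count assuming a doubling every `doubling_period` years."""
    doublings = (target_year - base_year) / doubling_period
    return base_count * 2 ** doublings

# Intel 4004 (1971): roughly 2,300 transistors.
projected_2021 = moores_law_projection(2_300, 1971, 2021)
print(f"{projected_2021:,.0f}")  # 77,175,193,600, i.e. on the order of 77 billion
```

Flagship chips of the early 2020s land within the same order of magnitude as this naive extrapolation, which is part of why "dead or not" remains a debate rather than a settled question.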
Sub-5nm and Sub-2nm Semiconductor Processes
Leading manufacturers are pushing beyond 3nm towards sub-2nm nodes, with AI and high-performance computing (HPC) driving over 60% of demand for these chips by 2030. These smaller nodes deliver higher speed and better power efficiency but face rising manufacturing complexity and cost, pushing the industry toward new architectures.
Chiplet-based Architectures and 3D Packaging
To overcome the physical and economic limits of Moore's Law scaling, modular chiplet designs are becoming mainstream. They allow flexible performance scaling, lower costs, and better yields. The chiplet market is forecast to reach $50 billion by 2030, with adoption across AI, cloud, and edge computing.
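The yield advantage behind that forecast can be made concrete with a standard first-order defect model (the Poisson yield model, Y = exp(-D*A)). The defect density and die areas below are illustrative assumptions, not figures from any interviewee:

```python
import math

def poisson_yield(defect_density, die_area):
    # First-order Poisson model: probability that a die has zero defects.
    return math.exp(-defect_density * die_area)

defect_density = 0.1   # defects per cm^2 (illustrative assumption)
monolithic_area = 4.0  # one large 4 cm^2 die
chiplet_area = 1.0     # the same silicon split into four 1 cm^2 chiplets

monolithic_yield = poisson_yield(defect_density, monolithic_area)
chiplet_yield = poisson_yield(defect_density, chiplet_area)

print(f"monolithic: {monolithic_yield:.1%}")  # ~67% of the large dies are good
print(f"chiplet:    {chiplet_yield:.1%}")     # ~90% of the small dies are good
```

Because chiplets are tested individually before packaging, the good small dies can be combined, so far less silicon is discarded per working product; that is the economic case for the modular approach.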
High-bandwidth Memory (HBM)
3D stacking technologies like HBM are becoming standard in AI and HPC servers, alleviating the memory bandwidth bottlenecks critical for large AI models. HBM is expected to grow explosively, at a compound annual growth rate (CAGR) of over 60% through the 2020s.
AI and Edge AI Acceleration
Companies such as Qualcomm, MediaTek, Apple, and Nvidia are designing specialized AI accelerators and GPUs optimized for real-time, low-latency computing at the edge and in data centers. The edge AI hardware market alone is projected to grow from $26 billion in 2025 to nearly $59 billion by 2030, propelled by IoT, autonomous vehicles, and smart devices.
Multi-core and Parallel Processing Optimization
Industry leaders including AMD and Intel continue to enhance multi-core architectures and parallel computing capabilities to meet expanding cloud and AI workload demands, focusing on energy-efficient designs for both consumer and enterprise systems.
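The split/compute/merge pattern that multi-core designs target can be sketched in a few lines. This is a generic Python illustration, not any vendor's scheduler; threads stand in for cores here, and for CPU-bound work in CPython a process pool would be the realistic choice:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker computes the sum of squares for its slice of the data.
    return sum(x * x for x in chunk)

data = list(range(1_000))
n_workers = 4
chunk_size = len(data) // n_workers
chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

# Split the work across workers, then merge the partial results.
with ThreadPoolExecutor(max_workers=n_workers) as pool:
    total = sum(pool.map(partial_sum, chunks))

# Same result as the serial computation; the point is the split/merge shape.
assert total == sum(x * x for x in range(1_000))
```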
Integration with Emerging Software and AI Ecosystems
As Apple and Arm emphasize tightly integrated hardware-software ecosystems, including AI-driven optimization, future devices will see improvements not only in raw speed but in intelligent power management and predictive performance tailoring.
Competitive Innovation in Data Center and Cloud Hardware
Nvidia leads in GPUs for AI training, while cloud providers and rivals such as AMD and Intel develop customized AI chips and ASICs, intensifying competition and accelerating hardware performance gains.
In summary, the combined efforts of AMD, Apple, Arm, Intel, MediaTek, Nvidia, and Qualcomm leverage cutting-edge semiconductor processes, modular architectures, advanced memory technologies, and AI-centric designs to drive the next decade of hardware performance and efficiency improvements. This trajectory supports growing demand for AI workloads, edge computing, and energy-conscious portable devices.
Stay tuned for more insights from our exclusive interviews in the upcoming days as part of the Silicon Survey special issue on our website.