Why LLMs Are the Best Thing to Happen to Chip Design


Jun 21, 2025
Kartik Hegde
We live in a world increasingly shaped by artificial intelligence. At the heart of this revolution lies a piece of technology that most people never see: the chip.
Fueled by Moore’s Law, compute-per-dollar and compute-per-watt have improved by more than a million times over the past few decades. These massive gains have enabled us to train large AI models on vast datasets, unlocking capabilities we once only imagined, such as the large language models (LLMs) powering today’s most advanced AI applications.
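As a rough sanity check on that million-fold figure (a back-of-the-envelope estimate, assuming one doubling every two years over roughly four decades, not a precise accounting):

$$2^{40/2} = 2^{20} \approx 1.05 \times 10^{6}$$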

There is growing evidence that scaling laws are continuing to hold true for AI models. Increasing the compute used for training by an order of magnitude can result in a generational leap in model performance, as depicted by the graph above. As such, in the pursuit of artificial general intelligence (AGI), nearly every major tech company is now racing to build massive compute infrastructure to train these next-generation models.
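For readers unfamiliar with scaling laws, they are typically summarized as a power law relating pre-training loss L to training compute C. The exponent is fit empirically and varies across studies, so the form below is a generic illustration rather than a result from this article:

$$L(C) \propto C^{-\alpha_C}, \qquad \alpha_C > 0$$

Under such a relationship, each order-of-magnitude increase in C yields a predictable multiplicative reduction in loss, which is why compute build-outs translate into generational model improvements.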
If chips are so fundamental to achieving AGI, why isn’t every large AI company building its own? The answer: They are. In fact, there are more companies building custom silicon today than ever before!
Custom Silicon: Two-Year Minimum, 100+ Engineers
While there is massive interest in building custom silicon for applications like AI, designing and taping out a chip remains a Herculean task. It demands immense engineering effort, estimated at over 1,000 engineering months for an ASIC of typical complexity. Additionally, companies must identify and hire rare talent with expertise across the many phases of chip design and have a leadership team willing to invest capital while bearing significant risk.
So, what makes chip design today so complex? Let’s find out.
Historical Perspective: Shifting Bottlenecks
Key Insight: Decades of pioneering EDA work on synthesis and place & route, together with the rise of HDLs, has shifted the bottleneck to chip verification.
To understand the complexity of chip design today, let’s first expand the key steps in designing a chip. We’ll then travel back in time to examine the historical perspective of how bottlenecks in the chip design process have shifted.
Key Stages of Chip Design

Specification & Architecture: Capture functional goals and PPA (power, performance, area) targets, and draft a high-level micro-architecture that meets the product requirements.
Design & RTL Development: Implement the architecture in an HDL (e.g., Verilog), writing clean, synthesizable RTL with clear timing and power intent.
Functional Verification: Use simulation, formal, and emulation to exhaustively prove the RTL behaves as intended before silicon dollars are at stake (a minimal testbench sketch follows this list).
Logic Design & Synthesis: Translate RTL to a gate-level netlist under constraints, optimizing for area, power, and timing while mapping to the foundry's standard-cell libraries.
Physical Design: Floor-plan, place, clock-tree, and route the netlist; run STA, DRC/LVS, and power-integrity checks to produce tape-out-ready GDSII.
Packaging & Test: Define I/O ring and package, insert scan/BIST for manufacturability, and prepare ATE vectors for wafer sort, assembly, and final silicon validation.
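To make the functional verification stage concrete, here is a minimal, illustrative testbench sketch using cocotb, an open-source Python verification framework. The design under test (a simple counter with clk, rst, and count ports) is assumed for the example and is not from this article.

```python
# Minimal cocotb testbench sketch; "counter" and its ports are assumed for illustration.
import cocotb
from cocotb.clock import Clock
from cocotb.triggers import RisingEdge


@cocotb.test()
async def counter_increments(dut):
    """Drive clock and reset, then check that the counter increments every cycle."""
    cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())  # 100 MHz clock

    dut.rst.value = 1
    await RisingEdge(dut.clk)
    dut.rst.value = 0
    await RisingEdge(dut.clk)

    prev = int(dut.count.value)
    for _ in range(10):
        await RisingEdge(dut.clk)
        curr = int(dut.count.value)
        assert curr == prev + 1, f"expected {prev + 1}, got {curr}"
        prev = curr
```

Real verification environments layer constrained-random stimulus, scoreboards, assertions, and coverage collection on top of directed tests like this one, which is where much of the effort discussed below goes.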
If you are wondering which stage takes the most effort and engineering hours, the answer depends on when in history you ask.
The 1980s–2000s: Optimizing Logic Synthesis and Place & Route
Back in the 1980s, when chips had only a few thousand transistors, the design process looked very different from how it does today. Engineers designed circuits and placed transistors by hand. As the number of transistors grew, this approach quickly became unscalable. The emergence of electronic design automation (EDA) transformed the industry, diminishing manual effort and enabling designers to build much larger and more complex chips.
Additionally, the emergence of hardware description languages (HDLs) like Verilog in 1986 abstracted away low-level circuit details (much like C and C++ did for assembly programming), allowing engineers to focus on architecture and logic. The combination of HDLs and EDA brought a significant boost to the productivity of chip designers, making it possible to produce the massive chips that we see today.

In parallel, Moore’s Law continued to fuel the semiconductor industry, doubling the number of transistors every two years at the same cost. The availability of all these transistors led to scaling the number of cores on a chip, as well as increasing the complexity of the architecture.
To summarize:
EDA tools emerged to address difficulties in place and route (P&R) and logic synthesis.
HDLs made it easier to design logic and resulted in a major productivity boost.
Availability of more transistors enabled chip designers to build more complex, higher-performance chips.
The net effect of these factors is clear from the figure above: Over the last three decades, innovations in chip design methodology and Moore’s Law have delivered massive, multi-billion-transistor chips. The growing maturity of EDA tools for synthesis and P&R continued to support the growth of chips in size and complexity. As such, the key bottlenecks in chip design shifted to the earlier stages of the flow: functional verification, logic design, and architectural innovation.
2000s and Beyond: Increasing Complexity of Functional Verification

As chips grew larger and more complex, ensuring the functional correctness of the RTL became more cumbersome. Today, functional verification accounts for the vast majority of the effort in chip development, often as much as 60–70%. It is a complex, human-driven process involving deep architectural reasoning, edge-case testing, and a careful understanding of constraints.
Verification has expanded into a multi-layered, multi-method process that spans both the pre-silicon and post-silicon stages. Below are some examples of how verification approaches might vary:
Design hierarchy:
Block-level verification (unit testing)
Subsystem-level verification
Chip/system-level verification
Methodology:
Simulation
Formal verification
Emulation
Gate-level simulation
Abstraction level:
Spec-level (e.g., natural language or executable spec)
RTL-level
Netlist-level
Post-layout (SPICE-level)
Verification intent:
Functional verification (does it do what it’s supposed to?)
Structural verification (e.g., connectivity, lint, DRC, CDC, RDC)
Power-aware verification (UPF/CPF checks)
Security verification (e.g., side channels, isolation)
Timing verification (e.g., static timing analysis)
Environment:
Pre-silicon (simulation/emulation)
Post-silicon (bring-up, system validation, in-field testing)
Each stage plays a critical role in ensuring correctness as complexity scales. The combinatorial explosion here is real: a chip with n transistors can theoretically exist in 2ⁿ possible states. For a chip with 1 billion transistors (10⁹), the number of possible binary states is roughly 10^(3.0 × 10⁸), which is incomprehensibly larger than any physical quantity we can observe! While most of these states are irrelevant or unreachable in practice, this exponential growth gives a sense of the overwhelming complexity involved in verifying modern chips.
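The exponent in that figure follows from a one-line conversion from base 2 to base 10; a quick check in Python:

```python
# Back-of-the-envelope check of the state-count estimate above.
import math

n_transistors = 10**9
log10_states = n_transistors * math.log10(2)  # log10(2^n) = n * log10(2)
print(f"2^(10^9) ~ 10^{log10_states:.3g}")    # prints: 2^(10^9) ~ 10^3.01e+08
```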
It’s Getting Harder—And More Costly—To Build Chips

The challenges continue. It is taking more time and effort to build next-generation chips than before. There are three key driving forces:
Moore’s Law continues to add more transistors, meaning ever-larger chips
Specialization continues to increase, and algorithms grow more complex
All of this added complexity must be delivered at smaller process nodes and under more stringent power budgets
All of these factors directly increase the functional verification effort. Leaving the verification bottleneck unsolved is not an option for fast-moving chip design teams today.
How LLMs Are Changing the Chip Design Game
AI is not new for chip design. What’s different this time?
AI has been used in chip design before, most notably for placement and routing tasks, which lend themselves well to black-box optimization techniques like reinforcement learning. These techniques have been shown to improve the chip design flow. But because synthesis, P&R, and related stages have already been automated to a large extent, applying AI there offers only marginal gains and fails to address the true bottleneck: the cognitive effort required to design and verify chips.
Why didn’t the industry create a tool for automating verification as well, much like synthesis?
The answer is simple: Design and verification are fundamentally natural-language reasoning problems. Engineers must understand specifications, reason through architectural intent, and ensure that the implementation aligns with that intent. All of these tasks involve processing, generating, and interpreting human language. Historically, no technology could automate this kind of work, so it has remained manual.
This is where large language models come in. LLMs excel at precisely what design and verification need: natural-language understanding and reasoning. With LLMs, understanding design intent, figuring out what to test, writing code, agentically running the right tools, debugging failures, and finding coverage holes all become automatable. These new capabilities have galvanized the team at ChipStack to pioneer the next generation of verification tooling.
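As an illustration only, the agentic flow described above might be organized as a loop like the following sketch. The helper functions (llm_complete, run_simulation) are hypothetical placeholders, not ChipStack's product or any specific vendor API.

```python
# Hypothetical sketch of an LLM-driven verification loop. All names are illustrative.
from dataclasses import dataclass


@dataclass
class SimResult:
    passed: bool
    log: str


def llm_complete(prompt: str) -> str:
    """Placeholder for a call to any LLM completion API."""
    raise NotImplementedError


def run_simulation(testbench: str) -> SimResult:
    """Placeholder for compiling and running the testbench in an RTL simulator."""
    raise NotImplementedError


def verify(spec: str, max_iterations: int = 5) -> str:
    """Generate a testbench from a natural-language spec, then iteratively repair it."""
    testbench = llm_complete(f"Write a testbench for this specification:\n{spec}")
    for _ in range(max_iterations):
        result = run_simulation(testbench)
        if result.passed:
            return testbench
        # Feed the failure log back to the model and ask for a revision.
        testbench = llm_complete(
            "The testbench failed. Revise it.\n"
            f"Failure log:\n{result.log}\n"
            f"Specification:\n{spec}\n"
            f"Current testbench:\n{testbench}"
        )
    raise RuntimeError("did not converge within the iteration budget")
```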
There is a lot of excitement in this rapidly evolving landscape, but there are also many more questions to be answered:
How should—or how must—chip design methodology change in the era of LLMs?
How much of the design and verification process can be successfully automated?
How will the roles of human engineers evolve and change alongside their new AI “colleagues”?
… and a whole host of other queries that we, as an industry, have not even considered yet.
We will explore these questions and more in future articles.
The Road Ahead: Rise of New Abstractions for Chip Design
As chips grow in complexity and the demand for compute continues to surge, LLMs may become essential not only to AI workloads but also to the hardware that enables them. These models have the potential to revolutionize chip design itself, closing the loop between silicon and software in ways we've never seen before.
I believe chip design is poised for another “Verilog moment”—a shift to a higher level of abstraction. We expect a move beyond RTL to a new representation that allows engineers to express intent more intuitively, potentially through natural language rather than rigid syntax. At ChipStack, we refer to this as the “mental model,” and we’re actively researching how to make it a practical reality.
The future of AI will be shaped by the chips we build—and, soon, those chips may be shaped by the AI we have trained. It’s a two-way street. This cycle promises to accelerate innovation in both silicon and software, bringing us closer to the next generation of computing capabilities.
About Kartik Hegde, Co-Founder & CEO, ChipStack
Kartik Hegde is the co-founder and CEO of ChipStack, a company reimagining chip design and verification, leveraging generative AI to dramatically reduce the effort required to build complex chips. Before founding ChipStack, Kartik held research and engineering roles at Arm, NVIDIA, Facebook AI Research, and the Allen Institute for AI, where he drove various projects on compute-efficient AI algorithms and specialized silicon for machine learning. His work has been recognized with the Facebook Fellowship for Hardware & Software Infrastructure for Machine Learning. Kartik earned his Ph.D. in Computer Science from the University of Illinois Urbana-Champaign, specializing in computer architecture and machine learning.