How Foundation Models Will Impact Engineering And Scientific Discovery

Posted by Yujie Huang, Forbes Councils Member


Yujie Huang is Co-Founder of KronosAI.

For decades, simulation has been the cornerstone of scientific discovery and hardware design. But despite the advances we’ve made in computing power and tooling, the way we run simulations hasn’t fundamentally changed. Engineers and scientists still rely on numerical solvers to compute every single instance of a physics problem from scratch, over and over again. There’s no learning in the loop. That’s a huge waste of data, time and talent.

If there’s one key takeaway I hope readers get from this article, it’s that we’re on the cusp of a fundamental shift. In the next five years, I believe a large portion of simulations across science and engineering will be powered by AI, specifically by foundation models trained on physics laws. This will make simulations orders of magnitude faster and far more generalizable.

A GPT-3 Moment For Physics

In physics, many domains are governed by the same underlying equations, such as Maxwell's equations in electromagnetics. But in today's approach, each simulation is treated as a new problem. There's no reuse, no accumulated knowledge. Foundation models allow us to change that.

Instead of solving Maxwell’s equations from scratch every time, imagine training a model to deeply understand the underlying physics. Then, when a new problem comes up, you simply pass it through the model with an inference call. That’s it. No complex solver setup. No waiting hours or days for results. It’s like the GPT-3 moment, but for engineering. One model, many applications.
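To make the contrast concrete, here is a toy sketch in Python. The "solver" relaxes a tiny 1D heat-conduction problem from scratch on every call, while the "surrogate" stands in for a trained model that maps inputs to the answer in one cheap evaluation. The surrogate here is hand-coded rather than learned, purely to illustrate the difference in cost per query:

```python
# Toy "physics problem": steady-state temperature at the midpoint of a rod.
# Classically this requires an iterative numerical solve for each new input.
def iterative_solver(left_temp, right_temp, n_sweeps=10_000):
    """Stand-in for a conventional solver: relaxes a 1D heat equation
    to steady state from scratch on every call."""
    grid = [0.0] * 5
    grid[0], grid[-1] = left_temp, right_temp
    for _ in range(n_sweeps):
        for i in range(1, len(grid) - 1):
            grid[i] = 0.5 * (grid[i - 1] + grid[i + 1])
    return grid[len(grid) // 2]

# A "trained" surrogate: for this linear problem the learned mapping reduces
# to averaging the boundary temperatures -- one cheap inference call.
def surrogate(left_temp, right_temp):
    return 0.5 * (left_temp + right_temp)

print(iterative_solver(0.0, 100.0))  # ~50.0, after thousands of sweeps
print(surrogate(0.0, 100.0))         # 50.0, in a single evaluation
```

A real foundation model would of course be far more than an average, but the economics are the same: the expensive work moves into training, and each new problem becomes a fast forward pass.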

That’s the work we’re doing, developing foundation models for physics simulation that can generalize across domains and drastically reduce simulation time.

From Weekends To Seconds

Right now, it’s common for engineers to start a simulation on Friday, let it run all weekend, and come back Monday just hoping it worked. If not, they’re back at square one. That process is frustrating and severely limits what’s possible.

Now imagine if that same simulation could be completed in seconds. Engineers could run thousands of design iterations, test new ideas instantly and refine models on the fly. This speed unlocks new workflows, like design space exploration and inverse design, where algorithms automatically optimize designs within a set of constraints.
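Once each evaluation takes milliseconds, brute-force design space exploration becomes practical. The sketch below sweeps thousands of random candidates through a hypothetical surrogate; the gain formula and parameter names are invented for illustration only:

```python
import random

# Hypothetical fast surrogate: predicts antenna gain (dB) from two design
# parameters. The closed form below is invented for illustration only;
# a real surrogate would be a trained model.
def predicted_gain(length_mm, width_mm):
    return 10.0 - 0.1 * (length_mm - 31.0) ** 2 - 0.5 * (width_mm - 4.0) ** 2

# With millisecond inference, sweeping thousands of candidates is trivial --
# the kind of loop that would take weeks with a traditional solver.
random.seed(0)
candidates = [(random.uniform(20, 40), random.uniform(2, 6))
              for _ in range(5_000)]
best = max(candidates, key=lambda c: predicted_gain(*c))
print(best, predicted_gain(*best))  # a design near the optimum (31 mm, 4 mm)
```

The same loop structure scales to any number of parameters; the bottleneck becomes the search strategy, not the simulation.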

We’re talking about a massive increase in what’s possible in terms of speed, quality and innovation.

Inverse Design: Engineering Beyond Intuition

What excites me most is the shift from trial-and-error engineering to inverse design. Traditionally, you'd pick from a known library of modules and hope something works. If it doesn't, you start over. That's not a system designed for innovation; it's a system designed for iteration.

Inverse design flips that model. You define what you want—your goals, constraints, performance metrics—and let the system search the design space automatically. That’s how you unlock solutions that would be impossible to discover by human intuition alone.
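A minimal sketch of that flip, assuming a fast surrogate is available: the engineer specifies only the target (a 5.8 GHz resonance), and a search routine finds the geometry. The resonance formula and the simple random search are stand-ins; real inverse design would use gradient-based or evolutionary optimizers over a learned model:

```python
import random

# Hypothetical surrogate: resonant frequency (GHz) of a patch as a function
# of its length (mm). Invented closed form, for illustration only.
def resonant_freq_ghz(length_mm):
    return 140.0 / length_mm  # shorter patch -> higher frequency

def inverse_design(target_ghz, trials=20_000, lo=10.0, hi=60.0, seed=1):
    """Specify WHAT you want; let the search find the geometry.
    Here, a plain random search over the design space."""
    rng = random.Random(seed)
    best_len, best_err = None, float("inf")
    for _ in range(trials):
        length = rng.uniform(lo, hi)
        err = abs(resonant_freq_ghz(length) - target_ghz)
        if err < best_err:
            best_len, best_err = length, err
    return best_len

length = inverse_design(5.8)
print(length)  # close to 140 / 5.8, i.e. roughly 24.1 mm
```

The key point is that the 20,000 evaluations inside the loop are only affordable because each one is an inference call, not a solver run.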

But to make inverse design work at scale, you need simulation tools that are fast, accurate and able to generalize across domains. That’s where foundation models come in.

A Universal Tool For Physical Design

What fields are ready for this shift? Almost every area that interacts with the physical world:

• Semiconductors.

• Consumer electronics.

• Telecom and 6G.

• Sensors, antennas and radar systems.

• Defense and aerospace.

• Healthcare technology.

These are all governed by physics. And in many cases, the same set of partial differential equations. That’s why a single, powerful foundation model trained on those laws could unlock huge efficiencies across industries.

Physics simulation is already a $20 billion market, and it's growing. But more importantly, it's foundational to innovation in fields that impact billions of lives.

Synthetic Data And Emergent Behavior

A common concern with AI models is generalizability. Can they handle new, unseen problems? In large language models, that's a limitation, partly because the supply of human-generated training data is finite ("fossil fuel," as some say).

But in physics, we don’t have that problem. All the training data for a physics model can be synthetically generated. There’s no shortage. As long as the team building the model has the right physics intuition, they can create data that covers a wide variety of use cases.
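Because the governing law itself is the data generator, labels are exact and unlimited. The sketch below generates 50,000 synthetic training pairs from a textbook law (projectile range) and fits a one-parameter surrogate by least squares, recovering the physical constant from data alone. The law and the one-parameter model are deliberately simple stand-ins:

```python
import math
import random

G = 9.81  # gravitational acceleration, m/s^2

# The governing law is the data generator: projectile range
# R = v^2 * sin(2*theta) / g. Labels are exact and unlimited.
def projectile_range(v, theta):
    return v * v * math.sin(2.0 * theta) / G

random.seed(42)
dataset = []
for _ in range(50_000):
    v = random.uniform(1.0, 50.0)       # launch speed, m/s
    theta = random.uniform(0.1, 1.4)    # launch angle, rad
    dataset.append((v, theta, projectile_range(v, theta)))

# Fit a one-parameter surrogate R ~ c * v^2 * sin(2*theta) by least squares;
# on noiseless synthetic data the fit recovers c = 1/g.
num = sum(r * v * v * math.sin(2.0 * t) for v, t, r in dataset)
den = sum((v * v * math.sin(2.0 * t)) ** 2 for v, t, r in dataset)
c = num / den
print(c, 1.0 / G)  # both ~0.10194
```

Scaling this idea up means replacing the analytic law with a high-fidelity solver as the data generator and the one-parameter fit with a large neural network, but the pipeline shape is the same.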

And once trained, these models exhibit emergent behavior. They can recognize patterns and make accurate predictions even on problems they haven’t seen before, as long as they’re governed by the same underlying laws. That’s what gives this approach its power and scalability.

Designing The Future At Scale

Looking ahead five to 10 years, I predict this technology will enable new kinds of pipelines and make existing ones vastly more efficient.

Take complex, multiscale systems like metasurfaces or silicon photonic interconnects. These systems combine components with radically different physical properties. Simulating them accurately with traditional solvers is incredibly difficult. With AI-native solvers, we can handle that heterogeneity and do it fast.

The impact is twofold:

1. We enable entirely new workflows that weren’t feasible before.

2. We massively improve existing ones.

That means better products, faster timelines and lower costs across the board.

Making Simulation More Accessible

There’s another critical pain point that AI can solve: usability.

Currently, setting up a simulation is incredibly complex. You configure boundary conditions, select from dozens of solver modules and tweak parameters only an expert understands. It's so complicated that some startups hire dedicated support engineers just to keep their simulations running.

With foundation models, we can abstract away that complexity. You no longer need to be a domain expert to run high-fidelity simulations. That lowers the barrier to entry and makes advanced engineering tools accessible to more people.
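A rough sketch of that usability gap, with every name invented for illustration: the traditional interface exposes a half-dozen expert-chosen knobs, while a foundation-model-style interface bakes sensible physics defaults into the model and asks the caller only for the design:

```python
def expert_solver(geometry, boundary_conditions, mesh_density,
                  solver_module, tolerance, max_iterations):
    """Stand-in for a traditional solver: six expert-chosen knobs,
    each of which can silently ruin the result if set wrong."""
    return f"solved {geometry} with {solver_module} at tol={tolerance}"

def simulate(geometry):
    """Foundation-model-style interface: defaults are learned during
    training, so the caller supplies only the design itself."""
    return expert_solver(geometry, boundary_conditions="auto",
                         mesh_density="auto", solver_module="learned",
                         tolerance=1e-6, max_iterations=1)

print(simulate("patch_antenna_v2"))
```

The simplification isn't that the knobs disappear; it's that a trained model, rather than the end user, is responsible for them.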

Why This Moment Matters

We’re right at the center of this transformation. And while the technology is evolving rapidly, the core idea is simple: we can, and should, build simulations that learn.

If we succeed in making solvers fast and generalizable enough, inverse design becomes ubiquitous. Simulation stops being a bottleneck and becomes an accelerator. That’s not just a technical upgrade. It’s a reimagining of how we explore, invent and build in the physical world.

For engineers, scientists and innovators everywhere, that changes everything.


Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.


