AI, Simulation, And The Generative Adversarial Network

AI is big and powerful. Anyone with even a passing knowledge of the new tech senses that intuitively, and many people have seen these models in action, or looked behind the curtain to understand just how quickly they are evolving.
But what is the role of AI?
You can break it down into different digital tasks, or ponder what it will be like when AI entities, freed from centralized data centers by the promise of edge computing, inhabit robotic bodies (and I do ponder that) – but another way to think about this is to consider the role of AI in simulation projects. In other words, to think about how AI and simulation work together.
In an AI Spectrum podcast, Siemens AI/ML Technical Specialist Justin Hodges goes over some of the main aspects of using AI for simulations. Much of the focus is on enhancing user experience, and there’s a lot to get into there, but the underlying idea shines a light on how engineers apply AI to a simulation environment.
Basically speaking, simulations offer rich, robust data sets created with precision, although generating them can be resource-intensive. The AI can then evaluate and internalize those data sets quickly, standing in for the simulation where speed matters more than exactness.
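To make that concrete, here is a minimal sketch in Python of how that loop often works in practice – a deliberately toy “simulation” producing training pairs, and a small network learning to stand in for it. The physics function, the dimensions and the training settings are all invented for illustration; nothing here comes from the podcast itself.
```python
# A minimal sketch of the "simulation feeds AI" loop: a slow, precise
# simulation produces training pairs, and a fast learned surrogate
# internalizes them. The toy physics below is invented for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_simulation(design_params: np.ndarray) -> np.ndarray:
    """Stand-in for a resource-intensive solver (hypothetical toy physics)."""
    x, y = design_params[:, 0], design_params[:, 1]
    return np.sin(3 * x) * np.exp(-y) + 0.1 * x * y

# 1. Run the precise-but-slow simulation to build a rich data set.
rng = np.random.default_rng(0)
inputs = rng.uniform(0.0, 1.0, size=(500, 2))
outputs = expensive_simulation(inputs)

# 2. Let the AI "internalize" that data set as a fast surrogate model.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(inputs, outputs)

# 3. New design points can now be evaluated almost instantly.
candidate = np.array([[0.4, 0.7]])
print("surrogate prediction:", surrogate.predict(candidate))
```
The expensive step runs once, up front; after that, the learned model answers new questions in a fraction of the time.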
To me, this really mirrors a concept that surfaced for many of us in the prior “machine learning age” of the late twenty-teens, when I came across the idea of the generative adversarial network, or GAN.
The GAN has two parts: a generative engine and a discriminative engine. Or, if you want, a generator and a discriminator. The generator spits out new images, or data pieces, in whatever format the project involves. The discriminator “looks” at each of these and judges whether it’s real or generated, and that verdict feeds back to push the generator to do better. The terminology comes from earlier machine learning jargon, but it’s still useful in contemplating this intersection of AI and simulations.
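For readers who like to see the moving parts, here is a bare-bones PyTorch sketch of that two-part structure. The data (one-dimensional numbers drawn from a Gaussian) and the tiny networks are arbitrary stand-ins; a real image GAN would use convolutional layers and far more training.
```python
# A bare-bones GAN sketch: the generator tries to imitate "real" data,
# the discriminator tries to tell real from generated. Sizes are toy values.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data the generator must imitate
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # Discriminator: learn to tell real samples from generated ones.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: learn to produce samples the discriminator calls "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print("sample generated values:", generator(torch.randn(5, 8)).detach().squeeze())
```
The point to notice is the division of labor: the generator only ever tries to fool the discriminator, and the discriminator only ever tries not to be fooled.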
It didn’t take long for human innovators to figure out the rich potential of this approach – for medicine, for factories, for public administration. Let’s take the example of manufacturing. Why do you need humans in a factory at all? It’s largely to monitor the products of the generative machine: did the robotic equipment fill the bottles correctly? Were the boxes properly sealed?
If the GAN’s discriminator can fulfill that role, we can soon have human-less factories: the robots can make the things, and we can all go home.
Anyway, that’s the idea: in the GAN, those two pieces push against each other – adversarially, as the name says – but the competition serves a shared end, much the way the simulation and the AI do.
Digital Twinning: An Example
Here’s another of those top buzzwords from the last decade: digital twinning.
What this means is that a given project constructs a digital replica of a complex real-world system. That can be a vehicle or piece of equipment, or a building or ecosystem – or a human body.
Taking the example of the human body, the generative function can be performed by wearables, by implants, by anything that can capture information about the body and compile it for the simulation. Then the AI is the discriminator – it takes in that data, compares it against what the digital replica expects, and produces results a clinician or the wearer can act on.
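Sketched in code, under loose assumptions, that loop might look something like the following. The twin_predict baseline, the thresholds and the Reading fields are all hypothetical – the point is just the shape of the generator/discriminator split, not any real clinical logic.
```python
# Hypothetical digital-twin loop: wearables "generate" readings, the model
# "discriminates" by comparing them to what the twin expects.
from dataclasses import dataclass

@dataclass
class Reading:
    heart_rate: float
    skin_temp_c: float

def twin_predict(activity_level: float) -> Reading:
    """Hypothetical digital-twin baseline for a given activity level."""
    return Reading(heart_rate=60 + 80 * activity_level,
                   skin_temp_c=33.0 + 2.0 * activity_level)

def discriminate(measured: Reading, expected: Reading) -> bool:
    """Return True if the measured data deviates enough to flag for review."""
    return (abs(measured.heart_rate - expected.heart_rate) > 25
            or abs(measured.skin_temp_c - expected.skin_temp_c) > 1.5)

# Wearables act as the "generator" of data; the model plays discriminator.
measured = Reading(heart_rate=152.0, skin_temp_c=36.1)
expected = twin_predict(activity_level=0.3)
print("flag for clinician review:", discriminate(measured, expected))
```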
Digital twinning and the like will save countless lives, transform our world and enhance our quality of life in so many ways. That’s why it’s worth writing about the way stations we’re passing through on the road to that eventual destination.
Alternatives to the GAN
It’s worth pointing out that the GAN isn’t the only framework for this kind of AI/ML work.
For example, there’s something called the variational autoencoder, or VAE. You can get a detailed description in this resource from Hugging Face. Essentially, the model compresses existing data into a structured latent space and then decodes new samples out of it – generating new data sets from existing ones, a bit like the way a diffusion model builds images back up from noise.
“VAEs are widely used in generative AI because they can learn structured latent spaces that enable smooth data generation and interpolation,” write a group of authors at Codecademy. “They’re effective in tasks like image synthesis, data imputation, and anomaly detection across healthcare and computer vision domains.”
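For the curious, here is a compact, hedged sketch of the VAE idea in PyTorch: encode data into a structured latent space, then decode new samples from it. The dimensions, the placeholder training data and the loss weighting are toy values, not a recipe.
```python
# A tiny VAE: encoder maps data to a latent distribution, the reparameterization
# trick samples from it, and the decoder reconstructs (or generates) data.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, data_dim=16, latent_dim=2):
        super().__init__()
        self.encoder = nn.Linear(data_dim, 32)
        self.to_mu = nn.Linear(32, latent_dim)
        self.to_logvar = nn.Linear(32, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                     nn.Linear(32, data_dim))

    def forward(self, x):
        h = torch.relu(self.encoder(x))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.decoder(z), mu, logvar

vae = TinyVAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
data = torch.randn(256, 16)  # placeholder for real training data

for _ in range(500):
    recon, mu, logvar = vae(data)
    recon_loss = ((recon - data) ** 2).mean()
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = recon_loss + 0.1 * kl
    opt.zero_grad()
    loss.backward()
    opt.step()

# New data comes from sampling the latent space and decoding.
new_samples = vae.decoder(torch.randn(5, 2))
```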
Still, the VAE approach is not the same as a GAN, or as the application of the GAN concept to today’s systems. We have vast means to gather the data for the simulation, and highly capable models to do the discriminative work.
Unique and Original Data Sets
Okay, so how do we keep getting fresh inputs for AI if we don’t have enough real-world data?
This is a question often asked in different contexts, as we look at how AI is taking over systems and verticals. The idea is that without original human work, the AI can’t keep making things.
The GAN, or the simulation/AI loop, might be able to help solve that problem. In fact, when I went looking for sourcing on Bing, here’s what Copilot sent back, rather unsolicited:
“GANs are not a standalone solution to the data problem, but are a powerful tool that can help address it in various ways. They can generate synthetic data to augment real data, which can be particularly useful when the original dataset is limited or lacks diversity. GANs can also be used to generate new data that closely resembles the original dataset, which can be helpful for training machine learning models. However, GANs require careful tuning and validation to ensure the quality of the synthetic data generated.”
I had simply asked, in search, “can the GAN help solve the data problem?”
It turns out Copilot understood what I meant by “the data problem” and had a response handy. The idea, though, is that the generator can make the original stuff, so that a human doesn’t have to.
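As a rough illustration of the augmentation idea Copilot describes – with trained_generator standing in for something like the generator sketched earlier, and a latent size of 8 assumed rather than given – the mechanics are simply: generate, then concatenate, then validate against real held-out data.
```python
# Hedged sketch of data augmentation with a trained generator. The generator
# and its latent size are assumed, not prescribed by any particular library.
import torch

def augment(real_data: torch.Tensor, trained_generator, n_synthetic: int) -> torch.Tensor:
    """Append generator output to a limited real data set."""
    with torch.no_grad():
        noise = torch.randn(n_synthetic, 8)   # latent size assumed to be 8
        synthetic = trained_generator(noise)
    return torch.cat([real_data, synthetic], dim=0)

# e.g. 200 real samples padded out to 1,200 for training a downstream model;
# the synthetic portion still needs validation against held-out real data.
# combined = augment(real_data, trained_generator, n_synthetic=1000)
```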
Now, the question is, how capable will the GAN, or the simulation, be at compiling original content for us? Will it keep producing new, interesting things, or get stuck in a loop? A deterministic programmer – now something of an atavistic relic – could maybe tell you how likely tech systems are, in general, to loop in on themselves recursively instead of going out to scale new heights. Maybe we can give them a nudge. The next ten years will reveal a lot about the nature of systems, and about human nature, too. Stay tuned.