5 AI Regulation Lies Everyone Must Stop Believing

Posted by Bernard Marr, Contributor


AI is evolving and being adopted at lightning speed, and laws designed to keep us safe aren’t keeping pace.

You probably knew that already. But when it comes to AI regulation, plenty of other common assumptions might not be so watertight.

The topic of AI regulation is a vast one that covers everything from different attitudes to privacy and human rights to the challenges of enforcing rules on how we use tools that are often open-source and readily accessible to anyone.

However, understanding its implications is becoming increasingly important as we find ourselves having to make decisions about how we use AI in business and our personal lives.

So here’s my overview of five misconceptions about the way AI is regulated that should be put to bed if we want to understand how it will affect us, our businesses, or society at large.

AI Regulations Only Matter To Techies

Many people’s first assumption is that AI regulation is something that only AI engineers, data scientists and developers have to worry about. But with AI systems increasingly becoming embedded in business functions from marketing and HR to customer service, everyone now has obligations to ensure it’s used legally and safely.

It’s important to remember that the AI regulations we’ve seen so far, such as those in the EU, China, and various US states, mainly impose obligations on those using AI rather than those developing it.

Regardless of a professional’s role within their organization, they will have to understand the rules and safeguards. This means understanding what data they’re using to do their job, what’s being done with it, and what needs to be done to make sure they stay on the right side of the law.

AI Regulation Stifles Innovation

There’s a strong feeling among sections of the AI community that regulation stifles innovation. By imposing rules, the argument goes, AI developers are restricted in what they can build, and users are restricted in what they can do.

The counter-argument is that regulation actually fosters innovation—by creating a level playing field and giving businesses confidence they’re working within legal and ethical frameworks.

By putting up guard rails around potentially dangerous or harmful use cases, regulation helps industries build trust with customers and experiment safely with new ideas.

In practice, this is a balancing act, with regulators aiming to facilitate innovation while mitigating risk. However, looking at regulation as anti-innovation or unnecessary interference is a frequent and dangerous misconception.

AI Regulations Control What Can Be Developed

We touched on this before, but it really deserves to be its own point. A layperson might assume that AI regulation is something imposed on big AI developers like Google or OpenAI that somehow restricts what they can build.

In reality, most legislation we’ve seen so far has focused on the impact of AI and what can be done by those using it. The EU AI Act, for example, bans or strictly regulates “high risk” AI activities like social scoring, real-time biometric identification of people in public places, and exploiting vulnerable groups of people. Real-time facial recognition in public spaces is largely prohibited, with narrow exceptions for law enforcement under strict conditions.

So, with developers still essentially free to build incredibly powerful models, the lesson is that just because something is possible with AI doesn’t mean it’s legal. Ultimately, end users bear responsibility for the results of their actions.

Geopolitics Overrides AI Legislation

Back in 2017, Vladimir Putin said that whoever becomes the leader in AI would become the ruler of the world. His prediction seems to be on track so far. Given the advantages in warfare, intelligence, and economic power that AI maturity will grant a nation-state, why would leaders put up barriers to achieving it?

In reality, it’s because they understand that regulation itself can be used as a tool to further their political and geopolitical agendas. The EU, for example, emphasizes preserving privacy and fundamental citizen rights in its legislation, whereas China’s policies focus on maintaining social harmony and law enforcement. In the US, legislators have shown that boosting the competitiveness of the domestic AI industry is a priority.

Taking an early lead in the AI arms race gives nations the opportunity to shape the direction the AI market will take in the next 10 years and beyond, and regulation is a key tool for getting this done.

AI Can’t Be Regulated Because It’s A “Black Box”

Even the creators of foundation AI systems, such as the large language models (LLMs) powering ChatGPT, don’t know exactly how they work.

And if no one knows how they work, how can we impose rules on them? Will they even follow them? Perhaps they could even pretend to follow them (a behavior known as alignment faking) to lull us into a false sense of security and advantage themselves in some way.

These are questions that frequently come up when the pros and cons of AI regulation are being debated. However, as we’ve covered, regulation isn’t designed to control or limit the development or capability of AI. It’s to put guardrails around potentially dangerous behaviors.

By regulating with a focus on outcomes, we don’t have to fully understand AI in order to regulate it. Ensuring that the regulatory frameworks we’re building now are robust is critical for ensuring we can deal with the implications of more advanced and potentially dangerous AI in the future.

Why Everyone Should Understand How AI Regulation Will Affect Them

It isn’t just government policymakers and computer scientists who need to understand how and why AI is regulated and how regulations affect them.

As AI becomes increasingly embedded in our lives, understanding the rules and why they exist will become critical to capitalizing on the opportunities it offers in a safe and ethical way.



Forbes
