Ethical Frameworks For AI: Are They Good Enough?

In fits and starts, the world is waking up to the awe-inspiring power of neural networks to change our lives. That means figuring out appropriate responses, including how governments will treat something that did not exist in societies before.

Think about what would happen, for example, if every citizen of a country suddenly had X-ray vision. How would that change law and ethics, and how would communities respond and adapt?

In this case, we have evolving standards aimed at global solutions for the use of AI. One source is the U.S. National Institute of Standards and Technology (NIST), where a project named A Plan for Global Engagement on AI Standards is about a year old.

“Recognizing the importance of technical standards in shaping development and use of Artificial Intelligence, the President’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence calls for ‘a coordinated effort…to drive the development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing’ internationally,” the agency writes.

How about beyond our borders?

Well, the BRICS nations, a set of developing countries forming an economic bloc, have created their own standard. The BRICS Standard on Global Governance of Artificial Intelligence represents a policy effort on the part of Brazil, Russia, India, China, South Africa, Saudi Arabia, Egypt, UAE, Ethiopia, Indonesia, and Iran. (The original BRIC coalition involved the first four listed nations.)

Going Further in AI Regulation

At a recent Imagination in Action panel at Stanford, participants discussed the current reality, and why it’s important to keep working on a range of efforts aimed at putting guardrails on the use of AI.

“Without regulatory guardrails, it’s the wild, wild west,” said Stanford assistant professor Sanmi Koyejo. “People think they’ll like it better, but actually, they often don’t know what to do in this wild west, because it’s unclear where the boundaries of behavior and reasonableness are. And, I think, clear regulation … would certainly make it easier for adoption across many places, … (showing) what the norms and behavior should be, where liability lies, some of these questions.”

“It is deeply concerning,” said Stanford HAI executive director Russell Wald of stakeholder consolidation, calling for a broader multi-stakeholder environment. “You need a robust and vibrant open source community.”

Working on Solutions

Early in the presentation, Wald talked about state offices working together, and some nations doing their own work on AI regulation, characterizing the work of the U.S. federal government as a “pullback” of sorts, but qualifying that in a couple of important ways.

“It’s clear that this new (federal U.S.) administration has dialed back significantly from where the Biden administration was,” he said, “(but) there was a worldwide step back. … there was a big divergence of where we’ve been, and … some people may have felt like on the regulatory side, where safety was being put in, it may not have been being put into the best or optimal areas. … I think that there was a kind of a sea change, and a bit of a tone change, that was happening.”

In the absence of a concerted federal effort, Wald suggested that states will pick up some of the slack.

“On the federal side, there is this possibility of nothing happening in this space,” he said. “But what you see is in the EU, the AI Act has advanced, you see other countries applying things. And then in the United States, you see states actively working on things.”

Other collaboration comes from business.

“A lot of the ecosystem around best practices is being set up by industry, for sure,” he said, in response to a question from moderator Krystal Hu about a lack of regulation. “Once some of the states have clear regulatory guidelines, I think that will have an effect. But for now, a lot of it is happening through companies who are talking to each other, and safety folks who are thinking around what the right governance practice should be.”

He gave an example of an active project in this kind of space.

“I’m helping a small group who’s playing with AI insurance, which is a fun new version of this question, to try and think through insurance pricing … because now you can turn it into money that someone’s going to pay … to understand what the right governance infrastructure might look like, what testing might look like, some of these other questions. But I’d say yes, it’s being (largely) left up to industry to figure it out right now.”

Data Vulnerabilities Are Bad for Business

“Companies are concerned about how the data is being used to train the model,” said Rehan Jalil of Securiti. “What if their data gets used to train the model, and let’s say some other company asks similar questions, and they get answers based on the model that was trained on some data from previous companies. That’s a fair concern, right? You wouldn’t want to expose your proprietary data to somebody else’s answers. So a lot of guardrails are built in.”

He described something called “enterprise protection,” which he suggested is supported by Google:

“You get your own contained version of a model, which means all the outputs will be your outputs,” he said.

Panelist Max Nadeau addressed the idea of funding gaps and other issues in developing these types of business systems.

“There are a lot of obstacles to making these evaluations as realistic and as difficult as we want them to be, and cost is a big one,” he said.

No Crystal Ball

All of these ideas give us sort of a flavor of what’s to come with AI, but there are so many unanswered questions. Will there be more regulation? Where will it come from? And which will win out – the open source or closed source model?

Some of that will have to do with data privacy. Some of it will have to do with economic incentives. And some will have to do with policy. All of it amounts to a big black box, sort of like much of the model behavior that we don’t really fully understand. Stay tuned.


