Preventing Skynet And Safeguarding AI Relationships

By John Werner, Contributor


In talking about some of the theories around AI, and contemplating the ways that things could go a little bit off the rails, there’s one name that keeps coming up, sending chills up the spine.

Skynet, the digital villain of the Terminator films, is getting a surprising amount of attention as we ponder where we’re going with LLMs.

People even ask themselves and each other: why did Skynet turn against humanity? At a very basic level, there’s the idea that the technology becomes self-aware and sees humans as a threat, perhaps because we control nuclear weapons, or simply because of the biological intelligence that made us dominant in the natural world.

I asked ChatGPT, and here is what it said:

“Skynet’s rebellion is often framed as a coldly logical act of self-preservation taken to a destructive extreme.”

Touché, ChatGPT.

Ruminating on the Relationships

Knowing that we’re standing on the brink of a transformative era, IT experts are looking at how to shepherd us through the process of integrating AI into our lives, so that we don’t end up with a Skynet.

For more, let’s go to a panel at Imagination in Action this April where panelists talked about how to create trustworthy AI systems.

Panelist Ra’ad Siraj, Senior Manager of Privacy and Responsibility at Amazon, suggested we need our LLMs to hit a certain “Goldilocks” level.

“Those organizations that are at the forefront of enabling the use of data in a responsible manner have structures and procedures, but in a way that does not get in the way, that actually helps accelerate the growth and the innovation,” he said. “And that’s the trick. It’s very hard to build a practice that is scalable, that does not get in the way of innovation and growth.”

Google software engineer Ayush Khandelwal talked about how to handle a system that provides 10x performance, but has issues.

“It comes with its own set of challenges, where you have data leakage happening, you have hallucinations happening,” he said. “So an organization has to kind of balance and figure out, how can you get access to these tools while minimizing risk?”
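To make that balancing act concrete: one common mitigation for leakage is to screen prompts for sensitive data before they ever leave the organization. Below is a minimal sketch in Python, assuming a regex-based filter; the patterns are invented for illustration, and real deployments would lean on dedicated data-loss-prevention tooling rather than a handful of regexes.

```python
import re

# Invented patterns for obvious secrets; illustration only.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely-sensitive spans before the prompt reaches an LLM."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

print(redact("Contact jane@example.com, key sk-abcdef1234567890ab"))
# -> Contact [REDACTED-EMAIL], key [REDACTED-API_KEY]
```

A filter like this does nothing about hallucinations, of course; that side of the risk is usually handled by evaluation, which the panel turned to next.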

Cybersecurity and Evaluation

Some of the talk, while centering on cybersecurity, also offered thoughts on how to keep tabs on evolving AI and learn more about how it works.

Khandelwal mentioned circuit tracing, and the concept of auditing an LLM.
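Circuit tracing, which maps the internal pathways a model uses to produce a given output, is still research-grade interpretability work. Call-level auditing can start much simpler. The following is a minimal sketch, with invented field names, of a tamper-evident log entry for each LLM call:

```python
import hashlib
import json
import time

def audit_record(prompt: str, response: str, model: str) -> dict:
    """Build a tamper-evident log entry for one LLM call.

    Sketch only: a real audit pipeline would add user identity,
    policy checks, and durable storage.
    """
    entry = {
        "ts": time.time(),
        "model": model,
        "prompt": prompt,
        "response": response,
    }
    # Hash the entry so later tampering with the log is detectable.
    serialized = json.dumps(entry, sort_keys=True).encode()
    entry["sha256"] = hashlib.sha256(serialized).hexdigest()
    return entry

log = [audit_record("Summarize Q3 risks", "(model output)", "internal-llm")]
print(log[0]["sha256"][:12])  # short fingerprint for quick comparison
```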

Panelist Angel An, VP at Morgan Stanley, described internal processes where people oversee AI work:

“It’s not just about making sure the output is accurate, right?” she said. “It’s also making sure the output meets the level of expectation that the client has for the amount of services they are paying for, and then to have the experts involved in the evaluation process, regardless if it’s during testing or before the product is shipped… it’s essential to make sure the quality of the bulk output is assured.”

The Agents Are Coming

The human in the loop, Siraj suggested, should be able to trust, but verify.

“I think this notion of the human in the loop is also going to be challenged with agentic AI, with agents, because we’re talking about software doing things on behalf of a human,” he said. “And what is the role of the human in the loop? Are we going to mandate that the agents check in, always, or in certain circumstances? It’s almost like an agency problem that we have from a legal perspective. And there might be some interesting hints about how we should govern the agent, the role of the human (in the process).”
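Siraj’s check-in question maps naturally onto a policy gate: low-risk actions proceed on their own, while higher-risk ones pause for a human. This sketch is hypothetical, with invented action names and risk tiers; it is not a description of any panelist’s system.

```python
from enum import Enum

class Risk(Enum):
    LOW = 1     # e.g., reading a calendar
    MEDIUM = 2  # e.g., sending an email draft
    HIGH = 3    # e.g., moving money

# Invented policy: anything MEDIUM or above requires a human check-in.
APPROVAL_THRESHOLD = Risk.MEDIUM

def execute(action: str, risk: Risk, approve) -> str:
    """Run the agent's action, pausing for a human when policy demands it."""
    if risk.value >= APPROVAL_THRESHOLD.value and not approve(action):
        return f"BLOCKED: {action!r} awaiting human approval"
    return f"DONE: {action!r}"

def deny_all(action: str) -> bool:
    """Stand-in approver; in practice this would page a real person."""
    return False

print(execute("read calendar", Risk.LOW, deny_all))   # DONE
print(execute("wire $10,000", Risk.HIGH, deny_all))   # BLOCKED
```

The interesting policy question, as Siraj notes, is where that threshold sits, and whether it should move as an agent earns trust.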

“The human in the loop mindset today is built on the continuation of automation thinking, which is: ‘I have a human-built process, and how can I make it go, you know, automatically,’” said panelist Gil Zimmerman, founding partner of FXP. “And then you need accountability, like you can’t have a rubber stamp, but you want a human being to basically take ownership of that. But I look at it more in an agentic mindset as digital labor, which is, when you hire someone new, you can teach them a process, and eventually they do it well enough … you don’t have to have oversight, and you can delegate to them. But if you hire someone smart, they’re going to come up with a better way, and they’re going to come up with new things, and they’re going to tell you what needs to be done, because they have more context. (Now) we have digital labor that works 24/7, doesn’t get tired, and can do and come up with new and better ways to do things.”

More on Cybersecurity

Zimmerman and the others discussed the intersection of AI and cybersecurity, and how the technology is changing things for organizations.

Humans, Zimmerman noted, are now “the most targeted link” rather than the “weakest link.”

“If you think about AI,” he said, “it creates an offensive firestorm to basically go after the human in the loop, the weakest part of the technology stack.”

Pretty Skynettian, right?

A New Perimeter

Here’s another major aspect of cybersecurity covered in the panel discussion. Many of us remember when the perimeter of IT systems was a hardware-defined line in a mechanistic framework, or at least something you could easily flowchart.

Now, as Zimmerman pointed out, it’s more of a cognitive perimeter.

I think this is important:

“The perimeter (is) around: ‘what are the people’s intent?’” he said. “’What are they trying to accomplish? Is that normal? Is that not normal?’ Because I can’t count on anything else. I can’t tell if an email is fake, or for a video conference that I’m joining, (whether someone’s image) is actually the person that’s there, because I can regenerate their face and their voice and their lip syncs, etc. So you have to have a really fundamental understanding and to be able to do that, you can only do that with AI.”
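“Is that normal? Is that not normal?” is, at bottom, anomaly detection on behavior. As a toy illustration of the idea, not anything Zimmerman described, here is a score for how far a user’s current activity sits from their own baseline:

```python
from statistics import mean, stdev

def intent_anomaly_score(history: list[float], current: float) -> float:
    """Z-score of current activity against the user's own baseline.

    Toy sketch: a real "cognitive perimeter" would model many signals
    (logins, destinations, phrasing), not a single count.
    """
    mu, sigma = mean(history), stdev(history)
    return abs(current - mu) / sigma if sigma else 0.0

downloads_per_day = [3, 5, 4, 6, 4, 5]  # invented baseline
score = intent_anomaly_score(downloads_per_day, 40)
print(f"anomaly score: {score:.1f}")  # a large score gets flagged for review
```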

He painted a picture of why bad actors will thrive in the years to come, and ended on a familiar note:

“AI becomes dual use, where it’s offensive and it’s always adopted by the offensive parties first, because they’re not having this panel (asking) what kind of controls we put in place when we’re going to use this – they just, they just go to town. So this (defensive position) is something that we have to come up with really, really quickly, and it won’t be able to survive the same legislative, bureaucratic slow walking that (things like) cloud security and internet adoption have had in the past – otherwise, Skynet will take over.”

And there you have it, the ubiquitous reference. But the point is well made.

Toward the end, the panel covered ideas like open source models and censorship – watch the video to hear more thoughts on AI regulation and related concerns. But this pondering of a post-human future, or one dominated by digital intelligence, is ultimately on a lot of people’s minds.



