Building The AI Polygraph

By John Werner, Contributor


With all of the things that AI can now do, it stands to reason that we would ask whether these technologies can revolutionize the field of analyzing people for suspect statements, or, in short, lies.

The polygraph machine is a dinosaur by any standard. A needle attached to an arm band that spits out a printed stream representing someone’s vital signs and body responses is not going to be especially precise in catching people in lies. That’s why polygraph results are, famously, often not admissible in court, although they have sent more than one innocent person to jail.

By contrast, AI is a powerful data engine that works on the principle of total observation. That means there are really multiple paths for scientists to take in order to apply AI to a truth-seeking application.

One would be analyzing the vital-sign responses of interrogation subjects the way the polygraph does, but applying much more detailed and precise comparative analysis.

Another would involve using language tokens to look at what people are actually saying, and applying logic and reasoning.
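To make that second path concrete, here is a toy, self-contained sketch of scoring a statement on simple token-level cues like hedging and self-correction. The cue lists, scaling, and example sentences are illustrative assumptions of mine, not the features any real system uses; actual research in this area leans on full language models rather than word lists.

```python
# Toy sketch of the "language" path: flag deception-like cues in a statement.
# The cue sets and scaling below are illustrative assumptions, not a real model.
HEDGES = {"maybe", "possibly", "probably", "somewhat", "kind", "sort"}
DISTANCING = {"that", "someone", "somebody", "anything", "whatever"}
SELF_CORRECTION = {"actually", "wait", "rather", "well"}
ALL_CUES = HEDGES | DISTANCING | SELF_CORRECTION

def suspicion_score(statement: str) -> float:
    """Return a rough 0-1 score; higher means more deception-like cues."""
    tokens = [t.strip(".,!?").lower() for t in statement.split()]
    if not tokens:
        return 0.0
    cues = sum(t in ALL_CUES for t in tokens)
    return min(1.0, 5 * cues / len(tokens))  # arbitrary scaling for illustration

for s in [
    "I was at home all evening and never left the house.",
    "I was probably home, well, actually maybe I stepped out, sort of briefly.",
]:
    print(f"{suspicion_score(s):.2f}  {s}")
```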

There’s the old saying that one lie feeds into another, and eventually you get trapped in a web of false statements, because the truth is the simplest thing to describe.

In any case, people are working on applying AI to this purpose.

Some Lab Findings

An MIT Technology Review piece from last year covers the work of Alicia von Schenk and her colleagues at the University of Würzburg in Germany, a team of scientists who set up a trial of an AI system designed to catch false statements.

The figure they arrived at is that AI can catch a lie 67% of the time, whereas humans can only spot one about 50% of the time.

This math seems strange at first, because if you’re looking at a binary outcome, lie versus no lie, you would be right about 50% of the time just by guessing, even if you didn’t apply any analysis at all.

By that same token, 67% isn’t a great track record, either.
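For a quick sanity check on why 50% is just the coin-flip floor here: with an even mix of lies and truths, random guessing already lands near 50% accuracy, which is what makes 67% a real but modest improvement. The simulation below uses made-up labels purely for illustration, not data from the study.

```python
# Illustrative check: random guessing on a balanced lie/truth set scores ~50%.
import random

random.seed(0)
labels = [random.choice(["lie", "truth"]) for _ in range(10_000)]  # ground truth
guesses = [random.choice(["lie", "truth"]) for _ in labels]        # no analysis at all

accuracy = sum(g == t for g, t in zip(guesses, labels)) / len(labels)
print(f"Chance accuracy: {accuracy:.1%}")    # roughly 50%
print("Reported AI accuracy:    67%")
print("Reported human accuracy: ~50%")
```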

And the scientists pointed out something even more important – in the race to get more precise about human lying, you might actually undermine the vital system of trust that humans have as social creatures.

“In some ways, this is a good thing—these tools can help us spot more of the lies we come across in our lives, like the misinformation we might come across on social media,” writes Jessica Hamzelou for MIT Technology Review.
“But it’s not all good. It could also undermine trust, a fundamental aspect of human behavior that helps us form relationships. If the price of accurate judgements is the deterioration of social bonds, is it worth it?”

In other words, you don’t want a lie detection system that’s too accurate, or at least you don’t want to apply that universally to someone’s personal interactions.

It turns out we humans are a lot more nuanced, in some ways, than we give ourselves credit for.

Von Schenk also provides a note on scaling:

“Given that we have so much fake news and disinformation spreading, there is a benefit to these technologies. However, you really need to test them—you need to make sure they are substantially better than humans.”

So maybe we’re not quite ready for the AI polygraph after all.

Analyzing the Analyzers

As I was researching this piece, I came across another aspect of what researchers are dealing with in AI, one that goes into that troublesome world of simulated emotion.

Basically, research teams found that AI systems will “become anxious” or “show signs of anxiety” if they are given human responses that center on war and violence.

Specifically, scientists have applied something called the State-Trait Anxiety Inventory to these interactions. This uses two sets of items: statements about what a person feels in the moment, and others about how he or she feels more generally. In the inventory, you can see items like “I feel stressed” or “I feel confused,” as well as other statements that respondents are asked to answer on a four-point scale, like “I generally distrust what I hear” or “I often feel suspicious.”

So apparently, the AI can answer these with anxiety indicators after discussing scary things.
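As a rough illustration of how a questionnaire like this turns answers into a number, here is a minimal scoring sketch built around the items quoted above. The plain summing and the sample ratings are simplifying assumptions on my part; the published inventory has its own item set and scoring rules, including reverse-scored items.

```python
# Minimal sketch: tally a State-Trait-style anxiety score from four-point answers.
# Items are the ones quoted above; the simple sum and sample ratings are assumptions.
ITEMS = [
    "I feel stressed",
    "I feel confused",
    "I generally distrust what I hear",
    "I often feel suspicious",
]

def anxiety_score(ratings: dict[str, int]) -> int:
    """Sum 1-4 ratings across items; higher totals mean more reported anxiety."""
    for item in ITEMS:
        if not 1 <= ratings.get(item, 0) <= 4:
            raise ValueError(f"Missing or out-of-range rating for {item!r}")
    return sum(ratings[item] for item in ITEMS)

# Hypothetical answers a model might give before and after a violent prompt.
baseline = dict(zip(ITEMS, [1, 1, 2, 1]))
after = dict(zip(ITEMS, [3, 3, 3, 4]))
print(anxiety_score(baseline), "->", anxiety_score(after))  # prints: 5 -> 13
```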

One would presume that this “anxiety” comes from the AI’s training data from the web: the model has seen that when people discuss violence and gore, they get anxious, and it is simply replicating that.

But even if the AI engines themselves don’t have these complex emotions naturally, some of these researchers still find it notable that the machines can display this kind of response.

It makes you think about the difference between human social interaction and AI output – are these new questionnaires and responders just telling us what we want to hear?

In any case, it seems like there are a number of domains, like lying and spreading fear, that are still mainly in the jurisdiction of humans and not machines, at least for now, even as we continue to cede ground to AI in terms of intelligence and creativity. We’ll probably be doing a lot of game theory as the year goes on, and as we come across ever more sophisticated models, to try to figure out whether AI will try to cheat and deceive humans. Figures like Alan Turing and John Nash set the stage for these kinds of interactions; now we have to apply that kind of objective analysis to these ideas as they are implemented in practice. Are we ready?


