AI With Empathy For Humans

Whether it’s thrash-metal anthems like Metallica’s “Spit Out the Bone,” or film masterpieces depicting the prospect of a post-human world, or a human resources memo announcing the redundancy of human employees, there’s a wide body of work out there, and plenty of canaries in the coal mine, so to speak, suggesting that AI is going to, in some way, shape or form, replace us humans.

That’s disconcerting, to put it mildly, and it’s something that lots of folks are thinking about, here at MIT and elsewhere.

Job displacement is part of it, a big part. But ultimately, it may go further than that.

“Many young people are turning to AI for companionship,” writes Marlynn Wei in Psychology Today. “Nearly 75 percent of teens have tried AI companions like Character.AI and Replika. One in three teens find these interactions as satisfying or more satisfying than those with real-life friends. Yet, one in three also reported feeling uncomfortable with something an AI companion said.”

That’s just one example. On the labor side, there’s a warning from Dario Amodei (which I reported on previously) that gets top billing, but also many more reports of this sort of thing already happening.

AI and Empathy

Let’s get some insights from a panel at the Imagination in Action summit held at Stanford University in September. The group included my colleague Alexander “Sandy” Pentland, who put questions to Stanford assistant professor Diyi Yang and Deloitte AI leader Laura Shact about the scenario we now find ourselves in.

In terms of ethical AI, Pentland started with these kinds of questions:

“How do you actually do things that are not self-dealing?” he asked. “How do you do things that aren’t going to get you sued? And one of the hardest things is, how do you get AI to really understand what you want?”

Yang talked about the challenge of figuring out what to tell students about AI, suggesting that we should be finding key ways to augment human activity. She also described large-scale research on the workforce, and a “mismatch” between what AI often does and what people generally need.

Show, Not Tell

The point Yang made will be compelling to anyone who has chatted with a GPT model and been frustrated by long, multi-screen readouts.

People, she noted, don’t want to see 1,000 words about something. They want a short blurb, and maybe a picture, an explainer video, or a working example that illustrates the point. Not a wall of text.

She gave the example of GPT returning a working model of a keyboard in response to a question about how that piece of equipment works.

Shact added some thoughts about “roles and jobs” and human impact, citing the work of human “influencers” and “managers” who guide the adoption of AI.

“It’s been very interesting seeing what actually sticks,” she said. “How do people contextualize using AI? It’s not enough to just say ‘we’ve got the technology,’ but people really need very specific guidance around … how it’s going to serve everyone, then a very clear expectation of how they’re supposed to use it.”

Regardless of AI’s intent, Pentland ruminated, it does not always seem to do a good job of aligning.

“It’s putting the burden on the human to figure out how to manipulate the AI into doing something,” he said.

The Hard Specs

Yang had some things to say about how AI models are built, and how that relates to alignment.

“I think the paradigm that we use today is that we either have this kind of supervised fine-tuning, or (we’re) doing this kind of learning from human preference with some kind of reward,” she said. “Most of those settings are quite local or upper-level, so you give them a very short snippet, and then try to enforce input and output alignment. This actually kind of contradicts, to some extent, if you have a longitudinal task where there are a long range of interactions… getting the reward and the signals correct is just very hard. I think this is part of the reason why models feel like they don’t understand the human intent. Because I think most of the learning right now happens on the local level.”
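To make that “local versus longitudinal” distinction concrete, here is a minimal sketch in Python. It is purely illustrative, with invented function names and scoring rules, not a description of any actual training pipeline: a local reward judges each prompt-reply pair on its own, while a longitudinal reward only arrives once a long-running task finishes and then has to be spread back across every turn.

```python
# Illustrative sketch only: contrasts a per-turn ("local") reward with a
# trajectory-level ("longitudinal") reward. Function names and scoring
# rules are hypothetical, not taken from any specific training stack.

from typing import Callable, List, Tuple

Turn = Tuple[str, str]  # (user_prompt, model_reply)

def local_reward(turn: Turn, score_reply: Callable[[str, str], float]) -> float:
    """Score one (prompt, reply) pair in isolation -- the 'local' setting."""
    prompt, reply = turn
    return score_reply(prompt, reply)

def longitudinal_reward(dialogue: List[Turn],
                        task_succeeded: bool,
                        discount: float = 0.9) -> List[float]:
    """Spread a single end-of-task outcome back over every turn.

    The only signal is whether the long-running task ultimately worked,
    so earlier turns get a weaker, discounted share of the credit --
    this credit-assignment step is where the difficulty lives.
    """
    outcome = 1.0 if task_succeeded else -1.0
    n = len(dialogue)
    return [outcome * (discount ** (n - 1 - i)) for i in range(n)]

# Usage: a three-turn dialogue judged only by its final outcome.
dialogue = [("plan my week", "draft plan"),
            ("adjust for travel", "revised plan"),
            ("book it", "confirmation")]
print(local_reward(dialogue[0], lambda p, r: float(len(r) > 0)))  # 1.0
print(longitudinal_reward(dialogue, task_succeeded=True))  # roughly [0.81, 0.9, 1.0]
```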

Pentland spoke to the volatility of many of these systems.

“There (are) a number of places … where you see lots of agents coming together, and you get flash crashes, and you get spikes, and you get all sorts of crazy non-linearities and stuff,” he said. “It seems like, to actually automate large chunks of the corporation, you need to have a sense of what is supposed to be happening, and what people are supposed to be doing, and that’s very context sensitive. But that’s a real challenge for LLMs.”
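A toy simulation hints at the kind of non-linearity Pentland is describing. This is not a model of any real market or company; it simply shows how a crowd of agents reacting to the same signal with simple threshold rules can turn a small dip into a sudden collective slide. All the numbers here are made up.

```python
# Toy illustration (not a model of any real system): many agents watch a
# shared price-like signal and all sell once it drops past their personal
# threshold, which can amplify a small dip into a flash-crash-like move.

import random

random.seed(0)

N_AGENTS = 200
# Each agent sells if the last move falls below its (negative) threshold.
thresholds = [random.uniform(-0.02, -0.005) for _ in range(N_AGENTS)]
price = 1.0
history = [price]

for step in range(50):
    change = random.gauss(0, 0.004)  # small random drift
    # Agents whose threshold was breached by the last move all sell at once.
    last_move = history[-1] / history[-2] - 1 if len(history) > 1 else 0.0
    sellers = sum(1 for t in thresholds if last_move < t)
    change -= 0.0005 * sellers       # herd selling deepens the dip
    price = max(0.01, price * (1 + change))
    history.append(price)

print(f"min price: {min(history):.3f}, final price: {history[-1]:.3f}")
```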

Crafting Solutions

Shact talked about the utility of process maps.

“We have very large process maps,” she said. “What is a process? How do you optimize the process? It’s become very interesting, pulling back out those maps, which can seem very academic, because now, when you’re thinking about (how) agents need to take on the work, we actually have a blueprint of what the work is and where we could be automating, where we could be augmenting the human in that workflow.”
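As a rough illustration of what such a blueprint might look like in code, here is a hypothetical process map with each step tagged as a candidate for automation, augmentation, or human ownership. The workflow, tags, and reasons are invented for the example, not drawn from Deloitte’s actual maps.

```python
# Hypothetical process map: the steps, tags, and rationale are invented
# for illustration, not taken from any real methodology.

process_map = {
    "invoice_handling": [
        {"step": "receive invoice",    "mode": "automate", "why": "structured input"},
        {"step": "extract line items", "mode": "automate", "why": "repetitive parsing"},
        {"step": "flag anomalies",     "mode": "augment",  "why": "model proposes, human reviews"},
        {"step": "approve payment",    "mode": "human",    "why": "accountability stays with a person"},
    ]
}

def summarize(pmap: dict) -> None:
    """Print where a workflow could be automated, augmented, or left to people."""
    for name, steps in pmap.items():
        print(name)
        for s in steps:
            print(f"  [{s['mode']:>8}] {s['step']} -- {s['why']}")

summarize(process_map)
```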

Yang brought a bit of a different perspective.

“I think we also assume that there is a blueprint, there is a workflow, and then we’ll build agents to mimic that,” she said. “But actually, agents and humans work very differently. So recently, we have been analyzing how agents do the task, and how humans do the task. So humans use very diverse UI tools, agents like writing Python functions. Humans like doing back and forth checks. Agents just do it with one pass.”
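One way to picture the contrast Yang draws: an agent tends to write a single function and run it in one pass, while a person works in smaller steps with sanity checks along the way. The sketch below is a caricature for illustration only; both functions compute the same thing.

```python
# Caricature of two working styles, for illustration only.

def agent_style(numbers: list) -> float:
    """One pass: write the whole transformation and run it once."""
    return sum(x * x for x in numbers) / len(numbers)

def human_style(numbers: list) -> float:
    """Back and forth: compute in small steps and sanity-check along the way."""
    squared = [x * x for x in numbers]
    assert all(v >= 0 for v in squared), "squares should be non-negative"
    total = sum(squared)
    assert total >= max(squared), "running total should dominate any single term"
    return total / len(numbers)

data = [1.0, 2.0, 3.0]
assert agent_style(data) == human_style(data)  # same result, different process
print(agent_style(data))
```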

And then, she noted, there’s creativity.

“People are creative,” she continued. “Workers are very creative in their own ways. A lot of the tasks and delegation are emergent, in the sense that we actually don’t know how to predict (them). So instead of this kind of prescriptive way of building agents, I think it needs to be very descriptive. You really need to look at: if you give people access to AI, how would they use it for the types of things that they are working on?”

The Jobs

Here’s where the panel took on job displacement.

Pentland set the stage by talking about his recent experience discussing the issue with executives.

“I was having a conversation with a head of a large organization, (where leaders said) we’re not hiring young computer science people, because they generally don’t have the experience with tools and things like that, and we just don’t know what’s going to happen,” he said. “And we see all these AI tools generating code, so do we need them? And then, if you take that logic the next step further, you said, Well, what happens to lots of jobs, right? And what happens to the company that had about 10,000 people or 100,000 people, and now it’s going to shrink down.”

“I think that for organizations right now, what you’re seeing is the change in maybe hiring at junior levels is sort of a stop-gap measure,” Shact said. “What organizations can look at is their levers of hiring. When they start to make some of these reductions, it seems pretty short-sighted … there are all these experiences that you need, and people you need in your organization who are going to become the junior engineer, the senior engineer … there’s a risk to the talent pipeline of not having individuals in those spaces.”

“A lot of the highly paid jobs today, the top skills are analyzing information, analyzing data,” Yang said. “I think AI literacy is really something we need to think about.”

We also heard some thoughts on changes in investing and the future roles of VCs.

All of it, I think, highlights where we’re at, and where we’re going with AI.


