Four Challenges In Gaming And Learning Innovation With AI

In so many ways, it seems that with AI in business, the sky is the limit. Countless innovations are transforming industries from the ground up and from the top down. Consumer technology and enterprise technology are tall towers that are continually being built higher, often meeting in the middle.

But challenges remain. Here are some of the usual issues that leadership teams have to deal with when developing ground-breaking AI applications.

Quick Innovation and the Quality Bar

One principle of AI development brings to mind that old saying that bringing a project to completion is often “easier said than done.”

Or perhaps it would be better to alter this slightly to: “it’s easier made than refined.”

The idea is that with the democratization of code and related trends, a non-technical person can create an application quickly – maybe even in minutes.

But supporting that application – making sure that it’s bug-free and works as intended – is a lot harder. That’s where we encounter some of this “jagged frontier” of AI capabilities: it can make things, but it can’t always test things that well.

The higher the quality bar is, the harder it is to support and maintain a project that may have been started very easily (see the quote near the end of this post).

Iteration Kills User Counts

The second challenge, which is common in gamified and learning apps, can also be summed up with an old saying, this time in the form of a question: What have you done for me lately?

Or you could put it a different way: users tend to be fickle and demanding over the long term.

Here’s an example of how this concept could hurt an application. Let’s say that the first iteration of a project is simply amazing to a large user base, and a lot of people sign on, eager to use it to their advantage.

However, as they use successive versions of the software, they find that it’s buggy or glitchy, or doesn’t work as intended or as advertised.

And the project starts bleeding users.

This might happen in the beta phase, or even later. Again, it might have to do with quick development and a lack of overall support. The central premise of an AI product might be amazing, but unless it’s well designed, it might not deliver what users want, and so, over time, they abandon ship.

The Bugbear of Vendor Costs

If you’ve ever heard a company say that their “AI bills are high,” what you’re dealing with is an enterprise trying to offer either a wrapper or a third-party service based on someone else’s models.

That means they have to purchase AI functionality in order to run their platform, and they can end up paying more than they want to.

In the end, this is just the cost of doing business, but it can be a problem for a development team if they don’t have their own in-house systems, or aren’t able to work around vendor thresholds.
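To make that bill concrete, here is a minimal back-of-the-envelope sketch of how wrapper-style vendor costs scale with usage. The function name, token counts, and per-1k-token prices below are illustrative assumptions, not any real vendor’s rates.

```python
# Hypothetical sketch: estimating monthly spend on a third-party model API.
# Prices and token counts are illustrative assumptions, not real vendor rates.

def monthly_api_cost(requests_per_day, avg_input_tokens, avg_output_tokens,
                     price_in_per_1k, price_out_per_1k, days=30):
    """Rough monthly bill for a product built on a vendor's model API."""
    per_request = (avg_input_tokens / 1000) * price_in_per_1k \
                + (avg_output_tokens / 1000) * price_out_per_1k
    return requests_per_day * days * per_request

# Example: 50k requests/day, 800 input + 300 output tokens per request,
# at assumed rates of $0.0005 / $0.0015 per 1k tokens.
cost = monthly_api_cost(50_000, 800, 300, 0.0005, 0.0015)
print(f"${cost:,.2f}")  # → $1,275.00 per month under these assumptions
```

Even at these modest assumed rates, the bill grows linearly with traffic and with output length, which is why teams without in-house models feel squeezed as usage climbs.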

No-Code Building, Features and On-Boarding

Again, you can build something easily, but it might not have all of the features or functionality that you need.

Take the role of on-boarding. Suppose the software uses AI to produce excellent user results, but it’s so hard to sign in to that users, again, run screaming away.

More On AI Projects for Learning and Gaming

A recent Imagination in Action segment at Stanford covered some of these issues, with leaders at language and learning companies talking about their experiences.

Bing Gordon of Kleiner Perkins interviewed Natalie Glance of Duolingo and Kylan Gibbs of Inworld.ai about “Building Intelligent Experiences.” The two covered the challenges mentioned above, which I’ll highlight with some quotes. They also talked about goals like talent development.

“We have a talent acquisition machine and an onboarding machine that’s been going on for years now, and it works,” Glance said. “It works really well. So we put our job post on, and we get thousands of applicants within a few days. So we get our pick of the best and the brightest across the United States, across the world, and so that’s great.”

She detailed this process a bit, describing a virtuous cycle of users becoming involved in the business.

“We get people who are really, really good, who are really excited to work for Duolingo,” she added. “They might have been using it since middle school. And we actually start with them when they’re still in university. So they come in, typically as rising juniors or rising seniors. We have an internship program. So they get to actually learn on the job, and prove themselves on the job, too.”

Innovating with NPCs

“We basically built (live interaction tools) as a way to primarily build characters for games, NPCs, you know, learning applications, these kinds of things,” Gibbs said. “And a lot of what we did was build kind of the prompt structure around that, to be able to have these real-time interactions to set the personality. And then what happened with ChatGPT, I think, is that people decided, or we canonically decided, that the form factor that we’re all used to … is something like a character, is something like a voice or text-based interface (or) talking to a human.”

That, he said, produced a deliberate response at his company.

“We just moved lower down (in) the stack,” he said. “We focused more on … the text-to-speech models, the language recognition models, as well as, you know, things like knowledge. And then we found that the way we built it, initially, was this kind of single box where everybody can build a character the same way, but (different characters) actually have completely different architectures. And … because they have some clearly completed architectures, we really had to open up what we did. And that’s basically where our journey went, is going lower down in the stack, and then opening it up.”

Using a Knowledge Base

Glance and Gibbs also talked about the use of memory, which Gibbs characterized as a “knowledge base.”

“Then you’re going to decide, how am I going to effectively use those during the conversation?” Gibbs said. “Am I going to really bring them up all the time? … if you talk to someone and they remember every single thing that you said, it’s going to be really weird, and especially if they repeat it back to you. And so I think it’s not only: how do you store that memory, but – how do you process it, and then also, (an issue of) conceptual classification. When you have a memory, it’s not necessarily just a list of memories. There’s these conceptual spaces of memories.”
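The idea Gibbs describes – grouping memories into conceptual spaces and recalling them selectively, rather than repeating every single thing back – can be sketched roughly as follows. The class and method names here are hypothetical illustrations, not Inworld’s actual API.

```python
# Minimal sketch of a character "knowledge base": memories grouped into
# conceptual spaces, with selective recall instead of replaying everything.
# All names are illustrative, not any vendor's actual API.
from collections import defaultdict

class CharacterMemory:
    def __init__(self):
        self.spaces = defaultdict(list)  # concept -> list of memory strings

    def store(self, concept, fact):
        """File a new memory under a conceptual space."""
        self.spaces[concept].append(fact)

    def recall(self, concept, limit=2):
        # Surface only a few recent memories per topic, so the character
        # doesn't "repeat every single thing" back to the user.
        return self.spaces.get(concept, [])[-limit:]

memory = CharacterMemory()
memory.store("hobbies", "User mentioned they play chess.")
memory.store("hobbies", "User is learning guitar.")
memory.store("hobbies", "User likes hiking on weekends.")
print(memory.recall("hobbies"))  # only the two most recent hobby memories
```

A real system would retrieve by semantic similarity rather than recency, but the shape of the problem is the same: storage is easy; deciding what to bring up, and when, is the design work.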

More Quotes from the Panel

“Most people you talk to, including some C-level executives, are like: ‘well, I built this thing in, like, a weekend, and it’s pretty awesome. Like, why the heck does it take my team, like, six months to ship this thing?’” – Kylan Gibbs, on dev cycles

“Our main priority is creating a really, really good feature. That’s what comes first, and then we care second about the speed of product iteration, and only third about cost. Now that said, we do still care about cost, because it’s a really big bill. One thing is: we can just wait, right? We can just wait for OpenAI to reduce costs, and that’s been a big part of it. There’s actually some interesting learnings we’ve had around fine-tuning for cost. We’ve been working on that, but it turns out that fine-tuning is quite brittle. So even if we can, at a moment in time, fine-tune the feature to reduce costs by five times, we change the feature a little bit.” – Natalie Glance, on working with cost

“Everyone has their own world that they’re coding.” – Kylan Gibbs, on the democratization of coding

The Entrepreneur’s World

I hope that some of this is helpful in showing leaders what to do in the AI playground. For more, keep an eye on the blog. Stay tuned.



Forbes
