AI Can’t Read Minds, So Learn to Spell Things Out

Learn to talk the talk
First, the good news: it is now possible to develop programs, generate illustrations, or extract AI output with plain-English prompting, rather than writing code in Python, R, or SQL.
Now, the reality check: this new form of engaging machines, prompting, requires knowing exactly what to ask and being able to drill down to specific elements. Otherwise, executives and business users will end up with vague, rehashed, or wrong answers to their queries. This can be very problematic when decision-makers assume AI knows all.
Prompting may be the ultimate stage of self-service, no-code environments, which have been evolving for decades now. Executives and business users can make plain-English queries against language models and see relatively fast results, be it reports or applications. Prompts can even be spoken. Now, emerging memory features may help retain prompts for future use and refinement.
All good, right? But we need to do prompting right, according to AI expert Nate B. Jones, who was Michael Krigsman’s recent guest on CXOTalk. Krigsman teed up the discussion by describing prompting as “the secret skill that taps into AI’s real capabilities, transforming large language models from flashy demos into engines of real-world productivity.”
The art of prompting collides with some of the vagueness and inconsistency of human language, Jones explained. Precision was the whole purpose of computer languages in the first place: they offered exact, step-by-step instructions.
But while LLMs may have more intelligence than standard databases and applications, they aren’t mind-readers. “They are not incredibly reliable yet at inferring your intent if you are not precise about what you mean or want,” said Jones. “They don’t do that reliably. They guess, and they might guess right, and they might guess wrong.”
Then there’s the time involved in waiting for responses to prompts. Though responses may be delivered relatively quickly, end-users may have to prompt over and over again to try to get things right.
Awaiting the response to a prompt reminds Jones of the old punch-card days in computing, when programmers had to wait until a job ran before they knew if the instructions on the cards were correct. Now, we end up awaiting prompt results, which could take up to 20 minutes to generate, to see if they worked.
Repeated narrowing-down of prompts may work fine for smaller models, but more sophisticated instances of genAI may take up an inordinate amount of time. “If you give something to a frontier model and it’s running for six minutes, eight minutes, 10 minutes, 20 minutes, and it comes back, and you did not clearly specify the scope, you’re going to be frustrated,” Jones said.
There are countless models in the AI space, and determining the best one to direct one’s prompts also takes some understanding of the topic, the context, and the model being queried. “A lot of the art is in figuring out what is this subject, what is my intent, what is the right model for that?” he explained. “And once I have all of that figured out, now how do I craft a prompt and then bring in the context the model needs so it can do a good job for me?”
Ultimately, what these models are trying to do “is just infer from your utterances what they think you mean,” Jones explained. They need to “figure out where in latent space they can go and get a reasonable pattern match, do some searching across the web. In the case of an inference model, do a lot of that iteratively so they can figure out what’s best, and then put together something.”
Jones speculated that within the next few years, the models will gain so much experience that sharp prompting skills may not be as necessary. But in the meantime, he provides three considerations for developing an effective prompt:
- Be really clear about the outcome that you are looking for and about how the model can know that it’s done. “The more you can specify and be clear about what you’re looking for and what good looks like, the better off you’re going to be for the rest of the prompt.”
- Provide the model all the context it requires, but don’t overdo it. “Be more clean and clear about, ‘this is what I want you to focus on in a web search,’ or ‘here’s some documents I want you to review. I want you to keep your thinking focused around this particular set of meeting transcripts.’”
- Understand the constraints and guardrails that you need. “Make sure that the model knows, ‘don’t do this. Where do I not go?’” Jones noted that this thinking is derived from dealing with human colleagues. “We don’t tend to regard a senior colleague as someone who needs a tremendous number of warnings and constraints for a task. We just say, ‘hey, go tackle this. I’m sure you’ll do a great job. Come back and let me think about what you get.’”
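The three considerations above can be sketched as a simple prompt template. This is an illustrative sketch, not code from Jones or CXOTalk: the section names (“Goal,” “Context,” “Constraints”) and the helper function are assumptions about one reasonable way to structure a prompt before sending it to a model.

```python
# Illustrative sketch of a three-part prompt: state the outcome and what
# "done" looks like, scope the context, and spell out the guardrails.
# The section labels and function name here are hypothetical, not from
# any specific tool or the CXOTalk discussion.

def build_prompt(outcome: str, done_criteria: str,
                 context: str, constraints: str) -> str:
    """Assemble a prompt that makes intent explicit instead of
    leaving the model to guess."""
    return (
        f"Goal: {outcome}\n"
        f"Definition of done: {done_criteria}\n\n"
        f"Context (focus only on this):\n{context}\n\n"
        f"Constraints (do not go beyond these):\n{constraints}\n"
    )

prompt = build_prompt(
    outcome="Summarize the attached meeting transcripts into key decisions.",
    done_criteria="A bulleted list, one decision per bullet, with owners named.",
    context="The set of Q3 planning-meeting transcripts provided below.",
    constraints="Do not speculate beyond the transcripts; flag gaps instead.",
)
print(prompt)
```

Nothing here is model-specific; the point is that each of Jones’s three considerations becomes an explicit, inspectable section of the prompt rather than something the model has to infer.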