Office Hellscapes And AI Process Mapping

By John Werner, Contributor


Why are human workplaces so disorganized? In some ways, it’s a question people have been asking themselves ever since the first cubicle dwellers rose up from the primordial swamp – whenever that was.

We know that larger systems tend to be disordered, especially if they’re administered by humans. Just go read Joseph Conrad’s Heart of Darkness, and it might remind you of the modern office – people and products and materials strewn about a gigantic footprint, with very little centralized control.

You get the same kind of idea reading the most recent piece by Ethan Mollick on the site where he posts his essays, One Useful Thing.

I always follow his posts, interested in his emerging take on the technologies that are so new to all of us. Mollick has MIT ties and an excellent track record of looking at the AI revolution from a fresh perspective.

The Office Dilemma

In his most recent piece, he talks about process mapping and how AI can help people sort through the disorganization of a business. Think of a company with 100 or more employees, and probably a dozen locations.

The first thing you tend to find is that sense of disorder. Mollick invokes the “Garbage Can” model of organizational decision-making, which posits that most businesses are a collection of disparate processes thrown into a large, disorganized bin.
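To make “process mapping” a bit more concrete: a process map is essentially a graph of steps and hand-offs. Here’s a minimal sketch of the idea – the step names are hypothetical, mine rather than Mollick’s – showing how even a simple connectivity check can surface the orphaned fragments the Garbage Can model predicts:

```python
# A minimal sketch of a process map as a graph of steps and hand-offs.
# Step names are invented for the example; a real map would come from
# interviews, documentation, or workflow logs.
from collections import defaultdict

handoffs = [
    ("intake", "triage"), ("triage", "approval"), ("approval", "fulfillment"),
    ("legacy_form", "manual_rekey"),  # a side process nobody remembers owning
]

neighbors = defaultdict(set)
for src, dst in handoffs:
    neighbors[src].add(dst)
    neighbors[dst].add(src)  # undirected view, just for connectivity

def fragments(neighbors):
    """Group steps into connected fragments of the process map."""
    seen, groups = set(), []
    for start in list(neighbors):
        if start in seen:
            continue
        stack, group = [start], set()
        while stack:
            step = stack.pop()
            if step in group:
                continue
            group.add(step)
            stack.extend(neighbors[step] - group)
        seen |= group
        groups.append(group)
    return groups

# More than one fragment means disconnected islands of process:
# the "garbage can" in miniature.
print(fragments(neighbors))
```

The point isn’t the code; it’s that once processes are written down as data, finding the disconnected islands becomes trivial. The hard part is the writing-down, and that’s where AI comes in.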

To me, you could use the analogy of what programmers used to call “DLL hell” in the early days of Windows.

DLLs are dynamic-link libraries, shared chunks of code that Windows applications load at runtime. Their management was often chaotic and disordered. There were version dependencies that would flummox even the most seasoned engineers, because installing one application could silently break another.
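For the uninitiated, here’s a toy illustration – library names and version pins invented for the example – of the kind of conflict that made DLL hell so miserable:

```python
# Toy illustration of a DLL-hell-style version conflict.
# Library names and version pins are invented for the example.
requirements = {
    "report_app":  {"chartlib": "2.0"},  # built against chartlib 2.0
    "billing_app": {"chartlib": "1.3"},  # quietly installs chartlib 1.3
}

installed = {}
for app, deps in requirements.items():
    for lib, version in deps.items():
        if lib in installed and installed[lib] != version:
            print(f"CONFLICT: {app} needs {lib} {version}, "
                  f"but {lib} {installed[lib]} is already on the system")
        installed[lib] = version  # last writer wins, breaking the other app
```

Two applications, one shared library, and no coordination: the last installer wins and the other program breaks.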

That’s what a large company is often like.

Everyone for Themselves

Mollick also pointed to some numbers that I’ve seen in various studies, and presented at conferences where we’ve talked about AI over the past year.

His number was 43% – the share of employees who are using AI in the workplace. But as Mollick points out, and as I’ve heard before, most of them are using AI in personal ways. The use of the tools is not ordered across an organization – it’s piecemeal. It’s people using an AI tool the way you would use a hammer, or a saw, or a drill, or a lathe – largely in an unsupervised way.

However, in general, it seems AI is largely catching on, especially when it comes to product development. You have resources like this one from the Texas Workforce Commission, referencing thousands of AI jobs. So even if there’s not much centralized AI in the boardroom, there is abundant AI in business processes. It’s just that those processes may or may not be unified.

The Bitter Lesson

Then Mollick references something called the “Bitter Lesson,” from a 2019 essay by the computer scientist Richard Sutton.

It’s the idea that general methods riding on sheer computation ultimately beat approaches built on hand-coded human knowledge – that given enough time and compute, the system will find its own way to solve problems, without a lot of poking and prodding from us.

That phrase, problem solving, is what people have been saying is the unique province of humans. It’s the idea that AI can do the data-crunching, but people are still doing the creative problem-solving. Well, that bastion of human ingenuity doesn’t seem that safe anymore.

Mollick references the early evolution of chess machines, which eventually culminated in Deep Blue beating Kasparov. He notes that there are two ways to go about this – you can program in innumerable chess rules and have the system sort through and apply them, or you can just show the system thousands of chess games and let it make those connections on its own.
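As a toy sketch of the two routes – my own illustration, not Mollick’s, and certainly not how Deep Blue actually worked – contrast hand-coded piece values with values fitted from (synthetic) game data:

```python
# Two toy ways to score a chess position from piece-count features.
# Features: White's advantage in (pawns, knights, bishops, rooks, queens).
import numpy as np

# Route 1: hand-coded knowledge. A human writes the piece values down.
HAND_WEIGHTS = np.array([1.0, 3.0, 3.0, 5.0, 9.0])

def hand_coded_eval(features):
    return HAND_WEIGHTS @ features

# Route 2: learning. Fit the values from (position, outcome) data instead.
# These "games" are synthetic stand-ins for a real database of positions.
rng = np.random.default_rng(0)
positions = rng.integers(-2, 3, size=(500, 5)).astype(float)
true_values = np.array([1.0, 3.1, 3.3, 5.0, 9.5])  # hidden "real" values
outcomes = positions @ true_values + rng.normal(0.0, 1.0, 500)

learned, *_ = np.linalg.lstsq(positions, outcomes, rcond=None)
print("hand-coded:", HAND_WEIGHTS)
print("learned:   ", learned.round(2))  # recovered from data, not from a human
```

The learned weights land close to the hidden ones without anyone writing chess knowledge into the system – the Bitter Lesson in miniature.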

Back to Machine Learning Principles

Reading through this, I was reminded of the early days of machine learning, where people talked a good bit about supervised versus unsupervised learning.

We often used the analogy of sorting fruit in a piece of software enhanced with machine learning. Supervised learning would be labeling each fruit with its own tag – banana, apple, or grapes. The program would then learn to correlate its training data with new real-world data. That comparison would be its main method. And that comparison isn’t hugely cognitive – it stays close to the tradition of deterministic programming.

The unsupervised version would be to hand the program the same pile of fruit pictures with no labels at all, and let it discover the groupings itself: long yellow things in one cluster, purple or green bunches in another, round red or green things in a third.

Then the system goes out, looks at the pictures, and sorts them without ever being told what a banana is.
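To pin the terms down, here’s a minimal sketch using scikit-learn, with made-up fruit features. The supervised model gets the labels; the unsupervised one gets only the raw measurements and has to find the groupings itself:

```python
# Supervised vs. unsupervised on toy fruit features: (length_cm, roundness).
# All feature values are made up for the example.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

X = [[18, 0.10], [17, 0.20],   # bananas: long, not round
     [2, 0.90],  [2, 0.85],    # grapes: tiny, round
     [8, 0.95],  [7, 0.90]]    # apples: mid-sized, round
y = ["banana", "banana", "grape", "grape", "apple", "apple"]

# Supervised: labels are given, and the model maps new data onto them.
clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.predict([[16, 0.15]]))  # -> ['banana']

# Unsupervised: no labels at all. The model just finds three clusters;
# a human still has to decide what to call them.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.labels_)
```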

The interesting thing here is taking that analogy back to the Bitter Lesson. Is AI more powerful when it simply digests reams of training data and finds its own structure? Or is it more powerful when we hand it our logic up front and ask it to apply it?

Which came first: the chicken or the egg?

The moral of the Bitter Lesson seems to be that the system actually does better through learning at scale. But that learning doesn’t have to involve human oversight. The machine gets a practically infinite set of training data and draws all of its own conclusions. That’s contrasted with an approach where people tell the machine what to do, and it learns from those instructions.

Back in the era of supervised versus unsupervised learning, the unsupervised approach seemed more powerful – and more resource-intensive. AI might finally show us up not by being clever, but by scaling. If I can use one more analogy, it’s the old idea of Laplace’s demon, the thought experiment from the mathematician and physicist Pierre-Simon Laplace, who suggested that an intellect knowing enough about the present state of the world could compute the entire future. In other words, brute force is king. We learned a lot of this in the big data age, before we learned to use LLMs, and now we’re seeing the big data age on steroids.

In Conclusion

I also found a very interesting take at the end of Mollick’s essay where he talks about businesses going down one or the other avenue of progress.

Sure enough, he suggested that these companies are playing chess with each other – that one of these chess teams consists of companies using AI to be logical, and that another chess team consists of businesses using it for brute force programming and classification.

If all of this is a little hard to follow, it’s because we’re pretty securely in the realm of AI philosophy here. It makes you think about not just whether AI is going to win out over human workers, but how it’s going to do it. I forgot to mention the exponential graph that Mollick includes, showing that we’re closer to AGI than most people would imagine.

Let’s look back at the end of this year and see how this plays out.


