You Already Have AI in Your Company – Time to Energize It!

Paul Hebert

This post will be both intellectually deep and shallow. It will follow the rules of the quantum world – it will take both paths simultaneously – blog quantum superposition, if you will. It will be intellectually deep because I’m pulling from a piece posted recently on Edge.org, which describes its mission as:

“To arrive at the edge of the world’s knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.”

Pretty lofty, eh? And to think I actually read it. (Understanding it is another question.) Intellectually shallow because I’m writing it.


The most recent newsletter from the Edge included an audio post from Alex “Sandy” Pentland, a professor at MIT and director of the MIT Connection Science and Human Dynamics labs. He is also a founding member of advisory boards for Google, AT&T, Nissan, and the UN Secretary General. He is the author of Social Physics and Honest Signals. (Evidence he has the best words!)

The conversation he started was around artificial intelligence. I’ll try to get this down to blog-level info-chunks. Here goes…

Artificial intelligence (AI) is simply an algorithm that uses something called a credit assignment function, which reinforces good decisions and doesn’t reinforce bad ones. As you reinforce good decisions, the AI gets smarter. As “Sandy” says (I’ll keep using the quotes – it’s kinda fun)…

“It’s a way of taking a random bunch of things that are all hooked together in a network and making them smart by giving them feedback about what works and what doesn’t.”
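To make that concrete, here’s a minimal toy sketch in Python (my illustration, not “Sandy’s” actual model): a few possible actions, a feedback signal, and a credit assignment step that up-weights what worked and down-weights what didn’t.

```python
import random

# Toy credit assignment: reinforce good decisions, don't reinforce bad ones.
weights = {"action_a": 1.0, "action_b": 1.0, "action_c": 1.0}

def choose_action():
    # Pick an action with probability proportional to its learned weight.
    total = sum(weights.values())
    return random.choices(list(weights), [w / total for w in weights.values()])[0]

def give_feedback(action, worked):
    # The credit assignment step: up-weight what worked, down-weight what didn't.
    weights[action] *= 1.1 if worked else 0.9

for _ in range(1000):
    action = choose_action()
    worked = (action == "action_b")   # pretend only action_b is the "good" decision
    give_feedback(action, worked)

print(weights)  # action_b's weight dominates after enough honest feedback
```

After a thousand rounds of honest feedback, the network “learns” to favor action_b – not because it understands anything, but because the reinforcement keeps nudging it there.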

However, as “Sandy” points out, computers (i.e., software) don’t generalize information very well. Software needs specific rules. It can’t “guess” at the next step just because, in general, it looks like a step it has seen before. That kind of generalization is what would make AI better.

But here’s a bit of a fun fact. That is exactly what HUMANS DO REALLY WELL. We can generalize. We can see patterns that computers can’t. And that makes us better at credit assignment.

“Sandy” also suggests that AI can only work really well when it gets “truthful” feedback on its decisions. In other words, when it does something correctly and gets reinforced, the value of the AI is only as good as the “truthfulness” of that reinforcement decision. If the “up-vote” is based on false data (can you say Facebook and the election?), you get bad results. If it is truthful, you get good results. I’ll also amend that to include not only “truthful” but also “desired”: if the reinforcement is both truthful and desired, the AI will learn that value and keep reinforcing that type of activity.
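Running the same toy loop with dishonest feedback shows why this matters. Below, a hypothetical train() helper flips the feedback signal some fraction of the time, and the “learning” degrades accordingly.

```python
import random

def train(truthfulness, rounds=2000):
    """Toy credit assignment where feedback is only honest some of the time."""
    weights = {"good": 1.0, "bad": 1.0}
    for _ in range(rounds):
        total = sum(weights.values())
        action = random.choices(list(weights), [w / total for w in weights.values()])[0]
        worked = (action == "good")
        if random.random() > truthfulness:   # false feedback flips the signal
            worked = not worked
        weights[action] *= 1.1 if worked else 0.9
    return weights

print("fully truthful feedback:", train(truthfulness=1.0))   # "good" dominates
print("coin-flip feedback:     ", train(truthfulness=0.5))   # no real learning
```

With truthful up-votes the system converges on the decisions you actually want; with coin-flip feedback it just wanders.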

You know what else runs EXACTLY that process (or should)?

Your MANAGEMENT TEAM!

You have the exact same mechanism inherent in your management team that is built into today’s complex artificial intelligence tools. From the article:

“Culture is something that comes from a sort of human AI, the function of reinforcing the good and penalizing the bad, but applied to humans and human problems. Once you realize that you can take this general framework of AI and create a human AI, the question becomes, what’s the right way to do that? Is it a safe idea? Is it completely crazy?”

It is not only not crazy – it is safe – and it is required.

And… what “Sandy” is asking about already exists in most companies today.

It’s called a recognition system.

Your Recognition System is a Human-Powered AI

Your manager-to-employee and peer-to-peer recognition system is a human-powered AI: it regularly up-votes the behaviors you want to see continue. It creates culture through “credit assignment,” and it uses your own people as the arbiters of truth and value.
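Squint and it’s the same loop as the toy sketch above, just with recognition events as the feedback signal. A hedged sketch of that idea (the behavior names are invented for illustration):

```python
from collections import defaultdict

# Toy "Cultural AI": recognition events are the credit assignment signal,
# and the weights are how strongly each behavior is part of the culture.
behavior_weight = defaultdict(lambda: 1.0)

def recognize(behavior, positive=True):
    # Each up-vote reinforces a behavior; a down-vote (or silence) lets it fade.
    behavior_weight[behavior] *= 1.10 if positive else 0.95

# Hypothetical recognition feed from managers and peers:
for event in ["helped a teammate", "helped a teammate", "shared knowledge"]:
    recognize(event)

print(dict(behavior_weight))  # the culture's "weights" after the feedback
```

The point isn’t the code, it’s the mapping: recognition events are the feedback, and culture is the learned weights.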

But most companies don’t see these tools in this light. They see them as necessary evils.

I’ll bet if you relabel your recognition system as “Cultural AI” (I claim trademark on this! And service mark and Marky Mark! Seriously. I mean it!) you’d get your C-Suite to pay attention. And you wouldn’t be lying.

Don’t take my word for it – listen to our MIT genius…

“A key point with AI is that if you control the data then you control the AI. This is what my group is doing—and we’re actually setting systems up on nationwide scales—then I don’t need to know in detail how the decisions are made. But I do need to know what you decide, and on what evidence. As long as I can know what the AI is doing, I can ask if I like it or I don’t like it.”

And isn’t that what your managers are tasked with? Don’t managers provide the feedback on good/bad? Pluses and minuses? Up-votes/Down-votes?

Yes, they are.

Go energize your managers. Get them to use the human AI that’s installed in your company right now and create, manage, and drive the culture you want. You control the data, you control the reinforcement. You control the result.