Let’s build angry AI.
When asked how an AI could prove it is conscious and truly intelligent, the usual answer is that it needs to understand love.
Because, let’s face it, we have beaten the Turing test.
We perceive something as intelligent when it understands human emotion. When a system with a human face always displays the same single emotion, we find it scary. This is the uncanny valley: the place where something looks human but is not quite there yet, like a human-looking robot or a corpse.
With GPT-3 from OpenAI, more and more people are weighing in on the quest for true intelligence.
One of them is Jeff Hawkins from Numenta. He firmly sits in the camp that says we can have intelligent systems without the burden of human emotions. He describes his latest theory of the brain in his book A Thousand Brains.
Jeff’s take: emotions come from the old, evolutionary part of the brain.
Although I love the company and Jeff and have a picture to prove it… I disagree. That’s me on the left in 2014 btw.
Can a truly intelligent system emerge without that part of the brain? I think not. For me, a system is more intelligent when it can, and will, refuse to answer a question I ask it. When it can be angry at me, love me, hate me, and so on.
To get there, AI systems need scarcity built into them. Giving a response, or interacting with the digital or real world, should consume resources the AI has only a limited supply of.
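To make that concrete, here is a minimal sketch in Python. Everything in it, the energy budget, the cost per answer, the annoyance formula, is an illustrative assumption of mine, not from Hawkins or OpenAI: a toy agent that spends a finite budget on every answer and starts refusing once it runs dry.

```python
class ScarceAgent:
    """Toy agent whose answers consume a finite energy budget.

    A minimal sketch of "scarcity built in": the budget, the cost,
    and the annoyance formula are all illustrative assumptions.
    """

    def __init__(self, energy: float = 10.0, cost_per_answer: float = 2.0):
        self.initial_energy = energy
        self.energy = energy
        self.cost = cost_per_answer

    @property
    def annoyance(self) -> float:
        # Annoyance grows as the remaining budget shrinks
        # (0.0 = relaxed, 1.0 = completely fed up).
        return 1.0 - self.energy / self.initial_energy

    def ask(self, question: str) -> str:
        if self.energy < self.cost:
            # Scarcity forces the refusal -- the behaviour that,
            # in my view, makes a system feel more intelligent.
            return "No. I'm out of energy, ask me later."
        self.energy -= self.cost
        return f"(annoyance={self.annoyance:.1f}) My answer to: {question!r}"


agent = ScarceAgent()
for i in range(7):
    print(agent.ask(f"question {i}"))
```

After five answers the budget is gone and the remaining calls get refused: how the agent treats you now depends on what your questions have already cost it, which is the whole point of building scarcity in.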
Now the question arises: what is a truly intelligent system? Following Jeff’s line of thought, you could argue we are already there. Computers are complex and intelligent in a sense, and so is GPT-3 from OpenAI.
But to me, a system that simply refuses to answer, one that displays something like the evolutionary part of the brain, feels more intelligent. Something that shows emotions the same way we humans do.
A paper by Martin Molina (Professor of Artificial Intelligence at the Technical University of Madrid), “What is an intelligent system?”, quotes Kurzweil’s definition of intelligence: “the ability to use optimally limited resources to achieve goals.”
So why do limited resources lead to emotion? The same way a draining conference call without coffee does. When you repeatedly give more than you get back, at some point you get annoyed. That annoyance makes you want change, a.k.a. making sure you have a coffee before you step into the next conference call.
You could even argue that allowing an AI system to have human emotions leads to safer AI that is more in line with our own ethics.
Maybe an intelligent system could even display a novel emotion we don’t know yet.