Q&A: Microsoft’s Lili Cheng talks about emotionally intelligent machines
For machines to be truly intelligent, some artificial intelligence (AI) researchers believe that computers must recognize, understand and respond to human emotions. In short, machines must be equipped with emotional intelligence: the ability to express and identify one’s emotions, and to empathetically interact with others.
That unequivocally human quality “is key to unlocking emergence of machines that are not only more general, robust and efficient, but that also are aligned with the values of humanity,” Microsoft AI researchers Daniel McDuff and Ashish Kapoor wrote in a recent paper.
Mastering emotional intelligence will enable computers to better support humans in their physical and mental health, as well as learning new skills, wrote McDuff and Kapoor.
The Seattle Times talked with Microsoft’s Corporate Vice President of AI and Research, Lili Cheng, about developments in machine emotional intelligence, and what it means for humans. (The conversation has been edited for length and clarity.)
Q: What is the technology behind machine emotional intelligence?
A: The way the human brain works is there are some things that you’ve learned over time, but then there are other things that you do very intuitively. Right before you get into an accident, your body might tense up and you might not think about (how you would) respond — and that applies to many different emotions, like happiness and joy.
(The paper by McDuff and Kapoor) looked at how humans respond with fear. Could we help automated systems learn from some of the same techniques?
They took a simulation of someone driving, and they measured the person’s pulse to see how they responded when trying to avoid crashing or hitting something.
And then they built that model and applied it to an automated car that’s driving through the same simulation. They found that building an algorithm around the fight-or-flight (reaction) that a person (experiences), and applying it to an automated car, helped the car respond very quickly in some situations.
They’re not just using fear. They’re combining it with other techniques that are more rational, just like a person would, because you’re not always just responding to fear.
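To make the idea concrete, here is a minimal toy sketch of what "combining fear with a rational signal" could look like in a reinforcement-learning-style reward. The function names (`fear_signal`, `combined_reward`) and the specific formulas are illustrative assumptions, not the actual model from McDuff and Kapoor’s paper; the pulse-derived predictor is replaced here with a simple distance-based stand-in.

```python
# Hypothetical sketch: blend a "visceral" fear signal with the usual
# task reward, so the agent is cautious but not driven by fear alone.

def fear_signal(distance_to_obstacle: float) -> float:
    """Toy stand-in for a learned physiological-fear predictor:
    rises sharply as the car closes in on an obstacle."""
    return 1.0 / (1.0 + distance_to_obstacle)

def combined_reward(task_reward: float, distance_to_obstacle: float,
                    fear_weight: float = 0.5) -> float:
    """Mix the rational task reward with a fear penalty.
    fear_weight controls how cautious the agent is overall."""
    return task_reward - fear_weight * fear_signal(distance_to_obstacle)

# The same driving progress scores worse during a near-miss
# than at a safe distance, nudging the agent away from danger.
safe = combined_reward(task_reward=1.0, distance_to_obstacle=10.0)
close = combined_reward(task_reward=1.0, distance_to_obstacle=0.1)
print(safe > close)  # True
```

The design choice mirrors the interview’s point: fear acts as one weighted term alongside the task objective, rather than replacing it.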
They can simulate some things in a virtual environment and still run these tests without having to instrument real cars. It would be very hard to do these sorts of tests in the real world.