Pascale Fung on building robots with empathy (15-04-2016)

Building robots and machines with empathy

Are empathy modules indispensable for the machines and robots of the future? (00:00:00)
What makes you want to introduce empathy in robots? (00:01:35)
Why are empathy modules indispensable for tomorrow's robots? (00:02:04)
Why would we need emotional intelligence for machines? (00:03:15)
What kinds of emotions should be recognized? (00:04:17)
What are the primary emotions? (00:04:44)
How will robots recognize humor and sarcasm? (00:05:53)
Should robots have expressions? (00:06:40)
Should robots compare the way speech is delivered? (00:07:39)
How do you teach robots the difference? (00:08:24)
Can a robot be traumatized? (00:09:35)
Can robots be dysfunctional after a trauma? (00:10:31)
Should robots be able to apologize? (00:11:16)
Should you feel happy with robots? (00:12:13)
Can a robot develop a character? (00:12:38)
Can robots have a nature and be nurtured? (00:13:34)
Do we have to teach robots values? (00:14:41)
Could a robot somehow become a person? (00:15:32)
Can a robot have a self-image? (00:16:10)
How can we monitor robots? (00:17:17)
What is your wildest dream? (00:18:46)
What would a copy of yourself be? (00:20:20)
Could a copy of yourself develop consciousness? (00:21:00)
What will robots look like 25 years from now? (00:21:57)
How do people respond to your work? (00:22:47)
What are the three laws of robotics? (00:24:41)
Can people start seeing robots as friends? (00:25:10)
Can the intelligence of a robot be copied? (00:26:18)
Are people prepared for these kinds of robots? (00:27:18)
Can you explain your research? (00:27:35)
How do robots know something is funny? (00:28:23)
How does a robot learn what is humorous? (00:29:10)
Do we know how unsupervised learning works? (00:30:36)
Do we know how people learn unsupervised? (00:31:04)
What is the condition for having an artificial intelligence system? (00:31:51)
Is human-like intelligence possible for robots without empathy? (00:33:04)
Can you imagine people being afraid of a growing technology network? (00:33:52)
What would be the ultimate condition for creating a good robot? (00:35:12)
Can you imagine a combination of robot and human mind? (00:36:32)
Can you explain more about your wildest dreams? (00:38:44)
Are you not wishing for immortality? (00:41:11)
Could people say that you are reading too much science fiction? (00:43:03)
Automatically generated captions
00:00:00 Speaker 1: You think that empathy modules are indispensable for machines and robots of the future, why?
00:00:08 Speaker 2: Because we are going to have robots in our lives.
00:00:13 They are going to work with us, they are going to take care of us.
00:00:16 So for us to be able to trust them, we need to build a relationship with the robot.
00:00:22 Studies have shown that for us to work with robots, we do need robots to understand our intention and emotion
00:00:30 and not just what we say. It's not just a command-and-control kind of machine.
00:00:34 Speaker 1: Aren't current robots good enough in that respect?
00:00:37 Speaker 2: No, because they take your commands literally right now. They don't understand what you really mean.
00:00:45 You can say the same thing in different tones and with different gestures, and it will mean something totally different.
00:00:51 Suppose you have a robot that is taking care of the elderly and the robot says "how are you doing today"
00:00:57 and you say "I'm not sure." If the robot understands you literally, it just means you are not sure
00:01:07 and the robot will just take off and walk away.
00:01:10 But it really means, what the [inaudible 00:01:12] really is that the patient
00:01:15 or the elderly person is not feeling that great.
00:01:18 So that robot has to deduce that intention and emotion from what you are saying,
00:01:23 and how you are saying it, in order to take care of us better.
00:01:26 Speaker 1: And currently robots aren't ..
00:01:27 Speaker 2: No, they are not able to do that yet; we are working on making them do that.
00:01:35 Speaker 1: What made you actually want to introduce empathy in robots?
00:01:39 Speaker 2: Because I believe that in the future we are going to need robots to take care of people,
00:01:48 to take care of the young, to educate us, to help us with a lot of work.
00:01:52 Since we're going to have them around us, I think it is important that they're more human-like in their empathy.
00:02:02 Speaker 1: Why are empathy modules indispensable for, let's say, tomorrow's robots?
00:02:09 Speaker 2: We want robots to be intelligent, right?
00:02:14 Intelligent robots need to have both cognitive intelligence and emotional intelligence.
00:02:21 This is what we humans have. When we communicate with each other, we use our emotional intelligence all the time.
00:02:30 That is indispensable for understanding each other.
00:02:33 For robots to understand us, they need to have that kind of emotional intelligence, which is empathy.
00:02:38 Speaker 1: Because robots aren't capable of doing that?
00:02:41 Speaker 2: No, not yet. Current robots are most of the time still controlled by explicit commands.
00:02:49 For example you can tell a robot to vacuum your room.
00:02:52 In some restaurants there are robot waiters that will bring you food and they are focused very narrowly on one task.
00:03:00 They are not that much more advanced than your vacuum cleaner now
00:03:05 or the rice cooker. Current robots don't have that kind of emotional intelligence,
00:03:11 but people are starting to put it into some robots and we are starting to see them.
00:03:15 Speaker 1: Why would we need intelligent emotion .. Emotional intelligence for machines?
00:03:19 Speaker 2: So that they can work with us better. They can help us better.
00:03:25 If they need to take care of us, take care of our children, our elderly,
00:03:29 they really need to understand our true intent and emotion in order to take care of us. Say you go to the hospital:
00:03:36 there's a nurse, and what a nurse does is not just take your temperature and look at your vital signs
00:03:43 but also talk to you and see how you are doing, whether you need comforting, whether you need water,
00:03:50 whether you need medicine at this point in time. That requires emotional intelligence and it requires empathy.
00:03:56 Speaker 1: Is it because they are becoming closer to us or because there is a difference of environment?
00:04:02 Speaker 2: Yes the robots are coming more into our lives, into our daily lives and there will be more robots around us.
00:04:10 If they don't have emotional intelligence they are more likely to make mistakes and even hurt us.
00:04:16 Speaker 1: There are a lot of different kinds of emotions.
00:04:20 What kind of emotions should they express or should they at least recognize?
00:04:23 Speaker 2: For example the very first thing we're working on for robots to recognize includes whether the human is
00:04:31 happy, angry, sad, or frustrated or hesitating or even sense of humor.
00:04:40 One of my students is working on recognizing sense of humor.
00:04:44 Speaker 1: Let's talk about the range of emotions. Of course there are different kinds of emotions.
00:04:50 What are, let's say, the primary emotions that they should recognize?
00:04:52 Speaker 2: The primary emotions are happiness, sadness, anger, and neutral.
00:05:00 So you have to be able to tell whether the person is happy. Happy means satisfied; not happy means not satisfied.
00:05:09 Sad means the person needs help, is maybe frustrated.
00:05:14 Angry: if the person is angry at the robot, the robot has to do something in response.
00:05:21 Speaker 1: But people can be ironic or .. [crosstalk 00:05:25]
00:05:24 Speaker 2: Right, we're also working on having robots understand a sense of humor and sarcasm.
00:05:32 We're working on that because we use humor and sarcasm in our daily communications to deflect a situation
00:05:42 or a challenge, or to make a conversation more friendly, to make things go more smoothly.
00:05:51 Robots need to learn to recognize that.
00:05:54 Speaker 1: Even for people it is sometimes difficult to recognize [crosstalk 00:05:58] .. for a robot to recognize that.
00:05:58 Speaker 2: Indeed. So we're teaching robots to watch a lot of comedy shows to learn a sense of humor.
00:06:06 It's machine learning so we think that if we let the robot watch a lot of comedy shows
00:06:14 and observe how people communicate with each other, the so called big-data analytics and use machine learning,
00:06:21 then they will be able to learn..
00:06:23 Speaker 1: So you teach them actually by letting them watch ..
00:06:27 Speaker 2: We feed them comedy shows and movies, yes. And people's daily communications.
00:06:36 A lot of YouTube videos; we feed these to the robot.
00:06:40 Speaker 1: Should it also express somehow ..
00:06:43 Speaker 2: Yes, that's very important.
00:06:47 Studies have shown that humans feel more related to a machine that has some facial expressions right?
00:06:53 That's why we don't really feel a connection to our refrigerator
00:06:58 or our rice cooker: they are just machines. When you see a robot with a cute face, people start going ..
00:07:05 They start talking to the robot in a more human way. They go "so how are you doing" and stuff like that.
00:07:13 Embodiment of the machine with a robotic body with facial expressions is important.
00:07:18 That is also an important area,
00:07:21 different researchers are working on that to allow robots to generate appropriate facial expressions,
00:07:28 appropriate gestures, for example to say "hi" or to shake your hand and all that.
00:07:34 Speaker 1: Because of tone of voice ...
00:07:35 Speaker 2: [crosstalk 00:07:36] The tone of voice .. is very important too.
00:07:38 Speaker 1: You say that a robot should not only learn to understand the content of speech
00:07:44 but should also compare it with the way it is delivered. Can you explain that?
00:07:52 Speaker 2: So if I say something like,
00:07:54 "I'm really happy I'm going to work today," you probably think that I'm truly happy that I'm going to work today..
00:08:00 But if I say "I'm really happy I'm going to work today ..
00:08:06 " Even though the words I speak are the same, the emotion is totally different.
00:08:11 In the second case you know I'm being sarcastic and I'm not really feeling like going to work.
00:08:16 So the tone of voice is important. We use tone of voice in our way of expressing our meaning a lot.
00:08:23 Speaker 1: But how do you teach a robot to do that?
00:08:25 Speaker 2: So again, we teach the robot to listen to different kinds of tone of voice.
00:08:34 We show the robot examples of the same sentences in an angry voice, a happy voice, a nervous voice, and a frustrated voice.
00:08:43 We tell the robot "Look this is the kind of voice that expresses anger,
00:08:47 this is the kind of voice that expresses frustration" and the robot learns.
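The teaching-by-labeled-examples described here is, in machine-learning terms, supervised classification. A minimal sketch, assuming entirely hypothetical acoustic features (mean pitch, loudness, speech rate) and made-up training values; a real system would extract such features from audio and use a far richer model:

```python
# Toy supervised emotion classifier over tone-of-voice features.
# Features are hypothetical: (mean pitch in Hz, loudness 0-1, syllables/sec).

from collections import defaultdict
import math

# Human-annotated examples: the "this voice expresses anger" labels
# the interview describes.
TRAINING = [
    ((220.0, 0.9, 6.0), "angry"),
    ((230.0, 0.8, 5.5), "angry"),
    ((180.0, 0.6, 4.0), "happy"),
    ((175.0, 0.5, 4.2), "happy"),
    ((120.0, 0.3, 2.5), "sad"),
    ((115.0, 0.2, 2.8), "sad"),
]

def centroids(examples):
    """Average the feature vectors per emotion label."""
    sums = defaultdict(lambda: [0.0, 0.0, 0.0, 0])
    for feats, label in examples:
        acc = sums[label]
        for i, v in enumerate(feats):
            acc[i] += v
        acc[3] += 1
    return {lab: tuple(a / acc[3] for a in acc[:3]) for lab, acc in sums.items()}

def classify(feats, cents):
    """Nearest-centroid: pick the emotion whose average voice is closest."""
    return min(cents, key=lambda lab: math.dist(feats, cents[lab]))

cents = centroids(TRAINING)
print(classify((210.0, 0.85, 5.8), cents))  # close to the "angry" examples
```

After seeing enough labeled voices, the model generalizes to new utterances, which is exactly the "the robot learns" step in the answer above.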
00:08:53 Speaker 1: Somehow we learned that as children.
00:08:55 Speaker 2: Yes we did, we do as children.
00:08:59 Speaker 1: How does the robot learn that?
00:08:59 Speaker 2: So children have multiple sensor input, right?
00:09:05 Children perceive the world not just by what people are saying and how they are saying it; they also look at things.
00:09:12 So right now we are working towards enabling robots to have multiple sensory input as well.
00:09:18 So the robot now learns from the speech you're saying, what you're saying,
00:09:25 but also learns from how you're saying it. We also enable the robot to look at your facial expression
00:09:30 when you say it, so that enables the robot to learn better.
00:09:35 Speaker 1: As a child you learn by experience, a robot learns by experience?
00:09:40 Speaker 2: Yes.
00:09:44 Speaker 1: As a child you can be traumatized by experience.
00:09:46 Speaker 2: Right.
00:09:47 Speaker 1: What about robots? Is it possible somehow for a robot to get trauma?
00:09:51 Speaker 2: Robot trauma, I would say .. A robot can be misled. Okay, if you feed a robot some information ..
00:10:00 It's like how we teach our children. Robots can be traumatized too because they are machines.
00:10:06 Machines can have machine errors, either from the embedded coding, the program, or from experience,
00:10:13 meaning the data they learned from.
00:10:15 Speaker 1: It's a different kind of trauma than the human trauma.
00:10:21 Speaker 2: They can be led astray and then they can have errors. Is it different from human trauma?
00:10:29 It's different; machine trauma, I would say.
00:10:32 Speaker 1: Children can be dysfunctional because of their trauma, or can't behave very well.
00:10:39 I mean, what about robots?
00:10:41 Speaker 2: Robots can be dysfunctional, they can stop functioning properly if they have had a machine error.
00:10:47 If the error is caused by their experience you can call that trauma. They can be traumatized into misbehaving, indeed.
00:10:56 Speaker 1: Is it going to be grumpy?
00:11:00 Speaker 2: Grumpiness means they work slowly or they don't respond in a friendly manner;
00:11:09 they may not even answer. You can perceive that as grumpiness, so machines can be grumpy, yes.
00:11:15 Speaker 1: You also think that robots should apologize somehow?
00:11:19 Speaker 2: Of course!
00:11:24 Speaker 1: What do you mean by robots should apologize?
00:11:26 Speaker 2: Robots make errors like humans do.
00:11:28 These days when there is a machine error, you might just see a blue screen and it just says,
00:11:34 "machine error 404." That is not very friendly, we cannot have robots do that.
00:11:39 Machines are bound to make errors; there's not going to be a perfect machine that never makes errors.
00:11:45 Just like humans,
00:11:46 there are no perfect humans that never make any mistakes. It is important for an empathetic machine to apologize like
00:11:54 humans do, because that will keep the communication smooth. That will keep the conversation going with the human user.
00:12:01 Speaker 1: Because that is actually the most important thing to keep [inaudible 00:12:05]
00:12:06 Speaker 2: Indeed, yes, it is most important to keep the conversation smooth and the human-robot communication smooth.
00:12:11 Speaker 1: But somehow you suggest that you should feel happy with your robot. Is that correct?
00:12:22 Speaker 2: Humans should feel happy
00:12:23 when they are communicating with the robot at least they can feel related to the robot in a friendly sort of way.
00:12:31 It is important for humans to feel that in order for us to trust the robot, to work with the robot.
00:12:37 Speaker 1: We develop a character; when we are born we have a sort of character, maybe genetically.
00:12:43 But we develop that character through the experiences we have.
00:12:48 Can a robot develop a character as well, in your opinion, in your model?
00:12:52 Speaker 2: So, when we build empathetic robots we are very careful in also designing the robot's personality.
00:12:59 This is what we call robot personality. In the beginning we will be designing the personality.
00:13:06 This is similar to the predisposed personality humans are born with. As we go on we also let the robot learn personality
00:13:16 from data, from a particular personality type.
00:13:21 For example,
00:13:21 a robot can imitate a particular person's personality type over time by observing how that person communicates with
00:13:28 other people. A robot can also be nurtured into developing its personality.
00:13:35 Speaker 1: With people there is the discussion of nature [crosstalk 00:13:39] versus nurture.
00:13:42 What about robots?
00:13:42 Speaker 2: Robots also. There's nature and nurture. Nature, robot nature, comes from our laboratories, right?
00:13:49 It comes from the design of the robot by humans, by engineers; that's the nature.
00:13:55 That's when the robot comes out of the lab and the robot has that personality. But then as we ...
00:14:02 Since robots have machine learning algorithms in them, they will also learn from a particular kind of ..
00:14:08 You know, they will learn from the environment, and then they will continue to develop their personality. For example,
00:14:15 in the beginning we ensure that the robot is not racist. That is predisposed, what we design at the beginning.
00:14:24 Over time when the robot sees whatever is out in the world, there can be racist comments and all that
00:14:31 but the robot will reject that and will not absorb it into the robot's personality.
00:14:35 The robot will make a judgment and say "oh, that is racist and I should not learn that."
00:14:41 Speaker 1: That suggests that you put moral codes into the robot ..
00:14:44 Speaker 2: In the beginning, yeah, in the beginning we need to teach robots values and personality.
00:14:50 Speaker 1: Doesn't it depend on what kind of culture you sit in?
00:14:55 Speaker 2: Yes.
00:14:57 [crosstalk 00:14:57] So we, the people who work on robots, all have this responsibility;
00:15:03 we're all like parents to robots.
00:15:05 Like parents to our children, in the beginning we do teach our robots in certain ways, with a certain code.
00:15:12 Then we let them run.
00:15:14 Very much like how we nurture our children: when they reach adulthood we just let them go. In fact,
00:15:20 when they're younger we send them to schools, so we don't always teach them everything ourselves.
00:15:25 We send them to school, we send them out into the world, they learn from the environment,
00:15:29 so that's what we do with robots.
00:15:35 Speaker 1: You talk a little bit as if it's becoming a person somehow.
00:15:39 Speaker 2: Robot is .. So we're building robots to be a little bit more like us, more like a person, indeed.
00:15:47 Because humans can communicate with another human being better. We cannot ask humans to speak the robot language right?
00:15:56 So we want the robot to speak our language and understand us.
00:16:00 With that, the personality, the robot values,
00:16:05 all of that is necessary in order for us to communicate with the robot better.
00:16:10 Speaker 1: Do robots have a sort of self-image, in the sense of "I can think about myself"?
00:16:17 Can a robot think about itself as well?
00:16:19 Speaker 2: The robot can certainly be taught to think about itself, yes, or to show the behavior of thinking about itself,
00:16:29 meaning the robot can simulate the behavior of somebody who is thinking about himself.
00:16:35 Whether there is consciousness within the robot we don't know, because we don't understand consciousness.
00:16:40 Speaker 1: That's what everybody's saying, "yeah you can program it but it's always pre-programmed.".
00:16:45 Speaker 2: It's not always pre-programmed. There are certain things ..
00:16:48 This is what I was trying to say: we pre-program the machine's way of learning.
00:16:53 So we pre-program robots to learn, just like humans are also programmed to learn. Part of it is pre-programmed.
00:17:03 This is what we call nature.
00:17:06 That pre-programming also allows the robot to learn,
00:17:09 to pick up from the physical world from interacting with human beings for their knowledge
00:17:15 and for their personality even.
00:17:18 Speaker 1: How can we monitor that?
00:17:20 Speaker 2: Can we monitor them?
00:17:24 Speaker 1: I mean, we monitor people, we tell people not to behave like that .. Good impression ..
00:17:36 A sort of punishment for bad behavior, for example.
00:17:38 Speaker 2: Different robots, depending on their purposes ...
00:17:40 If they are supposed to take care of patients and they make mistakes, then we think there's going to be ..
00:17:46 There must be a machine error; then we will check the machine, we check the code
00:17:50 and try to spot that error. We don't really punish robots per se at this point,
00:17:57 but it can be imagined that in some learning algorithms we can incorporate what is a reward
00:18:02 and what is a punishment so that they learn the proper things.
00:18:05 Speaker 1: For example, there is a discussion about autonomous cars [crosstalk 00:18:10]: who is responsible?
00:18:13 When robots come closer you'll get the same kind of question, because they can harm somehow .. [crosstalk 00:18:20]
00:18:20 Speaker 2: Indeed, indeed ..
00:18:21 If they accidentally harm somebody you say "you know, robot, you shouldn't do that" and they should learn from that.
00:18:29 Punishment, perhaps not like human punishment; I don't think we can hit the robot
00:18:33 and the robot will feel "ooh," not that kind of punishment.
00:18:37 It's in the algorithm: there's a cost function for whether they are doing the right thing or not.
00:18:43 There will be a cost function with positive and negative values.
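The cost function with positive and negative values described here is essentially the reward signal used in reinforcement-style learning. A toy sketch with hypothetical actions and reward numbers (not any actual robot's reward scheme): helpful actions earn positive values, harmful ones negative, and repeated experience shifts the robot's preferences accordingly.

```python
# Toy "reward and punishment" learning: each action's learned value is
# nudged toward its observed reward, a positive or negative number.
# Actions and reward values are hypothetical.

REWARD = {"fetch_water": +1.0, "bump_patient": -1.0, "idle": 0.0}

def update(values, action, alpha=0.5):
    """Move the learned value of an action toward its observed reward."""
    values[action] += alpha * (REWARD[action] - values[action])
    return values

values = {a: 0.0 for a in REWARD}
for _ in range(10):            # repeated experience
    for action in REWARD:
        update(values, action)

best = max(values, key=values.get)
print(best)  # the helpful action ends up with the highest value
```

No one hits the robot, as she says: the "punishment" is only the negative number driving the value of `bump_patient` down.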
00:18:46 Speaker 1: What's your wildest dream in terms of robots?
00:18:48 Speaker 2: My wildest dream is to have my memory, my sensory abilities, my intelligence,
00:19:01 my emotional intelligence whatever it is all be downloaded to a robot with an android body
00:19:09 and then that robot will continue functioning as me when I'm no longer in this world..
00:19:15 Speaker 1: You want to make a copy of yourself basically ..
00:19:18 Speaker 2: I want to make a copy of myself, that will be interesting.
00:19:21 Speaker 1: Do you think that's feasible?
00:19:23 Speaker 2: I think to some extent we're already doing that..
00:19:27 I'm not sure it's feasible within my lifetime but I think it is feasible. The robot will have ..
00:19:36 Will be equipped with our perception abilities, and our intelligence, our emotional intelligence,
00:19:41 and our ability to learn.
00:19:45 There are people who build robots with a really lifelike form, with very lifelike skin and eyes and all that.
00:19:54 When we put this together the robot will have the ..
00:19:59 Will be embodied in an almost human like body so they will pick up signals from the world as we do.
00:20:06 I think that is feasible. I'm not saying that will be me, that will be just a copy of me
00:20:12 and that will not necessarily have the consciousness of me. I'm not talking about me living forever.
00:20:17 I'm talking about a copy.
00:20:19 Speaker 1: But what would it be?
00:20:21 Speaker 2: What would it be? A very good question. A copy. Just enjoy it as a copy of a human.
00:20:29 Speaker 1: What would a copy of you mean? What would it do?
00:20:35 Speaker 2: It would do what I would do under given circumstances.
00:20:41 For example, maybe it can go and lecture in the university, teach students.
00:20:45 It can learn like I do from the world, maybe it can perform research. It can build other robots..
00:20:52 Speaker 1: But it won't be you ..
00:20:54 Speaker 2: It will not be me it will be a copy of me.
00:21:00 Speaker 1: But suppose it would be a copy of you, would it develop its own consciousness? It's quite hard to imagine.
00:21:08 Speaker 2: I know, I know. We don't know what consciousness is, right? It's almost a philosophical question.
00:21:16 Does it exist?
00:21:19 Once we have all this kind of sensory input intelligence
00:21:23 and learning in place would there be a point where there is indeed a consciousness? I don't know. I don't know.
00:21:30 We don't know what consciousness is or where it comes from.
00:21:32 Speaker 1: In terms of the old philosophers, let's say, you could say that robots [inaudible 00:21:37]
00:21:36 think, therefore they are.
00:21:39 Speaker 2: Yes, indeed. I think, therefore I am. Indeed, indeed.
00:21:44 A robot can conclude that, but even then the robot may just have the behavior of someone with consciousness.
00:21:50 We still don't know what really has this "me-ness," this self-consciousness.
00:21:56 Speaker 1: If you look, let's say, 25 years ahead, what would robots look like, in your perception?
00:22:06 Speaker 2: So in 25 years I believe there will be robots .. Not all robots, but some robots will look just like us.
00:22:14 They will talk just like us and behave just like us. It's possible in 25 years.
00:22:19 They can move and gesture exactly like humans. Some of these robots, which are called androids, will be very, very um ...
00:22:28 You cannot tell the difference between them and humans. Then there are other robots.
00:22:33 We still need other robots to help us, other robotic machines like the vacuum cleaner.
00:22:37 The vacuum cleaner is not going to have a human head; that would be kind of creepy.
00:22:41 So we still have other robots that don't look like humans, so there will be a variety of robots among us.
00:22:48 Speaker 1: You are working to make it into a reality.
00:22:52 When you expose this scientific work, what's the response from people?
00:22:57 Speaker 2: Some people feel threatened. Some people question why robots need to be more human like.
00:23:07 They think that robots will then be challenging us and taking over.
00:23:12 Others, especially children, think it's just really cool. They cannot wait for that to happen.
00:23:19 I think depending on the culture people come from and what they expect of machines, they have different reactions.
00:23:26 For the most part, for example I talk to doctors, medical doctors
00:23:30 and they love the idea that robots will have empathy towards patients so that they can better take care of patients.
00:23:37 Speaker 1: Can you imagine that people somehow become scared or afraid?
00:23:41 Speaker 2: Some people are already scared. Some people are already scared of robots.
00:23:47 I think when people see robots become more human like they imagine all sorts of things.
00:23:53 I think one reason is that there have been too many science fiction movies that portray robots as threatening and menacing,
00:24:00 but that's just science fiction. People should not be swayed by fiction.
00:24:09 In reality, we're all building robots to help people.
00:24:13 Nobody's building robots to purposely destroy humankind and that kind of thing...
00:24:23 Speaker 1: [inaudible 00:24:26] .. Let robots be compelling and empathetic ..
00:24:26 Speaker 2: We're trying to build robots that are friendly.
00:24:30 If robots have empathy, then they will never harm people, right? That's why empathy is important.
00:24:38 If they do have empathy they will never hurt us.
00:24:41 Speaker 1: There are some robot laws made by Isaac [inaudible 00:24:44] [crosstalk 00:24:46].
00:24:45 Speaker 2: The three laws of robotics .. I don't remember all of them.
00:24:52 Speaker 1: But actually one of them is to help people ...
00:24:56 Speaker 2: Yeah, to help people and not to harm them ..
00:24:58 Speaker 1: That's your purpose.
00:24:58 Speaker 2: Indeed, indeed. So one of the three laws of robotics is for robots to help people.
00:25:03 I think all the robots that are being worked on today are trying to do that.
00:25:08 Speaker 1: Could you imagine that people start to see robots, if they are so friendly, as a friend? Really as a friend?
00:25:15 Speaker 2: I hope so, I hope people will see robots as friends because they're friendly.
00:25:20 If we can see robots as friends we can trust them to help us.
00:25:24 Speaker 1: Will they be friends forever? Companions somehow?
00:25:32 Speaker 2: We are building robots to be people's companion.
00:25:38 To be companions to children, to be companions to the elderly when they're lonely and all that.
00:25:46 Like all machines, the hardware deteriorates, right?
00:25:50 So, like your iPhone or your smartphone, you might buy the next-generation smartphone ..
00:25:55 So for robots: are we going to have a robot that lasts forever? I don't think that's the purpose right now.
00:26:02 So you might have the next generation of the same robot,
00:26:04 but that robot will have the same personality as the first robot and will have the same memory. So yeah,
00:26:10 in that sense you can have a robot companion forever. But the body might have to be changed from time to time.
00:26:18 Speaker 1: Can the intelligence be copied to another robot? How do you imagine that?
00:26:25 Speaker 2: The intelligence of a robot is completely software based. We can make multiple copies of the same software.
00:26:30 We can have the same robot, same personality, same memory, in different robotic bodies.
00:26:35 So we can have multiple robots that sound the same, behave the same, and do the same thing.
00:26:41 Speaker 1: And copy their experience?
00:26:44 Speaker 2: And copy the experience because it's software based.
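Because the intelligence is completely software based, copying it amounts to duplicating program state. A minimal illustration, with hypothetical fields standing in for personality and memory: each body gets an independent deep copy of the same "mind," after which their experiences can diverge.

```python
# Toy illustration: one software "mind", multiple robot bodies.
# The fields and their values are hypothetical placeholders.

import copy

robot_mind = {
    "personality": {"humor": 0.8, "patience": 0.9},
    "memory": ["met Alice", "learned joke #12"],
}

# Same personality, same memory, in different robotic bodies:
body_a = copy.deepcopy(robot_mind)
body_b = copy.deepcopy(robot_mind)

body_a["memory"].append("served tea")  # only body A has this experience
print(body_b["memory"])                # body B's memory is unchanged
```

A deep copy (rather than a shared reference) is what makes the copies behave like separate individuals from the moment of copying.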
00:26:47 Speaker 1: Somehow people love their machines like they love their cars.
00:26:53 They start loving their robot as well, probably.
00:26:56 Speaker 2: I think people will love their robots if the robots are friendly, empathetic, cool, have a sense of humor,
00:27:03 who wouldn't love them?
00:27:04 People will love them like they love their motorcycles
00:27:08 and their cars indeed. But they also might want to have multiple robots,
00:27:12 a different version with a different personality just like some people like multiple cars of different styles.
00:27:17 Speaker 1: Do you think we are prepared for those kinds of robots?
00:27:21 Speaker 2: I think people are prepared; we already saw men on the moon 40 or 50 years ago.
00:27:29 It's high time we see robots among us and I think people are ready for robots as long as they are friendly.
00:27:34 Speaker 1: Language is very important, and the intention behind language. Explain a little bit of what you're doing.
00:27:43 Speaker 2: What we're doing is, um .. For example, take the example of humor, as I explained before.
00:27:51 Humor comes from not just the words you use but the tone of voice and even your facial expressions.
00:27:58 The same words expressed in different contexts can be humorous or not humorous.
00:28:03 So what we do is,
00:28:05 we program machines to have learning algorithms so they learn from watching a lot of comedy shows for example
00:28:13 and YouTube videos
00:28:14 and figure out what humor is. So the next time somebody says something humorous the machine will know
00:28:20 and be able to laugh, for example.
00:28:23 Speaker 1: How do they know that it's funny?
00:28:27 Speaker 2: How do they know it's funny? From learning.
00:28:28 The learning algorithms enable the machine to see examples, many, many examples: millions of sentences,
00:28:37 thousands and tens of thousands of TV shows where people laugh. For example, humor consists of ..
00:28:44 How do you tell a joke? Usually there's a setup, a trigger, and a punch line.
00:28:52 The machine will see that in all these comedy shows, like humans do, and then it will pick up
00:29:00 when there will be a punch line, and it knows that is humor. The machine learning
00:29:04 algorithm we use currently is what is commonly known as a deep learning algorithm.
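The setup-trigger-punchline learning outlined here can be caricatured with a tiny supervised model: in shows "where people laugh," lines followed by laughter are positive examples. This toy word-count scorer is only a stand-in for the deep learning she mentions, and the transcript lines are invented:

```python
# Toy punchline learner: lines followed by a laugh track are positive
# examples, other lines negative. Real systems use deep learning over
# far richer features; this only illustrates the supervision signal.

from collections import Counter

# (line, was_followed_by_laughter) -- a made-up mini-corpus
TRANSCRIPT = [
    ("so a robot walks into a bar", False),                  # setup
    ("the bartender says we don't serve robots", False),     # trigger
    ("the robot says that's fine I only drink oil", True),   # punchline
    ("anyway the weather was nice", False),
    ("I only drink oil on weekends", True),
]

laugh_words, other_words = Counter(), Counter()
for line, laughed in TRANSCRIPT:
    (laugh_words if laughed else other_words).update(line.split())

def punchline_score(line):
    """Naive score: words seen before laughter count for, others against."""
    return sum(laugh_words[w] - other_words[w] for w in line.split())

print(punchline_score("I drink oil"))  # positive: punchline-flavored words
```

With millions of sentences instead of five, the same laughter-as-label idea is what lets the machine pick up when a punchline is coming.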
00:29:10 Speaker 1: How does this robot learn what's humorous?
00:29:15 Speaker 2: We use machine learning to teach robots to learn about our emotions and sentiments, including humor.
00:29:22 In machine learning there are two kinds of approaches.
00:29:27 One is what we call supervised learning; the other is unsupervised. In supervised learning we actually give machines
00:29:33 examples and we have the data annotated by humans.
00:29:38 Humans say, "look, this is a sentence that's humorous," "this is a sentence that's a punchline for a joke,"
00:29:43 and "this is where people laugh." That's called supervised learning,
00:29:46 and machines learn from that to have a sense of humor. Unsupervised learning is more like how humans learn.
00:29:53 We don't tell the machine explicitly "this is humorous, this is not humorous"; we give the machine a lot of data
00:30:00 and let the machine learn from context.
00:30:03 So the unsupervised learning is really what we hope to achieve in the future.
00:30:07 If machines can learn unsupervised, then we don't need humans explicitly teaching machines all the time:
00:30:14 this is humor, this is happiness, and all that. That would save a lot of effort for the human annotators.
00:30:19 Unsupervised learning is harder though, because it will require a lot of general learning abilities
00:30:27 and general deduction and induction abilities. I believe it will require machines to have multi-sensory input.
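The distinction she draws could be illustrated with a minimal sketch: where the supervised learner gets human labels, an unsupervised learner gets only raw sentences and must discover the grouping itself. Here that is reduced to greedy clustering by word overlap; the sentences, the `overlap` helper, and the 0.2 threshold are all invented for the illustration:

```python
# Minimal unsupervised sketch: group unlabeled sentences by word overlap.
# No human provides labels; structure is discovered from the data alone.

def overlap(a, b):
    """Jaccard similarity between the word sets of two sentences."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def cluster(sentences, threshold=0.2):
    """Greedy grouping: attach each sentence to the first cluster whose
    seed sentence it overlaps with enough, otherwise start a new cluster."""
    clusters = []
    for s in sentences:
        for c in clusters:
            if overlap(s, c[0]) >= threshold:
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters

sentences = [
    "the robot told a funny joke",
    "the robot told another joke",
    "stock prices fell sharply today",
    "stock prices rose again today",
]
groups = cluster(sentences)
print(len(groups))  # → 2 (joke sentences vs. stock sentences)
```

Real unsupervised learning works on far richer signals than word overlap, which is why she frames it as the harder, longer-term goal.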
00:30:36 Speaker 1: Do we know how unsupervised learning works?
00:30:39 Speaker 2: So cognitive scientists have been studying how humans learn. They believe that humans have ..
00:30:48 Of course we are born with some kind of learning abilities, innate.
00:30:52 When babies are born they already know how to recognize their mother's face and voice.
00:30:59 They are already picking that up from the 10 months in the mother's tummy..
00:31:03 Speaker 1: Let's just [inaudible 00:31:05] .. Do we know how people learn unsupervised?
00:31:09 Speaker 2: We don't know exactly how we learn unsupervised. We're trying to model that.
00:31:15 So another side of working on robots is that
00:31:18 when we want to build robots to be more human-like, we have to build models of human thinking.
00:31:24 As we research how to make robots that have a sense of humor, we understand the human sense of humor better.
00:31:32 As we learn, as the machines learn, we also learn how we function.
00:31:38 We don't know exactly how we learn in an unsupervised fashion, but we're trying to research that.
00:31:47 That's a very important research direction.
00:31:50 Speaker 1: There's a lot of talk about artificial intelligence,
00:31:55 what is let's say the condition of having an artificial intelligence system in your opinion...
00:32:02 Speaker 2: The condition of artificial intelligence .. We are trying to ..
00:32:06 We are working towards what some people call strong AI, which is a general-purpose AI system or general-purpose robot.
00:32:14 Today we're still only working on single-purpose or multi-purpose robots that can do one task at a time.
00:32:22 You've heard of [inaudible 00:32:25] beating world champion [inaudible 00:32:27]
00:32:28 and there are systems that can lift heavy weights and assemble cars. These are all single-purpose robotic systems.
00:32:36 So we are working towards a general-purpose robotic system that can really be your companion and take care of people.
00:32:42 In that case, the robot must have intelligence that is more human-like. For human intelligence, empathy is indispensable.
00:32:51 It must recognize not just what people are saying but how people are saying it, what you truly mean.
00:32:59 So empathy is important. Emotional intelligence is a huge part of human intelligence.
00:33:04 Speaker 1: You suggest there couldn't be intelligence without [crosstalk 00:33:08]
00:33:07 Speaker 2: We cannot say a system or robot has human-like intelligence without empathy.
00:33:14 Speaker 1: Is that what is lacking somehow in the discussion?
00:33:19 Speaker 2: It has been lacking in our discussion of building AI systems for the last 30 years, but it's coming now.
00:33:29 I'm happy to say I've been talking about it and some other researchers are also working on this..
00:33:36 Speaker 1: And because you are working in this field [crosstalk 00:33:38]
00:33:38 Speaker 2: That is a new direction, and people in general agree with this direction, so yeah,
00:33:46 research is working towards this goal as well. There is agreement on that.
00:33:51 Speaker 1: Can you imagine people being afraid of you know, this huge intelligent network that is going to conquer us
00:33:58 or something like that?
00:33:59 Speaker 2: I think people are afraid of the unknown. People are always afraid of the unknown.
00:34:05 I think if we went back in time to the 1950s and described today's internet and smartphones, how we use them,
00:34:14 and how we can get access to good material as well as bad material at our fingertips..
00:34:19 If we told people in the 50s about this,
00:34:23 they would also be very, very afraid. What happens is that we adapt to technology just like technology adapts to us.
00:34:31 It doesn't happen overnight.
00:34:34 We've been living with artificial intelligence for a long time already, starting with all those automatic calculators,
00:34:42 and then we take an airplane without being afraid that it's actually being flown by a computer. So we've been living with
00:34:51 artificial intelligence for a long time; it's just that gradually we are going to get used to the evolution of such
00:34:58 intelligent machines.
00:34:59 They will gradually be able to talk to us and they will gradually be able to empathize with our feelings
00:35:05 and they will gradually be able to do more to help us.
00:35:09 We'll get used to them step by step; it doesn't happen overnight.
00:35:12 Speaker 1: What would be the ultimate condition to generate, let's say artificial intelligence [inaudible 00:35:26]
00:35:29 We already [inaudible 00:35:28] it's no problem.
00:35:30 Can you talk about robots with a heart, can you talk a little more about the necessity of this heart?
00:35:35 Speaker 2: Yes, so without empathy, a robot would never behave and learn and understand like a human being.
00:35:45 So without empathy I would say that the robot would not be human-like and its intelligence would be limited.
00:35:53 Speaker 1: So you think it is of the utmost importance that the entire discussion about artificial intelligence
00:35:56 should include ..
00:35:57 Speaker 2: The entire discussion of artificial intelligence does include that today.
00:36:07 I'm one of the people who champion it,
00:36:11 and there is general agreement that it is needed for artificial intelligence.
00:36:17 People work on different components of artificial intelligence.
00:36:21 Those of us who work on emotion recognition certainly see it as our job to make that happen,
00:36:30 to make robots have empathy.
00:36:32 Speaker 1: If you could, let's say, look further
00:36:35 and further into the future, can you imagine there is a combination of humans and robots
00:36:45 and artificial intelligence that goes beyond the stars, for example, or whatsoever?
00:36:48 It's larger than we are [inaudible 00:36:55] its intention is [crosstalk 00:36:57] can you imagine?
00:36:53 Speaker 2: Yes, what's interesting today, what's happening already, is that my mind
00:37:05 and your mind are no longer limited by our own life experience.
00:37:11 Twenty years ago I wouldn't know how to respond to a lot of questions that I'm not an expert in.
00:37:17 Today anybody with access to the internet
00:37:19 and Wikipedia can tell you a lot of things about a specific topic. Our human mind and our human knowledge have evolved.
00:37:28 We're already connected to this vast network of minds, so you can pull up a YouTube video to learn how to cook any kind
00:37:38 of food.
00:37:39 You can pull up a Wikipedia page to learn about any particular technical topic or political topic or some history,
00:37:48 and that happens instantaneously. So is that part of my mind already, or part of the world? We are already connected.
00:37:56 So in the future when robots enhance our physical abilities they will also enhance our mental abilities.
00:38:03 When that happens, there will be basically, on top of the internet we have access to,
00:38:08 we also have these robots that enhance our ability to understand the knowledge.
00:38:15 That will be another layer of intelligence that enhances human intelligence.
00:38:20 Just like today there's robotic systems that help people who cannot walk, walk.
00:38:27 Those exoskeleton robots that can help people become stronger physically.
00:38:34 Robots can also enhance our intelligence to enable us to know more, to be able to do more and think better,
00:38:41 with the help of the robots.
00:38:45 Speaker 1: Earlier on I asked you about your wildest dreams; can you explain a little bit more intensely what your wildest
00:38:52 dream is?
00:38:53 Speaker 2: To build robots with all the components of human intelligence, human learning abilities, human perception,
00:39:04 human memories and human judgment, human values. So a long list of these things.
00:39:14 My wildest dream would be to be able to do that and teach it to a robot. For example, for a robot to copy all that from me:
00:39:22 my personal experience, my memory, my judgment, my values, which evolve as well. It's like a copy of a person, of me.
00:39:35 When I'm not around, that copy will continue, maybe continue to talk to my children's children.
00:39:42 They know it's not me, they know it's not mommy, but it's a copy.
00:39:45 That would be interesting, to replicate before I die, so I can see if it's really me or not..
00:39:53 Speaker 1: It's a replica? That's actually what you ..
00:39:55 Speaker 2: A replicant? Yeah you've seen Blade Runner right? I'm a huge fan of Blade Runner.
00:40:03 Yeah, my wildest dream would have replicants, but replicants who also know they are replicants.
00:40:08 They don't fool people into thinking they are human beings.
00:40:16 Speaker 1: [inaudible 00:40:20]
00:40:17 Speaker 2: It doesn't have to be me, it can be anybody.
00:40:22 Speaker 1: If you copy let's say your intentions, your information somehow, what would it be? Would it be you?
00:40:32 What do you think?
00:40:34 Speaker 2: I don't know, I'm curious to know. If that happens would that be me?
00:40:41 Would that be just a copy of me? I can say that today we have people who can build a robot that physically looks like
00:40:47 me, exactly a copy of me, but intelligence-wise, memory and all that, it's not close.
00:40:55 It's still very, very far from being a complete copy of a real human being. If we had an almost perfect copy,
00:41:05 would that still be just a copy? I think it's just a copy; it's not me.
00:41:10 Speaker 1: It's not a wish for immortality?
00:41:12 Speaker 2: No, it would be my avatar, I would say.
00:41:13 That would be an avatar at the physical and mental level, but still an avatar.
00:41:22 Speaker 1: Not the real you.
00:41:24 Speaker 2: Not the real me. Maybe it can do the tasks that I can do, I don't know.
00:41:29 Maybe it can continue to teach but it would be an avatar.
00:41:33 Speaker 1: Would it be something like a 3D or 4D photograph of you somehow for your children or grandchildren?
00:41:42 Speaker 2: I am actually thinking it's going to be a physical body with very human-like skin
00:41:48 and very human-like everything.
00:41:51 There are people working on building that, so I think it's entirely possible in 20 years..
00:41:58 The body is possible, but the mind, because we don't understand the mind completely so how ..
00:42:06 Component by component,
00:42:07 module by module, we're building the mind into the robot. Today we're still talking about service robots
00:42:15 and home robots that do particular tasks. We're still not building a general-purpose human-like robot like a replicant.
00:42:23 We're not doing that yet.
00:42:25 We don't have an interest in doing that; it would be more of a scientific pursuit, because we don't know what we need.
00:42:33 Why do we need a general-purpose robot that is exactly like me? What is the application of that?
00:42:39 There's no application except that ..
00:42:40 It would serve us as a scientific quest rather than an engineering application, rather than a commercial purpose;
00:42:49 it's a scientific quest.
00:42:51 As I mentioned earlier, when we study robots,
00:42:53 when we model intelligence for robots, we are also modeling human intelligence; we're studying humans at the same time,
00:43:01 and that is interesting to me.
00:43:02 Speaker 1: Some people could say she has been reading too much science fiction.
00:43:08 Speaker 2: (laughter) yes some people will say that.
00:43:09 But you know, many people who work in artificial intelligence are huge science fiction fans.
00:43:14 We are just naturally attracted to science fiction since we were young
00:43:18 and then we got into this area because of that. Many of us are, many.
00:43:22 Speaker 1: To make it into reality?
00:43:24 Speaker 2: Yeah, to make it into reality.
00:43:26 It's fair to say that a lot of our imagination is shaped by the science fiction we grew up with.
00:43:34 So you will see things that look science-fiction-like, and it's not a coincidence; it's just that we're shaped by that.