
Architects of Intelligence


by Martin Ford


  MARTIN FORD: Do you see potential applications in healthcare? Given that we do have a mental health crisis, I wonder if you think the kind of technology you’re building at Affectiva might help in areas like counseling?

  RANA EL KALIOUBY: Healthcare is probably what I’m most excited about, because we know that there are facial and vocal biomarkers of depression, and we know that there are signs that could be predictive of suicidal intent in a person. Think about how often we are in front of our devices and our phones; that’s an opportunity to collect very objective data.

  Right now, you can only ask a person, on a scale from 1 to 10, how depressed they are, or how suicidal they are. It’s just not accurate. But we now have the opportunity to collect data at scale and build a baseline model of who someone is and what their baseline mental state or mental health state is. Once we have that data, if someone starts to deviate from their normal baseline, then a system can signal that to the person themselves, to their family members, or perhaps even to a healthcare professional.

  Then imagine how we could use these same metrics to analyze the efficacy of different treatments. The person could try cognitive behavioral therapy or certain drugs, and we would be able to quantify, very accurately and very objectively over time, whether those treatments were effective. I feel that there’s a real potential here to understand anxiety, stress, and depression, and be able to quantify it.
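  As a concrete illustration of the deviation-from-baseline idea she describes, here is a minimal Python sketch. It assumes a single daily composite affect score, which is a hypothetical metric and not Affectiva’s actual API, and flags days that fall far outside a person’s own history:

```python
import statistics

def deviates_from_baseline(history, today, z_threshold=2.0):
    """Flag a score that sits far outside a person's own baseline.

    history: prior daily composite affect scores (hypothetical 0-100 metric).
    today: today's score. Returns True when |z-score| > z_threshold.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return False  # no variation in the baseline; nothing to compare against
    return abs(today - mean) / stdev > z_threshold

# A stable stretch of scores, then a sharp drop worth surfacing.
history = [62, 65, 60, 63, 61, 64, 62, 63, 60, 61]
print(deviates_from_baseline(history, 38))  # True
```

  The same per-person history could, in principle, also track whether a treatment moves someone back toward their baseline over time.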

  MARTIN FORD: I want to move into a discussion about the ethics of AI. It’s easy to think of things that people might find disturbing about this kind of technology. For example, during a negotiation, if your system was secretly watching someone and giving the other side information about their responses, that would create an unfair advantage. Or it could be used for some form of wider workplace surveillance. Monitoring someone when they’re driving to make sure they’re attentive would probably be okay with most people, but they might feel very differently about the idea of your system watching an office worker sitting in front of a computer. How do you address those concerns?

  RANA EL KALIOUBY: There’s a little history lesson here about when Rosalind, our first employee, and I met around Rosalind’s kitchen table and were thinking: Affectiva is going to get tested, so what are our boundaries and what’s non-negotiable? In the end, we landed on this core value of respecting that people’s emotions are a very personal type of data. From then on, we agreed that we would only take on situations where people are explicitly consenting and opting in to share that data. And, ideally, where they’re also getting some value in return for sharing that data.

  These are things that Affectiva has been tested on. In 2011, we were running low on funds, but we had the opportunity for funding from a security agency that had a venture arm, and it was very interested in using the technology for surveillance and security. Even though most people know that when they go to an airport, they’re being watched, we just felt that this was not in line with our core value of consent and opt-in, so we declined the offer even though the money was there. At Affectiva, we’ve stayed away from applications where we feel that people aren’t necessarily opting in and the value equation is not balanced.

  When you think about the applications around the workplace, this question does become very interesting, because the same tool could be used in ways that might be very empowering—or, of course, very Big Brother-like. I do think it would be super-interesting if people wanted to opt in, anonymously, and employers were able to then get a sentiment score, or just an overall view, of whether people are stressed in the office—or whether people are engaged and happy.

  Another great example would be where a CEO is giving a presentation to people dialed in from around the world, and the machine indicates whether or not the message is resonating as the CEO intends. Are the goals exciting? Are people motivated? These are core questions that would be easy to answer if we were all co-located; but now, with everybody distributed, it’s just really hard to get a sense of these things. However, if you turn it around and use the same technology to say, “OK. I’m going to pick on a certain member of staff because they seemed really disengaged,” then that’s a total abuse of the data.

  Another example would be where we have a version of the technology that tracks how meetings go, and at the end of every meeting, it can give people feedback. It would give you feedback like, “you rambled for 30 minutes, and you were pretty hostile towards so-and-so, you should be a little bit more thoughtful or more empathetic.” You can easily imagine how this technology could be used as a coach to help staff negotiate better or be a more thoughtful team member; but at the same time, you could use it to hurt people’s careers.

  I would like to think of us as advocating for situations where people can get the data back, learn something from it, and use it to advance their social and emotional intelligence skills.

  MARTIN FORD: Let’s delve into the technology you’re using. I know that you use deep learning quite heavily. How do you feel about that as a technology? There has been some recent pushback, with some people suggesting that progress in deep learning is going to slow or even hit a wall, and that another approach will be needed. How do you feel about the use of neural networks and how they’re going to evolve in the future?

  RANA EL KALIOUBY: Back when I did my PhD, I used dynamic Bayesian networks to quantify and build these classifiers. Then a couple of years ago we moved all our science infrastructure to be deep learning-based, and we have absolutely reaped the benefits of that.

  I would say that we haven’t even maxed out yet on deep learning. With more data combined with these deep neural nets, we see increases in the accuracy and robustness of our analysis across so many different situations.

  Deep learning is awesome, but I don’t think that it’s the be-all and end-all for all of our needs. It’s still pretty much supervised, so you still need to have some labeled data to train these classifiers. I think of it as an awesome tool within this bigger bucket of machine learning, but deep learning is not going to be the only tool that we use.
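  To make “supervised” concrete: a classifier can only be trained after every example has been labeled by hand. Here is a toy sketch with scikit-learn, using hypothetical facial-action features rather than Affectiva’s actual data or models:

```python
from sklearn.linear_model import LogisticRegression

# Each row is a hypothetical feature vector (e.g., brow raise,
# lip-corner pull, jaw drop); each label was assigned by a human
# annotator -- this is the labeled data requirement she mentions.
X = [[0.9, 0.1, 0.2],
     [0.1, 0.8, 0.1],
     [0.1, 0.7, 0.2],
     [0.8, 0.2, 0.3]]
y = ["surprise", "smile", "smile", "surprise"]

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.85, 0.15, 0.25]]))  # -> ['surprise']
```

  Whether the model is a simple classifier like this or a deep neural network, the labeling bottleneck is the same; that is her point about supervision.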

  MARTIN FORD: Thinking more generally now, let’s talk about the march towards artificial general intelligence. What are the hurdles involved? Is AGI something that is feasible, realistic or even something you expect to see in your lifetime?

  RANA EL KALIOUBY: We’re many, many, many, many, many years away from an AGI, and the reason I say that is because when you look at all the examples of AI that we have today, all of them are pretty narrow. Today’s AI systems each do one thing well, but they all had to be bootstrapped in one way or another, even if they learned how to play a game from scratch.

  I think there are built-in assumptions, or some level of curation, in the dataset that has allowed each algorithm to learn whatever it learns, and I don’t think that we’ve yet figured out how to give it human-level intelligence.

  Even if you look at the best natural language processing system that we have today, and you give it something like a third-grade test, it doesn’t pass.

  MARTIN FORD: What are your thoughts about the intersection between AGI and emotion? A lot of your work is primarily focused on getting machines to understand emotion, but flipping the coin, what about having a machine that exhibits emotion? Do you think that’s an important part of what AGI would be, or do you imagine a zombie-like machine that has no emotional sense at all?

  RANA EL KALIOUBY: I would say that we are already there, right now, in terms of machines exhibiting emotions. Affectiva has developed an emotion-sensing platform, and a lot of our partners use this sensing platform to actuate machine behavior. Whether that technology is in a car or in a social robot, an emotion-sensing platform can take our human metrics as input, and that data can be used to decide how a robot is going to respond. Those responses could be the things that a robot says in reaction to our cues, just as Amazon Alexa responds today.

  Of course, if you’re asking Amazon Alexa to order something and it keeps getting it wrong, then you’re now getting annoyed. But instead of Alexa just being completely oblivious to all of that, your Alexa device could say, “OK, I’m sorry. I realize I’m getting this wrong. Let me try again.” Alexa could acknowledge our level of frustration and it could then incorporate that into its response, and into what it actually does next. A robot could move its head, it could move around, it could write, and it could exhibit actions that we would translate into, “Oh! It looks like it’s sorry.”

  I would argue that machine systems are already incorporating emotional cues in their actions, and that they can portray emotions in whatever way someone designs them to. That is quite different, of course, from the device actually having emotions, but we don’t need to go there.
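  As a rough illustration of the actuation loop she describes, here is a hedged Python sketch that maps a detected frustration score to an assistant’s next reply. The score range, thresholds, and function are illustrative assumptions, not any real Alexa or Affectiva API:

```python
def choose_response(frustration: float, request: str) -> str:
    """Map a detected frustration score (assumed 0.0-1.0) to a reply.

    The thresholds are illustrative; a real system would tune them
    against user feedback rather than hard-code them.
    """
    if frustration > 0.7:
        return "I'm sorry, I realize I'm getting this wrong. Let me try again."
    if frustration > 0.4:
        return f"Just to confirm, you want: {request}?"
    return f"OK, ordering {request}."

print(choose_response(0.8, "paper towels"))
# -> "I'm sorry, I realize I'm getting this wrong. Let me try again."
```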

  MARTIN FORD: I want to talk about the potential impact on jobs. How do you feel about that? Do you think that there is the potential for a big economic and job-market disruption from AI and robotics, or do you think that’s perhaps been overhyped, and we shouldn’t worry quite so much about it?

  RANA EL KALIOUBY: I’d like to think of this as more of a human-technology partnership. I acknowledge that some jobs are going to cease to exist, but that’s nothing new in the history of humanity. We’ve seen that shift of jobs over and over again, and so I think there’s going to be a whole new class of jobs and job opportunities. While we can envision some of those new jobs now, we can’t envision all of them.

  I don’t subscribe to the vision of a world where robots are going to take over and be in control, whilst humanity will just sit around and chill by the beach. I grew up in the Middle East during the time of the first Gulf War, so I’ve realized that there are so many problems in the world that need to be solved. I don’t think we’re anywhere close to a machine that’s just going to wake up someday and be able to solve all these problems. So, to answer your question, I’m not concerned.

  MARTIN FORD: If you think about a relatively routine job, for example a customer service job in a call center, it does sound like the technology you’re creating might enable machines to do that more human element of the work as well. When I’m asked about this, which is often, I say the jobs that are most likely to be safe are the more human-oriented jobs, the ones that involve emotional intelligence. But it sounds like you’re pushing the technology into this area as well, so it does seem that there’s a very broad range of occupations that could eventually be impacted, including some areas currently perceived as quite safe from automation.

  RANA EL KALIOUBY: I think you’re right about this, and let me give an example with nurses. At Affectiva, we are collaborating with companies that are building nurse avatars for our phones, and even installing social robots in our homes, which are designed to be companions to terminally ill patients. I don’t think this is going to take the place of real nurses, but I do think it’s going to change how nurses do their jobs.

  You can easily imagine how a human nurse could be assigned to twenty patients, and each of these patients has access to a nurse avatar or a nurse robot. The human nurse only gets brought into the loop if there is a problem that the nurse robot can’t deal with. The technology allows the nurse robot to manage so many more patients, and manage them longitudinally, in a way that’s not possible today.

  There’s a similar example with teachers. I don’t think intelligent learning systems are going to replace teachers, but they are going to augment them in places where there isn’t access to enough teachers. It’s like we’re delegating these jobs to those mini-robots that could do parts of the job on our behalf.

  I think this is even true for truck drivers. Nobody will be driving a truck in the next ten years, but someone will be sitting at home, tele-operating 100 fleets out there and making sure that they’re all on track. There may instead be a job where someone needs to intervene, every so often, and take human control of one of them.

  MARTIN FORD: What is your response to some of the fears expressed about AI or AGI, in particular by Elon Musk, who has been very vocal about existential risks?

  RANA EL KALIOUBY: There’s a documentary on the internet called Do You Trust This Computer?, which was partially funded by Elon Musk, and I was interviewed in it.

  MARTIN FORD: Yes, in fact, a couple of the other people I’ve interviewed in this book were also featured in that documentary.

  RANA EL KALIOUBY: Having grown up in the Middle East, I feel that humanity has bigger problems than AI, so I’m not concerned.

  I feel that this view, about the existential threat that robots are going to take over humanity, takes away our agency as humans. At the end of the day, we’re designing these systems, and we get to say how they are deployed; we can turn the switch off. So, I don’t subscribe to those fears. I do think that we have more imminent concerns with AI, and these have to do with the AI systems themselves and with whether we are, through them, just perpetuating bias.

  MARTIN FORD: So, you would say that bias is one of the more pressing issues that we’re currently facing?

  RANA EL KALIOUBY: Yes. Because the technology is moving so fast, while we train these algorithms, we don’t necessarily know exactly what the algorithm or the neural network is learning. I fear that we are just rebuilding all the biases that exist in society by implementing them in these algorithms.

  MARTIN FORD: Because the data is coming from people, so inevitably it incorporates their biases. You’re saying that it isn’t the algorithms that are biased, it’s the data.

  RANA EL KALIOUBY: Exactly, it’s the data. It’s how we’re applying this data. So Affectiva, as a company, is very transparent about the fact that we need to make sure that the training data is representative of all the different ethnic groups, and that it has gender balance and age balance.

  We need to be very thoughtful about how we train and validate these algorithms. This is an ongoing concern; it’s always a work in progress. There is always more that we can do to guard against these kinds of biases.
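  One way to make that validation concrete is to report accuracy separately for each demographic group, so a model that looks fine on average cannot hide a group where it lags. A small illustrative sketch, with hypothetical group names and data rather than Affectiva’s actual validation suite:

```python
from collections import defaultdict

def per_group_accuracy(examples):
    """examples: dicts with 'group', 'label', and 'prediction' keys."""
    correct, total = defaultdict(int), defaultdict(int)
    for ex in examples:
        total[ex["group"]] += 1
        correct[ex["group"]] += ex["label"] == ex["prediction"]
    return {group: correct[group] / total[group] for group in total}

test = [  # hypothetical held-out examples
    {"group": "women_18_30", "label": "smile", "prediction": "smile"},
    {"group": "women_18_30", "label": "smile", "prediction": "neutral"},
    {"group": "men_60_plus", "label": "smile", "prediction": "smile"},
    {"group": "men_60_plus", "label": "smile", "prediction": "smile"},
]
print(per_group_accuracy(test))  # {'women_18_30': 0.5, 'men_60_plus': 1.0}
```

  A gap like the one above would signal that the training set needs more representative data for the lagging group.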

  MARTIN FORD: But the positive side would be that while fixing bias in people is very hard, fixing bias in an algorithm, once you understand it, might be a lot easier. You could easily make an argument that relying on algorithms more in the future might lead to a world with much less bias or discrimination.

  RANA EL KALIOUBY: Exactly. One great example is in hiring. Affectiva has partnered with a company called HireVue, which uses our technology in the hiring process. Instead of sending a Word resume, candidates send a video interview, and by using a combination of our algorithms and natural language processing classifiers, the system ranks and sorts those candidates based on their non-verbal communication, in addition to how they answered the questions. This algorithm is gender-blind, and it’s ethnically blind. So, the first filters for these interviews do not consider gender and ethnicity.

  HireVue has published a case study with Unilever showing that not only did Unilever reduce its time to hire by 90%, but the process also resulted in a 16% increase in the diversity of its incoming hiring population. I found that to be pretty cool.

  MARTIN FORD: Do you think AI will need to be regulated? You’ve talked about how you’ve got very high ethical standards at Affectiva, but looking into the future, there’s a real chance that your competitors are going to develop similar technologies but perhaps not adhere to the same standards. They might accept the contract from an authoritarian state, or the corporation that wants to secretly spy on its employees or customers, even if you would not. Given this, do you think there’s going to be a need to regulate this type of technology?

  RANA EL KALIOUBY: I’m a big advocate of regulation. Affectiva is part of the Partnership on AI consortium, and a member of the FATE working group, which stands for Fair, Accountable, Transparent, and Equitable AI.

  Through working with these groups, our mandate is to develop guidelines that advocate for the equivalent of an FDA (Food and Drug Administration) process for AI. Alongside this work, Affectiva publishes best practices and guidelines for the industry. Since we are thought leaders, it is our responsibility to advocate for regulation and to move the ball forward, as opposed to just saying, “Oh, yeah. We’re just going to wait until legislation comes about.” I don’t think that’s the right solution.

  I’m also a part of the World Economic Forum, which has an international council on robotics and AI. Through working with this forum, I’ve become fascinated by the cultural differences in how different countries think about AI. A great example can be seen in China, which is part of this council. We know that the Chinese government doesn’t really care about ethics, which raises the question: how do you navigate that? Different nations think about AI regulation differently, which makes the question difficult to answer.

  MARTIN FORD: To end on an upbeat note, I assume you’re an optimist? That you believe these technologies are, on balance, going to be beneficial for humanity?

  RANA EL KALIOUBY: Yes, I would say that I’m an optimist, because I believe that technology is neutral. What matters is how we decide to use it, and I think there’s a potential for good. We should, as an industry, follow in the footsteps of my team, where we’ve decided to focus our mindshare on the positive applications of AI.

  RANA EL KALIOUBY is the CEO and co-founder of Affectiva, a company focused on emotion AI. She received her undergraduate and master’s degrees from the American University in Cairo, Egypt, and her PhD from the Computer Laboratory at the University of Cambridge. She worked as a research scientist at the MIT Media Lab, where she developed technology to assist autistic children. That work led directly to the launch of Affectiva.

  Rana has received a number of awards and distinctions, including selection as a Young Global Leader in 2017 by the World Economic Forum. She was also featured on Fortune magazine’s 40 Under 40 list and on TechCrunch’s list of 40 female founders who crushed it in 2016.

 
