
Exploring Advanced Emotion AI with Gabi Zijderveld

August 4, 2020
7 min read

Do technology and emotion mix, and if so, how do they work together on a practical, technical level? 

During Connected Things 2020: WFH Edition, Very partnered with MIT Enterprise Forum Cambridge to speak with industry leaders like Gabi Zijderveld, Chief Marketing Officer at Affectiva, about topics like this one. This post includes excerpts from MITEF’s conversation with Gabi, which have been edited for length and clarity. To see and hear the full session, watch the video below.

Meet Gabi Zijderveld

Gabi is the CMO at Affectiva, where she created the new technology category of Emotion AI and leads Affectiva’s product strategy to deliver on their multi-modal vision. She has over 20 years of product management and international experience at leading tech companies including Dragon Systems and IBM.

What is Emotion AI? 

As a company, Gabi shares, Affectiva is on a mission to humanize technology before it dehumanizes us. 

“When you think about it, in this day and age, we are surrounded by technology,” she says. “We have these highly advanced AI systems and hyper-connected devices. We like to say that all these technologies have huge cognitive capabilities, that they’re amazingly fast at analytics and data insights.

“But despite all this high IQ, there’s no EQ – there’s no emotional intelligence. So what if technology could understand humans the way we understand each other?”

In this day and age, where people are doing so much of their work, learning, and socializing virtually, Gabi says it’s becoming more apparent that technology is emotion-blind. 

One-to-one video chats are pretty helpful for working and socializing because the people chatting can still see facial expressions and read non-verbal cues. But what about a virtual conference where a speaker is delivering a keynote to an audience of 300 people? 

In a real-world situation, a keynote speaker can read the vibe of the room, see the faces of the people in the first few rows, and see how many people are pulling out their phones during the talk. When the speaker makes a joke or says something interesting, they get immediate feedback from the audience. Online, there’s absolutely no feedback on audience engagement because the technology isn’t there yet. 

Affectiva was founded on this desire to build technology that can understand humans, Gabi says. They accomplish this by building software that uses cameras as well as microphone audio to analyze people’s reactions, their expressions, and contextual information to interpret their emotions. The company coined the term “emotion AI” to describe their unique technology. 
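
Affectiva’s models are proprietary, but the multi-modal idea can be sketched in a few lines. The Python below is a minimal illustration, assuming hypothetical facial-expression and vocal-prosody models that each emit an emotion label with a confidence; the late-fusion rule (a weighted blend) is an illustrative stand-in, not Affectiva’s actual method.

```python
from dataclasses import dataclass

@dataclass
class ModalityReading:
    """One emotion estimate from a single sensor channel."""
    emotion: str       # e.g. "joy", "surprise", "neutral"
    confidence: float  # 0.0 to 1.0

def fuse_modalities(face: ModalityReading, voice: ModalityReading,
                    face_weight: float = 0.6) -> ModalityReading:
    """Late-fusion sketch: blend the channels' confidences.

    When the channels disagree, trust whichever one carries the
    higher weighted confidence. Weights here are arbitrary.
    """
    voice_weight = 1.0 - face_weight
    if face.emotion == voice.emotion:
        blended = face_weight * face.confidence + voice_weight * voice.confidence
        return ModalityReading(face.emotion, blended)
    if face_weight * face.confidence >= voice_weight * voice.confidence:
        return ModalityReading(face.emotion, face_weight * face.confidence)
    return ModalityReading(voice.emotion, voice_weight * voice.confidence)

# Hypothetical readings: the camera sees a smile, the microphone hears a flat tone.
print(fuse_modalities(ModalityReading("joy", 0.9), ModalityReading("neutral", 0.4)))
```

One appeal of late fusion like this is that each channel’s model stays independent, so the system can degrade gracefully when one channel is noisy or occluded.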

What is Automotive AI? 

One of the key practical applications of Affectiva’s technology is within vehicles. AI has been relevant in automotive for many years – especially when it comes to the development of autonomous vehicles, where AI is mapping the external world and teaching vehicles how to operate within it.

This version of automotive AI, however, has historically been externally focused. What additional benefits could we realize if we turned the AI inward? Gabi says that building optimal driving experiences and improving road safety also hinges on understanding what’s going on inside the vehicle.

To address this need, Affectiva offers what they call “in-cabin sensing AI,” which is all about understanding the state of the driver, the other occupants, and the cabin itself.

Using the cameras now shipping in many new vehicle models, Affectiva’s automotive AI analyzes the state of the driver to monitor for things like drowsiness and distraction. In the future, they’re hoping to track other important contextual elements as well, like:

  • Is a cell phone within the driver’s reach?
  • If a driver’s eyes are not on the road or their head is turned to the side, is it because the driver is distracted, or is it part of the normal act of driving?
  • Is there an occupied child seat in the vehicle? (to help prevent children being left behind in vehicles)  
  • Did someone leave a personal item behind in the vehicle? (relevant for ride-sharing and the future of autonomous shuttles)

Car manufacturers can turn these machine learning insights into action through methods suited to whatever is being tracked, from audio alarms to haptic alerts like a shake of the steering wheel or a tug of the seatbelt.
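
As a rough sketch of how such a detection-to-action mapping might look, the Python below pairs hypothetical detection labels with escalating alerts; the labels, enum values, and policy are illustrative, not drawn from any production system.

```python
from enum import Enum, auto

class Alert(Enum):
    # Declared in order of increasing severity.
    NONE = auto()
    CHIME = auto()               # audio alarm
    STEERING_VIBRATION = auto()  # haptic: shake the wheel
    SEATBELT_TUG = auto()        # haptic: tug the belt

# Hypothetical policy table mapping each tracked condition to an action.
ALERT_POLICY = {
    "phone_near_driver": Alert.CHIME,
    "eyes_off_road": Alert.STEERING_VIBRATION,
    "severe_drowsiness": Alert.SEATBELT_TUG,
    "child_seat_still_occupied": Alert.CHIME,
    "item_left_behind": Alert.CHIME,
}

def choose_alert(detections: set[str]) -> Alert:
    """Pick the most severe applicable alert for the current frame."""
    applicable = [ALERT_POLICY[d] for d in detections if d in ALERT_POLICY]
    return max(applicable, key=lambda a: a.value, default=Alert.NONE)

print(choose_alert({"phone_near_driver", "eyes_off_road"}))  # Alert.STEERING_VIBRATION
```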

One critical element to note here, Gabi says, is that many of the things being tracked are not a matter of simply checking a “yes” or “no” box. 

Drowsiness, for example, is not a matter of awake versus asleep; there are levels of drowsiness. If a driver is yawning, they don’t necessarily want or need their car going off with all kinds of alerts.

However, if the driver continues yawning, begins nodding, and the car is also tracking changes in steering, lane drifting, or acceleration and braking, there’s a potentially dangerous situation where an alert would be relevant. Machine learning solutions for automotive AI need to be able to detect these nuances. In the future, Gabi hopes these same solutions can incorporate personalization and track people longitudinally since people react to different stimuli in different ways depending on their current mental state. 
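
A minimal sketch of that graded, multi-signal approach is below; the signals, weights, and thresholds are hypothetical stand-ins for what a trained model would learn from data, and a real system would personalize them per driver.

```python
def drowsiness_level(yawns_per_minute: float,
                     head_nods_per_minute: float,
                     lane_drift_events: int) -> str:
    """Grade drowsiness on a scale rather than as a yes/no flag."""
    # Illustrative weights: nodding and lane drift count for more than yawning.
    score = (0.5 * yawns_per_minute
             + 1.5 * head_nods_per_minute
             + 2.0 * lane_drift_events)
    if score < 1.0:
        return "alert"
    if score < 3.0:
        return "mildly drowsy"   # log it; no intervention yet
    if score < 6.0:
        return "drowsy"          # gentle chime
    return "severely drowsy"     # haptic alert; suggest a break

# A single yawn stays below the intervention threshold...
print(drowsiness_level(1.0, 0.0, 0))  # -> "alert"
# ...but yawning plus nodding plus lane drift escalates.
print(drowsiness_level(2.0, 1.0, 1))  # -> "drowsy"
```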

Ethics and Privacy in Machine Learning

When collecting the large amount of personal user data required to make these kinds of machine learning solutions work, ethics and privacy become an important topic of conversation. 

First, Gabi explains that Affectiva’s solution is not facial recognition technology, because it does not identify or authenticate the individual. Additionally, she shares that the company has made a purposeful decision that they do not want their technology deployed where there is no opt-in or consent. Government security and intelligence agencies have expressed interest in the technology for surveillance and security purposes, but Affectiva passes on those opportunities because they do not think it’s an ethical use of their technology.

But what if the technology could do a lot of good without giving users the ability to opt in — to keep airports and retail stores secure, for example? The issue for Gabi is where to draw the line. 

“We’ve seen things go horribly wrong with some facial recognition systems, especially in terms of bias,” she says. “I think that’s a huge concern, where these systems are supposed to accurately recognize people regardless of appearance, age, gender, or ethnicity, and we’ve seen plenty of things in the news where some systems have failed miserably. 

“Those are basically lousy design decisions. In terms of machine learning, it’s a bad sampling of data. It’s not focusing on data that are representative of the use case. It’s about how the technology is being used and who your users are. These are things that, if you really focus on them and prioritize them, can be avoided, because the AI algorithms are trained with data. If suddenly your facial recognition system cannot identify black women, then you’d better go back to your data set and make sure you have data that represents people in all populations and focus on retraining your algorithms. 

“So in our company ethos, we believe that there needs to be a focus on ethical development of AI, and ethical deployment of AI.”
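
One concrete practice behind that point is auditing a model’s accuracy separately for each demographic group, before and after retraining. The sketch below is a generic illustration, not Affectiva’s tooling; the group labels and records are toy data.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute accuracy per demographic group.

    `records` is an iterable of (group, predicted, actual) tuples;
    a large gap between groups signals unrepresentative training data.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {group: hits[group] / totals[group] for group in totals}

# Toy evaluation set; a real audit would use many labeled samples per group.
records = [
    ("group_a", "joy", "joy"), ("group_a", "joy", "joy"),
    ("group_b", "joy", "neutral"), ("group_b", "joy", "joy"),
]
print(accuracy_by_group(records))  # {'group_a': 1.0, 'group_b': 0.5}
```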

How is the COVID-19 Pandemic Affecting Emotion AI?

How is technology that analyzes people’s faces affected by large populations now wearing masks to prevent the spread of COVID-19? 

Gabi says that this is actually something Affectiva was thinking about pre-pandemic, because they do automotive business in countries like Japan, where people commonly wear face masks. If you want to detect a smile, of course, tracking the mouth is very helpful, but Affectiva also analyzes the entire face, and there’s a lot of information they can access from the eye and forehead region. This approach requires massive amounts of data, which is one of Affectiva’s strengths: they’ve analyzed over nine million faces in 87 countries, including specific studies of people wearing face masks.
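
As a toy illustration of that fallback, the snippet below chooses which facial regions to analyze depending on whether a mask is detected; the region names are hypothetical, not Affectiva’s feature set.

```python
def usable_regions(mask_detected: bool) -> list[str]:
    """Pick the facial regions to feed the expression model."""
    # The upper face stays informative even when a mask covers the mouth.
    regions = ["eyes", "brows", "forehead"]
    if not mask_detected:
        regions += ["mouth", "cheeks", "jaw"]
    return regions

print(usable_regions(mask_detected=True))   # upper face only
print(usable_regions(mask_detected=False))  # full face
```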

Conclusion

To learn more about the story of how and why Affectiva was founded, Gabi encouraged listeners and readers to explore “Girl Decoded,” a book by Affectiva CEO and co-founder Rana el Kaliouby released in April 2020.  

“It is a story of self-discovery and perseverance, and people who are generally interested in technology, innovation, or leadership will find it interesting as well,” Gabi shares. 

To learn more or to order the book, click here.