How Are You Feeling Today?: Emotion AI in the Marketing World

Brought to you by the FCB Health Engagement Strategy team

What's this trend?

Emotion AI, or affective computing, refers to artificial intelligence that detects and interprets human emotional signals. This technology combines cameras and other devices with artificial-intelligence programs to capture facial expressions, body language, vocal intonation and other cues. Although Emotion AI is a newer technology, facial recognition tools are nothing new.

Police departments have had access to these tools for almost 20 years, and newer tools like Clearview AI have expanded that reach across the web. But the promise of Emotion AI is to go beyond facial recognition and identification to uncover the hidden feelings, motivations and attitudes of the people in the images, producing a rich trove of data with a lot of potential for marketers, and a lot of watchouts.

One of the recent uses of this technology comes from Zenus, an Austin, TX-based company that rolled out Emotion AI at a trade show for Hilton hotels. Discreetly placed cameras tracked each attendee’s movements and cataloged subtle contractions in individuals’ facial muscles at 5-10 frames per second. Based on the data collected, Zenus was able to deduce that a puppies-and-ice-cream event the company staged was more engaging than the event's open bar.
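
To make the mechanics concrete, here is a minimal sketch, using made-up numbers and a hypothetical 0-to-1 engagement score rather than Zenus's actual output, of how frame-level camera readings might be rolled up into the kind of event-level comparison described above.

```python
from statistics import mean

# Hypothetical frame-level "engagement" scores (0 to 1) that an emotion-AI
# camera sampling at 5-10 frames per second might emit for two staged events.
frames = {
    "puppies_and_ice_cream": [0.81, 0.74, 0.88, 0.69, 0.92],
    "open_bar": [0.52, 0.61, 0.47, 0.58, 0.55],
}

# Roll the frame-level readings up into an event-level comparison.
for event, scores in frames.items():
    print(f"{event}: mean engagement = {mean(scores):.2f}")
```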

Of course, this technology has widespread potential beyond the conference setting. Systems that read cues of feeling, character and intent are being used or tested to detect threats at border checkpoints, evaluate job candidates, monitor classrooms for boredom or disruption, and recognize signs of aggressive driving. Major automakers are building this technology into coming generations of their vehicles, and major tech companies offer cloud-based emotion AI services, often bundled with facial recognition. Startups are also rolling the technology out to help companies make hiring decisions.

But how does it work? Does it actually work?

These AI systems use various types of data to generate insights into emotion and behavior, including vocal intonation, body language and the content of spoken or written speech, which is analyzed for affect and attitude.
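
As a rough illustration of how such multi-modal signals might be combined, here is a minimal sketch; the modality weights and the -1-to-1 score scale are assumptions for illustration, not any vendor's actual model.

```python
# Minimal sketch: fuse per-modality affect scores (each in [-1, 1], negative
# to positive) into one estimate. The weights are illustrative assumptions.
def fuse_emotion_scores(face: float, voice: float, text: float) -> float:
    weights = {"face": 0.5, "voice": 0.3, "text": 0.2}
    return (weights["face"] * face
            + weights["voice"] * voice
            + weights["text"] * text)

# Example: a smile (0.7), a flat vocal tone (0.0), mildly negative wording (-0.2)
print(fuse_emotion_scores(face=0.7, voice=0.0, text=-0.2))  # ≈ 0.31
```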

However, there are major concerns and nuances that come with this technology. These systems are trained on data and, like any trained machine, can be wrong:

  • When AI algorithms are trained on data sets with embedded racial, ethnic and gender biases, they can quickly produce prejudiced evaluations. Facial-recognition systems, most of which are also based on deep learning, have been widely criticized for bias. Research at the MIT Media Lab found that these systems were less accurate when matching the identities of nonwhite, nonmale faces, typically because the training data sets skew white and male. Identifying emotional expressions adds additional layers of complexity (a simple per-group accuracy check, sketched after this list, shows how the gap surfaces).
     
  • Additionally, scientists have found evidence of significant cultural and individual variation in facial expressions. The idea that outside appearances map to a decipherable inner emotion in everyone has also started to draw strong scientific opposition. Moment to moment, facial expressions reflect complicated internal states: a smile might cover up pain, or it might convey sympathy. This makes it almost impossible for an AI system to consistently and reliably categorize internal feelings based on facial expressions alone. For example, if someone is genuinely happy but their expression reads as timid because they are naturally shy, the algorithm could register a different feeling entirely.
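
To see how a skewed training set can surface in practice, here is a minimal per-group accuracy check on hypothetical evaluation records; the groups, labels and numbers are invented for illustration.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic_group, predicted, actual).
records = [
    ("group_a", "happy", "happy"),
    ("group_a", "neutral", "neutral"),
    ("group_a", "happy", "happy"),
    ("group_b", "angry", "happy"),
    ("group_b", "neutral", "happy"),
    ("group_b", "happy", "happy"),
]

correct, total = defaultdict(int), defaultdict(int)
for group, predicted, actual in records:
    total[group] += 1
    correct[group] += int(predicted == actual)

# Uneven accuracy across groups is one simple signal of embedded bias.
for group in total:
    print(f"{group}: accuracy = {correct[group] / total[group]:.0%}")
```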

How can healthcare marketers dive in – thoughtfully?

Market research

If used correctly, brands can use emotion AI data to develop a deeper, more nuanced understanding of their customers by recognizing and interpreting human emotions. One example is Affectiva, a Boston-based company that specializes in automotive AI and advertising research. With the customer’s consent, it uses emotion AI technology to capture macro and micro facial expressions and reactions to an advertisement. This lets a company know what people really think of its ads and gauge their purchase intent afterwards. It could change how brands think about their overall marketing strategy and accelerate the way they refine ads and marketing messages along the way, instead of waiting for the data as they do now.
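
As a hedged illustration of what such ad testing might produce (the readings and the simple -1-to-1 valence scale are assumptions, not Affectiva's actual output), a per-second summary could look something like this:

```python
# Hypothetical per-second readings for one ad: (second, mean facial valence
# across consenting viewers, on a -1 to 1 scale).
ad_readings = [(1, 0.10), (2, 0.25), (3, -0.40), (4, 0.05), (5, 0.60)]

# Summarize the ad overall and flag the moment viewers reacted most negatively.
overall = sum(valence for _, valence in ad_readings) / len(ad_readings)
worst_second, worst_valence = min(ad_readings, key=lambda r: r[1])

print(f"Overall valence: {overall:.2f}")
print(f"Weakest moment: second {worst_second} ({worst_valence:.2f})")
```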

Diagnosis Assistance

Some healthcare companies are already beginning to see the potential of Emotion AI in diagnosis. Through video calls, Opsis Emotion AI software is used by counselors to help diagnose mental health conditions such as anxiety, stress and depression. Responses and facial expressions captured on video are analyzed by the software to determine a person’s mental state, and the tool presents heat maps of that person’s positive and negative emotions. This can help healthcare professionals deal more effectively with mental health issues for patients suffering in silence, and can even provide early warning of suicidal tendencies.
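
Purely as an illustration of the kind of aggregation behind such heat maps (the labels and counts here are invented, not Opsis's actual output), a session summary might be built like this:

```python
from collections import Counter

# Hypothetical per-minute emotion labels from one counseling video call.
minute_labels = ["calm", "anxious", "anxious", "sad", "calm", "anxious"]

# Bucket labels into positive vs. negative, the split a heat map would visualize.
POSITIVE = {"calm", "content", "happy"}
summary = Counter("positive" if label in POSITIVE else "negative"
                  for label in minute_labels)
print(summary)  # e.g. Counter({'negative': 4, 'positive': 2})
```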

Building Relationships

Marketers can also implement chatbots that use this technology to identify a customer’s personality traits and what drives their emotional responses to a company’s products or services. Based on those answers, the chatbot could direct them to the most relevant place on the website or even to a live agent.
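
A minimal routing sketch might look like the following; detect_emotion() is a hypothetical stand-in for an emotion AI call, and the route names are invented for illustration.

```python
def detect_emotion(message: str) -> str:
    """Stand-in keyword classifier; a real system would call an emotion-AI service."""
    text = message.lower()
    if any(word in text for word in ("angry", "annoyed", "frustrated")):
        return "frustrated"
    if "?" in text or "tell me more" in text:
        return "curious"
    return "neutral"

def route_visitor(message: str) -> str:
    emotion = detect_emotion(message)
    if emotion == "frustrated":
        return "live_agent"        # hand off to a human quickly
    if emotion == "curious":
        return "product_explorer"  # guide to the most relevant pages
    return "faq"                   # default self-service path

print(route_visitor("I'm frustrated, this refill keeps failing"))  # live_agent
```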

If misused, AI can invade privacy

Because of the deeply personal nature of emotions, and the newness of a technology like this, there’s a potential for privacy issues if companies don’t keep their customers in the know.

A European privacy watchdog has sanctioned the controversial facial recognition firm Clearview AI, which scrapes selfies off the Internet to amass a database of some 10 billion faces to power an identity-matching service it sells to law enforcement. The press release stated: “The findings revealed that the personal data held by the company, including biometric and geolocation data, are processed illegally, without an adequate legal basis, which certainly cannot be the legitimate interest of the American company.”

The bottom line – keep things transparent

If a company or brand does move forward with emotion AI, building trust with consumers is crucial, as is being transparent about how the technology is used and working with privacy and security teams to understand the implications.