As businesses experiment with AI implementations across the board, one surprising trend is that more and more companies are turning to AI to help their many new robots better understand human emotions.
It's a field called "emotion AI," and a new report from PitchBook's Enterprise SaaS Emerging Tech Research predicts the technology is set to boom. The reasoning goes something like this: if companies deploy AI assistants to executives and employees, and use AI chatbots as front-line salespeople and customer service reps, how can an AI perform well if it can't tell the difference between an angry "What do you mean by that?" and a confused "What do you mean by that?"
Emotion AI claims to be the more sophisticated sibling of sentiment analysis, the earlier technology that attempts to distill human emotion from text-based interactions, particularly on social media. Emotion AI is what you might call multimodal: it employs sensors for visual, audio, and other inputs, combined with machine learning and psychology, to attempt to detect human emotion during an interaction. Major cloud AI providers offer services that give developers access to emotion AI capabilities, such as Microsoft Azure's Emotion API and Amazon Web Services' Rekognition service. (The latter has had its share of controversy over the years.)
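One common way to combine those modalities is "late fusion": each input stream (text, audio, video) is scored by its own classifier, and the per-modality emotion probabilities are then merged into a single estimate. The sketch below is purely illustrative; the scores, weights, and emotion labels are hypothetical placeholders, not output from any real classifier or cloud service.

```python
# Minimal late-fusion sketch for multimodal emotion AI.
# All scores and weights below are illustrative placeholders.

EMOTIONS = ("angry", "confused", "happy", "neutral")

def fuse_emotion_scores(modality_scores, weights):
    """Combine per-modality emotion probabilities via a weighted average.

    modality_scores: dict of modality name -> {emotion: probability}
    weights: dict of modality name -> relative weight
    """
    total_weight = sum(weights[m] for m in modality_scores)
    fused = {}
    for emotion in EMOTIONS:
        fused[emotion] = sum(
            weights[m] * scores.get(emotion, 0.0)
            for m, scores in modality_scores.items()
        ) / total_weight
    return fused

# Hypothetical classifier outputs for the same utterance,
# "What do you mean by that?", across three modalities:
scores = {
    "text":  {"angry": 0.30, "confused": 0.40, "happy": 0.05, "neutral": 0.25},
    "audio": {"angry": 0.60, "confused": 0.20, "happy": 0.05, "neutral": 0.15},
    "video": {"angry": 0.50, "confused": 0.25, "happy": 0.05, "neutral": 0.20},
}
weights = {"text": 1.0, "audio": 1.0, "video": 1.0}

fused = fuse_emotion_scores(scores, weights)
top = max(fused, key=fused.get)
print(top, round(fused[top], 3))  # -> angry 0.467
```

Note how the text alone leans "confused" while tone of voice and facial input tip the fused estimate to "angry"; that gap between modalities is exactly what emotion AI vendors claim plain sentiment analysis misses.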
According to PitchBook, emotion AI, even if offered as a cloud service, is not new, but the sudden rise of robots in the workforce presents more opportunities than ever in the business world. "With the rise of artificial intelligence assistants and fully automated human-machine interactions, emotional AI is expected to enable more human-like interpretations and responses," PitchBook principal analyst Derek Hernandez said in a report on emerging technologies.
"Cameras and microphones are integral parts of the hardware side of emotion AI. These can be on a laptop, phone, or individually located in a physical space. Additionally, wearable hardware will likely provide another avenue to employ emotion AI beyond these devices," Hernandez said. (So if a customer service chatbot asks for camera access, that could be why.)
To that end, a growing number of startups are launching with this goal in mind. They include Uniphore ($610 million raised in total, including $400 million in 2022 led by NEA), as well as MorphCast, Voicesense, Superceed, Siena AI, audEERING, and Opsis, each of which has raised more modest sums from various venture capitalists, according to PitchBook estimates. Of course, emotion AI is a quintessentially Silicon Valley approach: using technology to solve a problem created by using technology with humans.
But even if most AI bots eventually acquire some form of automated empathy, that doesn't mean the solution will actually work. In fact, the last time emotion AI was of hot interest in Silicon Valley, around 2019, when much of the AI/ML world was still focused on computer vision rather than generative language and art, researchers threw a wrench in the idea. That year, a team of researchers published a meta-review of studies and concluded that human emotion cannot actually be determined by facial movements. In other words, the premise that we can teach an AI to detect human feelings by having it mimic how humans try to do so (reading faces, body language, tone of voice) is somewhat faulty.
On top of that, AI regulation, such as the European Union's AI Act, which prohibits computer-vision emotion-detection systems for certain uses such as education, may stop the idea before it gets very far. (Some state laws, such as Illinois' BIPA, also prohibit the collection of biometric data without permission.)
All of this offers a telling perspective on the AI-everywhere future Silicon Valley is busily building. Either these AI bots will attempt emotional understanding in order to do the jobs humans want them to do (customer service, sales, HR, and so on), or they won't be very good at tasks that actually require that ability. We may be looking at an office life filled with AI bots on the level of Siri circa 2023. Then again, compared with management demanding a bot that guesses everyone's feelings in real time during meetings, who's to say which is worse?