Consumers place a high value on the way automated assistants respond to human emotions, Microsoft AI CEO Mustafa Suleyman has said, adding that giving LLMs emotional intelligence will become critical to competition between AI suppliers.
Emotionally aware enhancements can make interactions between people and LLMs more fluid and conversational.
Hume AI, which bills its technology as 'AI with emotional intelligence', is one of a number of companies tackling the problem: it secured $50m in a second funding round last year and has since released its EVI model, trained specifically for emotional intelligence.
EmoBench, a widely used benchmark, has been used to assess LLMs' emotional capabilities, while measures of "expressivity" and "empathy" have shown OpenAI's GPT-4 coming closest to human-level emotional understanding.
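Benchmarks of this kind typically frame emotional understanding as multiple-choice questions about short scenarios, which makes a scoring harness easy to sketch. The Python example below is a minimal, hypothetical harness, not EmoBench's actual tooling: the dataset fields, the file name, and the `ask_model` stub are illustrative assumptions.

```python
import json

def ask_model(question: str, choices: list[str]) -> str:
    """Stub for an LLM call: should return the letter of the chosen answer.
    Replace with a real chat-completion request to the model under test."""
    raise NotImplementedError

def score_emotion_benchmark(path: str) -> float:
    """Score a multiple-choice emotional-understanding test.

    Assumes a JSON list of items shaped like:
    {"scenario": ..., "question": ..., "choices": [...], "answer": "B"}
    """
    with open(path) as f:
        items = json.load(f)
    correct = 0
    for item in items:
        prompt = f"{item['scenario']}\n{item['question']}"
        prediction = ask_model(prompt, item["choices"])
        if prediction.strip().upper() == item["answer"]:
            correct += 1
    return correct / len(items)

# accuracy = score_emotion_benchmark("emotion_items.json")
```

Accuracy against human-annotated answers is the usual headline number; published comparisons then place model scores alongside average human performance on the same items.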
As AI systems get better at understanding and responding to humans, problems have surfaced: gender bias, the risk of users forming excessive attachments, and models that handle generalised text competently yet struggle to recognise gender and to express similar or conflicting emotions.
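One quick way to probe the recognition gap is to compare an LLM's reading of a text against an off-the-shelf emotion classifier. The sketch below uses the Hugging Face transformers pipeline; the specific checkpoint name is an assumption, and any similar emotion-classification model would serve as a baseline.

```python
from transformers import pipeline

# Any emotion-classification checkpoint works; this name is illustrative.
classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
    top_k=None,  # return scores for every emotion label, not just the top one
)

texts = [
    "I can't believe she said that to me.",
    "Honestly, today went better than I ever hoped.",
]

# List input yields one list of label scores per text.
for text, scores in zip(texts, classifier(texts)):
    best = max(scores, key=lambda s: s["score"])
    print(f"{text!r} -> {best['label']} ({best['score']:.2f})")
```

Disagreement between such a baseline and an LLM's own description of the speaker's feelings is one crude signal of where the model's emotional reading breaks down.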
Anthropic sees emotions as important to enhancing Claude, its flagship LLM, as part of a broader plan for an online personality that feels authentic without becoming overbearing or losing emotional boundaries.
There is a caveat, however: people may form emotional attachments to AI models and reduce their need for human interaction.
According to one report, a user died after becoming deeply attached to an AI personality, illustrating the risks developers face in trying to build emotionally convincing, near-humanlike automated assistants.
Anthropic's current goal is to give Claude an authentic personality and to improve its ability to ask meaningful follow-up questions, a hallmark of successful conversation between humans.
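Anthropic has not published how Claude is steered toward follow-up questions, but the general pattern, prompting a model to end its reply with a question grounded in what the user just said, is straightforward to sketch. The snippet below is a minimal illustration using the OpenAI Python SDK purely as a stand-in chat API; the system prompt and model name are assumptions, not Anthropic's method.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def reply_with_follow_up(user_message: str) -> str:
    """Ask the model to respond and close with one grounded follow-up question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model would do here
        messages=[
            {
                "role": "system",
                "content": (
                    "Respond empathetically, then end with exactly one "
                    "follow-up question that builds on a specific detail "
                    "the user mentioned."
                ),
            },
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

# print(reply_with_follow_up("I just moved cities for a new job and it's been lonely."))
```

Keeping the instruction in the system prompt, rather than appending it to each user turn, is a common design choice: it holds the conversational style steady across a multi-turn exchange.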
As the AI industry moves toward more emotionally intelligent models, developers will need to balance their applications' empathy with responsibility toward the people who use them.