Detecting emotions in text is a challenging task because people interpret emotions differently depending on language, culture, and context. Sometimes the emotion the author expresses differs from how the audience perceives it because of these factors. To tackle this, we built a system that uses both small and large language models (LLMs) to analyze the emotions a text evokes in its audience. For the smaller models we used Meta Llama 3.2 (3 billion parameters), with each instance acting as one of four distinct subject-area experts: culture and language, psychology, communication, and ethics.
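As a rough illustration of how such an expert panel could be wired together (a sketch only: the query_model helper, the model identifier, and the exact prompt wording are hypothetical stand-ins, not the project's actual implementation):

```python
# Hedged sketch of the four-expert panel. query_model() is a hypothetical
# placeholder for whatever inference backend serves the Llama 3.2 3B model.
EXPERT_PERSONAS = {
    "culture_and_language": "You are an expert in culture and language.",
    "psychology": "You are an expert in psychology.",
    "communication": "You are an expert in communication.",
    "ethics": "You are an expert in ethics.",
}

def ask_experts(text: str, query_model) -> dict[str, str]:
    """Collect each expert's predicted emotion and the reasoning behind it."""
    opinions = {}
    for name, persona in EXPERT_PERSONAS.items():
        prompt = (
            f"{persona}\n\n"
            f"Text: {text}\n"
            "Which emotion does this text evoke in its audience? "
            "Answer with one emotion label followed by a short justification."
        )
        # "llama-3.2-3b-instruct" is an assumed model identifier.
        opinions[name] = query_model("llama-3.2-3b-instruct", prompt)
    return opinions
```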
The output from each smaller model, including its prediction and the reasoning behind it, is fed into a larger model, DeepSeek R1 (32 billion parameters), which makes the final decision given the insights of all the smaller experts. By combining these perspectives and letting a larger model arbitrate, we aim for a more accurate and well-rounded understanding of the emotion evoked in the audience across different cultures and languages. Our results showed that the system performed well on high-resource languages (those with extensive datasets) but struggled with low-resource languages. While some languages exceeded baseline performance, the system underperformed in most cases, highlighting how challenging multilingual emotion detection remains.
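A correspondingly hedged sketch of the aggregation step, reusing the same hypothetical query_model helper and an assumed identifier for the DeepSeek R1 32B model:

```python
def aggregate_opinions(text: str, opinions: dict[str, str], query_model) -> str:
    """Ask the larger model for a final label given the experts' opinions."""
    expert_summary = "\n".join(
        f"- {name}: {answer}" for name, answer in opinions.items()
    )
    prompt = (
        "Four domain experts analyzed the text below and gave these opinions:\n"
        f"{expert_summary}\n\n"
        f"Text: {text}\n"
        "Weighing all of the experts' predictions and reasoning, state the "
        "single most likely emotion evoked in the audience."
    )
    # "deepseek-r1-32b" is an assumed model identifier, not a confirmed name.
    return query_model("deepseek-r1-32b", prompt)
```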
2024-Present