Large Language Models (LLMs) struggle with code-mixed language understanding despite their success in various NLP tasks.
CHAI is introduced as a framework to enhance multilingual LLMs' capabilities in handling code-mixed languages.
The framework proceeds in three steps: using LLMs to produce accurate annotations for code-mixed text, constructing preference data from those annotations, and applying reinforcement learning from AI feedback (RLAIF) to align the model.
Experimental evaluation demonstrates that CHAI-powered LLMs outperform existing models by 25.66% on code-mixed translation tasks, paving the way for more inclusive code-mixed LLMs.
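The preference-data step described above can be sketched as follows. This is a hypothetical illustration, not CHAI's actual implementation: `judge_score` is a toy lexical-overlap scorer standing in for the LLM judge, and the Hinglish example sentence is invented. The core idea is that an AI judge ranks candidate translations of a code-mixed sentence, and the best and worst candidates become a (chosen, rejected) pair for RLAIF-style training.

```python
# Hypothetical sketch of building one RLAIF preference record.
# In a real pipeline, judge_score would be an LLM annotator's rating;
# here a toy token-overlap F1 against a reference stands in for it.

def judge_score(candidate: str, reference: str) -> float:
    """Toy stand-in for an LLM judge: token-overlap F1 with a reference."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    common = len(set(cand) & set(ref))
    if not cand or not ref or common == 0:
        return 0.0
    precision, recall = common / len(cand), common / len(ref)
    return 2 * precision * recall / (precision + recall)

def build_preference_pair(source: str, candidates: list[str], reference: str) -> dict:
    """Rank candidates by judge score; best becomes 'chosen', worst 'rejected'."""
    ranked = sorted(candidates, key=lambda c: judge_score(c, reference), reverse=True)
    return {"prompt": source, "chosen": ranked[0], "rejected": ranked[-1]}

# Invented Hinglish (Hindi-English code-mixed) example.
pair = build_preference_pair(
    source="Kal movie dekhne chalein?",
    candidates=[
        "Shall we go watch a movie tomorrow?",
        "Tomorrow movie watch go?",
    ],
    reference="Shall we go watch a movie tomorrow?",
)
print(pair["chosen"])    # the fluent translation is preferred
print(pair["rejected"])  # the disfluent one is the rejected sample
```

Records of this shape (`prompt`, `chosen`, `rejected`) are the standard input format for preference-optimization methods used with AI feedback.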