Conventional keyword-based NLP solutions like spaCy are difficult to keep up to date as organizational needs change and inquiries become more complex, which motivated the move to Large Language Models (LLMs).
Recent breakthroughs in LLMs such as Qwen, OpenAI's GPT models, and LLaMA variants have given us a new toolkit.
LLMs have an extraordinary capacity for understanding context, ambiguity, and nuance.
LLMs give us the freedom to interpret intent more dynamically and to make inferences about semantic meaning.
LLM output can be made predictable through language modeling, training, fine-tuning, and prompt engineering techniques.
By creating precise and modular prompts, higher accuracy and better adaptability in the intent detection system are achievable.
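To make the idea of precise, modular prompts concrete, here is a minimal sketch of how such a prompt might be assembled for intent detection. The intent labels, few-shot example, and `call_llm` stand-in are illustrative assumptions, not the production prompts.

```python
# Minimal sketch of a modular intent-detection prompt.
# The labels, few-shot example, and call_llm() stand-in are assumptions for illustration.
import json

INTENT_LABELS = ["billing_question", "order_status", "technical_support", "other"]

SYSTEM_PROMPT = (
    "You are an intent classifier for customer inquiries.\n"
    'Return ONLY a JSON object: {"intent": <label>, "confidence": <0-1>}.\n'
    f"Allowed labels: {', '.join(INTENT_LABELS)}."
)

FEW_SHOT = (
    "Example:\n"
    'Inquiry: "Why was I charged twice this month?"\n'
    'Answer: {"intent": "billing_question", "confidence": 0.95}\n'
)

def build_messages(inquiry: str) -> list[dict]:
    """Assemble the modular prompt pieces into a chat-style message list."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f'{FEW_SHOT}\nInquiry: "{inquiry}"\nAnswer:'},
    ]

def detect_intent(inquiry: str, call_llm) -> dict:
    """call_llm is any function that takes chat messages and returns the model's raw text."""
    raw = call_llm(build_messages(inquiry))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Fall back to a safe default instead of propagating a malformed answer.
        return {"intent": "other", "confidence": 0.0}
```

Keeping the system instructions, few-shot examples, and output schema as separate pieces makes each one easy to tweak independently when accuracy slips on a particular class of inquiries.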
The first LLM-based prototype reached 70% accuracy, a significant improvement over the previous spaCy-based attempts.
Intent detection is not handled by a single prompt; instead, an agentic workflow breaks the logic into multiple steps that can be debugged and improved incrementally.
The intent detection module is the linchpin of the pipeline: without accurate source selection, the rest of the pipeline may fetch irrelevant or incomplete data.
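The sketch below illustrates both ideas together: the workflow is split into small steps (classify, validate, select a source) so each one can be inspected and improved on its own. The step functions, prompts, and intent-to-source mapping are hypothetical and only show the structure, not the actual pipeline.

```python
# Sketch of an agentic, multi-step intent workflow.
# Step functions, prompts, and the intent-to-source map are illustrative assumptions.

INTENT_TO_SOURCE = {
    "billing_question": "billing_db",
    "order_status": "orders_api",
    "technical_support": "knowledge_base",
}

def step_classify(inquiry: str, call_llm) -> str:
    """Step 1: ask the LLM for a single intent label (full prompt omitted for brevity)."""
    return call_llm(f"Classify the intent of this inquiry with one label: {inquiry}").strip()

def step_validate(inquiry: str, intent: str, call_llm) -> bool:
    """Step 2: a cheap self-check that the chosen intent actually fits the inquiry."""
    verdict = call_llm(
        f"Does the intent '{intent}' fit the inquiry '{inquiry}'? Answer yes or no."
    )
    return verdict.strip().lower().startswith("yes")

def step_select_source(intent: str) -> str | None:
    """Step 3: deterministic lookup, so source selection stays easy to audit."""
    return INTENT_TO_SOURCE.get(intent)

def route_inquiry(inquiry: str, call_llm) -> str | None:
    intent = step_classify(inquiry, call_llm)
    if not step_validate(inquiry, intent, call_llm):
        # Better to retry or escalate than to fetch from the wrong source.
        return None
    return step_select_source(intent)
```

Because the source lookup is a plain dictionary rather than another LLM call, a wrong answer can be traced back to exactly one step: either the classification or the validation.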
Shifting from spaCy to LLM-based intent detection is a game-changer, resulting in a smoother, more accurate, and more scalable approach.