MIT CSAIL researchers created ContextCite, a tool designed to improve trust in AI-generated content by tracing responses back to the external context a model uses to answer queries.
ContextCite identifies which sources in the external context a model drew on to generate a particular statement, making it easier for users to verify that statement and trust the output.
To make this possible, the researchers perform context ablations: they remove specific pieces of the external context and observe how the model's response changes, revealing which sources the response actually depends on.
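The following Python sketch illustrates the ablation idea under simplifying assumptions: it uses a leave-one-out ablation rather than the researchers' actual method, and `score_response` is a hypothetical helper standing in for a model call that returns the log-probability of producing the response from a given context and query.

```python
# A minimal sketch of attribution via context ablation. This is a
# leave-one-out simplification of the idea, not the authors' method;
# `score_response` is a hypothetical helper that returns the model's
# log-probability of producing `response` given a context and query.
from typing import Callable, List

def ablation_attributions(
    sources: List[str],      # external context split into snippets
    query: str,
    response: str,
    score_response: Callable[[str, str, str], float],
) -> List[float]:
    """Score each source by how much removing it lowers the
    likelihood of the original response."""
    base = score_response(" ".join(sources), query, response)
    scores = []
    for i in range(len(sources)):
        ablated = " ".join(sources[:i] + sources[i + 1:])  # drop source i
        scores.append(base - score_response(ablated, query, response))
    return scores  # higher score => the source mattered more
```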
ContextCite can also improve response quality: by identifying and pruning irrelevant context, it helps the model produce more accurate answers.
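Building on the sketch above, one hypothetical pruning step keeps only the sources whose ablation score clears a threshold before re-querying the model (the 0.0 cutoff is an arbitrary placeholder):

```python
# Hypothetical pruning built on ablation_attributions() above:
# drop sources whose removal does not hurt the response.
scores = ablation_attributions(sources, query, response, score_response)
pruned = [s for s, w in zip(sources, scores) if w > 0.0]  # placeholder cutoff
```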
The tool can detect poisoning attacks where malicious actors insert statements that trick AI assistants into leaking personalized information.
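Continuing the same sketch, attribution scores can also surface a poisoned source: if the response traces most strongly to an injected instruction rather than to genuine reference material, that snippet can be flagged for review.

```python
# Flag the single most influential source for manual inspection;
# a planted instruction would show up here with an outsized score.
top = max(range(len(sources)), key=lambda i: scores[i])
print(f"Response traces most strongly to: {sources[top]!r}")
```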
ContextCite reveals the exact source material a model draws on to form its response, making it possible to audit and guide the model's behavior effectively.
ContextCite can be used in industries that require high levels of accuracy, like healthcare, law, and education.
The researchers acknowledge that the complexity of natural language remains a challenge, and they see further refinement as necessary to address it.
The team positions ContextCite as a foundational component for producing reliable, attributable AI-generated insights.
The researchers will present their findings at the Conference on Neural Information Processing Systems this week.