AI models, especially deep learning models, struggle when faced with unfamiliar situations.
The lack of explainability in AI models makes it challenging to trust their decisions in high-risk environments.
To address these challenges, a team of researchers at Michigan State University has proposed Anunnaki, a modular framework.
Anunnaki consists of three key components, Enlil, Enki, and Utu, which together detect failures, train AI models for diverse environments, and adapt AI behavior accordingly.
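The detect-then-adapt pattern described above can be illustrated with a minimal sketch. Note that this is purely hypothetical: the function names, confidence-threshold heuristic, and fallback behavior below are illustrative assumptions, not the actual APIs or algorithms of the Anunnaki framework.

```python
# Hypothetical sketch of a detect-then-adapt loop in the spirit of a
# modular framework like Anunnaki. All names and logic here are
# illustrative assumptions, not the framework's real implementation.

def detect_failure(confidence: float, threshold: float = 0.7) -> bool:
    """Flag a prediction as untrustworthy when model confidence is low
    (a stand-in for a failure-detection component)."""
    return confidence < threshold

def adapt(prediction: str, confident: bool,
          fallback: str = "defer-to-human") -> str:
    """Fall back to a safe behavior when a failure is detected
    (a stand-in for an adaptation component)."""
    return prediction if confident else fallback

def run_pipeline(prediction: str, confidence: float) -> str:
    """Chain detection and adaptation for a single model output."""
    failed = detect_failure(confidence)
    return adapt(prediction, confident=not failed)
```

For example, a high-confidence prediction passes through unchanged (`run_pipeline("stop-sign", 0.95)` returns `"stop-sign"`), while a low-confidence one triggers the fallback (`run_pipeline("stop-sign", 0.30)` returns `"defer-to-human"`).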