Condition monitoring (CM) is essential for ensuring reliability and efficiency in the process industry.
Computerized maintenance systems can detect and classify faults, but fault severity estimation and maintenance decisions often rely on human expert analysis.
Automated CM systems currently exhibit high uncertainty and false-alarm rates, increasing expert workload and reducing efficiency.
This work proposes MindRAG, a framework that integrates large language model (LLM)-based reasoning agents into CM workflows. Its goals are to reduce false alarms, enhance fault severity estimation, improve decision support, and provide explainable interfaces.
MindRAG combines multimodal retrieval-augmented generation (RAG) with vector store structures designed for CM data.
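As an illustrative sketch only (the paper's actual store design is not specified here), a multimodal vector store for CM data could pair signal embeddings with annotation text and asset metadata, with similarity search over the embeddings. All names, dimensions, and the random-projection embedder below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
SIG_LEN, DIM = 64, 8
PROJ = rng.standard_normal((SIG_LEN, DIM))  # fixed toy projection matrix

def embed_signal(x: np.ndarray) -> np.ndarray:
    """Stand-in embedder: fixed random projection, L2-normalised."""
    v = x @ PROJ
    return v / np.linalg.norm(v)

# Build the store: one record per monitored asset, pairing the signal
# embedding with its free-text annotation and asset metadata.
store = []
for asset in ["pump_1", "fan_2", "pump_3"]:
    sig = rng.standard_normal(SIG_LEN)
    store.append({
        "asset": asset,
        "vec": embed_signal(sig),
        "annotation": f"maintenance note for {asset}",
    })

def retrieve(query_sig: np.ndarray, k: int = 2) -> list:
    """Cosine-similarity retrieval over the stored signal embeddings."""
    q = embed_signal(query_sig)
    ranked = sorted(store, key=lambda r: float(q @ r["vec"]), reverse=True)
    return ranked[:k]

hits = retrieve(rng.standard_normal(SIG_LEN))
print([h["asset"] for h in hits])
```

Retrieved records carry both the signal and its annotation text, which is what lets an LLM agent reason over numeric evidence and human notes together.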
Annotations and maintenance work orders are used as surrogate labels for training predictive models on noisy real-world datasets.
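One way such surrogate labelling could work (a hedged sketch; the field names, matching rule, and 30-day window are assumptions, not the paper's method) is to weakly label a signal segment as faulty when a work order on the same asset follows it within a fixed window:

```python
from datetime import datetime, timedelta

WINDOW = timedelta(days=30)  # assumed look-ahead window for matching

# Toy signal segments and maintenance work orders.
segments = [
    {"asset": "pump_1", "time": datetime(2023, 1, 10)},
    {"asset": "pump_1", "time": datetime(2023, 6, 1)},
    {"asset": "fan_2",  "time": datetime(2023, 1, 12)},
]
work_orders = [
    {"asset": "pump_1", "time": datetime(2023, 1, 25), "text": "replaced bearing"},
]

def surrogate_label(seg: dict) -> int:
    """1 if any work order on the same asset occurs within WINDOW after seg."""
    return int(any(
        wo["asset"] == seg["asset"]
        and timedelta(0) <= wo["time"] - seg["time"] <= WINDOW
        for wo in work_orders
    ))

labels = [surrogate_label(s) for s in segments]
print(labels)  # → [1, 0, 0]
```

Such labels are noisy by construction (a work order may not reflect the true fault onset), which is why the abstract frames them as surrogates rather than ground truth.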
Key contributions include structuring CM data into a multimodal vector store, developing tailored RAG techniques, creating practical reasoning agents, and presenting an experimental framework for evaluation.
Preliminary results suggest that MindRAG offers meaningful decision support for better alarm management and enhanced interpretability of CM systems.