Existing methods for graph out-of-distribution (OOD) detection typically rely on labeled in-distribution (ID) data, which is costly to obtain.
A new approach, LLM-GOOD, combines large language models (LLMs) and graph neural networks (GNNs) to enhance data efficiency in graph OOD detection.
LLM-GOOD leverages LLMs to filter out likely OOD nodes before human labeling, reducing the annotation burden.
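The pre-filtering step can be sketched as follows. This is a minimal illustration, not the paper's implementation: `llm_is_ood` stands in for an actual LLM query, and the keyword rule in `mock_llm_is_ood` is a hypothetical stand-in for the model's zero-cost judgment.

```python
from typing import Callable, Dict, List

def llm_prefilter(
    node_texts: Dict[int, str],
    llm_is_ood: Callable[[str], bool],
) -> List[int]:
    """Return ids of nodes the LLM judges likely in-distribution (ID).

    Nodes flagged as likely OOD are filtered out, so only the
    smaller remaining set is sent for human annotation.
    """
    return [nid for nid, text in node_texts.items() if not llm_is_ood(text)]

# Hypothetical stand-in for an LLM call: treat any node whose text
# mentions a known ID topic as in-distribution.
ID_TOPICS = {"graph", "neural"}

def mock_llm_is_ood(text: str) -> bool:
    return not any(topic in text.lower() for topic in ID_TOPICS)

nodes = {
    0: "A survey of graph neural networks",
    1: "Recipes for sourdough bread",
    2: "Neural message passing on citation graphs",
}
candidates = llm_prefilter(nodes, mock_llm_is_ood)
print(candidates)  # nodes kept for human ID labeling -> [0, 2]
```

In this toy run, node 1 is filtered out as likely OOD, so human annotators would label only the two surviving candidates.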
Experiments show that LLM-GOOD reduces human annotation costs while outperforming state-of-the-art baselines in both ID classification accuracy and OOD detection performance.