Efficiently discovering materials with desirable properties remains a significant challenge in materials science.
LLM-Fusion is a novel multimodal fusion model that integrates diverse representations for accurate material property prediction.
The model leverages large language models (LLMs) to combine different sources of information, such as SMILES, SELFIES, text descriptions, and molecular fingerprints.
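One way such fusion can work is to serialize each representation into a tagged text segment and concatenate them into a single input for the LLM backbone. The sketch below is purely illustrative: the function name, tag scheme, and example molecule are assumptions, not the model's actual implementation.

```python
def fuse_representations(smiles: str, selfies: str,
                         description: str, fingerprint: list) -> str:
    """Combine several molecular representations into one LLM input string.

    Hypothetical sketch: each modality is wrapped in a bracketed tag so the
    model can distinguish the sources of information.
    """
    fp_str = "".join(str(bit) for bit in fingerprint)
    segments = [
        f"[SMILES] {smiles}",
        f"[SELFIES] {selfies}",
        f"[DESCRIPTION] {description}",
        f"[FINGERPRINT] {fp_str}",
    ]
    return " ".join(segments)

# Example: ethanol represented four ways, fused into one prompt.
prompt = fuse_representations(
    smiles="CCO",
    selfies="[C][C][O]",
    description="Ethanol, a simple alcohol.",
    fingerprint=[1, 0, 1, 1],
)
print(prompt)
```

In practice the fused string would be tokenized and passed to the LLM, whose output feeds a property-prediction head; the tagging scheme here simply shows how heterogeneous inputs can share one textual interface.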
In evaluations, LLM-Fusion outperforms traditional methods in prediction accuracy and shows promising results across multiple property-prediction tasks.