Researchers propose an interpretable model for few-shot learning based on human-friendly attributes.
The model uses an online attribute-selection mechanism to filter out attributes that are irrelevant to each episode.
An automated mechanism is introduced to detect episodes with insufficient available attributes and augment them with learned unknown attributes.
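The two mechanisms above can be sketched as follows. This is a minimal illustration, not the paper's actual method: the relevance score (a Fisher-style between/within-class variance ratio), the threshold, `min_attrs`, and the `unknown_bank` of learned unknown-attribute embeddings are all hypothetical stand-ins for whatever the authors learn end-to-end.

```python
import numpy as np

def attribute_relevance(support, labels):
    """Per-attribute score: between-class variance over within-class
    variance on the episode's support set (illustrative proxy only)."""
    classes = np.unique(labels)
    overall_mean = support.mean(axis=0)
    between = np.zeros(support.shape[1])
    within = np.zeros(support.shape[1])
    for c in classes:
        grp = support[labels == c]
        between += len(grp) * (grp.mean(axis=0) - overall_mean) ** 2
        within += ((grp - grp.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-8)

def select_for_episode(support, labels, threshold=1.0, min_attrs=3,
                       unknown_bank=None):
    """Keep attributes scoring above the threshold; if too few survive,
    flag the episode for augmentation with learned unknown attributes."""
    scores = attribute_relevance(support, labels)
    keep = np.where(scores >= threshold)[0]
    # Too few human-friendly attributes: fall back to the (hypothetical)
    # bank of learned unknown attributes to keep the episode separable.
    augmented = len(keep) < min_attrs and unknown_bank is not None
    return keep, augmented

# Toy 2-way episode: attribute 0 separates the classes, attribute 1 is noise.
support = np.array([[1.0, 0.1], [1.1, 0.9], [0.0, 0.2], [0.1, 0.8]])
labels = np.array([0, 0, 1, 1])
bank = np.zeros((4, 2))  # placeholder learned unknown-attribute embeddings
kept, needs_unknown = select_for_episode(support, labels, unknown_bank=bank)
print(kept, needs_unknown)  # only attribute 0 survives, so augmentation kicks in
```

In this toy episode only one attribute is kept, which falls below `min_attrs`, so the episode would be augmented with unknown attributes.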
The proposed method achieves accuracy comparable to that of black-box few-shot learning models while outperforming other interpretable methods in how well its decisions align with human understanding.