Researchers have tested the annotation of focalization in literature using large language models (LLMs). The study found that LLMs performed comparably to trained human annotators, achieving an average F1 score of 84.79%. The log probabilities output by GPT-family models reflected the difficulty of annotating specific literary excerpts. The research highlights the potential of LLMs for computational literary studies and offers insights into focalization in literature.
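The link between log probabilities and annotation difficulty can be illustrated with a minimal sketch. The idea is that the average per-token log probability of a model's output serves as a confidence proxy: values closer to zero indicate higher confidence, and excerpts that yield lower averages are plausibly harder to annotate. The specific values below are hypothetical and only illustrate the comparison; the study's actual pipeline and thresholds are not reproduced here.

```python
def mean_logprob(token_logprobs):
    """Average per-token log probability of a model's output.

    Log probabilities are <= 0; values closer to 0 mean the model
    assigned higher probability to its own output, i.e. was more
    confident. Lower (more negative) averages suggest a harder excerpt.
    """
    return sum(token_logprobs) / len(token_logprobs)


# Hypothetical per-token log probabilities for two annotated excerpts:
# an "easy" excerpt the model labels confidently, and a "hard" one.
easy_excerpt = [-0.05, -0.10, -0.02]
hard_excerpt = [-1.20, -2.30, -0.90]

# The easier excerpt has a mean log probability closer to zero.
print(mean_logprob(easy_excerpt) > mean_logprob(hard_excerpt))
```

In practice, GPT-family APIs can return such per-token log probabilities alongside the generated annotation, so this comparison can be computed directly from the model's response.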