Vision-Language Models (VLMs) face high inference costs in both time and memory. Token sparsity and neuron sparsity offer two complementary routes to improving their efficiency. A new study explores the interplay between Core Neurons and Core Tokens in VLMs and introduces CoreMatching, a framework that jointly exploits token and neuron sparsity to achieve significant inference speedups.
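The summary does not spell out how the two sparsity patterns interact, so the following is only a minimal illustrative sketch of the general idea: score FFN neurons by activation strength to pick a "core" subset, then score tokens by how strongly they engage those core neurons. The function names, shapes, and scoring rules here are assumptions for illustration, not CoreMatching's actual algorithm.

```python
import numpy as np


def select_core_neurons(activations: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Return indices of the most strongly activated FFN neurons.

    activations: (num_tokens, num_neurons) post-activation values from one
    FFN layer. The mean-absolute-activation criterion is an assumption;
    the paper's exact scoring rule may differ.
    """
    # Score each neuron by its mean absolute activation across tokens.
    scores = np.abs(activations).mean(axis=0)
    k = max(1, int(keep_ratio * scores.size))
    return np.argsort(scores)[-k:]


def select_core_tokens(
    activations: np.ndarray, core_neurons: np.ndarray, keep_ratio: float
) -> np.ndarray:
    """Return indices of tokens that most strongly engage the core neurons."""
    # Score each token by its total activation mass on core neurons only,
    # so token selection is coupled to the neuron selection.
    scores = np.abs(activations[:, core_neurons]).sum(axis=1)
    k = max(1, int(keep_ratio * scores.size))
    return np.argsort(scores)[-k:]


# Toy example: 16 tokens, 64 FFN neurons (random stand-in activations).
rng = np.random.default_rng(0)
acts = rng.standard_normal((16, 64))
neurons = select_core_neurons(acts, keep_ratio=0.25)
tokens = select_core_tokens(acts, neurons, keep_ratio=0.5)
print(f"kept {neurons.size} core neurons and {tokens.size} core tokens")
```

In a real deployment, pruning would then restrict the FFN computation to the kept neurons and drop the discarded tokens from subsequent layers, which is where the speedup would come from.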