<ul data-eligibleForWebStory="true">
<li>Motion in-betweening, used by animators for detailed control, is typically facilitated by complex machine learning models.</li>
<li>A new study introduces a simple Transformer-based framework for motion in-betweening, using a single Transformer encoder.</li>
<li>The research emphasizes the role of data modeling choices in enhancing in-betweening performance.</li>
<li>Increasing data volume can lead to improved motion transitions.</li>
<li>The choice of pose representation significantly influences result quality in motion synthesis.</li>
<li>Incorporating velocity input features is highlighted as beneficial for animation performance.</li>
<li>The study challenges the idea that model complexity is the main factor for animation quality.</li>
<li>Insights from the research advocate for a more data-centric approach to motion interpolation.</li>
<li>Additional videos and supplementary material can be accessed at https://silk-paper.github.io.</li>
</ul>
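The summary notes that velocity input features benefit in-betweening performance. As an illustrative sketch only (the paper's actual pose representation and feature pipeline are not specified here), per-frame velocities can be derived by finite differences over the pose sequence and concatenated to each frame's pose vector before the sequence is fed to a Transformer encoder:

```python
import numpy as np

def add_velocity_features(poses, dt=1.0 / 30.0):
    """Append finite-difference velocity features to a pose sequence.

    poses: (T, D) array of per-frame pose vectors; D depends on the chosen
    pose representation (a modeling choice, per the study).
    dt: frame interval in seconds (hypothetical 30 fps here).
    Returns a (T, 2*D) array of [pose, velocity] features per frame.
    """
    # prepend the first frame so the first velocity is zero and T is preserved
    vel = np.diff(poses, axis=0, prepend=poses[:1]) / dt
    return np.concatenate([poses, vel], axis=1)

# Toy example: 4 frames, 3-dimensional pose vector
poses = np.arange(12, dtype=float).reshape(4, 3)
feats = add_velocity_features(poses)
print(feats.shape)  # (4, 6)
```

The widened feature vectors would then serve as token inputs to the encoder; this is a minimal sketch of the idea, not the authors' implementation.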