Language models have attracted significant interest due to their general-purpose capabilities. However, large language models impose stringent requirements on computing systems. To address these challenges, researchers have explored using smaller models (~30-120M parameters) for specific tasks, and developed a framework that allows users to train and deploy these small models on edge devices.
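
To make the idea concrete, the sketch below shows what task-specific fine-tuning of a model in this size range can look like. It is not the framework described above: the model choice (DistilBERT, ~66M parameters), the dataset (IMDB sentiment), the hyperparameters, and the use of the Hugging Face transformers and datasets libraries are all illustrative assumptions.

```python
# Illustrative sketch only: fine-tune a small (~66M-parameter) model on one task.
# Model, dataset, and hyperparameters are assumptions, not the authors' framework.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert-base-uncased"  # ~66M parameters, within the ~30-120M range
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# A binary sentiment dataset stands in for the "specific task".
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="small-task-model",
    per_device_train_batch_size=16,
    num_train_epochs=1,
)

# Train on a small subset to keep the sketch cheap to run.
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"].shuffle(seed=42).select(range(2000)),
)
trainer.train()

# Save the fine-tuned weights; for edge deployment one would typically
# export further (e.g., to ONNX or TorchScript) and quantize.
model.save_pretrained("small-task-model/final")
tokenizer.save_pretrained("small-task-model/final")
```

A model of this size occupies roughly 66 MB after 8-bit quantization, which is what makes on-device training and inference plausible in the first place.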