Deploying language models on mobile devices faces challenges in computation and memory cost. A study analyzed the effect of various components on tiny language models with 1B parameters. Optimization strategies such as tokenizer compression, architecture tweaking, and parameter inheritance proved effective. Experimental results demonstrate that the optimized architecture and training of tiny language models yield notable performance improvements.
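One of the strategies mentioned, tokenizer compression, can be illustrated with a minimal sketch: keep only the most frequently used tokens from a large vocabulary and prune the corresponding rows of the embedding matrix, shrinking both the tokenizer and the model's parameter count. The function name `compress_tokenizer`, the `keep_ratio` parameter, and the frequency-based selection rule are illustrative assumptions, not the study's exact method.

```python
import numpy as np

def compress_tokenizer(embeddings, token_freqs, keep_ratio=0.5):
    """Keep only the most frequent tokens and their embedding rows.

    embeddings:  (vocab_size, hidden_dim) embedding matrix
    token_freqs: per-token usage counts gathered from a corpus
    Returns the pruned embedding matrix and an old->new id mapping.
    (Hypothetical sketch; real tokenizer compression must also
    re-handle merges and special tokens.)
    """
    vocab_size = embeddings.shape[0]
    keep = max(1, int(vocab_size * keep_ratio))
    # Sort token ids by descending frequency and take the top `keep`.
    kept_ids = np.argsort(token_freqs)[::-1][:keep]
    kept_ids.sort()  # preserve original id order for stable remapping
    id_map = {int(old): new for new, old in enumerate(kept_ids)}
    return embeddings[kept_ids], id_map

# Toy example: 6-token vocabulary, keep half of it.
emb = np.arange(12, dtype=np.float32).reshape(6, 2)
freqs = np.array([10, 1, 7, 0, 3, 2])
small_emb, id_map = compress_tokenizer(emb, freqs, keep_ratio=0.5)
print(small_emb.shape)  # (3, 2)
print(id_map)           # {0: 0, 2: 1, 4: 2}
```

Pruning the vocabulary this way shrinks the embedding (and tied output) matrices, which dominate the parameter count of very small models, at the cost of splitting rare words into more subword pieces.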