Public opinion on being polite to AI fluctuates, and users increasingly say 'please' and 'thank you' to chatbots, partly in the belief that it yields better results. Some research supports this: polite prompts can positively affect language models, producing clearer responses and fewer refusals.

A recent study, however, challenges the belief that politeness delays the degradation of a model's output, finding its impact to be minimal. Polite terms are typically unrelated to the prompt's main topic, the authors argue, and therefore have little influence on model behavior. The tipping point at which output degrades is determined by the alignment of the prompt's meaningful tokens, not by polite language.

The study presents a mathematical framework predicting when a model's output transitions to misleading content, and politeness does not shift that prediction. Within this framework, politeness terms behave like statistical noise: the model's directional pull toward good or bad outputs depends on how the substantive tokens align, not on how courteous the user's prompt is.

While further research is needed on more complex models, the paper's theory challenges the idea that politeness significantly influences AI responses, and underscores that it is the substantive tokens of a prompt that matter.
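The intuition behind the "statistical noise" claim can be sketched with a toy model. The following is not the study's actual framework, only a hypothetical illustration: tokens are represented as unit vectors in a high-dimensional embedding space, the model's "directional pull" is the summed alignment of those vectors with an assumed "good output" direction, and politeness tokens are drawn at random, so in high dimensions they are nearly orthogonal to the topic and contribute almost nothing to the pull.

```python
import math
import random

random.seed(0)

def rand_unit(dim):
    """Random unit vector in `dim` dimensions (isotropic Gaussian, normalized)."""
    v = [random.gauss(0, 1) for _ in range(dim)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

dim = 512
# Hypothetical direction in embedding space associated with helpful output.
good_direction = rand_unit(dim)

# "Meaningful" content tokens: deliberately biased toward the good direction.
content_tokens = []
for _ in range(20):
    v = rand_unit(dim)
    v = [0.5 * g + 0.5 * x for g, x in zip(good_direction, v)]
    n = math.sqrt(sum(x * x for x in v))
    content_tokens.append([x / n for x in v])

# Politeness tokens ("please", "thank you"): random vectors, hence nearly
# orthogonal to the topic direction in high dimensions -- statistical noise.
polite_tokens = [rand_unit(dim) for _ in range(3)]

pull_without = sum(dot(t, good_direction) for t in content_tokens)
pull_with = pull_without + sum(dot(t, good_direction) for t in polite_tokens)

print(f"pull without politeness: {pull_without:.3f}")
print(f"pull with politeness:    {pull_with:.3f}")
# The polite tokens add only small, near-zero alignments, so the overall
# pull -- and hence any tipping point it crosses -- is essentially unchanged.
```

Under these assumptions, adding polite tokens shifts the aggregate alignment by a tiny amount relative to the contribution of the content tokens, which is the sense in which politeness would not move the predicted tipping point.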