- Language models are essentially probability distributions over token sequences.
- Auto-regressive models generate text by iteratively computing the next-token distribution and sampling from it.
- Prior research has assumed that language models make probabilistic decisions, akin to sampling from unknown underlying distributions.
- The study questions whether language models actually exhibit this kind of Bayesian decision-making.
- Its findings reveal that language models can behave near-deterministically under specific conditions.
- This challenges the assumption of stochastic decision-making and has consequences for methods that infer a language model's priors.
- In particular, running a near-deterministic system through simulated Gibbs sampling can make the chain converge to a spurious prior if this behavior goes unchecked (see the sketch after this list).
- The paper proposes an approach for distinguishing stochastic from deterministic decision patterns in Gibbs sampling.
- Experiments across several large language models analyze their decision patterns under different scenarios.
- The research offers useful insight into how large language models make decisions.
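
To make the Gibbs-sampling pitfall concrete, here is a minimal toy sketch, not the paper's actual method: a three-hypothesis responder whose conditional distribution is either sampled at temperature 1 (stochastic) or sharpened toward its argmax (near-deterministic). The setup, the `true_prior` values, and the function names are illustrative assumptions; the point is only that visit frequencies from a chain driven by a near-deterministic responder no longer reflect the underlying prior.

```python
# Illustrative sketch only: shows how a near-deterministic responder can make a
# simulated Gibbs-style chain "infer" a spurious prior. The toy 3-state world,
# the prior values, and the temperature knob are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)

# Toy world: 3 hypotheses and the prior we would like the chain to recover.
true_prior = np.array([0.2, 0.3, 0.5])

def conditional(state: int, temperature: float) -> np.ndarray:
    """Toy conditional distribution over the next state given the current one.

    temperature -> 0 pushes the responder toward its argmax, mimicking a
    language model that answers greedily rather than sampling.
    """
    logits = np.log(true_prior) + 0.1 * np.eye(3)[state]  # mild dependence on the current state
    logits = logits / max(temperature, 1e-8)
    p = np.exp(logits - logits.max())
    return p / p.sum()

def gibbs_chain(temperature: float, steps: int = 5000) -> np.ndarray:
    """Run a single-variable Gibbs-style chain and return its visit frequencies."""
    state = 0
    counts = np.zeros(3)
    for _ in range(steps):
        p = conditional(state, temperature)
        state = rng.choice(3, p=p)
        counts[state] += 1
    return counts / counts.sum()

for temp in (1.0, 0.05):
    est = gibbs_chain(temp)
    print(f"temperature={temp:>4}: estimated prior {np.round(est, 3)} vs true prior {true_prior}")

# With temperature=1.0 the visit frequencies roughly track the true prior.
# With a near-deterministic responder (temperature=0.05) the chain collapses onto
# the modal hypothesis, so the "inferred prior" is badly miscalibrated.
```

Comparing visit frequencies across sampling regimes like this is one simple way to see why the paper argues that stochastic and deterministic decision patterns must be told apart before trusting priors recovered by simulated Gibbs sampling.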