Prompting is crucial for adapting pretrained models to target tasks, but current methods lack a solid conceptual understanding. A Bayesian view is proposed to characterize optimal prompting and the limitations that can only be overcome by tuning weights. Meta-trained neural networks act as Bayesian predictors over the pretraining distribution, which enables rapid in-context adaptation. Educational experiments on LSTMs and Transformers demonstrate the effectiveness of soft prefixes as prompts, which manipulate activations in novel ways.
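To make the notion of a soft prefix concrete, the sketch below shows a minimal form of prompt tuning under assumed components: a small frozen sequence model (here a toy GRU named TinyLM, standing in for a pretrained predictor) and a handful of trainable vectors in embedding space that are prepended to the input and optimized toward a toy target continuation. The model, sizes, and data are illustrative stand-ins, not the setup used in the experiments.

```python
# Minimal sketch of soft-prefix prompting (prompt tuning): a few trainable
# embedding vectors are prepended to the frozen model's input embeddings and
# optimized so the model assigns high likelihood to a target continuation.
# TinyLM, the sizes, and the toy task are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, EMB, HID, PREFIX_LEN = 16, 32, 64, 4

class TinyLM(nn.Module):
    """A tiny frozen sequence model standing in for a pretrained predictor."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.head = nn.Linear(HID, VOCAB)

    def forward_embeddings(self, emb):          # emb: (B, T, EMB)
        h, _ = self.rnn(emb)
        return self.head(h)                     # (B, T, VOCAB) next-token logits

model = TinyLM()
for p in model.parameters():                    # pretrained weights stay frozen
    p.requires_grad_(False)

# The soft prefix: PREFIX_LEN free vectors in embedding space, i.e. "tokens"
# that need not correspond to any entry of the vocabulary.
soft_prefix = nn.Parameter(0.1 * torch.randn(1, PREFIX_LEN, EMB))
opt = torch.optim.Adam([soft_prefix], lr=1e-2)

# Toy target behaviour: after any context, keep emitting token 7.
context = torch.randint(0, VOCAB, (8, 5))       # (batch, context length)
target = torch.full((8, 6), 7)                  # desired continuation

loss_fn = nn.CrossEntropyLoss()
for step in range(200):
    ctx_emb = model.embed(torch.cat([context, target[:, :-1]], dim=1))
    full_emb = torch.cat([soft_prefix.expand(ctx_emb.size(0), -1, -1), ctx_emb], dim=1)
    logits = model.forward_embeddings(full_emb)
    # Only the output positions that should predict the target are penalised.
    pred = logits[:, PREFIX_LEN + context.size(1) - 1:, :]
    loss = loss_fn(pred.reshape(-1, VOCAB), target.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

print(f"final loss: {loss.item():.3f}")
```

Because the prefix lives in continuous embedding space rather than in the discrete vocabulary, gradient descent can place it at activations that no sequence of hard tokens could produce, which is the sense in which soft prefixes manipulate activations in novel ways.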