- Code generation with Large Language Models (LLMs) is an active area of research.
- The quality of code generated by LLMs depends on the prompts they are given.
- A user's background and familiarity with software development can affect the quality of the generated code.
- Quantifying an LLM's sensitivity to input variations is therefore crucial.
- A synthetic evaluation pipeline for code generation with LLMs is proposed.
- A persona-based evaluation approach is suggested to highlight qualitative differences in LLM responses across user backgrounds (see the sketch after this list).
- The proposed methods are applicable across various programming tasks and LLMs.
- Experimental evidence supports the utility of the proposed methods.
- The code used in the experiments has been made available to the community.
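To make the persona-based idea concrete, the following is a minimal sketch of how the same coding task could be conditioned on different user backgrounds and sent to a model so that the responses can be compared. The persona texts, the task string, and the `generate_code` callable are hypothetical stand-ins, not the paper's actual pipeline or prompts.

```python
from typing import Callable, Dict

# Hypothetical personas describing different user backgrounds (illustrative only).
PERSONAS: Dict[str, str] = {
    "novice": "I am new to programming and have never written Python before.",
    "data_scientist": "I am a data scientist who writes Python daily for analysis work.",
    "embedded_dev": "I am an embedded C developer with little Python experience.",
}


def build_prompts(task: str) -> Dict[str, str]:
    """Prefix the same coding task with each persona description."""
    return {
        name: f"{background}\n\nTask: {task}"
        for name, background in PERSONAS.items()
    }


def collect_responses(task: str, generate_code: Callable[[str], str]) -> Dict[str, str]:
    """Query a model once per persona via a caller-supplied `generate_code`
    function, so the per-persona outputs can later be compared qualitatively."""
    return {
        name: generate_code(prompt)
        for name, prompt in build_prompts(task).items()
    }


if __name__ == "__main__":
    # Stub model used for demonstration; a real study would call an LLM API here.
    def fake_model(prompt: str) -> str:
        return f"# code generated for a prompt of length {len(prompt)}"

    results = collect_responses(
        "Write a function that reverses a singly linked list.", fake_model
    )
    for persona, code in results.items():
        print(f"--- {persona} ---\n{code}\n")
```

Keeping the task fixed while varying only the persona prefix isolates the effect of user background on the generated code, which is the kind of input-sensitivity comparison the highlights describe.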