Generative AI, such as large language models (LLMs), is becoming increasingly accessible to general consumers. However, cloud-hosted models raise concerns about reliance on network access, privacy, and security. To address this, researchers are optimizing LLMs to run locally on edge devices. A recent study characterizes commercially available embedded hardware (NVIDIA Jetson Orin) for LLM inference and provides a utility for batch testing.
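The study's actual utility is not shown here, but the core idea of batch-testing a local model can be sketched as a small harness that runs a list of prompts through a generation function and records per-prompt latency. All names below (`batch_benchmark`, `fake_generate`) are hypothetical illustrations, not the study's code; on a real Jetson Orin, `fake_generate` would be replaced by a call into a local inference runtime.

```python
import time
from statistics import mean

def batch_benchmark(generate, prompts):
    """Run each prompt through `generate` and record output and wall-clock latency."""
    results = []
    for prompt in prompts:
        start = time.perf_counter()
        output = generate(prompt)
        latency = time.perf_counter() - start
        results.append({"prompt": prompt, "output": output, "latency_s": latency})
    return results

# Stand-in for a locally hosted LLM call on an edge device (hypothetical).
def fake_generate(prompt):
    return prompt.upper()

report = batch_benchmark(fake_generate, ["hello", "edge inference"])
print(f"{len(report)} prompts, mean latency {mean(r['latency_s'] for r in report):.6f}s")
```

Such a harness makes it easy to compare power modes or quantization settings on the same device by rerunning the same prompt batch and comparing the latency distributions.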