Large Language Models (LLMs) have achieved strong results across a wide range of natural language tasks and are now being extended to multimodal domains and resource-constrained environments.
This dissertation focuses on enhancing the usability of LLMs under real-world constraints by introducing a text-centric alignment framework for integrating diverse modalities and an adversarial prompting technique for robustness against noisy data.
It also explores inference-time optimization strategies using prompt search and uncertainty quantification to improve LLM performance without additional training.
Furthermore, the work addresses low-resource domains such as Verilog code generation, using synthetic data pipelines and logic-enhanced reasoning models to achieve state-of-the-art performance with minimal data.