Traditional deep reinforcement learning (DRL) struggles in dynamic wireless network environments, where feedback is unstructured and continually evolving.
Large Language Models (LLMs) address this by organizing unstructured network feedback into meaningful latent representations that support more effective decision-making.
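To illustrate this idea, the sketch below uses a small frozen transformer encoder as a stand-in for a pretrained LLM, mapping tokenized network feedback (e.g., KPI reports) into a fixed-size latent state for a downstream DRL agent. All module names, sizes, and the pooling scheme are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Stand-in for a frozen pretrained LLM encoder: a tiny transformer that maps
# tokenized network feedback (KPI reports, alarms) to a latent representation.
# Vocabulary size, width, and depth are illustrative, not from the paper.
class FeedbackEncoder(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    @torch.no_grad()  # frozen: the encoder acts only as a feature extractor
    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))
        return h.mean(dim=1)  # pool token states into one latent state vector

encoder = FeedbackEncoder()
feedback = torch.randint(0, 1000, (4, 16))  # batch of 4 tokenized reports
latent_state = encoder(feedback)            # (4, 64) input to the DRL policy
print(latent_state.shape)
```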
A contextualization-based adaptation method is introduced that integrates learnable prompts into an LLM-augmented DRL framework for O-RAN network slicing.
The developed Prompt-Augmented Multi-Agent RL (PA-MRL) framework jointly optimizes semantic clustering and RL objectives, enabling faster, more scalable, and more adaptive resource allocation in O-RAN slicing.
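A minimal sketch of how such a prompt-augmented agent might combine the two objectives is shown below, assuming learnable prompt embeddings prepended to frozen-LLM feedback embeddings, a simple policy-gradient loss, and an entropy-sharpening clustering loss. The architecture, loss forms, and weighting are hypothetical, not the paper's specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical prompt-augmented agent: learnable prompt tokens are prepended
# to feedback embeddings produced by a frozen LLM, and a small encoder plus
# two heads are trained on a combined clustering + RL objective.
class PromptAugmentedAgent(nn.Module):
    def __init__(self, d_model=64, n_prompts=8, n_clusters=4, n_actions=5):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(n_prompts, d_model) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.cluster_head = nn.Linear(d_model, n_clusters)  # semantic clusters
        self.policy_head = nn.Linear(d_model, n_actions)    # slice actions

    def forward(self, feedback_emb):
        # Prepend learnable prompts to each sequence of feedback embeddings.
        b = feedback_emb.size(0)
        x = torch.cat([self.prompts.expand(b, -1, -1), feedback_emb], dim=1)
        z = self.encoder(x).mean(dim=1)
        return self.cluster_head(z), self.policy_head(z)

agent = PromptAugmentedAgent()
feedback_emb = torch.randn(4, 16, 64)        # e.g. frozen-LLM token embeddings
cluster_logits, action_logits = agent(feedback_emb)

# Illustrative joint objective: L = L_RL + lambda * L_cluster.
returns = torch.randn(4)                     # placeholder advantage estimates
log_probs = F.log_softmax(action_logits, -1)
actions = torch.randint(0, 5, (4,))
loss_rl = -(returns * log_probs.gather(1, actions[:, None]).squeeze(1)).mean()
p = F.softmax(cluster_logits, -1)
loss_cluster = -(p * p.log()).sum(-1).mean() # sharpen soft cluster assignments
loss = loss_rl + 0.1 * loss_cluster
loss.backward()
```

In this sketch the pretrained LLM is used only to produce feedback_emb, so gradients reach just the prompt vectors, the small encoder, and the two heads, which would keep per-slice adaptation lightweight compared with fine-tuning the full LLM.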