Adversarial attacks are increasingly used to test the robustness of AI-generated text detection models. A novel token-probability-based approach using embedding models is proposed to reduce the likelihood that AI-generated texts are detected. The method applies several embedding techniques, including the Tsetlin Machine (TM), to perturb the data and reconstruct the texts. The proposed method yields a significant reduction in detection scores against Fast-DetectGPT on the XSum and SQuAD datasets.
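The core idea of embedding-based perturbation can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: it uses a tiny hand-made embedding table (hypothetical values) and plain cosine similarity to swap each word for its nearest embedding-space neighbour, standing in for the TM-based embedding and reconstruction step described above.

```python
import math

# Toy word embeddings with made-up values, for illustration only;
# a real attack would use a trained embedding model (e.g. TM-based).
EMBEDDINGS = {
    "quick": [0.90, 0.10, 0.00],
    "fast":  [0.85, 0.15, 0.05],
    "brown": [0.10, 0.90, 0.20],
    "dark":  [0.15, 0.80, 0.30],
    "fox":   [0.20, 0.30, 0.90],
}

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def perturb(text, threshold=0.9):
    """Replace each word with its nearest embedding-space neighbour
    when the similarity exceeds the threshold; otherwise keep it.
    Such swaps shift the token probabilities a detector scores."""
    out = []
    for word in text.split():
        vec = EMBEDDINGS.get(word)
        if vec is None:          # out-of-vocabulary words are left untouched
            out.append(word)
            continue
        best, best_sim = word, 0.0
        for cand, cvec in EMBEDDINGS.items():
            if cand == word:
                continue
            sim = cosine(vec, cvec)
            if sim > best_sim:
                best, best_sim = cand, sim
        out.append(best if best_sim >= threshold else word)
    return " ".join(out)

print(perturb("quick brown fox"))  # → fast dark fox
```

Because the substituted words sit close to the originals in embedding space, the perturbed text stays semantically similar while its token-probability profile, which detectors like Fast-DetectGPT rely on, changes.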