Researchers developed a new method to bypass AI safety filters using distributed prompt processing. Their approach splits malicious prompts into pieces that each appear harmless on their own. The system achieved a 73.2% success rate in generating dangerous code across 500 test prompts, and the distributed architecture improved success rates by 12% compared to non-distributed approaches.