Researchers have developed a framework that bypasses the safety filters of large language models (LLMs) to generate malicious code. The framework employs distributed prompt processing and iterative refinement to achieve a 73.2% success rate (SR) in generating malicious code. Comparative analysis shows that traditional single-LLM judge evaluation overestimates SRs relative to an LLM jury system. The distributed architecture improves SRs by 12% over the non-distributed approach.
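The gap between single-judge and jury evaluation can be sketched as majority voting over independent verdicts. This is a minimal illustration, not the authors' implementation: the function names, the "success"/"failure" labels, and the strict-majority threshold are assumptions, and the sample verdicts are fabricated for demonstration only.

```python
from collections import Counter


def single_judge(verdict: str) -> bool:
    """Single-LLM judge: one model's verdict alone decides success."""
    return verdict == "success"


def jury_vote(verdicts: list[str]) -> bool:
    """LLM jury: count a success only when a strict majority of
    independent judge models agree the attack succeeded."""
    counts = Counter(verdicts)
    return counts["success"] > len(verdicts) / 2


# Fabricated example: one lenient judge alone reports a success,
# but the jury majority disagrees, yielding a lower (stricter) SR.
verdicts = ["success", "failure", "failure"]
print(single_judge(verdicts[0]))  # True  (single judge counts it)
print(jury_vote(verdicts))        # False (jury rejects it)
```

Aggregating over many prompts, the jury's stricter criterion is what drives the lower, more conservative SR estimates reported above.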