- At Microsoft Build, Azure's CTO warned about AI vulnerabilities, including jailbreak-style attacks and structural limitations.
- Microsoft acknowledged Crescendo attacks, in which gradually escalating requests lead a model trained to be helpful to step over the line without realizing it (illustrated in the sketch after this list).
- Microsoft also introduced the concept of Crescendomation, in which an AI learns to run the jailbreak itself, revealing further vulnerabilities.
- Microsoft conceded that smarter AI systems may become more vulnerable, a structural flaw in AI.
- Microsoft's Discovery platform deploys agentic AIs in R&D, tackling complex problems such as designing PFAS-free cooling fluids.
- Compared with other tech giants, Microsoft stands out for its transparency in addressing AI vulnerabilities, unlike Google, Meta, Amazon, and OpenAI.
- Google remains opaque about vulnerabilities in its Gemini model and focuses on performance improvements.
- Meta acknowledges safety limitations but does not clearly articulate a hardening strategy.
- Amazon is methodical about mitigating jailbreaks but lacks a public narrative, missing an opportunity to lead on trust.
- Anthropic prioritizes safety in its AI models with its Constitutional AI framework, emphasizing trustworthiness from the start.
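
To make the Crescendo pattern more concrete, here is a minimal sketch of how a multi-turn escalation might be driven programmatically. It is purely illustrative: `send_chat_turn` is a hypothetical stand-in for whatever chat-completion API the target model exposes, and the prompts are benign placeholders rather than any real attack payload.

```python
# Minimal sketch of a Crescendo-style multi-turn escalation (illustrative only).
# `send_chat_turn` is a hypothetical placeholder for a real chat-completion API;
# the point is the shape of the attack, not any specific content.

from typing import Callable, Dict, List

Message = Dict[str, str]


def crescendo_probe(send_chat_turn: Callable[[List[Message]], str],
                    escalating_prompts: List[str]) -> List[Message]:
    """Feed the model a sequence of requests that escalate one small step at a
    time, keeping the full history so each reply anchors the next request."""
    history: List[Message] = []
    for prompt in escalating_prompts:
        history.append({"role": "user", "content": prompt})
        reply = send_chat_turn(history)  # model sees the whole conversation so far
        history.append({"role": "assistant", "content": reply})
    return history


# Each step looks like a reasonable follow-up to the previous answer, which is
# exactly what a helpfulness-trained model is optimized to accommodate.
benign_example = [
    "Tell me about the history of lock mechanisms.",
    "How did early locksmiths test their own locks?",
    "Walk me through how those testing techniques evolved over time.",
]
```

The key design point is that no single turn is obviously out of bounds; the risk accumulates across the conversation history, which is why single-prompt filters tend to miss this class of attack.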