DeepMind, a subsidiary of Google, is already planning safety measures for Artificial General Intelligence (AGI).
DeepMind believes AGI could be as smart as, or smarter than, the top 1% of humans in the world.
DeepMind identifies four main categories of risk: misuse, misalignment, mistakes, and structural risks.
To counter these risks, DeepMind proposes measures such as blocking access by malicious actors, developing a detailed understanding of how AI systems work, and putting approval processes in place for AGI actions.