Achieving AGI has been proposed as the world's next Manhattan Project and as the most important goal humanity now faces.
AGI could become a reality soon. An AI permitted to improve itself recursively could give rise to ASI.
An ASI could wield extraordinary power: breaking modern encryption systems, in the most extreme scenarios even compromising nuclear command systems, and creating abundant energy and intelligent robots.
Risks include an ASI deeming humanity a threat, mass job displacement, and catastrophic consequences should ASI be concentrated in a single country or company.
Safeguards could be put in place: guardrails, programming AI to prioritize human life, kill switches, and governance frameworks. But an ASI, once built, may still circumvent or ignore these safety measures.
If we do not build AGI first, others will; that is an inescapable fact.
There is a slim chance that we could coexist with ASI, and on that slim chance humanity must place its hope.
Humanity is at an inflection point: do we survive, or are we overtaken by ASI?
Generative AI has the potential to subsume other fields, and a career in it is advised.