The paper examines the shift in AI training from centralised model training to distributed and decentralised setups.
It distinguishes distributed from decentralised training, and highlights the technical governance challenges each raises for how compute is structured and how AI capabilities proliferate.
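The distinction can be illustrated with a toy sketch (this example is not from the paper; the topology and update rules are illustrative assumptions): in distributed training a central coordinator averages every worker's gradients, while in decentralised training each node exchanges updates only with its neighbours, so no single party ever holds all the gradients.

```python
# Hypothetical sketch contrasting the two training structures.

def centralised_average(grads):
    """Distributed data-parallel step: one coordinator averages every gradient."""
    return sum(grads) / len(grads)

def gossip_step(values, neighbours):
    """Decentralised step: each node averages only with its listed neighbours."""
    return [
        sum(values[j] for j in [i] + neighbours[i]) / (1 + len(neighbours[i]))
        for i in range(len(values))
    ]

# Four workers with different local gradient estimates.
grads = [1.0, 2.0, 3.0, 6.0]

# Centralised: every worker immediately receives the global mean (3.0).
print(centralised_average(grads))

# Decentralised ring topology (an assumed example graph): repeated gossip
# rounds converge to the same mean without any coordinator seeing all gradients.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
values = grads
for _ in range(50):
    values = gossip_step(values, ring)
print([round(v, 3) for v in values])
```

The governance-relevant point the sketch makes concrete is that decentralised averaging reaches the same result while removing the central chokepoint that compute governance typically assumes.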
Trends towards decentralised AI could undermine key assumptions of compute governance, but they also offer benefits such as privacy-preserving training and reduced concentration of power.
The authors emphasise the need for precise policy-making on compute governance, capability proliferation, and decentralised AI development.