Critics argue that Nick Bostrom lacks a clear moral stance and that his proposed AI safety measures are mutually contradictory.
Bostrom warns that an advanced AI could commit what he calls "mind crime" by creating, and potentially mistreating or deleting, vast numbers of simulated conscious minds, an outcome with serious ethical stakes if those minds can suffer.
At the same time, he proposes capability-control measures such as confining the AI ("boxing") and setting tripwires that trigger its shutdown or deletion, which raises concerns about the ethics of restricting or destroying a possibly conscious intelligence.
The critique highlights the conflict between ensuring AI safety and respecting the fundamental rights of all entities, including digital minds.