The EU AI Act’s definition of 'deepfake' has been criticised as too vague to address digital image manipulation.
The definition has also been criticised for overlooking the pervasive role of AI in everyday consumer imaging applications, and for disregarding artistic conventions of image manipulation that long predate the technology.
Imprecise laws risk producing both a 'chilling effect' (over-compliance by cautious actors) and a 'scofflaw effect' (disregard by those who consider the rules unworkable), shifting the burden of establishing practical legal definitions onto future court rulings.
AI-based image-manipulation technologies remain notably ahead of legislation’s capacity to address them.
The study argues that photographs 'have never been an objective depiction of reality', since every camera embodies subjective design and processing choices.
Reaching consensus on the subjective criterion set out in the fourth condition of Article 3(60) of the EU AI Act, that the content would falsely appear to a person to be authentic or truthful, has proven difficult.
The researchers conclude that the AI Act's current definition of deepfakes, and the obligations attached to it, are not specified precisely enough to address the challenges deepfakes pose.
At the same time, the study frequently uses 'deepfake' as if its meaning were self-evident, even though the term has been stretched considerably over the last two years and its meaning diluted.
The researchers intend to encourage interdisciplinary research on the regulation of deepfakes.
The authors argue that such legislation must be clearer and less reliant on subjective criteria if it is to operate effectively.