The AI Act, formally adopted by the EU in March 2024, requires providers of AI systems to mark their output as AI-generated content. This labelling requirement is intended to allow users to detect when they are interacting with content generated by AI systems, addressing concerns like deepfakes and misinformation. Unfortunately, implementing watermarking, one of the AI Act's suggested methods for meeting this requirement, is not feasible or effective for some types of media. As the EU's AI Office begins to enforce the AI Act's requirements, it should closely evaluate the practicalities of AI watermarking to avoid subjecting AI providers to unreasonable and unworkable obligations.
AI watermarking is the process of embedding a distinct and unique signal, known as a watermark, into the output of an AI model, such as text, audio, or images. This signal identifies the content as AI-generated. Often these watermarks are inconspicuous. For example, an AI watermark may be created by making imperceptible changes to an image that are invisible to the naked eye. Other times, the watermark may be noticeable, such as a visual symbol overlaid on an image. Ideally, watermarks should be tamper-resistant so that even if someone modifies the output, the watermark remains.
There are two primary methods for watermarking AI-generated content. In the first, developers train their AI models to embed watermarks in their output as part of the generation process. In the second, developers apply a watermark after an AI model generates output. In either case, specialised algorithms can then detect whether a particular piece of content contains a watermark.
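To make the second method concrete, here is a minimal sketch of post-hoc image watermarking using least-significant-bit (LSB) embedding. The eight-pixel "image" and the payload bits are illustrative assumptions, not any provider's actual scheme; real systems use far more sophisticated signals.

```python
# Minimal sketch of post-hoc watermarking: hide a bit pattern in the
# least-significant bit of each pixel value, then detect it later.
# The toy image and payload below are purely illustrative.

def embed_watermark(pixels, payload_bits):
    """Overwrite the LSB of each leading pixel with a payload bit."""
    out = list(pixels)
    for i, bit in enumerate(payload_bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the bit
    return out

def detect_watermark(pixels, payload_bits):
    """Report whether the expected bit pattern is present in the LSBs."""
    return all((pixels[i] & 1) == bit for i, bit in enumerate(payload_bits))

# Toy 8-pixel grayscale "image" and an 8-bit payload.
image = [200, 201, 198, 190, 185, 180, 175, 170]
payload = [1, 0, 1, 1, 0, 0, 1, 0]

marked = embed_watermark(image, payload)
assert detect_watermark(marked, payload)
# The change is imperceptible: each pixel shifts by at most 1.
assert all(abs(a - b) <= 1 for a, b in zip(image, marked))
```

The same idea motivates the first method as well, except there the model is trained to bias its own outputs toward a detectable statistical pattern rather than having one stamped on afterwards.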
Article 50(2) of the AI Act mandates that providers of general-purpose AI systems ensure that their output is "marked in a machine-readable format and detectable as artificially generated or manipulated." In addition, they must ensure "their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible." However, achieving all of these objectives simultaneously with AI watermarking is difficult because enhancing one property often compromises another. For instance, increasing a watermark's robustness usually requires making more prominent changes to the output, which can degrade content quality. Interoperability and reliability can also be in conflict. The lack of standardisation in AI watermarking technologies means that a watermark created by one system may not be readable by another, and developers are still experimenting with different watermarking techniques to find one that is reliable.
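The tension between imperceptibility and robustness can be demonstrated with a toy example. The sketch below assumes the same hypothetical LSB-style scheme described above; the point is only that a subtle watermark fails to survive even mild processing, which is exactly the trade-off the Act's "effective" and "robust" requirements run into.

```python
# Hypothetical demo: a watermark hidden in the lowest bit of each pixel
# does not survive mild lossy processing (here simulated by quantizing
# pixel values to multiples of 4, as a crude stand-in for compression).

def lsb_bits(pixels):
    """Read back the least-significant bit of each pixel."""
    return [p & 1 for p in pixels]

payload = [1, 0, 1, 1, 0, 0, 1, 0]
original = [200, 201, 198, 190, 185, 180, 175, 170]

# Embed the payload in the LSBs (an imperceptible change).
marked = [(p & ~1) | b for p, b in zip(original, payload)]

# Simulate mild lossy re-encoding of the marked image.
processed = [(p // 4) * 4 for p in marked]

print(lsb_bits(marked) == payload)     # True: watermark detected
print(lsb_bits(processed) == payload)  # False: watermark destroyed
```

Making the mark robust enough to survive such processing means perturbing the content more heavily, which is why robustness and quality pull in opposite directions.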
Some policymakers have touted AI watermarking as a universal solution for labelling AI-generated content across various media types. For example, EU Commissioner for Internal Market Thierry Breton said in a speech, "…the European Parliament, the Council, and the Commission have a common understanding…on the need for transparency for generative artificial intelligence. To be clear, this involves identifying what is created by generative intelligence (images, videos, texts) by adding digital watermarking." But these policymakers overestimate the capabilities of AI watermarking technologies. For example, one study has shown that it is easy to tamper with or remove watermarks in images, while reliably watermarking text may not even be possible. As even the European Parliamentary Research Service has found, "state-of-the-art AI watermarking techniques display strong technical limitations and drawbacks."
Unfortunately, in the rush to pass the AI Act, EU policymakers did not carefully consider the technical complexities and limitations of AI watermarking. As one unnamed European Commission official told a reporter, the watermarking obligations were passed on the expectation that "over time the technology will mature." But the reality is that nobody knows for sure. Watermarking may improve in the future, or it may prove to be a technological dead end. Either way, the AI Office must still decide how it will enforce this law now, with today's technology.
To avoid further missteps, the AI Office should not let policy outpace the technology, and should instead move forward with the AI Act's watermarking obligations for specific types of media only if the technology is provably secure and robust. Until then, mandating ineffective watermarks risks confusing consumers and detracting from other efforts to address misinformation and content provenance.