Google’s experimental Gemini 2.0 Flash AI model has drawn attention for an unexpected capability: removing watermarks from copyrighted images, including those from major stock photo providers like Getty Images. The discovery has raised significant concerns about copyright protection and ethical AI use.
The development emerged when Google expanded access to Gemini 2.0 Flash’s image generation features through its developer tools. Social media users quickly discovered that the model could not only generate and edit images but also effectively remove watermarks while reconstructing the underlying content.
Multiple users on X (formerly Twitter) and Reddit have demonstrated the AI model’s ability to seamlessly eliminate watermarks from protected images. What makes this particularly noteworthy is that the model not only removes the watermark but also inpaints the area it covered, producing remarkably clean results.
The legal implications of this capability are significant. U.S. copyright law, notably the Digital Millennium Copyright Act’s provisions on copyright management information, generally prohibits removing watermarks without the owner’s consent. Some competing AI models, such as Anthropic’s Claude 3.7 Sonnet and OpenAI’s GPT-4o, explicitly refuse watermark-removal requests, citing ethical and legal concerns.
Google has responded to the controversy, stating that using their generative AI tools for copyright infringement violates their terms of service. A company spokesperson emphasized, “As with all experimental releases, we’re monitoring closely and listening for developer feedback.”
It’s worth noting that Gemini 2.0 Flash’s image manipulation features are currently labeled “experimental” and “not for production use,” and the model is accessible only through Google’s developer-facing tools, such as AI Studio. Despite its capabilities, the system isn’t perfect: it struggles with certain semi-transparent watermarks and with watermarks that cover large portions of an image.
The development raises broader questions about AI technology’s role in copyright protection and content security. As AI models become more sophisticated, the challenge of protecting digital intellectual property becomes increasingly complex. Content creators and stock photo companies may need to develop more robust protection methods beyond traditional watermarking.
This situation highlights the ongoing tension between advancing AI capabilities and maintaining ethical boundaries in technology development. As AI tools become more powerful and accessible, the responsibility falls on both developers and users to ensure these technologies are used within legal and ethical frameworks.
Industry experts anticipate that this incident may lead to increased scrutiny of AI model capabilities, and potentially to new regulations or technical countermeasures for protecting copyrighted content. The controversy also underscores the need for AI companies to build stronger safeguards into their experimental releases.
News Source: https://techcrunch.com/2025/03/17/people-are-using-googles-new-ai-model-to-remove-watermarks-from-images/