
Leading AI companies, including OpenAI, Alphabet, and Meta Platforms, have voluntarily committed to implementing watermarking measures for AI-generated content to enhance safety, according to the Biden administration.
These companies, which also include Anthropic, Inflection, Amazon.com, and OpenAI partner Microsoft, have pledged to thoroughly test their AI systems before release and to share information about reducing risks and investing in cybersecurity.
The move marks a significant step in the Biden administration’s efforts to regulate AI technology, which has seen a surge in investment and popularity among consumers.
Generative AI, which produces human-like content such as the prose written by ChatGPT, has surged in popularity this year, prompting lawmakers worldwide to consider how to address the potential dangers the emerging technology poses to national security and the economy.
In June, U.S. Senate Majority Leader Chuck Schumer called for “comprehensive legislation” to ensure safeguards on artificial intelligence.
Congress is currently reviewing a bill that would require political ads to disclose whether AI was used in creating imagery or other content.
To address these concerns, President Joe Biden is hosting executives from the seven companies at the White House to collaborate on developing an executive order and bipartisan legislation focused on AI technology.
As part of their commitments, the seven companies will develop a system to watermark all forms of AI-generated content, including text, images, audio, and video, so that users can tell when the technology has been used.
The watermark, embedded technically into the content itself, is expected to help users spot deepfake images or audio that depict violence that never occurred, facilitate more convincing scams, or show a politician in a false and unflattering light.
However, it remains unclear how the watermark will be evident when the content is shared, and the companies have not disclosed technical details.
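Since none of the companies have published their schemes, the sketch below is purely an illustration of the general idea: embedding a hidden, machine-readable marker inside content so that software can later detect it. It hides a short text payload in the least significant bits of an image's pixel values using Python and NumPy. This toy approach is not any company's actual method, it would not survive compression or editing, and the payload string and function names are hypothetical.

```python
# Toy illustration only: a least-significant-bit (LSB) watermark in image pixels.
# Not any company's announced scheme; real provenance watermarks are far more robust.
import numpy as np

MARK = "AI-GENERATED"  # hypothetical payload identifying AI-generated content

def embed_watermark(pixels: np.ndarray, payload: str = MARK) -> np.ndarray:
    """Hide `payload` in the least significant bits of the first pixel values."""
    bits = np.unpackbits(np.frombuffer(payload.encode(), dtype=np.uint8))
    flat = pixels.flatten()
    if bits.size > flat.size:
        raise ValueError("image too small for payload")
    # Clear each target pixel's lowest bit, then write one payload bit into it.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def detect_watermark(pixels: np.ndarray, payload: str = MARK) -> bool:
    """Check whether the expected payload is present in the LSBs."""
    n_bits = len(payload.encode()) * 8
    bits = pixels.flatten()[:n_bits] & 1
    return np.packbits(bits).tobytes() == payload.encode()

if __name__ == "__main__":
    image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    marked = embed_watermark(image)
    print(detect_watermark(marked))  # True
    print(detect_watermark(image))   # almost certainly False
```

A production provenance system would need to go much further: the mark would have to remain detectable after content is resized, re-encoded, or lightly edited, and resist deliberate removal.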
Additionally, the companies have pledged to prioritize user privacy as AI continues to advance and to ensure that the technology is free of bias and is not used to discriminate against vulnerable groups. They have also committed to developing AI solutions for scientific problems such as medical research and climate change mitigation.