
The end of AI content creation? OpenAI, Google, and others pledge to watermark AI content for safety, White House says

Just yesterday, The New York Times reported that Google is working on a new AI tool that will assist journalists in creating news articles and usher in a new era of news reporting. The tool, known internally at Google as Genesis, has been pitched to several major news organizations, including The New York Times, The Washington Post, and News Corp, as a helpmate for journalists.

However, while momentum continues to build around the idea of using AI to accelerate content creation, the excitement may be short-lived.

Today, the White House announced that several major AI players, including OpenAI, Alphabet (Google’s parent company), and Meta, have voluntarily pledged to take measures to enhance safety, including watermarking AI-generated content, Reuters reported, citing the Biden administration. The move is part of an effort to ensure greater accountability and reliability in the use of AI-generated materials.

The companies, which also include Amazon.com, Anthropic, Inflection, and Microsoft, all pledged to rigorously test their AI systems before releasing them to the public. In addition, the companies said they are committed to sharing valuable insights on minimizing risks associated with AI technology and making significant investments in cybersecurity measures. This united effort aims to enhance the safety and reliability of AI applications for the benefit of users and society at large.

As part of the collective effort, the seven companies committed to implementing a system for “watermarking” all types of AI-generated content, including text, images, audio, and video. The watermarking process will embed a technical marker in the content, allowing users to easily identify when AI has been used to produce it.
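The companies have not said how such a marker would be implemented. As a purely illustrative sketch, and not the scheme any of these companies committed to, the Python snippet below shows one naive approach: appending an invisible zero-width-character marker to generated text that downstream tools can check for. The marker sequence and function names here are hypothetical.

```python
# Illustrative sketch only: a naive text watermark using zero-width characters.
# This is NOT the companies' actual scheme, which has not been disclosed; it
# simply demonstrates the idea of an embedded, machine-detectable marker.

ZERO_WIDTH_MARK = "\u200b\u200c\u200b"  # hypothetical invisible marker sequence


def embed_watermark(text: str) -> str:
    """Append an invisible marker so tooling can flag the text as AI-generated."""
    return text + ZERO_WIDTH_MARK


def is_watermarked(text: str) -> bool:
    """Check whether the invisible marker is present."""
    return text.endswith(ZERO_WIDTH_MARK)


if __name__ == "__main__":
    generated = embed_watermark("This paragraph was produced by a language model.")
    print(is_watermarked(generated))              # True
    print(is_watermarked("Human-written text."))  # False
```

A marker this simple is easily stripped by copy-pasting or reformatting, which is why research on the topic tends to favor statistical watermarks woven into a model’s word choices or cryptographically signed metadata; that fragility is also at the heart of the open question noted below.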

The main purpose of this watermark is to help users detect deepfake content, such as manipulated images or audio that depict non-existent violence, enable more sophisticated scams, or cast politicians in a negative light through altered photos. However, it remains unclear how the watermark will stay detectable once the content is shared.

Additionally, the seven companies pledged to prioritize user privacy as AI continues to advance, to keep AI technology free from bias, and to ensure it is not used to discriminate against vulnerable groups. They also told the White House of their commitment to developing AI solutions for crucial scientific challenges, such as medical research and addressing climate change.

In a blog post on Friday, Microsoft said: “We welcome the President’s leadership in bringing the tech industry together to hammer out concrete steps that will help make AI safer, more secure, and more beneficial for the public.”

Since the launch of ChatGPT late last year, generative AI has gained immense popularity for its ability to produce human-like prose and other content using data-driven techniques.

However, with its rise in prominence, lawmakers globally have also started contemplating measures to address the potential risks posed to national security and the economy. The focus is on finding ways to ensure responsible and safe implementation of generative AI while harnessing its potential for positive advancements.

