
YouTube forces creators to disclose AI usage, but not always

Summary

YouTube creators are now required to disclose when they use generative AI to alter content, giving viewers clarity about what they are watching.
Flagged content includes altered imagery of people or events that didn’t occur, to distinguish fact from fiction.
AI disclosure labels will be more prominent on specific topics like health, news, elections, or finance content.

As AI continues to weave its way into Google’s apps and services, the company is taking measures to ensure users understand its impact. YouTube, the Google-owned video streaming platform, has a plethora of creators who are beginning to leverage AI for content. Now, the tech giant is enacting policies to make sure that viewers can differentiate fact from fiction while using YouTube.


In an announcement posted on the YouTube Help page, Google says it’s rolling out a new tool that will require YouTube creators to specify when they’ve used generative AI to make content. The information is then displayed in the expanded description for the content, where viewers can learn about how what they are seeing has been altered.

Content that must be flagged includes anything that makes a person appear to say or do something they never did, as well as altered imagery of a real event or place. A realistic AI-generated scene that appears to depict a real event but never occurred must also be disclosed. The label on such synthetic content might state that sound or visuals were “significantly edited or digitally generated.”

Not all content featuring AI requires a label

[Screenshot: YouTube's badge and expanded description indicating that content was created using AI and may be misleading]

While it may be easy enough to skim past such notifications in an expanded description, Google noted that it will make these notices more prominent on content about specific topics. For example, YouTube videos that address health, news, elections, or financial matters will have a more obvious AI disclosure label. However, some content will not need to be labeled for its inclusion of AI — the use of lighting filters or special background effects doesn’t need to be disclosed. Creators also don’t need to flag content if AI is used for production matters, such as script generation. Google says that if AI changes to the content are “inconsequential,” reporting these tweaks is not essential.

As Google continues to divert more attention to its AI initiatives, it is addressing the impact of the technology on more than just YouTube. In 2023, for instance, Android app developers were asked to start flagging potentially offensive AI-generated content. It seems that the company believes a collective effort is the most effective way to moderate the new AI-generated landscape.
