Let me ask: Did you believe Facebook CEO Mark Zuckerberg really boasted about the company’s power in a 2019 video, saying, “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures”?

The video of Zuckerberg purportedly saying this caused a real stir, especially for CBS News, whose logo was used in the fake.

How about what just happened to Taylor Swift? She was the latest victim of deepfakes, and the content proliferated faster than anyone could respond.

Deepfakes are proliferating, and many people are being fooled. As manipulated information becomes increasingly realistic, it gets harder and harder to identify what is fake and what is real. Look closely and you can spot the inconsistencies, but at first blush, the videos, images and articles generated by AI can appear shockingly real.

We’re in an information boom driven by generative AI. Fasten your seatbelts because it’s about to come at you faster in 2024.

The risk to readers: Who can you trust?

Ponder the implications of content shared without safeguards and you begin to appreciate the potential for harm. Case in point: political elections spawn an abundance of misinformation in all forms and across all channels – at breakneck speed. For many of us, it’s believable. False information can be readily absorbed and shared as truth, causing unanticipated conflict and unrest. Unfortunately, that is often exactly the outcome nefarious actors intend.

Remember the fake image depicting an explosion at the Pentagon? Manipulated images can blur truth and reality enough to send financial markets into a spin: major stock market indices briefly dipped after that image went viral.

This issue is far from inconsequential to our daily lives. It also raises the potential for AI-generated content to spread bias and worsen inequities.

“AI comes with promise and peril,” said Reggie Townsend, VP and Director of Data Ethics Practice at SAS. “The need for legal, technical, social and academic frameworks to capitalize on the promise of AI while mitigating the peril is not only important but urgent in these times.”

Townsend was named to the National Artificial Intelligence Advisory Committee (NAIAC) in 2022. The NAIAC was formed in the United States to advise the president and the National AI Initiative Office on various AI issues.

The need for guardrails intensifies

A recent Forbes article predicts that AI will become a black box in 2024 – that consumers will completely lose sight of what’s behind the curtain, making it even harder to judge the veracity of content.

"Invisible AI is not the future, it’s the present," says Marinela Profi, AI strategy advisor for SAS. "AI functions are so well integrated that they become normal, unremarkable parts of a user’s interaction with the technology. “

In this evolving digital era, how will the average consumer of digital information know what’s real and what’s not? How will platforms protect integrity and garner trust? How can we all keep up with rapid change? What role do we have in controlling the spread of fabricated information and misinformation?

Currently, the European Union (EU) is proposing that organizations disclose if material is generated by AI and inform individuals in certain cases.

Meanwhile, information consumers – readers – want to see labels and disclosures on AI-generated content. Big tech is feeling pressure from both global policymakers and platform users. Take YouTube, for example: the platform has already enacted policies requiring creators to label manipulated content made with AI tools when they upload it.

Labeling now and in the future

There are different approaches to disclosures based on content type. One proposal is a watermark that works like a fingerprint: invisible to the human eye but identifiable by AI detectors. It’s a form of labeling manipulated or AI-generated content to thwart bad actors and protect human ingenuity.
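To make the fingerprint idea concrete, here is a minimal sketch of invisible watermarking via least-significant-bit (LSB) embedding, written in Python with NumPy. It is illustrative only: the function names are my own, and production watermarks are engineered to survive cropping, compression and re-encoding in ways this toy scheme would not.

```python
import numpy as np

# Toy invisible watermark: hide an ASCII message in the least significant
# bit of each pixel. The change is invisible to the eye, but a detector
# that knows the scheme can read the label back out.

def embed_watermark(pixels: np.ndarray, message: str) -> np.ndarray:
    """Hide an ASCII message in the LSBs of a uint8 image array."""
    bits = [int(b) for ch in message for b in format(ord(ch), "08b")]
    flat = pixels.flatten()  # flatten() returns a copy, so the original is untouched
    if len(bits) > flat.size:
        raise ValueError("Image too small to hold the message")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit  # overwrite only the lowest bit
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, length: int) -> str:
    """Read `length` characters back out of the LSBs."""
    flat = pixels.flatten()
    chars = []
    for i in range(length):
        byte = flat[i * 8:(i + 1) * 8] & 1  # the 8 hidden bits of one character
        chars.append(chr(int("".join(map(str, byte)), 2)))
    return "".join(chars)

# A 64x64 grayscale noise array stands in for real content.
image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
marked = embed_watermark(image, "AI-GENERATED")
print(extract_watermark(marked, len("AI-GENERATED")))  # -> AI-GENERATED
```

Because only the lowest bit of each pixel changes, the marked image looks identical to the original; the label is visible only to software that knows where to look.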

Another idea is a content credential that serves a purpose like the nutrition label on a bag of potato chips. It lists who was involved in creating the content and where it was published, forming a complete record. Anyone who interacts with the content would have greater trust in its source.
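As a rough sketch of what such a credential might carry, here is a simple record serialized as JSON. The field names are hypothetical and only loosely inspired by C2PA-style manifests; a real credential is cryptographically signed and bound to the asset so tampering can be detected.

```python
import json
from datetime import datetime, timezone

# Hypothetical content credential: the "nutrition label" for a piece of content.
credential = {
    "title": "city-skyline.png",
    "ai_generated": True,                      # the disclosure itself
    "generator": "ExampleImageModel v2",       # hypothetical tool that made the asset
    "producer": "Jane Creator",                # who was involved
    "published_by": "example-news.com",        # where it was published
    "issued_at": datetime.now(timezone.utc).isoformat(),
    "edit_history": ["generated", "cropped"],  # the complete record of actions
}

print(json.dumps(credential, indent=2))
```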

Labeling also creates shared accountability between the platform and the content producer. Platforms could then respond to noncompliance with penalties, content removal or suspension. This shared accountability would allow platforms to protect integrity and trustworthiness.

For this to work, standardization and broad adoption of those standards are paramount, though challenging to achieve. For example, private, public and academic stakeholders in the United States must agree on the approach and strategies for AI standards development.

In February 2021, Adobe, Microsoft and others launched a formal coalition for standards development: the Coalition for Content Provenance and Authenticity (C2PA). On its site, C2PA is described as mutually governed, formed to accelerate adoptable standards for digital provenance that serve creators, editors, publishers, media platforms and consumers. C2PA brings together the Content Authenticity Initiative, formed with cross-industry participation to provide media transparency.

Participation is twofold: Transparency and diligence

Labeling AI-generated content is important, is evolving quickly and is our responsibility as ethical creators and consumers of the technology. Consider these three tips for labeling and consuming AI-generated content (a small code sketch follows the list):

  • Make it abundantly clear when content is AI-generated. Help consumers of information know immediately what they are seeing.
  • Consider using standardized labels or content credentials. Consistency will allow for greater adoption and trust.
  • Stay abreast of policy developments and changes. Know what’s happening domestically and globally.
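Tying the first two tips together, here is a toy sketch of how a platform might act on a credential like the one above: if the record marks the asset as AI-generated, a clear, standardized label is surfaced before the content is rendered. Everything here is hypothetical, not an actual platform API.

```python
# Toy enforcement of tip one: surface a standardized disclosure whenever a
# (hypothetical) content credential marks the asset as AI-generated.

def render_with_label(body: str, credential: dict) -> str:
    """Prepend a visible disclosure label when the credential calls for one."""
    if credential.get("ai_generated"):
        return "[Label: AI-generated content]\n" + body
    return body

credential = {"title": "city-skyline.png", "ai_generated": True}
print(render_with_label("A photo essay on city skylines.", credential))
```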

Remember, labeling AI-generated content is a process that requires ongoing updates and improvement. You’ll need to revisit and refine your approach to keep pace with the whirlwind that is generative AI – from both a creator’s and a consumer’s perspective. And there is still more to come from global policymakers.

Learn more on the Pondering AI podcast: Generative AI: Unreal Realities with Ilke Demir from Intel, as she affirms the need for greater public literacy and content accountability.

Also, read more about ethical considerations: Embedding responsible innovation: How SAS is leading the charge in ethical technology.

About Author

Lindsey Coombs

Senior Editor, Data and AI

Lindsey Coombs is a Senior Editor for data and AI at SAS. She researches and writes on topics covering advanced analytics and evolving tech like generative AI. Lindsey is a seasoned communicator with more than 18 years of experience writing content for a broad range of industries and audiences. She is passionate about the safe and ethical use of technology that benefits humanity.
