Google’s then-executive chairman Eric Schmidt concluded an otherwise optimistic speech about the myriad social benefits of technology, delivered at a policy dinner in Washington, DC, with a warning: even as we marvel at the blessings and limitless potential of technology, we must never lose sight of the fact that there will always be people determined to turn the same life-improving advances to deeply shameful purposes. The technology ecosystem had delivered the goods, but it had also exposed us to new dangers.
That evening’s speech came about seven months after Edward Snowden exposed the National Security Agency’s surveillance programs, but Schmidt’s warning was still prescient. It came before the wider world began to reckon with the growing risk of cyber misconduct. It came before the Russians made a concerted effort to manipulate US public opinion ahead of the 2016 election. It came before cable news networks gave up all pretense of objectivity, before social media platforms became profoundly politicized, and before “fake news,” content moderation, and Section 230 of the Communications Decency Act entered the everyday vocabulary.
Widespread access to technology and communication channels has given ordinary people the tools to generate and distribute endless amounts of news, commentary, photos, videos, and audio recordings. Statistics on the amount of data we generate are mind-boggling: around 90 percent of all the data in the world (since the beginning of time!) was generated in the last two years alone.
Of course, these same trends and capabilities have enabled bad actors to misuse technology to misinform and manipulate public opinion, and to turn millions of well-meaning internet users into unwitting accomplices who spread fake news with the click of a mouse. The abuse and misuse of technology, together with deepening media bias, has caused Americans to lose confidence in what they see, read, and hear, especially on the internet.
Citing the annual Edelman Trust Barometer for 2021, Axios reports: “For the first time, less than half of all Americans trust traditional media [and] social media trust has hit an all-time low of 27%.” Fabricated stories, doctored photos, and fake videos designed to support false narratives have become widespread problems that arouse suspicion, fuel hatred, undermine democracy, and render our communications infrastructure – a public good with the potential to improve efficiency, encourage civility, and raise the standard of living – unreliable, if not dangerous. According to a 2020 Pew poll, around three-quarters of adults in the United States say tech companies have a responsibility to prevent their platforms from being misused to influence elections, but only around one-quarter trust those companies to actually do so. The public’s disdain for media and technology is confirmed in Gallup’s polling as well.
Skepticism may be healthy insofar as it encourages content consumers to be more critical, but skepticism alone is not enough. Something must be done to limit the supply of fake news and inauthentic content. Protecting the public from misinformation and disinformation, taking steps to reduce the frequency of both, and restoring trust in media and social media are tasks that must be shared by individuals, governments, technology companies, and media companies. Popular platforms like Facebook and Twitter have set rules for posting and distributing “manipulated media.” However, more comprehensive solutions that rely less on subjective human judgment must play a greater role as the accumulation of data – and its potential for abuse – grows with every passing minute.
With that in mind, one promising approach is the Content Authenticity Initiative (“CAI”), a partnership among Adobe, the New York Times, Twitter, and other companies and individuals working in these areas to help content consumers make more informed decisions about what to trust.
Inauthentic content – both unintentionally and intentionally misleading – is on the rise. With the rapid expansion of digital content and the tools to create and edit it, developing more reliable methods of ensuring proper attribution and transparency is critical to restoring and maintaining trust.
The CAI aims to help consumers make more informed decisions about the authenticity of content and its provenance – who produced it and how, and when, where, why, and by whom it may have been changed. Content creators today have some options for embedding authorship metadata into their work. However, there are no standards for transmitting attribution information securely and tamper-proof across media platforms, which undermines the ability of publishers and consumers to determine the authenticity of media content.
The CAI aims to solve this problem by developing a digital provenance system built on cryptographic evidence – verifiable metadata that includes information about an asset’s creation, authorship, editing actions, capture-device details, the software used, and other characteristics. This makes it easier to identify manipulated or inauthentic content, and allows content creators and editors to disclose who created or changed an asset, what was changed, and how. The ability to convey content attribution to authors, publishers, and consumers is critical to building trust online.
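The general idea behind such a system can be sketched in a few lines of code. The following is a simplified illustration, not the CAI’s actual format or API: it binds a cryptographic hash of an asset to attribution metadata in a signed “manifest,” so that altering either the asset or the metadata breaks verification. (A real provenance system would use asymmetric signatures and certificates rather than the shared HMAC key assumed here; all names below are hypothetical.)

```python
# Illustrative provenance-manifest sketch, assuming a shared signing key.
# Real systems (e.g. C2PA-style manifests) use public-key signatures instead.
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # hypothetical key for this sketch only


def make_manifest(asset_bytes: bytes, author: str, tool: str, edits: list) -> dict:
    """Bundle the asset's hash with attribution metadata, then sign the bundle."""
    claim = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "author": author,
        "tool": tool,
        "edits": edits,  # e.g. ["crop", "color-correct"]
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}


def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Recompute the signature and the asset hash; tampering breaks one of them."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # the metadata itself was altered
    return manifest["claim"]["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()


photo = b"...raw image bytes..."
m = make_manifest(photo, author="Jane Doe", tool="PhotoEditor 1.0", edits=["crop"])
print(verify_manifest(photo, m))          # the untouched asset verifies
print(verify_manifest(photo + b"x", m))   # an edited asset no longer matches
```

The key design point is that attribution travels *with* the content and can be checked by anyone: a consumer who receives the asset and its manifest can detect whether either has been modified since signing, without trusting the platform it arrived through.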
Whether it is the CAI or some other open collaboration among content producers, publishers, and consumers that ultimately mitigates the “fake news” problem, technology – which has been both a blessing and a curse – has a chance to redeem itself and play a central role in closing society’s trust gap.