Everyone seems to agree that “fake news,” or disinformation, permeates the online environment. Researchers increasingly examine the Internet’s false and embellished information as a significant social phenomenon, one that reaches and influences millions of Internet users.
Two recent studies seek to define a path to information reliability amid the increasing proliferation of false and misleading information on the Internet, particularly in social media.
A European Union study, “A Multi-Dimensional Approach to Disinformation,” issued in March 2018, recognizes the problem of disinformation spread through modern technologies as complex and unlikely to be solved through heavy-handed regulation. At the outset, the report acknowledges that disinformation problems are “deeply intertwined with the development of digital media.” Various actors, alone and in groups, have made “manipulative uses” of the Internet and other communications infrastructures “to produce, circulate and amplify disinformation on a larger scale than previously, often in new ways that are still poorly mapped and understood.”
In a sense, the report recognizes how differently the Internet has developed from John Perry Barlow’s original expectation of a “humane and fair” new “civilization of the Mind in Cyberspace.” The report sees digital disinformation as threatening democratic political processes, including the integrity of elections, as well as public policies in the areas of health, science, and finance.
The EU report, interestingly, recommends abandoning the phrase “fake news,” because it has been “appropriated and used misleadingly by powerful actors.” But whatever the problem is called (the report prefers “disinformation”), it needs to be spotted and revealed as misleading. To this end, the report calls for greater financial support for independent news media, fact-checking sources, and media and information literacy programs. To the extent possible, these efforts should be independently and privately supported, to avoid political influence.
It also suggests new data-sharing platforms and initiatives, so that independent researchers and fact-checkers can readily access and use reliable data. The idea seems to be that if large, reliable, up-to-date digital libraries are available, fact-checkers and independent researchers can more quickly and reliably put the kibosh on false reports. The study’s 44 pages, however, provide few specifics beyond a commitment to further study and a hope that the problem can be combated by a combination of reliable fact-checkers and smarter media consumers.
Another report, by the Data and Society Research Institute, addresses the crucial issue of educating consumers to better understand and evaluate the media messages they receive. In this February 2018 study, “The Promises, Challenges, and Futures of Media Literacy,” authors Monica Bulger and Patrick Davison examine whether media literacy programs are up to the task of combating the tsunami of disinformation on today’s Internet.
Media literacy education teaches students active inquiry and critical thinking about media messages. According to the report, some studies suggest that students trained in media literacy are more likely to rate evidence-based posts, rather than posts containing disinformation, as accurate. But the data is incomplete and inconclusive; in a few studies, many students who had performed well in media literacy courses still readily accepted hoaxes presented to them on the Internet.
Accordingly, the authors caution that media literacy “cannot be treated as a panacea” and at best is only one tool in today’s complex media and information environment. Modern social media presents particular problems, because it is harder for individuals to use traditional media literacy tools to assess the personalized information presented on those sites.
In their recommendations on the “future of media literacy,” the authors suggest that an effective media literacy program needs to bring together tools and understandings from many different disciplines, including social psychology (personal decision-making and its built-in biases and shortcuts); political science (how people accept, justify, and believe disinformation that reinforces personal biases); sociology (how fear and polarization affect choices); and communications science (susceptibility to conspiracy theories).
The two studies, essentially, paint a grim picture of the disinformation environment, telling us that it was much easier to create this environment than it will be to climb out of it. In the old days of edited, curated, and vetted information, most disinformation was filtered out before publication. In today’s world, where everyone is their own publisher or poster, without editors or fact-checkers, the burden of assessing and evaluating information falls on recipients. As these reports show, it won’t be easy to develop and teach the sophisticated and effective media literacy tools that Internet users will need.
Mark Sableman and Mike Nepple are partners in Thompson Coburn’s intellectual property practice group.