In the United States, around two-thirds of adults say they get their news from social media platforms*. In recent years, however, the inaccuracy of these sources has come to light in the form of controversy, fabricated news stories, and inaccurate statements. As news becomes ever more readily accessible, many consumers find themselves unable to differentiate between genuine reporting and news crafted to steer them toward a particular viewpoint. While many know this is an issue in print, fewer may be aware that the same techniques extend to video and audio files. In a world where we rely on social media daily, emerging technologies such as “deepfakes” (which allow a user to superimpose one individual’s face onto a body double, creating a digital copy of that person) could disrupt the flow of news, sow confusion about the truth behind sources, upend day-to-day life, or erode trust in the internet altogether.
In early 2018, a “deepfake” of former President Barack Obama surfaced online, attracting the attention of millions by calling then-President Donald Trump a “total and complete dipshit” alongside a warning that “our enemies can make it seem like anyone is saying anything.” While this “deepfake” was made by Jordan Peele, a famous writer and director, it speaks to the ease of creating fake audio and visual representations of a person online. After creating the project in conjunction with BuzzFeed News, BuzzFeed CEO Jonah Peretti commented not only on how simple the task was, but on how common its use was becoming. “We’ve covered counterfeit news websites that say the Pope endorsed Trump that look kinda like real news,” he said. “Now we’re starting to see tech that allows people to put words into the mouths of public figures.” Both the audio and visual aspects of the video were created using publicly available programs such as Adobe’s After Effects and artificial intelligence tools such as FakeApp.
In fact, this same technique was recently used by a Belgian political party, Socialistische Partij Anders (sp.a), which published a video of President Donald Trump telling the Belgian people that he “had the balls to withdraw from the Paris climate agreement” and that “[they] should too.” This sparked outrage among Belgian viewers, who taunted the president, called Americans names, and insinuated that the president had both overstepped his bounds and had no idea what he was talking about. The tirade did not end until the party publicly responded to comments, claiming that the post was meant as a joke and was obviously fake. A spokesperson for the party said, “It is clear from the lip movements that this is not a genuine speech by Trump,” although the video was convincing enough for millions of viewers. While these are not the only uses of this technology, they, along with revenge porn, are among the most common. Because our society is built on speed and convenience, many people fail to vet their information or look deeper into videos to spot these fakes. These forms of artificial intelligence, used skillfully, could pose a formidable threat to the Department of Defense and civilians alike. If this strategy were used against us by other countries and credible duplicates were made, the repercussions could be innumerable.
Speaking from the perspective of a 17-year-old female, I at times find it hard to differentiate between news sources that merely claim to be verified and those that actually are. According to PBS, this is a problem many teens share. In studies done by Stanford*, middle schoolers were unable to differentiate between an advertisement and a factual news source, high school students failed to spot the altered results of a graph created by the Minnesota Gun Owners Political Action Committee, and college students willingly cited .org sites without checking further into their references and resources. This is startling, as these are the children and young adults who will soon be the country’s decision makers. As the previous election showed, the power of young adult voters should not be overlooked, and it is for this reason that the military and civilians alike need to be aware of the true implications of fabricated news stories and “deepfakes.”
This form of technology is a double-edged sword. It provides ample opportunity for the Department of Defense to engage in face-to-face communication with terrorist groups by imitating the face, the voice, or both, of powerful leaders or members of their communities. This would allow the military to infiltrate organizations while minimizing or eliminating the need to leave the United States at all. However, if used against us, it could coerce voters into making decisions based on false statements and could spread false advertisements, false information, and more. All of this could drastically change the way information is received and the way citizens use the internet. People may turn away from social media, and a valuable form of advertising and communication may be lost, resulting in longer times for messages and information to be delivered and received.
The military should train its soldiers in both the proper use of social media and ways to distinguish between fake and real news, no matter how ‘realistic’ the source may seem. Additionally, it should train units in the use of AI to infiltrate hostile organizations and communities. This can be done in a multitude of ways, including allowing soldiers to practice interacting with artificial intelligence while maintaining their cover. The government could also create programs and services that educate the population on the importance of checking sources and provide tips for telling the difference between fabricated and real news. Overall, “deepfakes” and manufactured news stories are becoming more and more common and carrying greater consequences with them. It is up to us to sift through this information and find the real news behind it.
- Donald, Brooke. “Stanford Researchers Find Students Have Trouble Judging the Credibility of Information Online.” Stanford Graduate School of Education, 15 Dec. 2016, ed.stanford.edu/news/stanford-researchers-find-students-have-trouble-judging-credibility-information-online.
- Schwartz, Oscar. “You Thought Fake News Was Bad? Deep Fakes Are Where Truth Goes to Die.” The Guardian, Guardian News and Media, 12 Nov. 2018, www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth.
- Wineburg, Sam, and Sarah McGrew. “Americans Expect to Get Their News from Social Media, but They Don’t Expect It to Be Accurate.” Nieman Lab, Sep. 2018, www.niemanlab.org/2018/09/americans-expect-to-get-their-news-from-social-media-but-they-dont-expect-it-to-be-accurate/.