Richard Kern’s photographs are confrontational, and many escape any didactic interpretation. They reveal an underbelly of darkness, and often the reflection turns inward; an audience member must confront questions of representation, perversion, and beauty. In December 2020, he published his most recent book, Medicated, a collection of photographs and interviews compiled over the course of 10 years, examining the relationship between young women and prescription drug use. I sat down with Richard Kern to talk with him about it.
Social Media's Dark, Unintended Consequences
Searching for news sources that fit their hyper-partisan appetites, Americans are heading to social media sites like Facebook, Youtube, and Twitter for their news in staggering numbers. Many appear to have filled their newsfeeds and subscription lists with sensational clickbait and sound bites rather than with professional reporting from reputable news organizations; it almost appears as if Americans are confusing reliable reporting from The New York Times with the politically slanted NowThis News videos proliferating on Facebook.
Unfortunately, accurate and informative content can be hard to find on these social media sites. Remember when Facebook erroneously spread a misquotation from a 1998 People magazine interview with President Trump? Or the Pizzagate conspiracy theory that spread across InfoWars’ Youtube page? Not to mention the countless other examples of “fake news” that have set social media ablaze. Some, like teenagers in Veles, Macedonia, have even earned enough advertising revenue from reposting fake articles to justify dropping out of high school and continuing to cash in on conservative appetites for clickbait.
Fake news, along with continual attacks on the mainstream media as illegitimate, is incredibly damaging to the integrity of journalism and to the population as a whole. The understanding that a free and open press is fundamental to the U.S. Constitution no longer seems widely shared. With each passing day, the educated, self-governing citizenry the American experiment depends on seems more unattainable because of the news we are digesting.
On the surface of this existential crisis is fake news, but lurking underneath is hateful, radical, and extremist content emboldening dangerous homegrown terrorists. Facebook, Youtube, and Twitter, everyone’s favorite social media apps, seem to be little more than bystanders.
Since its inception, ISIS has used social media as a digital weapon to recruit members and spread its message to a global audience. Youtube videos, Facebook pages, and Twitter accounts abound with the infamous Anwar Al-Awlaki, just one among many imams, preaching hateful interpretations of Quranic verses to justify jihad against America and our allies. A quick five-minute search yields over 71,400 videos touting his radical message for millions to view.
While uncomfortable for many to admit, social media companies’ advertising-driven business model allows them to profit from views and clicks on extremist content. The video producers also bask in the monetary gain, using the funds to support terrorist operations abroad. Unintentionally, then, Google, Youtube, and Facebook’s revenue models may be supporting ISIS, its counterparts, and homegrown terrorists around the globe. Should stockholders vote to change Youtube, Twitter, and Facebook’s business models in the face of the global war on terrorism?
None of these three sites envisioned radical extremist users at their humble origins. Facebook began as a college-only social network, Youtube as a way to share home videos, and Twitter as a way to speak your mind in 140 characters or less. Today, however, the paradigm has shifted, and seething hatred of American values has penetrated these once innocent platforms. This has put the Silicon Valley titans at a crossroads: protect free speech and profit from this content, or uphold their terms of service and do everything possible to delete it.
Yet is there any evidence suggesting that social media has directly caused terrorism? Lawsuits filed against Google, Twitter, and Facebook over the San Bernardino shootings allege that the companies “knowingly and recklessly” provided ISIS with “a tool for spreading extremist propaganda, raising funds and attracting new recruits.” The Showtime documentary American Jihad explains how Youtube videos, Facebook pages, and Twitter posts are radicalizing people and driving them to commit violent acts in the name of ISIS without ever traveling to the group’s home base in Syria and Iraq. Plenty of examples illustrate the homegrown terrorist threat America now faces: the Boston Marathon bombing, the Orlando nightclub attack, and others were all linked to the perpetrators’ ability to access radical content from the confines of their homes.
Logically, we would expect these social media companies to do more than disavow the radical undercurrent on their sites. Yet with 500 hours of content uploaded every minute, it is impossible for Youtube employees to continuously monitor everyone who spreads extremist ideology online. Even when they do block users with potentially violent tendencies, new accounts continue to spawn with identical videos spreading the same messages. This removal-and-reappearance process is a never-ending game of whack-a-mole.
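To make the whack-a-mole problem concrete, here is a minimal Python sketch, entirely hypothetical and not any platform’s actual system, of an exact-fingerprint blocklist for removed videos. The function names (fingerprint, take_down, is_known_reupload) and the sample byte strings are invented for illustration; the point is that a re-encoded copy of a banned video no longer matches its stored fingerprint, so it sails straight past the filter.

import hashlib

# Hypothetical blocklist of fingerprints taken from videos already removed.
removed_fingerprints = set()

def fingerprint(video_bytes: bytes) -> str:
    # Exact content hash: any re-encoding or trimming changes it completely.
    return hashlib.sha256(video_bytes).hexdigest()

def take_down(video_bytes: bytes) -> None:
    # Record the removed video so identical re-uploads can be caught.
    removed_fingerprints.add(fingerprint(video_bytes))

def is_known_reupload(video_bytes: bytes) -> bool:
    return fingerprint(video_bytes) in removed_fingerprints

original = b"\x00\x01extremist-video-stream\x02\x03"
take_down(original)

identical_copy = original                # same file, re-uploaded as-is
reencoded_copy = original + b"\x00"      # one byte of padding after re-encoding

print(is_known_reupload(identical_copy))   # True: the exact copy is caught
print(is_known_reupload(reencoded_copy))   # False: the altered copy slips through

Platforms reportedly use far more robust perceptual matching than this, but the same arms race applies: every tweak to the file, and every fresh account, forces another round of detection.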
Despite the platforms’ recent steps to improve content curation, it is likely impossible to root out all extremists from social media without algorithms that work faster than humans can. Advertising agencies control where corporations’ ads run, so the job of ensuring viewers don’t see a Mercedes E-Class ad on a video sympathizing with ISIS falls to them. The system they rely on, programmatic advertising, is an algorithm that seeks out the most viewed and clicked content within seconds. As a result, this rapid tool can place ads on content no human has had time to review for hateful undertones, often displaying a company’s ad alongside content unreflective of its values. Even with blacklisting options available, programmatic advertising faces the same dilemma the social media companies do: banning one account from carrying ads accomplishes little when others pop up with the same video.
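Programmatic placement can be illustrated with a similarly hedged toy model in Python. Nothing here reflects a real ad exchange’s API; the AdSlot class, the place_ad function, the channel names, and the click numbers are all invented. The sketch only captures the logic described above: the algorithm optimizes for engagement and filters by a channel blacklist, so the same extremist video re-uploaded under a new account wins the ad anyway.

from dataclasses import dataclass

@dataclass
class AdSlot:
    channel_id: str        # the uploader's account
    video_title: str
    expected_clicks: int   # what the placement algorithm optimizes for

def place_ad(slots: list[AdSlot], blacklist: set[str]) -> AdSlot | None:
    # Pick the highest-engagement slot whose channel is not blacklisted.
    eligible = [s for s in slots if s.channel_id not in blacklist]
    return max(eligible, key=lambda s: s.expected_clicks, default=None)

blacklist = {"banned_channel_42"}   # the account already caught and blocked
slots = [
    AdSlot("cooking_channel", "Weeknight pasta", 1_200),
    AdSlot("fresh_account_43", "Sermon reupload", 9_500),  # same video, new account
]

winner = place_ad(slots, blacklist)
print(winner.channel_id)   # fresh_account_43: the ad lands on the re-upload anyway

Blacklisting the new account works only until the next one appears, which is exactly the dilemma described above.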
With little foresight when their platforms originated, Google, Youtube, and Facebook may have created a living Frankenstein’s monster. If we are wary of letting radical messages continue to spread, it may be time for outside actors to intervene in social media’s fight against terrorism. Online content is a public good, and government interference is therefore justifiable within reason. If the U.S. is going to continue fighting terrorism abroad, the government should look online first.
-Jordan Wolken