IF A NAZI texted your teenager, would you want to know?
We parents seem to have accepted social media as an inevitable part of our kids’ lives. They go on Instagram where we post our pretty pictures of food and artful vacation shots, and they’re Snapchatting friends and sometimes us. And in general, they’re not looking at troubling images. But that doesn’t mean we shouldn’t be concerned: It’s now clear that even Instagram is teeming with dangerous messages designed to radicalize our offspring. It’s not a matter of if they’ll see disturbing things, but when.
Since the 2016 election, plenty of attention has been focused on Facebook. Yet the apps where our kids spend their time — Instagram, Snapchat, YouTube, TikTok and Musical.ly — remain largely overlooked, primarily because most adults don’t use them or think of them as nefarious.
Indeed, about 85 percent of teenagers use both Instagram and Snapchat, according to the research firm Piper Jaffray & Co., while among 30- to 49-year-olds, just 40 percent and 26 percent, respectively, use those apps, says the Pew Research Center.
And just how much time do adolescents spend using them? Smartphone ownership among teenagers is nearly universal, and 99 percent of them report being on social media “near-constantly” or “several times a day,” also according to Pew. On any given day, our children may receive more communication from memes on their phones than from family members.
Instead of hiding out on encrypted, dark web message boards, provocateurs now plant their flags on social apps in plain sight. In an attempt to attract new followers, they are creating handles like @theright.americans and @raging_patriots. And because our kids lack “media literacy,” a set of skills for analyzing and evaluating media, they can’t tell the difference between professionally reported news, advertising, opinion, and propaganda.
Adults may find it difficult to understand how radical right messaging can show up amidst snapshots of spring break antics, but a quick glance at any social media platform reveals how easily it can happen. Just a single meme shared and hashtagged by a child’s friend can trigger a matrix of similar content automatically pushed into your child’s feed. Click on any of the suggested posts and you may find that a seemingly feel-good image was actually a Trojan horse, leading to a plethora of misogynistic slogans, jingoistic cartoons, and racist caricatures.
According to a recent report in The Atlantic, “Instagram is teeming with these conspiracy theories, viral misinformation, and extremist memes, all daisy-chained together via a network of accounts with incredible algorithmic reach and millions of collective followers.” The same AI alchemy is at work on YouTube and TikTok, which mash up children’s own videos and cartoons with ever-darker adult themes. (Incidentally, TikTok’s online legal page contains a link for law-enforcement data requests, presumably to deal with all the half-naked prepubescent frolicking.)
As an adult, I can dismiss bizarre images and radical memes as disturbing yet absurd, but experts who study how teenagers use social media say it is entirely appropriate for parents to be alarmed about the rise of hate speech on Instagram. “Kids are looking to social media to say, ‘What should I look like? What’s normal? What’s acceptable?’” says Dr. David L. Hill, chairperson of the American Academy of Pediatrics’ Council on Communications and Media. “Social media gives teens an opportunity to try on identities almost like costumes.” But what happens when the identity on offer is that of a white supremacist? Is all this still harmless fun?
RESEARCHERS AT COMMON Sense Media, one of the country’s leading independent nonprofit organizations devoted to helping parents navigate media and technology, have long been aware of how media content helps shape children’s beliefs, and they provide resources for parents to help their kids stay safe online, including app reviews and Fortnite explainers.
They also understand a fundamental conflict between children’s well-being and profit-driven media companies. “It’s in the companies’ best interest to get as many users as possible, keep them using the platform as long as possible, and have users increase their network as much as they can — all in contradiction to parents’ goals for our kids,” says Caroline Knorr, the senior parenting editor at Common Sense, who authored a guide for parents to combat online hate. “These companies really don’t have our kids’ best interests at heart, and they’re not regulated the same way TV is. Until there is more regulation in terms of protecting kids, then it really is the responsibility of parents to be aware of what their kids are doing.”
Of course, Facebook-owned Instagram has a formal policy that it will “remove content that contains credible threats or hate speech.” But political propaganda can escape even sophisticated AI filters, which means the policy is hard to enforce. Consider a meme of an angel-protected Donald Trump dressed as a Founding Father standing up to Mexican, Chinese, Jewish, and Native American people with the slogans “FOR GOD AND COUNTRY; IT IS OUR HOLY DUTY TO GUARD AGAINST THE FOREIGN HORDES.” On its face, the image seems a respectful, even reverent depiction of a patriotic leader. The underlying message, however, is a perverted patriotism that falsely equates American ideals with racism and xenophobia — and warns non-whites, immigrants, and so many others that they don’t belong.
Distinguishing real from fake is increasingly difficult in our digital era; understandably, many children take what they see at face value. “Students often believe that all information is created equal, and sometimes they believe it is all true,” said Alan C. Miller, a longtime journalist who in 2008 founded the News Literacy Project, which helps teach students how to discern fact from fiction. Miller, who has visited scores of classrooms in the last decade, says he has met high school seniors who say flat out that they believe everything they see on Instagram — and routinely share it with everyone in their networks.
This troubling effect of online communication is what RAND last year called “Truth Decay,” or the “diminishing role of facts and analysis in American public life.”
Even more worrisome is what the Stanford History Education Group (SHEG) found in 2016: a widespread inability among middle schoolers, high schoolers, and college students to determine the credibility of information that flows through social media. The conclusion of SHEG’s landmark study, “Evaluating Information: The Cornerstone of Civic Online Reasoning,” should send chills down the spine of every parent: “Overall, young people’s ability to reason about the information on the Internet can be summed up in one word: bleak.”
Children and adolescents are particularly susceptible to the veil of authenticity that amateur videos and memes suggest. Because these posts lack professional gloss, they appear to tell a deeper truth. “We’ve heard students who believe all [online information] is created equal and equally driven by bias, whether commercial, political, or agenda bias,” says Miller of the News Literacy Project. “So they think if it’s on a blog or YouTube, it’s more credible because it’s not mediated.” This attitude leaves youths “especially susceptible to hoaxes and conspiracy theories and rumors.”
FOR TOO LONG, social media companies have refused to acknowledge the obvious: By building platforms where anyone can publish anything, they have made it easier to propagate humanity’s darkest impulses. And they’ve shown a reluctance to rein that content back in. After years of complaints, it was only this week that Facebook announced that it would finally take steps to ban the promotion of all white supremacist content on its platforms. Whether the social media company can successfully police this type of messaging remains to be seen.
As Facebook’s failure to stop foreign meddling, racist advertising, and privacy violations makes clear, powerful media companies do not regulate themselves. More alarming, as The New York Times revealed last year, the company has pursued deceptive tactics to evade regulation, including the launch of a propaganda campaign to influence lawmakers.
We Americans, including our children, are not just consumers of products but citizens of a democracy. As such, our communication media are a public trust, an idea enshrined in law since Congress established the Federal Communications Commission (FCC) in 1934. Just as the FCC regulates other forms of broadcasting, social media companies need to be held accountable to the American people for balancing commerce with the public interest. The urgency of the problem could not be more evident.
Hate speech and conspiracy theories appeal most to kids who are already vulnerable, those who are looking for peer approval or seeking a group to belong to, says Common Sense’s Knorr. Hopefully, the absolute number of teens who will ultimately be radicalized by white supremacists is small.
Yet the harm that one impressionable young person can cause — like the March 15 massacre in Christchurch, New Zealand, which began with a white nationalist meme and was engineered to be broadcast live on Facebook — remains incalculable.
Julie Scelfo is a journalist who writes about human behavior and the author of “The Women Who Made New York.”