Social media platforms like Facebook and YouTube are fertile breeding grounds for white supremacists and white nationalists – enabling acts of real-world violence and fueling bigotry – and Big Tech must do more to stop it, advocacy groups that monitor and combat hate and discrimination say.
During a House Judiciary Committee hearing on Tuesday, the Anti-Defamation League (ADL) and other organizations detailed how tech platforms have fueled an uptick in hate crimes against Muslims, immigrants, African-Americans and Jews by enabling perpetrators to self-radicalize, locate extremist hate groups and find like-minded users.
Facebook and Google, meanwhile, defended measures their companies have taken in the wake of horrific incidents such as the Tree of Life Pittsburgh synagogue shooting and the New Zealand mosque massacre, both of which were allegedly perpetrated by gunmen who espoused white supremacist or white nationalist views online.
"The white supremacist movement has sought to re-brand itself and become more palatable to broader audiences," said Kristen Clarke, president and executive director of the National Lawyers Committee for Civil Rights Under Law, in her prepared remarks. "Instead of 'white supremacy,' they say 'white nationalism.' Instead of hiding behind masks, they hide behind computer screens."
The ADL's senior vice president for policy, Eileen Hershenov, explained that white supremacist violence accounted for 78 percent of all extremist-related murders in the United States in 2018.
"White supremacists in the United States have experienced a resurgence in the past three years," Hershenov said. "There is also a clear corollary, as our research shows, to the rise in polarizing and hateful rhetoric on the part of candidates and elected leaders."
Hershenov called on tech giants like Facebook and Twitter to share granular data about hate groups on their platforms, something the companies have so far declined to do.
Representatives from Facebook and Google both said their companies have made removing white supremacist and white nationalist content a priority, and both said they are still improving their artificial intelligence systems to detect hateful content sooner. Facebook has 30,000 people working broadly on safety and security, although the social network was blasted after the New Zealand shooting for allowing the gunman's livestreamed video to spread.
In 2017, following the deadly white nationalist demonstrations in Charlottesville, Va., tech giants began banishing extremist groups and individuals espousing white supremacist views and support for violence.
Even so, as recently as Tuesday afternoon, a Facebook account called "Aryan Pride" was still visible on the platform.
The hearing, which at times devolved into partisan bickering over the Trump administration's rhetoric and policies and their impact on the global spread of white nationalist violence, also featured Mohammad Abu-Salha, whose two daughters and son-in-law were murdered by a hate-filled gunman in North Carolina in 2015.
"I taught my children our faith every Sunday. That is why they were all loving and caring," Abu-Salha said. "Yusor was a vibrant 21-year-old woman who always found ways to give to others. Razan was 19 years old and was so full of life, a gentle soul."
Advocates called for more resources for law enforcement agencies, greater coordination among tech companies, transparency about the companies' methods for stopping hate groups, and better enforcement of Big Tech's existing policies against extremism. During the hearing, comments on YouTube's livestream of the proceedings became so hateful that the company disabled them.
"The companies that create and profit from online platforms where white supremacy is prevalent also have a responsibility to address this crisis. We call on all online platforms to fully and fairly enforce their terms of service, and terminate purveyors of hate who violate those terms by promoting and inciting violence," Clarke said. "We urge greater transparency by the tech sector regarding the prevalence of hateful activities on their platforms."
Hershenov emphasized that digital platforms must do more to battle extremism.
"Individuals can easily find sanction and reinforcement online for their extreme opinions or actions," she said. "This enables adherents like white supremacist mass shooters such as [Tree of Life gunman] Bowers to self-radicalize without face-to-face contact with an established terror group or cell."