2018: it’s us vs. them. Counterculture has become staid.

How much the narrative has dissembled.

Facebook, Google and Twitter told Congress Wednesday that they’ve gone beyond screening and removing extremist content and are creating more anti-terror propaganda to pre-empt violent messages at the source.

Representatives from the three companies told the Senate Committee on Commerce, Science and Transportation that they are, among other things, targeting people likely to be swayed by extremist messages and pushing content aimed at countering those messages. Several senators criticized their past efforts as not going far enough.

“We believe that a key part of combating extremism is preventing recruitment by disrupting the underlying ideologies that drive people to commit acts of violence. That’s why we support a variety of counterspeech efforts,” said Monika Bickert, Facebook’s head of global policy management, according to an advance copy of her testimony obtained by CNBC.

Counter-axis of tyranny

Facebook is also working with universities, nongovernmental organizations and community groups around the world “to empower positive and moderate voices,” Bickert said.

Google’s YouTube, meanwhile, says it will continue to use what it calls the “Redirect Method,” developed by Google’s Jigsaw research group, to send anti-terror messages to people likely to seek out extremist content through what is essentially targeted advertising. If YouTube determines that a person may be headed toward extremism based on their search history, it will serve them ads that subtly contradict the propaganda they might see from ISIS or other such groups. YouTube also supports “Creators for Change,” a group of people who use their channels to counteract hate.
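
To make that flow concrete, here is a minimal sketch of how a redirect-style pipeline could work. Everything in it is a hypothetical illustration: the signal keywords, the scoring rule, the threshold, and the counter-ad inventory are invented for this example and are not Google’s or Jigsaw’s actual implementation.

```python
# Hypothetical sketch of a "Redirect Method"-style pipeline.
# The keyword list, scoring rule, and threshold below are invented
# for illustration; the real system is far more sophisticated.

EXTREMIST_SIGNALS = {"recruitment", "martyrdom", "caliphate"}  # assumed examples
COUNTER_ADS = [
    "Testimony from defectors who left the group",
    "Documentary footage contradicting the propaganda",
]

def risk_score(search_history: list[str]) -> float:
    """Fraction of recent searches that match known extremist signals."""
    if not search_history:
        return 0.0
    hits = sum(
        any(signal in query.lower() for signal in EXTREMIST_SIGNALS)
        for query in search_history
    )
    return hits / len(search_history)

def select_ads(search_history: list[str], threshold: float = 0.3) -> list[str]:
    """Serve counter-messaging ads only when the score crosses a threshold."""
    if risk_score(search_history) >= threshold:
        return COUNTER_ADS
    return []  # ordinary ad targeting applies otherwise

if __name__ == "__main__":
    history = ["how to join the caliphate", "football scores", "martyrdom videos"]
    print(select_ads(history))  # two of three queries match, so counter-ads are served
```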

The video site is also adapting how it deals with videos that are offensive but don’t technically violate its community guidelines, putting this so-called borderline content behind interstitials and removing comments, according to the testimony of Juniper Downs, YouTube’s head of public policy.
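
As a rough illustration of what that policy change amounts to, the sketch below models the three outcomes described in the testimony: remove content that violates the guidelines, limit borderline content (interstitial shown, comments off), or leave it alone. The class names and flags are assumptions made up for this example, not YouTube’s internals.

```python
# Hypothetical model of a three-way moderation decision. The labels and
# feature flags are invented for illustration only.
from dataclasses import dataclass
from enum import Enum, auto

class Verdict(Enum):
    ALLOW = auto()
    LIMIT = auto()   # borderline: stays up, but with restricted features
    REMOVE = auto()  # violates community guidelines

@dataclass
class VideoState:
    show_interstitial: bool = False
    comments_enabled: bool = True
    visible: bool = True

def apply_policy(verdict: Verdict) -> VideoState:
    if verdict is Verdict.REMOVE:
        return VideoState(visible=False, comments_enabled=False)
    if verdict is Verdict.LIMIT:
        # Borderline content stays up behind a warning, with comments off.
        return VideoState(show_interstitial=True, comments_enabled=False)
    return VideoState()

print(apply_policy(Verdict.LIMIT))
```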

Downs said that over the past year YouTube’s algorithms, in concert with human reviewers, have been able to remove hateful content faster than before.

“Our advances in machine learning let us now take down nearly 70% of violent extremism content within 8 hours of upload and nearly half of it in 2 hours,” Downs said.

Censorship is noxious; collective brainwashing and artificial manipulation of the paradigm are the end of society as we know it.