
Twitter may notify users exposed to Russian propaganda during 2016 election




Testifying Wednesday (from left) Monika Bickert, head of global policy management at Facebook Inc.; Juniper Downs, global head of public policy and government relations at YouTube Inc.; and Carlos Monje, director of U.S. and Canada public policy and philanthropy at Twitter Inc., appear before a Senate Commerce, Science and Transportation Committee hearing in Washington. The committee is looking into terrorism and social media and whether large tech companies like Twitter, Facebook and YouTube are doing enough to combat the spread of extremist propaganda online. | BLOOMBERG

Twitter may notify users whether they were exposed to content generated by a suspected Russian propaganda service, a company executive told U.S. lawmakers on Wednesday.

The social media company is “working to identify and inform individually” its users who saw tweets during the 2016 U.S. presidential election produced by accounts tied to the Kremlin-linked Internet Research Agency, Carlos Monje, Twitter’s director of public policy, told the U.S. Senate Commerce, Science and Transportation Committee.


A Twitter spokeswoman did not immediately respond to a request for comment about plans to notify its users.

Facebook Inc. in December created a portal where its users could learn whether they had liked or followed accounts created by the Internet Research Agency.

Both companies and Alphabet’s YouTube appeared before the Senate committee on Wednesday to answer lawmaker questions about their efforts to combat the use of their platforms by violent extremists, such as the Islamic State.

But the hearing often turned to questions of Russian propaganda, a vexing issue for internet firms that spent much of the past year responding to criticism that they did too little to deter Russians from using their services to anonymously spread divisive messages among Americans in the run-up to the 2016 U.S. elections.

U.S. intelligence agencies concluded Russia sought to interfere in the election through a variety of cyber-enabled means to sow political discord and help President Donald Trump win. Russia has repeatedly denied the allegations.

The three social media companies faced a wide array of questions related to how they police different varieties of content on their services, including extremist recruitment, gun sales, automated spam accounts, intentionally fake news stories and Russian propaganda.

Monje said Twitter had improved its ability to detect and remove “maliciously automated” accounts, and now challenged up to 4 million per week — up from 2 million per week last year.

Facebook’s head of global policy, Monika Bickert, said the company was deploying a mix of technology and human review to “disrupt false news and help (users) connect with authentic news.”

Most attempts to spread disinformation on Facebook were financially motivated, Bickert said.

The companies repeatedly touted increasing success in using algorithms and artificial intelligence to catch content not suitable for their services.

Juniper Downs, YouTube’s director of public policy, said algorithms quickly catch and remove 98 percent of videos flagged for extremism. But the company still deploys some 10,000 human reviewers to monitor videos, Downs said.

