#Russian computer trolls manipulated U.S. electorate
malenipshadows · 7 years ago
Link
USA Today news analysis.    *** While some ads focused on topics as banal as business promotion or Pokémon, the Russia-based Internet Research Agency consistently promoted ads designed to inflame race-related tensions. Some ads dealt with race directly; others dealt with issues fraught with racial and religious baggage, such as ads focused on protests over policing, the debate over a wall on the U.S. border with Mexico and relationships with the Muslim community.    The I.R.A. continued to hammer racial themes even after the election. ***
bountyofbeads · 5 years ago
Text
Bipartisan Senate report calls for sweeping effort to prevent Russian interference in 2020 election
By Craig Timberg and Tony Romm | Published October 08 at 2:45 PM ET | Washington Post | Posted October 8, 2019 5:15 PM ET
A bipartisan panel of U.S. senators Tuesday called for sweeping action by Congress, the White House and Silicon Valley to ensure social media sites aren’t used to interfere in the coming presidential election, delivering a sobering assessment about the weaknesses that Russian operatives exploited in the 2016 campaign.
The Senate Intelligence Committee, a Republican-led panel that has been investigating foreign electoral interference for more than two and a half years, said in blunt language that Russians worked to damage Democrat Hillary Clinton while bolstering Republican Donald Trump — and made clear that fresh rounds of interference are likely ahead of the 2020 vote.
“Russia is waging an information warfare campaign against the U.S. that didn’t start and didn’t end with the 2016 election," said Sen. Richard Burr (R-N.C.), the committee’s chairman. “Their goal is broader: to sow societal discord and erode public confidence in the machinery of government. By flooding social media with false reports, conspiracy theories, and trolls, and by exploiting existing divisions, Russia is trying to breed distrust of our democratic institutions and our fellow Americans.”
Though the 85-page report itself had extensive redactions, in the visible sections lawmakers urged their peers in Congress to act, including through the potential adoption of new regulations that would make it more transparent who bought an ad. The report also called on the White House and the executive branch to adopt a more forceful, public role, warning Americans about the ways in which dangerous misinformation can spread while creating new teams within the U.S. government to monitor for threats and share intelligence with industry.
The recommendations call for Silicon Valley to more extensively share intelligence among companies, in recognition of the shortage of such sharing in 2016 and also the ways that disinformation from Russia and other countries spreads across numerous platforms — with posts linking back and forth in a tangle of connections.
“The Committee found that Russia’s targeting of the 2016 U.S. presidential election was part of a broader, sophisticated and ongoing information warfare campaign,” the report says. The Russian effort was “a vastly more complex and strategic assault on the United States than was initially understood... an increasingly brazen interference by the Kremlin on the citizens and democratic institutions of the United States."
The committee report recounts extensive Russian manipulation of Facebook, Instagram, YouTube, Twitter, Google and other major platforms with the goal of dividing Americans, suppressing African American turnout and helping elect Trump president. Tuesday’s report, the second volume of the committee’s final report on Russian interference in the 2016 election, offered the most detailed set of recommendations so far for stiffening the nation’s defenses against foreign meddling online — now a routine tactic for many nations.
While the report tracks closely with the previous findings of Special Counsel Robert S. Mueller III and several independent researchers, the comprehensiveness and forcefulness of the report’s conclusions are striking in light of Trump’s efforts to minimize the impact of Russian interference in the election that brought him to office. The release also comes amid a burgeoning impeachment inquiry over whether Trump sought foreign help — from Ukraine and others — to help his reelection chances in 2020.
The White House did not immediately respond to requests for comment.
Trump has questioned the findings by U.S. intelligence officials that the 2016 election was a target of Russian manipulation, sometimes embracing conservative conspiracy theories even as federal investigators have detailed efforts to interfere through fake social media accounts, leaks of stolen Democratic Party documents and hacks into state voting systems.
The Senate Intelligence Committee backed the views of other federal officials regarding the sweep and goals of the Russian effort, saying that the operation “sought to influence the 2016 U.S. presidential election by harming Hillary Clinton’s chances of success and supporting Donald Trump at the direction of the Kremlin.”
The White House, say numerous researchers and outside critics, has failed to lead the kind of aggressive, government-wide effort they argue would protect the 2020 race, though some federal agencies took steps to address foreign threats more forcefully during the 2018 congressional election.
That included a cyber-operation that disrupted Russia’s Internet Research Agency, based in St. Petersburg, on election day. Mueller indicted the agency and 13 affiliated Russians for their alleged role in 2016 election interference, which played a central role as well in Mueller’s landmark final report, released in April.
“With the 2020 elections on the horizon, there’s no doubt that bad actors will continue to try to weaponize the scale and reach of social media platforms to erode public confidence and foster chaos,” said Sen. Mark R. Warner (Va.), the top Democrat on the committee. “The Russian playbook is out in the open for other foreign and domestic adversaries to expand upon — and their techniques will only get more sophisticated.”
Lawmakers delivered their recommendations just days after new revelations of possible election interference jolted Washington. On Friday, Microsoft announced it had discovered Iranian-linked hackers had targeted the personal email accounts associated with a number of current and former government officials, journalists writing on global affairs and at least one presidential candidate's campaign.
Microsoft declined to name the affected campaign, and said the account was not compromised. Still, the Iranian effort highlighted the lingering aftermath of Russia’s online efforts three years ago, as other countries around the world now seek to adopt the Kremlin’s tactics, turning disinformation and other forms of election meddling into a global phenomenon.
Iran has joined Russia as a leader in foreign online interference. The list of countries known to have conducted such operations also includes Saudi Arabia, Israel, China, the United Arab Emirates, Pakistan and Venezuela, say researchers. A report by Oxford University’s Computational Propaganda Project said last month that at least 70 nations have sought to manipulate voters and others online, though most meddle mainly in their own domestic politics.
The Senate Intelligence Committee also has documented extensive Russian efforts to manipulate American voting systems.
In the first chapter of the committee’s report, released in July, lawmakers said that voting systems in all 50 states likely had been targeted by Russian agents in some manner. While it affirmed that votes had not been changed or compromised during the 2016 election, it concluded that the U.S. government had fallen far short in its security responsibilities by failing to warn state officials, who oversee elections, and provide them with sufficient, actionable threat information.
Last year’s Senate Intelligence Committee report on social media manipulation found Facebook in particular was key to reaching African Americans and conservative voters. The 20 most popular Facebook pages run by the Russians — with names such as “Being Patriotic,” “Blacktivist” and “Army of Jesus” — generated 39 million likes, 31 million shares, 5.4 million reactions and 3.4 million comments. The Russian campaign reached 126 million people on Facebook and 20 million more on Instagram, company officials reported to Congress.
Tuesday’s report described how efforts to manipulate Americans over social media operated in multiple steps. Fake accounts operating from Russia started by ingratiating themselves into online conversations using non-political comments, then switched to overtly partisan content.
The Russian-created “Army of Jesus” Facebook group, for example, on Oct. 26, 2016 — less than two weeks before the presidential vote — said, “There has never been a day when people did not need to walk with Jesus.”
Then, on Nov. 1, with the election approaching, the same “Army of Jesus” page said, “HILLARY APPROVES REMOVAL OF GOD FROM THE PLEDGE OF ALLEGIANCE.”
The report also noted that the paid advertisements on Facebook, Instagram and other platforms were much less important than the free, viral content created by teams of Russian disinformation operatives working across multiple platforms.
Andy Stone, a spokesman for Facebook, said the tech giant since 2016 has “stepped up our efforts to build strong defenses on multiple fronts,” including efforts to detect fake accounts and remove coordinated efforts to spread misinformation on the site. In September, Facebook hosted U.S. government officials and other tech company representatives to discuss ways to safeguard the 2020 election.
Google and Twitter did not immediately respond to requests.
Instagram, which is owned by Facebook and has grown increasingly influential in recent years, played a key role in the Russian disinformation campaign. The top 10 accounts run by the Russian operatives were designed to appeal to specific groups, including African Americans, veterans and gay people. The names of the accounts included “@Blackstagram,” “@american.veterans,” “@rainbow_nation,” “@afrokingdom,” “feminism_tag” and “@cop_block_us.”
“On the basis of engagement and audience following measures," the report said, "the Instagram social media platform was the most effective tool used by the [Internet Research Agency] to conduct its information operations campaign.”
Putin helped Trump in 2016. What is he planning for 2020?
By Paul Waldman | Published October 08 at 3:40 PM ET | Washington Post | Posted October 8, 2019 5:15 PM ET
We’ve just received a new report on the Russian attack on the 2016 elections, and the conclusions — because they are supported by actual evidence and are in accord with everything we’ve learned since then — will not make President Trump happy:
A bipartisan panel of U.S. senators Tuesday called for sweeping action by Congress, the White House and Silicon Valley to ensure social media sites aren’t used to interfere in the coming presidential election, delivering a sobering assessment about the weaknesses that Russian operatives exploited in the 2016 campaign.
The Senate Intelligence Committee, a Republican-led panel that has been investigating foreign electoral interference for more than two and a half years, said in blunt language that Russians worked to damage Democrat Hillary Clinton while bolstering Republican Donald Trump — and made clear that fresh rounds of interference are likely ahead of the 2020 vote.
This is nothing new, but it’s still notable that a bipartisan committee led by a Republican was able to release it, given the fact that the leader of the Republican Party still periodically claims that Russia may not have been behind the attacks.
Right out of the gate, the committee writes that the Internet Research Agency, the Russian intelligence entity tasked with waging cyberwarfare, “sought to influence the 2016 U.S. presidential election by harming Hillary Clinton’s chances of success and supporting Donald Trump at the direction of the Kremlin.”
This assertion should be utterly uncontroversial by now, but think for a moment about the context in which it lands.
The president of the United States has become obsessed with a banana-pants conspiracy theory, which says there was no Russian attack in 2016 at all, but instead the whole thing was engineered from Ukraine to make Russia and Trump look bad. That Ukraine is involved is not merely coincidence: On the same phone call in which Trump made clear to Ukrainian President Volodymyr Zelensky that he wanted Zelensky to get on an investigation of Joe Biden, Trump brought up this conspiracy theory.
“I would like you to find out what happened with this whole situation with Ukraine, they say CrowdStrike," Trump said to Zelensky, referencing a cybersecurity firm that worked for the Democratic National Committee. "I guess you have one of your wealthy people … the server, they say Ukraine has it.”
The conspiracy theory has it that a DNC server was spirited away to Ukraine, presumably to hide the fact that the Russians never hacked into it and the whole thing was some sort of false-flag operation.
Despite having the full resources of the U.S. intelligence apparatus at his disposal, Trump prefers to believe the speculations of a bunch of tinfoil-hat-wearers on 4chan. But that’s not all: Right now, the attorney general of the United States is bouncing around the globe trying to determine why the FBI would possibly have wanted to investigate the Russian attack that has now been extensively documented by both the Senate Intelligence Committee and Robert S. Mueller III’s prosecutors, as though the FBI’s investigation were unnecessary and obviously suspicious.
The committee was clear, as the intelligence community has been, that the Russian attack didn’t end after 2016 but is an ongoing threat, an effort to destabilize Western democracies that began before that election and continues to this day. Near the end of their report, the authors write:
The Committee recommends that the Executive Branch should, in the run up to the 2020 election, reinforce with the public the danger of attempted foreign interference in the 2020 election.
In a different time, this recommendation might be so obvious as to be mundane, but we have a president who has, both publicly and privately, welcomed and even solicited foreign interference in the 2020 election, so long as it’s done to benefit him.
Amidst the Ukraine controversy, we’ve almost forgotten about Russia, but you can bet that the Russians are already preparing their offensive actions — what intelligence professionals call “active measures” — for 2020. Which brings me to another part of the Intelligence Committee report.
Under the section titled “Features of Russian Active Measures” they list “Fluid Ideology," noting: “Because the Kremlin’s information warfare objectives are not necessarily focused on any particular, objective truth, Russian disinformation is unconstrained by support for any specific political viewpoint and continually shifts to serve its own self-interest.”
Other "Russian Active Measures“ noted in the report include "Attacking the Media.” and “Exploiting Existing Fissures.” All of which sounds an awful lot like they’re describing Donald Trump, the campaign he waged in 2016 and the one he’ll mount in 2020.
Though we can’t say with complete certainty that Russia will be working to help Trump get reelected, it would be a shock if it doesn’t. Vladimir Putin surely derives no end of satisfaction from the fact that the American president acts toward him like an 11-year-old girl who got a chance to talk to one of the Jonas brothers.
And if Putin’s goal is to spread chaos and disorder throughout the West, there’s no better way to do it than to help Trump stick around.
Senators warn of foreign social media meddling in US vote
By Christina A. Cassidy and Mary Clare Jalonick | Published October 08 at 5:08 PM ET | AP | Posted October 8, 2019
WASHINGTON — A bipartisan group of U.S. senators urged President Donald Trump on Tuesday to warn the public about efforts by foreign governments to interfere in U.S. elections, a subject he has largely avoided, and take steps to thwart attempts by hostile nations to use social media to meddle in the 2020 presidential contest.
The recommendations came in an 85-page report issued by the Senate Intelligence Committee, which has been investigating Russia’s large-scale effort to interfere in the 2016 presidential election. The senators described the social media activities of the Kremlin-backed Internet Research Agency in 2016 as part of a “broader, sophisticated, and ongoing information warfare campaign designed to sow discord in American politics and society.”
The senators noted the Russians’ social media effort was a “vastly more complex and strategic assault on the United States than was initially understood,” with planning underway in 2014 when two Internet Research Agency operatives were sent to the U.S. to gather intelligence.
While a previous assessment indicated the Russian activities aspired to help then-candidate Trump when possible, the Senate report went further and said the Russians’ social media campaign was “overtly and almost invariably supportive” of Trump and designed to harm Democrat Hillary Clinton. Also targeted by Russian social media efforts were Trump’s Republican opponents — Sens. Ted Cruz of Texas and Marco Rubio of Florida and former Florida Gov. Jeb Bush.
Trump has been largely dismissive of Russian activities in 2016 and now faces an impeachment inquiry into whether he inappropriately solicited foreign election help from Ukraine ahead of the 2020 vote.
Tuesday’s report concluded the Russian activities were focused largely on socially divisive issues, such as race, immigration and guns, in “an attempt to pit Americans against one another and against their government.” It found Russian efforts targeted black Americans more than any other group and the overall activity increased rather than decreased after Election Day in 2016.
“Russia is waging an information warfare campaign against the U.S. that didn’t start and didn’t end with the 2016 election,” said North Carolina Sen. Richard Burr, the Republican chairman of the panel. “By flooding social media with false reports, conspiracy theories and trolls, and by exploiting existing divisions, Russia is trying to breed distrust of our democratic institutions and our fellow Americans.”
The senators warned that Russia is not the only one posing a threat to U.S. elections, pointing to China, North Korea and Iran. The nation’s intelligence chiefs have warned about the threat of foreign interference in the upcoming 2020 election.
“The Russian playbook is out in the open for other foreign and domestic adversaries to expand upon - and their techniques will only get more sophisticated,” said Virginia Sen. Mark Warner, the panel’s top Democrat.
The report detailed efforts by the Russians to exploit tensions in American society, particularly along racial lines. For instance, over 66 percent of the Internet Research Agency’s Facebook ads contained a term related to race and its Facebook pages were targeted to black Americans in key metropolitan areas.
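That 66 percent figure is the kind of number that falls out of a single term-frequency pass over an ad archive. Below is a minimal sketch of that computation, assuming a hypothetical `ads.csv` with an `ad_text` column; the term list is illustrative, since the committee's actual lexicon is not public.

```python
import csv

# Illustrative term list -- a stand-in, not the committee's actual lexicon.
RACE_RELATED_TERMS = {"race", "racism", "black lives", "police brutality", "heritage"}

def share_with_terms(path: str) -> float:
    """Fraction of ads whose text contains at least one listed term."""
    total = matched = 0
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            total += 1
            text = row["ad_text"].lower()
            matched += any(term in text for term in RACE_RELATED_TERMS)
    return matched / total if total else 0.0

print(f"{share_with_terms('ads.csv'):.0%} of ads contain a race-related term")
```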
California Sen. Kamala Harris, a Democrat running for president, highlighted parts of the report on Twitter and urged lawmakers to take action to prevent future interference.
“I think of America as a family — and like any family, we have issues. We have a history of slavery, Jim Crow, and segregation that we need to confront,” Harris wrote on Twitter. “But someone came into our house and inflamed these tensions to turn us against each other. We can’t let that happen again.”
In the report, the Senate Intelligence Committee recommends that the Trump administration “publicly reinforce” the danger of attempts by hostile nations to interfere in the 2020 election. It also calls for the administration to develop a framework for deterring future attacks and to create an interagency task force to monitor the use of social media by foreign governments for interference.
Senators also recommended social media companies improve coordination and cooperation with relevant government agencies. In addition, they said, Congress should consider legislation to ensure the public knows the source behind online political ads.
On Tuesday, House Democrats unveiled an election security bill that would require more transparency in online political ads along with tightening laws around the exchange of campaign information between candidates and foreign governments and requiring that campaigns report illicit offers of foreign help to the FBI.
The House bill would require TV stations, cable and satellite providers and social media companies such as Facebook to make “reasonable efforts” to ensure political advertising is not purchased by people outside of the U.S., either directly or indirectly, in part by requiring that the customer provide a valid U.S. address.
This summer, Facebook announced it would be tightening its rules around political ads and requiring those who want to run ads pertaining to elections, politics or major social issues to confirm their identity and prove they are in the U.S. with a tax identification number or other government ID.
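Facebook has not published the internals of that verification flow, but the gate the article describes — confirmed identity plus proof of U.S. presence via a tax ID or other government ID — reduces to a simple eligibility check. A hypothetical sketch, with made-up field names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Advertiser:
    # Hypothetical fields standing in for whatever Facebook actually collects.
    identity_confirmed: bool
    us_mailing_address: Optional[str] = None
    tax_id: Optional[str] = None
    government_id: Optional[str] = None

def may_run_political_ads(a: Advertiser) -> bool:
    """Allow political ads only with confirmed identity and proof of U.S. presence."""
    has_us_proof = bool(a.us_mailing_address) and bool(a.tax_id or a.government_id)
    return a.identity_confirmed and has_us_proof

print(may_run_political_ads(Advertiser(True, "1 Main St, Anytown, OH", tax_id="12-3456789")))
```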
A separate House election security measure passed the House at the beginning of the year, but Senate Majority Leader Mitch McConnell has declined to take it up. McConnell, however, has supported an effort to send $250 million in additional election security funds to states to shore up their systems ahead of 2020.
“Today’s bipartisan Senate Intelligence Committee report makes it crystal clear to everyone that Vladimir Putin exploited social media to spread false information in the 2016 elections and that the Senate must take action to ensure Americans know who is behind online political ads to help prevent it from happening again,” said Senate Democratic leader Chuck Schumer of New York.
An earlier report by the committee focused on efforts by the Russians to target state and local election systems.
judeblenews-blog · 6 years ago
Text
Google has a big advantage over Facebook in a crisis
There are bad bugs, and there are worse bugs. But until this week, there had never been a bug that killed a social network. Then the Wall Street Journal reported that a glitch had exposed private Google+ profile information to third-party developers from 2015 until earlier this year. A few hours later, the network — which once claimed 135 million users — was dead.

For most of its seven years, Google’s effort to build a Facebook-style social network served mostly as a punchline. The company regularly touted suspiciously massive user numbers, but aside from a few pockets of enthusiasts, Google+ never managed to find a place in people’s lives the way Gmail, YouTube, or other Google services did. Google attempted to reinvent Plus several times, most recently as a kind of modern spin on message boards. And one part of Plus, which focused on helping you organize your photos, thrived once it spun out into a separate service. But mostly it was a wild goose chase — the most prominent example of Google’s many failed attempts to build a true social network. And it will be forever remembered as the social network that shut down over a security glitch — one that it didn’t tell us about until it was discovered by journalists.

Why didn’t Google fess up at the time? Here’s what it told the Journal: In weighing whether to disclose the incident, the company considered “whether we could accurately identify the users to inform, whether there was any evidence of misuse, and whether there were any actions a developer or user could take in response,” he said. “None of these thresholds were met here.”

As my colleague Russell Brandom notes in a good piece, this wasn’t a “breach” in the legal sense of the word. There are good reasons not to require companies to issue a public disclosure every time they find a simple vulnerability, without any evidence that it was exploited. (Chief among them: it can incentivize them to stop looking so hard.)

Still: After Facebook’s painful fall from grace, the legal and the cybersecurity arguments seem almost beside the point. The contract between tech companies and their users feels more fragile than ever, and stories like this one stretch it even thinner. The concern is less about a breach of information than a breach of trust. Something went wrong, and Google didn’t tell anyone. Absent the Journal reporting, it’s not clear it ever would have. It’s hard to avoid the uncomfortable, unanswerable question: what else isn’t it telling us?

Google will likely pay a price for this data exposure. (Probably in euros.) State attorneys general have taken an interest. US Sen. Mark Warner, D-Va., called the cover-up “pretty outrageous.”

And yet Google seemed to shrug off all those worries on stage Tuesday, when its executives appeared to announce the company’s fall hardware lineup. There was a new phone, a tablet, and a competitor to the Echo Show and Facebook Portal that distinguishes itself by omitting a camera. There was no discussion of Google+.

That speaks to how dramatically the company has shifted since its social network was born — and why, despite their similar advertising businesses, Google and Facebook occupy such different places in consumers’ minds. Google has focused consistently on being a utility. It builds powerful services that don’t require an understanding of your family structure or your friend relationships.
Google Maps iterates constantly in search of the perfect commute; Gmail adds automatic replies to speed up your inbox; Google Photos absorbs all the pictures on your phone and uses machine learning to understand their contents and make them searchable. Google gives us sincerely new and useful things. And so, when we learn that it has exposed our data inadvertently, we might be more likely to give it a pass.

At Facebook, on the other hand, the prime directive is still user growth. The company talks about a shift to foster more “meaningful” connections, but in practice this simply means growing different parts of its product suite. Facebook is useful, but it is useful mainly in the way that a phone book is useful, and after you have reached a certain number of friends that usefulness plateaus. Its biggest hit products in recent years — Instagram and WhatsApp — have been acquisitions. The new features it adds are often imported from other social networks. Its News Feed is essentially an entertainment product, but as a mirror for our times, it is often more distressing than entertaining. It gives us less, we like it less, we trust it less.

I’m oversimplifying, of course. But I once spoke with someone who had worked at both Google and Facebook, who described the difference between how those two companies are perceived in exactly those terms.

Sometimes a company misses the boat on a trend, and regrets it forever. In the case of Google+, I suspect many executives wish the company had simply avoided building a true social network altogether. David Byttow, who worked on the project and is now at Snap, put it this way: “As a tech lead and an original founding member of Google+, my only thought on Google sunsetting it is... FINALLY.”
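Returning to the bug that ended Google+: exposing private profile fields to third-party developers is, at bottom, a missing audience check in an API. A minimal sketch of the filter that belongs in front of any profile endpoint — the field names and audience labels here are hypothetical:

```python
def visible_profile(profile: dict, audience: dict, requester: str) -> dict:
    """Return only the fields whose audience setting permits this requester.

    Fields missing from the audience map are treated as private, so a newly
    added field fails closed instead of leaking to third-party apps.
    """
    allowed = {"public"} if requester == "third_party_app" else {"public", "private"}
    return {k: v for k, v in profile.items()
            if audience.get(k, "private") in allowed}

profile = {"name": "Ada", "email": "ada@example.com", "occupation": "Engineer"}
audience = {"name": "public", "email": "private", "occupation": "private"}
print(visible_profile(profile, audience, "third_party_app"))  # {'name': 'Ada'}
```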
Democracy
Researchers: No Evidence That Russia Is Messing With Campaign 2018 — Yet

Here’s a great story from Kevin Poulsen and Spencer Ackerman that asks: why hasn’t Russia made more obvious attempts to interfere in the midterm elections?

Today the troll factory is using a mix of surviving accounts and new ones to do what it’s always done, spread fake news and fan division on Twitter, said Ryan Fox, a former NSA official now serving as COO of the smear-fighting startup New Knowledge. It’s also sneaking back onto Facebook, which discovered and deleted a fresh batch of fraudulent IRA-linked profiles and group pages in July. So far, though, none of the accounts are doing anything special for the election. “Lately, it’s been Kavanaugh all day, all the time,” said Fox. “My assessment of the situation is they’re having to reconstitute. I also would assume that because most of their accounts were taken down that they don’t have the same robustness available,” Fox said.

The indicted Russian businessman who funded the IRA is now pouring resources into a new venture called USA Really, a Russian site dedicated to pushing anti-American propaganda. Unlike the IRA’s deceptive websites and Facebook groups, USA Really doesn’t disguise itself as a domestic U.S. entity, and it has real people on its masthead. In the short term, that makes it less effective at influencing Americans, but it also makes the site harder to target with a rational social media policy. Fox thinks that model is the future of Russia’s information operations. “They’re out in the open now,” said Fox. “You can’t just call them out as Russian bots. You have to get into a debate about who counts as a journalist.”

Trump Campaign Aide Requested Online Manipulation Plans From Israeli Intelligence Firm

Mark Mazzetti, Ronen Bergman, David D. Kirkpatrick and Maggie Haberman have the tale of how Rick Gates, a top Trump campaign official, requested proposals from an Israeli company to create fake digital identities as part of its campaign strategy:

The campaign official, Rick Gates, sought one proposal to use bogus personas to target and sway 5,000 delegates to the 2016 Republican National Convention by attacking Senator Ted Cruz of Texas, Mr. Trump’s main opponent at the time. Another proposal describes opposition research and “complementary intelligence activities” about Mrs. Clinton and people close to her, according to copies of the proposals obtained by the New York Times and interviews with four people involved in creating the documents.

Leaked Transcript of Private Meeting Contradicts Google’s Official Story on China

Ben Gomes runs search for Google. Publicly, he has called Project Dragonfly “an exploration.” But privately, he wanted it completed “as soon as possible,” Ryan Gallagher reports, in a damning new story based on a transcript of Gomes’ comments to his team.

Gomes, who joined Google in 1999 and is one of the key engineers behind the company’s search engine, said he hoped the censored Chinese version of the platform could be launched within six to nine months, but it could be sooner. “This is a world none of us have ever lived in before,” he said. “So I feel like we shouldn’t put too much definite into the timeline.”

Google Drops Out of Pentagon’s $10 Billion Cloud Competition

In the midst of a small-scale employee revolt over Project Dragonfly, Google decided not to compete for the Pentagon’s cloud-computing contract, Naomi Nix reports:

“We are not bidding on the JEDI contract because first, we couldn’t be assured that it would align with our AI Principles,“ a Google spokesman said in a statement. “And second, we determined that there were portions of the contract that were out of scope with our current government certifications.”

No One Knows How Bad Fake News Is On WhatsApp, But If Brazil’s Election Is Any Indication, It’s Bad

Ryan Broderick travels to Sao Paulo to try to understand how the electorate is using WhatsApp:

WhatsApp is also a nightmare for fact-checkers. Nieman Lab called it a “black box of viral misinformation.” Brazil’s political activists, especially on the far right, have been extremely aggressive about using it to organize. Last year, Movimento Brasil Livre (MBL), or “Free Brazil Movement,” a right-wing pro-Bolsonaro youth movement, was the subject of an investigation by one of the country’s biggest papers, which reported from inside one of their WhatsApp groups. The paper discovered that MBL was using WhatsApp groups like “MBL merchants” or “MBL lawyers” to spread their content — including rumors and fake news. BuzzFeed News has reached out to MBL for comment.

#ElectionWatch: Claims of Electronic Voting Fraud Circulate in Brazil

Ahead of Brazil’s October 7 presidential vote, false narratives about electronic voting fraud have spiked and deepened mistrust as citizens head to the polls. The narrative has been…

A Thriving Chat Startup Braces For The Alt-Right

Joe Bernstein checks in on the alt-right chat rooms on Discord:

In a Discord chat server called “/pol/Nation” — named for the controversial 4chan imageboard — more than 3,000 users participate in a rolling multimedia chat extravaganza of Hitler memes, white nationalist revisionist history, and computer game strategy. And in a voice-over-IP chatroom within the server, users keep up a steady chatter about the same subjects. It’s like a cutting-edge, venture-backed version of its namesake; 4chan on steroids.
Elsewhere
Facebook will soon rely on Instagram for the majority of its ad revenue growth

The next time Facebook does something to smother Instagram and you find yourself asking why, remember these data points:

Last quarter, Instagram generated an estimated $2 billion, or about 15 percent, of Facebook’s $13 billion in ad revenue, according to estimates from Andy Hargreaves, a research analyst with KeyBanc Capital Markets. Hargreaves expects Instagram to grow to about 30 percent of Facebook’s ad revenue in two years, as well as nearly 70 percent of the company’s new revenue by 2020 — driving the majority of Facebook’s growth.

Video Swells to 25% of US Digital Ad Spending

According to eMarketer’s latest ad spending forecast, video will grow nearly 30 percent, to $27.82 billion, of which Facebook and Instagram are expected to capture nearly one quarter.

Snap is ‘Quickly Running Out of Money,’ Analyst Says

Snap Inc. “is quickly running out of money” and may need to raise capital by the middle of next year, according to one analyst:

In order to reach Chief Executive Officer Evan Spiegel’s goal of profitability in 2019, Snap would need to grow “massively faster” than expected and cut costs aggressively, analyst Michael Nathanson wrote. He expects a loss of more than $1.5 billion in 2019 as Snap looks to rebuild its user base.

Beware the viral Facebook hoax that’s tricking people into thinking their account was hacked

There’s a new copy/paste hoax making the rounds on Facebook:

Snopes, the fact-checking site, explains that the hoax appears to reference fears about “cloned” Facebook accounts, where would-be scammers copy the name, profile picture, and basic information from a real account to create a second, nearly identical account on Facebook. Then, they send a bunch of friend requests to the original account’s friend list, to try to scam the person’s unsuspecting friends into granting access to their personal information by accepting the request. A Facebook spokesperson said in an emailed statement that the company had “heard that some people are seeing posts or messages about accounts being cloned on Facebook,” messages that they likened to a chain letter or email. Although account cloning is a real thing, the volume of messages spreading across Facebook doesn’t reflect any actual spike in cloned accounts on the service.

WeChat Rival Removed From Apple App Store in China ($)

Amid a broader and somewhat mysterious app crackdown in China, Bullet Messenger, a Chinese messaging app that surged in popularity in the past few months, is no longer available in Apple’s App Store, Juro Osawa reports.

How Gym Selfies Are Quietly Changing the Way We Work Out

Today in the increasingly popular genre of “Instagram changes everything” stories: the gym.

The gym selfie, experts say, is more than just a visual brag or photo-driven pep talk. Social media is fundamentally changing the way we work out — and the way we see ourselves in the mirror. In a recent study, professors Tricia Burke and Stephen Rains found that individuals who saw more workout posts in their feeds were more likely to feel concerned about their own bodies, especially if the posts came from a person they felt looked similar to them. This means that even a passive scroll through Instagram can be more about stoking self-consciousness, in oneself and in others, than providing motivation — and that we internalize these lessons more easily than we think. “If people become preoccupied with their weight, that could manifest itself in less healthy ways,” Burke told me.
Launches
Instagram is using AI to detect bullying in photos and captions

Can you really detect a phenomenon as abstract as bullying using artificial intelligence? Instagram says it can now:

Interestingly, Instagram says it’s not just analyzing photo captions to identify bullying, but also the photo itself. Speaking to The Verge, a spokesperson gave the example of the AI looking for split-screen images as an example of potential bullying, as one person might be negatively compared to another. What other factors the AI will look for, though, isn’t clear. That might be a good idea considering that when Facebook announced it would scan memes using AI, people immediately started thinking of ways to get around such filters.

Along with the new filters, Instagram is also launching a “kindness camera effect,” which sounds like it’s a way to spread a positive message as a method to boost user engagement. While using the rear camera, the effects fill the screen with an overlay of “kind comments in many languages.” Switch to your front-facing camera, and you get a shimmer of hearts and a polite encouragement to “tag a friend you want to support.”

Instagram now supports third-party authentication apps on Android

Instagram previously rolled out support for third-party authentication apps like Authy on iOS. Today, it brought that feature to Android. (A minimal sketch of the time-based one-time-password scheme these apps implement appears after this list.)

Meredith is developing 10 original shows for Instagram’s IGTV

Here’s a win for IGTV: magazine publisher Meredith is developing a slate of 10 original series for Instagram’s 3-month-old experimental vertical TV app, the first of which will premiere later this year.

Facebook Workplace adds algorithmic feed, Safety Check and enhanced chat

The most interesting nugget in this Josh Constine update on Workplace from its first-ever user conference: while more than 30,000 organizations are customers, Facebook hasn’t updated that number in a year. It suggests that the product has been slow to catch on during a trying year for the parent company.

The 5 biggest announcements from the Google Pixel 3 event

Google launched many new things today, including a phone, a tablet, and a competitor to the Facebook Portal and Echo Show that is most notable for its lack of a camera. Read about the biggest announcements here.

Google rebrands AR stickers as Playground and adds new animations

Playmoji is the new name for Google’s augmented reality stickers, which will be familiar to any Snapchat user:

Initially announced last fall as AR Stickers, these virtual animations were similar to the lenses and filters that Snapchat popularized a few years back. But a key difference is that these are entirely in 3D and are deployed with a much smarter sense of spatial and object recognition, thanks to Google’s advances in artificial intelligence. Google launched Stranger Things stickers, as well as a pack for Star Wars during The Last Jedi theatrical run late last year. In the new Playmoji packs, Google lets you pick from a selection of cartoony pets, visual and interactive signs, comic strip-style sports animations, and anthropomorphic weather effects.
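As flagged above, authenticator apps such as Authy implement TOTP (RFC 6238): app and server hold a shared base32 secret and each independently derives a short-lived code from the current time, so no code ever travels over SMS. A minimal sketch:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, step: int = 30) -> str:
    """RFC 6238: HMAC-SHA1 over the count of 30-second steps since the epoch."""
    counter = int(time.time() // step)
    key = base64.b32decode(secret_b32, casefold=True)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both sides print the same 6-digit code within the same 30-second window.
print(totp("JBSWY3DPEHPK3PXP"))
```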
Takes
Facebook Isn’t Sorry — It Just Wants Your Data

Charlie Warzel says that Facebook Portal is only explicable in the context of Americans’ apathetic view toward their own privacy:

It’s also further confirmation that Facebook isn’t particularly sorry for its privacy failures — despite a recent apology tour that included an expensive “don’t worry, we got this” mini-documentary, full-page apology ads in major papers, and COO Sheryl Sandberg saying things like, “We have a responsibility to protect your information. If we can’t, we don’t deserve it.” Worse, it belies the idea that Facebook has any real desire to reckon with the structural issues that obviously undergird its continued privacy missteps. But more troubling still is what a product like Portal says about us, Facebook’s users: We don’t care enough about our privacy to quit it.

Facebook, are you kidding?

Taylor Hatmaker is similarly agog at Portal:

It stands to reason that if Facebook cannot reliably secure its flagship product — Facebook itself — then the company should not be trusted with experimental forays into wildly different products, i.e. physical ones. Securing a software platform that serves 2.23 billion users is an extremely challenging task, and adding hardware to that equation just complicates existing concerns. You don’t have to know the technical ins and outs of security to make secure choices. Trust is leverage — demand that it be earned. If a product doesn’t pass the smell test, trust that feeling. Throw it out. Better yet, don’t invite it onto your kitchen counter to begin with.
And finally ...
#HimToo mom inspires infinite ‘this is MY son’ memes and a rare Reverse Milkshake Duck

Navy veteran Pieter Hanson became a Twitter sensation on Monday night, after his mother tweeted a photo of him in his dress uniform claiming Hanson was “afraid to go on solo dates” because of “the current climate of false sexual allegations.” Hanson created a legendary Twitter handle — @thatwasmymom — and in a literally perfect first tweet, disavowed her comments and declared himself an ally of women in their struggle for equality. Pieter — call me.
Talk to me
Send me tips, comments, questions, and your best-ever Google+ post: [email protected].
goldieseoservices · 7 years ago
Text
Move fast and fix things: Why the social web needs to start thinking differently about its products
The entrance to Facebook’s headquarters in Menlo Park, Calif. (Facebook Photo)
The people and companies who created social media told us it would be a democratizing force that would bring the world together and help us understand our differences. Turns out, it actually is a very effective tool for changing the course of history.
It was a bad week for big tech, called on the carpet before members of Congress to explain how they allowed a huge portion of the registered voters in the U.S. to be exposed to a massive disinformation campaign sponsored by Russia. Congressional hearings are usually more theatrical than effective, but the defensive posture Facebook, Google, and Twitter had to assume before two separate committees was clearly uncomfortable for the representatives of those world-changing companies.
“You bear the responsibility,” said California Senator Dianne Feinstein, a Democrat up for re-election next year, during this week’s hearings. “You created these platforms, and now they’re being misused — and you have to be the ones to do something about it, or we will.”
Most every Russia story is how we built a flaw into our society and are shocked to find a nation exploiting it instead of a company.
— Kelsey D. Atherton (@AthertonKD) November 1, 2017
Facebook has already signaled it’s going to throw money at the problem — a time-honored approach — but that spending presumes Facebook and other companies can figure out a way to stay one step ahead of an army of professional con artists who have weaponized the open, connected nature of these products to confuse and distract hundreds of millions of people around the world.
Microsoft CEO Satya Nadella talked about “the responsibility of a platform company” during his appearance at our GeekWire Summit last month. If Facebook, Google, and Twitter are going to take that responsibility seriously — or if Congress forces them to — they’re going to have to make changes to their product strategies that go against their nature.
Terms of engagement
Google, Facebook, and Twitter are in the business of capturing massive audiences by delivering information over the internet. User engagement is one of the most important metrics looked at when running such a business.
A lot of these measures are relatively simple — a click or tap is probably the fundamental unit of engagement — but even humble community-oriented tech news sites can also tell how long someone lingers on their site, or how quickly they scroll through a mobile feed, and whether or not you actually read this far.
A Chartbeat dashboard, which is popular with publishers. Big web companies track a huge number of metrics on their sites. (Chartbeat Photo)
Big web companies measure far more of your behavior than that; at one point Facebook actually tracked whatever you typed into the status update box but ultimately decided not to post, and it also experimented with altering the presentation of content in the newsfeed to see if it could cause an emotional reaction in users.
We all use Google and Facebook in slightly different ways, and all of that data is used to make decisions about how the site is presented to users. Whatever gets the most engagement usually tends to win, and Google and Facebook are among the most powerful companies in the 21st century in large part because they have figured out how to make these decisions faster and better than anybody else.
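Mechanically, "whatever gets the most engagement wins" is experiment bookkeeping: log events per variant, aggregate a metric or two, ship the leader. A toy sketch over hypothetical event records:

```python
from collections import defaultdict

# Hypothetical event log: (variant, event, value).
events = [
    ("A", "impression", 1), ("A", "click", 1), ("A", "dwell_s", 12.0),
    ("B", "impression", 1), ("B", "dwell_s", 3.5),
    ("B", "impression", 1), ("B", "click", 1), ("B", "dwell_s", 40.0),
]

def summarize(events):
    """Per-variant click-through rate and mean dwell time."""
    agg = defaultdict(lambda: {"impression": 0, "click": 0, "dwell": []})
    for variant, event, value in events:
        if event == "dwell_s":
            agg[variant]["dwell"].append(value)
        else:
            agg[variant][event] += value
    return {v: {"ctr": d["click"] / d["impression"],
                "mean_dwell_s": sum(d["dwell"]) / len(d["dwell"])}
            for v, d in agg.items()}

print(summarize(events))  # in practice, the higher-engagement variant ships
```

Real pipelines add significance tests and guardrail metrics, but the decision loop has the same shape.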
At the same time they have separately made important breakthroughs in technology infrastructure design, ensuring that those engaging sites are always available and always snappy. And they are throwing billions of dollars at the next generation of these technologies, developing artificial intelligence systems that could foster even more personalized experiences on their sites.
Engagement, however, is not a measure of quality, and it certainly is not a measure of truth. It’s rare that the top-rated or best-selling versions of an information product overlap with the ones people hold up as the best.
As we’ve learned over the past year, computers are not very good at separating fact from fiction. And as we’ve known for some time, people are pretty good at exploiting weaknesses in computer systems.
Google’s original search algorithm was based on the idea that if lots of websites were linking to one particular website, that meant it was a good website. For a fair amount of time, that was actually true!
But along came link farms, keyword stuffing, and the other dark arts of the search-engine optimization industry. Google quickly learned to sniff out the most egregious offenders and change its search recipe to penalize the worst, and it now relies on dozens of signals to rank search results.
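The original idea described above is PageRank: a page's score is the probability that a random surfer ends up there, so a link from a well-regarded page counts for more than one from a nobody — which is exactly the weighting link farms tried to game. A minimal power-iteration sketch:

```python
def pagerank(links: dict, damping: float = 0.85, iters: int = 50) -> dict:
    """links maps each page to the pages it links to; returns page -> score."""
    pages = set(links) | {q for outs in links.values() for q in outs}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p in pages:
            outs = links.get(p, [])
            targets = outs if outs else pages   # dangling page: spread rank evenly
            for q in targets:
                new[q] += damping * rank[p] / len(targets)
        rank = new
    return rank

web = {"news": ["blog"], "blog": ["news", "shop"], "shop": ["news"],
       "farm1": ["shop"], "farm2": ["shop"]}   # farms link in; nobody links back
ranks = pagerank(web)
print(sorted(ranks, key=ranks.get, reverse=True))  # the farm pages land at the bottom
```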
Don’t feed the bear
Facebook now has a similar problem on its hands.
Russia attempted to sow chaos in the U.S. and Europe over the past year by taking advantage of Facebook’s open nature and enormous reach to flood the site with deliberately misleading or inflammatory content, aided and abetted by an army of trolls and bots designed to drive up engagement on those posts. Less nefarious but still dangerous groups are in the disinformation business just for the money; “profitgandists,” as Snopes co-founder Vinny Green called them at the GeekWire Summit.
A fair amount of criticism has fallen on these web companies for profiting from these efforts, but as Apple CEO Tim Cook put it, ads on Facebook aren’t really the issue.
“The bigger issue is that some of these tools are used to divide people, to manipulate people, to get fake news to people in broad numbers, and so, to influence their thinking,” Cook said this week. “And this, to me, is the number one through ten issue.”
Disinformation is an ancient tactic, but the reach, speed, and stickiness of modern social media sites have completely changed the game. Facebook and Twitter aren’t just the global town square, they’re also the global town newspaper and the global town gossip mill, and when every piece of information on their sites looks more or less the same, even level-headed people with a decent grasp of reading comprehension can be swayed by clever disinformation.
We’ve known for sure these sites were being used for disinformation campaigns for about a year, and these companies — known for moving fast and breaking things — appear to have done very little to address the core issues. Take Twitter; the company responded to a massive concerted effort to hijack its platform by ramping up its efforts to deliver algorithmically curated feeds to its users, increasing the incentive to use bots and trolling to spread fake news into their feeds.
Google’s situation is a little different. This New York Times piece about how Google’s well-documented failure to build a competitive social network actually turned out to be a benefit in 2017 made me laugh in that sad and amazed way we laugh these days. But Google often surfaces fake news at the top of searches for given topics, and YouTube is home to an awful lot of disinformation and propaganda.
The Pottery Barn rule
There are two equally troubling realities that companies operating information businesses on the world wide web must now confront as they think about their product road maps over the next few years. The first is not of their making; human nature ensures that some people will always use whatever tools are available to them to inflict distress upon their fellow humans, and when some of the images actually used in the Russian disinformation campaign were shown at the hearing, it was difficult to understand how anyone could think they were real.
The second, however, is. Social media operations need to develop better ways of detecting the massive spam operations designed to sway the minds of an electorate, because the current versions of these tools are not getting it done.
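What "better ways of detecting" could mean in practice: coordinated operations leave statistical fingerprints, the crudest being many accounts pushing identical text within a tight time window. A sketch of that single signal — a heuristic illustration, nowhere near a production detector:

```python
from collections import defaultdict

def coordinated_posts(posts, window_s=60, min_accounts=5):
    """posts: iterable of (account, unix_ts, text).

    Flags any text posted by at least `min_accounts` distinct accounts
    within `window_s` seconds -- a classic bot-amplification fingerprint.
    """
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text.strip().lower()].append((ts, account))
    flagged = []
    for text, hits in by_text.items():
        hits.sort()
        accounts = {a for _, a in hits}
        if len(accounts) >= min_accounts and hits[-1][0] - hits[0][0] <= window_s:
            flagged.append((text, sorted(accounts)))
    return flagged
```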
Google, Facebook, and Twitter are right when they argue that they shouldn’t be in the business of determining an objective “truth,” not only because placing that power in the hands of a corporation could be even scarier, but because their workforces are far too homogeneous to understand the reality faced by their billions of collective users.
Healthy skepticism is a good thing, but when everyone is convinced others are trolls, the trust erosion problem has spread to individuals.
— Renee DiResta (@noUpside) November 3, 2017
But they face a stark choice: either admit that their technological approach to policing these issues has failed in a big way and introduce a much greater number of humans into the equation, or quickly find a technical way to sort out deliberate sowers of disinformation on their platforms. A former Twitter engineering leader thinks a combination of artificial intelligence and human editors is the best bet, pointing to Twitter Moments as a first step.
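That AI-plus-editors hybrid has a standard shape: let the model act only at its confident extremes and route the ambiguous middle to humans. A sketch with made-up thresholds; a real system would tune them against measured precision:

```python
def triage(disinfo_score: float, remove_at: float = 0.97, allow_at: float = 0.10) -> str:
    """Route a classifier's disinformation score to an action or to editors."""
    if disinfo_score >= remove_at:
        return "remove"          # confident enough to act automatically
    if disinfo_score <= allow_at:
        return "allow"
    return "human_review"        # the ambiguous middle goes to human editors

for score in (0.99, 0.55, 0.03):
    print(f"{score:.2f} -> {triage(score)}")
```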
The 2018 election cycle will be one of the most scrutinized mid-term elections in modern American history. If Facebook, Google, and Twitter really do want to make the world a better place, they need to figure this out now and fashion a tech version of something every doctor knows: first, do no harm.
malenipshadows · 7 years ago
Link
  ***The effort tricked thousands of users into spreading graphic racial epithets across social media, interweaving provocative content with disinformation and falsehoods.    The tweets were uncovered in a database of over 202,000 Russian troll tweets that NBC News compiled from several sources, including three sources familiar with Twitter's application programming interface, or API, an online system that allows outside software developers to work with the data underlying users' tweets.***
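A first pass over an archive like that 202,000-tweet database is usually simple aggregation — for instance, tweet volume per account. A minimal sketch, assuming a hypothetical `troll_tweets.csv` with a `user_key` column (NBC's actual schema may differ):

```python
import csv
from collections import Counter

def most_prolific(path: str, n: int = 10):
    """Count tweets per account and return the n most prolific troll accounts."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["user_key"]] += 1
    return counts.most_common(n)

for account, n_tweets in most_prolific("troll_tweets.csv"):
    print(account, n_tweets)
```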