Trans Exclusionary Radical Feminists are a very particular type of festering evil, and calling tepidly transphobic pseudo-feminists like JK Rowling TERFs is a bad idea: it lets actual TERFs hide behind them, and it will radicalize otherwise mildly annoying people into psychos who genuinely hate trans people.
#the same is true for nazis#right wing politicians will feel more comfortable associating with nazis#(who are a *highly motivated* group of voters they can use to sway elections)#if you send a message that you can't distinguish between their average voters#and fascists#when you call random people who are against state healthcare fascists#you are sending a message that the average person can't tell the difference#and the average voter won't notice if a politician *does* associate with fascists#worse#you make this true by radicalizing normal people#and making them more easily able to dismiss actual fascists as normal people#because their base assumption will be that the description is being used poorly#politics#(that last tag is there for very tired people who filter their feeds to just chill out on Tumblr.)#(tagging your posts matters)
How Russia’s online influence campaign engaged with millions for years
Russian efforts to influence U.S. politics and sway public opinion were consistent and, in terms of engagement with target audiences, largely successful, according to a report from Oxford’s Computational Propaganda Project published today. Based on data provided to Congress by Facebook, Instagram, Google and Twitter, the study paints a portrait of the years-long campaign that’s less than flattering to the companies.
The report, which you can read here, was published today but given to some outlets over the weekend; it summarizes the work of the Internet Research Agency, Moscow’s online influence factory and troll farm. The data cover various periods for different companies, but 2016 and 2017 showed by far the most activity.
A clearer picture
If you’ve only checked into this narrative occasionally during the last couple of years, the Comprop report is a great way to get a bird’s-eye view of the whole thing, with no “we take this very seriously” palaver interrupting the facts.
If you’ve been following the story closely, the value of the report is mostly in deriving specifics and some new statistics from the data, which Oxford researchers were provided some seven months ago for analysis. The numbers, predictably, all seem to be a bit higher or more damning than those provided by the companies themselves in their voluntary reports and carefully practiced testimony.
Previous estimates have focused on the rather nebulous metric of “encountering” or “seeing” IRA content posted on these social networks. This had the dual effect of increasing the affected number — to over 100 million on Facebook alone — but “seeing” could easily be downplayed in importance; after all, how many things do you “see” on the internet every day?
The Oxford researchers better quantify the engagement, on Facebook first, with more specific and consequential numbers. For instance, in 2016 and 2017, nearly 30 million people on Facebook actually shared Russian propaganda content, with similar numbers of likes garnered, and millions of comments generated.
Note that these aren’t ads that Russian shell companies were paying to shove into your timeline — these were pages and groups with thousands of users on board who actively engaged with and spread posts, memes and disinformation on captive news sites linked to by the propaganda accounts.
The content itself was, of course, carefully curated to touch on a number of divisive issues: immigration, gun control, race relations and so on. Many different groups (e.g. black Americans, conservatives, Muslims, LGBT communities) were targeted; all generated significant engagement, as a breakdown of the above stats shows.
Although the targeted communities were surprisingly diverse, the intent was highly focused: stoke partisan divisions, suppress left-leaning voters and activate right-leaning ones.
Black voters in particular were a popular target across all platforms, and a great deal of content was posted both to keep racial tensions high and to interfere with their actual voting. Memes suggested that followers withhold their votes, or gave deliberately incorrect instructions on how to vote. These efforts were among the most numerous and popular of the IRA’s campaign; it’s difficult to judge their effectiveness, but they certainly had reach.
Examples of posts targeting black Americans.
In a statement, Facebook said that it was cooperating with officials and that “Congress and the intelligence community are best placed to use the information we and others provide to determine the political motivations of actors like the Internet Research Agency.” It also noted that it has “made progress in helping prevent interference on our platforms during elections, strengthened our policies against voter suppression ahead of the 2018 midterms, and funded independent research on the impact of social media on democracy.”
Instagram on the rise
Based on the narrative thus far, one might expect that Facebook — being the focus for much of it — was the biggest platform for this propaganda, and that it would have peaked around the 2016 election, when the evident goal of helping Donald Trump get elected had been accomplished.
In fact Instagram was receiving as much or more content than Facebook, and it was being engaged with on a similar scale. Previous reports disclosed that around 120,000 IRA-related posts on Instagram had reached several million people in the run-up to the election. The Oxford researchers conclude, however, that 40 accounts received in total some 185 million likes and 4 million comments during the period covered by the data (2015-2017).
A partial explanation for these rather high numbers may be that, also counter to the most obvious narrative, IRA posting in fact increased following the election — for all platforms, but particularly on Instagram.
IRA-related Instagram posts jumped from an average of 2,611 per month in 2016 to 5,956 in 2017; note that the numbers don’t match the above table exactly because the time periods differ slightly.
Twitter posts, while extremely numerous, are quite steady at just under 60,000 per month, totaling around 73 million engagements over the period studied. To be perfectly frank, this kind of voluminous bot and sock puppet activity is so commonplace on Twitter, and the company seems to have done so little to thwart it, that it hardly bears mentioning. But it was certainly there, and often reused existing bot nets that previously had chimed in on politics elsewhere and in other languages.
In a statement, Twitter said that it has “made significant strides since 2016 to counter manipulation of our service, including our release of additional data in October related to previously disclosed activities to enable further independent academic research and investigation.”
Google too is somewhat hard to find in the report, though not necessarily because it has a handle on Russian influence on its platforms. Oxford’s researchers complain that Google and YouTube have been not just stingy, but appear to have actively attempted to stymie analysis.
Google chose to supply the Senate committee with data in a non-machine-readable format. The evidence that the IRA had bought ads on Google was provided as images of ad text and in PDF format whose pages displayed copies of information previously organized in spreadsheets. This means that Google could have provided the useable ad text and spreadsheets—in a standard machine-readable file format, such as CSV or JSON, that would be useful to data scientists—but chose to turn them into images and PDFs as if the material would all be printed out on paper.
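The researchers’ complaint is easy to appreciate in concrete terms: records delivered as CSV or JSON can be loaded, filtered and aggregated with a few lines of code, while screenshots and printed-style PDFs cannot. As a rough illustration only — the column names and figures below are invented, not drawn from the actual Senate data — here is what analysis of machine-readable ad records looks like:

```python
import csv
import io

# Hypothetical ad records in CSV form (made-up columns and values,
# standing in for the kind of spreadsheet data Google withheld).
raw = """ad_id,spend_usd,impressions,target_interest
1001,120.50,45000,gun control
1002,89.00,30500,immigration
1003,240.75,98000,race relations
"""

# A machine-readable format takes one call to parse...
rows = list(csv.DictReader(io.StringIO(raw)))

# ...and a line or two to aggregate.
total_spend = sum(float(r["spend_usd"]) for r in rows)
impressions_by_interest = {
    r["target_interest"]: int(r["impressions"]) for r in rows
}

print(total_spend)                          # 450.25
print(impressions_by_interest["immigration"])  # 30500
```

The same information flattened into page images would require OCR and manual cleanup before any of this is possible, which is presumably the point of the researchers’ objection.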
This forced the researchers to collect their own data via citations and mentions of YouTube content. As a consequence, their conclusions are limited. Generally speaking, when a tech company does this, it means that the data they could provide would tell a story they don’t want heard.
For instance, one interesting point brought up by a second report published today, by New Knowledge, concerns the 1,108 videos uploaded by IRA-linked accounts on YouTube. These videos, a Google statement explained, “were not targeted to the U.S. or to any particular sector of the U.S. population.”
In fact, all but a few dozen of these videos concerned police brutality and Black Lives Matter, which, as you’ll recall, were among the most popular topics on the other platforms. It seems reasonable to expect that this extremely narrow targeting would have been mentioned by YouTube in some way. Unfortunately, it was left to be discovered by a third party, which gives one an idea of just how far a statement from the company can be trusted. (Google did not immediately respond to a request for comment.)
Desperately seeking transparency
In the report’s conclusion, the Oxford researchers — Philip N. Howard, Bharath Ganesh and Dimitra Liotsiou — point out that although the Russian propaganda efforts were (and remain) disturbingly effective and well organized, the country is not alone in this.
“During 2016 and 2017 we saw significant efforts made by Russia to disrupt elections around the world, but also political parties in these countries spreading disinformation domestically,” they write. “In many democracies it is not even clear that spreading computational propaganda contravenes election laws.”
“It is, however, quite clear that the strategies and techniques used by government cyber troops have an impact,” the report continues, “and that their activities violate the norms of democratic practice… Social media have gone from being the natural infrastructure for sharing collective grievances and coordinating civic engagement, to being a computational tool for social control, manipulated by canny political consultants, and available to politicians in democracies and dictatorships alike.”
Predictably, even social networks’ moderation policies became targets for propagandizing.
Waiting on politicians is, as usual, something of a long shot, and the onus is squarely on the providers of social media and internet services to create an environment in which malicious actors are less likely to thrive.
Specifically, this means that these companies need to embrace researchers and watchdogs in good faith instead of freezing them out in order to protect some internal process or embarrassing misstep.
“Twitter used to provide researchers at major universities with access to several APIs, but has withdrawn this and provides so little information on the sampling of existing APIs that researchers increasingly question its utility for even basic social science,” the researchers point out. “Facebook provides an extremely limited API for the analysis of public pages, but no API for Instagram.” (And we’ve already heard what they think of Google’s submissions.)
If the companies exposed in this report truly take these issues seriously, as they tell us time and again, perhaps they should implement some of these suggestions.
via Social – TechCrunch https://ift.tt/2BoiW4g
0 notes
Text
How Russia’s online influence campaign engaged with millions for years
Russian efforts to influence U.S. politics and sway public opinion were consistent and, as far as engaging with target audiences, largely successful, according to a report from Oxford’s Computational Propaganda Project published today. Based on data provided to Congress by Facebook, Instagram, Google, and Twitter, the study paints a portrait of the years-long campaign that’s less than flattering to the companies.
The report, which you can read here, was published today but given to some outlets over the weekend, summarizes the work of the Internet Research Agency, Moscow’s online influence factory and troll farm. The data cover various periods for different companies, but 2016 and 2017 showed by far the most activity.
A clearer picture
If you’ve only checked into this narrative occasionally during the last couple years, the Comprop report is a great way to get a bird’s-eye view of the whole thing, with no “we take this very seriously” palaver interrupting the facts.
If you’ve been following the story closely, the value of the report is mostly in deriving specifics and some new statistics from the data, which Oxford researchers were provided some seven months ago for analysis. The numbers, predictably, all seem to be a bit higher or more damning than those provided by the companies themselves in their voluntary reports and carefully practiced testimony.
Previous estimates have focused on the rather nebulous metric of “encountering” or “seeing” IRA content put on these social metrics. This had the dual effect of increasing the affected number — to over a hundred million on Facebook alone — but “seeing” could easily be downplayed in importance; after all, how many things do you “see” on the internet every day?
Facebook will show which Russian election troll accounts you followed
The Oxford researchers better quantify the engagement, on Facebook first, with more specific and consequential numbers. For instance, in 2016 and 2017, nearly 30 million people on Facebook actually shared Russian propaganda content, with similar numbers of likes garnered, and millions of comments generated.
Note that these aren’t ads that Russian shell companies were paying to shove into your timeline — these were pages and groups with thousands of users on board who actively engaged with and spread posts, memes, and disinformation on captive news sites linked to by the propaganda accounts.
The content itself was, of course, carefully curated to touch on a number of divisive issues: immigration, gun control, race relations, and so on. Many different groups (i.e. black Americans, conservatives, Muslims, LGBT communities) were targeted all generated significant engagement, as this breakdown of the above stats shows:
Although the targeted communities were surprisingly diverse, the intent was highly focused: stoke partisan divisions, suppress left-leaning voters, and activate right-leaning ones.
Black voters in particular were a popular target across all platforms, and a great deal of content was posted both to keep racial tensions high and to interfere with their actual voting. Memes were posted suggesting followers withhold their votes, or deliberately incorrect instructions on how to vote. These efforts were among the most numerous and popular of the IRA’s campaign; it’s difficult to judge their effectiveness, but certainly they had reach.
Examples of posts targeting black Americans.
In a statement, Facebook said that it was cooperating with officials and that “Congress and the intelligence community are best placed to use the information we and others provide to determine the political motivations of actors like the Internet Research Agency.” It also noted that it has “made progress in helping prevent interference on our platforms during elections, strengthened our policies against voter suppression ahead of the 2018 midterms, and funded independent research on the impact of social media on democracy.”
Instagram on the rise
Based on the narrative thus far, one might expect that Facebook — being the focus for much of it — was the biggest platform for this propaganda, and that it would have peaked around the 2016 election, when the evident goal of helping Donald Trump get elected had been accomplished.
In fact Instagram was receiving as much or more content than Facebook, and it was being engaged with on a similar scale. Previous reports disclosed that around 120,000 IRA-related posts on Instagram had reached several million people in the run-up to the election. The Oxford researchers conclude, however, that 40 accounts received in total some 185 million likes and 4 million comments during the period covered by the data (2015-2017).
A partial explanation for these rather high numbers may be that, also counter to the most obvious narrative, IRA posting in fact increased following the election — for all platforms, but particularly on Instagram.
IRA-related Instagram posts jumped from an average of 2,611 per month in 2016 to 5,956 in 2017; note that the numbers don’t match the above table exactly because the time periods differ slightly.
Twitter posts, while extremely numerous, are quite steady at just under 60,000 per month, totaling around 73 million engagements over the period studied. To be perfectly frank this kind of voluminous bot and sock puppet activity is so commonplace on Twitter, and the company seems to have done so little to thwart it, that it hardly bears mentioning. But it was certainly there, and often reused existing bot nets that previously had chimed in on politics elsewhere and in other languages.
In a statement, Twitter said that it has “made significant strides since 2016 to counter manipulation of our service, including our release of additional data in October related to previously disclosed activities to enable further independent academic research and investigation.”
Google too is somewhat hard to find in the report, though not necessarily because it has a handle on Russian influence on its platforms. Oxford’s researchers complain that Google and YouTube have been not just stingy, but appear to have actively attempted to stymie analysis.
Google chose to supply the Senate committee with data in a non-machine-readable format. The evidence that the IRA had bought ads on Google was provided as images of ad text and in PDF format whose pages displayed copies of information previously organized in spreadsheets. This means that Google could have provided the useable ad text and spreadsheets—in a standard machine- readable file format, such as CSV or JSON, that would be useful to data scientists—but chose to turn them into images and PDFs as if the material would all be printed out on paper.
This forced the researchers to collect their own data via citations and mentions of YouTube content. As a consequence their conclusions are limited. Generally speaking when a tech company does this, it means that the data they could provide would tell a story they don’t want heard.
For instance, one interesting point brought up by a second report published today, by New Knowledge, concerns the 1,108 videos uploaded by IRA-linked accounts on YouTube. These videos, a Google statement explained, “were not targeted to the U.S. or to any particular sector of the U.S. population.”
In fact, all but a few dozen of these videos concerned police brutality and Black Lives Matter, which as you’ll recall were among the most popular topics on the other platforms. Seems reasonable to expect that this extremely narrow targeting would have been mentioned by YouTube in some way. Unfortunately it was left to be discovered by a third party and gives one an idea of just how far a statement from the company can be trusted.
Desperately seeking transparency
In its conclusion, the Oxford researchers — Philip N. Howard, Bharath Ganesh, and Dimitra Liotsiou — point out that although the Russian propaganda efforts were (and remain) disturbingly effective and well organized, the country is not alone in this.
“During 2016 and 2017 we saw significant efforts made by Russia to disrupt elections around the world, but also political parties in these countries spreading disinformation domestically,” they write. “In many democracies it is not even clear that spreading computational propaganda contravenes election laws.”
“It is, however, quite clear that the strategies and techniques used by government cyber troops have an impact,” the report continues, “and that their activities violate the norms of democratic practice… Social media have gone from being the natural infrastructure for sharing collective grievances and coordinating civic engagement, to being a computational tool for social control, manipulated by canny political consultants, and available to politicians in democracies and dictatorships alike.”
Predictably, even social networks’ moderation policies became targets for propagandizing.
Waiting on politicians is, as usual, something of a long shot, and the onus is squarely on the providers of social media and internet services to create an environment in which malicious actors are less likely to thrive.
Specifically, this means that these companies need to embrace researchers and watchdogs in good faith instead of freezing them out in order to protect some internal process or embarrassing misstep.
“Twitter used to provide researchers at major universities with access to several APIs, but has withdrawn this and provides so little information on the sampling of existing APIs that researchers increasingly question its utility for even basic social science,” the researchers point out. “Facebook provides an extremely limited API for the analysis of public pages, but no API for Instagram.” (And we’ve already heard what they think of Google’s submissions.)
If the companies exposed in this report truly take these issues seriously, as they tell us time and again, perhaps they should implement some of these suggestions.
0 notes
Text
How Russia’s online influence campaign engaged with millions for years
Russian efforts to influence U.S. politics and sway public opinion were consistent and, as far as engaging with target audiences, largely successful, according to a report from Oxford’s Computational Propaganda Project published today. Based on data provided to Congress by Facebook, Instagram, Google, and Twitter, the study paints a portrait of the years-long campaign that’s less than flattering to the companies.
The report, which you can read here, was published today but given to some outlets over the weekend, summarizes the work of the Internet Research Agency, Moscow’s online influence factory and troll farm. The data cover various periods for different companies, but 2016 and 2017 showed by far the most activity.
A clearer picture
If you’ve only checked into this narrative occasionally during the last couple years, the Comprop report is a great way to get a bird’s-eye view of the whole thing, with no “we take this very seriously” palaver interrupting the facts.
If you’ve been following the story closely, the value of the report is mostly in deriving specifics and some new statistics from the data, which Oxford researchers were provided some seven months ago for analysis. The numbers, predictably, all seem to be a bit higher or more damning than those provided by the companies themselves in their voluntary reports and carefully practiced testimony.
Previous estimates have focused on the rather nebulous metric of “encountering” or “seeing” IRA content put on these social metrics. This had the dual effect of increasing the affected number — to over a hundred million on Facebook alone — but “seeing” could easily be downplayed in importance; after all, how many things do you “see” on the internet every day?
Facebook will show which Russian election troll accounts you followed
The Oxford researchers better quantify the engagement, on Facebook first, with more specific and consequential numbers. For instance, in 2016 and 2017, nearly 30 million people on Facebook actually shared Russian propaganda content, with similar numbers of likes garnered, and millions of comments generated.
Note that these aren’t ads that Russian shell companies were paying to shove into your timeline — these were pages and groups with thousands of users on board who actively engaged with and spread posts, memes, and disinformation on captive news sites linked to by the propaganda accounts.
The content itself was, of course, carefully curated to touch on a number of divisive issues: immigration, gun control, race relations, and so on. Many different groups (i.e. black Americans, conservatives, Muslims, LGBT communities) were targeted all generated significant engagement, as this breakdown of the above stats shows:
Although the targeted communities were surprisingly diverse, the intent was highly focused: stoke partisan divisions, suppress left-leaning voters, and activate right-leaning ones.
Black voters in particular were a popular target across all platforms, and a great deal of content was posted both to keep racial tensions high and to interfere with their actual voting. Memes were posted suggesting followers withhold their votes, or deliberately incorrect instructions on how to vote. These efforts were among the most numerous and popular of the IRA’s campaign; it’s difficult to judge their effectiveness, but certainly they had reach.
Examples of posts targeting black Americans.
In a statement, Facebook said that it was cooperating with officials and that “Congress and the intelligence community are best placed to use the information we and others provide to determine the political motivations of actors like the Internet Research Agency.” It also noted that it has “made progress in helping prevent interference on our platforms during elections, strengthened our policies against voter suppression ahead of the 2018 midterms, and funded independent research on the impact of social media on democracy.”
Instagram on the rise
Based on the narrative thus far, one might expect that Facebook — being the focus for much of it — was the biggest platform for this propaganda, and that it would have peaked around the 2016 election, when the evident goal of helping Donald Trump get elected had been accomplished.
In fact Instagram was receiving as much or more content than Facebook, and it was being engaged with on a similar scale. Previous reports disclosed that around 120,000 IRA-related posts on Instagram had reached several million people in the run-up to the election. The Oxford researchers conclude, however, that 40 accounts received in total some 185 million likes and 4 million comments during the period covered by the data (2015-2017).
A partial explanation for these rather high numbers may be that, also counter to the most obvious narrative, IRA posting in fact increased following the election — for all platforms, but particularly on Instagram.
IRA-related Instagram posts jumped from an average of 2,611 per month in 2016 to 5,956 in 2017; note that the numbers don’t match the above table exactly because the time periods differ slightly.
Twitter posts, while extremely numerous, are quite steady at just under 60,000 per month, totaling around 73 million engagements over the period studied. To be perfectly frank this kind of voluminous bot and sock puppet activity is so commonplace on Twitter, and the company seems to have done so little to thwart it, that it hardly bears mentioning. But it was certainly there, and often reused existing bot nets that previously had chimed in on politics elsewhere and in other languages.
In a statement, Twitter said that it has “made significant strides since 2016 to counter manipulation of our service, including our release of additional data in October related to previously disclosed activities to enable further independent academic research and investigation.”
Google too is somewhat hard to find in the report, though not necessarily because it has a handle on Russian influence on its platforms. Oxford’s researchers complain that Google and YouTube have been not just stingy, but appear to have actively attempted to stymie analysis.
Google chose to supply the Senate committee with data in a non-machine-readable format. The evidence that the IRA had bought ads on Google was provided as images of ad text and in PDF format whose pages displayed copies of information previously organized in spreadsheets. This means that Google could have provided the useable ad text and spreadsheets—in a standard machine- readable file format, such as CSV or JSON, that would be useful to data scientists—but chose to turn them into images and PDFs as if the material would all be printed out on paper.
This forced the researchers to collect their own data via citations and mentions of YouTube content. As a consequence their conclusions are limited. Generally speaking when a tech company does this, it means that the data they could provide would tell a story they don’t want heard.
For instance, one interesting point brought up by a second report published today, by New Knowledge, concerns the 1,108 videos uploaded by IRA-linked accounts on YouTube. These videos, a Google statement explained, “were not targeted to the U.S. or to any particular sector of the U.S. population.”
In fact, all but a few dozen of these videos concerned police brutality and Black Lives Matter, which as you’ll recall were among the most popular topics on the other platforms. Seems reasonable to expect that this extremely narrow targeting would have been mentioned by YouTube in some way. Unfortunately it was left to be discovered by a third party and gives one an idea of just how far a statement from the company can be trusted.
Desperately seeking transparency
In its conclusion, the Oxford researchers — Philip N. Howard, Bharath Ganesh, and Dimitra Liotsiou — point out that although the Russian propaganda efforts were (and remain) disturbingly effective and well organized, the country is not alone in this.
“During 2016 and 2017 we saw significant efforts made by Russia to disrupt elections around the world, but also political parties in these countries spreading disinformation domestically,” they write. “In many democracies it is not even clear that spreading computational propaganda contravenes election laws.”
“It is, however, quite clear that the strategies and techniques used by government cyber troops have an impact,” the report continues, “and that their activities violate the norms of democratic practice… Social media have gone from being the natural infrastructure for sharing collective grievances and coordinating civic engagement, to being a computational tool for social control, manipulated by canny political consultants, and available to politicians in democracies and dictatorships alike.”
Predictably, even social networks’ moderation policies became targets for propagandizing.
Waiting on politicians is, as usual, something of a long shot, and the onus is squarely on the providers of social media and internet services to create an environment in which malicious actors are less likely to thrive.
Specifically, this means that these companies need to embrace researchers and watchdogs in good faith instead of freezing them out in order to protect some internal process or embarrassing misstep.
“Twitter used to provide researchers at major universities with access to several APIs, but has withdrawn this and provides so little information on the sampling of existing APIs that researchers increasingly question its utility for even basic social science,” the researchers point out. “Facebook provides an extremely limited API for the analysis of public pages, but no API for Instagram.” (And we’ve already heard what they think of Google’s submissions.)
If the companies exposed in this report truly take these issues seriously, as they tell us time and again, perhaps they should implement some of these suggestions.
from Facebook – TechCrunch https://ift.tt/2BoiW4g via IFTTT
0 notes
Text
Higher Ground Labs is betting tech can help sway the 2020 elections for Democrats
When Shomik Dutta and Betsy Hoover first met in 2007, he was coordinating fundraising and get-out-the-vote efforts for Barack Obama’s first presidential campaign and she was a deputy field director for the campaign.
Over the next two election cycles the two would become part of an organizing and fundraising team that transformed the business of politics through its use of technology — supposedly laying the groundwork for years of Democratic dominance in organizing, fundraising, polling and grassroots advocacy.
Then came Donald J. Trump and the 2016 election.
For both Dutta and Hoover, the 2016 outcome was a wake-up call against complacency. What had worked for the Democratic party in 2008 and 2012 wasn’t going to be effective in future election cycles, so they created the investment firm Higher Ground Labs to provide financing and a launching pad for new companies serving Democratic campaigns and progressive organizations.
Higher Ground Labs backs 13 startups to help Democrats win in 2018 and beyond
“As the political world shifts from analog to digital, we need a lot more tools to capture that spend,” says Dutta. “Democrats are spending on average 70 cents of every dollar raised on television ads. We are addicted to old ways of campaigning. If we want to activate and engage an enduring majority of voters we have to go where they are (and that’s increasingly online) and we have to adapt to be able to have these conversations wherever they are.”
Social media and the rise of “direct to consumer” politics
While the Obama campaign effectively used the internet as a mobilization tool in its two campaigns, the lessons of social media and mobile technologies that offer a “direct-to-consumer” politics circumventing traditional norms have, in the ensuing years, been harnessed most effectively by conservative organizations, according to some scholars and activists.
“The internet is a tool and in that sense it’s neutral, but just like other communication tools from the past, people with more power, with more resources, with more organization, have been able to take advantage of it,” Jen Schradie, an assistant professor at the Observatoire sociologique du changement at Sciences Po in Paris, told Vox in an interview earlier this month.
Schradie is a scholar whose recent book, “The Revolution That Wasn’t,” contends that the internet’s early application as a progressive organizing tool has been overtaken by more conservative elements. “The idea of neutrality seems more true of the internet because the costs of distributing information are dramatically lower than with something like television or radio or other communication tools,” she said. “However, to make full use of the internet, you still need substantial resources and time and motivation. The people who can afford to do this, who can fund the right digital strategy, create a major imbalance in their favor.”
Schradie contends that a web of privately funded think tanks, media organizations, talk radio and — increasingly — mobile applications have woven a conservative stitch into the fabric of social media. The medium’s own tendency to promote polarizing and fringe viewpoints also served to amplify the views of pundits who were previously believed to be political outliers.
Essentially, these sites have enabled commentators and personalities to create a patchwork of “grassroots” organizations and media operations dedicated to reaching an audience receptive to their particular political message that’s funded by billionaire donors and apolitical corporate ad dollars.
Then there are the technology companies, like Cambridge Analytica, which improperly used access to Facebook data for targeting purposes — firms also financed by these same billionaires.
Bannon and Cambridge Analytica planned suppression of black voters, whistleblower tells Senate
“The last six years have witnessed millions and millions of dollars of private Koch money and Mercer money that have gone to pretty sophisticated data and media efforts to advance the Republican agenda,” says Dutta. “I want to even the scale.”
Dutta is referring to Charles and David Koch and Robert Mercer, the scions and founder (respectively) of two family dynasties worth billions. The Koch brothers support a web of political advocacy groups, while Mercer and his daughter were large backers of Breitbart News and Cambridge Analytica, two organizations that arguably provided much of the policy underpinnings and online political machinery for the Trump presidential campaign.
But there’s also the simple fact that Donald Trump’s digital strategy director, Brad Parscale, was able to effectively and inexpensively leverage the social media tools and data troves amassed by the Republican National Committee that were already available to the candidate who won the Republican primary. In fact, in the wake of Romney’s loss, Republicans spent years building up profiles of 200 million Americans for targeted messaging in the 2016 election.
“Who controls Facebook controls the 2016 election,” Parscale said during a speaking engagement at the Romanian Academy of Sciences, according to a report in Forbes.
Parscale, now the campaign manager for the president’s 2020 reelection campaign, recalled, “These guys from Facebook walked into my office and said: ‘we have a beta … it’s a new onboarding tool … you can onboard audiences straight into Facebook and we will match them to their Facebook accounts,’” according to Forbes.
During the 2016 campaign, Hillary Clinton’s team made 66,000 visual ads, according to Parscale, while the Trump campaign made 5.9 million ads by leveraging social media networks and the language of memes. And in the run-up to the 2020 election, Parscale intends to go back to the same well. The Trump campaign has already spent more than $5 million on Facebook ads in the current election cycle, according to The New York Times — outspending every single Democratic candidate in the field and roughly all of the Democrats combined.
Reaching higher ground
Dutta and Hoover are working to offset this movement with investments of their own. Back in 2017, the two launched Higher Ground Labs, an early-stage company accelerator and investment firm dedicated to financing technology companies that could support progressive causes.
The firm has $15 million committed from investors, including Reid Hoffman, the co-founder of LinkedIn and a partner at Greylock; Ron Conway, the founder of SV Angel and an early backer of Google, Facebook and Twitter; Chris Sacca, an early investor in Uber; and Elizabeth Cutler, the founder of SoulCycle. Already, Higher Ground has invested in more than 30 companies focused on services like advocacy outreach, polling and campaign organizing — among others.
The latest cohort of companies to receive backing from Higher Ground Labs
“It is vitally important that Democrats learn to do their campaigns online,” says Dutta. “The way you recruit volunteers; the way you poll sentiment; the way you target and mobilize voters has to be done with online tools and has to improve in the progressive movement and that’s the job of Higher Ground Labs to fix.”
For-profit companies have a critical role to play in election organizing and mobilization, Dutta says. Thanks to government regulation, only private companies are allowed to trade data across organizations and causes (provided they do it at fair market value). That means advocacy groups, unions and others can tap the information these companies collect — for a fee.
The Democratic Party already has one highly valued private company that it uses for its technology services. Formed from the merger of NGP Software and Voter Activation Network, two companies that got their start in the late 1990s and early 2000s, NGP VAN is the largest software and technology services provider for Democratic campaigns. It’s also a highly valued company, which received roughly $100 million in financing last year from the private equity firm Insight Venture Partners, according to people familiar with the investment. Terms of the deal were not disclosed.
“Our vision has been to build a platform that would break down the painful data silos that exist in the campaigns and nonprofit space, and to offer truly best-in-class digital, fundraising and organizing features that could serve both the largest and the smallest nonprofits and campaigns, all with one unified CRM,” wrote Stu Trevelyan, the chief executive of NGP VAN + EveryAction, in an August blog post announcing the investment. “We’re so excited that others, like our new partners at Insight, share that vision, and we can’t wait to continue innovating and growing together in the coming years.”
Can startups lead the way?
Even as private equity dollars boost the firepower of organizations like NGP VAN, venture capitalists are financing several companies from the Higher Ground Labs portfolio.
Standouts like Hustle, which raised $30 million last May, show that investors are buying into the proposition that these companies can build lasting businesses serving Democratic and progressive political campaigns and corporate businesses that would also like to rally employees or personalize a marketing pitch to customers.
Hustle rallies $30M for grassroots texting tool Republicans can’t use
Then there are earlier-stage companies that are gaining significant traction on both the political and commercial circuits.
These are companies like Change Research, which just launched from the Higher Ground Labs accelerator last year. That company, founded by Mike Greenfield, a serial Silicon Valley entrepreneur who was the first data scientist working on the problem of fraud detection at PayPal, and Pat Reilly, a communications professional who worked with state and local Democratic politicians, is slashing the cost of political polling.
“I wanted to do something for American democracy to try and improve the state of things,” Greenfield said in an interview last year.
For Greenfield, that meant increasing access to polling information. He cited the test case of a Kansas special election in a district that Donald Trump had won by 27 points. Using his own proprietary polling data, Greenfield predicted that the Democratic challenger, James Thompson, would pose a significant threat to his Republican opponent, Mike Estes.
Estes went on to win at the ballot box by 7%, but Thompson’s campaign did not have access to polling data that could have helped inform his messaging and — potentially — sway the election, said Greenfield.
“Public opinion is used to weed out who can be most successful based on how much money they’re able to raise for a poll,” says Reilly. It’s another way that electoral politics is skewed in favor of people with enough disposable income to spend what is a not-insignificant amount of money on campaigns.
Polls alone can cost between $20,000 and $30,000 — and Change Research has been able to cut that by 80% to 90%, according to the company’s founders.
“It’s safe to say that most of the world was stunned by the outcome [of the presidential election] because most polls predicted the opposite,” says Greenfield. “Being a good American and as a parent of a 10-year-old and a 12-year-old, providing forward-thinking candidates and causes with the kind of insight they needed to win up and down the ballot could not only be a good business, but really help us save our democracy.”
Change Research isn’t just polling for politicians. Last year, the company conducted roughly 500 polls for political candidates and advocacy groups.
“The way that I’ve described Change Research to investors is that we want to simultaneously move the world in a better direction and having a positive impact while building a substantial business,” says Greenfield. “We’re only going to work with candidates and causes that we’re aligned with.”
Being exclusively focused on progressive causes isn’t the liability that many in the broader business community would think, says Dutta. Many Democratic organizations won’t work with companies that sell services to both sides of the aisle.
For Higher Ground Labs, a stipulation for receiving their money is a commitment not to work with any Republican candidate. Corporations are okay, but conservative causes and organizations are forbidden.
“We’re in a moment of existential crisis in America and this Republican party is deeply toxic to the health and future of our country,” says Dutta. “The only path out of this mess is to vote Republicans out of office and to do that we need to make it easier for good candidates to run for office and to engage a broader electorate into voting regularly.”
UPDATE: An earlier version of this story mentioned Civis Analytics as a Higher Ground Labs portfolio company. That was incorrect.
When Shomik Dutta and Betsy Hoover first met in 2007, he was coordinating fundraising and get-out-the-vote efforts for Barack Obama’s first presidential campaign and she was a deputy field director for the campaign.
Over the next two election cycles the two would become parts of an organizing and fundraising team that transformed the business of politics through its use of technology — supposedly laying the groundwork for years of Democratic dominance in organizing, fundraising, polling and grassroots advocacy.
Then came Donald J. Trump and the 2016 election.
For both Dutta and Hoover the 2016 outcome was a wake up call against complacency. What had worked for the Democratic party in 2008 and 2012 wasn’t going to be effective in future election cycles, so they created the investment firm Higher Ground Labs to provide financing and a launching pad for new companies serving Democratic campaigns and progressive organizations.
Higher Ground Labs backs 13 startups to help Democrats win in 2018 and beyond
“As the political world shifts from analog to digital, we need a lot more tools to capture that spend,” says Dutta. “Democrats are spending on average 70 cents of every dollar raised on television ads. We are addicted to old ways of campaigning. If we want to activate and engage an enduring majority of voters we have to go where they are (and that’s increasingly online) and we have to adapt to be able to have these conversations wherever they are.”
Social media and the rise of “direct to consumer” politics
While the Obama campaign effectively used the Internet as a mobilization tool in its two campaigns, the lessons of social media and mobile technologies that offer a “direct-to-consumer” politics circumventing traditional norms have, in the ensuing years, been harnessed most effectively by conservative organizations, according to some scholars and activists.
“The internet is a tool and in that sense it’s neutral, but just like other communication tools from the past, people with more power, with more resources, with more organization, have been able to take advantage of it,” Jen Schradie, an Assistant Professor at the Observatoire sociologique du changement at Sciences Po in Paris, told Vox in an interview earlier this month.
Schradie is a scholar whose recent book, “The Revolution That Wasn’t” contends that the Internet’s early applications as a progressive organizing tool has been overtaken by more conservative elements. “The idea of neutrality seems more true of the internet because the costs of distributing information are dramatically lower than with something like television or radio or other communication tools,” she said. “However, to make full use of the internet, you still need substantial resources and time and motivation. The people who can afford to do this, who can fund the right digital strategy, create a major imbalance in their favor.”
Schradie contends that a web of privately funded think tanks, media organizations, talk radio, and — increasingly — mobile applications have woven a conservative stitch into the fabric of social media. The medium’s own tendency to promote polarizing and fringe viewpoints also served to amplify the views of pundits who were previously believed to be political outliers.
Essentially, these sites have enabled commentators and personalities to create a patchwork of “grassroots” organizations and media operations dedicated to reaching an audience receptive to their particular political message that’s funded by billionaire donors and apolitical corporate ad dollars.
Then there’s the technology companies, like Cambridge Analytica, which improperly used access to Facebook data for targeting purposes — also financed by these same billionaires.
Bannon and Cambridge Analytica planned suppression of black voters, whistleblower tells Senate
“The last six years have witnessed millions and millions of dollars of private Koch money and Mercer money that have gone to pretty sophisticated data and media efforts to advance the Republican agenda,” says Dutta. “I want to even the scale.”
Dutta is referring to Charles and David Koch and Robert Mercer, the scions and founder (respectively) of two family dynasties worth billions. The Koch brothers support a web of political advocacy groups while Mercer and his daughter were large backers of Breitbart News and Cambridge Analytica, two organizations which arguably provided much of the policy underpinnings and online political machinery for the Trump presidential campaign.
But there’s also the simple fact that Donald Trump’s digital strategy director, Brad Parscale, was able to effectively and inexpensively leverage the social media tools and data troves amassed by the Republican National Committee that were already available to the candidate who won the Republican primary. In fact, in the wake of Romney’s loss, Republicans spent years building up profiles of 200 million Americans for targeted messaging in the 2016 election.
“Who controls Facebook controls the 2016 election,” Parscale said during a speaking engagement at the Romanian Academy of Sciences, according to a report in Forbes.
Parscale, now the campaign manager for the President’s 2020 reelection campaign recalled, “These guys from Facebook walked to my office and said: ‘we have a beta … it’s a new onboarding tool … you can onboard audiences straight into Facebook and we will match them to their Facebook accounts,’” according to Forbes.
During the 2016 campaign Hillary Clinton’s team made 66,000 visual ads, according to Parscale, while the Trump campaign made 5.9 million ads by leveraging social media networks and the language of memes. And in the run-up to the 2020 election, Parscale intends to go back to the same well. The Trump campaign has already spent over $5 million on Facebook ads in the current election cycle, according to The New York Times— outspending every single Democratic candidate in the field and roughly all of the Democrats combined.
Reaching Higher Ground
Dutta and Hoover are working to offset this movement with investments of their own. Back in 2017, the two launched Higher Ground Labs, an early stage company accelerator and investment firm dedicated to financing technology companies that could support progressive causes.
The firm has $15 million committed from investors including Reid Hoffman, the co-founder of LinkedIn and a partner at Greylock; Ron Conway, the founder of SV Angel and an early backer of Google, Facebook, and Twitter; Chris Sacca, an early investor in Uber; and Elizabeth Cutler, the founder of SoulCycle. Already, Higher Ground has invested in over thirty companies focused on services like advocacy outreach, polling, and campaign organizing — among others.
The latest cohort of companies to receive backing Higher Ground Labs
“It is vitally important that Democrats learn to do their campaigns online,” says Dutta. “The way you recruit volunteers; the way you poll sentiment; the way you target and mobilize voters has to be done with online tools and has to improve in the progressive movement and that’s the job of Higher Ground Labs to fix.”
For profit companies have a critical role to play in election organizing and mobilization, Dutta says. Thanks to government regulation, only private companies are allowed to trade data across organizations and causes (provided they do it at fair market value). That means advocacy groups, unions and others can tap the information these companies collect — for a fee.
The Democratic party already has one highly valued private company that it uses for its technology services. Formed from the merger of NGP Software and Voter Activation Network, two companies that got their start int he late 90s and early 2000s, NGP VAN is the largest software and technology services provider for Democratic campaigns. It’s also a highly valued company, which received roughly $100 million in financing last year from the private equity firm Insight Venture Partners, according to people familiar with the investment. Terms of the deal were not disclosed.
“Our vision has been to build a platform that would break down the painful data silos that exist in the campaigns and nonprofit space, and to offer truly best-in-class digital, fundraising and organizing features that could serve both the largest and the smallest nonprofits and campaigns, all with one unified CRM,” wrote Stu Trevelyan, the chief executive of NGP VAN + EveryAction, in an August blogpost announcing the investment. “We’re so excited that others, like our new partners at Insight, share that vision, and we can’t wait to continue innovating and growing together in the coming years.”
Can startups lead the way?
Even as private equity dollars boost the firepower of organizations like NGP VAN, venture capitalists are financing several companies from the Higher Ground Labs portfolio.
Civis Analytics, a startup founded by the former Chief Analytics Officer of Barack Obama’s 2012 reelection campaign raised $22 million from outside investors and counts Higher Ground Labs among its backers. Qriously, another Higher Ground Labs portfolio company, was acquired by Brandwatch, as was GroundBase, a messaging platform acquired by the nonprofit progressive advocacy organization ACRONYM.
Other companies in the portfolio are also attracting serious attention from investors. Standouts like Civis Analytics and Hustle, which raised $30 million last May, show that investors are buying into the proposition that these companies can build lasting businesses serving Democratic and progressive political campaigns and corporate businesses that would also like to rally employees or personalize a marketing pitch to customers.
Hustle rallies $30M for grassroots texting tool Republicans can’t use
Among them are companies like Change Research, an earlier-stage company that launched from the Higher Ground Labs accelerator last year. That company, founded by Mike Greenfield, a serial Silicon Valley entrepreneur who was the first data scientist working on the problem of fraud detection at PayPal, and Pat Reilly, a communications professional who worked with state and local Democratic politicians, is slashing the cost of political polling.
“I wanted to do something for American Democracy to try and improve the state of things,” Greenfield said in an interview last year.
For Greenfield, that meant increasing access to polling information. He cited the test case of a Kansas special election in a district that Donald Trump had won by 27 points. Using his own proprietary polling data, Greenfield predicted that the Democratic challenger, James Thompson, would pose a significant threat to his Republican opponent, Ron Estes.
Estes went on to win by seven points at the ballot box, but Thompson’s campaign did not have access to polling data that could have helped inform his messaging and, potentially, sway the election, said Greenfield.
“Public opinion is used to weed out who can be most successful based on how much money they’re able to raise for a poll,” says Reilly. It’s another way that electoral politics is skewed in favor of people with the disposable income to spend a not-insignificant amount of money on campaigns.
Polls alone can cost between $20,000 and $30,000, and Change Research has been able to cut that cost by 80% to 90%, according to the company’s founders.
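Taken at face value, those figures imply a striking drop in price. A quick back-of-the-envelope calculation (the dollar figures are the founders’ own; the arithmetic is purely illustrative):

```python
# Founders' figures: a traditional poll costs $20,000-$30,000, and Change
# Research claims to cut that cost by 80% to 90%.
traditional_low, traditional_high = 20_000, 30_000

for cut in (0.80, 0.90):
    low = traditional_low * (1 - cut)
    high = traditional_high * (1 - cut)
    print(f"{cut:.0%} reduction: ${low:,.0f}-${high:,.0f} per poll")
```

In other words, a poll that once cost $20,000 would land somewhere between $2,000 and $4,000, which is the order-of-magnitude change the founders are describing.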
“It’s safe to say that most of the world was stunned by the outcome [of the presidential election] because most polls predicted the opposite,” says Greenfield. “Being a good American and as a parent of a ten-year-old and a twelve-year-old, providing forward-thinking candidates and causes with the kind of insight they needed to win up and down the ballot could not only be a good business, but really help us save our democracy.”
Change Research isn’t just polling for politicians. Last year, the company conducted roughly 500 polls for political candidates and advocacy groups.
“The way that I’ve described Change Research to investors is that we want to simultaneously move the world in a better direction and having a positive impact while building a substantial business,” says Greenfield. “We’re only going to work with candidates and causes that we’re aligned with.”
Being exclusively focused on progressive causes isn’t the liability that many in the broader business community would think, says Dutta. Many Democratic organizations won’t work with companies that sell services to both sides of the aisle.
For Higher Ground Labs, a stipulation for receiving their money is a commitment not to work with any Republican candidate. Corporations are okay, but conservative causes and organizations are forbidden.
“We’re in a moment of existential crisis in America and this Republican party is deeply toxic to the health and future of our country,” says Dutta. “The only path out of this mess is to vote Republicans out of office and to do that we need to make it easier for good candidates to run for office and to engage a broader electorate into voting regularly.”
Original content from TechCrunch: https://techcrunch.com/
0 notes
Text
How Russia’s online influence campaign engaged with millions for years
Russian efforts to influence U.S. politics and sway public opinion were consistent and, as far as engaging with target audiences, largely successful, according to a report from Oxford’s Computational Propaganda Project published today. Based on data provided to Congress by Facebook, Instagram, Google, and Twitter, the study paints a portrait of the years-long campaign that’s less than flattering to the companies.
The report, which you can read here, was published today but given to some outlets over the weekend; it summarizes the work of the Internet Research Agency, Moscow’s online influence factory and troll farm. The data cover various periods for different companies, but 2016 and 2017 showed by far the most activity.
A clearer picture
If you’ve only checked into this narrative occasionally during the last couple of years, the Comprop report is a great way to get a bird’s-eye view of the whole thing, with no “we take this very seriously” palaver interrupting the facts.
If you’ve been following the story closely, the value of the report is mostly in deriving specifics and some new statistics from the data, which Oxford researchers were provided some seven months ago for analysis. The numbers, predictably, all seem to be a bit higher or more damning than those provided by the companies themselves in their voluntary reports and carefully practiced testimony.
Previous estimates have focused on the rather nebulous metric of “encountering” or “seeing” IRA content put on these social networks. This had the dual effect of increasing the affected number — to over 100 million on Facebook alone — but “seeing” could easily be downplayed in importance; after all, how many things do you “see” on the internet every day?
Facebook will show which Russian election troll accounts you followed
The Oxford researchers better quantify the engagement, on Facebook first, with more specific and consequential numbers. For instance, in 2016 and 2017, nearly 30 million people on Facebook actually shared Russian propaganda content, with similar numbers of likes garnered, and millions of comments generated.
Note that these aren’t ads that Russian shell companies were paying to shove into your timeline — these were pages and groups with thousands of users on board who actively engaged with and spread posts, memes, and disinformation on captive news sites linked to by the propaganda accounts.
The content itself was, of course, carefully curated to touch on a number of divisive issues: immigration, gun control, race relations, and so on. Many different groups (e.g. black Americans, conservatives, Muslims, LGBT communities) were targeted; all generated significant engagement, as this breakdown of the above stats shows:
Although the targeted communities were surprisingly diverse, the intent was highly focused: stoke partisan divisions, suppress left-leaning voters, and activate right-leaning ones.
Black voters in particular were a popular target across all platforms, and a great deal of content was posted both to keep racial tensions high and to interfere with their actual voting. Memes were posted suggesting followers withhold their votes, or with deliberately incorrect instructions on how to vote. These efforts were among the most numerous and popular of the IRA’s campaign; it’s difficult to judge their effectiveness, but certainly they had reach.
Examples of posts targeting black Americans.
In a statement, Facebook said that it was cooperating with officials and that “Congress and the intelligence community are best placed to use the information we and others provide to determine the political motivations of actors like the Internet Research Agency.” It also noted that it has “made progress in helping prevent interference on our platforms during elections, strengthened our policies against voter suppression ahead of the 2018 midterms, and funded independent research on the impact of social media on democracy.”
Instagram on the rise
Based on the narrative thus far, one might expect that Facebook — being the focus for much of it — was the biggest platform for this propaganda, and that it would have peaked around the 2016 election, when the evident goal of helping Donald Trump get elected had been accomplished.
In fact Instagram was receiving as much or more content than Facebook, and it was being engaged with on a similar scale. Previous reports disclosed that around 120,000 IRA-related posts on Instagram had reached several million people in the run-up to the election. The Oxford researchers conclude, however, that 40 accounts received in total some 185 million likes and 4 million comments during the period covered by the data (2015-2017).
A partial explanation for these rather high numbers may be that, also counter to the most obvious narrative, IRA posting in fact increased following the election — for all platforms, but particularly on Instagram.
IRA-related Instagram posts jumped from an average of 2,611 per month in 2016 to 5,956 in 2017; note that the numbers don’t match the above table exactly because the time periods differ slightly.
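In relative terms, the jump in the Oxford figures amounts to slightly more than a doubling of monthly output; a quick check of the arithmetic:

```python
# Oxford figures: average IRA-related Instagram posts per month.
posts_2016 = 2_611
posts_2017 = 5_956

# Relative increase from the 2016 average to the 2017 average.
increase = (posts_2017 - posts_2016) / posts_2016
print(f"{increase:.0%} increase year over year")  # 128% increase year over year
```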
Twitter posts, while extremely numerous, are quite steady at just under 60,000 per month, totaling around 73 million engagements over the period studied. To be perfectly frank, this kind of voluminous bot and sock puppet activity is so commonplace on Twitter, and the company seems to have done so little to thwart it, that it hardly bears mentioning. But it was certainly there, and often reused existing bot nets that previously had chimed in on politics elsewhere and in other languages.
In a statement, Twitter said that it has “made significant strides since 2016 to counter manipulation of our service, including our release of additional data in October related to previously disclosed activities to enable further independent academic research and investigation.”
Google too is somewhat hard to find in the report, though not necessarily because it has a handle on Russian influence on its platforms. Oxford’s researchers complain that Google and YouTube have been not just stingy, but appear to have actively attempted to stymie analysis.
Google chose to supply the Senate committee with data in a non-machine-readable format. The evidence that the IRA had bought ads on Google was provided as images of ad text and in PDF format whose pages displayed copies of information previously organized in spreadsheets. This means that Google could have provided the usable ad text and spreadsheets—in a standard machine-readable file format, such as CSV or JSON, that would be useful to data scientists—but chose to turn them into images and PDFs as if the material would all be printed out on paper.
This forced the researchers to collect their own data via citations and mentions of YouTube content. As a consequence, their conclusions are limited. Generally speaking, when a tech company does this, it means that the data it could provide would tell a story it doesn’t want heard.
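The complaint is easy to make concrete. Records in a standard format like CSV parse directly into structures a researcher can query and re-share, while the same records flattened into images or PDFs must first be reconstructed by hand or by OCR. A minimal sketch of the machine-readable case (the column names here are hypothetical, not Google’s actual schema):

```python
import csv
import io
import json

# Hypothetical ad records as they *could* have been delivered: plain CSV.
csv_text = """ad_id,impressions,spend_usd
101,5000,120.50
102,250,15.00
"""

# A standard format loads straight into structured records...
rows = list(csv.DictReader(io.StringIO(csv_text)))
total_spend = sum(float(row["spend_usd"]) for row in rows)

# ...and re-serializes losslessly (e.g. to JSON) for other researchers.
print(json.dumps(rows, indent=2))
print(f"total spend: ${total_spend:.2f}")  # total spend: $135.50
```

None of this is possible with a screenshot of a spreadsheet, which is the researchers’ point.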
For instance, one interesting point brought up by a second report published today, by New Knowledge, concerns the 1,108 videos uploaded by IRA-linked accounts on YouTube. These videos, a Google statement explained, “were not targeted to the U.S. or to any particular sector of the U.S. population.”
In fact, all but a few dozen of these videos concerned police brutality and Black Lives Matter, which as you’ll recall were among the most popular topics on the other platforms. It seems reasonable to expect that this extremely narrow targeting would have been mentioned by YouTube in some way. Unfortunately it was left to be discovered by a third party, which gives one an idea of just how far a statement from the company can be trusted. (Google did not immediately respond to a request for comment.)
Desperately seeking transparency
In the report’s conclusion, the Oxford researchers (Philip N. Howard, Bharath Ganesh, and Dimitra Liotsiou) point out that although the Russian propaganda efforts were (and remain) disturbingly effective and well organized, the country is not alone in this.
“During 2016 and 2017 we saw significant efforts made by Russia to disrupt elections around the world, but also political parties in these countries spreading disinformation domestically,” they write. “In many democracies it is not even clear that spreading computational propaganda contravenes election laws.”
“It is, however, quite clear that the strategies and techniques used by government cyber troops have an impact,” the report continues, “and that their activities violate the norms of democratic practice… Social media have gone from being the natural infrastructure for sharing collective grievances and coordinating civic engagement, to being a computational tool for social control, manipulated by canny political consultants, and available to politicians in democracies and dictatorships alike.”
Predictably, even social networks’ moderation policies became targets for propagandizing.
Waiting on politicians is, as usual, something of a long shot, and the onus is squarely on the providers of social media and internet services to create an environment in which malicious actors are less likely to thrive.
Specifically, this means that these companies need to embrace researchers and watchdogs in good faith instead of freezing them out in order to protect some internal process or embarrassing misstep.
“Twitter used to provide researchers at major universities with access to several APIs, but has withdrawn this and provides so little information on the sampling of existing APIs that researchers increasingly question its utility for even basic social science,” the researchers point out. “Facebook provides an extremely limited API for the analysis of public pages, but no API for Instagram.” (And we’ve already heard what they think of Google’s submissions.)
If the companies exposed in this report truly take these issues seriously, as they tell us time and again, perhaps they should implement some of these suggestions.
0 notes
Text
Nature Alabama Revisits Ten Commandments, Hoping for Help from Kavanaugh
http://www.nature-business.com/nature-alabama-revisits-ten-commandments-hoping-for-help-from-kavanaugh/
Image
Dean Young, a Christian activist and former aide to Roy Moore, spoke to a crowd with a seven-foot-high banner of the Ten Commandments propped up behind him in McIntosh, Ala., on Saturday.CreditCreditMeggan Haller for The New York Times
McINTOSH, Ala. — At a Saturday night music festival about an hour north of Alabama’s gulf shore, the twangy refrain of a bluegrass song captured how seriously many religious conservatives are taking the battle over the Supreme Court nomination of Judge Brett M. Kavanaugh.
“Without a firm foundation, a house will fall apart,” the band sang, “but they can’t take the Ten Commandments out of the Bible or my heart.”
For many in the crowd of about 100, the commandments and Judge Kavanaugh are paramount concerns this election season. More than a decade after Roy S. Moore was ousted as Alabama’s chief justice for defying federal court orders to remove a 5,280-pound stone slab of the commandments from the state judicial building, voters will consider a constitutional amendment in November that would allow the Ten Commandments to be displayed in schools and other public property across Alabama.
The amendment’s supporters hope it passes not just on principle but because of the almost-guaranteed response: a legal challenge that ends up in federal courts. Those campaigning for it now say their goal is to get a case before the Supreme Court, where they hope a conservative majority, if a Justice Kavanaugh is on the bench, will rule in favor of such displays.
It is the kind of legal fight that social conservatives had been looking forward to having, in front of a Supreme Court realigned by President Trump. After years of disappointing decisions on issues of fundamental importance to their movement like religious expression, abortion and gay rights, Judge Kavanaugh’s nomination was supposed to be the moment when the religious right had good reason to hope for a more sympathetic high court.
“The liberals, the left, they’re scared to death because Trump is doing what he said he’d do, which is to make the Supreme Court go by the Constitution,” said Dean Young, a Christian activist and former chief strategist to Mr. Moore, who lost a race for Senate last year after several women claimed he had groped and harassed them as teenagers.
As Mr. Young spoke to the crowd on Saturday, with a seven-foot-high banner of the Ten Commandments propped up behind him, he said he would like nothing more than for Alabama’s commandments amendment to be on the Supreme Court docket.
“They’ll make the decision that we are going to acknowledge God,” he said.
Though a Ten Commandments initiative has been proposed in years past and went nowhere, the issue was one that few Republican lawmakers wanted to oppose this year. The one Republican gubernatorial candidate who said he thought the amendment was unnecessary finished third out of four in the primary in June. (The candidate who came in fourth place was dead, having passed away unexpectedly a few months earlier.)
Alabama requires a three-fifths vote by the state legislature before a proposed amendment can go to the voters. It passed the State Senate 23-3 and the State House of Representatives 66-19, largely along party lines.
Image
Some want to see the amendment approved and ultimately decided by the Supreme Court with a conservative majority.CreditMeggan Haller for The New York Times
The overriding sentiment from the crowd in McIntosh, a mix of Baptists, Pentecostals and other Christian denominations, was that Judge Kavanaugh should be confirmed quickly. Even the latest allegation from a Yale classmate of his who said he exposed himself to her at a party — which had not surfaced by the time of the festival — would probably hold little sway given how skeptically they viewed the first accuser.
The willingness of many of the president’s defenders to reject almost any accusations leveled against him or his administration as embittered exaggerations by people who can’t accept that he won has become commonplace each time a new controversy hits. But perhaps because of the high stakes of the Kavanaugh nomination and its importance to social conservatives — who among Mr. Trump’s supporters believe they have the most to win, or lose, from his presidency — the backlash has been especially potent this time.
At the festival, people expressed different reactions to Christine Blasey Ford’s accusation that when they were in high school, a young Brett Kavanaugh pinned her to a bed at a party and tried to remove her clothes. Some said it never happened, dismissing it as the fabrications of an agent paid to lie by Democrats. Others excused it as ordinary, hormonal teenage misbehavior.
But like many prominent Republicans and conservatives who have rallied to the judge’s side, everyone interviewed by The New York Times said Dr. Blasey’s accusation was another attempt by Democrats to interfere with Mr. Trump’s presidency and stop him from honoring the promises they elected him to fulfill.
“If they can’t win, pull out a scandal,” said Trish Bernard, 55, a retired school bus driver and milk carrier from Mt. Vernon, Ala.
Ms. Bernard said she could still not get past what she suspected were the Democrats’ motives. The direction Mr. Trump has taken the country “may not be the way they wanted it,” she said. “And they haven’t been able to oust him.”
Bonnie Maddy, who retired a few years ago to Satsuma, Ala., from Ohio, said that from everything she had read and heard about Judge Kavanaugh, she believed he was “a good man,” and she said she thought he was being unfairly maligned.
“I believe that George Soros has a lot to do with it,” she said, referring to the billionaire funder of liberal causes. “I think a lot of money is being funneled into this.”
Like many at the bluegrass festival, Riley Chestang, 58, of Creola, Ala., was quick to invoke Judge Moore. Standing beside a yellow sign tacked to a tree that said, “No profanity, no alcohol, no smoking, no pets,” Mr. Chestang said, “The same thing happened to Judge Roy Moore. I mean, you can put any kind of propaganda out.”
Judge Kavanaugh, Mr. Chestang added, was put up by Mr. Trump — “the only president in my lifetime I’d take a bullet for,” he said — because he would “uphold the good that this nation needs.”
Image
Supporters of the Ten Commandments amendment gathered at an end-of-summer festival in McIntosh, Ala., on Saturday.CreditMeggan Haller for The New York Times
Judge Moore has been mostly out of the public eye since he lost the Senate election last December. Reached by telephone in his office last week, he contended that there were parallels between his experience and Judge Kavanaugh’s.
“I do think there’s a great similarity,” he said. “I highly regard women. So I don’t condone the mistreatment of women. But things that come up at the last minute, you’ve got to question when there’s such big consequences like a confirmation.”
Senate Republicans, he argued, should move forward with the vote. “This delay, it’s just not right. They should go on and do what they do,” he said.
After days of legal wrangling, Dr. Blasey and Judge Kavanaugh are both expected to testify on Thursday. But it could come at a cost for Republicans and the conservative movement, which has undertaken an aggressive campaign to defend Judge Kavanaugh that has, at times, veered into unsettling territory.
Within minutes of Dr. Blasey’s story appearing in the Washington Post last week, theories and conspiracies about her motives began gaining traction online, sometimes given currency by influential conservative media personalities who spread them, including the Drudge Report, the Fox News host Laura Ingraham and the conservative commentator Erick Erickson.
More broadly, the public relations machine defending the judge has big money behind it. In recent days, a conservative group that has spent tens of millions of dollars pushing for Mr. Trump’s judicial confirmations, the Judicial Crisis Network, has started running a television ad that calls the allegations “character assassination” and insists plainly: “This never happened.”
While some at the weekend festival in Alabama were inclined to believe that the accusation was fiction, others seemed focused on what they said was the Democrats’ real target: Mr. Trump.
“The harder they push against him, the more people like me who didn’t want him to start with are going with him,” said Bryan Bernard, 58, who is a barber and married to Trish, the retired bus driver.
Mr. Bernard said he initially supported Senator Ted Cruz of Texas in the 2016 presidential election but eventually came around to seeing Mr. Trump as the fighter conservatives needed. He said he sees the Ten Commandments amendment in a similar vein — a fight that Alabama is taking to the left. And he wants to see the amendment approved and ultimately decided by the Supreme Court — with Judge Kavanaugh on it.
“This is something we can do to punch them in the eye,” Mr. Bernard said.
Read More | https://www.nytimes.com/2018/09/25/us/politics/alabama-ten-commandments-supreme-court.html |
Nature Alabama Revisits Ten Commandments, Hoping for Help from Kavanaugh, in 2018-09-25 20:46:14
0 notes
Text
Higher Ground Labs is betting tech can help sway the 2020 elections for Democrats
When Shomik Dutta and Betsy Hoover first met in 2007, he was coordinating fundraising and get-out-the-vote efforts for Barack Obama’s first presidential campaign and she was a deputy field director for the campaign.
Over the next two election cycles the two would become part of an organizing and fundraising team that transformed the business of politics through its use of technology — supposedly laying the groundwork for years of Democratic dominance in organizing, fundraising, polling and grassroots advocacy.
Then came Donald J. Trump and the 2016 election.
For both Dutta and Hoover, the 2016 outcome was a wake-up call against complacency. What had worked for the Democratic party in 2008 and 2012 wasn’t going to be effective in future election cycles, so they created the investment firm Higher Ground Labs to provide financing and a launching pad for new companies serving Democratic campaigns and progressive organizations.
Higher Ground Labs backs 13 startups to help Democrats win in 2018 and beyond
“As the political world shifts from analog to digital, we need a lot more tools to capture that spend,” says Dutta. “Democrats are spending on average 70 cents of every dollar raised on television ads. We are addicted to old ways of campaigning. If we want to activate and engage an enduring majority of voters we have to go where they are (and that’s increasingly online) and we have to adapt to be able to have these conversations wherever they are.”
Social media and the rise of “direct to consumer” politics
While the Obama campaign effectively used the internet as a mobilization tool in its two campaigns, the lessons of social media and mobile technologies that offer a “direct-to-consumer” politics circumventing traditional norms have, in the ensuing years, been harnessed most effectively by conservative organizations, according to some scholars and activists.
“The internet is a tool and in that sense it’s neutral, but just like other communication tools from the past, people with more power, with more resources, with more organization, have been able to take advantage of it,” Jen Schradie, an assistant professor at the Observatoire sociologique du changement at Sciences Po in Paris, told Vox in an interview earlier this month.
Schradie is a scholar whose recent book, “The Revolution That Wasn’t,” contends that the internet’s early application as a progressive organizing tool has been overtaken by more conservative elements. “The idea of neutrality seems more true of the internet because the costs of distributing information are dramatically lower than with something like television or radio or other communication tools,” she said. “However, to make full use of the internet, you still need substantial resources and time and motivation. The people who can afford to do this, who can fund the right digital strategy, create a major imbalance in their favor.”
Schradie contends that a web of privately funded think tanks, media organizations, talk radio and — increasingly — mobile applications have woven a conservative stitch into the fabric of social media. The medium’s own tendency to promote polarizing and fringe viewpoints also served to amplify the views of pundits who were previously believed to be political outliers.
Essentially, these sites have enabled commentators and personalities to create a patchwork of “grassroots” organizations and media operations dedicated to reaching an audience receptive to their particular political message that’s funded by billionaire donors and apolitical corporate ad dollars.
Then there’s the technology companies, like Cambridge Analytica, which improperly used access to Facebook data for targeting purposes — also financed by these same billionaires.
Bannon and Cambridge Analytica planned suppression of black voters, whistleblower tells Senate
“The last six years have witnessed millions and millions of dollars of private Koch money and Mercer money that have gone to pretty sophisticated data and media efforts to advance the Republican agenda,” says Dutta. “I want to even the scale.”
Dutta is referring to Charles and David Koch and Robert Mercer, the scions and founder (respectively) of two family dynasties worth billions. The Koch brothers support a web of political advocacy groups, while Mercer and his daughter were large backers of Breitbart News and Cambridge Analytica, two organizations that arguably provided much of the policy underpinnings and online political machinery for the Trump presidential campaign.
But there’s also the simple fact that Donald Trump’s digital strategy director, Brad Parscale, was able to effectively and inexpensively leverage the social media tools and data troves amassed by the Republican National Committee that were already available to the candidate who won the Republican primary. In fact, in the wake of Romney’s loss, Republicans spent years building up profiles of 200 million Americans for targeted messaging in the 2016 election.
“Who controls Facebook controls the 2016 election,” Parscale said during a speaking engagement at the Romanian Academy of Sciences, according to a report in Forbes.
Parscale, now the campaign manager for the president’s 2020 reelection campaign, recalled, “These guys from Facebook walked into my office and said: ‘we have a beta … it’s a new onboarding tool … you can onboard audiences straight into Facebook and we will match them to their Facebook accounts,’ ” according to Forbes.
During the 2016 campaign, Hillary Clinton’s team made 66,000 visual ads, according to Parscale, while the Trump campaign made 5.9 million ads by leveraging social media networks and the language of memes. And in the run-up to the 2020 election, Parscale intends to go back to the same well. The Trump campaign has already spent more than $5 million on Facebook ads in the current election cycle, according to The New York Times — outspending every single Democratic candidate in the field and roughly all of the Democrats combined.
Reaching higher ground
Dutta and Hoover are working to offset this movement with investments of their own. Back in 2017, the two launched Higher Ground Labs, an early-stage company accelerator and investment firm dedicated to financing technology companies that could support progressive causes.
The firm has $15 million committed from investors, including Reid Hoffman, the co-founder of LinkedIn and a partner at Greylock; Ron Conway, the founder of SV Angel and an early backer of Google, Facebook and Twitter; Chris Sacca, an early investor in Uber; and Elizabeth Cutler, the founder of SoulCycle. Already, Higher Ground has invested in more than 30 companies focused on services like advocacy outreach, polling and campaign organizing — among others.
The latest cohort of companies to receive backing from Higher Ground Labs
“It is vitally important that Democrats learn to do their campaigns online,” says Dutta. “The way you recruit volunteers; the way you poll sentiment; the way you target and mobilize voters has to be done with online tools and has to improve in the progressive movement and that’s the job of Higher Ground Labs to fix.”
For-profit companies have a critical role to play in election organizing and mobilization, Dutta says. Thanks to government regulation, only private companies are allowed to trade data across organizations and causes (provided they do it at fair market value). That means advocacy groups, unions and others can tap the information these companies collect — for a fee.
The Democratic Party already has one highly valued private company that it uses for its technology services. Formed from the merger of NGP Software and Voter Activation Network, two companies that got their start in the late 1990s and early 2000s, NGP VAN is the largest software and technology services provider for Democratic campaigns. It’s also a highly valued company, which received roughly $100 million in financing last year from the private equity firm Insight Venture Partners, according to people familiar with the investment. Terms of the deal were not disclosed.
“Our vision has been to build a platform that would break down the painful data silos that exist in the campaigns and nonprofit space, and to offer truly best-in-class digital, fundraising and organizing features that could serve both the largest and the smallest nonprofits and campaigns, all with one unified CRM,” wrote Stu Trevelyan, the chief executive of NGP VAN + EveryAction, in an August blogpost announcing the investment. “We’re so excited that others, like our new partners at Insight, share that vision, and we can’t wait to continue innovating and growing together in the coming years.”
Can startups lead the way?
Even as private equity dollars boost the firepower of organizations like NGP VAN, venture capitalists are financing several companies from the Higher Ground Labs portfolio.
Civis Analytics, a startup founded by the former chief analytics officer of Barack Obama’s 2012 reelection campaign, raised $22 million from outside investors and counts Higher Ground Labs among its backers. Qriously, another Higher Ground Labs portfolio company, was acquired by Brandwatch, while GroundBase, a messaging platform, was acquired by the nonprofit progressive advocacy organization ACRONYM.
Other companies in the portfolio are also attracting serious attention from investors. Standouts like Civis Analytics and Hustle, which raised $30 million last May, show that investors are buying into the proposition that these companies can build lasting businesses serving Democratic and progressive political campaigns and corporate businesses that would also like to rally employees or personalize a marketing pitch to customers.
Hustle rallies $30M for grassroots texting tool Republicans can’t use
These are companies like Change Research, an earlier-stage company that just launched from Higher Ground Labs accelerator last year. That company, founded by Mike Greenfield, a serial Silicon Valley entrepreneur who was the first data scientist working on the problem of fraud detection at PayPal, and Pat Reilly, a communications professional who worked with state and local Democratic politicians, is slashing the cost of political polling.
“I wanted to do something for American democracy to try and improve the state of things,” Greenfield said in an interview last year.
For Greenfield, that meant increasing access to polling information. He cited the test case of a Kansas special election in a district that Donald Trump had won by 27 points. Using his own proprietary polling data, Greenfield predicted that the Democratic challenger, James Thompson, would pose a significant threat to his Republican opponent, Mike Estes.
Estes went on to a 7% victory at the ballot, but Thompson’s campaign did not have access to polling data that could have helped inform his messaging and — potentially — sway the election, said Greenfield.
“Public opinion is used to winnow out who can be most successful based on how much money they’re able to raise for a poll,” says Reilly. It’s another way that electoral politics is skewed in favor of the people with disposable income to spend what is a not-insignificant amount of money on campaigns.
Polls alone can cost between $20,000 and $30,000, and Change Research has been able to cut that by 80% to 90%, according to the company’s founders.
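Taken at face value, those two figures pin down the claimed new price range with simple arithmetic (a back-of-the-envelope sketch using only the numbers quoted above, nothing from Change Research itself):

```python
# Traditional poll cost range quoted in the article.
cost_low, cost_high = 20_000, 30_000

# The founders claim an 80-90% cost reduction.
# Integer percentage math keeps the results exact.
best_case = cost_low * (100 - 90) // 100    # cheapest poll, deepest cut
worst_case = cost_high * (100 - 80) // 100  # priciest poll, shallowest cut

print(f"${best_case:,}-${worst_case:,}")    # → $2,000-$6,000
```

In other words, the claimed reduction would put a poll in the low single-digit thousands of dollars, which is the access argument Greenfield and Reilly are making.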
“It’s safe to say that most of the world was stunned by the outcome [of the presidential election] because most polls predicted the opposite,” says Greenfield. “Being a good American and as a parent of a 10-year-old and a 12-year-old, providing forward-thinking candidates and causes with the kind of insight they needed to win up and down the ballot could not only be a good business, but really help us save our democracy.”
Change Research isn’t just polling for politicians. Last year, the company conducted roughly 500 polls for political candidates and advocacy groups.
“The way that I’ve described Change Research to investors is that we want to simultaneously move the world in a better direction and having a positive impact while building a substantial business,” says Greenfield. “We’re only going to work with candidates and causes that we’re aligned with.”
Being exclusively focused on progressive causes isn’t the liability that many in the broader business community would think, says Dutta. Many Democratic organizations won’t work with companies that sell services to both sides of the aisle.
For Higher Ground Labs, a stipulation for receiving their money is a commitment not to work with any Republican candidate. Corporations are okay, but conservative causes and organizations are forbidden.
“We’re in a moment of existential crisis in America and this Republican party is deeply toxic to the health and future of our country,” says Dutta. “The only path out of this mess is to vote Republicans out of office and to do that we need to make it easier for good candidates to run for office and to engage a broader electorate into voting regularly.”
Text
How Russia’s online influence campaign engaged with millions for years
Russian efforts to influence U.S. politics and sway public opinion were consistent and, as far as engaging with target audiences, largely successful, according to a report from Oxford’s Computational Propaganda Project published today. Based on data provided to Congress by Facebook, Instagram, Google and Twitter, the study paints a portrait of the years-long campaign that’s less than flattering to the companies.
The report, which you can read here, was published today but given to some outlets over the weekend; it summarizes the work of the Internet Research Agency, Moscow’s online influence factory and troll farm. The data cover various periods for different companies, but 2016 and 2017 showed by far the most activity.
A clearer picture
If you’ve only checked into this narrative occasionally during the last couple of years, the Comprop report is a great way to get a bird’s-eye view of the whole thing, with no “we take this very seriously” palaver interrupting the facts.
If you’ve been following the story closely, the value of the report is mostly in deriving specifics and some new statistics from the data, which Oxford researchers were provided some seven months ago for analysis. The numbers, predictably, all seem to be a bit higher or more damning than those provided by the companies themselves in their voluntary reports and carefully practiced testimony.
Previous estimates have focused on the rather nebulous metric of “encountering” or “seeing” IRA content on these social networks. That framing had a dual effect: it inflated the affected number, to over 100 million on Facebook alone, but “seeing” could easily be downplayed in importance; after all, how many things do you “see” on the internet every day?
Facebook will show which Russian election troll accounts you followed
The Oxford researchers better quantify the engagement, on Facebook first, with more specific and consequential numbers. For instance, in 2016 and 2017, nearly 30 million people on Facebook actually shared Russian propaganda content, with similar numbers of likes garnered, and millions of comments generated.
Note that these aren’t ads that Russian shell companies were paying to shove into your timeline — these were pages and groups with thousands of users on board who actively engaged with and spread posts, memes and disinformation on captive news sites linked to by the propaganda accounts.
The content itself was, of course, carefully curated to touch on a number of divisive issues: immigration, gun control, race relations and so on. Many different groups (e.g. black Americans, conservatives, Muslims, LGBT communities) were targeted; all generated significant engagement, as this breakdown of the above stats shows:
Although the targeted communities were surprisingly diverse, the intent was highly focused: stoke partisan divisions, suppress left-leaning voters and activate right-leaning ones.
Black voters in particular were a popular target across all platforms, and a great deal of content was posted both to keep racial tensions high and to interfere with their actual voting. Memes were posted suggesting followers withhold their votes, or giving deliberately incorrect instructions on how to vote. These efforts were among the most numerous and popular of the IRA’s campaign; it’s difficult to judge their effectiveness, but they certainly had reach.
Examples of posts targeting black Americans.
In a statement, Facebook said that it was cooperating with officials and that “Congress and the intelligence community are best placed to use the information we and others provide to determine the political motivations of actors like the Internet Research Agency.” It also noted that it has “made progress in helping prevent interference on our platforms during elections, strengthened our policies against voter suppression ahead of the 2018 midterms, and funded independent research on the impact of social media on democracy.”
Instagram on the rise
Based on the narrative thus far, one might expect that Facebook — being the focus for much of it — was the biggest platform for this propaganda, and that it would have peaked around the 2016 election, when the evident goal of helping Donald Trump get elected had been accomplished.
In fact Instagram was receiving as much or more content than Facebook, and it was being engaged with on a similar scale. Previous reports disclosed that around 120,000 IRA-related posts on Instagram had reached several million people in the run-up to the election. The Oxford researchers conclude, however, that 40 accounts received in total some 185 million likes and 4 million comments during the period covered by the data (2015-2017).
A partial explanation for these rather high numbers may be that, also counter to the most obvious narrative, IRA posting in fact increased following the election — for all platforms, but particularly on Instagram.
IRA-related Instagram posts jumped from an average of 2,611 per month in 2016 to 5,956 in 2017; note that the numbers don’t match the above table exactly because the time periods differ slightly.
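Those monthly averages amount to a jump of roughly 128% year over year; a quick check using only the two figures above:

```python
posts_2016 = 2611  # average IRA-related Instagram posts per month, 2016
posts_2017 = 5956  # same measure, 2017

pct_increase = (posts_2017 - posts_2016) / posts_2016 * 100
print(f"{pct_increase:.0f}%")  # → 128%
```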
Twitter posts, while extremely numerous, are quite steady at just under 60,000 per month, totaling around 73 million engagements over the period studied. To be perfectly frank, this kind of voluminous bot and sock puppet activity is so commonplace on Twitter, and the company seems to have done so little to thwart it, that it hardly bears mentioning. But it was certainly there, and often reused existing bot nets that previously had chimed in on politics elsewhere and in other languages.
In a statement, Twitter said that it has “made significant strides since 2016 to counter manipulation of our service, including our release of additional data in October related to previously disclosed activities to enable further independent academic research and investigation.”
Google too is somewhat hard to find in the report, though not necessarily because it has a handle on Russian influence on its platforms. Oxford’s researchers complain that Google and YouTube have been not just stingy, but appear to have actively attempted to stymie analysis.
Google chose to supply the Senate committee with data in a non-machine-readable format. The evidence that the IRA had bought ads on Google was provided as images of ad text and in PDF format whose pages displayed copies of information previously organized in spreadsheets. This means that Google could have provided the useable ad text and spreadsheets—in a standard machine-readable file format, such as CSV or JSON, that would be useful to data scientists—but chose to turn them into images and PDFs as if the material would all be printed out on paper.
This forced the researchers to collect their own data via citations and mentions of YouTube content. As a consequence, their conclusions are limited. Generally speaking, when a tech company does this, it means that the data they could provide would tell a story they don’t want heard.
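The contrast the researchers are drawing is concrete: had the ad records arrived as CSV or JSON, a few lines of standard tooling would have made them queryable. A minimal sketch with Python’s built-in csv module (the field names and values here are invented for illustration, not Google’s actual schema):

```python
import csv
import io

# Hypothetical ad records as a machine-readable export might present them.
raw_csv = """ad_id,spend_usd,impressions
1001,250,48000
1002,120,19500
"""

# DictReader turns each row into a dict keyed by the header line.
rows = list(csv.DictReader(io.StringIO(raw_csv)))
total_spend = sum(int(row["spend_usd"]) for row in rows)
total_impressions = sum(int(row["impressions"]) for row in rows)

print(total_spend, total_impressions)  # → 370 67500
```

Images and PDFs of the same tables force manual transcription or error-prone OCR before any such analysis can even begin, which is precisely the researchers’ complaint.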
For instance, one interesting point brought up by a second report published today, by New Knowledge, concerns the 1,108 videos uploaded by IRA-linked accounts on YouTube. These videos, a Google statement explained, “were not targeted to the U.S. or to any particular sector of the U.S. population.”
In fact, all but a few dozen of these videos concerned police brutality and Black Lives Matter, which as you’ll recall were among the most popular topics on the other platforms. It seems reasonable to expect that this extremely narrow targeting would have been mentioned by YouTube in some way. Unfortunately it was left to be discovered by a third party, which gives one an idea of just how far a statement from the company can be trusted. (Google did not immediately respond to a request for comment.)
Desperately seeking transparency
In their conclusion, the Oxford researchers (Philip N. Howard, Bharath Ganesh and Dimitra Liotsiou) point out that although the Russian propaganda efforts were (and remain) disturbingly effective and well organized, the country is not alone in this.
“During 2016 and 2017 we saw significant efforts made by Russia to disrupt elections around the world, but also political parties in these countries spreading disinformation domestically,” they write. “In many democracies it is not even clear that spreading computational propaganda contravenes election laws.”
“It is, however, quite clear that the strategies and techniques used by government cyber troops have an impact,” the report continues, “and that their activities violate the norms of democratic practice… Social media have gone from being the natural infrastructure for sharing collective grievances and coordinating civic engagement, to being a computational tool for social control, manipulated by canny political consultants, and available to politicians in democracies and dictatorships alike.”
Predictably, even social networks’ moderation policies became targets for propagandizing.
Waiting on politicians is, as usual, something of a long shot, and the onus is squarely on the providers of social media and internet services to create an environment in which malicious actors are less likely to thrive.
Specifically, this means that these companies need to embrace researchers and watchdogs in good faith instead of freezing them out in order to protect some internal process or embarrassing misstep.
“Twitter used to provide researchers at major universities with access to several APIs, but has withdrawn this and provides so little information on the sampling of existing APIs that researchers increasingly question its utility for even basic social science,” the researchers point out. “Facebook provides an extremely limited API for the analysis of public pages, but no API for Instagram.” (And we’ve already heard what they think of Google’s submissions.)
If the companies exposed in this report truly take these issues seriously, as they tell us time and again, perhaps they should implement some of these suggestions.