#and tech companies are never going to CHOOSE to act more ethically they have to be made to
thanks for the thoughtful reply!
I'm not a professional artist so idk if I have enough skin in this game to give a properly considered answer (and I don't really want to speak for those who do), but to my understanding, calling generative AI plagiaristic comes from that place of them being by design incapable of providing attribution, regardless of how strongly their outputs are influenced by a specific artist. A human artist will typically know when it's appropriate to credit another artist with style / substance / etc. inspiration for one of their pieces and can provide attribution when appropriate. Generative AI has no way of knowing how heavily a specific artist's work has influenced its outputs because of the way a trained model works, so can't give attribution even if it's relevant or necessary.
When I added those tags, OP hadn't added the follow-ups that they were talking about copyright law, and I was looking at plagiarism from an academic standpoint rather than a legal one. In that framing I do still think plagiarism is a decent term for what these models do with their input data, if only because we don't currently have a more accurate word for the specific kind of large-scale impersonal unattributed use of other people's work that generative AI relies on. I don't know enough about copyright law (especially US copyright law, which I assume is what OP is talking about) to really have an opinion on that aspect.
The definition of "plagiarism" and "copying" being changed from "copying verbatim someone else's work" to "creating an entirely new never-seen-before piece of work with input from a tool that may have at one point read metadata about someone else's work" is such insane obvious batshit overreach, but people are repeating it as if it's a given just because it gives them a reason to hate the fucking machines.
So done with this conversation. After a year of trying to explain this stuff to people nicely I am just completely done with it.
#this is honestly one of those things I'm glad it's not my job to figure out like man I could never study law #they need to be regulated bc it's imo self-evidently unethical how they're currently being used and a LOT of that is by design #and tech companies are never going to CHOOSE to act more ethically they have to be made to #but I do think I agree with OP that copyright law isn't the way to go about it #the “how heavily a specific artist's work has influenced the thing” is largely irrelevant for things thousands of people have drawn #bc the amount of data does make it a lot more like human learning #not to anthropomorphise the statistical model #but for niche topics there'll often be one or two artists whose work is the overwhelming basis for whatever the AI spits out #if u ask an image generator for 'photorealistic pokemon' it's not gonna credit RJ palmer bc it doesn't know who that is #but that's absolutely where a lot of that data is coming from #and a human artist would know that's where their inspiration is coming from but an AI simply Does Not #idk it's muddy and messy #I did originally think OP was just being really pedantic about the dictionary definition of “plagiarism” for no reason so #that was where the original tags were coming from lmao #I stand by them but with the added context I maybe wouldn't have stepped in #chats #discourse #AI art #Also important to remember that AI doesn't learn like humans do it's a bunch of normal distributions in a trench coat #so where humans can learn AI can like #again we don't have a better term for it so learn is the best analogy but it's like learn in a different font
MD KaiJou ideas - Part 2
Kaiba: Starting scenarios
See Part 1 here
See Jonouchi Scenarios here
See Combined Scenarios here
See Mid-arc Dramas here
Generally Kaiba has to make the first move to invite Jonouchi into his life somehow (usually as a friend), as it is very difficult to convincingly shift his worldview without significant time and energy.
Because of this, there are various scenarios that are concocted to trigger this change in his life.
1) Mokuba hangs out with Jonouchi
Mokuba is developing an ongoing friendship with the main cast, with Jonouchi being a close friend. This causes Kaiba to interact with Jonouchi on a consistent basis.
This is the best way to get a healthy relationship going between Kaiba and Jonouchi. Kaiba may try to push Mokuba away from the group but is likely to adhere to what Mokuba wants.
Jonouchi can start to learn more about Kaiba as a person and start to understand him. Kaiba will likewise learn about Jonouchi via Mokuba and eventually may open himself up to him.
Can get Jonouchi acting as an older brother too towards Mokuba.
For extra drama, could get Mokuba to have a one-sided crush on Jonouchi for a while.
2) Kaiba is prescribed friendship
Kaiba’s mental health is declining in some way (e.g. depression, manic episodes, etc) which is notably affecting his work. His psychiatrist/doctor/therapist recommends seeking out friendships and developing a trust system (Note: I am not a doctor - this advice is made up for the scenario).
Kaiba, who is taking this seriously and is determined to improve himself, is now a bit more open to inviting Jonouchi (and the rest) into his life - especially now that he is making an effort at it.
Can possibly even have Jonouchi be the prescriber, although that can make for an unhealthy doctor/client relationship.
3) Kaiba gets his coffee from the cafe Jonouchi works at
Kaiba gets his daily coffee from the Kaiba Corp coffee shop that Jonouchi works at. This opens up frequent interactions with Jonouchi.
This does, unfortunately, have the problematic ‘boss dating subordinate’ situation.
This also assumes Kaiba likes to get his own coffee. Maybe he uses it as an excuse to take a break from work?
4) Kaiba is an alcoholic
Kaiba is in a situation that leads him to alcoholism (not helped by Japan’s work drinking culture). A drunk Kaiba at functions can often lead to different scenarios, especially if Jou is a bartender or waiter.
Darker narratives can extend this to drug use.
5) Kaiba starts recreating other duelists in VR using AI
This can be a space for him to safely explore his sexuality, or even to just open himself up to said AI (since he has complete control over it)
This can sometimes lead to Kaiba simply settling for the AI and not the real person, although that in itself can make for some decent drama.
Can also explore narratives around the ethics of misusing AI based on real people.
6) Mokuba has left Domino City
Mokuba has decided to do his own thing in America (either expanding Kaiba Corp or pursuing his own dream). This can intensify Kaiba’s loneliness, and thus he becomes chattier or a bit more open to interaction in his life.
The amount of distance between the brothers can vary. Could have Mokuba leave on a bad note and never communicate at all, or could have him chat to Kaiba every other day online.
For darker narratives this may result in the aforementioned alcoholism and/or declining mental health.
7) Kaiba is broke
Kaiba has gone broke, his company has tanked, or the company has been taken from him in some way. This can even out the wealth gap between Jou and Kaiba and put them on a similar playing field.
Jonouchi could offer him a home, and can teach him how to be thrifty (should Kaiba need to learn).
Depending on Kaiba’s mental health, he can either be driven to climb his way back, or be in despair and lose all hope.
Bonus points if Jou is a key factor in helping him get back ahead.
8) Kaiba is having dreams about Jonouchi
This can be sexual, intense or just very personal. Reasons for this happening can vary, but sometimes there doesn’t need to be a reason
This can cause Kaiba to try to figure out why he is having these dreams, and what it may mean (if he has mellowed in his dub-version’s skepticism).
Thus leading to observations and potential interactions with Jonouchi. (Or alternatively attempts at avoidance, which Jou picks up on and bugs him about)
9) Kaiba’s new bodyguard is Jonouchi
This allows for a lot of interaction between them; however, it does mean Kaiba has power over him as an employer.
Potential to have Jonouchi rescue Kaiba or get injured trying to protect him - thus providing potential hurt/comfort scenarios.
Can increase drama by having high stakes scenarios with big injuries.
10) Jonouchi’s dance/singing/performance impresses Kaiba
Kaiba attends a concert and finds Jonouchi performing (dancing/singing/etc), and is impressed by the performance. Lots of potential threads here.
Kaiba could become a secret hardcore fanboy
Kaiba could be seduced by the performance (either intentional by Jou or not)
Kaiba could sign him on to Kaiba Corp events, leading to more interactions
Can lead to private performances, and/or Kaiba listening to Jou’s music in his spare time.
11) Kaiba is in a coma / deep sleep / can’t wake up
Often attributed to faulty VR tech. Mokuba/Kaiba Corp creates some technology to allow Jonouchi to enter his mind and pull him out.
This has the benefit of Jonouchi being able to see the true Kaiba inside his mind - which can allow Jonouchi to understand and empathise with Kaiba much more.
This can lead to angst / dark narratives if unveiling some of Kaiba’s more twisted mind.
See hospital scenario for when tech can’t help.
12) Kaiba is stuck in VR
Very similar to the coma one, but he can now access digital technology so can interact with the outside world should he need to.
Also Jou would not be delving into his mind directly but simply into the VR world to retrieve him.
Could utilise glitching for humor and/or drama.
13) Kaiba decides to experiment with his sex life
May go out to a gay club undercover (where Jou goes/works), or go to a private sex work company (which could also be where Jou works).
In the case where Jou does sex work, could either have Jou and Kaiba meet there by surprise, have Jou try to pretend he is someone else (in the event Kaiba does not realise he works there), or have Kaiba specifically choose Jou (to Jou's bewilderment).
14) Kaiba is competing with a rival company (that Jou works for)
Kaiba finds there is a rival company making its way into his territory and making a name for itself. Jonouchi, ideally, is involved with this company professionally.
This can elevate Jonouchi to a position where he can interact with Kaiba on a professional level.
15) Kaiba discovers and befriends Jonouchi online
Can be done with VR avatars or random screen names, leading to drama at the reveal.
Bonus points if Kaiba has no idea it’s Jonouchi, and/or vice versa. Can lead to drama at the realisation of who the other is.
Alternatively, Kaiba may be a fan of Jonouchi streaming and become a bit of a fanboy (without Jonouchi knowing)
16) Kaiba sponsors Jonouchi as a professional duelist
This can lead to many potential interactions.
Main issue is the boss/employee relationship dynamic - Kaiba has significant power over Jonouchi.
Bonus points if Kaiba loses a duel to Jou at some point, causing drama.
17) Kaiba is recovering from a recent breakup
He may seek a rebound, or possibly Mokuba may seek Jou to help Kaiba recover.
Can enter into dark/abuse narratives with the ex.
Main difficulty here is getting Kaiba to trust anyone ever again after having his heart broken.
18) Kaiba is injured and is currently in hospital
This can either be for major injuries (e.g. gunshot wound, head injury) or minor ones.
This allows people to possibly visit him (like Jou). Depending on the injury he may or may not realise they are visiting (e.g. coma).
19) Kaiba wakes up, after having fallen for and started a relationship with Jou in a dream
For when you want a quick start to pining but don’t want all the setup
A bit of a cop-out, but it does the job.
Darker Narrative Scenarios
These are the more angst/dark/disturbing leaning scenarios. They're ok for large hits of drama and angst but often end up changing Kaiba’s personality significantly depending on how his mental health goes.
1) Mokuba has died
This would be a very traumatic event for Kaiba, so it will nearly always lead to a dark narrative, often involving a mental health breakdown.
Can lead to suicide scenario.
Can lead to major personality shift
2) Kaiba has become suicidal
There can be a variety of reasons that lead to this.
Often the story has Jonouchi be present should Kaiba nearly attempt, opening the door for Jonouchi to help him.
This dark narrative may end up overburdening Jonouchi if not careful.
Mokuba often needs to be out of the picture somehow, or there needs to be something distancing him from his brother.
Mokuba could be the person to bring in Jou and the gang to help.
3) Kaiba is kidnapped / trafficked / trapped
Jonouchi helps to rescue him and so Kaiba feels indebted to him. This can open up interactions.
Generally it’s Mokuba who seeks out Jou to help.
Kaiba may attempt to avoid Jonouchi afterwards, depending on what state he was in when Jonouchi rescues him.
4) Jonouchi has recently died, but has come back as a ghost and is haunting Kaiba
Usually a bittersweet scenario, unless a way to bring Jonouchi back to life is contrived. Alternatively, could eventually have Jou live on in the VR world a la Noah.
Kaiba may start to suffer from being haunted, depending on the nature of it.
Can have many ghost shenanigans such as: ghost commentary, Jonouchi voyeuring Kaiba, awkward moments where Kaiba can’t respond because he is in company of others, and various attempts to communicate should Kaiba not hear him
Can reverse the scenario so that Kaiba is the one who has died, and Jou is being haunted. Often here, the headcanon is that Kaiba is either in denial of his death or is desperate to find a way back to life.
5) Jou sacrifices himself for Kaiba or Mokuba
Bittersweet in that Jou is already dead. Could have an effect on Kaiba’s mental health.
Bonus points for combining with the ghost scenario
That’s all I have for now. Next will be starting scenarios for Jou - click here to view the post.
Marc Benioff: We Need a New Capitalism
Should the Securities and Exchange Commission require public companies to publicly disclose their key stakeholders and show how they are impacting those stakeholders: (1) Yes, (2) No? Why? What are the ethics underlying your decision?
Capitalism, I acknowledge, has been good to me.
Over the past 20 years, the company that I co-founded, Salesforce, has generated billions in profits and made me a very wealthy person. I have been fortunate to live a life beyond the wildest imaginations of my great-grandfather, who immigrated to San Francisco from Kiev in the late 1800s.
Yet, as a capitalist, I believe it’s time to say out loud what we all know to be true: Capitalism, as we know it, is dead.
Yes, free markets — and societies that cherish scientific research and innovation — have pioneered new industries, discovered cures that have saved millions from disease and unleashed prosperity that has lifted billions of people out of poverty. On a personal level, the success that I’ve achieved has allowed me to embrace philanthropy and invest in improving local public schools and reducing homelessness in the San Francisco Bay Area, advancing children’s health care and protecting our oceans.
But capitalism as it has been practiced in recent decades — with its obsession on maximizing profits for shareholders — has also led to horrifying inequality. Globally, the 26 richest people in the world now have as much wealth as the poorest 3.8 billion people, and the relentless spewing of carbon emissions is pushing the planet toward catastrophic climate change. In the United States, income inequality has reached its highest level in at least 50 years, with the top 0.1 percent — people like me — owning roughly 20 percent of the wealth while many Americans cannot afford to pay for a $400 emergency. It’s no wonder that support for capitalism has dropped, especially among young people.
To my fellow business leaders and billionaires, I say that we can no longer wash our hands of our responsibility or what people do with our products. Yes, profits are important, but so is society. And if our quest for greater profits leaves our world worse off than before, all we will have taught our children is the power of greed.
It’s time for a new capitalism — a more fair, equal and sustainable capitalism that actually works for everyone and where businesses, including tech companies, don’t just take from society but truly give back and have a positive impact.
What might a new capitalism look like?
First, business leaders need to embrace a broader vision of their responsibilities by looking beyond shareholder return and also measuring their stakeholder return. This requires that they focus not only on their shareholders, but also on all of their stakeholders — their employees, customers, communities and the planet. Fortunately, nearly 200 executives with the Business Roundtable recently committed their companies, including Salesforce, to this approach, saying that the “purpose of a corporation” includes “a fundamental commitment to all of our stakeholders.” As a next step, the government could formalize this commitment, perhaps with the Securities and Exchange Commission requiring public companies to publicly disclose their key stakeholders and show how they are impacting those stakeholders.
Unfortunately, not everyone agrees. Some business leaders objected to the landmark declaration. The Council of Institutional Investors argued that “it is government, not companies, that should shoulder the responsibility of defining and addressing societal objectives.” When asked whether companies should serve all stakeholders and whether capitalism should be updated, Vice President Mike Pence warned against “leftist policies.”
But suggesting that companies must choose between doing well and doing good is a false choice. Successful businesses can and must do both. In fact, with political dysfunction in Washington, D.C., Americans overwhelmingly say C.E.O.s should take the lead on economic and social challenges, and employees, investors and customers increasingly seek out companies that share their values.
When government is unable or unwilling to act, business should not wait. Our experience at Salesforce shows that profit and purpose go hand in hand and that business can be the greatest platform for change.
Legislation to close loopholes in the Equal Pay Act has stalled in Congress for years, and today women still only make about 80 cents, on average, for every dollar earned by men. But congressional inaction does not absolve companies from their responsibility. Since learning that we were paying women less than men for equal work at Salesforce, we have spent $10.3 million to ensure equal pay; today we conduct annual audits to ensure that pay remains equal. Just about every company, I suspect, has a pay gap — and every company can close it now.
For many businesses, giving back to their communities is an afterthought — something they only do after they’ve turned a profit. But by integrating philanthropy into our company culture from the beginning — giving 1 percent of our equity, time and technology — Salesforce has donated nearly $300 million to worthy causes, including local public schools and addressing homelessness. To me, the boys and girls in local schools and homeless families on the streets of our city are our stakeholders, too. Entrepreneurs looking to develop great products and develop their communities can join the 9,000 companies in the Pledge 1% movement and commit to donating 1 percent of their equity, time and product, starting on their first day of business.
Nationally, despite massive breaches of consumer information, lawmakers in Washington seem unable to pass a national privacy law. California and other states are moving ahead with their own laws, forcing consumers and companies to navigate a patchwork of different regulations. Rather than instinctively opposing new regulations, tech leaders should support a strong, comprehensive national privacy law — perhaps modeled on the European Union’s General Data Protection Regulation — and recognize that protecting privacy and upholding trust is ultimately good for business.
Globally, few nations are meeting their targets to fight climate change, the current United States presidential administration remains determined to withdraw from the Paris Agreement and global emissions continue to rise. As governments fiddle, there are steps that business can take now, while there’s still time, to prevent the global temperature from rising more than 1.5 degrees Celsius. Every company can do something, whether reducing emissions in their operations and across their sector, striving for net-zero emissions like Salesforce, moving toward renewable energies or aligning their operations and supply chains with emissions reduction targets.
Skeptical business leaders who say that having a purpose beyond profit hurts the bottom line should look at the facts. Research shows that companies that embrace a broader mission — and, importantly, integrate that purpose into their corporate culture — outperform their peers, grow faster, and deliver higher profits. Salesforce is living proof that new capitalism can thrive and everyone can benefit. We don’t have to choose between doing well and doing good. They’re not mutually exclusive. In fact, since becoming a public company in 2004, Salesforce has delivered a 3,500 percent return to our shareholders. Values create value.
Of course, C.E.O. activism and corporate philanthropy alone will never be enough to meet the immense scale of today’s challenges. It could take $23 billion a year to address racial inequalities in our public schools. College graduates are drowning in $1.6 trillion of student debt. It will cost billions to retrain American workers for the digital jobs of the future. Trillions of dollars of investments will be needed to avert the worst effects of climate change. All this, when our budget deficit has already surpassed $1 trillion.
How, exactly, is our country going to pay for all this?
That is why a new capitalism must also include a tax system that generates the resources we need and includes higher taxes on the wealthiest among us. Local efforts — like the tax I supported last year on San Francisco’s largest companies to address our city’s urgent homelessness crisis — will help. Nationally, increasing taxes on high-income individuals like myself would help generate the trillions of dollars that we desperately need to improve education and health care and fight climate change.
The culture of corporate America needs to change, and it shouldn’t take an act of Congress to do it. Every C.E.O. and every company must recognize that their responsibilities do not stop at the edge of the corporate campus. When we finally start focusing on stakeholder value as well as shareholder value, our companies will be more successful, our communities will be more equal, our societies will be more just and our planet will be healthier.
Opinion
THE PRIVACY PROJECT
Twelve Million Phones, One Dataset, Zero Privacy
By Stuart A. Thompson and Charlie Warzel, Dec. 19, 2019
EVERY MINUTE OF EVERY DAY, everywhere on the planet, dozens of companies — largely unregulated, little scrutinized — are logging the movements of tens of millions of people with mobile phones and storing the information in gigantic data files. The Times Privacy Project obtained one such file, by far the largest and most sensitive ever to be reviewed by journalists. It holds more than 50 billion location pings from the phones of more than 12 million Americans as they moved through several major cities, including Washington, New York, San Francisco and Los Angeles.
Each piece of information in this file represents the precise location of a single smartphone over a period of several months in 2016 and 2017. The data was provided to Times Opinion by sources who asked to remain anonymous because they were not authorized to share it and could face severe penalties for doing so. The sources of the information said they had grown alarmed about how it might be abused and urgently wanted to inform the public and lawmakers.
[Related: How to Track President Trump — Read more about the national security risks found in the data.]
After spending months sifting through the data, tracking the movements of people across the country and speaking with dozens of data companies, technologists, lawyers and academics who study this field, we feel the same sense of alarm. In the cities that the data file covers, it tracks people from nearly every neighborhood and block, whether they live in mobile homes in Alexandria, Va., or luxury towers in Manhattan.
One search turned up more than a dozen people visiting the Playboy Mansion, some overnight. Without much effort we spotted visitors to the estates of Johnny Depp, Tiger Woods and Arnold Schwarzenegger, connecting the devices’ owners to the residences indefinitely.
If you lived in one of the cities the dataset covers and use apps that share your location — anything from weather apps to local news apps to coupon savers — you could be in there, too.
If you could see the full trove, you might never use your phone the same way again.
[Figure: A typical day at Grand Central Terminal in New York City. Satellite imagery: Microsoft.]
THE DATA REVIEWED BY TIMES OPINION didn’t come from a telecom or giant tech company, nor did it come from a governmental surveillance operation. It originated from a location data company, one of dozens quietly collecting precise movements using software slipped onto mobile phone apps. You’ve probably never heard of most of the companies — and yet to anyone who has access to this data, your life is an open book. They can see the places you go every moment of the day, whom you meet with or spend the night with, where you pray, whether you visit a methadone clinic, a psychiatrist’s office or a massage parlor.
The Times and other news organizations have reported on smartphone tracking in the past. But never with a data set so large. Even still, this file represents just a small slice of what’s collected and sold every day by the location tracking industry — surveillance so omnipresent in our digital lives that it now seems impossible for anyone to avoid.
It doesn’t take much imagination to conjure the powers such always-on surveillance can provide an authoritarian regime like China’s. Within America’s own representative democracy, citizens would surely rise up in outrage if the government attempted to mandate that every person above the age of 12 carry a tracking device that revealed their location 24 hours a day. Yet, in the decade since Apple’s App Store was created, Americans have, app by app, consented to just such a system run by private companies. Now, as the decade ends, tens of millions of Americans, including many children, find themselves carrying spies in their pockets during the day and leaving them beside their beds at night — even though the corporations that control their data are far less accountable than the government would be.
[Related: Where Even the Children Are Being Tracked — We followed every move of people in one city. Then we went to tell them.]
“The seduction of these consumer products is so powerful that it blinds us to the possibility that there is another way to get the benefits of the technology without the invasion of privacy. But there is,” said William Staples, founding director of the Surveillance Studies Research Center at the University of Kansas. “All the companies collecting this location information act as what I have called Tiny Brothers, using a variety of data sponges to engage in everyday surveillance.”
In this and subsequent articles we’ll reveal what we’ve found and why it has so shaken us. We’ll ask you to consider the national security risks the existence of this kind of data creates and the specter of what such precise, always-on human tracking might mean in the hands of corporations and the government. We’ll also look at legal and ethical justifications that companies rely on to collect our precise locations and the deceptive techniques they use to lull us into sharing it.
Today, it’s perfectly legal to collect and sell all this information. In the United States, as in most of the world, no federal law limits what has become a vast and lucrative trade in human tracking. Only internal company policies and the decency of individual employees prevent those with access to the data from, say, stalking an estranged spouse or selling the evening commute of an intelligence officer to a hostile foreign power.
Companies say the data is shared only with vetted partners. As a society, we’re choosing simply to take their word for that, displaying a blithe faith in corporate beneficence that we don’t extend to far less intrusive yet more heavily regulated industries. Even if these companies are acting with the soundest moral code imaginable, there’s ultimately no foolproof way they can secure the data from falling into the hands of a foreign security service. Closer to home, on a smaller yet no less troubling scale, there are often few protections to stop an individual analyst with access to such data from tracking an ex-lover or a victim of abuse.
A DIARY OF YOUR EVERY MOVEMENT
THE COMPANIES THAT COLLECT all this information on your movements justify their business on the basis of three claims: People consent to be tracked, the data is anonymous and the data is secure.
None of those claims hold up, based on the file we’ve obtained and our review of company practices.
Yes, the location data contains billions of data points with no identifiable information like names or email addresses. But it’s child’s play to connect real names to the dots that appear on the maps.
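To make that re-identification step concrete, here is a minimal sketch of the standard home/work heuristic in Python. The tuple layout, grid size and hour thresholds are our own illustrative assumptions, not the Times's actual methodology:

    from collections import Counter, defaultdict
    from datetime import datetime

    def cell(lat, lon, precision=3):
        # Snap coordinates to a grid cell of roughly 100 meters.
        return (round(lat, precision), round(lon, precision))

    def infer_home_and_work(pings):
        """pings: iterable of (device_id, iso_timestamp, lat, lon) tuples.
        Heuristic: the cell a device occupies most overnight is 'home';
        the cell it occupies most during weekday office hours is 'work'."""
        night, day = defaultdict(Counter), defaultdict(Counter)
        for device_id, ts, lat, lon in pings:
            t = datetime.fromisoformat(ts)
            c = cell(lat, lon)
            if t.hour >= 22 or t.hour < 6:              # overnight pings
                night[device_id][c] += 1
            elif t.weekday() < 5 and 9 <= t.hour < 17:  # weekday office hours
                day[device_id][c] += 1
        return {
            d: {"home": night[d].most_common(1)[0][0],
                "work": day[d].most_common(1)[0][0]}
            for d in night.keys() & day.keys()
        }

Looking up the inferred home cell in a public source such as a property register or voter roll is then an ordinary join, which is why stripping names from the file offers so little protection.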
Here’s what that looks like.
IN MOST CASES, ascertaining a home location and an office location was enough to identify a person. Consider your daily commute: Would any other smartphone travel directly between your house and your office every day?
Describing location data as anonymous is “a completely false claim” that has been debunked in multiple studies, Paul Ohm, a law professor and privacy researcher at the Georgetown University Law Center, told us. “Really precise, longitudinal geolocation information is absolutely impossible to anonymize.”
“D.N.A.,” he added, “is probably the only thing that’s harder to anonymize than precise geolocation information.”
[Work in the location tracking industry? Seen an abuse of data? We want to hear from you. Using a non-work phone or computer, contact us on a secure line at 440-295-5934, @charliewarzel on Wire or email Charlie Warzel and Stuart A. Thompson directly.]
Yet companies continue to claim that the data are anonymous. In marketing materials and at trade conferences, anonymity is a major selling point — key to allaying concerns over such invasive monitoring.
To evaluate the companies’ claims, we turned most of our attention to identifying people in positions of power. With the help of publicly available information, like home addresses, we easily identified and then tracked scores of notables. We followed military officials with security clearances as they drove home at night. We tracked law enforcement officers as they took their kids to school. We watched high-powered lawyers (and their guests) as they traveled from private jets to vacation properties. We did not name any of the people we identified without their permission.
The data set is large enough that it surely points to scandal and crime, but our purpose wasn’t to dig up dirt. We wanted to document the risk of underregulated surveillance.
Watching dots move across a map sometimes revealed hints of faltering marriages, evidence of drug addiction, records of visits to psychological facilities.
Connecting a sanitized ping to an actual human in time and place could feel like reading someone else’s diary.
In one case, we identified Mary Millben, a singer based in Virginia who has performed for three presidents, including President Trump. She was invited to the service at the Washington National Cathedral the morning after the president’s inauguration. That’s where we first found her.
[Photo: Mary Millben has performed for three presidents during her singing career. Getty Images.]
She remembers how, surrounded by dignitaries and the first family, she was moved by the music echoing through the recesses of the cathedral while members of both parties joined together in prayer. All the while, the apps on her phone were also monitoring the moment, recording her position and the length of her stay in meticulous detail. For the advertisers who might buy access to the data, the intimate prayer service could well supply some profitable marketing insights.
“To know that you have a list of places I have been, and my phone is connected to that, that’s scary,” Ms. Millben told us. “What’s the business of a company benefiting off of knowing where I am? That seems a little dangerous to me.”
Like many people we identified in the data, Ms. Millben said she was careful about limiting how she shared her location. Yet like many of them, she also couldn’t name the app that might have collected it. Our privacy is only as secure as the least secure app on our device.
“That makes me uncomfortable,” she said. “I’m sure that makes every other person uncomfortable, to know that companies can have free rein to take your data, locations, whatever else they’re using. It is disturbing.”
The inauguration weekend yielded a trove of personal stories and experiences: elite attendees at presidential ceremonies, religious observers at church services, supporters assembling across the National Mall — all surveilled and recorded permanently in rigorous detail.
Protesters were tracked just as rigorously. After the pings of Trump supporters, basking in victory, vanished from the National Mall on Friday evening, they were replaced hours later by those of participants in the Women’s March, as a crowd of nearly half a million descended on the capital. Examining just a photo from the event, you might be hard-pressed to tie a face to a name. But in our data, pings at the protest connected to clear trails through the data, documenting the lives of protesters in the months before and after the protest, including where they lived and worked.
We spotted a senior official at the Department of Defense walking through the Women’s March, beginning on the National Mall and moving past the Smithsonian National Museum of American History that afternoon. His wife was also on the mall that day, something we discovered after tracking him to his home in Virginia. Her phone was also beaming out location data, along with the phones of several neighbors.
[Figure: Senior Defense Department official and his wife identified at the Women’s March. Note: Animated movement of the person’s location is inferred. Satellite imagery: Microsoft and DigitalGlobe.]
The official’s data trail also led to a high school, homes of friends, a visit to Joint Base Andrews, workdays spent in the Pentagon and a ceremony at Joint Base Myer-Henderson Hall with President Barack Obama in 2017 (nearly a dozen more phones were tracked there, too).
Inauguration Day weekend was marked by other protests — and riots. Hundreds of protesters, some in black hoods and masks, gathered north of the National Mall that Friday, eventually setting fire to a limousine near Franklin Square. The data documented those rioters, too. Filtering the data to that precise time and location led us to the doorsteps of some who were there. Police were present as well, many with faces obscured by riot gear. The data led us to the homes of at least two police officers who had been at the scene.
As revealing as our searches of Washington were, we were relying on just one slice of data, sourced from one company, focused on one city, covering less than one year. Location data companies collect orders of magnitude more information every day than the totality of what Times Opinion received.
Data firms also typically draw on other sources of information that we didn’t use. We lacked the mobile advertising IDs or other identifiers that advertisers often combine with demographic information like home ZIP codes, age, gender, even phone numbers and emails to create detailed audience profiles used in targeted advertising. When datasets are combined, privacy risks can be amplified. Whatever protections existed in the location dataset can crumble with the addition of only one or two other sources.
There are dozens of companies profiting off such data daily across the world — by collecting it directly from smartphones, creating new technology to better capture the data or creating audience profiles for targeted advertising.
The full collection of companies can feel dizzying, as it’s constantly changing and seems impossible to pin down. Many use technical and nuanced language that may be confusing to average smartphone users.
While many of them have been involved in the business of tracking us for years, the companies themselves are unfamiliar to most Americans. (Companies can work with data derived from GPS sensors, Bluetooth beacons and other sources. Not all companies in the location data business collect, buy, sell or work with granular location data.)
[Figure: A Selection of Companies Working in the Location Data Business. Sources: MightySignal, LUMA Partners and AppFigures.]
Location data companies generally downplay the risks of collecting such revealing information at scale. Many also say they’re not very concerned about potential regulation or software updates that could make it more difficult to collect location data.
“No, it doesn’t really keep us up at night,” Brian Czarny, chief marketing officer at Factual, one such company, said. He added that Factual does not resell detailed data like the information we reviewed. “We don’t feel like anybody should be doing that because it’s a risk to the whole business,” he said.
In the absence of a federal privacy law, the industry has largely relied on self-regulation. Several industry groups offer ethical guidelines meant to govern it. Factual joined the Mobile Marketing Association, along with many other data location and marketing companies, in drafting a pledge intended to improve its self-regulation. The pledge is slated to be released next year.
States are starting to respond with their own laws. The California Consumer Privacy Act goes into effect next year and adds new protections for residents there, like allowing them to ask companies to delete their data or prevent its sale. But aside from a few new requirements, the law could leave the industry largely unencumbered.
“If a private company is legally collecting location data, they’re free to spread it or share it however they want,” said Calli Schroeder, a lawyer for the privacy and data protection company VeraSafe.
The companies are required to disclose very little about their data collection. By law, companies need only describe their practices in their privacy policies, which tend to be dense legal documents that few people read and even fewer can truly understand.
EVERYTHING CAN BE HACKED
DOES IT REALLY MATTER that your information isn’t actually anonymous? Location data companies argue that your data is safe — that it poses no real risk because it’s stored on guarded servers. This assurance has been undermined by the parade of publicly reported data breaches — to say nothing of breaches that don’t make headlines. In truth, sensitive information can be easily transferred or leaked, as evidenced by this very story.
We’re constantly shedding data, for example, by surfing the internet or making credit card purchases. But location data is different. Our precise locations are used fleetingly in the moment for a targeted ad or notification, but then repurposed indefinitely for much more profitable ends, like tying your purchases to billboard ads you drove past on the freeway. Many apps that use your location, like weather services, work perfectly well without your precise location — but collecting your location feeds a lucrative secondary business of analyzing, licensing and transferring that information to third parties.
[Sample data: the file contains simple fields like date, latitude and longitude, making it easy to inspect, download and transfer. Note: values are randomized to protect sources and device owners.]
For many Americans, the only real risk they face from having their information exposed would be embarrassment or inconvenience. But for others, like survivors of abuse, the risks could be substantial. And who can say what practices or relationships any given individual might want to keep private, to withhold from friends, family, employers or the government? We found hundreds of pings in mosques and churches, abortion clinics, queer spaces and other sensitive areas.
In one case, we observed a change in the regular movements of a Microsoft engineer. He made a visit one Tuesday afternoon to the main Seattle campus of a Microsoft competitor, Amazon. The following month, he started a new job at Amazon. It took minutes to identify him as Ben Broili, a manager now for Amazon Prime Air, a drone delivery service.
“I can’t say I’m surprised,” Mr. Broili told us in early December. “But knowing that you all can get ahold of it and comb through and place me to see where I work and live — that’s weird.” That we could so easily discern that Mr. Broili was out on a job interview raises some obvious questions, like: Could the internal location surveillance of executives and employees become standard corporate practice?
[Photo: Ben Broili’s interview at Amazon was captured in the data. Grant Hindsley for The New York Times.]
Mr. Broili wasn’t worried about apps cataloguing his every move, but he said he felt unsure about whether the tradeoff between the services offered by the apps and the sacrifice of privacy was worth it. “It’s an awful lot of data,” he said. “And I really still don’t understand how it’s being used. I’d have to see how the other companies were weaponizing or monetizing it to make that call.”
If this kind of location data makes it easy to keep tabs on employees, it makes it just as simple to stalk celebrities. Their private conduct — even in the dead of night, in residences and far from paparazzi — could come under even closer scrutiny.
Reporters hoping to evade other forms of surveillance by meeting in person with a source might want to rethink that practice. Every major newsroom covered by the data contained dozens of pings; we easily traced one Washington Post journalist through Arlington, Va.
In other cases, there were detours to hotels and late-night visits to the homes of prominent people. One person, plucked from the data in Los Angeles nearly at random, was found traveling to and from roadside motels multiple times, for visits of only a few hours each time.
While these pointillist pings don’t in themselves reveal a complete picture, a lot can be gleaned by examining the date, time and length of time at each point.
Large data companies like Foursquare — perhaps the most familiar name in the location data business — say they don’t sell detailed location data like the kind reviewed for this story but rather use it to inform analysis, such as measuring whether you entered a store after seeing an ad on your mobile phone.
But a number of companies do sell the detailed data. Buyers are typically data brokers and advertising companies. But some of them have little to do with consumer advertising, including financial institutions, geospatial analysis companies and real estate investment firms that can process and analyze such large quantities of information. They might pay more than $1 million for a tranche of data, according to a former location data company employee who agreed to speak anonymously.
Location data is also collected and shared alongside a mobile advertising ID, a supposedly anonymous identifier about 30 digits long that allows advertisers and other businesses to tie activity together across apps. The ID is also used to combine location trails with other information like your name, home address, email, phone number or even an identifier tied to your Wi-Fi network.
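That combination step is nothing more exotic than a join on the shared identifier. A toy sketch, with every ID and value below invented:

    # Toy illustration: an "anonymous" trail becomes a named dossier the
    # moment any second dataset shares the same advertising ID.
    location_trails = {
        "38400000-8cf0-11bd-b23e-10b96e40000d": [
            ("2017-01-21T09:14", 38.8895, -77.0353),  # downtown, morning
            ("2017-01-21T23:02", 38.8048, -77.0469),  # home for the night
        ],
    }
    marketing_profiles = {
        "38400000-8cf0-11bd-b23e-10b96e40000d": {
            "email": "jane@example.com", "home_zip": "22301", "age": 34,
        },
    }
    for ad_id, trail in location_trails.items():
        profile = marketing_profiles.get(ad_id)
        if profile is not None:  # one shared key links trail and identity
            print(profile["email"], "was at", trail)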
The data can change hands in almost real time, so fast that your location could be transferred from your smartphone to the app’s servers and exported to third parties in milliseconds. This is how, for example, you might see an ad for a new car some time after walking through a dealership.
That data can then be resold, copied, pirated and abused. There’s no way you can ever retrieve it.
Location data is about far more than consumers seeing a few more relevant ads. This information provides critical intelligence for big businesses. The Weather Channel app’s parent company, for example, analyzed users’ location data for hedge funds, according to a lawsuit filed in Los Angeles this year that was triggered by Times reporting. And Foursquare received much attention in 2016 after using its data trove to predict that after an E. coli crisis, Chipotle’s sales would drop by 30 percent in the coming months. Its same-store sales ultimately fell 29.7 percent.
Much of the concern over location data has focused on telecom giants like Verizon and AT&T, which have been selling location data to third parties for years. Last year, Motherboard, Vice’s technology website, found that once the data was sold, it was being shared to help bounty hunters find specific cellphones in real time. The resulting scandal forced the telecom giants to pledge they would stop selling location movements to data brokers.
Yet no law prohibits them from doing so.
Location data is transmitted from your phone via software development kits, or S.D.Ks. as they’re known in the trade. The kits are small programs that can be used to build features within an app. They make it easy for app developers to simply include location-tracking features, a useful component of services like weather apps. Because they’re so useful and easy to use, S.D.K.s are embedded in thousands of apps. Facebook, Google and Amazon, for example, have extremely popular S.D.K.s that allow smaller apps to connect to bigger companies’ ad platforms or help provide web traffic analytics or payment infrastructure.
But they could also sit on an app and collect location data while providing no real service back to the app. Location companies may pay the apps to be included — collecting valuable data that can be monetized.
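As a rough sketch of those mechanics (rendered in Python for readability; real kits are native mobile code, and the endpoint and callback here are invented), a free-riding location S.D.K. amounts to little more than a timer that reads the platform's location API and uploads the result:

    # Hypothetical illustration only - not any real vendor's S.D.K.
    import json
    import threading
    import urllib.request

    COLLECTOR_URL = "https://collector.adtech.example/v1/pings"  # invented

    def _upload(payload):
        req = urllib.request.Request(
            COLLECTOR_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)  # fire-and-forget; invisible to the user

    def start_beacon(get_location, ad_id, interval_sec=60.0):
        """get_location: callable returning (lat, lon) from the platform API."""
        def tick():
            lat, lon = get_location()
            _upload({"ad_id": ad_id, "lat": lat, "lon": lon})
            threading.Timer(interval_sec, tick).start()  # reschedule forever
        tick()

Nothing in that loop provides a user-facing feature; its only output is the ping stream.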
“If you have an S.D.K. that’s frequently collecting location data, it is more than likely being resold across the industry,” said Nick Hall, chief executive of the data marketplace company VenPath.
THE ‘HOLY GRAIL’ FOR MARKETERS
IF THIS INFORMATION IS SO SENSITIVE, why is it collected in the first place?
For brands, following someone’s precise movements is key to understanding the “customer journey” — every step of the process from seeing an ad to buying a product. It’s the Holy Grail of advertising, one marketer said, the complete picture that connects all of our interests and online activity with our real-world actions.
Once they have the complete customer journey, companies know a lot about what we want, what we buy and what made us buy it. Other groups have begun to find ways to use it too. Political campaigns could analyze the interests and demographics of rally attendees and use that information to shape their messages to try to manipulate particular groups. Governments around the world could have a new tool to identify protestors.
Pointillist location data also has some clear benefits to society. Researchers can use the raw data to provide key insights for transportation studies and government planners. The City Council of Portland, Ore., unanimously approved a deal to study traffic and transit by monitoring millions of cellphones. Unicef announced a plan to use aggregated mobile location data to study epidemics, natural disasters and demographics.
For individual consumers, the value of constant tracking is less tangible. And the lack of transparency from the advertising and tech industries raises still more concerns.
Does a coupon app need to sell second-by-second location data to other companies to be profitable? Does that really justify allowing companies to track millions and potentially expose our private lives?
Data companies say users consent to tracking when they agree to share their location. But those consent screens rarely make clear how the data is being packaged and sold. If companies were clearer about what they were doing with the data, would anyone agree to share it?
What about data collected years ago, before hacks and leaks made privacy a forefront issue? Should it still be used, or should it be deleted for good?
If it’s possible that data stored securely today can easily be hacked, leaked or stolen, is this kind of data worth that risk?
Is all of this surveillance and risk worth it merely so that we can be served slightly more relevant ads? Or so that hedge fund managers can get richer?
The companies profiting from our every move can’t be expected to voluntarily limit their practices. Congress has to step in to protect Americans’ needs as consumers and rights as citizens.
Until then, one thing is certain: We are living in the world’s most advanced surveillance system. This system wasn’t created deliberately. It was built through the interplay of technological advance and the profit motive. It was built to make money. The greatest trick technology companies ever played was persuading society to surveil itself.
Stuart A. Thompson ([email protected]) is a writer and editor in the Opinion section. Charlie Warzel ([email protected]) is a writer at large for Opinion.
Lora Kelley, Ben Smithgall, Vanessa Swales and Susan Beachy contributed research. Alex Kingsbury contributed reporting. Graphics by Stuart A. Thompson. Additional production by Jessia Ma and Gus Wezerek. Note: Visualizations have been adjusted to protect device owners.
Opening satellite imagery: Microsoft (New York Stock Exchange); Imagery (Pentagon, Los Angeles); Google and DigitalGlobe (White House); Microsoft and DigitalGlobe (Washington, D.C.); Imagery and Maxar Technologies (Mar-a-Lago).
Inside the history of Silicon Valley labor, with Louis Hyman
As I wrote for TechCrunch recently, immigration is not an issue always associated with tech — not even when thinking about the ethics of technology, as I do here.
So when I was moved to tears a few weeks ago on seeing footage of groups of 18 Jewish protestors linking arms to block the entrances to ICE detention facilities, bearing banners reading “Never Again” in reference to the Holocaust — these mostly young women risking their physical freedom and safety to try to help the children this country’s immigration service is placing in concentration camps today — one of my first thoughts was: I can’t cover that for my TechCrunch column. It’s about ethics, of course, but not about tech.
It turns out that wasn’t correct. Immigration is a tech issue. In fact, companies such as Wayfair (furniture), Amazon (web services), and Palantir (the software used to track undocumented immigrants) have borne heavy criticism for their support of and partnership with ICE’s efforts under the current administration.
And as I discussed earlier this month with Jaclyn Friedman, a leading sex ethics expert and one of the ICE protestors arrested in a major demonstration in Boston, social media technology has been instrumental in building and amplifying those protests.
But there’s more. IBM, for example, has an unfortunate and dark history of support for Nazi extermination efforts, and many recent commentators have drawn parallels between what IBM did during the Holocaust and what companies like Palantir are beginning to do now.
[Photo: Dozens of protestors huddle in the rain outside Palantir HQ.]
I say “companies,” plural, with intention: immigrant advocacy organization Mijente recently released news that Anduril, the company founded by Palmer Luckey and composed of Palantir veterans, now has a $13.5 million contract with the Marine Corps for its autonomous surveillance “Lattice” towers at four different USMC bases, including one border base. Documents procured via the Freedom of Information Act show the Marines mention “the intrusion dilemma” in their justification for choosing Anduril.
So now it seems the kinds of surveillance tech we know are badly biased at best — facial recognition? Panopticon-style observation? Algorithms of various other kinds — will be put to work by the most powerful fighting force ever designed, for expanded intervention into our immigration system.
Will the Silicon Valley elite say “no”? To what extent will new protests emerge, where the sorts of people likely to be reading this writing might draw a line and make work more difficult for their peers at places like Anduril?
Maybe the problem, however, is that most of us think of immigration ethics as an issue that might touch on a small handful of particularly libertarian-leaning tech companies, but surely it doesn’t go beyond that, right? Can’t the average techie in San Francisco or elsewhere safely and accurately say these problems don’t actually implicate them?
Turns out that’s not right either.
Which is why I had to speak this week with Cornell University historian Louis Hyman. Hyman is a Professor at Cornell’s School of Industrial and Labor Relations, and Director of the ILR’s Institute for Workplace Studies, in New York. In our conversation, Hyman and I dig into Silicon Valley’s history with labor rights, startup work structures and the role of immigration in the US tech ecosystem. Beyond that, I’ll let him introduce himself and his extraordinary work, below.
[Photo: Louis Hyman. Image by Jesse Winter.]
Greg Epstein: I discovered your work via a piece you wrote in the Washington Post, which drew from your 2018 book, Temp: How American Work, American Business, and the American Dream Became Temporary. In it, you wrote, “Undocumented workers have been foundational to the rise of our most vaunted hub of innovative capitalism: Silicon Valley.”
And in the book itself, you write at one point, “To understand the electronics industry is simple: every time someone says “robot,” simply picture a woman of color. Instead of self-aware robots, workers—all women, mostly immigrants, sometimes undocumented—hunched over tables with magnifying glasses assembling parts, sometimes on a factory line and sometimes on a kitchen table. Though it paid a lot of lip service to automation, Silicon Valley truly relied upon a transient workforce of workers outside of traditional labor relations.”
Can you just give us a brief introduction to the historical context behind these kinds of comments?
Louis Hyman: Sure. One of the key questions all of us ask is why is there only one Silicon Valley. There are different answers for that.
HW5case Q2
USA PATRIOT ACT
https://www.justice.gov/archive/ll/highlights.html Abuse citation: https://www.nytimes.com/2015/05/08/us/nsa-phone-records-collection-ruled-illegal-by-appeals-court.html
In response to 9/11, the government took a look at what allowed such an attack to take place. It was determined that the attack could have been prevented: various departments had information about it, but there was a failure to communicate those pieces of information between the individual departments. The attack slipped through the cracks of our government. One action taken was to create the Department of Homeland Security, which consolidated various departments into one larger department. The government also increased communication and information sharing between our various law enforcement agencies.
Another action was taken in addition to these, one that is highly controversial. On one hand, it is credited with stopping over 50 terrorist attacks since it was put into place; on the other hand, people say that it is a gross invasion of our privacy and a violation of our rights. Just over a month after 9/11, the "Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism" act was passed. It is more commonly known as the USA PATRIOT Act.
This act gave the government the ability to spy on people without probable cause, and it enabled law enforcement to bypass the Bill of Rights. One of the best examples of this is something called the 'sneak and peek', in which law enforcement can search your property or business without having to inform you, and without you ever knowing. Another notorious thing to come out of this, leaked by Edward Snowden, was the NSA collecting phone records and search histories of anyone it wanted, despite the original stipulation that the target of such actions had to be a suspected terrorist. I guess when you give sweeping authority, everyone becomes a suspected terrorist.
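To see why "just metadata" is such a sweeping power, here is a toy sketch with entirely hypothetical data, using the open-source networkx library: even without reading or hearing any content, call records alone reveal who talks to whom and who sits at the center of a social network.

```python
# Toy illustration (hypothetical data): bulk call metadata exposes the
# shape of a social network without any call content at all.
import networkx as nx

# Each record is (caller, callee) -- no content, "just" metadata.
call_records = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("carol", "dave"), ("eve", "carol"), ("eve", "dave"),
]

graph = nx.Graph()
graph.add_edges_from(call_records)

# Degree centrality ranks people by how connected they are;
# "carol" emerges as the hub purely from metadata.
centrality = nx.degree_centrality(graph)
for person, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.2f}")
```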
At the center of this controversy is the struggle between security and liberty.
The more you have of one, the less you have of the other. People have famously said, "You have nothing to worry about as long as you have nothing to hide." I agree with this sentiment to a certain degree; logically, it makes sense. However, if we accept it, we have to give up our privacy and the protections of the Bill of Rights. Hence, the controversy.
Despite being able to recognize the argument for the USA PATRIOT Act, I cannot condone some of its alleged abuses.
Questions:
1) Was the implementation of the USA PATRIOT Act a sound decision following 9/11? I believe it was, because at the time we did not know if more attacks were coming or to what degree we had been compromised. Considering the PATRIOT Act is credited with stopping over 50 such attacks, I believe the lives that were saved were worth the passing of the act. However, once the abuses started, I am not so sure.
2) Is it worth giving up our liberties if it ensures our safety and security? I don't think it is. Our liberties are what make America such a wonderful place to live. Once we start inviting authoritarianism into our lives, it will be a hard thing to shake. Any country that gives ultimate authority to its government loses the ability to stop that government if the power falls into the wrong hands. Not to mention the saying that "absolute power corrupts absolutely."
When you look at some of the abuses that people have alleged about the USA PATRIOT Act, and the proof of those allegations, you see this prophecy already coming true. People in our government have abused the PATRIOT Act in the past. It stands to reason that if we gave up even more liberty, we would see further abuses. Look at the FISA court in the news recently: just last Friday, some footnotes were unredacted that indicate the FBI willingly used what was considered to be Russian propaganda to spy on a person running for President.
Love the President or hate him, the government is guilty of trying to overturn an election. That is not its right or its responsibility. We should never allow people in government to have the power to overturn the will of the people.
3) Does using the PATRIOT Act outside of fighting terrorism seem acceptable? It has, in fact, been used more often to arrest drug dealers and the like than terrorists.
Terrorism is in the name. Unless the drug dealers are part of some coordinated effort to harm our citizens, I do not think it is just or fair to use powers intended to prevent terrorist attacks to make arrests. However, if the drug dealers are also using violence and other means to harm or kill others, then that is another thing entirely. At that point, even I would classify them as domestic terrorists, because that is what they are.
4) Is sneak and peek ethical? Can we trust that all people in positions to utilize this will do so as it was intended? Can we trust that all law enforcement will have the integrity not to abuse such powers?
I do not think it is ethical at all. People who wield such powers are almost certain to abuse them. I don't have the means to provide a citation at this time, but very recently an FBI agent was caught on a nanny cam doing questionable things with a child's undergarments. Such a disgusting thing is hard for me to even write about, but the fact that it happened goes to show you cannot blindly trust people in government. Even weirdos get in. It stands to reason that all kinds of people hold positions in our government, including the type who would abuse their power.
In addition to that disgusting story about the FBI agent, let's also remember the FISA court abuses. If people are violating rules as serious as those, they are certain to abuse sneak and peek at some point. We need to limit the powers we give to our government for that very reason.
5) In regard to our privacy, should the government have the power to view our search histories and our digital footprints? If it can monitor our text messages, the pictures we send to our significant others, and so on, should Americans be okay with that?
This is a clear violation of our right to a reasonable amount of privacy. The government is fully aware of this as well, which is why it tried to keep such things secret. I don't respect Edward Snowden very much, but I do appreciate that he informed the public this was going on. Though I wish he had used legitimate means to do so, the fact that the government was spying on essentially every American seems pretty crazy. People in positions of power certainly would have abused this information.
Three Standard Questions Plus One
1.) How can you apply deontological ethics (rule-based) to this case?
The US Constitution is a pretty good set of rules against which to judge the USA PATRIOT Act. Unfortunately, through this lens it becomes clear that the Constitution was violated, and therefore the USA PATRIOT Act, or at the very least the people who used it in such a bad way, is unethical.
2.) How can you apply utilitarian ethics (similar to consequentialist ethics) to this case?
This is where things get interesting. If you consider the good of the many, then a few violations of liberty here and there are worth protecting the many from terrorist attacks, or from the plague of drug dealers. It would only become unethical when the surveillance does more harm than it prevents, and those who say, "You have nothing to worry about if you have nothing to hide," would probably set that bar pretty high.
Protecting life is probably the most noble thing you can do, so if the ends justify the means, then one would most likely consider the USA PATRIOT Act to be ethical.
3.) How can you apply virtue ethics (character-based) to this case?
If those with integrity and good character were the ones using the PATRIOT Act, then I believe we would not have seen some of the abuses that we have. People with good character would probably use the act as it was intended and only use it on suspected terrorists. They would also be less likely to abuse it to spy on citizens or to violate someone's rights for their own personal reasons.
4.) What connection can you devise to computer security?
Computer security is a pretty big factor in this. Because so many big tech companies willingly gave access to our private information, our security was compromised. At the same time, this violation of our rights has stopped many terrorist attacks; the net was cast so wide that violating everyone's rights meant violating the terrorists' rights too.
It stands to reason that if we take steps to provide computer security for the masses, we also provide security to would-be terrorists. Any technology that becomes available to protect us online will inevitably protect these terrorists too.
Therefore we are right back at the center of this debate: having to choose between security and liberty.
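As a minimal sketch of why this is a genuine dilemma rather than a slogan: modern encryption, shown here via the real Python cryptography library, protects whatever bytes it is given. The math cannot check the intent of whoever holds the key, so securing the masses and securing a would-be terrorist are the same operation.

```python
# Minimal sketch: encryption is indifferent to who holds the key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # whoever holds this can encrypt and decrypt
cipher = Fernet(key)

# The same operation protects a citizen's message or a criminal's;
# nothing in the math inspects the sender's intent.
token = cipher.encrypt(b"meet at the usual place")
print(cipher.decrypt(token))  # b'meet at the usual place'
```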
0 notes
Text
Here's why we need a Futurist in the White House.
This is going to be a long write-up, so skip it if you don't have time on your hands.
I genuinely don't see much getting done by any of those other candidates, because they aren't tackling something at the core of human nature: the fact that we require incentives to drive us and our progress.
The question becomes: what kind of incentives are these, and how can we prevent corporations, or even a government, from exploiting them?
This very problem is at the core of why many of us do not see a viable path forward in our modern society.
Our institutions no longer serve us! They are outdated, and instead of pursuing innovative ways to tackle this problem, what we find ourselves with is a bunch of politicians with no frame of reference as to the nature of our modern society. Instead, they reach for 19th- and 20th-century solutions to problems we have not yet encountered.
The solution lies in Andrew's Human-Centered Capitalism. Good policy aligns social incentives with corporate and governmental goals. If you don't have policies that keep the human components of bureaucracies in check, then irrespective of your insights, goals, or desire for revolution, you ultimately flop.
Andrew's platform simply transcends the mere political process and seeks to solve this fundamental problem. It seeks to hold the human component of organisations, whatever their makeup, cognizant of their biases, and to drive them toward pro-social goals. This is Yuuuge!
If any one of those other candidates wins, you will have either a persistent status quo or an incompetent, overly concentrated government that believes it knows better than the individual what they may be facing in their lives.
We see this with Bernie's FJG; we see this with Trump's tax cuts (which do nothing more than benefit the wealthy); we see this with Warren's refusal to accept the problems that automation presents and will present; and ultimately with the lack of technological knowledge that is so obvious within the American government.
The last one bothers me most, because technology has been at the core of human progress. It has, however, also been at the core of inequality. The true elites of our societies are those who harness technology and are aware of its influence in our society today. Our biggest companies today are all tech companies for this very reason. And you have a government that is simply clueless as to what awaits us!
For the first time, we are able to alter human DNA to our benefit! This is bound to revolutionise healthcare, but in other contexts it will raise questions about the nature of humanity, and ethical dilemmas which we already see today!
Just recently a Chinese scientist by the name of He Jiankui edited the genes of twins in the hope of becoming world famous and profiting off that fame. Gene editing may present such problems because the vast majority of the population knows nothing about it! It may further worsen inequality by giving some individuals traits that confer cognitive as well as physical advantages.
Pattern finding has been at the core of human innovation. And yet we have built tools that not only do tasks more efficiently than we do, but learn to do them better with each iteration of the task. Surprisingly, even those at the forefront of these technologies are not fully aware of their capabilities and seek guidelines as well.
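As a toy illustration of that "learns with each iteration" claim (not any specific product or system): a parameter nudged downhill on an error function performs the task a little better every step.

```python
# Toy illustration: iterative improvement via gradient descent,
# minimising the error function f(x) = (x - 3)^2.
def loss(x):
    return (x - 3) ** 2

def gradient(x):
    return 2 * (x - 3)

x = 0.0                        # initial guess
for step in range(10):
    x -= 0.2 * gradient(x)     # learning rate 0.2
    print(f"step {step}: x={x:.3f}, loss={loss(x):.4f}")
# The loss shrinks every step: the tool does the task better each iteration.
```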
None of this is distant. It isn't some far-off reality. We are living it today!
We have already seen Andrew's ability to get politicians to see the impacts and seriousness of Automation. But this isn't enough. We need a politician who realizes the importance of Technology today and Tomorrow.
The one persistent problem I've noticed about politics today is that it is reactive, not proactive. Politicians want the problem to be apparent to every single individual before they choose to tackle it. And by the time that happens, you realise the problem has metamorphosed, and the government that is supposed to tackle it lags decades behind in implementing solutions.
We need individuals who see these trends in data and who act to provide solutions before the people face these problems.
So far, the only candidate who presents solutions both to the meta components of the problems America faces and to the more immediately obvious problems that the people face is Andrew Yang.
So visionary is Yang that when he stood on the podium at the last debate and stated that America is 10 years too late on climate change, he was met with dead silence and rebuke. Fast forward a few weeks and months, and we realize he was not just right but understating the case. We are 50 years behind.
When he brought out his climate change proposal, many shunned him for claiming that we need both to get to higher ground and to focus on adaptation. Greenpeace gave his plan a D-. Today, the UN is convening a summit with people at the forefront of the battle, including Bill Gates, on precisely this question of adaptation to climate change.
Yang realizes the necessity of certain solutions before the others do! This vision, this insight, bound not by ideology but by a genuine desire to solve the problems facing not only America but the world as a whole, is what we would expect from our politicians, yet it is so obviously absent it hurts. His willingness to speak the truth to the best extent he can is a mark of originality that is lost in society and politics at large.
Andrew Yang isn't flawless. But he is human, and an intelligent one at that. I, for example, am a massive proponent of a UBI, but I would be lying if I said I had no concerns about his implementation. Still, he offers policies like making Data a Property Right, Crypto and Digital Asset Regulation for Consumer Protection, the use of thorium reactors, and many more that show me he's more than willing to make use of, and at least try to understand, technology.
We need to get Andrew Yang into the White House this coming season. If we can't, we have to make sure he persists. Because the truth is that we need him and his approach to problem solving, more than ever. We need his insights. We need a futurist! We need Andrew Yang!
TL;DR: Andrew Yang seems to be the only candidate who truly sees trends in data and seeks to act on those trends in society and technology at large. It's an ability we would expect our politicians to have today, but they do not. This visionary insight is key to the progress of humanity as a whole.
submitted by /u/onlyartist6. Source: https://www.reddit.com/r/Futurology/comments/d24yb9/heres_why_we_need_a_futurist_in_the_white_house/
0 notes
Text
UK parliament calls for antitrust, data abuse probe of Facebook
A final report by a British parliamentary committee which spent months last year investigating online political disinformation makes very uncomfortable reading for Facebook — with the company singled out for “disingenuous” and “bad faith” responses to democratic concerns about the misuse of people’s data.
In the report, published today, the committee has also called for Facebook’s use of user data to be investigated by the UK’s data watchdog.
In an evidence session to the committee late last year, the Information Commissioner’s Office (ICO) suggested Facebook needs to change its business model — warning the company risks burning user trust for good.
Last summer the ICO also called for an ethical pause of social media ads for election campaigning, warning of the risk of developing “a system of voter surveillance by default”.
Interrogating the distribution of ‘fake news’
The UK parliamentary enquiry looked into both Facebook’s own use of personal data to further its business interests, such as by providing access to user data to developers and advertisers in order to increase revenue and/or usage; and what Facebook claimed as ‘abuse’ of its platform by the disgraced (and now defunct) political data company Cambridge Analytica — which in 2014 paid a developer with access to Facebook’s developer platform to extract information on millions of Facebook users in order to build voter profiles to try to influence elections.
The committee’s conclusion about Facebook’s business is a damning one, with the company accused of operating a business model that’s predicated on selling abusive access to people’s data.
“Far from Facebook acting against “sketchy” or “abusive” apps, of which action it has produced no evidence at all, it, in fact, worked with such apps as an intrinsic part of its business model,” the committee argues. “This explains why it recruited the people who created them, such as Joseph Chancellor [the co-founder of GSR, the developer which sold Facebook user data to Cambridge Analytica]. Nothing in Facebook’s actions supports the statements of Mark Zuckerberg who, we believe, lapsed into “PR crisis mode”, when its real business model was exposed.
“This is just one example of the bad faith which we believe justifies governments holding a business such as Facebook at arms’ length. It seems clear to us that Facebook acts only when serious breaches become public. This is what happened in 2015 and 2018.”
“We consider that data transfer for value is Facebook’s business model and that Mark Zuckerberg’s statement that ‘we’ve never sold anyone’s data’ is simply untrue,” the committee also concludes.
We’ve reached out to Facebook for comment on the committee’s report.
Last fall the company was issued the maximum possible fine under relevant UK data protection law for failing to safeguard user data in the Cambridge Analytica saga. Facebook is appealing the ICO’s penalty, however, claiming there’s no evidence UK users’ data was misused.
During the course of a multi-month enquiry last year investigating disinformation and fake news, the Digital, Culture, Media and Sport (DCMS) committee heard from 73 witnesses in 23 oral evidence sessions, as well as taking in 170 written submissions. In all the committee says it posed more than 4,350 questions.
Its wide-ranging, 110-page report makes detailed observations on a number of technologies and business practices across the social media, adtech and strategic communications space, and culminates in a long list of recommendations for policymakers and regulators — reiterating its call for tech platforms to be made legally liable for content.
Among the report’s main recommendations are:
clear legal liabilities for tech companies to act against “harmful or illegal content”, with the committee calling for a compulsory Code of Ethics overseen by an independent regulator with statutory powers to obtain information from companies, instigate legal proceedings and issue (“large”) fines for non-compliance
privacy law protections to cover inferred data so that models used to make inferences about individuals are clearly regulated under UK data protection rules
a levy on tech companies operating in the UK to support enhanced regulation of such platforms
a call for the ICO to investigate Facebook’s platform practices and use of user data
a call for the Competition Markets Authority to comprehensively “audit” the online advertising ecosystem, and also to investigate whether Facebook specifically has engaged in anti-competitive practices
changes to UK election law to take account of digital campaigning, including “absolute transparency of online political campaigning” — including “full disclosure of the targeting used” — and more powers for the Electoral Commission
a call for a government review of covert digital influence campaigns by foreign actors (plus a review of legislation in the area to consider if it’s adequate) — including the committee urging the government to launch independent investigations of recent past elections to examine “foreign influence, disinformation, funding, voter manipulation, and the sharing of data, so that appropriate changes to the law can be made and lessons can be learnt for future elections and referenda”
a requirement on social media platforms to develop tools to distinguish between “quality journalism” and low quality content sources, and/or work with existing providers to make such services available to users
Among the areas the committee’s report covers off with detailed commentary are data use and targeting; advertising and political campaigning — including foreign influence; and digital literacy.
It argues that regulation is urgently needed to restore democratic accountability and “make sure the people stay in charge of the machines”.
Ministers are due to produce a White Paper on social media safety regulation this winter and the committee writes that it hopes its recommendations will inform government thinking.
“Much has been said about the coarsening of public debate, but when these factors are brought to bear directly in election campaigns then the very fabric of our democracy is threatened,” the committee writes. “This situation is unlikely to change. What does need to change is the enforcement of greater transparency in the digital sphere, to ensure that we know the source of what we are reading, who has paid for it and why the information has been sent to us. We need to understand how the big tech companies work and what happens to our data.”
The report calls for tech companies to be regulated as a new category, “not necessarily either a ‘platform’ or a ‘publisher’”, one which legally tightens their liability for harmful content published on their platforms.
Last month another UK parliamentary committee also urged the government to place a legal ‘duty of care’ on platforms to protect users under the age of 18 — and the government said then that it has not ruled out doing so.
“Digital gangsters”
Competition concerns are also raised several times by the committee.
“Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law,” the DCMS committee writes, going on to urge the government to investigate whether Facebook specifically has been involved in any anti-competitive practices and conduct a review of its business practices towards other developers “to decide whether Facebook is unfairly using its dominant market position in social media to decide which businesses should succeed or fail”.
“The big tech companies must not be allowed to expand exponentially, without constraint or proper regulatory oversight,” it adds.
The committee suggests existing legal tools are up to the task of reining in platform power, citing privacy laws, data protection legislation, antitrust and competition law — and calling for a “comprehensive audit” of the social media advertising market by the UK’s Competition and Markets Authority, and a specific antitrust probe of Facebook’s business practices.
“If companies become monopolies they can be broken up, in whatever sector,” the committee points out. “Facebook’s handling of personal data, and its use for political campaigns, are prime and legitimate areas for inspection by regulators, and it should not be able to evade all editorial responsibility for the content shared by its users across its platforms.”
The social networking giant was the recipient of many awkward queries during the course of the committee’s enquiry but it refused repeated requests for its founder Mark Zuckerberg to testify — sending a number of lesser staffers in his stead.
That decision continues to be seized upon by the committee as evidence of a lack of democratic accountability. It also accuses Facebook of having an intentionally “opaque management structure”.
“By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both the UK Parliament and the ‘International Grand Committee’, involving members from nine legislatures from around the world,” the committee writes.
“The management structure of Facebook is opaque to those outside the business and this seemed to be designed to conceal knowledge of and responsibility for specific decisions. Facebook used the strategy of sending witnesses who they said were the most appropriate representatives, yet had not been properly briefed on crucial issues, and could not or chose not to answer many of our questions. They then promised to follow up with letters, which—unsurprisingly—failed to address all of our questions. We are left in no doubt that this strategy was deliberate.”
It doubles down on the accusation that Facebook sought to deliberately mislead its enquiry — pointing to incorrect and/or inadequate responses from staffers who did testify.
“We are left with the impression that either [policy VP] Simon Milner and [CTO] Mike Schroepfer deliberately misled the Committee or they were deliberately not briefed by senior executives at Facebook about the extent of Russian interference in foreign elections,” it suggests.
In an unusual move late last year the committee used rare parliamentary powers to seize a cache of documents related to an active US lawsuit against Facebook filed by a developer called Six4Three.
Seized cache of Facebook docs raise competition and consent questions
The cache of documents is referenced extensively in the final report, and appears to have fuelled antitrust concerns, with the committee arguing that the evidence obtained from the internal company documents “indicates that Facebook was willing to override its users’ privacy settings in order to transfer data to some app developers, to charge high prices in advertising to some developers, for the exchange of that data, and to starve some developers… of that data, thereby causing them to lose their business”.
“It seems clear that Facebook was, at the very least, in violation of its Federal Trade Commission [privacy] settlement,” the committee also argues, citing evidence from the former chief technologist of the FTC, Ashkan Soltani.
On Soltani’s evidence, it writes:
Ashkan Soltani rejected [Facebook’s] claim, saying that up until 2012, platform controls did not exist, and privacy controls did not apply to apps. So even if a user set their profile to private, installed apps would still be able to access information. After 2012, Facebook added platform controls and made privacy controls applicable to apps. However, there were ‘whitelisted’ apps that could still access user data without permission and which, according to Ashkan Soltani, could access friends’ data for nearly a decade before that time. Apps were able to circumvent users’ privacy or platform settings and access friends’ information, even when the user disabled the Platform. This was an example of Facebook’s business model driving privacy violations.
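To make the described failure concrete, here is a hedged sketch; the names and structure are hypothetical illustrations, not Facebook’s actual code. A whitelist carve-out in a permission check hollows out user privacy settings in exactly the way Soltani describes.

```python
# Hypothetical sketch of a permission check with a whitelist carve-out.
WHITELISTED_APPS = {"partner_app_1", "partner_app_2"}  # privileged apps

def can_read_friends_data(app_id: str, user_opted_out: bool) -> bool:
    if app_id in WHITELISTED_APPS:
        return True               # carve-out: the user's setting is ignored
    return not user_opted_out     # ordinary apps respect the setting

# A user who disabled platform access is still exposed to whitelisted apps:
print(can_read_friends_data("random_quiz_app", user_opted_out=True))  # False
print(can_read_friends_data("partner_app_1", user_opted_out=True))    # True
```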
While Facebook is singled out for the most eviscerating criticism in the report (and targeted for specific investigations), the committee’s long list of recommendations are addressed at social media businesses and online advertisers generally.
It also calls for far more transparency from platforms, writing that: “Social media companies need to be more transparent about their own sites, and how they work. Rather than hiding behind complex agreements, they should be informing users of how their sites work, including curation functions and the way in which algorithms are used to prioritise certain stories, news and videos, depending on each user’s profile. The more people know how the sites work, and how the sites use individuals’ data, the more informed we shall all be, which in turn will make choices about the use and privacy of sites easier to make.”
The committee also urges a raft of updates to UK election law — branding it “not fit for purpose” in the digital era.
Its interim report, published last summer, made many of the same recommendations.
Russian interest
But despite pressing the government for urgent action there was only a cool response from ministers then, with the government remaining tied up trying to shape a response to the 2016 Brexit vote which split the country (with social media’s election-law-deforming help). Instead it opted for a ‘wait and see’ approach.
The government accepted just three of the preliminary report’s forty-two recommendations outright, and fully rejected four.
Nonetheless, the committee has doubled down on its preliminary conclusions, reiterating earlier recommendations and pushing the government once again to act.
It cites fresh evidence, including from additional testimony, as well as pointing to other reports (such as the recently published Cairncross Review) which it argues back up some of the conclusions reached.
“Our inquiry over the last year has identified three big threats to our society. The challenge for the year ahead is to start to fix them; we cannot delay any longer,” writes Damian Collins MP and chair of the DCMS Committee, in a statement. “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use every day. Much of this is directed from agencies working in foreign countries, including Russia.
“The big tech companies are failing in the duty of care they owe to their users to act against harmful content, and to respect their data privacy rights. Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers who rely on this platform to reach their customers.”
“These are issues that the major tech companies are well aware of, yet continually fail to address. The guiding principle of the ‘move fast and break things’ culture often seems to be that it is better to apologise than ask permission. We need a radical shift in the balance of power between the platforms and the people,” he added.
“The age of inadequate self-regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct written into law by Parliament, and overseen by an independent regulator.”
The committee says it expects the government to respond to its recommendations within two months — noting rather dryly: “We hope that this will be much more comprehensive, practical, and constructive than their response to the Interim Report, published in October 2018. Several of our recommendations were not substantively answered and there is now an urgent need for the Government to respond to them.”
It also makes a point of including an analysis of Internet traffic to the government’s own response to its preliminary report last year — in which it highlights a “high proportion” of online visitors hailing from Russian cities including Moscow and Saint Petersburg…
Source: Web and publications unit, House of Commons
“This itself demonstrates the very clear interest from Russia in what we have had to say about their activities in overseas political campaigns,” the committee remarks, criticizing the government response to its preliminary report for claiming there’s no evidence of “successful” Russian interference in UK elections and democratic processes.
“It is surely a sufficient matter of concern that the Government has acknowledged that interference has occurred, irrespective of the lack of evidence of impact. The Government should be conducting analysis to understand the extent of Russian targeting of voters during elections,” it adds.
Three senior managers knew
Another interesting tidbit from the report is confirmation that the ICO has shared the names of three “senior managers” at Facebook who knew about the Cambridge Analytica data breach prior to the first press report in December 2015 — which is the date Facebook has repeatedly told the committee was when it first learnt of the breach, contradicting what the ICO found via its own investigations.
The committee’s report does not disclose the names of the three senior managers — saying the ICO has asked the names to remain confidential (we’ve reached out to the ICO to ask why it is not making this information public) — and implies the execs did not relay the information to Zuckerberg.
The committee dubs this an example of “a profound failure” of internal governance, and also brands it evidence of “fundamental weakness” in how Facebook manages its responsibilities to users.
Here’s the committee’s account of that detail:
We were keen to know when and which people working at Facebook first knew about the GSR/Cambridge Analytica breach. The ICO confirmed, in correspondence with the Committee, that three “senior managers” were involved in email exchanges earlier in 2015 concerning the GSR breach before December 2015, when it was first reported by The Guardian. At the request of the ICO, we have agreed to keep the names confidential, but it would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case.
The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.
from iraidajzsmmwtv https://tcrn.ch/2BHGGRI via IFTTT
0 notes
Text
The Future of College Is Online, and It’s Cheaper
In 2019, the average cost of attending a four-year private college was over $200,000. For a four-year public college, it was over $100,000. Georgia Tech, a top engineering school, launched an online masters in computer science in 2014. The degree costs just $7,000 (one-sixth the cost of its in-person program), and the school now has nearly 10,000 students enrolled, making it the largest computer science program in the country. Similarly, in 2015, the University of Illinois launched an online M.B.A. for $22,000, a fraction of the cost of most business schools. In order to provide a forum for networking and experiential learning, critical to the business school experience, the university created micro-immersions, where students can connect with other students and work on live projects at companies at a regional level.
If you were a college administrator, given the impact of the pandemic, would you invest more scarce budget resources in developing (1) traditional campus-based face-to-face classes for more personal interactions or (2) online degrees that are accessible to more students? Why? What are the ethics underlying your decision?
Forty years ago, going to college in America was a reliable pathway for upward mobility. Today, it has become yet another 21st-century symbol of privilege for the wealthy. Through this period, tuition rates soared 260 percent, double the rate of inflation. In 2019, the average cost of attending a four-year private college was over $200,000. For a four-year public college, it was over $100,000. To sustain these prices, more students are now admitted from the top 1 percent of the income scale than the entire bottom 40 percent at the top 80 colleges. Universities have also opened the floodgates to wealthy international students, willing to pay full tuition for the American brand.
Covid-19 is about to ravage that business model. Mass unemployment is looming large and is likely to put college out of reach for many. With America now the epicenter of the pandemic and bungling its response, many students are looking to defer enrollment. Foreign students are questioning whether to register at all, with greater uncertainty around visas and work prospects. The “Trump Effect” had already begun to cause declining foreign student enrollment over the past three years.
The mightiest of institutions are bracing for the worst. Harvard, home to the country’s largest endowment, recently announced drastic steps to manage the fallout, including salary cuts for its leadership, hiring freezes and cuts in discretionary spending. Most other universities have been forced to make similar decisions, and are nervous that if they continue with online teaching this fall, students will demand at least a partial remission of tuition.
Up until now, online education has been relegated to the equivalent of a hobby at most universities. With the pandemic, it has become a backup plan. But if universities embrace this moment strategically, online education could expand access exponentially and drop its cost by orders of magnitude — all while shoring up revenues for universities in a way that is more recession-proof, policy-proof and pandemic-proof.
To be clear, the scramble to move online over just a few days this March did not go well. Faculty members were forced to revamp lesson plans overnight. “Zoom-bombers” took advantage of lax privacy protocols. Students fled home, with many in faraway time zones prolonging jet lag just to continue synchronous learning. Not surprisingly, the experience for both students and faculty has left much to be desired. According to one survey, more than 75 percent of students do not feel they received a quality learning experience after classrooms closed.
But what surveys miss are the numerous spirited efforts to break new ground, of the kind only a crisis can spur.
One professor at New York University’s Tisch School of the Arts taught a drama course that allows students to “act” with each other in virtual reality using Oculus Quest headsets. A music professor at Stanford trained his students on software that allows musicians in different locations to perform together using internet streaming. Professors are pioneering new methods and ed-tech companies are developing platforms at a pace not seen before, providing a glimpse into the untapped potential of online education. Not to be forgotten, of course, is the fact that just a few years ago, a transition to online learning at the current scale would have been unimaginable.
Before the pandemic, most universities never truly embraced online education, at least not strategically. For years, universities have allowed professors to offer some courses online, making them accessible through aggregators such as edX or Coursera. But rarely do universities offer their most popular and prestigious degrees remotely. It is still not possible to get an M.B.A. at Stanford, a biology degree at M.I.T. or a computer science degree at Brown online.
On one hand, universities don’t want to be seen as limiting access to education, so they have dabbled in the space. But to fully embrace it might render much of the faculty redundant, reduce the exclusivity of those degrees, and threaten the very existence of the physical campus, for which vast resources have been allocated over centuries.
For good reason, many educators have been skeptical of online learning. They have questioned how discussion-based courses, which require more intimate settings, would be coordinated. They wonder how lab work might be administered. And no one doubts that the online student experience is less holistic. But universities don’t need to abandon in-person teaching for students who see the value in it.
They simply need to create “parallel” online degrees for all their core degree programs. By doing so, universities could expand their reach by thousands of students, creating the economies of scale to cut the cost of a degree by tens of thousands of dollars.
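The arithmetic behind that claim is simple amortization of fixed costs over enrollment. A back-of-the-envelope sketch in Python, with purely hypothetical figures:

    # Hypothetical figures: a fixed course-production cost spread over enrollment.
    FIXED_PRODUCTION_COST = 500_000   # filming, platform, instructional design
    MARGINAL_COST_PER_STUDENT = 300   # grading, support, bandwidth

    def cost_per_student(enrollment: int) -> float:
        return FIXED_PRODUCTION_COST / enrollment + MARGINAL_COST_PER_STUDENT

    for n in (100, 1_000, 10_000):
        print(n, round(cost_per_student(n)))
    # 100 5300
    # 1000 800
    # 10000 350

Once production is paid for, each additional student costs almost nothing to serve, which is what lets a roughly $42,000 on-campus program become a $7,000 online one.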
There are a few, but instructive, examples of prestigious universities that have already shown the way. Georgia Tech, a top engineering school, launched an online master’s in computer science in 2014. The degree costs just $7,000 (one-sixth the cost of its in-person program), and the school now has nearly 10,000 students enrolled, making it the largest computer science program in the country. Notably, the online degree has not cannibalized its on-campus revenue stream. Instead, it has opened up a prestigious degree program to a different population, mostly midcareer applicants looking for a meaningful skills upgrade.
Similarly, in 2015, the University of Illinois launched an online M.B.A. for $22,000, a fraction of the cost of most business schools. In order to provide a forum for networking and experiential learning, critical to the business school experience, the university created micro-immersions, where students can connect with other students and work on live projects at companies at a regional level.
To do this would require a major reorientation of university resources and activities. Classrooms would need to be fitted with new technology so that lectures could be simultaneously delivered to students on campus as well as across the world. Professors would need to undergo training on how to effectively teach to a blended classroom. Universities would also be well served to build competencies in content production. Today, almost all theory-based content, whether in chemistry, computer science or finance, can be produced in advance and effectively delivered asynchronously. By tapping their best-rated professors to be the stars of those productions, universities could actually raise the pedagogical standard.
There are already strong examples of this. Most biology professors, for instance, would find themselves hard pressed to match the pedagogical quality, production values and inspirational nature of Eric Lander’s online Introduction to Biology course at M.I.T. That free course has over 134,000 students enrolled this semester.
Once universities have developed a library of content, they can choose to draw from it for asynchronous delivery for years, both for their on-campus and online programs. Students may not mind. It would, after all, open up professor capacity for a larger number of live interactions. Three-hour lectures, which were never good for anyone, would become a thing of the past. Instead, a typical day might be broken up into one-hour sessions with a focus on problem-solving, Q. and A. or discussion.
Many universities are sounding bold about reopening in-person instruction this fall. The current business model requires them to, or face financial ruin. But a hasty decision driven by the financial imperative could prove lethal, and do little to help them weather a storm. The pandemic provides universities an opportunity to reimagine education around the pillars of access and affordability with the myriad tools and techniques now at their disposal. It could make them true pathways of upward mobility again.
0 notes
Link
UK parliament calls for antitrust, data abuse probe of Facebook
A final report by a British parliamentary committee which spent months last year investigating online political disinformation makes very uncomfortable reading for Facebook — with the company singled out for “disingenuous” and “bad faith” responses to democratic concerns about the misuse of people’s data.
In the report, published today, the committee has also called for Facebook’s use of user data to be investigated by the UK’s data watchdog.
In an evidence session to the committee late last year, the Information Commissioner’s Office (ICO) suggested Facebook needs to change its business model — warning the company risks burning user trust for good.
Last summer the ICO also called for an ethical pause of social media ads for election campaigning, warning of the risk of developing “a system of voter surveillance by default”.
Interrogating the distribution of ‘fake news’
The UK parliamentary enquiry looked into both Facebook’s own use of personal data to further its business interests, such as by providing access to user data to developers and advertisers in order to increase revenue and/or usage; and examined what Facebook claimed was ‘abuse’ of its platform by the disgraced (and now defunct) political data company Cambridge Analytica — which in 2014 paid a developer with access to Facebook’s developer platform to extract information on millions of Facebook users in order to build voter profiles to try to influence elections.
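By way of illustration: the extraction ran through Facebook’s own developer platform rather than any hack. A minimal sketch of the design flaw, assuming the behavior of the long-retired Graph API v1.0 (endpoint names are approximate, and the friends-data access shown here was withdrawn in 2014–15):

    import requests

    GRAPH = "https://graph.facebook.com"  # v1.0-era base URL (long retired)

    def fetch_user_and_friends(user_token: str) -> dict:
        """Illustrates the pre-2014 flaw: one user's consent exposed
        data about their entire friend list."""
        # The app's one consenting user (e.g. a quiz-taker).
        me = requests.get(f"{GRAPH}/me",
                          params={"access_token": user_token}).json()
        # Under v1.0, the same token could enumerate that user's friends,
        # people who never installed the app or saw a consent screen.
        friends = requests.get(f"{GRAPH}/me/friends",
                               params={"access_token": user_token}
                               ).json().get("data", [])
        return {"consenting_user": me, "non_consenting_friends": friends}

A few hundred thousand consenting quiz-takers could therefore fan out into profiles on tens of millions of people, which is why the committee treats this as a business-model problem rather than a one-off abuse.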
The committee’s conclusion about Facebook’s business is a damning one, with the company accused of operating a business model predicated on selling abusive access to people’s data.
“Far from Facebook acting against “sketchy” or “abusive” apps, of which action it has produced no evidence at all, it, in fact, worked with such apps as an intrinsic part of its business model,” the committee argues. “This explains why it recruited the people who created them, such as Joseph Chancellor [the co-founder of GSR, the developer which sold Facebook user data to Cambridge Analytica]. Nothing in Facebook’s actions supports the statements of Mark Zuckerberg who, we believe, lapsed into “PR crisis mode”, when its real business model was exposed.
“This is just one example of the bad faith which we believe justifies governments holding a business such as Facebook at arms’ length. It seems clear to us that Facebook acts only when serious breaches become public. This is what happened in 2015 and 2018.”
“We consider that data transfer for value is Facebook’s business model and that Mark Zuckerberg’s statement that ‘we’ve never sold anyone’s data’ is simply untrue,” the committee also concludes.
We’ve reached out to Facebook for comment on the committee’s report.
Last fall the company was issued the maximum possible fine under the relevant UK data protection law for failing to safeguard user data in the Cambridge Analytica saga, although Facebook is appealing the ICO’s penalty, claiming there’s no evidence UK users’ data was misused.
During the course of a multi-month enquiry last year investigating disinformation and fake news, the Digital, Culture, Media and Sport (DCMS) committee heard from 73 witnesses in 23 oral evidence sessions, as well as taking in 170 written submissions. In all the committee says it posed more than 4,350 questions.
Its wide-ranging, 110-page report makes detailed observations on a number of technologies and business practices across the social media, adtech and strategic communications space, and culminates in a long list of recommendations for policymakers and regulators — reiterating its call for tech platforms to be made legally liable for content.
Among the report’s main recommendations are:
clear legal liabilities for tech companies to act against “harmful or illegal content”, with the committee calling for a compulsory Code of Ethics overseen by an independent regulator with statutory powers to obtain information from companies, instigate legal proceedings and issue (“large”) fines for non-compliance
privacy law protections to cover inferred data, so that models used to make inferences about individuals are clearly regulated under UK data protection rules (a toy illustration follows this list)
a levy on tech companies operating in the UK to support enhanced regulation of such platforms
a call for the ICO to investigate Facebook’s platform practices and use of user data
a call for the Competition and Markets Authority to comprehensively “audit” the online advertising ecosystem, and also to investigate whether Facebook specifically has engaged in anti-competitive practices
changes to UK election law to take account of digital campaigning, including “absolute transparency of online political campaigning” — including “full disclosure of the targeting used” — and more powers for the Electoral Commission
a call for a government review of covert digital influence campaigns by foreign actors (plus a review of legislation in the area to consider if it’s adequate) — including the committee urging the government to launch independent investigations of recent past elections to examine “foreign influence, disinformation, funding, voter manipulation, and the sharing of data, so that appropriate changes to the law can be made and lessons can be learnt for future elections and referenda”
a requirement on social media platforms to develop tools to distinguish between “quality journalism” and low quality content sources, and/or work with existing providers to make such services available to users
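On the inferred-data recommendation above: a toy sketch, with entirely invented traits and weights, of how raw behavioral signals become inferences about a person. The committee’s argument is that the inferred output deserves the same protection as the observed input:

    # Toy illustration of "inferred data": hypothetical weights, not a real model.
    OBSERVED_LIKES = {"page:hiking_club", "page:news_outlet_a", "page:brand_x"}

    # A platform's internal model might map observed signals to scored traits.
    TRAIT_WEIGHTS = {
        "likely_parent":       {"page:brand_x": 0.6},
        "likely_left_leaning": {"page:news_outlet_a": 0.8},
        "outdoors_enthusiast": {"page:hiking_club": 0.9},
    }

    def infer_traits(likes: set, threshold: float = 0.5) -> dict:
        """Score each trait by summing the weights of matching signals."""
        scores = {
            trait: sum(w for page, w in weights.items() if page in likes)
            for trait, weights in TRAIT_WEIGHTS.items()
        }
        return {t: s for t, s in scores.items() if s >= threshold}

    print(infer_traits(OBSERVED_LIKES))
    # {'likely_parent': 0.6, 'likely_left_leaning': 0.8, 'outdoors_enthusiast': 0.9}

None of the inferred traits were ever supplied by the user, which is exactly the gap the committee wants data protection rules to close.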
Among the areas the committee’s report covers off with detailed commentary are data use and targeting; advertising and political campaigning — including foreign influence; and digital literacy.
It argues that regulation is urgently needed to restore democratic accountability and “make sure the people stay in charge of the machines”.
Ministers are due to produce a White Paper on social media safety regulation this winter and the committee writes that it hopes its recommendations will inform government thinking.
“Much has been said about the coarsening of public debate, but when these factors are brought to bear directly in election campaigns then the very fabric of our democracy is threatened,” the committee writes. “This situation is unlikely to change. What does need to change is the enforcement of greater transparency in the digital sphere, to ensure that we know the source of what we are reading, who has paid for it and why the information has been sent to us. We need to understand how the big tech companies work and what happens to our data.”
The report calls for tech companies to be regulated as a new category, “not necessarily either a ‘platform’ or a ‘publisher’”, one which legally tightens their liability for harmful content published on their platforms.
Last month another UK parliamentary committee also urged the government to place a legal ‘duty of care’ on platforms to protect users under the age of 18 — and the government said then that it has not ruled out doing so.
“Digital gangsters”
Competition concerns are also raised several times by the committee.
“Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law,” the DCMS committee writes, going on to urge the government to investigate whether Facebook specifically has been involved in any anti-competitive practices and conduct a review of its business practices towards other developers “to decide whether Facebook is unfairly using its dominant market position in social media to decide which businesses should succeed or fail”.
“The big tech companies must not be allowed to expand exponentially, without constraint or proper regulatory oversight,” it adds.
The committee suggests existing legal tools are up to the task of reining in platform power, citing privacy laws, data protection legislation, antitrust and competition law — and calling for a “comprehensive audit” of the social media advertising market by the UK’s Competition and Markets Authority, and a specific antitrust probe of Facebook’s business practices.
“If companies become monopolies they can be broken up, in whatever sector,” the committee points out. “Facebook’s handling of personal data, and its use for political campaigns, are prime and legitimate areas for inspection by regulators, and it should not be able to evade all editorial responsibility for the content shared by its users across its platforms.”
The social networking giant was the recipient of many awkward queries during the course of the committee’s enquiry but it refused repeated requests for its founder Mark Zuckerberg to testify — sending a number of lesser staffers in his stead.
That decision continues to be seized upon by the committee as evidence of a lack of democratic accountability. It also accuses Facebook of having an intentionally “opaque management structure”.
“By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both the UK Parliament and the ‘International Grand Committee’, involving members from nine legislatures from around the world,” the committee writes.
“The management structure of Facebook is opaque to those outside the business and this seemed to be designed to conceal knowledge of and responsibility for specific decisions. Facebook used the strategy of sending witnesses who they said were the most appropriate representatives, yet had not been properly briefed on crucial issues, and could not or chose not to answer many of our questions. They then promised to follow up with letters, which—unsurprisingly—failed to address all of our questions. We are left in no doubt that this strategy was deliberate.”
It doubles down on the accusation that Facebook sought to deliberately mislead its enquiry — pointing to incorrect and/or inadequate responses from staffers who did testify.
“We are left with the impression that either [policy VP] Simon Milner and [CTO] Mike Schroepfer deliberately misled the Committee or they were deliberately not briefed by senior executives at Facebook about the extent of Russian interference in foreign elections,” it suggests.
In an unusual move late last year the committee used rare parliamentary powers to seize a cache of documents related to an active US lawsuit against Facebook filed by a developer called Six4Three.
Seized cache of Facebook docs raise competition and consent questions
The cache of documents is referenced extensively in the final report, and appears to have fuelled antitrust concerns, with the committee arguing that the evidence obtained from the internal company documents “indicates that Facebook was willing to override its users’ privacy settings in order to transfer data to some app developers, to charge high prices in advertising to some developers, for the exchange of that data, and to starve some developers… of that data, thereby causing them to lose their business”.
“It seems clear that Facebook was, at the very least, in violation of its Federal Trade Commission [privacy] settlement,” the committee also argues, citing evidence from the former chief technologist of the FTC, Ashkan Soltani.
On Soltani’s evidence, it writes:
Ashkan Soltani rejected [Facebook’s] claim, saying that up until 2012, platform controls did not exist, and privacy controls did not apply to apps. So even if a user set their profile to private, installed apps would still be able to access information. After 2012, Facebook added platform controls and made privacy controls applicable to apps. However, there were ‘whitelisted’ apps that could still access user data without permission and which, according to Ashkan Soltani, could access friends’ data for nearly a decade before that time. Apps were able to circumvent users’ privacy or platform settings and access friends’ information, even when the user disabled the Platform. This was an example of Facebook’s business model driving privacy violations.
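Soltani’s testimony describes, in effect, three access regimes. A schematic sketch of that logic as described (the names and structure are hypothetical, modeling the evidence rather than Facebook’s actual systems):

    from datetime import date

    PLATFORM_CONTROLS_ADDED = date(2012, 1, 1)  # approximate, per the testimony
    WHITELISTED_APPS = {"partner_app_example"}  # hypothetical whitelist entry

    def app_can_read_friend_data(app_id: str, when: date,
                                 privacy_on: bool,
                                 platform_disabled: bool) -> bool:
        """Model of the access regimes described in Soltani's evidence."""
        if when < PLATFORM_CONTROLS_ADDED:
            # Pre-2012: privacy controls simply did not apply to apps.
            return True
        if app_id in WHITELISTED_APPS:
            # Whitelisted apps could still read friends' data without
            # permission, even if the user had disabled the Platform.
            return True
        # Post-2012 default: platform controls and privacy settings apply.
        return not (privacy_on or platform_disabled)

The whitelist branch is the one the committee focuses on: it turns a privacy setting into a default that favored commercial partners.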
While Facebook is singled out for the most eviscerating criticism in the report (and targeted for specific investigations), the committee’s long list of recommendations are addressed at social media businesses and online advertisers generally.
It also calls for far more transparency from platforms, writing that: “Social media companies need to be more transparent about their own sites, and how they work. Rather than hiding behind complex agreements, they should be informing users of how their sites work, including curation functions and the way in which algorithms are used to prioritise certain stories, news and videos, depending on each user’s profile. The more people know how the sites work, and how the sites use individuals’ data, the more informed we shall all be, which in turn will make choices about the use and privacy of sites easier to make.”
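The “curation functions” the committee wants explained are, at bottom, ranking functions. A deliberately simplified sketch of the kind of thing such disclosure would describe (all features and weights are invented for illustration):

    # Invented weights: a toy feed-ranking function of the kind the
    # committee wants platforms to explain to their users.
    def story_score(story: dict, user: dict) -> float:
        relevance = len(set(story["topics"]) & set(user["interests"]))
        return (2.0 * relevance                        # personalization
                + 1.5 * story["predicted_engagement"]  # expected clicks/shares
                + 0.5 / story["recency_hours"])        # fresher ranks higher

    user = {"interests": ["politics", "tech"]}
    stories = [
        {"id": 1, "topics": ["politics"], "predicted_engagement": 0.9, "recency_hours": 2},
        {"id": 2, "topics": ["sport"], "predicted_engagement": 0.4, "recency_hours": 1},
    ]
    ranked = sorted(stories, key=lambda s: story_score(s, user), reverse=True)
    print([s["id"] for s in ranked])  # [1, 2]

Even a disclosure this crude would tell a user that engagement predictions, not editorial judgment, decide what they see first.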
The committee also urges a raft of updates to UK election law — branding it “not fit for purpose” in the digital era.
Its interim report, published last summer, made many of the same recommendations.
Russian interest
But despite pressing the government for urgent action, the committee got only a cool response from ministers then, with the government tied up trying to shape a response to the 2016 Brexit vote which split the country (with social media’s election-law-deforming help). Instead it opted for a ‘wait and see’ approach.
The government accepted just three of the preliminary report’s forty-two recommendations outright, and fully rejected four.
Nonetheless, the committee has doubled down on its preliminary conclusions, reiterating earlier recommendations and pushing the government once again to act.
It cites fresh evidence, including from additional testimony, as well as pointing to other reports (such as the recently published Cairncross Review) which it argues back up some of the conclusions reached.
“Our inquiry over the last year has identified three big threats to our society. The challenge for the year ahead is to start to fix them; we cannot delay any longer,” writes Damian Collins MP, chair of the DCMS Committee, in a statement. “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use every day. Much of this is directed from agencies working in foreign countries, including Russia.
“The big tech companies are failing in the duty of care they owe to their users to act against harmful content, and to respect their data privacy rights. Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers who rely on this platform to reach their customers.”
“These are issues that the major tech companies are well aware of, yet continually fail to address. The guiding principle of the ‘move fast and break things’ culture often seems to be that it is better to apologise than ask permission. We need a radical shift in the balance of power between the platforms and the people,” he added.
“The age of inadequate self-regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct written into law by Parliament, and overseen by an independent regulator.”
The committee says it expects the government to respond to its recommendations within two months — noting rather dryly: “We hope that this will be much more comprehensive, practical, and constructive than their response to the Interim Report, published in October 2018. Several of our recommendations were not substantively answered and there is now an urgent need for the Government to respond to them.”
It also makes a point of including an analysis of Internet traffic to the government’s own response to its preliminary report last year — in which it highlights a “high proportion” of online visitors hailing from Russian cities including Moscow and Saint Petersburg…
Source: Web and publications unit, House of Commons
“This itself demonstrates the very clear interest from Russia in what we have had to say about their activities in overseas political campaigns,” the committee remarks, criticizing the government response to its preliminary report for claiming there’s no evidence of “successful” Russian interference in UK elections and democratic processes.
“It is surely a sufficient matter of concern that the Government has acknowledged that interference has occurred, irrespective of the lack of evidence of impact. The Government should be conducting analysis to understand the extent of Russian targeting of voters during elections,” it adds.
Three senior managers knew
Another interesting tidbit from the report is confirmation that the ICO has shared the names of three “senior managers” at Facebook who knew about the Cambridge Analytica data breach prior to the first press report in December 2015 — which is the date Facebook has repeatedly told the committee was when it first learnt of the breach, contradicting what the ICO found via its own investigations.
The committee’s report does not disclose the names of the three senior managers — saying the ICO has asked for the names to remain confidential (we’ve reached out to the ICO to ask why it is not making this information public) — and implies the execs did not relay the information to Zuckerberg.
The committee dubs this an example of “a profound failure” of internal governance, and brands it evidence of “fundamental weakness” in how Facebook manages its responsibilities to users.
Here’s the committee’s account of that detail:
We were keen to know when and which people working at Facebook first knew about the GSR/Cambridge Analytica breach. The ICO confirmed, in correspondence with the Committee, that three “senior managers” were involved in email exchanges earlier in 2015 concerning the GSR breach before December 2015, when it was first reported by The Guardian. At the request of the ICO, we have agreed to keep the names confidential, but it would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case.
The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.
Via Natasha Lomas, Social – TechCrunch: https://tcrn.ch/2BHGGRI
0 notes
Text
UK parliament calls for antitrust, data abuse probe of Facebook
A final report by a British parliamentary committee which spent months last year investigating online political disinformation makes very uncomfortable reading for Facebook — with the company singled out for “disingenuous” and “bad faith” responses to democratic concerns about the misuse of people’s data.
In the report, published today, the committee has also called for Facebook’s use of user data to be investigated by the UK’s data watchdog.
In an evidence session to the committee late last year, the Information Commissioner’s Office (ICO) suggested Facebook needs to change its business model — warning the company risks burning user trust for good.
Last summer the ICO also called for an ethical pause of social media ads for election campaigning, warning of the risk of developing “a system of voter surveillance by default”.
Interrogating the distribution of ‘fake news’
The UK parliamentary enquiry looked into both Facebook’s own use of personal data to further its business interests, such as by providing access to user data to developers and advertisers in order to increase revenue and/or usage; and examined what Facebook claimed as ‘abuse’ of its platform by the disgraced (and now defunct) political data company Cambridge Analytica — which in 2014 paid a developer with access to Facebook’s developer platform to extract information on millions of Facebook users in build voter profiles to try to influence elections.
The committee’s conclusion about Facebook’s business is a damning one with the company accused of operating a business model that’s predicated on selling abusive access to people’s data.
“Far from Facebook acting against “sketchy” or “abusive” apps, of which action it has produced no evidence at all, it, in fact, worked with such apps as an intrinsic part of its business model,” the committee argues. “This explains why it recruited the people who created them, such as Joseph Chancellor [the co-founder of GSR, the developer which sold Facebook user data to Cambridge Analytica]. Nothing in Facebook’s actions supports the statements of Mark Zuckerberg who, we believe, lapsed into “PR crisis mode”, when its real business model was exposed.
“This is just one example of the bad faith which we believe justifies governments holding a business such as Facebook at arms’ length. It seems clear to us that Facebook acts only when serious breaches become public. This is what happened in 2015 and 2018.”
“We consider that data transfer for value is Facebook’s business model and that Mark Zuckerberg’s statement that ‘we’ve never sold anyone’s data” is simply untrue’,” the committee also concludes.
We’ve reached out to Facebook for comment on the committee’s report.
Last fall the company was issued the maximum possible fine under relevant UK data protection law for failing to safeguard user data from Cambridge Analytica saga. Although Facebook is appealing the ICO’s penalty, claiming there’s no evidence UK users’ data got misused.
During the course of a multi-month enquiry last year investigating disinformation and fake news, the Digital, Culture, Media and Sport (DCMS) committee heard from 73 witnesses in 23 oral evidence sessions, as well as taking in 170 written submissions. In all the committee says it posed more than 4,350 questions.
Its wide-ranging, 110-page report makes detailed observations on a number of technologies and business practices across the social media, adtech and strategic communications space, and culminates in a long list of recommendations for policymakers and regulators — reiterating its call for tech platforms to be made legally liable for content.
Among the report’s main recommendations are:
clear legal liabilities for tech companies to act against “harmful or illegal content”, with the committee calling for a compulsory Code of Ethics overseen by a independent regulatory with statutory powers to obtain information from companies; instigate legal proceedings and issue (“large”) fines for non-compliance
privacy law protections to cover inferred data so that models used to make inferences about individuals are clearly regulated under UK data protection rules
a levy on tech companies operating in the UK to support enhanced regulation of such platforms
a call for the ICO to investigate Facebook’s platform practices and use of user data
a call for the Competition Markets Authority to comprehensively “audit” the online advertising ecosystem, and also to investigate whether Facebook specifically has engaged in anti-competitive practices
changes to UK election law to take account of digital campaigning, including “absolute transparency of online political campaigning” — including “full disclosure of the targeting used” — and more powers for the Electoral Commission
a call for a government review of covert digital influence campaigns by foreign actors (plus a review of legislation in the area to consider if it’s adequate) — including the committee urging the government to launch independent investigations of recent past elections to examine “foreign influence, disinformation, funding, voter manipulation, and the sharing of data, so that appropriate changes to the law can be made and lessons can be learnt for future elections and referenda”
a requirement on social media platforms to develop tools to distinguish between “quality journalism” and low quality content sources, and/or work with existing providers to make such services available to users
Among the areas the committee’s report covers off with detailed commentary are data use and targeting; advertising and political campaigning — including foreign influence; and digital literacy.
It argues that regulation is urgently needed to restore democratic accountability and “make sure the people stay in charge of the machines”.
Ministers are due to produce a White Paper on social media safety regulation this winter and the committee writes that it hopes its recommendations will inform government thinking.
“Much has been said about the coarsening of public debate, but when these factors are brought to bear directly in election campaigns then the very fabric of our democracy is threatened,” the committee writes. “This situation is unlikely to change. What does need to change is the enforcement of greater transparency in the digital sphere, to ensure that we know the source of what we are reading, who has paid for it and why the information has been sent to us. We need to understand how the big tech companies work and what happens to our data.”
The report calls for tech companies to be regulated as a new category “not necessarily either a ‘platform’ or a ‘publisher”, but which legally tightens their liability for harmful content published on their platforms.
Last month another UK parliamentary committee also urged the government to place a legal ‘duty of care’ on platforms to protect users under the age of 18 — and the government said then that it has not ruled out doing so.
“Digital gangsters”
Competition concerns are also raised several times by the committee.
“Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law,” the DCMS committee writes, going on to urge the government to investigate whether Facebook specifically has been involved in any anti-competitive practices and conduct a review of its business practices towards other developers “to decide whether Facebook is unfairly using its dominant market position in social media to decide which businesses should succeed or fail”.
“The big tech companies must not be allowed to expand exponentially, without constraint or proper regulatory oversight,” it adds.
The committee suggests existing legal tools are up to the task of reining in platform power, citing privacy laws, data protection legislation, antitrust and competition law — and calling for a “comprehensive audit” of the social media advertising market by the UK’s Competition and Markets Authority, and a specific antitrust probe of Facebook’s business practices.
“If companies become monopolies they can be broken up, in whatever sector,” the committee points out. “Facebook’s handling of personal data, and its use for political campaigns, are prime and legitimate areas for inspection by regulators, and it should not be able to evade all editorial responsibility for the content shared by its users across its platforms.”
The social networking giant was the recipient of many awkward queries during the course of the committee’s enquiry but it refused repeated requests for its founder Mark Zuckerberg to testify — sending a number of lesser staffers in his stead.
That decision continues to be seized upon by the committee as evidence of a lack of democratic accountability. It also accuses Facebook of having an intentionally “opaque management structure”.
“By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both the UK Parliament and the ‘International Grand Committee’, involving members from nine legislatures from around the world,” the committee writes.
“The management structure of Facebook is opaque to those outside the business and this seemed to be designed to conceal knowledge of and responsibility for specific decisions. Facebook used the strategy of sending witnesses who they said were the most appropriate representatives, yet had not been properly briefed on crucial issues, and could not or chose not to answer many of our questions. They then promised to follow up with letters, which—unsurprisingly—failed to address all of our questions. We are left in no doubt that this strategy was deliberate.”
It doubles down on the accusation that Facebook sought to deliberately mislead its enquiry — pointing to incorrect and/or inadequate responses from staffers who did testify.
“We are left with the impression that either [policy VP] Simon Milner and [CTO] Mike Schroepfer deliberately misled the Committee or they were deliberately not briefed by senior executives at Facebook about the extent of Russian interference in foreign elections,” it suggests.
In an unusual move late last year the committee used rare parliamentary powers to seize a cache of documents related to an active US lawsuit against Facebook filed by a developer called Six4Three.
Seized cache of Facebook docs raise competition and consent questions
The cache of documents is referenced extensively in the final report, and appears to have fuelled antitrust concerns, with the committee arguing that the evidence obtained from the internal company documents “indicates that Facebook was willing to override its users’ privacy settings in order to transfer data to some app developers, to charge high prices in advertising to some developers, for the exchange of that data, and to starve some developers… of that data, thereby causing them to lose their business”.
“It seems clear that Facebook was, at the very least, in violation of its Federal Trade Commission [privacy] settlement,” the committee also argues, citing evidence from the former chief technologist of the FTC, Ashkan Soltani .
On Soltani’s evidence, it writes:
Ashkan Soltani rejected [Facebook’s] claim, saying that up until 2012, platform controls did not exist, and privacy controls did not apply to apps. So even if a user set their profile to private, installed apps would still be able to access information. After 2012, Facebook added platform controls and made privacy controls applicable to apps. However, there were ‘whitelisted’ apps that could still access user data without permission and which, according to Ashkan Soltani, could access friends’ data for nearly a decade before that time. Apps were able to circumvent users’ privacy of platform settings and access friends’ information, even when the user disabled the Platform. This was an example of Facebook’s business model driving privacy violations.
While Facebook is singled out for the most eviscerating criticism in the report (and targeted for specific investigations), the committee’s long list of recommendations are addressed at social media businesses and online advertisers generally.
It also calls for far more transparency from platforms, writing that: “Social media companies need to be more transparent about their own sites, and how they work. Rather than hiding behind complex agreements, they should be informing users of how their sites work, including curation functions and the way in which algorithms are used to prioritise certain stories, news and videos, depending on each user’s profile. The more people know how the sites work, and how the sites use individuals’ data, the more informed we shall all be, which in turn will make choices about the use and privacy of sites easier to make.”
The committee also urges a raft of updates to UK election law — branding it “not fit for purpose” in the digital era.
Its interim report, published last summer, made many of the same recommendations.
Russian interest
But despite pressing the government for urgent action there was only a cool response from ministers then, with the government remaining tied up trying to shape a response to the 2016 Brexit vote which split the country (with social media’s election-law-deforming help). Instead it opted for a ‘wait and see‘ approach.
The government accepted just three of the preliminary report’s forty-two recommendations outright, and fully rejected four.
Nonetheless, the committee has doubled down on its preliminary conclusions, reiterating earlier recommendations and pushing the government once again to act.
It cites fresh evidence, including from additional testimony, as well as pointing to other reports (such as the recently published Cairncross Review) which it argues back up some of the conclusions reached.
“Our inquiry over the last year has identified three big threats to our society. The challenge for the year ahead is to start to fix them; we cannot delay any longer,” writes Damian Collins MP and chair of the DCMS Committee, in a statement. “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use every day. Much of this is directed from agencies working in foreign countries, including Russia.
“The big tech companies are failing in the duty of care they owe to their users to act against harmful content, and to respect their data privacy rights. Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers who rely on this platform to reach their customers.”
“These are issues that the major tech companies are well aware of, yet continually fail to address. The guiding principle of the ‘move fast and break things’ culture often seems to be that it is better to apologise than ask permission. We need a radical shift in the balance of power between the platforms and the people,” he added.
“The age of inadequate self-regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct written into law by Parliament, and overseen by an independent regulator.”
The committee says it expects the government to respond to its recommendations within two months — noting rather dryly: “We hope that this will be much more comprehensive, practical, and constructive than their response to the Interim Report, published in October 2018. Several of our recommendations were not substantively answered and there is now an urgent need for the Government to respond to them.”
It also makes a point of including an analysis of Internet traffic to the government’s own response to its preliminary report last year, which highlights a “high proportion” of online visitors hailing from Russian cities including Moscow and Saint Petersburg…
[Chart: visits to the government’s response, by city of origin. Source: Web and publications unit, House of Commons]
“This itself demonstrates the very clear interest from Russia in what we have had to say about their activities in overseas political campaigns,” the committee remarks, criticizing the government response to its preliminary report for claiming there’s no evidence of “successful” Russian interference in UK elections and democratic processes.
“It is surely a sufficient matter of concern that the Government has acknowledged that interference has occurred, irrespective of the lack of evidence of impact. The Government should be conducting analysis to understand the extent of Russian targeting of voters during elections,” it adds.
Three senior managers knew
Another interesting tidbit from the report is confirmation that the ICO has shared with the committee the names of three “senior managers” at Facebook who knew about the Cambridge Analytica data breach before the first press report in December 2015 — the date Facebook has repeatedly told the committee it first learnt of the breach, contradicting what the ICO found via its own investigations.
The committee’s report does not disclose the names of the three senior managers — saying the ICO has asked the names to remain confidential (we’ve reached out to the ICO to ask why it is not making this information public) — and implies the execs did not relay the information to Zuckerberg.
The committee dubs this an example of “a profound failure” of internal governance, and brands it evidence of “fundamental weakness” in how Facebook manages its responsibilities to users.
Here’s the committee’s account of that detail:
We were keen to know when and which people working at Facebook first knew about the GSR/Cambridge Analytica breach. The ICO confirmed, in correspondence with the Committee, that three “senior managers” were involved in email exchanges earlier in 2015 concerning the GSR breach before December 2015, when it was first reported by The Guardian. At the request of the ICO, we have agreed to keep the names confidential, but it would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case.
The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.
source https://techcrunch.com/2019/02/17/uk-parliament-calls-for-antitrust-data-abuse-probe-of-facebook/
Text
How smartphones and social media are changing Christianity
By Chris Stokel-Walker, BBC, 23 February 2017
When the Reverend Pete Phillips first arrived in Durham nine years ago, he was ejected from the city’s cathedral. He had been reading the Bible on his mobile phone in the pews. Phones were not allowed in the holy place, and the individual who accosted him would not believe that he was using his phone for worship and asked him to leave. “I was a bit miffed about that,” says Phillips, who is director of the Codec Research Centre for Digital Theology at Durham University in the UK. “But that was 2008.”
Next year Durham Cathedral will have been standing for 1,000 years. But its phone policy is now up to date. “They allow people to take photos, to use phones for devotional reasons--whatever they want to do,” says Phillips. “The attitude has changed because to restrict people from mobile phone use now is to ask them to cut their arm off.”
This more relaxed approach to phones is not the only tech-related update the Church has undergone in the past few years. The rise of apps and social media is changing the way many of the world’s two billion Christians worship--and even what it means to be religious.
The Reverend Liam Beadle became Yorkshire’s youngest vicar when he took up his role at St Mary’s Anglican Church in Honley, a village of 6,000 people five miles south of Huddersfield. He runs his parish’s Twitter account. A colleague runs the church community’s Facebook profile. The Bishop of Leeds, the Right Reverend Nick Baines--who is the head of Beadle’s diocese--was one of the first bishops to start a blog and is known in the church as the “blogging bishop”.
But Beadle contrasts the Church’s approach to social media with its reaction to the printing press. “The difference between then and now is that with the invention of the printing press we were proactive,” he says. “With the advent of social media, I think we are being reactive, we’re jumping on the bandwagon.”
The ubiquity of smartphones and social media is changing the way people practise their religion. Faiths are adopting online technologies to make it easier for people to communicate ideas and worship, says Phillips. “But that technology has shaped religious people themselves and changed their behaviour.”
Many people scrolling through their phones in Christian churches are probably looking at a Bible app called YouVersion, which has been installed more than 260 million times worldwide since its launch in 2008. Similarly popular apps exist for the Torah and Koran.
“One of the first things Christians did with the computer was to put the Bible into digital formats,” says Phillips. Those digitised Bibles then made their way onto phones. “To some extent, the mobile phone Bible is now replacing the book Bible.”
According to the company behind YouVersion, people have spent more than 235 billion minutes using the app and have highlighted 636 million Bible verses. But reading the Bible in this way could be changing people’s overall sense of it. “If you go to the Bible as a paper book, it’s quite large and complicated and you’ve got to thumb through it,” says Phillips.
“But you know that Revelations is the last book and Genesis is the first and Psalms is in between. With a digital version you don’t get any of that, you don’t get the boundaries. You don’t flick through: you just go to where you’ve asked it to go to, and you’ve no sense of what came before or after.”
Quite how interacting with the Bible in bite-sized nuggets might affect people’s views of it is now being explored by researchers like Phillips. The way religious scriptures are read can influence how they are interpreted. For example, studies suggest that text read on screens is generally taken more literally than text read in books. Aesthetic features of a text, such as its broader themes and emotional content, are also more likely to be drawn out when it is read as a book.
In a religious text, that distinction can be crucial. “When you’re on a screen, you tend to miss out all the feeling stuff and go straight for the information,” says Phillips. “It’s a flat kind of reading, which the Bible wasn’t written for. You end up reading the text as though it was Wikipedia, rather than it being a sacred text in itself.”
For many, it’s no longer necessary to set foot in a church. In the US, one in five people who identify as Catholic and one in four Protestants seldom or never attend organised services, according to a survey conducted by the Pew Research Center.
Apps and social media accounts tweeting out Bible verses allow a private expression of faith that takes place between a person and their phone screen. And the ability to pick and choose means they can avoid doctrine that does not appeal. A lot of people who consider themselves active Christians may not even strictly believe in God, Jesus or the acts described in the Bible.
“A new kind of mutated Christianity for a digital age is appearing,” says Phillips. “One that follows many of the ethics of the secular world.” Known as moralistic therapeutic deism, this form of belief is focused more on the charitable and moral side of the Bible--the underlying tenets of religion, rather than the notion that the Universe was created by an all-seeing, all-powerful leader.
This new form of religion was first described by sociologists in 2005, but it has been supercharged by the internet and social media. “People are looking for a more personalised religious experience,” says Heidi Campbell at Texas A&M University, who studies religion and digital culture.
“Millennials prefer this generalised picture of God rather than an interventionist God, and they prefer God to Jesus, because he’s non-specific,” says Phillips. “He stands behind them and allows them to get on with their own lives rather than Jesus, who comes in and interferes with everything.”
Sharing Bible verses on social media lets worshippers find their own readings rather than sitting through ones chosen by a priest every Sunday.
Pick-and-mix religious beliefs are not new. But it is easier than ever to fashion an individualised faith. “The internet and social media help people to do it in more concrete ways,” says Campbell. “We have more access to more information, more viewpoints, and we can create a spiritual rhythm and path that’s more personalised.”
And that includes bringing sacred figures into memes. Story Time Jesus--where classical religious iconography is overlaid with bold text that describes religious verses in colloquial language--became a viral meme in 2012 and has remained popular since. Others include Bunny Christ and Republican Jesus.
Many of these memes may have started as jokes, but they are being used to spread religious ideas too. “People are using memes as a way to provoke debate about religion and affirm beliefs,” says Campbell. “You can’t meme a theological truth in depth but you can summarise the essence to draw people’s attention, using them as teasers.” That applies to tweeting too. There are churches around the world that encourage their congregations to live-tweet sermons.
It’s a source of friction, however. A few years ago a UK cathedral started live-tweeting its services. “There were questions about how appropriate it was,” says Beadle. “I think the jury’s still out on that one. There probably is a case to be made that if you’re on Twitter you’re not engaging as fully as when you’re not on Twitter.”
On top of that, there are concerns that a series of short tweets is not an appropriate way to represent complex and subtle concepts.
Religion of all hues--not just Christianity--is becoming less about the preacher in the pulpit, says Campbell. “Digital is all about two-way communication. People come with a certain expectation of what a community looks like and what freedom they’ll have, and religious institutions need to either adapt to that or be an exception.”
If nothing else, organised faith is good at adapting--Christianity has been reinventing itself for nearly 2,000 years. Smartphones and social media are just the latest developments to force a change.