#how indigenous groups are leading the way on data privacy
goldislops · 2 months ago
How Indigenous Groups Are Leading the Way on Data Privacy
Indigenous groups are developing data storage technology that gives users privacy and control. Could their work influence those fighting back against invasive apps?
Rina Diane Caballar
A person in a purple T-shirt walking in a forest
A member of the Wayana people in the Amazon rain forest in Maripasoula, French Guiana. Some in the Wayana community use the app Terrastories as part of their mapping project.
Emeric Fohlen/NurPhoto via Getty Images
Even as Indigenous communities find increasingly helpful uses for digital technology, many worry that outside interests could take over their data and profit from it, much like colonial powers plundered their physical homelands. But now some Indigenous groups are reclaiming control by developing their own data protection technologies—work that demonstrates how ordinary people have the power to sidestep the tech companies and data brokers who hold and sell the most intimate details of their identities, lives and cultures.
When governments, academic institutions or other external organizations gather information from Indigenous communities, they can withhold access to it or use it for other purposes without the consent of these communities.
“The threats of data colonialism are real,” says Tahu Kukutai, a professor at New Zealand’s University of Waikato and a founding member of Te Mana Raraunga, the Māori Data Sovereignty Network. “They’re a continuation of old processes of extraction and exploitation of our land—the same is being done to our information.”
To shore up their defenses, some Indigenous groups are developing new privacy-first storage systems that give users control and agency over all aspects of this information: what is collected and by whom, where it’s stored, how it’s used and, crucially, who has access to it.
Storing data in a user’s device—rather than in the cloud or in centralized servers controlled by a tech company—is an essential privacy feature of these technologies. Rudo Kemper is founder of Terrastories, a free and open-source app co-created with Indigenous communities to map their land and share stories about it. He recalls a community in Guyana that was emphatic about having an offline, on-premise installation of the Terrastories app. To members of this group, the issue was more than just the lack of Internet access in the remote region where they live. “To them, the idea of data existing in the cloud is almost like the knowledge is leaving the territory because it’s not physically present,” Kemper says.
Likewise, creators of Our Data Indigenous, a digital survey app designed by academic researchers in collaboration with First Nations communities across Canada, chose to store their database in local servers in the country rather than in the cloud. (Canada has strict regulations on disclosing personal information without prior consent.) In order to access this information on the go, the app’s developers also created a portable backpack kit that acts as a local area network without connections to the broader Internet. The kit includes a laptop, battery pack and router, with data stored on the laptop. This allows users to fill out surveys in remote locations and back up the data immediately without relying on cloud storage.
Āhau, a free and open-source app developed by and for Māori to record ancestry data, maintain tribal registries and share cultural narratives, takes a similar approach. A tribe can create its own Pātaka (the Māori word for storehouse), or community server, which is simply a computer running a database connected to the Internet. From the Āhau app, tribal members can then connect to this Pātaka via an invite code, or they can set up their database and send invite codes to specific tribal or family members. Once connected, they can share ancestry data and records with one another. All of the data are encrypted and stored directly on the Pātaka.
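To make the pattern concrete, here is a minimal sketch of a community-controlled store of the kind described: records encrypted at rest on a single machine, with one-time invite codes deciding who may connect. All names here (CommunityStore, create_invite and so on) are hypothetical, and Āhau's real implementation differs in detail.

```python
# Illustrative sketch only: encrypted records on a community-run server,
# joined via one-time invite codes. Not Āhau's actual code or API.
import secrets
from cryptography.fernet import Fernet  # pip install cryptography

class CommunityStore:
    """A Pātaka-style storehouse: data never leaves this machine."""

    def __init__(self) -> None:
        self._fernet = Fernet(Fernet.generate_key())  # key stays on the server
        self._records: dict[str, bytes] = {}
        self._invites: set[str] = set()

    def create_invite(self) -> str:
        """Generate a one-time invite code to send to a tribal or family member."""
        code = secrets.token_urlsafe(8)
        self._invites.add(code)
        return code

    def join(self, code: str) -> bool:
        """Redeem an invite code; codes are single-use, unknown codes are refused."""
        if code in self._invites:
            self._invites.remove(code)
            return True
        return False

    def put(self, record_id: str, text: str) -> None:
        """Encrypt and store a record locally."""
        self._records[record_id] = self._fernet.encrypt(text.encode())

    def get(self, record_id: str) -> str:
        """Decrypt a stored record for a connected member."""
        return self._fernet.decrypt(self._records[record_id]).decode()

store = CommunityStore()
invite = store.create_invite()          # shared out-of-band with a member
assert store.join(invite)               # member connects with the code
store.put("ancestry/1", "whānau record")
print(store.get("ancestry/1"))
```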
Another privacy feature of Indigenous-led apps is a more customized and granular level of access and permissions. With Terrastories, for instance, most maps and stories are only viewable by members who have logged in to the app using their community’s credentials—but certain maps and stories can also be made publicly viewable to those who do not have a login. Adding or editing stories requires editor access, while creating new users and modifying map settings requires administrative access.
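A rough sketch of how such tiers can be expressed in code follows; the role names and checks are illustrative assumptions rather than Terrastories' actual access-control logic.

```python
# Generic sketch of tiered permissions like those described above.
from enum import IntEnum

class Role(IntEnum):
    PUBLIC = 0   # no login: sees only maps/stories marked publicly viewable
    MEMBER = 1   # community login: sees restricted maps and stories
    EDITOR = 2   # may add or edit stories
    ADMIN = 3    # may create users and modify map settings

def can_view(role: Role, is_public: bool) -> bool:
    return is_public or role >= Role.MEMBER

def can_edit_stories(role: Role) -> bool:
    return role >= Role.EDITOR

def can_administer(role: Role) -> bool:
    return role >= Role.ADMIN

assert can_view(Role.PUBLIC, is_public=True)        # public story, no login
assert not can_view(Role.PUBLIC, is_public=False)   # restricted story needs login
assert can_edit_stories(Role.EDITOR) and not can_edit_stories(Role.MEMBER)
```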
For Our Data Indigenous, access levels correspond to the ways communities can use the app. They can conduct surveys using an offline backpack kit or generate a unique link to the survey that invites community members to complete it online. For mobile use, they can download the app from Google Play or Apple’s App Store to fill out surveys. The last two methods do require an Internet connection and the use of app marketplaces. But no information about the surveys is collected, and no identifying information about individual survey participants is stored, according to Shanna Lorenz, an associate professor at Occidental College in Los Angeles and a product manager and education facilitator at Our Data Indigenous.
Such efforts to protect data privacy go beyond the abilities of the technology involved to also encompass the design process. Some Indigenous communities have created codes of use that people must follow to get access to community data. And most tech platforms created by or with an Indigenous community follow that group’s specific data principles. Āhau, for example, adheres to the Te Mana Raraunga principles of Māori data sovereignty. These include giving Māori communities authority over their information and acknowledging the relationships they have with it; recognizing the obligations that come with managing data; ensuring information is used for the collective benefit of communities; practicing reciprocity in terms of respect and consent; and exercising guardianship when accessing and using data. Meanwhile Our Data Indigenous is committed to the First Nations principles of ownership, control, access and possession (OCAP). “First Nations communities are setting their own agenda in terms of what kinds of information they want to collect,” especially around health and well-being, economic development, and cultural and language revitalization, among others, Lorenz says. “Even when giving surveys, they’re practicing and honoring local protocols of community interaction.”
Crucially, Indigenous communities are involved in designing these data management systems themselves, Āhau co-founder Kaye-Maree Dunn notes, acknowledging the tribal and community early adopters who helped shape the Āhau app’s prototype. “We’re taking the technology into the community so that they can see themselves reflected back in it,” she says.
For the past two years, Errol Kayseas has been working with Our Data Indigenous as a community outreach coordinator and app specialist. He attributes the app’s success largely to involving trusted members of the community. “We have our own people who know our people,” says Kayseas, who is from the Fishing Lake First Nation in Saskatchewan. “Having somebody like myself, who understands the people, is only the most positive thing in reconciliation and healing for the academic world, the government and Indigenous people together.”
This community engagement and involvement helps ensure that Indigenous-led apps are built to meet community needs in meaningful ways. Kayseas points out, for instance, that survey data collected with the Our Data Indigenous app will be used to back up proposals for government grants geared toward reparations. “It’s a powerful combination of being rooted in community and serving,” Kukutai says. “They’re not operating as individuals; everything is a collective approach, and there are clear accountabilities and responsibilities to the community.”
Even though these data privacy techniques are specific to Indigenous-led apps, they could still be applied to any other app or tech solution. Storage apps that keep data on devices rather than in the cloud could find adopters outside Indigenous communities, and a set of principles to govern data use is an idea that many tech users might support. “Technology obviously can’t solve all the problems,” Kemper says. “But it can—at least when done in a responsible way and when cocreated with communities—lead to greater control of data.”
tsmom1219 · 9 months ago
How indigenous groups are leading the way on data privacy
Read the full story in Scientific American. Indigenous groups are developing data storage technology that gives users privacy and control. Could their work influence those fighting back against invasive apps?
xtruss · 3 years ago
America Makes Aircraft Carriers, China Makes Money
— Fred Reed | Anti-Empire | May 20, 2021
First, America increasingly relies on strong-arm tactics instead of competence. For example, in the de facto 5G competition, Washington cannot offer Europe a better product at a better price, so it forbids European countries to buy from China. The US cannot compete with China in manufacturing, so it resorts to a trade war. The US cannot make the crucial EUV lithography equipment needed to produce advanced semiconductors (neither can China), but it can forbid ASML, the Dutch company, from selling it to China. Similarly, the US cannot compete with Russia on the price of natural gas to Europe, so by means of sanctions it seeks to keep Europe from buying from Russia. This is not reassuring.
Second, the Chinese are a commercial people, agile, fast to market, cutthroat, known for this throughout Asia. America is a bureaucratized military empire, torpid by comparison. America has legacy control over a few important technologies, most notably the crucial semiconductor field and the international financial system. Washington is using these to try to cripple China’s advance.
A consequence has been a realization by the Chinese that America is not a competitor but an enemy, and a subsequent explosion of investment and R&D aimed at reducing dependence on American technology. There is the well-known 1.4 trillion-dollar five-year plan to this end. One now encounters a flood of stories about advances in tech “to which China has intellectual-property rights” or similar wording.
They seem deadly serious about this. Given that Biden couldn’t tell a transistor from an ox cart, I wonder whether he realizes that every time the US pushes China to become independent in X, American firms lose the Chinese market for X, and later get to compete with Chinese X in the international market. Anyway, give Trump his due. He lit this fuse.
A few snippets
Prototype of China’s 385 mph maglev train
The above beast, developed entirely in China, is the first to use high-temperature superconducting magnets to keep the train floating just above the rails. HTSC magnets are a Big Deal because they achieve superconductivity with liquid nitrogen as coolant instead of the liquid helium required for classic superconductivity, costing, say the Chinese, a fiftieth as much as using helium. The use of HTSC is very, very slick. The train will extensively use carbon-fiber materials to keep weight down, suggesting that the Chinese cannot distinguish between a train and an airplane.
Asia Times: “China’s Hydrogen Dream is taking Shape in Shandong”
“A detailed pilot plan being worked out to transform Shandong, a regional industrial powerhouse, into a “hydrogen society” holds out much hope of delivering on the green promise.”
The article, hard to summarize in a sentence, is worth reading. As so often, the Chinese do things, try things, while the US talks, riots, imposes sanctions, sucks its thumb, and spends grimly on intercontinental nuclear bombers.
“Huawei is Developing Smart Roads Instead of Smart Cars”
“Multiple sensors, cameras, and radars embedded in the road, traffic lights, and street signs help the bus to drive safely, while it in turn transmits information back to this network…”
“Quantum Cryptography Network Spans 4,600 Km in China”
Quantum Key Distribution, QKD, allows unhackable communications. China read Ed Snowden’s book on NSA’s snooping, realized it had a problem, and set out to correct it. If this spreads to other countries—see below—much of the world could go black to American intel agencies.
The Chinese may have thought of this.
“…colleagues will further expand the network by working with partners in Austria, Italy, Russia and Canada. The team is also developing low-cost satellites and ground stations for QKD.”
The last sentence is interesting. If China begins selling genuinely secure commo gear abroad, it is going to make a lot of intel agencies very unhappy. Did I mention that the Chinese are a commercial people?
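For readers who want intuition for why QKD is considered unhackable, here is a toy simulation of BB84, the best-known QKD scheme: measuring a photon in the wrong basis randomizes it, so an eavesdropper corrupts roughly a quarter of the retained bits and reveals herself. This sketches the general principle only; it says nothing about the specific protocols China deployed.

```python
# Toy BB84 simulation: an eavesdropper who measures in a random basis
# corrupts ~25% of the bits Alice and Bob keep, so she is detectable.
import random

def bb84_error_rate(n: int = 20000, eavesdrop: bool = False) -> float:
    errors = kept = 0
    for _ in range(n):
        bit = random.randint(0, 1)
        alice_basis = random.randint(0, 1)
        eve_basis = random.randint(0, 1) if eavesdrop else alice_basis
        # If Eve measures in the wrong basis, the photon's state is randomized.
        in_flight = bit if eve_basis == alice_basis else random.randint(0, 1)
        bob_basis = random.randint(0, 1)
        if bob_basis == alice_basis:           # bits kept after basis comparison
            # Bob gets a random result if the state no longer matches his basis.
            undisturbed = (not eavesdrop) or (eve_basis == bob_basis)
            result = in_flight if undisturbed else random.randint(0, 1)
            kept += 1
            errors += int(result != bit)
    return errors / kept

print(f"no eavesdropper:   {bb84_error_rate():.1%}")                 # ~0%
print(f"with eavesdropper: {bb84_error_rate(eavesdrop=True):.1%}")   # ~25%
```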
Further:
“Chinese scientists achieve quantum information masking, paving way for encrypted communication application.”
My knowledge of this might rise to the level of blank ignorance after a good night’s sleep and three cups of coffee. However, the achievement made the American technical press, and suggests Chinese seriousness about gaining privacy.
The video below shows how China constructs high-speed rail lines as if painting a stripe on a highway. Since they can’t innovate, they have to get by with inventing things.
China to Europe rail freight: “Over 10,000 trains and 927,000 containers were forwarded via the China-EU-China route in 2020, China Railways has announced. The current volume of traffic has grown by 98.3% year-to-year, covering 21 countries and 92 cities in Europe.”
America makes aircraft carriers. China sells stuff.
NikkeiAsia: “What China’s Rapidly Expanding Nuclear Industry Means for the West”
One Chinese reactor in Pakistan just went live, with another expected in a few months. Says Nikkei, “The Karachi reactor is just the latest of these to come onstream, with the World Nuclear Organization listing a dozen different projects at the development or planning stage across a dozen countries from Argentina to Egypt in its recent survey. Many more are under discussion.”
In addition, says Nikkei, China intends to have the whole industry from technology to materials indigenous to China and outside of American sanctions. See above, about driving China to make things.
First China-Built DRAM Chip Reaches Market
DRAM, dynamic random-access memory, appears in almost everything electronic and is a juicy market. Chang Xin Memory, which makes the chip, redesigned it slightly to remove American technology. If Chang Xin can ramp up volume, which has yet to be established, guess what foreign companies won’t sell much of in China any more.
Pingtang Bridge, recently opened. Well over a thousand feet high
Even in my short two weeks recently in China, I saw that the Chinese do not believe in vertical motion. An American, encountering a mountain, would, sensibly enough, go up and over. This is not the Chinese way. They go through. Similarly, on finding a valley, they do not go down and up. They go across. There may be some genetic abnormality behind this, or maybe interbreeding with space aliens. But it results in hellacious bridges.
“Is China Emerging as the World Leader in AI?”
“Summary. China is quickly closing the once formidable lead the U.S. maintained on AI research. Chinese researchers now publish more papers on AI and secure more patents than U.S. researchers do. The country seems poised to become a leader in AI-empowered…”
Some argue that Chinese patents are of low quality. Maybe so. But don’t bet the college funds.
“China begins construction of world’s longest superconducting cable project”
“China’s first 35 kV high-temperature superconducting cable demonstration project has started construction by State Grid in Shanghai and is expected to be completed by the end of the year. It is the commercial 35 kV superconducting cable project with the world’s largest transmission capacity, longest distance, and highest current (2,000 A).”
Regarding the 5G war: Trump could have bought 5G from Huawei, gotten a sweetheart deal, great prices, factories in America, and so on. Instead he banned Huawei from the US and then twisted the arms of the vassal states of Europe. Thus neither America nor Europe has the service, but China is rolling it out fast. Brilliant, Don. This gives China a running start on smart factories, smart cities, autonomous vehicles, and the like.
“An almost entirely automated port in China, during unloading of a container ship.”
America talks about 5G, China uses it.
NikkeiAsia: “The port is an example of how operator China Merchants Group has been working to automate and mechanize more operations using ultrafast fifth-generation wireless technology. By developing innovative ways to run the port as efficiently as possible, the company aims to accelerate overseas expansion.”
Aviation Week: “Face It: The J-20 is a Fifth Generation Fighter”
Says AvWeek: “Clearly, Chengdu’s engineers understand the foundation of fifth-generation design: the ability to attain situational awareness through advanced fused sensors while denying situational awareness to the adversary through stealth and electronic warfare. The J-20 features an ambitious integrated avionics suite consisting of multispectral sensors that provide 360-deg. coverage. This includes a large active, electronically scanned array radar designed by the 14th Research Institute, electro-optical distributed aperture system, electro-optical targeting system, electronic support measures system and possibly side-array radars.
“In a 2017 CNTV interview, J-20 pilot Zhang Hao said: “Thanks to the multiple sensors onboard the aircraft and the very advanced data fusion, the level of automation of J-20 is very high. . . . The battlefield has become more and more transparent for us.”
Most of the story is visible only if you have a subscription to AvWeek.
Asia Times: Tesla loses lead to local upstart in China’s EV market.
The headline is kidding. The car that is outselling Tesla is a $4,200 el cheapo for short-haul shopping and picking up the kids in the city.
Sexy as a truss ad, but…useful. I’m telling you, put the college funds in this company, not truss ads. Made by an SAIC-GM partnership, majority owned by China, where it was designed and made. Will be sold internationally.
“Unlike Tesla, which requires purpose-built charging stations, the Mini can be plugged into a home power system to charge, which takes about nine hours. It has a range of about 120 kilometers and a top speed of 100 kilometers per hour, according to the carmaker’s promotional materials.” Designed and put into production in one year. (Did I mention that the Chinese are a commercial people?)
China’s Y-20 strategic transport aircraft gets key indigenous engine: reports. How close it is to being ready for prime time is not clear, but it is flying. An inability to make high-end engines has been a problem for China.
The WS-20 is a high-bypass turbofan of Chinese design.
Finally, Global Times, Beijing’s news site: “China’s trade volume increases 37% y-o-y in April, marking 11 consecutive months of positive growth”
Nuff said.
— Source: The Unz Review
savetopnow · 7 years ago
2018-03-27 03 NEWS now
NEWS
Associated Press
Tech, banks lead US stocks sharply higher; Oil heads lower
Diplomats ousted: US, Europe punish Russia over spy case
Daniels' lawyer won't give evidence of alleged Trump affair
Witness in Mueller probe aided UAE agenda in Congress
AP-NORC Poll: Americans open to Trump's planned NKorea talks
BBC News
'It's embarrassing we're called cheats'
Stormy Daniels - I was threatened over Trump affair
Hundreds of thousands rally for gun control legislation
Spain Catalonia: Protesters clash with police after Puigdemont arrest
Is Turkey going too far to stop migrant boats?
Chicago Tribune
After 23 years in prison as an innocent man, former White Sox groundskeeper returns to his old job
Cook County sues Facebook, Cambridge Analytica after alleged misuse of millions of Illinoisans' data
3 members of same family killed in Des Plaines crash, police say
Stormy Daniels' lawyer saw 'soft underbelly of politics' while working for Rahm Emanuel
Man shot in arm in Logan Square neighborhood
LA Times
Essential Education: Women now outnumber men as Cal State campus presidents
Listening to Parkland students, marching for gun control, what teachers want: What's new in education
Trump lawyer and fixer demands apology from Stormy Daniels
California joins other states to demand answers from Facebook on Cambridge Analytica's use of personal data
Political campaigns will run more digital ads this year than ever. Here's how they'll find you
NPR News
Is This Any Way To Drive An Omnibus? 10 Questions About What Just Happened
Killer Mike Apologizes For Interview With NRA, Claims It Was Misused
Ejecting Russians: Who Is A 'Spy'?
U.S. Not Alone In Expelling Russian Diplomats
U.S. Stock Market Rebounds On Report Of Trade Talks With China
New York Times
Facebook Comes Under F.T.C. Scrutiny as Stock Slides
Trump Can’t Stop Tweeting, but Goes Silent on Stormy Daniels
You Know Sister Jean. Meet Father Rob.
Op-Ed Contributor: The War on Drugs Breeds Crafty Traffickers
ProPublica
Here’s One Issue Blue and Red States Agree On: Preventing Deaths of Expectant and New Mothers
Warren Buffett Recommends Investing in Index Funds — But Many of His Employees Don’t Have That Option
Seeing Journalism Make a Difference in Election Results
Cutting ‘Old Heads’ at IBM
How the Crowd Led Us to Investigate IBM
Reddit News
Spy poisoning: US to expel 60 Russian diplomats
Motion reveals Pulse gunman's father was FBI informant
School district that armed teachers with rocks increasing security
Montreal elementary school is latest to ban homework
“Mad” Mike Hughes, the rocket man who believes the Earth is flat, propelled himself about 1,875 feet into the air Saturday before a hard landing in the Mojave Desert.
Reuters
Barrage of missiles on Saudi Arabia ramps up Yemen war
Facebook shares tumble as U.S. regulator announces privacy probe
Hyundai union head fears GM-like crisis; says electric cars destroy jobs
U.S. and EU to expel more than 100 Russian diplomats over UK nerve attack
Sisi to win Egyptian election but seeks high turnout
Reveal News
Nation’s largest janitorial company faces new allegations of rape
A group of janitors started a movement to stop sexual abuse
The Hate Report: How white supremacists recruit online
New documents about Jehovah’s Witnesses’ sex abuse begin to leak out
California is preparing to defend its waters from Trump order
The Atlantic
West Virginia's Teachers Are Not Satisfied
This Average Joe Is the Most Quoted Man in News
The Unsinkable Benjamin Netanyahu?
Eric Garcetti Isn't Expecting Much From Washington
The Particular Horror of Church Shootings
The Guardian
Police treat killing of elderly woman in Paris as antisemitic attack
North Korea: Kim Jong-un in China for 'unannounced state visit'
Justin Trudeau to exonerate six indigenous chiefs who were executed
Malaysia accused of muzzling critics with jail term for fake news
Trump's lawyer sends Stormy Daniels cease-and-desist letter over threat claim
The Independent
Russia shopping centre fire: Dozens of children feared to be among 64 killed in blaze at packed mall in Siberia
Langlands & Bell, Internet Giants: Masters of the Universe, Ikon, Birmingham, review: The sculptures are a feat of artistic endeavour
Mark Zuckerberg asked to testify before Congress as agency confirms investigation into Facebook data scandal
UKAD hack: Anti-doping agency holding thousands of sports stars' drug test details hit by cyber attack
Trump struggling to find lawyers to represent him as Mueller investigation enters critical phase
The Intercept
Trump Administration Fights Effort to Unionize Uber Drivers
Three Years Into the Yemen War, a Collective of Women Street Artists Cope With the Destruction
The Radical Imagination of Eve Ewing
Terrible Mistreatment of Haitians Is a Shared Pastime of Donald Trump and the “Deep State”
The Only Good Thing About John Bolton in the White House Is That He’s Not a General
The Quartz
Looks like China beat the US to meeting Kim Jong-un in person
Steven Spielberg explains why Netflix’s “TV movies” aren’t Oscar-worthy
Trump told Stormy Daniels he wants all sharks to die. Here’s why that’s a bad idea
“You’re no genius”: Her father’s shutdowns made Angela Duckworth a world expert on grit
In America, politics is the new religion
Wall Street Journal
Trade Deal Eases Way for U.S., South Korea to Collaborate on North Korea
EU Seeks to Preserve Migration Pact With Turkey Despite Fraying Ties
How a Tiny Latvian Bank Became a Haven for the World's Dirty Money
U.S., China Quietly Discuss Trade Solutions
U.S., Allies Expel Scores of Russians Over Poisoning of Ex-Spy in U.K.
ladystylestores · 4 years ago
How AI can empower communities and strengthen democracy
Each Fourth of July for the past five years I’ve written about AI with the potential to positively impact democratic societies. I return to this question in hopes of shining a light on technology that can strengthen communities, protect privacy and freedoms, and otherwise support the public good.
This series is grounded in the principle that artificial intelligence is capable of not just value extraction, but individual and societal empowerment. While AI solutions often propagate bias, they can also be used to detect that bias. As Dr. Safiya Noble has pointed out, artificial intelligence is one of the critical human rights issues of our lifetimes. AI literacy is also, as Microsoft CTO Kevin Scott asserted, a critical part of being an informed citizen in the 21st century.
This year, I posed the question on Twitter to gather a broader range of insights. Thank you to everyone who contributed.
I’m writing a story and wondering: What’s some of your favorite AI that can strengthen or defend democracy?
— Khari Johnson (@kharijohnson) July 2, 2020
This selection is not meant to be comprehensive, and some ideas included here may be in the early stages, but they all represent ways AI might enable the development of more free and just societies.
Machine learning for open source intelligence 
Open source intelligence, or OSINT, is the collection and analysis of freely available public material. This can power solutions for cryptology and security, but it can also be used to hold governments accountable.
Crowdsourced efforts by groups like Bellingcat were once looked upon as interesting side projects. But findings based on open source evidence from combat zones — like the downing of Malaysia Airlines Flight MH17 over Ukraine and a 2013 sarin gas attack in Syria — have proved valuable to investigative authorities.
Groups like the International Consortium of Investigative Journalists (ICIJ) are using machine learning in their collaborative work. Last year, the ICIJ’s Marina Walker Guevara detailed lessons drawn from the Machine Learning for Investigations reporting process, conducted in partnership with Stanford AI Lab.
In May, researchers from Universidade Nove de Julho in Sao Paulo, Brazil published a systematic review of AI for open source intelligence that found nearly 250 examples of OSINT using AI in works published between 1990 and 2019. Topics range from AI for crawling web text and documents to applications for social media, business, and — increasingly — cybersecurity.
Along similar lines, an open source initiative out of Swansea University is currently using machine learning to investigate alleged war crimes happening in Yemen.
AI for emancipation 
Last month, shortly after some of the largest protests in U.S. history engulfed American cities and spread around the world, I wrote about an analysis of AI bias in language models. Although I did not raise the point in that piece, the study stood out as the first time I’d come across the word “emancipation” in AI research. The term came up in relation to researchers’ best practice recommendations for NLP bias analysts in the field of sociolinguistics.
I asked lead author Su Lin Blodgett to speak more about this idea, which would treat marginalized people as coequal researchers or producers of knowledge. Blodgett said she’s not aware of any AI system today that can be defined as emancipatory in its design, but she is excited by the work of groups like the Indigenous Protocol and Artificial Intelligence Working Group.
Blodgett said AI that touches on emancipation includes NLP projects to help revitalize or reclaim languages and projects for creating natural language processing for low-resource languages. She also cited AI directed at helping people resist censorship and hold government officials accountable.
Chelsea Barabas explored similar themes in an ACM FAccT conference presentation earlier this year. Barabas drew on the work of anthropologist Laura Nader, who finds that anthropologists tend to study disadvantaged groups in ways that perpetuate stereotypes. Instead, Nader called for anthropologists to expand their fields of inquiry to include “study of the colonizers rather than the colonized, the culture of power rather than the culture of the powerless, the culture of affluence rather than the culture of poverty.”
In her presentation, Barabas likewise urged data scientists to redirect their critical gaze in the interests of fairness. As an example, both Barabas and Blodgett endorsed research that scrutinizes “white collar” crimes with the level of attention typically reserved for other offenses.
In Race After Technology, Princeton University professor Ruha Benjamin also champions the notion of abolitionist tools in tech. Catherine D’Ignazio and Lauren F. Klein’s Data Feminism and Sasha Costanza-Chock’s Design Justice: Community-Led Practices to Build the Worlds We Need offer further examples of data sets that can be used to challenge power.
Racial bias detection for police officers
Taking advantage of NLP’s ability to process data at scale, Stanford University researchers examined recordings of conversations between police officers and people stopped for traffic violations. Using computational linguistics, the researchers were able to demonstrate that officers paid less respect to Black citizens during traffic stops.
The work published in the Proceedings of the National Academy of Science in 2017 highlighted ways police body camera footage can be used to build trust between communities and law enforcement agencies. The analysis was based on recordings collected over the course of years and drew conclusions from a batch of data instead of parsing instances one by one.
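As a toy illustration of the kind of measurement involved, one can score officer utterances by counting politeness markers. The published analysis relied on trained statistical models over annotated transcripts; the hand-made word list below is only a sketch of the idea.

```python
# Crude politeness-marker counter; the real study used trained models,
# not a hand-written word list like this.
import re

POLITE_MARKERS = {"sir", "ma'am", "please", "thanks", "thank", "sorry", "apologize"}

def respect_score(utterance: str) -> int:
    """Count politeness markers in one officer utterance."""
    words = re.findall(r"[a-z']+", utterance.lower())
    return sum(word in POLITE_MARKERS for word in words)

transcripts = [
    "Sorry to stop you, sir. License and registration, please.",
    "Hands on the wheel. License. Now.",
]
for line in transcripts:
    print(respect_score(line), "-", line)   # 3 - ... / 0 - ...
```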
An algorithmic bill of rights
The idea of an algorithmic bill of rights recently came up in a conversation with Black roboticists about building better AI. The notion was introduced in the 2019 book A Human’s Guide to Machine Intelligence and further fleshed out by Vox staff writer Sigal Samuel.
A core tenet of the idea is transparency, meaning each person has the right to know when an algorithm is making a decision that affects them, along with any factors being considered. An algorithmic bill of rights would include freedom from bias, data portability, freedom to grant or refuse consent, and a right to dispute algorithmic results with human review.
As Samuel points out in her reporting, some of these notions, such as freedom from bias, have appeared in laws proposed in Congress, such as the 2019 Algorithmic Accountability Act.
Fact-checking and fighting misinformation
Beyond bots that provide civic services or promote public accountability, AI can be used to fight deepfakes and misinformation. Examples include Full Fact’s work with Africa Check, Chequeado, and the Open Data Institute to automate fact-checking as part of the Google AI Impact Challenge.
Deepfakes are a major concern heading into the U.S. election this November. In a fall 2019 report about upcoming elections, the New York University Stern Center for Business and Human Rights warned of domestic forms of disinformation, as well as potential external interference from China, Iran, or Russia. The Deepfake Detection Challenge aims to help counter such deceptive videos, and Facebook has also introduced a data set of videos for training and benchmarking deepfake detection systems.
Pol.is
Recommendation algorithms from companies like Facebook and YouTube — with documented histories of stoking division to boost user engagement — have been identified as another threat to democratic societies.
Pol.is uses machine learning to achieve opposite aims, gamifying consensus and grouping citizens on a vector map. To reach consensus, participants need to revise their answers until they reach agreement. Pol.is has been used to help draft legislation in Taiwan and Spain.
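The mechanics of such a vector map can be sketched with standard tools: treat each participant as a vector of agree/disagree votes, project the vectors into two dimensions, and cluster. The PCA-plus-k-means pipeline and toy vote matrix below are assumptions for illustration, not Pol.is's production code.

```python
# Sketch: participants as vote vectors -> 2-D map -> opinion groups.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Rows = participants, columns = statements: 1 agree, -1 disagree, 0 pass.
votes = np.array([
    [ 1,  1, -1, -1,  1],
    [ 1,  1, -1,  0,  1],
    [-1, -1,  1,  1,  1],
    [-1,  0,  1,  1,  1],
])

coords = PCA(n_components=2).fit_transform(votes)        # the opinion map
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

# Statements every opinion group agrees with are candidates for consensus.
consensus = [j for j in range(votes.shape[1])
             if all(votes[groups == g, j].mean() > 0 for g in set(groups))]
print("groups:", groups, "| consensus statements:", consensus)   # e.g. [4]
```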
Algorithmic bias and housing
In Los Angeles County, individuals who are homeless and White exit homelessness at a rate 1.4 times greater than people of color, a fact that could be related to housing policy or discrimination. Citing structural racism, a homeless population count for Los Angeles released last month found that Black people make up only 8% of the county population but nearly 34% of its homeless population.
To redress this injustice, the University of Southern California Center for AI in Society will explore ways artificial intelligence can help ensure housing is fairly distributed. Last month, USC announced $1.5 million in funding to advance this effort in partnership with the Los Angeles Homeless Services Authority.
USC’s School for Social Work and the Center for AI in Society have been investigating ways to reduce bias in the allocation of housing resources since 2017. Homelessness is a major problem in California and could worsen in the months ahead as more people face evictions due to pandemic-related job losses. 
Putting AI ethics principles into practice
Implementing principles for ethical AI is not just an urgent matter for tech companies, which have virtually all released vague statements about their ethical intentions in recent years. As a study from the UC Berkeley Center for Long-Term Cybersecurity found earlier this year, it’s also essential that governments establish ethical guidelines for their own use of the technology.
Through the Organization for Economic Co-operation and Development (OECD) and G20, many of the world’s democratic governments have committed to AI ethics principles. But deciding what constitutes ethical use of AI is meaningless without implementation. Accordingly, in February the OECD established the Public Observatory to help nations put these principles into practice.
At the same time, governments around the world are outlining their own ethical parameters. Trump administration officials introduced ethical guidelines for federal agencies in January that, among other things, encourage public participation in establishing AI regulation. However, the guidelines also reject regulation the White House considers overly burdensome, such as bans on facial recognition technology.
One analysis recently found the need for more AI expertise in government. A joint Stanford-NYU study released in February examines the idea of “algorithmic governance,” or AI playing an increasing role in government. Analysis of AI used by the U.S. federal government today found that more than 40% of agencies have experimented with AI but only 15% of those solutions can be considered highly sophisticated. The researchers implore the federal government to hire more in-house AI talent for vetting AI systems and warn that algorithmic governance could widen the public-private technology gap and, if poorly implemented, erode public trust or give major corporations an unfair advantage over small businesses.
Another crucial part of the equation is how governments choose to award contracts to AI startups and tech giants. In what was believed to be a first, last fall the World Economic Forum, U.K. government, and businesses like Salesforce worked together to produce a set of rules and guidelines for government employees in charge of procuring services or awarding contracts.
Such government contracts must be closely monitored as businesses with ties to far-right or white supremacist groups — like Clearview AI and Banjo — continue selling surveillance software to governments and law enforcement agencies. Peter Thiel’s Palantir has also collected a number of lucrative government contracts in recent months. Earlier this week, Palmer Luckey’s Anduril, also backed by Thiel, raised $200 million and was awarded a contract to build a digital border wall using surveillance hardware and AI.
AI ethics documents like those mentioned above invariably espouse the importance of “trustworthy AI.” If you’re inclined to roll your eyes at the phrase, I certainly don’t blame you. It’s a favorite of governments and businesses peddling principles to push through their agendas. The White House uses it, the European Commission uses it, and tech giants and groups advising the U.S. military on ethics use it, but efforts to put ethics principles into action could someday give the term some meaning and weight.
Protection against ransomware attacks
Before local governments began scrambling to respond to the coronavirus and structural racism, ransomware attacks had established themselves as another growing threat to stability and city finances.
In 2019, ransomware attacks on public-facing institutions like hospitals, schools, and governments were rising at unprecedented rates, siphoning off public funds to pay ransoms, recover files, or replace hardware.
Security companies working with U.S. cities told VentureBeat earlier this year that machine learning is being used to combat these attacks through approaches like anomaly detection and quickly isolating infected devices.
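One way to picture the anomaly-detection approach: ransomware that mass-encrypts files produces bursts of unusually frequent, high-entropy writes, which sit far from a host's normal behavior in feature space. The features and numbers below are illustrative assumptions, not any vendor's actual detector.

```python
# Toy anomaly detector over per-host features (writes/min, write entropy).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Baseline behavior: modest write rates, mid-range data entropy.
normal = rng.normal(loc=[20.0, 0.4], scale=[5.0, 0.1], size=(500, 2))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

suspect = np.array([[900.0, 0.99]])   # mass encryption: many high-entropy writes
print(detector.predict(suspect))      # [-1] = anomaly -> isolate the device
```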
Robot fish in city pipes
Beyond averting ransomware attacks, AI can help municipal governments avoid catastrophic financial burdens by monitoring infrastructure, catching leaks or vulnerable city pipes before they burst.
Engineers at the University of Southern California built a robot for pipe inspections to address these costly issues. Named Pipefish, it can swim into city pipe systems through fire hydrants and collect imagery and other data.
Facial recognition protection with AI
When it comes to shielding people from facial recognition systems, efforts range from shirts to face paint to full-on face projections.
EqualAIs was developed at MIT’s Media Lab in 2018 to make it harder for facial recognition tech to identify subjects in photographs, project manager Daniel Pedraza told VentureBeat. The tool uses adversarial machine learning to modify images in order to evade facial recognition detection and preserve privacy. EqualAIs was developed as a prototype to show the technical feasibility of attacking facial recognition algorithms, creating a layer of protection around images uploaded in public forums like Facebook or Twitter. Open source code and other resources from the project are available online.
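The article does not detail EqualAIs' exact method, but the adversarial machine learning idea it names can be sketched with the classic fast gradient sign method (FGSM): nudge each pixel slightly in the direction that increases the recognizer's loss. Treat the code below as the generic technique with a stand-in model, not EqualAIs' implementation.

```python
# Generic FGSM perturbation against an image classifier (PyTorch).
import torch
import torch.nn.functional as F

def fgsm_perturb(model: torch.nn.Module,
                 image: torch.Tensor,       # shape (1, 3, H, W), values in [0, 1]
                 label: torch.Tensor,       # shape (1,), the true class id
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return a slightly perturbed image that degrades the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel a small step in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```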
Other apps and AI can recognize and remove people from photos or blur faces to protect an individual’s identity. University of North Carolina at Charlotte assistant professor Liyue Fan published work that applies differential privacy to images for added protection when using pixelization to hide a face. Should tech like EqualAIs be widely adopted, it may offer a glimmer of hope to privacy advocates who call Clearview AI the end of privacy.
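Fan's published method calibrates its noise carefully; the sketch below only conveys the flavor of differentially private pixelization, averaging blocks and then adding Laplace noise with an assumed scale.

```python
# Sketch of pixelization plus Laplace noise; the noise scale is a
# simplified assumption, not the calibration from the published method.
import numpy as np

def dp_pixelate(img: np.ndarray, block: int = 8, noise_scale: float = 10.0) -> np.ndarray:
    """Average each block x block cell of a grayscale image, then add noise."""
    h, w = img.shape
    out = img.astype(float).copy()
    rng = np.random.default_rng(0)
    for y in range(0, h, block):
        for x in range(0, w, block):
            cell = out[y:y+block, x:x+block]
            cell[:] = cell.mean() + rng.laplace(scale=noise_scale)
    return out.clip(0, 255).astype(np.uint8)

face = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(np.uint8)
blurred = dp_pixelate(face)   # harder for a recognizer to match to an identity
```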
Legislators in Congress are currently considering a bill that would prohibit facial recognition use by federal officials and withhold some funding from state or local governments that choose to use the technology.
Whether you favor the idea of a permanent ban, a temporary moratorium, or minimal regulation, facial recognition legislation is an imperative issue for democratic societies. Racial bias and false identification of crime suspects are major reasons people across the political landscape are beginning to agree that facial recognition tech is unfit for public use today.
ACM, one of the largest groups for computer scientists in the world, this week urged governments and businesses to stop using the technology. Members of Congress have also voiced concern about the use of facial recognition at protests or political rallies. Experts testifying before Congress have warned that the technology has the potential to dampen people’s constitutional right to free speech.
Protestors and others might have used face masks to evade detection in the past, but in the COVID-19 era, facial recognition systems are getting better at recognizing people wearing masks.
Final thoughts
This story is written with a clear understanding that techno-solutionism is no panacea and AI can be used for both positive and negative purposes. But the series is published on an annual basis because we all deserve to keep dreaming about ways AI can empower people and help build stronger communities and a more just society.
We hope you enjoyed this year’s selection. If you have additional ideas, please feel free to comment on the tweet or email [email protected] to share suggestions for stories on this or related topics.
viditure · 4 years ago
Catching up: maximising value, minimising data collection, Covid data explosion, post-virus media strategy, and what happens next?
Getting you up to speed with all the latest in this edition of Catching up.
Maximising the value of privacy-driven analytics 
On the two-year anniversary of the GDPR, many are waiting for the regulation to really show its teeth and start laying down the long-awaited enforcement against the tech giants. Nonetheless, the global privacy movement is well and truly on the march, and it has become essential to adopt a GDPR-compliant, privacy-centric approach to your data collection. Aside from elevating your organisation’s overall performance, you can considerably increase the value of your offer. A comprehensive ethical privacy strategy can significantly raise brand confidence with consumers – leading to increased and more frequent spend on the company’s products and services. Covid or no Covid, it’s time to zoom in on data privacy and choose an analytics partner you can trust!
Minimise to survive and thrive 
In the wake of the Cambridge Analytica scandal and the post-GDPR privacy storm, consumer confidence in online data collection practices is at an all-time low. Now brands are in a race to rebuild relationships based on trust while ensuring that they gather enough customer knowledge to meet the demand for increasingly personalised experiences. By focusing on data quality over quantity, brands can leverage GDPR-compliant minimised data to create a virtuous cycle of trust and significantly boost user retention. A minimal and fully transparent approach to data collection allows brands to optimise their CX, which leads to increased consumer trust and the willingness to share personal data. Trust and GDPR compliance pay!
‘Covid-driven’ media data 
To the point of overdose, data analysis has been put to work more than ever before during this corona crisis. Here’s a comprehensive overview (in French) of the devices and innovations used by the media over the last few months, put together by Datagif. Dataviz, images, photos, comics, live real time, geolocalisation, dashboard guides, fact-checking, newsletters – it’s all gone into hyperdrive. 
Analytics & Covid-19 @Le Monde 
The last few months have been an unprecedented time for online news consumption, and digital analytics has truly taken centre stage. A recent article about Le Monde dives into how the media group has handled the spectacular increase in Covid-related traffic – with the explosion of conversions and subscriptions, and the skyrocketing of visitor frequency. However, one burning question remains – how Le Monde will return to ‘normal’ news coverage in a post-Covid world. According to its head of digital research, Pierre Buffet, the key to post-pandemic analytics will be in monitoring the commitment of “lockdown subscribers” and putting in place effective re-engagement scenarios for the different categories of churners. Retaining new users will involve exposing them to the most relevant content via all available channels, and maximizing the use of targeted push emails, etc.
So what happens next? 
Wondering what the future may hold for the global pandemic? Look no further… Canadian indie game developer and blogger Nicky Case and epidemiologist Marcel Salathé have put together a series of playable simulations that cover every possible outcome of Covid-19. Aimed at channelling fear into understanding, the simulations look at both the optimistic and pessimistic aspects of the situation so far, and how we can beat the virus in a way that also protects our mental & financial health over the coming months and years. “The optimist invents the airplane and the pessimist the parachute.” 
And now for something completely different… 
Tired of hearing the same old news? The World Economic Forum has released a series of podcasts that dive deep into a wide range of social and economic issues surrounding the pandemic. To mark Earth Day on April 22nd, the WEF put the spotlight on climate action and how the fight against climate change must not be considered an unaffordable luxury as we struggle with the virus. It rounded off by discussing a short film from 2017 that has recently been re-released – based on a meeting with a dozen indigenous leaders from around the world – that has found new relevance in the age of Covid-19. Their message to all of us? We have to take advantage of the fact that everything has now slowed down and ask ourselves what we must collectively do to change the world.
See you next time on the Internets! 
Credits: Braden Jarvis 
Article Catching up: maximising value, minimising data collection, Covid data explosion, post-virus media strategy, and what happens next? first appeared on Digital Analytics Blog.
lodelss · 5 years ago
Shoshana Zuboff | An excerpt adapted from The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power | PublicAffairs | 2019 | 23 minutes (6,281 words)
  In 2000 a group of computer scientists and engineers at Georgia Tech collaborated on a project called the “Aware Home.” It was meant to be a “living laboratory” for the study of “ubiquitous computing.” They imagined a “human-home symbiosis” in which many animate and inanimate processes would be captured by an elaborate network of “context aware sensors” embedded in the house and by wearable computers worn by the home’s occupants. The design called for an “automated wireless collaboration” between the platform that hosted personal information from the occupants’ wearables and a second one that hosted the environmental information from the sensors.
There were three working assumptions: first, the scientists and engineers understood that the new data systems would produce an entirely new knowledge domain. Second, it was assumed that the rights to that new knowledge and the power to use it to improve one’s life would belong exclusively to the people who live in the house. Third, the team assumed that for all of its digital wizardry, the Aware Home would take its place as a modern incarnation of the ancient conventions that understand “home” as the private sanctuary of those who dwell within its walls.
All of this was expressed in the engineering plan. It emphasized trust, simplicity, the sovereignty of the individual, and the inviolability of the home as a private domain. The Aware Home information system was imagined as a simple “closed loop” with only two nodes and controlled entirely by the home’s occupants. Because the house would be “constantly monitoring the occupants’ whereabouts and activities…even tracing its inhabitants’ medical conditions,” the team concluded, “there is a clear need to give the occupants knowledge and control of the distribution of this information.” All the information was to be stored on the occupants’ wearable computers “to insure the privacy of an individual’s information.”
By 2018, the global “smart-home” market was valued at $36 billion and expected to reach $151 billion by 2023. The numbers betray an earthquake beneath their surface. Consider just one smart-home device: the Nest thermostat, which was made by a company that was owned by Alphabet, the Google holding company, and then merged with Google in 2018. The Nest thermostat does many things imagined in the Aware Home. It collects data about its uses and environment. It uses motion sensors and computation to “learn” the behaviors of a home’s inhabitants. Nest’s apps can gather data from other connected products such as cars, ovens, fitness trackers, and beds. Such systems can, for example, trigger lights if an anomalous motion is detected, signal video and audio recording, and even send notifications to homeowners or others. As a result of the merger with Google, the thermostat, like other Nest products, will be built with Google’s artificial intelligence capabilities, including its personal digital “assistant.” Like the Aware Home, the thermostat and its brethren devices create immense new stores of knowledge and therefore new power — but for whom?
Wi-Fi–enabled and networked, the thermostat’s intricate, personalized data stores are uploaded to Google’s servers. Each thermostat comes with a “privacy policy,” a “terms-of-service agreement,” and an “end-user licensing agreement.” These reveal oppressive privacy and security consequences in which sensitive household and personal information are shared with other smart devices, unnamed personnel, and third parties for the purposes of predictive analyses and sales to other unspecified parties. Nest takes little responsibility for the security of the information it collects and none for how the other companies in its ecosystem will put those data to use. A detailed analysis of Nest’s policies by two University of London scholars concluded that were one to enter into the Nest ecosystem of connected devices and apps, each with their own equally burdensome and audacious terms, the purchase of a single home thermostat would entail the need to review nearly a thousand so-called contracts.
Should the customer refuse to agree to Nest’s stipulations, the terms of service indicate that the functionality and security of the thermostat will be deeply compromised, no longer supported by the necessary updates meant to ensure its reliability and safety. The consequences can range from frozen pipes to failed smoke alarms to an easily hacked internal home system.
By 2018, the assumptions of the Aware Home were gone with the wind. Where did they go? What was that wind? The Aware Home, like many other visionary projects, imagined a digital future that empowers individuals to lead more-effective lives. What is most critical is that in the year 2000 this vision naturally assumed an unwavering commitment to the privacy of individual experience. Should an individual choose to render her experience digitally, then she would exercise exclusive rights to the knowledge garnered from such data, as well as exclusive rights to decide how such knowledge might be put to use. Today these rights to privacy, knowledge, and application have been usurped by a bold market venture powered by unilateral claims to others’ experience and the knowledge that flows from it. What does this sea change mean for us, for our children, for our democracies, and for the very possibility of a human future in a digital world? It is the darkening of the digital dream into a voracious and utterly novel commercial project that I call surveillance capitalism.
*
Surveillance capitalism runs contrary to the early digital dream, consigning the Aware Home to ancient history. Instead, it strips away the illusion that the networked form has some kind of indigenous moral content, that being “connected” is somehow intrinsically pro-social, innately inclusive, or naturally tending toward the democratization of knowledge. Digital connection is now a means to others’ commercial ends. At its core, surveillance capitalism is parasitic and self-referential. It revives Karl Marx’s old image of capitalism as a vampire that feeds on labor, but with an unexpected turn. Instead of labor, surveillance capitalism feeds on every aspect of every human’s experience. Google invented and perfected surveillance capitalism in much the same way that a century ago General Motors invented and perfected managerial capitalism. Google was the pioneer of surveillance capitalism in thought and practice, the deep pocket for research and development, and the trailblazer in experimentation and implementation, but it is no longer the only actor on this path. Surveillance capitalism quickly spread to Facebook and later to Microsoft. Evidence suggests that Amazon has veered in this direction, and it is a constant challenge to Apple, both as an external threat and as a source of internal debate and conflict.
As the pioneer of surveillance capitalism, Google launched an unprecedented market operation into the unmapped spaces of the internet, where it faced few impediments from law or competitors, like an invasive species in a landscape free of natural predators. Its leaders drove the systemic coherence of their businesses at a breakneck pace that neither public institutions nor individuals could follow. Google also benefited from historical events when a national security apparatus galvanized by the attacks of 9/11 was inclined to nurture, mimic, shelter, and appropriate surveillance capitalism’s emergent capabilities for the sake of total knowledge and its promise of certainty.
Surveillance capitalists quickly realized that they could do anything they wanted, and they did. They dressed in the fashions of advocacy and emancipation, appealing to and exploiting contemporary anxieties, while the real action was hidden offstage. Theirs was an invisibility cloak woven in equal measure to the rhetoric of the empowering web, the ability to move swiftly, the confidence of vast revenue streams, and the wild, undefended nature of the territory they would conquer and claim. They were protected by the inherent illegibility of the automated processes that they rule, the ignorance that these processes breed, and the sense of inevitability that they foster.
Surveillance capitalism is no longer confined to the competitive dramas of the large internet companies, where behavioral futures markets were first aimed at online advertising. Its mechanisms and economic imperatives have become the default model for most internet-based businesses. Eventually, competitive pressure drove expansion into the offline world, where the same foundational mechanisms that expropriate your online browsing, likes, and clicks are trained on your run in the park, breakfast conversation, or hunt for a parking space. Today’s prediction products are traded in behavioral futures markets that extend beyond targeted online ads to many other sectors, including insurance, retail, finance, and an ever-widening range of goods and services companies determined to participate in these new and profitable markets. Whether it’s a “smart” home device, what the insurance companies call “behavioral underwriting,” or any one of thousands of other transactions, we now pay for our own domination.
Surveillance capitalism’s products and services are not the objects of a value exchange. They do not establish constructive producer-consumer reciprocities. Instead, they are the “hooks” that lure users into their extractive operations in which our personal experiences are scraped and packaged as the means to others’ ends. We are not surveillance capitalism’s “customers.” Although the saying tells us “If it’s free, then you are the product,” that is also incorrect. We are the sources of surveillance capitalism’s crucial surplus: the objects of a technologically advanced and increasingly inescapable raw-material-extraction operation. Surveillance capitalism’s actual customers are the enterprises that trade in its markets for future behavior.
*
Google is to surveillance capitalism what the Ford Motor Company and General Motors were to mass-production–based managerial capitalism. New economic logics and their commercial models are discovered by people in a time and place and then perfected through trial and error. In our time Google became the pioneer, discoverer, elaborator, experimenter, lead practitioner, role model, and diffusion hub of surveillance capitalism. GM and Ford’s iconic status as pioneers of twentieth-century capitalism made them enduring objects of scholarly research and public fascination because the lessons they had to teach resonated far beyond the individual companies. Google’s practices deserve the same kind of examination, not merely as a critique of a single company but rather as the starting point for the codification of a powerful new form of capitalism.
With the triumph of mass production at Ford and for decades thereafter, hundreds of researchers, businesspeople, engineers, journalists, and scholars would excavate the circumstances of its invention, origins, and consequences. Decades later, scholars continued to write extensively about Ford, the man and the company. GM has also been an object of intense scrutiny. It was the site of Peter Drucker’s field studies for his seminal Concept of the Corporation, the 1946 book that codified the practices of the twentieth-century business organization and established Drucker’s reputation as a management sage. In addition to the many works of scholarship and analysis on these two firms, their own leaders enthusiastically articulated their discoveries and practices. Henry Ford and his general manager, James Couzens, and Alfred Sloan and his marketing man, Henry “Buck” Weaver, reflected on, conceptualized, and proselytized their achievements, specifically locating them in the evolutionary drama of American capitalism.
Google is a notoriously secretive company, and one is hard-pressed to imagine a Drucker equivalent freely roaming the scene and scribbling in the hallways. Its executives carefully craft their messages of digital evangelism in books and blog posts, but its operations are not easily accessible to outside researchers or journalists. In 2016 a lawsuit brought against the company by a product manager alleged an internal spying program in which employees are expected to identify coworkers who violate the firm’s confidentiality agreement: a broad prohibition against divulging anything about the company to anyone. The closest thing we have to a Buck Weaver or James Couzens codifying Google’s practices and objectives is the company’s longtime chief economist, Hal Varian, who aids the cause of understanding with scholarly articles that explore important themes. Varian has been described as “the Adam Smith of the discipline of Googlenomics” and the “godfather” of its advertising model. It is in Varian’s work that we find hidden-in-plain-sight important clues to the logic of surveillance capitalism and its claims to power.
In two extraordinary articles in scholarly journals, Varian explored the theme of “computer-mediated transactions” and their transformational effects on the modern economy. Both pieces are written in amiable, down-to-earth prose, but Varian’s casual understatement stands in counterpoint to his often-startling declarations: “Nowadays there is a computer in the middle of virtually every transaction…now that they are available these computers have several other uses.” He then identifies four such new uses: “data extraction and analysis,” “new contractual forms due to better monitoring,” “personalization and customization,” and “continuous experiments.”
Varian’s discussions of these new “uses” are an unexpected guide to the strange logic of surveillance capitalism, the division of learning that it shapes, and the character of the information civilization toward which it leads. “Data extraction and analysis,” Varian writes, “is what everyone is talking about when they talk about big data.”
*
Google was incorporated in 1998, founded by Stanford graduate students Larry Page and Sergey Brin just two years after the Mosaic browser threw open the doors of the world wide web to the computer-using public. From the start, the company embodied the promise of information capitalism as a liberating and democratic social force that galvanized and delighted second-modernity populations around the world.
Thanks to this wide embrace, Google successfully imposed computer mediation on broad new domains of human behavior as people searched online and engaged with the web through a growing roster of Google services. As these new activities were informated for the first time, they produced wholly new data resources. For example, in addition to key words, each Google search query produces a wake of collateral data such as the number and pattern of search terms, how a query is phrased, spelling, punctuation, dwell times, click patterns, and location.
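To make the scale of that "wake" concrete, here is a minimal sketch of what a single query's collateral-data record might look like. Every field name is invented for illustration and does not reflect Google's actual logging schema.

```typescript
// Hypothetical record, for illustration only: these field names are
// invented and do not reflect Google's actual logging schema.
interface QueryWake {
  query: string;              // the search terms exactly as typed
  termCount: number;          // number and pattern of search terms
  misspelled: boolean;        // spelling and punctuation are signals too
  dwellTimeMs: number;        // how long the user lingered on the results
  clickedPositions: number[]; // which results were clicked, in order
  approxLocation: string;     // coarse location inferred from the request
  issuedAt: number;           // timestamp in ms since the epoch
}

const wake: QueryWake = {
  query: "cheap flighs to rome", // the misspelling is itself data
  termCount: 4,
  misspelled: true,
  dwellTimeMs: 8200,
  clickedPositions: [1, 3],
  approxLocation: "Seattle, WA",
  issuedAt: Date.now(),
};

console.log(wake); // every field is a behavioral by-product of one search
```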
There was no reliable way to turn investors’ money into revenue…The behavioral value reinvestment cycle produced a very cool search function, but it was not yet capitalism.
Early on, these behavioral by-products were haphazardly stored and operationally ignored. Amit Patel, a young Stanford graduate student with a special interest in “data mining,” is frequently credited with the groundbreaking insight into the significance of Google’s accidental data caches. His work with these data logs persuaded him that detailed stories about each user — thoughts, feelings, interests — could be constructed from the wake of unstructured signals that trailed every online action. These data, he concluded, actually provided a “broad sensor of human behavior” and could be put to immediate use in realizing cofounder Larry Page’s dream of Search as a comprehensive artificial intelligence.
Google’s engineers soon grasped that the continuous flows of collateral behavioral data could turn the search engine into a recursive learning system that constantly improved search results and spurred product innovations such as spell check, translation, and voice recognition. As Kenneth Cukier observed at that time,
Other search engines in the 1990s had the chance to do the same, but did not pursue it. Around 2000 Yahoo! saw the potential, but nothing came of the idea. It was Google that recognized the gold dust in the detritus of its interactions with its users and took the trouble to collect it up…Google exploits information that is a by-product of user interactions, or data exhaust, which is automatically recycled to improve the service or create an entirely new product.
What had been regarded as waste material — “data exhaust” spewed into Google’s servers during the combustive action of Search — was quickly reimagined as a critical element in the transformation of Google’s search engine into a reflexive process of continuous learning and improvement.
At that early stage of Google’s development, the feedback loops involved in improving its Search functions produced a balance of power: Search needed people to learn from, and people needed Search to learn from. This symbiosis enabled Google’s algorithms to learn and produce ever-more relevant and comprehensive search results. More queries meant more learning; more learning produced more relevance. More relevance meant more searches and more users. By the time the young company held its first press conference in 1999, to announce a $25 million equity investment from two of the most revered Silicon Valley venture capital firms, Sequoia Capital and Kleiner Perkins, Google Search was already fielding seven million requests each day. A few years later, Hal Varian, who joined Google as its chief economist in 2002, would note, “Every action a user performs is considered a signal to be analyzed and fed back into the system.” The PageRank algorithm, named after Google cofounder Larry Page, had already given Google a significant advantage in identifying the most popular results for queries. Over the course of the next few years it would be the capture, storage, analysis, and learning from the by-products of those search queries that would turn Google into the gold standard of web search.
The key point for us rests on a critical distinction. During this early period, behavioral data were put to work entirely on the user’s behalf. User data provided value at no cost, and that value was reinvested in the user experience in the form of improved services: enhancements that were also offered at no cost to users. Users provided the raw material in the form of behavioral data, and those data were harvested to improve speed, accuracy, and relevance and to help build ancillary products such as translation. I call this the behavioral value reinvestment cycle, in which all behavioral data are reinvested in the improvement of the product or service.
The cycle emulates the logic of the iPod; it worked beautifully at Google but with one critical difference: the absence of a sustainable market transaction. In the case of the iPod, the cycle was triggered by the purchase of a high-margin physical product. Subsequent reciprocities improved the iPod product and led to increased sales. Customers were the subjects of the commercial process, which promised alignment with their “what I want, when I want, where I want” demands. At Google, the cycle was similarly oriented toward the individual as its subject, but without a physical product to sell, it floated outside the marketplace, an interaction with “users” rather than a market transaction with customers.
This helps to explain why it is inaccurate to think of Google’s users as its customers: there is no economic exchange, no price, and no profit. Nor do users function in the role of workers. When a capitalist hires workers and provides them with wages and means of production, the products that they produce belong to the capitalist to sell at a profit. Not so here. Users are not paid for their labor, nor do they operate the means of production. Finally, people often say that the user is the “product.” This is also misleading. Users are not products, but rather we are the sources of raw-material supply. Surveillance capitalism’s unusual products manage to be derived from our behavior while remaining indifferent to our behavior. Its products are about predicting us, without actually caring what we do or what is done to us.
At this early stage of Google’s development, whatever Search users inadvertently gave up that was of value to the company they also used up in the form of improved services. In this reinvestment cycle, serving users with amazing Search results “consumed” all the value that users created when they provided extra behavioral data. The fact that users needed Search about as much as Search needed users created a balance of power between Google and its populations. People were treated as ends in themselves, the subjects of a nonmarket, self-contained cycle that was perfectly aligned with Google’s stated mission “to organize the world’s information, making it universally accessible and useful.”
*
By 1999, despite the splendor of Google’s new world of searchable web pages, its growing computer science capabilities, and its glamorous venture backers, there was no reliable way to turn investors’ money into revenue. The behavioral value reinvestment cycle produced a very cool search function, but it was not yet capitalism. The balance of power made it financially risky and possibly counterproductive to charge users a fee for search services. Selling search results would also have set a dangerous precedent for the firm, assigning a price to indexed information that Google’s web crawler had already taken from others without payment. Without a device like Apple’s iPod or its digital songs, there were no margins, no surplus, nothing left over to sell and turn into revenue.
Google had relegated advertising to steerage class: its AdWords team consisted of seven people, most of whom shared the founders’ general antipathy toward ads. The tone had been set in Sergey Brin and Larry Page’s milestone paper that unveiled their search engine conception, “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” presented at the 1998 World Wide Web Conference: “We expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers. This type of bias is very difficult to detect but could still have a significant effect on the market…we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.”
Google’s first revenues depended instead on exclusive licensing deals to provide web services to portals such as Yahoo! and Japan’s BIGLOBE. It also generated modest revenue from sponsored ads linked to search query keywords. There were other models for consideration. Rival search engines such as Overture, used exclusively by the then-giant portal AOL, or Inktomi, the search engine adopted by Microsoft, collected revenues from the sites whose pages they indexed. Overture was also successful in attracting online ads with its policy of allowing advertisers to pay for high-ranking search listings, the very format that Brin and Page scorned.
Prominent analysts publicly doubted whether Google could compete with its more-established rivals. As the New York Times asked, “Can Google create a business model even remotely as good as its technology?” A well-known Forrester Research analyst proclaimed that there were only a few ways for Google to make money with Search: “build a portal [like Yahoo!]…partner with a portal…license the technology…wait for a big company to purchase them.”
Despite these general misgivings about Google’s viability, the firm’s prestigious venture backing gave the founders confidence in their ability to raise money. This changed abruptly in April 2000, when the legendary dot-com economy began its steep plunge into recession, and Silicon Valley’s Garden of Eden unexpectedly became the epicenter of a financial earthquake.
The idea of being able to deliver a particular message to a particular person at just the moment when it might have a high probability of actually influencing his or her behavior was, and had always been, the holy grail of advertising.
By mid-April, Silicon Valley’s fast-money culture of privilege was under siege with the implosion of what came to be known as the “dot-com bubble.” It is easy to forget exactly how terrifying things were for the valley’s ambitious young people and their slightly older investors. Startups with outsized valuations just months earlier were suddenly forced to shutter. Prominent articles such as “Doom Stalks the Dotcoms” noted that the stock prices of Wall Street’s most-revered internet “high flyers” were “down for the count,” with many of them trading below their initial offering price: “With many dotcoms declining, neither venture capitalists nor Wall Street is eager to give them a dime…” The news brimmed with descriptions of shell-shocked investors. The week of April 10 saw the worst decline in the history of the NASDAQ, where many internet companies had gone public, and there was a growing consensus that the “game” had irreversibly changed.
As the business environment in Silicon Valley unraveled, investors’ prospects for cashing out by selling Google to a big company seemed far less likely, and they were not immune to the rising tide of panic. Many Google investors began to express doubts about the company’s prospects, and some threatened to withdraw support. Pressure for profit mounted sharply, despite the fact that Google Search was widely considered the best of all the search engines, traffic to its website was surging, and a thousand résumés flooded the firm’s Mountain View office each day. Page and Brin were seen to be moving too slowly, and their top venture capitalists, John Doerr from Kleiner Perkins and Michael Moritz from Sequoia, were frustrated. According to Google chronicler Steven Levy, “The VCs were screaming bloody murder. Tech’s salad days were over, and it wasn’t certain that Google would avoid becoming another crushed radish.”
The specific character of Silicon Valley’s venture funding, especially during the years leading up to dangerous levels of startup inflation, also contributed to a growing sense of emergency at Google. As Stanford sociologist Mark Granovetter and his colleague Michel Ferrary found in their study of valley venture firms, “A connection with a high-status VC firm signals the high status of the startup and encourages other agents to link to it.” These themes may seem obvious now, but it is useful to mark the anxiety of those months of sudden crisis. Prestigious risk investment functioned as a form of vetting — much like acceptance to a top university sorts and legitimates students, elevating a few against the backdrop of the many — especially in the “uncertain” environment characteristic of high-tech investing. Loss of that high-status signaling power assigned a young company to a long list of also-rans in Silicon Valley’s fast-moving saga.
Other research findings point to the consequences of the impatient money that flooded the valley as inflationary hype drew speculators and ratcheted up the volatility of venture funding. Studies of pre-bubble investment patterns showed a “big-score” mentality in which bad results tended to stimulate increased investing as funders chased the belief that some young company would suddenly discover the elusive business model destined to turn all their bets into rivers of gold. Startup mortality rates in Silicon Valley outstripped those for other venture capital centers such as Boston and Washington, DC, with impatient money producing a few big wins and many losses. Impatient money is also reflected in the size of Silicon Valley startups, which during this period were significantly smaller than in other regions, employing an average of 68 employees as compared to an average of 112 in the rest of the country. This reflects an interest in quick returns without spending much time on growing a business or deepening its talent base, let alone developing the institutional capabilities. These propensities were exacerbated by the larger Silicon Valley culture, where net worth was celebrated as the sole measure of success for valley parents and their children.
For all their genius and principled insights, Brin and Page could not ignore the mounting sense of emergency. By December 2000, the Wall Street Journal reported on the new “mantra” emerging from Silicon Valley’s investment community: “Simply displaying the ability to make money will not be enough to remain a major player in the years ahead. What will be required will be an ability to show sustained and exponential profits.”
*
The declaration of a state of exception functions in politics as cover for the suspension of the rule of law and the introduction of new executive powers justified by crisis. At Google in late 2000, it became a rationale for annulling the reciprocal relationship that existed between Google and its users, steeling the founders to abandon their passionate and public opposition to advertising. As a specific response to investors’ anxiety, the founders tasked the tiny AdWords team with the objective of looking for ways to make more money. Page demanded that the whole process be simplified for advertisers. In this new approach, he insisted that advertisers “shouldn’t even get involved with choosing keywords — Google would choose them.”
Operationally, this meant that Google would turn its own growing cache of behavioral data and its computational power and expertise toward the single task of matching ads with queries. New rhetoric took hold to legitimate this unusual move. If there was to be advertising, then it had to be “relevant” to users. Ads would no longer be linked to keywords in a search query, but rather a particular ad would be “targeted” to a particular individual. Securing this holy grail of advertising would ensure relevance to users and value to Advertisers.
Absent from the new rhetoric was the fact that in pursuit of this new aim, Google would cross into virgin territory by exploiting sensitivities that only its exclusive and detailed collateral behavioral data about millions and later billions of users could reveal. To meet the new objective, the behavioral value reinvestment cycle was rapidly and secretly subordinated to a larger and more complex undertaking. The raw materials that had been solely used to improve the quality of search results would now also be put to use in the service of targeting advertising to individual users. Some data would continue to be applied to service improvement, but the growing stores of collateral signals would be repurposed to improve the profitability of ads for both Google and its advertisers. These behavioral data available for uses beyond service improvement constituted a surplus, and it was on the strength of this behavioral surplus that the young company would find its way to the “sustained and exponential profits” that would be necessary for survival. Thanks to a perceived emergency, a new mutation began to gather form and quietly slip its moorings in the implicit advocacy-oriented social contract of the firm’s original relationship with users.
Google’s declared state of exception was the backdrop for 2002, the watershed year during which surveillance capitalism took root. The firm’s appreciation of behavioral surplus crossed another threshold that April, when the data logs team arrived at their offices one morning to find that a peculiar phrase had surged to the top of the search queries: “Carol Brady’s maiden name.” Why the sudden interest in a 1970s television character? It was data scientist and logs team member Amit Patel who recounted the event to the New York Times, noting, “You can’t interpret it unless you know what else is going on in the world.”
The team went to work to solve the puzzle. First, they discerned that the pattern of queries had produced five separate spikes, each beginning at forty-eight minutes after the hour. Then they learned that the query pattern occurred during the airing of the popular TV show Who Wants to Be a Millionaire? The spikes reflected the successive time zones during which the show aired, ending in Hawaii. In each time zone, the show’s host posed the question of Carol Brady’s maiden name, and in each zone the queries immediately flooded into Google’s servers.
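The detective work lends itself to a simple illustration. A toy sketch, not the logs team's actual tooling, might bucket query timestamps by minute of the hour and flag the outliers that betray a televised prompt.

```typescript
// Toy sketch, not the logs team's actual tooling: bucket query timestamps
// by minute-of-hour and flag minutes whose counts dwarf the average --
// the kind of regularity (spikes at :48 in zone after zone) described above.
function spikeMinutes(timestamps: Date[], factor = 5): number[] {
  const counts: number[] = new Array(60).fill(0);
  for (const t of timestamps) counts[t.getMinutes()] += 1;
  const mean = counts.reduce((sum, c) => sum + c, 0) / 60;
  return counts.flatMap((count, minute) =>
    count > factor * mean ? [minute] : []
  );
}

// e.g. spikeMinutes(queryTimestamps) might return [48]
```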
As the New York Times reported, “The precision of the Carol Brady data was eye-opening for some.” Even Brin was stunned by the clarity of Search’s predictive power, revealing events and trends before they “hit the radar” of traditional media. As he told the Times, “It was like trying an electron microscope for the first time. It was like a moment-by-moment barometer.” Google executives were described by the Times as reluctant to share their thoughts about how their massive stores of query data might be commercialized. “There is tremendous opportunity with this data,” one executive confided.
Just a month before the Carol Brady moment, while the AdWords team was already working on new approaches, Brin and Page hired Eric Schmidt, an experienced executive, engineer, and computer science Ph.D., as chairman. By August, they appointed him to the CEO’s role. Doerr and Moritz had been pushing the founders to hire a professional manager who would know how to pivot the firm toward profit. Schmidt immediately implemented a “belt-tightening” program, grabbing the budgetary reins and heightening the general sense of financial alarm as fund-raising prospects came under threat. A squeeze on workspace found him unexpectedly sharing his office with none other than Amit Patel.
Schmidt later boasted that as a result of their close quarters over the course of several months, he had instant access to better revenue figures than did his own financial planners. We do not know (and may never know) what other insights Schmidt might have gleaned from Patel about the predictive power of Google’s behavioral data stores, but there is no doubt that a deeper grasp of the predictive power of data quickly shaped Google’s specific response to financial emergency, triggering the crucial mutation that ultimately turned AdWords, Google, the internet, and the very nature of information capitalism toward an astonishingly lucrative surveillance project.
That this no longer seems astonishing to us, or perhaps even worthy of note, is evidence of the profound psychic numbing that has inured us to a bold and unprecedented shift in capitalist methods.
Google’s earliest ads had been considered more effective than most online advertising at the time because they were linked to search queries and Google could track when users actually clicked on an ad, known as the “click-through” rate. Despite this, advertisers were billed in the conventional manner according to how many people viewed an ad. As Search expanded, Google created the self-service system called AdWords, in which a search that used the advertiser’s keyword would include that advertiser’s text box and a link to its landing page. Ad pricing depended upon the ad’s position on the search results page.
Rival search startup Overture had developed an online auction system for web page placement that allowed it to scale online advertising targeted to keywords. Google would produce a transformational enhancement to that model, one that was destined to alter the course of information capitalism. As a Bloomberg journalist explained in 2006, “Google maximizes the revenue it gets from that precious real estate by giving its best position to the advertiser who is likely to pay Google the most in total, based on the price per click multiplied by Google’s estimate of the likelihood that someone will actually click on the ad.” That pivotal multiplier was the result of Google’s advanced computational capabilities trained on its most significant and secret discovery: behavioral surplus. From this point forward, the combination of ever-increasing machine intelligence and ever-more-vast supplies of behavioral surplus would become the foundation of an unprecedented logic of accumulation. Google’s reinvestment priorities would shift from merely improving its user offerings to inventing and institutionalizing the most far-reaching and technologically advanced raw-material supply operations that the world had ever seen. Henceforth, revenues and growth would depend upon more behavioral surplus.
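The multiplier the journalist describes is, at bottom, an expected-value calculation. A minimal sketch with invented numbers shows why the prediction, not the bid alone, picks the winner:

```typescript
interface AdCandidate {
  advertiser: string;
  bidPerClick: number;        // price per click the advertiser will pay
  predictedClickRate: number; // the estimate derived from behavioral surplus
}

// Expected revenue per impression = bid x predicted probability of a click.
function rankAds(candidates: AdCandidate[]): AdCandidate[] {
  const expected = (ad: AdCandidate) => ad.bidPerClick * ad.predictedClickRate;
  return [...candidates].sort((a, b) => expected(b) - expected(a));
}

const winner = rankAds([
  { advertiser: "HighBidder", bidPerClick: 2.0, predictedClickRate: 0.01 }, // $0.020
  { advertiser: "GoodMatch", bidPerClick: 0.5, predictedClickRate: 0.08 },  // $0.040
])[0];

console.log(winner.advertiser); // "GoodMatch": better prediction beats a bigger bid
```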
Google’s many patents filed during those early years illustrate the explosion of discovery, inventiveness, and complexity detonated by the state of exception that led to these crucial innovations and the firm’s determination to advance the capture of behavioral surplus. One patent submitted in 2003 by three of the firm’s top computer scientists is titled “Generating User Information for Use in Targeted Advertising.” The patent is emblematic of the new mutation and the emerging logic of accumulation that would define Google’s success. Of even greater interest, it also provides an unusual glimpse into the “economic orientation” baked deep into the technology cake by reflecting the mindset of Google’s distinguished scientists as they harnessed their knowledge to the firm’s new aims. In this way, the patent stands as a treatise on a new political economics of clicks and its moral universe, before the company learned to disguise this project in a fog of euphemism.
The patent reveals a pivoting of the backstage operation toward Google’s new audience of genuine customers. “The present invention concerns advertising,” the inventors announce. Despite the enormous quantity of demographic data available to advertisers, the scientists note that much of an ad budget “is simply wasted…it is very difficult to identify and eliminate such waste.”
Advertising had always been a guessing game: art, relationships, conventional wisdom, standard practice, but never “science.” The idea of being able to deliver a particular message to a particular person at just the moment when it might have a high probability of actually influencing his or her behavior was, and had always been, the holy grail of advertising. The inventors point out that online ad systems had also failed to achieve this elusive goal. The then-predominant approaches used by Google’s competitors, in which ads were targeted to keywords or content, were unable to identify relevant ads “for a particular user.” Now the inventors offered a scientific solution that exceeded the most-ambitious dreams of any advertising executive:
There is a need to increase the relevancy of ads served for some user request, such as a search query or a document request…to the user that submitted the request…The present invention may involve novel methods, apparatus, message formats and/or data structures for determining user profile information and using such determined user profile information for ad serving.
In other words, Google would no longer mine behavioral data strictly to improve service for users but rather to read users’ minds for the purposes of matching ads to their interests, as those interests are deduced from the collateral traces of online behavior. With Google’s unique access to behavioral data, it would now be possible to know what a particular individual in a particular time and place was thinking, feeling, and doing. That this no longer seems astonishing to us, or perhaps even worthy of note, is evidence of the profound psychic numbing that has inured us to a bold and unprecedented shift in capitalist methods.
The techniques described in the patent meant that each time a user queries Google’s search engine, the system simultaneously presents a specific configuration of a particular ad, all in the fraction of a moment that it takes to fulfill the search query. The data used to perform this instant translation from query to ad, a predictive analysis that was dubbed “matching,” went far beyond the mere denotation of search terms. New data sets were compiled that would dramatically enhance the accuracy of these predictions. These data sets were referred to as “user profile information” or “UPI.” These new data meant that there would be no more guesswork and far less waste in the advertising budget. Mathematical certainty would replace all of that.
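To see the shape of such "matching" (and only its shape; the patent's actual methods are far richer and proprietary), imagine scoring candidate ads against an inferred interest profile:

```typescript
// Illustration only: real "user profile information" spans far more than a
// list of interests, and the production matching logic is proprietary.
interface UserProfileInfo { interests: Set<string>; }

interface CandidateAd { id: string; topics: string[]; }

// Score each ad by how many of its topics appear in the inferred profile,
// and serve the best match alongside the search results.
function matchAd(upi: UserProfileInfo, ads: CandidateAd[]): CandidateAd {
  const score = (ad: CandidateAd) =>
    ad.topics.filter((topic) => upi.interests.has(topic)).length;
  return ads.reduce((best, ad) => (score(ad) > score(best) ? ad : best));
}

const served = matchAd(
  { interests: new Set(["travel", "budget airlines"]) },
  [
    { id: "luxury-watches", topics: ["luxury", "watches"] },
    { id: "rome-hotels", topics: ["travel", "hotels"] },
  ]
);

console.log(served.id); // "rome-hotels"
```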
* * *
From THE AGE OF SURVEILLANCE CAPITALISM: The Fight for a Human Future at the New Frontier of Power, by Shoshana Zuboff. Reprinted with permission from PublicAffairs, a division of the Hachette Book Group.
Shoshana Zuboff is the Charles Edward Wilson Professor emerita, Harvard Business School. She is the author of In the Age of the Smart Machine: The Future of Work and Power and The Support Economy: Why Corporations Are Failing Individuals and the Next Episode of Capitalism.
Longreads Editor: Dana Snitzky
0 notes
shirlleycoyle · 6 years ago
Text
Canadian Cops Will Scan Social Media to Predict Who Could Go Missing
Police in Canada are building a predictive policing system that will analyze social media posts, police records, and social services information to predict who might go missing, says a government report.
According to Defence Research and Development Canada (DRDC), an agency of the Department of National Defence, Saskatchewan is developing “predictive models” to allow police and other public safety authorities to identify common “risk factors” before individuals go missing, and to intervene before something happens. Risk factors can include a history of running away or violence in the home, among dozens of others.
A DRDC report published last month shows the Saskatchewan Police Predictive Analytics Lab (SPPAL)—a partnership between police, the provincial Ministry of Justice, and the University of Saskatchewan—is analyzing historical missing persons data with a focus on children in provincial care, habitual runaways, and missing Indigenous persons, and building tools to predict who might go missing. In the next phase of the project, SPPAL will add social service and social media data.
The report doesn’t specify what kind of predictive insights authorities expect to glean about individuals from social media posts, but police already use social media to monitor people and events for signs of crime, or in the case of missing persons investigations, to discern when a person went missing. For example, police in Ontario made a missing woman’s case a priority after noticing that her usual patterns of social media activity had ceased.
The DRDC report states that municipal police services in Saskatchewan as well as the Ministry of Social Services Child and Family programs and regional RCMP have agreed in principle to share information with SPPAL. In Saskatchewan, more than 70 percent of children in provincial care are Indigenous, and over 100 long-term missing persons cases haven’t been solved.
Tamir Israel, a lawyer with the Canadian Internet Policy and Public Interest Clinic (CIPPIC), told Motherboard that using predictive models to inform decisions on child welfare interventions is concerning.
“We know that predictive models are far from infallible in real-life settings, and that there will be false positives,” Israel said in an email. “The consequences of an intervention based on a false positive can be very serious.”
Israel said that the risk of false positives increases when predictive models use data of “questionable fidelity” such as social media posts. He pointed out that the high number of missing Indigenous women and children in Canada makes them and other marginalized groups especially vulnerable to flaws or biases concealed in predictive models.
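A back-of-the-envelope calculation, with numbers invented purely for illustration, shows why false positives dominate when the predicted event is rare:

```typescript
// Invented numbers, purely to illustrate the base-rate problem Israel raises.
const population = 10_000;
const prevalence = 0.01;        // 1% actually at risk of going missing
const sensitivity = 0.9;        // the model catches 90% of true cases
const falsePositiveRate = 0.05; // and wrongly flags 5% of everyone else

const truePositives = population * prevalence * sensitivity;              // 90
const falsePositives = population * (1 - prevalence) * falsePositiveRate; // 495
const flagged = truePositives + falsePositives;                           // 585

// Roughly 85% of flagged individuals are false positives -- each one a
// potential unwarranted intervention.
console.log((falsePositives / flagged).toFixed(2)); // "0.85"
```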
“We have already seen cases where predictive models had deep racial biases,” Israel said. He explained that while a model may be predictively valid across all communities, it could ignore cultural differences that lead to “distorted outcomes” when the same model is applied specifically to minority groups.
Read More: Police in Canada Are Tracking People’s ‘Negative’ Behavior In a ‘Risk’ Database
Ronald Kruzeniski, Saskatchewan’s Information and Privacy Commissioner, told Motherboard in an email that his office advises police “not to collect or use social media data” because of concerns about accuracy.
Kruzeniski cited the difficulty in knowing the true identity of a person behind a social media account and the relevance of old posts as reasons why social media data shouldn’t be used.
Motherboard reached out to Saskatchewan’s Advocate for Children and Youth for comment on SPPAL but did not receive a response.
Dr. Keira Stockdale, a psychologist with the Saskatoon Police Service (SPS) who authored the DRDC report, said SPPAL’s work is carried out in accordance with legal requirements and “governed by [the] highest ethical and professional standards to proactively protect privacy and promote public safety.”
Stockdale believes the tools SPPAL is developing can be applied to a number of community safety problems, such as developing coordinated responses to the “illicit use” of opioids.
SPPAL is an outgrowth of an intervention-based approach to policing called the Hub model that partners cops with social workers and schools to identify and intervene with people believed to be at risk of becoming criminals or victims. In Saskatchewan and Ontario, information about people assessed for intervention by Hubs is entered into a Risk-driven Tracking Database (RTD) to store and analyze the data.
In February, a Motherboard investigation found that minors aged 12 to 17 were the most prevalent age group in the Ontario RTD in 2017 and that children as young as six have been subject to Hub interventions. The Hub model of policing was developed in Saskatchewan before being exported to Ontario and across Canada. Stockdale said that the tools developed by SPPAL could be used by Hubs when assessing people for intervention.
The DRDC report notes that the work of SPPAL is intended to help build a data analytics solution to “support public safety partners and social services agencies across Canada.”
Israel noted that legislation recently introduced in Ontario could pave the way for predictive policing to flourish in that province.
Israel said the Comprehensive Ontario Police Services Act, which received royal assent last month, empowers the Minister of Community Safety and Correctional Services (MCSCS) to “collect high volumes of information from various policing agencies throughout Ontario” with little oversight from the province’s Information and Privacy Commissioner.
While the law does not require predictive policing, it opens the door to “broad-based adoption” of the practice and puts incentives in place without including necessary safeguards for personal information, Israel said.
Israel said the DRDC report also suggests that police could use SPPAL’s predictive models to triage missing persons cases and prioritize certain cases over others—for example, a possible kidnapping victim versus a habitual runaway.
“Law enforcement will be called on to rely on the outcome of the predictive model” when deciding how seriously to take a missing person case, Israel said. “But these predictive models are often opaque in their operation, relying on factors that the police officers themselves cannot assess or second-guess.”
0 notes
tweetadvise · 8 years ago
Text
3 Ways to Do Audience-Based Advertising (1st-Party Data Collection) Without 3rd-Party Cookie Data
In the effort to quickly collect information from customers about their behaviors and preferences online, the third-party cookie used to be king. By simply placing a DMP’s third-party pixel on your web page, it was easy to capture a wide range of details about the users visiting your site. You could then use that data to sub-segment and target specific customers with content and experiences designed to increase overall conversion.
However, Safari now blocks third-party cookies by default, and rare is the user who willingly opts in to this kind of third-party data capture. As the online world moves past cookies, the ability to track and record consumer behavior using third-party cookies has become much less reliable, and companies need to find new ways to capture this data without violating the privacy of their customers.
Using the First-Party Domain
While this might seem like an obstacle to traditional data-capture techniques, it actually presents a unique opportunity to find new ways of collecting customer data that are more reliable, secure, and accurate than the old third-party cookie methodology. For example, Adobe Analytics runs within the first-party domain. This allows analysts to capture data that might otherwise be blocked by a web browser or mobile device, and it ensures that the audience defined by the DMP is 100% consistent with the data collected by your analytics system. Marketers looking to understand and influence the customer journey should first make sure their marketing tools are capturing the same data and share the same, consistent definition of an audience.
The Document Object Model (DOM)
Another source for gathering data from your site is the Document Object Model (DOM). The DOM is the convention for representing objects in HTML, and it describes the way the different pieces of a webpage relate to and interact with each other.
Most web analytics tools and tag management systems (including Adobe’s DTM) collect information passed through a JavaScript data layer, which makes for a clean data collection process that is independent of how the page is designed or coded.
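A typical data layer looks something like the sketch below; the variable name and fields are conventions chosen for this example, not a schema mandated by Adobe DTM.

```typescript
// Illustrative data layer: the object name and fields are conventions
// invented for this example, not a schema required by any tag manager.
interface DataLayer {
  page: { name: string; section: string };
  user: { loggedIn: boolean; loyaltyTier?: string };
  events: string[];
}

// The IT/web team populates this object on page load...
const dataLayer: DataLayer = {
  page: { name: "product-detail", section: "shoes" },
  user: { loggedIn: true, loyaltyTier: "gold" },
  events: ["pageView"],
};

// ...and the tag manager reads name-value pairs from it when its rules fire.
console.log(dataLayer.page.name); // "product-detail"
```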
The problem with relying entirely on the data layer is that you need your IT/web development team to pass specific name-value pairs into it. Changes to your website often mean changes to the data layer, and over time some important information can be missed.
Being able to collect not only well-structured information from the data layer but also raw information directly from the DOM tree ensures that no data is left behind, even if the web development team hasn’t yet exposed new data to the analyst or marketing teams.
This also matters because it is still first-party data that is explicitly relevant to the content marketers are putting on the screen, yet it can be captured without violating the customer’s privacy and without relying on third-party cookies.
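To make the pattern concrete, here is a hedged sketch of a data-layer read with a DOM fallback; the selector and field names are invented for illustration.

```typescript
// Sketch of the fallback pattern; the selector and field names are invented.
// Prefer the structured data layer; scrape the DOM only when it comes up empty.
function getProductName(dataLayer: { product?: { name?: string } }): string | null {
  if (dataLayer.product?.name) return dataLayer.product.name; // structured path
  const el = document.querySelector("h1.product-title");      // raw DOM fallback
  return el?.textContent?.trim() ?? null;
}
```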
Software Development Kit (SDK)
While capturing data at the first-party cookie level and using the DOM as a data source are highly relevant to web browsers, a wealth of information can also be captured from cookie-less environments such as native mobile apps. Cookies are not used in these environments; instead, an advertising ID is provided by either Android or iOS.
What is nice about the Adobe SDK is that it does more than just collect data. Many businesses are reluctant to integrate multiple SDKs into their apps because of concerns about data leakage, the IT resources needed to implement them, and so on. Yet a mobile app is expected to perform several different functions, such as running analytics, communicating through push notifications, or personalizing the content delivered through it. Companies can reduce the number of SDKs used to build a single app by integrating Adobe’s SDK, which supports all of this functionality and more.
SDKs are also the main method of data collection for smart TVs, OTT devices such as Roku or Apple TV, and other connected devices. If your company’s applications extend beyond smartphones and tablets, making sure you can gather data from the Internet of Things will be essential as more and more connected devices end up in the hands of your customers.
HTTP Endpoints, Application Programming Interfaces (APIs), and Server-Side Files
Another technique for capturing data from cookie-less environments involves the use of HTTP endpoints. Applications must include some form of HTTP code in order to display content and enable functionality. For businesses that do not want to integrate an SDK into their application, data can be captured through APIs (application programming interfaces) that pull information directly from the HTTP calls built into the application.
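In practice this often means the application posting events directly to a first-party collection endpoint over HTTPS. The URL and payload below are placeholders, not a documented Adobe endpoint:

```typescript
// Placeholder endpoint and payload: not a documented Adobe API, just the
// general shape of beaconing an event to a first-party collection server.
async function sendEvent(name: string, detail: Record<string, string>): Promise<void> {
  await fetch("https://metrics.example.com/collect", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ event: name, detail, ts: Date.now() }),
  });
}

// e.g. sendEvent("screen_view", { screen: "checkout" });
```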
One more method that can be used to capture data from mobile sites or apps (beyond an SDK or HTTP endpoints) is using server-side data. All of the details about user interactions with these kinds of sites are normally stored somewhere on the server hosting the application. It is straightforward to parse these files for details about users’ actions as they interact with the content presented to them. While this approach does not happen in “real time,” it is another way to make sure all of your first-party data ends up in a single data platform, even if IT hurdles stand in your way.
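A minimal sketch of that server-side route, assuming access logs in the common Apache/Nginx "combined" format:

```typescript
// Minimal sketch: pull request details out of an access log in the common
// Apache/Nginx "combined" format. Field positions follow that convention.
import { readFileSync } from "fs";

const logLine = /^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) [^"]*" (\d{3})/;

function parseAccessLog(path: string) {
  return readFileSync(path, "utf8")
    .split("\n")
    .flatMap((line) => {
      const m = logLine.exec(line);
      return m
        ? [{ ip: m[1], time: m[2], method: m[3], url: m[4], status: m[5] }]
        : []; // skip lines that don't match the format
    });
}

// e.g. parseAccessLog("/var/log/nginx/access.log")
```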
In the end, the third-party cookie still exists and remains relevant for syncing customer data with ad servers or demand-side platforms. Even in a world where browsers and users were not moving away from third-party cookies, it would still be important to take advantage of other sources of data collection to get the broadest possible view of your audience. Today, people interact with brands through a multitude of different channels, and a single method of collecting data about end users no longer fits with advances in technology or the ways customer behavior is changing. Having a broad array of tools to capture data across a variety of channels increases the accuracy and completeness of your data and improves your understanding of the market, as well as how consumers prefer to engage with your brand.
0 notes
lodelss · 5 years ago
Text
How Google Discovered the Value of Surveillance
Shoshana Zuboff | An excerpt adapted from The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power | PublicAffairs | 2019 | 23 minutes (6,281 words)
  In 2000 a group of computer scientists and engineers at Georgia Tech collaborated on a project called the “Aware Home.” It was meant to be a “living laboratory” for the study of “ubiquitous computing.” They imagined a “human-home symbiosis” in which many animate and inanimate processes would be captured by an elaborate network of “context aware sensors” embedded in the house and by wearable computers worn by the home’s occupants. The design called for an “automated wireless collaboration” between the platform that hosted personal information from the occupants’ wearables and a second one that hosted the environmental information from the sensors.
There were three working assumptions: first, the scientists and engineers understood that the new data systems would produce an entirely new knowledge domain. Second, it was assumed that the rights to that new knowledge and the power to use it to improve one’s life would belong exclusively to the people who live in the house. Third, the team assumed that for all of its digital wizardry, the Aware Home would take its place as a modern incarnation of the ancient conventions that understand “home” as the private sanctuary of those who dwell within its walls.
All of this was expressed in the engineering plan. It emphasized trust, simplicity, the sovereignty of the individual, and the inviolability of the home as a private domain. The Aware Home information system was imagined as a simple “closed loop” with only two nodes and controlled entirely by the home’s occupants. Because the house would be “constantly monitoring the occupants’ whereabouts and activities…even tracing its inhabitants’ medical conditions,” the team concluded, “there is a clear need to give the occupants knowledge and control of the distribution of this information.” All the information was to be stored on the occupants’ wearable computers “to insure the privacy of an individual’s information.”
By 2018, the global “smart-home” market was valued at $36 billion and expected to reach $151 billion by 2023. The numbers betray an earthquake beneath their surface. Consider just one smart-home device: the Nest thermostat, which was made by a company that was owned by Alphabet, the Google holding company, and then merged with Google in 2018. The Nest thermostat does many things imagined in the Aware Home. It collects data about its uses and environment. It uses motion sensors and computation to “learn” the behaviors of a home’s inhabitants. Nest’s apps can gather data from other connected products such as cars, ovens, fitness trackers, and beds. Such systems can, for example, trigger lights if an anomalous motion is detected, signal video and audio recording, and even send notifications to homeowners or others. As a result of the merger with Google, the thermostat, like other Nest products, will be built with Google’s artificial intelligence capabilities, including its personal digital “assistant.” Like the Aware Home, the thermostat and its brethren devices create immense new stores of knowledge and therefore new power — but for whom?
Wi-Fi–enabled and networked, the thermostat’s intricate, personalized data stores are uploaded to Google’s servers. Each thermostat comes with a “privacy policy,” a “terms-of-service agreement,” and an “end-user licensing agreement.” These reveal oppressive privacy and security consequences in which sensitive household and personal information are shared with other smart devices, unnamed personnel, and third parties for the purposes of predictive analyses and sales to other unspecified parties. Nest takes little responsibility for the security of the information it collects and none for how the other companies in its ecosystem will put those data to use. A detailed analysis of Nest’s policies by two University of London scholars concluded that were one to enter into the Nest ecosystem of connected devices and apps, each with their own equally burdensome and audacious terms, the purchase of a single home thermostat would entail the need to review nearly a thousand so-called contracts.
Should the customer refuse to agree to Nest’s stipulations, the terms of service indicate that the functionality and security of the thermostat will be deeply compromised, no longer supported by the necessary updates meant to ensure its reliability and safety. The consequences can range from frozen pipes to failed smoke alarms to an easily hacked internal home system.
By 2018, the assumptions of the Aware Home were gone with the wind. Where did they go? What was that wind? The Aware Home, like many other visionary projects, imagined a digital future that empowers individuals to lead more-effective lives. What is most critical is that in the year 2000 this vision naturally assumed an unwavering commitment to the privacy of individual experience. Should an individual choose to render her experience digitally, then she would exercise exclusive rights to the knowledge garnered from such data, as well as exclusive rights to decide how such knowledge might be put to use. Today these rights to privacy, knowledge, and application have been usurped by a bold market venture powered by unilateral claims to others’ experience and the knowledge that flows from it. What does this sea change mean for us, for our children, for our democracies, and for the very possibility of a human future in a digital world? It is the darkening of the digital dream into a voracious and utterly novel commercial project that I call surveillance capitalism.
*
Surveillance capitalism runs contrary to the early digital dream, consigning the Aware Home to ancient history. Instead, it strips away the illusion that the networked form has some kind of indigenous moral content, that being “connected” is somehow intrinsically pro-social, innately inclusive, or naturally tending toward the democratization of knowledge. Digital connection is now a means to others’ commercial ends. At its core, surveillance capitalism is parasitic and self-referential. It revives Karl Marx’s old image of capitalism as a vampire that feeds on labor, but with an unexpected turn. Instead of labor, surveillance capitalism feeds on every aspect of every human’s experience. Google invented and perfected surveillance capitalism in much the same way that a century ago General Motors invented and perfected managerial capitalism. Google was the pioneer of surveillance capitalism in thought and practice, the deep pocket for research and development, and the trailblazer in experimentation and implementation, but it is no longer the only actor on this path. Surveillance capitalism quickly spread to Facebook and later to Microsoft. Evidence suggests that Amazon has veered in this direction, and it is a constant challenge to Apple, both as an external threat and as a source of internal debate and conflict.
As the pioneer of surveillance capitalism, Google launched an unprecedented market operation into the unmapped spaces of the internet, where it faced few impediments from law or competitors, like an invasive species in a landscape free of natural predators. Its leaders drove the systemic coherence of their businesses at a breakneck pace that neither public institutions nor individuals could follow. Google also benefited from historical events when a national security apparatus galvanized by the attacks of 9/11 was inclined to nurture, mimic, shelter, and appropriate surveillance capitalism’s emergent capabilities for the sake of total knowledge and its promise of certainty.
Surveillance capitalists quickly realized that they could do anything they wanted, and they did. They dressed in the fashions of advocacy and emancipation, appealing to and exploiting contemporary anxieties, while the real action was hidden offstage. Theirs was an invisibility cloak woven in equal measure to the rhetoric of the empowering web, the ability to move swiftly, the confidence of vast revenue streams, and the wild, undefended nature of the territory they would conquer and claim. They were protected by the inherent illegibility of the automated processes that they rule, the ignorance that these processes breed, and the sense of inevitability that they foster.
Surveillance capitalism is no longer confined to the competitive dramas of the large internet companies, where behavioral futures markets were first aimed at online advertising. Its mechanisms and economic imperatives have become the default model for most internet-based businesses. Eventually, competitive pressure drove expansion into the offline world, where the same foundational mechanisms that expropriate your online browsing, likes, and clicks are trained on your run in the park, breakfast conversation, or hunt for a parking space. Today’s prediction products are traded in behavioral futures markets that extend beyond targeted online ads to many other sectors, including insurance, retail, finance, and an ever-widening range of goods and services companies determined to participate in these new and profitable markets. Whether it’s a “smart” home device, what the insurance companies call “behavioral underwriting,” or any one of thousands of other transactions, we now pay for our own domination.
Surveillance capitalism’s products and services are not the objects of a value exchange. They do not establish constructive producer-consumer reciprocities. Instead, they are the “hooks” that lure users into their extractive operations in which our personal experiences are scraped and packaged as the means to others’ ends. We are not surveillance capitalism’s “customers.” Although the saying tells us “If it’s free, then you are the product,” that is also incorrect. We are the sources of surveillance capitalism’s crucial surplus: the objects of a technologically advanced and increasingly inescapable raw-material-extraction operation. Surveillance capitalism’s actual customers are the enterprises that trade in its markets for future behavior.
*
Google is to surveillance capitalism what the Ford Motor Company and General Motors were to mass-production–based managerial capitalism. New economic logics and their commercial models are discovered by people in a time and place and then perfected through trial and error. In our time Google became the pioneer, discoverer, elaborator, experimenter, lead practitioner, role model, and diffusion hub of surveillance capitalism. GM and Ford’s iconic status as pioneers of twentieth-century capitalism made them enduring objects of scholarly research and public fascination because the lessons they had to teach resonated far beyond the individual companies. Google’s practices deserve the same kind of examination, not merely as a critique of a single company but rather as the starting point for the codification of a powerful new form of capitalism.
With the triumph of mass production at Ford and for decades thereafter, hundreds of researchers, businesspeople, engineers, journalists, and scholars would excavate the circumstances of its invention, origins, and consequences. Decades later, scholars continued to write extensively about Ford, the man and the company. GM has also been an object of intense scrutiny. It was the site of Peter Drucker’s field studies for his seminal Concept of the Corporation, the 1946 book that codified the practices of the twentieth-century business organization and established Drucker’s reputation as a management sage. In addition to the many works of scholarship and analysis on these two firms, their own leaders enthusiastically articulated their discoveries and practices. Henry Ford and his general manager, James Couzens, and Alfred Sloan and his marketing man, Henry “Buck” Weaver, reflected on, conceptualized, and proselytized their achievements, specifically locating them in the evolutionary drama of American capitalism.
Google is a notoriously secretive company, and one is hard-pressed to imagine a Drucker equivalent freely roaming the scene and scribbling in the hallways. Its executives carefully craft their messages of digital evangelism in books and blog posts, but its operations are not easily accessible to outside researchers or journalists. In 2016 a lawsuit brought against the company by a product manager alleged an internal spying program in which employees are expected to identify coworkers who violate the firm’s confidentiality agreement: a broad prohibition against divulging anything about the company to anyone. The closest thing we have to a Buck Weaver or James Couzens codifying Google’s practices and objectives is the company’s longtime chief economist, Hal Varian, who aids the cause of understanding with scholarly articles that explore important themes. Varian has been described as “the Adam Smith of the discipline of Googlenomics” and the “godfather” of its advertising model. It is in Varian’s work that we find hidden-in-plain-sight important clues to the logic of surveillance capitalism and its claims to power.
In two extraordinary articles in scholarly journals, Varian explored the theme of “computer-mediated transactions” and their transformational effects on the modern economy. Both pieces are written in amiable, down-to-earth prose, but Varian’s casual understatement stands in counterpoint to his often-startling declarations: “Nowadays there is a computer in the middle of virtually every transaction…now that they are available these computers have several other uses.” He then identifies four such new uses: “data extraction and analysis,” “new contractual forms due to better monitoring,” “personalization and customization,” and “continuous experiments.”
Varian’s discussions of these new “uses” are an unexpected guide to the strange logic of surveillance capitalism, the division of learning that it shapes, and the character of the information civilization toward which it leads. “Data extraction and analysis,” Varian writes, “is what everyone is talking about when they talk about big data.”
*
Google was incorporated in 1998, founded by Stanford graduate students Larry Page and Sergey Brin five years after the Mosaic browser threw open the doors of the world wide web to the computer-using public. From the start, the company embodied the promise of information capitalism as a liberating and democratic social force that galvanized and delighted second-modernity populations around the world.
Thanks to this wide embrace, Google successfully imposed computer mediation on broad new domains of human behavior as people searched online and engaged with the web through a growing roster of Google services. As these new activities were informated for the first time, they produced wholly new data resources. For example, in addition to key words, each Google search query produces a wake of collateral data such as the number and pattern of search terms, how a query is phrased, spelling, punctuation, dwell times, click patterns, and location.
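To make the scale of this collateral data concrete, here is a minimal sketch, in Python, of what a single search event’s record might look like. The schema and every field name are illustrative assumptions, not Google’s actual log format:

```python
# A hypothetical record of one search event: the query string is only one
# field among many collateral signals. All names here are assumptions.
from dataclasses import dataclass, field


@dataclass
class SearchEvent:
    query: str                    # the key words the user typed
    raw_input: str                # exact phrasing, spelling, punctuation
    timestamp: float              # when the query arrived (Unix seconds)
    dwell_time_s: float           # how long the user lingered on results
    clicked_ranks: list[int] = field(default_factory=list)  # click pattern
    approx_location: str = ""     # coarse geolocation of the requester


event = SearchEvent(
    query="carol brady maiden name",
    raw_input="Carol Brady's maiden name",
    timestamp=1018918080.0,
    dwell_time_s=4.2,
    clicked_ranks=[1],
    approx_location="Honolulu, HI",
)
```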
Early on, these behavioral by-products were haphazardly stored and operationally ignored. Amit Patel, a young Stanford graduate student with a special interest in “data mining,” is frequently credited with the groundbreaking insight into the significance of Google’s accidental data caches. His work with these data logs persuaded him that detailed stories about each user — thoughts, feelings, interests — could be constructed from the wake of unstructured signals that trailed every online action. These data, he concluded, actually provided a “broad sensor of human behavior” and could be put to immediate use in realizing cofounder Larry Page’s dream of Search as a comprehensive artificial intelligence.
Google’s engineers soon grasped that the continuous flows of collateral behavioral data could turn the search engine into a recursive learning system that constantly improved search results and spurred product innovations such as spell check, translation, and voice recognition. As Kenneth Cukier observed at that time,
Other search engines in the 1990s had the chance to do the same, but did not pursue it. Around 2000 Yahoo! saw the potential, but nothing came of the idea. It was Google that recognized the gold dust in the detritus of its interactions with its users and took the trouble to collect it up…Google exploits information that is a by-product of user interactions, or data exhaust, which is automatically recycled to improve the service or create an entirely new product.
What had been regarded as waste material — “data exhaust” spewed into Google’s servers during the combustive action of Search — was quickly reimagined as a critical element in the transformation of Google’s search engine into a reflexive process of continuous learning and improvement.
At that early stage of Google’s development, the feedback loops involved in improving its Search functions produced a balance of power: Search needed people to learn from, and people needed Search to learn from. This symbiosis enabled Google’s algorithms to learn and produce ever-more relevant and comprehensive search results. More queries meant more learning; more learning produced more relevance. More relevance meant more searches and more users. By the time the young company held its first press conference in 1999, to announce a $25 million equity investment from two of the most revered Silicon Valley venture capital firms, Sequoia Capital and Kleiner Perkins, Google Search was already fielding seven million requests each day. A few years later, Hal Varian, who joined Google as its chief economist in 2002, would note, “Every action a user performs is considered a signal to be analyzed and fed back into the system.” The PageRank algorithm, named for cofounder Larry Page, had already given Google a significant advantage in identifying the most popular results for queries. Over the course of the next few years it would be the capture, storage, analysis, and learning from the by-products of those search queries that would turn Google into the gold standard of web search.
The key point for us rests on a critical distinction. During this early period, behavioral data were put to work entirely on the user’s behalf. User data provided value at no cost, and that value was reinvested in the user experience in the form of improved services: enhancements that were also offered at no cost to users. Users provided the raw material in the form of behavioral data, and those data were harvested to improve speed, accuracy, and relevance and to help build ancillary products such as translation. I call this the behavioral value reinvestment cycle, in which all behavioral data are reinvested in the improvement of the product or service.
The cycle emulates the logic of the iPod; it worked beautifully at Google but with one critical difference: the absence of a sustainable market transaction. In the case of the iPod, the cycle was triggered by the purchase of a high-margin physical product. Subsequent reciprocities improved the iPod product and led to increased sales. Customers were the subjects of the commercial process, which promised alignment with their “what I want, when I want, where I want” demands. At Google, the cycle was similarly oriented toward the individual as its subject, but without a physical product to sell, it floated outside the marketplace, an interaction with “users” rather than a market transaction with customers.
This helps to explain why it is inaccurate to think of Google’s users as its customers: there is no economic exchange, no price, and no profit. Nor do users function in the role of workers. When a capitalist hires workers and provides them with wages and means of production, the products that they produce belong to the capitalist to sell at a profit. Not so here. Users are not paid for their labor, nor do they operate the means of production. Finally, people often say that the user is the “product.” This is also misleading. Users are not products, but rather we are the sources of raw-material supply. Surveillance capitalism’s unusual products manage to be derived from our behavior while remaining indifferent to our behavior. Its products are about predicting us, without actually caring what we do or what is done to us.
At this early stage of Google’s development, whatever Search users inadvertently gave up that was of value to the company they also used up in the form of improved services. In this reinvestment cycle, serving users with amazing Search results “consumed” all the value that users created when they provided extra behavioral data. The fact that users needed Search about as much as Search needed users created a balance of power between Google and its populations. People were treated as ends in themselves, the subjects of a nonmarket, self-contained cycle that was perfectly aligned with Google’s stated mission “to organize the world’s information, making it universally accessible and useful.”
*
By 1999, despite the splendor of Google’s new world of searchable web pages, its growing computer science capabilities, and its glamorous venture backers, there was no reliable way to turn investors’ money into revenue. The behavioral value reinvestment cycle produced a very cool search function, but it was not yet capitalism. The balance of power made it financially risky and possibly counterproductive to charge users a fee for search services. Selling search results would also have set a dangerous precedent for the firm, assigning a price to indexed information that Google’s web crawler had already taken from others without payment. Without a device like Apple’s iPod or its digital songs, there were no margins, no surplus, nothing left over to sell and turn into revenue.
Google had relegated advertising to steerage class: its AdWords team consisted of seven people, most of whom shared the founders’ general antipathy toward ads. The tone had been set in Sergey Brin and Larry Page’s milestone paper that unveiled their search engine conception, “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” presented at the 1998 World Wide Web Conference: “We expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers. This type of bias is very difficult to detect but could still have a significant effect on the market…we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.”
Google’s first revenues depended instead on exclusive licensing deals to provide web services to portals such as Yahoo! and Japan’s BIGLOBE. It also generated modest revenue from sponsored ads linked to search query keywords. There were other models for consideration. Rival search engines such as Overture, used exclusively by the then-giant portal AOL, or Inktomi, the search engine adopted by Microsoft, collected revenues from the sites whose pages they indexed. Overture was also successful in attracting online ads with its policy of allowing advertisers to pay for high-ranking search listings, the very format that Brin and Page scorned.
Prominent analysts publicly doubted whether Google could compete with its more-established rivals. As the New York Times asked, “Can Google create a business model even remotely as good as its technology?” A well-known Forrester Research analyst proclaimed that there were only a few ways for Google to make money with Search: “build a portal [like Yahoo!]…partner with a portal…license the technology…wait for a big company to purchase them.”
Despite these general misgivings about Google’s viability, the firm’s prestigious venture backing gave the founders confidence in their ability to raise money. This changed abruptly in April 2000, when the legendary dot-com economy began its steep plunge into recession, and Silicon Valley’s Garden of Eden unexpectedly became the epicenter of a financial earthquake.
By mid-April, Silicon Valley’s fast-money culture of privilege was under siege with the implosion of what came to be known as the “dot-com bubble.” It is easy to forget exactly how terrifying things were for the valley’s ambitious young people and their slightly older investors. Startups with outsized valuations just months earlier were suddenly forced to shutter. Prominent articles such as “Doom Stalks the Dotcoms” noted that the stock prices of Wall Street’s most-revered internet “high flyers” were “down for the count,” with many of them trading below their initial offering price: “With many dotcoms declining, neither venture capitalists nor Wall Street is eager to give them a dime…” The news brimmed with descriptions of shell-shocked investors. The week of April 10 saw the worst decline in the history of the NASDAQ, where many internet companies had gone public, and there was a growing consensus that the “game” had irreversibly changed.
As the business environment in Silicon Valley unraveled, investors’ prospects for cashing out by selling Google to a big company seemed far less likely, and they were not immune to the rising tide of panic. Many Google investors began to express doubts about the company’s prospects, and some threatened to withdraw support. Pressure for profit mounted sharply, despite the fact that Google Search was widely considered the best of all the search engines, traffic to its website was surging, and a thousand résumés flooded the firm’s Mountain View office each day. Page and Brin were seen to be moving too slowly, and their top venture capitalists, John Doerr from Kleiner Perkins and Michael Moritz from Sequoia, were frustrated. According to Google chronicler Steven Levy, “The VCs were screaming bloody murder. Tech’s salad days were over, and it wasn’t certain that Google would avoid becoming another crushed radish.”
The specific character of Silicon Valley’s venture funding, especially during the years leading up to dangerous levels of startup inflation, also contributed to a growing sense of emergency at Google. As Stanford sociologist Mark Granovetter and his colleague Michel Ferrary found in their study of valley venture firms, “A connection with a high-status VC firm signals the high status of the startup and encourages other agents to link to it.” These themes may seem obvious now, but it is useful to mark the anxiety of those months of sudden crisis. Prestigious risk investment functioned as a form of vetting — much like acceptance to a top university sorts and legitimates students, elevating a few against the backdrop of the many — especially in the “uncertain” environment characteristic of high-tech investing. Loss of that high-status signaling power assigned a young company to a long list of also-rans in Silicon Valley’s fast-moving saga.
Other research findings point to the consequences of the impatient money that flooded the valley as inflationary hype drew speculators and ratcheted up the volatility of venture funding. Studies of pre-bubble investment patterns showed a “big-score” mentality in which bad results tended to stimulate increased investing as funders chased the belief that some young company would suddenly discover the elusive business model destined to turn all their bets into rivers of gold. Startup mortality rates in Silicon Valley outstripped those for other venture capital centers such as Boston and Washington, DC, with impatient money producing a few big wins and many losses. Impatient money is also reflected in the size of Silicon Valley startups, which during this period were significantly smaller than in other regions, employing an average of 68 employees as compared to an average of 112 in the rest of the country. This reflects an interest in quick returns without spending much time on growing a business or deepening its talent base, let alone developing the institutional capabilities. These propensities were exacerbated by the larger Silicon Valley culture, where net worth was celebrated as the sole measure of success for valley parents and their children.
For all their genius and principled insights, Brin and Page could not ignore the mounting sense of emergency. By December 2000, the Wall Street Journal reported on the new “mantra” emerging from Silicon Valley’s investment community: “Simply displaying the ability to make money will not be enough to remain a major player in the years ahead. What will be required will be an ability to show sustained and exponential profits.”
*
The declaration of a state of exception functions in politics as cover for the suspension of the rule of law and the introduction of new executive powers justified by crisis. At Google in late 2000, it became a rationale for annulling the reciprocal relationship that existed between Google and its users, steeling the founders to abandon their passionate and public opposition to advertising. As a specific response to investors’ anxiety, the founders tasked the tiny AdWords team with the objective of looking for ways to make more money. Page demanded that the whole process be simplified for advertisers. In this new approach, he insisted that advertisers “shouldn’t even get involved with choosing keywords — Google would choose them.”
Operationally, this meant that Google would turn its own growing cache of behavioral data and its computational power and expertise toward the single task of matching ads with queries. New rhetoric took hold to legitimate this unusual move. If there was to be advertising, then it had to be “relevant” to users. Ads would no longer be linked to keywords in a search query, but rather a particular ad would be “targeted” to a particular individual. Securing this holy grail of advertising would ensure relevance to users and value to advertisers.
Absent from the new rhetoric was the fact that in pursuit of this new aim, Google would cross into virgin territory by exploiting sensitivities that only its exclusive and detailed collateral behavioral data about millions and later billions of users could reveal. To meet the new objective, the behavioral value reinvestment cycle was rapidly and secretly subordinated to a larger and more complex undertaking. The raw materials that had been solely used to improve the quality of search results would now also be put to use in the service of targeting advertising to individual users. Some data would continue to be applied to service improvement, but the growing stores of collateral signals would be repurposed to improve the profitability of ads for both Google and its advertisers. These behavioral data available for uses beyond service improvement constituted a surplus, and it was on the strength of this behavioral surplus that the young company would find its way to the “sustained and exponential profits” that would be necessary for survival. Thanks to a perceived emergency, a new mutation began to gather form and quietly slip its moorings in the implicit advocacy-oriented social contract of the firm’s original relationship with users.
Google’s declared state of exception was the backdrop for 2002, the watershed year during which surveillance capitalism took root. The firm’s appreciation of behavioral surplus crossed another threshold that April, when the data logs team arrived at their offices one morning to find that a peculiar phrase had surged to the top of the search queries: “Carol Brady’s maiden name.” Why the sudden interest in a 1970s television character? It was data scientist and logs team member Amit Patel who recounted the event to the New York Times, noting, “You can’t interpret it unless you know what else is going on in the world.”
The team went to work to solve the puzzle. First, they discerned that the pattern of queries had produced five separate spikes, each beginning at forty-eight minutes after the hour. Then they learned that the query pattern occurred during the airing of the popular TV show Who Wants to Be a Millionaire? The spikes reflected the successive time zones during which the show aired, ending in Hawaii. In each time zone, the show’s host posed the question of Carol Brady’s maiden name, and in each zone the queries immediately flooded into Google’s servers.
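The logs team’s method, as described, amounts to bucketing query timestamps and looking for anomalous counts. A toy reconstruction under that assumption (not the team’s actual code) might look like this:

```python
# A toy reconstruction of spike detection: count queries per (hour, minute)
# bucket in UTC and flag unusually busy buckets. With real logs, five
# buckets at minute 48 in successive hours would stand out, one per
# broadcast time zone.
from collections import Counter
from datetime import datetime, timezone


def spike_minutes(timestamps: list[float], threshold: int) -> list[tuple[int, int]]:
    """Return (hour, minute) buckets whose query count meets the threshold."""
    buckets: Counter = Counter()
    for ts in timestamps:
        dt = datetime.fromtimestamp(ts, tz=timezone.utc)
        buckets[(dt.hour, dt.minute)] += 1
    return sorted(bucket for bucket, n in buckets.items() if n >= threshold)
```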
As the New York Times reported, “The precision of the Carol Brady data was eye-opening for some.” Even Brin was stunned by the clarity of Search’s predictive power, revealing events and trends before they “hit the radar” of traditional media. As he told the Times, “It was like trying an electron microscope for the first time. It was like a moment-by-moment barometer.” Google executives were described by the Times as reluctant to share their thoughts about how their massive stores of query data might be commercialized. “There is tremendous opportunity with this data,” one executive confided.
Just a month before the Carol Brady moment, while the AdWords team was already working on new approaches, Brin and Page hired Eric Schmidt, an experienced executive, engineer, and computer science Ph.D., as chairman. By August, they appointed him to the CEO’s role. Doerr and Moritz had been pushing the founders to hire a professional manager who would know how to pivot the firm toward profit. Schmidt immediately implemented a “belt-tightening” program, grabbing the budgetary reins and heightening the general sense of financial alarm as fund-raising prospects came under threat. A squeeze on workspace found him unexpectedly sharing his office with none other than Amit Patel.
Schmidt later boasted that as a result of their close quarters over the course of several months, he had instant access to better revenue figures than did his own financial planners. We do not know (and may never know) what other insights Schmidt might have gleaned from Patel about the predictive power of Google’s behavioral data stores, but there is no doubt that a deeper grasp of the predictive power of data quickly shaped Google’s specific response to financial emergency, triggering the crucial mutation that ultimately turned AdWords, Google, the internet, and the very nature of information capitalism toward an astonishingly lucrative surveillance project.
Google’s earliest ads had been considered more effective than most online advertising at the time because they were linked to search queries and Google could track when users actually clicked on an ad, known as the “click-through” rate. Despite this, advertisers were billed in the conventional manner according to how many people viewed an ad. As Search expanded, Google created the self-service system called AdWords, in which a search that used the advertiser’s keyword would include that advertiser’s text box and a link to its landing page. Ad pricing depended upon the ad’s position on the search results page.
Rival search startup Overture had developed an online auction system for web page placement that allowed it to scale online advertising targeted to keywords. Google would produce a transformational enhancement to that model, one that was destined to alter the course of information capitalism. As a Bloomberg journalist explained in 2006, “Google maximizes the revenue it gets from that precious real estate by giving its best position to the advertiser who is likely to pay Google the most in total, based on the price per click multiplied by Google’s estimate of the likelihood that someone will actually click on the ad.” That pivotal multiplier was the result of Google’s advanced computational capabilities trained on its most significant and secret discovery: behavioral surplus. From this point forward, the combination of ever-increasing machine intelligence and ever-more-vast supplies of behavioral surplus would become the foundation of an unprecedented logic of accumulation. Google’s reinvestment priorities would shift from merely improving its user offerings to inventing and institutionalizing the most far-reaching and technologically advanced raw-material supply operations that the world had ever seen. Henceforth, revenues and growth would depend upon more behavioral surplus.
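The ranking rule the Bloomberg journalist describes reduces to simple arithmetic: expected revenue per impression equals the price per click multiplied by the estimated probability of a click. A minimal sketch, with invented bids and click probabilities:

```python
# A minimal sketch of the ranking rule described above:
# expected revenue = price per click x estimated click probability.
# The bids and pCTR estimates below are invented for illustration.
def rank_ads(ads: list[dict]) -> list[dict]:
    """Order ads by expected revenue per impression, best position first."""
    return sorted(ads, key=lambda ad: ad["bid"] * ad["p_click"], reverse=True)


ads = [
    {"name": "A", "bid": 2.00, "p_click": 0.01},  # expected: $0.020
    {"name": "B", "bid": 0.50, "p_click": 0.06},  # expected: $0.030 -> wins
]
print([ad["name"] for ad in rank_ads(ads)])  # ['B', 'A']
```

Note how the multiplier rewards the lower bidder whose ad is far more likely to be clicked: the estimate of likelihood, derived from behavioral surplus, is what makes the auction pay.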
Google’s many patents filed during those early years illustrate the explosion of discovery, inventiveness, and complexity detonated by the state of exception that led to these crucial innovations and the firm’s determination to advance the capture of behavioral surplus. One patent submitted in 2003 by three of the firm’s top computer scientists is titled “Generating User Information for Use in Targeted Advertising.” The patent is emblematic of the new mutation and the emerging logic of accumulation that would define Google’s success. Of even greater interest, it also provides an unusual glimpse into the “economic orientation” baked deep into the technology cake by reflecting the mindset of Google’s distinguished scientists as they harnessed their knowledge to the firm’s new aims. In this way, the patent stands as a treatise on a new political economics of clicks and its moral universe, before the company learned to disguise this project in a fog of euphemism.
The patent reveals a pivoting of the backstage operation toward Google’s new audience of genuine customers. “The present invention concerns advertising,” the inventors announce. Despite the enormous quantity of demographic data available to advertisers, the scientists note that much of an ad budget “is simply wasted…it is very difficult to identify and eliminate such waste.”
Advertising had always been a guessing game: art, relationships, conventional wisdom, standard practice, but never “science.” The idea of being able to deliver a particular message to a particular person at just the moment when it might have a high probability of actually influencing his or her behavior was, and had always been, the holy grail of advertising. The inventors point out that online ad systems had also failed to achieve this elusive goal. The then-predominant approaches used by Google’s competitors, in which ads were targeted to keywords or content, were unable to identify relevant ads “for a particular user.” Now the inventors offered a scientific solution that exceeded the most-ambitious dreams of any advertising executive:
There is a need to increase the relevancy of ads served for some user request, such as a search query or a document request…to the user that submitted the request…The present invention may involve novel methods, apparatus, message formats and/or data structures for determining user profile information and using such determined user profile information for ad serving.
In other words, Google would no longer mine behavioral data strictly to improve service for users but rather to read users’ minds for the purposes of matching ads to their interests, as those interests are deduced from the collateral traces of online behavior. With Google’s unique access to behavioral data, it would now be possible to know what a particular individual in a particular time and place was thinking, feeling, and doing. That this no longer seems astonishing to us, or perhaps even worthy of note, is evidence of the profound psychic numbing that has inured us to a bold and unprecedented shift in capitalist methods.
The techniques described in the patent meant that each time a user queries Google’s search engine, the system simultaneously presents a specific configuration of a particular ad, all in the fraction of a moment that it takes to fulfill the search query. The data used to perform this instant translation from query to ad, a predictive analysis that was dubbed “matching,” went far beyond the mere denotation of search terms. New data sets were compiled that would dramatically enhance the accuracy of these predictions. These data sets were referred to as “user profile information” or “UPI.” These new data meant that there would be no more guesswork and far less waste in the advertising budget. Mathematical certainty would replace all of that.
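Hypothetically, the “matching” step can be pictured as scoring candidate ads against both the live query and the stored UPI. The function, field names, and weighting below are assumptions for illustration, not the patent’s actual method:

```python
# A hedged sketch of "matching": blend query relevance with overlap between
# an ad's keywords and profile-derived interests (UPI). The 0.5 weight and
# all data below are illustrative assumptions.
def match_score(ad_keywords: set[str], query_terms: set[str],
                upi_interests: set[str]) -> float:
    """Score an ad by query overlap plus weighted profile overlap."""
    query_overlap = len(ad_keywords & query_terms)
    profile_overlap = len(ad_keywords & upi_interests)
    return query_overlap + 0.5 * profile_overlap


upi = {"running", "travel", "insurance"}
query = {"cheap", "flights"}
ads = {"TravelCo": {"flights", "travel"}, "ShoeShop": {"running", "shoes"}}
best = max(ads, key=lambda name: match_score(ads[name], query, upi))
print(best)  # TravelCo
```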
* * *
From THE AGE OF SURVEILLANCE CAPITALISM: The Fight for a Human Future at the New Frontier of Power, by Shoshana Zuboff. Reprinted with permission from PublicAffairs, a division of the Hachette Book Group.
Shoshana Zuboff is the Charles Edward Wilson Professor Emerita, Harvard Business School. She is the author of In the Age of the Smart Machine: The Future of Work and Power and The Support Economy: Why Corporations Are Failing Individuals and the Next Episode of Capitalism.
Longreads Editor: Dana Snitzky