#Global APP Store Monetisation Market 2018
helloancycruzworld · 5 years ago
Global APP Store Monetisation Market 2019 by Size, Development Status, Trends and Forecast to 2023
According to this study, over the next five years the APP Store Monetisation market will register a xx% CAGR in terms of revenue; the global market size will reach US$ xx million by 2023, up from US$ xx million in 2017. In particular, this report presents the global revenue market share of key companies in the APP Store Monetisation business, shared in Chapter 3.
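The report's dollar figures are withheld placeholders, but the growth arithmetic behind any CAGR claim is standard. A minimal sketch with invented numbers (nothing below comes from the report itself):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly growth rate
    that turns start_value into end_value over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Hypothetical stand-ins for the report's "US$ xx million" placeholders:
# US$ 120 million in 2017 growing to US$ 310 million by 2023 (6 years).
print(f"{cagr(120, 310, 6):.1%}")  # ~17.1%
```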
This report presents a comprehensive…
classyfoxdestiny · 3 years ago
How Facebook undermines privacy protections for its 2 billion WhatsApp users
When Mark Zuckerberg unveiled a new “privacy-focused vision” for Facebook in March 2019, he cited the company’s global messaging service, WhatsApp, as a model.
Acknowledging that “we don’t currently have a strong reputation for building privacy protective services,” the Facebook CEO wrote that “I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won’t stick around forever. This is the future I hope we will help bring about. We plan to build this the way we’ve developed WhatsApp.”
Zuckerberg’s vision centred on WhatsApp’s signature feature, which he said the company was planning to apply to Instagram and Facebook Messenger: end-to-end encryption, which converts all messages into an unreadable format that is only unlocked when they reach their intended destinations. WhatsApp messages are so secure, he said, that nobody else — not even the company — can read a word. As Zuckerberg had put it earlier, in testimony to the US Senate in 2018, “We don’t see any of the content in WhatsApp”.
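For readers unfamiliar with the mechanics Zuckerberg is describing, here is a toy end-to-end exchange using the open-source PyNaCl library. This is a simplified sketch, not WhatsApp's actual implementation (WhatsApp builds on the far more elaborate Signal protocol); the point is only that a relay server in the middle handles ciphertext it cannot read.

```python
# Toy end-to-end encryption with PyNaCl (pip install pynacl).
from nacl.public import PrivateKey, Box

sender = PrivateKey.generate()
recipient = PrivateKey.generate()

# The sender encrypts with their private key and the recipient's public key.
ciphertext = Box(sender, recipient.public_key).encrypt(b"what we say stays secure")

# A server relaying this message sees only unreadable bytes:
print(ciphertext.hex()[:32], "...")

# Only the recipient's private key (paired with the sender's public key)
# recovers the plaintext.
plaintext = Box(recipient, sender.public_key).decrypt(ciphertext)
assert plaintext == b"what we say stays secure"
```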
WhatsApp emphasises this point so consistently that a flag with a similar assurance automatically appears on-screen before users send messages: “No one outside of this chat, not even WhatsApp, can read or listen to them.”
Given those sweeping assurances, you might be surprised to learn that WhatsApp has more than 1,000 contract workers filling floors of office buildings in Austin, Dublin and Singapore. Seated at computers in pods organised by work assignments, these hourly workers use special Facebook software to sift through millions of private messages, images and videos. They pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute. The workers have access to only a subset of WhatsApp messages — those flagged by users and automatically forwarded to the company as possibly abusive.
The review is one element in a broader monitoring operation in which the company also reviews material that is not encrypted, including data about the sender and their account. Policing users while assuring them that their privacy is sacrosanct makes for an awkward mission at WhatsApp.
A 49-slide internal company marketing presentation from December, obtained by ProPublica, emphasises the “fierce” promotion of WhatsApp’s “privacy narrative”. It compares its “brand character” to “the Immigrant Mother” and displays a photo of Malala Yousafzai, who survived a shooting by the Taliban and became a Nobel Peace Prize winner, in a slide titled “Brand tone parameters”. The presentation does not mention the company’s content moderation efforts.
WhatsApp’s director of communications, Carl Woog, acknowledged that teams of contractors in Austin and elsewhere review WhatsApp messages to identify and remove “the worst” abusers. But Woog told ProPublica that the company does not consider this work to be content moderation, saying: “We actually don’t typically use the term for WhatsApp.”
The company declined to make executives available for interviews for this article, but responded to questions with written comments.
“WhatsApp is a lifeline for millions of people around the world,” the company said. “The decisions we make around how we build our app are focused around the privacy of our users, maintaining a high degree of reliability and preventing abuse.”
WhatsApp’s denial that it moderates content is noticeably different from what Facebook says about WhatsApp’s corporate siblings, Instagram and Facebook. The company has said that some 15,000 moderators examine content on Facebook and Instagram, neither of which is encrypted. It releases quarterly transparency reports that detail how many accounts Facebook and Instagram have “actioned” for various categories of abusive content.
There is no such report for WhatsApp.
Deploying an army of content reviewers is just one of the ways that Facebook has compromised the privacy of WhatsApp users. Together, the company’s actions have left WhatsApp — the largest messaging app in the world, with two billion users — far less private than its users likely understand or expect. A ProPublica investigation, drawing on data, documents and dozens of interviews with current and former employees and contractors, reveals how, since purchasing WhatsApp in 2014, Facebook has quietly undermined its sweeping security assurances in multiple ways.
Many of the assertions by content moderators working for WhatsApp are echoed by a confidential whistleblower complaint filed last year with the US Securities and Exchange Commission. The complaint — which ProPublica obtained — details WhatsApp’s extensive use of outside contractors, artificial intelligence systems and account information to examine user messages, images and videos. It alleges that the company’s claims of protecting users’ privacy are false.
“We haven’t seen this complaint,” the company spokesperson said. The SEC has taken no public action on it; an agency spokesperson declined to comment.
Facebook has also downplayed how much data it collects from WhatsApp users, what it does with it and how much it shares with law enforcement authorities.
For example, WhatsApp shares metadata, unencrypted records that can reveal a lot about a user’s activity, with law enforcement agencies such as the Department of Justice. Some rivals, such as Signal, intentionally gather much less metadata to avoid incursions on their users’ privacy, and thus share far less with law enforcement. (“WhatsApp responds to valid legal requests,” the company spokesperson said, “including orders that require us to provide on a real-time going forward basis who a specific person is messaging.”)
WhatsApp user data, ProPublica has learned, helped prosecutors build a high-profile case against a Treasury Department employee who leaked confidential documents to BuzzFeed News that exposed how dirty money flows through US banks.
Like other social media and communications platforms, WhatsApp is caught between users who expect privacy and law enforcement entities that effectively demand the opposite: that WhatsApp turn over information that will help combat crime and online abuse. WhatsApp has responded to this dilemma by asserting that it’s no dilemma at all. “I think we absolutely can have security and safety for people through end-to-end encryption and work with law enforcement to solve crimes,” said Will Cathcart, whose title is head of WhatsApp, in a YouTube interview with an Australian think tank in July.
The tension between privacy and disseminating information to law enforcement is exacerbated by a second pressure: Facebook’s need to make money from WhatsApp. Since paying $22 billion to buy WhatsApp in 2014, Facebook has been trying to figure out how to generate profits from a service that doesn’t charge its users a penny. That conundrum has periodically led to moves that anger users, regulators or both.
The goal of monetising the app was part of the company’s 2016 decision to start sharing WhatsApp user data with Facebook, something the company had told European Union regulators was technologically impossible. The same impulse spurred a controversial plan, abandoned in late 2019, to sell advertising on WhatsApp. And the profit-seeking mandate was behind another botched initiative in January: the introduction of a new privacy policy for user interactions with businesses on WhatsApp, allowing businesses to use customer data in new ways. That announcement triggered a user exodus to competing apps.
WhatsApp’s increasingly aggressive business plan is focused on charging companies for an array of services — letting users make payments via WhatsApp and managing customer service chats — that offer convenience but fewer privacy protections.
The result is a confusing two-tiered privacy system within the same app where protections of end-to-end encryption are further eroded when WhatsApp users employ the service to communicate with businesses.
The company’s December marketing presentation captures WhatsApp’s diverging imperatives. It states that “privacy will remain important”. But it also conveys what seems to be a more urgent mission: the need to “open the aperture of the brand to encompass our future business objectives.”
Content Moderation Associates
In many ways, the experience of being a content moderator for WhatsApp in Austin is identical to being a moderator for Facebook or Instagram, according to interviews with 29 current and former moderators. Mostly in their 20s and 30s, many with past experience as store clerks, grocery checkers and baristas, the moderators are hired and employed by Accenture, a huge corporate contractor that works for Facebook and other Fortune 500 behemoths.
The job listings advertise “Content Review” positions and make no mention of Facebook or WhatsApp. Employment documents list the workers’ initial title as “content moderation associate.” Pay starts around $16.50 an hour. Moderators are instructed to tell anyone who asks that they work for Accenture, and are required to sign sweeping non-disclosure agreements.
Citing the NDAs, almost all the current and former moderators interviewed by ProPublica insisted on anonymity. (An Accenture spokesperson declined comment, referring all questions about content moderation to WhatsApp.)
When the WhatsApp team was assembled in Austin in 2019, Facebook moderators already occupied the fourth floor of an office tower on Sixth Street, adjacent to the city’s famous bar-and-music scene. The WhatsApp team was installed on the floor above, with new glass-enclosed work pods and nicer bathrooms that sparked a tinge of envy in a few members of the Facebook team. Most of the WhatsApp team scattered to work from home during the pandemic.
Whether in the office or at home, they spend their days in front of screens, using a Facebook software tool to examine a stream of “tickets”, organised by subject into “reactive” and “proactive” queues.
Collectively, the workers scrutinise millions of pieces of WhatsApp content each week. Each reviewer handles upwards of 600 tickets a day, which gives them less than a minute per ticket. WhatsApp declined to reveal how many contract workers are employed for content review, but a partial staffing list reviewed by ProPublica suggests that, at Accenture alone, it’s more than 1,000. WhatsApp moderators, like their Facebook and Instagram counterparts, are expected to meet performance metrics for speed and accuracy, which are audited by Accenture.
Their jobs differ in other ways. Because WhatsApp’s content is encrypted, artificial intelligence systems can’t automatically scan all chats, images and videos, as they do on Facebook and Instagram. Instead, WhatsApp reviewers gain access to private content when users hit the “report” button on the app, identifying a message as allegedly violating the platform’s terms of service. This forwards five messages — the allegedly offending one along with the four previous ones in the exchange, including any images or videos — to WhatsApp in unscrambled form, according to former WhatsApp engineers and moderators. Automated systems then feed these tickets into “reactive” queues for contract workers to assess. Artificial intelligence initiates a second set of queues — so-called proactive ones — by scanning unencrypted data that WhatsApp collects about its users and comparing it against suspicious account information and messaging patterns (a new account rapidly sending out a high volume of chats is evidence of spam), as well as terms and images that have previously been deemed abusive.
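The "proactive" side, in other words, is pattern-matching over unencrypted signals rather than reading chats. Below is a rough illustration of the kind of heuristic the article describes; the fields, thresholds and scores are invented for illustration and are not WhatsApp's actual rules:

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    # Unencrypted metadata of the sort the article says is available.
    account_age_hours: float
    messages_sent_last_hour: int
    distinct_recipients_last_hour: int

def proactive_spam_score(acct: AccountSignals) -> float:
    """Crude heuristic: a brand-new account blasting chats to many
    recipients looks like spam. All thresholds are illustrative."""
    score = 0.0
    if acct.account_age_hours < 24:
        score += 0.4
    if acct.messages_sent_last_hour > 100:
        score += 0.4
    if acct.distinct_recipients_last_hour > 50:
        score += 0.2
    return score

# Accounts above a cutoff would land in a "proactive" review queue.
suspect = AccountSignals(2, 400, 300)
if proactive_spam_score(suspect) >= 0.6:
    print("route ticket to proactive queue")
```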
The unencrypted data available for scrutiny is extensive. It includes the names and profile images of a user’s WhatsApp groups as well as their phone number, profile photo, status message, phone battery level, language and time zone, unique mobile phone ID and IP address, wireless signal strength and phone operating system, as well as a list of their electronic devices, any related Facebook and Instagram accounts, the last time they used the app and any previous history of violations. The WhatsApp reviewers have three choices when presented with a ticket for either type of queue: do nothing, place the user on “watch” for further scrutiny, or ban the account. (Facebook and Instagram content moderators have more options, including removing individual postings. It’s that distinction — the fact that WhatsApp reviewers can’t delete individual items — that the company cites as its basis for asserting that WhatsApp reviewers are not “content moderators”.)
WhatsApp moderators must make subjective, sensitive and subtle judgements, interviews and documents examined by ProPublica show. They examine a wide range of categories, including “Spam Report”, “Civic Bad Actor” (political hate speech and disinformation), “Terrorism Global Credible Threat”, “CEI” (child exploitative imagery) and “CP” (child pornography). Another set of categories addresses the messaging and conduct of millions of small and large businesses that use WhatsApp to chat with customers and sell their wares. These queues have such titles as “business impersonation prevalence,” “commerce policy probable violators” and “business verification”.
Moderators say the guidance they get from WhatsApp and Accenture relies on standards that can be simultaneously arcane and disturbingly graphic. Decisions about abusive sexual imagery, for example, can rest on an assessment of whether a naked child in an image appears adolescent or prepubescent, based on comparison of hip bones and pubic hair to a medical index chart. One reviewer recalled a grainy video in a political-speech queue that depicted a machete-wielding man holding up what appeared to be a severed head: “We had to watch and say, ‘Is this a real dead body or a fake dead body?’”
In late 2020, moderators were informed of a new queue for alleged “sextortion.” It was defined in an explanatory memo as “a form of sexual exploitation where people are blackmailed with a nude image of themselves which have been shared by them or someone else on the Internet.” The memo said workers would review messages reported by users that “include predefined keywords typically used in sextortion/blackmail messages.”
WhatsApp’s review system is hampered by impediments, including buggy language translation. The service has users in 180 countries, with the vast majority located outside the US. Even though Accenture hires workers who speak a variety of languages, for messages in some languages there’s often no native speaker on site to assess abuse complaints. That means using Facebook’s language-translation tool, which reviewers said could be so inaccurate that it sometimes labelled messages in Arabic as being in Spanish. The tool also offered little guidance on local slang, political context or sexual innuendo.
“In the three years I’ve been there,” one moderator said, “it’s always been horrible.”
The process can be rife with errors and misunderstandings. Companies have been flagged for offering weapons for sale when they are selling straight shaving razors. Bras can be sold, but if the marketing language registers as “adult”, the seller can be labelled a forbidden “sexually oriented business”. And a flawed translation tool set off an alarm when it detected kids for sale and slaughter, which, upon closer scrutiny, turned out to involve young goats intended to be cooked and eaten in halal meals.
The system is also undercut by the human failings of the people who instigate reports. Complaints are frequently filed to punish, harass or prank someone, according to moderators. In messages from Brazil and Mexico, one moderator explained, “we had a couple of months where AI was banning groups left and right because people were messing with their friends by changing their group names” and then reporting them. “At the worst of it, we were probably getting tens of thousands of those. They figured out some words the algorithm did not like.”
Other reports fail to meet WhatsApp standards for an account ban. “Most of it is not violating,” one of the moderators said. “It’s content that is already on the internet, and it’s just people trying to mess with users.”
Still, each case can reveal up to five unencrypted messages, which are then examined by moderators. The judgment of WhatsApp’s AI is less than perfect, moderators say. “There were a lot of innocent photos on there that were not allowed to be on there,” said Carlos Sauceda, who left Accenture last year after nine months. “It might have been a photo of a child taking a bath, and there was nothing wrong with it.” As another WhatsApp moderator put it, “A lot of the time, the artificial intelligence is not that intelligent.”
Facebook’s written guidance to WhatsApp moderators acknowledges many problems, noting “we have made mistakes and our policies have been weaponised by bad actors to get good actors banned. When users write inquiries pertaining to abusive matters like these, it is up to WhatsApp to respond and act (if necessary) accordingly in a timely and pleasant manner.”
If a user appeals a ban that was prompted by a user report, according to one moderator, the appeal results in a second moderator examining the user’s content.
Industry Leaders
In public statements and on the company’s websites, Facebook is noticeably vague about WhatsApp’s monitoring process. The company does not provide a regular accounting of how WhatsApp polices the platform. WhatsApp’s FAQ page and online complaint form note that it will receive “the most recent messages” from a user who has been flagged. They do not, however, disclose how many unencrypted messages are revealed when a report is filed, or that those messages are examined by outside contractors. (WhatsApp told ProPublica it limits that disclosure to keep violators from “gaming” the system.)
By contrast, both Facebook and Instagram post lengthy “Community Standards” documents detailing the criteria their moderators use to police content, along with articles and videos about “the unrecognised heroes who keep Facebook safe” and announcements of new content review sites. Facebook’s transparency reports detail how many pieces of content are “actioned” for each type of violation. WhatsApp is not included in these reports.
When dealing with legislators, Facebook officials also offer few details — but are eager to assure them that they don’t let encryption stand in the way of protecting users from images of child sexual abuse and exploitation.
For example, when members of the Senate Judiciary Committee grilled Facebook about the impact of encrypting its platforms, the company, in written follow-up questions in January 2020, cited WhatsApp in boasting that it would remain responsive to law enforcement. “Even within an encrypted system,” one response noted, “we will still be able to respond to lawful requests for metadata, including potentially critical location or account information… We already have an encrypted messaging service, WhatsApp, that — in contrast to some other encrypted services — provides a simple way for people to report abuse or safety concerns.”
Sure enough, WhatsApp reported 400,000 instances of possible child-exploitation imagery to the National Center for Missing and Exploited Children in 2020, according to its head, Cathcart. That was ten times as many as in 2019. “We are by far the industry leaders in finding and detecting that behaviour in an end-to-end encrypted service,” he said.
During his YouTube interview with the Australian think tank, Cathcart also described WhatsApp’s reliance on user reporting and its AI systems’ ability to examine account information that isn’t subject to encryption.
Asked how many staffers WhatsApp employed to investigate abuse complaints from an app with more than two billion users, Cathcart didn’t mention content moderators or their access to encrypted content. “There’s a lot of people across Facebook who help with WhatsApp,” he explained. “If you look at people who work full time on WhatsApp, it’s above a thousand. I won’t get into the full breakdown of customer service, user reports, engineering, etc. But it’s a lot of that.”
In written responses for this article, the company spokesperson said: “We build WhatsApp in a manner that limits the data we collect while providing us tools to prevent spam, investigate threats, and ban those engaged in abuse, including based on user reports we receive. This work takes extraordinary effort from security experts and a valued trust and safety team that works tirelessly to help provide the world with private communication.”
The spokesperson noted that WhatsApp has released new privacy features, including “more controls about how people’s messages can disappear” or be viewed only once. He added, “Based on the feedback we’ve received from users, we’re confident people understand when they make reports to WhatsApp we receive the content they send us.”
Additional reporting by Alex Mierjeski and Doris Burke.
This story was originally published by ProPublica.
opticien2-0 · 5 years ago
85% of the world’s grocery businesses lack the ability to monetise their data and drive customer experience, study shows
Grocery failing to deliver experiences due to lack of data skills
85% of grocery retailers worldwide lack the capabilities, technology, people and processes to use insights to monetise their data and drive customer experience in the $5.9 trillion global grocery retail market.
  Despite the apparent barriers, the majority (82%) of UK grocery retailers view growing revenues as their top priority in 2020, and plan to do so by improving their use of data insights to develop customer strategies (78%) and to make business decisions (75%).
  So finds a new study out today from dunnhumby, The Future of Retail Revenues Must Be Data Led, which was conducted this month by Forrester.
  “The global grocery market is in a fight for survival against pure play and other non-traditional competitors, who are further squeezing razor-thin margins,” says David Clements, Global Retail Director at dunnhumby. “We commissioned this study to better understand why so many retailers aren’t taking advantage of new revenue streams, while improving the shopping experience for their customers. We believe the study findings highlight the growing importance of the role of customer data in attaining sustainable growth.”
  According to Forrester’s report How Customer Experience Drives Business Growth, 2018, retailers globally are missing out on more than $200 million in revenue on average, through failing to improve their customer experience.
The latest study adds to this by finding that just 15% of global grocery retailers are Leaders, differentiated by data-led customer strategies for growth and improved supplier relationships, while the majority lag behind. Forrester uncovered three levels of maturity in grocery retail: leaders (15%), intermediate (55%) and novice (30%).
Leaders set themselves apart through improved CPG supplier collaboration: (1) sharing customer data insights, (2) providing insights and measurement solutions to support media planning, and (3) negotiating retail media placements as part of their annual trade agreements. Regionally, retailers in the United States, Brazil, Italy, the United Kingdom and Thailand stand out as early leaders in developing customer strategies that drive revenue growth.
The report also shows that 96% of retailers experience challenges when trying to use data to develop customer strategies that drive growth. The main concern for UK retailers is a lack of data management tools or technology (40%). UK retailers face several other obstacles, including a lack of accurate, consistent or complete customer data (37%), an inability to harmonise data and recommendations across channels, banners, brands and locations (35%), and an inability to tie together internal and external data (30%).
Most grocers globally are not capitalising on the revenue potential of customer data and in-store/online media channels, the study finds. In the UK, just over half (57%) of respondents use mobile app data, and only 42% use customer data, such as loyalty programme data, to make decisions about customers. Use of other sources is similarly low: point-of-sale (37%), promotions data (35%), customer location (43%) and web metric/clickstream data (37%).
The majority of grocers leave money on the table by not monetising media assets. Only 35% of UK grocers currently sell branding opportunities on their websites, 32% sell them in print media, and only a quarter offer media placements in-store. Globally, only 31% of grocers sell branding opportunities on their mobile apps, with higher rates in the UK (37%), Brazil (49%), China (47%) and Spain (38%), where apps are used more frequently.
  All UK grocers who offer media opportunities for CPG suppliers on their apps saw an increase in revenue over the last 12 months, with 59% seeing an increase of more than 10%. In addition, 93% of UK firms offering ads and branding opportunities on their websites saw an increase in revenue from this channel.
  “Time is of the essence for grocery retailers to activate the data they already have to improve the customer experience and create new revenue streams that can support their businesses into the future,” said Clements. “There is a significant amount of untapped revenue in grocery. Those that unlock the potential of their data and media assets, and improved supplier collaboration, will thrive. Retailers that fail to adapt will fall behind in the increasingly competitive marketplace.”
Source: InternetRetailing (https://ift.tt/2Ov7f25)
jacobwill176 · 4 years ago
The “Global APP Store Monetisation Market” report focuses on the market status, future forecast, growth opportunities, market trends and leading players.
Global APP Store Monetisation Market 2018 by Industry Research, Growth, Segmentation, Key Players Analysis and Forecast 2025
An exhaustive study of the “Global APP Store Monetisation Market” has been added on Ameco Research. The report covers the state of the market and its growth forecast over the coming years. It also includes a review of the key vendors operating in this market.
In 2017, the global APP Store Monetisation market size was US$ xx million, and it is expected to reach US$ xx million by the end of 2025, with a CAGR of…
nchyinotes · 6 years ago
AdWeek 2018 - GDPR Sessions
March 22 2018
GDPR Threat Versus Global Privacy Reality
Speakers:
Catherine Armitage: Senior Public Affairs Manager, World Federation of Advertisers
Dan Burdett: Head of EMEA Marketing Lab, eBay
William Long: Partner, Sidley Austin
Angelika Westphal: GDPR Marketing Lead Europe, Mondelez International
Notes:
eBay: we need to do a lot of organisation for management of data, because we are a business that really runs on data
AI powers a lot of decision making - ie. patterns between buyer/seller data for best search results/items to come up
a lot of data we have has been v unstructured - so now a big effort to create a taxonomy, structure it so we can compare
moving from listing-based search results to a product graph —> seamless search, you just get the product you want + similar items or accessories
Have been preparing for GDPR since 2016, auditing all departments around the world, use cases on all gaps (privacy teams), gap analysis. Since may 2017 have been working to close all the gaps to make sure its compliant.  
We have binding corporate rules in place since 2012 - minimise risk faced by GDPR
binding corporate rules: way company can get their practices approved by regulators. data can be transferred from europe etc
GDPR is a forcing function basically
you’ve got to be doing right by your customers
impact on consumers and trust - less financially tangible impact, reputation of brand & business > fines
mondelez:
more classical FMCG company, data has always been at the heart because all campaigns are rooted in it + actionable insights
what type of data points do we have, how do we capture them? a lot of it is sitting in different departments/silos (like ebay)
GDPR is not just an administrative burden, but an opportunity to build more trust - important as marketing function
also started preparing in 2016, cross functional project team started in 2017, currently closing off all efforts (mapping, reviewing systems)
Data is a valuable asset in communicating to consumers in a meaningful way
william
fines are a huge driver
Will mean a much greater discussion between groups/departments in company, any campaigns involving big data will involve legal team ? much more emphasis in beginning of process, rather than at the end (privacy by design)
e-privacy regulation - not really talked about in general media, directly aimed at electronic comms
belgium - still sorting out what the regulator will look like with its new powers
top 3 concerns among marketing firms, in a survey: trust, brand reputation, consumers trust with brand —> all about trust
mechanisms, standards, industry codes
no benchmarking about how GDPR will pan out once it’s live - year after they go live is v important for everyone
examples of campaigns using insights from aggregated data sets to inform creative, in last 12 months:
campaign in australia by snickers
found that certain mood types (anger, annoyance) lead to certain snacking behaviours
used twitter to scrape the emotional sentiment of australia for 3 months
reduced the price of snickers when the monitored sentiment was predominantly angry (see the sketch after this list)
campaign by Lynx: “is it okay for …”
about masculinity today
aggregated search data from google, on topics that were front of mind for guys today. -- used common searches by guys that start with “is it okay for…”
spotify used commonly created playlists to create engaging outdoor campaigns - e.g. “it’s the end of the world as we know it” (brexit?? “we’re still here for you”)
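a rough sketch of the snickers-style mechanic above - aggregate sentiment mapped to a discount; every number and the mapping are invented for illustration:

```python
# Sketch of sentiment-driven pricing: angrier aggregate mood -> bigger
# discount. Mapping and numbers are invented for illustration.
def discounted_price(base_price: float, anger_level: float) -> float:
    """anger_level in [0, 1], e.g. from scraped social sentiment;
    discounts up to 50% as anger rises."""
    anger_level = max(0.0, min(1.0, anger_level))
    return round(base_price * (1 - 0.5 * anger_level), 2)

print(discounted_price(1.20, 0.1))  # calm day  -> 1.14
print(discounted_price(1.20, 0.9))  # angry day -> 0.66
```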
Possibility of moving away from really personalised approach using personal data, and using aggregated/pseudonymous data?
trying to look for other types of data sets that are more anonymous?
how should this function be split between marketing and legal?
Both ebay and mondelez have privacy champions from every function, who are the first line of defence - all are trained by privacy team in easy non technical questions
important for all depts to own the solution
But all have a privacy team that work on the issue predominantly
will definitely change brand/industry etc, but not sure if GDPR itself will genuinely change consumers behaviour.
but news may have higher impact, facebook in the press - will that affect people’s decisions in opting in, does the drop off rate look significant?
there should be enough of a movement that requests will come of GDPR
giving people their rights - right of erasure, portability. will we be exercising them? who knows
legitimate interest
in ePrivacy - you cannot rely on legitimate interest under the new cookie rules
not really about “can we being doing this with the info”, but should we.
about customer trust. - ethical questions being asked over time, not as much about lawyers to answer, but business.
eBay: a lot of unsupervised algorithms send out the emails ??
does future DPA have to be trained in data ethics as well, and not just the law?
What does GDPR mean for your business?
Speakers:
Somer Simpson: Lead Product Manager, Quantcast
Struan Bartlett: Founder & CEO, NewsNow.co.uk
Notes:
quantcast has been working with IAB to work on the framework for 3 months - publishers, advertisers, tech, mar/adtech companies came to a consensus
gdpr overview
universal truths:
1) consumers have a right to privacy - we’re all consumers, we don’t want our privacy to be abused
2) a free press is not free - in order for consumers to continue to have access they want/need to news/info, publishers need to have a way to pay for that (ie. digital advertising)
—> Need to balance: 1) consumers privacy rights & access to content, with 2) publisher ability to create & monetise content
—> business as usual is not an option (the wild west)
background
1) strengthens data protection for individuals
2) regulates the processing of personal data (including online identifiers) by companies
3) harmonizes data protection regulation in the EU
4) EU regulation with global impact (so any company with traffic from the EU)
themes
1) privacy by design - put privacy first in the way you develop all mechanisms
2) transparency about data practices - consumers need to know who is tracking them and for what purpose
3) choice and control over personal data
transparency, choice and control
GDPR expands the definition and regulates the processing of personal data
consent is one of 6 legal grounds for processing personal data (not always needed)
current privacy directive (cookie directive requires consent to set a cookie, access or store info on device - most cookie based data collection for advertising will require GDPR consent)
GDPR changes the consent standard under the ePrivacy directive
what is consent?
freely given, specific, informed, unambiguous indication of agreement by statement or by a clear affirmative action
robust info disclosure requirements - identifying data controllers and purposes
companies must be able to demonstrate consent through a record
easily revocable by consumer - clear, persistent ability to access and change their stuff
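a minimal sketch of what a demonstrable, revocable consent record could look like - the fields are illustrative, not taken from the regulation or any industry framework:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """Demonstrable consent: who agreed, to which controller,
    for which disclosed purposes, when - and whether it was revoked."""
    user_id: str
    controller: str            # data controller identified to the user
    purposes: list[str]        # specific purposes disclosed at collection
    given_at: datetime
    revoked_at: Optional[datetime] = None

    def revoke(self) -> None:
        # "easily revocable": revocation is a first-class state change
        self.revoked_at = datetime.now(timezone.utc)

    @property
    def active(self) -> bool:
        return self.revoked_at is None

rec = ConsentRecord("u123", "example-publisher.co.uk",
                    ["analytics", "personalised ads"],
                    datetime.now(timezone.utc))
rec.revoke()
assert not rec.active
```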
publishers
in the way it’s written, publishers are controllers
more responsibility
transparency for you and your users
opportunity from choice - to understand who your partners have relationships with, who they have relationships with, etc.
opp to understand who all is playing in the area and how that might impact revenue.
balancing revenue needs with transparency, control and understanding of who’s playing in your playground and how to control that
potential revenue impact
advertisers
transparency + consent for your websites and apps
retargeting, behavioural targeting - needs consent for cookies and pixels on your own site that allow that to happen
transparency + consent for your campaigns
3rd party providers hired for additional campaign info
have direct convos with publishers, this is who i work with, this is important to me, build up that list
adtech
we are why we are here
step up to do what’s right for the whole ecosystem
consumer rights
partner with publishers
revenue implications
we all make money off services we provide for advertisers/publishers
should i stay or should i go?
opportunity to clean up and build trust
industry transparency & consent framework
common industry standards: fragmentation will lead to inefficient and poor consumer experiences
effective, efficient, neutral industry governance
simple policies around use of the new technical standards to ensure mutual trust and reassurance
core elements (of quantcast choice)
industry wide standard where digital content and ecosystems work together
open source, non commercial - for good of industry and consumer
publisher centric tool, giving transparency, control and flexibility to publishers
pro consumer tool, providing transparency, choice and control
quantcast choice
3rd party management - options for managing list of companies that can track consumers through a publisher’s site
customisable UX
easy implementation
free
important for quantcast/industry to get a solution out there with lots of adoption, so that business is disrupted as little as possible (taking weight off publishers, continuing to connect advertisers with consumers)
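a toy version of the publisher-centric control described above - a vendor’s tag fires only if the publisher allows the vendor and the user consented to the purpose (plain python, not the actual quantcast choice or IAB API):

```python
# Toy consent gate: publisher allowlist AND per-vendor user consent.
PUBLISHER_ALLOWED_VENDORS = {"measurement-co", "ads-exchange"}

def may_fire(vendor: str, purpose: str,
             user_consents: dict[str, set[str]]) -> bool:
    """True only when the publisher allows this vendor and the user
    consented to this vendor processing data for this purpose."""
    return (vendor in PUBLISHER_ALLOWED_VENDORS
            and purpose in user_consents.get(vendor, set()))

consents = {"measurement-co": {"analytics"}}  # captured by the consent UI
print(may_fire("measurement-co", "analytics", consents))       # True
print(may_fire("ads-exchange", "personalised ads", consents))  # False
print(may_fire("unknown-tracker", "analytics", consents))      # False
```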
q&a
online behavioural advertising probably under a lot of pressure
“potentially an ass covering activity"
minefield of vagueness and uncertainty
nobody is really ready for this (DPAs), some legal advisors saying it’s not possible to comply totally?
PR advantage - credibility, trust, buy in from users
top 3 concerns from publishers
1) cost of everything - appointing DPA, legal counsel, advice fees
2) potential reputational damage for breaches (that may not be their fault - upstream parties that you don’t know, we have no idea who’s running what code. when people misbehave in that ecosystem, publishers are the one that takes the hit.)
3) not being ready in time + having systems in place (ie. basis on which you are processing some personal data - you decide it’s consent based, and later on want to switch to legit interest?? v tricky if impossible to do)
programmatic advertising - so many unknowns, we are only as ready as the industry partners we’re working with. we need ssp's, analytics suppliers to be ready.
working out what we need to rely on to get legitimate consent, how to best secure consent
where can we do without personal data altogether?
not just a UX thing, but a industry problem on how they’re actually using this data
bundling consent under very broad headings - agree on a common set of purposes where they are considered compliant, but not too many ??
Questioning partnerships: do we really want these people to be mentioned/shown on our website? is that good for our brand?
we don’t even know how much of our revenue currently depends on personal data - to quantify risk
mywiseguysreports · 7 years ago
portunities, Growth- Analysis to 2025 – ABNewswire – Press Release Distribution Service – Paid Press Release Distribution Newswire
jacobwill176 · 4 years ago
This report studies the global APP Store Monetisation market, analysing and researching the APP Store Monetisation development status and forecast in the United States, EU, Japan, China, India and Southeast Asia. It focuses on the top players in the global market, like Apple