#stop giving up on your partners with disabilities before making accessibility provisions.
manty-monster · 2 years ago
It's always messed me up how manageable some symptoms of BPD could be with the right communication and understanding of the mental illness.
It's a disability, and like many disabilities, there are accessibility options, and there are concessions that can be made, knowledge that can be expanded on. Having a relationship when you have a disability can be work; sometimes, you or your partner may have to help each other. This shouldn't be different for mental disabilities.
So I wanted to come up with some basic practices that can help you if you or your loved one has BPD.
Disclaimer: this is one person with BPD's opinion and may not be true for everyone. Communication is key, and BPD is a complex but manageable illness.
Understanding what a trigger is, neurobiologically. When a person with BPD is triggered, their brain floods with chemicals, driving them into panic. The things they say and do should be weighed with that in mind. This is not to say you should dismiss what's being said. There is still a conscious mind behind the words, and the topics that come up may hint at a core insecurity that should be discussed later, but understand that your loved one may not necessarily be in a clear state of mind.
Stop responding to everything at face value. Building off #1: once you recognize that your partner is emotionally compromised by a disorder that creates black-and-white thinking (aka splitting) and heightened emotional responses, you can't respond to everything the same way you would respond to a casual question. If your partner hits you with the classic "Are you mad at me?", that should be a cue to try and explore what's behind it more deeply.
Initiate open communication. ☆ Which brings us to communication. Open, loving communication has to come from a place of empathy first and foremost. It requires briefly stepping into your partner's shoes in communication. It's not easy to steel your immediate reaction when someone says something untrue or hurtful to you, but it does become easier if you can recognize the emotional meaning behind words as well as the literal meaning. "Are you mad?" becomes helpful inside-shorthand for "Hey, I'm feeling insecure right now. Could you help me manage that?" rather than a frustrating phrase. I had to put a star there because holy shit is it important to understand emotive communication, heightened emotions, and cognitive empathy/perspective-taking when communicating with someone with BPD. This one's gonna involve some metacognition, folks.
Calming techniques. As you learn more about each other, try to include learning what calms you or your partner down. Comforting or soothing actions can help the chemicals from a trigger or a split dissipate faster. Learning what makes your partner calm or happy will go a long way toward easy, caring management of some symptoms. While things like "Please calm down" can make things much worse, a simple "Can you tell me about [aspect of their special interest]?", "Do you want me to turn my webcam on?", "Can I put [favorite band/show] on?", or "Do you want to be held?" is much more personal, shows you have an interest in helping them feel better, and can defuse a situation. Context matters, of course. Sometimes all that's needed is "I'm listening, I love you."
Understand your partner's symptoms. Looking up the symptoms of BPD and understanding them is crucial to understanding what is going on. For instance, people with BPD have a warped sense of object permanence, and sending small messages while you're away can be a way to manage this. Rejection sensitivity, which is also seen in other neurodivergences like ADHD, autism, and CPTSD (which shares like 99% of its symptoms and causes with BPD), can be managed by establishing a vocabulary together to navigate rejection, trust, and symptom recognition.
Understand your partner. Every person is different, and their history and trauma are unique. Some people with BPD were neglected and abandoned, while others were parentified or victims of other forms of abuse. Many people with BPD have other comorbid neurodivergences. It's important to be curious about your own and your partner's minds. Preventing a trigger is much better than resolving one. Knowing the things that could cause yourself or your partner to split or experience another symptom can allow you to discuss it beforehand, even set up a plan to prevent it. This can include making plans for things to do in your or their absence, having an object to hold to help remember that you or they are loved (such as a bracelet or stuffed animal), or setting an alarm or using post-it notes to remember important dates or schedules. Using self-aids is a good thing!
TLDR: So much of BPD can be almost totally mitigated with empathy, pre-planning, and understanding. Having a partner with a mental illness isn't always easy, but we could be doing a lot better for people with CPTSD/BPD, and frankly for anyone with a mental or physical disability, than we are right now with "leave him, sis" dating culture. Obviously, this whole post depends on both people being able to introspect enough to enact these things. If you have BPD, mindfulness, CBT (the therapy kind), and DBT can be very helpful for consciously managing your way through triggers.
Sources:
https://pubmed.ncbi.nlm.nih.gov/35357883/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3182008/
https://www.nami.org/Blogs/NAMI-Blog/January-2022/Understanding-Mental-Illness-Triggers
https://bpspsychub.onlinelibrary.wiley.com/doi/10.1111/bjc.12216
https://www.nimh.nih.gov/health/topics/borderline-personality-disorder
https://en.wikipedia.org/wiki/Parentification
https://www.psychologytoday.com/us/blog/understanding-ptsd/202006/is-it-borderline-personality-disorder-or-is-it-really-complex-ptsd
https://psychcentral.com/ptsd/how-ptsd-cptsd-and-bpd-can-impact-relationships
https://mark-havens.medium.com/understanding-cognitive-empathy-the-key-to-better-relationships-and-communication-8b3ea7a4370c
https://www.youtube.com/@HowtoADHD
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6026651/
https://www.sciencedirect.com/topics/psychology/affective-perspective-taking
you made it this far, so here's a cool storm i saw from a plane
payprosalaska · 5 years ago
Court Puts Kibosh on Policy Requiring Two Calls to Request FMLA Leave
A few months ago I wrote that employers can and should consider requiring that employees make two calls to request leave under the Family and Medical Leave Act (FMLA). For instance, you might require one call to the supervisor to report the absence, and a second call to Human Resources (or your third-party administrator) to request FMLA leave.
All good, right?
Well, let me share a cautionary tale for those who have implemented or are contemplating this two-call requirement, because one federal court just threw us a curve ball. (Moore v. GPS Hospitality Partners IV, LLC, S.D. Ala. June 3, 2019.)
The Facts
LaShondra Moore was employed at a local Burger King restaurant owned by the defendant, and during her Saturday shift, she told her boss that her mom was in a “life-or-death situation that required surgery,” and that she needed “a week off” to be with her. In response, her supervisor told her to “take all the time” she needed.
She stayed in touch with her boss about her continued absence for a few days, but then was spotty in her communications on several other days the following week. It was not until the following Wednesday that Moore asked her supervisor for FMLA leave. In the meantime, however, she had a no-call, no-show that same Wednesday and, although the reasons for her termination the following week were unclear, the no-call, no-show surely was a key factor.
Under the Burger King FMLA policy, which was outlined in the restaurant’s employee handbook, employees like Moore were obligated to contact both their supervisor and Human Resources to request FMLA leave. In this instance, Moore called her supervisor, but did not call HR to request FMLA leave as required in the policy.
In defending against Moore’s eventual FMLA claims, the restaurant pointed to her failure to comply with both components of the notice requirements of the FMLA policy. Although Moore may have alerted her supervisor, she failed to follow the second part of the notice requirement—contacting Human Resources to request FMLA leave.
How Did This One Turn Out?
Over the past few years, employers have scored victory after victory where they have implemented a two-phone-call notice requirement and the employee has, in turn, not followed the procedure. As I noted in my previous post on this topic, numerous federal appellate courts have upheld the employer’s right to maintain this rigorous notice obligation.
Not this court.
After analyzing the notice provisions of the FMLA regulations (and preamble!) in painstaking detail, the court rejected the restaurant’s argument that Moore’s failure to notify Human Resources precluded her from taking FMLA leave. Specifically, the court held that an employer can maintain a “two-call-in” requirement only if this approach applies across the board for all leave requests. In other words, this court determined that an employer cannot deny FMLA leave based on an FMLA notice requirement that includes more procedural hurdles than what the employer requires for other types of leave.
Sadly, the court didn’t stop there, as it found there were unusual circumstances that prevented Moore from following the call-in requirements anyway. Notably, the court found it unreasonable to expect Moore to have read and understood the obligations contained in the FMLA policy, since she had been given access to the new employee handbook (with the 2.5-page FMLA policy contained therein) only two months earlier and she “didn’t have time” to review the policy.
Curiously, the court also appeared concerned that the employee did not receive an actual hard copy of the handbook, though it was readily accessible to Moore in an online format.
Insights for Employers
I had a visceral reaction to this decision after I read it; my knee-jerk impulse was to wad it up and throw it in the garbage can.
Let me explain.
As an initial matter, the court failed to recognize that the FMLA, by its very own bureaucratic terms, demands that employers and employees alike assume a host of somewhat challenging and time-consuming obligations that simply aren’t required in an ordinary sick-leave situation. Indeed, the 2009 regulatory changes made clear that those amendments placed several additional responsibilities on employees that do not apply in a typical sick-leave situation.
Moreover, from a practical standpoint, it’s quite common for employers to have several different processes for requesting sick leave vs. paid time off vs. vacation vs. short-term disability vs. military leave vs. FMLA leave. So which of these processes should an employer select to remain compliant with this court’s decision?
Following this decision could lead to absurd results, though we need to give it due consideration (see the recommendations below).
Then, there’s the issue of the employee handbook. How long should employees have to acquaint themselves with a handbook before the employer can start enforcing its provisions? 6 months? 12 months? Perhaps longer if employees can show they “didn’t have time” to review it? Where is the personal accountability here? Can you imagine the lawless workplaces we’d encounter if employers were handcuffed from enforcing reasonable provisions in an employee handbook? This kind of judicial officiating doesn’t operate in reality.
I haven’t even gotten to the point that several other appellate courts have found this two-call policy perfectly appropriate. How much weight do we give this decision against those several other, persuasive decisions?
Perhaps not much. But let’s be careful. This decision reminds us of a few important principles:
Whenever possible, align paid-leave procedures with your FMLA procedures. There is much here to suggest that this case could be limited in persuasive value because of its distinguishable facts, but let’s use it for what it’s worth—we’re in a more defensible position when our procedures for requesting leave of any kind align.
Managers must have an understanding of their role in the FMLA process. Although I did not focus much on the managers’ response to Moore’s eventual request for FMLA leave, the reaction is not going to win any best-practice awards. In fact, their reaction to her request for leave was pretty horrible and made it fairly clear to me that they didn’t have a clue about their responsibilities under the FMLA. FMLA training is critical. Don’t push it off.
Managers must be able to recognize when an employee’s request is potentially for an FMLA-qualifying reason and to take steps to ensure that neither the supervisor nor the staff interferes with an employee taking leave protected by the law.
On that same note, one of the quirky facts about this case was the FMLA policy’s requirement that a manager, when informed of the need for FMLA leave, was obligated to advise the employee to go to Human Resources to make the FMLA request. Get this kind of stuff out of your FMLA policy! Don’t put responsibility on the manager to respond in this way, because once they don’t, you’re on the hook for the breakdown. Keep the responsibility always on the employee to report the need for FMLA leave.
That doesn’t mean that managers are off the hook—they must be trained on how to properly handle an FMLA request (see above!), which should include counseling the employee to report the absence per the employer’s absence policy, but the policy should not bind the manager to respond in a certain manner.
As we see here, the court took issue with the fact that the FMLA policy required the manager to act in such a manner, but he didn’t do so. This artificial, procedural hurdle created yet another problem for this employer.
This decision gives heartburn to employers that use third-party administrators, as there are very few TPAs that handle all the leave administration for an employer (another reason why this decision makes no practical sense). Employers should consider whether leave requests generally should flow through a common location, such as a TPA or Human Resources.
Jeff Nowak is a shareholder at Littler, an employment and labor law practice representing management, and author of the FMLA Insights blog, where this article originally appeared in a slightly different form. © 2019 Jeff Nowak. All rights reserved. Republished with permission.
atakportal · 6 years ago
We dismantle Facebook’s memo defending its “Research” – TechCrunch
Facebook published an internal memo today trying to minimize the morale damage of TechCrunch’s investigation that revealed it’d been paying people to suck in all their phone data. Obtained by Business Insider’s Rob Price, the memo from Facebook’s VP of production engineering and security Pedro Canahuati gives us more detail about exactly what data Facebook was trying to collect from teens and adults in the US and India. But it also tries to claim the program wasn’t secret, wasn’t spying, and that Facebook doesn’t see it as a violation of Apple’s policy against using its Enterprise Certificate system to distribute apps to non-employees — despite Apple punishing it for the violation.
For reference, Facebook was recruiting users ages 13-35 to install a Research app and VPN and to give it root network access so it could analyze all their traffic. It’s pretty sketchy to be buying people’s privacy, and despite being shut down on iOS, the program is still running on Android.
Here we lay out the memo with section-by-section responses to Facebook’s claims challenging TechCrunch’s reporting. Our responses follow each section of the memo, marked “Author’s response,” with image captions noted.
Memo from Facebook VP Pedro Canahuati
APPLE ENTERPRISE CERTS REINSTATED
Early this morning, we received agreement from Apple to issue a new enterprise certificate; this has allowed us to produce new builds of our public and enterprise apps for use by employees and contractors. Because we have a few dozen apps to rebuild, we’re initially focusing on the most critical ones, prioritized by usage and importance: Facebook, Messenger, Workplace, Work Chat, Instagram, and Mobile Home.
New builds of these apps will soon be available and we’ll email all iOS users for detailed instructions on how to reinstall. We’ll also post to iOS FYI with full details.
Meanwhile, we’re expecting a follow-up article from the New York Times later today, so I wanted to share a bit more information and background on the situation.
What happened?
On Tuesday TechCrunch reported on our Facebook Research program. This is a market research program that helps us understand consumer behavior and trends to build better mobile products.
TechCrunch implied we hid the fact that this is by Facebook – we don’t. Participants have to download an app called Facebook Research App to be involved in the study. They also characterized this as “spying,” which we don’t agree with. People participated in this program with full knowledge that Facebook was sponsoring this research, and were paid for it. They could opt out at any time. As we built this program, we specifically wanted to make sure we were as transparent as possible about what we were doing, what information we were gathering, and what it was for — see the screenshots below.
We used an app that we built ourselves, which wasn’t distributed via the App Store, to do this work. Instead it was side-loaded via our enterprise certificate. Apple has indicated that this broke their Terms of Service so disabled our enterprise certificates which allow us to install our own apps on devices outside of the official app store for internal dogfooding.
Author’s response: To start, “build better products” is a vague way of saying determining what’s popular and buying or building it. Facebook has used competitive analysis gathered by its similar Onavo Protect app and the Facebook Research app for years to figure out which apps were gaining momentum and either bring them in or box them out. Onavo’s data is how Facebook knew WhatsApp was sending twice as many messages as Messenger, and that it should invest $19 billion to acquire it.
Facebook claims it didn’t hide the program, but it was never formally announced like every other Facebook product. There were no Facebook Help pages, blog posts, or support info from the company. It used intermediaries Applause (which owns uTest) and CentreCode (which owns Betabound) to run the program under names like Project Atlas and Project Kodiak. Users only found out Facebook was involved once they started the sign-up process and signed a non-disclosure agreement prohibiting them from discussing it publicly.
TechCrunch has reviewed communications indicating Facebook would threaten legal action if a user spoke publicly about being part of the Research program. While the program had run since 2016, it had never been reported on. We believe that these facts combined justify characterizing the program as “secret.”
The Facebook Research program was called Project Atlas until you signed up
How does this program work?
We partner with a couple of market research companies (Applause and CentreCode) to source and onboard candidates based in India and USA for this research project. Once people are onboarded through a generic registration page, they are informed that this research will be for Facebook and can decline to participate or opt out at any point. We rely on a 3rd party vendor for a number of reasons, including their ability to target a Diverse and representative pool of participants. They use a generic initial Registration Page to avoid bias in the people who choose to participate.
After generic onboarding people are asked to download an app called the ‘Facebook Research App,’ which takes them through a consent flow that requires people to check boxes to confirm they understand what information will be collected. As mentioned above, we worked hard to make this as explicit and clear as possible.
This is part of a broader set of research programs we conduct. Asking users to allow us to collect data on their device usage is a highly efficient way of getting industry data from closed ecosystems, such as iOS and Android. We believe this is a valid method of market research.
Author’s response: Facebook claims it wasn’t “spying,” yet it never fully laid out the specific kinds of information it would collect. In some cases, descriptions of the app’s data-collection power were included in merely a footnote. The program did not specify the data types gathered, only saying it would scoop up “which apps are on your phone, how and when you use them” and “information about your internet browsing activity.”
The parental consent form from Facebook and Applause lists none of the specific types of data collected or the extent of Facebook’s access. Under “Risks/Benefits”, the form states “There are no known risks associated with this project however you acknowledge that the inherent nature of the project involves the tracking of personal information via your child’s use of Apps. You will be compensated by Applause for your child’s participation.” It gives parents no information about what data their kids are giving up.
Facebook claims it uses third-parties to target a diverse pool of participants. Yet Facebook conducts other user feedback and research programs on its own without the need for intermediaries that obscure its identity, and only ran the program in two countries. It claims to use a generic signup page to avoid biasing who will choose to participate, yet the cash incentive and technical process of installing the root certificate also bias who will participate, and the intermediaries conveniently prevent Facebook from being publicly associated with the program at first glance. Meanwhile, other clients of the Betabound testing platform like Amazon, Norton, and SanDisk reveal their names immediately before users sign up.
Facebook’s ads recruiting teens for the program didn’t disclose its involvement
Did we intentionally hide our identity as Facebook?
No — The Facebook brand is very prominent throughout the download and installation process, before any data is collected. Also, the app name of the device appears as “Facebook Research” — see attached screenshots. We use third parties to source participants in the research study, to avoid bias in the people who choose to participate. But as soon as they register, they become aware this is research for Facebook
Author’s response: Facebook here admits that users did not know Facebook was involved before they registered.
What data do we collect? Do we read people’s private messages?
No, we don’t read private messages. We collect data to understand how people use apps, but this market research was not designed to look at what they share or see. We’re interested in information such as watch time, video duration, and message length, not the actual content of videos, messages, stories or photos. The app specifically ignores information shared via financial or health apps.
Author’s response: We never reported that Facebook was reading people’s private messages, but that it had the ability to collect them. Facebook here admits that the program was “not designed to look at what they share or see,” but stops far short of saying that data wasn’t collected. Fascinatingly, Facebook reveals that it was closely monitoring how much time people spent on different media types.
Facebook Research abused the Enterprise Certificate system meant for employee-only apps
Did we break Apple’s terms of service?
Apple’s view is that we violated their terms by sideloading this app, and they decide the rules for their platform, We’ve worked with Apple to address any issues; as a result, our internal apps are back up and running. Our relationship with Apple is really important — many of us use Apple products at work every day, and we rely on iOS for many of our employee apps, so we wouldn’t put that relationship at any risk intentionally. Mark and others will be available to talk about this further at Q&A later today.
Author’s response: TechCrunch reported that Apple’s policy plainly states that the Enterprise Certificate program requires companies to “Distribute Provisioning Profiles only to Your Employees and only in conjunction with Your Internal Use Applications for the purpose of developing and testing” and that “You may not use, distribute or otherwise make Your Internal Use Applications available to Your Customers”. Apple took a firm stance in its statement that Facebook did violate the program’s policies, stating “Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple.”
Given Facebook distributed the Research apps to teenagers that never signed tax forms or formal employment agreements, they were obviously not employees or contractors, and most likely use some Facebook-owned service that qualifies them as customers. Also, I’m pretty sure you can’t pay employees in gift cards.
Source link
0 notes
technicalsolutions88 · 6 years ago
Link
Facebook published an internal memo today trying to minimize the morale damage of TechCrunch’s investigation that revealed it’d been paying people to suck in all their phone data. Attained by Business Insider’s Rob Price, the memo from Facebook’s VP of production engineering and security Pedro Canahuati gives us more detail about exactly what data Facebook was trying to collect from teens and adults in the US and India. But it also tries to claim the program wasn’t secret, wasn’t spying, and that Facebook doesn’t see it as a violation of Apple’s policy against using its Enterprise Certificate system to distribute apps to non-employees — despite Apple punishing it for the violation.
For reference, Facebook was recruiting users age 13-35 to install a Research app, VPN, and give it root network access so it could analyze all their traffic. It’s pretty sketchy to be buying people’s privacy, and despite being shut down on iOS, it’s still running on Android.
Here we lay out the memo with section by section responses to Facebook’s claims challenging TechCrunch’s reporting. Our responses are in bold and we’ve added images.
Memo from Facebook VP Pedro Canahuati
APPLE ENTERPRISE CERTS REINSTATED
Early this morning, we received agreement from Apple to issue a new enterprise certificate; this has allowed us to produce new builds of our public and enterprise apps for use by employees and contractors. Because we have a few dozen apps to rebuild, we’re initially focusing on the most critical ones, prioritized by usage and importance: Facebook, Messenger, Workplace, Work Chat, Instagram, and Mobile Home.
New builds of these apps will soon be available and we’ll email all iOS users for detailed instructions on how to reinstall. We’ll also post to iOS FYI with full details.
Meanwhile, we’re expecting a follow-up article from the New York Times later today, so I wanted to share a bit more information and background on the situation.
What happened?
On Tuesday TechCrunch reported on our Facebook Research program. This is a market research program that helps us understand consumer behavior and trends to build better mobile products.
TechCrunch implied we hid the fact that this is by Facebook – we don’t. Participants have to download an app called Facebook Research App to be involved in the stud. They also characterized this as “spying,” which we don’t agree with. People participated in this program with full knowledge that Facebook was sponsoring this research, and were paid for it. They could opt-out at any time. As we built this program, we specifically wanted to make sure we were as transparent as possible about what we were doing, what information we were gathering, and what it was for — see the screenshots below.
We used an app that we built ourselves, which wasn’t distributed via the App Store, to do this work. Instead it was side-loaded via our enterprise certificate. Apple has indicated that this broke their Terms of Service so disabled our enterprise certificates which allow us to install our own apps on devices outside of the official app store for internal dogfooding.
Author’s response: To start, “build better products” is a vague way of saying determining what’s popular and buying or building it. Facebook has used competitive analysis gathered by its similar Onavo Protect app and Facebook Research app for years to figure out what apps were gaining momentum and either bring them in or box them out. Onavo’s data is how Facebook knew WhatsApp was sending twice as many messages as Messenger, and it should invest $19 billion to acquire it.
Facebook claims it didn’t hide the program, but it was never formally announced like every other Facebook product. There were no Facebook Help pages, blog posts, or support info from the company. It used intermediaries Applause (which owns uTest) and CentreCode (which owns Betabound) to run the program under names like Project Atlas and Project Kodiak. Users only found out Facebook was involved once they started the sign-up process and signed a non-disclosure agreement prohibiting them from discussing it publicly.
TechCrunch has reviewed communications indicating Facebook would threaten legal action if a user spoke publicly about being part of the Research program. While the program had run since 2016, it had never been reported on. We believe that these facts combined justify characterizing the program as “secret”
The Facebook Research program was called Project Atlas until you signed up
How does this program work?
We partner with a couple of market research companies (Applause and CentreCode) to source and onboard candidates based in India and USA for this research project. Once people are onboarded through a generic registration page, they are informed that this research will be for Facebook and can decline to participate or opt out at any point. We rely on a 3rd party vendor for a number of reasons, including their ability to target a Diverse and representative pool of participants. They use a generic initial Registration Page to avoid bias in the people who choose to participate.
After generic onboarding people are asked to download an app called the ‘Facebook Research App,’ which takes them through a consent flow that requires people to check boxes to confirm they understand what information will be collected. As mentioned above, we worked hard to make this as explicit and clear as possible.
This is part of a broader set of research programs we conduct. Asking users to allow us to collect data on their device usage is a highly efficient way of getting industry data from closed ecosystems, such as iOS and Android. We believe this is a valid method of market research.
Author’s response: Facebook claims it wasn’t “spying”, yet it never fully laid out the specific kinds of information it would collect. In some cases, descriptions of the app’s data collection power were included in merely a footnote. The program did not specify specific data types gathered, only saying it would scoop up “which apps are on your phone, how and when you use them” and “information about your internet browsing activity”
The parental consent form from Facebook and Applause lists none of the specific types of data collected or the extent of Facebook’s access. Under “Risks/Benefits”, the form states “There are no known risks associated with this project however you acknowledge that the inherent nature of the project involves the tracking of personal information via your child’s use of Apps. You will be compensated by Applause for your child’s participation.” It gives parents no information about what data their kids are giving up.
Facebook claims it uses third-parties to target a diverse pool of participants. Yet Facebook conducts other user feedback and research programs on its own without the need for intermediaries that obscure its identity, and only ran the program in two countries. It claims to use a generic signup page to avoid biasing who will choose to participate, yet the cash incentive and technical process of installing the root certificate also bias who will participate, and the intermediaries conveniently prevent Facebook from being publicly associated with the program at first glance. Meanwhile, other clients of the Betabound testing platform like Amazon, Norton, and SanDisk reveal their names immediately before users sign up.
Facebook’s ads recruiting teens for the program didn’t disclose its involvement
Did we intentionally hide our identity as Facebook?
No — The Facebook brand is very prominent throughout the download and installation process, before any data is collected. Also, the app name of the device appears as “Facebook Research” — see attached screenshots. We use third parties to source participants in the research study, to avoid bias in the people who choose to participate. But as soon as they register, they become aware this is research for Facebook
Author’s response: Facebook here admits that users did not know Facebook was involved before they registered.
What data do we collect? Do we read people’s private messages?
No, we don’t read private messages. We collect data to understand how people use apps, but this market research was not designed to look at what they share or see. We’re interested in information such as watch time, video duration, and message length, not that actual content of videos, messages, stories or photos. The app specifically ignores information shared via financial or health apps.
Author’s response: We never reported that Facebook was reading people’s private messages, but that it had the ability to collect them. Facebook here admits that the program was “not designed to look at what they share or see”, but stops far short of saying that data wasn’t collected. Fascinatingly, Facebook reveals it was that it was closely monitoring how much time people spent on different media types.
Facebook Research abused the Enterprise Certificate system meant for employee-only apps
Did we break Apple’s terms of service?
Apple’s view is that we violated their terms by sideloading this app, and they decide the rules for their platform, We’ve worked with Apple to address any issues; as a result, our internal apps are back up and running. Our relationship with Apple is really important — many of us use Apple products at work every day, and we rely on iOS for many of our employee apps, so we wouldn’t put that relationship at any risk intentionally. Mark and others will be available to talk about this further at Q&A later today.
Author’s response: TechCrunch reported that Apple’s policy plainly states that the Enterprise Certificate program requires companies to “Distribute Provisioning Profiles only to Your Employees and only in conjunction with Your Internal Use Applications for the purpose of developing and testing” and that “You may not use, distribute or otherwise make Your Internal Use Applications available to Your Customers”. Apple took a firm stance in its statement that Facebook did violate the program’s policies, stating “Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple.”
Given Facebook distributed the Research apps to teenagers that never signed tax forms or formal employment agreements, they were obviously not employees or contractors, and most likely use some Facebook-owned service that qualifies them as customers. Also, I’m pretty sure you can’t pay employees in gift cards.
from Social – TechCrunch https://tcrn.ch/2S2wIEU Original Content From: https://techcrunch.com
0 notes
sheminecrafts · 6 years ago
Text
We dismantle Facebook’s memo defending its Research data-grab
Facebook published an internal memo today trying to minimize the morale damage of TechCrunch’investigation that revealed it’d been paying people to suck in all their phone data. Attained by Business Insider’s Rob Price, the memo from Facebook’s VP of production engineering and security Pedro Canahuati gives us more detail about exactly what data Facebook was trying to collect from teens and adults in the US and India. But it also tries to claim the program wasn’t secret, wasn’t spying, and that Facebook doesn’t see it as a violation of Apple’s policy against using its Enterprise Certificate system to distribute apps to non-employees.
Here we lay out the memo with section by section responses to Facebook’s claims challenging TechCrunch’s reporting. Our responses are in bold and we’ve added images.
Memo from Facebook VP Pedro Canahuati
APPLE ENTERPRISE CERTS REINSTATED
Early this morning, we received agreement from Apple to issue a new enterprise certificate; this has allowed us to produce new builds of our public and enterprise apps for use by employees and contractors. Because we have a few dozen apps to rebuild, we’re initially focusing on the most critical ones, prioritized by usage and importance: Facebook, Messenger, Workplace, Work Chat, Instagram, and Mobile Home.
New builds of these apps will soon be available and we’ll email all iOS users for detailed instructions on how to reinstall. We’ll also post to iOS FYI with full details.
Meanwhile, we’re expecting a follow-up article from the New York Times later today, so I wanted to share a bit more information and background on the situation.
What happened?
On Tuesday TechCrunch reported on our Facebook Research program. This is a market research program that helps us understand consumer behavior and trends to build better mobile products.
TechCrunch implied we hid the fact that this is by Facebook – we don’t. Participants have to download an app called Facebook Research App to be involved in the stud. They also characterized this as “spying,” which we don’t agree with. People participated in this program with full knowledge that Facebook was sponsoring this research, and were paid for it. They could opt-out at any time. As we built this program, we specifically wanted to make sure we were as transparent as possible about what we were doing, what information we were gathering, and what it was for — see the screenshots below.
We used an app that we built ourselves, which wasn’t distributed via the App Store, to do this work. Instead it was side-loaded via our enterprise certificate. Apple has indicated that this broke their Terms of Service so disabled our enterprise certificates which allow us to install our own apps on devices outside of the official app store for internal dogfooding.
Author’s response: To start, “build better products” is a vague way of saying determining what’s popular and buying or building it. Facebook has used competitive analysis gathered by its similar Onavo Protect app and Facebook Reserch for years to figure out what apps were gaining momentum and either bring them in or box them out. Onavo’s data is how Facebook knew WhatsApp was sending twice as many messages as Messenger, and it should invest $19 billion to acquire it.
Facebook claims it didn’t hide the program, but it was never formally announced like every other Facebook product. There were no Facebook Help pages, blog posts, or support info from the company. It used intermediaries Applause (which owns uTest) and CentreCode (which owns Betabound) to run the program under names like Project Atlas and Project Kodiak. Users only found out Facebook was involved once they started the sign-up process and signed a non-disclosure agreement prohibiting them from discussing it publicly. TechCrunch has reviewed communications indicating Facebook would threaten legal action if a user spoke publicly about being part of the Research program. While the program had run since 2016, it had never been reported on. We believe that these facts combined justify characterizing the program as “secret”
The Facebook Research program was called Project Atlas until you signed up
How does this program work?
We partner with a couple of market research companies (Applause and CentreCode) to source and onboard candidates based in India and USA for this research project. Once people are onboarded through a generic registration page, they are informed that this research will be for Facebook and can decline to participate or opt out at any point. We rely on a 3rd party vendor for a number of reasons, including their ability to target a Diverse and representative pool of participants. They use a generic initial Registration Page to avoid bias in the people who choose to participate.
After generic onboarding people are asked to download an app called the ‘Facebook Research App,’ which takes them through a consent flow that requires people to check boxes to confirm they understand what information will be collected. As mentioned above, we worked hard to make this as explicit and clear as possible.
This is part of a broader set of research programs we conduct. Asking users to allow us to collect data on their device usage is a highly efficient way of getting industry data from closed ecosystems, such as iOS and Android. We believe this is a valid method of market research.
Author’s response: Facebook claims it wasn’t “spying”, yet it never fully laid out the specific kinds of information it would collect. In some cases, descriptions of the app’s data collection power were described in merely a footnote. The program did not specify specific data types gathered, only saying it would scoop up “which apps are on your phone, how and when you use them” and “information about your internet browsing activity”
The parental consent form from Facebook and Applause lists none of the specific types of data collected or the extent of Facebook’s access. Under “Risks/Benefits”, the form states “There are no known risks associated with this project¨ however you acknowledge that the inherent nature of the project involves the tracking of personal information via your child’s use of Apps. You will be compensated by Applause for your child’s participation.” It gives parents no information about what data their kids are giving up.
Facebook claims it uses third-parties to target a diverse pool of participants. Yet Facebook conducts other research programs on its own without the need for intermediaries that obscure its identity, and only ran the program in two countries. It claims to use a generic signup page to avoid biasing who will choose to participate, yet the cash incentive and technical process of installing the root certification also bias who will participate, and the intermediaries conveniently prevent Facebook from being publicly associated with the program at first glance. Meanwhile, other clients of the Betabound testing platform like Amazon, Norton, and SanDisk reveal their names immediately
Facebook’s ads recruiting teens for the program didn’t disclose its involvement
Did we intentionally hide our identity as Facebook?
No — The Facebook brand is very prominent throughout the download and installation process, before any data is collected. Also, the app name of the device appears as “Facebook Research” — see attached screenshots. We use third parties to source participants in the research study, to avoid bias in the people who choose to participate. But as soon as they register, they become aware this is research for Facebook
Author’s response: Facebook here admits that users did not know Facebook was involved before they registered.
What data do we collect? Do we read people’s private messages?
No, we don’t read private messages. We collect data to understand how people use apps, but this market research was not designed to look at what they share or see. We’re interested in information such as watch time, video duration, and message length, not that actual content of videos, messages, stories or photos. The app specifically ignores information shared via financial or health apps.
Author’s response: We never reported that Facebook was reading people’s private messages, but that it had the ability to collect them. Facebook here admits that the program was “not designed to look at what they share or see”, but stops far short of saying that data wasn’t collected. Fascinatingly, Facebook reveals it was that it was closely monitoring how much time people spent on different media types.
Facebook Research abused the Enterprise Certificate system meant for employee-only apps
Did we break Apple’s terms of service?
Apple’s view is that we violated their terms by sideloading this app, and they decide the rules for their platform, We’ve worked with Apple to address any issues; as a result, our internal apps are back up and running. Our relationship with Apple is really important — many of us use Apple products at work every day, and we rely on iOS for many of our employee apps, so we wouldn’t put that relationship at any risk intentionally. Mark and others will be available to talk about this further at Q&A later today.
Author’s response: TechCrunch reported that Apple’s policy plainly states that the Enterprise Certificate program requires companies to “Distribute Provisioning Profiles only to Your Employees and only in conjunction with Your Internal Use Applications for the purpose of developing and testing” and that “You may not use, distribute or otherwise make Your Internal Use Applications available to Your Customers”. Apple took a firm stance in its statement that Facebook did violate the program’s policies, stating “Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple.”
Given Facebook distributed the Research apps to teenagers that never signed tax forms or formal employment agreements, they were obviously not employees or contractors, and most likely use some Facebook-owned service that qualifies them as customers. Also, I’m pretty sure you can’t pay employees in gift cards.
from iraidajzsmmwtv https://tcrn.ch/2S2wIEU via IFTTT
0 notes
toomanysinks · 6 years ago
Text
We dismantle Facebook’s memo defending its Research data-grab
Facebook published an internal memo today trying to minimize the morale damage of TechCrunch’s investigation that revealed it’d been paying people to suck in all their phone data. Attained by Business Insider’s Rob Price, the memo from Facebook’s VP of production engineering and security Pedro Canahuati gives us more detail about exactly what data Facebook was trying to collect from teens and adults in the US and India. But it also tries to claim the program wasn’t secret, wasn’t spying, and that Facebook doesn’t see it as a violation of Apple’s policy against using its Enterprise Certificate system to distribute apps to non-employees — despite Apple punishing it for the violation.
Here we lay out the memo with section by section responses to Facebook’s claims challenging TechCrunch’s reporting. Our responses are in bold and we’ve added images.
Memo from Facebook VP Pedro Canahuati
APPLE ENTERPRISE CERTS REINSTATED
Early this morning, we received agreement from Apple to issue a new enterprise certificate; this has allowed us to produce new builds of our public and enterprise apps for use by employees and contractors. Because we have a few dozen apps to rebuild, we’re initially focusing on the most critical ones, prioritized by usage and importance: Facebook, Messenger, Workplace, Work Chat, Instagram, and Mobile Home.
New builds of these apps will soon be available and we’ll email all iOS users for detailed instructions on how to reinstall. We’ll also post to iOS FYI with full details.
Meanwhile, we’re expecting a follow-up article from the New York Times later today, so I wanted to share a bit more information and background on the situation.
What happened?
On Tuesday TechCrunch reported on our Facebook Research program. This is a market research program that helps us understand consumer behavior and trends to build better mobile products.
TechCrunch implied we hid the fact that this is by Facebook – we don’t. Participants have to download an app called Facebook Research App to be involved in the stud. They also characterized this as “spying,” which we don’t agree with. People participated in this program with full knowledge that Facebook was sponsoring this research, and were paid for it. They could opt-out at any time. As we built this program, we specifically wanted to make sure we were as transparent as possible about what we were doing, what information we were gathering, and what it was for — see the screenshots below.
We used an app that we built ourselves, which wasn’t distributed via the App Store, to do this work. Instead it was side-loaded via our enterprise certificate. Apple has indicated that this broke their Terms of Service so disabled our enterprise certificates which allow us to install our own apps on devices outside of the official app store for internal dogfooding.
Author’s response: To start, “build better products” is a vague way of saying determining what’s popular and buying or building it. Facebook has used competitive analysis gathered by its similar Onavo Protect app and Facebook Research app for years to figure out what apps were gaining momentum and either bring them in or box them out. Onavo’s data is how Facebook knew WhatsApp was sending twice as many messages as Messenger, and it should invest $19 billion to acquire it.
Facebook claims it didn’t hide the program, but it was never formally announced like every other Facebook product. There were no Facebook Help pages, blog posts, or support info from the company. It used intermediaries Applause (which owns uTest) and CentreCode (which owns Betabound) to run the program under names like Project Atlas and Project Kodiak. Users only found out Facebook was involved once they started the sign-up process and signed a non-disclosure agreement prohibiting them from discussing it publicly.
TechCrunch has reviewed communications indicating Facebook would threaten legal action if a user spoke publicly about being part of the Research program. While the program had run since 2016, it had never been reported on. We believe that these facts combined justify characterizing the program as “secret”
The Facebook Research program was called Project Atlas until you signed up
How does this program work?
We partner with a couple of market research companies (Applause and CentreCode) to source and onboard candidates based in India and USA for this research project. Once people are onboarded through a generic registration page, they are informed that this research will be for Facebook and can decline to participate or opt out at any point. We rely on a 3rd party vendor for a number of reasons, including their ability to target a Diverse and representative pool of participants. They use a generic initial Registration Page to avoid bias in the people who choose to participate.
After generic onboarding people are asked to download an app called the ‘Facebook Research App,’ which takes them through a consent flow that requires people to check boxes to confirm they understand what information will be collected. As mentioned above, we worked hard to make this as explicit and clear as possible.
This is part of a broader set of research programs we conduct. Asking users to allow us to collect data on their device usage is a highly efficient way of getting industry data from closed ecosystems, such as iOS and Android. We believe this is a valid method of market research.
fmservers · 6 years ago
Text
We dismantle Facebook’s memo defending its Research data-grab
Facebook published an internal memo today trying to minimize the morale damage of TechCrunch’s investigation that revealed it’d been paying people to suck in all their phone data. Obtained by Business Insider’s Rob Price, the memo from Facebook’s VP of production engineering and security Pedro Canahuati gives us more detail about exactly what data Facebook was trying to collect from teens and adults in the US and India. But it also tries to claim the program wasn’t secret, wasn’t spying, and that Facebook doesn’t see it as a violation of Apple’s policy against using its Enterprise Certificate system to distribute apps to non-employees.
Here we lay out the memo with section-by-section responses to Facebook’s claims challenging TechCrunch’s reporting. Our responses are in bold and we’ve added images.
Memo from Facebook VP Pedro Canahuati
APPLE ENTERPRISE CERTS REINSTATED
Early this morning, we received agreement from Apple to issue a new enterprise certificate; this has allowed us to produce new builds of our public and enterprise apps for use by employees and contractors. Because we have a few dozen apps to rebuild, we’re initially focusing on the most critical ones, prioritized by usage and importance: Facebook, Messenger, Workplace, Work Chat, Instagram, and Mobile Home.
New builds of these apps will soon be available and we’ll email all iOS users for detailed instructions on how to reinstall. We’ll also post to iOS FYI with full details.
Meanwhile, we’re expecting a follow-up article from the New York Times later today, so I wanted to share a bit more information and background on the situation.
What happened?
On Tuesday TechCrunch reported on our Facebook Research program. This is a market research program that helps us understand consumer behavior and trends to build better mobile products.
TechCrunch implied we hid the fact that this is by Facebook – we don’t. Participants have to download an app called Facebook Research App to be involved in the study. They also characterized this as “spying,” which we don’t agree with. People participated in this program with full knowledge that Facebook was sponsoring this research, and were paid for it. They could opt out at any time. As we built this program, we specifically wanted to make sure we were as transparent as possible about what we were doing, what information we were gathering, and what it was for — see the screenshots below.
We used an app that we built ourselves, which wasn’t distributed via the App Store, to do this work. Instead it was side-loaded via our enterprise certificate. Apple has indicated that this broke their Terms of Service, so it disabled our enterprise certificates, which allow us to install our own apps on devices outside of the official App Store for internal dogfooding.
Author’s response: To start, “build better mobile products” is a vague way of saying Facebook determines what’s popular and then buys or builds it. Facebook has used competitive analysis gathered by its similar Onavo Protect app and Facebook Research for years to figure out what apps were gaining momentum and either bring them in or box them out. Onavo’s data is how Facebook knew WhatsApp was sending twice as many messages as Messenger, and that it should invest $19 billion to acquire it.
Facebook claims it didn’t hide the program, but it was never formally announced like every other Facebook product. There were no Facebook Help pages, blog posts, or support info from the company. It used intermediaries Applause (which owns uTest) and CentreCode (which owns Betabound) to run the program under names like Project Atlas and Project Kodiak. Users only found out Facebook was involved once they started the sign-up process and signed a non-disclosure agreement prohibiting them from discussing it publicly. TechCrunch has reviewed communications indicating Facebook would threaten legal action if a user spoke publicly about being part of the Research program. While the program had run since 2016, it had never been reported on. We believe that these facts combined justify characterizing the program as “secret”.
The Facebook Research program was called Project Atlas until you signed up
How does this program work?
We partner with a couple of market research companies (Applause and CentreCode) to source and onboard candidates based in India and the USA for this research project. Once people are onboarded through a generic registration page, they are informed that this research will be for Facebook and can decline to participate or opt out at any point. We rely on a third-party vendor for a number of reasons, including their ability to target a diverse and representative pool of participants. They use a generic initial registration page to avoid bias in the people who choose to participate.
After generic onboarding people are asked to download an app called the ‘Facebook Research App,’ which takes them through a consent flow that requires people to check boxes to confirm they understand what information will be collected. As mentioned above, we worked hard to make this as explicit and clear as possible.
This is part of a broader set of research programs we conduct. Asking users to allow us to collect data on their device usage is a highly efficient way of getting industry data from closed ecosystems, such as iOS and Android. We believe this is a valid method of market research.
Author’s response: Facebook claims it wasn’t “spying”, yet it never fully laid out the specific kinds of information it would collect. In some cases, descriptions of the app’s data collection powers were relegated to a footnote. The program did not enumerate the specific data types gathered, only saying it would scoop up “which apps are on your phone, how and when you use them” and “information about your internet browsing activity”.
The parental consent form from Facebook and Applause lists none of the specific types of data collected or the extent of Facebook’s access. Under “Risks/Benefits”, the form states “There are no known risks associated with this project however you acknowledge that the inherent nature of the project involves the tracking of personal information via your child’s use of Apps. You will be compensated by Applause for your child’s participation.” It gives parents no information about what data their kids are giving up.
Facebook claims it uses third parties to target a diverse pool of participants. Yet Facebook conducts other user feedback and research programs on its own without the need for intermediaries that obscure its identity, and only ran the program in two countries. It claims to use a generic signup page to avoid biasing who will choose to participate, yet the cash incentive and technical process of installing the root certificate also bias who will participate, and the intermediaries conveniently prevent Facebook from being publicly associated with the program at first glance. Meanwhile, other clients of the Betabound testing platform like Amazon, Norton, and SanDisk reveal their names immediately before users sign up.
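To make the stakes of that root certificate concrete, here is a rough Python sketch (the filename is hypothetical, standing in for whatever certificate the Research app installed; this is not code from Facebook’s actual app) of how adding a single root CA to a trust store widens what a device will accept:

```python
import ssl

# Default context: trusts only the certificates in the system store.
default_ctx = ssl.create_default_context()

# A context with one extra root certificate loaded, analogous to what
# installing the Research app's root certificate did to the device.
# "research_root.pem" is an invented filename for illustration.
widened_ctx = ssl.create_default_context()
widened_ctx.load_verify_locations(cafile="research_root.pem")

# Any server certificate chaining to that extra root now passes
# validation, for any hostname -- which is why a VPN operated by the
# certificate's owner can decrypt otherwise-private HTTPS traffic.
```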
Facebook’s ads recruiting teens for the program didn’t disclose its involvement
Did we intentionally hide our identity as Facebook?
No — The Facebook brand is very prominent throughout the download and installation process, before any data is collected. Also, the app name on the device appears as “Facebook Research” — see attached screenshots. We use third parties to source participants in the research study, to avoid bias in the people who choose to participate. But as soon as they register, they become aware this is research for Facebook.
Author’s response: Facebook here admits that users did not know Facebook was involved before they registered.
What data do we collect? Do we read people’s private messages?
No, we don’t read private messages. We collect data to understand how people use apps, but this market research was not designed to look at what they share or see. We’re interested in information such as watch time, video duration, and message length, not the actual content of videos, messages, stories or photos. The app specifically ignores information shared via financial or health apps.
Author’s response: We never reported that Facebook was reading people’s private messages, but that it had the ability to collect them. Facebook here admits that the program was “not designed to look at what they share or see”, but stops far short of saying that data wasn’t collected. Fascinatingly, Facebook reveals that it was closely monitoring how much time people spent on different media types.
Facebook Research abused the Enterprise Certificate system meant for employee-only apps
Did we break Apple’s terms of service?
Apple’s view is that we violated their terms by sideloading this app, and they decide the rules for their platform. We’ve worked with Apple to address any issues; as a result, our internal apps are back up and running. Our relationship with Apple is really important — many of us use Apple products at work every day, and we rely on iOS for many of our employee apps, so we wouldn’t put that relationship at any risk intentionally. Mark and others will be available to talk about this further at Q&A later today.
Author’s response: TechCrunch reported that Apple’s policy plainly states that the Enterprise Certificate program requires companies to “Distribute Provisioning Profiles only to Your Employees and only in conjunction with Your Internal Use Applications for the purpose of developing and testing” and that “You may not use, distribute or otherwise make Your Internal Use Applications available to Your Customers”. Apple took a firm stance in its statement that Facebook did violate the program’s policies, stating “Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple.”
Given Facebook distributed the Research apps to teenagers who never signed tax forms or formal employment agreements, they were obviously not employees or contractors, and most likely used some Facebook-owned service that qualifies them as customers. Also, I’m pretty sure you can’t pay employees in gift cards.
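For readers who want to see the employee-only distinction in an app itself, one informal approach — a sketch, not Apple-documented tooling — is to inspect the embedded.mobileprovision file inside an .ipa. Enterprise (in-house) profiles carry a ProvisionsAllDevices flag, while ad hoc profiles instead list specific device UDIDs. The path below is illustrative:

```python
import plistlib

def provisioning_info(path):
    """Pull the XML plist out of a CMS-wrapped embedded.mobileprovision.

    Slicing between the XML markers is a well-known shortcut that skips
    parsing the CMS envelope; fine for inspection, not for verifying
    the signature.
    """
    raw = open(path, "rb").read()
    start = raw.index(b"<?xml")
    end = raw.index(b"</plist>") + len(b"</plist>")
    return plistlib.loads(raw[start:end])

info = provisioning_info("embedded.mobileprovision")  # illustrative path
if info.get("ProvisionsAllDevices"):
    # Only enterprise distribution provisions arbitrary devices --
    # the mechanism Facebook used to reach non-employee teens.
    print("Enterprise-distributed build:", info.get("TeamName"))
elif "ProvisionedDevices" in info:
    print("Ad hoc build for", len(info["ProvisionedDevices"]), "devices")
else:
    print("App Store distribution profile")
```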
Via Josh Constine https://techcrunch.com
0 notes
vincentvelour · 6 years ago
Text
Important Lessons From Germany’s First GDPR-Related Fine
1/16/2019
By Paul Sutton, Head of Legal Advisory Group, Radius
Germany has issued its first GDPR fine. The penalty underscores the willingness of data protection authorities to enforce the law, but its relatively low amount — just 20,000 euros, or less than $23,000 — may also indicate leniency for companies that report violations promptly, fully comply with authorities and swiftly take action to fix the problem.
The fine was levied against social media chat app Knuddels, which failed to encrypt the personal data of some of its customers. The site was breached in July, and the hack was discovered in September, when 330,000 customer email addresses and passwords were posted on the internet.
According to German magazine Der Spiegel, over 800,000 email addresses and 1.8 million user names in total are suspected of having been stolen, though only the 330,000 cases have been verified so far. Some customers used their real names and listed their home addresses on the site. Whether that information was taken is still unclear.
Founded in 1999, Knuddels is one of the oldest and largest German chat platforms. It began encrypting user passwords in 2012, but continued to save the old, unencrypted versions on a backup server with an outdated operating system. After learning of the breach, the company deleted its database of unencrypted user information and notified the local Baden-Württemberg data protection authority.
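Knuddels’ core failure — hashing new passwords while plaintext copies lingered on a forgotten backup — is worth spelling out. Below is a minimal sketch of salted, memory-hard password hashing using only Python’s standard library; the parameters are illustrative defaults, not Knuddels’ actual setup:

```python
import hashlib
import os
from hmac import compare_digest

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a one-way hash; the plaintext is never stored anywhere."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
# Crucially, every copy of the credential store -- including backups --
# holds only (salt, digest) pairs, never the password itself.
```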
The company also apologized for its actions, promptly notified customers and had them change their passwords, and made extensive changes to improve its data security. It has plans to make further technology improvements in the coming weeks.
“Knuddels is safer than ever,” Holger Kujath, the managing director of Knuddels, told Spiegel Online.
Regulators appear to agree.
“Those who learn from harm and act transparently to improve data protection can emerge stronger as a company from a hacker attack,” said Stefan Brink, the data protection and freedom of information officer for Baden-Württemberg, in a statement.
Significantly, he added that regulators are “not interested in entering into a competition for the highest possible fines. The bottom line is improving privacy and data security for users.”
Given the possible penalties involved, Knuddels’ fine was effectively a slap on the wrist. Depending on the severity of the incident, the GDPR allows for fines of up to 20 million euros or 4 percent of annual global revenue, whichever is greater.
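As a worked example of that two-track ceiling (the revenue figure is invented purely for illustration):

```python
# GDPR Art. 83(5) cap: the greater of EUR 20M or 4% of annual global turnover.
annual_turnover_eur = 2_500_000_000  # hypothetical company revenue
max_fine = max(20_000_000, 0.04 * annual_turnover_eur)
print(f"Maximum fine: EUR {max_fine:,.0f}")  # EUR 100,000,000
```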
That said, it’s critical for multinationals to understand that while the fine in this case made headlines in part for its leniency, there were other costs involved. Knuddels’ prompt actions, for example, represent what must have been serious administrative burdens. It is also common in such cases to incur legal and other third-party costs, such as those related to PR. Finally, there may be reputational costs which, while difficult to measure, can be significant, particularly over the long term.
Other Euro Fines
While being forthcoming about mistakes and acting quickly to improve security may help reduce fines, it won’t stop authorities from enforcing the law, and Germany is not the first country to act.
A Portuguese hospital was fined 400,000 euros for giving too many users access to patient data. Nearly a thousand users had physician-access rights, while fewer than 300 doctors were employed at the hospital. The hospital is appealing the fine.
An Austrian retailer was fined 4,800 euros for using a surveillance camera that captured too much of the sidewalk outside. In addition, the camera didn’t warn passers-by that they might be recorded.
Going After Big Game
These fines may be the tip of the iceberg for GDPR enforcement. Complaints have been filed against several major technology companies about the way they track users. 
Privacy International, a UK-based nonprofit, has filed GDPR complaints against seven corporations, including data brokers Acxiom and Oracle, credit bureaus Equifax and Experian, and several ad tech companies. These firms use cookies and IP addresses to track users without obtaining adequate permission, the group and other privacy advocates say.
A separate complaint was filed against Google and other ad tech companies, claiming that current online advertising technology — which affects most internet users — violates European privacy standards. The complaint says that when someone is shown a personalized ad online, what they are watching is broadcast to a host of other ad companies in an attempt to get them to bid on targeting the individual. The complaint says that procedure violates privacy under the GDPR. If complaints like this are found to be valid, they could upend the current business model that supports most sites.
Another group has filed complaints against Google for tracking user location even when the “Location History” option is turned off (users must adjust an additional setting to disable location tracking).
Facebook, which was fined 500,000 pounds for the Cambridge Analytica scandal, could be hit with a billion-dollar fine after the data of up to 30 million users was exposed through a bug in the platform’s “View As” feature. The problem has since been fixed.
Another complaint was filed against Facebook shortly after GDPR went into effect in May for not obtaining adequate opt-in consent from users for data collection.
Twitter is being investigated by GDPR authorities for failing to disclose to users how their information is tracked when they click links.
What to Do
What these companies have in common — besides their size and notoriety — is their alleged failure to obtain permission before collecting data and failure to explain how the collected data will be used, both key provisions of the GDPR.
Multinationals that collect information about customers or employees in the EU should review the GDPR with a focus on permission and explanation procedures. It’s important to remember that you are also responsible for ensuring that your partners and contractors follow the law.
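What a permission-and-explanation procedure can look like in practice: a toy sketch (names and fields invented for illustration, not a legal compliance tool) in which collection is refused unless the user has a live, purpose-specific opt-in on record:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str          # the specific use explained to the user
    granted_at: datetime  # proof of when the opt-in happened
    withdrawn: bool = False

class ConsentRequired(Exception):
    pass

def collect(consents: dict, user_id: str, purpose: str, payload: dict) -> dict:
    """Refuse to store anything without a live, purpose-matched opt-in."""
    record = consents.get((user_id, purpose))
    if record is None or record.withdrawn:
        raise ConsentRequired(f"No valid opt-in from {user_id} for {purpose!r}")
    return {"user": user_id, "purpose": purpose, "data": payload}

consents = {
    ("u42", "analytics"): ConsentRecord("u42", "analytics",
                                        datetime.now(timezone.utc)),
}
collect(consents, "u42", "analytics", {"page": "home"})      # allowed
# collect(consents, "u42", "advertising", {"page": "home"})  # raises ConsentRequired
```

The design point is that consent is scoped to a purpose, timestamped, and revocable, so the same user can permit analytics while refusing advertising.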
While the GDPR’s protocol for a data breach is straightforward, the language surrounding permission and consent has been criticized as murky and ambiguous. This may be a deliberate measure designed to give companies choices about how they achieve the law’s aims. Authorities’ reactions to existing complaints will shed more light on enforcement and expectations.
In the meantime, for a data breach, the Knuddels fine makes it clear that intention and attitude matter a lot. While prompt reporting and corrective action won’t help you avoid a fine — or some of the related costs mentioned, such as administrative burdens and legal fees — it appears that regulators are trying their best to make the punishment fit the crime.
0 notes