#they send emails spoofed as coming from our ceo
aholefilledwithtwigs · 2 months ago
Text
Corporate espionage is so dumb why am i getting doxxed by [redacted] you’re not even trying to get cool secrets
shirlleycoyle · 4 years ago
Text
Threatening Voter Emails Included Highly Suspicious ‘Hacking’ Video
On Tuesday, an unknown number of Democratic voters in Florida, Arizona, and Alaska received a threatening email telling them to "Vote for Trump or else!" 
"We are in possession of all your information," the email read. "You are currently registered as a Democrat, and we know this because we have gained access into the entire voting infrastructure. You will vote for Trump on Election Day or we will come after you. Change your party affiliation to Republican to let us know you received our message and will comply. We will know which candidate you voted for. I would take this seriously if I were you. Good luck."
The emails, as Motherboard reported Tuesday, were spoofed to appear to come from the Proud Boys, a violent far-right group. The Proud Boys claimed to have nothing to do with the emails.
One of the people who was the target of this email campaign received a second, identical email that also contained a link to a video, which was uploaded to the cloud file sharing website Orangedox. The video seems to be an attempt to scare voters into thinking that this goes beyond an email-related intimidation campaign and is an organized effort to disrupt the mail-in ballot portion of the election. The strategy shown in the video appears to be similar to a version of a scheme laid out on 4chan earlier this week that Motherboard has already debunked as a serious election threat. 
Election experts say that the strategy shown in this video is a fear mongering tactic that shows a method of manipulating votes that will not work, and is likely intended to undermine faith in the electoral process. Motherboard is not publishing the video because it contains some voters' personal information and it is also a propaganda video designed to intimidate voters.
"This is just bullshit fear mongering"
The two-minute video plays over an instrumental of Metallica's "Enter Sandman." The video opens with footage of President Trump during a previous press briefing, in which he says, "I think that mail in voting is a terrible thing." The video then immediately cuts to a logo with the Proud Boys name. The video shows a screen recording of an alleged hacker scrolling through what they present as voter data. They do this in part with a tool called sqlmap, an established tool for taking advantage of vulnerabilities in websites, often to extract data. The alleged hacker then uses some of the information contained in the databases to access the website of the Federal Voting Assistance Program, and then prints out a Federal Write-in Absentee Ballot (FWAB) as a PDF document.
FWAB ballots are described by the government as "emergency backup ballots" for military members and citizens who live overseas and did not receive mail-in or absentee ballots from their states. Election experts say this is designed to make voting easy, that FWAB ballots are only used as a matter of last resort, and that other types of ballots supersede FWAB ones. Also, only certain people qualify for FWAB ballots.
"FWABs are typically ballots of last resort, and if any other ballot has been submitted by the voter, they trump the FWAB," Matt Bernhard, a cybersecurity researcher who works for the elections security non profit VotingWorks, told Motherboard in an online chat. 
Do you work on election security? Do you do vulnerability research on voting machines or systems? We’d love to hear from you. You can contact Lorenzo Franceschi-Bicchierai securely on Signal at +1 917 257 1382, Wire/Wickr @lorenzofb, or email [email protected]. You can contact Joseph Cox on Signal on +44 20 8133 5190, Wickr on josephcox, or email [email protected]
At the end of the video, the person controlling the computer shows that they have several folders labeled with the names of American states, and shows files that are supposedly mail-in ballot PDFs they downloaded. It's impossible to know from the video whether these files actually contain ballots, or whether the folders contain any files at all.
Ben Adida, the executive director of VotingWorks, said that it's possible to do what the video shows, "but this attack is detectable and comes with very harsh penalties [for voter fraud]."
"This is just bullshit fear mongering," Bernhard said. "First of all, showing us a bunch of files in a file system doesn't prove anything. Second of all, the databases shown are quite possibly ones that are publicly available anyways, or that have been posted to dark web sites after leaks." 
Gregory Miller, a cybersecurity expert and one of the founders of the Trust the Vote Project, also found the video to be suspicious.
“There are lots of reasons to believe this is misinformation as much as anything. As we’re looking more closely, the maneuvers they’re taking to ‘suggest’ they are hacking into voter information stores appear to be lots of smoke and mirrors, if you will. In other words, it appears on closer inspection to be a hoax of its own,” he said in an email.
A screenshot of the video contained in a threatening email seen by Motherboard.
The video also shows data that includes names, email addresses, physical addresses, phone numbers, and redacted Social Security Numbers—all types of data that may be publicly available.
Bernhard explained that the ballots shown in the video are federal absentee ballots that have to be signed, and then physically mailed for them to count as votes. 
"Unless they're going to figure out a way to forge thousands of signatures, that ain't gonna work," Bernhard said, adding that it would require a lot of effort to print all the ballots, forge hundreds or thousands of signatures, and then mail them out.
There is no convincing indication that the alleged hacker is using sqlmap to break into any website or server that may be hosting such data. It is possible to use sqlmap locally; that is, not point the tool to a remote website, and instead interact with files stored locally. 
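The distinction matters: everything shown on screen could have been staged from files the maker already had. As a minimal illustration (all table names and records here are invented), a few lines of Python can open a purely local SQLite file and scroll through "voter data" with no network access or exploit involved:

```python
import sqlite3

def browse_local_records(db_path=":memory:"):
    """Open a purely local database and page through its rows.
    No network connection or remote exploit is involved at any point."""
    conn = sqlite3.connect(db_path)
    cur = conn.cursor()
    # Invented table standing in for a previously leaked voter file.
    cur.execute("CREATE TABLE IF NOT EXISTS voters (name TEXT, party TEXT)")
    cur.executemany("INSERT INTO voters VALUES (?, ?)",
                    [("Jane Roe", "DEM"), ("John Doe", "REP")])
    conn.commit()
    rows = cur.execute("SELECT name, party FROM voters").fetchall()
    conn.close()
    return rows
```

Since tools like sqlmap can likewise be pointed at local targets, a screen recording of data scrolling by proves nothing about where the data came from.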
Michael Patterson, who was named in some of the data, told Motherboard that the information included in the video was accurate.
“That is pretty crazy. So, if I understand it correctly, they are sending emails to people telling them to vote for Trump and some of the emails contain a video proving that they have personal information? If some fascists want to show up to my house, I feel bad for them. I am a combat veteran and a communist, it wouldn’t go well for them,” he told Motherboard in an email.
There are few clues about who made this video. Metadata for the video does not seem to show anything that could be used to help identify who made it.
A few hours after a source sent Motherboard the video, the file was removed from the sharing site Orangedox. It is not clear if the user deleted it, or if Orangedox itself did so. Motherboard archived a copy of the video before it was removed.
Chad Brown, CEO of Orangedox, said the company's privacy policy prevented him from sharing information unless contacted by a law enforcement agency. Orangedox would then provide the agency with all necessary information on the user account.
"We don't make any copies of the files that our users post," Brown added.
The video and the viral 4chan post instructing people to attempt to disrupt the mail-in voting process are similar in that election security experts agree both pose very little actual risk, yet both create the appearance that impropriety in the mail-in vote is possible.
The FBI did not immediately respond to a request for comment on the video itself. On Tuesday, local media reported that the FBI was investigating the threatening emails more generally.
(Disclosure: Gavin McInnes founded the Proud Boys in 2016. He was also a co-founder of VICE. He left the company in 2008 and has had no involvement since then.)
robertsiciliano1 · 6 years ago
Text
The Top Cyber Security Threats to Real Estate Companies
Gone are the days when hackers would only target retailers. These days, the bad guys can target businesses in any industry, especially those that aren’t up to speed on cyber security.
The real estate industry is one such group: according to a recent survey, about half of businesses in the real estate industry are not prepared to handle a cyberattack. Federal law requires some industries, like hospitals and banks, to have certain security measures in place, but the real estate industry remains quite vulnerable. Here are some of the threats to look out for if you’re in the real estate industry:
Business Email Compromise (BEC)
A BEC, or business email compromise, is a type of cyberattack that tricks a business into wiring money to a criminal’s bank account. The hackers do this by spoofing email addresses and sending fake messages that seem to come from a trusted business professional, such as the CEO or a company attorney. The FBI has found that billions of dollars in business losses can be attributed to BEC.
That’s scary enough, but the FBI also says that real estate companies are especially targeted in these attacks, and every participant in a real estate transaction is a possible victim.
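Because the visible From: line is trivial to forge, one cheap (and admittedly incomplete) defense is to compare it against the Reply-To: address before acting on a payment request. A minimal Python sketch with invented domains; real deployments would also check SPF, DKIM and DMARC results:

```python
from email import message_from_string
from email.utils import parseaddr

def flag_possible_bec(raw_message):
    """Flag a message whose visible From: domain differs from its
    Reply-To: domain -- a common, though not conclusive, spoofing tell."""
    msg = message_from_string(raw_message)
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2].lower()
    return bool(reply_domain) and reply_domain != from_domain

# Invented example: the display name says CEO, but replies go elsewhere.
spoofed = ("From: CEO <ceo@example-corp.com>\n"
           "Reply-To: ceo@attacker.example\n"
           "Subject: Urgent wire transfer\n\n"
           "Please wire $40,000 today.")
```

A mismatch here is only a signal; confirming any wire request out of band, as advised below, is still the real safeguard.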
Mortgage Closing Wire Scam
Prior to closing on the sale of a home, the buyer receives an email from their real estate agent, title attorney or other trusted service professional with specific details of the time, date and location of the closing. The same email contains detailed and urgent instructions on how to wire money for the down payment, but the destination is a criminal’s bank account. Within moments of the wire transfer, the money is withdrawn and the cash disappears.
A report by the FBI's Internet Crime Complaint Center found that the number of victims of the mortgage closing wire scam ballooned to 10,000 in the years 2015 to 2017, a 1,110 percent increase, with financial losses totaling over $56 million, a 2,200 percent increase.
Ransomware
Another threat to real estate companies is ransomware. This is the type of malware that makes the data on your device or network unavailable until you pay a ransom. This is very profitable for hackers, of course, and it is becoming more and more popular. All it takes is one member of your team clicking on a link in an email, and all of your data could be locked.
Ransomware doesn’t just target computers, though. It can target any device that is connected to the internet, including smart locks, smart thermostats and even smart lights, all of which are gaining popularity in American homes. When these devices get infected with ransomware, they stop working.
Generic Malware
Though ransomware gets most of the attention these days, hackers use other types of malware, too. For instance, you have probably heard of Trojans, which are very much still around. Cybercriminals can use them, often alongside spyware, to spy on victims, capture banking information, or even drain accounts. Malware can also be used to steal personal and employee information, such as client data, credit card numbers and Social Security numbers. Again, real estate companies are not exempt from this type of attack, and they are now even bigger targets.
Cloud Computing Providers
If you are part of the real estate industry, your business is also at risk of becoming a victim thanks to cloud computing, which is more economical these days. A cyber thief doesn’t have to hack into a company to get its data; all they need to do instead is target the company’s cloud provider.
It might seem that by using a cloud company you are lowering the risk of your business becoming a target, but the truth is, the risk still lies with your company, how secure your own devices are and how effectively passwords are managed. In most contracts with cloud computing companies, the customer, which would be your business, is not well protected in the case of a cyberattack.
Protecting Your Real Estate Company from Becoming a Victim of a Cyberattack
Now that you know your real estate company is a potential target of cybercriminals, you might be wondering what you can do to mitigate this risk. Here are some tips:
Create New Policies – One of the things you can do is to develop new policies in your agency. For example, in the case of BEC scams, if you have a policy that you never wire money to someone based only on information given via email, you won’t have to worry about becoming victimized in this type of scam. Instead, you should talk to the person sending the email in person or via a phone call just to confirm. Make sure, however, that you don’t call a number from the suspicious email, as this could put you right in touch with the scammer.
Train Your Staff – Another thing that you should consider is better staff training. Most hacking attempts come via email, so by training your staff not to blindly open attachments or click on links in emails, you can head off many of these scams. Check out our S.A.F.E. Secure Agent for Everyone Certification Designation course, a marketing differentiator that offers ideas and methods to promote proactive strategies for incident-free results. Learn how to develop client-centered procedures customized for safety and security.
Train Your Clients – Mortgage closing wire fraud scams are manageable, if not preventable. Inform your clients that in the process of buying or selling a home, there will be many emails to and from their real estate agent and other service professionals, including their attorney, mortgage broker, insurance companies and home inspector. Tell them:
Call Your Agent: Under no circumstances, and at no time in this process, should the client or service professional engage in a money wire transfer unless the client speaks to the real estate agent in person or over the phone to confirm the legitimacy of the transaction.
Email Disclosure: Clients should always look for language in the real estate agent’s email communications stating the above or something similar.
Back Up Your Systems – It is also very important that you always back up everything. This way, if your system does get hacked, you won’t have to pay a ransom, and you will be able to quickly restore everything that you need.
Better Your Cloud Computing Contracts – Since you know that cloud providers don’t really like to take on the responsibility in the case of a cyberattack, you might want to start negotiating with the company in question about what you can do about that. This might include getting better security or adding some type of notification requirements.
Consider Cyber-Liability Insurance – You also have the ability to get cyberliability insurance. This could really help you to cut the risk to your real estate business. There are all types of policies out there so make sure to do your research, or better yet, speak to a pro about what you might need.
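The backup habit recommended above can be reduced to a small, regularly scheduled script. A minimal Python sketch of the local archiving step only (paths are hypothetical, and an archive sitting on the same machine is still hostage to ransomware, so copies must also go off-site or offline):

```python
import shutil
import time
from pathlib import Path

def backup_directory(src_dir, dest_dir):
    """Create a timestamped zip archive of src_dir inside dest_dir
    and return the archive's path."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    # make_archive appends ".zip" to the base name it is given.
    return shutil.make_archive(str(dest / f"backup-{stamp}"), "zip", src_dir)
```

Run on a schedule (cron, Task Scheduler), a script like this keeps dated restore points so a hit system can be rebuilt without paying a ransom.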
Robert Siciliano personal security and identity theft expert and speaker is the author of Identity Theft Privacy: Security Protection and Fraud Prevention: Your Guide to Protecting Yourself from Identity Theft and Computer Fraud. See him knock’em dead in this Security Awareness Training video.
biofunmy · 5 years ago
Text
Grindr And Roku Were Both Exploited By An Ad Fraud Scheme
In just the past three weeks, Grindr, the popular gay dating app, has been slammed by the Norwegian Consumer Council for exposing users’ personal information, suspended from Twitter’s ad network as a result of that investigation, and alleged to have been the way a Michigan hairstylist met the man who brutally murdered him.
Adding to those concerns is new research showing that the company’s Android app was exploited by ad fraudsters in a scheme that stole money from advertisers — and drained the phone batteries and depleted the data plans of Grindr’s users.
Amin Bandeali, CTO of Pixalate, the Palo Alto ad fraud detection firm that identified the scam, said Grindr was likely targeted because of its large user base.
“If I���m a fraudster, I would love to target an app that has a lot of user engagement. These dating apps — users are on them constantly,” he told BuzzFeed News.
Along with Grindr, the scheme exploited Roku apps and devices. Brands are projected to spend $7 billion this year to show ads on connected devices, like Roku, and over-the-top media services, which are streaming platforms like Hulu. Yet close to a quarter of that money will be stolen by fraudsters, according to data from Pixalate.
“This scheme is just one example in the universe of [over-the-top] fraud,” Pixalate CEO Jalal Nasir told BuzzFeed News. Pixalate dubbed the scheme “DiCaprio” after seeing that word used in a file containing some of the malicious code.
“DiCaprio is one of the most sophisticated OTT ad fraud schemes we have seen to date,” Nasir said.
A Grindr spokesperson told BuzzFeed News the company wasn’t aware of the scheme prior to being contacted for this story but was “taking steps to address it and are continually working to implement new strategies to protect our users.”
“Grindr is committed to creating a safe and secure environment to help our community connect and thrive. Any fraudulent activity is a clear violation of our terms and conditions and something we take very seriously,” the spokesperson said.
Tricia Mifsud, Roku’s vice president of communications, said brands need to take steps to protect themselves when they purchase OTT ads using open exchanges rather than buying direct from publishers or platforms.
“We recommend that OTT ad buyers buy directly from Roku or publishers on the platform. When buying from other sources and especially open exchanges, the buyer may be better served to use technology that can help with verifying the source of the ad requests,” she said.
Ad spoofing
Here’s how the scheme worked: A normal banner ad was bought on Grindr’s Android app. The fraudsters then attached code that disguised the Grindr banner ad to look like a Roku video ad slot. This fake ad space was sold on programmatic advertising exchanges, the online marketplaces where digital ads are bought and sold. Making one ad unit look like another is called spoofing, and it has been a problem for years. This attack is similar to one revealed by BuzzFeed News and detection firm Protected Media last year. In both cases, cheap banner ads were used to resell more expensive video ads.
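One way spoofing like this can be caught on the buy side is to look for internal inconsistencies in the bid request itself, such as a supposed Roku video slot arriving with a mobile-phone user agent. A hedged sketch; the field names below are illustrative, not any real exchange's schema, and this is not Pixalate's actual detection method:

```python
def looks_spoofed(bid_request):
    """Rough consistency check: a request claiming a Roku connected-TV
    video slot should not arrive with a mobile-phone user agent.
    Field names are illustrative only."""
    claims_roku = "roku" in bid_request.get("app_bundle", "").lower()
    mobile_ua = any(token in bid_request.get("user_agent", "").lower()
                    for token in ("android", "iphone", "mobile"))
    return claims_roku and mobile_ua

# A banner slot dressed up as CTV inventory: Roku bundle, Android phone.
suspect = {"app_bundle": "com.example.roku.news",
           "user_agent": "Mozilla/5.0 (Linux; Android 10) Mobile"}
```

Real verification vendors correlate many more signals (IP ranges, request timing, device graphs), but the principle is the same: the cheap banner environment leaks through the expensive video disguise.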
Nasir said this kind of video ad can cost as much as 25 times that of a mobile banner ad: “So that’s very lucrative for someone to make quick money — and a lot of it.”
These video ads did not appear in the Roku app and were never seen by humans. But the ad tech middleware vendors who facilitated the ad placement still took their cuts.
One such company is S&W Media, an Israeli firm that operates an ad network that places ads in Roku apps and on other connected TV platforms. The company also operates roughly 20 of its own Roku content channels under the SnowTV brand. Pixalate’s research, reporting by BuzzFeed News, and data from a company used by the fraudsters to deliver the video ads suggested multiple connections between S&W Media and the scheme. As a result, at least one partner has ended its relationship with S&W, calling its activity “highly suspect.”
CEO Nadav Slutzky denies involvement, telling BuzzFeed News this type of spoofing has occurred on his ad platform in the past and that he has refunded advertisers when fraud was detected.
“In August 2019, one of our advertisers brought to our attention that some of the traffic we were sending him was suspected of being fake. We immediately worked to locate the traffic sources and stopped working with this supply, in addition to not paying them for this traffic,” he said. “We do everything in our power to battle fraudulent traffic including using third-party verifications tools. We as a mediator have suffered the most from this kind of activity and will do anything in our power to stop it, including developing inside tools to fight this.”
The code that placed the invalid video ads used S&W’s ad network, called AdservME, to track the ads being sold, and included instructions to display an ad for a jewelry business owned in part by Slutzky if a paid ad were not purchased to fill the slot.
Slutzky said the section of code referencing AdservME, and the use of an Austaras banner, was standard code used by his company and was copied by the fraudsters.
Another section of malicious code identified by Pixalate included a list of Roku apps owned and operated by S&W’s SnowTV. These apps would have been spoofed as part of the scheme, and any video ads placed as a result would have earned S&W money as both the ad network selling the inventory and the publisher of the app.
SnowTV says on its website that it uses Moat and Pixalate to protect its apps against invalid traffic. Pixalate told BuzzFeed News that’s false and said it stopped working with S&W in 2017. Slutzky subsequently acknowledged that his company is not currently working directly with Moat, either.
Slutzky said that the DiCaprio fraudsters, whom he could not identify, chose to spoof his SnowTV apps because they appealed to advertisers.
He said his company “spent countless hours building our apps and marketing them to get them to a place we are proud of. The fact that they are whitelisted by many advertisers made them a target for whoever wrote the code you showed me.”
The malicious code was hosted on alefcdn.com, a site that was taken offline within minutes of BuzzFeed News emailing Slutzky, Grindr, and SpringServe, a company exploited by the scheme. Slutzky said his company does not own alefcdn.com and that the code is not his.
“This code is not our code — it’s the first time I’m seeing this code,” he said. He said alefcdn.com was offline when he tried to visit it.
The fraudsters used SpringServe, an American video ad platform, to solicit buyers for their spoofed ads and help place them. After being contacted by BuzzFeed News, SpringServe conducted an internal investigation and said the account used to place some of the invalid video ads belonged to S&W Media.
“Upon receipt of the recent information provided by BuzzFeed and our own internal investigation, SpringServe has concluded that the activity in question was highly suspect and has immediately suspended this company from utilizing its platform,” SpringServe CTO David Buonasera told BuzzFeed News. “This issue underscores the need for greater industry communication and cooperation to prevent invalid inventory.”
Slutzky said any suspicious activity on its SpringServe account was the result of someone misusing his company’s service.
“We serve billions of requests a day on our ad servers. It’s unavoidable that as a middleman a portion of this will be fraudulent. We do everything in our power to avoid this and stop this,” he said.
Nasir, Pixalate’s CEO, said the DiCaprio scheme highlights how a lack of standards and measurements for ads on internet-connected TVs and over-the-top services has let bad actors run wild.
“This makes it the right breeding ground for a fraudster to come and exploit, even with minimal effort,” he said.
terabitweb · 5 years ago
Text
Original post from SC Magazine. Author: Victor M. Thomas
No matter how sophisticated computer security technology becomes, the human desire for connection and friendship appears to be an endless opening for social engineering attacks.
The costs of socially engineered attacks remain considerable. Individual costs can range between $25,000 and $100,000 per person per incident, says Mark Bernard, principal of Secure Knowledge Management, a security consulting firm. Multiply that by millions of individuals annually, and the costs are in the stratosphere.
Whether it’s online or in person, bad guys are looking for and taking any exploit they can find.
“One of my clients was a financial services firm and extremely paranoid about security,” says security consultant Steve Hunt. “They had upgraded their physical and IT security with surveillance cameras, guards armed with Tasers walking around doing rounds of the facility and upgrading their access controls and digital security controls.”
As part of a “red team” penetration test, Hunt’s team lured those guards away from the building with some equipment inside their parked car: a rogue Wi-Fi access point whose signal wandered from point to point, thanks to being placed inside a potato chip can tied to an oscillating fan. Then, while the guards were distracted outside, the red team was able to physically enter the building by tailgating an employee. The red team was able to take photos of whiteboards, insert USB sticks in servers and all manner of other attacks.
There are also the bits of information that attackers can glean off the Dark Web from previous breaches – data from Equifax, Anthem and elsewhere – all of which could fill in gaps in personal information about a target.
While working at a payment company as an enterprise security manager, Bernard says one of the company’s controllers received a memo purportedly from the chief executive officer, requesting that the controller set up an account and transfer money into an account belonging to someone outside the company. “The thing was, we thought it was a little suspicious, even though it looked legit,” Bernard says. “Usually in finance, there’s several layers of signatures and reviews in order to verify and validate a major transaction.”
In this case, the outsider was someone who had been blocked from the payments company previously for laundering money there and had hired someone to execute this phishing attack, Bernard says.
Social engineering works as well as it does for a variety of reasons, one of them being that mindset that people think it could never happen to them, but it does happen to them, sometimes when they least expect it.
“Sometimes when you’re involved in your work and you’re like heads down, and all of a sudden something comes out of left field,” Bernard says. “Sometimes you don’t pay it much attention, and you just do whatever just to get rid of it, so you answer the question or give the information up and then you just keep moving on.”
Bernard says C-suite executives have gotten savvy about the threat of spear phishing attacks, but vice presidents and directors are more likely to suffer such attacks these days.
Mark Bernard, principal, Secure Knowledge Management
Another factor lulling individuals into a false sense of security is their increasing use of second-factor authentication, which they believe will protect them from social engineering attacks. In fact, these individuals may be just as vulnerable, if they believe the attacker is a friend, not a foe, Bernard says.
Organizations need to do their risk assessments, to know where gaps are, and to segregate duties, Bernard says. For instance, “you don’t allow the same person who signs the check to actually print it out,” he says. “As they go through those checks and balances, there should be a set of criteria that needs to be checked off, and reviewed.”
The recently enacted General Data Protection Regulation (GDPR) offers guidelines on the kinds of personal information that should be handed out only with great care, Bernard says. If a caller asks for that information, until you can verify their identity and authorization to receive it, "tell them a little story," he says. "Say, 'you know what, I'm kind of busy right now, and I can get back to you. Give me a phone number I can call you back on,' or set up an alternative channel. A lot of them will just hang up."
Phishing represents its own social engineering threat, arguably the most common one these days. A standard precaution is not to click on any links within an email, or to tell employees not to do so. Software also routinely monitors web addresses and compares them to a blacklist shared by security vendors or the security community.
More recently, public-private partnerships such as the High Tech Crime Investigation Association have been building intelligence hubs about fraudsters and criminals, with the ultimate goal of incorporating this intelligence into the technology we use, Bernard says. "When you click on your web browser, your software will warn you that this site has been known to post phishing attacks, and you should back out," he says.
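The shared-blacklist comparison described above can be sketched in a few lines of Python. The blocklist entries here are invented, and production filters match far more than bare hostnames, but the shape of the check is the same:

```python
import re
from urllib.parse import urlparse

# Invented entries; real feeds are shared by security vendors.
BLOCKLIST = {"login-paypa1.example", "evil.example"}

def flagged_links(email_body):
    """Extract URLs from a message body and return those whose host,
    or any parent domain of it, appears on the known-bad list."""
    bad = []
    for url in re.findall(r"https?://[^\s\"'>]+", email_body):
        host = (urlparse(url).hostname or "").lower()
        parts = host.split(".")
        # Check the host and every parent domain against the list.
        candidates = {".".join(parts[i:]) for i in range(len(parts))}
        if candidates & BLOCKLIST:
            bad.append(url)
    return bad
```

Checking parent domains as well as exact hosts is what lets one entry cover an attacker's whole stable of throwaway subdomains.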
Enterprises need to continue to build out defense in depth, implementing a variety of strategies, such as restricting access control depending on roles and situations, Bernard says.
“I published a security architecture framework back in 2010,” he says. “It’s been downloaded more than 90,000 times. It’s got 11 layers, and certainly the architecture of most enterprises needs to have those layers.”
Another staple of enterprise security these days is the phishing drill – deploying emails designed to test employees’ ability to avoid being phished.
“At Morgan Stanley [a one-time client of Bernard’s], as part of our security program, we did quarterly phishing tests,” Bernard says. “Compliance was mandatory. If somebody failed a phishing test, they would have to do it again until they got it right.”
Such tests must be ongoing, because current employees forget to be on guard against social engineering, and new employees need to be educated in the first place.
Steve Hunt, security consultant
Bernard has used Bloom’s Taxonomy to develop curriculum to educate employees about social engineering threats.
In this case, Bloom’s Taxonomy breaks down social engineering knowledge transfer, as it does all knowledge transfer, into six stages: knowledge of the subject; comprehension of the threat; methods of prevention; analysis to facilitate changes to processes; evaluation; and synthesis.
Another aspect of phishing drills is to not warn employees and IT staff that the red team drill is occurring, Hunt says.
“If you do continual social engineering testing, your employees learn pretty quickly that they’re being tested,” Hunt says. “That constitutes their warning.”
Utilizing programs that send out mock phishing emails to employees, these drills display prominent messages on users’ screens when they have clicked on one of the program’s attempted phishing emails.
“Now people are on high alert every time they open any email,” Hunt says. “They think any email could be a test, so they’re looking at it more carefully.”
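A minimal sketch of how such a drill program might compose its test messages, assuming a hypothetical training domain and sender: each message carries a per-recipient token, so a click on the embedded link can be traced back to the employee who took the bait:

```python
import uuid
from email.message import EmailMessage

def build_drill_email(employee_addr):
    """Compose a mock phishing message with a per-recipient token so a
    click on the link can be traced to the employee who fell for it."""
    token = uuid.uuid4().hex
    msg = EmailMessage()
    msg["From"] = "it-helpdesk@drill.example"   # invented training sender
    msg["To"] = employee_addr
    msg["Subject"] = "Action required: password expiry"
    msg.set_content(
        "Your password expires today. Verify it here:\n"
        f"https://drill.example/clicked?t={token}")
    return msg, token
```

The landing page behind the tokenized link would record the click and show the training message, rather than harvesting anything.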
Immediate prior warning is not recommended if an organization is doing any sort of comprehensive social engineering test, Hunt says. “The only exception is possibly the CFO or COO knows that the testing is going to happen,” he says.
With the permission of the client, Hunt has performed phishing attacks, telephone attacks with spoofed identities, and physical attacks at facilities, including breaking into buildings or extracting information face to face from an employee. “They all have different tricks for success, but they’re all useful, especially if the client’s security department does not know they’re happening,” he says.
Despite Bernard’s contention, Hunt says one of the most common and surprisingly successful phishing attacks is someone fabricating an email from the CEO to the CEO’s assistant or someone in finance, requesting authorization of a wire transfer to an account. “Nowadays, many companies have caught onto that and don’t fall for it anymore, but still it’s successful,” Hunt says. The best defense is employing some form of secondary verification, he adds.
Many criminal hackers pose as consultants or prospective employees, or as visitors to physical facilities, and attempt to walk around to find or overhear secrets, take pictures of documents or whiteboards, and often insert USB sticks into the backs of computers that act as Trojan horses to allow outsiders to enter enterprise networks, Hunt says.
Educators and IT departments need to measure the success or failure of anti-social-engineering efforts, continually evaluate the curriculum, ask what is working and what isn’t, and figure out where to push harder, Bernard says.
“If the C-suite and the board accept the risks, and we’ve done our job as security professionals to define what a phishing attack could do to us, then so be it,” he says.
If an attacker seems friendly enough, and succeeds in giving all the right answers to all the right questions, what then should individuals do?
It may be best to develop a kind of spider sense for when a conversation leaves you uneasy, and to reconsider, even mid-conversation, whether to continue if something does not feel right, security consultants say.
The best advice in such situations: Don’t be afraid to be firm and just say the information being asked for is too personal, or that you don’t know the questioner well enough to reveal it.
Even if the questioner has all the right credentials, don’t be afraid to call their purported employer to confirm the identity of the questioner before providing personal information, security experts say.
Another tricky situation can present itself when a junior employee of an organization witnesses a senior executive give out personal information to someone the junior employee suspects of having criminal intent. In such cases, IT departments have to be capable of receiving anonymous tips from those subordinates, who might otherwise have real doubts about stepping forward with their concerns.
So, despite all this advice, let’s suppose you’ve been socially engineered. Now what?
Again, culture can make a big difference. A punitive company culture can compound the damage a social engineering attack does: left to their own devices, employees may keep quiet out of simple embarrassment if they feel a mistake of theirs compromised the company’s security.
Employees should feel they are free to be able to contact their enterprise IT help desk after such an incident. Enterprises must assure their employees that they know social engineering attacks are going to happen, and that employees are only human. If they take a zero-tolerance attitude instead, corporations could compound their problems by risking not being made aware of such attacks early on.
Employees also need to know that reporting an incident will matter.
“I always question: If I report it, what is it going to actually do for me,” Bernard says. “Law enforcement needs the information to do their job. But will it prevent anyone else from being attacked?”
After a phishing attack, IT departments should monitor egress traffic, also known as traffic leaving their network, Hunt says. They should be looking for malware attempting to contact the criminal’s control server wherever it lives, or for data that malware is exfiltrating from the enterprise.
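Hunt’s advice about watching egress traffic can be sketched as a simple triage over outbound flow records, flagging unknown destinations (possible command-and-control beacons) and unusually large transfers (possible exfiltration). A minimal illustration, with hypothetical flow tuples, hostnames, and a made-up 50 MB threshold:

```python
# Sketch of egress (outbound) traffic triage after a phishing incident.
# Assumes flow records as (destination_host, bytes_out) tuples, e.g. from
# a firewall or NetFlow export; hosts and thresholds are hypothetical.
from collections import defaultdict

KNOWN_GOOD = {"updates.example.com", "mail.example.com"}  # assumed allowlist
EXFIL_BYTES = 50 * 1024 * 1024                            # flag > 50 MB outbound

def suspicious_egress(flows):
    totals = defaultdict(int)
    for dest, bytes_out in flows:
        totals[dest] += bytes_out
    return sorted(
        dest for dest, total in totals.items()
        if dest not in KNOWN_GOOD or total > EXFIL_BYTES
    )

flows = [
    ("updates.example.com", 2_000_000),
    ("203.0.113.9", 512),           # unknown host: possible C2 beacon
    ("203.0.113.9", 80_000_000),    # large transfer: possible exfiltration
]
print(suspicious_egress(flows))     # ['203.0.113.9']
```

In practice this kind of allowlist-and-threshold check is a starting point for investigation, not a verdict; legitimate cloud services routinely appear as unknown destinations.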
At the same time, management should let everyone in the company know that a phishing attack is underway, and not to open emails with certain subject lines, Hunt says.
“Phishing is still successful, and it’s going to be successful as long as there are emails to click,” Hunt says. “If you consider ransomware a type of social engineering attack, the number of attacks is definitely going to continue to grow.”
But to truly combat social engineering, individuals still have an increasingly important role to play.
According to Christopher Burgess, author, retired security consultant and a 30-plus year operative with the Central Intelligence Agency, individuals should consider stronger password retrieval questions, and if they aren’t willing to use a password manager, they should keep two physical notebooks – one with login names, the other with passwords. “You change something from a technological security problem to a physical security problem,” he says.
If users must reuse passwords, Burgess recommends making no reuse for key accounts such as email and financial information. “And anything you don’t want to see on the front page of the Washington Post,” he adds.
“Whether the threat is from a competitor, criminal or a nation state, the best defense companies have is to be alert to anomalies in all facets of their normal life,” Burgess says.
The post Social Engineering: Telling the good guys from the bad appeared first on SC Media.
Go to Source. Original post: “Social Engineering: Telling the good guys from the bad” by Victor M. Thomas, SC Magazine.
dorothydelgadillo · 7 years ago
Text
3 Cringe-Worthy Marketing Mistakes & the Lessons Learned from Them
As a marketer, you likely spend hours crafting the perfect email.
You rearrange copy, contemplate calls-to-action, and slave over the perfect subject line.
Once it comes time to finally hit “send,” as the number of recipients glows boldly on your screen, you hold your breath and let it go.
Then it happens.
You realize you linked to the wrong offer entirely.
As you break into a sweat, you frantically fumble for the option to cancel the send midway through.
Yes, I made this exact mistake at my first job. I thought this was bad until I heard some horror stories from my marketing friends!
It turns out, we’ve all been there - or if not exactly there, in a similar spot. We’ve all made a marketing mistake that we’ll never forget, but when it happens it’s easy to feel embarrassed, afraid, and alone. (Note: Here are some tips for recovering gracefully, though.)
Fortunately, IMPACT Elite, our community of inbound professionals, is a great place to share marketing horror stories and be reminded you’re never alone.
Recently, in IMPACT Elite, one of our members, Juan José Bello, posted about his cringe-worthy marketing mistake. He said:
“Once, I accidentally spent $23,900 in Google Adwords in a single day. I thought that the currency was in Colombian Pesos, not US Dollars. In the end, everything was solved, thankfully. I almost died.”
Juan then asked what everyone else’s biggest marketing mistakes were, and the Elite community was quick to respond!
Similarly to Juan’s story, Michele shared her experience of overspending on AdWords.
“I also once mega overspent on an AdWords campaign. I can't quite remember what happened - maybe I was mistakenly monitoring daily spend instead of total or something? All I remember now is that I always, always, ALWAYS set an end date.” - Michele Aymold, Vice President of Marketing at Parker Dewey
From both Juan’s and Michele’s stories, we can learn a key lesson about AdWords (or any paid promotion channel, really): always set up safety nets, including end dates, maximum spend caps, and spending-threshold notifications. Here are three more cringe-worthy marketing mistakes that were shared with us and the lessons learned from them.
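Those safety nets can also be approximated with a few lines of monitoring code. This is a hedged sketch with made-up caps and dates, and a bare `print` in place of a real notification; ad platforms offer native budget rules and alerts, which should be the first line of defense:

```python
# Sketch of a spend "safety net" for any paid channel: compare running
# daily spend against a hard cap and an end date. Figures and dates
# are hypothetical examples.
from datetime import date

DAILY_CAP_USD = 200.00
END_DATE = date(2024, 6, 30)   # always, always, ALWAYS set an end date

def check_campaign(spend_today_usd: float, today: date) -> list[str]:
    alerts = []
    if today > END_DATE:
        alerts.append("campaign past end date: pause it")
    if spend_today_usd >= DAILY_CAP_USD:
        alerts.append(f"daily spend ${spend_today_usd:.2f} hit the ${DAILY_CAP_USD:.2f} cap")
    elif spend_today_usd >= 0.8 * DAILY_CAP_USD:
        alerts.append("daily spend at 80% of cap: check currency and bids")
    return alerts

print(check_campaign(250.0, date(2024, 7, 1)))
```

A check like this, run on a schedule, would have caught both the wrong-currency overspend and the missing end date before they became horror stories.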
1. When Marketers Face Copyright
IMPACT Elite member Kendall Shiffler Guinn shared her mistake and lesson:
“Circa 2009 I wrote a blog post (a cute poem) spoofing Shel Silverstein’s poem/book ‘where the sidewalk ends’ to promote a sidewalk sale. I also photoshopped the book cover as a spoof to promote the sale. It was so cute. About a month and a half later… About the time it started to rank in search and Google images but after the sale had passed...my boss got a strongly worded cease and desist letter from Shel Silverstein's attorneys demanding we take the post down and chastising us for “defacing” the cover art. Of course we complied but we did have to get our attorneys involved which ended up costing some $ because they didn’t understand we couldn’t just remove it from google images ourselves. I was a very young content manager at the time and was super embarrassed. Luckily my boss thought the whole thing was hilarious!” - Kendall Shiffler Guinn, Marketing Director at AQUILA Commercial, LLC
Lesson Learned: Get Proper Permission for Everything You Use
Kendall followed up saying:
“Lesson learned about copyrighted materials and now I’m a crazy person about making sure we have proper rights/permissions for everything we use.” 
2. Emailing Those Hot Leads
“We once had a workflow set up in HubSpot to notify our client via an email when they got “a new hot lead” (yes, the email used those words). After around 10 emails we realized they were actually being delivered to the lead instead of our client 😬We just hoped the leads took being called “hot” as a compliment.” - Tammy Duggan-Herd, Ph.D., Marketing Manager at Campaign Creators
Lesson Learned: Get a Second Set of Eyes
While an obvious lesson learned may be to test everything, things like this can still slip through the cracks even when testing!
When it comes to internal versus external communication, it’s always a good idea to have a second set of eyes on your work. It helps to grab someone who hasn’t been very involved thus far. They’ll be more likely to spot issues you may miss when you’re so close to what you’re doing.
3. Drunk Tweeting from the Company Account
Elite member, Justin Keller, shared this horrifying yet hilarious story with us: 
“I was going to be in New York for a tradeshow and decided to fly in early to visit my best friend in Brooklyn. That night, we embarked on what could only be described as a “hipster bar crawl through Williamsburg.” Before we knew it, it was 1:00am and we decided to call it a night - though not before grabbing some Chinese food on the way home. 
Without checking which account I was logged into, I typed out a tweet (something to the effect of "Having a really good time in WilliamsBLAAAARRRGHHH") and hit send. 
10 minutes later my CEO was calling me from San Francisco (where it was two hours earlier) asking me what was going on with our Twitter account! Commence my drunken frenzy to delete my tweet. Fortunately, nothing too inflammatory or profane went out, but our Twitter account definitely looked drunk! 
Pretty soon thereafter my first board member started following me on Twitter and from that point on my Twitter account has been tragically less fun.” -Justin Keller, Vice President of Marketing at Sigstr
Lesson Learned: Don’t Drink & Tweet
However, if you find you’re prone to quickly tweeting in a burst of excitement, be sure to double check that you’re on your own account! 
It also might be a good idea to logout of any professional accounts before a night out. 
So, What Have We Learned? 
Don’t drunk tweet, always check that your internal emails are actually going to be sent internally, be careful of copyright infringement, and pay attention to spending on paid platforms and set up alerts! 
Most importantly though, you’re not the only one who has made a mistake like this! Just make sure you learn from it and never make the same mistake twice. 
What’s the worst marketing mistake you’ve ever made? Let us know in the comments or join us in IMPACT Elite to continue the discussion there. 
from Web Developers World https://www.impactbnd.com/blog/marketing-mistakes-and-the-lessons-learned
trendingnewsb · 7 years ago
Text
Inside the Two Years That Shook Facebook—and the World
One day in late February of 2016, Mark Zuckerberg sent a memo to all of Facebook’s employees to address some troubling behavior in the ranks. His message pertained to some walls at the company’s Menlo Park headquarters where staffers are encouraged to scribble notes and signatures. On at least a couple of occasions, someone had crossed out the words “Black Lives Matter” and replaced them with “All Lives Matter.” Zuckerberg wanted whoever was responsible to cut it out.
“ ‘Black Lives Matter’ doesn’t mean other lives don’t,” he wrote. “We’ve never had rules around what people can write on our walls,” the memo went on. But “crossing out something means silencing speech, or that one person’s speech is more important than another’s.” The defacement, he said, was being investigated.
All around the country at about this time, debates about race and politics were becoming increasingly raw. Donald Trump had just won the South Carolina primary, lashed out at the Pope over immigration, and earned the enthusiastic support of David Duke. Hillary Clinton had just defeated Bernie Sanders in Nevada, only to have an activist from Black Lives Matter interrupt a speech of hers to protest racially charged statements she’d made two decades before. And on Facebook, a popular group called Blacktivist was gaining traction by blasting out messages like “American economy and power were built on forced migration and torture.”
So when Zuckerberg’s admonition circulated, a young contract employee named Benjamin Fearnow decided it might be newsworthy. He took a screenshot on his personal laptop and sent the image to a friend named Michael Nuñez, who worked at the tech-news site Gizmodo. Nuñez promptly published a brief story about Zuckerberg’s memo.
A week later, Fearnow came across something else he thought Nuñez might like to publish. In another internal communication, Facebook had invited its employees to submit potential questions to ask Zuckerberg at an all-hands meeting. One of the most up-voted questions that week was “What responsibility does Facebook have to help prevent President Trump in 2017?” Fearnow took another screenshot, this time with his phone.
Fearnow, a recent graduate of the Columbia Journalism School, worked in Facebook’s New York office on something called Trending Topics, a feed of popular news subjects that popped up when people opened Facebook. The feed was generated by an algorithm but moderated by a team of about 25 people with backgrounds in journalism. If the word “Trump” was trending, as it often was, they used their news judgment to identify which bit of news about the candidate was most important. If The Onion or a hoax site published a spoof that went viral, they had to keep that out. If something like a mass shooting happened, and Facebook’s algorithm was slow to pick up on it, they would inject a story about it into the feed.
From the March 2018 issue of WIRED.
Facebook prides itself on being a place where people love to work. But Fearnow and his team weren’t the happiest lot. They were contract employees hired through a company called BCforward, and every day was full of little reminders that they weren’t really part of Facebook. Plus, the young journalists knew their jobs were doomed from the start. Tech companies, for the most part, prefer to have as little as possible done by humans—because, it’s often said, they don’t scale. You can’t hire a billion of them, and they prove meddlesome in ways that algorithms don’t. They need bathroom breaks and health insurance, and the most annoying of them sometimes talk to the press. Eventually, everyone assumed, Facebook’s algorithms would be good enough to run the whole project, and the people on Fearnow’s team—who served partly to train those algorithms—would be expendable.
The day after Fearnow took that second screenshot was a Friday. When he woke up after sleeping in, he noticed that he had about 30 meeting notifications from Facebook on his phone. When he replied to say it was his day off, he recalls, he was nonetheless asked to be available in 10 minutes. Soon he was on a videoconference with three Facebook employees, including Sonya Ahuja, the company’s head of investigations. According to his recounting of the meeting, she asked him if he had been in touch with Nuñez. He denied that he had been. Then she told him that she had their messages on Gchat, which Fearnow had assumed weren’t accessible to Facebook. He was fired. “Please shut your laptop and don’t reopen it,” she instructed him.
That same day, Ahuja had another conversation with a second employee at Trending Topics named Ryan Villarreal. Several years before, he and Fearnow had shared an apartment with Nuñez. Villarreal said he hadn’t taken any screenshots, and he certainly hadn’t leaked them. But he had clicked “like” on the story about Black Lives Matter, and he was friends with Nuñez on Facebook. “Do you think leaks are bad?” Ahuja demanded to know, according to Villarreal. He was fired too. The last he heard from his employer was in a letter from BCforward. The company had given him $15 to cover expenses, and it wanted the money back.
The firing of Fearnow and Villarreal set the Trending Topics team on edge—and Nuñez kept digging for dirt. He soon published a story about the internal poll showing Facebookers’ interest in fending off Trump. Then, in early May, he published an article based on conversations with yet a third former Trending Topics employee, under the blaring headline “Former Facebook Workers: We Routinely Suppressed Conservative News.” The piece suggested that Facebook’s Trending team worked like a Fox News fever dream, with a bunch of biased curators “injecting” liberal stories and “blacklisting” conservative ones. Within a few hours the piece popped onto half a dozen highly trafficked tech and politics websites, including Drudge Report and Breitbart News.
The post went viral, but the ensuing battle over Trending Topics did more than just dominate a few news cycles. In ways that are only fully visible now, it set the stage for the most tumultuous two years of Facebook’s existence—triggering a chain of events that would distract and confuse the company while larger disasters began to engulf it.
This is the story of those two years, as they played out inside and around the company. WIRED spoke with 51 current or former Facebook employees for this article, many of whom did not want their names used, for reasons anyone familiar with the story of Fearnow and Villarreal would surely understand. (One current employee asked that a WIRED reporter turn off his phone so the company would have a harder time tracking whether it had been near the phones of anyone from Facebook.)
The stories varied, but most people told the same basic tale: of a company, and a CEO, whose techno-optimism has been crushed as they’ve learned the myriad ways their platform can be used for ill. Of an election that shocked Facebook, even as its fallout put the company under siege. Of a series of external threats, defensive internal calculations, and false starts that delayed Facebook’s reckoning with its impact on global affairs and its users’ minds. And—in the tale’s final chapters—of the company’s earnest attempt to redeem itself.
In that saga, Fearnow plays one of those obscure but crucial roles that history occasionally hands out. He’s the Franz Ferdinand of Facebook—or maybe he’s more like the archduke’s hapless young assassin. Either way, in the rolling disaster that has enveloped Facebook since early 2016, Fearnow’s leaks probably ought to go down as the screenshots heard round the world.
II
By now, the story of Facebook’s all-consuming growth is practically the creation myth of our information era. What began as a way to connect with your friends at Harvard became a way to connect with people at other elite schools, then at all schools, and then everywhere. After that, your Facebook login became a way to log on to other internet sites. Its Messenger app started competing with email and texting. It became the place where you told people you were safe after an earthquake. In some countries like the Philippines, it effectively is the internet.
The furious energy of this big bang emanated, in large part, from a brilliant and simple insight. Humans are social animals. But the internet is a cesspool. That scares people away from identifying themselves and putting personal details online. Solve that problem—make people feel safe to post—and they will share obsessively. Make the resulting database of privately shared information and personal connections available to advertisers, and that platform will become one of the most important media technologies of the early 21st century.
But as powerful as that original insight was, Facebook’s expansion has also been driven by sheer brawn. Zuckerberg has been a determined, even ruthless, steward of the company’s manifest destiny, with an uncanny knack for placing the right bets. In the company’s early days, “move fast and break things” wasn’t just a piece of advice to his developers; it was a philosophy that served to resolve countless delicate trade-offs—many of them involving user privacy—in ways that best favored the platform’s growth. And when it comes to competitors, Zuckerberg has been relentless in either acquiring or sinking any challengers that seem to have the wind at their backs.
Facebook’s Reckoning
Two years that forced the platform to change
by Blanca Myers
March 2016: Facebook suspends Benjamin Fearnow, a journalist-curator for the platform’s Trending Topics feed, after he leaks to Gizmodo.

May 2016: Gizmodo reports that Trending Topics “routinely suppressed conservative news.” The story sends Facebook scrambling.

July 2016: Rupert Murdoch tells Zuckerberg that Facebook is wreaking havoc on the news industry and threatens to cause trouble.

August 2016: Facebook cuts loose all of its Trending Topics journalists, ceding authority over the feed to engineers in Seattle.

November 2016: Donald Trump wins. Zuckerberg says it’s “pretty crazy” to think fake news on Facebook helped tip the election.

December 2016: Facebook declares war on fake news, hires CNN alum Campbell Brown to shepherd relations with the publishing industry.

September 2017: Facebook announces that a Russian group paid $100,000 for roughly 3,000 ads aimed at US voters.

October 2017: Researcher Jonathan Albright reveals that posts from six Russian propaganda accounts were shared 340 million times.

November 2017: Facebook general counsel Colin Stretch gets pummeled during congressional Intelligence Committee hearings.

January 2018: Facebook begins announcing major changes, aimed to ensure that time on the platform will be “time well spent.”
In fact, it was in besting just such a rival that Facebook came to dominate how we discover and consume news. Back in 2012, the most exciting social network for distributing news online wasn’t Facebook, it was Twitter. The latter’s 140-character posts accelerated the speed at which news could spread, allowing its influence in the news industry to grow much faster than Facebook’s. “Twitter was this massive, massive threat,” says a former Facebook executive heavily involved in the decisionmaking at the time.
So Zuckerberg pursued a strategy he has often deployed against competitors he cannot buy: He copied, then crushed. He adjusted Facebook’s News Feed to fully incorporate news (despite its name, the feed was originally tilted toward personal news) and adjusted the product so that it showed author bylines and headlines. Then Facebook’s emissaries fanned out to talk with journalists and explain how to best reach readers through the platform. By the end of 2013, Facebook had doubled its share of traffic to news sites and had started to push Twitter into a decline. By the middle of 2015, it had surpassed Google as the leader in referring readers to publisher sites and was now referring 13 times as many readers to news publishers as Twitter. That year, Facebook launched Instant Articles, offering publishers the chance to publish directly on the platform. Posts would load faster and look sharper if they agreed, but the publishers would give up an element of control over the content. The publishing industry, which had been reeling for years, largely assented. Facebook now effectively owned the news. “If you could reproduce Twitter inside of Facebook, why would you go to Twitter?” says the former executive. “What they are doing to Snapchat now, they did to Twitter back then.”
It appears that Facebook did not, however, carefully think through the implications of becoming the dominant force in the news industry. Everyone in management cared about quality and accuracy, and they had set up rules, for example, to eliminate pornography and protect copyright. But Facebook hired few journalists and spent little time discussing the big questions that bedevil the media industry. What is fair? What is a fact? How do you signal the difference between news, analysis, satire, and opinion? Facebook has long seemed to think it has immunity from those debates because it is just a technology company—one that has built a “platform for all ideas.”
This notion that Facebook is an open, neutral platform is almost like a religious tenet inside the company. When new recruits come in, they are treated to an orientation lecture by Chris Cox, the company’s chief product officer, who tells them Facebook is an entirely new communications platform for the 21st century, as the telephone was for the 20th. But if anyone inside Facebook is unconvinced by religion, there is also Section 230 of the 1996 Communications Decency Act to recommend the idea. This is the section of US law that shelters internet intermediaries from liability for the content their users post. If Facebook were to start creating or editing content on its platform, it would risk losing that immunity—and it’s hard to imagine how Facebook could exist if it were liable for the many billion pieces of content a day that users post on its site.
And so, because of the company’s self-image, as well as its fear of regulation, Facebook tried never to favor one kind of news content over another. But neutrality is a choice in itself. For instance, Facebook decided to present every piece of content that appeared on News Feed—whether it was your dog pictures or a news story—in roughly the same way. This meant that all news stories looked roughly the same as each other, too, whether they were investigations in The Washington Post, gossip in the New York Post, or flat-out lies in the Denver Guardian, an entirely bogus newspaper. Facebook argued that this democratized information. You saw what your friends wanted you to see, not what some editor in a Times Square tower chose. But it’s hard to argue that this wasn’t an editorial decision. It may be one of the biggest ever made.
In any case, Facebook’s move into news set off yet another explosion of ways that people could connect. Now Facebook was the place where publications could connect with their readers—and also where Macedonian teenagers could connect with voters in America, and operatives in Saint Petersburg could connect with audiences of their own choosing in a way that no one at the company had ever seen before.
III
In February of 2016, just as the Trending Topics fiasco was building up steam, Roger McNamee became one of the first Facebook insiders to notice strange things happening on the platform. McNamee was an early investor in Facebook who had mentored Zuckerberg through two crucial decisions: to turn down Yahoo’s offer of $1 billion to acquire Facebook in 2006; and to hire a Google executive named Sheryl Sandberg in 2008 to help find a business model. McNamee was no longer in touch with Zuckerberg much, but he was still an investor, and that month he started seeing things related to the Bernie Sanders campaign that worried him. “I’m observing memes ostensibly coming out of a Facebook group associated with the Sanders campaign that couldn’t possibly have been from the Sanders campaign,” he recalls, “and yet they were organized and spreading in such a way that suggested somebody had a budget. And I’m sitting there thinking, ‘That’s really weird. I mean, that’s not good.’ ”
But McNamee didn’t say anything to anyone at Facebook—at least not yet. And the company itself was not picking up on any such worrying signals, save for one blip on its radar: In early 2016, its security team noticed an uptick in Russian actors attempting to steal the credentials of journalists and public figures. Facebook reported this to the FBI. But the company says it never heard back from the government, and that was that.
Instead, Facebook spent the spring of 2016 very busily fending off accusations that it might influence the elections in a completely different way. When Gizmodo published its story about political bias on the Trending Topics team in May, the article went off like a bomb in Menlo Park. It quickly reached millions of readers and, in a delicious irony, appeared in the Trending Topics module itself. But the bad press wasn’t what really rattled Facebook—it was the letter from John Thune, a Republican US senator from South Dakota, that followed the story’s publication. Thune chairs the Senate Commerce Committee, which in turn oversees the Federal Trade Commission, an agency that has been especially active in investigating Facebook. The senator wanted Facebook’s answers to the allegations of bias, and he wanted them promptly.
The Thune letter put Facebook on high alert. The company promptly dispatched senior Washington staffers to meet with Thune’s team. Then it sent him a 12-page single-spaced letter explaining that it had conducted a thorough review of Trending Topics and determined that the allegations in the Gizmodo story were largely false.
Facebook decided, too, that it had to extend an olive branch to the entire American right wing, much of which was raging about the company’s supposed perfidy. And so, just over a week after the story ran, Facebook scrambled to invite a group of 17 prominent Republicans out to Menlo Park. The list included television hosts, radio stars, think tankers, and an adviser to the Trump campaign. The point was partly to get feedback. But more than that, the company wanted to make a show of apologizing for its sins, lifting up the back of its shirt, and asking for the lash.
According to a Facebook employee involved in planning the meeting, part of the goal was to bring in a group of conservatives who were certain to fight with one another. They made sure to have libertarians who wouldn’t want to regulate the platform and partisans who would. Another goal, according to the employee, was to make sure the attendees were “bored to death” by a technical presentation after Zuckerberg and Sandberg had addressed the group.
The power went out, and the room got uncomfortably hot. But otherwise the meeting went according to plan. The guests did indeed fight, and they failed to unify in a way that was either threatening or coherent. Some wanted the company to set hiring quotas for conservative employees; others thought that idea was nuts. As often happens when outsiders meet with Facebook, people used the time to try to figure out how they could get more followers for their own pages.
Afterward, Glenn Beck, one of the invitees, wrote an essay about the meeting, praising Zuckerberg. “I asked him if Facebook, now or in the future, would be an open platform for the sharing of all ideas or a curator of content,” Beck wrote. “Without hesitation, with clarity and boldness, Mark said there is only one Facebook and one path forward: ‘We are an open platform.’ ”
Inside Facebook itself, the backlash around Trending Topics did inspire some genuine soul-searching. But none of it got very far. A quiet internal project, codenamed Hudson, cropped up around this time to determine, according to someone who worked on it, whether News Feed should be modified to better deal with some of the most complex issues facing the product. Does it favor posts that make people angry? Does it favor simple or even false ideas over complex and true ones? Those are hard questions, and the company didn’t have answers to them yet. Ultimately, in late June, Facebook announced a modest change: The algorithm would be revised to favor posts from friends and family. At the same time, Adam Mosseri, Facebook’s News Feed boss, posted a manifesto titled “Building a Better News Feed for You.” People inside Facebook spoke of it as a document roughly resembling the Magna Carta; the company had never spoken before about how News Feed really worked. To outsiders, though, the document came across as boilerplate. It said roughly what you’d expect: that the company was opposed to clickbait but that it wasn’t in the business of favoring certain kinds of viewpoints.
The most important consequence of the Trending Topics controversy, according to nearly a dozen former and current employees, was that Facebook became wary of doing anything that might look like stifling conservative news. It had burned its fingers once and didn’t want to do it again. And so a summer of deeply partisan rancor and calumny began with Facebook eager to stay out of the fray.
IV
Shortly after Mosseri published his guide to News Feed values, Zuckerberg traveled to Sun Valley, Idaho, for an annual conference hosted by billionaire Herb Allen, where moguls in short sleeves and sunglasses cavort and make plans to buy each other’s companies. But Rupert Murdoch broke the mood in a meeting that took place inside his villa. According to numerous accounts of the conversation, Murdoch and Robert Thomson, the CEO of News Corp, explained to Zuckerberg that they had long been unhappy with Facebook and Google. The two tech giants had taken nearly the entire digital ad market and become an existential threat to serious journalism. According to people familiar with the conversation, the two News Corp leaders accused Facebook of making dramatic changes to its core algorithm without adequately consulting its media partners, wreaking havoc according to Zuckerberg’s whims. If Facebook didn’t start offering a better deal to the publishing industry, Thomson and Murdoch conveyed in stark terms, Zuckerberg could expect News Corp executives to become much more public in their denunciations and much more open in their lobbying. They had helped to make things very hard for Google in Europe. And they could do the same for Facebook in the US.
Facebook thought that News Corp was threatening to push for a government antitrust investigation or maybe an inquiry into whether the company deserved its protection from liability as a neutral platform. Inside Facebook, executives believed Murdoch might use his papers and TV stations to amplify critiques of the company. News Corp says that was not at all the case; the company threatened to deploy executives, but not its journalists.
Zuckerberg had reason to take the meeting especially seriously, according to a former Facebook executive, because he had firsthand knowledge of Murdoch’s skill in the dark arts. Back in 2007, Facebook had come under criticism from 49 state attorneys general for failing to protect young Facebook users from sexual predators and inappropriate content. Concerned parents had written to Connecticut attorney general Richard Blumenthal, who opened an investigation, and to The New York Times, which published a story. But according to a former Facebook executive in a position to know, the company believed that many of the Facebook accounts and the predatory behavior the letters referenced were fakes, traceable to News Corp lawyers or others working for Murdoch, who owned Facebook’s biggest competitor, MySpace. “We traced the creation of the Facebook accounts to IP addresses at the Apple store a block away from the MySpace offices in Santa Monica,” the executive says. “Facebook then traced interactions with those accounts to News Corp lawyers. When it comes to Facebook, Murdoch has been playing every angle he can for a long time.” (Both News Corp and its spinoff 21st Century Fox declined to comment.)
When Zuckerberg returned from Sun Valley, he told his employees that things had to change. They still weren’t in the news business, but they had to make sure there would be a news business. And they had to communicate better. One of those who got a new to-do list was Andrew Anker, a product manager who’d arrived at Facebook in 2015 after a career in journalism (including a long stint at WIRED in the ’90s). One of his jobs was to help the company think through how publishers could make money on the platform. Shortly after Sun Valley, Anker met with Zuckerberg and asked to hire 60 new people to work on partnerships with the news industry. Before the meeting ended, the request was approved.
But having more people out talking to publishers just drove home how hard it would be to resolve the financial problems Murdoch wanted fixed. News outfits were spending millions to produce stories that Facebook was benefiting from, and Facebook, they felt, was giving too little back in return. Instant Articles, in particular, struck them as a Trojan horse. Publishers complained that they could make more money from stories that loaded on their own mobile web pages than on Facebook Instant. (They often did so, it turned out, in ways that short-changed advertisers, by sneaking in ads that readers were unlikely to see. Facebook didn’t let them get away with that.) Another seemingly irreconcilable difference: Outlets like Murdoch’s Wall Street Journal depended on paywalls to make money, but Instant Articles banned paywalls; Zuckerberg disapproved of them. After all, he would often ask, how exactly do walls and toll booths make the world more open and connected?
The conversations often ended at an impasse, but Facebook was at least becoming more attentive. This newfound appreciation for the concerns of journalists did not, however, extend to the journalists on Facebook’s own Trending Topics team. In late August, everyone on the team was told that their jobs were being eliminated. Simultaneously, authority over the algorithm shifted to a team of engineers based in Seattle. Very quickly the module started to surface lies and fiction. A headline days later read, “Fox News Exposes Traitor Megyn Kelly, Kicks Her Out For Backing Hillary.”
V
While Facebook grappled internally with what it was becoming—a company that dominated media but didn’t want to be a media company—Donald Trump’s presidential campaign staff faced no such confusion. To them Facebook’s use was obvious. Twitter was a tool for communicating directly with supporters and yelling at the media. Facebook was the way to run the most effective direct-marketing political operation in history.
In the summer of 2016, at the top of the general election campaign, Trump’s digital operation might have seemed to be at a major disadvantage. After all, Hillary Clinton’s team was flush with elite talent and got advice from Eric Schmidt, known for running Google. Trump’s was run by Brad Parscale, known for setting up the Eric Trump Foundation’s web page. Trump’s social media director was his former caddie. But in 2016, it turned out you didn’t need digital experience running a presidential campaign, you just needed a knack for Facebook.
Over the course of the summer, Trump’s team turned the platform into one of its primary vehicles for fund-raising. The campaign uploaded its voter files—the names, addresses, voting history, and any other information it had on potential voters—to Facebook. Then, using a tool called Lookalike Audiences, Facebook identified the broad characteristics of, say, people who had signed up for Trump newsletters or bought Trump hats. That allowed the campaign to send ads to people with similar traits. Trump would post simple messages like “This election is being rigged by the media pushing false and unsubstantiated charges, and outright lies, in order to elect Crooked Hillary!” that got hundreds of thousands of likes, comments, and shares. The money rolled in. Clinton’s wonkier messages, meanwhile, resonated less on the platform. Inside Facebook, almost everyone on the executive team wanted Clinton to win; but they knew that Trump was using the platform better. If he was the candidate for Facebook, she was the candidate for LinkedIn.
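Mechanically, a lookalike audience is a similarity search: average the seed list’s attributes into a profile, then keep users from the wider pool who resemble it. The sketch below is a hypothetical illustration of that idea—the feature names, user IDs, and threshold are invented, not Facebook’s actual system:

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def lookalike(seed_users, pool, threshold=0.9):
    # Average the seed audience's feature vectors into a centroid,
    # then keep pool users whose profile is similar enough to it.
    dims = len(next(iter(seed_users.values())))
    centroid = [sum(v[i] for v in seed_users.values()) / len(seed_users)
                for i in range(dims)]
    return [uid for uid, vec in pool.items()
            if cosine(vec, centroid) >= threshold]

# Toy features: [age_norm, rural, bought_merch, engages_politics]
seed = {"u1": [0.6, 1.0, 1.0, 0.9], "u2": [0.7, 1.0, 1.0, 0.8]}
pool = {"u3": [0.65, 1.0, 0.9, 0.85],  # close to the seed profile
        "u4": [0.2, 0.0, 0.0, 0.1]}   # very different
print(lookalike(seed, pool))  # → ['u3']
```

Real systems score millions of users against far richer profiles, but the core operation—expand a seed audience by similarity—is the same.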
Trump’s candidacy also proved to be a wonderful tool for a new class of scammers pumping out massively viral and entirely fake stories. Through trial and error, they learned that memes praising the former host of The Apprentice got many more readers than ones praising the former secretary of state. A website called Ending the Fed proclaimed that the Pope had endorsed Trump and got almost a million comments, shares, and reactions on Facebook, according to an analysis by BuzzFeed. Other stories asserted that the former first lady had quietly been selling weapons to ISIS, and that an FBI agent suspected of leaking Clinton’s emails was found dead. Some of the posts came from hyperpartisan Americans. Some came from overseas content mills that were in it purely for the ad dollars. By the end of the campaign, the top fake stories on the platform were generating more engagement than the top real ones.
Even current Facebookers acknowledge now that they missed what should have been obvious signs of people misusing the platform. And looking back, it’s easy to put together a long list of possible explanations for the myopia in Menlo Park about fake news. Management was gun-shy because of the Trending Topics fiasco; taking action against partisan disinformation—or even identifying it as such—might have been seen as another act of political favoritism. Facebook also sold ads against the stories, and sensational garbage was good at pulling people into the platform. Employees’ bonuses can be based largely on whether Facebook hits certain growth and revenue targets, which gives people an extra incentive not to worry too much about things that are otherwise good for engagement. And then there was the ever-present issue of Section 230 of the 1996 Communications Decency Act. If the company started taking responsibility for fake news, it might have to take responsibility for a lot more. Facebook had plenty of reasons to keep its head in the sand.
Roger McNamee, however, watched carefully as the nonsense spread. First there were the fake stories pushing Bernie Sanders, then he saw ones supporting Brexit, and then helping Trump. By the end of the summer, he had resolved to write an op-ed about the problems on the platform. But he never ran it. “The idea was, look, these are my friends. I really want to help them.” And so on a Sunday evening, nine days before the 2016 election, McNamee emailed a 1,000-word letter to Sandberg and Zuckerberg. “I am really sad about Facebook,” it began. “I got involved with the company more than a decade ago and have taken great pride and joy in the company’s success … until the past few months. Now I am disappointed. I am embarrassed. I am ashamed.”
VI
It’s not easy to recognize that the machine you’ve built to bring people together is being used to tear them apart, and Mark Zuckerberg’s initial reaction to Trump’s victory, and Facebook’s possible role in it, was one of peevish dismissal. Executives remember panic the first few days, with the leadership team scurrying back and forth between Zuckerberg’s conference room (called the Aquarium) and Sandberg’s (called Only Good News), trying to figure out what had just happened and whether they would be blamed. Then, at a conference two days after the election, Zuckerberg argued that filter bubbles are worse offline than on Facebook and that social media hardly influences how people vote. “The idea that fake news on Facebook—of which, you know, it’s a very small amount of the content—influenced the election in any way, I think, is a pretty crazy idea,” he said.
Zuckerberg declined to be interviewed for this article, but people who know him well say he likes to form his opinions from data. And in this case he wasn’t without it. Before the interview, his staff had worked up a back-of-the-envelope calculation showing that fake news was a tiny percentage of the total amount of election-related content on the platform. But the analysis was just an aggregate look at the percentage of clearly fake stories that appeared across all of Facebook. It didn’t measure their influence or the way fake news affected specific groups. It was a number, but not a particularly meaningful one.
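A toy calculation shows why an aggregate share can mislead: fake content can be a rounding error platform-wide while dominating what one heavily targeted group sees. The counts below are invented purely to illustrate the arithmetic:

```python
# Hypothetical exposure counts: fake news is ~1% of what the general
# population sees, but 40% of what one targeted group sees.
views = {
    "general":  {"real": 990_000, "fake": 10_000},
    "targeted": {"real": 6_000,   "fake": 4_000},
}

total_real = sum(g["real"] for g in views.values())
total_fake = sum(g["fake"] for g in views.values())
overall = total_fake / (total_real + total_fake)
targeted = views["targeted"]["fake"] / sum(views["targeted"].values())

print(f"overall fake share:  {overall:.1%}")   # small aggregate number
print(f"targeted fake share: {targeted:.1%}")  # much larger for the group
```

The aggregate figure (about 1.4% here) is exactly the kind of number the staff calculation produced; the 40% subgroup figure is what it could not see.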
Zuckerberg’s comments did not go over well, even inside Facebook. They seemed clueless and self-absorbed. “What he said was incredibly damaging,” a former executive told WIRED. “We had to really flip him on that. We realized that if we didn’t, the company was going to start heading down this pariah path that Uber was on.”
A week after his “pretty crazy” comment, Zuckerberg flew to Peru to give a talk to world leaders about the ways that connecting more people to the internet, and to Facebook, could reduce global poverty. Right after he landed in Lima, he posted something of a mea culpa. He explained that Facebook did take misinformation seriously, and he presented a vague seven-point plan to tackle it. When a professor at the New School named David Carroll saw Zuckerberg’s post, he took a screenshot. Alongside it on Carroll’s feed ran a headline from a fake CNN with an image of a distressed Donald Trump and the text “DISQUALIFIED; He’s GONE!”
At the conference in Peru, Zuckerberg met with a man who knows a few things about politics: Barack Obama. Media reports portrayed the encounter as one in which the lame-duck president pulled Zuckerberg aside and gave him a “wake-up call” about fake news. But according to someone who was with them in Lima, it was Zuckerberg who called the meeting, and his agenda was merely to convince Obama that, yes, Facebook was serious about dealing with the problem. He truly wanted to thwart misinformation, he said, but it wasn’t an easy issue to solve.
Meanwhile, at Facebook, the gears churned. For the first time, insiders really began to question whether they had too much power. One employee told WIRED that, watching Zuckerberg, he was reminded of Lennie in Of Mice and Men, the farm-worker with no understanding of his own strength.
Very soon after the election, a team of employees started working on something called the News Feed Integrity Task Force, inspired by a sense, one of them told WIRED, that hyperpartisan misinformation was “a disease that’s creeping into the entire platform.” The group, which included Mosseri and Anker, began to meet every day, using whiteboards to outline different ways they could respond to the fake-news crisis. Within a few weeks the company announced it would cut off advertising revenue for ad farms and make it easier for users to flag stories they thought false.
In December the company announced that, for the first time, it would introduce fact-checking onto the platform. Facebook didn’t want to check facts itself; instead it would outsource the problem to professionals. If Facebook received enough signals that a story was false, it would automatically be sent to partners, like Snopes, for review. Then, in early January, Facebook announced that it had hired Campbell Brown, a former anchor at CNN. She immediately became the most prominent journalist hired by the company.
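The routing mechanism described—enough user signals that a story is false trigger an automatic hand-off to outside fact-checkers—can be sketched as a simple counter crossing a threshold. The threshold value, story ID, and partner rotation below are invented for illustration; only Snopes is named in the reporting as a partner:

```python
from collections import Counter

FLAG_THRESHOLD = 50                   # hypothetical number of user reports
PARTNERS = ["Snopes", "PolitiFact"]   # example third-party checkers

flags = Counter()
review_queue = []

def report_story(story_id):
    # Each user flag increments the story's count; crossing the
    # threshold sends it out for independent review exactly once.
    flags[story_id] += 1
    if flags[story_id] == FLAG_THRESHOLD:
        partner = PARTNERS[len(review_queue) % len(PARTNERS)]
        review_queue.append((story_id, partner))

for _ in range(120):
    report_story("pope-endorses-candidate")
print(review_queue)  # → [('pope-endorses-candidate', 'Snopes')]
```

Note the `==` comparison rather than `>=`, so a heavily flagged story is queued once at the moment it crosses the line rather than on every subsequent report.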
Soon Brown was put in charge of something called the Facebook Journalism Project. “We spun it up over the holidays, essentially,” says one person involved in discussions about the project. The aim was to demonstrate that Facebook was thinking hard about its role in the future of journalism—essentially, it was a more public and organized version of the efforts the company had begun after Murdoch’s tongue-lashing. But sheer anxiety was also part of the motivation. “After the election, because Trump won, the media put a ton of attention on fake news and just started hammering us. People started panicking and getting afraid that regulation was coming. So the team looked at what Google had been doing for years with News Lab”—a group inside Alphabet that builds tools for journalists—“and we decided to figure out how we could put together our own packaged program that shows how seriously we take the future of news.”
Facebook was reluctant, however, to issue any mea culpas or action plans with regard to the problem of filter bubbles or Facebook’s noted propensity to serve as a tool for amplifying outrage. Members of the leadership team regarded these as issues that couldn’t be solved, and maybe even shouldn’t be solved. Was Facebook really more at fault for amplifying outrage during the election than, say, Fox News or MSNBC? Sure, you could put stories into people’s feeds that contradicted their political viewpoints, but people would turn away from them, just as surely as they’d flip the dial back if their TV quietly switched them from Sean Hannity to Joy Reid. The problem, as Anker puts it, “is not Facebook. It’s humans.”
VII
Zuckerberg’s “pretty crazy” statement about fake news caught the ear of a lot of people, but one of the most influential was a security researcher named Renée DiResta. For years, she’d been studying how misinformation spreads on the platform. If you joined an antivaccine group on Facebook, she observed, the platform might suggest that you join flat-earth groups or maybe ones devoted to Pizzagate—putting you on a conveyor belt of conspiracy thinking. Zuckerberg’s statement struck her as wildly out of touch. “How can this platform say this thing?” she remembers thinking.
Roger McNamee, meanwhile, was getting steamed at Facebook’s response to his letter. Zuckerberg and Sandberg had written him back promptly, but they hadn’t said anything substantial. Instead he ended up having a months-long, ultimately futile set of email exchanges with Dan Rose, Facebook’s VP for partnerships. McNamee says Rose’s message was polite but also very firm: The company was doing a lot of good work that McNamee couldn’t see, and in any event Facebook was a platform, not a media company.
“And I’m sitting there going, ‘Guys, seriously, I don’t think that’s how it works,’” McNamee says. “You can assert till you’re blue in the face that you’re a platform, but if your users take a different point of view, it doesn’t matter what you assert.”
As the saying goes, heaven has no rage like love to hatred turned, and McNamee’s concern soon became a cause—and the beginning of an alliance. In April 2017 he connected with a former Google design ethicist named Tristan Harris when they appeared together on Bloomberg TV. Harris had by then gained a national reputation as the conscience of Silicon Valley. He had been profiled on 60 Minutes and in The Atlantic, and he spoke eloquently about the subtle tricks that social media companies use to foster an addiction to their services. “They can amplify the worst aspects of human nature,” Harris told WIRED this past December. After the TV appearance, McNamee says he called Harris up and asked, “Dude, do you need a wingman?”
The next month, DiResta published an article comparing purveyors of disinformation on social media to manipulative high-frequency traders in financial markets. “Social networks enable malicious actors to operate at platform scale, because they were designed for fast information flows and virality,” she wrote. Bots and sock puppets could cheaply “create the illusion of a mass groundswell of grassroots activity,” in much the same way that early, now-illegal trading algorithms could spoof demand for a stock. Harris read the article, was impressed, and emailed her.
The three were soon out talking to anyone who would listen about Facebook’s poisonous effects on American democracy. And before long they found receptive audiences in the media and Congress—groups with their own mounting grievances against the social media giant.
VIII
Even at the best of times, meetings between Facebook and media executives can feel like unhappy family gatherings. The two sides are inextricably bound together, but they don’t like each other all that much. News executives resent that Facebook and Google have captured roughly three-quarters of the digital ad business, leaving the media industry and other platforms, like Twitter, to fight over scraps. Plus they feel like the preferences of Facebook’s algorithm have pushed the industry to publish ever-dumber stories. For years, The New York Times resented that Facebook helped elevate BuzzFeed; now BuzzFeed is angry about being displaced by clickbait.
And then there’s the simple, deep fear and mistrust that Facebook inspires. Every publisher knows that, at best, they are sharecroppers on Facebook’s massive industrial farm. The social network is roughly 200 times more valuable than the Times. And journalists know that the man who owns the farm has the leverage. If Facebook wanted to, it could quietly turn any number of dials that would harm a publisher—by manipulating its traffic, its ad network, or its readers.
Emissaries from Facebook, for their part, find it tiresome to be lectured by people who can’t tell an algorithm from an API. They also know that Facebook didn’t win the digital ad market through luck: It built a better ad product. And in their darkest moments, they wonder: What’s the point? News makes up only about 5 percent of the total content that people see on Facebook globally. The company could let it all go and its shareholders would scarcely notice. And there’s another, deeper problem: Mark Zuckerberg, according to people who know him, prefers to think about the future. He’s less interested in the news industry’s problems right now; he’s interested in the problems five or 20 years from now. The editors of major media companies, on the other hand, are worried about their next quarter—maybe even their next phone call. When they bring lunch back to their desks, they know not to buy green bananas.
This mutual wariness—sharpened almost to enmity in the wake of the election—did not make life easy for Campbell Brown when she started her new job running the nascent Facebook Journalism Project. The first item on her to-do list was to head out on yet another Facebook listening tour with editors and publishers. One editor describes a fairly typical meeting: Brown and Chris Cox, Facebook’s chief product officer, invited a group of media leaders to gather in late January 2017 at Brown’s apartment in Manhattan. Cox, a quiet, suave man, sometimes referred to as “the Ryan Gosling of Facebook Product,” took the brunt of the ensuing abuse. “Basically, a bunch of us just laid into him about how Facebook was destroying journalism, and he graciously absorbed it,” the editor says. “He didn’t much try to defend them. I think the point was really to show up and seem to be listening.” Other meetings were even more tense, with the occasional comment from journalists noting their interest in digital antitrust issues.
As bruising as all this was, Brown’s team became more confident that their efforts were valued within the company when Zuckerberg published a 5,700-word corporate manifesto in February. He had spent the previous three months, according to people who know him, contemplating whether he had created something that did more harm than good. “Are we building the world we all want?” he asked at the beginning of his post, implying that the answer was an obvious no. Amid sweeping remarks about “building a global community,” he emphasized the need to keep people informed and to knock out false news and clickbait. Brown and others at Facebook saw the manifesto as a sign that Zuckerberg understood the company’s profound civic responsibilities. Others saw the document as blandly grandiose, showcasing Zuckerberg’s tendency to suggest that the answer to nearly any problem is for people to use Facebook more.
Shortly after issuing the manifesto, Zuckerberg set off on a carefully scripted listening tour of the country. He began popping into candy shops and dining rooms in red states, camera crew and personal social media team in tow. He wrote an earnest post about what he was learning, and he deflected questions about whether his real goal was to become president. It seemed like a well-meaning effort to win friends for Facebook. But it soon became clear that Facebook’s biggest problems emanated from places farther away than Ohio.
IX
One of the many things Zuckerberg seemed not to grasp when he wrote his manifesto was that his platform had empowered an enemy far more sophisticated than Macedonian teenagers and assorted low-rent purveyors of bull. As 2017 wore on, however, the company began to realize it had been attacked by a foreign influence operation. “I would draw a real distinction between fake news and the Russia stuff,” says an executive who worked on the company’s response to both. “With the latter there was a moment where everyone said ‘Oh, holy shit, this is like a national security situation.’”
That holy shit moment, though, didn’t come until more than six months after the election. Early in the campaign season, Facebook was aware of familiar attacks emanating from known Russian hackers, such as the group APT28, which is believed to be affiliated with Moscow. They were hacking into accounts outside of Facebook, stealing documents, then creating fake Facebook accounts under the banner of DCLeaks, to get people to discuss what they’d stolen. The company saw no signs of a serious, concerted foreign propaganda campaign, but it also didn’t think to look for one.
During the spring of 2017, the company’s security team began preparing a report about how Russian and other foreign intelligence operations had used the platform. One of its authors was Alex Stamos, head of Facebook’s security team. Stamos was something of an icon in the tech world for having reportedly resigned from his previous job at Yahoo after a conflict over whether to grant a US intelligence agency access to Yahoo servers. According to two people with direct knowledge of the document, he was eager to publish a detailed, specific analysis of what the company had found. But members of the policy and communications team pushed back and cut his report way down. Sources close to the security team suggest the company didn’t want to get caught up in the political whirlwind of the moment. (Sources on the politics and communications teams insist they edited the report down, just because the darn thing was hard to read.)
On April 27, 2017, the day after the Senate announced it was calling then FBI director James Comey to testify about the Russia investigation, Stamos’ report came out. It was titled “Information Operations and Facebook,” and it gave a careful step-by-step explanation of how a foreign adversary could use Facebook to manipulate people. But there were few specific examples or details, and there was no direct mention of Russia. It felt bland and cautious. As Renée DiResta says, “I remember seeing the report come out and thinking, ‘Oh, goodness, is this the best they could do in six months?’”
One month later, a story in Time suggested to Stamos’ team that they might have missed something in their analysis. The article quoted an unnamed senior intelligence official saying that Russian operatives had bought ads on Facebook to target Americans with propaganda. Around the same time, the security team also picked up hints from congressional investigators that made them think an intelligence agency was indeed looking into Russian Facebook ads. Caught off guard, the team members started to dig into the company’s archival ads data themselves.
Eventually, by sorting transactions according to a series of data points—Were ads purchased in rubles? Were they purchased within browsers whose language was set to Russian?—they were able to find a cluster of accounts, funded by a shadowy Russian group called the Internet Research Agency, that had been designed to manipulate political opinion in America. There was, for example, a page called Heart of Texas, which pushed for the secession of the Lone Star State. And there was Blacktivist, which pushed stories about police brutality against black men and women and had more followers than the verified Black Lives Matter page.
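The filtering the security team is described as doing—sort ad purchases on telltale data points like payment currency and browser locale, then group the matches by funding account—amounts to a straightforward query. The records and field names below are invented; the account names echo the pages named in the reporting:

```python
# Hypothetical ad-purchase records; field names are illustrative only.
ads = [
    {"ad_id": 1, "currency": "RUB", "browser_lang": "ru", "account": "heart_of_texas"},
    {"ad_id": 2, "currency": "USD", "browser_lang": "en", "account": "local_bakery"},
    {"ad_id": 3, "currency": "RUB", "browser_lang": "ru", "account": "blacktivist"},
    {"ad_id": 4, "currency": "RUB", "browser_lang": "ru", "account": "heart_of_texas"},
]

def suspicious_clusters(records):
    # Keep purchases paid in rubles from Russian-locale browsers,
    # then group the matching ads by the account that funded them.
    clusters = {}
    for r in records:
        if r["currency"] == "RUB" and r["browser_lang"] == "ru":
            clusters.setdefault(r["account"], []).append(r["ad_id"])
    return clusters

print(suspicious_clusters(ads))
# → {'heart_of_texas': [1, 4], 'blacktivist': [3]}
```

The hard part in practice was not the query itself but knowing which signals to query on, and deciding to look at all.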
Numerous security researchers express consternation that it took Facebook so long to realize how the Russian troll farm was exploiting the platform. After all, the group was well known to Facebook. Executives at the company say they’re embarrassed by how long it took them to find the fake accounts, but they point out that they were never given help by US intelligence agencies. A staffer on the Senate Intelligence Committee likewise voiced exasperation with the company. “It seemed obvious that it was a tactic the Russians would exploit,” the staffer says.
When Facebook finally did find the Russian propaganda on its platform, the discovery set off a crisis, a scramble, and a great deal of confusion. First, due to a miscalculation, word initially spread through the company that the Russian group had spent millions of dollars on ads, when the actual total was in the low six figures. Once that error was resolved, a disagreement broke out over how much to reveal, and to whom. The company could release the data about the ads to the public, release everything to Congress, or release nothing. Much of the argument hinged on questions of user privacy. Members of the security team worried that the legal process involved in handing over private user data, even if it belonged to a Russian troll farm, would open the door for governments to seize data from other Facebook users later on. “There was a real debate internally,” says one executive. “Should we just say ‘Fuck it’ and not worry?” But eventually the company decided it would be crazy to throw legal caution to the wind “just because Rachel Maddow wanted us to.”
Ultimately, a blog post appeared under Stamos’ name in early September announcing that, as far as the company could tell, the Russians had paid Facebook $100,000 for roughly 3,000 ads aimed at influencing American politics around the time of the 2016 election. Every sentence in the post seemed to downplay the substance of these new revelations: The number of ads was small, the expense was small. And Facebook wasn’t going to release them. The public wouldn’t know what they looked like or what they were really aimed at doing.
This didn’t sit at all well with DiResta. She had long felt that Facebook was insufficiently forthcoming, and now it seemed to be flat-out stonewalling. “That was when it went from incompetence to malice,” she says. A couple of weeks later, while waiting at a Walgreens to pick up a prescription for one of her kids, she got a call from a researcher at the Tow Center for Digital Journalism named Jonathan Albright. He had been mapping ecosystems of misinformation since the election, and he had some excellent news. “I found this thing,” he said. Albright had started digging into CrowdTangle, one of the analytics platforms that Facebook uses. And he had discovered that the data from six of the accounts Facebook had shut down were still there, frozen in a state of suspended animation. There were the posts pushing for Texas secession and playing on racial antipathy. And then there were political posts, like one that referred to Clinton as “that murderous anti-American traitor Killary.” Right before the election, the Blacktivist account urged its supporters to stay away from Clinton and instead vote for Jill Stein. Albright downloaded the most recent 500 posts from each of the six groups. He reported that, in total, their posts had been shared more than 340 million times.
X
To McNamee, the way the Russians used the platform was neither a surprise nor an anomaly. “They find 100 or 1,000 people who are angry and afraid and then use Facebook’s tools to advertise to get people into groups,” he says. “That’s exactly how Facebook was designed to be used.”
McNamee and Harris had first traveled to DC for a day in July to meet with members of Congress. Then, in September, they were joined by DiResta and began spending all their free time counseling senators, representatives, and members of their staffs. The House and Senate Intelligence Committees were about to hold hearings on Russia’s use of social media to interfere in the US election, and McNamee, Harris, and DiResta were helping them prepare. One of the early questions they weighed in on was the matter of who should be summoned to testify. Harris recommended that the CEOs of the big tech companies be called in, to create a dramatic scene in which they all stood in a neat row swearing an oath with their right hands in the air, roughly the way tobacco executives had been forced to do a generation earlier. Ultimately, though, it was determined that the general counsels of the three companies—Facebook, Twitter, and Google—should head into the lion’s den.
And so on November 1, Colin Stretch arrived from Facebook to be pummeled. During the hearings themselves, DiResta was sitting on her bed in San Francisco, watching them with her headphones on, trying not to wake up her small children. She listened to the back-and-forth in Washington while chatting on Slack with other security researchers. She watched as Marco Rubio smartly asked whether Facebook even had a policy forbidding foreign governments from running an influence campaign through the platform. The answer was no. Rhode Island senator Jack Reed then asked whether Facebook felt an obligation to individually notify all the users who had seen Russian ads that they had been deceived. The answer again was no. But maybe the most threatening comment came from Dianne Feinstein, the senior senator from Facebook’s home state. “You’ve created these platforms, and now they’re being misused, and you have to be the ones to do something about it,” she declared. “Or we will.”
After the hearings, yet another dam seemed to break, and former Facebook executives started to go public with their criticisms of the company too. On November 8, billionaire entrepreneur Sean Parker, Facebook’s first president, said he now regretted pushing Facebook so hard on the world. “I don’t know if I really understood the consequences of what I was saying,” he said.
Read more: https://www.wired.com/story/inside-facebook-mark-zuckerberg-2-years-of-hell/
aheliotech · 8 years ago
Common phishing scams and how to prevent them
New Post has been published on https://www.aheliotech.com/blog/common-phishing-scams-and-how-to-prevent-them/
I think by now we’ve all been contacted by a Nigerian prince looking for someone to help him move his wealth out of the country in return for a share of his fortune. We all know it’s a scam, but did you know a whopping 30% of you still click on phishing scam links?
Phishing scams in particular are getting so sophisticated these days that most of us will need a magnifying glass just to spot the inconsistencies that give away their fraudulent nature.
In today’s post, we will tell you exactly how to recognize a phishing scam and share some classic examples we’ve encountered.
Firstly, what are phishing scams?
The term ‘phishing’ was coined in 1996 by hackers who were stealing ‘America Online’ (better known as AOL) accounts and passwords. Employing the analogy of angling, scammers used email ‘lures,’ laying out ‘hooks’ to ‘fish’ for passwords and financial data. The letter ‘f’ was often interchanged with ‘ph’ as a nod to the original form of hacking known as phone phreaking: the reverse engineering of various tones used to re-route long distance calls.
While these ‘phreakers’ manipulated tone sequences to obtain free calls, the act itself could be argued to be victimless (Well, except for the phone companies…). This is not the case with phishing attacks. Phishers attempt to trick, steal or socially engineer you into divulging your private information. As businesses put complex security mechanisms in place to protect against unauthorized access, criminals target the weakest element in the system: you.
So, what types of phishing scams are out there?
There are two main types of phishing scams:
Advanced-fee fraud
An advance-fee scam is a type of fraud that involves promising the victim a significant share of a large sum of money in return for a small up-front payment. If a victim makes the payment, the fraudster will either invent a series of new fees for the victim to keep paying, or will simply disappear.
Traditional phishing scams
Phishing is the attempt to obtain sensitive information such as your username, password and credit card details by pretending to be a trustworthy entity such as Microsoft, Amazon, PayPal or even your bank.
While most traditional phishing scams are implemented via email, many phishing attempts happen via social media and even through your work suites such as Dropbox and Google Docs.
Over the years we’ve seen it all, like that time a Skype scambot tried to lure our CEO into plugging in his credit card details.
Or that other time when we teamed up with Bleeping Computer to watch a ‘tech support’ scammer (dubbed Mr Z by the team) attempt to convince us our virtual machine was infected with ‘trozens’ so we’d buy his fake product.
In fact: Tech support scams are so common we’ve covered them in depth here.
While there are countless different ways to phish, these are the most common phishing scam examples:
Deceptive phishing
Regardless of the delivery method, eg; Skype, email or phone call, deceptive phishing is a scammer impersonating a legitimate company in order to receive something from you, whether that be your personal information for identity theft, your credit card details or for you to feel pressured into buying a product that may or may not exist. The methods criminals are using to make you believe the site you are opening is the real deal are getting increasingly clever, as we’ll show you in our examples later on.
Spear phishing
These are deceptive phishing attacks, but rather than attempting to scam an entire population of people, the attacks are targeted. You may receive an email that includes your name, position, company name and work phone number, or a contact request on Skype where you are directly confronted with personal information.
CEO Fraud
This is a type of spear phishing where the credentials of a business executive are commandeered via a phishing email, hoax call or Skype scam. These credentials are then used to conduct fraudulent activity. A common example is an email purporting to come from Google’s Larry Page himself, notifying you that you’ve won the “official” sweepstakes.
Cloud Storage Phishing
Utilising the suites that many people now rely on for work, these phishing scams are conducted via shared documents. Google and Dropbox have even unknowingly hosted such scams on their own domains, complete with SSL certificates, making them appear 100% legitimate.
Pharming
This technique redirects traffic from a legitimate site to a malicious one without your knowledge. Any personal information you enter into this page goes directly to the scammers. These pages are usually reached via links shared in deceptive phishing emails, Skype chats and social media ads.
How advanced-fee phishing attacks work: Chatting with a scammer
This email showed up in my inbox from ‘UBS Investment Bank London’ earlier this week. The sender email, [email protected], piqued my interest.
I was amused by the many inconsistencies in the email, such as the fact that it made absolutely no sense. I couldn’t help myself. I had to know. What was the deal? Who was Jerry Joe?
So I called him.
The phone number connects to a man in Basingstoke, England, who didn’t know what to say until I asked him why he emailed me. He kept asking for my name while I questioned him, assuring me repeatedly that if I simply gave him my full name he would be able to give me more information about the business adventure we were about to embark on together.
When I couldn’t get past Jerry Joe’s demands for my personal information on the phone, I responded from a different (pseudonymous) email address to learn more.
Within an hour I had received nothing short of an essay.
A 40% cut of almost £8m? Guaranteed 100% success? Sweet! If I wasn’t already sold on this wonderful opportunity, I had the following attachments to convince me.
Even though I was surprised that the account statement from 2014 looked like it was printed on paper from the 80s, this all seemed incredibly convincing. He went on to say:
Now assured “this was no child’s play,” and that “nothing stupid will happen in this business either now or later” I was to feel safe providing my brother/partner with:
My full legal name
My full address
My age
Occupation
Marital status
A copy of any of my identity documents, either an international passport or a driver’s license.
I can only imagine that if I had provided these things, this ‘SIR’ would have had her identity used online to scam other poor souls. And I can’t imagine identity theft is the only aim of the scam. There is often some ‘small fee’ required to facilitate the transfer of my newly acquired fortune, whereby I offer my credit card details or send money via Western Union or PayPal.
British comedian James Veitch repeatedly converses with scam emailers in a bid to extract some whimsy from the scourge of the internet. In his words, time wasted with him is time that these scammers are not out scamming adults out of their savings. The results are hilarious.
Though it is obvious in the context of this post that the above examples are indeed scams, and while humorous to behold, this is a serious problem. There are still many who are able to be convinced to give up their information through either falling for the initial scam email or being harassed until they are willing to do so. If you receive an email like the one above, simply delete it.
But what about less obvious scam emails?
How to identify a traditional phishing scam
Think about how meticulous you are about your spelling in an email to a customer, your boss or a work colleague. Now imagine the importance a financial organisation, such as your bank, would place on ensuring all brand communication was immaculately presented.
If you receive an email that looks like this, you can be sure it didn’t come from Bank of Scotland:
Though the general layout is quite neat, the incorrect email address, or email spoofing, is your first clue that something isn’t quite right. The random capitalisation in the main header text might not tip you off but the request for you to immediately log on and correct your details should.
There is not a financial organisation on earth that would lead you to a third party site to sign in to your account. If you receive an email like this, go to your online banking directly from your bank’s website in a separate window. Check your secure messages from within internet banking. See any message there about your online account? Didn’t think so.
Scammers take advantage of the fact that we are constantly being bombarded with information at all hours of the day. It is easy to become complacent about what we are clicking on and to whom we are giving our information.
Keep a clear eye out for the following clues that an email is not what it seems:
An email is addressed vaguely with salutations such as ‘Dear Valued Customer’ or ‘Dear Customer.’
The subject uses urgent and/or threatening language such as ‘Account Suspended’ or ‘Unauthorized Login Attempt.’
You are being offered a lot of money for no reason.
The email simply makes no sense.
The message appears to be from a government agency.
An email, phone call or contact request is completely unsolicited and was not initiated by any action on your part.
You are being asked to surrender personal information such as your bank account details, credit card information or are being redirected to login with your internet banking credentials.
Something just doesn’t feel right. If an offer seems too good to be true or you just feel in your gut that something is off, it probably is.
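The clues above lend themselves to a toy heuristic. This is purely an illustration of the checklist, not a real spam filter; the keyword lists and scoring weights are invented for the example:

```python
import re

# Naive red-flag scorer for the clues listed above. Illustrative only:
# real phishing detection requires far more than keyword matching.
VAGUE_GREETINGS = ("dear valued customer", "dear customer")
URGENT_PHRASES = ("account suspended", "unauthorized login attempt", "asap")
SENSITIVE_ASKS = ("password", "credit card", "social security", "bank account")

def red_flag_score(subject: str, body: str) -> int:
    text = f"{subject} {body}".lower()
    score = 0
    if any(g in text for g in VAGUE_GREETINGS):
        score += 1  # vague salutation
    if any(p in text for p in URGENT_PHRASES):
        score += 1  # urgent or threatening language
    if any(a in text for a in SENSITIVE_ASKS):
        score += 1  # asks for personal information
    if re.search(r"\bhttp://", text):
        score += 1  # link to a plain-HTTP page
    return score

print(red_flag_score("Account Suspended",
                     "Dear Customer, confirm your password at http://example.com"))  # → 4
```

A higher score means more of the red flags above are present; in practice a human still has to make the final call.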
Let’s take a look at a common example close up.
Think you can spot a scam now? Not so fast.
There’s one more type of phishing scam you need to be very aware of.
Unicode phishing
Now, let’s take a look at the browser bar below. If you were redirected here from an email you wouldn’t see any problem. It looks like Paypal.com. Great! Now, click on the image and look closely.
See the umlaut and tilde above the ‘a’s? This is a scam site that I was redirected to via a link in an email. It’s a method our lab team is increasingly seeing used to trick users into believing they are accessing a legitimate site. In this case, phishers are exploiting the fact that Unicode incorporates many writing systems, each with different code points for visually identical letters. Using punycode, scammers can register domain names that look identical to a real site’s.
It is because of legitimate-looking login pages like the ones above and altered URLs like this one that people are so easily caught out.
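The homograph trick is easy to demonstrate with Python’s standard library. A small sketch, using a made-up lookalike label rather than any actual scam domain:

```python
import unicodedata

latin_a = "a"          # U+0061
cyrillic_a = "\u0430"  # U+0430, visually identical in many fonts

print(unicodedata.name(latin_a))     # LATIN SMALL LETTER A
print(unicodedata.name(cyrillic_a))  # CYRILLIC SMALL LETTER A

# A lookalike label with one Cyrillic letter swapped in. IDNA encoding
# reveals the "xn--" punycode form that the domain registry actually sees.
lookalike = "p" + cyrillic_a + "ypal"
encoded = lookalike.encode("idna").decode("ascii")
print(encoded)  # starts with "xn--" and looks nothing like the real brand
```

The two ‘a’s render almost identically on screen, but they are different code points, so the registered domain is entirely different from the brand it imitates.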
So how can you protect yourself?
How to prevent phishing scams
Though scams are getting more sophisticated all the time, there are easy steps you can take to prevent phishing attacks.
DON’T click on any links in emails claiming to be from your bank or any other trusted organisation. Especially if it asks you to verify or update your personal details. Delete these immediately.
DO an internet search of specific names and phrases in an email you are unsure about. Many scams can be identified this way as other victims post their stories on online forums.
LOOK for https: in any website where you are asked to provide personal details. SSL certificates are used to encrypt the transmitted information to secure identities and financial information over the web. If you don’t see HTTPS in your browser search bar, close it and manually search for the secure address. Always pay very close attention to what is written in the browser search bar. Check for inconsistencies such as symbols that shouldn’t be there and scrambled URLs.
NEVER provide personal information in an unsolicited phone call. Even if you believe the person calling you is legitimately from your bank, call your bank directly on the number listed on their website to be sure. They will confirm if you were contacted and why. Never return a call on a phone number given to you by the caller directly.
ALWAYS report a scam to the Anti-Phishing Working Group to ensure that others who are affected by the same scam can find out about it online.
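The “LOOK” advice above can be sketched as a minimal link check: require HTTPS and an all-ASCII hostname before trusting a login link. This is illustrative only; HTTPS alone does not prove a site is legitimate, and the example domains are placeholders:

```python
from urllib.parse import urlparse

def looks_safe(url: str) -> bool:
    """Very rough first-pass check on a login link. Not a guarantee."""
    parts = urlparse(url)
    if parts.scheme != "https":
        return False                 # no TLS at all
    host = parts.hostname or ""
    if not host.isascii():
        return False                 # possible homograph (lookalike) domain
    return True

print(looks_safe("http://paypal.com/login"))        # False: not https
print(looks_safe("https://p\u0430ypal.com/login"))  # False: non-ASCII host
print(looks_safe("https://paypal.com/login"))       # True
```

A `True` result only means the link passed these two basic checks; the safest habit remains typing your bank’s address into a fresh window yourself.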
As you can see, some people will try anything to scam you out of your hard-earned money, and the lines are always being blurred between phishing for information and scamming users for financial gain. But never fear: you have all the tools you could possibly need to spot a phishing attack a mile off! All it takes is clear eyes and a few seconds’ consideration to avoid infection. Now that you know what to look for, you can even help someone else do the same!
Have a great (scam-free) day!
We’ve shown you ours, now show us yours. What’s the craziest scam you’ve ever encountered? What did you do about it? Tell us in the comments.
Related Posts:
Curiosity arousing Facebook scams lead to nothing but…
Safe emails vs scams: the key differences
ALERT: The Google Drive Phishing Scam Returns!
Password Alert, Google’s new form of defense against…
Don’t spread the love: Valentine’s Day scams to…
christophergill8 · 8 years ago
Fake CEO phishing tax scam is back
Remember that tax scam last spring where crooks posing as company executives sent emails asking for workers' payroll data?
It was the one that fooled lots of folks, including a Milwaukee Bucks employee who thought that the message really did come from the NBA franchise's president. That chagrined Bucks' staffer sent the phishing crook 2015 tax year data on the team's employees, including his rank-and-file colleagues and highly-paid professional basketball players.
Well, that tax scam is back.
Email spoofs company CEO: The Internal Revenue Service says the email scam is now making its way across the nation for a second time. Once the crooks get the data, they then file fraudulent tax returns seeking refunds.
This type of phishing is known as a spoofing e-mail. In this instance, it contains the actual name of a company's chief executive officer or other top corporate official in an attempt to prompt employees, eager to do what the big boss asks, to send back workers' personal financial data.
The IRS says the emails are likely to ask scam targets such things as:
Kindly send me the individual 2016 W-2 (PDF) and earnings summary of all W-2 of our company staff for a quick review.
Can you send me the updated list of employees with full details (Name, Social Security Number, Date of Birth, Home Address, Salary)?
I want you to send me the list of W-2 copy of employees wage and tax statement for 2016, I need them in PDF file type, you can send it as an attachment. Kindly prepare the lists and email them to me ASAP.
In light of the scam's resurgence, the IRS urges company payroll officials to double check any executive-level or unusual requests for lists of W-2 forms or Social Security numbers.
Other scams out there, too: The IRS and its Security Summit partners also remind all taxpayers that this is just one of myriad tax scams that are or will appear this filing season.
While the IRS, state tax officials and the tax industry have made significant progress in slowing tax-related identity theft, cyber-criminals are using more sophisticated tactics to try to steal even more data that will allow them to impersonate taxpayers.
The bottom line, this filing season and year round, is to be careful out there. And be especially suspicious of any effort from any source and by any method to get personal and financial information from you.
You also might find these items of interest:
4 tax cyber security tips from IRS, NY tax officials
Phishing criminals pose as potential tax clients to infiltrate tax preparers' systems and steal data
5 ways to protect your identity (& money!) during National Tax Security Awareness Week (& year-round!)
from Tax News By Christopher http://feedproxy.google.com/~r/DontMessWithTaxes/~3/ehadmbm3-34/fake-ceo-tax-scam-is-back.html
terabitweb · 5 years ago
Original Post from SC Magazine Author: Victor M. Thomas
It starts out innocuously enough when an important-looking email comes in to a company employee. The sender’s email address is that of the company’s CEO, claiming that a payment needs to be made to a client or vendor immediately.
The email, which contains some sense of urgency, tells the employee to wire transfer an amount of money, perhaps $50,000 or more, to a specific company or bank account. The reasons vary but follow a common theme: A vendor has a new bank account and prior payments to that vendor failed. The company is “late” on its payments and a purchase needs to be made for necessary products or services. Whatever the purpose, the CEO does not have the time to go through normal check-request procedures and requires a quick response.
Often these requests are made when the CEO is out of town (the CEO’s or company’s own social media accounts might have mentioned he or she is at a conference or traveling on business — attackers have a lot of ways to determine when an executive is traveling) and confirmation might be difficult. So, in response to an email that looks like it comes from the CEO, the company employee immediately processes the check request and sends the wire transfer. The underlying concern for the employee is that if they do not process the request, their job could be in danger.
Poof. A relatively untraceable wire payment was just made to cyberthieves who just pulled off a quick scam by playing on the emotions, worries and goodwill of an unsuspecting company employee. The company was just victimized by a CEO fraud email attack, also known in law enforcement circles as a business email compromise (BEC) attack.
It could never happen to us in our business, say many executives. Hogwash.
It can and it does happen every day and it likely will continue to happen inside businesses for as long as cyberthieves play their emotion-throttled games with unsuspecting victims within companies where adequate training, policies, and procedures are lacking.
The FBI has been tracking these kinds of business email fraud attacks since 2013 and reports that companies have been victimized in every state and in more than 100 countries around the world, according to the agency. These crimes have happened to nonprofits, Fortune 500 corporations, churches, school systems and other businesses.
The global losses in 2018 alone are expected to exceed $9 billion from these crimes, according to a recent analysis from one cybersecurity vendor. That is up from $5 billion in such losses that were predicted by the FBI for 2017, and nearly triple the estimated $3.1 billion in global losses that were seen in 2016.
So, what is the root of the problem and how can it be curtailed or stopped?
“This is not a technology attack; it’s a psychological attack,” says Lance Spitzner, director of SANS security awareness at the SANS Institute, a security research and education group. The methods for stopping the attacks remain the same as they have since they began, says Spitzner: Start by training employees to view all suspicious emails, especially those with a rushed or emergency tone and unusual requests, as fake emails that are trying to steal money from the company.
Essentially, he says, employees need to be taught about the clues and indicators that point to email fraud attacks and then to always follow established procedures in response, such as verbally check with the CEO or other senior staffer to confirm that they sent the request.
Lance Spitzner, director, SANS security awareness, SANS Institute
While this type of attack is often called “CEO Fraud,” it could refer to any senior executive who is being impersonated by the attacker in order to get a lower-level staffer to take a specific action. Sometimes the action itself is not sending money; it could be a request to unlock a door that is normally locked (creating a physical breach vulnerability) or perhaps sending employees’ personal information, such as W2 tax documents or pay stubs, to a non-company email address in order to steal employees’ identities.
The employees must be trained carefully not to give in to emotions under stress when the resourceful and convincing thieves try to get them to respond by sending money, no matter what the threats or pleas are from the attackers, says Spitzner. “Their level of commitment to withstand the attacks rivals that of the guys who hold nuclear codes,” he says.
Establish codes
Clear policies and procedures are necessary for employees to use in order to confirm a request that seems unusual or that triggers pre-determined policy alarms, experts agree. However, for these policies and procedures to be effective, it is essential that the senior executives who might be spoofed in the malicious emails — the CEO, president, CFO or other senior executives — agree to respond when an employee is doing their due diligence and asking the executive to confirm a request made by email or text message, says Joseph Blankenship, principal analyst at Cambridge, Mass.-based Forrester Research. Companies must foster a work environment where no worker will be criticized, hassled or challenged for inquiring about such messages.
“People are often scared to challenge the CEO” by making such direct inquiries, which is what the cybercriminals hope will occur, he says.
One way to battle attackers is to establish clear and concise code words or phrases that can be used by the real CEO or other senior executive to authenticate his or her identity in an emergency. If the established code words are not known and repeated exactly by the attackers, then the employee can have a strong indication the email request is fake and they can reject it without concern about being fired for not following orders, says Christian Christiansen, an IT security analyst with Hurwitz & Associates of Needham, Mass.
Christian Christiansen, IT security analyst, Hurwitz & Associates
“It seems like CEO fraud is just the phishing attack that keeps on taking via wire fraud,” says Christiansen. “There are many solutions, even some that are tech-free, but people seem to mistakenly continue trusting email.”
That is where using secret codes, such as a few words in a pattern or specific statements about any topics that are known only to the real CEO and their employees, can be particularly effective to authenticate an email sender, he says. Also important are creating and maintaining financial transaction procedures that say that no wire transfers can be initiated solely by one person, regardless of who that single individual is. Instead, controls should be added so that all such transfers require a second or third person to authorize them over a certain amount, or if the money is being sent outside the United States, says Christiansen.
Similar controls should also be placed on corporate credit cards to prevent employees from having to be placed in these situations where they must make judgment calls during such attacks, he says.
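The dual-authorization controls described above can be modeled in a few lines. A sketch only; the threshold, field names, and country rule are invented for illustration and would come from company policy in practice:

```python
APPROVAL_THRESHOLD = 10_000  # assumed policy: above this, two approvers needed

def transfer_allowed(amount: float, destination_country: str,
                     approvers: list[str]) -> bool:
    """Toy model of a two-person wire-transfer rule."""
    distinct = set(approvers)
    if not distinct:
        return False  # no wire transfer initiated by zero approvers
    # Large or international transfers need at least two distinct approvers.
    if amount > APPROVAL_THRESHOLD or destination_country != "US":
        return len(distinct) >= 2
    return True

print(transfer_allowed(50_000, "US", ["clerk"]))         # False: one approver
print(transfer_allowed(50_000, "US", ["clerk", "cfo"]))  # True
print(transfer_allowed(500, "GB", ["clerk"]))            # False: international
print(transfer_allowed(500, "US", ["clerk"]))            # True
```

The point of such a control is that no single employee, however convinced by an urgent “CEO” email, can move a large sum alone.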
Today’s attacks feature the same hallmarks as previous incidents, with the attackers conducting a wide range of basic research on the CEO using internet searches, often revealing travel plans, hobbies, favorite sports teams and other information the attackers use to try to bluff company employees and get them to think they are the person they are pretending to be. While companies strive to provide transparency about their organizations, attackers use this data to build more effective attacks.
Elevated privileges
While employee training for scenarios like these is critical, security teams need to remember to look at the company’s email traffic carefully so they can flag or spot any suspicious behaviors, particularly involving workers who are in the accounting, accounts receivable or other sensitive departments, he says. Instead of simply accepting emails from all domains, consider blocking suspicious ones from places where your company does not do business, Christiansen says.
“[For] people who have higher levels of financial access to your systems, you want to look and monitor those people pretty closely, people with elevated levels of privilege,” says Christiansen. “Often there can be coercion by attackers, or [attackers] can buy them drinks at a bar and ask about the company and its executives.”
Attempts to compromise corporate employees do not only focus on high-level executives with access to company secrets; systems administrators with privileged access to servers are often targets because their login credentials provide attackers with access to move through systems laterally without raising red flags. A compromised email administrator’s credentials, for example, could provide access to legitimate email accounts, making CEO fraud appear that much more legitimate.
Of course, companies must ensure that other basic but often neglected procedures are conducted, such as patching all desktop and laptop computer systems and related business infrastructure to protect them from succumbing to a wide range of security vulnerabilities. While it might seem easy to point to patching as a best practice, network administrators will tell you that before patches are moved to production systems, the IT team must ensure that the patch will not break some other system software. That time between delivery of the patch and how long it takes to verify it won’t break other applications often can be the difference between identifying a vulnerability and falling victim to it.
Another recommendation is never to call the phone number provided with a suspicious message. If employees want to reach the person requesting an unusual wire transfer or other action, they only should call the individual’s authenticated phone numbers to confirm the email’s request. Otherwise, they might end up calling a phone number being used by the cyberthieves themselves as part of the scam.
Use a holistic approach
Forrester’s Blankenship recommends using a holistic approach to battling CEO fraud email attacks, including knowing and recognizing the threats, stopping or flagging suspicious messages and effectively educating employees on how to circumvent such attacks.
Email filtering is often not effective enough on its own because the attackers usually mask their exploits and make them quite difficult to detect and filter out, says Blankenship.
What email filtering can do, however, is detect known spam and commodity phishing emails that have been reported or detected by others and stop them cold, he says. “What’s missing is the ability to detect suspicious emails or make targets aware that an email or other communication may be fraudulent. Some vendors are using machine learning and artificial intelligence to detect these, but the technology isn’t perfect yet and most businesses are not employing it.”
Joseph Blankenship, principal analyst, Forrester Research
Ultimately, because the known detection methods today are not foolproof, it is up to the email’s recipient to decide if a suspicious email is fraudulent or not, he adds. That can create its own conundrum: “Smart attackers will research their targets ahead of time and will work to gain trust before actually asking the target user to do something.”
To fight clever attackers, recipients must verify that incoming emails are real before taking any actions requested by the message, which is not easy to do during a busy and stressful work day, says Blankenship. “It’s up to security professionals to make sure their users and executives have the tools they need to defend themselves. Leaving it solely up to the user is doomed to fail.”
Depending on the size of the company and its internal IT organization, these needs can produce their own challenges because threat controls and training might not be available, he says. “Unfortunately, in a lot of these cases, these are typically mid-market or SMB companies, so they don’t have a big IT team fighting for them.”
In such cases, companies can subscribe to an ongoing security service for help, especially if they can provide real-time threat feedback, he notes. Another effective practice is to conduct regular procedural drills for employees so they can learn how to respond properly and securely to incoming “bait” emails that purport to be from the CEO or other executives.
One complication today is that since business email compromise attacks have persisted for years, plenty of data from past attacks is out on the internet and is available to be reused by today’s bad actors, says Blankenship. “All that data is floating around out there, so names and data are available. It becomes that much easier for a criminal to use that for their own means.”
Protecting company information
In the end, everything companies do to fight CEO fraud/BEC attacks is about protecting their businesses, employees and their operations, says James Pooley, a trial lawyer in Menlo Park, Calif., who specializes in trade secret and patent litigation.
Training employees to react to probing emails that come in with suspicious messages is one of the things he speaks about often with executives inside companies as they work to safeguard their IT systems.
One tactic he recommends is to set up carefully crafted protocols ahead of time so that incoming suspicious emails can be halted early in the process, says Pooley. The protocols should include specific rules about any interactions that might come directly from the company’s CEO and other high-ranking executives, such as when an executive asks for money to be sent using instructions that deviate from the norm.
Underscoring the need for code words to authenticate an instruction, Pooley says the protocols might include “you will only get messages from me on these kinds of issues with this specific password or marker that can’t come in from the outside.”
Some new data loss prevention tools are using artificial intelligence (AI) to help weed out these kinds of attacks from cybercriminals, he added. “They are using AI that analyzes the nature of the communications themselves in ways that are far more sophisticated than just looking for words that match filtering lists. AI is really the way forward.”
So, will future CEO fraud email attacks ever be completely blocked? Not likely, says Pooley. “If an outcome is affected by human behavior, you can’t 100 percent prevent errors by people. All you can do is try to react.”
The email fraud attacks “play on the fact that we are very busy and we don’t stop to question something that on its face has markers of plausibility,” says Pooley. “Life is very fast these days, including inside the corporate environment, and people need to get things done now.”
The post CEO fraud: It’s human nature appeared first on SC Media.
Source: SC Magazine. Author: Victor M. Thomas.
0 notes
trendingnewsb · 7 years ago
Text
Inside the Two Years That Shook Facebook and the World
One day in late February of 2016, Mark Zuckerberg sent a memo to all of Facebook’s employees to address some troubling behavior in the ranks. His message pertained to some walls at the company’s Menlo Park headquarters where staffers are encouraged to scribble notes and signatures. On at least a couple of occasions, someone had crossed out the words “Black Lives Matter” and replaced them with “All Lives Matter.” Zuckerberg wanted whoever was responsible to cut it out.
“ ‘Black Lives Matter’ doesn’t mean other lives don’t,” he wrote. “We’ve never had rules around what people can write on our walls,” the memo went on. But “crossing out something means silencing speech, or that one person’s speech is more important than another’s.” The defacement, he said, was being investigated.
All around the country at about this time, debates about race and politics were becoming increasingly raw. Donald Trump had just won the South Carolina primary, lashed out at the Pope over immigration, and earned the enthusiastic support of David Duke. Hillary Clinton had just defeated Bernie Sanders in Nevada, only to have an activist from Black Lives Matter interrupt a speech of hers to protest racially charged statements she’d made two decades before. And on Facebook, a popular group called Blacktivist was gaining traction by blasting out messages like “American economy and power were built on forced migration and torture.”
So when Zuckerberg’s admonition circulated, a young contract employee named Benjamin Fearnow decided it might be newsworthy. He took a screenshot on his personal laptop and sent the image to a friend named Michael Nuñez, who worked at the tech-news site Gizmodo. Nuñez promptly published a brief story about Zuckerberg’s memo.
A week later, Fearnow came across something else he thought Nuñez might like to publish. In another internal communication, Facebook had invited its employees to submit potential questions to ask Zuckerberg at an all-hands meeting. One of the most up-voted questions that week was “What responsibility does Facebook have to help prevent President Trump in 2017?” Fearnow took another screenshot, this time with his phone.
Fearnow, a recent graduate of the Columbia Journalism School, worked in Facebook’s New York office on something called Trending Topics, a feed of popular news subjects that popped up when people opened Facebook. The feed was generated by an algorithm but moderated by a team of about 25 people with backgrounds in journalism. If the word “Trump” was trending, as it often was, they used their news judgment to identify which bit of news about the candidate was most important. If The Onion or a hoax site published a spoof that went viral, they had to keep that out. If something like a mass shooting happened, and Facebook’s algorithm was slow to pick up on it, they would inject a story about it into the feed.
Facebook prides itself on being a place where people love to work. But Fearnow and his team weren’t the happiest lot. They were contract employees hired through a company called BCforward, and every day was full of little reminders that they weren’t really part of Facebook. Plus, the young journalists knew their jobs were doomed from the start. Tech companies, for the most part, prefer to have as little as possible done by humans—because, it’s often said, they don’t scale. You can’t hire a billion of them, and they prove meddlesome in ways that algorithms don’t. They need bathroom breaks and health insurance, and the most annoying of them sometimes talk to the press. Eventually, everyone assumed, Facebook’s algorithms would be good enough to run the whole project, and the people on Fearnow’s team—who served partly to train those algorithms—would be expendable.
The day after Fearnow took that second screenshot was a Friday. When he woke up after sleeping in, he noticed that he had about 30 meeting notifications from Facebook on his phone. When he replied to say it was his day off, he recalls, he was nonetheless asked to be available in 10 minutes. Soon he was on a videoconference with three Facebook employees, including Sonya Ahuja, the company’s head of investigations. According to his recounting of the meeting, she asked him if he had been in touch with Nuñez. He denied that he had been. Then she told him that she had their messages on Gchat, which Fearnow had assumed weren’t accessible to Facebook. He was fired. “Please shut your laptop and don’t reopen it,” she instructed him.
That same day, Ahuja had another conversation with a second employee at Trending Topics named Ryan Villarreal. Several years before, he and Fearnow had shared an apartment with Nuñez. Villarreal said he hadn’t taken any screenshots, and he certainly hadn’t leaked them. But he had clicked “like” on the story about Black Lives Matter, and he was friends with Nuñez on Facebook. “Do you think leaks are bad?” Ahuja demanded to know, according to Villarreal. He was fired too. The last he heard from his employer was in a letter from BCforward. The company had given him $15 to cover expenses, and it wanted the money back.
The firing of Fearnow and Villarreal set the Trending Topics team on edge—and Nuñez kept digging for dirt. He soon published a story about the internal poll showing Facebookers’ interest in fending off Trump. Then, in early May, he published an article based on conversations with yet a third former Trending Topics employee, under the blaring headline “Former Facebook Workers: We Routinely Suppressed Conservative News.” The piece suggested that Facebook’s Trending team worked like a Fox News fever dream, with a bunch of biased curators “injecting” liberal stories and “blacklisting” conservative ones. Within a few hours the piece popped onto half a dozen highly trafficked tech and politics websites, including Drudge Report and Breitbart News.
The post went viral, but the ensuing battle over Trending Topics did more than just dominate a few news cycles. In ways that are only fully visible now, it set the stage for the most tumultuous two years of Facebook’s existence—triggering a chain of events that would distract and confuse the company while larger disasters began to engulf it.
This is the story of those two years, as they played out inside and around the company. WIRED spoke with 51 current or former Facebook employees for this article, many of whom did not want their names used, for reasons anyone familiar with the story of Fearnow and Villarreal would surely understand. (One current employee asked that a WIRED reporter turn off his phone so the company would have a harder time tracking whether it had been near the phones of anyone from Facebook.)
The stories varied, but most people told the same basic tale: of a company, and a CEO, whose techno-optimism has been crushed as they’ve learned the myriad ways their platform can be used for ill. Of an election that shocked Facebook, even as its fallout put the company under siege. Of a series of external threats, defensive internal calculations, and false starts that delayed Facebook’s reckoning with its impact on global affairs and its users’ minds. And—in the tale’s final chapters—of the company’s earnest attempt to redeem itself.
In that saga, Fearnow plays one of those obscure but crucial roles that history occasionally hands out. He’s the Franz Ferdinand of Facebook—or maybe he’s more like the archduke’s hapless young assassin. Either way, in the rolling disaster that has enveloped Facebook since early 2016, Fearnow’s leaks probably ought to go down as the screenshots heard round the world.
II
By now, the story of Facebook’s all-consuming growth is practically the creation myth of our information era. What began as a way to connect with your friends at Harvard became a way to connect with people at other elite schools, then at all schools, and then everywhere. After that, your Facebook login became a way to log on to other internet sites. Its Messenger app started competing with email and texting. It became the place where you told people you were safe after an earthquake. In some countries like the Philippines, it effectively is the internet.
The furious energy of this big bang emanated, in large part, from a brilliant and simple insight. Humans are social animals. But the internet is a cesspool. That scares people away from identifying themselves and putting personal details online. Solve that problem—make people feel safe to post—and they will share obsessively. Make the resulting database of privately shared information and personal connections available to advertisers, and that platform will become one of the most important media technologies of the early 21st century.
But as powerful as that original insight was, Facebook’s expansion has also been driven by sheer brawn. Zuckerberg has been a determined, even ruthless, steward of the company’s manifest destiny, with an uncanny knack for placing the right bets. In the company’s early days, “move fast and break things” wasn’t just a piece of advice to his developers; it was a philosophy that served to resolve countless delicate trade-offs—many of them involving user privacy—in ways that best favored the platform’s growth. And when it comes to competitors, Zuckerberg has been relentless in either acquiring or sinking any challengers that seem to have the wind at their backs.
Facebook’s Reckoning
Two years that forced the platform to change
by Blanca Myers
March 2016
Facebook fires Benjamin Fearnow, a journalist-curator for the platform’s Trending Topics feed, after he leaks to Gizmodo.
May 2016
Gizmodo reports that Trending Topics “routinely suppressed conservative news.” The story sends Facebook scrambling.
July 2016
Rupert Murdoch tells Zuckerberg that Facebook is wreaking havoc on the news industry and threatens to cause trouble.
August 2016
Facebook cuts loose all of its Trending Topics journalists, ceding authority over the feed to engineers in Seattle.
November 2016
Donald Trump wins. Zuckerberg says it’s “pretty crazy” to think fake news on Facebook helped tip the election.
December 2016
Facebook declares war on fake news, hires CNN alum Campbell Brown to shepherd relations with the publishing industry.
September 2017
Facebook announces that a Russian group paid $100,000 for roughly 3,000 ads aimed at US voters.
October 2017
Researcher Jonathan Albright reveals that posts from six Russian propaganda accounts were shared 340 million times.
November 2017
Facebook general counsel Colin Stretch gets pummeled during congressional Intelligence Committee hearings.
January 2018
Facebook begins announcing major changes, aimed to ensure that time on the platform will be “time well spent.”
In fact, it was in besting just such a rival that Facebook came to dominate how we discover and consume news. Back in 2012, the most exciting social network for distributing news online wasn’t Facebook, it was Twitter. The latter’s 140-character posts accelerated the speed at which news could spread, allowing its influence in the news industry to grow much faster than Facebook’s. “Twitter was this massive, massive threat,” says a former Facebook executive heavily involved in the decisionmaking at the time.
So Zuckerberg pursued a strategy he has often deployed against competitors he cannot buy: He copied, then crushed. He adjusted Facebook’s News Feed to fully incorporate news (despite its name, the feed was originally tilted toward personal news) and adjusted the product so that it showed author bylines and headlines. Then Facebook’s emissaries fanned out to talk with journalists and explain how to best reach readers through the platform. By the end of 2013, Facebook had doubled its share of traffic to news sites and had started to push Twitter into a decline. By the middle of 2015, it had surpassed Google as the leader in referring readers to publisher sites and was now referring 13 times as many readers to news publishers as Twitter. That year, Facebook launched Instant Articles, offering publishers the chance to publish directly on the platform. Posts would load faster and look sharper if they agreed, but the publishers would give up an element of control over the content. The publishing industry, which had been reeling for years, largely assented. Facebook now effectively owned the news. “If you could reproduce Twitter inside of Facebook, why would you go to Twitter?” says the former executive. “What they are doing to Snapchat now, they did to Twitter back then.”
It appears that Facebook did not, however, carefully think through the implications of becoming the dominant force in the news industry. Everyone in management cared about quality and accuracy, and they had set up rules, for example, to eliminate pornography and protect copyright. But Facebook hired few journalists and spent little time discussing the big questions that bedevil the media industry. What is fair? What is a fact? How do you signal the difference between news, analysis, satire, and opinion? Facebook has long seemed to think it has immunity from those debates because it is just a technology company—one that has built a “platform for all ideas.”
This notion that Facebook is an open, neutral platform is almost like a religious tenet inside the company. When new recruits come in, they are treated to an orientation lecture by Chris Cox, the company’s chief product officer, who tells them Facebook is an entirely new communications platform for the 21st century, as the telephone was for the 20th. But if anyone inside Facebook is unconvinced by religion, there is also Section 230 of the 1996 Communications Decency Act to recommend the idea. This is the section of US law that shelters internet intermediaries from liability for the content their users post. If Facebook were to start creating or editing content on its platform, it would risk losing that immunity—and it’s hard to imagine how Facebook could exist if it were liable for the many billion pieces of content a day that users post on its site.
And so, because of the company’s self-image, as well as its fear of regulation, Facebook tried never to favor one kind of news content over another. But neutrality is a choice in itself. For instance, Facebook decided to present every piece of content that appeared on News Feed—whether it was your dog pictures or a news story—in roughly the same way. This meant that all news stories looked roughly the same as each other, too, whether they were investigations in The Washington Post, gossip in the New York Post, or flat-out lies in the Denver Guardian, an entirely bogus newspaper. Facebook argued that this democratized information. You saw what your friends wanted you to see, not what some editor in a Times Square tower chose. But it’s hard to argue that this wasn’t an editorial decision. It may be one of the biggest ever made.
In any case, Facebook’s move into news set off yet another explosion of ways that people could connect. Now Facebook was the place where publications could connect with their readers—and also where Macedonian teenagers could connect with voters in America, and operatives in Saint Petersburg could connect with audiences of their own choosing in a way that no one at the company had ever seen before.
III
In February of 2016, just as the Trending Topics fiasco was building up steam, Roger ­McNamee became one of the first Facebook insiders to notice strange things happening on the platform. McNamee was an early investor in Facebook who had mentored Zuckerberg through two crucial decisions: to turn down Yahoo’s offer of $1 billion to acquire Facebook in 2006; and to hire a Google executive named Sheryl Sandberg in 2008 to help find a business model. McNamee was no longer in touch with Zuckerberg much, but he was still an investor, and that month he started seeing things related to the Bernie Sanders campaign that worried him. “I’m observing memes ostensibly coming out of a Facebook group associated with the Sanders campaign that couldn’t possibly have been from the Sanders campaign,” he recalls, “and yet they were organized and spreading in such a way that suggested somebody had a budget. And I’m sitting there thinking, ‘That’s really weird. I mean, that’s not good.’ ”
But McNamee didn’t say anything to anyone at Facebook—at least not yet. And the company itself was not picking up on any such worrying signals, save for one blip on its radar: In early 2016, its security team noticed an uptick in Russian actors attempting to steal the credentials of journalists and public figures. Facebook reported this to the FBI. But the company says it never heard back from the government, and that was that.
Instead, Facebook spent the spring of 2016 very busily fending off accusations that it might influence the elections in a completely different way. When Gizmodo published its story about political bias on the Trending Topics team in May, the article went off like a bomb in Menlo Park. It quickly reached millions of readers and, in a delicious irony, appeared in the Trending Topics module itself. But the bad press wasn’t what really rattled Facebook—it was the letter from John Thune, a Republican US senator from South Dakota, that followed the story’s publication. Thune chairs the Senate Commerce Committee, which in turn oversees the Federal Trade Commission, an agency that has been especially active in investigating Facebook. The senator wanted Facebook’s answers to the allegations of bias, and he wanted them promptly.
The Thune letter put Facebook on high alert. The company promptly dispatched senior Washington staffers to meet with Thune’s team. Then it sent him a 12-page single-spaced letter explaining that it had conducted a thorough review of Trending Topics and determined that the allegations in the Gizmodo story were largely false.
Facebook decided, too, that it had to extend an olive branch to the entire American right wing, much of which was raging about the company’s supposed perfidy. And so, just over a week after the story ran, Facebook scrambled to invite a group of 17 prominent Republicans out to Menlo Park. The list included television hosts, radio stars, think tankers, and an adviser to the Trump campaign. The point was partly to get feedback. But more than that, the company wanted to make a show of apologizing for its sins, lifting up the back of its shirt, and asking for the lash.
According to a Facebook employee involved in planning the meeting, part of the goal was to bring in a group of conservatives who were certain to fight with one another. They made sure to have libertarians who wouldn’t want to regulate the platform and partisans who would. Another goal, according to the employee, was to make sure the attendees were “bored to death” by a technical presentation after Zuckerberg and Sandberg had addressed the group.
The power went out, and the room got uncomfortably hot. But otherwise the meeting went according to plan. The guests did indeed fight, and they failed to unify in a way that was either threatening or coherent. Some wanted the company to set hiring quotas for conservative employees; others thought that idea was nuts. As often happens when outsiders meet with Facebook, people used the time to try to figure out how they could get more followers for their own pages.
Afterward, Glenn Beck, one of the invitees, wrote an essay about the meeting, praising Zuckerberg. “I asked him if Facebook, now or in the future, would be an open platform for the sharing of all ideas or a curator of content,” Beck wrote. “Without hesitation, with clarity and boldness, Mark said there is only one Facebook and one path forward: ‘We are an open platform.’”
Inside Facebook itself, the backlash around Trending Topics did inspire some genuine soul-searching. But none of it got very far. A quiet internal project, codenamed Hudson, cropped up around this time to determine, according to someone who worked on it, whether News Feed should be modified to better deal with some of the most complex issues facing the product. Does it favor posts that make people angry? Does it favor simple or even false ideas over complex and true ones? Those are hard questions, and the company didn’t have answers to them yet. Ultimately, in late June, Facebook announced a modest change: The algorithm would be revised to favor posts from friends and family. At the same time, Adam Mosseri, Facebook’s News Feed boss, posted a manifesto titled “Building a Better News Feed for You.” People inside Facebook spoke of it as a document roughly resembling the Magna Carta; the company had never spoken before about how News Feed really worked. To outsiders, though, the document came across as boilerplate. It said roughly what you’d expect: that the company was opposed to clickbait but that it wasn’t in the business of favoring certain kinds of viewpoints.
The most important consequence of the Trending Topics controversy, according to nearly a dozen former and current employees, was that Facebook became wary of doing anything that might look like stifling conservative news. It had burned its fingers once and didn’t want to do it again. And so a summer of deeply partisan rancor and calumny began with Facebook eager to stay out of the fray.
IV
Shortly after Mosseri published his guide to News Feed values, Zuckerberg traveled to Sun Valley, Idaho, for an annual conference hosted by billionaire Herb Allen, where moguls in short sleeves and sunglasses cavort and make plans to buy each other’s companies. But Rupert Murdoch broke the mood in a meeting that took place inside his villa. According to numerous accounts of the conversation, Murdoch and Robert Thomson, the CEO of News Corp, explained to Zuckerberg that they had long been unhappy with Facebook and Google. The two tech giants had taken nearly the entire digital ad market and become an existential threat to serious journalism. According to people familiar with the conversation, the two News Corp leaders accused Facebook of making dramatic changes to its core algorithm without adequately consulting its media partners, wreaking havoc according to Zuckerberg’s whims. If Facebook didn’t start offering a better deal to the publishing industry, Thomson and Murdoch conveyed in stark terms, Zuckerberg could expect News Corp executives to become much more public in their denunciations and much more open in their lobbying. They had helped to make things very hard for Google in Europe. And they could do the same for Facebook in the US.
Facebook thought that News Corp was threatening to push for a government antitrust investigation or maybe an inquiry into whether the company deserved its protection from liability as a neutral platform. Inside Facebook, executives believed Murdoch might use his papers and TV stations to amplify critiques of the company. News Corp says that was not at all the case; the company threatened to deploy executives, but not its journalists.
Zuckerberg had reason to take the meeting especially seriously, according to a former Facebook executive, because he had firsthand knowledge of Murdoch’s skill in the dark arts. Back in 2007, Facebook had come under criticism from 49 state attorneys general for failing to protect young Facebook users from sexual predators and inappropriate content. Concerned parents had written to Connecticut attorney general Richard Blumenthal, who opened an investigation, and to The New York Times, which published a story. But according to a former Facebook executive in a position to know, the company believed that many of the Facebook accounts and the predatory behavior the letters referenced were fakes, traceable to News Corp lawyers or others working for Murdoch, who owned Facebook’s biggest competitor, MySpace. “We traced the creation of the Facebook accounts to IP addresses at the Apple store a block away from the MySpace offices in Santa Monica,” the executive says. “Facebook then traced interactions with those accounts to News Corp lawyers. When it comes to Facebook, Murdoch has been playing every angle he can for a long time.” (Both News Corp and its spinoff 21st Century Fox declined to comment.)
When Zuckerberg returned from Sun Valley, he told his employees that things had to change. They still weren’t in the news business, but they had to make sure there would be a news business. And they had to communicate better. One of those who got a new to-do list was Andrew Anker, a product manager who’d arrived at Facebook in 2015 after a career in journalism (including a long stint at WIRED in the ’90s). One of his jobs was to help the company think through how publishers could make money on the platform. Shortly after Sun Valley, Anker met with Zuckerberg and asked to hire 60 new people to work on partnerships with the news industry. Before the meeting ended, the request was approved.
But having more people out talking to publishers just drove home how hard it would be to resolve the financial problems Murdoch wanted fixed. News outfits were spending millions to produce stories that Facebook was benefiting from, and Facebook, they felt, was giving too little back in return. Instant Articles, in particular, struck them as a Trojan horse. Publishers complained that they could make more money from stories that loaded on their own mobile web pages than on Facebook Instant. (They often did so, it turned out, in ways that short-changed advertisers, by sneaking in ads that readers were unlikely to see. Facebook didn’t let them get away with that.) Another seemingly irreconcilable difference: Outlets like Murdoch’s Wall Street Journal depended on paywalls to make money, but Instant Articles banned paywalls; Zuckerberg disapproved of them. After all, he would often ask, how exactly do walls and toll booths make the world more open and connected?
The conversations often ended at an impasse, but Facebook was at least becoming more attentive. This newfound appreciation for the concerns of journalists did not, however, extend to the journalists on Facebook’s own Trending Topics team. In late August, everyone on the team was told that their jobs were being eliminated. Simultaneously, authority over the algorithm shifted to a team of engineers based in Seattle. Very quickly the module started to surface lies and fiction. A headline days later read, “Fox News Exposes Traitor Megyn Kelly, Kicks Her Out For Backing Hillary.”
V
While Facebook grappled internally with what it was becoming—a company that dominated media but didn’t want to be a media company—Donald Trump’s presidential campaign staff faced no such confusion. To them Facebook’s use was obvious. Twitter was a tool for communicating directly with supporters and yelling at the media. Facebook was the way to run the most effective direct-marketing political operation in history.
In the summer of 2016, at the top of the general election campaign, Trump’s digital operation might have seemed to be at a major disadvantage. After all, Hillary Clinton’s team was flush with elite talent and got advice from Eric Schmidt, known for running ­Google. Trump’s was run by Brad Parscale, known for setting up the Eric Trump Foundation’s web page. Trump’s social media director was his former caddie. But in 2016, it turned out you didn’t need digital experience running a presidential campaign, you just needed a knack for Facebook.
Over the course of the summer, Trump’s team turned the platform into one of its primary vehicles for fund-raising. The campaign uploaded its voter files—the names, addresses, voting history, and any other information it had on potential voters—to Facebook. Then, using a tool called Lookalike Audiences, Facebook identified the broad characteristics of, say, people who had signed up for Trump newsletters or bought Trump hats. That allowed the campaign to send ads to people with similar traits. Trump would post simple messages like “This election is being rigged by the media pushing false and unsubstantiated charges, and outright lies, in order to elect Crooked Hillary!” that got hundreds of thousands of likes, comments, and shares. The money rolled in. Clinton’s wonkier messages, meanwhile, resonated less on the platform. Inside Facebook, almost everyone on the executive team wanted Clinton to win; but they knew that Trump was using the platform better. If he was the candidate for Facebook, she was the candidate for LinkedIn.
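The mechanics described here, uploading a seed list and letting the platform find people with similar traits, can be illustrated with a toy nearest-neighbor search: average the seed users' feature vectors into a centroid, then rank everyone else by cosine similarity to it. Facebook's actual Lookalike Audiences model is proprietary and far more elaborate; the features, users, and numbers below are entirely invented:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def lookalikes(seed, pool, k=2):
    """Rank users in `pool` by similarity to the centroid of the `seed` users."""
    dims = len(next(iter(seed.values())))
    centroid = [sum(vec[i] for vec in seed.values()) / len(seed) for i in range(dims)]
    ranked = sorted(pool, key=lambda name: cosine(pool[name], centroid), reverse=True)
    return ranked[:k]

# Invented features per user: [age_bucket, rural, engagement, past_donations]
seed = {"s1": [3, 1, 8, 2], "s2": [4, 1, 7, 3]}          # e.g. newsletter signups
pool = {"a": [3, 1, 7, 2], "b": [1, 0, 2, 0], "c": [4, 1, 9, 3]}
print(lookalikes(seed, pool))  # -> ['c', 'a']
```

Production systems operate on millions of users and high-dimensional learned embeddings, so they use approximate nearest-neighbor indexes rather than a full sort, but the basic seed, centroid, rank shape is similar.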
Trump’s candidacy also proved to be a wonderful tool for a new class of scammers pumping out massively viral and entirely fake stories. Through trial and error, they learned that memes praising the former host of The Apprentice got many more readers than ones praising the former secretary of state. A website called Ending the Fed proclaimed that the Pope had endorsed Trump and got almost a million comments, shares, and reactions on Facebook, according to an analysis by BuzzFeed. Other stories asserted that the former first lady had quietly been selling weapons to ISIS, and that an FBI agent suspected of leaking Clinton’s emails was found dead. Some of the posts came from hyperpartisan Americans. Some came from overseas content mills that were in it purely for the ad dollars. By the end of the campaign, the top fake stories on the platform were generating more engagement than the top real ones.
Even current Facebookers acknowledge now that they missed what should have been obvious signs of people misusing the platform. And looking back, it’s easy to put together a long list of possible explanations for the myopia in Menlo Park about fake news. Management was gun-shy because of the Trending Topics fiasco; taking action against partisan disinformation—or even identifying it as such—might have been seen as another act of political favoritism. Facebook also sold ads against the stories, and sensational garbage was good at pulling people into the platform. Employees’ bonuses can be based largely on whether Facebook hits certain growth and revenue targets, which gives people an extra incentive not to worry too much about things that are otherwise good for engagement. And then there was the ever-present issue of Section 230 of the 1996 Communications Decency Act. If the company started taking responsibility for fake news, it might have to take responsibility for a lot more. Facebook had plenty of reasons to keep its head in the sand.
Roger McNamee, however, watched carefully as the nonsense spread. First there were the fake stories pushing Bernie Sanders, then he saw ones supporting Brexit, and then helping Trump. By the end of the summer, he had resolved to write an op-ed about the problems on the platform. But he never ran it. “The idea was, look, these are my friends. I really want to help them.” And so on a Sunday evening, nine days before the 2016 election, McNamee emailed a 1,000-word letter to Sandberg and Zuckerberg. “I am really sad about Facebook,” it began. “I got involved with the company more than a decade ago and have taken great pride and joy in the company’s success … until the past few months. Now I am disappointed. I am embarrassed. I am ashamed.”
Illustration: Eddie Guy
VI
It’s not easy to recognize that the machine you’ve built to bring people together is being used to tear them apart, and Mark Zuckerberg’s initial reaction to Trump’s victory, and Facebook’s possible role in it, was one of peevish dismissal. Executives remember panic the first few days, with the leadership team scurrying back and forth between Zuckerberg’s conference room (called the Aquarium) and Sandberg’s (called Only Good News), trying to figure out what had just happened and whether they would be blamed. Then, at a conference two days after the election, Zuckerberg argued that filter bubbles are worse offline than on Facebook and that social media hardly influences how people vote. “The idea that fake news on Facebook—of which, you know, it’s a very small amount of the content—influenced the election in any way, I think, is a pretty crazy idea,” he said.
Zuckerberg declined to be interviewed for this article, but people who know him well say he likes to form his opinions from data. And in this case he wasn’t without it. Before the interview, his staff had worked up a back-of-the-envelope calculation showing that fake news was a tiny percentage of the total amount of election-related content on the platform. But the analysis was just an aggregate look at the percentage of clearly fake stories that appeared across all of Facebook. It didn’t measure their influence or the way fake news affected specific groups. It was a number, but not a particularly meaningful one.
Zuckerberg’s comments did not go over well, even inside Facebook. They seemed clueless and self-absorbed. “What he said was incredibly damaging,” a former executive told WIRED. “We had to really flip him on that. We realized that if we didn’t, the company was going to start heading down this pariah path that Uber was on.”
A week after his “pretty crazy” comment, Zuckerberg flew to Peru to give a talk to world leaders about the ways that connecting more people to the internet, and to Facebook, could reduce global poverty. Right after he landed in Lima, he posted something of a mea culpa. He explained that Facebook did take misinformation seriously, and he presented a vague seven-point plan to tackle it. When a professor at the New School named David Carroll saw Zuckerberg’s post, he took a screenshot. Alongside it on Carroll’s feed ran a headline from a fake CNN with an image of a distressed Donald Trump and the text “DISQUALIFIED; He’s GONE!”
At the conference in Peru, Zuckerberg met with a man who knows a few things about politics: Barack Obama. Media reports portrayed the encounter as one in which the lame-duck president pulled Zuckerberg aside and gave him a “wake-up call” about fake news. But according to someone who was with them in Lima, it was Zuckerberg who called the meeting, and his agenda was merely to convince Obama that, yes, Facebook was serious about dealing with the problem. He truly wanted to thwart misinformation, he said, but it wasn’t an easy issue to solve.
Meanwhile, at Facebook, the gears churned. For the first time, insiders really began to question whether they had too much power. One employee told WIRED that, watching Zuckerberg, he was reminded of Lennie in Of Mice and Men, the farm-worker with no understanding of his own strength.
Very soon after the election, a team of employees started working on something called the News Feed Integrity Task Force, inspired by a sense, one of them told WIRED, that hyperpartisan misinformation was “a disease that’s creeping into the entire platform.” The group, which included Mosseri and Anker, began to meet every day, using whiteboards to outline different ways they could respond to the fake-news crisis. Within a few weeks the company announced it would cut off advertising revenue for ad farms and make it easier for users to flag stories they thought false.
In December the company announced that, for the first time, it would introduce fact-checking onto the platform. Facebook didn’t want to check facts itself; instead it would outsource the problem to professionals. If Facebook received enough signals that a story was false, it would automatically be sent to partners, like Snopes, for review. Then, in early January, Facebook announced that it had hired Campbell Brown, a former anchor at CNN. She immediately became the most prominent journalist hired by the company.
Soon Brown was put in charge of something called the Facebook Journalism Project. “We spun it up over the holidays, essentially,” says one person involved in discussions about the project. The aim was to demonstrate that Facebook was thinking hard about its role in the future of journalism—essentially, it was a more public and organized version of the efforts the company had begun after Murdoch’s tongue-lashing. But sheer anxiety was also part of the motivation. “After the election, because Trump won, the media put a ton of attention on fake news and just started hammering us. People started panicking and getting afraid that regulation was coming. So the team looked at what Google had been doing for years with News Lab”—a group inside Alphabet that builds tools for journalists—“and we decided to figure out how we could put together our own packaged program that shows how seriously we take the future of news.”
Facebook was reluctant, however, to issue any mea culpas or action plans with regard to the problem of filter bubbles or Facebook’s noted propensity to serve as a tool for amplifying outrage. Members of the leadership team regarded these as issues that couldn’t be solved, and maybe even shouldn’t be solved. Was Facebook really more at fault for amplifying outrage during the election than, say, Fox News or MSNBC? Sure, you could put stories into people’s feeds that contradicted their political viewpoints, but people would turn away from them, just as surely as they’d flip the dial back if their TV quietly switched them from Sean Hannity to Joy Reid. The problem, as Anker puts it, “is not Facebook. It’s humans.”
VII
Zuckerberg’s “pretty crazy” statement about fake news caught the ear of a lot of people, but one of the most influential was a security researcher named Renée DiResta. For years, she’d been studying how misinformation spreads on the platform. If you joined an antivaccine group on Facebook, she observed, the platform might suggest that you join flat-earth groups or maybe ones devoted to Pizzagate—putting you on a conveyor belt of conspiracy thinking. Zuckerberg’s statement struck her as wildly out of touch. “How can this platform say this thing?” she remembers thinking.
Roger McNamee, meanwhile, was getting steamed at Facebook’s response to his letter. Zuckerberg and Sandberg had written him back promptly, but they hadn’t said anything substantial. Instead he ended up having a months-long, ultimately futile set of email exchanges with Dan Rose, Facebook’s VP for partnerships. McNamee says Rose’s message was polite but also very firm: The company was doing a lot of good work that McNamee couldn’t see, and in any event Facebook was a platform, not a media company.
“And I’m sitting there going, ‘Guys, seriously, I don’t think that’s how it works,’” McNamee says. “You can assert till you’re blue in the face that you’re a platform, but if your users take a different point of view, it doesn’t matter what you assert.”
As the saying goes, heaven has no rage like love to hatred turned, and McNamee’s concern soon became a cause—and the beginning of an alliance. In April 2017 he connected with a former Google design ethicist named Tristan Harris when they appeared together on Bloomberg TV. Harris had by then gained a national reputation as the conscience of Silicon Valley. He had been profiled on 60 Minutes and in The Atlantic, and he spoke eloquently about the subtle tricks that social media companies use to foster an addiction to their services. “They can amplify the worst aspects of human nature,” Harris told WIRED this past December. After the TV appearance, McNamee says he called Harris up and asked, “Dude, do you need a wingman?”
The next month, DiResta published an article comparing purveyors of disinformation on social media to manipulative high-frequency traders in financial markets. “Social networks enable malicious actors to operate at platform scale, because they were designed for fast information flows and virality,” she wrote. Bots and sock puppets could cheaply “create the illusion of a mass groundswell of grassroots activity,” in much the same way that early, now-illegal trading algorithms could spoof demand for a stock. Harris read the article, was impressed, and emailed her.
The three were soon out talking to anyone who would listen about Facebook’s poisonous effects on American democracy. And before long they found receptive audiences in the media and Congress—groups with their own mounting grievances against the social media giant.
VIII
Even at the best of times, meetings between Facebook and media executives can feel like unhappy family gatherings. The two sides are inextricably bound together, but they don’t like each other all that much. News executives resent that Facebook and Google have captured roughly three-quarters of the digital ad business, leaving the media industry and other platforms, like Twitter, to fight over scraps. Plus they feel like the preferences of Facebook’s algorithm have pushed the industry to publish ever-dumber stories. For years, The New York Times resented that Facebook helped elevate BuzzFeed; now BuzzFeed is angry about being displaced by clickbait.
And then there’s the simple, deep fear and mistrust that Facebook inspires. Every publisher knows that, at best, they are sharecroppers on Facebook’s massive industrial farm. The social network is roughly 200 times more valuable than the Times. And journalists know that the man who owns the farm has the leverage. If Facebook wanted to, it could quietly turn any number of dials that would harm a publisher—by manipulating its traffic, its ad network, or its readers.
Emissaries from Facebook, for their part, find it tiresome to be lectured by people who can’t tell an algorithm from an API. They also know that Facebook didn’t win the digital ad market through luck: It built a better ad product. And in their darkest moments, they wonder: What’s the point? News makes up only about 5 percent of the total content that people see on Facebook globally. The company could let it all go and its shareholders would scarcely notice. And there’s another, deeper problem: Mark Zuckerberg, according to people who know him, prefers to think about the future. He’s less interested in the news industry’s problems right now; he’s interested in the problems five or 20 years from now. The editors of major media companies, on the other hand, are worried about their next quarter—maybe even their next phone call. When they bring lunch back to their desks, they know not to buy green bananas.
This mutual wariness—sharpened almost to enmity in the wake of the election—did not make life easy for Campbell Brown when she started her new job running the nascent Facebook Journalism Project. The first item on her to-do list was to head out on yet another Facebook listening tour with editors and publishers. One editor describes a fairly typical meeting: Brown and Chris Cox, Facebook’s chief product officer, invited a group of media leaders to gather in late January 2017 at Brown’s apartment in Manhattan. Cox, a quiet, suave man, sometimes referred to as “the Ryan Gosling of Facebook Product,” took the brunt of the ensuing abuse. “Basically, a bunch of us just laid into him about how Facebook was destroying journalism, and he graciously absorbed it,” the editor says. “He didn’t much try to defend them. I think the point was really to show up and seem to be listening.” Other meetings were even more tense, with the occasional comment from journalists noting their interest in digital antitrust issues.
As bruising as all this was, Brown’s team became more confident that their efforts were valued within the company when Zuckerberg published a 5,700-word corporate manifesto in February. He had spent the previous three months, according to people who know him, contemplating whether he had created something that did more harm than good. “Are we building the world we all want?” he asked at the beginning of his post, implying that the answer was an obvious no. Amid sweeping remarks about “building a global community,” he emphasized the need to keep people informed and to knock out false news and clickbait. Brown and others at Facebook saw the manifesto as a sign that Zuckerberg understood the company’s profound civic responsibilities. Others saw the document as blandly grandiose, showcasing Zuckerberg’s tendency to suggest that the answer to nearly any problem is for people to use Facebook more.
Shortly after issuing the manifesto, Zuckerberg set off on a carefully scripted listening tour of the country. He began popping into candy shops and dining rooms in red states, camera crew and personal social media team in tow. He wrote an earnest post about what he was learning, and he deflected questions about whether his real goal was to become president. It seemed like a well-meaning effort to win friends for Facebook. But it soon became clear that Facebook’s biggest problems emanated from places farther away than Ohio.
IX
One of the many things Zuckerberg seemed not to grasp when he wrote his manifesto was that his platform had empowered an enemy far more sophisticated than Macedonian teenagers and assorted low-rent purveyors of bull. As 2017 wore on, however, the company began to realize it had been attacked by a foreign influence operation. “I would draw a real distinction between fake news and the Russia stuff,” says an executive who worked on the company’s response to both. “With the latter there was a moment where everyone said ‘Oh, holy shit, this is like a national security situation.’”
That holy shit moment, though, didn’t come until more than six months after the election. Early in the campaign season, Facebook was aware of familiar attacks emanating from known Russian hackers, such as the group APT28, which is believed to be affiliated with Moscow. They were hacking into accounts outside of Facebook, stealing documents, then creating fake Facebook accounts under the banner of DCLeaks, to get people to discuss what they’d stolen. The company saw no signs of a serious, concerted foreign propaganda campaign, but it also didn’t think to look for one.
During the spring of 2017, the company’s security team began preparing a report about how Russian and other foreign intelligence operations had used the platform. One of its authors was Alex Stamos, head of Facebook’s security team. Stamos was something of an icon in the tech world for having reportedly resigned from his previous job at Yahoo after a conflict over whether to grant a US intelligence agency access to Yahoo servers. According to two people with direct knowledge of the document, he was eager to publish a detailed, specific analysis of what the company had found. But members of the policy and communications team pushed back and cut his report way down. Sources close to the security team suggest the company didn’t want to get caught up in the political whirlwind of the moment. (Sources on the politics and communications teams insist they edited the report down, just because the darn thing was hard to read.)
On April 27, 2017, the day after the Senate announced it was calling then FBI director James Comey to testify about the Russia investigation, Stamos’ report came out. It was titled “Information Operations and Facebook,” and it gave a careful step-by-step explanation of how a foreign adversary could use Facebook to manipulate people. But there were few specific examples or details, and there was no direct mention of Russia. It felt bland and cautious. As Renée DiResta says, “I remember seeing the report come out and thinking, ‘Oh, goodness, is this the best they could do in six months?’”
One month later, a story in Time suggested to Stamos’ team that they might have missed something in their analysis. The article quoted an unnamed senior intelligence official saying that Russian operatives had bought ads on Facebook to target Americans with propaganda. Around the same time, the security team also picked up hints from congressional investigators that made them think an intelligence agency was indeed looking into Russian Facebook ads. Caught off guard, the team members started to dig into the company’s archival ads data themselves.
Eventually, by sorting transactions according to a series of data points—Were ads purchased in rubles? Were they purchased within browsers whose language was set to Russian?—they were able to find a cluster of accounts, funded by a shadowy Russian group called the Internet Research Agency, that had been designed to manipulate political opinion in America. There was, for example, a page called Heart of Texas, which pushed for the secession of the Lone Star State. And there was Blacktivist, which pushed stories about police brutality against black men and women and had more followers than the verified Black Lives Matter page.
Numerous security researchers express consternation that it took Facebook so long to realize how the Russian troll farm was exploiting the platform. After all, the group was well known to Facebook. Executives at the company say they’re embarrassed by how long it took them to find the fake accounts, but they point out that they were never given help by US intelligence agencies. A staffer on the Senate Intelligence Committee likewise voiced exasperation with the company. “It seemed obvious that it was a tactic the Russians would exploit,” the staffer says.
When Facebook finally did find the Russian propaganda on its platform, the discovery set off a crisis, a scramble, and a great deal of confusion. First, due to a miscalculation, word initially spread through the company that the Russian group had spent millions of dollars on ads, when the actual total was in the low six figures. Once that error was resolved, a disagreement broke out over how much to reveal, and to whom. The company could release the data about the ads to the public, release everything to Congress, or release nothing. Much of the argument hinged on questions of user privacy. Members of the security team worried that the legal process involved in handing over private user data, even if it belonged to a Russian troll farm, would open the door for governments to seize data from other Facebook users later on. “There was a real debate internally,” says one executive. “Should we just say ‘Fuck it’ and not worry?” But eventually the company decided it would be crazy to throw legal caution to the wind “just because Rachel Maddow wanted us to.”
Ultimately, a blog post appeared under Stamos’ name in early September announcing that, as far as the company could tell, the Russians had paid Facebook $100,000 for roughly 3,000 ads aimed at influencing American politics around the time of the 2016 election. Every sentence in the post seemed to downplay the substance of these new revelations: The number of ads was small, the expense was small. And Facebook wasn’t going to release them. The public wouldn’t know what they looked like or what they were really aimed at doing.
This didn’t sit at all well with DiResta. She had long felt that Facebook was insufficiently forthcoming, and now it seemed to be flat-out stonewalling. “That was when it went from incompetence to malice,” she says. A couple of weeks later, while waiting at a Walgreens to pick up a prescription for one of her kids, she got a call from a researcher at the Tow Center for Digital Journalism named Jonathan Albright. He had been mapping ecosystems of misinformation since the election, and he had some excellent news. “I found this thing,” he said. Albright had started digging into CrowdTangle, one of the analytics platforms that Facebook uses. And he had discovered that the data from six of the accounts Facebook had shut down were still there, frozen in a state of suspended animation. There were the posts pushing for Texas secession and playing on racial antipathy. And then there were political posts, like one that referred to Clinton as “that murderous anti-American traitor Killary.” Right before the election, the Blacktivist account urged its supporters to stay away from Clinton and instead vote for Jill Stein. Albright downloaded the most recent 500 posts from each of the six groups. He reported that, in total, their posts had been shared more than 340 million times.
Illustration: Eddie Guy
X
To McNamee, the way the Russians used the platform was neither a surprise nor an anomaly. “They find 100 or 1,000 people who are angry and afraid and then use Facebook’s tools to advertise to get people into groups,” he says. “That’s exactly how Facebook was designed to be used.”
McNamee and Harris had first traveled to DC for a day in July to meet with members of Congress. Then, in September, they were joined by DiResta and began spending all their free time counseling senators, representatives, and members of their staffs. The House and Senate Intelligence Committees were about to hold hearings on Russia’s use of social media to interfere in the US election, and McNamee, Harris, and DiResta were helping them prepare. One of the early questions they weighed in on was the matter of who should be summoned to testify. Harris recommended that the CEOs of the big tech companies be called in, to create a dramatic scene in which they all stood in a neat row swearing an oath with their right hands in the air, roughly the way tobacco executives had been forced to do a generation earlier. Ultimately, though, it was determined that the general counsels of the three companies—Facebook, Twitter, and Google—should head into the lion’s den.
And so on November 1, Colin Stretch arrived from Facebook to be pummeled. During the hearings themselves, DiResta was sitting on her bed in San Francisco, watching them with her headphones on, trying not to wake up her small children. She listened to the back-and-forth in Washington while chatting on Slack with other security researchers. She watched as Marco Rubio smartly asked whether Facebook even had a policy forbidding foreign governments from running an influence campaign through the platform. The answer was no. Rhode Island senator Jack Reed then asked whether Facebook felt an obligation to individually notify all the users who had seen Russian ads that they had been deceived. The answer again was no. But maybe the most threatening comment came from Dianne Feinstein, the senior senator from Facebook’s home state. “You’ve created these platforms, and now they’re being misused, and you have to be the ones to do something about it,” she declared. “Or we will.”
After the hearings, yet another dam seemed to break, and former Facebook executives started to go public with their criticisms of the company too. On November 8, billionaire entrepreneur Sean Parker, Facebook’s first president, said he now regretted pushing Facebook so hard on the world. “I don’t know if I really understood the consequences of what I was saying,” he said.
Read more: https://www.wired.com/story/inside-facebook-mark-zuckerberg-2-years-of-hell/