# Cybersecurity AI tools
republicbusiness · 4 months ago
Text
How to navigate the world safely with AI
1 note
jcmarchi · 17 hours ago
Text
Fighting Against AI-Enabled Attacks – The Proper Defensive Strategy
New Post has been published on https://thedigitalinsider.com/fighting-against-ai-enabled-attacks-the-proper-defensive-strategy/
As AI-enabled threats proliferate, organizations must master how to prevent and defend against these types of attacks. One popular approach that is generating buzz is to use AI to defend against other, malicious AI. This is, however, only partly effective. AI can serve as a defensive shield, but only if employees have the knowledge to use it properly, and even then it is only part of the solution; depending entirely on AI as a shield is not a cure-all.
Furthermore, while it’s important to focus on how AI can help defend against AI-enabled threats, the defensive strategies of an organization should not be fully centered on AI. Instead, security leaders need to focus their teams on being consistently prepared by continually practicing their response to cyberattacks, regardless of whether AI is being leveraged to inflict harm.
Leveraging experience in these scenarios is the only proper mechanism for strengthening defenses. For example, a cybersecurity professional who has been in the field for less than a year but has learned to deal with a range of simulated AI-enabled attacks is far better positioned to mount an effective defense than someone who is unfamiliar with the intricacies of an AI-generated attack.
Simply put, you have to have seen a bad actor in action to know what it looks like. Once you have seen malicious attacks with your own eyes, they no longer blend in with regular activity and you can more easily identify attacks of different variations in the future. This specific experience of defending gives employees the skills to know how to handle attacks effectively and efficiently.
Employ the Skills Needed to Outmaneuver Malicious Actors
Organizations that focus on preparing for AI-enabled attacks and leverage AI as a component of a broader overall defense strategy will position themselves well as the threat landscape intensifies. Although access to AI-powered tools is not in itself what drives risk, workforces need to be better prepared to address threats from malicious developers who are leveraging AI technology to carry out attacks. By creating continuous opportunities to learn how to outmaneuver malicious actors, organizations will be better positioned to future-proof their cybersecurity strategy and maintain an advantage against threats.
Through a culture of continuous learning, organizations can unlock upskilling engagement by identifying existing skills to reveal gaps that need to be addressed. Leaders can be engaged in this process by understanding the skills their teams need and by promoting training as a way to improve team members’ confidence and enhance their job security.
By prioritizing skills and implementing active cybersecurity measures to defend against AI-powered threats, organizations can arm their technologists with the tools they need to stay one step ahead of threats. Traditional security roles may not be enough to successfully defend against AI-powered cyberattacks. In some cases it may be necessary to create new cybersecurity roles that are focused on threat intelligence and reverse engineering. Analyzing threat intelligence is crucial for gaining valuable insights into the methods and capabilities of malicious actors.
Know How AI is Being Leveraged to Launch Attacks
Now, more than ever before, it’s crucial to foster a cybersecurity culture that continually educates existing team members on emerging threats and recruits job candidates with previous attack-defense experience. To possess the necessary skills to mount a defense, cybersecurity teams need to be aware of the capabilities of malicious actors and how malware developers leverage AI tools to launch attacks.
By training teams on best practices for recognizing the most damaging types of attacks, such as ransomware, malware, deepfakes, and social engineering, individuals will be prepared to quickly recognize and react to an incident. In particular, the losses that businesses suffer due to ransomware can be staggering. According to Chainalysis, global ransomware payments reached a record high of $1.1 billion in 2023, nearly double the amount paid in 2022.
Identify, Assess, and Mitigate Security Weaknesses
In addition to proactive defense measures, organizations can also enhance their cybersecurity strategy through initiatives such as vulnerability management, comprehensive risk management, and clearly defined incident response measures. These steps are critical for identifying, assessing, and mitigating security weaknesses in systems, applications, and networks. In particular, incident response planning ensures that an organization is prepared to detect, respond to, and recover from a cyberattack.
When cyberattacks do occur, it’s also important to identify the source of the attack as a preventative step against future incidents. Although it can be a complex process, the steps for tracing an attack’s origin include IP address tracking, Domain Name System (DNS) analysis, and geolocation. By taking these measures, cybersecurity teams can reveal information related to the attacker’s infrastructure, narrow down the physical location from which the incident originated, and obtain clues about the attacker’s identity.
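As a rough illustration of those first steps, the following Python sketch performs the reverse-DNS portion of a lookup on a suspect IP. Geolocation is left as a placeholder comment, since it typically requires a third-party IP-intelligence service; none is assumed here.

```python
import ipaddress
import socket

def trace_origin(ip: str) -> dict:
    """Collect basic origin clues for a suspect IP address."""
    info = {"ip": ip}
    # Private/reserved addresses can't be traced externally;
    # consult internal network logs for those instead.
    if ipaddress.ip_address(ip).is_private:
        info["note"] = "private address; consult internal logs"
        return info
    # Reverse DNS (PTR lookup) can reveal the attacker's hosting
    # provider or infrastructure naming patterns.
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
        info["reverse_dns"] = hostname
    except socket.herror:
        info["reverse_dns"] = None  # no PTR record published
    # Geolocation would follow here via whichever IP-intelligence
    # service your team uses (no specific API is assumed).
    return info

print(trace_origin("8.8.8.8"))
```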
Upskill Your Workforce as the Threat Environment Intensifies
The threat environment will continue to intensify going forward, making it critical for cybersecurity teams to expand the measures needed to keep their data and networks secure. According to a report from the UK’s National Cyber Security Centre, the development of novel AI tools “will lead to an increase in cyberattacks and lower the barrier of entry for less sophisticated hackers to do digital harm.”
As the world becomes increasingly interconnected and digitized, organizations that upskill their workforce and execute the most effective cybersecurity strategies will position themselves for future success by protecting critical assets, ensuring business continuity, and mitigating risk.
0 notes
bob3160 · 17 days ago
Video (YouTube)
Tech Talk - 10-18-2024
0 notes
thisisgraeme · 4 months ago
Text
Empowering Educators: Harnessing AI in Education with ChatGPT
🔍 Curious about the impact of AI in education? Discover how ChatGPT can revolutionise your teaching practices! 📚✨ Learn practical steps to enhance student engagement, ensure data privacy, and mitigate biases. Empower your classroom with cutting-edge tech!
Embracing Technological Advancements in Education In the rapidly evolving landscape of education, staying abreast of technological advancements is essential for fostering dynamic and engaging learning environments. Artificial Intelligence (AI) stands at the forefront of these innovations, offering powerful tools that can transform educational practices. Among these tools, ChatGPT, an AI…
0 notes
techtoio · 4 months ago
Text
Exploring the Latest Trends in Software Development
Introduction: The software industry is ever-evolving, driven by new technologies and changing market needs. To remain competitive and relevant, developers must keep abreast of current trends in their field. Read on to continue…
0 notes
aitoolsa2z · 7 months ago
Text
A roundup of 18 AI-powered cybersecurity and fraud-detection tools, along with precautions you can take to protect yourself. Each tool has unique features, advantages, and considerations. Remember that staying informed and vigilant is crucial in the ever-evolving landscape of online threats.
0 notes
debajitadhikary · 11 months ago
Text
10 Most Important Technologies That IT Advisors Use to Navigate the Future of Information Technology
Introduction: In the dynamic realm of Information Technology (IT), staying ahead requires a powerful arsenal of cutting-edge technologies. IT advisors, the guiding force behind digital transformations, leverage a spectrum of tools to optimize operations, enhance security, and propel their clients into the future. In this exploration, we’ll delve into the tech landscape that…
1 note
abubaker1937 · 1 year ago
Text
🔒🕵️‍♂️ Dive into the world of cybersecurity and encryption! Our tutorial on using AESKeyFind will show you how to uncover hidden AES encryption keys in files and memory dumps. Get started on your journey to becoming a digital detective today!
Tutorial: https://medium.com/@abubakersiddique761/your-key-to-understanding-aes-encryption-59bd53ff89bf
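For readers who want to try this locally, here is a minimal sketch of driving the tool from Python. It assumes the aeskeyfind binary is installed and on your PATH, and "memory.dump" is a placeholder filename for your own memory image.

```python
import subprocess

# "memory.dump" is a placeholder; point this at your own memory image.
result = subprocess.run(
    ["aeskeyfind", "memory.dump"],
    capture_output=True, text=True, check=True,
)
# aeskeyfind prints each recovered candidate AES key on its own line.
for line in result.stdout.splitlines():
    print("candidate key:", line)
```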
0 notes
marciodpaulla-blog · 1 year ago
Text
Through the Digital Lens: How Communication Technologies are Reshaping the World of Work
As we navigate the evolving digital workspace, let's embrace technology while valuing human connection. Remember, we're not just part of this story, we're its architects. Together, we'll shape our shared digital future.
Once upon a time, in a world not too far away, nestled in the crux of the 20th century, there was an office. It was an office like any other – buzzing typewriters, clattering fax machines, and the perpetual hum of the telephone. Work was a set-piece in a monotonous diorama, the landscape of which was limited by cubicles and nine-to-five schedules. Fast forward to the 21st century, and a subtle,…
0 notes
jcmarchi · 7 days ago
Text
Fighting AI with AI in the Modern Threat Landscape
New Post has been published on https://thedigitalinsider.com/fighting-ai-with-ai-in-the-modern-threat-landscape/
It’s not exactly breaking news to say that AI has dramatically changed the cybersecurity industry. Both attackers and defenders alike are turning to artificial intelligence to uplevel their capabilities, each striving to stay one step ahead of the other. This cat-and-mouse game is nothing new—attackers have been trying to outsmart security teams for decades, after all—but the emergence of artificial intelligence has introduced a fresh (and often unpredictable) element to the dynamic. Attackers across the globe are rubbing their hands together with glee at the prospect of leveraging this new technology to develop innovative, never-before-seen attack methods.
At least, that’s the perception. But the reality is a little bit different. While it’s true that attackers are increasingly leveraging AI, they are mostly using it to increase the scale and complexity of their attacks, refining their approach to existing tactics rather than breaking new ground. The thinking here is clear: why spend the time and effort to develop the attack methods of tomorrow when defenders already struggle to stop today’s? Fortunately, modern security teams are leveraging AI capabilities of their own—many of which are helping to detect malware, phishing attempts, and other common attack tactics with greater speed and accuracy. As the “AI arms race” between attackers and defenders continues, it will be increasingly important for security teams to understand how adversaries are actually deploying the technology—and ensuring that their own efforts are focused in the right place.
How Attackers Are Leveraging AI
The idea of a semi-autonomous AI being deployed to methodically hack its way through an organization’s defenses is a scary one, but (for now) it remains firmly in the realm of William Gibson novels and other science fiction fare. It’s true that AI has advanced at an incredible rate over the past several years, but we’re still a long way off from the sort of artificial general intelligence (AGI) capable of perfectly mimicking human thought patterns and behaviors. That’s not to say today’s AI isn’t impressive—it certainly is. But generative AI tools and large language models (LLMs) are most effective at synthesizing information from existing material and generating small, iterative changes. They can’t create something entirely new on their own—but make no mistake, the ability to synthesize and iterate is incredibly useful.
In practice, this means that instead of developing new methods of attack, adversaries can instead uplevel their current ones. Using AI, an attacker might be able to send millions of phishing emails, instead of thousands. They can also use an LLM to craft a more convincing message, tricking more recipients into clicking a malicious link or downloading a malware-laden file. Tactics like phishing are effectively a numbers game: the vast majority of people won’t fall for a phishing email, but if millions of people receive it, even a 1% success rate can result in thousands of new victims. If LLMs can bump that 1% success rate up to 2% or more, scammers can effectively double the effectiveness of their attacks with little to no effort. The same goes for malware: if small tweaks to malware code can effectively camouflage it from detection tools, attackers can get far more mileage out of an individual malware program before they need to move on to something new.
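The numbers game is easy to check with back-of-the-envelope arithmetic. The volumes and rates in this sketch are illustrative assumptions, not measured data from any study.

```python
# Illustrative figures only: volumes and success rates are assumptions.
recipients_manual = 100_000   # a large manually run phishing campaign
recipients_ai = 1_000_000     # AI-assisted sending scales an order higher
base_rate = 0.01              # ~1% of recipients fall for a generic lure
llm_rate = 0.02               # a more convincing LLM-crafted lure

print(f"manual, 1%:   {int(recipients_manual * base_rate):,} victims")  # 1,000
print(f"AI-scale, 1%: {int(recipients_ai * base_rate):,} victims")      # 10,000
print(f"AI-scale, 2%: {int(recipients_ai * llm_rate):,} victims")       # 20,000
```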
The other element at play here is speed. Because AI-based attacks are not subject to human limitations, they can often conduct an entire attack sequence at a much faster rate than a human operator. That means an attacker could potentially break into a network and reach the victim’s crown jewels—their most sensitive or valuable data—before the security team even receives an alert, let alone responds to it. If attackers can move faster, they don’t need to be as careful—which means they can get away with noisier, more disruptive activities without being stopped. They aren’t necessarily doing anything new here, but by pushing forward with their attacks more quickly, they can outpace network defenses in a potentially game-changing way.
This is the key to understanding how attackers are leveraging AI. Social engineering scams and malware programs are already successful attack vectors—but now adversaries can make them even more effective, deploy them more quickly, and operate at an even greater scale. Rather than fighting off dozens of attempts per day, organizations might be fighting off hundreds, thousands, or even tens of thousands of fast-paced attacks. And if they don’t have solutions or processes in place to quickly detect those attacks, identify which represent real, tangible threats, and effectively remediate them, they are leaving themselves dangerously open to attackers. Instead of wondering how attackers might leverage AI in the future, organizations should leverage AI solutions of their own with the goal of handling existing attack methods at a greater scale.
Turning AI to Security Teams’ Advantage
Security experts at every level of both business and government are seeking out ways to leverage AI for defensive purposes. In August, the U.S. Defense Advanced Research Projects Agency (DARPA) announced the finalists for its recent AI Cyber Challenge (AIxCC), which awards prizes to security research teams working to train LLMs to identify and fix code-based vulnerabilities. The challenge is supported by major AI providers, including Google, Microsoft, and OpenAI, all of whom provide technological and financial support for these efforts to bolster AI-based security. Of course, DARPA is just one example—you can hardly shake a stick in Silicon Valley without hitting a dozen startup founders eager to tell you about their advanced new AI-based security solutions. Suffice it to say, finding new ways to leverage AI for defensive purposes is a high priority for organizations of all types and sizes.
But like attackers, security teams often find the most success when they use AI to amplify their existing capabilities. With attacks happening at an ever-increasing scale, security teams are often stretched thin—both in terms of time and resources—making it difficult to adequately identify, investigate, and remediate every security alert that pops up. There simply isn’t the time. AI solutions are playing an important role in alleviating that challenge by providing automated detection and response capabilities. If there’s one thing AI is good at, it’s identifying patterns—and that means AI tools are very good at recognizing abnormal behavior, especially if that behavior conforms to known attack patterns. Because AI can review vast amounts of data much more quickly than humans, this allows security teams to upscale their operations in a significant way. In many cases, these solutions can even automate basic remediation processes, countering low-level attacks without the need for human intervention. They can also be used to automate the process of security validation, continuously poking and prodding around network defenses to ensure they are functioning as intended.
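As a concrete illustration of the pattern-recognition point, here is a minimal sketch of unsupervised anomaly detection over synthetic login telemetry. The feature choices and the 1% contamination rate are illustrative assumptions, not a production configuration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic telemetry: [hour of day, bytes transferred, failed logins].
normal = rng.normal(loc=[13, 5_000, 0.2], scale=[3, 1_500, 0.5],
                    size=(1_000, 3))
attack = np.array([[3, 90_000, 12.0]])  # 3 a.m., huge transfer, many failures
events = np.vstack([normal, attack])

# Train an unsupervised detector; ~1% of events assumed anomalous.
model = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = model.predict(events)           # -1 marks suspected anomalies
print("flagged event rows:", np.where(flags == -1)[0])
```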
It’s also important to note that AI doesn’t just allow security teams to identify potential attack activity more quickly—it also dramatically improves their accuracy. Instead of chasing down false alarms, security teams can be confident that when an AI solution alerts them to a potential attack, it is worthy of their immediate attention. This is an element of AI that doesn’t get talked about nearly enough—while much of the discussion centers around AI “replacing” humans and taking their jobs, the reality is that AI solutions are enabling humans to do their jobs better and more efficiently, while also alleviating the burnout that comes with performing tedious and repetitive tasks. Far from having a negative impact on human operators, AI solutions are handling much of the perceived “busywork” associated with security positions, allowing humans to focus on more interesting and important tasks. At a time when burnout is at an all-time high and many businesses are struggling to attract new security talent, improving quality of life and job satisfaction can have a massive positive impact.
Therein lies the real advantage for security teams. Not only can AI solutions help them scale their operations to effectively combat attackers leveraging AI tools of their own—they can keep security professionals happier and more satisfied in their roles. That’s a rare win-win solution for everyone involved, and it should help today’s businesses recognize that the time to invest in AI-based security solutions is now.
The AI Arms Race Is Just Getting Started
The race to adopt AI solutions is on, with both attackers and defenders finding different ways to leverage the technology to their advantage. As attackers use AI to increase the speed, scale and complexity of their attacks, security teams will need to fight fire with fire, using AI tools of their own to improve the speed and accuracy of their detection and remediation capabilities. Fortunately, AI solutions are providing critical information to security teams, allowing them to better test and evaluate the efficacy of their own solutions while also freeing up time and resources for more mission-critical tasks. Make no mistake, the AI arms race is only getting started—but the fact that security professionals are already using AI to stay one step ahead of attackers is a very good sign.
0 notes
thequerydesk · 2 years ago
Text
How Is AI Used In Cyber Security?
As our world becomes more digitized, cybersecurity threats continue to grow at an alarming rate. Malware, viruses, and cyberattacks can cause significant damage to businesses, governments, and individuals. To combat these threats, many organizations are turning to artificial intelligence (AI) to help improve their cybersecurity defenses. Read More
0 notes
mostlysignssomeportents · 8 months ago
Text
Wellness surveillance makes workers unwell
I'm on tour with my new, nationally bestselling novel The Bezzle! Catch me in TORONTO on Mar 22, then with LAURA POITRAS in NYC on Mar 24, then Anaheim, and more!
"National conversation" sounds like one of those meaningless buzzphrases – until you live through one. The first one I really participated in actively was the national conversation – the global conversation – about privacy following the Snowden revelations.
This all went down when my daughter was five, and as my wife and I talked about the news, our kid naturally grew curious about it. I had to literally "explain like I'm five" global mass surveillance:
https://locusmag.com/2014/05/cory-doctorow-how-to-talk-to-your-children-about-mass-surveillance/
But parenting is a two-way street, so even as I was explaining surveillance to my kid, my own experiences raising a child changed how I thought about surveillance. Obviously I knew about many of the harms that surveillance brings, but parenting helped me viscerally appreciate one of the least-discussed, most important aspects of being watched: how it compromises being your authentic self:
https://www.theguardian.com/technology/blog/2014/may/09/cybersecurity-begins-with-integrity-not-surveillance
As I wrote then:
There are times when she is working right at the limits of her abilities – drawing or dancing or writing or singing or building – and she catches me watching her and gets this look of mingled embarrassment and exasperation, and then she changes back to some task where she has more mastery. No one – not even a small child – likes to look foolish in front of other people.
Learning, growth, and fulfillment all require a zone of privacy, a time and place where we are not observed. Far from making us accountable, continuous, fine-grained surveillance by authority figures just scares us into living a cramped, inauthentic version of ourselves, where growth is all but impossible. Others have observed the role this plays in right-wing culture war bullshit: "an armed society is a polite society" is code for "people who make me feel uncomfortable just by existing should be terrorized into hiding their authentic selves from me." The point of Don't Say Gay laws and anti-trans bills isn't to eliminate gender nonconformity – it's to drive it into hiding.
Given all this, it's no surprise that workers who face workplace surveillance in the name of "wellness" feel unwell as a result:
https://www.ifow.org/publications/what-impact-does-exposure-to-workplace-technologies-have-on-workers-quality-of-life-briefing-paper
As the Institute for the Future of Work found in its study, some technologies – systems that make it easier to collaborate and communicate with colleagues – increase workers' sense of wellbeing. But wearables and AI tools make workers feel significantly worse:
https://assets-global.website-files.com/64d5f73a7fc5e8a240310c4d/65eef23e188fb988d1f19e58_Tech%20Exposure%20and%20Worker%20Wellbeing%20-%20Full%20WP%20-%20Final.pdf
Workers who reported these negative feelings confirmed that these tools make them feel "monitored." I mean, of course they do. Even where these tools are nominally designed to help you do your job better, they're also explicitly designed to help your boss keep track of you from moment to moment. As Brandon Vigliarolo writes for The Register, these are the same bosses who have been boasting to their investors about their plans to fire their workers and replace them with AI:
https://www.theregister.com/2024/03/14/advanced_workplace_tech_study/
"Bossware" is a key example of the shitty rainbow of "disciplinary technology," tools that exist to take away human agency by making it easier to surveil and control its users:
https://pluralistic.net/2020/07/01/bossware/#bossware
Bossware is one of the stages of the Shitty Technology Adoption Curve: the process by which abusive and immiserating technologies progress up the privilege gradient as their proponents refine and normalize dystopian technologies in order to impose them on wider and wider audiences:
https://pluralistic.net/2021/02/24/gwb-rumsfeld-monsters/#bossware
The kinds of metrics that bossware gathers might be useful to workers, but only if the workers get to decide when, whether and how to share that data with other people. Microsoft Office helps you catch typos by underlining words its dictionary doesn't recognize; the cloud-based, "AI-powered" Office365 tells your boss that you're the 11th-worst speller in your division and uses "sentiment analysis" to predict whether you are likely to cause trouble:
https://pluralistic.net/2022/08/21/great-taylors-ghost/#solidarity-or-bust
Two hundred years ago, Luddites rose up against machines. Contrary to the ahistorical libel you've heard, the Luddites weren't angry or frightened of machines – they were angry at the machines' owners. They understood – correctly – that the purpose of a machine "so easy a child could use it" was to fire skilled adult workers and replace them with kidnapped, indentured Napoleonic War orphans who could be maimed and killed on the job without consequence:
https://pluralistic.net/2023/03/12/gig-work-is-the-opposite-of-steampunk/
A hundred years ago, the "Taylorites" picked up where those mill owners left off: choreographing workers' movements to the finest degree in a pseudoscientific effort to produce a kind of kabuki of boss-pleasing robotic efficiency. The new, AI-based Taylorism goes even further, allowing bosses to automatically blacklist gig workers who refuse to cross picket-lines, monitor "self-employed" call center operators in their own homes, and monitor the eyeballs of Amazon drivers:
https://pluralistic.net/2023/04/12/algorithmic-wage-discrimination/#fishers-of-men
AI-based monitoring technologies dock workers' wages, suspend them, and even fire them, and when workers object, they're stuck arguing with a chatbot that is the apotheosis of Computer Says No:
https://pluralistic.net/2024/01/11/robots-stole-my-jerb/#computer-says-no
There's plenty of research about AI successfully "augmenting" workers, making them more productive, and I'm the last person to say that automation can't help you get more done:
https://www.ibm.com/thought-leadership/institute-business-value/en-us/report/augmented-workforce
But without understanding how AI augments class warfare – disciplining workers with a scale, speed and granularity beyond the sadistic fantasies of even the most micromanaging asshole boss – this research is meaningless.
The irony of bosses imposing monitoring to improve "wellness" and stave off "burnout" is that nothing is more exhausting, more immiserating, more infuriating than being continuously watched and judged.
Name your price for 18 of my DRM-free ebooks and support the Electronic Frontier Foundation with the Humble Cory Doctorow Bundle.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/03/15/wellness-taylorism/#sick-of-spying
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
624 notes
mariacallous · 5 months ago
Text
Microsoft's CEO Satya Nadella has hailed the company's new Recall feature, which stores a history of your computer desktop and makes it available to AI for analysis, as “photographic memory” for your PC. Within the cybersecurity community, meanwhile, the notion of a tool that silently takes a screenshot of your desktop every five seconds has been decried as a hacker's dream come true and the worst product idea in recent memory.
Now, security researchers have pointed out that even the one remaining security safeguard meant to protect that feature from exploitation can be trivially defeated.
Since Recall was first announced last month, the cybersecurity world has pointed out that if a hacker can install malicious software to gain a foothold on a target machine with the feature enabled, they can quickly gain access to the user's entire history stored by the function. The only barrier, it seemed, to that high-resolution view of a victim's entire life at the keyboard was that accessing Recall's data required administrator privileges on a user's machine. That meant malware without that higher-level privilege would trigger a permission pop-up, allowing users to prevent access, and that malware would also likely be blocked by default from accessing the data on most corporate machines.
Then on Wednesday, James Forshaw, a researcher with Google's Project Zero vulnerability research team, published an update to a blog post pointing out that he had found methods for accessing Recall data without administrator privileges—essentially stripping away even that last fig leaf of protection. “No admin required ;-)” the post concluded.
“Damn,” Forshaw added on Mastodon. “I really thought the Recall database security would at least be, you know, secure.”
Forshaw's blog post described two different techniques to bypass the administrator privilege requirement, both of which exploit ways of defeating a basic security function in Windows known as access control lists that determine which elements on a computer require which privileges to read and alter. One of Forshaw's methods exploits an exception to those control lists, temporarily impersonating a program on Windows machines called AIXHost.exe that can access even restricted databases. Another is even simpler: Forshaw points out that because the Recall data stored on a machine is considered to belong to the user, a hacker with the same privileges as the user could simply rewrite the access control lists on a target machine to grant themselves access to the full database.
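To make that second technique concrete, here is an illustrative sketch (not Forshaw's actual code) of what rewriting an access control list on a user-owned file looks like, using the icacls utility built into Windows. The database path and account name are hypothetical placeholders.

```python
import subprocess

db_path = r"C:\placeholder\path\to\user-owned.db"  # hypothetical path
account = r"TARGETPC\someuser"                     # hypothetical account

# Grant full access (F) on a file the current user already owns; because
# the caller owns the object, no elevation or consent prompt is involved.
subprocess.run(["icacls", db_path, "/grant", f"{account}:(F)"], check=True)
```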
That second, simpler bypass technique “is just mindblowing, to be honest,” says Alex Hagenah, a cybersecurity strategist and ethical hacker. Hagenah recently built a proof-of-concept hacker tool called TotalRecall designed to show that someone who gained access to a victim's machine with Recall could immediately siphon out all the user's history recorded by the feature. Hagenah's tool, however, still required that hackers find another way to gain administrator privileges through a so-called “privilege escalation” technique before his tool would work.
With Forshaw's technique, “you don’t need any privilege escalation, no pop-up, nothing,” says Hagenah. “This would make sense to implement in the tool for a bad guy.”
In fact, just an hour after speaking to WIRED about Forshaw's finding, Hagenah added the simpler of Forshaw's two techniques to his TotalRecall tool, then confirmed that the trick worked by accessing all the Recall history data stored on another user's machine for which he didn't have administrator access. “So simple and genius,” he wrote in a text to WIRED after testing the technique.
That confirmation removes one of the last arguments Recall's defenders have had against criticisms that the feature acts as, essentially, a piece of pre-installed spyware on a user's machine, ready to be exploited by any hacker who can gain a foothold on the device. “It makes your security very fragile, in the sense that anyone who penetrates your computer for even a second can get your whole history,” says Dave Aitel, the founder of the cybersecurity firm Immunity and a former NSA hacker. “Which is not something people want.”
For now, security researchers have been testing Recall in preview versions of the tool ahead of its expected launch later this month. Microsoft said it plans to integrate Recall on compatible Copilot+ PCs with the feature turned on by default. WIRED reached out to the company for comment on Forshaw's findings about Recall's security issues, but the company has yet to respond.
The revelation that hackers can exploit Recall without even using a separate privilege escalation technique only contributes further to the sense that the feature was rushed to market without a proper review from the company's cybersecurity team—despite the company's CEO Nadella proclaiming just last month that Microsoft would make security its first priority in every decision going forward. “You cannot convince me that Microsoft's security teams looked at this and said ‘that looks secure,’” says Jake Williams, a former NSA hacker and now the VP of R&D at the cybersecurity consultancy Hunter Strategy, where he says he's been asked by some of the firm's clients to test Recall's security before they add Microsoft devices that use it to their networks.
“As it stands now, it’s a security dumpster fire,” Williams says. “This is one of the scariest things I’ve ever seen from an enterprise security standpoint.”
143 notes