#appsec
Explore tagged Tumblr posts
reconshell · 2 years ago
Link
5 notes · View notes
lifetechweb · 5 months ago
Text
6 tipos de testes de segurança de aplicativos que você precisa conhecer
O teste de segurança de aplicativos é um componente crítico do desenvolvimento de software moderno, garantindo que os aplicativos sejam robustos e resilientes contra ataques maliciosos. À medida que as ameaças cibernéticas continuam a evoluir em complexidade e frequência, a necessidade de integrar medidas de segurança abrangentes em todo o SDLC nunca foi tão essencial. O pentesting tradicional…
Tumblr media
View On WordPress
0 notes
jpmellojr · 5 months ago
Text
OSC&R report: 95% of organizations face severe software supply chain risk
Tumblr media
The first analysis of software supply chain security based on the Open Software Supply Chain Attack Reference (OSC&R) threat framework has been released, and the news isn't good. https://tinyurl.com/428xehca
0 notes
newcodesociety · 5 months ago
Text
0 notes
acluent · 5 months ago
Text
This article is absolutely shocking. It would be one thing if the author had just put this tripe out on his own blog, but the fact that the company published it and it's an "AppSec" company supposedly–is just shocking.
The conclusion is just priceless (humor me and lose a couple braincells reading it):
"As AI continues to revolutionize software engineering, integrating advanced security measures into every phase of the development lifecycle is crucial. Automated workflows must prioritize security to protect against emerging threats and vulnerabilities. By embedding security from the initial design through to deployment and maintenance, AI-driven development can produce not only efficient and innovative software but also secure and resilient applications.
The future of software engineering is here, and it is automated, intelligent, and secure. Ensuring that security remains at the forefront of these advancements will be the key to harnessing the full potential of AI in software development"
There are just so many things to say. The one I wish to focus on is: what "AI" tool does the author think shows the level of promise for AI writing software to merit such outlandish claims? Where is it? Because all the AI coding I have seen is just rubbish. I mean really secure software written by AI? For real? What universe? Maybe as a first start, but secure code form AI this guy is just off his rocker. Honestly, if you need a reason to avoid a company look at their blog posts. This is a great example of why you should hire qualified AppSec people and people pretending like this clown should not be allowed in the door, and certainly not allowed to write such ignorant tripe.
If they were smart they would remove the post from their site and do their best to scrub it and the evidence of it from the internet as we who made the mistake of reading–attempt to scrub its foolishness from our minds.
0 notes
jcmarchi · 7 months ago
Text
Hallucination Control: Benefits and Risks of Deploying LLMs as Part of Security Processes
New Post has been published on https://thedigitalinsider.com/hallucination-control-benefits-and-risks-of-deploying-llms-as-part-of-security-processes/
Hallucination Control: Benefits and Risks of Deploying LLMs as Part of Security Processes
Large Language Models (LLMs) trained on vast quantities of data can make security operations teams smarter. LLMs provide in-line suggestions and guidance on response, audits, posture management, and more. Most security teams are experimenting with or using LLMs to reduce manual toil in workflows. This can be both for mundane and complex tasks. 
For example, an LLM can query an employee via email if they meant to share a document that was proprietary and process the response with a recommendation for a security practitioner. An LLM can also be tasked with translating requests to look for supply chain attacks on open source modules and spinning up agents focused on specific conditions — new contributors to widely used libraries, improper code patterns — with each agent primed for that specific condition. 
That said, these powerful AI systems bear significant risks that are unlike other risks facing security teams. Models powering security LLMs can be compromised through prompt injection or data poisoning. Continuous feedback loops and machine learning algorithms without sufficient human guidance can allow bad actors to probe controls and then induce poorly targeted responses. LLMs are prone to hallucinations, even in limited domains. Even the best LLMs make things up when they don’t know the answer. 
Security processes and AI policies around LLM use and workflows will become more critical as these systems become more common across cybersecurity operations and research. Making sure those processes are complied with, and are measured and accounted for in governance systems, will prove crucial to ensuring that CISOs can provide sufficient GRC (Governance, Risk and Compliance) coverage to meet new mandates like the Cybersecurity Framework 2.0. 
The Huge Promise of LLMs in Cybersecurity
CISOs and their teams constantly struggle to keep up with the rising tide of new cyberattacks. According to Qualys, the number of CVEs reported in 2023 hit a new record of 26,447. That’s up more than 5X from 2013. 
This challenge has only become more taxing as the attack surface of the average organization grows larger with each passing year. AppSec teams must secure and monitor many more software applications. Cloud computing, APIs, multi-cloud and virtualization technologies have added additional complexity. With modern CI/CD tooling and processes, application teams can ship more code, faster, and more frequently. Microservices have both splintered monolithic app into numerous APIs and attack surface and also punched many more holes in global firewalls for communication with external services or customer devices.
Advanced LLMs hold tremendous promise to reduce the workload of cybersecurity teams and to improve their capabilities. AI-powered coding tools have widely penetrated software development. Github research found that 92% of developers are using or have used AI tools for code suggestion and completion. Most of these “copilot” tools have some security capabilities. In fact, programmatic disciplines with relatively binary outcomes such as coding (code will either pass or fail unit tests) are well suited for LLMs. Beyond code scanning for software development and in the CI/CD pipeline, AI could be valuable for cybersecurity teams in several other ways:   
Enhanced Analysis: LLMs can process massive amounts of security data (logs, alerts, threat intelligence) to identify patterns and correlations invisible to humans. They can do this across languages, around the clock, and across numerous dimensions simultaneously. This opens new opportunities for security teams. LLMs can burn down a stack of alerts in near real-time, flagging the ones that are most likely to be severe. Through reinforcement learning, the analysis should improve over time. 
Automation: LLMs can automate security team tasks that normally require conversational back and forth. For example, when a security team receives an IoC and needs to ask the owner of an endpoint if they had in fact signed into a device or if they are located somewhere outside their normal work zones, the LLM can perform these simple operations and then follow up with questions as required and links or instructions. This used to be an interaction that an IT or security team member had to conduct themselves. LLMs can also provide more advanced functionality. For example, a Microsoft Copilot for Security can generate incident analysis reports and translate complex malware code into natural language descriptions. 
Continuous Learning and Tuning: Unlike previous machine learning systems for security policies and comprehension, LLMs can learn on the fly by ingesting human ratings of its response and by retuning on newer pools of data that may not be contained in internal log files. In fact, using the same underlying foundational model, cybersecurity LLMs can be tuned for different teams and their needs, workflows, or regional or vertical-specific tasks. This also means that the entire system can instantly be as smart as the model, with changes propagating quickly across all interfaces. 
Risk of LLMs for Cybersecurity
As a new technology with a short track record, LLMs have serious risks. Worse, understanding the full extent of those risks is challenging because LLM outputs are not 100% predictable or programmatic. For example, LLMs can “hallucinate” and make up answers or answer questions incorrectly, based on imaginary data. Before adopting LLMs for cybersecurity use cases, one must consider potential risks including: 
Prompt Injection:  Attackers can craft malicious prompts specifically to produce misleading or harmful outputs. This type of attack can exploit the LLM’s tendency to generate content based on the prompts it receives. In cybersecurity use cases, prompt injection might be most risky as a form of insider attack or attack by an unauthorized user who uses prompts to permanently alter system outputs by skewing model behavior. This could generate inaccurate or invalid outputs for other users of the system. 
Data Poisoning:  The training data LLMs rely on can be intentionally corrupted, compromising their decision-making. In cybersecurity settings, where organizations are likely using models trained by tool providers, data poisoning might occur during the tuning of the model for the specific customer and use case. The risk here could be an unauthorized user adding bad data — for example, corrupted log files — to subvert the training process. An authorized user could also do this inadvertently. The result would be LLM outputs based on bad data.
Hallucinations: As mentioned previously, LLMs may generate factually incorrect, illogical, or even malicious responses due to misunderstandings of prompts or underlying data flaws. In cybersecurity use cases, hallucinations can result in critical errors that cripple threat intelligence, vulnerability triage and remediation, and more. Because cybersecurity is a mission critical activity, LLMs must be held to a higher standard of managing and preventing hallucinations in these contexts. 
As AI systems become more capable, their information security deployments are expanding rapidly. To be clear, many cybersecurity companies have long used pattern matching and machine learning for dynamic filtering. What is new in the generative AI era are interactive LLMs that provide a layer of intelligence atop existing workflows and pools of data, ideally improving the efficiency and enhancing the capabilities of cybersecurity teams. In other words, GenAI can help security engineers do more with less effort and the same resources, yielding better performance and accelerated processes. 
0 notes
love-is-normal · 10 months ago
Text
rock around the security flaw
youtube
dual life - rock and security flaws
0 notes
otaviogilbert · 1 year ago
Text
How to use OWASP Security Knowledge Framework | CyberSecurityTV
youtube
Learn how to harness the power of the OWASP Security Knowledge Framework with expert guidance on CyberSecurityTV! đź”’ Dive into the world of application security and sharpen your defenses. Get ready to level up your cybersecurity game with this must-watch video!
0 notes
alexriley2993 · 1 year ago
Text
0 notes
varamacreations · 1 year ago
Text
youtube
How To Generate Secure PGP Keys | CyberSecurityTV
🌟In the previous episodes we learned about encryption and decryption. Today, I will show you a couple methods to generate PGP keys and we will also see some of the attributes that we need to configure in order to generate a secure key. Once you have the key, we will also see how to use them to securely exchange the information.
0 notes
naybnet-tech-blog · 1 year ago
Text
Application Security : CSRF
Cross Site Request Forgery allows an attacker to capter or modify information from an app you are logged to by exploiting your authentication cookies.
First thing to know : use HTTP method carefully. For instance GET shoud be a safe method with no side effect. Otherwise a simple email opening or page loading can trigger the exploit of an app vulnerability
PortSwigger has a nice set of Labs to understand csrf vulnerabilities : https://portswigger.net/web-security/csrf
Use of CSRF protections in web frameworks
Nuxt
Based on express-csurf. I am not certain of potential vulnerabilities. The token is set in a header and the secret to validate the token in a cookie
Django
0 notes
sanjaycr · 2 years ago
Text
youtube
Content Security Policy provides defense in depth against XSS and other injection vulnerabilities. Let's look through the Facebook CSP policy for evaluation. This tool is a very easy way to review and evaluate CSP.
0 notes
lostlibrariangirl · 3 months ago
Text
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
October 7, 2024
281/366 Days of Growth
Saturday study session and early morning. Started Python studies to improve my scripting skills.
One good news is that I found a Cybersecurity mentor and a cybersec community. All because I spoke with a guy at work who is a Senior AppSec coordinator and has the same client as me (but works at another consulting company). He is awesome and presented me to my mentor, and now I have a specific direction to go to 🤓
I walked a long way this year, and I know I have a lot more to walk, but at least all I did is finally feel right.
To complete my happiness, I only want to leave my job and find a Cybersecurity position... Things there are terrible for all teams, and are just getting worse. I am praying for a new job - and studying to help it to come true.
32 notes · View notes
jpmellojr · 6 months ago
Text
How platform engineering helps you get a good start on Secure by Design
Tumblr media
Self-service portals for developers can help organizations overcome challenges to getting up and running with CISA's software security initiative. https://jpmellojr.blogspot.com/2024/06/how-platform-engineering-helps-you-get.html
0 notes
ericvanderburg · 4 months ago
Text
AppSec Teams, DevOps Teams Facing Security Strain
http://securitytc.com/TC5Qsd
2 notes · View notes
cyber-sec · 1 year ago
Text
GitHub Enhances Security Capabilities With AI
Tumblr media
Source: https://www.securityweek.com/github-enhances-security-capabilities-with-ai/
More info: https://github.blog/2023-11-08-ai-powered-appsec/
2 notes · View notes