#appsec
Explore tagged Tumblr posts
Text
Secure by Design and Secure by Default: You need both to boost AppSec
![Tumblr media](https://64.media.tumblr.com/9cd3ef840baee6c46cb185ccbbd64ace/96adb1695872f468-05/s540x810/6fef090d8fe62672fd9e1dc50985c142247c48a8.jpg)
Discover why both Secure by Design and Secure by Default are essential for robust AppSec. https://jpmellojr.blogspot.com/2025/02/secure-by-design-and-secure-by-default.html
0 notes
Text
What is Web Application Security Testing?
Web Application Security Testing, also known as Web AppSec, is a method to test whether web applications are vulnerable to attacks. It involves a series of automated and manual tests and different methodologies to identify and mitigate security risks in any web application. read more
#WebApplicationSecurity#SecurityTesting#CyberSecurity#AppSec#PenetrationTesting#VulnerabilityAssessment#InfoSec#SecureDevelopment#WebSecurity#QAandTesting
0 notes
Text
6 tipos de testes de segurança de aplicativos que vocĂȘ precisa conhecer
O teste de segurança de aplicativos Ă© um componente crĂtico do desenvolvimento de software moderno, garantindo que os aplicativos sejam robustos e resilientes contra ataques maliciosos. Ă medida que as ameaças cibernĂ©ticas continuam a evoluir em complexidade e frequĂȘncia, a necessidade de integrar medidas de segurança abrangentes em todo o SDLC nunca foi tĂŁo essencial. O pentesting tradicionalâŠ
![Tumblr media](https://64.media.tumblr.com/f4e4d1b47666e096208ae092712e4593/2cf93bd634fd6269-d6/s540x810/7f0ede41579413e5d00066a364bdaa48088ccc79.webp)
View On WordPress
#AppSec#BreachLock#CĂber segurança#segurança cibernĂ©tica#Segurança de aplicativos#Teste de penetração
0 notes
Text
Hallucination Control: Benefits and Risks of Deploying LLMs as Part of Security Processes
New Post has been published on https://thedigitalinsider.com/hallucination-control-benefits-and-risks-of-deploying-llms-as-part-of-security-processes/
Hallucination Control: Benefits and Risks of Deploying LLMs as Part of Security Processes
Large Language Models (LLMs) trained on vast quantities of data can make security operations teams smarter. LLMs provide in-line suggestions and guidance on response, audits, posture management, and more. Most security teams are experimenting with or using LLMs to reduce manual toil in workflows. This can be both for mundane and complex tasks.Â
For example, an LLM can query an employee via email if they meant to share a document that was proprietary and process the response with a recommendation for a security practitioner. An LLM can also be tasked with translating requests to look for supply chain attacks on open source modules and spinning up agents focused on specific conditions â new contributors to widely used libraries, improper code patterns â with each agent primed for that specific condition.Â
That said, these powerful AI systems bear significant risks that are unlike other risks facing security teams. Models powering security LLMs can be compromised through prompt injection or data poisoning. Continuous feedback loops and machine learning algorithms without sufficient human guidance can allow bad actors to probe controls and then induce poorly targeted responses. LLMs are prone to hallucinations, even in limited domains. Even the best LLMs make things up when they donât know the answer.Â
Security processes and AI policies around LLM use and workflows will become more critical as these systems become more common across cybersecurity operations and research. Making sure those processes are complied with, and are measured and accounted for in governance systems, will prove crucial to ensuring that CISOs can provide sufficient GRC (Governance, Risk and Compliance) coverage to meet new mandates like the Cybersecurity Framework 2.0.Â
The Huge Promise of LLMs in Cybersecurity
CISOs and their teams constantly struggle to keep up with the rising tide of new cyberattacks. According to Qualys, the number of CVEs reported in 2023 hit a new record of 26,447. Thatâs up more than 5X from 2013.Â
This challenge has only become more taxing as the attack surface of the average organization grows larger with each passing year. AppSec teams must secure and monitor many more software applications. Cloud computing, APIs, multi-cloud and virtualization technologies have added additional complexity. With modern CI/CD tooling and processes, application teams can ship more code, faster, and more frequently. Microservices have both splintered monolithic app into numerous APIs and attack surface and also punched many more holes in global firewalls for communication with external services or customer devices.
Advanced LLMs hold tremendous promise to reduce the workload of cybersecurity teams and to improve their capabilities. AI-powered coding tools have widely penetrated software development. Github research found that 92% of developers are using or have used AI tools for code suggestion and completion. Most of these âcopilotâ tools have some security capabilities. In fact, programmatic disciplines with relatively binary outcomes such as coding (code will either pass or fail unit tests) are well suited for LLMs. Beyond code scanning for software development and in the CI/CD pipeline, AI could be valuable for cybersecurity teams in several other ways:Â Â Â
Enhanced Analysis: LLMs can process massive amounts of security data (logs, alerts, threat intelligence) to identify patterns and correlations invisible to humans. They can do this across languages, around the clock, and across numerous dimensions simultaneously. This opens new opportunities for security teams. LLMs can burn down a stack of alerts in near real-time, flagging the ones that are most likely to be severe. Through reinforcement learning, the analysis should improve over time.Â
Automation: LLMs can automate security team tasks that normally require conversational back and forth. For example, when a security team receives an IoC and needs to ask the owner of an endpoint if they had in fact signed into a device or if they are located somewhere outside their normal work zones, the LLM can perform these simple operations and then follow up with questions as required and links or instructions. This used to be an interaction that an IT or security team member had to conduct themselves. LLMs can also provide more advanced functionality. For example, a Microsoft Copilot for Security can generate incident analysis reports and translate complex malware code into natural language descriptions.Â
Continuous Learning and Tuning: Unlike previous machine learning systems for security policies and comprehension, LLMs can learn on the fly by ingesting human ratings of its response and by retuning on newer pools of data that may not be contained in internal log files. In fact, using the same underlying foundational model, cybersecurity LLMs can be tuned for different teams and their needs, workflows, or regional or vertical-specific tasks. This also means that the entire system can instantly be as smart as the model, with changes propagating quickly across all interfaces.Â
Risk of LLMs for Cybersecurity
As a new technology with a short track record, LLMs have serious risks. Worse, understanding the full extent of those risks is challenging because LLM outputs are not 100% predictable or programmatic. For example, LLMs can âhallucinateâ and make up answers or answer questions incorrectly, based on imaginary data. Before adopting LLMs for cybersecurity use cases, one must consider potential risks including:Â
Prompt Injection: Â Attackers can craft malicious prompts specifically to produce misleading or harmful outputs. This type of attack can exploit the LLMâs tendency to generate content based on the prompts it receives. In cybersecurity use cases, prompt injection might be most risky as a form of insider attack or attack by an unauthorized user who uses prompts to permanently alter system outputs by skewing model behavior. This could generate inaccurate or invalid outputs for other users of the system.Â
Data Poisoning:Â The training data LLMs rely on can be intentionally corrupted, compromising their decision-making. In cybersecurity settings, where organizations are likely using models trained by tool providers, data poisoning might occur during the tuning of the model for the specific customer and use case. The risk here could be an unauthorized user adding bad data â for example, corrupted log files â to subvert the training process. An authorized user could also do this inadvertently. The result would be LLM outputs based on bad data.
Hallucinations: As mentioned previously, LLMs may generate factually incorrect, illogical, or even malicious responses due to misunderstandings of prompts or underlying data flaws. In cybersecurity use cases, hallucinations can result in critical errors that cripple threat intelligence, vulnerability triage and remediation, and more. Because cybersecurity is a mission critical activity, LLMs must be held to a higher standard of managing and preventing hallucinations in these contexts.Â
As AI systems become more capable, their information security deployments are expanding rapidly. To be clear, many cybersecurity companies have long used pattern matching and machine learning for dynamic filtering. What is new in the generative AI era are interactive LLMs that provide a layer of intelligence atop existing workflows and pools of data, ideally improving the efficiency and enhancing the capabilities of cybersecurity teams. In other words, GenAI can help security engineers do more with less effort and the same resources, yielding better performance and accelerated processes.Â
#2023#agent#agents#ai#AI systems#ai tools#AI-powered#alerts#Algorithms#Analysis#APIs#app#applications#AppSec#Attack surface#attackers#automation#Behavior#binary#challenge#CI/CD#CISOs#Cloud#cloud computing#code#coding#communication#Companies#complexity#compliance
0 notes
Text
rock around the security flaw
youtube
dual life - rock and security flaws
0 notes
Text
How to use OWASP Security Knowledge Framework | CyberSecurityTV
youtube
Learn how to harness the power of the OWASP Security Knowledge Framework with expert guidance on CyberSecurityTV! đ Dive into the world of application security and sharpen your defenses. Get ready to level up your cybersecurity game with this must-watch video!
#OWASP#SecurityKnowledge#CyberSecurity#InfoSec#WebSecurity#AppSec#LearnSecurity#HackProof#OnlineSafety#SecureYourApps#CyberAware#Youtube
0 notes
Text
0 notes
Text
youtube
How To Generate Secure PGP Keys | CyberSecurityTV
đIn the previous episodes we learned about encryption and decryption. Today, I will show you a couple methods to generate PGP keys and we will also see some of the attributes that we need to configure in order to generate a secure key. Once you have the key, we will also see how to use them to securely exchange the information.
#owasptop10#webapppentest#appsec#applicationsecurity#apitesting#apipentest#cybersecurityonlinetraining#freesecuritytraining#penetrationtest#ethicalhacking#burpsuite#pentestforbegineers#Youtube
0 notes
Text
Application Security : CSRF
Cross Site Request Forgery allows an attacker to capter or modify information from an app you are logged to by exploiting your authentication cookies.
First thing to know : use HTTP method carefully. For instance GET shoud be a safe method with no side effect. Otherwise a simple email opening or page loading can trigger the exploit of an app vulnerability
PortSwigger has a nice set of Labs to understand csrf vulnerabilities : https://portswigger.net/web-security/csrf
Use of CSRF protections in web frameworks
Nuxt
Based on express-csurf. I am not certain of potential vulnerabilities. The token is set in a header and the secret to validate the token in a cookie
Django
0 notes
Text
youtube
Content Security Policy provides defense in depth against XSS and other injection vulnerabilities. Let's look through the Facebook CSP policy for evaluation. This tool is a very easy way to review and evaluate CSP.
#owasptop10#webapppentest#appsec#applicationsecurity#apitesting#apipentest#cybersecurityonlinetraining#freesecuritytraining#penetrationtest#ethicalhacking#burpsuite#pentestforbegineers#Youtube
0 notes
Text
BSIMM15 shines light on compliance and AI security â but updating tooling is key
![Tumblr media](https://64.media.tumblr.com/959d8c2d6d958820e3a22c1c68349d4d/e1d379dda6041c62-90/s540x810/7ef6eb126cf82c7f18546b01e87960ef11c581bf.jpg)
Discover the latest trends in software security with BSIMM15. Learn about the importance of SBOMs, AI security, and modern tooling. https://jpmellojr.blogspot.com/2025/01/bsimm15-shines-light-on-compliance-and.html
0 notes
Text
A Complete Security Testing Guide
In addition to being utilized by businesses, web-based payroll systems, shopping malls, banking, and stock trading software are now offered for sale as goods. read more
#SecurityTesting#CyberSecurity#AppSec#TestingGuide#PenetrationTesting#VulnerabilityAssessment#InfoSec#SoftwareSecurity#SecureDevelopment#QAandTesting
0 notes
Text
![Tumblr media](https://64.media.tumblr.com/59937296b5a042ab5fcf48622f2d632a/f7912fb50563649a-4b/s540x810/e8f1d9f5068a32dea6755a0ff8bc2561e8399530.jpg)
![Tumblr media](https://64.media.tumblr.com/0bcf8547d8549c61f5eec3595b994627/f7912fb50563649a-92/s540x810/cfabed275d35354bd2cc06832e346f8cbf9d0223.jpg)
![Tumblr media](https://64.media.tumblr.com/c3d6058d523dbb4d7d8b61f39dfb5a82/f7912fb50563649a-63/s540x810/554b7a4fbccaa0664e63bf9ca5545479840aa229.jpg)
![Tumblr media](https://64.media.tumblr.com/47f4167d70c959c0f1e5242fdb677fe2/f7912fb50563649a-17/s540x810/36f8aa03110bd53e1333f3ea7d52930cbc86b891.jpg)
![Tumblr media](https://64.media.tumblr.com/905bbdcaf3f8fb1367a72ab49631aac9/f7912fb50563649a-eb/s540x810/dd722f3c955d87753a91498aed10f98efc64b6a9.jpg)
![Tumblr media](https://64.media.tumblr.com/57f3cfe426d735edbfab5a2647a5b676/f7912fb50563649a-67/s540x810/d502d73056612e88824ce6f2293fc5064ee6b974.jpg)
October 7, 2024
281/366 Days of Growth
Saturday study session and early morning. Started Python studies to improve my scripting skills.
One good news is that I found a Cybersecurity mentor and a cybersec community. All because I spoke with a guy at work who is a Senior AppSec coordinator and has the same client as me (but works at another consulting company). He is awesome and presented me to my mentor, and now I have a specific direction to go to đ€
I walked a long way this year, and I know I have a lot more to walk, but at least all I did is finally feel right.
To complete my happiness, I only want to leave my job and find a Cybersecurity position... Things there are terrible for all teams, and are just getting worse. I am praying for a new job - and studying to help it to come true.
#studyblr#study#study blog#daily life#dailymotivation#study motivation#studying#study space#productivity#study desk#matcha#stemblr#cybersecurity
33 notes
·
View notes
Text
AppSec Teams, DevOps Teams Facing Security Strain
http://securitytc.com/TC5Qsd
2 notes
·
View notes
Text
Clearing the âFog of Moreâ in Cyber Security
New Post has been published on https://thedigitalinsider.com/clearing-the-fog-of-more-in-cyber-security/
Clearing the âFog of Moreâ in Cyber Security
At the RSA Conference in San Francisco this month, a dizzying array of dripping hot and new solutions were on display from the cybersecurity industry. Booth after booth claimed to be the tool that will save your organization from bad actors stealing your goodies or blackmailing you for millions of dollars.
After much consideration, I have come to the conclusion that our industry is lost. Lost in the soup of detect and respond with endless drivel claiming your problems will go away as long as you just add one more layer. Engulfed in a haze of technology investments, personnel, tools, and infrastructure layers, companies have now formed a labyrinth where they can no longer see the forest for the trees when it comes to identifying and preventing threat actors. These tools, meant to protect digital assets, are instead driving frustration for both security and development teams through increased workloads and incompatible tools. The âfog of moreâ is not working. But quite frankly, it never has.
Cyberattacks begin and end in code. Itâs that simple. Either you have a security flaw or vulnerability in code, or the code was written without security in mind. Either way, every attack or headline you read, comes from code. And itâs the software developers that face the ultimate full brunt of the problem. But developers arenât trained in security and, quite frankly, might never be. So they implement good old fashion code searching tools that simply grep the code for patterns. And be afraid for what you ask because as a result they get the alert tsunami, chasing down red herrings and phantoms for most of their day. In fact, developers are spending up to a third of their time chasing false positives and vulnerabilities. Only by focusing on prevention can enterprises really start fortifying their security programs and laying the foundation for a security-driven culture.
Finding and Fixing at the Code Level
Itâs often said that prevention is better than cure, and this adage holds particularly true in cybersecurity. Thatâs why even amid tighter economic constraints, businesses are continually investing and plugging in more security tools, creating multiple barriers to entry to reduce the likelihood of successful cyberattacks. But despite adding more and more layers of security, the same types of attacks keep happening. Itâs time for organizations to adopt a fresh perspective â one where we home in on the problem at the root level â by finding and fixing vulnerabilities in the code.
Applications often serve as the primary entry point for cybercriminals seeking to exploit weaknesses and gain unauthorized access to sensitive data. In late 2020, the SolarWinds compromise came to light and investigators found a compromised build process that allowed attackers to inject malicious code into the Orion network monitoring software. This attack underscored the need for securing every step of the software build process. By implementing robust application security, or AppSec, measures, organizations can mitigate the risk of these security breaches. To do this, enterprises need to look at a âshift leftâ mentality, bringing preventive and predictive methods to the development stage.
While this is not an entirely new idea, it does come with drawbacks. One significant downside is increased development time and costs. Implementing comprehensive AppSec measures can require significant resources and expertise, leading to longer development cycles and higher expenses. Additionally, not all vulnerabilities pose a high risk to the organization. The potential for false positives from detection tools also leads to frustration among developers. This creates a gap between business, engineering and security teams, whose goals may not align. But generative AI may be the solution that closes that gap for good.
Entering the AI-Era
By leveraging the ubiquitous nature of generative AI within AppSec we will finally learn from the past to predict and prevent future attacks. For example, you can train a Large Language Model or LLM on all known code vulnerabilities, in all their variants, to learn the essential features of them all. These vulnerabilities could include common issues like buffer overflows, injection attacks, or improper input validation. The model will also learn the nuanced differences by language, framework, and library, as well as what code fixes are successful. The model can then use this knowledge to scan an organizationâs code and find potential vulnerabilities that havenât even been identified yet. By using the context around the code, scanning tools can better detect real threats. This means short scan times and less time chasing down and fixing false positives and increased productivity for development teams.
Generative AI tools can also offer suggested code fixes, automating the process of generating patches, significantly reducing the time and effort required to fix vulnerabilities in codebases. By training models on vast repositories of secure codebases and best practices, developers can leverage AI-generated code snippets that adhere to security standards and avoid common vulnerabilities. This proactive approach not only reduces the likelihood of introducing security flaws but also accelerates the development process by providing developers with pre-tested and validated code components.
These tools can also adapt to different programming languages and coding styles, making them versatile tools for code security across various environments. They can improve over time as they continue to train on new data and feedback, leading to more effective and reliable patch generation.
The Human Element
Itâs essential to note that while code fixes can be automated, human oversight and validation are still crucial to ensure the quality and correctness of generated patches. While advanced tools and algorithms play a significant role in identifying and mitigating security vulnerabilities, human expertise, creativity, and intuition remain indispensable in effectively securing applications.
Developers are ultimately responsible for writing secure code. Their understanding of security best practices, coding standards, and potential vulnerabilities is paramount in ensuring that applications are built with security in mind from the outset. By integrating security training and awareness programs into the development process, organizations can empower developers to proactively identify and address security issues, reducing the likelihood of introducing vulnerabilities into the codebase.
Additionally, effective communication and collaboration between different stakeholders within an organization are essential for AppSec success. While AI solutions can help to âclose the gapâ between development and security operations, it takes a culture of collaboration and shared responsibility to build more resilient and secure applications.
In a world where the threat landscape is constantly evolving, itâs easy to become overwhelmed by the sheer volume of tools and technologies available in the cybersecurity space. However, by focusing on prevention and finding vulnerabilities in code, organizations can trim the âfatâ of their existing security stack, saving an exponential amount of time and money in the process. At root-level, such solutions will be able to not only find known vulnerabilities and fix zero-day vulnerabilities but also pre-zero-day vulnerabilities before they occur. We may finally keep pace, if not get ahead, of evolving threat actors.
#ai#ai tools#Algorithms#Application Security#applications#approach#AppSec#assets#attackers#awareness#Business#code#codebase#coding#Collaboration#communication#Companies#comprehensive#compromise#conference#creativity#cyber#cyber security#Cyberattacks#cybercriminals#cybersecurity#data#detection#developers#development
0 notes