#WebScanner
SXDork: Google Dorking to Search for Specific Information | #GoogleDorking #GoogleDorks #SXDork #WebScanner #Web
Detect Website Malware
The threat of infection or a security incident is constant, and if you don't know what is happening, you can't respond appropriately. No website or environment is immune from security issues. The ability to detect incidents when they occur is fundamental to a good security posture for your website. The Website Security Platform continuously monitors your website and sends immediate alerts in the event of a security incident.
Archery - Open Source Vulnerability Assessment and Management
Archery is an open-source vulnerability assessment and management tool that helps developers and pentesters perform scans and manage vulnerabilities. Archery uses popular open-source tools to perform comprehensive scanning of web applications and networks. It also performs dynamic authenticated web application scanning, covering whole applications by using Selenium. The developers can also…
#assessment #development #DevOps #linux #Management #Network vulnerability #Open Source #opensource #OWASP ZAP #Vulnerability Assessment #vulnerability Scanning #web scanning #webscanners
That is very interesting. I have heard you need an extra layer of protection on top of Windows Defender, but I could definitely understand if that is no longer necessary. That is actually nice peace of mind to have, knowing that you don't need something extra running on your system.
I am still in the process of migrating to Firefox and setting up uBlock Origin, but once I get finished, I will probably uninstall what I have on my system now.
And come to think of it, the systems my work uses don't have anything other than a web scanner, not an antivirus, and I always wondered why.
You know I ought to look up shipping things internationally via PayPal. If I can do that without having to drive to the post office, I could open up my listings to international buyers.
Start with Skipfish
Skipfish is a round-one web application scanning tool. There has been some buzz around it lately, so I thought I would provide some details for people wanting to check it out and see what is new. It is provided by Google. Say what you like about their privacy policy; as long as they keep paying Michal Zalewski to develop skipfish, they are alright in my book. We run it on OSX, well, because. Other considerations for this sort of tier-one web application analysis: nikto, arachni, w3af. Personally, I have used nikto a lot in the past.
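If you want a quick point of comparison before diving into skipfish, a baseline nikto run against the same target is a one-liner. This is just a sketch using nikto's standard host and report options; the report filename here is an example, not anything skipfish or nikto requires:
# standard nikto host/report flags; the report filename is an example
nikto -h http://example.com/ -o nikto-results.html -Format htm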
I wanted to run through a quick install and first scan, just to familiarize people with it. First off, we want to use brew to install; otherwise we would have to do all those boring dependency checks ourselves[1]. I checked whether brew had the latest version (skipfish updates once or twice a month at minimum). It didn't, so let's edit the formula to get the latest version.
brew edit skipfish
If you have brew configured correctly, it should pop open the formula in your text editor of choice. We are concerned with two lines. First up is the URL. At the time of writing, the latest skipfish was 2.06b. Edit the file to reflect this, as shown.
url 'http://skipfish.googlecode.com/files/skipfish-2.06b.tgz'
Now of course we will have to change the checksum as well. You can calculate it[2] if you like; I did not see it listed on the Google Code page. If you trust me, here it is. Copy and paste it into the appropriate field.
301f3f209ddf57dd7103a61256f62afa
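For context, those two edited lines sit side by side in the formula. A minimal sketch, assuming the checksum field is named md5 (typical for Homebrew formulas of that era; check your own formula to be sure):
url 'http://skipfish.googlecode.com/files/skipfish-2.06b.tgz'
md5 '301f3f209ddf57dd7103a61256f62afa'  # field name assumed; newer formulas use sha1/sha256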
Ready to go, install Skipfish with Brew as normal:
brew install skipfish
Skipfish has lots of features centered around dictionaries, which are very snazzy: dictionary brute-force, listing potential sub-directories, and so on. However, you don't really need to mess with them just to try it out. Here is a scan that tells skipfish to run without all that nonsense. Note that if you don't give it either the skip option or a wordlist file to use, it will just error out.
skipfish -o test-dir -L -W- http://example.com/
Away you go! The flags, briefly:
-L: don't auto-learn new words from the site.
-W-: redirect learned words to /dev/null.
-o test-dir: output the results into a specific directory. No need to create the directory beforehand.
You should always specify an output directory; the results are kinda messy, with lots of little files in the root. Skipfish will display a nice little message for you when it starts off.
And then kicks into a nice little status screen while it runs. Pressing enter will change it to a list of URLs as it scans them, if you want to watch that. Finally it tells you it is done. It has been a great day for science indeed!
You can then cd into your results directory and run this to open the results:
open index.html
Wonderful. Now you can install, scan, and view the results of the skipfish scanner.
Why Skipfish
There are a few great things about skipfish that really recommend it as a starting-point tool. If we dig into the results a little, we can see exactly what makes them so useful. First we have the generic high/medium/low severity ratings. That is all well and good, and it seems to compare pretty well out of the box: about middle of the road[3]. Two things really stand out when using skipfish, though, and they both have to do with what you do after you have run the scan. First, you may have noticed in the terminal output when skipfish finished that it generates a file called 'pivots.txt'. This is a great file to feed into other scanners, sniffers, and tools: it has all the URLs that skipfish found, ready to go. Check it out. The second thing is the 'Interesting Files' portion of the scan report. This points to SWFs, PDFs, scripts, and source code disclosures, all things any decent pen tester would certainly want to check out.
All neatly arranged in one dropdown. Thanks, Skipfish!
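To give a flavor of the pivots.txt workflow, here is a minimal sketch that feeds every discovered URL to curl and pulls back just the response headers. It assumes the file contains one URL per line, as described above, and that test-dir is the output directory from the earlier scan:
# assumes pivots.txt holds one URL per line; test-dir is the earlier output directory
while read -r url; do curl -sI "$url"; done < test-dir/pivots.txt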
Extra Fun
Just for fun, we can dip a toe into some of the more advanced options you may want to tweak. Here is an example of running against something fairly local, with the options set fairly aggressively so it runs faster. You should create a blank word list file for every scan; skipfish leverages it for some of its brute-forcing. Disclaimer: these settings might fail spectacularly if you try to run against something far away, say a server in China. Try the defaults first.
touch bts.wl
skipfish -g 100 -f 25 -t 5 -o example-results -W bts.wl -S minimal.wl -b ie http://www.example.com/
-g: maximum simultaneous TCP connections, normally 40, tweaked here to 100. Probably don't go above 5 if you are scanning production systems, because of DoS concerns.
-f: allowed failures, normally 100, cranked down to 25. If I get more than 25 failures running locally, something else is wrong and I want to know it fast.
-t: total request timeout, normally 20, I want it to be 5. See above.
-W: Specify the blank word list file
-S: Specify a pre-populated, read-only word list used when brute-forcing file and directory names. Larger files here mean longer scan times.
-b ie: pretend to be Internet Explorer. Because it's funny.
That should pretty much get you started; all the available options are explained by skipfish -h. Some of the things you can look forward to in there: specifying cookies, HTTP authentication parameters, and hard finish times (for those overnight jobs).
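As a taste, here is a hedged sketch of what an overnight, cookie-authenticated run might look like. The -C (append a custom cookie) and -k (hard deadline, h:m:s) flags come from skipfish -h; the cookie name and value are placeholders you would lift from a logged-in browser session:
# cookie value is a placeholder; -k stops the scan after eight hours
touch night.wl
skipfish -o night-run -C 'session=placeholder' -k 8:00:00 -W night.wl -S minimal.wl http://example.com/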
[1] Not actually as hard as you might think. I was only missing one dependency when I tested it (for the sake of science), and it was relatively painless. That is, if you don't brew. Which you should. ↩︎
[2] md5 -r skipfish-2.06b.tgz > tmpsum.txt ↩︎
[3] In almost every way. It catches about 50% of all SQLi and XSS, and that performance puts it almost in the middle of the pack for scanners. Upper middle. ↩︎
WRecon: Open Source Non-Intrusive Web Scanner | #reconnaissance #urlscan #webscanner #security
SubDomainizer: Tool to Find Subdomains and Things Hidden | #Scanning #Subdomains #WebScanner #Web