#is it bad that I have multiple SCP f/os?
Note
suddenly i want to get back to my 10-year-old SCP phase
SCP is always cool tbh
Text
The command-line, for cybersec
On Twitter I made the mistake of asking people about command-line basics for cybersec professionals. I got a lot of useful responses, which I summarize in this long (5k words) post. It's mostly driven by the tools I use, with a bit of input from the tweets I got in response to my query.

bash

By command-line this document really means bash. There are many types of command-line shells. Windows has two, 'cmd.exe' and 'PowerShell'. Unix started with the Bourne shell 'sh', and there have been many variations of this over the years: 'csh', 'ksh', 'zsh', 'tcsh', and so on. When GNU rewrote the Unix user-mode software independently, they called their shell the "Bourne Again Shell", or "bash" (cue "JSON Bourne" shell jokes here).

Bash is the default shell for Linux and macOS. It's also available on Windows, as part of the "Windows Subsystem for Linux". The Windows version of bash has become my most used shell. For Linux IoT devices, BusyBox is the most popular shell. It's easy to learn, as it includes feature-reduced versions of popular commands.

man

"Man" is the command you should not run if you want help for a command. Man pages are designed to drive away newbies. They are only useful if you are already mostly an expert with the command you want help on. Man pages list all possible features of a program, but do not highlight examples of the most common features, or the most common way to use the command.

Take 'sed' as an example. It's most commonly used to do a search-and-replace in files, like so:

$ sed 's/rob/dave/' foo.txt

This usage is so common that many non-geeks know of it. Yet, if you type 'man sed' to figure out how to do a search-and-replace, you'll get nearly incomprehensible gibberish, and no example of this most common usage.

I point this out because most guides on using the shell recommend man pages for getting help. This is wrong; it'll just endlessly frustrate you. Instead, google the commands you need help on, or better yet, search StackExchange for answers.

You might try asking questions, like on Twitter or forum sites, but this requires a strategy. If you ask a basic question, self-important dickholes will respond by telling you to "rtfm" or "read the fucking manual". A better strategy is to exploit their dickhole nature, such as by saying "too bad command xxx cannot do yyy". Helpful people will gladly explain why you are wrong, carefully explaining how xxx does yyy.

If you must use 'man', use the 'apropos' command to find the right man page. Sometimes multiple things in the system have the same or similar names, leading you to the wrong page.

apt-get install yum

Using the command-line means accessing that huge open-source ecosystem. Most of the things in this guide do not already exist on the system. You have to either compile them from source or install them via a package manager. Linux distros ship with a small footprint, but have a massive database of precompiled software "packages" in the cloud somewhere. Use the package manager to install the software from the cloud.

On Debian-derived systems (like Ubuntu, Kali, Raspbian), type 'apt-get install masscan' to install masscan (as an example). Use 'apt-cache search scan' to find a bunch of scanners you might want to install. On RedHat systems, use 'yum' instead. On BSD, use the 'ports' system, which you can also get working for macOS.

If no pre-compiled package exists for a program, then you'll have to download the source code and compile it. There's about an 80% chance this will work easily, following the instructions. There is a 20% chance you'll experience "dependency hell", for example, needing to install two mutually incompatible versions of Python.
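For what it's worth, the 80% case usually looks something like the following. This is just a sketch, assuming the project uses the common configure/make setup; "something" is a placeholder, not a real package:

$ tar -xvzf something.tar.gz    # unpack the source
$ cd something
$ ./configure                   # probe the system, generate a Makefile
$ make                          # compile
$ sudo make install             # install, typically under /usr/local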
Bash is a scripting language

Don't forget that shells are really scripting languages. The bit that executes a single command is just a degenerate use of the scripting language. For example, you can do a traditional for loop like:

$ for i in $(seq 1 9); do echo $i; done

In this way, bash is no different than any other scripting language, like Perl, Python, NodeJS, or PHP CLI. That's why a lot of stuff on the system actually exists as short bash programs, a.k.a. shell scripts. Few want to write bash scripts, but you are expected to be able to read them, either to tweak existing scripts on the system, or to read StackExchange help.

File system commands

The macOS "Finder" and Windows "File Explorer" are just graphical shells that help you find, open, and save files. The first commands you learn provide the same functionality on the command-line: pwd, cd, ls, touch, rm, rmdir, mkdir, chmod, chown, find, ln, mount.

The command 'rm -rf /' removes everything starting from the root directory. This will also follow mounted server directories, deleting files on the server. I point this out to give an appreciation of the raw power you have over the system from the command-line, and how easily you can disrupt things.

Of particular interest is the 'mount' command. Desktop versions of Linux typically mount USB flash drives automatically, but on servers you need to do it manually, e.g.:

$ mkdir ~/foobar
$ mount /dev/sdb ~/foobar

You'll also use the mount command to connect to file servers, using the 'cifs' package if they are Windows file servers:

# apt-get install cifs-utils
# mkdir /mnt/vids
# mount -t cifs -o username=robert,password=foobar123 //192.168.1.11/videos /mnt/vids

Linux system commands

The next commands you'll learn are for administering the Linux system: ps, top, who, history, last, df, du, kill, killall, lsof, lsmod, uname, id, shutdown, and so on.

The first thing hackers do when hacking into a system is run 'uname' (to figure out what version of the OS is running) and 'id' (to figure out which account they've acquired, like 'root' or some other user).

The Linux system command I use most is 'dmesg' (or 'tail -f /var/log/dmesg'), which shows you the raw system messages. For example, when I plug a USB drive into a server, I look in dmesg to find out which device was added so that I can mount it. I don't know if this is the best way; it's just the way I do it (servers don't automount USB drives like desktops do).

Networking commands

The permanent state of the network (what gets configured on the next bootup) is configured in text files somewhere. But there is a wealth of commands you'll use to view the current state of networking, make temporary changes, and diagnose problems.

The 'ifconfig' command has long been used to view the current TCP/IP configuration and make temporary changes. Learning how TCP/IP works means playing a lot with ifconfig. Use 'ifconfig -a' for even more verbose information.

Use the 'route' command to see if you are sending packets to the right router. Use the 'arp' command to make sure you can reach the local router. Use 'traceroute' to make sure packets are following the correct route to their destination. You should learn the nifty trick it's based on (TTLs). You should also play with the TCP, UDP, and ICMP options.
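Put together, a quick triage session might look like this. A sketch, using Google's public DNS server as a convenient far-away target:

$ ifconfig -a          # do we even have an IP address?
$ route -n             # is there a default gateway?
$ arp -a               # can we resolve the gateway's MAC address?
$ traceroute 8.8.8.8   # where along the path do packets die?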
Use 'ping' to see if you can reach the target across the Internet. It usefully measures latency in milliseconds, and congestion (via packet loss). For example, ping Netflix throughout the day, and notice how the ping latency increases substantially during "prime time" viewing hours.

Use 'dig' to make sure DNS resolution is working right. (Some use 'nslookup' instead.) Dig is useful because it's the raw universal DNS tool; every time they add some new standard feature to DNS, they add that feature into dig as well.

The 'netstat -tualn' command views the current TCP/IP connections and which ports are listening. I forget what the various options '-tualn' mean; I just know it's the output I always want to see, rather than that of the raw netstat command by itself.

You'll want to use 'ethtool -k' to turn off checksum and segmentation offloading. These are features that sometimes break packet captures.

There is this newfangled 'ip' system for Linux networking, replacing many of the above commands, but as an old timer, I haven't looked into that.

Some other tools for diagnosing local network issues are 'tcpdump', 'nmap', and 'netcat'. These are described in more detail below.

ssh

In general, you'll remotely log into a system in order to use the command-line. We use 'ssh' for that. It uses a protocol similar to SSL in order to encrypt the connection. There are two ways to use ssh to log in: with a password or with a client-side certificate.

When using SSH with a password, you type 'ssh username@servername'. The remote system will then prompt you for a password for that account.

When using client-side certificates, use 'ssh-keygen' to generate a key, then either copy the public-key of the client to the server manually, or use 'ssh-copy-id' to copy it using the password method above.

How this works is a basic application of public-key cryptography. When logging in with a password, you get a copy of the server's public-key the first time you log in, and if it ever changes, you get a nasty warning that somebody may be attempting a man-in-the-middle attack.

$ ssh [email protected]
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@   WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!   @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!

When using client-side certificates, the server trusts your public-key. This is similar to how client-side certificates work in SSL VPNs.

You can use SSH for things other than logging into a remote shell. You can script ssh to run commands remotely on a system from a local shell script. You can use 'scp' (SSH copy) to transfer files to and from a remote system. You can do tricks with SSH to create tunnels, which is a popular way to bypass the restrictive rules of your local firewall nazi.

openssl

This is your general cryptography toolkit, doing everything from simple encryption, to public-key certificate signing, to establishing SSL connections. It is extraordinarily user-hostile, with terrible inconsistency among options. You can only figure out how to do things by looking up examples on the net, such as on StackExchange. There are competing SSL libraries with their own command-line tools, like GnuTLS and Mozilla NSS, that you might find easier to use.

The fundamental use of the openssl tool is to create public-keys, "certificate requests", and self-signed certificates. All the web-site certificates I've ever obtained have been created using the openssl command-line tool to create CSRs.
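The usual three-step dance looks roughly like this; a sketch, with hypothetical filenames:

$ openssl genrsa -out server.key 2048                 # generate an RSA private key
$ openssl req -new -key server.key -out server.csr    # create a certificate request from it
$ openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt   # or just self-sign

You'd send the .csr to a certificate authority, or use the last line to self-sign for testing.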
You should practice using the openssl tool to encrypt files, sign files, and check signatures. You can use openssl just like PGP for encrypted emails/messages, but following the "S/MIME" standard rather than the PGP standard. You might consider learning the 'pgp' command-line tools, or the open-source 'gpg' or 'gpg2' tools, as well.

You should learn how to use the 'openssl s_client' feature to establish SSL connections, as well as the 'openssl s_server' feature to create an SSL proxy for a server that doesn't otherwise support SSL.

Learning all the ways of using the openssl tool to do useful things will go a long way toward teaching somebody about crypto and cybersecurity. I can imagine an entire class consisting of nothing but learning openssl.

netcat (nc, socat, cryptcat, ncat)

A lot of Internet protocols are based on text. That means you can create a raw TCP connection to the service and interact with it using your keyboard. The classic tool for doing this is known as 'netcat', abbreviated 'nc'. For example, connect to Google's web server at port 80 and type the HTTP HEAD command followed by a blank line (hit [return] twice):

$ nc www.google.com 80
HEAD / HTTP/1.0

HTTP/1.0 200 OK
Date: Tue, 17 Jan 2017 01:53:28 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
P3P: CP="This is not a P3P policy! See https://www.google.com/support/accounts/answer/151657?hl=en for more info."
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Set-Cookie: NID=95=o7GT1uJCWTPhaPAefs4CcqF7h7Yd7HEqPdAJncZfWfDSnNfliWuSj3XfS5GJXGt67-QJ9nc8xFsydZKufBHLj-K242C3_Vak9Uz1TmtZwT-1zVVBhP8limZI55uXHuPrejAxyTxSCgR6MQ; expires=Wed, 19-Jul-2017 01:53:28 GMT; path=/; domain=.google.com; HttpOnly
Accept-Ranges: none
Vary: Accept-Encoding

Another classic example is to connect to port 25 on a mail server to send email, spoofing the "MAIL FROM" address.

There are several versions of netcat that work over SSL as well. My favorite is 'ncat', which comes with nmap, as it's actively maintained. In theory, 'openssl s_client' should also work this way.

nmap

At some point, you'll need to port scan. The standard program for this is 'nmap', and it's the best. The classic way of using it is something like:

# nmap -A scanme.nmap.org

The '-A' option enables all the interesting features, like OS detection, version detection, and basic scripts on the most common ports that a server might have open. It takes a while to run. The 'scanme.nmap.org' host is a good site to practice on.

Nmap is more than just a port scanner. It has a rich scripting system for probing more deeply into a system than just a port, and for gathering more information useful for attacks. The scripting system essentially contains some attacks, such as password guessing. Scanning the Internet, finding services identified by nmap scripts, and interacting with them with tools like ncat will teach you a lot about how the Internet works.

BTW, if nmap is too slow, use 'masscan' instead. It's a lot faster, though it has much more limited functionality.
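A masscan run looks much like an nmap run, except you give it an explicit transmit rate. A sketch, with a made-up target range:

# masscan -p80,443 10.0.0.0/8 --rate 100000   # scan two ports across a /8, 100k packets/sec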
Packet sniffing with tcpdump and tshark

All Internet traffic consists of packets going between IP addresses. You can capture those packets and view them using "packet sniffers". The most important packet sniffer is 'Wireshark', a GUI. For the command-line, there are 'tcpdump' and 'tshark'.

You can run tcpdump on the command-line to watch packets go in and out of the local computer. It performs a quick "decode" of packets as they are captured. It'll reverse-lookup IP addresses into DNS names, which means its buffers can overflow, dropping new packets while it's waiting for DNS responses for previous packets.

# tcpdump -p -i eth0

A common task is to create a round-robin set of files, saving the last 100 files of 1 gig each. Older files are overwritten. Thus, when an attack happens, you can stop the capture, go back in time, and view the contents of the network traffic using something like Wireshark:

# tcpdump -p -i eth0 -s65535 -C 1000 -W 100 -w cap

Instead of capturing everything, you'll often set "BPF" filters to narrow down to traffic from a specific target, or a specific port.

The above examples use the -p option to capture traffic destined for the local computer. Sometimes you may want to look at all traffic going to other machines on the local network. You'll need to figure out how to tap into wires, or set up "monitor" ports on switches, for this to work.

A more advanced command-line program is 'tshark'. It can apply much more complex filters. It can also be used to extract the values of specific fields and dump them to a text file.

Base64/hexdump/xxd/od

These are some rather trivial commands, but you should know them. The 'base64' command encodes binary data as text, which can then be passed around, such as in email messages. Base64 encoding is often automatic in the output of programs like openssl and PGP. In many cases, you'll need to view a hex dump of some binary data. There are many programs to do this, such as hexdump, xxd, od, and more.

grep

Grep searches for a pattern within a file. More importantly, it searches for a regular expression (regex) in a file. The kung fu of Unix is that a lot of stuff is stored in text files, and you use grep with regex patterns to extract the stuff stored in those files.

The power of this tool really depends on your mastery of regexes. You should master enough that you can understand StackExchange posts that explain almost what you want to do, and then tweak them to make them work.

Grep, by default, shows only the matching lines. In many cases, you want only the part that matches. To do that, use the -o option. (This is not available on all versions of grep.) You'll probably want the better, "extended" regular expressions, so use the -E option. You'll often want the "case-insensitive" option (matching both upper and lower case), so use the -i option.

For example, to extract all MAC addresses from a text file, you might do something like the following. This extracts all strings that are twelve hex digits:

$ grep -Eio '[0-9A-F]{12}' foo.txt

Text processing

Grep is just the first of the various "text processing filters". Other useful ones include 'sed', 'cut', 'sort', and 'uniq'. You'll become an expert at piping the output of one into the input of the next. You'll use 'sort | uniq' as god (Dennis Ritchie) intended, and not the heresy of 'sort -u'. You might want to master 'awk'. It's a whole programming language of its own, but once you master it, it'll be easier than other mechanisms. You'll end up using 'wc' (word-count) a lot. All it does is count the number of lines, words, and characters in a file, but you'll find yourself wanting to do this a lot.

csvkit and jq

You get data in CSV format and JSON format a lot. The tools 'csvkit' and 'jq' respectively help you deal with those formats: converting files into other formats, sticking the data in databases, and so forth. It'll be easier to extract data using these tools, which understand the text formats, than to hand-write 'awk' commands or 'grep' regexes.
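As a sketch of why these are worth learning (the filenames and field names here are hypothetical):

$ jq -r '.[].ip' hosts.json              # pull one field out of every object in a JSON array
$ csvcut -c ip,port scan.csv | csvlook   # select two CSV columns and pretty-print them

Compare that to the equivalent grep regex and you'll see the appeal.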
strings

Most files are binary, with a few readable ASCII strings. You use the program 'strings' to extract those strings. This one simple trick sounds stupid, but it's more powerful than you'd think. For example, I knew that a program probably contained a hard-coded password. I blindly grabbed all the strings in the program's binary file and sent them to a password cracker to see if any could decrypt something. And indeed, one of the 100,000 strings in the file worked, thus finding the hard-coded password.

tail -f

'Tail' is just a standard Linux tool for looking at the end of files. If you want to keep checking the end of a live file that's constantly growing, use 'tail -f'. It'll sit there waiting for something new to be added to the end of the file, then print it out. I do this a lot, so I thought it'd be worth mentioning.

tar -xvfz, gzip, xz, 7z

In prehistoric times (like the 1980s), Unix was backed up to tape drives. The tar command could be used to combine a bunch of files into a single "archive" to be sent to the tape drive, hence "tape archive" or "tar".

These days, a lot of stuff you download will be in tar format (ending in .tar). You'll need to learn how to extract it:

$ tar -xvf something.tar

Nobody knows what the "xvf" options mean anymore, but these letters must be specified in that order. I'm joking here, but only a little: somebody did a survey once and found that virtually nobody knows how to use tar other than canned formulas such as this.

Along with combining files into an archive, you also need to compress them. In prehistoric Unix, the 'compress' command would be used, which would replace a file with a compressed version ending in '.z'. This was found to be encumbered with patents, so everyone switched to 'gzip' instead, which replaces a file with a new one ending in '.gz'.

$ ls foo.txt*
foo.txt
$ gzip foo.txt
$ ls foo.txt*
foo.txt.gz

Combined with tar, you get files with either the '.tar.gz' extension, or simply '.tgz'. You can untar and uncompress at the same time:

$ tar -xvzf something.tar.gz

Gzip is always good enough, but nerds gonna nerd and want to compress with slightly better compression programs. They'll have extensions like '.bz2', '.7z', '.xz', and so on. There are a ton of them. Some of them are supported directly by the tar program:

$ tar -xvjf something.tar.bz2

Then there is the 'zip/unzip' program, which supports the Windows .zip file format. To create compressed archives these days, I don't bother with tar, but just use the ZIP format. For example, this will recursively descend a directory, adding all files to a ZIP file that can easily be extracted under Windows:

$ zip -r test.zip ./test/
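One more canned formula worth memorizing for the archive toolbox: swap '-x' for '-t' to list an archive's contents without extracting, so you can see whether it's going to vomit files all over your current directory. Again, "something" is a placeholder:

$ tar -tzvf something.tar.gz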
dd

I should have included this under the system tools at the top, but it's interesting for a number of purposes. The basic usage is simply to copy one file to another, the in-file to the out-file:

$ dd if=foo.txt of=foo2.txt

But that's not interesting. What's interesting is using it to write to "devices". The disk drives in your system also exist as raw devices under the /dev directory. For example, if you want to create a boot USB drive for your Raspberry Pi:

# dd if=rpi-ubuntu.img of=/dev/sdb

Or, you might want to hard-erase an entire hard drive by overwriting it with random data:

# dd if=/dev/urandom of=/dev/sdc

Or, you might want to image a drive on the system, for later forensics, without stumbling on things like open files:

# dd if=/dev/sda of=/media/Lexar/infected.img

The dd program has some additional options, like block size and so forth, that you'll want to pay attention to.

screen and tmux

You log in remotely and start some long-running tool. Unfortunately, if you log out, all the processes you started will be killed. If you want them to keep running, you need a tool for that. I use 'screen'. Before I start a long-running port scan, I run the 'screen' command. Then, I type [ctrl-a][ctrl-d] to disconnect from that screen, leaving it running in the background. Later, I type 'screen -r' to reconnect to it. If there is more than one screen session, '-r' by itself will list them all. Use '-r pid' to reattach to the proper one. If you can't, then use '-D pid' or '-D -RR pid' to force the other session to detach from whoever is using it.

Tmux is an alternative to screen that many use. It's also cool for having lots of terminal screens open at once.

curl and wget

Sometimes you want to download files from websites without opening a browser. The 'curl' and 'wget' programs do that easily. Wget is the traditional way of doing this, but curl is a bit more flexible. I use curl for everything these days, except mirroring a website, in which case I just do 'wget -m website'.

The thing that makes curl so powerful is that it's really designed as a tool for poking and prodding at all the various features of HTTP. That it's also useful for downloading files is a happy coincidence. When playing with a target website, curl will let you do lots of complex things, which you can then script via bash. For example, hackers often write their cross-site scripting/forgeries as bash scripts using curl.

node/php/python/perl/ruby/lua

As mentioned above, bash is its own programming language. But it's weird, and annoying. So sometimes you want a real programming language. Here are some useful ones.

Yes, PHP is a language that runs in a web server to create web pages. But if you know the language well, it's also a fine command-line language for doing stuff. Yes, JavaScript is a language that runs in the web browser. But if you know it well, it's also a great language for doing stuff, especially the 'nodejs' version. Then there are other good command-line languages, like Python, Ruby, Lua, and the venerable Perl.

What makes all these great is the large library support. Somebody has already written a library that nearly does what you want, which can be made to work with a little bit of extra code of your own. My general impression is that Python and NodeJS have the largest libraries likely to have what you want, but you should pick whichever language you like best, whichever makes you most productive. For me, that's NodeJS, because of the great Visual Studio Code IDE/debugger.

iptables, iptables-save

I shouldn't include this in the list. Iptables isn't a command-line tool as such; the tool is the built-in firewalling/NAT features within the Linux kernel. Iptables is just the command you use to configure it.

Firewalling is an important part of cybersecurity. Everyone should have some experience playing with a Linux system doing basic firewalling tasks: basic rules, NATting, and transparent proxying for MITM attacks. Use 'iptables-save' in order to persistently save your changes.
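A minimal sketch of what "basic rules" means, assuming a Debian-style system where iptables-persistent reads /etc/iptables/rules.v4 at boot:

# iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT   # keep existing flows alive
# iptables -A INPUT -p tcp --dport 22 -j ACCEPT                      # allow inbound ssh
# iptables -P INPUT DROP                                             # drop everything else by default
# iptables-save > /etc/iptables/rules.v4                             # persist across reboots

Get the order wrong and you'll lock yourself out of your ssh session, which is a rite of passage.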
MySQL

Similar to iptables, 'mysql' isn't a tool in its own right, but a way of accessing a database maintained by another process on the system. Filters acting on text files only go so far; sometimes you need to dump the data into a database and make queries on it.

There is also the offensive skill of learning how targets store things in a database, and how attackers get at the data. Hackers often publish raw SQL data they've stolen in their hacks (like the Ashley Madison dump). Being able to stick those dumps into your own database is quite useful. Hint: disable transaction logging while importing mass data.

If you don't like SQL, you might consider NoSQL tools like Elasticsearch, MongoDB, and Redis, which can similarly be useful for arranging and searching data. You'll probably have to learn some JSON tools for formatting the data.

Reverse engineering tools

A cybersecurity specialty is "reverse engineering". Some want to reverse engineer the target software being hacked, to understand vulnerabilities. This is needed for commercial software and device firmware where the source code is hidden. Others use these tools to analyze viruses/malware.

The 'file' command uses heuristics to discover the type of a file.

There's a whole skillset for analyzing PDF and Microsoft Office documents. I play with pdf-parser. There's a long list at this website: https://zeltser.com/analyzing-malicious-documents/

There's a whole skillset for analyzing executables. Binwalk is especially useful for analyzing firmware images.

Qemu is a useful virtual machine. It can emulate full systems, such as an IoT device based on the MIPS processor. Like some other tools mentioned here, it's more a full subsystem than a simple command-line tool.

On a live system, you can use 'strace' to view what system calls a process is making. Use 'lsof' to view which files and network connections a process has open.

Password crackers

A common cybersecurity specialty is "password cracking". There are two kinds: online and offline password crackers.

Typical online password crackers are 'hydra' and 'medusa'. They can take files containing common passwords and attempt to log on to various protocols remotely, like HTTP, SMB, FTP, Telnet, and so on. I used hydra recently in order to find the default/backdoor passwords of many IoT devices I've bought for my test lab. Online password crackers must open TCP connections to the target and try to log on. This limits their speed. They may also be stymied by systems that lock accounts, or introduce delays, after too many bad password attempts.

Typical offline password crackers are 'hashcat' and 'jtr' (John the Ripper). They work off of stolen encrypted passwords. They can attempt billions of passwords per second, because there's no network interaction and nothing slowing them down. Understanding offline password crackers means getting an appreciation for the exponential difficulty of the problem: a sufficiently long and complex encrypted password is uncrackable. Instead of brute-force attempts at all possible combinations, we must use tricks, like mutating the top million most common passwords. I use hashcat because of the great GPU support, but John is also a great program.
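To make the online/offline distinction concrete, here is roughly what each looks like. A sketch: the target address, wordlists, and hash type are all hypothetical:

$ hydra -l admin -P passwords.txt 192.168.1.1 http-get /   # online: guesses over the network
$ hashcat -m 0 -a 0 hashes.txt rockyou.txt                 # offline: dictionary attack on MD5 hashes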
WiFi hacking

A common specialty in cybersecurity is WiFi hacking. The difficulty in WiFi hacking is getting the right WiFi hardware that supports the necessary features (monitor mode, packet injection), then getting the right drivers installed in your operating system. That's why I use Kali rather than some generic Linux distribution: it's got the right drivers installed.

The 'aircrack-ng' suite is the best for doing basic hacking, such as packet injection. When the parents are letting the iPad babysit their kid with a loud movie at the otherwise quiet coffeeshop, use aircrack-ng to deauth the kid.

The 'reaver' tool is useful for hacking into sites that leave WPS wide open and misconfigured.

Remote exploitation

A common specialty in cybersecurity is pentesting. Nmap, curl, and netcat (described above) are useful tools for this.

Some useful DNS tools are 'dig' (described above) and dnsrecon/dnsenum/fierce, which try to enumerate and guess as many names as possible within a domain. These tools all have unique features, but also a lot of overlap.

Nikto is a basic tool for probing for common vulnerabilities, out-of-date software, and so on. It's not really a vulnerability scanner like Nessus, which defenders use, but more of a tool for attack.

SQLmap is a popular tool for probing for SQL injection weaknesses.

Then there is 'msfconsole'. It has some attack features. This is humor: it has all the attack features. Metasploit is the most popular tool for running remote attacks against targets, exploiting vulnerabilities.

Text editor

Finally, there is the decision of text editor. I use 'vi' variants. Others like 'nano' and variants. There's no wrong answer as to which editor to use, unless that answer is 'emacs'.

Conclusion

Obviously, not every cybersecurity professional will be familiar with every tool in this list. If you don't do reverse-engineering, then you won't use reverse-engineering tools. On the other hand, regardless of your specialty, you need to know basic crypto concepts, so you should know something like the openssl tool. You need to know basic networking, so things like nmap and tcpdump. You need to be comfortable processing large dumps of data, manipulating them with any tool available. You shouldn't be frightened by a little sysadmin work.

The above list is therefore a useful starting point for cybersecurity professionals. Of course, those new to the industry won't have much familiarity with them. But it's fair to say that I've used everything listed above at least once in the last year, and the year before that, and the year before that. I spend a lot of time on StackExchange and Google searching for the exact options I need, so I'm not an expert, but I am familiar with the basic use of all these things.

from The command-line, for cybersec