pinerbench · 2 years
Text
Captain crunch whistle phone hacker
Due to accusations of sexual misconduct, legendary hacker John Draper, aka Captain Crunch, has been banned from attending several hacker conferences.

Tweets by security researchers about Draper’s “predatory behavior” first caught my attention in early November. (No, I’m not linking to archived versions of them, since the researchers later deleted those tweets.) One of the now-deleted tweets suggested “Captain Crunch is basically the Kevin Spacey of infosec.” Another referred to Draper’s actions as an “open secret,” adding, “If you run an event where Captain Crunch attends, ensure a member of staff is assigned to keep teenage boys away from him.” Other people in the Twitter conversations said allegations of inappropriate behavior by Draper have been discussed online by alleged victims for over two decades.

Fast forward to last Friday, when BuzzFeed published an article about Draper using his status as Captain Crunch while at security conferences to lure minors to his hotel room. The article includes stories from six men with knowledge of Draper “habitually meeting young, often teenage men at conferences between 19.” “These included him massaging men in public and urging them to come to his or their hotel room for private sessions,” BuzzFeed reported. If a teenage man took Draper up on the private invitation, it would reportedly lead to Draper’s request for them to participate in “energy” exercises. “In multiple cases in which the men agreed, Draper would leap on their backs in ways the men described as unwanted sexual contact.” Steve Wozniak claimed Draper tried that with Steve Jobs in the 1970s, but Jobs was not interested in helping Draper “exercise by sitting on Draper’s back.”

People knew about Draper's inappropriate behavior for years

Although many in the security community were aware of Draper’s predatory actions and accusations that he was a pedophile, he wasn’t banned from attending conferences. A person can get by with a lot at hacker conferences, meaning it would take beyond a lot to get banned. In 2013, a team at DefCon called itself “Too Old for Cap’n Crunch.” A DefCon spokesperson said the conference had never received a formal complaint, yet some of the “goons” at DefCon would reportedly intercede on behalf of the minors and escort young men out of Draper’s company. But now, organizers of DefCon are taking a stand in regard to Draper’s alleged unwanted sexual advances: “We applaud those individuals bravely stepping forward to tell their stories. The behavior described in these allegations is appalling and has no place in our community. Per DefCon’s Code of Conduct, this kind of behavior will result in a permanent ban from our events.”

Draper banned from several hacker conferences

In fact, the newest set of public accusations have resulted in Draper being banned from several hacker conferences. The founder of the Houston Security Conference told Ars Technica that Draper had been scheduled to speak in April 2018, but he has now been “disinvited.” ToorCon also said the 74-year-old Captain Crunch will be banned from attending future conferences.

The article on BuzzFeed has caused numerous people to tell their own stories about Draper. As Ars pointed out, the respected cryptography professor Matt Blaze took to Twitter with his story. In the late 1970s, when Blaze was only in eighth or ninth grade, Draper became aggressive about sharing in those “exercises.” Blaze was freaked out, or “creeped” out, and dropped contact, but then Draper allegedly started stalking him - claimed to be a relative to take Blaze out of school and even tapped his family’s phone. More and more people are coming forward with their versions of what happened with Captain Crunch.
antibiz · 4 years
Photo
Insert a few quarters or use the Captain Crunch whistle to make a call. A good hacker can bend phone lines to their will. #antibiz #philly #streetwear #pixelart #fashion #videogame #comicbook #superhero #215 (at Philadelphia, Pennsylvania) https://www.instagram.com/p/CE-WOu3DIF2/?igshid=j3quno2aq09n
Text
Week 5 Lectures
Morning Lecture
Wired Equivalent Privacy
With WiFi, unlike a wired connection, it is easy for other people to access packets that are being sent through the air. This means that you would want to encrypt your data before sending it.
WEP is very basic encryption with many vulnerabilities. What was interesting is that even though vulnerabilities were found, people kept using it for a while because they didn’t have many alternatives. 
Data sent in a WEP frame is broadcast, and only those with the correct MAC address will read it. But this doesn’t stop other people from taking these packets, modifying them and resending them.
Encryption is done with RC4, a stream cipher that generates a pseudorandom keystream which is XORed with the data. The data is encrypted, but its length, order and structure stay the same, so given a packet it is known which bits correspond to the IP packet’s destination address.
An attacker can take a packet sent by someone else, modify the packet’s destination IP address, and send it back to the access point. Instead of the attacker doing work, the access point will decrypt it and send it back to the attacker! Note that the attacker’s IP and the victim’s IP addresses are the same for the first 3/4 of bits, so there aren’t many different combinations to try.
This is an example of mixing data and control - changing the addresses within the IP packet (which is inside the WEP frame) also changes the control.
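A minimal sketch of why this works (a toy keystream rather than real RC4, and a made-up frame layout rather than a real 802.11 frame): because encryption is just XOR with a keystream, XORing chosen ciphertext bits flips exactly the same plaintext bits, so a known field such as the destination address can be rewritten without ever knowing the key.

    import os

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    # Toy "WEP-like" encryption: XOR the plaintext with a keystream.
    # (Real WEP derives the keystream from RC4(IV || key); a random
    # keystream is enough to show the malleability problem.)
    plaintext = b"DST=10.0.0.5 payload..."
    keystream = os.urandom(len(plaintext))
    ciphertext = xor_bytes(plaintext, keystream)

    # Attacker knows the layout: bytes 4..11 hold the destination address.
    # XORing the ciphertext with (old XOR new) rewrites that field.
    old, new = b"10.0.0.5", b"10.6.6.6"
    forged = bytearray(ciphertext)
    for i, d in enumerate(xor_bytes(old, new)):
        forged[4 + i] ^= d

    print(xor_bytes(bytes(forged), keystream))  # b"DST=10.6.6.6 payload..."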
Phreaking (phone hacking)
Phones back in the day sent control tones down the same line as the voice signal, e.g. a 2600 Hz tone told the network that a long-distance trunk was idle, and reproducing it let you seize the trunk and place free calls.
Cap’n Crunch cereal once included a toy whistle in a promotion that happened to produce exactly 2600 Hz - the same frequency as that control tone - so people bought the cereal and abused the whistle to get free calls.
The main problem was that tones used for control were sent along the data line.
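Just to show how simple that in-band control signal was, here is a small sketch that writes one second of a 2600 Hz sine wave to a WAV file using only the standard library (the filename and duration are arbitrary):

    import math, struct, wave

    RATE, FREQ, SECONDS = 44100, 2600, 1   # sample rate, tone frequency, length

    with wave.open("2600hz.wav", "w") as f:
        f.setnchannels(1)        # mono
        f.setsampwidth(2)        # 16-bit samples
        f.setframerate(RATE)
        for n in range(RATE * SECONDS):
            sample = int(32767 * math.sin(2 * math.pi * FREQ * n / RATE))
            f.writeframes(struct.pack("<h", sample))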
Guest Lecture - Doctor
There is a lot of bias going on even in the medical world, with patients, pharmaceutical companies, and with doctors.
Observation bias is when seeing what other people are doing influences our decisions. For example, a hormone tablet offered after breast cancer surgery reduces the chance of the cancer coming back, but it has side effects. A doctor who has just seen a patient decide to take the tablet may become biased towards supporting the same decision for the next patient.
There is also the idea of “quid pro quo”, something mentioned in the Social Engineering lecture. Sometimes pharmaceutical representatives take doctors/nurses out for a free lunch and tell them about a new drug. Because of this favour they have done, these doctors/nurses are more likely to recommend the company’s drugs.
What’s scary is that sometimes you think you are not being biased, but in reality you are subconsciously leaning towards one side or another. Next time I make a judgement on something, I’ll try to check if I am truly being fair, or if I’m just following my instincts.
A majority of problems occur because of human error, be it negligence or poor judgement. Take hygiene, for example. It is difficult to get doctors to wash their hands regularly or follow proper hygiene procedures, because they either forget or think it’s too much of a hassle.
A study showed that adding checklists in surgery halved the infection rate. Checklists contain really simple things and are cheap to create, but it’s usually the simple steps that aren’t followed which lead to poor hygiene. So instead of investing in high-end equipment to reduce bacteria levels slightly, in this instance it was more effective to bring about a culture change on the simple things.
Evening Lecture
Extended seminar - OPSEC (Operations security)
Protect information that could be used by the enemy against you
Identification of critical info
Analysis of threats
Analysis of vulnerabilities
Assessment of risk
Application of appropriate OPSEC measures
Random pieces of info aren’t useful, but together they can do damage.
Origin - Vietnam war
Snowden - “What would be the impact if my adversary were aware of my activities?”
If the risk in your threat model is too high - don’t do it.
How to OPSEC?
If you don’t need to share information, don’t.
If you do something you don’t want people to know about, ensure it can’t be traced back to you
Avoid bringing attention to yourself
This is hard to pull off, so trade-offs must be made, e.g. where do you want to be secure and where do you want to be visible. It’s hard to figure out how much you want to hide.
Avoid sharing information - only share what’s needed; beware of social media, metadata, and visible indicators (e.g. expensive clothes)
Keep your identity secret - e.g. use the Tor browser to remain anonymous
You can use a false identity - hard to maintain
Be forgettable - blend in with everyone else so that you don’t draw attention to yourself
“There are no case studies of good OPSEC - you never hear about them.”
Case studies
WW2 - an American congressman bragged that American subs survived because Japanese depth charges weren’t set deep enough. This cost US lives, as the Japanese then set them deeper.
MI6 agent exposed because his wife’s Facebook profile was public.
Harvard bomb threat
Bomb threat listed his exam hall
Guerrilla Mail adds an originating IP header - it showed the threat had been sent via Tor
Tor had been used on the campus wifi, which logged who was connected - don’t use a network that identifies you if you want to be anonymous
Silk road - Ross Ulbricht
Asked for help with setting up the site using his real email address
Used same alias on multiple sites
Tor and VPN used in wrong order - negligence
Richard’s comments
Someone with good OPSEC used different computers and toolkits for different personas.
Even first contact is dangerous - you can roll back in time to when people were young and connected their accounts etc.
Extended seminar - Passwords
Most passwords used are weak. It’s hard to remember and to type a complex password, so people tend not to use them.
Passwords often use personal information such as name and birthday. So hackers can try cracking passwords using this information.
Good passwords are long and avoid English grammar patterns.
Passwords are broken
Passwords are weak - full of meaning (47% based on name), often reused over multiple sites.
Personal Information Attack
Fake Facebook profile (Sally) - can see partner’s name, birthday, education, hobbies, pet’s name
cupp.py (Common User Passwords Profiler) - generates many candidate passwords based on personal information, including common character replacements (a -> 4)
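A rough sketch of the same idea (much cruder than the real tool, and the profile details below are made up): combine scraped facts and apply common substitutions to build a small targeted wordlist.

    from itertools import product

    info = ["sally", "rex", "1990"]            # hypothetical name, pet, birth year
    subs = {"a": "4", "e": "3", "o": "0", "s": "$"}

    def variants(word):
        yield word
        yield "".join(subs.get(c, c) for c in word)   # leetspeak variant

    candidates = set()
    for a, b in product(info, repeat=2):
        for wa, wb in product(variants(a), variants(b)):
            candidates.add(wa + wb)
            candidates.add(wa + wb + "!")

    print(len(candidates), sorted(candidates)[:5])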
Password Crackers
John the Ripper on Kali Linux
Hashcat - for hashes
Why are passwords bad?
hard to remember and type good passwords
complicated rules for generation (letters, numbers, symbols)
regular renewal
little incentive to create unique passwords
low probability, high impact risk
Password Storage: bad practice
Some are still stored in plain text - mostly small to medium sized companies
Facebook had stored plaintext passwords in an internal database
Bad hashing (unsalted MD5 or SHA-1) - precomputed rainbow tables can map these hashes straight back to passwords
Demo - using Linkedin passwords file
In 2012 - Linkedin was hacked, and passwords leaked.
Saeed had a file with the list of userid:hashedpassword.
Used Google to reverse the SHA-1 hashes - the hashes of common passwords are already indexed
Looking through the first 1000 lines, 4 people had the same password
10000 lines - 26 people
Password frequency in descending order: password, 123456, LinkedIn
If you have a bad hash, anyone with google can hack passwords
John the Ripper - automatically cracks the passwords based on the hashes
Salt - a random string combined with the password before hashing, so two users with the same password end up with two different hashes. This way, duplicate passwords are no longer obvious and precomputed tables don’t help. LinkedIn did not use a salt.
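A small demonstration of both points. The “leaked” value below is just the well-known SHA-1 of “123456”, and the salted example uses SHA-256 purely for illustration - a real system should use a slow hash, as covered in the next section.

    import hashlib, os

    leaked = "7c4a8d09ca3762af61e59520943dc26494f8941b"   # sha1("123456")

    # Unsalted: everyone with the same password has the same hash, so one
    # dictionary pass (or one Google search) cracks all of them at once.
    for guess in ["password", "123456", "linkedin"]:
        if hashlib.sha1(guess.encode()).hexdigest() == leaked:
            print("cracked:", guess)

    # Salted: the same password produces a different hash for every user.
    def salted(password):
        salt = os.urandom(16)
        return hashlib.sha256(salt + password.encode()).hexdigest()

    print(salted("123456"))
    print(salted("123456"))   # different output for the same password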
Best practices for storing passwords
Use a strong, slow password-hashing function such as scrypt or bcrypt (see the sketch after this list)
Store the salted hash, not the password
Salts should be long (at least 256 bits)
Don’t store password hints
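A minimal sketch of that advice using the third-party bcrypt package (assumes pip install bcrypt); bcrypt picks a random salt itself and embeds it in the stored hash, so only that one value needs to be kept.

    import bcrypt

    def store(password: str) -> bytes:
        # gensalt() chooses a random salt and a work factor; the salt is
        # embedded in the returned value, so only this hash needs storing.
        return bcrypt.hashpw(password.encode(), bcrypt.gensalt(rounds=12))

    def verify(password: str, stored_hash: bytes) -> bool:
        return bcrypt.checkpw(password.encode(), stored_hash)

    h = store("correct horse battery staple")
    print(verify("correct horse battery staple", h))   # True
    print(verify("Tr0ub4dor&3", h))                    # False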
Another solution - let a bigger company handle it and log in with Google or Facebook, though this has trade-offs of its own.
Maybe we can get rid of passwords altogether - but not yet.
Password Generation
Better ways to come up with memorable passwords
correcthorsebatterystaple
length of word creates enough entropy
avoids english grammar patterns
don’t use common words
passphrases
long, with an unusual vocabulary but normal syntax, so they stay memorable while being hard to generate automatically (see the sketch after this list)
memorable
initialisation of a phrase
take the first letter of each word of a phrase; this removes normal English letter-frequency patterns
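A sketch of passphrase generation with the standard library’s secrets module. The word list here is only a stand-in; a real list such as the EFF diceware list has about 7,776 words, giving roughly 12.9 bits of entropy per word, so four or five random words is already strong.

    import secrets

    # Stand-in word list; use a large published list (e.g. EFF diceware) in practice.
    words = ["correct", "horse", "battery", "staple", "orbit", "velvet",
             "cactus", "lantern", "pepper", "quartz", "walrus", "mango"]

    def passphrase(n_words=4):
        return " ".join(secrets.choice(words) for _ in range(n_words))

    print(passphrase())   # e.g. "walrus pepper orbit cactus"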
New policy - NIST 2016
don’t force regular password changes
don’t enforce composition rules
don’t provide password hints
allow user to opt for passwords to be viewed while typing
limit number of failed login attempts
Richard: just keep a list of known-bad passwords and reject any password on it.
Long passwords are better than just adding symbols and numbers.
Richard: we think our passwords are good, but we overestimate it. Humans are bad at generating passwords - we follow patterns.
Buckland’s Lecture
Merkle Damgard construction
https://en.wikipedia.org/wiki/Merkle%E2%80%93Damg%C3%A5rd_construction
SHA-2 - a family of hashes with different output sizes (SHA-256 means SHA-2 with a 256-bit output)
We have a long message, but we need a small hash, so we break the message into blocks.
[Diagram: the Merkle-Damgård construction - message blocks chained through a compression function]
This is a method of building collision resistant cryptographic hash functions from collision resistant one way compression functions.
It is used in hash algorithms such as MD5, SHA1 and SHA2.
1. The message is split up into fixed-size blocks.
2. The algorithm starts with an initial value, the initialisation vector (IV).
3. The result so far (initially just the IV) is combined with the next message block, then the compression function f is applied.
4. Step 3 is repeated until all blocks have been processed.
5. The last result may be passed through a finalisation function.
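A toy version of the construction. The compression function below is just truncated SHA-256 standing in for a real one, and the block size is tiny - this is meant to show the chaining and the length padding, not to be used as a hash.

    import hashlib

    BLOCK = 16   # bytes per message block (toy size)

    def compress(state, block):
        # Stand-in one-way compression function f.
        return hashlib.sha256(state + block).digest()[:BLOCK]

    def md_hash(message, iv=b"\x00" * BLOCK):
        # Merkle-Damgard strengthening: pad with 0x80, zeros, then the length.
        length = len(message).to_bytes(8, "big")
        pad = b"\x80" + b"\x00" * ((-(len(message) + 1 + 8)) % BLOCK)
        padded = message + pad + length

        state = iv
        for i in range(0, len(padded), BLOCK):
            state = compress(state, padded[i:i + BLOCK])   # step 3, repeated
        return state                                       # finalisation omitted

    print(md_hash(b"hello merkle-damgard").hex())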
Bank messaging problem
We want integrity and authentication. MACs give us both.
We can add the secret key before the message, and then hash it.
MAC: h(key|data)
The problem with this is that an intercepted message with known hash and message length can be extended. This is a length extension attack.
Take the intercepted hash as the chaining state, feed the appended blocks through the compression function f, and you get a valid hash for the extended message - without ever knowing the secret key.
HMAC (hash-based message authentication code) avoids this by hashing twice, with the key mixed in at both stages - roughly h( key | h(key|m) ), with the real construction XORing the key with fixed inner and outer pads - so an attacker cannot extend the inner hash without the key.
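Python’s standard hmac module implements exactly this kind of keyed hash. A sketch for the bank-message scenario (the key and message below are made up):

    import hashlib, hmac

    key = b"shared-secret-between-bank-and-branch"   # hypothetical shared key
    msg = b"PAY $100 TO BOB"

    # Naive MAC: sha256(key || message) -- vulnerable to length extension.
    naive_tag = hashlib.sha256(key + msg).hexdigest()

    # HMAC: the standard construction, not vulnerable to length extension.
    tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

    def verify(message, received_tag):
        expected = hmac.new(key, message, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, received_tag)   # constant-time compare

    print(verify(msg, tag))                      # True
    print(verify(b"PAY $9999 TO MALLORY", tag))  # False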
Digital Signature
DSA - Digital Signature Algorithm
A digital signature is used to verify authenticity of digital messages or documents. A valid digital signature gives the recipient strong reason to believe the message was truly from the sender (authentication) and that the message was not altered in transit (integrity).
Signing large files directly takes a long time, so the file is hashed first and the much shorter hash is what gets signed.
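A sketch of sign-then-verify using the third-party cryptography package (assumes pip install cryptography). It uses ECDSA, the elliptic-curve variant of DSA, and the ECDSA call hashes the message with SHA-256 itself - the "sign the hash, not the whole file" idea in practice.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.exceptions import InvalidSignature

    private_key = ec.generate_private_key(ec.SECP256R1())
    public_key = private_key.public_key()

    message = b"I will give Bob $100"
    signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))   # hash-then-sign

    try:
        public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
        print("signature valid")
    except InvalidSignature:
        print("signature invalid")

    try:
        public_key.verify(signature, b"I will give Bob $9999", ec.ECDSA(hashes.SHA256()))
    except InvalidSignature:
        print("tampered message rejected")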
Collisions with digital signatures
A collision attack on an n-bit hash takes roughly 2^(n/2) work (the birthday bound), so the effective collision resistance is only half the hash length.
Example: Alice has a pdf saying “I will give Bob $100″, then Alice signs it, and sends it along with the signature to Bob. If an attacker can create another document with the same hash as Alice’s document, then the attacker can use the same signature with this new document, so it looks like Alice has signed the new document.
The attacker keeps making invisible 1-bit changes (e.g. whitespace) to both a benign and a malicious document, hashing the variants until two of them produce identical hashes. Ask Alice to sign the benign document, and the signature can be reused on the malicious one.
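A toy demonstration of the birthday effect, using SHA-256 truncated to 24 bits so a collision shows up after only a few thousand attempts (the counter below stands in for invisible whitespace tweaks):

    import hashlib, itertools

    def toy_hash(data, bits=24):
        # Deliberately weak: keep only the top `bits` bits of SHA-256.
        return int.from_bytes(hashlib.sha256(data).digest(), "big") >> (256 - bits)

    seen = {}
    for i in itertools.count():
        doc = b"I will give Bob $100 " + str(i).encode()   # tweak an invisible detail
        h = toy_hash(doc)
        if h in seen:
            print("collision after", i + 1, "tries:", seen[h], "and", doc)
            break
        seen[h] = doc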
Passwords
Password attack types:
online - typing the password on a website manually
website can detect
offline - obtaining the file containing password hashes and cracking them locally (hashes can’t be decrypted; guesses are hashed and compared)
/etc/shadow
password file used to be protected by md5
Salt is random data added to the password before hashing. Salts help to prevent collisions in the case that users have the same password. Salts also protect against the use of rainbow tables, because the password will need to be hashed with the random salt to be in the table.
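A sketch of an offline dictionary attack against one salted record (the record is fabricated on the spot so the example runs standalone). Note that every guess has to be re-hashed with this particular user’s salt, which is exactly why a precomputed rainbow table of unsalted hashes is useless here.

    import hashlib, os

    # Fabricate a "leaked" record; in a real leak the attacker only sees these two values.
    salt = os.urandom(16)
    leaked_hash = hashlib.sha256(salt + b"hunter2").digest()

    wordlist = ["password", "123456", "letmein", "hunter2", "qwerty"]

    # Offline attack: hash each guess with the stolen salt and compare.
    for guess in wordlist:
        if hashlib.sha256(salt + guess.encode()).digest() == leaked_hash:
            print("cracked:", guess)
            break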
Text
Week 1 History of Hacking
In the old days not much could be gained from hacking - e.g. you might get access to someone’s tax file info, but could you get their money? No... These days there is much more potential!
200,000BC - Homo Sapiens appears, hence the first hackers appeared. It is a state of mind.
1970 - Phone phreaking - Tones sent down the phone line to control things e.g. hang up call. If you reproduce those tones you could control the phone network! You could get a free long distance call by using a whistle from a Captain Crunch cereal box. In band - when Data and Control are sent on the same band.
Out of spec - computing is all about meeting the spec, but security is about what happens when a system is used outside its specification. Bad guys don’t use things according to the specification, and such misuse is often not detected or even thought about. The trick is not simply to move everything into spec, but it is important to think about out-of-spec behaviour.
Richard’s Telnet hack - Telnet to his brother’s IP address. There was an error message, with some information about the broadband modem. He searched the modem documentation for the default password, and hacked!
Hacking was easy in the old days.
WEP security - everyone uses WPA now, but in the old days it was WEP. WiFi is like a hub - it broadcasts. The idea of WEP is to encrypt the radio transmission: XOR your data with a stream of pseudorandom numbers to encrypt, and XOR again to decrypt. However, if the victim is sending something like TCP/IP packets, the attacker knows the structure of the packet, so known fields can be guessed and the unknown parts brute-forced. Change the destination IP address to the attacker’s own address and rebroadcast the packet, and the access point decodes it and sends it to the attacker! This is a problem of Data and Control sharing the same band.
Microsoft Money 1994 - Do banking on your computer. The game changed. Now hacking has a real monetary benefit!!
Specialisation - Gold hats (finding vulnerabilities for money). The sophistication of attacks today is insane.
In old days hacking made no sense - there was little motivation. These days so much can be done by hacking.
Other things that I found interesting online about malware
ILOVEYOU - Apparently this was one of the fastest-spreading worms in computer history. The interesting thing about the worm was that upon opening it, it would email itself to everyone in your address book! What was worse was that the attachment was a simple VBS script, but Microsoft Outlook didn’t show the .vbs extension, so it looked like a legitimate file. Also, in 2000 the idea of not opening suspicious junk mail was not yet cemented in people’s minds, so I assume people were more vulnerable to this sort of attack than they are now.
CHERNOBYL (CIH) VIRUS - I regard this as one of the most destructive viruses ever: it tries to overwrite the computer’s flash BIOS, so an infected machine may not even boot, and you can’t fix it just by reimaging. It was released in 1998, and what was worse, several IBM computers shipped with the virus pre-installed. The payload activated on April 26, 1999, destroying huge numbers of computers.
temporal-index · 8 years
Quote
The hacker ethic helped make hackers particularly appealing to Stewart Brand and Kevin Kelly. Soon after Levy had shown them his book, Brand and Kelly got in touch with members of the hacking community, including Lee Felsenstein; Bill Budge, a software author; Andy Hertzfeld, a key member of Apple’s Macintosh development team; and Doug Carlston, founder and president of Broderbund Software Inc. Together they invited some four hundred self-described hackers to pay ninety dollars each to join them, the Whole Earth crew, and about twenty mainstream journalists for a three-day weekend in November 1984 at Fort Cronkhite, a former army base in the Marin Headlands just across the Golden Gate Bridge from San Francisco. [...] Something like 150 hackers actually arrived. Among others, they included luminaries such as Steve Wozniak of Apple, Ted Nelson, free software pioneer Richard Stallman, and Ted Draper—known as Captain Crunch for his discovery that a toy whistle he found in a box of the cereal gave just the right tone to grant him free access to the phone system. Some of the hackers worked alone, part-time, at home; others represented such diverse institutions as MIT, Stanford, Lotus Development, and various software makers. Most had come to meet others like themselves. Their hosts offered them food, computers, audiovisual supplies, and places to sleep— and a regular round of facilitated conversations. By all accounts, two themes dominated those conversations: the definition of a hacker ethic and the description of emerging business forms in the computer industry. The two themes were, of course, entwined. The hacker ethic that Levy described—the single thread ostensibly running through all of the participants’ careers—had emerged at a moment when sharing products and processes improved profits for all. By the mid-1980s, however, the finances of computer and software development had changed radically. As Stewart Brand pointed out, in what would soon become a famous formulation, information-based products embodied an economic paradox. “On the one hand,” he said, “information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.” Throughout the conference, hackers discussed different ways they had managed this dilemma. Some, like Richard Greenblatt, an early and renowned MIT hacker, argued that source code must always be made freely available. Others, like game designer Robert Woodhead, suggested that they would happily give away the electronic tools they had used to make products such as computer games, but they would not give away the games themselves. “That’s my soul in that product,” explained Woodhead. “I don’t want anyone fooling with that.” In discussion Bob Wallace said he had marketed his text editor PC-WRITE as shareware (in shareware, users got the software for free but paid if they wanted documentation and support), whereas Andrew Fluegelman indicated that he had distributed his telecommunications program PC-TALK as freeware (users voluntarily paid a small fee to use the software). Others, including Macintosh designer Bill Atkinson, defended corporate prerogatives, arguing that no one should be forced to give away the code at the heart of their software. 
The debate took on particular intensity because, according to the hacker ethic, certain business practices—like giving away your code—allowed you to claim the identity of hacker. In part for this reason, participants in a morning-long forum called “The Future of the Hacker Ethic,” led by Levy, began to focus on other elements of the hacker’s personality and to modify their stance on the free distribution of information goods. For instance, participants agreed that hackers were driven to compute and that they would regard people who impeded their computing as bureaucrats rather than legitimate authorities. By and large, they agreed that although the free dissemination of information was a worthy ideal, in some cases it was clearly only an ideal. If they could not agree on proper hacker business practice, they could agree that being a hacker—in this case, being the sort of person who was invited to the Hackers’ Conference—was valuable in its own right. Lee Felsenstein explained, “That little bit of cultural identity [was] extremely important.” In the popular press, hackers had been characterized as machine-obsessed, antisocial, and potentially criminal loners. Gathered in the stucco halls of Fort Cronkhite, hackers could recognize themselves as something else. Lee Felsenstein recalls feeling empowered: “Don’t avoid the word Hackers. Don’t let somebody else define you. No apologies: we’re hackers. We define what a hacker is...nobody else.” In the end, the group did not come to any consensus on the right approach to take toward the emerging challenges of the software industry. But they had begun to reformulate their own identities, partially in terms of Whole Earth ideals. In the Hackers’ Conference, Brand and company provided computer workers with a venue in which to develop and live a group identity around the idea of hacking and to make sense of emerging economic forms in terms of that identity. This work had the effect of rehabilitating hackers in the public eye, but it also explicitly and securely linked Whole Earth people and the Whole Earth ethos to the world of computing. Virtually all of the journalistic reports that came from the Conference echoed John Markoff’s comments in Byte magazine: “Anyone attending would instantly have realized that the stereotype of computer hackers as isolated individuals is nowhere near accurate.” Some of those same reports picked up on another theme as well, however. Several either quoted or paraphrased Ted Nelson’s exclamation “This is the Woodstock of the computer elite!” One listed Stewart Brand among the “luminaries of the personal computer ‘revolution.’” Another described Brand as a “long-time supporter of hackers.” Quietly, almost without noticing it, the invited reporters had begun to intertwine the countercultural play of Woodstock, and countercultural players such as Brand, with an industry and a work style that had emerged within and at the edges of such culturally central institutions as MIT, Stanford, and Hewlett-Packard. Hackers were not simply highly individualistic and innovative engineers; they were cultural rebels.
Fred Turner, From Counterculture to Cyberculture (2006)
Text
Hacking for Regular IT People – History and Evolution
For most people these days, the word “hacking” conjures images of nefarious intruders attempting to gain illegal access to financial institutions, corporations, and private citizens’ computers for theft and profit. Exploitation of unsecured computer systems, cloud services, and networks makes headlines daily, with large breaches of private consumer information becoming a regular event. Various studies predict the impact of global cybercrime, with some estimating damages to exceed $6 trillion by 2021. The impact is felt all over the world, with organizations rallying to protect their data and spending over $80 billion on cyber security in 2016.
There does remain some differentiation in the hacking world between “good” and “evil,” with a variety of moral postures in between – each of these terms subjective and dependent on the point of view of the person using them, of course. There are the “good guys” (white hat hackers), the “bad guys” (black hat hackers), and gray hats in between – labels borrowed from the white and black hats that marked the good and bad cowboys in old Western movies.
 Tracing its Origins
 Hacking in its infancy wasn’t about exploitation or theft. It also didn’t have anything to do with computers, necessarily. It was a term used to describe a method of solving a problem or fixing something using unorthodox or unusual methods. MacGyver, from the 1985 television show of the same name, was a hacker. He used whatever he had available to him at the moment, and his Swiss Army knife, to “hack” his way out of a jam.
The modern sense of the word hack has its origins dating back to the M.I.T. Tech Model Railroad Club minutes in 1955.
               “Mr. Eccles requests that anyone working or hacking on the electrical system turn off the power to avoid fuse blowing.”
 There are some positive uses of the word in modern society, the website Lifehacker as one example, showing people how to solve everyday problems in unconventional, totally legal ways.
 Captain Crunch
Early hacking took shape with tech-savvy individuals like John Draper, aka Captain Crunch, attempting to learn more about programmable systems, specifically phone networks. In what was coined “phreaking” at the time, they would hack the public switched telephone network, often just for fun, to learn as much as they could about it, or even for free phone calls. John Draper’s infamous nickname came from the fact that a toy whistle found in Cap’n Crunch cereal emitted a 2600 Hz tone, which phone carriers used to signal a telephone switch to end a call, leaving an open carrier line. That line could then be used to make free phone calls.
 There were many such exploits on older telephone systems. In the mid-80’s I used to carry a safety pin with me at all times. Why? To make free phone calls. I didn’t understand the mechanism of how this worked at the time, but I knew that if I connected the pin end to the center hole of a pay-phone mouthpiece, and touched the other end to any exposed metal surface on the phone, often the handset cradle, you would hear a crackle or clicking noise, followed by a dial tone, and you would then be able to dial any number on the phone, without putting any money in it.
 Later I would learn that this was due to the fact that older phone systems used ground-start signaling which required the phone line to be grounded to receive dial tone. Normally this grounding was accomplished with a coin inserted into the phone, which controlled a switch that would ground the line, but my method using a safety pin did the same thing.
 I’m assuming of course, that the statute of limitations has run out on these types of phone hacks…
 Hacking Motivation
 Phone phreakers like Captain Crunch and even his friend Steve Wozniak (yes, the Woz) later on would develop these techniques further to hack the phone system and more often than not, for relatively harmless purposes. Draper cites a number of pranks they pulled through their phone hacking that included:
 Calling the Pope to confess over the phone
Obtaining the CIA crisis hotline to the White House to let them know they were out of toilet paper
Prank-calling Richard Nixon after learning that “Olympus” was the code name used when someone wanted to speak with him on the phone
 Draper would eventually get caught and serve jail time for his phone escapades, but what he had done wasn’t done for profit or malicious reasons. He did it to learn how phone systems worked. Nothing more.
 Kevin Mitnick, arguably the world’s most infamous hacker speaks in his books and his talks about the same thing. His adventures in hacking computer systems were done mostly “because he could” not because he thought there would be any big payoff from doing so. He found it a challenge and wanted to see how far he could get into some of these early networks and systems.
 Hacking for the IT Professional
For the modern IT professional, hacking continues to hold a few different meanings. The first is the thing you must protect your network and your information from – malicious hacking. The next might be your approach to solving problems in non-traditional ways – hacking together a fix or solution to an IT problem. The next might be exposing yourself to the methods and techniques used by the black hat community in order to better understand and protect yourself from them – arguably white hat hacking.
 IT staff, especially those with responsibility for security can and should learn, practice, and develop some hacking skills to understand where their main vulnerabilities lie. How do we do this without getting arrested?
 Over the next several posts, I'm going to discuss different options that you have, as the everyday IT pro, to learn and develop some practical, real-world hacking skills, safely and legally.
 That said, I will offer a disclaimer here and in subsequent posts: Please check your local, state, county, provincial, and/or federal regulations regarding any of the methods, techniques, or equipment outlined in these articles before attempting to use any of them. And always use your own private, isolated test/lab environment.
 Remember how much trouble Matthew Broderick got himself into in WarGames? And all he wanted to do was play some chess.
The post Hacking for Regular IT People – History and Evolution appeared first on Computer Systems Design.
from Computer Systems Design http://ift.tt/2viAkqc