#25 quintillion years later….
Text
Texas: Family members arrested for hiding fugitive Muslim who (honor) killed his two daughters (for dating non-Muslims)
FOR IMMEDIATE RELEASE Friday, August 28, 2020
Yaser Said Family Members Charged With Concealing '10 Most Wanted' Suspect From Arrest
Two relatives of Yaser Said – a capital murder suspect arrested Wednesday – have been charged with helping Mr. Said evade capture for more than 12 years, announced U.S. Attorney for the Northern District of Texas Erin Nealy Cox.
Islam Yaser-Abdel Said, Yaser’s 32-year-old son, and Yassein Said, Yaser’s 59-year-old brother, were arrested Wednesday in Euless, Texas by the FBI’s Dallas Violent Crimes Task Force and charged via criminal complaint with concealing a person from arrest. Both made their initial appearances before U.S. Magistrate Judge Hal R. Ray in Fort Worth Friday afternoon.
Yaser Said, 63, had been a fugitive from justice since New Year’s Day 2008, when he allegedly murdered his teenage daughters, Amina and Sarah. According to law enforcement, Yaser drove them to a location in Irving and shot them to death inside his taxicab, abandoning their bodies inside the vehicle. The following day, he was charged by the state with two counts of capital murder. In Dec. 2014, Yaser was placed on the FBI’s “Ten Most Wanted” list, where he remained until his capture this week.
“For years, Islam and Yassein Said — Sarah and Amina’s own brother and uncle – allegedly harbored the girls’ killer,” said U.S. Attorney Erin Nealy Cox. “In concealing Yaser Said from arrest, not only did these men waste countless law enforcement hours in the hunt for a brutal fugitive, they also delayed justice for Sarah and Amina. Thankfully, their day of reckoning has finally arrived. We are hopeful all three arrests will bring a measure of comfort to the girls’ mother, relatives, and friends.”
“The defendants provided aid and comfort to an individual who is accused of murdering his own daughters,” said FBI Dallas Special Agent in Charge of the Dallas Field Division Matthew DeSarno. “Harboring a dangerous fugitive is unacceptable. The FBI and our law enforcement partners will pursue anyone who helps a criminal evade capture.”
According to the criminal complaint against Islam and Yassein, nine years after the murder, on Aug. 14, 2017, investigators caught a break: A maintenance worker at the Copper Canyon Apartment complex in Bedford, Texas, spotted Yaser inside a unit leased to his son, Islam.
Dispatched to repair a water leak, the maintenance worker knocked on the apartment door, but when no one answered, he used a key to unlock it. To his surprise, he found the interior deadbolt locked, indicating someone was inside the apartment. He knocked again, announcing himself as a maintenance worker. A tall, middle-aged Middle Eastern man opened the door and permitted him to make the repairs.
The maintenance worker later reported the incident to his apartment manager, who was aware of Islam’s relationship to a fugitive. The maintenance worker confirmed to his boss that the photo on Yaser Said’s wanted poster matched the man he’d seen in the apartment, and the pair immediately contacted the FBI. That same day, FBI Dallas dispatched a Violent Crimes Task Force agent to interview the maintenance worker. The agent showed him photos of Yaser’s brothers, along with Yaser himself. The maintenance worker pinpointed Yaser as the man he’d seen in the apartment.
At approximately 6:30 p.m. that evening, the same agent attempted to interview Islam, asking him for permission to search the apartment. Islam, upset, allegedly refused to cooperate. He then placed a call, saying, “We have a problem.” AT&T records indicate Islam was in contact with his uncles.
At 1 a.m. the following morning, the FBI Dallas SWAT team executed a search warrant on Islam’s apartment. Finding the front door locked, they were forced to breach the door. They did not discover anyone inside, but observed the sliding glass patio door open. Underneath the patio, they noticed a bush with broken branches, suggesting someone had jumped off the patio and landed on the bush. Next to the flattened bush, they found a pair of eyeglasses, which they collected as evidence.
Agents also collected several pieces of evidence from inside the apartment, including several cigarette butts and a toothbrush inside a luggage bag in a closet. The FBI Laboratory in Quantico, Virginia, cross-referenced DNA found on these items with DNA collected from Amina and Sarah. Analysts determined a 1-in-5.3-quintillion probability that the DNA found on the cigarette butts, eyeglasses, and toothbrush came from Amina and Sarah’s biological father: Yaser Said.
Twelve days after the raid on the apartment, on August 26, 2017, Customs and Border Protection located Islam more than 1,000 miles away, inside a car selected for secondary screening at the U.S.-Canada border. The driver of the car, Hany Medhat, told CBP agents that he and Islam had decided to take a “crazy road trip”; however, a search of his phone revealed he’d told his employer he had a “family emergency.”
Three years later, on Aug. 17, 2020, FBI agents began 24-hour surveillance of a home in Justin, Texas, purchased in the name of Dalal Said, Yassein’s daughter. They watched Islam and Yassein allegedly drive up to the home, deliver grocery bags inside, and carry trash bags back to their car.
Two days later, at 11:51 p.m. on Aug. 19, after Yassein and Islam had departed the residence, agents observed what appeared to be the shadow of a person walk across the interior of the residence in front of a window twice.
On Aug. 25, agents once again observed Islam and Yassein exit the home with two bags of trash.
The agents followed the pair to a shopping center in Southlake, TX, approximately 19 miles from the house. They watched as Islam exited the vehicle, and Yassein pulled around to the side of the shopping center. Once the vehicle had pulled out of the parking lot, agents began to dig through the garbage cans on the side of the shopping center.
Inside the garbage cans, they located two bags matching the bags they’d seen the men carrying out to the car. They seized the bags and transported them back to the FBI Field Office, where they found numerous cigarette butts and other garbage.
The following day, agents executed a search warrant on the home, where they arrested Yaser Said. They arrested Yassein and Islam at a separate location in Euless, Texas.
A criminal complaint is merely an allegation of wrongdoing, not evidence. Like all defendants, Yassein and Islam Said are presumed innocent unless and until proven guilty in a court of law, as is Yaser Said.
If convicted, Yassein and Islam face up to five years in federal prison. Yaser, indicted by the state on capital murder charges, faces the death penalty.
--------------------------------
More via Texas cabbie accused in 'honor killing' of teenage daughters didn't want to raise 'whores,' wife recalled
The Texas cab driver accused of murdering his two teenage daughters more than a decade ago was unhappy that they were dating non-Muslims and was unwilling to raise "whores as daughters," according to his family.
Patricia “Tissie” Owens told Fox News that her husband became upset after learning that his daughters Amina and Sarah, ages 18 and 17 respectively, had started dating non-Muslims. She shared rare details about their deaths and Said’s disappearance in the Fox Nation special “A Question of Honor.”
Unable to reach Said, Owens called his brother pleading for information about the whereabouts of her husband and children.
"He said he didn’t know," she recollected, "but he said Yaser didn’t want to raise whores as daughters."
While Said remained on the run, Owens and her oldest son Islam moved in with his family. It became clear during those two months that some of Said's siblings not only believed he killed his daughters, but defended his alleged motives for doing so.
"Just hearing them talk and being around them the months that I was, I picked up on things," she said. " One of his brothers told me that I was really lucky that he [Said] left their bodies for me to find, for me to put my girls to rest. If it was him, nobody would find his girls."
7 notes
Text
Does Bitcoin Make Good Sense?
What is the hottest technology trend of 2013? Most experts will point to the rise of bitcoin.
Bitcoin is on the rise as a digital currency used worldwide. It is a type of money controlled and stored entirely by computers spread across the Internet. More people and more businesses are starting to use it.
Unlike a plain U.S. dollar or euro, bitcoin is also a form of payment system, sort of like PayPal or a credit card network.
You can hold on to it, spend it, or trade it. It can be moved around about as cheaply and easily as sending an email.
Bitcoin allows you to make transactions without revealing your identity. Yet the system operates in plain public view.
Anyone can view these transactions, which are recorded online. This transparency can drive a new kind of trust in the economy. It even resulted in the downfall of an illegal drug marketplace, discovered shuffling funds using bitcoin and shut down by the U.S. government.
In many ways bitcoin is more than just a currency. It's a re-engineering of international finance. It can break down barriers between countries and free currency from the control of federal governments. However, it still relies on the U.S. dollar for its value.
The technology behind this is fascinating, to say the least. Bitcoin is controlled by open source software. It operates according to the laws of mathematics, and by the people who collectively oversee this software. The software runs on thousands of machines worldwide, but it can be changed. Changes can only occur when the majority of those overseeing the software agree to them.
The bitcoin software system was built by computer programmers about five years ago and released onto the Internet. It was designed to run across a large network of machines called bitcoin miners. Anyone on earth could operate one of these machines.
This distributed software seeded the new currency, creating a small number of bitcoins. Basically, bitcoins are just long digital addresses and balances, stored in an online ledger called the "blockchain." But the system was also designed so that the currency would slowly expand, and so that bitcoin miners would be encouraged to keep the system itself growing.
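To make the ledger idea concrete, here is a minimal, illustrative sketch in Python of a hash-linked ledger. The block fields, addresses, and amounts are invented for the example; this is not the real Bitcoin data format.

```python
import hashlib, json

# Toy hash-linked ledger: each block commits to the previous block's hash,
# so tampering with any earlier block changes every later hash.
def make_block(prev_hash: str, transactions: list) -> dict:
    body = {"prev": prev_hash, "txs": transactions}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

genesis = make_block("0" * 64, [{"to": "addr1", "amount": 50}])
block2 = make_block(genesis["hash"], [{"from": "addr1", "to": "addr2", "amount": 25}])
print(block2["hash"])
```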
When the system creates new bitcoins, it gives them to the miners. Miners keep track of all the bitcoin transactions and add them to the blockchain ledger. In exchange, they get the privilege of awarding themselves a few extra bitcoins. Right now, 25 bitcoins are paid out to the world's miners about six times per hour. Those rates can change over time.
Miners verify bitcoin trades through electronic keys. The keys work in conjunction with a complicated email-like address. If the numbers don't add up, a miner can reject the transaction.
Back in the day, you could do bitcoin mining on your home PC. But as the price of bitcoins has shot up, the mining game has morphed into a bit of a space race. Professional players, custom-designed hardware, and rapidly expanding processing power have all jumped on board.
Today, all of the computers vying for those 25 bitcoins perform 5 quintillion mathematical calculations per second. To put it in perspective, that's about 150 times as many mathematical operations as the world's most powerful supercomputer.
And mining can be pretty risky. Companies that build these custom machines typically charge you for the hardware upfront, and every day you wait for delivery is a day when it becomes harder to mine bitcoins. That reduces the amount of money you can earn.
Why do these bitcoins have value? It's pretty simple. They've evolved into something that a lot of people want, and they're in limited supply. Though the system continues to crank out bitcoins, this will stop when it reaches 21 million, which was designed to happen in about the year 2140.
Bitcoin has fascinated many in the tech community. However, if you follow the financial news, you know the value of a bitcoin can fluctuate greatly. It originally sold for $13 in the early part of 2013. Since then it has hit $900 and continues to bounce up and down wildly on a daily basis.
The real future of bitcoin depends much more on the views of a few investors. In a recent interview on Reddit, Cameron Winklevoss, one of the twins involved in the Facebook lawsuit with Mark Zuckerberg and an eager bitcoin enthusiast, predicted that one bitcoin could reach a value of $40,000. That is more than forty times what it is today.
A more sober view suggests that speculators will eventually cause bitcoin to crash. It has yet to gain the traction needed to be used as a currency in the retail environment, seemingly a must for its long-term prospects. Its wild fluctuations also make it a poor bet for investment purposes.
Still, bitcoin pushes the boundaries of technological progress. Much like PayPal in its infancy, the marketplace will have to decide whether the risk associated with this type of digital currency and payment system makes for good long-term business sense.
1 note
Text
Bitcoin Hash Rate Drops Almost 45% Since 2020 Peak
The Bitcoin (BTC) network hash rate has just taken a steep plummet and is now down almost 45% from its 2020 peak.
The network’s hash rate sank from 136.2 quintillion hashes per second (EH/s) on March 1 to 75.7 EH/s today, March 26, according to data from Blockchain.com.
Coin.dance — another analytics site for the coin’s blockchain — reveals a similar pattern, if less stark. The site reported a 2020 peak of roughly 150 EH/s on March 5, today down to 105.6 EH/s — a 29% decrease.
Bitcoin network hash rate, April 19, 2019–March 27, 2020, Source: blockchain.com
Hash rate and difficulty
The hash rate of a cryptocurrency is a measure of the number of calculations that its network can perform each second.
A higher hash rate means greater competition among miners to validate new blocks; it also increases the number of resources needed for performing a 51% attack, making the network more secure.
After a volatile month in which Bitcoin saw dramatic, if short-lived, losses of as high as 60% to around $3,600 in mid-March, the network’s difficulty yesterday decreased by close to 16%.
Difficulty — or how challenging it is computationally to solve and validate a block on the blockchain — is set to adjust every 2016 blocks, or two weeks, in order to maintain a consistent ~10-minute block verification time.
This has a close connection to the network’s hash rate. Typically, when the network sees a low level of participating mining power, the difficulty will tumble — while in periods of intense network participation, it rises, working as a counterbalancing mechanism.
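As a rough illustration of that counterbalance, here is a minimal sketch in Python of a Bitcoin-style retargeting rule. It is a simplification, not consensus code; the function and constants are mine, though the 2016-block window, ten-minute spacing, and 4x clamp reflect the protocol's published behavior.

```python
# Sketch of Bitcoin-style difficulty retargeting (simplified).
RETARGET_BLOCKS = 2016
TARGET_SPACING = 600  # seconds: one block every ten minutes

def retarget(old_difficulty: float, actual_seconds: float) -> float:
    expected = RETARGET_BLOCKS * TARGET_SPACING   # two weeks of ideal spacing
    ratio = expected / actual_seconds             # blocks found slowly -> ratio < 1
    ratio = max(0.25, min(4.0, ratio))            # adjustments are clamped to 4x per period
    return old_difficulty * ratio

# If hash power drops and the last 2016 blocks take ~19% longer than two weeks,
# difficulty falls by about 16%, matching the adjustment described above.
print(retarget(100.0, 1_209_600 * 1.19))          # -> ~84.0
```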
As reported yesterday, the last downward adjustment in difficulty was on February 25 of this year, when the coin’s price was around $9,900. Just three days later, it dropped to around $8,800, and by March 14, to nearly $4,800 — and as low as $3,600 on some exchanges, as noted above.
Interpreting the data
This relationship between price, hash rate, and difficulty has historically generated a trend that some analysts refer to as a “miners’ capitulation cycle.”
The theory holds that while Bitcoin’s price remains high, and mining is profitable, both hash rate and difficulty inch upwards until they reach a threshold at which miners are squeezed and forced to liquidate more and more of their holdings to cover their expenses — leading to an increased supply of Bitcoin on the market.
The “capitulation point” — at which some can no longer afford to keep mining altogether — then involves a decline in hash rate (reflecting lower participation) — as can be seen today — and a subsequent reset in the network’s difficulty.
According to data from btc.com, Bitcoin’s difficulty is currently forecast to decrease by a further 16% in 14 days’ time.
0 notes
Text
What is Big Data? – A Beginner’s Guide to the World of Big Data
There is no place where Big Data does not exist! Curiosity about what Big Data is has been soaring in the past few years. Let me tell you some mind-boggling facts! Forbes reports that every minute, users watch 4.15 million YouTube videos, send 456,000 tweets on Twitter, and post 46,740 photos on Instagram, and that 510,000 comments are posted and 293,000 statuses updated on Facebook!
Just imagine the huge chunk of data that is produced with such activities. This constant creation of data using social media, business applications, telecom and various other domains is leading to the formation of Big Data.
In order to explain what Big Data is, I will be covering the following topics:
Evolution of Big Data
Big Data Defined
Characteristics of Big Data
Big Data Analytics
Industrial Applications of Big Data
Scope of Big Data
Evolution of Big Data
Before exploring what Big Data is, let me begin by giving some insight into why the term has gained so much importance.
When was the last time you remember using a floppy disk or a CD to store your data? Let me guess: you had to go way back to the early 21st century, right? Manual paper records, files, floppies and discs have now become obsolete. The reason for this is the exponential growth of data. People began storing their data in relational database systems, but with the hunger for new inventions, technologies, and applications with quick response times, and with the introduction of the Internet, even that is insufficient now. This generation of continuous and massive data can be referred to as Big Data. There are a few other factors that characterize Big Data, which I will explain later in this blog.
Forbes reports that 2.5 quintillion bytes of data are created each day at our current pace, and that pace is only accelerating. The Internet of Things (IoT) is one technology which plays a major role in this acceleration. 90% of all data today was generated in the last two years.
Big Data Definition
What is Big Data?
So before I explain what Big Data is, let me also tell you what it is not! The most common myth associated with Big Data is that it is just about the size or volume of data. But actually, it's not just about the "big" amounts of data being collected. Big Data refers to the large amounts of data pouring in from various data sources in different formats. Even before, there was huge data being stored in databases, but because of its varied nature, traditional relational database systems are incapable of handling it. Big Data is much more than a collection of datasets with different formats; it is an important asset which can be used to obtain innumerable benefits.
The three different formats of big data, illustrated in the sketch after this list, are:
Structured: Organised data format with a fixed schema. Ex: RDBMS
Semi-Structured: Partially organised data which does not have a fixed format. Ex: XML, JSON
Unstructured: Unorganised data with an unknown schema. Ex: Audio, video files etc.
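As a quick, hypothetical illustration of the three formats, the Python sketch below represents a customer record as a fixed-schema row, as semi-structured JSON, and as raw unstructured bytes. The table and field names are invented for the example.

```python
import json, sqlite3

# Structured: a fixed schema enforced by a relational database (RDBMS).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, name TEXT, city TEXT)")
db.execute("INSERT INTO customers VALUES (?, ?, ?)", (1, "Asha", "Mumbai"))

# Semi-structured: self-describing JSON; fields can vary from record to record.
record = json.loads('{"id": 1, "name": "Asha", "interests": ["hadoop", "spark"]}')

# Unstructured: raw bytes with no schema (e.g., an audio clip);
# extracting meaning requires further analysis.
audio_sample = bytes(1024)  # placeholder for raw audio data
```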
Characteristics of Big Data
These are the following characteristics associated with Big Data:
Five V's of Big Data - What is Big Data - Edureka
The image above depicts the five V's of Big Data: Volume, Velocity, Variety, Veracity, and Value. But as the data keeps evolving, so will the V's. I am listing five more V's which have developed gradually over time:
Validity: correctness of data
Variability: dynamic behaviour
Volatility: tendency to change in time
Vulnerability: vulnerable to breach or attacks
Visualization: visualizing meaningful usage of data
Big Data Analytics
Now that I have told you what Big Data is and how it's being generated exponentially, let me present a very interesting example of how Starbucks, one of the leading coffeehouse chains, is making use of Big Data.
I came across an article by Forbes which reported how Starbucks made use of Big Data to analyse the preferences of their customers and to enhance and personalize their experience. They analysed their members' coffee-buying habits, from their preferred drinks to what time of day they usually order. So, even when people visit a "new" Starbucks location, that store's point-of-sale system is able to identify the customer through their smartphone and give the barista their preferred order. In addition, based on ordering preferences, the app will suggest new products that the customer might be interested in trying. This, my friends, is what we call Big Data Analytics.
Basically, Big Data Analytics is largely used by companies to facilitate their growth and development. This majorly involves applying various data mining algorithms on the given set of data, which will then aid them in better decision making.
There are multiple tools for processing Big Data such as Hadoop, Pig, Hive, Cassandra, Spark, Kafka, etc. depending upon the requirement of the organisation.
Big Data Applications
These are some of the domains that Big Data applications have revolutionized:
Entertainment: Netflix and Amazon use Big Data to make shows and movie recommendations to their users.
Insurance: Insurers use Big Data to predict illnesses and accidents, and to price their products accordingly.
Driver-less Cars: Google’s driver-less cars collect about one gigabyte of data per second. These experiments require more and more data for their successful execution.
Education: Big-data-powered learning tools, adopted in place of traditional lecture methods, enhance students' learning and help teachers track their performance better.
Automobile: Rolls Royce has embraced Big Data by fitting hundreds of sensors into its engines and propulsion systems, which record every tiny detail about their operation. The changes in data in real-time are reported to engineers who will decide the best course of action such as scheduling maintenance or dispatching engineering teams should the problem require it.
Government: A very interesting use of Big Data is in the field of politics, to analyse patterns and influence election results. Cambridge Analytica Ltd. is one such organisation, driven entirely by data, which works to change audience behaviour and plays a major role in the electoral process.
Scope of Big Data
Numerous job opportunities: Career opportunities in the field of Big Data include Big Data Analyst, Big Data Engineer, Big Data Solution Architect, etc. According to IBM, 59% of all Data Science and Analytics (DSA) job demand is in Finance and Insurance, Professional Services, and IT.
Rising demand for analytics professionals: An article by Forbes reveals that "IBM predicts demand for Data Scientists will soar by 28%". By 2020, the number of jobs for all US data professionals will increase by 364,000 openings to 2,720,000, according to IBM.
Salary aspects: Forbes reported that employers are willing to pay a premium of $8,736 above median bachelor's and graduate-level salaries, with successful applicants earning a starting salary of $80,265.
Adoption of Big Data analytics: There has been immense growth in the usage of Big Data analysis across the world. [Source: https://www.edureka.co/blog/what-is-big-data/]
Beginners & Advanced level big data and hadoop training in Mumbai. Asterix Solution's 25 Hour Docker Training gives broad hands-on practicals.
0 notes
Text
A View to the Cloud
What really happens when your data is stored on far-off servers in distant data centers
Illustration: Francesco Muzzi/Story TK
We live in a world that’s awash in information. Way back in 2011, an IBM study estimated that nearly 3 quintillion—that’s a 3 with 18 zeros after it—bytes of data were being generated every single day. We’re well past that mark now, given the doubling in the number of Internet users since 2011, the powerful rise of social media and machine learning, and the explosive growth in mobile computing, streaming services, and Internet of Things devices. Indeed, according to the latest Cisco Global Cloud Index, some 220,000 quintillion bytes—or if you prefer, 220 zettabytes—were generated “by all people, machines, and things” in 2016, on track to reach nearly 850 ZB in 2021.
Much of that data is considered ephemeral, and so it isn’t stored. But even a tiny fraction of a huge number can still be impressively large. When it comes to data, Cisco estimates that 1.8 ZB was stored in 2016, a volume that will quadruple to 7.2 ZB in 2021.
Our brains can’t really comprehend something as large as a zettabyte, but maybe this mental image will help: If each megabyte occupied the space of the period at the end of this sentence, then 1.8 ZB would cover about 460 square kilometers, or an area about eight times the size of Manhattan.
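As a sanity check on that mental image, here is a quick back-of-the-envelope calculation in Python. The assumed period size of roughly 0.5 millimeter on a side is my own estimate, not a figure from the article.

```python
# Back-of-the-envelope check: 1.8 ZB visualized as printed periods.
bytes_total = 1.8e21                   # 1.8 zettabytes
periods = bytes_total / 1e6            # one period per megabyte
period_area_mm2 = 0.5 * 0.5            # assume a period covers ~0.5 mm x 0.5 mm
area_mm2 = periods * period_area_mm2
area_km2 = area_mm2 / 1e12             # 1 km^2 = 1e12 mm^2
print(f"{area_km2:.0f} km^2")          # -> ~450 km^2, close to the article's 460
```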
Of course, an actual zettabyte of data doesn’t occupy any space at all—data is an abstract concept. Storing data, on the other hand, does take space, as well as materials, energy, and sophisticated hardware and software. We need a reliable way to store those many 0s and 1s of data so that we can retrieve them later on, whether that’s an hour from now or five years. And if the information is in some way valuable—whether it’s a digitized family history of interest mainly to a small circle of people, or a film library of great cultural significance—the data may need to be archived more or less indefinitely.
Projected global data-center storage capacity, 2016 to 2021
Source: Cisco Global Cloud Index
The grand challenge of data storage was hard enough when the rate of accumulation was much lower and nearly all of the data was stored on our own devices. These days, however, we’re sending off more data to “the cloud”—that (forgive the pun) nebulous term for the remote data centers operated by the likes of Amazon Web Services, Google Cloud, IBM Cloud, and Microsoft Azure. Businesses and government agencies are increasingly transferring more of their workloads—not just peripheral functions but also mission-critical work—to the cloud. Consumers, who make up a growing segment of cloud users, are turning to the cloud because it allows them to access content and services on any device wherever they go.
And yet, despite our growing reliance on the cloud, how many of us have a clear picture of how the cloud operates or, perhaps more important, how our data is stored? Even if it isn’t your job to understand such things, the fact remains that your life relies, in more ways than you probably know, on the very basic process of storing 0s and 1s. The infographics below offer a step-by-step guide to storing data, both locally and remotely, as well as a more detailed look into the mechanics of cloud storage.
I. The Basics of Data Storage
Illustration: Francesco Muzzi/Story TK
Storing Data Locally on a Solid-State Drive
Step 1: Clicking on the “Save” icon in a program invokes firmware that locates where the data is to be stored on the drive.
Step 2: Inside the drive, data is physically located in different blocks. A quirk of the flash memory used in solid-state drives is that when data is being written, individual bits can only be changed from 1 to 0, never from 0 to 1. So when data is written to a block, all of the bits are first set to 1, erasing any previous data. Then the 0s are written, creating the correct pattern of 1s and 0s.
Step 3: Another quirk of flash memory is that it’s prone to corrupting stored bits, and this corruption tends to affect clusters of bits that are located close together. Error-correcting codes can compensate for only a certain number of corrupted bits per byte, so each bit in a byte of data is stored in a different block, to minimize the likelihood of multiple bits in a given byte being corrupted.
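To see how an error-correcting code can repair a single flipped bit, here is a minimal Hamming(7,4) sketch in Python. It is a textbook illustration of the principle, not the particular code any given SSD controller actually uses.

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits at positions 1, 2, 4.
def encode(d3, d5, d6, d7):
    code = [0] * 8                        # index 0 unused; positions 1..7
    code[3], code[5], code[6], code[7] = d3, d5, d6, d7
    code[1] = code[3] ^ code[5] ^ code[7]
    code[2] = code[3] ^ code[6] ^ code[7]
    code[4] = code[5] ^ code[6] ^ code[7]
    return code[1:]

def correct(bits):
    code = [0] + list(bits)
    syndrome = sum(p for p in (1, 2, 4)
                   if sum(code[i] for i in range(1, 8) if i & p) % 2)
    if syndrome:                          # syndrome equals the flipped position
        code[syndrome] ^= 1
    return [code[3], code[5], code[6], code[7]]

word = encode(1, 0, 1, 1)
word[2] ^= 1                              # corrupt one bit (position 3)
assert correct(word) == [1, 0, 1, 1]      # the original data bits are recovered
```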
Step 4: Because erasing a block is slow, each time a portion of the data on a block is updated, the updated data is written to an empty part of the block, if possible, and the original data is marked as invalid. Eventually, though, the block must be erased to allow new data to be written to it.
Step 5: Reading back data always introduces errors, but error-correcting codes can locate the errors in a block and correct them, provided that the block has only one or two errors. If the block has multiple errors, the software can’t correct them, and the block is deemed unreadable. Errors tend to occur in bursts and may be caused by stray electromagnetic fields—for example, a phone ringing or a motor turning on. Errors also arise from imperfections in the storage medium.
Storing Data Remotely in the Cloud
Step 1: Data is saved locally, in the form of blocks.
Step 2: Data is transmitted over the Internet to a data center [see below, “A Deeper Dive into the Cloud”].
Step 3: For redundancy, data is stored on at least two hard-disk drives or two solid-state drives (which may not be in the same data center), following the same basic method described above for local storage.
Step 4: Later that day, if the data center follows best practices, the data is backed up onto magnetic tape. However, not all data centers use tape backup.
Step 5: Reading back data always introduces errors, but error-correcting codes can locate the errors in a block and correct them, provided that the block has only one or two errors. If the block has multiple errors, the software can’t correct them, and the block is deemed unreadable. Errors tend to occur in bursts and may be caused by stray electromagnetic fields—for example, a phone ringing or a motor turning on. Errors also arise from imperfections in the storage medium.
II. A Deeper Dive Into the Cloud
Though most data that gets stored is still retained locally, a growing number of people and devices are sending an ever-greater share of their data to remote data centers.
Illustration: Francesco Muzzi/Story TK
Data from multiple users moves over the Internet to a cloud data center, which is connected to the outside world via optical fiber or, in some cases, satellite gigabit-per-second links.
The cloud data center is basically a warehouse for data, with multiple racks of specialized storage computers called database servers.
There are three basic types of cloud data storage: hard-disk drives, solid-state drives, and magnetic tape cartridges, which have the following features:
Medium: Magnetic Tape | Hard-Disk Drive | Solid-State Drive
Access time (read/write): 10–60 seconds | 7 milliseconds | 50/1,000 nanoseconds
Capacity: 12 terabytes | 8 terabytes | 2 terabytes
Data persistence: 10–30 years | 3–6 years | 8–10 years
Read/write cycles: Indefinite | Indefinite | 1,000
Cloud security: Cloud data centers protect data using encryption and firewalls. Most centers offer multiple layers of each. Opting for the highest level of encryption and firewall protection will of course increase the amount of time it takes to store and retrieve the data.
Perpendicular magnetic recording allows for data densities of about 3 gigabits per square millimeter.
Magnetic Storage vs. Solid State
Hard-disk drives and magnetic tape store data by magnetizing particles that coat their surfaces. The amount of data that can be stored in a given space—the density, that is—is a function of the size of the smallest magnetized area that the recording head can create. Perpendicular recording can store about 3 gigabits per square millimeter. Newer drives based on heat-assisted magnetic recording and microwave-assisted magnetic recording boast even higher densities. Flash memory in solid-state drives uses a single transistor for each bit. Data centers are replacing HDDs with SSDs, which cost four to five times as much but are several orders of magnitude faster. On the outside, a solid-state drive may look similar to an HDD, but inside you’ll find a printed circuit board studded with flash memory chips.
About the Author
Barry M. Lunt is a professor of information technology at Brigham Young University, in Provo, Utah.
0 notes
Text
Anatomy of Bitcoin
About five years ago, using the pseudonym Satoshi Nakamoto, an anonymous computer programmer or group of programmers built the Bitcoin software system and released it onto the internet. This was something that was designed to run across a large network of machines – called bitcoin miners – and anyone on earth could operate one of these machines.
This distributed software seeded the new currency, creating a small number of bitcoins. Basically, bitcoins are just long digital addresses and balances, stored in an online ledger called the "blockchain." But the system was also designed so that the currency would slowly expand, and so that people would be encouraged to operate bitcoin miners and keep the system itself growing.
When the system creates new bitcoins, you see, it gives them to the miners. Miners keep track of all the bitcoin transactions and add them to the blockchain ledger, and in exchange, they get the privilege of, every so often, awarding themselves a few extra bitcoins. Right now, 25 bitcoins are paid out to the world's miners about six times per hour, but that rate changes over time.
Why do these bitcoins have value? It's pretty simple. They've evolved into something that a lot of people want – like a dollar or a yen or the cowry shells swapped for goods on the coast of Africa over 3,000 years ago – and they're in limited supply. Though the system continues to crank out bitcoins, this will stop when it reaches 21 million, which was designed to happen in about the year 2140.
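You can verify that 21 million cap with a few lines of Python. This sketch assumes the standard issuance schedule (an initial reward of 50 coins, halved every 210,000 blocks), details the article itself doesn't spell out:

```python
# Why the supply tops out near 21 million: a geometric series of rewards.
reward, total = 50.0, 0.0
while reward >= 1e-8:           # one hundred-millionth of a coin, the smallest unit
    total += 210_000 * reward   # 210,000 blocks are mined between halvings
    reward /= 2
print(f"{total:,.4f}")          # -> about 20,999,999.9976, just under 21 million
```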
The idea was to create a currency whose value couldn't be watered down by some central authority, like the Federal Reserve.
When the system quits making new money, the value of each bitcoin will necessarily rise as demand rises – it's what's called a deflationary currency – but although the supply of coins will stop expanding, it will still be relatively easy to spend. Bitcoins can be broken into tiny pieces. Each bitcoin can be divided into one hundred million units, called Satoshis, after the currency's founder.
The Key to the System
How do you spend bitcoins? Trade them? Keep people from stealing them? Bitcoin is a math-based currency. That means that the rules that govern bitcoin's accounting are controlled by cryptography. Basically, if you own some bitcoins, you own a private cryptography key that's associated with an address on the internet that contains a balance in the public ledger. The address and the private key let you make transactions.
The internet address is something everyone can see. Think of it like a really complicated email address for online payments. Something like this: 1DTAXPKS1Sz7a5hL2Skp8bykwGaEL5JyrZ. If someone wants to send you bitcoins, they need your address.
>If you own some bitcoins, what you really own is a private cryptography key that's associated with an address on the internet
If you want to send your bitcoins to someone else, you need your address and their address – but you also need your private cryptography key. This is an even more complicated string that you use to authorize a payment.
Using the math associated with these keys and addresses, the system's public network of peer-to-peer computers – the bitcoin miners – check every transaction that happens on the network. If the math doesn't add up, the transaction is rejected.
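Here is a small sketch of that key math using the third-party `ecdsa` Python package (with secp256k1, the curve Bitcoin uses). Real transactions involve more structure, but the sign-and-verify principle is the same:

```python
from ecdsa import SigningKey, SECP256k1, BadSignatureError

# The private key authorizes payments; the public key lets the network check them.
private_key = SigningKey.generate(curve=SECP256k1)
public_key = private_key.get_verifying_key()

payment = b"pay 0.5 BTC from my address to yours"
signature = private_key.sign(payment)

print(public_key.verify(signature, payment))      # True: the math adds up
try:
    public_key.verify(signature, b"pay 500 BTC")  # tampered transaction
except BadSignatureError:
    print("rejected")                             # so the network rejects it
```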
Crypto systems like this do get cracked, and the software behind Bitcoin could have flaws in it. But at this point, Bitcoin has been tested pretty thoroughly, and it seems to be pretty darned secure.
For the ordinary people who use this network – the people who do the buying and the selling and the transferring – managing addresses and keys can be a bit of a hassle. But there are many different types of programs – called wallets – that keep track of these numbers for you. You can install a wallet on your computer or your mobile phone, or use one that sits on a website.
With these wallets, you can easily send and receive bitcoins via the net. You can, say, buy a pizza on a site that's set up to take bitcoin payments. You can donate money to a church. You can even pay for plastic surgery. The number of online merchants accepting bitcoins grows with each passing day.
But you can also make transactions here in the real world. That's what a mobile wallet is good for. The Pink Cow, a restaurant in Tokyo, plugs into the Bitcoin system via a tablet PC sitting beside its cash register. If you want to pay for your dinner in bitcoins, you hold up your phone and scan a QR code – a kind of bar code – that pops up on the tablet.
How to Get a Bitcoin
If all that makes sense and you wanna give it a try, the first thing you do is get a wallet. We like blockchain.info, which offers an app that you can download to your phone. Then, once you have a wallet, you need some bitcoins.
In the U.S., the easiest way to buy and sell bitcoins is via a website called Coinbase. For a one percent fee, Coinbase links to your bank account and then acts as a proxy for you, buying and selling bitcoins on an exchange. Coinbase also offers an easy-to-use wallet. You can also make much larger bitcoin purchases on big exchanges like Mt. Gox or Bitstamp, but to trade on these exchanges, you need to first send them cash using costly and time-consuming international wire transfers.
>Ironically, the best way to keep bitcoin purchases anonymous is to meet up with someone here in the real world and make a trade.
Yes, you can keep your purchases anonymous – or at least mostly anonymous. If you use a service like Coinbase or Mt. Gox, you'll have to provide a bank account and identification. But other services, such as LocalBitcoins, let you buy bitcoins without providing personal information. Ironically, the best way to do this is to meet up with someone here in the real world and make the trade in-person.
LocalBitcoins will facilitate such meetups, where one person provides cash and the other then sends bitcoins over the net. Or you can attend a regular Bitcoin meetup in your part of the world. Because credit card and bank transactions are reversible and bitcoin transactions are not, you need to be very careful if you're ever selling bitcoins to an individual. That's one reason why many sellers like to trade bitcoins for cash.
The old-school way of getting new bitcoins is mining. That means turning your computer into a bitcoin miner, one of those nodes on Bitcoin's peer-to-peer network. Your machine would run the open source Bitcoin software.
Back in the day, you could do bitcoin mining on your home PC. But as the price of bitcoins has shot up, the mining game has morphed into a bit of a space-race – with professional players, custom-designed hardware, and rapidly expanding processing power.
Today, all of the computers vying for those 25 bitcoins perform 5 quintillion mathematical calculations per second. To put it in perspective, that's about 150 times as many mathematical operations as the world's most powerful supercomputer.
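Those calculations are repeated cryptographic hashes: the miner tweaks a counter until the block's hash falls below a target. The toy Python miner below shows the principle; the header bytes and difficulty level are made up, and real mining runs at vastly higher difficulty on specialized hardware:

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Try nonces until the double-SHA256 hash falls below the target."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        data = header + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(data).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# With 20 difficulty bits this takes about a million attempts on average;
# the real network performs quintillions of such attempts every second.
print(mine(b"example block header", 20))
```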
And mining can be pretty risky. Companies that build these custom machines typically charge you for the hardware upfront, and every day you wait for delivery is a day when it becomes harder to mine bitcoins. That reduces the amount of money you can earn.
This spring, WIRED tested out a custom-designed system built by a Kansas City, Missouri company called Butterfly Labs. We were lucky enough to receive one of the first 50 units of a $275 machine built by the company.
We hooked it up to a network of mining computers that pool together computing resources and share bitcoin profits. And in six months, it has earned more than 13 bitcoins. That's more than $10,000 at today's bitcoin prices. But people who got the machine later than we did (and there were plenty of them) didn't make quite so much money.
Online Thievery
Once you get your hands on some bitcoins, be careful. If somebody gets access to your Bitcoin wallet or that private key, they can take your money. And in the Bitcoin world, when money is gone, it's gone for good.
This can be a problem whether you're running a wallet on your own machine or on a website run by a third party. Recently, hackers busted into a site called inputs.io – which stores bitcoins in digital wallets for people across the globe – and they made off with about $1.2 million in bitcoins.
>In the bitcoin world, when money is gone, it's pretty much gone for good.
So, as their bitcoins start to add up, many pros move their wallets off of their computers. For instance, they'll save them on a thumb drive that's not connected to the internet.
Some people will even move their bitcoins into a real physical wallet or onto something else that's completely separate from the computer world. How is that possible? Basically, they'll write their private key on a piece of paper. Others will engrave their crypto key on a ring or even on a metal coin.
Sure, you could lose this. But the same goes for a $100 bill.
The good news is that the public nature of the bitcoin ledger may make it theoretically possible to figure out who has stolen your bitcoins. You can always see the address that they were shipped off to, and if you ever link that address to a specific person, then you've found your thief.
But don't count on it. This is an extremely complex process, and researchers are only just beginning to explore the possibilities.
Bitcoin vs. the U.S.A.
Bitcoin is starting to work as a currency, but because of the way it's built, it also operates as an extremely low-cost money-moving platform. In theory, it could be a threat to PayPal, to Western Union, even to Visa and Mastercard. With Bitcoin, you can move money anywhere in the world without paying the fees.
The process isn't instant. The miners bundle up those transactions every 10 minutes or so. But today, payment processors like BitPay have stepped in to smooth things out and speed them up.
>The feds have stopped short of trying to kill Bitcoin, but they've created an atmosphere where anybody who wants to link the U.S. financial system to Bitcoin is going to have to proceed with extreme caution
The trouble is that federal regulators still haven't quite figured out how to deal with Bitcoin.
The currency is doing OK in China, Japan, parts of Europe, and Canada, but it's getting its bumpiest ride in the U.S., where authorities are worried about the very features that make Bitcoin so exciting to merchants and entrepreneurs. Here, the feds have stopped short of trying to kill Bitcoin, but they've created an atmosphere where anybody who wants to link the U.S. financial system to Bitcoin is going to have to proceed with extreme caution.
Earlier this year, the U.S. Department of Homeland Security closed the U.S. bank accounts belonging to Mt. Gox, which has generally been the world's largest Bitcoin exchange. Mt. Gox, based in Japan, let U.S. residents trade bitcoins for cash, but it hadn't registered with the federal government as a money transmitter, and it hadn't registered in the nearly 50 U.S. states that also require this.
The Homeland Security action against Mt. Gox had an immediate chilling effect in the U.S. Soon, American Bitcoin companies started reporting that their banks were dropping them, but not because they had done anything illegal. The banks simply don't want the risk.
Now, other Bitcoin companies that have moved fast to operate within the U.S. are facing the possibility of being shut down if they're not following state and federal guidelines.
Even if the feds were interested in shutting down Bitcoin, they probably couldn't if they tried, and now they seem to understand its promise. In testimony on Capitol Hill earlier this week, Jennifer Shasky Calvery, the director of the Treasury Department’s Financial Crimes Enforcement Network, said that Bitcoin poses problems, but she also said that it's a bit like the internet in its earliest days.
"So often, when there is a new type of financial service or a new player in the financial industry, the first reaction by those of us who are concerned about money laundering or terrorist finance is to think about the gaps and the vulnerabilities that it creates in the financial system," she said. "But it’s also important that we step back and recognize that innovation is a very important part of our economy."
It is. And Bitcoin richly provides that innovation. It just may take a while for the world to completely catch on.
0 notes
Text
A View to the Cloud
What really happens when your data is stored on far-off servers in distant data centers
ul.listicle.SerifType { font-family: “Georgia”, serif; margin: 1em 0; padding: 0; list-style-type: none; list-item-decoration: none; } h3.listicle-item-hed a { color: #d80404!important; } ul.listicle.SerifType li p strong { font-family: “Georgia”, serif; margin: 1em 0; font-weight: bold !important; padding: 0; list-style-type: none; }
Illustration: Francesco Muzzi/Story TK
We live in a world that’s awash in information. Way back in 2011, an IBM study estimated that nearly 3 quintillion—that’s a 3 with 18 zeros after it—bytes of data were being generated every single day. We’re well past that mark now, given the doubling in the number of Internet users since 2011, the powerful rise of social media and machine learning, and the explosive growth in mobile computing, streaming services, and Internet of Things devices. Indeed, according to the latest Cisco Global Cloud Index, some 220,000 quintillion bytes—or if you prefer, 220 zettabytes—were generated “by all people, machines, and things” in 2016, on track to reach nearly 850 ZB in 2021.
Much of that data is considered ephemeral, and so it isn’t stored. But even a tiny fraction of a huge number can still be impressively large. When it comes to data, Cisco estimates that 1.8 ZB was stored in 2016, a volume that will quadruple to 7.2 ZB in 2021.
Our brains can’t really comprehend something as large as a zettabyte, but maybe this mental image will help: If each megabyte occupied the space of the period at the end of this sentence, then 1.8 ZB would cover about 460 square kilometers, or an area about eight times the size of Manhattan.
Of course, an actual zettabyte of data doesn’t occupy any space at all—data is an abstract concept. Storing data, on the other hand, does take space, as well as materials, energy, and sophisticated hardware and software. We need a reliable way to store those many 0s and 1s of data so that we can retrieve them later on, whether that’s an hour from now or five years. And if the information is in some way valuable—whether it’s a digitized family history of interest mainly to a small circle of people, or a film library of great cultural significance—the data may need to be archived more or less indefinitely.
Projected global data-center storage capacity, 2016 to 2021
Source: Cisco Global Cloud Index
The grand challenge of data storage was hard enough when the rate of accumulation was much lower and nearly all of the data was stored on our own devices. These days, however, we’re sending off more data to “the cloud”—that (forgive the pun) nebulous term for the remote data centers operated by the likes of Amazon Web Services, Google Cloud, IBM Cloud, and Microsoft Azure. Businesses and government agencies are increasingly transferring more of their workloads—not just peripheral functions but also mission-critical work—to the cloud. Consumers, who make up a growing segment of cloud users, are turning to the cloud because it allows them to access content and services on any device wherever they go.
And yet, despite our growing reliance on the cloud, how many of us have a clear picture of how the cloud operates or, perhaps more important, how our data is stored? Even if it isn’t your job to understand such things, the fact remains that your life in more ways than you probably know relies on the very basic process of storing 0s and 1s. The infrographics below offer a step-by-step guide to storing data, both locally and remotely, as well as a more detailed look into the mechanics of cloud storage.
I. The Basics of Data Storage
Illustration: Francesco Muzzi/Story TK
Storing Data Locally on a Solid-State Drive
Step 1: Clicking on the “Save” icon in a program invokes firmware that locates where the data is to be stored on the drive.
Step 2: Inside the drive, data is physically located in different blocks. A quirk of the flash memory used in solid-state drives is that when data is being written, individual bits can only be changed from 1 to 0, never from 0 to 1. So when data is written to a block, all of the bits are first set to 1, erasing any previous data. Then the 0s are written, creating the correct pattern of 1s and 0s.
Step 3: Another quirk of flash memory is that it’s prone to corrupting stored bits, and this corruption tends to affect clusters of bits that are located close together. Error-correcting codes can compensate for only a certain number of corrupted bits per byte, so each bit in a byte of data is stored in a different block, to minimize the likelihood of multiple bits in a given byte being corrupted.
Step 4: Because erasing a block is slow, each time a portion of the data on a block is updated, the updated data is written to an empty part of the block, if possible, and the original data is marked as invalid. Eventually, though, the block must be erased to allow new data to be written to it.
Step 5: Reading back data always introduces errors, but error-correcting codes can locate the errors in a block and correct them, provided that the block has only one or two errors. If the block has multiple errors, the software can’t correct them, and the block is deemed unreadable. Errors tend to occur in bursts and may be caused by stray electromagnetic fields—for example, a phone ringing or a motor turning on. Errors also arise from imperfections in the storage medium.
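The error-correcting idea in Step 5 can be seen in miniature with a classic Hamming(7,4) code, which pinpoints and repairs any single flipped bit in a seven-bit word. This is only a sketch of the principle; production SSDs use much stronger codes such as BCH or LDPC:

# Single-error correction with a Hamming(7,4) code.
def hamming_encode(d):
    # d: list of 4 data bits -> list of 7 code bits (positions 1..7).
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                  # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4                  # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4                  # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_correct(c):
    # Locate and fix a single flipped bit; return the 4 data bits.
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3    # 0 means no error detected
    if syndrome:
        c[syndrome - 1] ^= 1           # flip the offending bit
    return [c[2], c[4], c[5], c[6]]

code = hamming_encode([1, 0, 1, 1])
code[4] ^= 1                           # corrupt one bit in "storage"
assert hamming_correct(code) == [1, 0, 1, 1]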
Storing Data Remotely in the Cloud
Step 1: Data is saved locally, in the form of blocks.
Step 2: Data is transmitted over the Internet to a data center [see below, “A Deeper Dive into the Cloud”].
Step 3: For redundancy, data is stored on at least two hard-disk drives or two solid-state drives (which may not be in the same data center), following the same basic method described above for local storage.
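In outline, that redundancy amounts to writing every block to two independent devices and reading from whichever copy survives. A deliberately simplified sketch, in which two dictionaries stand in for two drives and the function names are invented:

# Writing each block to two drives (hypothetical two-replica policy).
drives = [dict(), dict()]

def put_block(block_id, data):
    for drive in drives:               # every block lands on both drives
        drive[block_id] = data

def get_block(block_id):
    for drive in drives:               # fall back if the first copy is lost
        if block_id in drive:
            return drive[block_id]
    raise KeyError(block_id)

put_block("b1", b"\x00\x01")
del drives[0]["b1"]                    # simulate losing one copy
assert get_block("b1") == b"\x00\x01"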
Step 4: Later that day, if the data center follows best practices, the data is backed up onto magnetic tape. However, not all data centers use tape backup.
Step 5: As with local storage, reading back the data introduces errors, which the error-correcting codes can locate and fix so long as each block holds no more than one or two of them; a block with more errors than that is deemed unreadable.
II. A Deeper Dive Into the Cloud
Though most data that gets stored is still retained locally, a growing number of people and devices are sending an ever-greater share of their data to remote data centers.
Illustration: Francesco Muzzi/Story TK
Data from multiple users moves over the Internet to a cloud data center, which is connected to the outside world by gigabit-per-second links, usually optical fiber but in some cases satellite.
The cloud data center is basically a warehouse for data, with multiple racks of specialized storage computers called database servers.
There are three basic types of cloud data storage: hard-disk drives, solid-state drives, and magnetic tape cartridges, which have the following features:
                         Magnetic Tape    Hard-Disk Drive   Solid-State Drive
Access time, read/write  10–60 seconds    7 milliseconds    50/1,000 nanoseconds
Capacity                 12 terabytes     8 terabytes       2 terabytes
Data persistence         10–30 years      3–6 years         8–10 years
Read/write cycles        Indefinite       Indefinite        1,000
Cloud security: Cloud data centers protect data using encryption and firewalls. Most centers offer multiple layers of each. Opting for the highest level of encryption and firewall protection will of course increase the amount of time it takes to store and retrieve the data.
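Encryption can also happen before the data ever leaves your machine, so the data center only ever sees ciphertext. Here is a minimal sketch using the Fernet interface from the third-party Python cryptography package; the upload step is a hypothetical placeholder:

# Encrypting data client-side before it leaves for the cloud.
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # keep this secret, apart from the data
fernet = Fernet(key)

plaintext = b"family photos, tax records, the usual"
ciphertext = fernet.encrypt(plaintext)

# upload_to_cloud(ciphertext)          # only ciphertext crosses the wire

assert fernet.decrypt(ciphertext) == plaintext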
Perpendicular magnetic recording allows for data densities of about 3 gigabits per square millimeter.
Magnetic Storage vs. Solid State
Hard-disk drives and magnetic tape store data by magnetizing particles that coat their surfaces. The amount of data that can be stored in a given space—the density, that is—is a function of the size of the smallest magnetized area that the recording head can create. Perpendicular recording [right] can store about 3 gigabits per square millimeter. Newer drives based on heat-assisted magnetic recording and microwave-assisted magnetic recording boast even higher densities. Flash memory in solid-state drives uses a single transistor for each bit. Data centers are replacing HDDs with SSDs, which cost four to five times as much but are several orders of magnitude faster. On the outside, a solid-state drive may look similar to an HDD, but inside you’ll find a printed circuit board studded with flash memory chips.
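Those density figures invite a quick back-of-the-envelope check on how much platter area an 8-terabyte drive implies at perpendicular-recording densities (rough arithmetic, not a drive specification):

# Rough arithmetic: platter area implied by 8 TB at ~3 Gb/mm^2.
capacity_bits = 8e12 * 8               # 8 terabytes expressed in bits
density_bits_per_mm2 = 3e9             # perpendicular recording, per the text
area_mm2 = capacity_bits / density_bits_per_mm2
print(f"{area_mm2:,.0f} mm^2 = {area_mm2 / 100:,.0f} cm^2")
# ~21,000 mm^2, roughly 200 cm^2, spread across the several platter
# surfaces inside a 3.5-inch drive.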
About the Author
Barry M. Lunt is a professor of information technology at Brigham Young University, in Provo, Utah.
A View to the Cloud syndicated from https://jiohowweb.blogspot.com
0 notes