viraltiger · 6 years ago
Photo
This is Andrew Chael. He wrote 850,000 of the 900,000 lines of code that were written in the historic black-hole image algorithm! - See more viral images on ViralTiger.org
0 notes
multimetaverse · 6 years ago
Note
I'm sure you've already gotten this ask, but just in case: why do you think there was such a sharp decline in viewership for One in a Minyan, and what does that mean for the show/Tyrus/Cyrus?
It was a very disappointing drop. It was one of the best episodes of the entire series, but it got fewer viewers than the very boring Secret Society did. I think there are a few interrelated reasons this episode didn’t land as well as it deserved: general weariness among the casual audience toward Cyrus’ story line, a fear of queerbaiting, and Disney’s poor marketing and scheduling.
I think that for a fair bit of the actual and potential audience of this show, Cyrus coming out to Jonah was just too late. The last time the audience saw Cyrus come out to someone was last February, and the last time his sexuality was directly addressed was back in July. If Disney wanted Cyrus coming out to Jonah to really have an impact, they should have let it happen near the end of S2; it would have been a classic rule of three, with Cyrus coming out to someone at the beginning, middle, and end of the season, just like we saw with ‘um’. And we can set aside talk of being ‘realistic’; this is a TV show, and the demands of good and interesting storytelling trump realism.
I was watching the Shine On Media recap of 3x11, and Lisa mentioned that it’s clear TJ is set up to reciprocate Cyrus’ feelings and that she wishes something would come of it, but also that she feels the show only really deals with Cyrus’ gay storyline every 10 eps or so. Not only is that more accurate than she knows, as there will be no payoff until 3x21, but I think it’s probably a widespread sentiment among the audience. People are cluing into the restrictions imposed on Cyrus’ story line and are seeing the pattern: Cyrus gets two coming outs per season and one episode a season where he can be gay and Jewish.
And really, this episode left us in the exact same place we’ve been in since late S2: Cyrus is over Jonah and TJ is his new love interest. People talk about Tyrus being a slow burn, but it’s really more of a no burn. From their first scene together in 2x08 to getting together in 3x21 is 39 eps, or two-thirds of the series’ entire run. Cyrus is more than his ships, but when a big chunk of his story line can only be addressed in coded ways, that starts to bore the audience; stories are written in text, not subtext.
The show is not queerbaiting; both Cyrus and TJ are gay and will get together in the finale. But it’s also true that, pretty much up until the finale, their story will look exactly the same as it would if the show were queerbaiting. So it does make sense for an audience member who is specifically interested in Cyrus’ story line to just skip the show and watch the clips of his scenes on YouTube after it’s over, and judging from the sharply declining ratings, that’s likely what a lot of people are doing. And Disney has always interfered in ways that make the show less gay, so it’s understandable that people don’t trust it.
I saw a lot of articles published about Cyrus saying the word gay and coming out to Jonah, but almost all of them were published after the ep aired, which does nothing for the ratings. It’s not like Disney doesn’t know how to gin up interest: they released articles about Cyrus coming out to Buffy two days before that ep aired, they had the cast on Good Morning America to discuss Cyrus’ Bash Mitzvah and announce the S3 renewal, and they released articles about the gun plot two days in advance as well.
But because Disney is still so afraid of conservative backlash, they are very careful in how they promote Cyrus’ story line. They deliberately killed a lot of the hype surrounding it when they edited the look back to make it seem like TJ was looking at Buffy, and, even worse, the look back was completely ignored anyway, which further killed hype. Now, even if they wanted to promote Cyrus’ story line, they don’t have much to promote between now and the finale, which of course is a consequence of Disney scrapping Terri’s original plans. They also made a mistake in featuring Cyrus’ coming out only in the extended promo rather than in the next-on promo that aired at the end of 3x10; a fair chunk of the audience stops watching after the ep ends and doesn’t stick around for other shows just to catch the extended promo.
Disney’s scheduling has always been awful, but giving Andi Mack a two-week break guarantees lower ratings when it returns. I mean that literally: throughout the entire run of the series, every time the show has gone on any kind of break, no matter how small, the ratings have declined when it comes back. If 3x12 manages to get higher ratings than 3x11, that will be the first time in the show’s history that it’s happened; far more likely, the ratings for 3x12 will be even lower. For a serialized show like Andi Mack, momentum is important, and consistent airings are necessary to keep the audience engaged.
It’s also hard for potential new viewers to get into the series because older eps aren’t easily available, and with Stoney’s arrest, some eps will likely not air on TV or be available on Disney+ when it launches for quite a while, if ever. Not airing 3x12 on the same night as the KP movie was an unforced error; airing eps the same night as Zombies and Freaky Friday helped boost the ratings and drew in viewers who might not have heard of the show before.
As for what it means for the future of the show and for Tyrus: not much. It does seem like the ceiling for ratings tops out at close to 900,000 and might never get higher than that again. Certainly the hiatus between 3b and 3c will lead to lower ratings as well. I think Disney has gone too far not to air the Tyrus confession scene in 3x21, and everything before that is subtext anyway, so Disney doesn’t need to cut anything.
59 notes
technato · 7 years ago
Text
A View to the Cloud
What really happens when your data is stored on far-off servers in distant data centers
Illustration: Francesco Muzzi/Story TK
We live in a world that’s awash in information. Way back in 2011, an IBM study estimated that nearly 3 quintillion—that’s a 3 with 18 zeros after it—bytes of data were being generated every single day. We’re well past that mark now, given the doubling in the number of Internet users since 2011, the powerful rise of social media and machine learning, and the explosive growth in mobile computing, streaming services, and Internet of Things devices. Indeed, according to the latest Cisco Global Cloud Index, some 220,000 quintillion bytes—or if you prefer, 220 zettabytes—were generated “by all people, machines, and things” in 2016, on track to reach nearly 850 ZB in 2021.
Much of that data is considered ephemeral, and so it isn’t stored. But even a tiny fraction of a huge number can still be impressively large. When it comes to data, Cisco estimates that 1.8 ZB was stored in 2016, a volume that will quadruple to 7.2 ZB in 2021.
Our brains can’t really comprehend something as large as a zettabyte, but maybe this mental image will help: If each megabyte occupied the space of the period at the end of this sentence, then 1.8 ZB would cover about 460 square kilometers, or an area about eight times the size of Manhattan.
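Checking that arithmetic: 1.8 ZB is 1.8 × 10^15 megabytes, and 460 km² is 4.6 × 10^14 mm², so the comparison implicitly gives each period about 0.26 mm², a dot roughly half a millimeter across, which is about right for printed type.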
Of course, an actual zettabyte of data doesn’t occupy any space at all—data is an abstract concept. Storing data, on the other hand, does take space, as well as materials, energy, and sophisticated hardware and software. We need a reliable way to store those many 0s and 1s of data so that we can retrieve them later on, whether that’s an hour from now or five years. And if the information is in some way valuable—whether it’s a digitized family history of interest mainly to a small circle of people, or a film library of great cultural significance—the data may need to be archived more or less indefinitely.
Projected global data-center storage capacity, 2016 to 2021
Source: Cisco Global Cloud Index
The grand challenge of data storage was hard enough when the rate of accumulation was much lower and nearly all of the data was stored on our own devices. These days, however, we’re sending off more data to “the cloud”—that (forgive the pun) nebulous term for the remote data centers operated by the likes of Amazon Web Services, Google Cloud, IBM Cloud, and Microsoft Azure. Businesses and government agencies are increasingly transferring more of their workloads—not just peripheral functions but also mission-critical work—to the cloud. Consumers, who make up a growing segment of cloud users, are turning to the cloud because it allows them to access content and services on any device wherever they go.
And yet, despite our growing reliance on the cloud, how many of us have a clear picture of how the cloud operates or, perhaps more important, how our data is stored? Even if it isn’t your job to understand such things, the fact remains that your life relies, in more ways than you probably know, on the very basic process of storing 0s and 1s. The infographics below offer a step-by-step guide to storing data, both locally and remotely, as well as a more detailed look into the mechanics of cloud storage.
I. The Basics of Data Storage
  Illustration: Francesco Muzzi/Story TK
Storing Data Locally on a Solid-State Drive
Step 1:  Clicking on the “Save” icon in a program invokes firmware that locates where the data is to be stored on the drive.
Step 2: Inside the drive, data is physically located in different blocks. A quirk of the flash memory used in solid-state drives is that when data is being written, individual bits can only be changed from 1 to 0, never from 0 to 1. So when data is written to a block, all of the bits are first set to 1, erasing any previous data. Then the 0s are written, creating the correct pattern of 1s and 0s.
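To make those write rules concrete, here is a minimal sketch in C (the `flash_block` type and sizes are invented for illustration; real SSD firmware is far more involved). Erasing a block sets every bit to 1, and programming can only clear bits to 0, which ANDing models exactly:

```c
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 4096  /* bytes per block; size chosen for illustration */

typedef struct {
    uint8_t data[BLOCK_SIZE];
} flash_block;

/* Erase: the only way to get bits back to 1. Sets the whole block to 0xFF. */
void flash_erase(flash_block *b) {
    memset(b->data, 0xFF, BLOCK_SIZE);
}

/* Program: can only change bits from 1 to 0, never 0 to 1. ANDing the new
 * byte with what is already stored models this: a bit that is already 0
 * stays 0, no matter what we try to write. */
void flash_program(flash_block *b, size_t offset, const uint8_t *src, size_t len) {
    for (size_t i = 0; i < len && offset + i < BLOCK_SIZE; i++)
        b->data[offset + i] &= src[i];
}

int main(void) {
    flash_block b;
    flash_erase(&b);                        /* block is now all 1s */
    uint8_t msg[] = { 0xA5, 0x0F };
    flash_program(&b, 0, msg, sizeof msg);  /* the 0s are "written in" */
    return b.data[0] == 0xA5 ? 0 : 1;       /* exits 0: the pattern landed */
}
```

Trying to overwrite in place without an erase silently loses bits, which is why the drive goes to such lengths to avoid it.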
Step 3: Another quirk of flash memory is that it’s prone to corrupting stored bits, and this corruption tends to affect clusters of bits that are located close together. Error-correcting codes can compensate for only a certain number of corrupted bits per byte, so each bit in a byte of data is stored in a different block, to minimize the likelihood of multiple bits in a given byte being corrupted.
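One way to picture that layout is the hypothetical sketch below (not any particular controller’s real scheme): the eight bits of each byte are scattered across eight different blocks, so a burst of corruption confined to one block costs each byte at most a single bit, which is within what the error-correcting code can repair.

```c
#include <stddef.h>
#include <stdint.h>

#define NUM_BLOCKS 8
#define BLOCK_SIZE 4096

/* Scatter one byte: bit i is stored in block i at the same offset. */
void scatter_byte(uint8_t blocks[NUM_BLOCKS][BLOCK_SIZE], size_t offset, uint8_t value) {
    for (int i = 0; i < NUM_BLOCKS; i++) {
        uint8_t bit = (value >> i) & 1u;
        /* keep the target byte's other bits; set only its low bit */
        blocks[i][offset] = (uint8_t)((blocks[i][offset] & ~1u) | bit);
    }
}

/* Reassemble the byte by collecting one bit from each block. */
uint8_t gather_byte(uint8_t blocks[NUM_BLOCKS][BLOCK_SIZE], size_t offset) {
    uint8_t value = 0;
    for (int i = 0; i < NUM_BLOCKS; i++)
        value |= (uint8_t)((blocks[i][offset] & 1u) << i);
    return value;
}
```

Corrupt any one block in this layout and each stored byte loses at most one of its eight bits.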
Step 4: Because erasing a block is slow, each time a portion of the data on a block is updated, the updated data is written to an empty part of the block, if possible, and the original data is marked as invalid. Eventually, though, the block must be erased to allow new data to be written to it.
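A sketch of that bookkeeping, with invented page states (a real flash translation layer also maps logical to physical addresses, levels wear, and schedules garbage collection):

```c
#include <string.h>

#define PAGES_PER_BLOCK 64
#define PAGE_SIZE 2048

enum page_state { FREE, VALID, INVALID };

typedef struct {
    enum page_state state[PAGES_PER_BLOCK];
    unsigned char data[PAGES_PER_BLOCK][PAGE_SIZE];
} block;

/* Update a logical page: write the new version to the next free page and
 * mark the old one invalid, instead of erasing and rewriting in place. */
int update_page(block *b, int old_page, const unsigned char *new_data) {
    for (int p = 0; p < PAGES_PER_BLOCK; p++) {
        if (b->state[p] == FREE) {
            memcpy(b->data[p], new_data, PAGE_SIZE);
            b->state[p] = VALID;
            b->state[old_page] = INVALID;  /* stale copy lingers until erase */
            return p;                      /* new physical location */
        }
    }
    return -1;  /* no free pages left: the block must be erased first */
}
```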
Step 5: Reading back data always introduces errors, but error-correcting codes can locate the errors in a block and correct them, provided that the block has only one or two errors. If the block has multiple errors, the software can’t correct them, and the block is deemed unreadable. Errors tend to occur in bursts and may be caused by stray electromagnetic fields—for example, a phone ringing or a motor turning on. Errors also arise from imperfections in the storage medium.
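The codes in production SSDs (BCH, LDPC) are heavy machinery, but the principle fits in a few lines with the classic Hamming(7,4) code, which protects four data bits with three parity bits and can correct any single flipped bit. A sketch, not what any particular drive ships:

```c
#include <stdint.h>

/* Encode 4 data bits into a 7-bit Hamming(7,4) codeword.
 * Bit positions are 1..7; parity bits sit at 1, 2, 4; data at 3, 5, 6, 7. */
uint8_t hamming74_encode(uint8_t nibble) {
    uint8_t d1 = nibble & 1, d2 = (nibble >> 1) & 1;
    uint8_t d3 = (nibble >> 2) & 1, d4 = (nibble >> 3) & 1;
    uint8_t p1 = d1 ^ d2 ^ d4;  /* covers positions 1, 3, 5, 7 */
    uint8_t p2 = d1 ^ d3 ^ d4;  /* covers positions 2, 3, 6, 7 */
    uint8_t p4 = d2 ^ d3 ^ d4;  /* covers positions 4, 5, 6, 7 */
    return (uint8_t)(p1 | p2 << 1 | d1 << 2 | p4 << 3 | d2 << 4 | d3 << 5 | d4 << 6);
}

static uint8_t bit_at(uint8_t cw, int pos) { return (cw >> (pos - 1)) & 1; }

/* Decode, correcting a single flipped bit. The recomputed parity checks
 * form a syndrome that is the 1-based position of the bad bit (0 = clean). */
uint8_t hamming74_decode(uint8_t cw) {
    int syndrome = (bit_at(cw, 1) ^ bit_at(cw, 3) ^ bit_at(cw, 5) ^ bit_at(cw, 7))
                 | (bit_at(cw, 2) ^ bit_at(cw, 3) ^ bit_at(cw, 6) ^ bit_at(cw, 7)) << 1
                 | (bit_at(cw, 4) ^ bit_at(cw, 5) ^ bit_at(cw, 6) ^ bit_at(cw, 7)) << 2;
    if (syndrome)
        cw ^= (uint8_t)(1u << (syndrome - 1));  /* flip the bad bit back */
    return (uint8_t)(bit_at(cw, 3) | bit_at(cw, 5) << 1
                   | bit_at(cw, 6) << 2 | bit_at(cw, 7) << 3);
}
```

Flip any one of the seven codeword bits and `hamming74_decode` still returns the original nibble; flip two and the syndrome points at the wrong position, which is why drives use longer, stronger codes in practice.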
Storing Data Remotely in the Cloud
Step 1: Data is saved locally, in the form of blocks.
Step 2: Data is transmitted over the Internet to a data center  [see below, “A Deeper Dive into the Cloud”].
Step 3: For redundancy, data is stored on at least two hard-disk drives or two solid-state drives (which may not be in the same data center), following the same basic method described above for local storage.
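A toy version of that redundancy (interfaces invented here; real services use RAID, erasure coding, or object replication across failure domains): every write goes to two independent drives, and a read falls back to whichever copy survives.

```c
#include <stdbool.h>
#include <string.h>

#define NUM_REPLICAS 2
#define BLOCK_SIZE 4096

typedef struct {
    bool healthy;
    unsigned char data[BLOCK_SIZE];
} drive;

/* Write the block to every healthy replica; fail unless both copies land. */
int replicated_write(drive replicas[NUM_REPLICAS], const unsigned char *blk) {
    int copies = 0;
    for (int i = 0; i < NUM_REPLICAS; i++) {
        if (replicas[i].healthy) {
            memcpy(replicas[i].data, blk, BLOCK_SIZE);
            copies++;
        }
    }
    return copies == NUM_REPLICAS ? 0 : -1;
}

/* Read from the first healthy replica; only losing both copies loses data. */
int replicated_read(drive replicas[NUM_REPLICAS], unsigned char *out) {
    for (int i = 0; i < NUM_REPLICAS; i++) {
        if (replicas[i].healthy) {
            memcpy(out, replicas[i].data, BLOCK_SIZE);
            return 0;
        }
    }
    return -1;  /* all replicas lost */
}
```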
Step 4: Later that day, if the data center follows best practices, the data is backed up onto magnetic tape. However, not all data centers use tape backup.
Step 5: Reading back data is handled just as it is for local storage: error-correcting codes locate and correct up to one or two errors per block, and a block with more errors than that is deemed unreadable.
II. A Deeper Dive Into the Cloud
Though most data that gets stored is still retained locally, a growing number of people and devices are sending an ever-greater share of their data to remote data centers.
  Illustration: Francesco Muzzi/Story TK
Data from multiple users moves over the Internet to a cloud data center, which is connected to the outside world via gigabit-per-second links, over optical fiber or, in some cases, satellite.
The cloud data center is basically a warehouse for data, with multiple racks of specialized storage computers called database servers.
There are three basic types of cloud data storage: hard-disk drives, solid-state drives, and magnetic tape cartridges, which have the following features:
                          Magnetic Tape    Hard-Disk Drive   Solid-State Drive
Access time, read/write   10–60 seconds    7 milliseconds    50/1,000 nanoseconds
Capacity                  12 terabytes     8 terabytes       2 terabytes
Data persistence          10–30 years      3–6 years         8–10 years
Read/write cycles         Indefinite       Indefinite        1,000
Cloud security: Cloud data centers protect data using encryption and firewalls. Most centers offer multiple layers of each. Opting for the highest level of encryption and firewall protection will of course increase the amount of time it takes to store and retrieve the data.
Perpendicular magnetic recording allows for data densities of about 3 gigabits per square millimeter.
Magnetic Storage vs. Solid State
Hard-disk drives and magnetic tape store data by magnetizing particles that coat their surfaces. The amount of data that can be stored in a given space—the density, that is—is a function of the size of the smallest magnetized area that the recording head can create. Perpendicular recording can store about 3 gigabits per square millimeter. Newer drives based on heat-assisted magnetic recording and microwave-assisted magnetic recording boast even higher densities. Flash memory in solid-state drives uses a single transistor for each bit. Data centers are replacing HDDs with SSDs, which cost four to five times as much but are several orders of magnitude faster. On the outside, a solid-state drive may look similar to an HDD, but inside you’ll find a printed circuit board studded with flash memory chips.
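As a rough sanity check on those numbers, a back-of-the-envelope calculation (the platter dimensions below are assumptions for illustration, not figures from the article): at 3 gigabits per square millimeter, one side of a 3.5-inch platter works out to roughly 2.5 terabytes, consistent with 8-terabyte drives that stack several platters.

```c
#include <stdio.h>

int main(void) {
    const double PI = 3.141592653589793;
    /* assumed usable recording band on one side of a 3.5-inch platter:
     * outer radius ~47.5 mm, inner (hub) radius ~12.5 mm */
    double outer_r = 47.5, inner_r = 12.5;                       /* mm */
    double area = PI * (outer_r * outer_r - inner_r * inner_r);  /* mm^2 */

    double density = 3e9;              /* bits per mm^2, from the article */
    double terabytes = area * density / 8 / 1e12;

    printf("usable area : %.0f mm^2\n", area);     /* ~6,600 mm^2 */
    printf("per surface : %.1f TB\n", terabytes);  /* ~2.5 TB */
    return 0;
}
```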
About the Author
Barry M. Lunt is a professor of information technology at Brigham Young University, in Provo, Utah. 
0 notes