#file size was MASSIVE so i had to compress it a billion times
redcomet-stims · 8 days ago
Content warning: flashing
divider credit: 1 | 2
Mobile Suit Gundam Episode 41 - A Cosmic Glow
williamedwardscoder · 5 years ago
Slow database? It might not be your fault
<rant>
Okay, it usually is your fault. If you logged the SQL your ORM was generating, or saw how you are doing joins in code, or realised what that indexed UUID does to your insert rate, etc., you'd probably admit it was all your fault. And the fault of your tooling, of course.
In my experience, most databases are tiny.  Tiny tiny.  Tables with a few thousand rows.  If your web app is slow, it's going to be all your fault.  Stop building something webscale with microservices and just get things done right there in your database instead.  Etc.
But, quite often, each company has one or two databases that have at least one or two large tables.  Tables with tens of millions of rows.  I work on databases with billions of rows.  They exist.  And that's the kind of database where your database server is underserving you.  There could well be a metric ton of actual performance improvements that your database is leaving on the table. Areas where your database server hasn't kept up with the regular improvements of the past 20 years in how programs can work with the kernel, for example.
Over the years I’ve read some really promising papers that have speeded up databases.  But as far as I can tell, nothing ever happens.  What is going on?
For example, your database might be slow just because it's making a lot of syscalls.  Back in 2010, experiments with syscall batching improved MySQL performance by 40% (and lots of other regular software by similar or better amounts!).   That was long before the Spectre patches made the cost of syscalls even higher.
So where are our batched syscalls?  I can't see a downside to them.  Why isn't Linux offering them, glibc using them, and everyone benefiting from them?  It'll probably speed up your IDE and browser too.
Of course, your database might be slow just because you are using default settings.  The historic defaults for MySQL were horrid.  Pretty much the first thing any innodb user had to do was go and increase the size of various buffers and pools, applying incantations found by googling.  I haven't investigated, but I'd guess that a lot of the performance claims I've heard about innodb on MySQL 8 are probably just down to sensible modern defaults.
I would hold tokudb up as being much better with its defaults.  It took over half your RAM, and deliberately left the other half to the operating system's buffer cache.
That mention of the buffer cache brings me to another area where your database could improve.  Historically, databases did 'direct' IO with the disks, bypassing the operating system.  These days, that is a metric ton of complexity for very questionable benefit.  Take tokudb again: it used normal buffered reads and writes to the file system and deliberately left the OS half the available RAM so the file system had somewhere to cache those pages.  It didn't try to reimplement and outsmart the kernel.
This paid off handsomely for tokudb because they combined it with absolutely great compression.  It completely blows the two kinds of innodb compression right out of the water.  Well, in my tests, tokudb completely blows innodb right out of the water, but then teams who adopted it had to live with its incomplete implementation e.g. minimal support for foreign keys.  Things that have nothing to do with the storage, and only to do with how much integration boilerplate they wrote or didn’t write.  (tokudb is being end-of-lifed by percona; don’t use it for a new project 😞) 
However, even tokudb didn't take the next step: it didn't go to async IO.  I've poked around with async IO, both for networking and the file system, and found it to be a major improvement.  Think how quickly you could walk some tables by asking for pages breadth-first and digging deeper as soon as the OS gets something back, rather than going through them depth-first and blocking, waiting for the next page to come back before you can proceed.
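To make the breadth-first idea concrete, here's a minimal sketch in Python's asyncio, with a hypothetical read_page coroutine standing in for whatever async file or network call a real engine would make; each level of the tree is requested in one batch instead of one blocking read at a time:

```python
import asyncio

async def read_page(page_id):
    """Hypothetical async page read; stands in for a real async file-system call."""
    await asyncio.sleep(0.01)                                  # pretend this is disk latency
    children = [2 * page_id + 1, 2 * page_id + 2] if page_id < 7 else []
    return f"page-{page_id}", children

async def walk_breadth_first(root):
    """Request a whole level of pages at once instead of blocking on each one in turn."""
    pages, frontier = [], [root]
    while frontier:
        level = await asyncio.gather(*(read_page(p) for p in frontier))
        pages.extend(payload for payload, _ in level)
        frontier = [child for _, children in level for child in children]
    return pages

print(len(asyncio.run(walk_breadth_first(0))))                 # 15 pages in ~4 round trips
```

A depth-first loop over the same 15 pages would pay the latency 15 times in a row; the breadth-first version pays it once per level.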
I've gone on enough about tokudb, which I admit I use extensively.  Tokutek went the patent route (no, it didn't pay off for them), Google released leveldb, and Facebook adapted leveldb (as RocksDB) into the MySQL MyRocks engine.  That's all history now.
In the actual storage engines themselves there have been lots of advances.  Fractal Trees came along, then there was an SSTable+LSM renaissance, and just this week I heard about a fascinating paper on B+ + LSM beating SSTable+LSM. A user called Jules commented, wondering about B-epsilon trees instead of B+, and that got my brain going too.  There are lots of things you can imagine an LSM tree using instead of SSTable at each level.
But how invested is MyRocks in SSTable?  And will MyRocks ever close the performance gap between it and tokudb on the kind of workloads they are both good at?
Of course, what about Postgres?  TimescaleDB is a really interesting fork based on Postgres that has a ‘hypertable’ approach under the hood, with a table made from a collection of smaller, individually compressed tables.  In so many ways it sounds like tokudb, but with some extra finesse like storing the min/max values for columns in a segment uncompressed so the engine can check some constraints and often skip uncompressing a segment.
TimescaleDB is interesting because it's kind of merging the classic OLAP column-store with the classic OLTP row-store.  I want to know whether TimescaleDB's hypertable compression works for things that aren't time-series too.  I'm thinking 'if we claim our invoice line items are time-series data…'
Compression in Postgres is a sore subject, as are out-of-tree storage engines generally.  Saying 'the file system should do compression' means nobody has big data in Postgres, because which stable file system supports decent compression?  Postgres really needs built-in compression, and it really needs to embrace the storage-engine approach rather than keeping all the cool new stuff as second-class citizens.
Of course, I fight the query planner all the time.  If, for example, you have a table partitioned by day and your query is for a time span that spans two or more partitions, then you probably get much faster results if you split that into n queries, each for a corresponding partition, and glue the results together client-side!  There was even a proxy called ShardQuery that did that.  It's crazy.  When people are writing proxies in PHP to rewrite queries like that, it means the database itself is leaving a massive amount of performance on the table.
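A hedged sketch of that client-side trick: the events table and the run_query helper below are made up, but the shape is simply one query per daily partition, run in parallel and glued back together:

```python
import concurrent.futures
import datetime

def run_query(sql, params):
    """Hypothetical helper that runs one SQL statement and returns its rows."""
    return []   # stand-in; a real version would call your actual database driver here

def query_day_range(start, end):
    """Split a multi-day range into one query per daily partition and merge client-side."""
    days = [start + datetime.timedelta(days=i) for i in range((end - start).days + 1)]
    sql = "SELECT * FROM events WHERE ts >= %s AND ts < %s"   # 'events' is a made-up table
    with concurrent.futures.ThreadPoolExecutor() as pool:
        chunks = pool.map(
            lambda day: run_query(sql, (day, day + datetime.timedelta(days=1))), days)
        return [row for chunk in chunks for row in chunk]

rows = query_day_range(datetime.date(2020, 1, 1), datetime.date(2020, 1, 7))
```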
And of course, the client library you use to access the database can come in for a lot of blame too.  For example, when I profile queries that have lots of parameters, I find that the mysql jdbc drivers generate a metric ton of garbage in their safe-string-split approach to prepared-query interpolation.  It shouldn't be the case that my insert rate doubles when I switch to a hand-rolled string-concatenation approach.  Oracle, stop generating garbage!
This doesn't begin to touch on the fancy cloud service you are using to host your DB.  You'll probably find that your laptop outperforms your average cloud DB server.  Between all the Spectre patches (I really don't want you to forget about the syscall-batching possibilities!) and how you have to mess around buying disk space to get IOPS and all kinds of nonsense, it's likely that you really would be better off performance-wise by leaving your dev laptop in a cabinet somewhere.
Crikey, what a lot of complaining!  But if you hear about some promising progress in speeding up databases, remember it's not realistic to hope the databases you use will ever see any kind of benefit from it.  The sad truth is, your database is still stuck in the 90s.  Async IO?  Huh no.  Compression?  Yeah right.  Syscalls?  Okay, that’s a Linux failing, but still!
Right now my hopes are on TimescaleDB.  I want to see how it copes with billions of rows of something that aren’t technically time-series.  That hybrid row and column approach just sounds so enticing.
Oh, and hopefully MyRocks2 might find something even better than SSTable for each tier?
But in the meantime, hopefully someone working on the Linux kernel will rediscover the batched syscalls idea…? ;)
faithfulnews · 5 years ago
Is the vastness of the universe evidence against God’s existence?
Christianity and the progress of science
Physicist Hugh Ross writes about it in Salvo Magazine.
First a quick blurb about Hugh Ross:
Hugh Ross launched his career at age seven when he went to the library to find out why stars are hot. Physics and astronomy captured his curiosity and never let go. At age seventeen he became the youngest person ever to serve as director of observations for Vancouver’s Royal Astronomical Society. With the help of a provincial scholarship and a National Research Council (NRC) of Canada fellowship, he completed his undergraduate degree in physics (University of British Columbia) and graduate degrees in astronomy (University of Toronto). The NRC also sent him to the United States for postdoctoral studies. At Caltech he researched quasi-stellar objects, or “quasars,” some of the most distant and ancient objects in the universe.
Now back to the topic “Is the vastness of the universe incompatible with God’s existence?”
Here’s Ross’ introduction:
Scientists seem more difficult to please than the golden-haired girl of fairy-tale fame. While Goldilocks troubled herself over the just-right porridge, chair, and bed, astronomers appear preoccupied with the size of the universe.
In the days before telescopes, when an observer could count a few thousand stars in the night sky, many considered the universe too small and unimpressive to be the work of an almighty, all-knowing Creator. Only an infinite cosmos, they said, would befit an infinite deity. But then, others argued, an infinite cosmos might eliminate the need for a Creator.
Thanks to the Hubble space telescope, scientists now see that the universe contains roughly 200 billion large- and medium-sized galaxies and about a hundred times as many dwarf galaxies. The stars in those galaxies add up to about fifty billion trillion, and they comprise a mere one percent of the mass of the observable universe.
Because of the travel time of light, the universe humans can observe is really the universe of the past. What researchers know about the expansion and geometry of the universe informs us that the universe of today is at least several hundred times more enormous than the universe we can see. The universe is trillions of trillions of times larger and more spectacular than what the earliest astronomers presumed!
And yet, this new knowledge of the vastness of the universe has led to new complaints. In his book, God: The Failed Hypothesis, Victor Stenger says, “If God created the universe as a special place for humanity, he seems to have wasted an awfully large amount of space.” Stephen Hawking, in the best-selling science book of all time, A Brief History of Time, shares Stenger’s view: “Our solar system certainly is a prerequisite for our existence. . . . But there does not seem to be any need for all these other galaxies.” So now the universe is too big to befit the all-wise, all-powerful God of the Bible.
I like how he quotes an atheist physicist to get the challenge right. No sense in caricaturing the claim of your opponent.
I formalized Stenger’s argument like this:
If all things in the universe are not done the way that Victor Stenger likes them, then there is no God.
All things in the universe were not done the way Victor Stenger likes them.
Therefore, there is no God.
I would deny premise 1 there, since there is no reason to believe that it's true.
Anyway, let’s see what Hugh Ross says:
The hot big bang model (now firmly established by observations) tells us that at the moment of cosmic creation, the universe was infinitely or near-infinitely hot and compressed, and all the ordinary matter existed in the form of hydrogen. As the universe expanded, it cooled. The rate at which the universe expanded and cooled depended in large part on its mass—the greater the mass, the slower the expansion and cooling rate. The slower the expansion and cooling rate, the more time the universe would spend in the temperature range (13–150 million degrees Centigrade) at which nuclear fusion can occur.
Because of its mass, the universe spent about twenty seconds in the nuclear fusion temperature range when it was between three and four minutes old. As a result, 24.77 percent of the universe’s hydrogen (by mass) fused into helium. Thus, when stars began to form—about 380,000 years later—they started off composed of about 75 percent hydrogen, 25 percent helium, and trace amounts of deuterium, lithium, and beryllium.
In the nuclear furnaces of the stars themselves, more hydrogen fused into helium, and, in addition to the extra helium, all the rest of the elements that appear in the periodic table were synthesized (created). The capacity of stellar nuclear furnaces to produce an abundance of elements heavier than helium (all but two of the elements) depended critically on how much of the universe’s initial hydrogen was fused into helium and heavier elements during the first several minutes after the cosmic creation event. How much fusion of the universe’s primordial hydrogen actually occurred at this time depended, in turn, on the universe’s mass or mass density.
If the universe’s mass (or cosmic mass density) had been even the slightest bit less than a hundred times the fifty billion trillion stars occupying the observable universe, nuclear fusion during the first several minutes of its existence would have proceeded less efficiently. Thus, the cosmos would have been forever incapable of generating elements heavier than helium—elements such as carbon, nitrogen, oxygen, phosphorus, sodium, and potassium—all of which are essential for any conceivable kind of physical life.
On the other hand, if the universe’s mass had been even the slightest bit greater, nuclear fusion during the first several minutes after its beginning would have been too productive, and all the hydrogen in the universe eventually would have been fused (after just two generations of stars) into elements as heavy as iron or heavier. Again, all the most life-essential elements, including hydrogen itself, would have ceased to exist.
Basically, your body is made up of heavier elements, and if the universe were not as massive as it is (and as old as it is), then there would not be enough heavy elements to make you, or to make stars like our Sun that burn steadily for long periods of time. We need the heavy elements and we need the steady source of heat.
Dr. Ross has another reason why God would use vast space and long periods of time, and if you want to read that, you can click here. I think it's important for all of us to get used to the idea that we need to understand science apologetics. God put these evidences into the universe for us to discover and use.
Go to the article
shirlleycoyle · 6 years ago
The Most Clever ‘Zip Bomb’ Ever Made Explodes a 46MB File to 4.5 Petabytes
In the roughly three decades since the Zip file format debuted, compressed files have been subject to widespread use—and occasional misuse.
Certainly, what programmer and engineer David Fifield recently did with the Zip file format fits in the latter category. Effectively, Fifield invented a new type of “Zip bomb” or “compression bomb”—a malware variant in which a Zip file as small as a few kilobytes is so compressed it would completely fill up your hard drive when decompressed. And then some.
Files like these aren’t unheard-of, but Fifield’s techniques, revealed in an article on his website, are particularly novel because they aren’t “recursive,” or built like an inverted Russian nesting doll where the files get bigger as you decompress multiple layers of Zip files. Instead, Fifield figured out how to “overlap” files inside of a Zip archive, allowing for compression rates far beyond those of a traditional archive—in his most impressive case, he managed to get a 46-megabyte file to compress 4.5 petabytes of data.
It’s just the latest twist in a decades-old variant of malware that isn’t a virus but could still do a number on your laptop.
How We Got Zip Bombs
So what makes Zip bombs possible in the first place? First, a little history and some explanation. The story of the Zip format dates to 1988, when Phil Katz, a programmer and BBS user, responded to a lawsuit over his creation of compression software compatible with the ARC format by creating a new compression algorithm that was incompatible with (and slightly better than) the existing format, but was openly distributed. This led the format to become a de facto standard.
(Katz, while seeing much success and notoriety from his creation, had significant issues in his personal life and died of alcoholism-related disorders in 2000 at the age of 37. His story is fascinating and tragic, if you want to dig in.)
The Zip format, and variants like it, worked well on bulletin boards and FTP servers, and became ubiquitous thanks to shareware and the format's eventual addition to the Windows and Macintosh operating systems. However, the compression approach made such archives exploitable in unique ways. Part of the reason Zip bombs are possible has much to do with the way compression algorithms work. At a high level, compression systems find repeated information in the source data and encode the repetition in the most minimal way possible. It's why an MP3 file, which is already compressed, doesn't shrink down as much as a standard text file might.
The standard allows the idea to be taken to a logical extreme. If a file is full of a trillion zeroes, it means that the compressed file can be extremely tiny even if the decompressed file is absolutely massive.
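As a quick illustration (not from the article), Python's zlib module uses the same DEFLATE compression found in Zip archives, and a run of zeroes shows just how extreme the ratio can get:

```python
import zlib

payload = b"\x00" * 10_000_000              # 10 MB of zeroes
packed = zlib.compress(payload, level=9)    # DEFLATE, the same algorithm Zip archives use
print(len(payload), "->", len(packed), f"(~{len(payload) // len(packed)}:1)")
```

On a highly repetitive input like this, the compressed size lands around 10 KB, a ratio on the order of a thousand to one.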
The result is that the file itself is basically useless except as a way to defeat some antivirus programs (which often scan compressed files to see if they can uncover viruses) and as something of an engineering exercise.
According to a presentation given at the USENIX Security Symposium in 2015, the first example of a “Zip bomb” in the wild dates to 1996, when a file was uploaded to the bulletin board network Fidonet with the specific goal of getting an admin to open it. (A text file regarding the incident suggests, to put it lightly, that it didn’t go over well.) By 2001, it had become a concern for mainstream security experts, when a thread on the info-sharing mailing list BugTraq drew the attention of both security researchers and the popular tech site The Register.
Soon after, the most famous example of a Zip bomb, 42.zip, emerged—its author to this day is unknown, but its 106 billion-to-one compression ratio is the stuff of legend. In an email interview, Fifield noted that, while 42.zip (which has been hosted on the same single-serving website for at least 15 years) gets much of the attention, he finds later attempts more technically interesting.
“I find 42.zip inspiring on an aesthetic level—not so much the file itself but the circumstances around it,” Fifield said. “It’s like folklore. There must have been many examples of the same basic idea, but for whatever reason 42.zip is the one that had staying power.”
Extreme Compression
The Zip bomb has, over the years, remained something of a malware novelty, one that pops up from time to time in different contexts. My favorite example is from 2017, when an IT blogger, sick of all the bots that were trying to log into his WordPress site, created a gzip-based Zip bomb variant that took advantage of the built-in compression in many web browsers to break the bots that were trying to get into his site.
What makes these files so effective is that they take advantage of the very things that make a compression effective and weaponize them—wasting CPU cycles, RAM, and disk space in the process.
But these files had a natural limitation: most Zip decompression routines max out at a compression ratio of 1032 to one, which meant that Zip bombs could only reach their true potential recursively. For example, 42.zip, just slightly larger than 42 kilobytes when fully compressed, contains nested Zip files layered in batches of 16 across multiple levels of a single archive. When fully decompressed, the contents take up 4.5 petabytes, a size so large that it would tax the highest-end server you could buy.
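That 1032 figure isn't arbitrary. DEFLATE, the compression method inside Zip, copies repeated data with back-references of at most 258 bytes, and in the best case each reference costs only about two bits, which roughly caps the ratio at:

```latex
\frac{258 \text{ bytes} \times 8 \text{ bits/byte}}{2 \text{ bits}} = 1032
```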
Later compression experiments, called Zip quines, allowed for infinite recursion, so you could basically keep unzipping and never reach the bottom. (Fifield said this format helped inspire his work.)
But what makes Fifield’s work interesting is that it blows past the 1032-to-one compression ratio by overlapping files on top of one another during the compression process, making for a more tightly packed file in a single layer, with no recursion. While his results are less compressed than 42.zip is, with an equivalent-sized file only reaching a somewhat modest 5.5 gigabytes when decompressed, the compression rate is much more significant at larger sizes—as large as 281 terabytes with a traditional 10-megabyte Zip file.
Moving beyond the original Zip spec produces even more impressive results: A 45-megabyte file, compressed using the somewhat less-common but more efficient Zip64 extension of the format, can reach an equivalent decompressed size to 42.zip without any recursion, and at a ratio of 98 million to one—not as tightly wound, sure, but certainly still enough to give your computer fits.
Fifield noted that part of what made his process possible was coming up with ways to handle cyclic redundancy checks, or CRCs, a basic error-detection feature baked into Zip, PNG, Ethernet, and numerous other technical standards. Messing around with CRC-32 checksums, as they're called, was where Fifield said he learned the most.
“If you dig into the source code commit history, you’ll see that some of the earliest commits are just me struggling with checksums,” he said in the email.
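CRC-32 itself is easy to experiment with outside of Zip; Python's zlib module exposes the same checksum, so a quick illustration (mine, not Fifield's) of how sensitive it is to a single changed byte looks like this:

```python
import zlib

data = b"hello, zip bomb"
print(f"{zlib.crc32(data):#010x}")                 # checksum of the original bytes
print(f"{zlib.crc32(b'Hello, zip bomb'):#010x}")   # one changed byte gives a completely different CRC
```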
Are They Actually Dangerous?
Now, to be fair, it’s unlikely that you’re going to run into a Zip bomb in the wild, and even if you do, many modern compression and antivirus programs can correctly detect that they’re problematic. It’s not exactly tricky to defeat a Zip bomb, either. “Detecting the specific class of Zip bomb we have developed in this article is easy: just look for overlapping files,” Fifield wrote in his article.
But since such detection doesn't exist in decompression tools at the moment, it created an opportunity for testing, and even then, the worst it did with many programs tested by Fifield and others (including LibreOffice and McAfee Antivirus) was cause them to time out. As pointed out by Twitter user Tavis Ormandy, a few antivirus programs even correctly detected it was a Zip bomb, despite the fact that it's a new kind of Zip bomb.
Fifield, who will present his findings at the USENIX Workshop on Offensive Technologies (WOOT) conference next month, noted that while the work itself adds to a history of research and likely will be superseded in the future, its benefit from an awareness standpoint is important.
“I hope that one of the benefits is more awareness among developers of the hazards of processing complex archive formats like Zip,” he added. “It helps to have some concrete outputs: code reviewers, customers, and users will at least be able to point to this research and ask, does it handle this correctly?”
kathleenseiber · 6 years ago
Check it out: The first image of a black hole
Astronomers have taken the first direct image of a black hole.
By connecting radio telescopes across five continents into one Earth-sized virtual telescope, they managed to resolve the shadow of a supermassive black hole, a prediction of Einstein’s General Theory of Relativity.
Black holes have been the realm of science fiction and abstract textbook diagrams for more than a century. No one had ever seen what was deemed an object that, by its very definition, cannot be seen. Ever. An object so alien that it defies our imagination, whose mere presence rips apart everything we think we know about the laws of physics, and so extreme it twists space and time and the very essence of our existence into a swirling, contorted maelstrom of… nothing.
Welcome to Messier 87, or M87, a massive galaxy in the Virgo galaxy cluster 55 million light-years from Earth. At its fuzzy, glowing center sits a supermassive black hole, with a mass 6.5 billion times that of our sun.
Taking a direct image of the black hole in M87 is a feat that would not have been possible a decade ago. Even though the object is about the size of our solar system, it is so far away that resolving its features across 55 million light-years is like “taking a picture of a doughnut placed on the surface of the moon,” according to EHT Project Scientist Dimitrios Psaltis, a professor of astronomy and physics at the University of Arizona.
A huge ‘virtual telescope’
Supermassive black holes are relatively tiny astronomical objects, which has made them impossible to directly observe until now. The size of a black hole is proportional to its mass, so the more massive a black hole, the larger the shadow. Scientists now think that black holes with masses that are millions or even billions of times that of the sun lie at the centers of most galaxies, with some outshining all of the other stars combined.
“We’re seeing the unseeable.”
Thanks to its enormous mass and relative proximity, M87’s black hole was predicted to be one of the largest viewable from Earth, making it a perfect target for the EHT. Eight telescopes came together into one virtual telescope as big as Earth, offering unprecedented sensitivity and resolution.
The EHT observations use a technique called very-long-baseline interferometry, or VLBI, which synchronizes telescope facilities around the world and exploits the rotation of our planet to form one huge telescope observing at a wavelength of 1.3mm.
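As a rough sanity check (my arithmetic, not the article's), the diffraction limit of an Earth-sized dish observing at 1.3 mm is the wavelength divided by the baseline, which lands in the tens of microarcseconds and is what makes the "doughnut on the moon" comparison plausible:

```python
import math

wavelength = 1.3e-3                 # observing wavelength in metres (1.3 mm)
baseline = 1.27e7                   # approximate diameter of Earth in metres
theta_rad = wavelength / baseline   # rough diffraction limit of the virtual dish
theta_uas = math.degrees(theta_rad) * 3600 * 1e6
print(f"~{theta_uas:.0f} microarcseconds")   # on the order of 20 microarcseconds
```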
“The observations were a coordinated dance in which we simultaneously pointed our telescopes in a carefully planned sequence,” says Daniel Marrone, associate professor of astronomy at the University of Arizona. Marrone traveled to Antarctica with graduate student Junhan Kim several times to integrate the South Pole Telescope into the EHT array. “To make sure these observations were truly simultaneous, so that we could see the same wavefront of light as it landed on each telescope, we used extremely precise atomic clocks at each of the telescopes.”
The South Pole Telescope. (Credit: Junhan Kim/U. Arizona)
After two weeks of observing, the researchers collected about 5,000 trillion bytes of data on 1,000 disks—enough to keep a playlist of high-quality mp3 files playing for 4,700 years. The data were flown to supercomputers known as correlators at the Max Planck Institute for Radio Astronomy in Bonn, Germany, and the Massachusetts Institute of Technology’s Haystack Observatory in Westford, Massachusetts. There, VLBI experts distilled the raw data to a more usable volume.
Following calibration of the data, members of the EHT collaborated to reconstruct the black hole images. Working in separate groups using different methods, they all arrived at the same result.
Still, how can an object so dense that nothing, not even light, can escape its gravitational pull be seen? As ferocious as they are, black holes are extremely compressed cosmic objects, containing incredible amounts of mass within a tiny region. While the black hole itself is invisible, it gives away its presence by warping spacetime and superheating any surrounding material.
A simulated image shows the turbulent plasma in the extreme environment around a supermassive black hole. (Credit: U. Arizona)
Capturing the shadow
As surrounding gas and dust plunge into the abyss of space and time, and entire stars are shredded into wispy swirls, matter piles up around the black hole, forming a so-called accretion disk of plasma—charged particles heated to billions of degrees and accelerated to almost the speed of light. Due to the black hole’s massive gravity, light gets bent around it, creating a tell-tale photon ring, the appearance of which is predicted by Einstein’s equations. If Einstein’s equations are correct, a dark region should appear in the center, caused by the absence of light captured by the black hole.
An image of the shadow of a black hole is the closest thing to an image of the black hole itself, a completely dark object from which light cannot escape. In the case of M87, the shadow appears around 2.5 times larger than the true size of the black hole’s boundary—the event horizon from which the EHT takes its name—due to light bending. For M87, the horizon should be just under 25 billion miles across, about three times the size of Pluto’s orbit.
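A quick back-of-the-envelope check of that figure (again my arithmetic, not the article's): treating the boundary as the Schwarzschild radius of a 6.5-billion-solar-mass black hole gives roughly the same number:

```python
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8              # speed of light, m/s
M = 6.5e9 * 1.989e30     # 6.5 billion solar masses, in kg

r_s = 2 * G * M / c**2                                      # Schwarzschild radius, metres
print(f"~{2 * r_s / 1.609e12:.0f} billion miles across")    # roughly 24, close to "just under 25"
```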
Multiple independent EHT observations and imaging methods have revealed a ring-like structure with a dark central region—the black hole’s shadow.
"This is a huge day in astrophysics," says France Córdova, director of the National Science Foundation, which funded the work. "We're seeing the unseeable. Black holes have sparked imaginations for decades. They have exotic properties and are mysterious to us. Yet with more observations like this one they are yielding their secrets. This is why NSF exists. We enable scientists and engineers to illuminate the unknown, to reveal the subtle and complex majesty of our universe."
"The Event Horizon Telescope allows us for the very first time to test the predictions of Einstein's General Theory of Relativity around supermassive black holes in the centers of galaxies," Psaltis says. "The predicted size and shape of the shadow match our observations remarkably well, increasing our confidence in this century-old theory."
The Submillimeter Telescope magnifies the evening sky as the sun sets on Mount Graham near Tucson, Arizona. (Credit: Paul Schulz)
‘Natural laboratories’
“Black holes are natural laboratories that the universe provides us with. Everything about them is extreme,” says Feryal Ozel, a member of the EHT Science Council and professor of astronomy and physics at the University of Arizona. “Take quantum mechanics and our theory of gravity, for example: They are not compatible, and around a black hole, they might even break down completely. This is what we have aimed to test with the EHT.”
Ozel led the Modeling and Analysis Working Group, which had the task of measuring the size of the black hole’s shadow and comparing the predictions of Einstein’s theories to the EHT observations. Once EHT scientists were sure they had imaged the shadow, they compared their observations to extensive computer models that include the physics of warped space, superheated matter, and strong magnetic fields.
In addition to building and maintaining the EHT cloud infrastructure, the team generated high-fidelity models of EHT observations by harnessing the power of a supercomputer consisting of 140 Nvidia graphics-processing units, or GPUs, commonly used for graphics-heavy video-gaming applications.
The team developed the first code that traced the path of light and simulated its encounters with black holes to compute realistic images of black holes. The time-dependent simulations of the black hole’s shadow as predicted by general relativity are hosted in the CyVerse, with headquarters at the university. They help scientists visualize light trapped in the fabric of space and time, curving around the black hole by its massive gravity, to study how black holes interact with nearby matter.
“New technologies such as cloud computing are essential to support international collaborations like this,” says Chi-kwan Chan, leader of the EHT Computations and Software Working Group and an assistant astronomer at the university. Chan built and maintains the cloud infrastructure for the EHT. “When we were working on the six papers, we had over 20 powerful virtual machines running in two Google data centers—one on the East Coast and one on the West Coast—to serve the EHT members’ computation needs across the globe.”
The EHT consortium involves more than 200 researchers from Africa, Asia, Europe, North and South America, and consists of 13 stakeholder institutes (listed alphabetically): the Academia Sinica Institute of Astronomy and Astrophysics; the University of Arizona; the University of Chicago; the East Asian Observatory; Goethe-Universität Frankfurt; Institut de Radioastronomie Millimétrique; Large Millimeter Telescope; Max Planck Institute for Radio Astronomy; MIT Haystack Observatory; National Astronomical Observatory of Japan; Perimeter Institute for Theoretical Physics; Radboud University; and the Smithsonian Astrophysical Observatory.
The work involved a network of telescopes around the world: the South Pole Telescope; the Submillimeter Telescope on Mount Graham in southeastern Arizona; the Atacama Large Millimeter Array, or ALMA, and the Atacama Pathfinder Experiment, or APEX, both in the Chilean Atacama Desert; the IRAM 30-meter telescope in the Spanish Sierra Nevada; the James Clerk Maxwell Telescope and the Submillimeter Array in Hawaiʻi; and the Large Millimeter Telescope Alfonso Serrano in Mexico.
This research appears in a series of six papers published today in a special issue of The Astrophysical Journal Letters.
Source: University of Arizona
bethanygraphicmedia · 7 years ago
Class Lecture Wk 2
TV, Radio, Film and Music
To begin with, the first means of electronic communication to come into public / consumer use was radio, in the first quarter of the 20th century. Radio was a revolutionary medium with the ability to communicate and transfer audio information quickly over fairly long distances. Around this same time, recorded music was fast becoming a very popular sound-based entertainment. The gramophone's Bakelite record / disk format allowed music to be mass produced and broadcast. During this time, early motion-picture technologies were laying the groundwork for the up-and-coming film industry. As with the gramophone disk, motion-picture cameras and projectors allowed the production and mass distribution of moving pictures. Film revolutionised the public's access to news and media. It wasn't until a few decades later, in the 1930s, that television brought the ability to broadcast motion pictures. Due to the cost of this new technology, though, it wasn't until the 1950s that having a television in the living room became commonplace, and even then not everyone had one. Broadcast and media formats made a pretty steady evolution through the 20th century.
Radio soon progressed from AM (Amplitude Modulation) to FM (Frequency Modulation) and on to DAB (Digital Audio Broadcasting). Audio formats have also moved forward in terms of quality, recordability, storage capacity, packaging and accompanying information. Audio tapes made way for mixtapes, MiniDisc players and CDs. The nature of film has advanced in the quality of recording, editing mediums, end-user technology and video formats. Similarly, television has benefited from a whole combination of improvements in broadcasting technology, better recording equipment and end-user technology. But many of the recent advancements in computer technology, and the invention of the internet, have greatly accelerated the evolution of these media.
New Media
In the mid-1990s the MP3 was released to the public domain, proving to be highly popular. One of the first digital audio formats to incorporate 'lossy compression', the MP3 allowed a large amount of music to be squeezed down to a much smaller size, at the cost of reducing the overall quality of the sound. Around the same time that MP3s were coming into wide use, the internet was also making its way across the world. As a result of these two arriving at a similar time, a number of internet sites soon turned up offering legal downloads. Artists and bands quickly noticed the combined benefits of easy-to-make music and the wider-reaching distribution networks.
Napster was one of the first online peer-to-peer services. It ran for two consecutive years in its original form, enabling millions of users to illegally share billions of music tracks. Napster and other peer-to-peer sites paved the way for music distribution. Today, nearly a decade later, downloading music on a regular basis is the norm. While many people still decide to download illegally, legal download services such as iTunes, Amazon and Napster are now beginning to take on an increasing market share of music sales. Whilst it is unlikely that illegal music sharing will ever cease completely, the music industry is making some headway in recouping the massive losses that the technology revolution has caused. More and more people are now downloading music legally as a result of the increasing availability of services such as iTunes, Amazon, HMVdigital and many more. As a result of these increasing digital downloads, though, sales of hard-copy albums have fallen dramatically. Apart from purchasing music as a file, another option is to use streaming applications such as YouTube. Music sharing of this type generates revenue through advertising or monthly payment schemes. This network-platform approach to the circulation of music suggests that we are moving more towards accessing music rather than owning it. The latest advancements in mobile and wireless internet coverage are, in some ways, rewriting the rules. Scaremongering about our increasing use of the internet is also ever on the increase.
As with the music industry, both the television and film industries have been greatly affected by the internet. In our age of super-fast broadband, even large video files pose little or no barrier to the online distribution of moving content. As music files have been, and still are, illegally shared, so too are television and film. The online sharing of movies poses an ever-growing problem. Because of the relatively short commercial life span of movies, the impact of early sharing, or even pre-release sharing, can be extremely damaging for the industry and the makers. The film industry also faces the additional threat of online video hosting sites. As with viral videos, the posting of pre-release movies can spread at an alarming rate across the internet. This affects the sales of cinema showings, Blu-rays and DVDs. As a way of combating these losses, the film industry is concentrating its efforts on improving the cinema experience. High-quality projection and digital surround sound can make a visit to the cinema an unreproducible experience; one of their many efforts is 4D cinema, where the chairs move according to the movie and some even spray water at you, trying to make you feel like you're in / part of the movie.
New cutting-edge technology such as the Red One digital camera system is pushing the boundaries of picture quality. In the meantime, in an attempt to recapture revenue from sales of hard copies, some film companies have recently been moving towards combining high-end animation with 3D projection. As with music, sharing personal information and tastes in relation to what we stream and watch is now becoming commonplace. For instance, Gmail / Google delves into our emails to gain information about our habits, e.g. where we mostly purchase things from, what devices we use in doing so, and where we were at the time of purchase, all as part of improving their advertising for consumers. In addition to scrobbling and free film streaming, a growing market of pay-as-you-view downloads is now doing the same for movies as it previously did for music, pushing consumers towards legal, paid-for services.
Movies, as with music and television programmes, are now mostly delivered to us over the internet, with no more need for aerials. The past five years have seen a significant shift in media within the field of television: the availability of television programmes 'on demand' and on a variety of devices. Television scheduling has long been an important aspect of delivering programmes to the masses, with programmes placed in dedicated time slots depending on issues such as audience age, the show's popularity, content suitability, etc. Advertising plays a huge role in promoting a company's products.
'On-demand' television removes the structure of scheduling. However, advertisements are now being used on online content, with YouTube being a prime example: the advertisements are positioned in a way that you have to watch them if you want to watch what you searched for, and only a few give you the option to skip. 'On-demand' television is currently the less popular sibling of broadcast television, but that is expected to change with time. In some instances, as with Google, television services search through our previous viewing to place advertisements that relate to you and your interests, based on what you have previously watched.
Online television can be accessed via a large variety of devices, e.g. TV, console, laptop, hand-held devices, etc. The content can be streamed over an internet connection or downloaded for offline viewing. Such a wide variety of end-user devices brings many additional considerations in terms of streaming conditions and screen size. As well as providing the ability to watch any programme at any time, online television also offers interactive elements and the possibility of linking to additional information via the web. Technologies such as Sky Plus have revolutionised the way we watch television, putting the needs of the consumer first.
jpweb12 · 7 years ago
Declining Attention Spans And Your Design and Website Strategy: 9 Tips
Declining attention spans are one of the biggest threats to your website.
In fact, a Microsoft study found that the average human attention span is shorter than that of a goldfish. More surprisingly, a recent Jampp study found that most mobile app users suffer an 88 percent decrease in attention span on an annual basis.
As a webmaster, it is important to pay special attention to these statistics. We, as humans, are becoming increasingly impatient, and this can have great implications for your business. In fact, the following statistics show just how dangerous having a slow website can be:
Slow loading websites cost the U.S. ecommerce market more than $500 billion annually
A one second delay in your site load time can cost you 7 percent in conversions
40 percent of web users will abandon a website that takes longer than 3 seconds to load
A site speed increase from 8 to 2 seconds can boost your conversion rate by 74 percent
Google now uses site speed as a ranking factor
What a Slow Website Means for Your Business
If you have a slow website, here’s the impact it will have on your business:
It will cost you visitors: If you feel you’ve been doing your best but can’t seem to retain visitors to your website, perhaps it is time to address your website speed. As highlighted in the statistics above, at least 40 percent of visitors to your website will stop visiting if your website takes longer than 3 seconds to load.
It will cost you sales: Improving your website speed can dramatically boost sales on your website, even by as much as 74 percent as revealed in one of the statistics earlier referenced. It has also been established that a one second improvement in site speed can boost conversions by 7 percent.
It will cost you search engine rankings: Google has announced that they use site speed as a ranking factor and that faster sites will always outrank sites that are slow. If you’re not experiencing an improvement in your search engine rankings despite all your SEO efforts, then perhaps site speed is a culprit.
Thankfully, all hope is not lost. If your website is not fast enough, here are 9 practical tips guaranteed to insulate your website from declining attention spans:
Get a Good Web Host
Your web host is the foundation of all your site optimization efforts; your web host controls the server infrastructure and configuration — and these are factors that impact your website speed more than anything else. I’ve had great experience with both Bluehost and Hostgator, and I did a comparison here.
Get a Better Theme
You’ll be amazed to find out that many websites are ridiculously slow due to bloated code and poorly-constructed themes. More often than not, outside your web host, your website theme will determine your website speed.
Enable GZIP Compression for your Website
In an article for Smashing Magazine, Marcus Taylor reported being able to reduce the size of his client's website from 63kb to 13kb. That's roughly an 80 percent reduction in size (the site became almost five times smaller), resulting in a massive boost in site load time, since a user's browser has to download much less. What did he do differently? He enabled GZIP compression on his client's website.
GZIP works in a similar way to ZIP compression on computers, but for websites: content that would ordinarily be large can be reduced to a much smaller compressed payload before it is sent, resulting in faster load times for your site visitors.
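Enabling GZIP is normally a one-line setting on your web host or server rather than something you code yourself, but if you want to see the effect, here's a rough illustration using Python's gzip module on some repetitive made-up HTML:

```python
import gzip

html = b"<div class='post'><p>Lorem ipsum dolor sit amet, consectetur.</p></div>" * 500
packed = gzip.compress(html, compresslevel=6)   # a typical server-side compression level
print(len(html), "->", len(packed), "bytes")    # repetitive markup shrinks dramatically
```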
Optimize and Compress Your Images
Images alone can make your site a lot slower; often, a site is seven to ten times heavier, or more, because of the images used on it. The solution is to compress the images used on your website drastically; thankfully, there's a way to do this without the quality of your images suffering much. If you use WordPress, the plugins Smush.it and/or EWWW Image Optimizer can help with compressing your images. If you don't use WordPress, you should check out the image compression tool offered by Kraken.io.
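If you'd rather script it yourself, the Pillow imaging library can do a similar job; this is just a hedged sketch with made-up filenames and an arbitrary quality setting:

```python
from PIL import Image   # Pillow; install with `pip install Pillow`

img = Image.open("hero.jpg")                    # hypothetical oversized source image
img.thumbnail((1600, 1600))                     # cap the dimensions; keeps the aspect ratio
img.save("hero-optimized.jpg", quality=80, optimize=True)   # re-encode more aggressively
```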
Combine Your Images into CSS Sprites
Oftentimes the number of requests a user's browser sends to your server will influence how long it takes to load your website; the more requests sent, the slower your website will be for visitors. Background images can result in a lot of server requests, especially if they are hosted individually and have to be requested individually from your server, which eventually results in a much slower website. The solution is to combine your images into sprites: when several individual images are combined into one main image, the end result is a sprite. Once your sprite is ready, you can use CSS commands to tell your user's browser to display various parts of that image individually.
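Sprites are usually produced by a build tool, but as a rough illustration, here's one way to stitch a few hypothetical 32x32 icons into a single sprite with Pillow; the CSS offsets mentioned in the comment are the part your stylesheet would supply:

```python
from PIL import Image   # Pillow; install with `pip install Pillow`

icons = ["home.png", "search.png", "cart.png"]     # hypothetical 32x32 icon files
sprite = Image.new("RGBA", (32 * len(icons), 32))
for i, name in enumerate(icons):
    sprite.paste(Image.open(name), (i * 32, 0))    # each icon sits at x = i * 32
sprite.save("sprite.png")
# In CSS, each element then shows one icon, e.g. background-position: -64px 0 for the third icon.
```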
Use a CDN to Ensure Consistently Fast Access to Your Site Across All Locations
A CDN is a network of proxy servers in multiple locations that automatically serves users the copy of your website closest to their location; for example, with a CDN, it isn't unusual to have a version of your website hosted on every continent. So someone trying to visit your website from India will be served a version of your website hosted in Asia instead of one hosted in North America.
Two great CDN options are CloudFlare (both free and paid) and MaxCDN (paid).
Focus on Creating a Better User Experience
While making your website faster will solve some of the problems, ensuring you’re better able to retain people on your site goes beyond that. It is essential to also work on creating a better user experience for your audience; when people visit your website, what happens? Your goal should be to ensure they easily find what they want, and that their experience with your website is satisfactory. Here are a few tips:
Pay special attention to your design; an appealing design won’t just make people love visiting your website, it will also make them likely to take the desired action you want them to.
Besides having clean design, user experience matters; let your users inform the direction of your site. Embrace usability best practices to ensure your site is easy to use.
Introduce Elements of Social Proof
Thanks to technology and increasingly declining attention spans, we no longer have the patience to waste time with meaningless tasks. How do you then convince people to pay more attention to your website? By letting them know that they are not alone; research has shown that social proof is one of the biggest factors that influence human decisions; when we see that other people, like us, are doing something, we are more likely to do the same thing.
If you have a major achievement, or if you have a record number of people using your products, do not hesitate to indicate it on your website.
Don’t Waste People’s Time; Create a Point of Focus
This is especially important if your goal is to get people to take action. Don't waste their time by rambling and beating about the bush; it also won't help to present them with more options than necessary. Instead, create a focus point: make it clear what you want them to do, and ensure that's your focus. Conversion optimization experts recommend focusing on one goal per page; do that, and, by preventing distractions, you'll be able to get people to focus more on your offer.
regulardomainname · 7 years ago
Text
Declining Attention Spans And Your Design and Website Strategy: 9 Tips
Declining attention spans is one of the biggest threats to your website. In fact, a Microsoft study found that the average human attention span is shorter than that of a goldfish. More surprisingly, a recent Jampp study found that most mobile app users suffer an 88 percent decrease in attention span on an annual basis. As a webmaster, it is important to pay special attention to these statistics. We, as humans, are becoming increasingly impatient, and this can have great implications for your business. In fact, the following statistics show just how dangerous having a slow website can be: * Slow loading websites cost the U.S. ecommerce market more than $500 billion annually * A one second delay in your site load time can cost you 7 percent in conversions * 40 percent of web users will abandon a website that takes longer than 3 seconds to load * A site speed increase from 8 to 2 seconds can boost your conversion rate by 74 percent * Google now uses site speed as a ranking factor What a Slow Website Means for Your Business If you have a slow website, here’s the impact it will have on your business: It will cost you visitors: If you feel you’ve been doing your best but can’t seem to retain visitors to your website, perhaps it is time to address your website speed. As highlighted in the statistics above, at least 40 percent of visitors to your website will stop visiting if your website takes longer than 3 seconds to load. It will cost you sales: Improving your website speed can dramatically boost sales on your website, even by as much as 74 percent as revealed in one of the statistics earlier referenced. It has also been established that a one second improvement in site speed can boost conversions by 7 percent. It will cost you search engine rankings: Google has announced that they use site speed as a ranking factor and that faster sites will always outrank sites that are slow. If you’re not experiencing an improvement in your search engine rankings despite all your SEO efforts, then perhaps site speed is a culprit. Thankfully, all hope is not lost. If your website is not fast enough, here are 9 practical tips guaranteed to insulate your website from declining attention spans: * Get a Good Web Host Your web host is the foundation of all your site optimization efforts; your web host controls the server infrastructure and configuration — and these are factors that impact your website speed more than anything else. I’ve had great experience with both Bluehost and Hostgator, and I did a comparison here. * Get a Better Theme You’ll be amazed to find out that many websites are ridiculously slow due to bloated code and poorly-constructed themes. More often than not, outside your web host, your website theme will determine your website speed. * Enable GZIP Compression for your Website In an article for Smashing Magazine, Marcus Taylor reported being able to reduce the size of his client website from 63kb to 13kb. That’s basically around 400 percent reduction in the site size, resulting in a massive boost in site load time since a user’s browser has to download much less. What did he do differently? He enabled GZIP compression on his client’s website. GZIP works in a similar way to ZIP compression on computers, but for websites; a group of files that will ordinarily be big when their individual size is added together can instantly be reduced to one small ZIP file, resulting in a faster site load time and an increase in site speed for your site visitors. 
* Optimize and Compress Your Images Images alone can make your site a lot more slower; often, a site could be seven to ten times or more heavier due to images used on the website. The solution to this is to compress the images used on your website drastically; thankfully, there’s a way to do this without the quality of your images suffering much. If you use WordPress, the plugins Smush.it and/or EWWW Image Optimizer can help with compressing your images. If you don’t use WordPress you should check out the image compression tool offered by Kraken.io. * Combine Your Images into CSS Sprites Often times the number of requests a user’s browser sends to your server will influence how long it takes to load your website; the more requests sent, the slower your website will be for visitors. Background images can result in a lot of server requests, especially if they are hosted individually and have to be requested individually from your server — this eventually results in a much slower website. The solution to this is to combine your images into Sprites; basically, when several individual images are combined into one main image, the end result is a Sprite. Once your Sprite is ready, you can use CSS commands to tell your user’s browser to interpret various parts of that image individually. * Use a CDN to Ensure Consistently Fast Access to Your Site Across All Locations A CDN is a network of proxy servers in multiple locations that automatically serve users a version of your website closest to their location; for example, with a CDN, it isn’t unusual to have a version of your website hosted in all continents. So, someone trying to visit your website from India will be served a version of your website hosted in Asia instead of a version hosted in North America. Two great CDN options are CloudFlare (both free and paid) and MaxCDN (paid). * Focus on Creating a Better User Experience While making your website faster will solve some of the problems, ensuring you’re better able to retain people on your site goes beyond that. It is essential to also work on creating a better user experience for your audience; when people visit your website, what happens? Your goal should be to ensure they easily find what they want, and that their experience with your website is satisfactory. Here are a few tips: * Pay special attention to your design; an appealing design won’t just make people love visiting your website, it will also make them likely to take the desired action you want them to. * Besides having clean design, user experience matters; let your users inform the direction of your site. Embrace usability best practices to ensure your site is easy to use. * Introduce Elements of Social Proof Thanks to technology and increasingly declining attention spans, we no longer have the patience to waste time with meaningless tasks. How do you then convince people to pay more attention to your website? By letting them know that they are not alone; research has shown that social proof is one of the biggest factors that influence human decisions; when we see that other people, like us, are doing something, we are more likely to do the same thing. If you have a major achievement, or if you have a record number of people using your products, do not hesitate to indicate it on your website. * Don’t Waste People’s Time; Create a Point of Focus This is especially important if your goal is to get people to take action. 
* Use a CDN to Ensure Consistently Fast Access to Your Site Across All Locations. A CDN is a network of proxy servers in multiple locations that automatically serves each visitor from the location closest to them; with a CDN it isn't unusual to have copies of your website hosted on every continent, so someone visiting from India is served from Asia rather than North America. Two popular options are CloudFlare (both free and paid) and MaxCDN (paid).
* Focus on Creating a Better User Experience. Making your website faster solves part of the problem, but retaining people goes beyond speed. When people visit your website, your goal should be to ensure they easily find what they want and that their experience is satisfactory. A few tips: pay special attention to your design, since an appealing design not only makes people enjoy visiting your site but also makes them more likely to take the action you want them to; and beyond clean design, embrace usability best practices and let your users inform the direction of your site.
* Introduce Elements of Social Proof. With attention spans shrinking, people no longer have the patience for anything that looks like a waste of time, so how do you convince them to pay attention to your website? By letting them know they are not alone. Research has shown that social proof is one of the biggest factors influencing human decisions: when we see other people like us doing something, we are more likely to do the same thing. If you have a major achievement, or a record number of people using your products, do not hesitate to say so on your website.
* Don't Waste People's Time; Create a Point of Focus. This is especially important if your goal is to get people to take action. Don't waste their time by rambling or beating about the bush, and don't present them with more options than necessary. Instead, create a focal point: make it clear what you want them to do, and keep that the focus. Conversion optimization experts recommend focusing on one goal per page; do that, and by preventing distractions you'll get people to focus more on your offer.

Read More at Declining Attention Spans And Your Design and Website Strategy: 9 Tips http://dlvr.it/PrXMGj www.regulardomainname.com
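As a quick postscript to the speed statistics above: before optimizing anything, measure. Here is a minimal Python sketch that times a single page fetch; the URL is a placeholder, and this captures only server response and transfer time, not rendering, images, or scripts:

```python
import time
import urllib.request

URL = "https://www.example.com/"  # placeholder: substitute your own homepage

start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=10) as response:
    body = response.read()
elapsed = time.perf_counter() - start

print(f"Fetched {len(body)} bytes in {elapsed:.2f} seconds")
if elapsed > 3:
    print("That is past the 3-second mark at which roughly 40 percent of visitors give up.")
```

Browser developer tools give a fuller picture, but even a crude number like this tells you whether you have a problem worth fixing.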
0 notes
faithfulnews · 5 years ago
Text
Is the vastness of the universe evidence against God’s existence?
Tumblr media
Christianity and the progress of science
Physicist Hugh Ross writes about it in Salvo Magazine.
First a quick blurb about Hugh Ross:
Hugh Ross launched his career at age seven when he went to the library to find out why stars are hot. Physics and astronomy captured his curiosity and never let go. At age seventeen he became the youngest person ever to serve as director of observations for Vancouver’s Royal Astronomical Society. With the help of a provincial scholarship and a National Research Council (NRC) of Canada fellowship, he completed his undergraduate degree in physics (University of British Columbia) and graduate degrees in astronomy (University of Toronto). The NRC also sent him to the United States for postdoctoral studies. At Caltech he researched quasi-stellar objects, or “quasars,” some of the most distant and ancient objects in the universe.
Now back to the topic “Is the vastness of the universe incompatible with God’s existence?”
Here’s Ross’ introduction:
Scientists seem more difficult to please than the golden-haired girl of fairy-tale fame. While Goldilocks troubled herself over the just-right porridge, chair, and bed, astronomers appear preoccupied with the size of the universe.
In the days before telescopes, when an observer could count a few thousand stars in the night sky, many considered the universe too small and unimpressive to be the work of an almighty, all-knowing Creator. Only an infinite cosmos, they said, would befit an infinite deity. But then, others argued, an infinite cosmos might eliminate the need for a Creator.
Thanks to the Hubble space telescope, scientists now see that the universe contains roughly 200 billion large- and medium-sized galaxies and about a hundred times as many dwarf galaxies. The stars in those galaxies add up to about fifty billion trillion, and they comprise a mere one percent of the mass of the observable universe.
Because of the travel time of light, the universe humans can observe is really the universe of the past. What researchers know about the expansion and geometry of the universe informs us that the universe of today is at least several hundred times more enormous than the universe we can see. The universe is trillions of trillions of times larger and more spectacular than what the earliest astronomers presumed!
And yet, this new knowledge of the vastness of the universe has led to new complaints. In his book, God: The Failed Hypothesis, Victor Stenger says, “If God created the universe as a special place for humanity, he seems to have wasted an awfully large amount of space.” Stephen Hawking, in the best-selling science book of all time, A Brief History of Time, shares Stenger’s view: “Our solar system certainly is a prerequisite for our existence. . . . But there does not seem to be any need for all these other galaxies.” So now the universe is too big to befit the all-wise, all-powerful God of the Bible.
I like how he quotes an atheist physicist to get the challenge right. No sense in caricaturing the claim of your opponent.
I formalized Stenger’s argument like this:
1. If all things in the universe are not done the way that Victor Stenger likes them, then there is no God.
2. All things in the universe were not done the way Victor Stenger likes them.
3. Therefore, there is no God.
I would deny premise 1 there, since there is no reason to believe that it's true.
Anyway, let’s see what Hugh Ross says:
The hot big bang model (now firmly established by observations) tells us that at the moment of cosmic creation, the universe was infinitely or near-infinitely hot and compressed, and all the ordinary matter existed in the form of hydrogen. As the universe expanded, it cooled. The rate at which the universe expanded and cooled depended in large part on its mass—the greater the mass, the slower the expansion and cooling rate. The slower the expansion and cooling rate, the more time the universe would spend in the temperature range (13–150 million degrees Centigrade) at which nuclear fusion can occur.
Because of its mass, the universe spent about twenty seconds in the nuclear fusion temperature range when it was between three and four minutes old. As a result, 24.77 percent of the universe’s hydrogen (by mass) fused into helium. Thus, when stars began to form—about 380,000 years later—they started off composed of about 75 percent hydrogen, 25 percent helium, and trace amounts of deuterium, lithium, and beryllium.
In the nuclear furnaces of the stars themselves, more hydrogen fused into helium, and, in addition to the extra helium, all the rest of the elements that appear in the periodic table were synthesized (created). The capacity of stellar nuclear furnaces to produce an abundance of elements heavier than helium (all but two of the elements) depended critically on how much of the universe’s initial hydrogen was fused into helium and heavier elements during the first several minutes after the cosmic creation event. How much fusion of the universe’s primordial hydrogen actually occurred at this time depended, in turn, on the universe’s mass or mass density.
If the universe’s mass (or cosmic mass density) had been even the slightest bit less than a hundred times the fifty billion trillion stars occupying the observable universe, nuclear fusion during the first several minutes of its existence would have proceeded less efficiently. Thus, the cosmos would have been forever incapable of generating elements heavier than helium—elements such as carbon, nitrogen, oxygen, phosphorus, sodium, and potassium—all of which are essential for any conceivable kind of physical life.
On the other hand, if the universe’s mass had been even the slightest bit greater, nuclear fusion during the first several minutes after its beginning would have been too productive, and all the hydrogen in the universe eventually would have been fused (after just two generations of stars) into elements as heavy as iron or heavier. Again, all the most life-essential elements, including hydrogen itself, would have ceased to exist.
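A quick back-of-the-envelope check on the numbers in that quote (my own arithmetic, ignoring the sliver of mass that fusion converts to energy): if the universe starts out as essentially pure hydrogen and 24.77 percent of it by mass fuses into helium, the leftover mix is

$$ 1 - 0.2477 = 0.7523 \approx 75\% \ \text{hydrogen}, \qquad 0.2477 \approx 25\% \ \text{helium}, $$

which is exactly the roughly 75 percent hydrogen, 25 percent helium composition Ross says the first stars started with.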
Basically, your body is made up of heavier elements, and if the universe were not as massive as it is (and as old as it is), then there would not be enough heavy elements to make you, or to make stars like our Sun that burn steadily for long periods of time. We need the heavy elements, and we need the steady source of heat.
Dr. Ross has another reason why God would use vast space and long periods of time, and if you want to read that, you can click here. I think it's important for all of us to get used to the idea that we need to understand science apologetics. God put these evidences into the universe for us to discover and use.
Go to the article
0 notes