#''i cant use the simulation program until I have a direction in mind :(''
Explore tagged Tumblr posts
Text
i got too indecisive to go straight to gplates
#''i cant use the simulation program until I have a direction in mind :(''#spends 2 hours after i needed to go to bed to bullshit a rough draft of the entire tectonic history of this planet#REALLY ROUGH very very very subject to change holy sht i forgort to keep track of oceanic crus#Con stop yapping#HELL ON EARTH (NOT ON EARTH)#KIRBY RAMBLES
12 notes
Text
Day four peeps!
This fic was cross-posted on AO3 here
Love Means War
Cattle Prod | Shock | "You in there?"
Fandom: My Hero Academia
Words: 949
Warnings: human experimentation, child experimentation, child abuse, PTSD, childhood trauma
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I had done it! I made it into UA!
I was excited to have made it in. I made it in, I would receive the official training which would give me better access to heroes and hero missions, I would be able to coordinate the rescue I’d been anxious about for over a year now…
I did it guys, you’re on your way to going back home.
I walked into the area for today’s training period. Some of my classmates had already gathered there. I stood by a brunette, she was very nice. Quiet chatter throughout the room, too much excitement for any of us to really settle in any capacity.
The teacher walked in. I was about to greet them when my heart stopped at who I saw. All Might.
Oh shit, oh fuck, fuck fuck fuck I forgot he was a pro here.
He explained what we would be doing. A simple hero vs villain simulation, everyone else would observe through some cameras planted throughout the fake city.
I was up first.
The brunette next to me got called with us too. Easy breaths, easy breaths, you’ll be fine, he cant do anything to us here.
I prayed he wouldn’t recognize me. The room felt too small.
I ended up in the villains group first.
Thank the gods the cameras didnt have mics. I stayed by the ‘weapon’ until we were supposed to start.
“Hey, you doing okay?” the girl I got partnered with asked. Apparently her name was Uraraka.
“Uh-uhm… yeah, I uh…”
“It’s something about All Might, right?”
I paused. “How could you tell?”
“You started acting really different the second he walked into the room… Do you wanna talk about it?”
Yes. “Uhm… maybe once we know each other better, it’s uh… kind of a lot… I dont want to…” I trailed off, not even fully finishing my thought.
“Oh, of course! I dont mean to be pushy, you were just kind of… You know, I wanted to see if there was anything I could do to help.”
She’s really nice. No wonder she’s in the hero program.
“...Ask me again in a few months once we’re better friends, okay?”
----《 ¤ 》----
“Hey Veeee,” Ochako said.
“Hm? What is it?”
“Well, we’re all alone now, and there was a thing I’ve been meaning to talk to you about for a while.”
“Oh?” There were many directions this could be going.
“Yeah, since the first few days of school.”
Now I was extra intrigued. “And what might that be?”
“Well… you remember the day we ran the hero-villain simulation? The one that All Might led?”
I did my best not to show any of my panic. “Yeah, what about it?” I asked, trying to keep as little of a quake in my voice as possible.
“I had asked you about it and you said to ask again when we’re closer so… mind if I ask now?”
I took a shaky breath. “I didnt know you remembered that.”
“Well, it’s a little hard to forget considering every single time we see him you freeze up momentarily and I have to distract you in order to get you to go somewhat back to normal.”
“...wait all of those times were on purpose?”
“Yep.” She scooted a little closer to me. “I could see the danger, it was written in your eyes. He did something to you, right?”
Another shaky breath. She knows. She knows, she knows, will she even believe the truth though if I tell her? I cant prove it was him, I cant prove anything, I cant even reach the others, all I have is the things I’ve been trying to keep secret forever, and the things that no one is meant to know about and the things I never want anyone else to have to see. I’ve been through hell but I dont know how to show it, I-
She poked at my arm. “Hey, calm down. Breathe with me alright?” she said in a calm and slow voice. “In… and out… in… and out… in… and out…”
I matched her breathing as best I could. I felt a tear going down my cheek.
“...Do you wanna talk about it?”
Yes. “...yeah,” I said. It was impossible to keep the quiver out of my voice now.
She sat patiently. Waiting. Just focus on the senses, the things around us, the couch we’re sitting on, the clothes on our skin… In and out…
“I-I… he… when I was 9…” that’s good, start at the beginning. I took another deep breath. “He kidnapped me, and I was taken to a-a lab… I was… experimented on there… I’m the only one who ever actually… escaped,” I managed.
She stared at me. It was a little uncomfortable. I couldn’t meet her eyes. “I… was part of a series… called the MEQATs. I’m the third one… You know the- uhm… the nomus? That the LOV uses?”
She nodded.
“I’m basically that but… y’know, done properly, considering I’m still… human…” Was I even that at this point? “I’m the third in the series so I-I have a total of four quirks. A-and I, uhm… I made it out when-when I was 13… I dont thi-ink he recognizes me…” My stutters only got worse and worse.
“How could he not recognize you?” She looked puzzled.
She doesn’t believe me. I knew it. I knew it was too out there for this perfect image of himself he’s created.
“Vee.”
“Uh- uh-huh?”
“Make sure you grab my arm next time we see him.”
It was my turn to be confused now. “Wha? Why?”
She looked pissed. “Because I know that the second he enters my line of sight all I’m gonna see is red.”
#whumptober2023#no.4#lyric#cattle prod#shock#my hero academia#fic#child experimentation#child abuse#childhood trauma#human experimentation#ptsd#writeblr#writers on tumblr#writing#writers#creative writing#my writing#whump scene#emotional whump#whump writing#whump writer#psychological whump#whump community#whump#oc: ov
3 notes
Text
How The Hemi & Today’s Demon Got Its Start At Indy in ‘61!
As I walked through the sprawling acreage of Lucas Oil Raceway Park (better known as Indianapolis Raceway Park/IRP) during the 2017 NHRA Chevrolet Performance U.S. Nationals, I tried to imagine what IRP was like during Labor Day of 1961. That’s when “absolutely, beyond a shadow of doubt…the most important drag race in Chrysler history” took place right there on the very soil beneath my feet. That race, run 56 years ago, was directly responsible for the 2018 Dodge SRT Demon Drag Race Simulator I was on hand to help promote.
The quote above came from the late Mopar engineering/racing legend Tom Hoover, and describes an impromptu match race between the new 409 Impala of “Dyno” Don Nicholson and the Ramchargers’ long-ram 413-equipped Dodge Dart Seneca. Though Nicholson’s 409 had defeated Ramchargers driver Al Eckstrand earlier in the hour due to the Dart’s balky three-speed manual transmission, in the time lull between classes the crowd began to chant “re-match, re-match, re-match.” Without the inflexible scheduling caused by modern live TV-broadcast race coverage, the pace was more leisurely and track officials decided “why not.”
Nicholson and Eckstrand quickly returned to the starting line, the flagman moved, both cars pounced, the Ram’s pin-type A239 transmission synchronizers actually cooperated and the full-size C-Body Dodge won by a car length, cranking off a low 13-second e.t. At the time, factory Super Stock machines were the latest thing. Never before did “stock” passenger cars offer so much power. Door slammers suddenly attracted as many spectators as traditional open-wheel race cars.
As Nicholson and Eckstrand pulled onto the return road, the crowd shouted “one more, one more, one more” and by radio the timing tower told the finish line workers to send both cars back to the starting line, pronto! Again, they launched side by side but when Eckstrand rammed the shift lever forward, up and to the right for the 1-2 upshift, the darned A239 ‘box balked and Nicholson’s bowtie took the race, its superior but fragile Borg Warner T-10 4-speed helping a bunch.
But the winning race was photographed heavily and images soon appeared in National Dragster, Hot Rod, Car Craft, Motor Trend and others. In those pre-Internet days, car magazines were even more influential than they are today, and when newly appointed Chrysler Corp. president Lynn Townsend saw them, he wanted more.
Townsend was part of a new breed of auto executives with names like Iacocca, Wangers, Knudsen, Beltz, and others. These fresh thinking corporate chiefs sought to tap into the baby boom market with exciting, youthful offerings. For his part, it helped that Townsend’s teenaged sons kept him up to date on what models were making waves on Woodward Ave. and in the pages of the popular car magazines of the day. Long story short, the Nicholson/Eckstrand run was the match that lit the fuse on the 1962 Max Wedge and 1964 Race Hemi development programs.
Without that singular race and the resulting flood of magazine ink, Hoover and his like-minded band of race engineers might have been shown the door. That’s because another kind of ink—red ink—encircled Chrysler Corporation in 1961. Compared to 1960, DeSoto output was down 88 percent, Dodge was down 33.87 percent, Plymouth was down 21.77 percent and Imperial was down 30.81 percent. Only Chrysler had good news, building 24.8 percent more cars than in 1960. One of Townsend’s mandates from shareholders was to make deep cuts in the engineering budget. Yes, many engineers lost their jobs, but Hoover, Eckstrand, and most of the other Ramchargers were spared….thanks to that freak race at Indy.
Getting back to the present day, the core reason for my trip to the 2017 NHRA U.S. Nationals was to host the Dodge SRT Demon Drag Race Simulator inside the manufacturers’ midway. My honored task during the five-day weekend race was to present 15 minute Demon “show and tell” sessions at the top of every hour. After each 15 minutes of Demon cheerleading was completed, I was free to wander until the next show time.
During each presentation, I used the Demon showcase PA system to continually tell the story of that fluke drag race and how today’s Demon is a direct result. So with this nostalgic mindset, let’s wander the Indy grounds and see how this hallowed competition venue has changed the face of drag racing, and how it still influences it today.
Each Demon simulator is plumbed with functional dashboard air blowers to keep riders cool. I used them as a talking point to discuss Demon’s double-duty Super Chiller that diverts A/C function to drop supercharger inlet air temperature a full 45 degrees lower than ambient air, but also keeps passengers cool.
Matt Hagan’s Hemi Charger flip-top helped the Mopar Fast Lane service division celebrate its 80th anniversary by setting the track record at 3.799 seconds and 338.77 mph. When IRP opened in 1961, the Mopar parts and service division (launched in 1937) was a mere 25 years young.
Ron Capps’ NAPA-sponsored Charger was Funny Car runner-up with a 3.949 second, 325.53 mph rip. The quickest and fastest machine at Indy in 1961 was Pete Robinson’s AA/D blown gas dragster at 8.68/169.49. From 1957 through 1963, the NHRA banned nitro at national events. The Ramchargers’ Dart Seneca ran 13.3s at 109.
Many Fuel Funny Car racers are turning to carbon-carbon front disc brakes from Strange Engineering. Truly immune to fade, the hotter they get the harder they bite. If handled by greasy fingers, Strange says to bake the bare rotors in an electric oven “until the stink stops.” That’s when they’re pure again.
The latest application of the Dart nameplate to a drag machine is light years away from the Ram’s ’61 Seneca. The leaf-blower-from-hell look resulted from the NHRA’s adoption of EFI. Allen Johnson’s Hemi Dart will be his final machine. He’s retiring at the end of the 2017 season after over 500 professional race entries.
If it says “Hemi” on the rocker cover, who are we to argue? Note the equally-spaced spark plug tubes on Allen Johnson’s 11,000rpm Pro Stock mill. Our X-ray camera lens broke but the low-in-a-row spark plug tubes suggest “twisted” chamber and canted valve architecture.
The low-high spark plug tube targeting and tapered roof of this Brad Anderson Top Alcohol rocker cover puzzles traditional Hemi spotters. These adjustments allow for pedestal-mounted rocker arms and larger port housings. There are literally hundreds of aftermarket and factory Hemi head variations.
The Big Go East always features a car show. A much more traditional Hemi powers this 1930 five-window Tudor hot rod. The gorgeous blue multi-shade panel paint was applied by Bucky’s Ltd., home of Bucky Hess and his 150mph 1968 SS/AH Hemi Barracuda.
The color-matched Chrysler Fire Power Hemi was built by Ross Racing Engines. Unlike earlier days when a small four-barrel might have hidden inside its GMC 4-71 blower case, this one’s a legitimate belt-boosted street driver. The quartet of Strombergs adds a crowning retro touch.
The 1970-vintage Paris Brothers Top Fuel dragster has been lightly modernized for NHRA cacklefest appearances. Its dual Chrysler “whale motors” have been revitalized with help from Hot Heads, a leader in the Gen-1 Hemi world.
At the inaugural 1961 NHRA U.S. Indy Nationals, Dode Martin’s “Two Thing”—running side-by-side supercharged V8s—set the meet’s top speed at 177.87 mph. Other multi-engine Top Gas rails on site in ’61 were Jack Chrisman’s Howard Cam Special, Eddie Hill’s dual-Pontiac rig, and Tommy Ivo’s four-engine Show Boat. In the care of Ron Pellegrini, Ivo’s machine made exhibition runs only.
With machined braces to hold the dual Hemis in alignment, a simple steel hub connects the crankshaft snouts and tails.
A Mopar muscle car-sourced Dana 60 rear axle puts the estimated 2,500 horsepower to the ground. Note how the Paris Brothers used the former pinion snubber support and trio of fastener holes to affix a brace merging the rear engine and axle housing to the frame. Truck/van-sourced Dana 60 housings lack the extra metal and raised pad.
Car Craft archives: Of the 32 supercharged AA/GD gasoline dragsters competing at the 1968 Indy Nats, three were dual-engine equipped. Here, the 2,150-pound machine of Frakes & Funk (foreground) awaits time trials with Bill Mullins’ 1,950-pounder. Mullins’ best was a 7.72/194.80.
Before Don Garlits’ March 1970 clutch explosion on the Lions starting line, front-engine “diggers” like the Ditmars Brothers machine were the norm. Note the Gen-2 Hemi’s iron block and heads. Though 1965-vintage aluminum A990 heads were also used, many racers felt iron combustion chambers better resisted melting. Iron blocks were the rule because aluminum clones from Keith Black, Milodon, and Donovan didn’t arrive until the early seventies.
Ever since the original Little Red Wagon A100 pickup truck accidentally yanked the front wheels in late 1964 (with Jay Howell at the wheel), well over a hundred racers have followed suit. The 392 Hemi-powered Little Red Fire Truck was on static display but is active on the exhibition circuit.
A far cry from the stock interior of the original Little Red Wagon, the fire rig’s cockpit puts the driver in the center with plenty of see-through mesh flooring. The twin brake controls—floor pedal and hand lever—affect the rear wheels only; the front axle carries no brakes. Note the absence of a passenger seat. As Jay Leno can testify, The Hurst Hemi Under Glass was built for occasional passengers (Google “Leno/Hemi Under Glass crash” to see the shocking result).
In this age of easy-way-out Chevy-powered wheelie machines, it’s good to see an iron Fire Power Hemi and old-school 6-71 huffer. Hot Heads also sponsors this machine.
The owner of this retro-inspired ’63 Dart GT hardtop probably has no clue a similar Dart 170 pillar coupe won C/Factory Experimental (C/FX) at the 1963 Indy Nats. Driven by California police officer Ron Root, that coupe ran a 225 Slant Six bolstered by an Offy four-barrel intake (not the expected Hyper-Pak). Root ran a 15.12/90.63 mph to defeat Bruce McConnell’s turbo-215 V8 ’63 Olds Cutlass convertible. Factory Experimental (FX) wasn’t just for big V8s. This red relic is also 225-powered.
The show field included Bill Rowe’s wild 392 whale-powered ’64 Dodge Polara 500; again, that BDS supercharger is no hollowed-out fake. When Ed Donovan’s aluminum 392 racing blocks appeared in 1973, the once obsolete Gen I Hemi was revived in Top Fuel. Drag racing reporters needed to differentiate it from the 426-style Gen II “elephant” Hemi and the “whale motor” designation was born.
Custom mounts position the wide Gen-I Hemi without fender-wall or shock-tower surgery. The hoses are part of the partially hidden air conditioning system. Yep, it has a Hemi …and A/C! The first Gen-II 426 Race Hemis appeared in 1964 Dodges and featured reverse-stamped shock mounts for added clearance.
At the absolute tip of the Hemi spear, modern Top Fuel power plants like the Hennen Motorsports “Road Rage Fuel Booster” machine make nearly twenty five times the original 425 horsepower rating. Had the Ramchargers not beaten Nicholson’s 409 in that fateful 1961 match race, this engine probably wouldn’t exist.
Shown in action at Detroit Dragway, here’s the 1961 Dart Seneca that rattled Nicholson’s 409 and spurred corporate drag race support. Though stripped for dragging, its C-Body was unitized only from the firewall back. It was replaced in 1962 by the 370 pound lighter, fully-unitized B-Body. The 413 Max Wedge—with its one-piece short ram intake manifold—arrived on May 1, 1962.
Hot Rod archives: In its coverage of the inaugural 1961 NHRA Nationals at IRP, Hot Rod magazine mentioned that Dodge and its Indianapolis-area dealers sponsored Dodge Performance Day the day before racing kicked off. Here HRM reporter LeRoi “Tex” Smith (in white) poses with what is probably a Hyper-Pak-equipped Lancer GT. The Dart Phoenix or Polara convertible in the background is likely packing a ram-inducted 383 or 413, respectable but no match for the lighter Max Wedges of 1962.
The post How The Hemi & Today’s Demon Got Its Start At Indy in ‘61! appeared first on Hot Rod Network.
from Hot Rod Network http://www.hotrod.com/articles/hemi-todays-demon-got-start-indy-61/ via IFTTT
0 notes
Text
Global Warming Debate: How Can Computer Models' Predictions Be Wrong?
New Post has been published on https://beingmad.org/global-warming-debate-how-can-computer-models-predictions-be-wrong/
The environmental extremists want us to believe that every global warming prediction is 100% correct. But computer models can err and easily draw wrong conclusions. The author has personally developed, and directed the development of, several computer models. It is very easy for a computer model to be wrong. Actually, it is rather amazing that they ever make any correct predictions. So many different errors can creep into a model and cause it to predict erroneous results.
Secondarily, the average computer modeler comes to model development with a particular bent — he or she wants to see a particular result. With that in mind, this author has jokingly said that he should offer his modeling skills to the highest bidder: “Tell me what you want to model, and what you want it to predict, and I will build you a model.” That would be unethical, of course, but anyone I’ve ever met who was developing a computer model wanted it to predict a particular result. If it showed that result, the modeler could quit and call the model complete. If it didn’t show that result, the modeler continued working to develop it further. Even if a particular result is not a conscious goal, subconsciously, most modelers are looking for a certain result. So in addition to all the possible errors that can affect model results, there is always the modeler’s natural bent that must be considered. How ethical is the modeler or the modeling team? Would they intentionally slant a model to produce the results they want? We would like to think most would not intentionally slant a model to the desired result.
One must wonder about this — particularly in the global warming debate because all sorts of unseemly unethical tricks are being used to declare predicted results to be absolute truth and to discourage others from questioning those results. “The debate is over. Consensus has been achieved!” Science doesn’t work by consensus — and the debate is hardly ever over. “The Hollywood elite support the results!” Who cares what Hollywood thinks? “How dare you suggest these results are not accurate?” Well… some people actually know something about models and the model development process. They understand all the possible pitfalls of model development. “How dare you disagree with us?” We disagree for many reasons that have not been included in the debate. We disagree because the debate never occurred. If the intelligentsia is willing to play debating games and wanting to stifle discussion when they think their side is in the lead, one must look carefully at all details and question all results.
A computer model is a computer program that has been designed to simulate a particular function and to make predictions of its expected behavior. For example, the author used computer models to predict the viscous behavior of fluids and suspensions in industrial systems. The software used to render computer generated movies must perfectly simulate the visualizations shown. For example, complex algorithms show reflections on shiny objects to simulate the way light bounces from sources to the viewer’s eye. When the original models and algorithms correctly predicted light reflections, they began to be used to generate movies. The following list includes many of the pitfalls that can unintentionally hinder the success of computer models:
First, models are simplifications of real phenomena. The modeler(s) must determine the proper mathematics to simulate each phenomenon of interest. One usually selects the simplest mathematical algorithm that will perform the task at hand. If one selects incorrectly, the results may be in error. For example, some phenomena appear to have a linear behavior. But the linear behavior may change to non-linear behavior under certain extreme conditions. If that is not known in advance, the model may be asked to predict values in the ‘extreme conditions’ territory and errors will result. This happens easily.
For example, the fluid viscosity of a suspension (powder mixed in a fluid) starts as a linear function of the concentration of powders added to the fluid. When the concentration of powder is small, the function is linear. But as the concentration of powder increases, the viscosity behaves in a non-linear manner. The initial linear function is rather simple to program into a model, but the non-linear behavior is complex to accurately model.
It is easy to make programming mistakes and utilize the wrong mathematics. This is closely related to the first pitfall above. If you think you know how a particular phenomenon behaves, but you use the wrong equation, the model will predict erroneous values.
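To see how badly a linear model can miss once it leaves the regime it was built for, here is a minimal sketch. It is not the author's viscosity model: the "true" curve is the standard Krieger-Dougherty suspension-viscosity relation with typical constants, and every number is an illustrative assumption. A straight line is fitted to dilute-suspension data, then extrapolated into the concentrated, non-linear regime.

```python
# Illustrative sketch (not the author's model): fit a straight line to
# suspension viscosity at low powder concentrations, then extrapolate it
# into the high-concentration regime where the true behavior is non-linear.
# The "true" curve is the Krieger-Dougherty relation with typical constants;
# all numbers are assumptions chosen only to show the pitfall.
import numpy as np

ETA_FLUID = 1.0      # viscosity of the carrier fluid (arbitrary units)
PHI_MAX = 0.64       # assumed maximum packing fraction
INTRINSIC = 2.5      # intrinsic viscosity for spherical particles

def true_viscosity(phi):
    """Non-linear Krieger-Dougherty viscosity vs. solids fraction phi."""
    return ETA_FLUID * (1.0 - phi / PHI_MAX) ** (-INTRINSIC * PHI_MAX)

# "Calibrate" a linear model using only dilute data (phi <= 0.05).
phi_cal = np.linspace(0.0, 0.05, 20)
slope, intercept = np.polyfit(phi_cal, true_viscosity(phi_cal), 1)

# Extrapolate the linear model into the concentrated regime.
for phi in (0.05, 0.20, 0.40, 0.55):
    linear = slope * phi + intercept
    actual = true_viscosity(phi)
    print(f"phi={phi:.2f}  linear={linear:6.2f}  actual={actual:8.2f}  "
          f"error={100 * (linear - actual) / actual:6.1f}%")
```

The fit looks fine where it was calibrated, but by a solids fraction of 0.55 the linear prediction is off by more than an order of magnitude, which is exactly the "extreme conditions" failure described above.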
Some phenomena are simply difficult to model. Sometimes, the results of a particular set of phenomena are not known. One must then perform a complex calculation each time those phenomena must be used. Rather than use the resulting mathematical equation to simulate a function, it may be necessary to simulate the actual underlying phenomena to arrive at the results. This may force a model within a model which adds complexity to the whole calculation.
For example, rather than using a simple mathematical equation to simulate how clouds affect sunlight, it may be necessary to model the behavior of individual raindrops in sunlight, and then model the behavior of the bazillions of raindrops that form a cloud to determine how an individual cloud will behave in sunlight. Until one builds up to simulating a whole sky full of clouds, the model can take on enormous proportions and the calculation times can be extremely long. Having gone through such an exercise, one must then determine if the equations and algorithms at each step in this process were modeled accurately.
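As a toy illustration of that nesting (this is not a real cloud-radiation code; the droplet size, droplet count, and cloud thickness are assumed, typical-looking values), an inner model computes what one droplet does to light, and an outer model aggregates droplets into a cloud using the Beer-Lambert law for the unscattered direct beam:

```python
# Illustrative "model within a model" with toy numbers: an inner model for a
# single droplet, an outer model that aggregates droplets into a cloud layer.
# Real cloud-radiation codes are far more involved; this only shows how the
# layers of simulation stack up.
import math

def droplet_extinction_cross_section(radius_m):
    """Inner model: one droplet's effective cross-section for visible light,
    using the geometric-optics limit (extinction efficiency ~ 2)."""
    return 2.0 * math.pi * radius_m ** 2

def cloud_optics(droplets_per_m3, radius_m, thickness_m):
    """Outer model: optical depth and direct-beam (Beer-Lambert) transmittance
    of a uniform cloud layer, built from the inner per-droplet result.
    Scattered light that still reaches the ground is ignored here."""
    sigma = droplet_extinction_cross_section(radius_m)
    tau = droplets_per_m3 * sigma * thickness_m
    return tau, math.exp(-tau)

# Assumed inputs: 10-micron droplets, 100 droplets per cubic centimeter,
# a 300 m thick cloud layer.
tau, direct = cloud_optics(droplets_per_m3=100e6, radius_m=10e-6, thickness_m=300.0)
print(f"optical depth ~ {tau:.1f}; unscattered direct beam transmitted: {direct:.2e}")
```

Even this two-level toy hides most of the real work (droplet size distributions, multiple scattering, broken cloud fields), which is the point: each added layer multiplies the bookkeeping and the opportunities for error.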
The memory capacity of a computer and its computation speed can be limited. This was more of a problem 20-30 years ago, but sizes and speeds can still be limiting. In early computers used by this author, you could program anything you wished — as long as it could fit into a 64,000-byte program (which is quite small as computer programs go). Program sizes were limited, and sizes of memory locations were also limited. Computers have grown over the years to the point where most programs can now be so large that a programmer doesn’t need to be concerned with size limitations or memory capacity. But sometimes, these still need to be taken into account.
When computation times can grow exponentially with certain simulations, one still needs to determine how long a particular computation will take. If computation times for a particular phenomenon double with each new iteration, capacities can quickly outgrow the available memory and allowed computational times. And models will reach those points within one or two iterations. If it takes one full day, for example, to perform one iteration of a simulation, and the calculation time doubles with each new iteration, how long is the modeler willing to wait to complete the simulation? See — this will build quickly — one day, two days, 4 days, a week, two weeks, a month, two months, four months, eight months, 1 1/3 years, etc. Again — how long is the modeler willing to wait?
How many raindrops are needed to form a cloud? How many individually must be simulated to adequately model the behavior of a cloud? How many in combination are needed to simulate the interaction of light with a cloud? If these types of simulations define a model, we’re talking huge numbers of droplets, huge memory requirements, and extremely long computing times. Even if this process started with an iteration taking a fraction of a second, it doesn’t take many doubles to reach a full day where the list in the previous paragraph began.
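The arithmetic behind that doubling argument is easy to sketch. The starting times and the 30-day patience budget below are arbitrary assumptions, chosen only to show how quickly the wall arrives:

```python
# Quick arithmetic behind the doubling argument above (pure illustration):
# if one iteration takes t seconds and each further iteration takes twice
# as long, how many iterations fit before the modeler's patience runs out?
SECONDS_PER_DAY = 86_400.0

def iterations_within_budget(first_iteration_s, budget_days):
    """Count iterations whose cumulative runtime stays within the budget,
    assuming each iteration takes twice as long as the one before."""
    total, step, count = 0.0, first_iteration_s, 0
    while total + step <= budget_days * SECONDS_PER_DAY:
        total += step
        step *= 2.0
        count += 1
    return count, total / SECONDS_PER_DAY

# Starting at a tenth of a second per iteration, with a 30-day budget:
n, used = iterations_within_budget(first_iteration_s=0.1, budget_days=30)
print(f"{n} iterations fit in 30 days ({used:.1f} days actually used)")

# Starting at a full day per iteration, the same budget allows only:
n, used = iterations_within_budget(first_iteration_s=SECONDS_PER_DAY, budget_days=30)
print(f"{n} iterations fit in 30 days ({used:.0f} days actually used)")
```

Starting from a tenth of a second, about two dozen doublings exhaust a month; starting from a day per iteration, only four more iterations fit (1 + 2 + 4 + 8 = 15 days), matching the one-day, two-day, four-day list above.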
In some cases, the mathematical ability of a modeller can limit the complexity of the model. Some phenomena are extremely difficult to simulate mathematically. If the modeller cannot perform a calculation by hand, then they cannot insert that calculation into a computer so it can perform it. Some models require advanced calculus or other higher mathematics to solve a problem quickly. If that level of math is beyond the capabilities of the modeller, a less elegant, longer method of calculation may be required. If that is not possible, it may be necessary to postpone finishing the model until the appropriate algorithms become available.
The fighter jet with its wings canted forward comes to mind. This is a fundamentally unstable configuration for an airplane. Its natural tendency is to flip over and fly backwards. It needed two technological advancements before they could design and test such a plane. (1) It needed a controller that could make fast adjustments to its control surfaces so it could fly. They needed to wait until fast computers were available to control the plane. Pilots were simply not quick enough to do this. (2) It needed to wait until light, stiff composite materials were available to make the wings. Stresses on the wings of such an airplane are incredibly high and for years, they simply did not have materials that could handle the stresses and still be light enough for use in a fighter jet. They had a great idea, but they needed to wait for the technology to catch up.
Computer modelers can have great ideas, too, but if they cannot code the sufficiently complex mathematics, they may have to wait.
An important phenomenon can be overlooked. When problems randomly occur in an industrial process setting, it usually means one or more important phenomena have not been taken into account in the control schemes. Process engineers do their best to include ALL important phenomena in their control algorithms, but most processes still suffer from random, unpredictable problems. Most of these are blamed on Murphy, but most occur because important control phenomena have been overlooked. In a particular plant control process, we thought we had taken all possible factors into account, yet an occasional batch of raw materials simply didn’t follow expectations and caused enormous problems. When searching for an answer, we learned that a particular characteristic of the batch materials was responsible. In maybe 95% of all batches, this variable was not a problem, but in 5% of the batches, that particular characteristic was extreme, and lots of problems occurred.
This same behavior happens in computer models. For example, according to the ‘big boys’ in the global warming debate, the earth is not heating due to solar radiation variations from the sun. So what if a computer modeler forgets to include solar radiation in the earth’s temperature calculation because, supposedly, the sun has no effect on it? The results will be erroneous because the sun does affect earth’s temperature.
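As a concrete, deliberately tiny illustration, the textbook zero-dimensional energy-balance relation ties a planet's effective temperature directly to the solar constant; a model that leaves the sun out, or holds it artificially fixed, simply cannot respond to it. This sketch is not any published climate model; it uses standard textbook values only (no greenhouse effect, no dynamics).

```python
# Minimal zero-dimensional energy balance (toy model only, not a climate
# model): absorbed sunlight S(1-a)/4 equals emitted infrared sigma*T^4.
# Values are standard textbook figures, used purely for illustration.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def effective_temperature(solar_constant_w_m2, albedo=0.3):
    """Equilibrium temperature where absorbed sunlight equals emitted IR."""
    absorbed = solar_constant_w_m2 * (1.0 - albedo) / 4.0
    return (absorbed / SIGMA) ** 0.25

t_base = effective_temperature(1361.0)            # nominal solar constant
t_high = effective_temperature(1361.0 * 1.001)    # ~0.1% solar-cycle change
print(f"T = {t_base:.2f} K; a 0.1% change in S shifts it by {t_high - t_base:.3f} K")
```

The dependence is there in the very first equation of the subject, which is the point of the paragraph above: any term omitted from the balance is a term the model can never respond to.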
There are lots of reasons why a modeller can overlook an important phenomenon. Sometimes, one phenomenon is simply not known to have an effect on another. When calculating earth’s temperature, must one take into account the area of paved parking lots?… auto emissions?… the height of downtown buildings?… etc. It is fairly easy to miss necessary phenomena simply because they are not deemed to be important enough for inclusion.
Are the mathematics of phenomena constant with time?… or do they change? This is a question that affects computer models that are supposed to cover long time frames (like the global warming models). Do atmospheric gasses absorb radiant energy today the same way they did thousands of years ago and the same way they will thousands of years in the future? Lots of other phenomena should be questioned in this same way. Uniformitarian principles suggest that everything happens today as it happened in the distant past and as it will happen in the distant future. There are problems, though. According to evidence, earth’s magnetic field not only changed several times in the past, but it supposedly switched polarities several times (i.e., the north became south, and south became north). If a phenomenon is dependent on the earth’s magnetic field, how does one handle that in a computer model?
Darwinian evolution and uniformitarianism are closely related. Both theories say that changes occurred very slowly over eons of time and all phenomena behaved similarly throughout those eons. True? False? It depends, because creationists who believe in a young earth are grouped with catastrophists who believe that the earth was formed by a series of catastrophes — not by gradual changes over eons. Even in this case, unless known to be otherwise, one still must assume that all phenomena occurred in the past, and will occur in the future, as they occur today. But in this case, the models may only be dealing with thousands of years, rather than millions or billions of years. This question still needs to be taken into account.
When computer models are developed, are they checked against good data?… and are the results published for all to see? The author developed several computer models that applied to ceramic process systems. Those results were all published in the technical ceramics literature because they were only relevant to a small part of the technical community. But each model had to be proven against real phenomena. Each model had to be demonstrated to determine if it accurately simulated the real phenomena. When no prior data were available to make the demonstration, the author had to perform experiments to demonstrate that the computer’s predictions were correct. In some cases, real results were well known, or data was already available to demonstrate a behavior. The models were then used to explain why the behavior occurred. In those cases, extra tests did not need to be run because the results were well known. The reasons why the results occurred were the answers sought by the computer models. And then, depending on the nature of the models, results were published in appropriate journals.
In the case of global climate models, the results appear to be buried in the technical literature, and we are left with the media’s and the politicians’ explanations that dire events are soon upon us! If the models are so important that they are going to affect our economy and our lives, results that demonstrate the veracity of the models should be published in the open literature for all to see. If today’s mass media believes these models are so accurate that Washington is going to alter our behaviors in response, then we should not need to dig to find the articles that show us the models and prove the accuracy of the results.
According to some, we have been collecting excellent satellite temperature data since 2002. Our best computer models should be tested against those satellite data to demonstrate the models can accurately predict 2010 weather behavior. Those results should then be published in the open literature for all to see. We should not need to take the words of politicians, environmental extremists, or the intelligentsia that we are in jeopardy of dire consequences from global warming. They should be willing to show these important results to all of us. The fact that they are not willing to do so lends credibility to the idea that global warming is nothing but a hoax — perpetrated to allow the redistribution of wealth from the “haves” like the US and Europe to the “have nots” like third world countries.
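For what such a check might look like in the open, here is a minimal sketch. The two series are made-up placeholders standing in for a model hindcast and an observed satellite record, so only the bookkeeping (bias, RMSE, correlation) is meaningful, not the numbers:

```python
# A minimal sketch of the kind of check being asked for: line up a model's
# hindcast against an observed record and report simple skill scores.
# Both series below are hypothetical placeholders; real validation would use
# published model output and an actual satellite temperature record.
import numpy as np

observed  = np.array([0.12, 0.18, 0.15, 0.22, 0.25, 0.21, 0.30])  # placeholder anomalies, deg C
predicted = np.array([0.10, 0.20, 0.19, 0.24, 0.30, 0.28, 0.33])  # placeholder model hindcast

bias = float(np.mean(predicted - observed))           # average over/under-prediction
rmse = float(np.sqrt(np.mean((predicted - observed) ** 2)))
corr = float(np.corrcoef(observed, predicted)[0, 1])  # does it track the ups and downs?

print(f"bias = {bias:+.3f} C, RMSE = {rmse:.3f} C, correlation = {corr:.2f}")
```

A few lines of this kind, run against real data and published alongside the model description, is the sort of openly verifiable evidence the paragraph above is calling for.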
If results are going to be published broadly, are we going to also see good, logical answers to our questions? If global warming is causing the extremely violent hurricanes of the last several years (note — we haven’t had any to the author’s knowledge), are the modellers going to make reasonable explanations for such predictions, or must we continue to hear only from the politicians and extremists, “Well, of course, global warming is to blame!” That is no explanation and computer modellers must have more substantial, logical answers for such claims than that. An “of course it is responsible” answer is insufficient for us to believe that all heat waves, cold waves, hurricanes, tornadoes, snow storms, etc., are the result of global warming. If modellers believe this to be true, they must have better answers than just, “of course.”
Can a computer model successfully predict climate events 10 to 50 years from now? Professor Cotton, a Professor of Atmospheric Science at Colorado State University, [Cotton, W.R., Colorado State University, “Is climate really predictable on 10-50 year time table?”, 20 Jul 2010, Powerpoint presentation] concluded that it is not possible to do this. According to Cotton, there are too many unpredictable phenomena that affect our weather to possibly make accurate predictions over that time frame. Has any one of the other computer modellers asked and answered this question before they began their computer modeling quests? Apparently, such thinking and questioning was insufficient to stop other modelers from attempting to develop such models.
According to the Bible, God controls the wind and the rain. This means God controls the weather and the climate. If He wants it to rain, snow, hail, or drought at some particular location on the earth — He can make it so! Have computer modelers taken this into account in their models? This author has seen at least two managers who exerted their control over their processes in such a way that they each became an input variable in the successful control of their processes. The engineers who were responsible for those processes had to try to take their managers’ decisions into account as they attempted to successfully control the processes. This made it awkwardly difficult to control the processes because the managers’ decisions were unpredictable. If God is actually in control of the wind and rain, in particular, and the weather, in general, how can a modeler take that into account in a model that predicts climate 50 – 100 years from now? The Bible says, “For who hath known the mind of the Lord?” [Rom 11:34] Man certainly doesn’t! So how can a computer model account for God’s decisions? It can’t! It is simply impossible!
There are lots of potential problems that computer modelers must face in the development of climate change models. Some are within their control. Some are fully outside and beyond their control. Some apply specifically to global climate change models, while most apply to all computer models. There are enough potential pitfalls to the accurate development of such models that this author believes we should be seeing the detailed descriptions, results, and proofs of veracity in the open literature.
If the environmentalists truly believe we are facing dire consequences in the near future, all of these details, answers, and results should be out there where all can see. If they have nothing to hide, and they sincerely believe their results, that should be the case. But the underhanded arguments and sneaky methods (“The debate is over!”) used suggest there is more to these computer model results than meets the eye. When Phil Jones, the former director of the University of East Anglia’s Climatic Research Unit [Petre, Jonathan, UK Daily Mail: “Climategate U-turn as Scientist at Centre of Row Admits: There has Been No Global Warming Since 1995,” 11 Aug 2010] recently admitted that “there has been no ‘statistically significant’ warming over the past 15 years,” one begins to wonder what kind of shenanigans the politicians are trying to pull.
Computer models are very useful to help us understand all sorts of phenomena. Lots of models have been developed and are used to explain lots of different phenomena. Those who wish to model global climate change over the next 50 – 100 years should have a great interest in the proof, testing, and use of their models. That the modelers are staying quiet and allowing the extremists, politicians, and intelligentsia to defend the results of their models suggests that something underhanded is up!
Dennis Dinger is a Christian who is a Professor Emeritus of Ceramic and Materials Engineering at Clemson University. In 2008, he curtailed his ceramics career when he was disabled by a form of blood cancer called Multiple Myeloma. In 2010, the cancer was in complete remission. Over the past three decades, he has directed many applied ceramic engineering research projects; he has been an active researcher and private consultant; and he is author of several ceramic engineering textbooks as well as several Christian books.
This book, Global Climate Change, the Bible, and Science was written to enter the author’s thoughts and reasoning into the global warming debate. In this book, he shows the Bible references which support three important points: (1) God created, (2) God controls the day-to-day workings of the creation, and in particular, (3) God controls the wind and the rain (that is, God controls the weather and climate). Also included are discussions of process control systems, understandings of which are needed by those who want to create climate models, some important natural cycles which have been in balance (without mankind’s help) for years and years, and possible pitfalls for computer models. These and other related topics are discussed in this book. For more details, click on Global Warming.
0 notes