sayitwithscience
Say It With Science
66 posts
Say It With Science is an educational blog that teaches readers about science in general, and physics and mathematics in particular. It is run by members who each have their own specialty and interests in science and mathematics. Some of us are high school students and some of us are university students, which enables us to provide high-quality information on both lower- and higher-level subjects. You may contact us via our ask box or at [email protected]. We welcome questions, feedback, and submissions, and we will clarify any concepts for readers.
sayitwithscience · 12 years ago
Note
Hi! Love your blog. I enjoy math, but the unfortunate thing is that I'm really bad at it. Is it strange to be bad at a subject you enjoy? And what do you recommend someone do if they want to get better at math?
No, that's not strange at all! Part of the fun of learning something new is its difficulty. If you want to learn more about math, all you have to do is stay curious. Wikipedia is always a great resource. Talk to your math teachers or professors, ask which fields would be most useful at your level, and see if you can buy some textbooks on those subjects. YouTube has useful channels too, like Khan Academy.
sayitwithscience · 12 years ago
Note
What are the 9 types of energy?
Hey there,
Why don't you check out this earlier post? It should give you a brief intro to the types of energy, the law of conservation of energy, efficiency, and Sankey diagrams!
Hopefully that will help you out and thank you for the question!
As always, feel free to submit any other inquiries to our ask.
sayitwithscience · 12 years ago
Note
Hi! I stumbled upon your tumblr, and I'd like to start off by saying how amazing it is, and thank you for making this tumblr! Moving on to my issue: I'm currently studying crude oil in Chemistry. Could you please help me understand "cracking" in terms of crude oil? From what I understand, 'cracking' is the CHEMICAL process of breaking down large molecules into smaller ones. And they 'crack' crude oil to refine it into petroleum; fractional distillation being a PHYSICAL process. More info please?
It sounds like you’re a bit confused between fractional distillation and cracking. It’s true that cracking is a chemical process and fractional distillation is a physical process; I point that out to emphasize that they’re two entirely different processes.
When crude oil is first extracted from the ground, it is made up of a variety of different hydrocarbons (chemical compounds that consist only of carbon and hydrogen), some very short (ethene) and some long (decane), and it is entirely useless in this state. Hydrocarbons can be separated into two groups: alkanes and alkenes. An alkane is saturated, meaning it holds as many hydrogen atoms as possible, whereas an alkene is unsaturated and contains a carbon–carbon double bond.
Fractional distillation serves to separate the longer hydrocarbons from the shorter hydrocarbons by their boiling points. This works because the longer the hydrocarbon, the higher the boiling point and viscosity and the lower the flammability.
Fractional distillation takes place as follows:
Crude oil is vapourised and fed into the bottom of the fractionating column.
As the vapour rises up the column, the temperature falls.
Fractions with different boiling points condense at different levels of the column and can be collected.
Fractions with high boiling points (long-chain hydrocarbons) condense and are collected at the bottom of the column.
Fractions with low boiling points (short-chain hydrocarbons) rise to the top of the column, where they condense and are collected.
To see a diagram of the fractional distillation process, click here.
Cracking, on the other hand, breaks long alkanes down into shorter, more useful alkane and alkene molecules. It requires a catalyst (a substance that causes or accelerates a chemical reaction without itself being consumed) and a high temperature. This is done mainly to meet the high industrial demand for the shorter molecules: the alkenes are typically converted into polymers (plastics), while the alkanes are sought after as fuel. Cracking is an example of a thermal decomposition reaction.
I hope that helps clear up some of your confusion.
sayitwithscience · 12 years ago
Photo
Demons in the History of Science
Part one of two: Laplace’s Demon
Some might say that modern-day physicists have it easy; they can appeal to the public with stories of eleven-dimensional universes, time travel, and a quantum world stranger than fiction. But the basis of such appeal remains what it always was and will be: a greater understanding of our environment, ourselves, and knowledge itself.
Just like Schrödinger’s cat, the popular thought experiment by famous physicist Erwin Schrödinger, Laplace’s Demon and Maxwell’s Demon are thought experiments important for what they reveal about our understanding of the universe. It may interest you to learn of them simply for the philosophical relevance and beauty that science has always sought to provide.
Jim Al-Khalili, author of Quantum: A Guide for the Perplexed, affirms that fate as a scientific idea was disproved three-quarters of a century ago, referring, of course, to the discoveries of quantum mechanics. But what does he mean by this? Prior to those discoveries, one could still argue for a deterministic universe: a world in which one specific input must result in one specific output, so that the sum of all these actions and their consequences could "determine" the overall outcome, or fate, of such a world.
Pierre-Simon Laplace, born on March 23, 1749, was a French mathematician and astronomer whose work largely founded the statistical interpretation of probability known as Bayesian probability. He lived in a world before Heisenberg’s uncertainty principle and chaos theory, and thus he was free to imagine such a deterministic universe:
We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
Laplace, A Philosophical Essay on Probabilities
Laplace considered what it would be like if it were possible to know the positions, masses, and velocities of all the atoms in existence, and hypothesized a being, later known as Laplace’s Demon, which would know all this information and thus could calculate all future events.
With our modern knowledge of physics, the Heisenberg uncertainty principle, and chaos theory, such a being could not exist, because such information about atoms cannot be observed with enough precision to calculate and predict future events. (By the way, "enough" precision means infinite precision!) This might be good news for those who believe in free will, a concept that would not be permitted in a deterministic universe governed by Laplace’s demon.
Interestingly enough, the Heisenberg uncertainty principle and chaos theory are not the only restrictive challenges scientists have faced in trying to understand the properties and bounds of our universe. The second law of thermodynamics also concerns scientists and philosophers alike, as we will learn with the birth of another mind-boggling demon.
sayitwithscience · 12 years ago
Photo
Maximum Entropy Distributions
Entropy is an important topic in many fields; it has very well known uses in statistical mechanics, thermodynamics, and information theory. The classical formula for entropy is S = −Σi (pi ln pi), where pi is the probability of the system assuming a possible microstate i. But what is this probability distribution? How must the likelihood of states be configured so that we observe the appropriate macrostates?
In accordance with the second law of thermodynamics, we wish for the entropy to be maximized. In the limit of large N we can pass to a continuous density φ and treat the entropy with calculus as S[φ] = −∫ φ ln φ dx. Here, S is called a functional (which is, essentially, a function that takes another function as its argument). How can we maximize S? We will proceed using the methods of calculus of variations and Lagrange multipliers.
First we introduce three constraints. We require normalization, so that ∫dx φ = 1. This is a condition that any probability distribution must satisfy, so that the total probability over the domain of possible values is unity (since we’re asking for the probability of any possible event occurring). We require symmetry, so that the expected value of x is zero (it is equally likely to be in microstates to the left of the mean as it is to be in microstates to the right — note that this derivation is treating the one-dimensional case for simplicity). Then our constraint is ∫dx x·φ = 0. Finally, we will explicitly declare our variance to be σ², so that ∫dx x²·φ = σ².
Using Lagrange multipliers, we will instead extremize the augmented functional S[φ] = ∫ (φ ln φ + λ₀φ + λ₁xφ + λ₂x²φ) dx. Here, the integrand is just the sum of the integrands above, adjusted by the Lagrange multipliers λk, for which we’ll be solving.
Applying the Euler-Lagrange equations and solving for φ gives φ = exp(−(1 + λ₀ + λ₁x + λ₂x²)). From here, our symmetry condition forces λ₁ = 0, and evaluating the other integral conditions fixes the remaining λ’s, leaving φ = (1/2πσ²)^½ · exp(−x²/2σ²), which is just the normal (or Gaussian) distribution with mean 0 and variance σ². This remarkable distribution appears in many descriptions of nature, in no small part due to the central limit theorem.
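As a quick numerical sanity check (a sketch we've added in Python, with an evenly spaced grid standing in for the integral), you can compare the quantity −∫ φ ln φ dx, often called the differential entropy, for the Gaussian against other distributions of the same mean and variance; the Gaussian should come out on top:

```python
import numpy as np
from scipy import stats

x = np.linspace(-30, 30, 2_000_001)
dx = x[1] - x[0]

def entropy(pdf):
    """Differential entropy -∫ φ ln φ dx, approximated on a grid."""
    p = pdf(x)
    p = np.where(p > 0, p, 1.0)     # p ln p → 0 as p → 0, so patch zeros
    return -np.sum(p * np.log(p)) * dx

print(entropy(stats.norm(0, 1).pdf))                     # ≈ 1.4189 = ½ ln(2πe), the winner
print(entropy(stats.uniform(-3**0.5, 2 * 3**0.5).pdf))   # variance 1: ≈ 1.2425
print(entropy(stats.laplace(0, 2**-0.5).pdf))            # variance 1: ≈ 1.3466
```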
sayitwithscience · 13 years ago
Text
Happy π Day, math lovers!
We here at Say It With Science would like to celebrate it with you by sharing some interesting trivia about one of our favorite physicists, Richard Feynman, and one of our favorite constants, π (pi). The Feynman Point is a sequence of six 9's beginning at the 762nd decimal place of π, named after Nobel Prize–winning physicist Richard Feynman. Feynman had memorized π to this point so that he could end his recitation of the mathematical constant by saying "nine nine nine nine nine nine and so on...". At this point someone less knowledgeable about mathematics might assume the number continues this way forever, but we know better. It is believed that π is a normal number, meaning that its digits are uniformly distributed among the digits 0 through 9 (or the digits of any other base you choose to use). If π's digits behave randomly in this sense, the chance of six 9's in a row appearing so early in the expansion is only about 0.08%. Strange occurrences like this are what make math beautiful. π Day is a perfect reason to start memorizing as many digits of π as you can! Happy π Day!
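If you want to find the Feynman Point yourself, here is a minimal sketch in Python using the arbitrary-precision mpmath library (any source of π's digits would do just as well):

```python
from mpmath import mp

mp.dps = 1000                                  # work with 1000 digits of precision
frac = mp.nstr(mp.pi, 1000).split(".")[1]      # the digits after "3."
print(frac.find("999999") + 1)                 # 762: the Feynman Point
```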
sayitwithscience · 13 years ago
Photo
Refraction
Light waves are part of the EM wave spectrum. When light moves through an optical medium (i.e. air, glass, etc.), the E field of the wave excites the electrons within the medium, causing them to oscillate; as a result, the light wave slows down slightly. Its new speed is always less than the speed of light in a vacuum (v < c). Materials are characterized by their ability to bend as well as slow down light, which is quantified by the optical refractive index (n).
n = c/v = (speed of light in a vacuum) / (speed of light in the medium)
n = 1 in a vacuum; n > 1 in all other media.
Refraction itself occurs when light passes across an interface between two media with different indices of refraction. As a general rule (which can be derived from Snell's law below), light refracts towards the normal when passing into a medium with a higher refractive index, and away from the normal when moving into a medium of lower refractive index.
Snell's Law:
n₁sinα = n₂sinβ
where n₁ and n₂ are the refractive indices of the first and second media, and α and β are the angles of incidence and refraction, both measured from the normal.
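Snell's law is simple to put into code. Here is a minimal sketch (the function name and the air/glass indices are our own illustrative choices) that also flags the case where no refraction angle exists, i.e. total internal reflection:

```python
import math

def refraction_angle(n1, n2, alpha_deg):
    """Angle of refraction in degrees from Snell's law,
    or None when total internal reflection occurs."""
    s = n1 * math.sin(math.radians(alpha_deg)) / n2
    if abs(s) > 1:                 # sin β would exceed 1: no refracted ray
        return None
    return math.degrees(math.asin(s))

print(refraction_angle(1.00, 1.50, 30))  # air → glass: ≈ 19.5°, toward the normal
print(refraction_angle(1.50, 1.00, 30))  # glass → air: ≈ 48.6°, away from the normal
print(refraction_angle(1.50, 1.00, 50))  # past the critical angle: None
```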
Reflection
One of the properties of a boundary between optical media is that some of the light that's approaching the interface at the angle of incidence (α) is reflected back into the first medium, while the rest continues on into the second medium at the angle of refraction (β).
Angle of incidence = Angle of reflection
sayitwithscience · 13 years ago
Photo
H(q, ∂S/∂q, t) + ∂S/∂t = 0
The Hamilton-Jacobi Equation
This blog has posted more than a few times in the past about classical mechanics. Luckily, classical mechanics can be approached in several ways. This approach, which uses the Hamilton-Jacobi equation (HJE), is one of the most elegant and powerful methods.
Why is the HJE so powerful? Consider a dynamical system with a Hamiltonian H=H(q,p,t). Suppose we knew of a canonical transformation (CT) that generated a new Hamiltonian K=K(Q,P,t) which (for a local chart on phase space) vanishes identically. Then the canonical equations would give that the transformed coordinates (Q,P) are constant in this region. How easy it would be to solve a system where you know that most of the important quantities are constant!
The rub is in finding such a canonical transformation. Sometimes it can't even be done analytically, but nevertheless this is the goal of the Hamilton-Jacobi method of solving mechanical systems. In the equation given above, S is the generating function of the CT. Coincidentally, it often comes out to just equal the classical action up to an additive constant! This is due to the connection between canonical transformations and mechanical gauge transformations; it turns out that the additive function used to define the latter is the generating function of the former. In general the HJE is a partial differential equation that might be solvable by additive separation of variables... but don't get too hopeful! Oftentimes the value of the HJE comes not in finding the actual equations of motion but in revealing symmetry and conservation properties of the system.
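As a minimal concrete check (our own toy example, not from the original post): for a free particle, H = p²/2m, so the HJE reads ∂S/∂t + (∂S/∂q)²/2m = 0, and the standard textbook principal function S = mq²/2t satisfies it. A sketch in Python's sympy:

```python
import sympy as sp

m, q, t = sp.symbols('m q t', positive=True)
S = m * q**2 / (2 * t)                          # free-particle principal function
hje = sp.diff(S, t) + sp.diff(S, q)**2 / (2 * m)  # ∂S/∂t + (1/2m)(∂S/∂q)²
print(sp.simplify(hje))                          # 0, so the HJE is satisfied
```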
sayitwithscience · 13 years ago
Photo
The Virial Theorem
In the transition from classical to statistical mechanics, are there familiar quantities that remain constant? The virial theorem gives a law for how the total kinetic energy of a system behaves under the right conditions, and it is equally valid for a one-particle system or a mole of particles.
Rudolf Clausius, the man responsible for the first mathematical treatment of entropy and for one of the classic statements of the second law of thermodynamics, defined a quantity G (now called the Virial of Clausius):
G ≡ Σi(pi · ri)
Where the sum is taken over all the particles in a system. You may want to satisfy yourself (it’s a short derivation) that taking the time derivative gives:
dG/dt = 2T + Σi(Fi · ri)
Where T is the total kinetic energy of the system (Σ ½mv²) and dp/dt = F. Now for the theorem: the virial theorem states that if the time average of dG/dt is zero, then the following holds (we use angle brackets ⟨·⟩ to denote time averages):
2⟨T⟩ = −⟨Σi(Fi · ri)⟩
Which may not be surprising. If, however, all the forces can be written as power laws, so that the potential is V = arⁿ (with r the inter-particle separation), then
2⟨T⟩ = n⟨V⟩
Which is pretty good to know! (Here, V is the total potential energy of the particles in the system, given by the potential function V = arⁿ.) For an inverse-square law (like the gravitational or Coulomb forces), F ∝ 1/r² ⇒ V ∝ 1/r, so n = −1 and 2⟨T⟩ = −⟨V⟩.
Try it out on a simple harmonic oscillator (like a mass on a spring with no gravity) to see for yourself. The potential V ∝ kx², so it should be the case that the time average of the potential energy is equal to the time average of the kinetic energy (n = 2 matches the coefficient in 2⟨T⟩). Indeed, if x = A sin( √[k/m] · t ), then v = A√[k/m] cos( √[k/m] · t ); then x² ∝ sin² and v² ∝ cos², and the time averages (over an integral number of periods) of sine squared and cosine squared are both ½. Thus the virial theorem reduces to
2 · ½m·(A²k/2m) = 2 · ½k(A²/2)
Which is easily verified. This doesn’t tell us much about the simple harmonic oscillator; in fact, we had to find the equations of motion before we could even use the theorem! (Try plugging the force term F = −kx into the first form of the virial theorem, without assuming that the potential is a power law, and verify that the result is the same.) But the theorem scales to much larger systems where finding the equations of motion is impossible (unless you want to solve Avogadro’s number of differential equations!), and just knowing the potential energy of particle interactions in such systems can tell us a lot about the total energy or temperature of the ensemble.
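If you'd rather let a computer do the time averaging, here is a small numerical sketch (the parameter values are arbitrary) that checks ⟨T⟩ = ⟨V⟩ for the oscillator above:

```python
import numpy as np

# Check 2<T> = n<V> for the SHO (n = 2, so <T> = <V>).
m, k, A = 1.0, 4.0, 0.5                      # arbitrary mass, spring constant, amplitude
w = np.sqrt(k / m)
t = np.linspace(0, 2 * np.pi / w, 100_001)   # one full period
x = A * np.sin(w * t)
v = A * w * np.cos(w * t)
T_avg = np.mean(0.5 * m * v**2)
V_avg = np.mean(0.5 * k * x**2)
print(T_avg, V_avg)                          # both ≈ kA²/4 = 0.25
```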
sayitwithscience · 13 years ago
Text
∑ F = ma
… is a differential equation:
∑ F = m a(t) = m dv/dt = m d²s/dt²
where acceleration a(t), velocity v(t), and displacement s(t) are all vectors and functions of time. This equation is second-order in position because the highest derivative is the second time derivative of position. Combined with the right boundary conditions, s(t) (also called the trajectory: path through space and time) can be determined.
This differential equation can be solved one component, or dimension, at a time. Let us focus on one of these, and call it the x component. The equations for y and z can be found exactly the same way.
Constant acceleration
If the graph of a(t), the acceleration in the x direction, is constant
a(t) = a₀
then the graph of v(t), the velocity in the x direction, is a straight line with slope a₀
v(t) = v₀ + a₀t
and the graph of x(t), the position along the x axis, is a parabola
x(t) = x₀ + v₀t + ½a₀t²
The acceleration, the initial velocity, and the initial position can each also be negative. Thus the displacement (projectile motion) formula is derived.
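You can also let a computer integrate the differential equation directly. Here is a small sketch (the numbers are arbitrary example values) using a simple stepping scheme, compared against the closed form above:

```python
# Step a(t) = a0 forward in time and compare with the closed form
# x(t) = x0 + v0·t + ½·a0·t² (a0, v0, x0 are arbitrary example values).
a0, v0, x0 = -9.8, 12.0, 0.0
dt, steps = 1e-4, 20_000
x, v = x0, v0
for _ in range(steps):
    v += a0 * dt          # dv = a dt
    x += v * dt           # dx = v dt
t = dt * steps
print(x, x0 + v0 * t + 0.5 * a0 * t**2)   # both ≈ 4.4 at t = 2 s
```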
sayitwithscience · 13 years ago
Photo
Going superfluid!
A liquid goes superfluid when it suddenly loses all internal friction and gains near-infinite thermal conductivity. The combination of zero viscosity and nonzero surface tension allows a superfluid to creep up walls and back down the outside to drip from the bottom of open containers, or to completely cover the inner surface of sealed containers. The lack of viscosity also allows a superfluid to leak through any surface that is porous to the slightest degree, because the molecules can slip through even microscopic holes. Superfluids furthermore exhibit a thermo-mechanical effect in which they flow from colder to warmer regions, exactly the opposite of ordinary heat flow, which implies the remarkable property that superfluids carry zero entropy. Because of this, a perpetual fountain can be set up by shining light on a superfluid bath just below a vertical open capillary tube: the fluid shoots up through and beyond the tube until its contact with the air causes it to cease being a superfluid and fall back down into the bath, where it cools back into the superfluid state and repeats the process.
So how does superfluidity work, exactly?
Makings of a superfluid
Physicists first got an inkling of something stranger than the norm when, in 1938, they cooled liquid helium (specifically, the ⁴He isotope) below 2.17 K and it started exhibiting the above-mentioned properties. Since the chemical makeup of the helium didn't change (it was still helium), the transformation to the superfluid state is a physical change, a phase transition, just like ice melting into liquid water. Perhaps for cold-matter researchers, this transition to a new phase of matter makes up for the fact that helium doesn't solidify even at 0 K except under large pressure, whereas ALL other substances solidify above 10 K.
[Phase diagram of 4He, source]
Helium is truly the only substance that never solidifies under its own vapor pressure.
Instead, when the temperature reaches the transition (or lambda) point, quantum physics takes hold and a fraction of the liquid particles drop into the same ground-energy quantum state. They move in lock-step, behaving identically and never getting in each other's way. Thus we come to see that superfluidity is a kind of Bose-Einstein condensation, the general phenomenon of a substance's particles simultaneously occupying the lowest-energy quantum state.
Read more: "This Month in Physics History: Discovery of Superfluidity, January 1938". APS News: January 2006
Based on a project by Barbara Bai, Frankie Chan, and Michele Silverstein at Cornell University.
sayitwithscience · 13 years ago
Text
Hello followers!
The contributors have been on a bit of a break, but we'll be back soon with a new batch of posts. In the meantime, we'd like to get to know our readers!
What subject areas do you study/where are you in your education/what classes are you taking?
Any requests for future Say it with Science posts?
sayitwithscience · 13 years ago
Photo
Uniform circular motion (UCM) describes an object that is travelling at constant speed along a circular path. Though the speed (the magnitude of the velocity) remains the same, the direction of the velocity is constantly changing as the object curves along the edge of the circle. A changing velocity implies a nonzero acceleration, and thus a net force acting on the object. What might such an acceleration or force look like?
The equation which characterizes UCM is
||Fnet|| = maᵣ = mv²/r
where m is the mass of the object, v the speed, and r the radius of the path. The acceleration aᵣ = v²/r and the net force point at all times toward the center of the circle. You might recognize this as exactly Newton's second law with the added information that the acceleration is entirely radial.
Why would that be? If this is true, it means the external forces on the object in sum continually change the direction of the object's motion but preserve its speed. This means the object never gains nor loses kinetic energy! It also means that the acceleration, pointing radially toward the center of the circle, is always perpendicular to the velocity, which points along a tangent to the circle. Geometry confirms this: in a circle, a radial line and a tangent must form a right angle at the circumference.
We gain more insight by looking to Newton's laws. The first defines inertia, the tendency of an object with mass to remain at the same velocity unless acted upon by a force. So we know that, absent this curious radial net force, the object would speed through space not only at a constant speed but in a straight line. With the forces in place, a net acceleration arises that bends the trajectory from line to circle.
The second law states ∑ F = ma: the acceleration of the object is the result of all the forces acting on it. UCM is usually defined using this equation, so there's not much to glean here except to remember that the characteristic acceleration, v²/r, is constant in time because v and r are constant, and that we must sum all the forces before equating to mv²/r. But this is a sticking point: the net force in UCM is often called the centripetal force, but it is not a separate force present in the setup. Only when the initial conditions of the system are just right, so that they effect a constant radial acceleration, do we call the net force centripetal in nature. The UCM version of Newton's second law conveniently allows us to solve for forces or other unknowns precisely because we know what the resultant net force must be.
Finally, Newton's third law states that for any force from a first object on a second, the force from the second on the first is equal in magnitude and opposite in direction. This is useful in specific situations: celebrated (or not) problems include cars travelling on a circular track, cars travelling on a banked circular track, the carnival ride where a person is pressed against the inside of a spinning cylinder, a dish carried on a lazy Susan, a puck or other object spun around on a string, and an electron shot through certain magnetic or electric fields. Forces between objects can be clues to weigh in the equation that equates to the centripetal (net) force.
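As a numerical aside (our own sketch, with arbitrary radius and angular speed), you can sample a circular trajectory, differentiate it twice, and watch ||a|| = v²/r come out:

```python
import numpy as np

r, w = 2.0, 3.0                          # radius and angular speed (arbitrary)
t = np.linspace(0, 2 * np.pi / w, 20_001)
x = np.stack([r * np.cos(w * t), r * np.sin(w * t)])
v = np.gradient(x, t, axis=1)            # velocity by finite differences
a = np.gradient(v, t, axis=1)            # acceleration likewise
speed = np.linalg.norm(v, axis=0)[1:-1].mean()   # drop the noisy endpoints
a_mag = np.linalg.norm(a, axis=0)[1:-1].mean()
print(speed, a_mag, speed**2 / r)        # v = wr = 6, and a ≈ v²/r = 18
```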
If you have some calculus under your belt, here is a fun exercise: imagine a bicycle chain, which is a closed loop made of metal links. You set the chain spinning so that it continues spinning in a perfect circle. If the chain has n links, what is the tension in each link? Then what is the tension in each link as n approaches infinity?
If you've been following the blog, try formulating UCM using the Frenet-Serret equations!
sayitwithscience · 13 years ago
Note
Can someone please explain what I am studying when I am studying limits, delta & epsilon? I am having a really difficult time visualizing what is going on and what these things represent... Not so much with limits; I "know" what a limit is, but how do they all tie in together, the other two specifically? If someone could help I would be ETERNALLY grateful!
Often, the first visual interpretation of a limit is one in which you picture moving along a function by changing your x value, and for a function that seems to have no holes, the simplest way to find the value of a limit is to substitute the target value for x and see what the output value of the function is. However, this is not a rigorous interpretation because, often, there is an obstacle (like an undefined solution) with the output value. The ε-δ definition allows us to define regions about the point you're evaluating, and it ultimately allows us to have rigorous definitions for when a limit does and does not exist.
The δ in the definition applies to a region about the x value in question (say, a): the set of x with a − δ ≤ x ≤ a + δ. The ε in the definition applies to a region about the function's limiting output value, say L: the set of y with L − ε ≤ y ≤ L + ε.
The definition itself corresponds to what may be interpreted as a sort of game. If you have two players -- the "devil's advocate" on one side, on the y-axis, and you on the x-axis -- then it's the devil's advocate who claims that you can't bring the true value of the limit to be in the region of L - ε ≤ L ≤ L + ε. In response, you adjust your value of δ to allow the limit to exist in that region, and on his turn, he shrinks the region even further, by decreasing the value of ε. If the limit is to exist, then as long as ε and δ remain greater than 0, you will always be able to achieve a limit value within the given ranges (ε and δ need to be kept greater than 0 because the goal is to define the limit based on the immediate region about the function, not evaluate at a itself -- such would be simply substituting a into f(x)). Hence come the familiar relations that govern the ε-δ definition: Given that
lim (x→a) f(x) = L,
then for any real number ε > 0, there exists another real number δ > 0 such that if
0 < |x − a| < δ,
then
|f(x) − L| < ε.
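To make the game concrete, here is a toy sketch in Python (our own illustration, with f(x) = x² at a = 2, L = 4): for each ε the devil's advocate names, we search for a δ that works by repeated halving. It only samples finitely many points, so it is a demonstration rather than a proof:

```python
# Toy ε-δ "game" for f(x) = x², a = 2, L = 4 (illustrative values).
def find_delta(f, a, L, eps, start=1.0):
    """Halve δ until every sampled x with 0 < |x - a| <= δ
    lands within ε of L; return that δ (or None if we give up)."""
    delta = start
    while delta > 1e-12:
        if all(abs(f(a + s * delta * k / 100) - L) < eps
               for k in range(1, 101) for s in (-1, 1)):
            return delta
        delta /= 2
    return None

f = lambda x: x**2
for eps in (1.0, 0.1, 0.01):
    print(eps, find_delta(f, a=2, L=4, eps=eps))   # δ shrinks as ε shrinks
```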
sayitwithscience · 13 years ago
Photo
When describing the trajectory of a point particle in space, we can use simple kinematic physics to describe properties of the particle: force, energy, momentum, and so forth. But are there useful measures we can use to describe the qualities of the trajectory itself?
Enter the Frenet-Serret (or TNB) frame. In this post, we’ll show how to construct three (intuitively meaningful) orthonormal vectors that follow a particle in its trajectory. These vectors will be subject to the Frenet-Serret equations, and will also end up giving us a useful way to interpret curvature and torsion.
First, we define arc length: let s(t) = ∫₀ᵗ ||x’(τ)|| dτ. (We give a quick overview of integration in this post.) If you haven’t encountered this definition before, don’t fret: we’re simply multiplying the speed of the particle ||x’(τ)|| by the small time step dτ, summed over every infinitesimal time step from τ=0 to τ=t=”current time”. The post linked to above also explains a short theorem that may illustrate this point more lucidly.
Now, consider a particle’s trajectory x(t). What’s the velocity of this particle? Its speed, surely, is ds/dt: the change in arc length (distance traveled) over time. But velocity is a vector, and needs a direction. Thus we define the velocity v=(dx/ds)⋅(ds/dt). This simplifies to the more obvious definition dx/dt, but allows us to separate out the latter term as speed and the former term as direction. This first term, dx/ds, describes the change in the position given a change in distance traveled. As long as the trajectory of the particle has certain nice mathematical properties (like smoothness), this vector will always be tangent to the trajectory of the particle. Think of this vector like the hood of your car: even though the car can turn, the hood will always point in whatever direction you’re going towards. This vector T ≡ dx/ds is called the unit tangent vector.
We now define two other useful vectors. The normal vector: N ≡ (dT/ds) / ( |dT/ds| ) is a vector of unit length that always points in whichever way T is turning toward. It can be shown — but not here — that T ⊥ N. The binormal vector B is normal to both T and N; it’s defined as B ≡ T x N. So T, N, and B all have unit length and are all orthogonal to each other. Since T depends directly on the movement of the particle, N and B do as well; therefore, as the particle moves around, the coordinate system defined by T, N, and B moves around as well, connected to the particle. The frame is always orthonormal and always maintains certain relationships to the particle’s motion, so it can be useful to make some statements in the context of the TNB frame.
The Frenet-Serret equations, as promised:
dT/ds = κN
dN/ds = -κT + τB
dB/ds = -τN
Here, κ is the curvature and τ is the torsion. Further reading (lookup the Darboux vector) illustrates that κ represents the rotation of the entire TNB frame about the binormal vector B, and τ represents the rotation of the frame about T. The idea of the particle trajectory twisting and rolling nicely matches the idea of what it might be like to be in the cockpit of one of these point particles, but takes this depth of vector analysis to get to.
Bonus points: remember how v = Tv, with v the speed? Differentiate this with respect to time, play around with some algebra, and see if you can arrive at the following result: the acceleration a = κv²N + (d²s/dt²)T. Thoughtful consideration will reveal the latter term as the tangential acceleration, and knowing that 1/κ ≡ ρ = “the radius of curvature” reveals that the first term is centripetal acceleration.
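Here is a numerical sketch (our own, for a helix x(t) = (cos t, sin t, t), whose exact curvature and torsion are both ½) that builds the TNB frame by finite differences and reads off κ and τ from the Frenet-Serret equations:

```python
import numpy as np

t = np.linspace(0, 4 * np.pi, 20_001)
x = np.stack([np.cos(t), np.sin(t), t])          # the helix
dx = np.gradient(x, t, axis=1)
speed = np.linalg.norm(dx, axis=0)               # ds/dt
T = dx / speed                                   # unit tangent
dT = np.gradient(T, t, axis=1) / speed           # dT/ds via the chain rule
kappa = np.linalg.norm(dT, axis=0)               # κ = |dT/ds|
N = dT / kappa                                   # unit normal
B = np.cross(T, N, axis=0)                       # binormal B = T x N
dB = np.gradient(B, t, axis=1) / speed           # dB/ds = -τN
tau = -np.einsum('ij,ij->j', dB, N)
print(kappa[10_000], tau[10_000])                # both ≈ 0.5 for this helix
```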
Photo credit: Salix alba at en.wikipedia
sayitwithscience · 13 years ago
Photo
Check out a previous post on gravity here, and another post on planetary orbit here.
Gravity and Astronauts in Orbit
Newton's law of gravity states that the gravitational force between any two objects in the universe is proportional to the product of their masses and inversely proportional to the square of the distance (measured from their centers) between them. That means that if you double either of the masses, the gravitational force will double. If you halve the distance between the two objects while their masses stay the same, the gravitational force between them will quadruple. The reverse of both statements also holds.
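In code, the law and both scaling facts look like this (a sketch; the function name and the Earth/person values are our own example choices):

```python
G = 6.674e-11  # gravitational constant, N·m²/kg²

def gravity(m1, m2, r):
    """Newton's law of universal gravitation: F = G·m1·m2 / r²."""
    return G * m1 * m2 / r**2

# Earth's mass and radius, and a 70 kg person standing on the surface:
F = gravity(5.97e24, 70.0, 6.371e6)
print(F)                                        # ≈ 687 N, the person's weight
print(gravity(5.97e24, 140.0, 6.371e6) / F)     # doubling a mass doubles F: 2.0
print(gravity(5.97e24, 70.0, 6.371e6 / 2) / F)  # halving r quadruples F: 4.0
```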
Despite gravity keeping everything around you steadily on the ground, it is the weakest of the four fundamental forces. After all, you can easily overcome the gravitational force of the entire Earth for a few seconds by simply jumping or throwing a ball up into the air.
A common misconception is that there's no gravity in outer space. Anything with mass has a gravitational field of unlimited range, but because the field gets progressively weaker with distance, its effects quickly become negligible.
Then why do astronauts float around in the space shuttle? Why are they weightless? The only reason you can feel your weight on Earth is that the Earth pushes back up at you: the normal force from the Earth on your body. Within an orbiting spacecraft, however, you're in free fall. There's nothing to push back at you and make you feel your weight!
Fun fact: astronauts experience a lot of nausea and headaches during their first few weeks in orbit. This is probably because not only are they in free fall, but the fluids inside them are floating around too (e.g. fluids shifting to the brain, nasal cavities, stomach contents, etc.).
Sources 1, 2 and 3.
sayitwithscience · 13 years ago
Photo
Imagine you had a function P that upon swallowing a subset E of a universal set Ω will return a number x from the real number line. Keep imagining that P must also obey the following rules:
If P can eat the subset, it will always return a nonnegative number.
If you give P the universe Ω, it will give you back 1.
If you collected together disjoint subsets and gave them to P to process, the result would be the same as feeding P each subset individually and adding the answers.
Simple, if odd out of context.
Mathematicians have a curious way of pulling magic out of simplicity.
~
Probability today is studied as a mathematical science based on the three axioms (flavored by set theory) stated above. These are the "first principles" from which many other, derivative propositions have been conjectured and proved. The results of the modern study of probability fuel many branches of engineering, including signal processing in electrical and computer engineering, and the insurance and finance industries, which translate probabilities into economic movement, among many other enterprises. Along the way, probability borrowed from the other giants of mathematics, analysis and algebra, and it goes on generating new research ideas for itself and other fields. This is the way of math: set down a bunch of rules (preferably simple ones to start) and see how their consequences play out.
But what is probability? If it is a quantitative measure, what is it measuring? How valid is that measure, and how could it be checked? Even these are rich questions to probe. A working qualitative description for practitioners might be that probability quantifies uncertainty. It answers, with some degree of success, such questions as "What is the chance?" or "How likely is this?" If a system contains uncertainty, probability provides the model for handling it, and data gathered from the system can validate or improve the probability model.
According to Wikipedia, there are three main interpretations for probability:
Frequentists talk about probabilities only when dealing with experiments that are random and well-defined. The probability of a random event denotes the relative frequency of occurrence of an experiment's outcome, when repeating the experiment. Frequentists consider probability to be the relative frequency "in the long run" of outcomes.
Subjectivists assign numbers per subjective probability, i.e., as a degree of belief.
Bayesians include expert knowledge as well as experimental data to produce probabilities. The expert knowledge is represented by a prior probability distribution. The data is incorporated in a likelihood function. The product of the prior and the likelihood, normalized, results in a posterior probability distribution that incorporates all the information known to date.
~
So let's reinterpret the math.
Let Ω be the sample space, the set of all possible outcomes; let the Ei be subsets of Ω denoting different events for different i; and let 𝔹 be the set of all events. Then a probability map P is defined as any function from 𝔹 → ℝ satisfying:
P(Ei) ≥ 0 All probabilities are non-negative.
P(Ω) = 1 It is certain that one of the outcomes of Ω will happen.
If Ei ∩ Ej = ∅ whenever i≠j, then P(∪i Ei) = ΣiP(Ei). Probabilities of disjoint events can be added to get the probability of any of them happening.
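A quick frequentist-flavored sketch (our own, simulating die rolls) shows the axioms at work as relative frequencies:

```python
import random

random.seed(0)
rolls = [random.randint(1, 6) for _ in range(100_000)]   # a fair die
p = lambda event: sum(r in event for r in rolls) / len(rolls)

print(p({1}), p({2}), p({1, 2}))   # P({1}) + P({2}) ≈ P({1} ∪ {2}) ≈ 1/3
print(p({1, 2, 3, 4, 5, 6}))       # P(Ω) = 1
print(p(set()) >= 0)               # probabilities are nonnegative: True
```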
--
Image generated by Rene Schwietzke using POV-Ray, a raytracing freeware that creates 3D computer graphics.
Further reading: A First Course in Probability (8th ed., 2010), Sheldon Ross. Probability and Statistics (4th ed., 2010), Mark J. Schervish and Morris H. Degroot.