## Saturday, April 14, 2018

### A World of Pure Imagination

In the philosophy of mathematics—hold on, hold on, I promise this is good—there's a perennial debate about whether numbers are real or just something we made up. This argument elicits a kind of irritated shrug from most people, but there is a fairly reliable way to evoke some pushback and/or incredulity: assert that imaginary numbers exist.

An imaginary number is the square root of a negative number, which of course doesn't make sense; any real number multiplied by itself comes out non-negative. But mathematics is all about laying down axioms and seeing what logically follows. Instead of treating √-1 as a calculator error, we can simply declare that √-1 = i and see where it leads.

Alright, you think, but we can't just declare things into existence. What does an imaginary number even mean in the real world? You can have 3 apples, or maybe even -3 apples if you owe someone, but 3i apples has no concrete, physical interpretation, right?

Well, it turns out that by allowing complex numbers—a set that includes both real and imaginary numbers—we open up a new space for doing mathematics and physics. In fact, if we want to explain the bewildering diversity of chemical elements or the solidity of matter, we have to explore this imaginary space. Could anything be more concrete?

Take a look and you'll see...

Before we delve into the physics, let's make sure we have a little intuition about complex numbers. The imaginary unit, i, is the square root of -1. Just based on that, we see imaginary numbers cycle:

i*i = -1, because that's our definition

(i*i)*i = -1*i = -i

(i*i)*(i*i) = -1*-1 = 1

And (i*i*i*i)*i = 1*i = i again
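If you want to check this cycle yourself, Python's built-in complex numbers (where i is written 1j) will happily oblige:

```python
i = 1j  # Python spells the imaginary unit 1j

# The powers of i cycle with period 4: i, -1, -i, 1, i, ...
assert i**2 == -1
assert i**3 == -i
assert i**4 == 1
assert i**5 == i  # and around we go again

# A preview of the geometry: multiplying by i rotates a point
# 90 degrees counter-clockwise around the origin.
assert (2 + 1j) * i == -1 + 2j
```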

This cycle lends itself to a neat geometric interpretation. Instead of the humdrum xy-plane, we can imagine a complex plane like this:

 By Svjo [CC BY-SA 4.0], from Wikimedia Commons
Here, the horizontal axis is real and the vertical axis imaginary. Complex numbers are pairs that have the form a + bi, representing coordinates (or a vector) on our plane. If we draw a circle counter-clockwise through the points 1, i, -1, and -i, you see they follow the same cycle as our imaginary multiplication. So you can rotate through the complex plane just by multiplying complex numbers.

It might look like we've only renamed a plain plane, but this space gives us flexibility the real numbers lack. Real numbers sometimes fall down on the job when you're trying to solve polynomial equations; x² + 1 = 0 has no real solution at all. But once you admit i as a square root of -1, every polynomial equation has a full set of complex roots (this is the fundamental theorem of algebra). Geometrically, this lets us access points on the complex plane through simple multiplication, without having to rely on more cumbersome machinery.

Okay, finding polynomial roots probably sounds pretty boring, so we're not going to dwell on that. We'll mostly think in terms of complex rotation and how that permits us to peek into weird, non-Euclidean spaces where up and down no longer work the way they should. But know that in the background, these imaginary roots are letting us do a bunch of linear algebra by providing solutions to otherwise unsolvable equations.

We'll begin with a spin...

Let's turn back to physics. Explaining how the properties of chemical elements—the gregariousness of carbon, the aloofness of neon—arise from quantum mechanics goes like this: the protons and neutrons of an atom are squeezed into a tiny nucleus while the electrons whizz by in concentric orbital shells. How “filled” the outermost shell is (mostly) determines the chemical properties of an element. So whatever keeps these negative nancies from clumping together is responsible for, well, basically all macroscopic structure.

The culprit is the Pauli exclusion principle, which says that particles with half-integer spin (like electrons) cannot occupy the same quantum state. Spin is intrinsic angular momentum, measured in units of ħ. If you measure the spin of an electron along some axis, you get either +1/2 (referred to as spin up) or -1/2 (upside down—spin down), with no other possible outcomes.

To keep track of the spin state of an electron, we can write a wave function that looks like this:

|↑⟩

Flip the electron upside down and the spin state is:

|↓⟩

Then flip it back right side up and you get:

-|↑⟩

Wait, what? We seem to have gained a minus sign somehow. In fact, you have to rotate an electron a full 720° to cycle back to the state you started with. The minus sign doesn't matter much in measurement, because anything we observe in quantum mechanics involves the square of the wave function, but its presence in the math is pivotal.

Say a transporter accident duplicates Kirk and the two end up fighting.

 Credit: Paramount Pictures and/or CBS Studios
There’s a brawl, both men lose their shirts, and one emerges victorious. How does Spock tell if the original Kirk won or lost? If Kirk is a subatomic particle, we’re left with two possible states that look the same when measured. Either original Kirk wins and duplicate Kirk loses:

|W⟩|L⟩

Or vice versa:

|L⟩|W⟩

Each one will scream, "Spock... it’s... me!" but there's no evil mustache to differentiate them. With identical quantum particles, this symmetry of exchange is mathematically equivalent to taking one particle and flipping it around 360°; in both cases you end up with observationally indistinguishable states.

But there are still two outcomes. Whenever we're dealing with multiple possibilities in quantum mechanics, it's time for you-know-who and his poor cat. Just as the cat can be in a superposition of alive and dead, a Kirk particle can be in a superposition of winning and losing.

Nothing weird happens when you mix and match bosons (particles with integer spin like photons). They exchange symmetrically and their superposition looks like this:

|W⟩|L⟩ + |L⟩|W⟩

But electrons (and other half-integer fermions) are antisymmetric; a 360° flip gives us that minus sign. So their superposition is:

|W⟩|L⟩ - |L⟩|W⟩

Now put both particles in the same state (say both winning) and the antisymmetric combination becomes |W⟩|W⟩ - |W⟩|W⟩, which equals 0. Any place where the wave function is 0, we have a 0% chance of finding a particle. So two electrons will never end up in the same state in the first place. (Kirk, then, is clearly a boson.) Replace "winning a fight" with "spin up state in the 1s shell of a hydrogen atom" and you've got the beginnings of chemistry and matter.
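We can sketch this bookkeeping with numpy, treating |W⟩ and |L⟩ as basis vectors and building two-particle states with tensor products (the W/L labels are just this post's Kirk example, not standard notation):

```python
import numpy as np

# Single-particle basis states
W = np.array([1.0, 0.0])  # |W>
L = np.array([0.0, 1.0])  # |L>

# Two-particle states are tensor (Kronecker) products
bosonic = np.kron(W, L) + np.kron(L, W)    # symmetric under exchange
fermionic = np.kron(W, L) - np.kron(L, W)  # antisymmetric under exchange

# With two distinct states, both superpositions are perfectly fine
assert np.any(bosonic != 0) and np.any(fermionic != 0)

# But put two fermions in the SAME state and the wave function vanishes
same_state = np.kron(W, W) - np.kron(W, W)
assert np.all(same_state == 0)  # 0 everywhere -> 0% chance of finding them
```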

What we'll see will defy explanation...

Okay, so how do we make sense of the weird minus sign a rotated electron acquires? This perplexing behavior originates with their 1/2 spin, which we can only understand if we venture back into the world of imaginary numbers, to a place called Hilbert space.

Physicists discovered that electrons were spin-1/2 as a result of the Stern-Gerlach experiment, where Stern and Gerlach sent silver atoms (and their attendant electrons) through a magnetic field. Spin up particles were deflected one way, spin down particles the other. That there were only two possible values along a given axis was weird enough, but follow-up experiments revealed even stranger behavior.

 By Theresa Knott from en.wikipedia - Own work, CC BY-SA 3.0, Link
If you collect all the |↑⟩ electrons and send them through another S-G apparatus, only |↑⟩ electrons come through. You're giving me a look, I can tell; what's weird about that? Well, we're still dealing with quantum mechanics, so we always have to consider superposition. Maybe the state after detection is |↑⟩ + |↓⟩ and there's a chance one will come out |↓⟩.

Experiment says no. This is a little weird. It means +1/2 spin doesn't overlap at all with -1/2 spin (positively or negatively). That should only be the case for vectors at right angles to each other. Somehow, these up and down arrows behave as if they're orthogonal.

Say we've been measuring spin along the z-axis until now. We can set up a second S-G apparatus that measures along x (or y) and then send |↑z⟩ electrons through that. The z- and x-axes are at right angles, so there should definitely be no overlap. But electrons are capricious; they split evenly between |↑x⟩ and |↓x⟩, even though an arrow only pointing up clearly has no component in any other direction.

A pattern is emerging here. The 180° separation between |↑⟩ and |↓⟩ acts like a right angle. Right angles act like they’re only separated by 45°. And a full 360° rotation just turns a vector backward, giving it the minus sign at the center of all this. All our angles are halved. The space electrons inhabit is weird, as if someone tried to grab hold of all the axes and pull them together like a bouquet of flowers.

Try to imagine that if you can, but don't worry if you can't; we're not describing a Euclidean space. You can sort of squeeze the z- and x-axes closer together, but any attempt to bring the y-axis in while also maintaining the 90° separation between any up and down and 45° separation between any right angle just won't work.

The only way we can fit the y-axis in there is to deploy a new degree of rotation distinct from Euclidean directions. That sounds like a job for the complex plane. In fact, our inability to properly imagine this space is directly analogous to not being able to find real roots for a system of equations, which as we know is where complex numbers shine. Vectors that are too close in real space can be rotated away from each other in complex space to give us the properties we need.
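All of these halved angles can be checked with a little linear algebra. In the standard machinery of quantum mechanics, rotating a spin-1/2 state by angle θ about the z-axis applies the matrix diag(e^(-iθ/2), e^(iθ/2)); note the half angles in the exponents, which are the source of the weirdness. A sketch with numpy:

```python
import numpy as np

def rotate_z(theta):
    """Spin-1/2 rotation about z. Note the half angles in the exponents."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

up_z = np.array([1, 0], dtype=complex)    # |up_z>
down_z = np.array([0, 1], dtype=complex)  # |down_z>
up_x = (up_z + down_z) / np.sqrt(2)       # |up_x>

# 180 degrees apart in real space, yet orthogonal in Hilbert space:
assert np.isclose(np.vdot(up_z, down_z), 0)

# z and x are at right angles in real space, yet the states overlap 50/50:
assert np.isclose(abs(np.vdot(up_z, up_x)) ** 2, 0.5)

# A full 360-degree rotation returns -|up_z>, not |up_z>:
assert np.allclose(rotate_z(2 * np.pi) @ up_z, -up_z)

# Only after 720 degrees are we back where we started:
assert np.allclose(rotate_z(4 * np.pi) @ up_z, up_z)
```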

From this mathematical curiosity—a space where rotation and orthogonality are governed by complex numbers—we find an accurate description of the subatomic particles that serve as matter's scaffolding. Electrons are best thought of not as tiny, spinning balls of charge but as wave functions rotating through a complex 2D vector space.

So what does it mean to have 3i apples? Nothing. But what does it mean to have 3 apple juice? The physical reality of complex numbers only manifests at the quantum level. To many philosophers, this indispensable presence demands ontological commitment. This is a way of saying, "Well, I guess if anything is real, that is." And how are we to say otherwise? Complex numbers might come from a world of pure imagination, but they're necessary for describing this world; shouldn't that count for something?

 Credit: Warner Bros. for this picture and the song lyrics.

## Friday, April 6, 2018

### Global Nudging

This post was inspired by a discussion I had with a couple of friends a while back. Since then, we've had a heat wave, some sort of rainless hurricane, and a snowstorm.

I don't really know anything about weather or climate science, but I do know a bit about thermodynamics. In thermo, you learn ever more esoteric definitions for temperature until you're no longer sure what's fundamental and what's just human convention. Maybe it's all information!?

The first proper definition you get is that temperature is a measure of the average speed of particles in some system, which relates to the average kinetic energy. A little later you learn a more precise definition: the higher the temperature, the wider the distribution of particle speeds. As you pump in energy, more and more particles collide and transfer momentum in unlikely, chaotic ways.

Can this statistical, microscopic argument be scaled up to the entire globe? If the surface temperature of the Earth rises, are we going to get even weirder weather? There is evidence from modeling and observation to support that hypothesis.

Anyway, my friends asked whether it was feasible to counteract global warming by pushing the Earth a little farther away from the sun. Of course, there are reasonable solutions to this looming crisis, but we seem increasingly less likely to opt for reasonable, so let's go with bananas instead. Most people seem to agree that letting the Earth warm by another 2° C would be unfathomably catastrophic; let’s assume we botch that and try for the Spaceship Earth solution.

The sun pumps out an inconceivable 380 yottawatts of power (about 50 million billion times more than our best nuclear plant), but we're so small and far away that we only catch about one half of one billionth of that energy. We also don't absorb all of it. You can tell because, uh, we can see the Earth from space; about 30% bounces back immediately.

The rest is absorbed and eventually radiates back out after heating the planet. Because the Earth is cooler than the sun, this radiation is mostly long wavelength, low energy infrared instead of visible light. When we take the temperature of stars, planets, and other celestial bodies, we're doing so by sampling that spectrum. But just based on the fraction of energy the Earth receives and its albedo, we can predict what temperature a distant alien astronomer would measure for the Earth with the Stefan-Boltzmann law.

The law relates the power output of a black body to its temperature. For a perfect black body, power out conveniently equals power in—our share of the sun's energy—which is a good enough approximation here. If we also know the sun's temperature (~6000 K), out pops the Earth's: 255 K (about -1° F).
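Here's the back-of-the-envelope version of that calculation. Setting absorbed sunlight equal to radiated blackbody power (the Earth's radius conveniently cancels out) gives T_earth = T_sun · √(R_sun / 2d) · (1 − albedo)^(1/4):

```python
import math

T_sun = 5778    # K, the sun's surface temperature
R_sun = 6.96e8  # m, the sun's radius
d = 1.496e11    # m, the Earth-sun distance
albedo = 0.3    # fraction of sunlight bounced back immediately

# Balance: (1 - albedo) * incoming flux * (cross-section pi R_e^2)
#        = sigma T^4 * (surface area 4 pi R_e^2); R_e cancels.
T_earth = T_sun * math.sqrt(R_sun / (2 * d)) * (1 - albedo) ** 0.25
print(round(T_earth))  # ~255 K
```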

A bit chilly? Yes, but this is roughly the effective temperature (another definition: the temperature of a black body with the same power output) aliens would measure via spectral analysis. If the aliens were smart, they would also notice trace amounts of water vapor and carbon dioxide in our atmosphere and be confident that conditions on the ground were a bit more comfortable. Why? Because those are both smurghouse—oh, sorry, greenhouse—gases.

The atmosphere is mostly transparent to the sun's visible light, but to the great frustration of infrared astronomers, it is not transparent to Earth's thermal radiation. So instead of streaming back out to space unimpeded, infrared photons keep getting knocked about and turned around by water vapor and CO2 molecules. This molecular mugging robs the photons of energy—raising temperatures on the ground—and slows their escape.

 Atmospheric absorption by wavelength. Credit: NASA
All this action makes the average across the surface a reasonably pleasant 288 K (59° F). But of course a little more smurghouse gas and we start contemplating pushing the Earth away from the sun. So let's get back to that.

Reducing the effective temperature of the planet from 255 to 253 K is a 0.8% decrease. From the Stefan-Boltzmann law, that requires a 3.1% decrease in energy received from the sun, which we can get by just pushing the Earth a mere 1.6% farther away (2.3 million km). Can we do that?

(This is less than the 5 million km swing due to Earth's elliptical orbit, but as an average, sustained change, it will have a greater effect on temperature. Think about quickly running your hand through a candle flame versus holding your hand over the flame.)
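The arithmetic behind that 1.6%: sunlight falls off as 1/d² and effective temperature goes as the fourth root of the flux, so T ∝ 1/√d and therefore d ∝ 1/T²:

```python
d = 149.6e6              # km, current average Earth-sun distance
T_old, T_new = 255, 253  # K, effective temperatures

# Flux ~ 1/d^2 and T ~ flux^(1/4), so d ~ 1/T^2
d_new = d * (T_old / T_new) ** 2
shift = d_new - d
print(shift)  # roughly 2.3-2.4 million km, about 1.6% farther out
```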

First, let's look at the question from an energy budget standpoint. Any orbit represents a specific balance between kinetic energy from motion and potential energy from gravity, which adds up to a total orbital energy. To move from one orbit to another, you must pay—by some means—the difference in energy between the orbits. In our case, that requires about 7 MJ/kg. That's a 60-watt light bulb operating for 32 hours. But the Earth has a lot of kilograms, so... light bulb comparisons are a little inadequate; it comes out to the energy of a hundred million dinosaur-killing asteroid impacts.
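A sketch of that budget, using the fact that a circular orbit's total energy per kilogram is −GM/2r:

```python
G = 6.674e-11      # m^3 kg^-1 s^-2, gravitational constant
M_sun = 1.989e30   # kg
M_earth = 5.97e24  # kg
r1 = 1.496e11      # m, current orbit
r2 = r1 * 1.016    # m, 1.6% farther out

# Specific orbital energy of a circular orbit is -G*M_sun/(2r),
# so the cost of moving out is the difference between the two orbits:
cost_per_kg = G * M_sun / 2 * (1 / r1 - 1 / r2)  # ~7 MJ/kg
bulb_hours = cost_per_kg / 60 / 3600             # ~32 hours of a 60 W bulb

total = cost_per_kg * M_earth  # ~4e31 J for the whole planet
```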

But as we know from the Chicxulub crater and the fact that dinosaurs are now turkeys, celestial bodies don't smack into each other like perfect billiard balls. Collisions between them are very inelastic, with much energy being lost to superheating the atmosphere, excavating dirt, and forging cool new minerals instead of moving the planet. All in all, I would not recommend countering global warming by annihilating three quarters of all life a hundred million times.

 Part 1 of 100,000,000. Credit: NASA
Maybe rockets instead? Here the relevant quantity is the delta-v required for an orbital maneuver—that is, the fuel necessary to change a body's velocity. If we want to move the Earth farther out, it will have to orbit at a slower speed. But this is a two-step process. First we fire along our direction of motion, stretching the orbit into an ellipse between our current distance and our desired distance. Then, at the far end of that ellipse, we fire again to circularize the orbit at the new distance. (Counterintuitively, both burns speed us up, yet the Earth ends up orbiting slower than before; the extra energy goes into climbing the sun's gravity well.)

All told, the Δv is just 0.3 km/s. The most advanced rocket that actually kind of exists right now is the VASIMR ion rocket, which uses magnets to propel plasma out the back. Conservation of momentum makes this work: expel a lot of tiny particles very quickly in one direction (300 km/s for our ions), push a heavy object at a moderate speed in the opposite direction. Plugging all this into the Tsiolkovsky rocket equation tells us how much of our rocket (engine+Earth) needs to be fuel. The answer: just 0.1%!
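The rocket-equation step, spelled out: Δv = vₑ ln(m₀/m₁), which we can solve for the fuel fraction:

```python
import math

dv = 0.3e3  # m/s, delta-v for the whole maneuver
ve = 300e3  # m/s, exhaust velocity of our ion engine

# Tsiolkovsky: dv = ve * ln(m0 / m1), so the fuel fraction is
fuel_fraction = 1 - math.exp(-dv / ve)
print(f"{fuel_fraction:.2%}")  # 0.10%
```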

VASIMR uses argon, the third most abundant gas in our atmosphere. If we distill all the argon out of our atmosphere, which is 1.3% argon by mass, we're... 0.001% of the way there.

How about a solar sail? Despite being massless, photons still impart momentum. The total amount we need is Δv times the Earth's mass. To get the required impulse before 2100, which people seem to think is important, we'd need a sail with an area of... well let's just say it's way, way bigger than the sun and move on.
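If you're curious just how bad it is: the required sail area follows from the impulse, the time available, and the sun's radiation pressure at Earth (flux/c for a perfectly absorbing sail; a reflective one does twice as well but doesn't change the conclusion):

```python
import math

M_earth = 5.97e24       # kg
dv = 0.3e3              # m/s, delta-v needed
flux = 1361             # W/m^2, solar constant at Earth's distance
c = 3.0e8               # m/s
seconds = 80 * 3.156e7  # ~80 years until 2100

impulse = M_earth * dv     # total momentum change needed, kg m/s
force = impulse / seconds  # sustained thrust required, N
area = force / (flux / c)  # radiation pressure on an absorbing sail = flux / c

sun_disk = math.pi * (6.96e8) ** 2  # the sun's cross-sectional area, m^2
print(area / sun_disk)  # ~100,000 suns' worth of sail
```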

Okay, all in all this looks like a pretty bad idea. But while we're on the topic of solar sails, catching the sun's photons would have the added benefit of preventing said photons from reaching Earth. In that case, why bother moving the planet at all?

Indeed, one of the least implausible recklessly dangerous solutions to global warming is to change the Earth's albedo. We could do this with a sun shield, or particulate matter in the atmosphere, or any number of other options that would no doubt have catastrophic unintended consequences. But as we saw, we only need to get rid of 3.1% of the sun's energy, which we can do by just increasing the current albedo from 0.3 to 0.32. Easy peasy!

## Thursday, March 22, 2018

### Seeing Stars

Near the end of the 20th century, astronomers made a remarkable breakthrough: they began to see stars.

You read that right, and no one has a concussion. When you look up at night and observe twinkling stars, what you're seeing is not an image but a bit of optical trickery as light interferes within your eyes. It wasn't until the last 20 or 30 years that astronomers, employing a clever, delicate technique known as optical interferometry, began to see stars as the distant suns we know them to be. Until this technique matured, all stars were point sources. You can blame the long delay on their stupendous distance and the reality of eyes that fit inside eye sockets.

Our sun is a short 150 million kilometer hop away with an angular size of half a degree. But if it left us behind and visited our nearest neighbor, Proxima Centauri, at 4.25 light years, its angular size would shrink. At that great distance, we would expect the sun to appear roughly 270,000 times smaller (6.7 milliarcseconds). But stars aren’t that small in the sky. Imagine lining up a quarter million of them next to the sun (or moon, which is the same angular size); the sun is going to feel pretty insecure next to that line. Something weird is going on that makes stars appear larger than they should.
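The small-angle arithmetic, if you want to check it (angular size ≈ diameter/distance; using the sun's true 0.53° size gives about 7 milliarcseconds rather than 6.7, which assumes exactly half a degree, but the ballpark is the same):

```python
import math

R_sun = 6.96e8  # m
au = 1.496e11   # m, Earth-sun distance
ly = 9.461e15   # m, one light year

def angular_size_mas(diameter, distance):
    """Small-angle approximation, converted to milliarcseconds."""
    return math.degrees(diameter / distance) * 3600 * 1000

here = angular_size_mas(2 * R_sun, au)          # ~1.9 million mas (half a degree)
there = angular_size_mas(2 * R_sun, 4.25 * ly)  # ~7 mas at Proxima Centauri
print(here / there)  # ~270,000 times smaller
```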

The problem is with the equipment we're using: our eyes. To really see a star (or anything), we need to know the incoming angles of all its light rays. The eye does this with a lens, which focuses the light that passes through the pupil onto the retina. Because the pupil is small (~5 millimeters in diameter), an incoming light wave interferes with itself as it passes through, projecting a diffraction pattern onto the retina. Instead of a single point of contact, the light wave spreads out over a small area.

 An Airy disk by Wisky - Own work, CC BY-SA 3.0, Link
For large objects this is okay, but for distant stars it is not. That tiny milliarcsecond angular size means all the light rays that reach your eye are jam packed and their interference patterns overlap. The eye can’t tell the difference, so it all looks like one big, fuzzy blob of light.

Okay, you think, we just need a bigger pupil to cut down on the diffraction. And that's what a telescope is—an optical system with a much larger pupil (aperture) than the human eye. But there's a problem. The (theoretical, usually much worse because of atmospheric turbulence) diffraction limit of the human eye is about 20 arcseconds, which is 3,000 times as large as our sun-at-Proxima-Centauri. To pull the interference apart, the telescope needs to be at least 15 meters across.
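The Rayleigh criterion puts the diffraction limit at θ ≈ 1.22 λ/D; turning it around tells you the aperture needed for a given resolution. At 550 nm (mid-visible) resolving 6.7 milliarcseconds takes about 20 meters; the 15-meter figure corresponds to the blue end of the spectrum:

```python
import math

MAS = math.radians(1 / 3.6e6)  # one milliarcsecond in radians

def aperture_needed(theta_mas, wavelength):
    """Rayleigh criterion solved for the aperture diameter, in meters."""
    return 1.22 * wavelength / (theta_mas * MAS)

print(aperture_needed(6.7, 550e-9))  # ~20 m for green light
print(aperture_needed(6.7, 400e-9))  # ~15 m for blue light
```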

There aren't any that big (yet), and we can't really make mirrors that size without breaking them up into smaller segments. To image a star, then, we need a technique that bypasses the diffraction limit imposed by our frustratingly non-gargantuan apertures. Enter optical interferometry, which lets us create a virtual telescope as large as the distance between widely spaced individual ones.

To see how this works, let's first imagine we're just trying to find the position of a single, dimensionless dot of a star. Our interferometer is two regular telescopes set a good distance away from each other. By the time the star’s light reaches us, it looks like a flat plane wave. If the star is directly overhead, the wavefront hits both telescopes at the same time. If it's at an angle, the wavefront hits one telescope before the other and the two signals are out of phase.

 Interferometer diagram. Credit: ESO
The interferometer combines the light from each telescope (very carefully) via mirrors, creating light and dark bands known as interference fringes. In-phase signals interfere constructively, out-of-phase signals destructively. The more the two signals cancel each other out, the farther away from directly overhead the star must be.

Until the signals cancel completely. Then, if you push the star even farther, the signals will cycle back into phase. The distance between the two telescopes—the baseline—determines how long this takes. A longer baseline spaces out the cycle, giving you more precise measurements in the same way that having a bigger telescope gives you better resolution. Since nudging an actual star around is problematic, we instead read this cycling pattern out of the interference fringes themselves.

Because interference patterns are regular and cyclic, we can think of the baseline as sampling a particular "spatial frequency." This is a measure of the relevant physical scales of an image.

Imagine looking down on a dense forest from overhead. If the ground is obscured, then what you see changes from leaf to leaf, which represents a high spatial frequency. Now think about the same (deciduous) forest in the dead of winter. With the leaves gone, the scene changes from tree to tree—a lower spatial frequency. By sampling spatial frequencies, you can figure out the important sizes of whatever you’re looking at. That way, you don't (I'm sorry) miss the forest for the trees.

But what is the spatial frequency of a point source star? Let's mix up our nature metaphor and imagine a lone blade of grass in a vast, empty plain. If that blade is what we're looking for and it's the only thing there, then every length scale contributes equally to pinpointing it. So every spatial frequency—every baseline of our interferometer—will be strong, producing clear, regularly spaced interference fringes. A small object in space has a wide spatial frequency spectrum.

The bigger an object gets, however, the narrower its spectrum will be. A larger object means more light waves coming from slightly different directions, which creates more interference and messier fringes. The result is that you have to search around to find a baseline where all the waves add up in just the right way to produce a nice, regular set of fringes. So when you’re looking at the spatial frequency spectrum of an extended object (that is, a real, physical one that actually exists), it will be narrow and centered around only a few length scales.

This complementarity—wide in space, narrow in frequency (or the other way around)—is a property of Fourier transforms. A Fourier transform is a way of decomposing a function in one domain (space) into its constituent parts in another domain (frequency), or vice-versa.  We have nifty computer algorithms that can work this out quickly and efficiently. The important part is that a function and its Fourier transform will always have this narrow-to-wide, wide-to-narrow pattern.
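You can watch this narrow-to-wide tradeoff happen with numpy's fast Fourier transform. A toy demonstration with Gaussians of two different (arbitrarily chosen) widths:

```python
import numpy as np

def spectral_width(signal):
    """RMS width of a signal's power spectrum."""
    power = np.abs(np.fft.fft(signal)) ** 2
    freqs = np.fft.fftfreq(len(signal))
    return np.sqrt(np.sum(freqs**2 * power) / np.sum(power))

x = np.linspace(-50, 50, 1001)
narrow = np.exp(-x**2 / 2)  # "point source": narrow in space
wide = np.exp(-x**2 / 200)  # "extended object": wider in space

# Narrow in space -> wide in frequency, and vice versa
assert spectral_width(narrow) > spectral_width(wide)
```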

So here's what you do to image a star. You point your interferometer at it and sample as many spatial frequencies as you can. Spatial frequencies are determined by the baseline length between telescopes, and the two main ways of adjusting this are (1) adding multiple telescopes at different distances from each other or (2) waiting for the earth to rotate, which changes the "projected baseline" of the interferometer (as seen from the star).

 Very Large Telescope Interferometer. Credit: ESO
With this sampling done, you have a frequency spectrum of all the length scales on which the star is bright. The strongest spatial frequency should be the diameter of the star, because the diameter dominates a circularly symmetric object. Higher frequencies point to smaller surface features. To identify all of them, you take (something like) the Fourier transform of the spectrum to produce a different graph with all the points in space at which the star is bright—that is, the incoming angles of light, an image.

There are some complications, notably that you can never sample all spatial frequencies, which means some guesswork is required. So you have algorithms that interpolate what your star looks like by removing meaningless frequencies, erasing artifacts produced by the shape of the telescopes, and making assumptions about the star (it's probably not a giant monkey, say). Do all this right and optical interferometry gives you a picture like this:

 Pi1 Gruis. Credit: ESO
This is a real infrared image of a star that is not the sun. It is a bubbling red giant 530 light years away that would engulf everything up to the asteroid belt in our solar system. Only a handful of images like this exist, giving us our first glimpse of the many kinds of stars that populate our galaxy. As interferometry techniques and technology improve, so too will these pictures, letting us refine our stellar physics and see stars as never before.

## Wednesday, October 11, 2017

### The United Federation of Paradox

In Star Trek, the Federation is a post-capitalist utopia where citizens act out of a desire to better themselves or civilization rather than attain monetary wealth. It's not entirely clear how this utopia came about, but we're often told humanity transcended its violent, greedy impulses through cultural evolution. A more cynical view is that the advent of replicators eliminated most scarcity, and with it any need to be violent or greedy.

I would like to offer an alternative hypothesis: Every episode, Starfleet ships employ technology that permits time travel, so the Federation should be able to Seven Days its way out of any mistake on the path to utopia. You see, the physics of the 20th century—special relativity—tells us that any method of FTL (whether warp drive, subspace communication, or galactic spore network) is also a method of time travel. FTL permits time travel because reality has no rigid, universal stage on which all events play out. Instead, space, time, and the events that occupy space and time are all linked together by a consistent set of interrelationships.

Galileo made this argument while trying to convince others that a spinning, moving Earth wouldn't throw everything out of whack. What we now call Galilean relativity says the laws of motion don't depend on your (inertial) frame of reference. A frame of reference is just a perspective from which to observe the universe. If you're sitting in a chair reading this, you and the chair constitute a frame; if you're hurtling through interstellar space (at a constant speed) in a starship, that's another frame. Also, go you.

Galilean relativity means that as long as you occupy an inertial frame, you never notice anything funny that doesn't accord with the laws of motion. Whether you're in a turbolift or a shuttlecraft, if your velocity is constant, a tossed ball will land where you expect it and you won't feel any mysterious forces pushing on you. The upshot is that no frame of reference is privileged or reveals what's "really" happening. All are equally valid.

The tricky part is translating one reference frame to another. Walking down the aisle of a plane, everyone on the plane can treat you as moving only a few miles per hour. Everyone on the ground, however, needs a way to combine your velocity and the plane's. This feat is accomplished via a transformation, which is just a mathematical tool for moving between reference frames. In Galilean relativity, that transformation is easy and basically commonsense: to an observer on the ground, your speed = plane speed + walking speed.

It is these transformations—which spell out equally valid and consistent ways of interpreting reality from different frames of reference—that allow for time travel. To see how, we have to move from Galilean relativity to Einstein's special relativity.

Special relativity is a generalization of the Galilean variety. There are two postulates that end up having deep consequences:

(1) The laws of physics don't depend on your frame of reference.

This is an expansion of Galileo's rules to include electromagnetism.

(2) The speed of light (c) is a law of physics.

This postulate is implicitly included in the first one, because Maxwell's equations for electromagnetism predict a speed of light. It's the revolutionary part in all this, though, so Einstein spelled it out explicitly.

By itself, a law that dictates a speed is not terribly noteworthy. Any wave equation specifies the speed at which the wave travels. We usually think of waves as traveling through a medium, in which case Galilean relativity might apply. To an outside observer, the total wave speed = medium speed + wave equation speed. Physicists assumed this applied to light as well and proposed a luminiferous aether to serve as a reference frame and medium.

The trouble was, the properties required by a luminiferous aether (given how light behaved) seemed ludicrous and unphysical, and when measured, c always seemed to be the same. Additionally, and famously, the Michelson-Morley experiment failed to detect any sign of the aether. The alternative, according to Einstein, is that c is not defined relative to a frame of reference; instead, the speed of light is a law of physics and the same for all inertial observers.

But this violates the rules of the Galilean transformation, because it means you can't add velocities when light is involved. If a Klingon runs at you firing a laser pistol (canon in some of the TOS era), Galileo says the laser's speed = Klingon running speed + c. Einstein says the speed is always only c, for both you and the Klingon. And that means we need a new transformation that is, as before, equally valid and consistent for all inertial frames of reference. For special relativity, that's called the Lorentz transformation.
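The Lorentz transformation implies a new velocity-addition rule, u' = (u + v)/(1 + uv/c²), which reduces to Galileo's at everyday speeds but never exceeds c. The Klingon scenario in numbers (the sprint speed is a made-up figure):

```python
c = 299_792_458.0  # m/s, the speed of light

def add_velocities(u, v):
    """Relativistic velocity addition; approximately u + v when both << c."""
    return (u + v) / (1 + u * v / c**2)

klingon = 8.0  # m/s, a brisk warrior's sprint (hypothetical number)

# Galileo says the laser travels at c + 8. Einstein says: still c.
assert abs(add_velocities(c, klingon) - c) < 1e-6

# At everyday speeds the correction is immeasurably small:
print(add_velocities(30.0, 8.0))  # ~38 m/s, as common sense expects
```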

Rather than just show you the Lorentz transformation (it involves c and some square roots and reduces to the Galilean transformation at everyday speeds), I want to provide a visual explanation for how all observers can measure the same c. Memory Alpha says Vulcan is 16 light years from Earth. So let's imagine there's a starbase between the two planets, 8 light years from each. If the starbase emits a radio signal traveling at c, it reaches both Earth and Vulcan 8 years later. How do we represent this graphically?

 Credit: Paramount/CBS for the Trek stuff and NASA for the Earth stuff.
The x-axis (horizontal) is distance in light years and the t-axis (vertical) is time in years. If our reference frame is the starbase and the planets are not moving relative to it, then they move upward in time without moving left or right through space. The radio signals, on the other hand, move 1 light year per year, so they travel 45 degrees out from the starbase. Where the radio signal and the world line of a planet intersect is the location in spacetime (at the planet, 8 years in the planet's future) where the signal reaches the planet.

Now let's say the Enterprise is at the starbase and starts heading toward Vulcan at sublight impulse speeds. What does that look like?

 Credit: Paramount/CBS for the Trek stuff and NASA for the Earth stuff.
Because impulse is slower than light, its path is tilted more toward the vertical than the radio signal's; more time is required to cover the same distance. Since we're dealing with special relativity, there is an inertial reference frame moving along with the Enterprise, and from that frame we have to measure the same c. According to the graph, this doesn't seem possible. It sure looks like the radio signal hasn't gotten as far away from the Enterprise as it has from the starbase (horizontal distance) in the same amount of time (vertical distance).

So here's where we need to perform a coordinate transformation that takes us from the reference frame of the starbase to the reference frame of the Enterprise. For a frame centered on one inertial object, the object's position doesn't change in time. For the starbase, that means its path through spacetime follows the vertical—or time—axis. So let's define a new time axis (t') for the Enterprise which follows its diagonal path. If c is the same in all reference frames, that means we also need a new space axis (x'), which has the same angular separation from the radio signal as t'.

 Credit: Paramount/CBS for the Trek stuff and NASA for the Earth stuff.
Because x' and t' are tilted toward the radio signal by the same amount, the signal still moves 1 light year per year in this new reference frame; the ratio doesn't change. This has weird consequences, though. For starters, reconciling a constant c seems to have involved squishing space and time together. But it gets worse.
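That ratio-preserving tilt is exactly what the Lorentz transformation does. Here's a minimal numerical sketch, working in units where c = 1 (light years and years) and using an illustrative impulse speed of 0.5c, that follows the radio signal and confirms it still moves at c in the Enterprise frame:

```python
import math

# Lorentz transformation with c = 1 (distances in light years, times in
# years), for an Enterprise moving at speed v relative to the starbase.

def lorentz(x, t, v):
    """Transform starbase coordinates (x, t) into the moving frame."""
    gamma = 1 / math.sqrt(1 - v**2)
    return gamma * (x - v * t), gamma * (t - v * x)

v = 0.5  # impulse speed toward Vulcan, as a fraction of c (illustrative)

# Follow the rightbound radio signal: at starbase time t it sits at x = t.
for t in (1, 2, 4, 8):
    xp, tp = lorentz(t, t, v)
    print(xp / tp)  # speed in the Enterprise frame: always 1.0
```

Both coordinates get squished by the same factor along the light path, so the signal's speed comes out as 1 light year per year in either frame.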

In the starbase reference frame, lines parallel to the x-axis are single moments in time. Any event on such a parallel line happens simultaneously for all observers sharing that frame. For the Enterprise frame, simultaneous events happen on lines parallel to the x' axis, which is a diagonal line that cuts through time in the starbase frame. This means events that are simultaneous in the Enterprise frame happen at different times for observers in the starbase frame, and vice versa.

For example, if you draw a line parallel to the x'-axis through the moment when the radio signal reaches Vulcan, you see that the event of the signal reaching Earth is ahead of that line; it happens later in the Enterprise's frame, despite the two planets being equidistant from the starbase. This is (a) the relativity of simultaneity, (b) patently ridiculous, (c) absolutely true, and (d) the feature we want to exploit to travel through time and create a problem-free utopia.
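The same transformation puts numbers on that graphical argument. A sketch with c = 1 and an illustrative Enterprise speed of 0.5c, applying the time part of the Lorentz transformation to the two signal-arrival events (Vulcan at x = +8 ly, Earth at x = -8 ly, both at t = 8 years in the starbase frame):

```python
import math

def lorentz_t(x, t, v):
    """Time coordinate of event (x, t) in a frame moving at speed v (c = 1)."""
    gamma = 1 / math.sqrt(1 - v**2)
    return gamma * (t - v * x)

v = 0.5  # illustrative Enterprise speed toward Vulcan, as a fraction of c

# In the starbase frame, the signal reaches both planets at t = 8 years:
signal_reaches_vulcan = lorentz_t(+8, 8, v)  # Vulcan at x = +8 ly
signal_reaches_earth = lorentz_t(-8, 8, v)   # Earth at x = -8 ly

print(signal_reaches_vulcan)  # about 4.6 years
print(signal_reaches_earth)   # about 13.9 years: later, not simultaneous
```

Events that share a time coordinate in one frame get pulled apart in the other, which is the relativity of simultaneity in two function calls.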

Normally (in special relativity), observers disagreeing on the order of events doesn't matter. If observers are limited to light speed or less, by the time they're able to meet up and discuss the discrepancies, all the events they disagree about are in everybody's past. FTL lets you circumvent this restriction.

So here's how to resolve every 42-minute Star Trek plot in 3 easy steps. The scenario presented here is set up for graphical simplicity; it smooths over a few wrinkles and might not perfectly align with Star Trek technology. (Then again, neither does Star Trek technology.)

Step 1: A space-ooze-energy monster attacks the Defiant, but it turns out the creature is just misunderstood. To restock on redshirts, Worf activates the Lorentz Protocol! Via subspace, the Defiant sends a message to Deep Space Nine.

 Credit: Paramount/CBS
If subspace communication is instantaneous (and in most episodes it looks close enough to instantaneous), then Worf just finds the Bajoran system along the x-axis and puts the message there. Because no time passes, the message arrives along the x-axis.

Step 2: On DS9, Sisko gives the message to O'Brien, who hops into a runabout and flies away from the Defiant at impulse (some speed close to c).

 Credit: Paramount/CBS
In our diagram, we're now switching to the runabout's moving reference frame. Its speed relative to the Defiant establishes a new frame of reference.

Step 3: The runabout sends a warning about the interdimensional slug to the Defiant's location in space via subspace.

 Credit: Paramount/CBS
Because we are in a new reference frame moving relative to the Defiant, an "instantaneous" subspace message no longer appears somewhere on the horizontal line but along the runabout's x'-axis, which intersects the Defiant's spacetime location in its past.

Ultimately, the speed of the runabout and its distance from the Defiant determine, via a pretty simple triangle, how far into the Defiant's past the subspace warning goes. Arrange things correctly and Worf gets the warning before ever running into the crystalline spider-snake.
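That triangle can be sketched with the simultaneity relation that falls out of the Lorentz transformation: with c = 1, all events on one of the runabout's "now" lines satisfy t - v*x = constant. The distance and speed below are illustrative numbers, not from any episode:

```python
# How far into the Defiant's past an "instantaneous" subspace message
# lands (c = 1: distances in light years, times in years). In a frame
# moving at speed v, lines of simultaneity satisfy t - v*x = constant,
# so a message sent from (x, t) arrives at the Defiant (x = 0) at t - v*x.

def arrival_time(x, t, v):
    """Defiant-frame time at which the runabout's instant message arrives."""
    return t - v * x

runabout_distance = 2.0  # light years from the Defiant (illustrative)
runabout_speed = 0.5     # fraction of c (illustrative)
send_time = 0.0          # "now" in the Defiant's frame

print(arrival_time(runabout_distance, send_time, runabout_speed))
# -1.0: the warning lands one year in the Defiant's past
```

The product of speed and distance sets the size of the jump into the past, so a faster or farther runabout sends the warning further back.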

But of course, now Worf's gone and killed his own grandfather (who he may have been?—time travel!). That is, if he receives the warning before sending out the message to request a warning, then he avoids the cybernetic mind worm attack and never needs to send out a message in the first place. Paradox!

This is the central reason why physicists think FTL communication or travel is a non-starter. Other aspects of special relativity prohibit accelerating up to c, but nothing in the theory forbids processes that are faster than light from the start. Such processes do, however, invariably lead to issues with causality.

As we've just seen, special relativity + FTL means you lose a coherent narrative leading from the past to the future. You can preserve causality with FTL but only if you abandon the rules of special relativity. Or you can live in the universe we seem to inhabit, which has relativity and causality but loses all that FTL fun.

Of course, when asked to pick two, Star Trek usually just picks one: FTL. Most time travel stories in Trek are rife with causality issues that are usually intentionally ignored, except by having characters say things like, "Oh yeah, I totally flunked temporal mechanics at Starfleet Academy, haha!" And relativity is almost entirely absent; there's rarely any mention of time dilation or length contraction or all the other wacky things that happen when you get close to c.

Nevertheless, the United Federation of Planets is a utopia, and it must have gotten there somehow... or will get there... or will have already gotten there. (Oh boy. Consult Dr. Streetmentioner's book for tense corrections.) Or maybe not—after all, utopia does mean no-place.

## Thursday, August 31, 2017

### Nightfall

(Spoilers for a 76-year-old Isaac Asimov story, which you can read here for some reason.)

"Nightfall" is one of my favorite Asimov stories. It's set on an alien planet in a system with six suns, arranged so that at least one is always up. Consequently, the people of this planet never know night. What drives the action is the discovery of a moon (invisible due to the constant sunlight) that astronomers predict will eclipse a sun when all the others have set. The effect would be sudden, inescapable darkness, which they fear will drive people mad (and may have led to past catastrophes).

This story has been on my mind since a little before our solar eclipse. I had heard repeatedly that a total solar eclipse is an event unlike any other, that everyone should try to experience one at some point during their lives. But although some say totality can be drop-to-your-knees-and-weep life-changing, there is as far as I know no evidence of totality-induced civilization-wide collapse. Of course, we experience night daily, so sudden darkness is not as extraordinary for us.

What is it about a total solar eclipse that inspires such numinous feeling, then? Having stood within the moon's umbral gloom for slightly more than two minutes, I can offer my own perspective.

On Sunday, August 20, I traveled to Greenville, South Carolina with a friend and his family, who had family in the area willing to put us up for two nights. The drive down to Greenville from Maryland took about 11 hours. 11 hours of tedium and traffic for 2 minutes of totality—an easy choice for most, whatever that choice may be.

That evening, I passed out eclipse glasses to those who needed them and jury-rigged a solar filter onto my binoculars with index cards and masking tape. As the resident astronomy expert, I had been told by multiple eclipse veterans that it was my responsibility to do dry runs of totality so the uninitiated would be prepared for the moment. Instead we watched Game of Thrones and considered our return travel plans in light of the awful traffic coming down.

The day of, August 21, we found a nearby baseball diamond and set up our equipment about fifteen minutes before the start of the partial phase. A partial solar eclipse is a weird and cool but ultimately very detached phenomenon. You can't (or shouldn't) look directly at the sun, so watching the moon's shadow creep across its face requires filters or those eclipse glasses you've heard way too much about by now.

Through them, there was only the waning orange disc of the sun and blackness—the black of the moon, the black of sky, the black of anything else we might try to look at. Witnessing a partial eclipse was like looking through an insufficiently detailed virtual reality environment. On top of that, up until about 80% obscuration, there was very little change in our surroundings to indicate that anything was up.

But the orange disc inexorably slid into a crescent, which served as a visceral countdown to the main event: totality.

At about fifteen minutes before second contact, with the sun a thin wedge, we began to notice that it was substantially cooler out and strangely dim. The sun was still a blazing fireball in a bright blue sky, but the whole scene was a few shades darker, as if seen through sunglasses. Unfortunately, we didn't have an opportunity to see much in the way of strange shadows where we were.

As the moon reduced the sun to an arc of light, I watched through my binoculars until the orange shriveled to nothing, leaving only black. Then I looked up and experienced totality.

There's something of a twist in "Nightfall," which is that it's not night that drives people mad. In the story, they had been preparing for it. In fact, a minute or two in a totally dark room was akin to an amusement park ride for us—thrilling and hair-raising, maybe too much for some, but ultimately pretty safe.

What drove them mad was a phenomenon they were utterly unprepared for, which shattered their conception of the world and forced them to pick up the pieces.

When night finally fell, the stars came out. Except in myth, their world had consisted entirely of one planet and its attendant suns. But each pinprick of light against the black was another sun, another possible world. Each twinkling tear in the curtain of night let them peek into a much, much larger universe, one too big for their minds to bear. So they went mad instead.

I knew intellectually—from descriptions and pictures—what totality was going to be like. None of that prepared me for the moment itself, when the whole solar system was laid out before me.

Night fell and the stars came out, yes. And birds and bugs acted up. And the sun disappeared.

But here's what stuck with me. I don't remember all that many stars and it was never truly dark out. After gaping at the eclipsed sun for a moment, I saw Jupiter to the east and Venus to the west. They flanked the sun, and I could draw a straight line through all three of them. That line is the ecliptic plane, the disc of our solar system. But in the middle, instead of a sun, there was a hole in the sky—the moon. It, too, lay in that plane, along with me staring up at it all.

With the moon intercepting the light of day, the sun's faint outer atmosphere became visible. For most of our lives, the sun is a featureless glare we have to avoid. We only glimpse it during sunrise and sunset. But even then, the beauty of dawn and twilight is in the intermingling of sun and sky; it's never just you and the sun.

But the corona is the crown of the sun. By eye alone I could see exquisite detail and structure in the threaded, incandescent layers that were hidden from me a moment before. All this made the sun very real—not an untouchable brilliance, not a puddle of mixing reds, not a perfect orange disc against the black, but a giant ball of plasma reaching out to me. And it sat in the middle of a vast solar system of planets, with me on a tiny blue one hurtling around it.

Then it was over. The eclipse didn't fade away like a half-remembered dream. It just ended. There were a few seconds of twinkling at the edge of the black and then daylight returned, and the sun and planets and solar system were gone.

The initial seed for Asimov's "Nightfall," so the story goes, was a conversation between him and his editor, John W. Campbell. There's a line in a Ralph Waldo Emerson essay that reads, "If the stars should appear one night in a thousand years, how would men believe and adore, and preserve for many generations the remembrance of the city of God!" Campbell gave this line to Asimov essentially as a prompt, telling him he thought "men would go mad" instead.

And indeed, that's what happens. The short story ends with the main characters holed up in a fortified observatory, watching a crimson glow on the horizon that is not the return of the sun, but a city aflame.

So Asimov and Campbell are pretty cynical about our capacity to cope with a terrifyingly large world. By nature I share that perspective, and a gander at my Twitter feed seems to confirm the validity of such cynicism. While we are still stuck on this pale blue dot, the complexity of the world has grown dramatically in the last couple centuries.

We find ourselves unable to confront the reality of global warming, to the extent that some of us deny it while most of us pretend everything will work out somehow. Our societies have become increasingly interconnected and pluralistic, leading many to retreat into xenophobia that is at best ugly and at worst fiery and violent. Given all that, it doesn't seem unreasonable to imagine that a revelation as world-expanding as "Nightfall"'s might just unhinge us permanently and end our little experiment with civilization.

After totality ended, we stuck around for a bit chatting with others who had come to our baseball diamond, then eventually made our way back to my friend's family's place. There, I was told that a neighboring family had questions for the astronomer on location. Apparently they meant me.

I wandered over and met with a five-year-old and his mom and dad. The mom asked questions about the eclipse—the why of shadow bands and of different eclipse paths. Then the kid launched into questions about dwarf planets. He wanted to see all of them, so I showed him pictures of Pluto and Charon taken from New Horizons and Ceres from Dawn, and then explained that because dwarf planets are so small and so far away, we needed to build bigger telescopes and faster probes before we could see the rest of them. After that I managed to satisfy his curiosity with some moons, including my favorite, Enceladus (about which I've been writing a post since my planetary science course in 2015).

The mom wanted to make sure I didn't dumb down my explanations for her son. The dad wanted to know if there was alien life out there (either on some moon in the solar system or on an exoplanet light years away) and when we were going to Mars.

I talked with the young family for about half an hour, answering questions and trying to feed their enthusiasm with as much knowledge as I could. Talking with strangers is not an activity that comes naturally to me (understatement), but after two semesters as a teaching assistant leading discussions and labs, I have come to enjoy this type of interaction.

What I find particularly heartening about being an ambassador for astronomy is the sheer wonder and curiosity we can have for the enormous, mind-blowing universe our telescopes have revealed. People are drawn to strange new worlds and the idea that we might someday have a home beyond Earth. Maybe fear and madness are natural and understandable reactions to a world too big to wrap our heads around, but they're not the only possible responses. How do we cultivate such wonder? How do we embrace curiosity so that it extends beyond pretty pictures and to all the unbearable complexity we are faced with?

I don't know the answer to that question. Maybe it takes witnessing once-in-a-lifetime astronomical marvels. (Helpfully, if you missed this one, the US has another in seven years.) In the meantime, maybe read some imaginative, thoughtful, mind-expanding science fiction. For the foreseeable future, that's as close as we can get to a larger world.

## Friday, August 18, 2017

### Less Is More

The eclipse is only a couple days away, so I've been checking weather reports for the last week or so hoping the forecast will be clear. It varies from site to site and day to day, which is kind of frustrating.

Our inability to precisely predict the weather reminds me of some standard opposition to climate science. That is, if we can't even predict next week's weather in one city, how can we possibly predict the world climate a decade from now? There's a similar argument against evolution: if there's a single "missing link," how can we possibly claim to know the history of life? If we don't know the exact sequence leading from, say, our common ancestor with chimpanzees to anatomically modern humans, how can we be sure that life in general underwent evolution?

The problem with this line of thinking is that it misses why science has managed to be successful at all. Science does not make accurate predictions because we have perfect knowledge of a system. In fact, the opposite is often the case. Science succeeds in part because of our ability to abstract away that which is unimportant and reveal the underlying patterns. Consequently, a little bit of ignorance helps us miss those details which might distract us.

The eclipse itself is a great example of how having all the information can (literally and figuratively!) blind us. Our eyes are not well equipped for looking at the sun, which can be many orders of magnitude brighter than everything else we see. To fit a scene with such drastically varying brightness levels into our heads, we lose some contrast resolution and end up not being able to see dim, feeble objects—especially those in the sky. That means we miss out on all the stars up during the day as well as anything even remotely near the sun.

During a total solar eclipse, the conveniently sized moon perfectly blocks the disk of the sun and a local night falls. The stars and planets come out, and the wispy corona that wreathes the sun materializes before our eyes. If we have good enough telescopes, we can measure the deflection of starlight around the sun, which tells us how its mass perturbs space itself.

 1919 Solar Eclipse. This picture is in the public domain, but I guess I'll credit Arthur Eddington?
And yet we can only do this because we have less information, because we are no longer being blinded by the flood of photons.

Now perhaps I'm engaging in some rhetorical trickery here. I started off talking about how missing individual pieces of the puzzle doesn't prevent science from abstracting away the pieces to find the underlying rules, and then I shifted to discussing how having all the pieces hinders us. But I do think there's a connection here, because the truth is we don't always realize when the details have led us astray; it's not usually as obvious as the blazing sun.

The problem has to do with our affinity for patterns and the mathematical tools we've developed for describing them. As the history of geocentrism demonstrates, our tools are too powerful for their own good. Because planetary orbits are pretty complicated, geocentrists employed epicycles—circles on top of circles—to describe how the planets moved through the sky. (Copernicus did this as well, actually, because he couldn't give up the notion of perfectly circular orbits at constant speed.) Add enough epicycles to your system and you can accurately map out any set of planetary observations, with all the messy details included.

And I do mean any set. (You can watch the whole thing, but skip to about a minute for the good part.)

Aha! We have discovered that Homer Simpson is really a complex set of epicycles. But no, that seems to have things exactly backward. Homer Simpson comes from the imagination of Matt Groening. That we happen to be able to describe the character's appearance using a set of epicycles does not give us any insight into why the character looks the way he does.

And yet packing in all the detail can give us the illusion of understanding. You see, that complicated set of Homeric epicycles can also be represented as a Fourier series, a sum of sine functions where each function—with a differing amplitude and frequency—represents one epicycle. That is, we can come up with an equation that describes Homer Simpson, and all we have to do is plug in the right numbers. It is easy to imagine, then, that there is a physical reason for each of those amplitudes and frequencies. Once we've found a reason for each number, we would seem to have a very satisfying scientific explanation for the existence of Homer.
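To see how far "any set" really goes, here's a small sketch of a Fourier series tracing out a square wave, about as un-circular a curve as you could ask for. The partial sums creep toward the target as we pile on terms (epicycles):

```python
import math

# With enough sine terms (epicycles), a Fourier series can trace out
# almost any curve. Here: a square wave that jumps between -1 and 1,
# approximated by its classic series 4/pi * sum over odd n of sin(n*x)/n.

def square_wave_partial(x, n_terms):
    """Partial Fourier sum for the square wave, using n_terms sine terms."""
    return sum(4 / math.pi * math.sin((2 * k + 1) * x) / (2 * k + 1)
               for k in range(n_terms))

x = 1.0  # a point where the square wave equals exactly 1
for n_terms in (1, 10, 100, 1000):
    print(n_terms, square_wave_partial(x, n_terms))
# The sum closes in on 1 as we add more and more terms.
```

None of those sine terms "explains" the square wave; they're just a flexible enough basis to reproduce it, which is exactly the Homer Simpson trap.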

But we know, of course, that there is none, because Homer is not really built up of epicycles and all the detail we've admitted into the system has led us astray.

Okay, but then how do we discover the truth when the details get in the way? How do we "hide" them so we can see what's underneath? Abstraction is the answer. To see how that works, let's get away from early astronomy and back to the bright, messy sun.

At work, I recently came across a very neat technique used for studying the spectra of solar system bodies. So let's say we want to know more about a comet that's recently swung by the inner solar system. When we point our telescope at it, what do we see?

 Not a real spectrum of any comet. Just something I cobbled together.
If we put a spectrograph on the back of the telescope—a device that breaks up light into individual wavelengths like a prism—we get this weird hump with spikes running through it. This graph shows how intense the light is at each wavelength, running from violet on the left to red on the right.

The problem with this graph is that most of what we're seeing is just reflected light from the sun. If we want to know about the comet itself, we have to find a way to eclipse (sorry) the sun's spectrum.

This process is known as continuum-subtraction. The shape of our comet spectrum is determined by two factors: absorption/emission lines (the spikes) and the temperature of the sun (the overall hump). We have to separate those two if we want to get rid of the sun. We start by abstracting away the details—those messy spikes—leaving us with the continuum.

To do that, we need to find the best fit line for the spectrum. As with the Homeric epicycles up above, our mathematical tools are powerful enough to write some equation that perfectly fits the line. Instead of a Fourier series, it would be a polynomial—a sum of powers of x with coefficients. You know some basic polynomials: a line is a first order polynomial, a parabola a second order one. For this, we might need, say, a 384th degree polynomial, but we could do it. And again, we could imagine that each power of x and each coefficient has some physical cause.

But then the details are distracting us again. The truth is that most of the jagged bits of the spectrum originate from (a) atmospheric interference, the heat of the telescope, and other noise, and (b) absorption and emission lines that are superimposed on top of the continuum spectrum. So let's keep our epicycles to a minimum and use a simple second or third order polynomial instead.

 My parabola. You can't have it.
With the comet continuum in hand, we can then subtract it from the comet spectrum. Doing that leaves behind only the spikes.

 There are bits of code out there to perform this operation. I don't recommend subtracting each bit by hand.
The spikes that rise above the noise (which we can calculate) are the light being emitted and absorbed by the comet at particular wavelengths. The location of each line along the spectrum is a result of what element or compound is interacting with the light. With the continuum removed, we know what the actual flux is and can determine how much of the stuff is on the surface of the comet.
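Here's a toy version of the whole operation, with entirely made-up numbers: a smooth hump standing in for the continuum, three invented emission lines, and a second-order polynomial fit playing the role of the continuum model. No real comet data involved.

```python
import numpy as np

# Toy continuum subtraction: build a fake comet spectrum (smooth hump
# plus emission spikes), fit a low-order polynomial as the continuum,
# and subtract it so only the spikes survive. All numbers are invented.

wavelength = np.linspace(400, 700, 301)               # nm, violet to red
continuum = -0.00005 * (wavelength - 550) ** 2 + 2.0  # smooth hump
spectrum = continuum.copy()
for line_nm in (430, 515, 630):                       # made-up emission lines
    spectrum += 0.8 * np.exp(-0.5 * ((wavelength - line_nm) / 2) ** 2)

# Fit a second-order polynomial (a parabola), not a 384th-degree monster.
coeffs = np.polyfit(wavelength, spectrum, 2)
fitted_continuum = np.polyval(coeffs, wavelength)

residual = spectrum - fitted_continuum   # the spikes, continuum removed
print(wavelength[np.argmax(residual)])   # strongest residual sits at a line
```

The low-order fit can't chase the narrow spikes, so it traces the hump; subtracting it leaves the emission lines sticking up out of a roughly flat baseline.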

Another neat trick lets us figure out the albedo of the comet—how much it reflects the light of the sun. Its albedo also depends on its composition. (Think about how much brighter the earth's icy poles are than its liquid oceans.) You might think we can figure that out just by pointing our telescope at the comet and seeing if the object is bright, but its brightness results from its proximity to the sun, distance from us, size, and finally albedo. The first two parameters are easy to figure out if we track its orbit; the last two require some disentangling.

Step one is to point the telescope at a star with similar characteristics to the sun—a solar analog—and see what its spectrum looks like in our telescope. We can't point the telescope at the sun itself, because a telescope designed to look at faint comets and stars would be blinded by the sun. We also don't want to just take someone else's spectrum of the sun, because then we're comparing observations from different telescopes in different environments. Best to use the same equipment.

Now, any solar analog we find out there isn't going to be an exact duplicate of the sun. So like before, we're better served by ignoring the details of the star's spectrum and finding a solar continuum. If we compare the solar continuum to the comet continuum, we'll see that even though the comet's light is mostly reflected sunlight, there are differences. Not every bit of light that strikes the comet's surface is reflected back. Some will be absorbed, and this absorption is wavelength-dependent. What we can do is divide the comet continuum by the solar continuum and come up with the fraction of each wavelength of light that is reflected—the albedo as a function of color.

If the albedo is low, then the comet is naturally dark. If it's dark but appears bright in the telescope, it must be very large. Conversely, if the albedo is high, the comet is shiny. A dim but shiny comet must be small. So that's size and albedo worked out, too.
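The division step itself is one line. A sketch with invented continua (the square-root "reddening" factor below is purely illustrative, chosen to mimic a surface that reflects red light a bit better than blue):

```python
import numpy as np

# Sketch of the albedo trick: divide the comet's smooth continuum by a
# solar-analog continuum to get relative reflectance at each wavelength.
# Both continua here are invented stand-ins, not real observations.

wavelength = np.linspace(400, 700, 301)  # nm
solar_continuum = -0.00004 * (wavelength - 520) ** 2 + 2.5
comet_continuum = 0.3 * solar_continuum * (wavelength / 550) ** 0.5
# the (wavelength/550)**0.5 factor mimics a mildly "reddened" comet

relative_albedo = comet_continuum / solar_continuum
print(relative_albedo[0], relative_albedo[-1])  # rises from blue to red
```

The solar shape cancels out in the ratio, leaving only the wavelength-dependent reflectance of the comet's surface.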

I think that about covers it. With these techniques, we can abstract away the distracting details—the light of the sun, the noise in our instruments—and come away with facts about a ball of ice and rock hurtling through space. A comet is a messy, complicated object and nothing but its orbit is easily reducible to a simple law, but we can nevertheless know more about it by pretending we see less of it.

## Wednesday, July 5, 2017

### From the Earth to the Moon

I recently finished reading The Birth of a New Physics, by I. Bernard Cohen, which describes the 17th century transition from Aristotelian to Newtonian physics. This reminded me of a demonstration I did for my astronomy sections last semester, in which I tried to impress them with the power of Newtonian unification. (It didn't work.) And yesterday was the day we celebrate projectile motion, so that's as good an excuse as any to revisit the topic.

As I mentioned in my last post, I think we suffer from presentism that makes it difficult for us to understand how our predecessors saw the world. To remedy that, I've been reading a lot of history of science recently; I want to understand the role that science has played in changing our conception of the world.

When reading history of science, I sometimes struggle with the seemingly glacial pace of scientific advances that I, with my present level of education, can work out in a few lines. I am no genius, so why did it take humanity's greatest scientific minds generations to find the same solutions? The answer is these solutions originally required deep conceptual shifts that for me—thanks to the work of those scientists—are now completely in the background. Here's an example that I think simultaneously demonstrates the power of Newtonian analysis and the elusiveness of the modern scientific perspective.

Aristotelian physics held that everything from the moon up moved only in circles and was perfect and unchanging, while everything below the moon was imperfect, impermanent, and either drawn toward or away from the center of the universe. The critical thing is that the motion of objects on earth—projectiles, boats, apples—operated according to fundamentally different rules than the motion of stars, planets, and other celestial objects.

What Newton did was to show the same rules apply everywhere, to everything. His laws of motion and gravity work for cannon balls, birds, the moon, and even once-in-a-lifetime comets. This is where our presentism hurts us, because that radical idea seems completely obvious now. Of course physics underlies both airplanes and space probes. Duh.

In the abstract, that's an easy case to make. But the demonstration I did in class, which is a modern-ish take on an analysis Newton himself performed, might be able to show how cool and counterintuitive this unification really is.

Consider this: if you drop a rock from a given height and time its descent, you can explain why a month is roughly 30 days long. These two facts seem completely unrelated but turn out to be connected by a simple law.

Aristotelian physics says that heavy objects are naturally drawn toward the center of the universe and that the celestial moon naturally moves about the Earth in a perfect circle. But even ignoring the Aristotelian perspective, from our modern vantage the link between these two facts seems kind of incredible. We have some vague idea that the length of a month is connected to the cycles of the moon, and we know that gravity makes rocks fall, but the moon is clearly not falling and rocks have nothing to do with calendars; so how are these facts related?

Now, I'm not shocking anybody by saying that gravity is the common factor, but I want to show you how relatively simple it is to work this out using the tools Newton gave us.

Newton's law of universal gravitation says that gravity is an inverse square force. In fact, other scientists before Newton (Kepler, Hooke) had suggested this. It was known that the intensity of light falls off with the square of distance; maybe the same principle worked for gravity, too. Force is proportional to acceleration, so you can measure it by timing falling objects (or the period of a pendulum, which was the most precise method available during Newton's time). At the surface of the earth, this is 9.8 m/s2 and usually denoted with a g.

If the earth is also pulling on the moon, and gravity is an inverse square law, we can find out how much earth's gravity is accelerating the moon. Divide the distance to the moon by the radius of the earth (figures known since the ancient Greeks), square the result, and that's how much weaker gravity's action on the moon is.

The distance to the moon is about 60 times the radius of the earth, so earth's gravity pulls on the moon with 1/3600 the strength that it pulls on a rock near the surface. But even so, shouldn't the moon be here by now? It's obvious that the moon is circling the earth and not slamming into us.

What we need here is another law. We see circular motion on earth, too. Imagine tying a string to a rock and spinning the rock around. What keeps the rock moving in a circle? The string, which is taut. The string pulls on the rock so that it doesn't go flying off. But if the string is pulling the rock inward, why doesn't the rock come inward toward your finger? Well, imagine slowing down the spin rate of the rock. Do that and the whole thing will fall limp. There is a specific speed required to keep the string taut. In fact, if you spin too fast, the string will break and the rock will fly off.

So here's the law. When considering circular motion, inward (centripetal) acceleration is equal to the square of the spin rate (angular velocity) times the radius. The faster you spin the rock, the harder the string needs to pull on it to keep it from flying off.
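In symbols, that law reads (this is the standard uniform-circular-motion relation, with $\omega$ the angular velocity and $r$ the radius of the circle):

$a_{c}=\omega^{2}r$

And since the rock's speed along the circle is $v=\omega r$, this is the same as the perhaps more familiar $a_{c}=v^{2}/r$.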

If we assume the moon is going around the earth in a perfect circle, and we suppose that gravity is pulling it inward at 1/3600 the strength it does on earth's surface, then we can figure out the moon's spin rate (around the earth), too. A little algebra gets us this formula:

$\omega=\frac{1}{60^{3/2}}\left(\frac{g}{r_{e}}\right)^{1/2}$
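To spell out that little bit of algebra: equate gravity's pull on the moon, $g/3600$, with the centripetal acceleration required for a circle of radius $60r_{e}$, and solve for $\omega$:

$\frac{g}{3600}=\omega^{2}(60r_{e})$

$\omega^{2}=\frac{g}{3600\cdot60\cdot r_{e}}=\frac{g}{60^{3}r_{e}}$

Taking the square root recovers the formula just above.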

Here $r_{e}$ is the radius of the earth, and the angular velocity $\omega$ is how many radians per second the moon moves. To figure out how many seconds it takes to make a single orbit, you basically just flip the expression upside down and multiply by 2π, a full circle's worth of radians; that is, the period is $t=2\pi/\omega$. That gives you:

$t=2\pi\,60^{3/2}\left(\frac{r_{e}}{g}\right)^{1/2}$

Plug in the right numbers ($r_{e}$ = 6378 km, g = 9.8 m/s²) and you arrive at a t of about 2.35 million seconds, which comes out to roughly 27.3 days (the sidereal period).
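If you want to check the arithmetic yourself, here's the whole thing as a few lines of Python (the variable names are my own labels, not anything canonical):

```python
import math

g = 9.8        # surface gravity, m/s^2
r_e = 6378e3   # earth's radius, m
n = 60         # distance to the moon, in earth radii

# t = 2*pi * n^(3/2) * sqrt(r_e / g)
t = 2 * math.pi * n**1.5 * math.sqrt(r_e / g)

print(t)          # ~2.36e6 seconds
print(t / 86400)  # ~27.3 days
```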

This is a couple days off from 29.5 days, which is how long it takes the moon to go through a complete set of phases (the synodic period). The difference is due to the fact that after those 27.3 days, the earth has also moved about 1/13 of the way around the sun, changing where the sun is in the sky. Because the phase of the moon arises from its position relative to the sun, it takes the moon a couple more days to catch up with the sun’s new position.
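That catch-up effect can be checked with the standard relation between the two periods, where the synodic rate is the sidereal rate minus the sun's apparent drift rate (a back-of-the-envelope sketch with approximate values):

```python
sidereal = 27.3  # days for one orbit relative to the stars
year = 365.25    # days for earth's trip around the sun

# 1/T_synodic = 1/T_sidereal - 1/T_year
synodic = 1 / (1 / sidereal - 1 / year)
print(synodic)   # ~29.5 days, one full cycle of phases
```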

Those complications aside, the ease with which you can find the moon's sidereal period from a measurement of surface gravity is both stunning and surprising. The calculation is literally only a few lines long. Here, look for yourself:

 Credit: Me me me
I'm not showing you this to impress you with my mathematical talent, but to bring you back to my initial perplexity. Why did it require an intellectual titan such as Newton to figure this out? That is, what conceptual leaps were necessary? I don't know that I can answer that question completely, but here's a partial explanation that comes in large part from Cohen's book.

First of all, as I've said, Newton had the creativity and imagination to suggest a unified physics at all. Others at the time were formulating laws that applied to the heavens (Kepler's laws of planetary motion) and even physical mechanisms by which the planets moved (Descartes' vortices), but none imagined that a single law lay behind falling apples, the tides, planetary orbits, the moon's phases, the movement of Jupiter's satellites, and the orbits of comets.

Furthermore, Newton's laws of motion serve as a starting point for conceptualizing the moon's orbit. Aristotelian physics held that circular motion was perfect because celestial objects could return to their starting point indefinitely, continuing the motion for all eternity. Circular motion required no further explanation.

But Newton's first law says that objects have inertia, that they will continue in straight lines (or remain motionless) unless acted on by an outside force. This law isn't a formula but a tool for analysis. If you assume it is true, then you can look at any physics problem and immediately identify where the forces are. Thus, we can look at the moon, see that it is not moving in a straight line, and conclude there must be some force acting on it.

As I mentioned before, others had already proposed an inverse square law to explain gravity. Simply writing down the law of universal gravitation was not Newton's accomplishment. Instead, what Newton did was to prove mathematically that a body obeying Kepler's laws of planetary motion must be acted on by an inverse square force, and the converse: that an inverse square force will always produce orbits that are conic sections (circles, ellipses, parabolas, or hyperbolas).

The proof Newton develops is heavily geometrical and begins by looking at an object moving freely through space that is periodically pushed toward a central focus. Newton then reduces the time between impulses until the force becomes continuous and the orbit, which began as a gangly polygon, curves into an ellipse. The important aspect here is that there are two components to an orbiting body's motion: an acceleration toward the central force, and a velocity tangent to the orbit, perpendicular to that acceleration.

What this means is the moon is falling toward the earth just as surely as an apple is. The difference is the moon is also moving in another direction so quickly that it continually misses the earth. This is what it means to orbit. As Douglas Adams said, "There is an art to flying, or rather a knack. The knack lies in learning how to throw yourself at the ground and miss."

 Credit: Newton Newton Newton
All this groundwork (and more) was necessary so that Newton could justify a key step in those few lines of math I showed you up above. (I should point out that Newton's work didn't look anything like mine, because the notation and norms of math were very different back then.) The key step is that I equate the moon's acceleration due to gravity ($a_{m}$) with the centripetal acceleration of uniform circular motion ($a_{c}$). While the units are the same, a priori there's no reason to think the two are related.

Without a mathematical and physical framework detailing how mass, force, and gravity interact, equating those two conceptions of acceleration is nothing more than taking a wild guess. And if you're guessing, that means there are probably plenty of other guesses you could have made as well. This is what our presentism—replete with all the right guesses—hides from us. At each moment when a scientist does what comes naturally to us now, they had innumerable other options before them. The achingly slow pace of scientific discovery, then, is a result of all the frameworks and ideas and theories leading to those other guesses, equally valid a priori, that turned out not to be right.

As I've written before, in physics it is sometimes easy to guess the right answer. What I hope this post does is demonstrate that guessing—that moment of eureka when the correct answer finally materializes—is only the proverbial tip of the iceberg when it comes to science. This is important to remember when you think you’ve been struck by inspiration and arrived at a brilliant new truth about... whatever. Our popular conception of history valorizes those moments, but a fuller understanding of history vindicates the slow, haphazard, incremental work that must come first. If that work isn’t there, maybe your new truth isn’t, either.