Saturday, April 14, 2018

A World of Pure Imagination

In the philosophy of mathematics—hold on, hold on, I promise this is good—there's a perennial debate about whether numbers are real or just something we made up. This argument elicits a kind of irritated shrug from most people, but there is a fairly reliable way to evoke some pushback and/or incredulity: assert that imaginary numbers exist.

An imaginary number is the square root of a negative number, which of course doesn't make sense; any real number multiplied by itself comes out positive (or zero). But mathematics is all about laying down axioms and seeing what logically follows. We can just declare that √-1 = i is a legitimate number rather than a calculator error.

Alright, you think, but we can't just declare things into existence. What does an imaginary number even mean in the real world? You can have 3 apples, or maybe even -3 apples if you owe someone, but 3i apples has no concrete, physical interpretation, right?

Well, it turns out that by allowing complex numbers—a set that includes both real and imaginary numbers—we open up a new space for doing mathematics and physics. In fact, if we want to explain the bewildering diversity of chemical elements or the solidity of matter, we have to explore this imaginary space. Could anything be more concrete?

Take a look and you'll see...

Before we delve into the physics, let's make sure we have a little intuition about complex numbers. The imaginary unit, i, is the square root of -1. Just based on that, we can see that powers of i cycle:

i*i = -1, because that's our definition

(i*i)*i = -1*i = -i

(i*i)*(i*i) = -1*-1 = 1

And (i*i*i*i)*i = 1*i = i again
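
If you'd rather let a computer grind through that, here's a quick check in Python, which spells the imaginary unit 1j:

```python
# Powers of i repeat with period four.
i = 1j
for n in range(1, 9):
    print(n, i**n)   # runs through i, -1, -i, 1 and then repeats (give or take a signed zero)
```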

This cycle lends itself to a neat geometric interpretation. Instead of the humdrum xy-plane, we can imagine a complex plane like this:

By Svjo [CC BY-SA 4.0], from Wikimedia Commons
Here, the horizontal axis is real and the vertical axis imaginary. Complex numbers have the form a + bi, representing coordinates (or a vector) on our plane. If we trace a circle counter-clockwise through the points 1, i, -1, and -i, we see they follow the same cycle as our imaginary multiplication. So you can rotate around the complex plane just by multiplying: each multiplication by i turns a point 90° counter-clockwise.
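
A quick sketch of that rotation, starting from an arbitrary point of my own choosing:

```python
# Multiplying by i rotates a point 90 degrees counter-clockwise about the origin.
z = 3 + 2j                # the point (3, 2) on the complex plane
for _ in range(4):
    print(z)              # (3+2j), (-2+3j), (-3-2j), (2-3j)
    z *= 1j               # one more multiplication brings us back to (3+2j)
```

More generally, multiplying by cos θ + i·sin θ rotates a point counter-clockwise by whatever angle θ you like.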

It might look like we've only renamed a plain plane, but this space gives us flexibility the real numbers lack. Real numbers sometimes fall down on the job when you’re trying to solve polynomial equations. But once you accept i as a square root of -1, every polynomial equation has a complex solution. Geometrically, this lets us access points on the complex plane through simple multiplication, without having to rely on more cumbersome machinery.

Okay, finding polynomial roots probably sounds pretty boring, so we're not going to dwell on that. We'll mostly think in terms of complex rotation and how that permits us to peek into weird, non-Euclidean spaces where up and down no longer work the way they should. But know that in the background, these imaginary roots are letting us do a bunch of linear algebra by providing solutions to otherwise unsolvable equations.

We'll begin with a spin...

Let's turn back to physics. Explaining how the properties of chemical elements—the gregariousness of carbon, the aloofness of neon—arise from quantum mechanics goes like this: the protons and neutrons of an atom are squeezed into a tiny nucleus while the electrons whizz by in concentric orbital shells. How “filled” the outermost shell is (mostly) determines the chemical properties of an element. So whatever keeps these negative nancies from clumping together is responsible for, well, basically all macroscopic structure.

The culprit is the Pauli exclusion principle, which says that no two identical particles with half-integer spin (such as electrons) can occupy the same quantum state. Spin is intrinsic angular momentum, measured in units of ħ. If you measure the spin of an electron along some axis, you get either +1/2 (referred to as spin up) or -1/2 (upside down—spin down), with no other possible outcomes.

To keep track of the spin state of an electron, we can write a wave function that looks like this:

|↑⟩

Flip the electron upside down and the spin state is:

|↓⟩

Then flip it back right side up and you get:

-|↑⟩

Wait, what? We seem to have gained a minus sign somehow. In fact, you have to rotate an electron a full 720° to cycle back to the state you started with. The minus sign doesn't matter much in measurement, because anything we observe in quantum mechanics involves the square of the wave function, but its presence in the math is pivotal.

Say a transporter accident duplicates Kirk and the two end up fighting.

Credit: Paramount Pictures and/or CBS Studios
There’s a brawl, both men lose their shirts, and one emerges victorious. How does Spock tell if the original Kirk won or lost? If Kirk is a subatomic particle, we’re left with two possible states that look the same when measured. Either original Kirk wins and duplicate Kirk loses:

|W⟩|L⟩

Or vice versa:

|L⟩|W⟩

Each one will scream, "Spock... it’s... me!" but there's no evil mustache to differentiate them. With identical quantum particles, this symmetry of exchange is mathematically equivalent to taking one particle and flipping it around 360°; in both cases you end up with observationally indistinguishable states.

But there are still two outcomes. Whenever we're dealing with multiple possibilities in quantum mechanics, it's time for you-know-who and his poor cat. Just as the cat can be in a superposition of alive and dead, a Kirk particle can be in a superposition of winning and losing.

Nothing weird happens when you mix and match bosons (particles with integer spin like photons). They exchange symmetrically and their superposition looks like this:

|W⟩|L⟩ + |L⟩|W⟩

But electrons (and other particles with half-integer spin, collectively known as fermions) are antisymmetric; a 360° flip gives us that minus sign. So their superposition is:

|W⟩|L⟩ - |L⟩|W⟩

As both sides of this expression are indistinguishable, subtracting one from the other equals 0. Any place where the wave function is 0, we have a 0% chance of finding a particle. So two electrons will never end up in a fight in the first place. (Kirk, then, is clearly a boson.) Replace "fight" with "spin up state in the 1s shell of a hydrogen atom" and you've got the beginnings of chemistry and matter.
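
If you like to see the bookkeeping, here's a small numpy sketch of the exchange rule behind this; the basis vectors and the swap matrix are just my way of writing it down, not anything from the episode:

```python
import numpy as np

# Win/lose as basis vectors for a single Kirk.
W = np.array([1, 0], dtype=complex)
L = np.array([0, 1], dtype=complex)

# Antisymmetric (fermion-style) combination of the two outcomes.
psi = np.kron(W, L) - np.kron(L, W)

# Exchanging the two particles is the SWAP operation on the joint state.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

print(np.allclose(SWAP @ psi, -psi))  # True: swapping the Kirks flips the sign
# If both particles try to occupy the very same state, the antisymmetric
# combination cancels itself out entirely: zero amplitude, zero probability.
print(np.linalg.norm(np.kron(W, W) - np.kron(W, W)))  # 0.0
```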

What we'll see will defy explanation...

Okay, so how do we make sense of the weird minus sign a rotated electron acquires? This perplexing behavior originates with their 1/2 spin, which we can only understand if we venture back into the world of imaginary numbers, to a place called Hilbert space.

Physicists discovered that electrons were spin-1/2 as a result of the Stern-Gerlach experiment, where Stern and Gerlach sent silver atoms (and their attendant electrons) through a non-uniform magnetic field. Spin up particles were deflected one way, spin down particles the other. That there were only two possible values along a given axis was weird enough, but follow-up experiments revealed even stranger behavior.

By Theresa Knott from en.wikipedia - Own work, CC BY-SA 3.0
If you collect all the |↑⟩ electrons and send them through another S-G apparatus, only |↑⟩ electrons come through. You're giving me a look, I can tell; what's weird about that? Well, we're still dealing with quantum mechanics, so we always have to consider superposition. Maybe the state after detection is |↑⟩ + |↓⟩ and there's a chance one will come out |↓⟩.

Experiment says no. This is a little weird. It means +1/2 spin doesn't overlap at all with -1/2 spin (positively or negatively). That should only be the case for vectors at right angles to each other. Somehow, these up and down arrows behave as if they're orthogonal.

Say we've been measuring spin along the z-axis until now. We can set up a second S-G apparatus that measures along x (or y) and then send |↑z⟩ electrons through that. The z- and x-axes are at right angles, so there should definitely be no overlap. But electrons are capricious; they split evenly between |↑x⟩ and |↓x⟩, even though an arrow pointing straight up clearly has no component in the horizontal direction.

A pattern is emerging here. The 180° separation between |↑⟩ and |↓⟩ acts like a right angle. Right angles act like they’re only separated by 45°. And a full 360° rotation just turns a vector backward, giving it the minus sign at the center of all this. All our angles are halved. The space electrons inhabit is weird, as if someone tried to grab hold of all the axes and pull them together like a bouquet of flowers.

Try to imagine that if you can, but don't worry if you can't; we're not describing a Euclidean space. You can sort of squeeze the z- and x-axes closer together, but any attempt to bring the y-axis in while also maintaining the 90° separation between any up and down and 45° separation between any right angle just won't work.

The only way we can fit the y-axis in there is to deploy a new degree of rotation distinct from Euclidean directions. That sounds like a job for the complex plane. In fact, our inability to properly imagine this space is directly analogous to not being able to find real roots for a system of equations, which as we know is where complex numbers shine. Vectors that are too close in real space can be rotated away from each other in complex space to give us the properties we need.
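
Here's a small numpy sketch of that angle-halving, using the standard spin-1/2 machinery; the particular vectors and rotation operator are textbook conventions, not anything I'm inventing here:

```python
import numpy as np

# Spin states along z and x, written as 2D complex vectors (spinors).
up_z = np.array([1, 0], dtype=complex)
down_z = np.array([0, 1], dtype=complex)
up_x = np.array([1, 1], dtype=complex) / np.sqrt(2)

# 180 degrees apart in real space, but completely orthogonal in spin space:
print(abs(np.vdot(down_z, up_z))**2)   # 0.0 -- no chance of leaking through

# 90 degrees apart in real space, but only "half" that here:
print(abs(np.vdot(up_x, up_z))**2)     # 0.5 -- the even split at the second magnet

# Rotating a spinor by an angle theta about z multiplies its components
# by exp(-i*theta/2) and exp(+i*theta/2), so a full 360-degree turn gives...
theta = 2 * np.pi
R = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
print(R @ up_z)   # approximately [-1, 0]: the electron comes back negated
```

Only after a second full turn (720° total) does the state come back to exactly where it started.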

From this mathematical curiosity—a space where rotation and orthogonality are governed by complex numbers—we find an accurate description of the subatomic particles that serve as matter's scaffolding. Electrons are best thought of not as tiny, spinning balls of charge but as wave functions rotating through a complex 2D vector space.

So what does it mean to have 3i apples? Nothing. But what does it mean to have 3 apple juice? The physical reality of complex numbers only manifests at the quantum level. To many philosophers, this indispensable presence demands ontological commitment. This is a way of saying, "Well, I guess if anything is real, that is." And how are we to say otherwise? Complex numbers might come from a world of pure imagination, but they're necessary for describing this world; shouldn't that count for something?

Credit: Warner Bros. for this picture and the song lyrics.

Friday, April 6, 2018

Global Nudging

This post was inspired by a discussion I had with a couple friends a while back. Since then, we've had a heat wave, some sort of rainless hurricane, and a snowstorm.

I don't really know anything about weather or climate science, but I do know a bit about thermodynamics. In thermo, you learn ever more esoteric definitions for temperature until you're no longer sure what's fundamental and what's just human convention. Maybe it's all information!?

The first proper definition you get is that temperature is a measure of the average speed of particles in some system, which relates to the average kinetic energy. A little later you learn a more precise definition: the higher the temperature, the wider the distribution of particle speeds. As you pump in energy, more and more particles collide and transfer momentum in unlikely, chaotic ways.
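
Here's a tiny simulation of that widening, assuming the standard kinetic-theory picture where each velocity component is a Gaussian with variance kT/m (the nitrogen mass is a rough figure of mine):

```python
import numpy as np

k = 1.381e-23      # J/K, Boltzmann constant
m = 4.65e-26       # kg, roughly one nitrogen molecule
rng = np.random.default_rng(0)

for T in (250, 300, 350):
    # Sample 100,000 molecular velocities and look at the spread of speeds.
    v = rng.normal(0.0, np.sqrt(k * T / m), size=(100_000, 3))
    speeds = np.linalg.norm(v, axis=1)
    print(f"T = {T} K: speed spread ~ {speeds.std():.0f} m/s")   # grows with temperature
```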

Can this statistical, microscopic argument be scaled up to the entire globe? If the surface temperature of the Earth rises, are we going to get even weirder weather? There is evidence from modeling and observation to support that hypothesis.

Anyway, my friends asked whether it was feasible to counteract global warming by pushing the Earth a little farther away from the sun. Of course, there are reasonable solutions to this looming crisis, but we seem increasingly less likely to opt for reasonable, so let's go with bananas instead. Most people seem to agree that letting the Earth warm by another 2° C would be unfathomably catastrophic; let’s assume we botch that and try for the Spaceship Earth solution.

The sun pumps out an inconceivable 380 yottawatts of power (about 50 million billion times more than our best nuclear plant), but we're so small and far away that we only catch about one half of one billionth of that energy. We also don't absorb all of it. You can tell because, uh, we can see the Earth from space; about 30% bounces back immediately.

The rest is absorbed and eventually radiates back out after heating the planet. Because the Earth is cooler than the sun, this radiation is mostly long wavelength, low energy infrared instead of visible light. When we take the temperature of stars, planets, and other celestial bodies, we're doing so by sampling that spectrum. But just based on the fraction of energy the Earth receives and its albedo, we can predict what temperature a distant alien astronomer would measure for the Earth with the Stefan-Boltzmann law.

The law relates the power output of a black body to its temperature. In equilibrium, power out conveniently equals power in (our share of the sun's energy), and treating the Earth as a perfect black body is a good enough approximation here. If we also know the sun's temperature (~6000 K), out pops the Earth's: 255 K (about -1° F).
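
Here's that calculation in a few lines. I'm plugging in a slightly more precise solar temperature (about 5772 K), which is what actually lands you on 255 K; the rest are the rough figures above.

```python
# Effective temperature of the Earth from energy balance with the sun.
T_sun = 5772        # K, the sun's effective temperature
R_sun = 6.96e8      # m, the sun's radius
d = 1.496e11        # m, average Earth-sun distance
albedo = 0.3        # fraction of sunlight bounced straight back

# Absorbed sunlight = radiated infrared, with Earth treated as a black body:
T_earth = T_sun * (R_sun / (2 * d)) ** 0.5 * (1 - albedo) ** 0.25
print(round(T_earth))   # ~255 K
```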

A bit chilly? Yes, but this is roughly the effective temperature (another definition: the temperature of a black body with the same power output) aliens would measure via spectral analysis. If the aliens were smart, they would also notice trace amounts of water vapor and carbon dioxide in our atmosphere and be confident that conditions on the ground were a bit more comfortable. Why? Because those are both smurghouse—oh, sorry, greenhouse—gases.

The atmosphere is mostly transparent to the sun's visible light, but to the great frustration of infrared astronomers, it is not transparent to Earth's thermal radiation. So instead of streaming back out to space unimpeded, infrared photons keep getting knocked about and turned around by water vapor and CO2 molecules. This molecular mugging robs the photons of energy—raising temperatures on the ground—and slows their escape.

Atmospheric absorption by wavelength. Credit: NASA
All this action makes the average across the surface a reasonably pleasant 288 K (59° F). But of course a little more smurghouse gas and we start contemplating pushing the Earth away from the sun. So let's get back to that.

Reducing the effective temperature of the planet from 255 to 253 K is a 0.8% decrease. From the Stefan-Boltzmann law, that requires a 3.1% decrease in energy received from the sun, which we can get by just pushing the Earth a mere 1.6% farther away (2.3 million km). Can we do that?

(This is less than the 5 million km swing due to Earth's elliptical orbit, but as an average, sustained change, it will have a greater effect on temperature. Think about quickly running your hand through a candle flame versus holding your hand over the flame.)
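
Before we get to the how, here's the arithmetic behind those percentages, using the rough numbers above:

```python
# Radiated power goes as T^4 (Stefan-Boltzmann); sunlight received goes as 1/d^2.
T_now, T_goal = 255, 253

flux_ratio = (T_goal / T_now) ** 4
print(f"cut in absorbed sunlight: {1 - flux_ratio:.1%}")   # ~3.1%

d_now = 149.6e6                      # km, average Earth-sun distance
d_goal = d_now / flux_ratio ** 0.5
print(f"extra distance: {d_goal - d_now:.2e} km")          # roughly the 2.3 million km above
```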

First, let's look at the question from an energy budget standpoint. Any orbit represents a specific balance between kinetic energy from motion and potential energy from gravity, which adds up to a total orbital energy. To move from one orbit to another, you must pay—by some means—the difference in energy between the orbits. In our case, that requires about 7 MJ/kg. That's a 60-watt light bulb operating for 32 hours. But the Earth has a lot of kilograms, so... light bulb comparisons are a little inadequate; it comes out to the energy of a hundred million dinosaur-killing asteroid impacts.
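
Here's the energy bookkeeping, with rough values for the constants; the Chicxulub impact energy in particular is just a commonly quoted ballpark:

```python
# Specific orbital energy of a circular orbit is -GM/(2a); moving out means paying the difference.
GM_sun = 1.327e20    # m^3/s^2, gravitational parameter of the sun
a1 = 1.496e11        # m, current orbit
a2 = a1 * 1.016      # m, about 1.6% farther out

delta_E = GM_sun / 2 * (1 / a1 - 1 / a2)
print(f"{delta_E / 1e6:.1f} MJ per kilogram")               # ~7 MJ/kg

earth_mass = 5.97e24     # kg
chicxulub = 4e23         # J, rough estimate of the dinosaur-killer's energy
print(f"{delta_E * earth_mass / chicxulub:.1e} impacts")    # ~1e8, a hundred million
```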

But as we know from the Chicxulub crater and the fact that dinosaurs are now turkeys, celestial bodies don't smack into each other like perfect billiard balls. Collisions between them are very inelastic, with much energy being lost to superheating the atmosphere, excavating dirt, and forging cool new minerals instead of moving the planet. All in all, I would not recommend countering global warming by annihilating three quarters of all life a hundred million times.

Part 1 of 100,000,000. Credit: NASA
Maybe rockets instead? Here the relevant quantity is the delta-v required for an orbital maneuver—that is, the fuel necessary to change a body's velocity. If we want to move the Earth farther out, it will end up orbiting at a slower speed, but counterintuitively we get there by speeding up twice. First we accelerate along our orbit, stretching it into an ellipse that reaches from our current distance out to the desired one. Then, at the far end of that ellipse, we accelerate again to circularize the orbit at the new distance; the sun's gravity saps so much speed on the way out that we still end up slower than we started.

All told, the Δv is just 0.3 km/s. The most advanced rocket that actually kind of exists right now is the VASIMR ion rocket, which uses magnets to propel plasma out the back. Conservation of momentum makes this work: expel a lot of tiny particles very quickly in one direction (300 km/s for our ions), push a heavy object at a moderate speed in the opposite direction. Plugging all this into the Tsiolkovsky rocket equation tells us how much of our rocket (engine+Earth) needs to be fuel. The answer: just 0.1%!
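
And the rocket-equation arithmetic, using the 300 km/s exhaust velocity above:

```python
import math

# Tsiolkovsky: delta_v = v_exhaust * ln(m_initial / m_final)
delta_v = 0.3e3       # m/s, the orbital maneuver from above
v_exhaust = 300e3     # m/s, ion exhaust velocity

mass_ratio = math.exp(delta_v / v_exhaust)
fuel_fraction = 1 - 1 / mass_ratio
print(f"fuel fraction: {fuel_fraction:.2%}")            # ~0.10% of engine+Earth

earth_mass = 5.97e24  # kg
print(f"propellant: {fuel_fraction * earth_mass:.1e} kg")   # ~6e21 kg of argon
```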

VASIMR uses argon, which is one of the most abundant elements in our atmosphere and the universe at large. If we distill all the argon out of our atmosphere, which is 1.3% argon by mass, we're... 0.001% of the way there.
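
The sad argon accounting, assuming the standard figure of roughly 5 × 10^18 kg for the whole atmosphere:

```python
# How far does distilling the entire atmosphere for argon get us?
atmosphere_mass = 5.1e18                  # kg
argon_available = 0.013 * atmosphere_mass
argon_needed = 6e21                       # kg, ~0.1% of Earth's mass from above
print(f"{argon_available / argon_needed:.3%}")   # ~0.001% of the way there
```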

How about a solar sail? Despite being massless, photons still impart momentum. The total amount we need is Δv times the Earth's mass. To get the required impulse before 2100, which people seem to think is important, we'd need a sail with an area of... well let's just say it's way, way bigger than the sun and move on.
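
A back-of-the-envelope sketch for the sail; the solar constant, the perfectly reflective sail, and the 2100 deadline are my own rough inputs:

```python
import math

# Sustained thrust needed to deliver the impulse by 2100, and the sail area to supply it.
earth_mass = 5.97e24        # kg
delta_v = 0.3e3             # m/s, from the orbital maneuver above
impulse = earth_mass * delta_v          # kg*m/s of momentum change

seconds = 82 * 3.15e7                   # roughly 2018 to 2100
force = impulse / seconds               # N of continuous thrust required

solar_constant = 1361                   # W/m^2 of sunlight at Earth's distance
pressure = 2 * solar_constant / 3e8     # N/m^2 on a perfectly reflective sail

area = force / pressure
sun_cross_section = math.pi * 6.96e8 ** 2
print(area / sun_cross_section)         # tens of thousands of times the sun's disk
```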

Okay, all in all this looks like a pretty bad idea. But while we're on the topic of solar sails, catching the sun's photons would have the added benefit of preventing said photons from reaching Earth. In that case, why bother moving the planet at all?

Indeed, one of the least implausible recklessly dangerous solutions to global warming is to change the Earth's albedo. We could do this with a sun shield, or particulate matter in the atmosphere, or any number of other options that would no doubt have catastrophic unintended consequences. But as we saw, we only need to get rid of 3.1% of the sun's energy, which we can do by just increasing the current albedo from 0.3 to 0.32. Easy peasy!
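
If you want to check that last number, here's the one-liner (more or less):

```python
# A 3.1% cut in absorbed sunlight, delivered by reflectivity instead of distance.
albedo_now = 0.3
absorbed_now = 1 - albedo_now
albedo_needed = 1 - absorbed_now * (1 - 0.031)
print(round(albedo_needed, 3))   # ~0.32, as advertised
```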