Wednesday, July 5, 2017

From the Earth to the Moon

I recently finished reading The Birth of a New Physics, by I. Bernard Cohen, which describes the 17th century transition from Aristotelian to Newtonian physics. This reminded me of a demonstration I did for my astronomy sections last semester, in which I tried to impress them with the power of Newtonian unification. (It didn't work.) And yesterday was the day we celebrate projectile motion, so that's as good an excuse as any to revisit the topic.

As I mentioned in my last post, I think we suffer from presentism that makes it difficult for us to understand how our predecessors saw the world. To remedy that, I've been reading a lot of history of science recently; I want to understand the role that science has played in changing our conception of the world.

When reading history of science, I sometimes struggle with the seemingly glacial pace of scientific advances that I, with my present level of education, can work out in a few lines. I am no genius, so why did it take humanity's greatest scientific minds generations to find the same solutions? The answer is that these solutions originally required deep conceptual shifts that for me—thanks to the work of those scientists—are now completely in the background. Here's an example that I think simultaneously demonstrates the power of Newtonian analysis and the elusiveness of the modern scientific perspective.

Aristotelian physics held that everything from the moon up moved only in circles and was perfect and unchanging, while everything below the moon was imperfect, impermanent, and either drawn toward or away from the center of the universe. The critical thing is that the motion of objects on earth—projectiles, boats, apples—operated according to fundamentally different rules than the motion of stars, planets, and other celestial objects.

What Newton did was to show that the same rules apply everywhere, to everything. His laws of motion and gravity work for cannon balls, birds, the moon, and even once-in-a-lifetime comets. This is where our presentism hurts us, because that radical idea seems completely obvious now. Of course physics underlies both airplanes and space probes. Duh.

In the abstract, that's an easy case to make. But the demonstration I did in class, which is a modern-ish take on an analysis Newton himself performed, might be able to show how cool and counterintuitive this unification really is.

Consider this: if you drop a rock from a given height and time its descent, you can explain why a month is roughly 30 days long. These two facts seem completely unrelated but turn out to be connected by a simple law.

Aristotelian physics says that heavy objects are naturally drawn toward the center of the universe and that the celestial moon naturally moves about the Earth in a perfect circle. But even ignoring the Aristotelian perspective, from our modern vantage the link between these two facts seems kind of incredible. We have some vague idea that the length of a month is connected to the cycles of the moon, and we know that gravity makes rocks fall, but the moon is clearly not falling and rocks have nothing to do with calendars; so how are these facts related?

Now, I'm not shocking anybody by saying that gravity is the common factor, but I want to show you how relatively simple it is to work this out using the tools Newton gave us.

Newton's law of universal gravitation says that gravity is an inverse square force. In fact, other scientists before Newton (Hooke, for instance) had suggested this. It was known that the intensity of light falls off with the square of distance; maybe the same principle worked for gravity, too. Force is proportional to acceleration, so you can measure gravity's strength by timing falling objects (or the period of a pendulum, which was the most precise method available in Newton's time). At the surface of the earth, this acceleration is 9.8 m/s² and is usually denoted g.

If the earth is also pulling on the moon, and gravity is an inverse square law, we can find out how much earth's gravity is accelerating the moon. Divide the distance to the moon by the radius of the earth (figures known since the ancient Greeks), square the result, and that's how much weaker gravity's action on the moon is.

The distance to the moon is about 60 times the radius of the earth, so earth’s gravity pulls on the moon with 1/3600 the force that it pulls on a rock near the surface. But even so, shouldn't the moon be here by now? It's obvious that the moon is circling the earth and not slamming into us.

What we need here is another law. We see circular motion on earth, too. Imagine tying a string to a rock and spinning the rock around. What keeps the rock moving in a circle? The string, which is taut. The string pulls on the rock so that it doesn't go flying off. But if the string is pulling the rock inward, why doesn't the rock come inward toward your finger? Well, imagine slowing down the spin rate of the rock. Do that and the whole thing will fall limp. There is a specific speed required to keep the string taut. In fact, if you spin too fast, the string will break and the rock will fly off.

So here's the law. When considering circular motion, inward (centripetal) acceleration is equal to the square of the spin rate (angular velocity) times the radius. The faster you spin the rock, the harder the string needs to pull on it to keep it from flying off.

If we assume the moon is going around the earth in a perfect circle, and we suppose that gravity is pulling it inward at 1/3600 the strength it does on earth's surface, then we can figure out the moon's spin rate (around the earth), too. A little algebra gets us this formula:

ω = √( g / (3600 · 60 rₑ) )

Here rₑ is the radius of the earth. The angular velocity ω is how many radians per second the moon moves. To figure out how many seconds it takes to make a single orbit, you basically just flip the expression upside down and multiply by 2π to get a full circle. That gives you:

t = 2π/ω = 2π √( 3600 · 60 rₑ / g )

Plug in the right numbers (rₑ = 6378 km, g = 9.8 m/s²) and you arrive at a t of about 2.35 million seconds, which comes out to roughly 27.3 days (the sidereal period).

This is a couple days off from 29.5 days, which is how long it takes the moon to go through a complete set of phases (the synodic period). The difference is due to the fact that after those 27.3 days, the earth has also moved about 1/13 of the way around the sun, changing where the sun is in the sky. Because the phase of the moon arises from its position relative to the sun, it takes the moon a couple more days to catch up with the sun’s new position.
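The catch-up arithmetic is easy to check. Treating both motions as uniform, the moon's apparent rate relative to the sun is its sidereal rate minus the earth's orbital rate—a quick back-of-the-envelope sketch:

```python
sidereal_month = 27.3   # days for one lunar orbit relative to the stars
year = 365.25           # days for the earth to orbit the sun

# Relative to the sun, the moon's angular rate is reduced by the earth's own
# orbital rate, so the synodic (phase) cycle is a bit longer
synodic_month = 1 / (1 / sidereal_month - 1 / year)

print(synodic_month)  # ~29.5 days
```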

Those complications aside, the ease with which you can find the moon's sidereal period from a measurement of surface gravity is stunning. The calculation is literally only a few lines long. Here, look for yourself:
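(Or, the same few lines sketched in Python, using only the rounded figures quoted above—not Newton's geometry, obviously:)

```python
import math

g = 9.8          # surface gravity, m/s^2
r_e = 6.378e6    # radius of the earth, m

# Gravity at the moon is weaker by the square of the distance ratio (60 earth radii)
a_moon = g / 60**2

# Uniform circular motion: a = omega^2 * r, with r = 60 * r_e
omega = math.sqrt(a_moon / (60 * r_e))

# The period is one full circle (2*pi radians) at that angular speed
t = 2 * math.pi / omega

print(t / 86400)  # ~27.3 days
```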

Credit: Me me me
I'm not showing you this to impress you with my mathematical talent, but to bring you back to my initial perplexity. Why did it require an intellectual titan such as Newton to figure this out? That is, what conceptual leaps were necessary? I don't know that I can answer that question completely, but here's a partial explanation that comes in large part from Cohen's book.

First of all, as I've said, Newton had the creativity and imagination to suggest a unified physics at all. Others at the time were formulating laws that applied to the heavens (Kepler's laws of planetary motion) and even physical mechanisms by which the planets moved (Descartes' vortices), but none imagined that a single law lay behind falling apples, the tides, planetary orbits, the moon's phases, the movement of Jupiter's satellites, and the orbits of comets.

Furthermore, Newton's laws of motion serve as a starting point for conceptualizing the moon's orbit. Aristotelian physics held that circular motion was perfect because celestial objects could return to their starting point indefinitely, continuing the motion for all eternity. Circular motion required no further explanation.

But Newton's first law says that objects have inertia, that they will continue in straight lines (or remain motionless) unless acted on by an outside force. This law isn't a formula but a tool for analysis. If you assume it is true, then you can look at any physics problem and immediately identify where the forces are. Thus, we can look at the moon, see that it is not moving in a straight line, and conclude there must be some force acting on it.

As I mentioned before, others had already proposed an inverse square law to explain gravity. Simply writing down the law of universal gravitation was not Newton's accomplishment. Instead, what Newton did was to prove mathematically that a body obeying Kepler's laws of planetary motion must be acted on by an inverse square force, and conversely that an inverse square force always produces orbits that are conic sections (circles, ellipses, parabolas, or hyperbolas).

The proof Newton develops is heavily geometrical and begins by looking at an object moving freely through space that is periodically pushed toward a central focus. Newton then reduces the time between impulses until the force becomes continuous and the orbit, which began as a gangly polygon, curves into an ellipse. The important aspect here is that there are two components to an orbiting body's motion: a central force acceleration and a velocity tangent to the orbit, perpendicular to that acceleration.

What this means is the moon is falling toward the earth just as surely as an apple is. The difference is the moon is also moving in another direction so quickly that it continually misses the earth. This is what it means to orbit. As Douglas Adams said, "There is an art to flying, or rather a knack. The knack lies in learning how to throw yourself at the ground and miss."

Credit: Newton Newton Newton
All this groundwork (and more) was necessary so that Newton could justify a key step in those few lines of math I showed you up above. (I should point out that Newton's work didn't look anything like mine, because the notation and norms of math were very different back then.) The key step is that I equate the moon's acceleration due to gravity (am) with the centripetal acceleration of uniform circular motion (ac). While the units are the same, a priori there's no reason to think the two are related.

Without a mathematical and physical framework detailing how mass, force, and gravity interact, equating those two conceptions of acceleration is nothing more than taking a wild guess. And if you're guessing, that means there are probably plenty of other guesses you could have made as well. This is what our presentism—replete with all the right guesses—hides from us. At each moment when a scientist does what comes naturally to us now, they had innumerable other options before them. The achingly slow pace of scientific discovery, then, is a result of all the frameworks and ideas and theories leading to those other guesses, equally valid a priori, that turned out not to be right.

As I've written before, in physics it is sometimes easy to guess the right answer. What I hope this post does is demonstrate that guessing—that moment of eureka when the correct answer finally materializes—is only the proverbial tip of the iceberg when it comes to science. This is important to remember when you think you’ve been struck by inspiration and arrived at a brilliant new truth about... whatever. Our popular conception of history valorizes those moments, but a fuller understanding of history vindicates the slow, haphazard, incremental work that must come first. If that work isn’t there, maybe your new truth isn’t, either.

Tuesday, May 23, 2017

Rungs All the Way Down

The last lab we run in Astronomy 101 has students simulate observations of distant galaxies and then do some analysis in Excel to discover Hubble's law. By the end, students come up with a rough estimate for the age of the universe. But as I remarked elsewhere, my students seemed more in awe of Excel's tools than in discovering the origin of space, time, and all of existence (including Excel).

I don't want to get too philosophical about why this happened (because the truth is they were probably just bored and wanted the whole thing to be over with) but I suspect we are kind of spoiled nowadays for awesome science news. Everybody knows the universe began with a big bang billions of years ago, and it's difficult to transport people back to a time when such a fact was remarkable.

Yet the discovery of Hubble's law at the end of the 1920s represented the culmination of an incredible project going back millennia, one that eventually paved the way for physical cosmology—the concrete study of the structure, origin, and fate of the universe.

What is that project? Figuring out how far away things are. I know, that sounds tremendously dull, but it speaks to something potent about science: the capacity to get answers to questions you didn't ask. The (seemingly) mundane task of finding ever more accurate and applicable ways to measure distance led to an undeniable empirical fact about the origin of the universe without anyone specifically asking deep cosmological questions. That science can do this is remarkable because you're very likely to find the answer you're looking for whenever you ask a specific question. If you stumble upon a totally surprising answer to a question you weren't asking, there's a much better chance that you're not just fooling yourself. (In fact, Hubble was never entirely sold on the significance of his eponymous law.)

So how did this all come about? Well, first I'll give you the punch line. In the 1920s, Edwin Hubble used the gigantic 100" reflector at Mount Wilson Observatory to measure the distance to several far off galaxies. Then, to help build a map of the local universe, he combined this data with spectra of those galaxies collected by Vesto Slipher and Milton Humason. Due to the Doppler effect, the spectrum of a galaxy shifts if it is moving relative to the observer. Hubble discovered that the farther away a galaxy is, the faster it's moving away from us. (Its spectrum is "redshifted" toward longer wavelengths.)

Credit: Edwin Hubble
By a remarkable coincidence, the correct interpretation of this astonishing discovery had already been found by the physicist Georges Lemaître. Using Einstein's general relativity, Lemaître showed that under the influence of mass, the fabric of spacetime itself could expand outward from a "primeval atom" to the entire universe we see today. If you reverse the recession of galaxies Hubble discovered, you can figure out how long spacetime would have to be stretching out to match the observed distances of galaxies today. That length of time is the age of the universe. Furthermore, if the relationship between redshift and distance really holds up, then measuring an object's redshift tells you how far away it is.
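You can sketch that reversal in a few lines. This is the naive estimate—it ignores how the expansion rate has changed over time, and it assumes a modern-ish value of the Hubble constant (~70 km/s/Mpc, which Hubble's own calibration missed by a wide margin):

```python
H0 = 70.0                  # Hubble constant, km/s per megaparsec (assumed value)
km_per_Mpc = 3.086e19      # kilometers in one megaparsec
seconds_per_year = 3.156e7

# If a galaxy at distance d recedes at v = H0 * d, it left our position
# a time d/v = 1/H0 ago -- the same for every galaxy
age_seconds = km_per_Mpc / H0
age_years = age_seconds / seconds_per_year

print(age_years / 1e9)  # ~14 billion years
```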

But wait a second. That sounds kind of circular, because you needed to know the distances to those galaxies to find this relationship in the first place. How can we possibly know that Hubble's law is accurate as a distance measure if it relies on a distance measure, and why would you need another one anyway? Those are good questions, but you should have been asking them a long time ago. You see, when you're using Hubble's law to find distance, you're hanging from one of the highest rungs on the cosmic distance ladder, and we've been climbing this ladder for thousands of years.

So let's back up for a moment. I completely glossed over how Hubble determined the distances to these galaxies in the first place. Distance is a tricky thing in astronomy because (until very recently) we couldn't go anywhere astronomical. Instead, we are presented with a celestial sphere that might as well be infinitely far away. The objects on this sphere reveal only two pieces of information: brightness and position. From that we must infer distance. Broadly speaking, brightness and position give us two methods for finding distance: standard candles and geometry, respectively.

Brightness by itself is deceptive because if you don't know beforehand what you're looking at, you can't tell if an object is bright because it's (a) nearby or (b) intrinsically very luminous. Finding a standard candle lets you disentangle luminosity from distance so that brightness encodes distance alone. Here's how that works.

To map his galaxies, Hubble performed careful photometry on a class of stars known as Cepheid variables. Cepheid variables aren't exotic stars made from cepheionic matter; they're just a stage in the lifecycle of massive stars. Cepheids are "variable" because they are dying and unstable, causing them to periodically expand and contract. We observe these death throes as a cycle of brightening and dimming.

In the early 1900s, before we knew the astrophysical details, astronomer Henrietta Leavitt analyzed the brightness over time of thousands of Cepheid variables in the Small Magellanic Cloud (SMC). Because this cloud is a distinct object, small compared to its distance from us, Leavitt assumed all of its stars are roughly the same distance from Earth. Therefore, any difference in brightness between stars is due to differences in intrinsic luminosity. Using that assumption, she discovered that some Cepheids are (on average, at peak) brighter than others, and that the period of their variability scales with their brightness—the brighter a Cepheid, the longer its cycle.

Thus, measuring the period is a proxy for measuring the luminosity. This was astronomy's first standard candle. Because the period tells you how bright the star is supposed to be, if you see a Cepheid in Andromeda with the same period as a Cepheid in the SMC, you know that any difference in brightness is due to distance alone.
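In code, the standard candle logic is just the inverse square law run backward. (The fluxes here are made up purely for illustration.)

```python
import math

# Two Cepheids with the same period, hence (we assume) the same luminosity.
# Flux is what we actually measure, and it falls off as 1/distance^2.
flux_smc = 1.0e-12        # hypothetical measured flux of an SMC Cepheid
flux_andromeda = 4.0e-14  # hypothetical measured flux of an Andromeda Cepheid

# F ~ L / d^2 with the same L, so d2/d1 = sqrt(F1/F2)
distance_ratio = math.sqrt(flux_smc / flux_andromeda)

print(distance_ratio)  # the Andromeda Cepheid is 5x farther than the SMC one
```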

If you notice, by itself the standard candle method only tells you relative distances. You can calibrate your Cepheids with those in the SMC, but if you don't know how far away the SMC is, then your distances are just in multiples of the SMC distance, whatever that is. The upshot is you've only climbed down one rung of the cosmic distance ladder. The ladder ends when you can calibrate a cosmic distance with a terrestrial distance.

Standard candles have another built in limitation. Light intensity falls off with the square of distance, so a standard candle that is 10 times farther away is 100 times dimmer. This is why Hubble needed a gigantic 100" telescope. Without it, he could not resolve individual stars in distant galaxies. If a standard candle is too faint to be picked out, you can't do the precise photometry needed to compare it to a reference candle. So there are many rungs on the ladder, with higher rungs involving supernovae, clusters, and even whole galaxies.

But let's continue down the ladder. Historically, the next rung down involved geometry. Using geometry to measure distance usually involves some type of parallax—that is, observing the change in position of a nearby object relative to more distant objects as your perspective changes. We all intuitively know how this works just by looking out the window of a moving car. Utility poles by the side of the road zoom by; cows in a meadow fall back more slowly; distant mountains appear nearly motionless.

From that alone we see the fundamental limitation of parallax methods. The farther away an object is, the less its apparent position changes. And if it's far enough away, your telescope can't make out any difference in position. In general, parallax methods are only good for relatively nearby stars. But they are a crucial rung on the ladder nevertheless.

After Leavitt's law was discovered, astronomer Ejnar Hertzsprung calibrated the Cepheids in the SMC with ones in our own galaxy. Cepheids are pretty rare (they are a short-lived stage in the lifecycle of massive stars, which are themselves uncommon) so there aren't many that are close enough to triangulate just by watching their position shift over the course of the year. Instead, he used a method known as statistical parallax.

This method works by looking at a set of Cepheids that are roughly the same brightness scattered around the sky. If they're the same brightness, then they are about the same distance from the sun, which means they all lie on the surface of a sphere with the sun in the middle. The radius of this sphere is the distance to the Cepheids.

We can find that radius by looking at the motion of these stars. Stars move across the sky because of their own peculiar motion and the motion of the sun relative to the "local rest frame," which is the frame that follows the orbit of nearby stars around the galaxy. Their peculiar motion is basically random, which means you're just as likely to find a star moving parallel to the sun's motion as perpendicular to it.

Now, there are two ways we can measure the motion of stars. One is to look for the Doppler shift in a star's spectrum to see if it's moving toward us or away from us. The other is to look at the star's proper motion, which is its change in angular position on the sky and is perpendicular to its radial velocity. What we want to do is find the proper motion of a star that is perpendicular to the sun's motion. This motion is tangent to the imaginary star circle we've created.

Credit: No one. This graphic simply popped into existence when needed.
We can then pretend that the star is circling the sun and say that the proper motion is that star's angular speed around the sun. Angular speed can be converted to actual tangential speed by multiplying by the radius. That is, the larger the radius of a circle, the faster an object has to be moving to complete the circuit in a given time. Conversely, if we know the tangential speed, we know the radius—the distance to the star.

But we have no way of independently measuring the tangential speed, because the Doppler shift only measures radial speed. Here's where the statistical part of the statistical parallax method comes in. Because we've assembled a large collection of randomly moving stars, we can just guess that the average radial velocity of a star is the same as the average tangential velocity. We find the average radial velocity using the Doppler effect (being sure to subtract out the component of the sun's motion parallel to the radial motion). Then you set the tangential velocity of your star equal to that average radial velocity, divide by its angular speed, and you've got the distance.
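Here's that last step in miniature, with made-up numbers. The only real subtlety is unit bookkeeping: proper motions come in arcseconds per year, velocities in km/s.

```python
import math

# Hypothetical inputs for illustration
avg_radial_speed = 20.0   # km/s, from Doppler shifts (solar motion removed)
proper_motion = 0.01      # arcsec/year, perpendicular to the sun's motion

# Convert the proper motion to radians per second
arcsec_to_rad = math.pi / (180 * 3600)
seconds_per_year = 3.156e7
omega = proper_motion * arcsec_to_rad / seconds_per_year  # rad/s

# Assume the tangential speed equals the average radial speed,
# then distance = speed / angular speed
distance_km = avg_radial_speed / omega
distance_pc = distance_km / 3.086e13  # kilometers per parsec

print(distance_pc)  # ~420 parsecs
```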

The units for radial velocity are going to be something like km/s, which means we have calibrated a cosmic distance to a terrestrial distance and seem to have reached the end of the ladder. But the truth is the statistical parallax method has other distance measures baked into it, which means we've really just jumped back down to solid ground, skipping several rungs. In particular, finding the true "solar motion" of the sun requires that you already know some distances.

The real way back to Earth involves measuring the change in apparent position of a very nearby star over the course of a year as the Earth orbits the sun. Finding that change gives you the distance to the star relative to the astronomical unit, which is how far the Earth is from the sun. To measure the astronomical unit, astronomers in the 18th century measured the different durations of the transit of Venus from different positions on Earth. Those timing variations corresponded to changes in the position of Venus across the face of the sun. This gave astronomers the distance to Venus (and all other solar system distances, including the AU) in terms of the size of the Earth. To measure the size of the Earth, ancient Greek smart guy Eratosthenes watched how the lengths of shadows changed as latitude changed, which told him how curved the Earth was and consequently what its circumference was.
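That bottom rung of annual parallax is a long, thin triangle: the parallax angle p subtends one astronomical unit, so d = 1 AU / tan(p). A sketch with a hypothetical nearby star:

```python
import math

au_km = 1.496e8         # astronomical unit in kilometers
parallax_arcsec = 0.1   # hypothetical measured parallax angle

# Small-angle geometry: the earth-sun baseline subtends the parallax angle
parallax_rad = parallax_arcsec * math.pi / (180 * 3600)
distance_km = au_km / math.tan(parallax_rad)

distance_pc = distance_km / 3.086e13
print(distance_pc)  # ~10 parsecs (the parsec is defined so 1" of parallax = 1 pc)
```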

I've mostly presented the cosmic distance ladder as being a steady climb from the Earth all the way to the origin of the universe. But in reality it looks more like a game of Chutes and Ladders. I've tried to hint at the fact that there are many more methods involved, each trying to make up for some deficiency in another. Two different methods will operate on different scales but overlap slightly. Where they overlap, you can jump from one rung to the next by calibrating one to the other. Jump enough rungs, and you eventually find yourself at the beginning of everything.

Wednesday, April 12, 2017

The Pale Blue Discourse

By sheer coincidence, xkcd recently did a comic on why the sky is blue at about the same time the astronomy class I TA got to its unit on light and optics.

Credit: xkcd
The Wednesday before that comic appeared, I led a discussion in which I explained why, in fact, the sky is blue. The comic argues against starting out with Rayleigh scattering because, essentially, that's just a fancy name for the specific reason the sky is blue, when the general reason is just that things are the color they are because they reflect that color.

I agree with this argument on one level, and one of the reasons I mentioned the sky's blueness in discussion is because it's an example of one of the three broad reasons why an object is a particular color (reflection/absorption, spectral lines, and thermal radiation). But I also mentioned the blue of the sky because Rayleigh scattering is interesting in a couple ways.

First of all, one way to think about the color of the sky is instead to think about the color of the sun. Sunlight is white (composed of all the colors in the visible spectrum), yet the sun is yellow. Why? Because Rayleigh scattering scatters some wavelengths (blue) more than others (red). The result is that wherever you look, you're looking at the sun; it just depends on whether or not the sun's photons had to bounce around a few times before they got to your eyes (and consequently look like they're coming from somewhere other than the sun).

The second reason I brought up Rayleigh scattering is that, for most objects that are a particular color by dint of reflection, the explanation for why is both complicated (a specific configuration of quantum mechanical energy levels) and unilluminating (it just worked out that way). By contrast, Rayleigh scattering is one of the few instances where the explanation is fairly simple and clear. We can see the process at work throughout the day. Shorter wavelengths of light scatter away as they pass through air. The more air they pass through, the more they scatter. This is why sunsets and sunrises are particularly red: the sunlight is moving through more atmosphere (because the sun is not just straight up), and the blue light has a lot of opportunities to get lost along the way.
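The "simple and clear" part is that Rayleigh scattering scales as 1/λ⁴. A two-line check of how much more strongly blue light scatters than red (using representative wavelengths):

```python
blue = 450.0  # wavelength in nanometers
red = 650.0

# Rayleigh scattering intensity scales as 1 / wavelength^4
ratio = (red / blue) ** 4

print(ratio)  # blue light scatters ~4.4x more strongly than red
```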

But ultimately, xkcd is right that blue is just the color of air, as long as we want to think of color as a property of an object. And why wouldn't we? Well, we can engage in some fun-sucking reductionism by pointing out there is no blueness contained within air, just as there is no greenness contained within leaves. Color arises out of an object's interaction with light and eyes, and it just so happens that a particular interaction involving the sky produces blue. Many philosophers will want to push back against this kind of reductionism by saying, well, okay, then that's just what we mean by the property of blueness: being so configured that interaction with light and eyes produces the subjective experience of blue.

This is a common theme in analytic philosophy. Science has a tendency to unravel our everyday notions by telling us things like, no, we don't really ever touch an object; it's just the electric forces of our skin interacting with the electric forces of the couch. But philosophers balk at this by arguing that we clearly successfully communicate something when we say that, for example, humans have touched the surface of the moon. So let it be that what touching really means is... you get the idea.

But then what does it really mean to say that an object is blue, if blueness is a property that arises only through interaction? Well let's do a little thought experiment. Imagine that one of those TRAPPIST-1 worlds—tidally locked into its orbit around a cool red dwarf—has an atmosphere just like ours. On tidally locked worlds, the sun never rises or sets. One half of the planet is always facing the sun, while the other half never sees it. This could lead to a situation (although an atmosphere probably helps to mitigate it) where one half is a blasted hell hole and the other is a frozen wasteland. Consequently, many scientists and SF authors have imagined life arising only in a narrow strip of twilight at the terminator between night and day. There, the temperature might be just right for life. With a cool red sun (meaning much less blue light to start with) always on the horizon, a sky such as ours might always be some shade of red.

Credit: ESO
Nevertheless, scientific-minded aliens in the twilight might eventually learn the composition of the atmosphere, learn about Rayleigh scattering, and come up with a neat science fact: you know, if you were to shine an enormous amount of white light through our atmosphere, it would appear blue. But is that a good reason to say that the atmosphere is, in fact, blue?

Let's go a step further. Say that the general lack of short wavelength light means that these aliens' eyes never evolved sensitivity to blue light at all. Again, they could perform experiments and develop a theory of optics, but there's no situation in which they would describe the sky as blue, because they have no concept of blue at all.

However, blue-seeing humans are only 40 light years away, so we might someday travel there and explain the reality to them. We might say, your sky looks red, but that is only an illusion. If your eyes were sensitive to short wavelength light, and your planet were not tidally locked, and your star were luminous enough to shine brightly across the specific range of 400-700 nanometers, then you'd see that in reality your sky is, in fact, blue. The aliens would twirl their fuzzy tentacles in derision and laughter, as aliens are wont to do.

Now you might object here and say that we have plenty of names for things we don't have direct subjective experience of. For example, we've labeled the rest of the electromagnetic spectrum, from gamma rays on up to radio waves, even though we only have access to a tiny bit of that spectrum. And that's true enough, but we wouldn't say that the color of an object is x-ray. There might be some property there, but it's not color.

Okay, but let's turn the tables around here. Maybe TRAPPIST aliens are sensitive to infrared light and have a whole host of specific names for the wavelengths they subjectively experience in that range. That sounds a lot like color, too, and it seems anthropocentric of us to deny them their infrared colors. So we can say that blue is a human (or Earth creature) color and that an object is that color when it reflects light in a particular range of wavelengths. That's what color is: the subjective experience of a particular wavelength of light.

But then the aliens might ask, so what's the wavelength of this "brown" color you humans are always talking about? Brown does not have a wavelength; it doesn't show up in the rainbow. Brown is a color humans experience because our perception of color is based on more than just wavelength; it also includes contrast levels and overall brightness. Brown only shows up when something with a red or yellow wavelength is dim compared to what’s next to it.

Purple, too, is not a "real" color by the rough definition given above. It is not composed of a single wavelength but multiple wavelengths that our brains interpret as a single color. Why? Because we don't actually have perfect, exact wavelength detectors in our eyes. Instead, we have three different kinds of cones (photoreceptor cells) that absorb light in three ranges of wavelengths that overlap a bit.

Credit: Vanessaezekowitz at Wikipedia
Our brain figures out what color we're seeing not by identifying a particular wavelength but by adding up how much each type of cone has been stimulated. When a blue cone starts firing more than the rest, our brain will interpret that as seeing blue. But we don't have purple cones. Instead, the human brain has made up the color purple for those situations when our blue and red cones are firing at equal rates.
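To make the cone story concrete, here's a toy model in Python. The peak wavelengths are roughly right for the human S, M, and L cones, but the Gaussian response curves and their 50 nm width are my own simplification for illustration, not real physiology:

```python
import math

# Toy model of human color vision. The peak wavelengths (in nm) are roughly
# right for the S, M, and L cones; the Gaussian shape and 50 nm width are
# simplifications for illustration, not real physiology.
CONES = {"S": 445, "M": 540, "L": 565}
WIDTH = 50.0  # nm, assumed

def cone_response(wavelength_nm, peak_nm):
    """Relative response of a cone to monochromatic light."""
    return math.exp(-((wavelength_nm - peak_nm) / WIDTH) ** 2)

def responses(wavelength_nm):
    return {name: cone_response(wavelength_nm, peak) for name, peak in CONES.items()}

# Monochromatic 450 nm light: the S ("blue") cone dominates, so we see blue.
blue = responses(450)

# "Purple" has no single wavelength: it's the brain's label for strong S and L
# stimulation at once, e.g. from mixing ~440 nm and ~650 nm light.
purple = {name: cone_response(440, peak) + cone_response(650, peak)
          for name, peak in CONES.items()}
```

Feed it monochromatic 450 nm light and the S cone dominates; feed it a mix of short and long wavelengths and the S and L cones both fire while M stays quiet, which is the combination our brains label purple.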

So what do we say when the aliens ask what it means for something to be purple? Oh, an object is purple when it reflects both short wavelength and long wavelength visible light in a situation where creatures evolved to pick out that combination as signifying something distinctive. Ah, yes, of course.

All of this is not to say that there's no such thing as color, or that trees aren't brown. Again, it does no one any good to object to every statement about the color of an object by saying, "Well actually, leaves absorb everything but green!" So yes, the sky is blue because air is blue. That is a perfectly fine answer that conveys an important aspect of what color is all about. But that important aspect might not be that color depends on reflection; rather, it might be that the idiosyncratic history of our sun, our planet, and our species has led to the subjective experience of color.

Tuesday, March 21, 2017

A Heart to Heart Talk

Several billion years ago, a bright red star the size of Earth's orbit beat like a heart in a spiral arm of the Milky Way galaxy. Already billions of years old, this star had long since fused all the hydrogen in its core into helium. Eventually, the star grew hot enough that the helium ash could begin to burn, slowly transforming the core to carbon and oxygen. When helium in the core finally ran out, the billion year balance between gravity and radiation that every star battles to maintain gave way, and the core contracted and grew hotter. Feeling the heat, the outer envelope expanded and cooled, and a red giant was born.

This giant soon found a new, but ultimately short-lived balance in a period of its life known as the asymptotic giant branch (AGB) phase. Now, a thin shell of hydrogen surrounding the core grew hot enough to burn, producing a new layer of helium that settled onto the core. After tens or hundreds of thousands of years, the helium layer grew hot and dense enough to start its own fusion cycle, leading to a brief helium shell flash. In those moments, the star's brightness would jump by a factor of a thousand before returning to its quiescent, hydrogen-shell burning stage. This was the slow beat of the giant's heart.

Credit: Lithopsian
How do we know this story about a giant, pulsating star that died long before ours was born? We have observational and theoretical evidence that stars like this exist. With telescopes, we have found stars with masses comparable to our own that are tremendously brighter but cooler (on the surface). To be so bright yet cool, such stars must occupy a very great volume. We have also built models of stellar evolution by observing many different stars and figuring out how ones that look different might just be the same kind at different stages of life.

But what about this specific red giant from billions of years ago—how do we know about it? What lets us peer into its heart? Well, we don't know its name or where the cooling remnant of its core is now, but we do know this star was part of a lineage, inheriting the cosmic dust from previous stars and passing it on to us, but transformed. In the roiling convective envelope that surrounded the core of this red giant, there were atoms of iron built by some older star's fusion.

Iron is the endpoint for fusion that can power a star. For all elements with fewer protons than iron, smashing them together at high enough temperatures and densities liberates more energy than is required to do the smashing. But this doesn't work after iron, because you've got so many positively charged protons squished into such a small space that they strongly resist any further squishing. You can still do it, but you're losing energy. Nevertheless, this type of fusion does happen in the outer layers of dying stars, draining a bit of the star's energy with each reaction.

This process of building up elements in stars—known as stellar nucleosynthesis—was first described comprehensively in a famous astrophysical paper known as B2FH (after the initials of its four authors). In it, they gave a detailed account of the nuclear physics required to produce all the elements we see in nature. Spectrographic analysis of our star and of ancient meteorites that formed in the early days of the solar system has largely confirmed that elements exist in the proportions dictated by stellar nucleosynthesis.

But let's get back to the iron in that giant. Here, a type of nucleosynthesis known as the s-process was dominant. One way to build new elements is to bombard atoms with neutrons. Every once in a while, an atom will capture a neutron and become a radioactive isotope of whatever element it is (as determined by its number of protons). Eventually, beta decay will turn one of the neutrons in the nucleus into a proton, which bumps that atom up to the next element in the periodic table. This process starts with iron and ends with bismuth.

As you can see, there are two reactions going on here: neutron capture and beta decay. The relative rates of these two reactions determine the eventual abundances of elements we see. In AGB stars, neutron capture happens much more slowly than beta decay (hence the "s," for slow), which means that we eventually see a ladder of elements building up from iron rather than more and more exotic isotopes of iron.

Let's look at one element in particular to see how this whole thing works. The element thallium has 81 protons and shows up in nature with either 203 or 205 total nucleons (protons + neutrons). The isotope with 204 nucleons is unstable and decays with a half-life of less than 4 years. That means there is a branching point when thallium reaches 204 nucleons. From there, it can undergo beta decay and become lead with 82 protons, or it can capture another neutron and remain thallium. About 70% of thallium is the 205 kind, while 30% is the 203 variety. (There is more thallium-205 because lead-205, which you get to from thallium-205 or lead-204, is unstable over millions of years and eventually decays back to thallium-205.)
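The branching point is just a competition between two rates, which we can sketch in a few lines. The rates below are illustrative stand-ins, not measured values for thallium-204:

```python
# At a branching point, each unstable nucleus either beta-decays or captures
# another neutron first; the fraction taking each path is set by the ratio of
# the two rates. These rates are illustrative stand-ins, not measured values.

def branching_fraction(capture_rate, decay_rate):
    """Fraction of nuclei that capture a neutron before they beta-decay."""
    return capture_rate / (capture_rate + decay_rate)

decay_rate = 0.25    # per year: roughly thallium-204's ~4-year timescale
capture_rate = 0.05  # per year: an assumed, suitably "slow" neutron supply

frac_stays_thallium = branching_fraction(capture_rate, decay_rate)  # -> Tl-205
frac_becomes_lead = 1 - frac_stays_thallium                         # -> Pb-204
```

Measure the actual abundances on each side of a branch like this one and you can run the calculation in reverse to infer the neutron capture rate, which is exactly the trick described below.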

Credit: R8R Gtrs
By experimentally determining how likely thallium is to capture a neutron and how quickly it decays, we can infer how often atoms of thallium in that red giant were being bombarded with neutrons. Knowing the density of neutrons in the AGB star tells us what nuclear reactions were creating neutrons and consequently how hot the core of that star was and what elements it was composed of. It turns out that the abundances of elements we see would require a range of neutron fluxes, which is part of how we know that AGB stars undergo pulses of helium fusion before returning to hydrogen-shell burning.

Because AGB stars are about as large as Earth’s orbit but of comparable mass to our sun, their gravity is not strong enough to contain their extended envelopes. This means much material is lost, becoming a "planetary nebula" and eventually dispersing into interstellar space. That includes the products of nucleosynthesis, which come to pollute cold, giant molecular clouds.

About four and a half billion years ago, one such polluted cloud became unstable and collapsed. Out of that collapse was born our sun and solar system. As Earth formed and mixed together the metals that could withstand the searing heat of our young star, atoms of thallium got locked up in minerals of copper and lead and zinc.

Eventually, humans came along and started extracting pure thallium to do things with it, such as performing experiments that could give us insight into the hearts of long-dead stars. A week or two ago, some pure thallium-203 was bombarded with protons until it became lead-201, which has a half-life of 9.4 hours. The lead decayed into thallium-201, which has a half-life of 73 hours. Because of that short lifetime, the thallium must be prepared and used quickly. This specific batch was mixed with hydrochloric acid to produce thallium chloride, which was then put into a solution and packaged for use.
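The half-lives quoted above are enough to see why the timing matters, using the standard decay law N(t) = N0 · (1/2)^(t / half-life). Here's that arithmetic, taking 96 hours (four days) as a sample elapsed time:

```python
# Standard radioactive decay: N(t) = N0 * (1/2) ** (t / half_life). Using the
# half-lives quoted above, with 96 hours (four days) as a sample elapsed time.

def fraction_remaining(hours, half_life_hours):
    return 0.5 ** (hours / half_life_hours)

tl201_left = fraction_remaining(96, 73)   # thallium-201: ~40% still around
pb201_left = fraction_remaining(96, 9.4)  # lead-201 precursor: essentially gone
```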

Four days ago, that radioactive thallium was injected into my veins. Because thallium behaves a bit like potassium as far as cells are concerned, sodium-potassium pumps in the membranes of cardiac cells take in the thallium. These pumps transport ions of sodium and potassium, creating a voltage that gives cardiac cells the electricity they need to beat. Cardiac cells that are working well have functioning pumps and will take up the thallium; cells that aren't won't. To make sure the thallium was well circulated in my heart, they had me run on a treadmill until I got to 160 bpm.

Thank you, Frinkiac.
To see where the thallium in my blood ended up, a camera took pictures of the gamma rays streaming out of my body. But gamma rays present something of a problem. In a normal camera, a lens focuses light rays onto a surface to form an image. In telescopes, we mostly use mirrors to bounce light in the direction we want. This doesn't work with gamma rays, however. Their incredibly short wavelength means that for everyday materials, they will either be absorbed or transmitted, but not redirected. When a photon is simply absorbed without any optics, information about where the photon comes from is lost and you no longer have an image.

Astronomers have devised many clever techniques for getting images from x-ray and gamma ray sources, one of which works for looking into hearts, too. You can preserve the image of a source by creating a very small aperture for light to pass through—a pinhole camera. On the other side of that pinhole, you have a detector. Because you can trace just a single line from where a photon hits the detector back through the pinhole, you know what angle that photon came in at and thus where in the original source it originated. The downside to a pinhole camera is that almost all of the light is blocked. To get around this, you can create an aperture with a very specific shape that lets in more light but leaves a distinct "shadow" on the camera. Using computational techniques, you can then reconstruct the original image.
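The pinhole geometry is simple enough to sketch. This little example (with made-up distances in arbitrary units) shows how each detector position maps to exactly one incoming angle:

```python
import math

# Why a pinhole preserves the image: each position on the detector corresponds
# to exactly one incoming angle, the line back through the pinhole. Distances
# here are made up and in arbitrary units.

PINHOLE_TO_DETECTOR = 1.0  # distance from the pinhole plane to the detector

def incoming_angle(hit_position):
    """Angle (radians, from the camera axis) of the ray that lands at
    hit_position on the detector, for a pinhole at position 0."""
    return math.atan2(hit_position, PINHOLE_TO_DETECTOR)

# A photon landing on one side of the detector must have come from the other
# side of the sky, which is why pinhole images are inverted.
angle_left = incoming_angle(-0.5)
angle_right = incoming_angle(0.5)
```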

Credit: Alex Spade
The camera they used rotated around me for eight minutes, producing cross-sections of my heart at different angles that were later combined to form a 3D image.

I don't yet know the results of that test (although I suspect I am okay), but I am comforted by the thought that the thallium used to peer into my heart can also peer into the hearts of long-dead stars, to give a glimpse of another world, an incomparably gigantic furnace burning at hundreds of millions of degrees that does its part in seeding the galaxy with the elements necessary for chemistry and life. I am also comforted to know that I am a part of that lineage, that my carbon was produced in another dying star, that the hydrogen in my water is nearly as old as the universe itself. I hope this specific agglomeration of carbon and water persists a bit longer, but I am happy nonetheless that the universe is eternal and spectacular and knowable.

Monday, February 27, 2017

Snow Line and the Dwarf's Seven

I'm really sorry about the title. Not sorry enough not to use it, of course, but a little sorry.

So you may have heard about the recent discovery of a nearby solar system (a mere 39 light years away!) with seven planets all packed very close to the star (an M-dwarf). The discovery is significant because (a) some of the planets look to be rocky, Earth-sized, and in the habitable zone; (b) the relative nearness of the system makes it a prime target for further investigation; and (c) it's super rad. The occasion gives me the opportunity to explain a bit about how discoveries like this get made while waxing philosophical about the nature of astronomy itself. As a guy with an astronomy degree (I don't feel comfortable calling myself an astronomer) who (kind of) teaches an intro astronomy class, this is basically my job.

Conveniently, last week's discovery does an excellent job of illustrating three aspects of astronomy that I think set it apart from other sciences. (Or possibly my own confirmation bias leads me to see these aspects expressed, but let's leave that for another post.) These features are encapsulated in a kind of motto for astronomy that I've been using recently.

It goes like this: astronomy is the science of what you see when you look up. This sentiment conveys that astronomy is ancient and public, because for thousands of years, anyone could do astronomy just by turning their heads skyward and paying attention. Secondly, astronomy is bound (mostly) by sight, which is a limitation that forces astronomers to be both careful and creative. And finally, “up” is a pretty wide direction, and astronomy encompasses everything from the moon to other stars to the birth of the universe itself and anything else we find along the way.

All of this ties together into something truly remarkable. Astronomy has the power to transform points of light—the ever-present night sky that we rarely stop to consider deeply—into a story about exploding stars and merging galaxies and dark matter halos all under the spell of gravity in a dance that goes back billions of years and will probably continue for many orders of magnitude longer than it's lasted so far. And what's more, we have good reason to be confident in this story. How does astronomy manage to do this? Well, let's take a look at those seven newly discovered exoplanets.

While we've only known about exoplanets for a couple decades now, the study of planets more generally is, like the rest of astronomy, incredibly ancient. There are five planets visible to the naked eye (Mercury, Venus, Mars, Jupiter, Saturn) that have been known since antiquity. The first person to discover a new planet (Uranus) was William Herschel, using a telescope he constructed himself. Neptune followed, after Urbain le Verrier noticed that, after adding up all the known gravitational influences on Uranus, its calculated position on any given night was a little off from its observed position. He predicted that a planet farther out was gravitationally tugging on Uranus, so the astronomer Johann Gottfried Galle looked where le Verrier said to and found another new planet.

I'm giving this brief (and incomplete) history lesson because the fact of the sky always being up there makes astronomical discoveries collaborative and open. There's a parallel in last week's exoplanet discovery both in terms of that public nature and gravitational perturbations. Moreover, discovering new planets used to be a once in a generation kind of thing, but now we've discovered thousands of them and just found seven in one system. Astronomy is a gigantic, ever-expanding field; whenever we look somewhere new or look in a new way, we find new stuff.

So let's talk about TRAPPIST-1. While NASA had a big press conference about the discovery (and they were involved), this was a remarkably international effort, involving astronomers and telescopes from all over the world. Most exoplanets discovered so far have involved space telescopes because the atmosphere makes detecting subtle changes in a star's light curve difficult. A relatively cheap solution being used now is to image the same star many times either with multiple ground-based telescopes or the same scope repeatedly. This lets you produce a single, high quality light curve and means that anyone can get in on the exoplanet discovery game. With a small telescope that spends all its time looking at large patches of the sky, you can detect (and re-detect) the faint signatures of exoplanets. Once TRAPPIST and the other telescopes involved made those initial findings, NASA pointed the Spitzer Space Telescope at TRAPPIST-1 to confirm the discovery.

Okay, but how did these telescopes actually discover the seven exoplanets? This is where the central limitation of astronomy—sight is (just about) our only tool—leads to very creative solutions. The way that we transform TRAPPIST-1 from a point of light into a star with seven worlds is by performing high-precision photometry to construct a light curve of the star. A light curve is the change in a star's light over time. To get an accurate one, you need to get high quality images on short timescales. This runs counter to a very useful tactic in astronomy, which is to collect light from a source over a long period of time to produce a single, bright image. But if you do that, any deviations during that integration time get smeared out and missed.

To detect exoplanets, the deviations you're looking for are dips in the star's brightness at regular intervals. If your telescope, the star, and a planet happen to line up exactly, then every time the planet passes in front of the star from your perspective, the star gets a little bit dimmer. It's just like a solar eclipse here on Earth, except that these planets are much too far away from us to block out all the light of their parent star. Instead we see a tiny drop in brightness.

But these transits reveal a lot of information. First, the time between transits tells us how long the planet's year is. Combined with an educated guess about the star's mass (from its spectrum), we can figure out how strong the star's gravitational pull on the planet is, and consequently how far the planet needs to be from its star to complete an orbit in the observed time. The more massive the star, the faster a planet orbits at a given distance. Finally, the percentage of light blocked during the transit tells us how big the planet is compared to the star. Another educated guess about the star's size then tells us the actual physical size of the planet.
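Both deductions fit in a few lines of Python. The numbers below are only in the TRAPPIST-1 ballpark (a dim star of roughly 0.08 solar masses, a planet on a ~1.5-day orbit, a ~0.7% transit dip), rounded for illustration:

```python
# Two back-of-the-envelope transit relations. Kepler's third law in solar
# units (AU, years, solar masses) gives the orbital distance from the period
# and the star's mass; the transit depth gives the planet-to-star size ratio.

def orbital_distance_au(period_years, star_mass_suns):
    """Kepler's third law: a^3 = M * P^2 in AU / years / solar masses."""
    return (star_mass_suns * period_years ** 2) ** (1 / 3)

def radius_ratio(transit_depth):
    """Depth = (R_planet / R_star)^2, so the size ratio is its square root."""
    return transit_depth ** 0.5

# A ~1.5-day year around a ~0.08 solar mass star puts the planet at about
# a hundredth of the Earth-sun distance:
a = orbital_distance_au(1.5 / 365.25, 0.08)

# A ~0.7% dip in brightness means the planet's radius is ~8% of the star's:
ratio = radius_ratio(0.007)
```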

So by looking very precisely at how a star twinkles, we can deduce the presence of a planet and make a reasonable guess as to how big it is and how close it is to the star. We can do this despite not actually being able to see the planet itself, which is much too small and dim next to its parent star to resolve. But I've been talking about one planet this whole time, and these astronomers discovered seven. You might think sussing out the details of seven different transits while also accounting for anything else that might mess up your photometry would be difficult, and you'd be right. The primary way the team identified seven different planets was through a statistical analysis of the transit times to come up with a chart that looks like this:

Credit: ESO/M. Gillon et al.
As a rule, planets don't share orbits; doing so isn't stable. Each orbit has a definite period, and each period corresponds to an orbital speed, which tells you how long the transit should last. So if you identify a transit of a particular duration that repeats regularly, then you've found yourself a planet. If you see six or seven different regular transit times, you've found six or seven different planets.

There is a snag in all this, however, called TTVs—transit timing variations. That is, sometimes a transit happens earlier or later than expected. In this case, the variation could be up to half an hour. But it turns out this snag contains even more information, because this sounds an awful lot like the error le Verrier noticed in the orbit of Uranus. The planets weren’t where astronomers thought they would be given just the gravitational influence of the star, which means the planets—all extremely close to each other—are tugging on each other significantly.

Because so much is unknown about the system, the problem is much more complicated than the orbit of Uranus. Le Verrier was able to do a laborious calculation by hand using perturbation theory, but the complexity of TRAPPIST-1 requires a slightly faster technique if you want to publish before the stars all die and we’re left in darkness. So instead the team constructed simulations of the system, plugging in the laws of physics and then varying the unknown orbital parameters to see what kinds of simulated planetary systems match the one they observed. In the end, they’re left with a set of possible masses that could produce the tugging required to account for the transit timing variations.

Even doing this produced a wide range of possible answers, which led to a great quote in the article: "The system clearly exists, and it is unlikely that we are observing it just before its catastrophic disruption, so it is most probably stable over a significant timescale." The relevance is that the system's existence is itself a piece of data, which means that as more observations are done, the assumed stability of the system can help to rule out orbital parameters that would produce an unstable system.

With those uncertainties understood, the team was able to estimate that most of the planets are in the neighborhood of Earth's mass. If you know the size and the mass, you also know the density. The worlds of TRAPPIST-1 are all rocky (high density) as opposed to gassy (low density). The proximity to the star itself is also important. If planets are too far out from their star—past the snow line—then water and other volatiles condense into ice. Far enough inside that line, however, and water can remain a liquid. Too close, and the liquid evaporates. Several of these planets sit at the right distance to have liquid water.
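Here's the density arithmetic, with Earth's mass and radius as real constants but the planet values purely illustrative:

```python
import math

# If transits give a planet's size and the TTV analysis gives its mass, the
# density follows, and density is what separates rocky from gassy. Earth's
# mass and radius are real constants; the planet values are illustrative.

EARTH_MASS_KG = 5.97e24
EARTH_RADIUS_M = 6.37e6

def density_kg_m3(mass_earths, radius_earths):
    mass = mass_earths * EARTH_MASS_KG
    volume = (4 / 3) * math.pi * (radius_earths * EARTH_RADIUS_M) ** 3
    return mass / volume

rocky = density_kg_m3(1.0, 1.0)  # Earth-like: ~5500 kg/m^3
gassy = density_kg_m3(1.0, 4.0)  # the same mass puffed up to 4 Earth radii
```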

An entire system of rocky, Earth-sized worlds warm enough to have liquid water—this is why everybody is so excited and why astronomers are going to keep watching these planets. The Kepler Space Telescope is currently looking at the system, and the James Webb Space Telescope will too when it launches. The relative nearness of the system to us means that it is fairly easy to observe. As new observations come in, we could learn about the planets' atmospheres—their density, composition, and variability—and whether they experience tidal heating and geological activity. Are these complex, intriguing worlds like the moons of Jupiter and Saturn or airless rocks scoured dry by the flares of their parent star? We just have to look up to find out.

Thursday, January 12, 2017

When You Think Upon a Star

Among the sciences, astronomy benefits from widespread public appeal. Hilariously large numbers and gorgeous images make it an attractive source for science news. The result is that some difficult notions from astronomy have managed to penetrate successfully into public awareness. For example, this meme, which I've run across several times:

I got this image here (which, incidentally, is a blog post doing exactly what I'm about to do), but I've seen this meme in other forms elsewhere and have no idea what its original source is.
I'd like to say that I feel conflicted about this meme—that I'm happy the joke relies on knowledge of astronomy (the immense size of the universe versus the finite speed of light), despite the specific fact it calls upon being incorrect (visible stars are almost certainly still alive)—but that would be a lie, because I'm an enormous pedant.

However, in this post I'm going to steer my pedantry in what I hope is a slightly more interesting (and less annoying) direction, toward mathematical reasoning. That is, while I think it's great that the public has been able to learn certain specific facts about astronomy (and other sciences), I think it would be far more valuable if the public learned how to apply mathematical reasoning to claims they encounter.

Here's why: as a recent graduate with an official degree in astronomy and all that jazz, I happen to simply know the fact that, in general, the stars we can see with the naked eye are close enough, and live long enough, to still be alive by the time their light reaches us.

But even if I didn't know that fact, I could arrive at it by constructing an argument from some more readily available facts. And this argument, although mathematical in nature, doesn't involve anything more complicated than a bit of algebra, such that anyone who gets out of high school should be able to reach the same conclusion.

Now, the joke's humor relies on some common facts from astronomy: stars are far away, light is slow compared to the size of the universe, stars eventually die. But before we get into the mathematical meat of evaluating this claim, let's think about another common fact: our sun is 4.5 billion years old (give or take), and it's roughly halfway through its life, so it's got another several billion years to go.

In order for the sun to be dead by the time an alien civilization sees its light, that civilization would have to be farther away in light years than the sun's remaining lifetime in years. That is, the alien civilization would have to be many billions of light years away. So if we take our sun as typical, then the above meme is only true if we can see, with the naked eye, stars that are billions of light years away. We can't, and as I'll show in a bit, we don't even have to assume our sun is typical for this argument to work (which it isn't, really). But this is the structure of the mathematical argument: compare the lifetimes of stars we can see with the naked eye to their distances from us.
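The core of the argument is one comparison, which we can state as code:

```python
# The meme's claim reduced to one comparison: a star is dead by the time its
# light reaches an observer only if the light-travel time (in years) exceeds
# the star's remaining lifetime (in years).

def dead_on_arrival(distance_ly, remaining_lifetime_years):
    return distance_ly > remaining_lifetime_years

# The sun has roughly five billion years left, so aliens 40 light years away
# see a very-much-alive star...
nearby_verdict = dead_on_arrival(40, 5e9)

# ...and only observers billions of light years out could ever see a dead sun.
faraway_verdict = dead_on_arrival(6e9, 5e9)
```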

A few more astronomical facts are necessary to work this out, some of which can be gotten by a bit of googling, and one which, I admit, most people probably aren't aware of. This fact, which makes evaluating the claim very easy, is that the more luminous a (main sequence) star is, the shorter it lives. This means the most luminous stars (which are the most likely to be visible by the naked eye at great light travel times) are the best candidates for stars that are dead by the time their light reaches us. If the claim fails for these stars, it fails for all stars.

The most luminous stars live about a million years and are about a million times brighter than the sun. Now, it's always possible that a star we're seeing just happens to be at the end of its life, but all else being equal, if we pick stars at random out of the sky, then on average they will be halfway through their lives, just like (coincidentally) our sun (not strictly true, because there is some selection bias to the stars we can see).

To be visible by the naked eye, a star needs to have an apparent magnitude of 6 or lower.

For the sun to be magnitude 6 (it's currently an obscenely bright -27), it would have to be about 60 light years away. (There's some math involving logarithms here, but there are tools online that could get you this answer.)
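For the curious, the logarithm math is the distance modulus: m − M = 5 log10(d / 10 pc). The sun's absolute magnitude (about 4.83) isn't stated above, but it's the standard value that makes the ~60 light year figure come out:

```python
# The "math involving logarithms" is the distance modulus:
#   m - M = 5 * log10(d / 10 parsecs)  =>  d = 10 ** ((m - M + 5) / 5).
# The sun's absolute magnitude (~4.83) is the standard value; one parsec is
# about 3.26 light years.

SUN_ABSOLUTE_MAG = 4.83
LY_PER_PARSEC = 3.26

def distance_for_apparent_mag(apparent_mag, absolute_mag=SUN_ABSOLUTE_MAG):
    parsecs = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
    return parsecs * LY_PER_PARSEC

# How far away would the sun be just barely naked-eye visible (magnitude 6)?
sun_mag6_ly = distance_for_apparent_mag(6)  # just under 60 light years
```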

How bright a star appears to us is proportional to its intrinsic brightness and inversely proportional to the square of its distance from us. That is, if star A and star B are identical but star B is twice as far away, it looks 1/4 as bright as star A.

And that's everything we need to evaluate the claim. Now here's how we construct the argument. A star is dead by the time its light reaches us if its remaining lifetime in years is less than its distance in light years. It is visible with the naked eye if its intrinsic brightness relative to the Sun is greater than the square of its distance relative to the Sun's distance at magnitude 6.

Let me unpack that second statement a bit. Say a star is intrinsically four times as bright as the sun. If it's also magnitude 6 (just visible), then it needs to be farther away than the sun. Specifically, a star four times as bright as the Sun will be just visible at twice the distance (square root of 4) of the magnitude 6 sun: 120 light years. If it is farther away, it is too dim for us to wish upon it.

The brightest stars are 1,000,000 times more luminous than the sun, which means they have the same apparent brightness as the Sun when they are 1,000 times farther away. If the sun is just visible at 60 light years, then the brightest stars are just visible at 60,000 light years. Is 60,000 light years greater than the (on average) half a million years such a star has left to live? No. At that distance, the star could only be dead by the time we see it if it were already 94% of the way through its life. For less luminous stars, which live longer, that percentage gets even higher, which makes it even less likely that any star we see is already dead.
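The whole argument, using the numbers above, fits in a few lines:

```python
# The whole argument with the post's numbers: a star L times as luminous as
# the sun is naked-eye visible out to sqrt(L) times the sun's magnitude-6
# distance (~60 light years). Compare that to the star's expected remaining
# lifetime, assuming a randomly chosen star is halfway through its life.

SUN_MAG6_LY = 60.0

def max_visible_distance_ly(luminosity_suns):
    return SUN_MAG6_LY * luminosity_suns ** 0.5

def probably_dead(luminosity_suns, lifetime_years):
    remaining = lifetime_years / 2  # halfway through its life, on average
    return max_visible_distance_ly(luminosity_suns) > remaining

# The most luminous stars: a million suns, million-year lifetimes.
visible_out_to = max_visible_distance_ly(1e6)  # 60,000 light years
verdict = probably_dead(1e6, 1e6)              # the meme fails even here
```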

When we learned algebra via word problems, we were supposed to be learning how to solve problems like these. And while most of us probably managed to get through those word problems successfully, it's been my observation that most of us don't apply this kind of analysis outside of school, to things like evaluating claims that have mathematical content. While it's not vital to the health of a democracy that we be pedantic about random Facebook memes, it might be useful for us to be able to think carefully about scientific claims, at least when the facts and math involved don't require a PhD.

Outside of learning a bunch of astronomical facts, one of the most valuable (academic) lessons I acquired while getting my degree was learning how to bring mathematical tools to bear on a problem. I'm sure this blog post doesn't really have what it takes to impart that same lesson to others, but I hope it reveals a bit of the process. If I could wish upon a star (and I were feeling altruistic), I might wish for an educational system that did a better job of that.