## Thursday, August 31, 2017

### Nightfall

(Spoilers for a 76-year-old Isaac Asimov story, which you can read here for some reason.)

"Nightfall" is one of my favorite Asimov stories. It's set on an alien planet in a system with six suns, arranged so that at least one is always up. Consequently, the people of this planet never know night. What drives the action is the discovery of a moon (invisible due to the constant sunlight) that astronomers predict will eclipse a sun when all the others have set. The effect would be sudden, inescapable darkness, which they fear will drive people mad (and may have led to past catastrophes).

This story has been on my mind since a little before our solar eclipse. I had heard repeatedly that a total solar eclipse is an event unlike any other, that everyone should try to experience one at some point during their lives. But although some say totality can be drop-to-your-knees-and-weep life-changing, there is as far as I know no evidence of totality-induced civilization-wide collapse. Of course, we experience night daily, so sudden darkness is not as extraordinary for us.

What is it about a total solar eclipse that inspires such numinous feeling, then? Having stood within the moon's umbral gloom for slightly more than two minutes, I can offer my own perspective.

On Sunday, August 20, I traveled to Greenville, South Carolina with a friend and his family, who had family in the area willing to put us up for two nights. The drive down to Greenville from Maryland took about 11 hours. 11 hours of tedium and traffic for 2 minutes of totality—an easy choice for most, whatever that choice may be.

That evening, I passed out eclipse glasses to those who needed them and jury-rigged a solar filter onto my binoculars with index cards and masking tape. As the resident astronomy expert, I had been told by multiple eclipse veterans that it was my responsibility to do dry runs of totality so the uninitiated would be prepared for the moment. Instead we watched Game of Thrones and considered our return travel plans in light of the awful traffic coming down.

The day of, August 21, we found a nearby baseball diamond and set up our equipment about fifteen minutes before the start of the partial phase. A partial solar eclipse is a weird and cool but ultimately very detached phenomenon. You can't (or shouldn't) look directly at the sun, so watching the moon's shadow creep across its face requires filters or those eclipse glasses you've heard way too much about by now.

Through them, there was only the waning orange disc of the sun and blackness—the black of the moon, the black of sky, the black of anything else we might try to look at. Witnessing a partial eclipse was like looking through an insufficiently detailed virtual reality environment. On top of that, up until about 80% obscuration, there was very little change in our surroundings to indicate that anything was up.

But the orange disc inexorably slid into a crescent, which served as a visceral countdown to the main event: totality.

At about fifteen minutes before second contact, with the sun a thin wedge, we began to notice that it was substantially cooler out and strangely dim. The sun was still a blazing fireball in a bright blue sky, but the whole scene was a few shades darker, as if seen through sunglasses. Unfortunately, we didn't have an opportunity to see much in the way of strange shadows where we were.

As the moon reduced the sun to an arc of light, I watched through my binoculars until the orange shriveled to nothing, leaving only black. Then I looked up and experienced totality.

There's something of a twist in "Nightfall," which is that it's not night that drives people mad. In the story, they had been preparing for it. In fact, a minute or two in a totally dark room was akin to an amusement park ride for us—thrilling and hair-raising, maybe too much for some, but ultimately pretty safe.

What drove them mad was a phenomenon they were utterly unprepared for, which shattered their conception of the world and forced them to pick up the pieces.

When night finally fell, the stars came out. Except in myth, their world had consisted entirely of one planet and its attendant suns. But each pinprick of light against the black was another sun, another possible world. Each twinkling tear in the curtain of night let them peek into a much, much larger universe, one too big for their minds to bear. So they went mad instead.

I knew intellectually—from descriptions and pictures—what totality was going to be like. None of that prepared me for the moment itself, when the whole solar system was laid out before me.

Night fell and the stars came out, yes. And birds and bugs acted up. And the sun disappeared.

But here's what stuck with me. I don't remember all that many stars and it was never truly dark out. After gaping at the eclipsed sun for a moment, I saw Jupiter to the east and Venus to the west. They flanked the sun, and I could draw a straight line through all three of them. That line is the ecliptic plane, the disc of our solar system. But in the middle, instead of a sun, there was a hole in the sky—the moon. It, too, lay in that plane, along with me staring up at it all.

With the moon intercepting the light of day, the sun's faint outer atmosphere became visible. For most of our lives, the sun is a featureless glare we have to avoid. We only glimpse it during sunrise and sunset. But even then, the beauty of dawn and twilight is in the intermingling of sun and sky; it's never just you and the sun.

But the corona is the crown of the sun. By eye alone I could see exquisite detail and structure in the threaded, incandescent layers that were hidden from me a moment before. All this made the sun very real—not an untouchable brilliance, not a puddle of mixing reds, not a perfect orange disc against the black, but a giant ball of plasma reaching out to me. And it sat in the middle of a vast solar system of planets, with me on a tiny blue one hurtling around it.

Then it was over. The eclipse didn't fade away like a half-remembered dream. It just ended. There were a few seconds of twinkling at the edge of the black and then daylight returned, and the sun and planets and solar system were gone.

The initial seed for Asimov's "Nightfall," so the story goes, was a conversation between him and his editor, John W. Campbell. There's a line in a Ralph Waldo Emerson essay that reads, "If the stars should appear one night in a thousand years, how would men believe and adore, and preserve for many generations the remembrance of the city of God!" Campbell gave this line to Asimov essentially as a prompt, telling him he thought "men would go mad" instead.

And indeed, that's what happens. The short story ends with the main characters holed up in a fortified observatory, watching a crimson glow on the horizon that is not the return of the sun, but a city aflame.

So Asimov and Campbell are pretty cynical about our capacity to cope with a terrifyingly large world. By nature I share that perspective, and a gander at my Twitter feed seems to confirm the validity of such cynicism. While we are still stuck on this pale blue dot, the complexity of the world has grown dramatically in the last couple centuries.

We find ourselves unable to confront the reality of global warming, to the extent that some of us deny it while most of us pretend everything will work out somehow. Our societies have become increasingly interconnected and pluralistic, leading many to retreat into xenophobia that is at best ugly and at worst fiery and violent. Given all that, it doesn't seem unreasonable to imagine that a revelation as world-expanding as "Nightfall"'s might just unhinge us permanently and end our little experiment with civilization.

After totality ended, we stuck around for a bit chatting with others who had come to our baseball diamond, then eventually made our way back to my friend's family's place. There, I was told that a neighboring family had questions for the astronomer on location. Apparently they meant me.

I wandered over and met with a five-year-old and his mom and dad. The mom asked questions about the eclipse—the why of shadow bands and of different eclipse paths. Then the kid launched into questions about dwarf planets. He wanted to see all of them, so I showed him pictures of Pluto and Charon taken from New Horizons and Ceres from Dawn, and then explained that because dwarf planets are so small and so far away, we needed to build bigger telescopes and faster probes before we could see the rest of them. After that I managed to satisfy his curiosity with some moons, including my favorite Enceladus (about which I've been writing a post since my planetary science course in 2015).

The mom wanted to make sure I didn't dumb down my explanations for her son. The dad wanted to know if there was alien life out there (either on some moon in the solar system or on an exoplanet light years away) and when we were going to Mars.

I talked with the young family for about half an hour, answering questions and trying to feed their enthusiasm with as much knowledge as I could. Talking with strangers is not an activity that comes naturally to me (understatement), but after two semesters as a teaching assistant leading discussions and labs, I have come to enjoy this type of interaction.

What I find particularly heartening about being an ambassador for astronomy is the sheer wonder and curiosity we can have for the enormous, mind-blowing universe our telescopes have revealed. People are drawn to strange new worlds and the idea that we might someday have a home beyond Earth. Maybe fear and madness are natural and understandable reactions to a world too big to wrap our heads around, but they're not the only possible responses. How do we cultivate such wonder? How do we embrace curiosity so that it extends beyond pretty pictures and to all the unbearable complexity we are faced with?

I don't know the answer to that question. Maybe it takes witnessing once-in-a-lifetime astronomical marvels. (Helpfully, if you missed this one, the US has another in seven years.) In the meantime, maybe read some imaginative, thoughtful, mind-expanding science fiction. For the foreseeable future, that's as close as we can get to a larger world.

## Friday, August 18, 2017

### Less Is More

The eclipse is only a couple days away, so I've been checking weather reports for the last week or so hoping the forecast will be clear. It varies from site to site and day to day, which is kind of frustrating.

Our inability to precisely predict the weather reminds me of some standard opposition to climate science. That is, if we can't even predict next week's weather in one city, how can we possibly predict the world climate a decade from now? There's a similar argument against evolution: if there's a single "missing link," how can we possibly claim to know the history of life? If we don't know the exact sequence leading from, say, our common ancestor with chimpanzees to anatomically modern humans, how can we be sure that life in general underwent evolution?

The problem with this line of thinking is that it misses why science has managed to be successful at all. Science does not make accurate predictions because we have perfect knowledge of a system. In fact, the opposite is often the case. Science succeeds in part because of our ability to abstract away that which is unimportant and reveal the underlying patterns. Consequently, a little bit of ignorance helps us set aside the details that might distract us.

The eclipse itself is a great example of how having all the information can (literally and figuratively!) blind us. Our eyes are not well equipped for looking at the sun because it can be many orders of magnitude brighter than everything else we see. To fit a scene with such drastically varying brightness levels into our head, we lose some contrast resolution and end up not being able to see dim, feeble objects—especially those in the sky. That means we miss out on all the stars up during the day as well as anything even remotely near the sun.

During a total solar eclipse, the conveniently sized moon perfectly blocks the disk of the sun and a local night falls. The stars and planets come out, and the wispy corona that wreathes the sun materializes before our eyes. If we have good enough telescopes, we can measure the deflection of starlight around the sun, which tells us how its mass perturbs space itself.

 1919 Solar Eclipse. This picture is in the public domain, but I guess I'll credit Arthur Eddington?
And yet we can only do this because we have less information, because we are no longer being blinded by the flood of photons.

Now perhaps I'm engaging in some rhetorical trickery here. I started off talking about how missing individual pieces of the puzzle doesn't prevent science from abstracting away the pieces to find the underlying rules, and then I shifted to discussing how having all the pieces hinders us. But I do think there's a connection here, because the truth is we don't always realize when the details have led us astray; it's not usually as obvious as the blazing sun.

The problem has to do with our affinity for patterns and the mathematical tools we've developed for describing them. As the history of geocentrism demonstrates, our tools are too powerful for their own good. Because planetary orbits are pretty complicated, geocentrists employed epicycles—circles on top of circles—to describe how the planets moved through the sky. (Copernicus did this as well, actually, because he couldn't give up the notion of perfectly circular orbits at constant speed.) Add enough epicycles to your system and you can accurately map out any set of planetary observations, with all the messy details included.

And I do mean any set. (You can watch the whole thing, but skip to about a minute for the good part.)

Aha! We have discovered that Homer Simpson is really a complex set of epicycles. But no, that seems to have things exactly backward. Homer Simpson comes from the imagination of Matt Groening. That we happen to be able to describe the character's appearance using a set of epicycles does not give us any insight into why the character looks the way he does.

And yet packing in all the detail can give us the illusion of understanding. You see, that complicated set of Homeric epicycles can also be represented as a Fourier series, a sum of sine functions where each function—with a differing amplitude and frequency—represents one epicycle. That is, we can come up with an equation that describes Homer Simpson, and all we have to do is plug in the right numbers. It is easy to imagine, then, that there is a physical reason for each of those amplitudes and frequencies. Once we've found a reason for each number, we would seem to have a very satisfying scientific explanation for the existence of Homer.
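That Fourier-to-epicycle trick can be sketched in a few lines. Here a plain square stands in for Homer's outline (a hypothetical stand-in, not Groening's actual drawing): the discrete Fourier transform of the curve's points hands us the epicycles directly, one circle per coefficient, and keeping more circles reproduces the curve more faithfully.

```python
import numpy as np

# Trace a square as 400 complex points (x + iy), one side at a time.
# Any closed curve would work the same way.
corners = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
sides = []
for i in range(4):
    a, b = corners[i], corners[(i + 1) % 4]
    sides.append(a + (b - a) * (np.arange(100) / 100))
square = np.concatenate(sides)

n = len(square)
t = np.linspace(0, 2 * np.pi, n, endpoint=False)

# The discrete Fourier transform decomposes the curve into epicycles:
# each coefficient is one circle, with an amplitude (radius) and an
# integer frequency (rotation rate, possibly negative).
coeffs = np.fft.fft(square) / n
freqs = np.fft.fftfreq(n, d=1 / n)

def rebuild(n_epicycles):
    """Redraw the curve using only the n largest circles."""
    keep = np.argsort(np.abs(coeffs))[-n_epicycles:]
    curve = np.zeros(n, dtype=complex)
    for k in keep:
        curve += coeffs[k] * np.exp(1j * freqs[k] * t)
    return curve

rough = rebuild(5)   # blobby, but recognizably square-ish
exact = rebuild(n)   # all epicycles: reproduces the square
```

The point is that this works for any curve whatsoever, which is exactly why it explains nothing about any particular one.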

But we know, of course, that there is none, because Homer is not really built up of epicycles and all the detail we've admitted into the system has led us astray.

Okay, but then how do we discover the truth when the details get in the way? How do we "hide" them so we can see what's underneath? Abstraction is the answer. To see how that works, let's get away from early astronomy and back to the bright, messy sun.

At work, I recently came across a very neat technique used for studying the spectra of solar system bodies. So let's say we want to know more about a comet that's recently swung by the inner solar system. When we point our telescope at it, what do we see?

 Not a real spectrum of any comet. Just something I cobbled together.
If we put a spectrograph on the back of the telescope—a device that breaks up light into individual wavelengths like a prism—we get this weird hump with spikes running through it. This graph shows how intense the light is at each wavelength, running from violet on the left to red on the right.

The problem with this graph is that most of what we're seeing is just reflected light from the sun. If we want to know about the comet itself, we have to find a way to eclipse (sorry) the sun's spectrum.

This process is known as continuum-subtraction. The shape of our comet spectrum is determined by two factors: absorption/emission lines (the spikes) and the temperature of the sun (the overall hump). We have to separate those two if we want to get rid of the sun. We start by abstracting away the details—those messy spikes—leaving us with the continuum.

To do that, we need to find a best-fit curve for the spectrum. As with the Homeric epicycles up above, our mathematical tools are powerful enough to write some equation that fits the data perfectly. Instead of a Fourier series, it would be a polynomial—a sum of powers of x with coefficients. You know some basic polynomials: a line is a first order polynomial, a parabola a second order one. For this, we might need, say, a 384th degree polynomial, but we could do it. And again, we could imagine that each power of x and each coefficient has some physical cause.

But then the details are distracting us again. The truth is that most of the jagged bits of the spectrum originate from (a) atmospheric interference, the heat of the telescope, and other noise, and (b) absorption and emission lines that are superimposed on top of the continuum spectrum. So let's keep our epicycles to a minimum and use a simple second or third order polynomial instead.
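Here's a sketch of that fit, using a made-up spectrum rather than real comet data (the wavelengths, line positions, and noise level are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up spectrum: a broad thermal hump (the continuum) with two
# narrow emission lines and some noise on top.
wavelength = np.linspace(400, 700, 300)  # nm, roughly the visible band
continuum_true = 1.0 - ((wavelength - 550) / 300) ** 2
lines = 0.4 * np.exp(-0.5 * ((wavelength - 500) / 1.5) ** 2) \
      + 0.3 * np.exp(-0.5 * ((wavelength - 630) / 1.5) ** 2)
spectrum = continuum_true + lines + rng.normal(0, 0.01, wavelength.size)

# A deliberately low-order polynomial can't chase the spikes, so it
# traces only the broad continuum underneath them.
coeffs = np.polyfit(wavelength, spectrum, deg=3)
continuum_fit = np.polyval(coeffs, wavelength)
```

Real pipelines usually also mask or iteratively clip the line regions before fitting, so the spikes don't tug the continuum upward, but the low-order fit alone already lands close.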

 My parabola. You can't have it.
With the comet continuum in hand, we can then subtract it from the comet spectrum. Doing that leaves behind only the spikes.

 There are bits of code out there to perform this operation. I don't recommend subtracting each bit by hand.
The spikes that rise above the noise (which we can calculate) are the light being emitted and absorbed by the comet at particular wavelengths. The location of each line along the spectrum is a result of what element or compound is interacting with the light. With the continuum removed, we know what the actual flux is and can determine how much of the stuff is on the surface of the comet.
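Continuing the sketch with the same kind of synthetic data (invented numbers, not a real comet), the subtraction and line detection might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up spectrum: smooth continuum plus two emission lines plus noise.
wavelength = np.linspace(400, 700, 300)
spectrum = 1.0 - ((wavelength - 550) / 300) ** 2
spectrum += 0.4 * np.exp(-0.5 * ((wavelength - 500) / 1.5) ** 2)
spectrum += 0.3 * np.exp(-0.5 * ((wavelength - 630) / 1.5) ** 2)
spectrum += rng.normal(0, 0.01, wavelength.size)

# Fit and subtract a low-order continuum; only the spikes remain.
continuum = np.polyval(np.polyfit(wavelength, spectrum, deg=3), wavelength)
residual = spectrum - continuum

# Estimate the noise level robustly (the median absolute deviation
# barely notices the handful of line points), then keep whatever
# rises well above it.
noise_sigma = np.median(np.abs(residual)) / 0.6745
detected = wavelength[residual > 5 * noise_sigma]
```

Run on this fake comet, `detected` picks out wavelengths clustered around the two planted emission lines.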

Another neat trick lets us figure out the albedo of the comet—how much it reflects the light of the sun. Its albedo also depends on its composition. (Think about how much brighter the earth's icy poles are than its liquid oceans.) You might think we can figure that out just by pointing our telescope at the comet and seeing if the object is bright, but its brightness results from its proximity to the sun, distance from us, size, and finally albedo. The first two parameters are easy to figure out if we track its orbit; the last two require some disentangling.

Step one is to point the telescope at a star with similar characteristics to the sun—a solar analog—and see what its spectrum looks like in our telescope. We can't point the telescope at the sun itself, because a telescope designed to look at faint comets and stars would be blinded by the sun. We also don't want to just take someone else's spectrum of the sun, because then we're comparing observations from different telescopes in different environments. Best to use the same equipment.

Now, any solar analog we find out there isn't going to be an exact duplicate of the sun. So like before, we're better served by ignoring the details of the star's spectrum and finding a solar continuum. If we compare the solar continuum to the comet continuum, we'll see that even though the comet's light is mostly reflected sunlight, there are differences. Not every bit of light that strikes the comet's surface is reflected back. Some will be absorbed, and this absorption is wavelength-dependent. What we can do is divide the comet continuum by the solar continuum and come up with the fraction of each wavelength of light that is reflected—the albedo as a function of color.
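Here's a sketch of the division with synthetic continua (the reflectance slope and noise level are invented for illustration, and this only gives the relative reflectance; the absolute scale comes from brightness calibration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up continua: a solar analog's smooth spectrum, and a comet
# whose surface reflects red better than blue (a linear "reddening"
# slope, common for dark solar-system surfaces).
wavelength = np.linspace(400, 700, 300)
solar = 1.0 - ((wavelength - 550) / 300) ** 2
true_reflectance = 0.5 + 0.001 * (wavelength - 400)
comet = solar * true_reflectance

# Fit smooth continua to noisy versions of each observation.
noisy_solar = solar + rng.normal(0, 0.005, wavelength.size)
noisy_comet = comet + rng.normal(0, 0.005, wavelength.size)
solar_fit = np.polyval(np.polyfit(wavelength, noisy_solar, 3), wavelength)
comet_fit = np.polyval(np.polyfit(wavelength, noisy_comet, 3), wavelength)

# The ratio is the fraction of light reflected at each wavelength:
# the albedo as a function of color.
reflectance = comet_fit / solar_fit
```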

If the albedo is low, then the comet is naturally dark. If it's dark but appears bright in the telescope, it must be very large. Conversely, if the albedo is high, the comet is shiny. A dim but shiny comet must be small. So that's size and albedo worked out, too.

I think that about covers it. With these techniques, we can abstract away the distracting details—the light of the sun, the noise in our instruments—and come away with facts about a ball of ice and rock hurtling through space. A comet is a messy, complicated object and nothing but its orbit is easily reducible to a simple law, but we can nevertheless know more about it by pretending we see less of it.

## Wednesday, July 5, 2017

### From the Earth to the Moon

I recently finished reading The Birth of a New Physics, by I. Bernard Cohen, which describes the 17th century transition from Aristotelian to Newtonian physics. This reminded me of a demonstration I did for my astronomy sections last semester, in which I tried to impress them with the power of Newtonian unification. (It didn't work.) And yesterday was the day we celebrate projectile motion, so that's as good an excuse as any to revisit the topic.

As I mentioned in my last post, I think we suffer from presentism that makes it difficult for us to understand how our predecessors saw the world. To remedy that, I've been reading a lot of history of science recently; I want to understand the role that science has played in changing our conception of the world.

When reading history of science, I sometimes struggle with the seemingly glacial pace of scientific advances that I, with my present level of education, can work out in a few lines. I am no genius, so why did it take humanity's greatest scientific minds generations to find the same solutions? The answer is these solutions originally required deep conceptual shifts that for me—thanks to the work of those scientists—are now completely in the background. Here's an example that I think simultaneously demonstrates the power of Newtonian analysis and the elusiveness of the modern scientific perspective.

Aristotelian physics held that everything from the moon up moved only in circles and was perfect and unchanging, while everything below the moon was imperfect, impermanent, and either drawn toward or away from the center of the universe. The critical thing is that the motion of objects on earth—projectiles, boats, apples—operated according to fundamentally different rules than the motion of stars, planets, and other celestial objects.

What Newton did was to show the same rules apply everywhere, to everything. His laws of motion and gravity work for cannon balls, birds, the moon, and even once-in-a-lifetime comets. This is where our presentism hurts us, because that radical idea seems completely obvious now. Of course physics underlies both airplanes and space probes. Duh.

In the abstract, that's an easy case to make. But the demonstration I did in class, which is a modern-ish take on an analysis Newton himself performed, might be able to show how cool and counterintuitive this unification really is.

Consider this: if you drop a rock from a given height and time its descent, you can explain why a month is roughly 30 days long. These two facts seem completely unrelated but turn out to be connected by a simple law.

Aristotelian physics says that heavy objects are naturally drawn toward the center of the universe and that the celestial moon naturally moves about the Earth in a perfect circle. But even ignoring the Aristotelian perspective, from our modern vantage the link between these two facts seems kind of incredible. We have some vague idea that the length of a month is connected to the cycles of the moon, and we know that gravity makes rocks fall, but the moon is clearly not falling and rocks have nothing to do with calendars; so how are these facts related?

Now, I'm not shocking anybody by saying that gravity is the common factor, but I want to show you how relatively simple it is to work this out using the tools Newton gave us.

Newton's law of universal gravitation says that gravity is an inverse square force. In fact, other scientists before Newton (Kepler, Hooke) had suggested this. It was known that the intensity of light falls off with the square of distance; maybe the same principle worked for gravity, too. Force is proportional to acceleration, so you can measure it by timing falling objects (or the period of a pendulum, which was the most precise method available during Newton's time). At the surface of the earth, this is 9.8 m/s² and usually denoted with a g.

If the earth is also pulling on the moon, and gravity is an inverse square law, we can find out how much earth's gravity is accelerating the moon. Divide the distance to the moon by the radius of the earth (figures known since the ancient Greeks), square the result, and that's how much weaker gravity's action on the moon is.

The distance to the moon is about 60 times the radius of the earth, so earth’s gravity pulls on the moon with 1/3600 the force that it pulls on a rock near the surface. But even so, shouldn't the moon be here by now? It's obvious that the moon is circling the earth and not slamming into us.
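In code, that one step is almost nothing (the figures are the ones from the text):

```python
g = 9.8               # m/s^2, surface gravity, measured by timing falls
distance_ratio = 60   # moon's distance in earth radii

# Inverse square law: gravity at the moon's distance is weaker by
# the square of the distance ratio.
a_moon = g / distance_ratio ** 2   # about 0.0027 m/s^2, i.e. g/3600
```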

What we need here is another law. We see circular motion on earth, too. Imagine tying a string to a rock and spinning the rock around. What keeps the rock moving in a circle? The string, which is taut. The string pulls on the rock so that it doesn't go flying off. But if the string is pulling the rock inward, why doesn't the rock come inward toward your finger? Well, imagine slowing down the spin rate of the rock. Do that and the whole thing will fall limp. There is a specific speed required to keep the string taut. In fact, if you spin too fast, the string will break and the rock will fly off.

So here's the law. When considering circular motion, inward (centripetal) acceleration is equal to the square of the spin rate (angular velocity) times the radius. The faster you spin the rock, the harder the string needs to pull on it to keep it from flying off.

If we assume the moon is going around the earth in a perfect circle, and we suppose that gravity is pulling it inward at 1/3600 the strength it does on earth's surface, then we can figure out the moon's spin rate (around the earth), too. A little algebra gets us this formula:

$\omega=\frac{1}{60^\frac{3}{2}}(\frac{g}{r_{e}})^\frac{1}{2}$

re is the radius of the earth. The angular velocity ω is how many radians per second the moon moves. To figure out how many seconds it takes to make a single orbit, you basically just flip the expression upside down and multiply by 2π to get a full circle. That gives you:

$t=2\pi60^\frac{3}{2}(\frac{r_{e}}{g})^\frac{1}{2}$

Plug in the right numbers (re = 6378 km, g = 9.8 m/s²) and you arrive at a t of about 2.35 million seconds, which comes out to roughly 27.3 days (the sidereal period).

This is a couple days off from 29.5 days, which is how long it takes the moon to go through a complete set of phases (the synodic period). The difference is due to the fact that after those 27.3 days, the earth has also moved about 1/13 of the way around the sun, changing where the sun is in the sky. Because the phase of the moon arises from its position relative to the sun, it takes the moon a couple more days to catch up with the sun’s new position.

Those complications aside, the ease with which you can find the moon's sidereal period from a measurement of surface gravity is both stunning and surprising. The calculation is literally only a few lines long. Here, look for yourself:

 Credit: Me me me
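The same few lines translate directly into code, using the figures from the text (with the synodic correction from the previous paragraph tacked on):

```python
import math

g = 9.8        # m/s^2, surface gravity
r_e = 6.378e6  # m, radius of the earth
ratio = 60     # moon's distance in earth radii

# Equate the moon's gravitational acceleration (g / ratio^2) with the
# centripetal acceleration of circular motion (omega^2 * ratio * r_e)
# and solve for the angular velocity.
omega = math.sqrt(g / (ratio ** 3 * r_e))  # rad/s

# Flip and multiply by 2*pi to get the period of one orbit.
t_sidereal = 2 * math.pi / omega     # about 2.35 million seconds
sidereal_days = t_sidereal / 86400   # about 27.3 days

# The synodic month (one full cycle of phases) is longer, because the
# earth has moved along its own orbit in the meantime.
year_days = 365.25
synodic_days = 1 / (1 / sidereal_days - 1 / year_days)  # about 29.5 days
```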
I'm not showing you this to impress you with my mathematical talent, but to bring you back to my initial perplexity. Why did it require an intellectual titan such as Newton to figure this out? That is, what conceptual leaps were necessary? I don't know that I can answer that question completely, but here's a partial explanation that comes in large part from Cohen's book.

First of all, as I've said, Newton had the creativity and imagination to suggest a unified physics at all. Others at the time were formulating laws that applied to the heavens (Kepler's laws of planetary motion) and even physical mechanisms by which the planets moved (Descartes' vortices), but none imagined that a single law lay behind falling apples, the tides, planetary orbits, the moon's phases, the movement of Jupiter's satellites, and the orbits of comets.

Furthermore, Newton's laws of motion serve as a starting point for conceptualizing the moon's orbit. Aristotelian physics held that circular motion was perfect because celestial objects could return to their starting point indefinitely, continuing the motion for all eternity. Circular motion required no further explanation.

But Newton's first law says that objects have inertia, that they will continue in straight lines (or remain motionless) unless acted on by an outside force. This law isn't a formula but a tool for analysis. If you assume it is true, then you can look at any physics problem and immediately identify where the forces are. Thus, we can look at the moon, see that it is not moving in a straight line, and conclude there must be some force acting on it.

As I mentioned before, others had already proposed an inverse square law to explain gravity. Simply writing down the law of universal gravitation was not Newton's accomplishment. Instead, what Newton did was to prove mathematically that a body obeying Kepler's laws of planetary motion must be acted on by an inverse square force and the converse that an inverse square force will always produce orbits that resemble the conic sections (circles, ellipses, parabolas, or hyperbolas).

The proof Newton develops is heavily geometrical and begins by looking at an object moving freely through space that is periodically pushed toward a central focus. Newton then reduces the time between impulses until the force becomes continuous and the orbit, which began as a gangly polygon, curves into an ellipse. The important aspect here is there are two components to an orbiting body's motion: a central force acceleration and a velocity tangent to that acceleration.

What this means is the moon is falling toward the earth just as surely as an apple is. The difference is the moon is also moving in another direction so quickly that it continually misses the earth. This is what it means to orbit. As Douglas Adams said, "There is an art to flying, or rather a knack. The knack lies in learning how to throw yourself at the ground and miss."

 Credit: Newton Newton Newton
All this groundwork (and more) was necessary so that Newton could justify a key step in those few lines of math I showed you up above. (I should point out that Newton's work didn't look anything like mine, because the notation and norms of math were very different back then.) The key step is that I equate the moon's acceleration due to gravity (am) with the centripetal acceleration of uniform circular motion (ac). While the units are the same, a priori there's no reason to think the two are related.

Without a mathematical and physical framework detailing how mass, force, and gravity interact, equating those two conceptions of acceleration is nothing more than taking a wild guess. And if you're guessing, that means there are probably plenty of other guesses you could have made as well. This is what our presentism—replete with all the right guesses—hides from us. At each moment when a scientist does what comes naturally to us now, they had innumerable other options before them. The achingly slow pace of scientific discovery, then, is a result of all the frameworks and ideas and theories leading to those other guesses, equally valid a priori, that turned out not to be right.

As I've written before, in physics it is sometimes easy to guess the right answer. What I hope this post does is demonstrate that guessing—that moment of eureka when the correct answer finally materializes—is only the proverbial tip of the iceberg when it comes to science. This is important to remember when you think you’ve been struck by inspiration and arrived at a brilliant new truth about... whatever. Our popular conception of history valorizes those moments, but a fuller understanding of history vindicates the slow, haphazard, incremental work that must come first. If that work isn’t there, maybe your new truth isn’t, either.

## Tuesday, May 23, 2017

### Rungs All the Way Down

The last lab we run in Astronomy 101 has students simulate observations of distant galaxies and then do some analysis in Excel to discover Hubble's law. By the end, students come up with a rough estimate for the age of the universe. But as I remarked elsewhere, my students seemed more in awe of Excel's tools than of discovering the origin of space, time, and all of existence (including Excel).

I don't want to get too philosophical about why this happened (because the truth is they were probably just bored and wanted the whole thing to be over with) but I suspect we are kind of spoiled nowadays for awesome science news. Everybody knows the universe began with a big bang billions of years ago, and it's difficult to transport people back to a time when such a fact was remarkable.

Yet the discovery of Hubble's law at the end of the 1920s represented the culmination of an incredible project going back millennia, one that eventually paved the way for physical cosmology—the concrete study of the structure, origin, and fate of the universe.

What is that project? Figuring out how far away things are. I know, that sounds tremendously dull, but it speaks to something potent about science: the capacity to get answers to questions you didn't ask. The (seemingly) mundane task of finding ever more accurate and applicable ways to measure distance led to an undeniable empirical fact about the origin of the universe without anyone specifically asking deep cosmological questions. That science can do this is remarkable because you're very likely to find the answer you're looking for whenever you ask a specific question. If you stumble upon a totally surprising answer to a question you weren't asking, there's a much better chance that you're not just fooling yourself. (In fact, Hubble was never entirely sold on the significance of his eponymous law.)

So how did this all come about? Well, first I'll give you the punch line. In the 1920s, Edwin Hubble used the gigantic 100" reflector at Mount Wilson Observatory to measure the distance to several far off galaxies. Then, to help build a map of the local universe, he combined this data with spectra of those galaxies collected by Vesto Slipher and Milton Humason. Due to the Doppler effect, the spectrum of a galaxy shifts if it is moving relative to the observer. Hubble discovered that the farther away a galaxy is, the faster it's moving away from us. (Its spectrum is "redshifted" toward longer wavelengths.)

 Credit: Edwin Hubble
By a remarkable coincidence, the correct interpretation of this astonishing discovery had already been found by the physicist Georges Lemaître. Using Einstein's general relativity, Lemaitre showed that under the influence of mass, the fabric of spacetime itself could expand outward from a "primeval atom" to the entire universe we see today. If you reverse the recession of galaxies Hubble discovered, you can figure out how long spacetime would have to be stretching out to match the observed distances of galaxies today. That length of time is the age of the universe. Furthermore, if the relationship between redshift and distance really holds up, then measuring an object's redshift tells you how far away it is.
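"Reversing the recession" is just dividing distance by speed. With Hubble's law v = H₀·d, every galaxy's travel time d/v comes out to the same 1/H₀, which gives a rough age for the universe (ignoring that the expansion rate changes over time). A sketch, using a modern value of H₀ rather than Hubble's:

```python
H0 = 70.0                 # Hubble constant, km/s per Mpc (modern value)
KM_PER_MPC = 3.086e19     # kilometers in one megaparsec
SEC_PER_YEAR = 3.156e7    # seconds in one year

# A galaxy at distance d recedes at v = H0 * d, so its travel time
# d / v = 1 / H0 is the same for every galaxy: the "Hubble time"
hubble_time_s = KM_PER_MPC / H0
age_gyr = hubble_time_s / SEC_PER_YEAR / 1e9

print(round(age_gyr, 1))  # roughly 14 billion years
```

Hubble's own measured value of H₀ was several times too large (his distances were miscalibrated), which gave an embarrassingly young universe; the calibration took decades to sort out.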

But wait a second. That sounds kind of circular, because you needed to know the distances to those galaxies to find this relationship in the first place. How can we possibly know that Hubble's law is accurate as a distance measure if it relies on a distance measure, and why would you need another one anyway? Those are good questions, but you should have been asking them a long time ago. You see, when you're using Hubble's law to find distance, you're hanging from one of the highest rungs on the cosmic distance ladder, and we've been climbing this ladder for thousands of years.

So let's back up for a moment. I completely glossed over how Hubble determined the distances to these galaxies in the first place. Distance is a tricky thing in astronomy because (until very recently) we couldn't go anywhere astronomical. Instead, we are presented with a celestial sphere that might as well be infinitely far away. The objects on this sphere reveal only two pieces of information: brightness and position. From that we must infer distance. Broadly speaking, brightness and position give us two methods for finding distance: standard candles and geometry, respectively.

Brightness by itself is deceptive because if you don't know beforehand what you're looking at, you can't tell if an object is bright because it's (a) nearby or (b) intrinsically very luminous. Finding a standard candle lets you disentangle luminosity from distance so that brightness encodes distance alone. Here's how that works.

To map his galaxies, Hubble performed careful photometry on a class of stars known as Cepheid variables. Cepheid variables aren't exotic stars made from cepheionic matter; they're just a stage in the lifecycle of massive stars. Cepheids are "variable" because they are dying and unstable, causing them to periodically expand and contract. We observe these death throes as a cycle of brightening and dimming.

In the early 1900s, before we knew the astrophysical details, astronomer Henrietta Leavitt analyzed the brightness over time of thousands of Cepheid variables in the Small Magellanic Cloud (SMC). Because this cloud is distinct from other regions in the sky, Leavitt assumed all the stars are roughly the same distance from Earth. Therefore, any difference in brightness between stars is due to differences in intrinsic luminosity. Using that assumption, she discovered that some Cepheids are (on average, at peak) brighter than others, and that the period of their variability scales with their brightness—the brighter a Cepheid, the longer its cycle.

Thus, measuring the period is a proxy for measuring the luminosity. This was astronomy's first standard candle. Because the period tells you how bright the star is supposed to be, if you see a Cepheid in Andromeda with the same period as a Cepheid in the SMC, you know that any difference in brightness is due to distance alone.

If you notice, by itself the standard candle method only tells you relative distances. You can calibrate your Cepheids with those in the SMC, but if you don't know how far away the SMC is, then your distances are just in multiples of the SMC distance, whatever that is. The upshot is you've only climbed down one rung of the cosmic distance ladder. The ladder ends when you can calibrate a cosmic distance with a terrestrial distance.

Standard candles have another built-in limitation. Light intensity falls off with the square of distance, so a standard candle that is 10 times farther away is 100 times dimmer. This is why Hubble needed a gigantic 100" telescope. Without it, he could not resolve individual stars in distant galaxies. If a standard candle is too faint to be picked out, you can't do the precise photometry needed to compare it to a reference candle. So there are many rungs on the ladder, with higher rungs involving supernovae, clusters, and even whole galaxies.
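That inverse-square bookkeeping is the whole trick of a standard candle: two Cepheids with the same period have the same luminosity, so their relative distance is just the square root of their flux ratio. A minimal sketch (the flux numbers are invented):

```python
import math

# Hypothetical measured fluxes of two Cepheids with the same period
# (same period => same intrinsic luminosity, per Leavitt's law)
flux_reference = 1.0e-12   # Cepheid in the SMC (arbitrary units)
flux_faraway = 1.0e-14     # same-period Cepheid in Andromeda, 100x dimmer

# Inverse-square law: flux ~ L / d^2, so d2/d1 = sqrt(f1/f2)
distance_ratio = math.sqrt(flux_reference / flux_faraway)

print(distance_ratio)  # 100x dimmer -> 10x farther away
```

Note that this gives only the ratio of distances, which is the point made above: without an absolute distance to the SMC, everything is in multiples of the SMC distance.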

But let's continue down the ladder. Historically, the next rung down involved geometry. Using geometry to measure distance usually involves some type of parallax—that is, observing the change in position of a nearby object relative to more distant objects as your perspective changes. We all intuitively know how this works just by looking out the window of a moving car. Utility poles by the side of the road zoom by; cows in a meadow fall back more slowly; distant mountains appear nearly motionless.

From that alone we see the fundamental limitation of parallax methods. The farther away an object is, the less its apparent position changes. And if it's far enough away, your telescope can't make out any difference in position. In general, parallax methods are only good for relatively nearby stars. But they are a crucial rung on the ladder nevertheless.

After Leavitt's law was discovered, astronomer Ejnar Hertzsprung calibrated the Cepheids in the SMC with ones in our own galaxy. Cepheids are pretty rare (they are a short-lived stage in the lifecycle of massive stars, which are themselves uncommon) so there aren't many that are close enough to triangulate just by watching their position shift over the course of the year. Instead, he used a method known as statistical parallax.

This method works by looking at a set of Cepheids with similar periods (and thus similar luminosities) that are roughly the same brightness scattered around the sky. If they're the same brightness and luminosity, then they are about the same distance from the sun, which means they all lie on the surface of a sphere with the sun in the middle. The radius of this sphere is the distance to the Cepheids.

We can find that radius by looking at the motion of these stars. Stars move across the sky because of their own peculiar motion and the motion of the sun relative to the "local rest frame," which is the frame that follows the orbit of nearby stars around the galaxy. Their peculiar motion is basically random, which means you're just as likely to find a star moving parallel to the sun's motion as perpendicular to it.

Now, there are two ways we can measure the motion of stars. One is to look for the Doppler shift in a star's spectrum to see if it's moving toward us or away from us. The other is to look at the star's proper motion, which is its change in angular position on the sky and is perpendicular to its radial velocity. What we want to do is find the proper motion of a star that is perpendicular to the sun's motion. This motion is tangent to the imaginary star circle we've created.

 Credit: No one. This graphic simply popped into existence when needed.
We can then pretend that the star is circling the sun and say that the proper motion is that star's angular speed around the sun. Angular speed can be converted to actual tangential speed by multiplying by the radius. That is, the larger the radius of a circle, the faster an object has to be moving to complete the circuit in a given time. Conversely, if we know the tangential speed, we know the radius—the distance to the star.

But we have no way of independently measuring the tangential speed, because the Doppler shift only measures radial speed. Here's where the statistical part of the statistical parallax method comes in. Because we've assembled a large collection of randomly moving stars, we can just guess that the average radial velocity of a star is the same as the average tangential velocity. We find the average radial velocity using the Doppler effect (being sure to subtract out the component of the sun's motion parallel to the radial motion). Then you set the tangential velocity of your star equal to that average radial velocity, divide by its angular speed, and you've got the distance.
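That last division hides a unit conversion worth spelling out. Proper motion is usually quoted in arcseconds per year, and 1 AU per year happens to be 4.74 km/s, which gives the standard relation v_t = 4.74 · μ · d (with d in parsecs). The velocity and proper-motion values below are invented for illustration:

```python
# Statistical parallax in one line of arithmetic.
# Assumption: the average tangential speed of a batch of randomly
# moving stars equals their average radial speed (from Doppler shifts).
mean_radial_speed = 20.0   # km/s, hypothetical Doppler average
mean_proper_motion = 0.01  # arcsec/year, hypothetical

# 1 AU/year = 4.74 km/s, so v_t [km/s] = 4.74 * mu ["/yr] * d [pc]
distance_pc = mean_radial_speed / (4.74 * mean_proper_motion)

print(distance_pc)  # about 420 parsecs
```

The "statistical" in the name is doing real work here: for any single star the guess v_t ≈ v_r is terrible, but averaged over many randomly moving stars it comes out right.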

The units for radial velocity are going to be something like km/s, which means we have calibrated a cosmic distance to a terrestrial distance and seem to have reached the end of the ladder. But the truth is the statistical parallax method has other distance measures baked into it, which means we've really just jumped back down to solid ground, skipping several rungs. In particular, finding the true "solar motion" of the sun requires that you already know some distances.

The real way back to Earth involves measuring the change in apparent position of a very nearby star over the course of a year as the Earth orbits the sun. Finding that change gives you the distance to the star relative to the astronomical unit, which is how far the Earth is from the sun. To measure the astronomical unit, astronomers in the 18th century measured the different durations of the transit of Venus from different positions on Earth. Those timing variations corresponded to changes in the position of Venus across the face of the sun. This gave astronomers the distance to Venus (and all other solar system distances, including the AU) in terms of the size of the Earth. To measure the size of the Earth, ancient Greek smart guy Eratosthenes watched how the lengths of shadows changed as latitude changed, which told him how curved the Earth was and consequently what its circumference was.
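The "very nearby star" rung reduces to one line of trigonometry: the parallax angle p and a baseline of 1 AU give d = 1 AU / tan(p), and the parsec is defined so that p = 1 arcsecond corresponds to d = 1 parsec. A sketch with an invented parallax:

```python
import math

AU_KM = 1.496e8          # astronomical unit, km
parallax_arcsec = 0.1    # hypothetical annual parallax of a nearby star

# d = 1 AU / tan(p); for tiny angles tan(p) ~ p in radians
p_rad = math.radians(parallax_arcsec / 3600)
distance_km = AU_KM / math.tan(p_rad)

# The parsec is defined so that d [pc] = 1 / p [arcsec]
distance_pc = 1 / parallax_arcsec

print(distance_pc)  # 10 parsecs, about 3.1e14 km
```

Notice the chain of dependencies runs right through this formula: the distance is in units of the AU, the AU was measured via the transit of Venus in units of the Earth's size, and the Earth's size came from Eratosthenes' shadows.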

I've mostly presented the cosmic distance ladder as being a steady climb from the Earth all the way to the origin of the universe. But in reality it looks more like a game of Chutes and Ladders. I've tried to hint at the fact that there are many more methods involved, each trying to make up for some deficiency in another. Two different methods will operate on different scales but overlap slightly. Where they overlap, you can jump from one rung to the next by calibrating one to the other. Jump enough rungs, and you eventually find yourself at the beginning of everything.

## Wednesday, April 12, 2017

### The Pale Blue Discourse

By sheer coincidence, xkcd recently did a comic on why the sky is blue at about the same time the astronomy class I TA got to its unit on light and optics.

 Credit: xkcd
The Wednesday before that comic appeared, I led a discussion in which I explained why, in fact, the sky is blue. The comic argues against starting out with Rayleigh scattering because, essentially, that's just a fancy name for the specific reason the sky is blue, when the general reason is just that things are the color they are because they reflect that color.

I agree with this argument on one level, and one of the reasons I mentioned the sky's blueness in discussion is because it's an example of one of the three broad reasons why an object is a particular color (reflection/absorption, spectral lines, and thermal radiation). But I also mentioned the blue of the sky because Rayleigh scattering is interesting in a couple ways.

First of all, one way to think about the color of the sky is instead to think about the color of the sun. Sunlight is white (composed of all the colors in the visible spectrum), yet the sun is yellow. Why? Because Rayleigh scattering scatters some wavelengths (blue) more than others (red). The result is that wherever you look, you're looking at the sun; it just depends on whether or not the sun's photons had to bounce around a few times before they got to your eyes (and consequently look like they're coming from somewhere other than the sun).

The second reason I brought up Rayleigh scattering is that, for most objects that are a particular color by dint of reflection, the explanation for why is both complicated (a specific configuration of quantum mechanical energy levels) and unilluminating (it just worked out that way). By contrast, Rayleigh scattering is one of the few instances where the explanation is fairly simple and clear. We can see the process at work throughout the day. Shorter wavelengths of light scatter away as they pass through air. The more air they pass through, the more they scatter. This is why sunsets and sunrises are particularly red: the sunlight is moving through more atmosphere (because the sun is not just straight up), and the blue light has a lot of opportunities to get lost along the way.
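"Scatters some wavelengths more than others" has a sharp quantitative form: Rayleigh scattering strength goes as 1/λ⁴. Comparing the two ends of the visible spectrum:

```python
# Rayleigh scattering strength scales as 1/wavelength^4
blue_nm = 400.0   # violet-blue end of the visible spectrum
red_nm = 700.0    # red end

ratio = (red_nm / blue_nm) ** 4

print(round(ratio, 1))  # blue light scatters roughly 9x more than red
```

That steep fourth-power dependence is why the effect is so lopsided: a modest factor of ~1.75 in wavelength becomes nearly a factor of ten in scattering.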

But ultimately, xkcd is right that blue is just the color of air, as long as we want to think of color as a property of an object. And why wouldn't we? Well, we can engage in some fun-sucking reductionism by pointing out there is no blueness contained within air, just as there is no greenness contained within leaves. Color arises out of an object's interaction with light and eyes, and it just so happens that a particular interaction involving the sky produces blue. Many philosophers will want to push back against this kind of reductionism by saying, well, okay, then that's just what we mean by the property of blueness: being so configured that interaction with light and eyes produces the subjective experience of blue.

This is a common theme in analytic philosophy. Science has a tendency to unravel our everyday notions by telling us things like, no, we don't really ever touch an object; it's just the electric forces of our skin interacting with the electric forces of the couch. But philosophers balk at this by arguing that we clearly successfully communicate something when we say that, for example, humans have touched the surface of the moon. So let it be that what touching really means is... you get the idea.

But then what does it really mean to say that an object is blue, if blueness is a property that arises only through interaction? Well let's do a little thought experiment. Imagine that one of those TRAPPIST-1 worlds—tidally locked into its orbit around a cool red dwarf—has an atmosphere just like ours. On tidally locked worlds, the sun never rises or sets. One half of the planet is always facing the sun, while the other half never sees it. This could lead to a situation (although an atmosphere probably helps to mitigate it) where one half is a blasted hell hole and the other is a frozen wasteland. Consequently, many scientists and SF authors have imagined life arising only in a narrow strip of twilight at the terminator between night and day. There, the temperature might be just right for life. With a cool red sun (meaning much less blue light to start with) always on the horizon, a sky such as ours might always be some shade of red.

 Credit: ESO
Nevertheless, scientific-minded aliens in the twilight might eventually learn the composition of the atmosphere, learn about Rayleigh scattering, and come up with a neat science fact: you know, if you were to shine an enormous amount of white light through our atmosphere, it would appear blue. But is that a good reason to say that the atmosphere is, in fact, blue?

Let's go a step further. Say that the general lack of short wavelength light means that these aliens' eyes never evolved sensitivity to blue light at all. Again, they could perform experiments and develop a theory of optics, but there's no situation in which they would describe the sky as blue, because they have no concept of blue at all.

However, blue-seeing humans are only 40 light years away, so we might someday travel there and explain the reality to them. We might say, your sky looks red, but that is only an illusion. If your eyes were sensitive to short wavelength light, and your planet were not tidally locked, and your star were luminous enough to shine brightly across the specific range of 400-700 nanometers, then you'd see that in reality your sky is, in fact, blue. The aliens would twirl their fuzzy tentacles in derision and laughter, as aliens are wont to do.

Now you might object here and say that we have plenty of names for things we don't have direct subjective experience of. For example, we've labeled the rest of the electromagnetic spectrum, from gamma rays on up to radio waves, even though we only have access to a tiny bit of that spectrum. And that's true enough, but we wouldn't say that the color of an object is x-ray. There might be some property there, but it's not color.

Okay, but let's turn the tables around here. Maybe TRAPPIST aliens are sensitive to infrared light and have a whole host of specific names for the wavelengths they subjectively experience in that range. That sounds a lot like color, too, and it seems anthropocentric of us to deny them their infrared colors. So we can say that blue is a human (or Earth creature) color and that an object is that color when it reflects light in a particular range of wavelengths. That's what color is: the subjective experience of a particular wavelength of light.

But then the aliens might ask, so what's the wavelength of this "brown" color you humans are always talking about? Brown does not have a wavelength; it doesn't show up in the rainbow. Brown is a color humans experience because our perception of color is based on more than just wavelength; it also includes contrast levels and overall brightness. Brown only shows up when something with a red or yellow wavelength is dim compared to what’s next to it.

Purple, too, is not a "real" color by the rough definition given above. It is not composed of a single wavelength but multiple wavelengths that our brains interpret as a single color. Why? Because we don't actually have perfect, exact wavelength detectors in our eyes. Instead, we have three different kinds of cones (photoreceptor cells) that absorb light in three ranges of wavelengths that overlap a bit.

 Credit: Vanessaezekowitz at Wikipedia
Our brain figures out what color we're seeing not by identifying a particular wavelength but by adding up how much each type of cone has been stimulated. When a blue cone starts firing more than the rest, our brain will interpret that as seeing blue. But we don't have purple cones. Instead, the human brain has made up the color purple for those situations when our blue and red cones are firing at equal rates.
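A toy model makes this concrete. Pretend each cone's sensitivity is a Gaussian (real cone response curves are more complicated, and the peaks and widths below are invented). A single mid-spectrum wavelength mostly excites the green cone, while a mix of short and long wavelengths excites the blue and red cones equally with green lagging—the firing pattern no single wavelength can produce, which the brain renders as purple.

```python
import math

# Toy Gaussian cone sensitivities (invented peaks and widths, not real data)
CONES = {"blue": 450.0, "green": 540.0, "red": 600.0}
SIGMA = 40.0  # nm

def response(spectrum_nm, peak):
    """Total stimulation of one cone type by a list of wavelengths."""
    return sum(math.exp(-((w - peak) ** 2) / (2 * SIGMA**2))
               for w in spectrum_nm)

# A single 525 nm wavelength: the green cone dominates -> looks green
single = {name: response([525.0], peak) for name, peak in CONES.items()}

# Equal parts 450 nm and 600 nm light: blue and red cones fire equally,
# green lags behind -> the brain invents "purple"
mix = {name: response([450.0, 600.0], peak) for name, peak in CONES.items()}

print(single, mix)
```

The point of the toy is the pattern, not the numbers: color is a function of the ratios of three broad, overlapping responses, not of wavelength directly.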

So what do we say when the aliens ask what it means for something to be purple? Oh, an object is purple when it reflects both short wavelength and long wavelength visible light in a situation where creatures evolved to pick out that combination as signifying something distinctive. Ah, yes, of course.

All of this is not to say that there's no such thing as color, or that trees aren't brown. Again, it does no one any good to object to every statement about the color of an object by saying, "Well actually, leaves absorb everything but green!" So yes, the sky is blue because air is blue. That is a perfectly fine answer that conveys an important aspect of what color is all about. But that important aspect might not be that color depends on reflection; rather, it might be that the idiosyncratic history of our sun, our planet, and our species have led to the subjective experience of color.

## Tuesday, March 21, 2017

### A Heart to Heart Talk

Several billion years ago, a bright red star the size of Earth's orbit beat like a heart in a spiral arm of the Milky Way galaxy. Already billions of years old, this star had long since fused all the hydrogen in its core into helium. Eventually, the star grew hot enough that the helium ash could begin to burn, slowly transforming the core to carbon and oxygen. When helium in the core finally ran out, the billion year balance between gravity and radiation that every star battles to maintain gave way, and the core contracted and grew hotter. Feeling the heat, the outer envelope expanded and cooled, and a red giant was born.

This giant soon found a new, but ultimately short-lived balance in a period of its life known as the asymptotic giant branch (AGB) phase. Now, a thin shell of hydrogen surrounding the core grew hot enough to burn, producing a new layer of helium that settled onto the core. After tens or hundreds of thousands of years, the helium layer grew hot and dense enough to start its own fusion cycle, leading to a brief helium shell flash. In those moments, the star's brightness would jump by a factor of a thousand before returning to its quiescent, hydrogen-shell burning stage. This was the slow beat of the giant's heart.

 Credit: Lithopsian
How do we know this story about a giant, pulsating star that died long before ours was born? We have observational and theoretical evidence that stars like this exist. With telescopes, we have found stars with masses comparable to our own that are tremendously brighter but cooler (on the surface). To be so bright yet cool, such stars must occupy a very great volume. We have also built models of stellar evolution by observing many different stars and figuring out how ones that look different might just be the same kind at different stages of life.

But what about this specific red giant from billions of years ago—how do we know about it? What lets us peer into its heart? Well, we don't know its name or where the cooling remnant of its core is now, but we do know this star was part of a lineage, inheriting the cosmic dust from previous stars and passing it on to us, but transformed. In the roiling convective envelope that surrounded the core of this red giant, there were atoms of iron built by some older star's fusion.

Iron is the endpoint for fusion that can power a star. For all elements with fewer protons than iron, smashing them together at high enough temperatures and densities liberates more energy than is required to do the smashing. But this doesn't work after iron, because you've got so many positively charged protons squished into such a small space that they strongly resist any further squishing. You can still do it, but you're losing energy. Nevertheless, this type of fusion does happen in the outer layers of dying stars, draining a bit of the star's energy with each reaction.

This process of building up elements in stars—known as stellar nucleosynthesis—was first described comprehensively in a famous astrophysical paper known as B2FH (after the initials of the four authors). In it, they gave a detailed account of the nuclear physics required to produce all the elements we see in nature. Spectrographic analysis of our star and ancient meteorites that existed in the early days of the solar system has largely confirmed that elements do exist in the proportions dictated by stellar nucleosynthesis.

But let's get back to the iron in that giant. Here, a type of nucleosynthesis known as the s-process was dominant. One way to build new elements is to bombard atoms with neutrons. Every once in a while, an atom will capture a neutron and become a radioactive isotope of whatever element it is (as determined by its number of protons). Eventually, beta decay will turn one of the neutrons in the nucleus into a proton, which then bumps that atom up to the next element in the periodic table. This process starts with iron and ends with bismuth.

As you can see, there are two reactions going on here: neutron capture and beta decay. Because of this, the rate at which these reactions occur determines the eventual abundance of elements we see. In AGB stars, neutron capture happens much more slowly than beta decay, which means that we will eventually see a ladder of elements building up from iron rather than more and more weird isotopes of iron.

Let's look at one element in particular to see how this whole thing works. The element thallium has 81 protons and shows up in nature with either 203 or 205 total nucleons (protons+neutrons). Thallium with 204 nucleons is unstable, decaying with a half-life of less than 4 years. That means there is a branching point when thallium reaches 204 nucleons. From there, it can undergo beta decay and become lead with 82 protons, or it can capture another neutron and remain thallium. About 70% of thallium is the 205 kind, while 30% is the 203 variety. (There is more thallium-205 because lead-205, which you reach from thallium-205 or lead-204, is unstable over millions of years and eventually decays back to thallium-205.)
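The competition at a branching point like thallium-204 is a race between two rates: the beta-decay rate, fixed by the half-life, and the neutron-capture rate, set by the neutron flux in the star. The capture timescale below is invented for illustration; in real s-process modeling it's precisely the quantity you infer by working backward from the observed abundances.

```python
import math

# Beta-decay rate of Tl-204, from its half-life of about 3.78 years
decay_rate = math.log(2) / 3.78          # per year

# Neutron-capture rate: hypothetical value; in a real AGB star this
# depends on the neutron density, which is what we're trying to infer
capture_rate = 1.0 / 10.0                # per year (invented timescale)

# Fraction of Tl-204 nuclei that capture a neutron (staying thallium,
# on the way to Tl-205) before beta-decaying into lead-204
capture_fraction = capture_rate / (capture_rate + decay_rate)

print(round(capture_fraction, 2))
```

Run the logic in reverse and you have the method described below: measured abundances pin down the branching fraction, the branching fraction pins down the capture rate, and the capture rate pins down the neutron flux inside a star that died billions of years ago.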

 Credit: R8R Gtrs
By experimentally determining how likely thallium is to capture a neutron and how quickly it decays, we can infer how often atoms of thallium in that red giant were being bombarded with neutrons. Knowing the density of neutrons in the AGB star tells us what nuclear reactions were creating neutrons and consequently how hot the core of that star was and what elements it was composed of. It turns out that the abundances of elements we see would require a range of neutron fluxes, which is part of how we know that AGB stars undergo pulses of helium fusion before returning to hydrogen-shell burning.

Because AGB stars are about as large as Earth’s orbit but of comparable mass to our sun, their gravity is not strong enough to contain their extended envelopes. This means much material is lost, becoming a "planetary nebula" and eventually dispersing into interstellar space. That includes the products of nucleosynthesis, which come to pollute cold, giant molecular clouds.

About four and a half billion years ago, one such polluted cloud became unstable and collapsed. Out of that collapse was born our sun and solar system. As Earth formed and mixed together the metals that could withstand the searing heat of our young star, atoms of thallium got locked up in minerals of copper and lead and zinc.

Eventually, humans came along and started extracting pure thallium to do things with it, such as performing experiments that could give us insight into the hearts of long-dead stars. A week or two ago, some pure thallium-203 was bombarded with protons until it became lead-201, which has a half-life of 9.4 hours. The lead decayed into thallium-201, which has a half-life of 73 hours. Because of that short lifetime, the thallium must be prepared and used quickly. This specific batch was mixed with hydrochloric acid to produce thallium chloride, which was then put into a solution and packaged for use.
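Half-life arithmetic explains the time pressure: the surviving fraction of a sample after time t is 2^(−t/T). With thallium-201's 73-hour half-life, most of a batch is gone within a few days.

```python
half_life_hours = 73.0    # thallium-201

def fraction_remaining(t_hours):
    """Fraction of a radioactive sample surviving after t hours."""
    return 2 ** (-t_hours / half_life_hours)

# Four days (96 hours) after preparation, well under half is left
print(round(fraction_remaining(96), 2))
```

The same formula, run with a 9.4-hour half-life, shows why the intermediate lead-201 is essentially gone within a couple of days of the bombardment.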

Four days ago, that radioactive thallium was injected into my veins. Because thallium behaves a bit like potassium as far as cells are concerned, sodium-potassium pumps in the membranes of cardiac cells take in the thallium. These pumps transport ions of sodium and potassium, creating a voltage that gives cardiac cells the electricity they need to beat. Cardiac cells that are working well have functioning pumps and will take up the thallium; cells that aren't won't. To make sure the thallium was well circulated in my heart, they had me run on a treadmill until I got to 160 bpm.

 Thank you, Frinkiac.
To see where the thallium in my blood ended up, a camera took pictures of the gamma rays streaming out of my body. But gamma rays present something of a problem. In a normal camera, a lens focuses light rays onto a surface to form an image. In telescopes, we mostly use mirrors to bounce light in the direction we want. This doesn't work with gamma rays, however. Their incredibly short wavelength means that for everyday materials, they will either be absorbed or transmitted, but not redirected. When a photon is simply absorbed without any optics, information about where the photon comes from is lost and you no longer have an image.

Astronomers have devised many clever techniques for getting images from x-ray and gamma ray sources, one of which works for looking into hearts, too. You can preserve the image of a source by creating a very small aperture for light to pass through—a pinhole camera. On the other side of that pinhole, you have a detector. Because you can trace just a single line from where a photon hits the detector to the pinhole, you know what angle that photon came in at and thus know what the original source of the image was. The downside to a pinhole camera is that almost all of the light is blocked. To get around this, you can create an aperture with a very specific shape that lets in more light but leaves a distinct "shadow" on the camera. Using computational techniques, you can then reconstruct the original image.

The camera they used rotated around me for eight minutes, producing cross-sections of my heart at different angles that were later combined to form a 3D image.

I don't yet know the results of that test (although I suspect I am okay), but I am comforted by the thought that the thallium used to peer into my heart can also peer into the hearts of long-dead stars, to give a glimpse of another world, an incomparably gigantic furnace burning at hundreds of millions of degrees that does its part in seeding the galaxy with the elements necessary for chemistry and life. I am also comforted to know that I am a part of that lineage, that my carbon was produced in another dying star, that the hydrogen in my water is nearly as old as the universe itself. I hope this specific agglomeration of carbon and water persists a bit longer, but I am happy nonetheless that the universe is eternal and spectacular and knowable.

## Monday, February 27, 2017

### Snow Line and the Dwarf's Seven

I'm really sorry about the title. Not sorry enough not to use it, of course, but a little sorry.

So you may have heard about the recent discovery of a nearby solar system (a mere 39 light years away!) with seven planets all packed very close to the star (an M-dwarf). The discovery is significant because (a) some of the planets look to be rocky, Earth-sized, and in the habitable zone; (b) the relative nearness of the system makes it a prime target for further investigation; and (c) it's super rad. The occasion gives me the opportunity to explain a bit about how discoveries like this get made while waxing philosophical about the nature of astronomy itself. As a guy with an astronomy degree (I don't feel comfortable calling myself an astronomer) who (kind of) teaches an intro astronomy class, this is basically my job.

Conveniently, last week's discovery does an excellent job of illustrating three aspects of astronomy that I think set it apart from other sciences. (Or possibly my own confirmation bias leads me to see these aspects expressed, but let's leave that for another post.) These features are encapsulated in a kind of motto for astronomy that I've been using recently.

It goes like this: astronomy is the science of what you see when you look up. First, this sentiment conveys that astronomy is ancient and public, because for thousands of years, anyone could do astronomy just by turning their heads skyward and paying attention. Secondly, astronomy is bound (mostly) by sight, a limitation that forces astronomers to be both careful and creative. And finally, “up” is a pretty wide direction: astronomy encompasses everything from the moon to other stars to the birth of the universe itself and anything else we find along the way.

All of this ties together into something truly remarkable. Astronomy has the power to transform points of light—the ever-present night sky that we rarely stop to consider deeply—into a story about exploding stars and merging galaxies and dark matter halos all under the spell of gravity in a dance that goes back billions of years and will probably continue for many orders of magnitude longer than it's lasted so far. And what's more, we have good reason to be confident in this story. How does astronomy manage to do this? Well, let's take a look at those seven newly discovered exoplanets.

While we've only known about exoplanets for a couple decades now, the study of planets more generally is, like the rest of astronomy, incredibly ancient. Five planets are visible to the naked eye (Mercury, Venus, Mars, Jupiter, Saturn) and have been known since antiquity. The first person to discover a new planet (Uranus) was William Herschel, using a telescope he constructed himself. Neptune followed when Urbain Le Verrier noticed that, even after adding up all the known gravitational influences on Uranus, its calculated position on any given night was a little off from its observed position. He predicted that a planet farther out was gravitationally tugging on Uranus, so the astronomer Johann Gottfried Galle looked where Le Verrier said to and found another new planet.

I'm giving this brief (and incomplete) history lesson because the fact of the sky always being up there makes astronomical discoveries collaborative and open. There's a parallel in last week's exoplanet discovery, both in its public nature and in the role of gravitational perturbations. Moreover, discovering new planets used to be a once-in-a-generation kind of thing, but now we've discovered thousands of them and just found seven in one system. Astronomy is a gigantic, ever-expanding field; whenever we look somewhere new or look in a new way, we find new stuff.

So let's talk about TRAPPIST-1. While NASA had a big press conference about the discovery (and they were involved), this was a remarkably international effort, involving astronomers and telescopes from all over the world. Most exoplanets discovered so far have involved space telescopes because the atmosphere makes detecting subtle changes in a star's light curve difficult. A relatively cheap solution being used now is to image the same star many times either with multiple ground-based telescopes or the same scope repeatedly. This lets you produce a single, high quality light curve and means that anyone can get in on the exoplanet discovery game. With a small telescope that spends all its time looking at large patches of the sky, you can detect (and re-detect) the faint signatures of exoplanets. Once TRAPPIST and the other telescopes involved made those initial findings, NASA pointed the Spitzer Space Telescope at TRAPPIST-1 to confirm the discovery.
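To see why combining many observations works, here is a toy numpy sketch; the 0.7% transit depth, the 1% per-point noise, and the hundred stacked light curves are invented numbers for illustration, not figures from the TRAPPIST-1 campaign:

```python
import numpy as np

rng = np.random.default_rng(1)
true = np.ones(200)
true[90:110] = 0.993                 # a 0.7% transit dip in an otherwise flat light curve

noise = 0.01                         # per-point scatter of a single observation
one_night = true + rng.normal(0, noise, 200)

# Averaging N independent light curves of the same star shrinks the
# scatter by sqrt(N) -- here a factor of 10 for N = 100:
stacked = (true + rng.normal(0, noise, (100, 200))).mean(axis=0)
```

In `one_night` the dip is comparable to the noise and easy to miss; in `stacked` it stands many standard deviations above the scatter, which is why repeated imaging with modest telescopes can produce a single, high-quality light curve.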

Okay, but how did these telescopes actually discover the seven exoplanets? This is where the central limitation of astronomy—sight is (just about) our only tool—leads to very creative solutions. The way that we transform TRAPPIST-1 from a point of light into a star with seven worlds is by performing high-precision photometry to construct a light curve of the star. A light curve is the change in a star's light over time. To get an accurate one, you need to get high quality images on short timescales. This runs counter to a very useful tactic in astronomy, which is to collect light from a source over a long period of time to produce a single, bright image. But if you do that, any deviations during that integration time get smeared out and missed.

To detect exoplanets, the deviations you're looking for are dips in the star's brightness at regular intervals. If your telescope, the star, and a planet happen to line up exactly, then every time the planet passes in front of the star from your perspective, the star gets a little bit dimmer. It's just like a solar eclipse here on Earth, except that these planets are much too far away from us to block out all the light of their parent star. Instead we see a tiny drop in brightness.

But these transits reveal a lot of information. First, the time between transits tells us how long the planet's year is. Combined with an educated guess about the star's mass (from its spectrum), we can figure out how strong gravity's pull on the planet is, and consequently the distance it needs to be from its star to complete an orbit in the observed time. The more massive the star, the faster a planet orbits at a given distance. Finally, the percentage of light blocked during the transit tells us how big the planet is compared to the star. Another educated guess, this time about the star's size, tells us the actual physical size of the planet.
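That chain of reasoning is short enough to work through. Here is a sketch using TRAPPIST-1b-like numbers; the 1.51-day period, the 0.08-solar-mass and 0.117-solar-radius star, and the 0.73% transit depth are assumed inputs for illustration, not values quoted in this post:

```python
import math

G = 6.674e-11                     # gravitational constant, m^3 kg^-1 s^-2
M_SUN, R_SUN = 1.989e30, 6.957e8  # kg, m
AU, R_EARTH = 1.496e11, 6.371e6   # m

# Assumed inputs for a TRAPPIST-1b-like case:
period = 1.51 * 86400             # seconds between transits (~1.51 days)
m_star = 0.08 * M_SUN             # stellar mass, guessed from the spectrum
r_star = 0.117 * R_SUN            # stellar radius, another educated guess
depth = 0.0073                    # fractional dip in brightness mid-transit

# Kepler's third law gives the orbital distance: a^3 = G * M * P^2 / (4 * pi^2)
a = (G * m_star * period**2 / (4 * math.pi**2)) ** (1 / 3)

# The transit depth is the ratio of the two disks' areas: depth = (Rp / Rs)^2
r_planet = math.sqrt(depth) * r_star

print(a / AU)                     # ~0.011 AU, far inside Mercury's orbit
print(r_planet / R_EARTH)         # ~1.1 Earth radii
```

Notice that everything on the planet side comes from timing and brightness alone; the star's mass and radius are the only outside guesses, which is why refining the star's properties refines every planet in the system at once.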

So by looking very precisely at how a star twinkles, we can deduce the presence of a planet and make a reasonable guess as to how big it is and how close it is to the star. We can do this despite not actually being able to see the planet itself, which is much too small and dim next to its parent star to resolve. But I've been talking about one planet this whole time, and these astronomers discovered seven. You might think sussing out the details of seven different transits while also accounting for anything else that might mess up your photometry would be difficult, and you'd be right. The primary way the team identified seven different planets was through a statistical analysis of the transit times to come up with a chart that looks like this:

[Figure: chart of the planets' transit signatures. Credit: ESO/M. Gillon et al.]
As a rule, planets don't share orbits; doing so isn't stable. Each orbit has a definite period, and each period corresponds to an orbital speed, which tells you how long the transit should last. So if you identify a transit of a particular duration that repeats regularly, then you've found yourself a planet. If you see six or seven different regular transit signatures, you've found six or seven different planets.
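The repeats-at-one-period logic can be sketched as a toy phase-folding search. The two planets, their 1.5- and 2.4-day periods, and the scoring scheme below are all invented for illustration and are far cruder than the team's actual statistical analysis:

```python
import numpy as np

# Invented transit mid-times (in days) from two hypothetical planets with
# periods of 1.5 and 2.4 days, interleaved as a telescope would record them:
times = np.sort(np.concatenate([np.arange(0.3, 30, 1.5),
                                np.arange(0.9, 30, 2.4)]))

def score(period, times, tol=0.05):
    """How many events land at (nearly) the same phase when folded at this period?"""
    phases = np.sort(times % period)
    return int(max(np.sum(np.abs(phases - p0) < tol) for p0 in phases))

# Folding at a true period lines that planet's transits up at one phase;
# folding at a wrong period scatters them across all phases:
print(score(1.5, times), score(2.4, times), score(1.9, times))
```

Scanning `score` over a grid of candidate periods and keeping the peaks recovers both planets from the jumbled event list, which is the basic shape of how multiple regular transit signatures get teased apart.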

There is a snag in all this, however, called TTVs—transit timing variations. That is, sometimes a transit happens earlier or later than expected; in this case, the variation could be up to half an hour. But it turns out this snag contains even more information, because it sounds an awful lot like the error Le Verrier noticed in the orbit of Uranus. The planets weren’t where astronomers thought they would be given just the gravitational influence of the star, which means the planets—all extremely close to each other—are tugging on each other significantly.

Because so much is unknown about the system, the problem is much more complicated than the orbit of Uranus. Le Verrier was able to do a laborious calculation by hand using perturbation theory, but the complexity of TRAPPIST-1 requires a slightly faster technique if you want to publish before the stars all die and we’re left in darkness. So instead the team constructed simulations of the system, plugging in the laws of physics and varying the unknown orbital parameters to see which simulated systems evolve to match the one observed. In the end, they’re left with a set of possible masses that could produce the tugging required to account for the transit timing variations.

Even doing this produced a wide range of possible answers, which led to a great quote in the article: "The system clearly exists, and it is unlikely that we are observing it just before its catastrophic disruption, so it is most probably stable over a significant timescale." The relevance is that the system's existence is itself a piece of data, which means that as more observations are done, the assumed stability of the system can help to rule out orbital parameters that would produce an unstable system.

With those uncertainties understood, the team was able to estimate that most of the planets are in the neighborhood of Earth's mass. If you know the size and the mass, you also know the density, and the worlds of TRAPPIST-1 look to be rocky (high density) as opposed to gassy (low density). The planets' proximity to the star also matters. If planets are too far out from their star—past the snow line—then water and other volatiles condense into ice. Far enough inside that line, however, and water can remain a liquid. Too close, and the liquid evaporates. Several of these planets sit at the right distance to have liquid water.
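The density step is simple enough to do by hand. A quick sketch (Earth's mass and radius are standard values; the puffed-up two-Earth-radius comparison case is made up):

```python
import math

M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

def bulk_density(mass_earths, radius_earths):
    """Mean density in kg/m^3, given mass and radius in Earth units."""
    m = mass_earths * M_EARTH
    r = radius_earths * R_EARTH
    return m / ((4 / 3) * math.pi * r**3)

print(bulk_density(1.0, 1.0))   # ~5500 kg/m^3: rock and iron
print(bulk_density(1.0, 2.0))   # ~690 kg/m^3: far too fluffy to be a rocky world
```

A planet with Earth's mass but twice its radius would be only an eighth as dense, closer to a ball of ice and gas than to rock, which is why mass and size together pin down what a world is made of.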

An entire system of rocky, Earth-sized worlds warm enough to have liquid water—this is why everybody is so excited and why astronomers are going to keep watching these planets. The Kepler Space Telescope is currently looking at the system, and the James Webb Space Telescope will too when it launches. The relative nearness of the system to us means that it is fairly easy to observe. As new observations come in, we could learn about the planets' atmospheres—their density, composition, and variability—and whether they experience tidal heating and geological activity. Are these complex, intriguing worlds like the moons of Jupiter and Saturn or airless rocks scoured dry by the flares of their parent star? We just have to look up to find out.