Wednesday, April 12, 2017

The Pale Blue Discourse

By sheer coincidence, xkcd recently did a comic on why the sky is blue at about the same time the astronomy class I TA got to its unit on light and optics.

Credit: xkcd
The Wednesday before that comic appeared, I led a discussion in which I explained why, in fact, the sky is blue. The comic argues against starting out with Rayleigh scattering because, essentially, that's just a fancy name for the specific reason the sky is blue, when the general reason is just that things are the color they are because they reflect that color.

I agree with this argument on one level, and one of the reasons I mentioned the sky's blueness in discussion is because it's an example of one of the three broad reasons why an object is a particular color (reflection/absorption, spectral lines, and thermal radiation). But I also mentioned the blue of the sky because Rayleigh scattering is interesting in a couple ways.

First of all, one way to think about the color of the sky is instead to think about the color of the sun. Sunlight is white (composed of all the colors in the visible spectrum), yet the sun is yellow. Why? Because Rayleigh scattering scatters some wavelengths (blue) more than others (red). The result is that wherever you look, you're looking at the sun; it just depends on whether or not the sun's photons had to bounce around a few times before they got to your eyes (and consequently look like they're coming from somewhere other than the sun).

The second reason I brought up Rayleigh scattering is that, for most objects that are a particular color by dint of reflection, the explanation for why is both complicated (a specific configuration of quantum mechanical energy levels) and unilluminating (it just worked out that way). By contrast, Rayleigh scattering is one of the few instances where the explanation is fairly simple and clear. We can see the process at work throughout the day. Shorter wavelengths of light scatter away as they pass through air. The more air they pass through, the more they scatter. This is why sunsets and sunrises are particularly red: the sunlight is moving through more atmosphere (because the sun is not just straight up), and the blue light has a lot of opportunities to get lost along the way.
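To put a rough number on that, Rayleigh scattering strengthens steeply at shorter wavelengths (it scales as 1/λ⁴), so a quick back-of-the-envelope comparison shows how much more readily blue light gets scattered than red. The wavelengths below are just representative picks for "blue" and "red," not anything specific to our atmosphere:

```python
# Rayleigh scattering strength scales as 1 / wavelength^4.
# Representative wavelengths in nanometers (illustrative choices).
blue, red = 450.0, 650.0

relative_scattering = (red / blue) ** 4
print(f"Blue light scatters roughly {relative_scattering:.1f} times more than red.")
```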

But ultimately, xkcd is right that blue is just the color of air, as long as we want to think of color as a property of an object. And why wouldn't we? Well, we can engage in some fun-sucking reductionism by pointing out there is no blueness contained within air, just as there is no greenness contained within leaves. Color arises out of an object's interaction with light and eyes, and it just so happens that a particular interaction involving the sky produces blue. Many philosophers will want to push back against this kind of reductionism by saying, well, okay, then that's just what we mean by the property of blueness: being so configured that interaction with light and eyes produces the subjective experience of blue.

This is a common theme in analytic philosophy. Science has a tendency to unravel our everyday notions by telling us things like, no, we don't really ever touch an object; it's just the electric forces of our skin interacting with the electric forces of the couch. But philosophers balk at this by arguing that we clearly successfully communicate something when we say that, for example, humans have touched the surface of the moon. So let it be that what touching really means is... you get the idea.

But then what does it really mean to say that an object is blue, if blueness is a property that arises only through interaction? Well let's do a little thought experiment. Imagine that one of those TRAPPIST-1 worlds—tidally locked into its orbit around a cool red dwarf—has an atmosphere just like ours. On tidally locked worlds, the sun never rises or sets. One half of the planet is always facing the sun, while the other half never sees it. This could lead to a situation (although an atmosphere probably helps to mitigate it) where one half is a blasted hell hole and the other is a frozen wasteland. Consequently, many scientists and SF authors have imagined life arising only in a narrow strip of twilight at the terminator between night and day. There, the temperature might be just right for life. With a cool red sun (meaning much less blue light to start with) always on the horizon, a sky such as ours might always be some shade of red.

Credit: ESO
Nevertheless, scientific-minded aliens in the twilight might eventually learn the composition of the atmosphere, learn about Rayleigh scattering, and come up with a neat science fact: you know, if you were to shine an enormous amount of white light through our atmosphere, it would appear blue. But is that a good reason to say that the atmosphere is, in fact, blue?

Let's go a step further. Say that the general lack of short wavelength light means that these aliens' eyes never evolved sensitivity to blue light at all. Again, they could perform experiments and develop a theory of optics, but there's no situation in which they would describe the sky as blue, because they have no concept of blue at all.

However, blue-seeing humans are only 40 light years away, so we might someday travel there and explain the reality to them. We might say, your sky looks red, but that is only an illusion. If your eyes were sensitive to short wavelength light, and your planet were not tidally locked, and your star were luminous enough to shine brightly across the specific range of 400-700 nanometers, then you'd see that in reality your sky is, in fact, blue. The aliens would twirl their fuzzy tentacles in derision and laughter, as aliens are wont to do.

Now you might object here and say that we have plenty of names for things we don't have direct subjective experience of. For example, we've labeled the rest of the electromagnetic spectrum, from gamma rays on up to radio waves, even though we only have access to a tiny bit of that spectrum. And that's true enough, but we wouldn't say that the color of an object is x-ray. There might be some property there, but it's not color.

Okay, but let's turn the tables around here. Maybe TRAPPIST aliens are sensitive to infrared light and have a whole host of specific names for the wavelengths they subjectively experience in that range. That sounds a lot like color, too, and it seems anthropocentric of us to deny them their infrared colors. So we can say that blue is a human (or Earth creature) color and that an object is that color when it reflects light in a particular range of wavelengths. That's what color is: the subjective experience of a particular wavelength of light.

But then the aliens might ask, so what's the wavelength of this "brown" color you humans are always talking about? Brown does not have a wavelength; it doesn't show up in the rainbow. Brown is a color humans experience because our perception of color is based on more than just wavelength; it also includes contrast levels and overall brightness. Brown only shows up when something with a red or yellow wavelength is dim compared to what’s next to it.

Purple, too, is not a "real" color by the rough definition given above. It is not composed of a single wavelength but multiple wavelengths that our brains interpret as a single color. Why? Because we don't actually have perfect, exact wavelength detectors in our eyes. Instead, we have three different kinds of cones (photoreceptor cells) that absorb light in three ranges of wavelengths that overlap a bit.

Credit: Vanessaezekowitz at Wikipedia
Our brain figures out what color we're seeing not by identifying a particular wavelength but by adding up how much each type of cone has been stimulated. When a blue cone starts firing more than the rest, our brain will interpret that as seeing blue. But we don't have purple cones. Instead, the human brain has made up the color purple for those situations when our blue and red cones are firing at equal rates.

So what do we say when the aliens ask what it means for something to be purple? Oh, an object is purple when it reflects both short wavelength and long wavelength visible light in a situation where creatures evolved to pick out that combination as signifying something distinctive. Ah, yes, of course.

All of this is not to say that there's no such thing as color, or that trees aren't brown. Again, it does no one any good to object to every statement about the color of an object by saying, "Well actually, leaves absorb everything but green!" So yes, the sky is blue because air is blue. That is a perfectly fine answer that conveys an important aspect of what color is all about. But that important aspect might not be that color depends on reflection; rather, it might be that the idiosyncratic history of our sun, our planet, and our species has led to the subjective experience of color.

Tuesday, March 21, 2017

A Heart to Heart Talk

Several billion years ago, a bright red star the size of Earth's orbit beat like a heart in a spiral arm of the Milky Way galaxy. Already billions of years old, this star had long since fused all the hydrogen in its core into helium. Eventually, the star grew hot enough that the helium ash could begin to burn, slowly transforming the core to carbon and oxygen. When helium in the core finally ran out, the billion year balance between gravity and radiation that every star battles to maintain gave way, and the core contracted and grew hotter. Feeling the heat, the outer envelope expanded and cooled, and a red giant was born.

This giant soon found a new, but ultimately short-lived balance in a period of its life known as the asymptotic giant branch (AGB) phase. Now, a thin shell of hydrogen surrounding the core grew hot enough to burn, producing a new layer of helium that settled onto the core. After tens or hundreds of thousands of years, the helium layer grew hot and dense enough to start its own fusion cycle, leading to a brief helium shell flash. In those moments, the star's brightness would jump by a factor of a thousand before returning to its quiescent, hydrogen-shell burning stage. This was the slow beat of the giant's heart.

Credit: Lithopsian
How do we know this story about a giant, pulsating star that died long before ours was born? We have observational and theoretical evidence that stars like this exist. With telescopes, we have found stars with masses comparable to our own that are tremendously brighter but cooler (on the surface). To be so bright yet cool, such stars must occupy a very great volume. We have also built models of stellar evolution by observing many different stars and figuring out how ones that look different might just be the same kind at different stages of life.

But what about this specific red giant from billions of years ago—how do we know about it? What lets us peer into its heart? Well, we don't know its name or where the cooling remnant of its core is now, but we do know this star was part of a lineage, inheriting the cosmic dust from previous stars and passing it on to us, but transformed. In the roiling convective envelope that surrounded the core of this red giant, there were atoms of iron built by some older star's fusion.

Iron is the endpoint for fusion that can power a star. For all elements with fewer protons than iron, smashing them together at high enough temperatures and densities liberates more energy than is required to do the smashing. But this doesn't work after iron, because you've got so many positively charged protons squished into such a small space that they strongly resist any further squishing. You can still do it, but you're losing energy. Nevertheless, this type of fusion does happen in the outer layers of dying stars, draining a bit of the star's energy with each reaction.

This process of building up elements in stars—known as stellar nucleosynthesis—was first described comprehensively in a famous astrophysical paper known as B2FH (after the initials of the four authors). In it, they gave a detailed account of the nuclear physics required to produce all the elements we see in nature. Spectrographic analysis of our star and ancient meteorites that existed in the early days of the solar system has largely confirmed that elements do exist in the proportions dictated by stellar nucleosynthesis.

But let's get back to the iron in that giant. Here, a type of nucleosynthesis known as the s-process was dominant. One way to build new elements is to bombard atoms with neutrons. Every once in a while, an atom will capture a neutron and become a radioactive isotope of whatever element it is (as determined by its number of protons). Eventually, beta decay will turn one of the neutrons in the nucleus into a proton, which then bumps that atom up to the next element in the periodic table. This process starts with iron and ends with bismuth.

As you can see, there are two reactions going on here: neutron capture and beta decay. Because of this, the relative rates at which these two reactions occur determine the eventual abundances of elements we see. In AGB stars, neutron capture happens much more slowly than beta decay, which means that we will eventually see a ladder of elements building up from iron rather than more and more weird isotopes of iron.

Let's look at one element in particular to see how this whole thing works. The element thallium has 81 protons and shows up in nature with either 203 or 205 total nucleons (protons+neutrons). 204 nucleons is unstable and decays with a half-life of less than 4 years. That means there is a branching point when thallium reaches 204 nucleons. From there, it can undergo beta decay and become lead with 82 protons, or it can capture another neutron and remain thallium. About 70% of thallium is the 205 kind, while 30% is the 203 variety. (There is more thallium-205 because lead-205, which you get to by thallium-205 or lead-204, is unstable over millions of years and eventually decays back to thallium-205.)

Credit: R8R Gtrs
By experimentally determining how likely thallium is to capture a neutron and how quickly it decays, we can infer how often atoms of thallium in that red giant were being bombarded with neutrons. Knowing the density of neutrons in the AGB star tells us what nuclear reactions were creating neutrons and consequently how hot the core of that star was and what elements it was composed of. It turns out that the abundances of elements we see would require a range of neutron fluxes, which is part of how we know that AGB stars undergo pulses of helium fusion before returning to hydrogen-shell burning.
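As a toy version of that inference, here is a minimal sketch of the branching at thallium-204. The half-life is the measured one (about 3.8 years); the neutron-capture timescale is a number I've invented purely for illustration, since in a real AGB star that timescale is exactly the thing the neutron density determines:

```python
import math

# Beta decay: thallium-204 has a half-life of about 3.8 years.
decay_rate = math.log(2) / 3.8      # decays per year

# Neutron capture: pretend a nucleus waits ~10 years, on average, to capture
# a neutron (a made-up figure for illustration, set by the neutron density).
capture_rate = 1 / 10.0             # captures per year

# Fraction of thallium-204 nuclei that capture a neutron (becoming
# thallium-205) before they can decay to lead-204.
fraction_captured = capture_rate / (capture_rate + decay_rate)
print(f"{fraction_captured:.0%} climb to thallium-205; the rest decay to lead.")
```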

Because AGB stars are about as large as Earth’s orbit but of comparable mass to our sun, their gravity is not strong enough to contain their extended envelopes. This means much material is lost, becoming a "planetary nebula" and eventually dispersing into interstellar space. That includes the products of nucleosynthesis, which come to pollute cold, giant molecular clouds.

About four and a half billion years ago, one such polluted cloud became unstable and collapsed. Out of that collapse was born our sun and solar system. As Earth formed and mixed together the metals that could withstand the searing heat of our young star, atoms of thallium got locked up in minerals of copper and lead and zinc.

Eventually, humans came along and started extracting pure thallium to do things with it, such as performing experiments that could give us insight into the hearts of long-dead stars. A week or two ago, some pure thallium-203 was bombarded with protons until it became lead-201, which has a half-life of 9.4 hours. The lead decayed into thallium-201, which has a half-life of 73 hours. Because of that short lifetime, the thallium must be prepared and used quickly. This specific batch was mixed with hydrochloric acid to produce thallium chloride, which was then put into a solution and packaged for use.

Four days ago, that radioactive thallium was injected into my veins. Because thallium behaves a bit like potassium as far as cells are concerned, sodium-potassium pumps in the membranes of cardiac cells take in the thallium. These pumps transport ions of sodium and potassium, creating a voltage that gives cardiac cells the electricity they need to beat. Cardiac cells that are working well have functioning pumps and will take up the thallium; cells that aren't won't. To make sure the thallium was well circulated in my heart, they had me run on a treadmill until I got to 160 bpm.

Thank you, Frinkiac.
To see where the thallium in my blood ended up, a camera took pictures of the gamma rays streaming out of my body. But gamma rays present something of a problem. In a normal camera, a lens focuses light rays onto a surface to form an image. In telescopes, we mostly use mirrors to bounce light in the direction we want. This doesn't work with gamma rays, however. Their incredibly short wavelength means that for everyday materials, they will either be absorbed or transmitted, but not redirected. When a photon is simply absorbed without any optics, information about where the photon comes from is lost and you no longer have an image.

Astronomers have devised many clever techniques for getting images from x-ray and gamma ray sources, one of which works for looking into hearts, too. You can preserve the image of a source by creating a very small aperture for light to pass through—a pinhole camera. On the other side of that pinhole, you have a detector. Because you can trace just a single line from where a photon hits the detector to the pinhole, you know what angle that photon came in at and thus know what the original source of the image was. The downside to a pinhole camera is that almost all of the light is blocked. To get around this, you can create an aperture with a very specific shape that lets in more light but leaves a distinct "shadow" on the camera. Using computational techniques, you can then reconstruct the original image.

Credit: Alex Spade
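Here's a minimal sketch of why the pinhole preserves directional information (made-up geometry, not the actual gamma camera's specs): because every photon that reaches the detector had to pass through the single pinhole, the position of a hit maps straight back to an incoming angle.

```python
import math

# Toy pinhole geometry (illustrative numbers only).
PINHOLE_X = 0.0          # pinhole position along the aperture plane, in cm
DETECTOR_DISTANCE = 10.0 # pinhole-to-detector separation, in cm

def incoming_angle(hit_x):
    """Angle (in degrees, measured from straight-on) of the photon that
    produced a hit at position hit_x on the detector, assuming it came
    through the pinhole."""
    return math.degrees(math.atan2(hit_x - PINHOLE_X, DETECTOR_DISTANCE))

# A hit 2 cm off-center corresponds to a source about 11 degrees off-axis
# (on the opposite side, since pinhole images are inverted).
print(incoming_angle(2.0))
```
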
The camera they used rotated around me for eight minutes, producing cross-sections of my heart at different angles that were later combined to form a 3D image.

I don't yet know the results of that test (although I suspect I am okay), but I am comforted by the thought that the thallium used to peer into my heart can also peer into the hearts of long-dead stars, to give a glimpse of another world, an incomparably gigantic furnace burning at hundreds of millions of degrees that does its part in seeding the galaxy with the elements necessary for chemistry and life. I am also comforted to know that I am a part of that lineage, that my carbon was produced in another dying star, that the hydrogen in my water is nearly as old as the universe itself. I hope this specific agglomeration of carbon and water persists a bit longer, but I am happy nonetheless that the universe is eternal and spectacular and knowable.

Monday, February 27, 2017

Snow Line and the Dwarf's Seven

I'm really sorry about the title. Not sorry enough not to use it, of course, but a little sorry.

So you may have heard about the recent discovery of a nearby solar system (a mere 39 light years away!) with seven planets all packed very close to the star (an M-dwarf). The discovery is significant because (a) some of the planets look to be rocky, Earth-sized, and in the habitable zone; (b) the relative nearness of the system makes it a prime target for further investigation; and (c) it's super rad. The occasion gives me the opportunity to explain a bit about how discoveries like this get made while waxing philosophical about the nature of astronomy itself. As a guy with an astronomy degree (I don't feel comfortable calling myself an astronomer) who (kind of) teaches an intro astronomy class, this is basically my job.

Conveniently, last week's discovery does an excellent job of illustrating three aspects of astronomy that I think set it apart from other sciences. (Or possibly my own confirmation bias leads me to see these aspects expressed, but let's leave that for another post.) These features are encapsulated in a kind of motto for astronomy that I've been using recently.

It goes like this: astronomy is the science of what you see when you look up. This sentiment conveys that astronomy is ancient and public, because for thousands of years, anyone could do astronomy just by turning their heads skyward and paying attention. Secondly, astronomy is bound (mostly) by sight, which is a limitation that forces astronomers to be both careful and creative. And finally, “up” is a pretty wide direction, and astronomy encompasses everything from the moon to other stars to the birth of the universe itself and anything else we find along the way.

All of this ties together into something truly remarkable. Astronomy has the power to transform points of light—the ever-present night sky that we rarely stop to consider deeply—into a story about exploding stars and merging galaxies and dark matter halos all under the spell of gravity in a dance that goes back billions of years and will probably continue for many orders of magnitude longer than it's lasted so far. And what's more, we have good reason to be confident in this story. How does astronomy manage to do this? Well, let's take a look at those seven newly discovered exoplanets.

While we've only known about exoplanets for a couple decades now, the study of planets more generally is, like the rest of astronomy, incredibly ancient. There are five planets visible to the naked eye (Mercury, Venus, Mars, Jupiter, Saturn) that have been known since antiquity. The first person to discover a new planet (Uranus) was William Herschel, using a telescope he constructed himself. Neptune followed, after Urbain le Verrier noticed that, after adding up all the known gravitational influences on Uranus, its calculated position on any given night was a little off from its observed position. He predicted that a planet farther out was gravitationally tugging on Uranus, so the astronomer Johann Gottfried Galle looked where le Verrier said to and found another new planet.

I'm giving this brief (and incomplete) history lesson because the fact of the sky always being up there makes astronomical discoveries collaborative and open. There's a parallel in last week's exoplanet discovery both in terms of that public nature and gravitational perturbations. Moreover, discovering new planets used to be a once in a generation kind of thing, but now we've discovered thousands of them and just found seven in one system. Astronomy is a gigantic, ever-expanding field; whenever we look somewhere new or look in a new way, we find new stuff.

So let's talk about TRAPPIST-1. While NASA had a big press conference about the discovery (and they were involved), this was a remarkably international effort, involving astronomers and telescopes from all over the world. Most exoplanets discovered so far have involved space telescopes because the atmosphere makes detecting subtle changes in a star's light curve difficult. A relatively cheap solution being used now is to image the same star many times either with multiple ground-based telescopes or the same scope repeatedly. This lets you produce a single, high quality light curve and means that anyone can get in on the exoplanet discovery game. With a small telescope that spends all its time looking at large patches of the sky, you can detect (and re-detect) the faint signatures of exoplanets. Once TRAPPIST and the other telescopes involved made those initial findings, NASA pointed the Spitzer Space Telescope at TRAPPIST-1 to confirm the discovery.

Okay, but how did these telescopes actually discover the seven exoplanets? This is where the central limitation of astronomy—sight is (just about) our only tool—leads to very creative solutions. The way that we transform TRAPPIST-1 from a point of light into a star with seven worlds is by performing high-precision photometry to construct a light curve of the star. A light curve is the change in a star's light over time. To get an accurate one, you need to get high quality images on short timescales. This runs counter to a very useful tactic in astronomy, which is to collect light from a source over a long period of time to produce a single, bright image. But if you do that, any deviations during that integration time get smeared out and missed.

To detect exoplanets, the deviations you're looking for are dips in the star's brightness at regular intervals. If your telescope, the star, and a planet happen to line up exactly, then every time the planet passes in front of the star from your perspective, the star gets a little bit dimmer. It's just like a solar eclipse here on Earth, except that these planets are much too far away from us to block out all the light of their parent star. Instead we see a tiny drop in brightness.

But these transits reveal a lot of information. First, the time between transits tells us how long the planet's year is. Combined with an educated guess about the star's mass (by taking its spectrum), we can figure out how strong gravity's pull on the planet is, and consequently the distance it needs to be from its star to complete an orbit in the observed time. The more massive the star, the faster a planet orbits at a given distance. Finally, the fraction of light blocked during a transit tells us how big the planet is compared to the star. Another educated guess about the star's size tells us the actual physical size of the planet.
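Here's that arithmetic as a short sketch. The numbers are illustrative stand-ins at roughly TRAPPIST-1's scale (a star of about 0.09 solar masses and 0.12 solar radii, a six-day period, a fraction-of-a-percent dip), not the values from the discovery paper:

```python
import math

G = 6.674e-11                       # gravitational constant (SI units)
M_SUN, R_SUN = 1.989e30, 6.957e8    # solar mass (kg) and radius (m)
R_EARTH, AU = 6.371e6, 1.496e11     # Earth radius (m) and astronomical unit (m)

# Illustrative inputs, roughly TRAPPIST-1-like (not the published values).
star_mass = 0.09 * M_SUN            # educated guess from the star's spectrum
star_radius = 0.12 * R_SUN          # another educated guess
period = 6.1 * 86400                # time between transits, in seconds
transit_depth = 0.007               # fractional dip in brightness

# Kepler's third law turns the period and stellar mass into an orbital distance.
a = (G * star_mass * period**2 / (4 * math.pi**2)) ** (1 / 3)

# The depth of the transit is (planet radius / star radius)^2.
planet_radius = star_radius * math.sqrt(transit_depth)

print(f"orbital distance: {a / AU:.3f} AU")
print(f"planet radius: {planet_radius / R_EARTH:.2f} Earth radii")
```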

So by looking very precisely at how a star twinkles, we can deduce the presence of a planet and make a reasonable guess as to how big it is and how close it is to the star. We can do this despite not actually being able to see the planet itself, which is much too small and dim next to its parent star to resolve. But I've been talking about one planet this whole time, and these astronomers discovered seven. You might think sussing out the details of seven different transits while also accounting for anything else that might mess up your photometry would be difficult, and you'd be right. The primary way the team identified seven different planets was through a statistical analysis of the transit times to come up with a chart that looks like this:

Credit: ESO/M. Gillon et al.
As a rule, planets don't share orbits; doing so isn't stable. Each orbit has a definite period, and each period corresponds to an orbital speed, which tells you how long the transit should last. So if you identify a transit of a particular duration that repeats regularly, then you've found yourself a planet. If you see six or seven different regular transit times, you've found six or seven different planets.

There is a snag in all this, however, called TTVs—transit timing variations. That is, sometimes a transit happens earlier or later than expected. In this case, the variation could be up to half an hour. But it turns out this snag contains even more information, because this sounds an awful lot like the error le Verrier noticed in the orbit of Uranus. The planets weren’t where astronomers thought they would be given just the gravitational influence of the star, which means the planets—all extremely close to each other—are tugging on each other significantly.

Because so much is unknown about the system, the problem is much more complicated than the orbit of Uranus. Le Verrier was able to do a laborious calculation by hand using perturbation theory, but the complexity of TRAPPIST-1 requires a slightly faster technique if you want to publish before the stars all die and we’re left in darkness. So instead the team constructed simulations of the system where they plug in the laws of physics and then vary the unknown orbital parameters to see what kind of planetary systems evolve that match the one they observed. In the end, they’re left with a set of possible masses that could produce the tugging required to account for the transit timing variations.

Even doing this produced a wide range of possible answers, which led to a great quote in the article: "The system clearly exists, and it is unlikely that we are observing it just before its catastrophic disruption, so it is most probably stable over a significant timescale." The relevance is that the system's existence is itself a piece of data, which means that as more observations are done, the assumed stability of the system can help to rule out orbital parameters that would produce an unstable system.

With those uncertainties understood, the team was able to estimate that most of the planets are in the neighborhood of Earth's mass. If you know the size and the mass, you also know the density. The worlds of TRAPPIST-1 are all rocky (high density) as opposed to gassy (low density). The proximity to the star itself is also important. If planets are too far out from their star—past the snow line—then water and other volatiles condense into ice. Far enough inside that line, however, and water can remain a liquid. Too close, and the liquid evaporates. These planets are all at the right distance to have liquid water.

An entire system of rocky, Earth-sized worlds warm enough to have liquid water—this is why everybody is so excited and why astronomers are going to keep watching these planets. The Kepler Space Telescope is currently looking at the system, and the James Webb Space Telescope will too when it launches. The relative nearness of the system to us means that it is fairly easy to observe. As new observations come in, we could learn about the planets' atmospheres—their density, composition, and variability—and whether they experience tidal heating and geological activity. Are these complex, intriguing worlds like the moons of Jupiter and Saturn or airless rocks scoured dry by the flares of their parent star? We just have to look up to find out.

Thursday, January 12, 2017

When You Think Upon a Star

Among the sciences, astronomy benefits from widespread public appeal. Hilariously large numbers and gorgeous images make it an attractive source for science news. The result is that some difficult notions from astronomy have managed to penetrate successfully into public awareness. For example, this meme, which I've run across several times:

I got this image here (which, incidentally, is a blog post doing exactly what I'm about to do), but I've seen this meme in other forms elsewhere and have no idea what its original source is.
I'd like to say that I feel conflicted about this meme—that I'm happy the joke relies on knowledge of astronomy (the immense size of the universe versus the finite speed of light), despite the specific fact it calls upon being incorrect (visible stars are almost certainly still alive)—but that would be a lie, because I'm an enormous pedant.

However, in this post I'm going to steer my pedantry in what I hope is a slightly more interesting (and less annoying) direction, toward mathematical reasoning. That is, while I think it's great that the public has been able to learn certain specific facts about astronomy (and other sciences), I think it would be far more valuable if the public learned how to apply mathematical reasoning to claims they encounter.

Here's why: as a recent graduate with an official degree in astronomy and all that jazz, I happen to simply know the fact that, in general, the stars we can see with the naked eye are close enough, and live long enough, to still be alive by the time their light reaches us.

But even if I didn't know that fact, I could arrive at it by constructing an argument from some more readily available facts. And this argument, although mathematical in nature, doesn't involve anything more complicated than a bit of algebra, such that anyone who gets out of high school should be able to reach the same conclusion.

Now, the joke's humor relies on some common facts from astronomy: stars are far away, light is slow compared to the size of the universe, stars eventually die. But before we get into the mathematical meat of evaluating this claim, let's think about another common fact: our sun is 4.5 billion years old (give or take), and it's roughly halfway through its life, so it's got another several billion years to go.

In order for the sun to be dead by the time an alien civilization sees its light, that civilization would have to be farther away in light years than the sun's remaining age in years. That is, the alien civilization would have to be many billions of light years away. So if we take our sun as typical, then the above meme is only true if we can see, by the naked eye, stars that are billions of light years away. We can't, and as I'll show in a bit, we don't even have to assume our sun is typical for this argument to work (which it isn't, really). But this is the structure of the mathematical argument: compare the lifetimes of stars we can see with the naked eye to their distances from us.

A few more astronomical facts are necessary to work this out, some of which can be gotten by a bit of googling, and one which, I admit, most people probably aren't aware of. This fact, which makes evaluating the claim very easy, is that the more luminous a (main sequence) star is, the shorter it lives. This means the most luminous stars (which are the most likely to be visible by the naked eye at great light travel times) are the best candidates for stars that are dead by the time their light reaches us. If the claim fails for these stars, it fails for all stars.

The most luminous stars live about a million years and are about a million times brighter than the sun. Now, it's always possible that a star we're seeing just happens to be at the end of its life, but all else being equal, if we pick stars at random out of the sky, then on average they will be halfway through their lives, just like (coincidentally) our sun (not strictly true, because there is some selection bias to the stars we can see).

To be visible by the naked eye, a star needs to have an apparent magnitude of 6 or lower.

For the sun to be magnitude 6 (it's currently an obscenely bright -27), it would have to be about 60 light years away. (There's some math involving logarithms here, but there are tools online that could get you this answer.)
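For anyone who wants the logarithm spelled out, this is the whole calculation; the only input besides the magnitude-6 limit is the Sun's absolute magnitude of about 4.8:

```python
M_SUN = 4.83           # the Sun's absolute magnitude
M_LIMIT = 6.0          # faintest magnitude visible to the naked eye
LY_PER_PARSEC = 3.26

# Distance modulus: m - M = 5 * log10(d / 10 parsecs), solved for d.
d_parsecs = 10 ** ((M_LIMIT - M_SUN + 5) / 5)
print(f"{d_parsecs * LY_PER_PARSEC:.0f} light years")  # about 60
```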

How bright a star appears to us is proportional to its intrinsic brightness and inversely proportional to the square of its distance from us. That is, if star A and star B are identical but star B is twice as far away, it looks 1/4 as bright as star A.

And that's everything we need to evaluate the claim. Now here's how we construct the argument. A star is dead by the time its light reaches us if its remaining lifetime in years is less than its distance in light years. It is visible with the naked eye if its distance, in units of the 60 light years at which the Sun would be magnitude 6, is less than the square root of its intrinsic brightness relative to the Sun.

Let me unpack that second statement a bit. Say a star is intrinsically four times as bright as the sun. If it's also magnitude 6 (just visible), then it needs to be farther away than the sun. Specifically, a star four times as bright as the Sun will be just visible at twice the distance (square root of 4) of the magnitude 6 sun: 120 light years. If it is farther away, it is too dim for us to wish upon it.

The brightest stars are 1,000,000 times more luminous than the sun, which means they are the same apparent brightness as the Sun when they are 1,000 times farther away. If the sun is just visible at 60 light years, then the brightest stars are just visible at 60,000 light years. Is 60,000 light years greater than the (on average) half a million years the star will have left to live? No. At that distance, the star could only be dead by the time we see it if it were already 95% of the way through its life. For less luminous stars which live longer, that percentage gets even higher, which makes it much less likely that we ever see such a star.
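Putting the pieces together, here's a sketch of the comparison for a star of any luminosity, using the 60-light-year figure and the halfway-through-its-life assumption from above:

```python
import math

def just_visible_distance_ly(luminosity_in_suns):
    """Farthest distance (light years) at which a star can still be seen by
    the naked eye, scaling from the Sun being just visible at ~60 light years."""
    return 60 * math.sqrt(luminosity_in_suns)

def could_be_dead(luminosity_in_suns, total_lifetime_years):
    """Is the light travel time longer than the star's average remaining life
    (taken, on average, to be half its total lifetime)?"""
    return just_visible_distance_ly(luminosity_in_suns) > total_lifetime_years / 2

# The most luminous stars: about a million Suns, living about a million years.
print(just_visible_distance_ly(1e6))   # 60,000 light years
print(could_be_dead(1e6, 1e6))         # False -- the meme fails even here
```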

When we learned algebra via word problems, we were supposed to be learning how to solve problems like these. And while most of us probably managed to get through those word problems successfully, it's been my observation that most of us don't apply this kind of analysis outside of school, to things like evaluating claims that have mathematical content. While it's not vital to the health of a democracy that we be pedantic about random Facebook memes, it might be useful for us to be able to think carefully about scientific claims, at least when the facts and math involved don't require a PhD.

Outside of learning a bunch of astronomical facts, one of the most valuable (academic) lessons I acquired while getting my degree was learning how to bring mathematical tools to bear on a problem. I'm sure this blog post doesn't really have what it takes to impart that same lesson to others, but I hope it reveals a bit of the process. If I could wish upon a star (and I were feeling altruistic), I might wish for an educational system that did a better job of that.

Thursday, September 1, 2016

Live From Low-Earth Orbit!

It looks like I disappeared again. Or maybe I was just too faint to detect above the noise of the internet. Sorry about that. To make up for my absence, this post will have a whole bunch of pictures. After all, there is a favorable exchange rate between pictures and words.

What's brought me out of hiding today is a very cool new account on Twitter. Because the Hubble Space Telescope kind of belongs to the American public, it has started live tweeting where it's looking, what tools it's using to do that looking, and who told it to look there. So you get stuff like this:
The picture is not what Hubble was looking at right then but an image pulled from the Sloan Digital Sky Survey. Hubble can't usefully beam images directly to us, because everything Hubble (and all other telescopes) looks at has to be processed. This notion makes people grumble, because they want to see the raw, unmanipulated data in its purest form rather than rely on whatever artistic license NASA has exercised.

But raw images in astronomy (and raw data more generally in science) simply aren't useful. In fact, they don't even exist, because any contact with an instrument inevitably distorts the data. The purpose of processing images, then, is to remove the imprint of the instrument on the image and hopefully recover what's actually there.

The coolest part about Hubble_Live is that it tweets out this process, too. There are many ways astronomers attempt to extract the true signal from the data collected, but I want to talk about three of the big ones I've learned about and which Hubble employs: the dark calibration, the bias calibration, and the flat field (what Hubble calls the DARK-EARTH calibration).

Hubble performs these calibrations in order to figure out how it's interfering with the pictures it's taking. To see what these calibrations do, I want to show you some data my classmates and I took with a much smaller, terrestrial telescope last fall. We were looking at the Ring Nebula, which Hubble has an obnoxiously gorgeous picture of here for reference:

NASA, ESA, and the Hubble Heritage (STScI / AURA)- ESA / Hubble Collaboration
The Ring Nebula is faint, so to image it we tracked it for two minutes, letting the charge-coupled device (CCD) at the bottom of the telescope count up the photons streaming in from space. But a CCD is not really a camera. It's more accurate to think of a CCD as an electron counter.

At each pixel, there's the electric equivalent of a little bucket that collects electrons and converts them into a voltage that can be measured and manipulated digitally by a computer. Ideally, the way the CCD counts electrons is by having them knocked into the pixel bucket by incoming photons. But there are other sources of electrons, too. If you don't take them into account, you end up with an image that doesn't correspond to what you were looking at. So here's the raw data of the Ring Nebula taken by our telescope:
Ignore the numbers.
As you can see, well, err. Now I said this is the raw data, not an image, because what I'm really showing you is a two dimensional matrix where the intensity at each pixel is proportional to the number of electrons that were counted at that pixel. There's no sense in which this is representative of what a human would see if they had eyes as big as a telescope and could store light for two minutes. It's just a graphical representation of the electrons counted. All pictures you see--whether from Hubble or your smartphone--are just that. The difference is sometimes we want to modify that matrix so it looks something like what people see.

I'm being a little disingenuous here, though. The Ring Nebula is in this data, but because it is very faint compared to some of the pixels in the image, it's not apparent. I can turn up the contrast by bounding the brightness levels you're allowed to see, and then the nebula does appear.
Color photography is so overrated.
But I haven't done anything scientific here. I haven't calibrated the data at all, just chosen what data to show you. This isn't a more accurate or useful representation of the data. To get a scientifically meaningful image, we have to account for all the extra electrons our CCD has picked up.

One electron source is the instrument itself, which because it is not at a temperature of absolute zero consists of vibrating molecules that can occasionally knock an electron into the pixel bucket. This is called the "dark current," because it shows up even when the telescope isn't looking at anything. The warmer your telescope, the larger the dark current will be, which means that weak signals can be lost in the noise of the telescope's heat. You can minimize that heat and detect faint signals by keeping your telescope cold (like, say, by putting it in space).

To determine the dark current, Hubble does a dark calibration, which essentially amounts to taking a picture of the same exposure length as your actual picture but with the lens cap on. That way the only electrons detected will be those coming from the heat of the instrument. Once you know what this average amount of heat is, you can subtract it from the electron counts of your image. Here is an image of the dark frame from our observations:
Think TV static.
The intensity of our dark frame is about 60% of the intensity of our image, which means that by subtracting it from the image, we're losing a lot of information on faint sources. But if we don't subtract the dark current, we're overestimating the true brightness of the Ring Nebula by a factor of roughly 2.5, which would lead to some pretty bad science on our part.

Another source of electrons is the electronic components of the CCD. To operate properly, a CCD requires a certain voltage to be coursing through it constantly. For Hubble, this is the BIAS calibration, because you can think of the CCD voltage as being a bias introduced into the electronics in order to produce usable data. Telescopes acquire a bias frame by taking a zero-second exposure that doesn't let in dark current electrons or photoelectrons. Hubble does this separately from taking its dark calibration, but in certain situations you can also simply assume that your dark current includes the bias electrons. In that case, subtracting out the dark frame gets rid of the bias electrons, too. That was the case for the data we collected. If you look at what's left over after this subtraction, this is the image you get:
The Thumbprint Nebula (I've zoomed in a bit here.)
While this looks worse than the artificially contrasted version up above, the Ring Nebula pops right out once the telescope's heat and bias are removed, without any manual adjustment of the contrast. By fiddling with contrast, you can create spurious images that don't represent anything actually out there. No artificial structures happened to show up in this case, but the only way to be sure the structure we see in the Ring Nebula is real is to remove the dark current and bias electrons.

Finally (for our scenario), the individual pixels in the CCD might have varying levels of light sensitivity. Since we want each photon to count equally, we have to adjust for these effects. Balancing the variable sensitivity is known as flat fielding, and you produce a flat field by shining a light of uniform intensity across the CCD. When you do this, the CCD should record the same number of electrons (more or less) at each pixel. If some regions of the CCD are too bright or too dim, you know this corresponds to unequal sensitivity. To remove the effects of this sensitivity, you divide your image by the (normalized) flat field, so that the brightness at each pixel is adjusted by a factor proportional to its sensitivity.
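In code, the whole reduction boils down to a couple of lines of array arithmetic. This is a sketch of the procedure described above (with the bias folded into the dark frame, as it was for our data), not the actual script we used in the lab:

```python
import numpy as np

def calibrate(raw, dark, flat):
    """Basic CCD reduction: subtract the dark frame (which here includes the
    bias electrons), then divide by the normalized flat field."""
    dark_subtracted = raw - dark
    flat_normalized = flat / flat.mean()
    return dark_subtracted / flat_normalized

# Tiny fake 2x2 "frames" for illustration; real frames are large 2D arrays
# of electron counts.
raw  = np.array([[1200.0, 1150.0], [1100.0, 1300.0]])
dark = np.array([[ 700.0,  690.0], [ 705.0,  710.0]])
flat = np.array([[ 0.95,   1.05 ], [ 1.00,   1.00 ]])
print(calibrate(raw, dark, flat))
```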

In space, unfortunately, it's difficult to shine a uniformly bright light on Hubble. You might think the Sun would work, but the Sun is way too bright and even a very short exposure would saturate Hubble’s sensors. Saturation causes electrons to bleed over into neighboring pixels and gives you electron counts that are not proportional to the number of photons detected. Instead, Hubble takes flat fields by looking at the Earth, which (with a lot of processing, aided by the fact that the Earth moves beneath Hubble very quickly, blurring any image it takes) can reproduce a flat field. So the DARK-EARTH calibration is Hubble's way of adjusting for the varying sensitivity of its equipment.

On Earth, flat fields are usually produced by shining a light on the dome of your observatory and having the telescope look at that, or looking at a small region of the dark sky before any stars become visible. Here's the flat field we produced:
I think the telescope has floaters.
I suspect we actually did a very poor job of shining light uniformly as I think you can see our light source positioned on the right side there. The smudges, however, probably are true variations in the pixel sensitivity, so producing the flat field removed those. (The ring-like smudge in the middle is an eerie coincidence.) After dividing our image by the flat field, we get this picture:

Possibly the Eye of Sauron (More zooming done.)
The main visual advantage seems to be increased clarity of the inner region of the nebula.

None of these, of course, look like the beautiful pictures we see from Hubble or APOD. There are two reasons for this. One, our telescope simply doesn't have the resolution (or other exquisite features) that Hubble does, so there's a limit to how nice a picture it can take. The second reason, however, is that pretty pictures are created to be pretty, not for doing science. As I said above, this is just a representation of the data, but there are other representations.

In fact, one purpose of this lab was to determine the three dimensional structure of the nebula. That is, is it a donut or a shell? A picture alone can be deceiving. But other methods of interpreting the data might be more useful. So here's another representation, plotting the brightness of the nebula along a particular axis in different wavelengths of light:
Graphs are the best, you guys.
Doing some math on graphs like these, we were able to show that the Ring Nebula is probably more like a thin shell of material than a donut, despite its visual appearance. The ring is a bit of an illusion. But a graph like this is only accurate because of the processing done to remove observational artifacts, even though that processing does not produce an aesthetically pleasing picture.

Nevertheless, what's astronomy without cool pictures? In addition to looking at the nebula with a clear filter, we also used filters that passed only red light from glowing hydrogen and blue/green light from doubly-ionized oxygen. When you clean up that data, assign a color to each filter, and plot them on top of each other, you get this:
Insert riff on Beyoncé lyrics here.
That's not really what the Ring Nebula looks like, but it is one way of seeing it.

Thursday, March 17, 2016

On Guessing

This is a follow-up to my Lagrange point post. At the end, I briefly mentioned the L4/L5 Lagrange points, which are stable and form equilateral triangles with the masses of a three-body system. I'd like to delve into the physics of these points a bit to illustrate something about how physicists solve problems.

That is, physicists (in general) do not like doing calculations. They don't want to sit around all day crunching numbers to arrive at an answer. When you solve a physics problem, the goal is to build as simple a model as possible that captures the essential features of what you're studying. (This is where the spherical cow jokes come in.) That way, if you're lucky, you can avoid having to do a lot of math. Instead you can arrive at the answer you want by symmetry, or dimensional analysis, or guessing.

Guessing is an important part of the physicist's toolkit and some of what makes doing these problems fun (for me, at least). It's easy to stare at a problem for hours and feel overwhelmed by the complexity of it. I liken this to how it feels when you've just begun to write something. You have a blank screen and a blinking cursor in front of you and there's nothing more terrifying or paralyzing.

In writing, sometimes the solution is to just start writing and see where the story takes you. And so it follows with physics. If you have a complex problem, at times the best strategy is to just guess at the answer and see where the physics takes you. In this way, doing physics can be a lot like playing a game or solving a puzzle. It's fun, and I seriously wouldn't still be in school if I thought otherwise.

So let's return to the L4/L5 Lagrange points. In class, when discussing the three-body problem, our professor performed enough derivation to get us to believe that stable orbits can exist. He went through the same argument I used about rotating frames and centrifugal force. So a test mass is in a stable orbit when gravity and centrifugal force cancel out. He then gave us the punch line, telling us where the Lagrange points are, but didn't go through the math of actually finding them. Why not? Because if you do the derivation, the equations of motion you end up having to solve are:

I should probably credit Massimo Ricotti for this.
I'm not going to attempt to explain what all that means. It's ugly, and you wouldn't want to solve that unless you had no other choice. But there is another way. Our professor mentioned that when thinking about the 5 Lagrange points, you can guess where 2 of them (L4/L5) must be.

This intrigued me, which is why we're here today. What makes it possible to guess these locations? As we saw with the L2 point, its exact location is related to the cube root of the ratio between the two big masses. This is (probably) not something you could just pull out of thin air. But that's not the case for L4 and L5. The location of one of these points is at the vertex of an equilateral triangle that has the two large masses at the other vertices. Flip this triangle over and you get the other one. How massive the objects are isn't relevant at all; distance is the only important variable (and two masses can basically orbit each other at any distance they like). So you could conceivably guess the answer just by looking at the problem.

There are a lot more MS Paint illustrations coming. You've been warned.
But what makes equilateral triangles, as aesthetically pleasing as they are, physically appealing? Let's consider a special case and then move on to a more general scenario.

Forget the Earth-Moon system and consider two stars of equal mass in circular orbits about each other. In that case, the stars are actually orbiting their center of mass, which is halfway between the two for equal mass stars. A third body that's motionless in the rotating frame also orbits the center of mass, which means centrifugal force pushes away from that center. To make the problem even simpler, let's put the third body equidistant from the two stars.

I'm a big fan of purple.
Then the forces of gravity to the left and right cancel out, leaving only gravity pulling down and centrifugal force pushing up. To get our Lagrange point, we just need those forces to balance. This means we have to guess how far up from the center the Lagrange point is.

First, let's consider gravity. The total strength of gravity depends on the inverse square of the distance to the stars, d. But we don't want the total force, only the vertical component. That part is a fraction of the total, and that fraction is equal to h/d. This means gravity now depends on the distance to the center of mass and the inverse cube of the distance to the stars.

On the other hand, centrifugal force depends on the distance to the center of mass, h, and the inverse cube of the distance between the stars, a. Our gravity and centrifugal terms are nearly the same, except one uses a and the other d. But we're trying to find d, so let's just guess that d=a. Then all the lengths of our triangle are equal and we've found a point where all the forces cancel out--a Lagrange point. (This guess works because the constants in each equation are the same. Otherwise, d might just be proportional to a.)
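Here's a quick numerical check of that guess, in made-up units where G, the stellar masses, and the separation are all 1. At the equilateral point, the downward pull of gravity and the outward centrifugal push come out identical:

```python
import math

G, M, a = 1.0, 1.0, 1.0             # toy units: two equal masses, separation a

# The stars sit at (+/- a/2, 0), so the center of mass is at the origin.
omega_squared = G * (2 * M) / a**3  # square of the orbital angular velocity

# Equilateral-triangle guess: the test point sits a distance a from each star.
h = math.sqrt(a**2 - (a / 2)**2)    # height above the line joining the stars
d = math.sqrt((a / 2)**2 + h**2)    # distance to each star (equals a)

gravity_down = 2 * G * M * h / d**3 # vertical component of both stars' pull
centrifugal_up = omega_squared * h  # push away from the center of mass

print(gravity_down, centrifugal_up) # identical, so the forces cancel
```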

So there we have it. Using a few reasonable assumptions, a simple model, and nothing more than geometry, we've found the Lagrange points. Where do we go from here? How about back to the Sun-Earth system, where one of the two masses is much, much bigger than the other. If that's the case, then the center of mass moves to the sun, and centrifugal force points directly away from it.

It's a trap!
If we maintain our equilateral triangle guess, where does that leave us? With a problem. The problem is that if you rotate the above picture so that the sun's gravity vector and the centrifugal vector are horizontal, you're left with the Earth's gravity vector at an angle of 60° away from horizontal. This is bad because the "vertical" component of the Earth's gravity isn't balanced by anything else, which means that no matter what values you insert into your equation, there is no equilibrium point. Uh, oh.

But our graph has fooled us here. You see, by moving the center of mass directly on top of the sun, we are implicitly saying that the Earth has no mass whatsoever. And if that's the case, then it has no gravitational force, which means it doesn't need to be counteracted at all. In the limit where the Earth has no mass, the three-body problem reduces to the one-body problem. So there is a point of stability at the equilateral triangle, but also at any point along the same circular orbit.

This wasn't a totally useless exercise, however. It shows us that it's reasonable to expect L4/L5 to be stable from one extreme of equal masses to the other extreme of just one big mass. But we haven't yet proven that the L4/L5 points exist where they do for any arbitrary masses. How do we do that? First, let's make a generic diagram describing the situation.

You made it.
Let's say that Star A has a mass of m and Star B has a mass of km, where k is some fraction between 0 and 1. This means we can vary between the two extremes of equal mass (k=1) and one dominant mass (k=0). The smaller k is, the farther to the left the center of mass moves, the smaller Star B's gravity vector is, and the more horizontal the centrifugal vector gets. This should mean that the forces pointing to the right stay balanced. Additionally, as k gets smaller, there is less overall gravity pointing down, but because the centrifugal force is getting more horizontal, that gravity has less it needs to counteract. So our equilateral triangle still looks good.

To prove the general validity of our guess, let's see what happens if the interior angles are some arbitrary angle, rather than the 60° they must be. We have to compare the combined vertical force of gravity to the vertical centrifugal force. Using trig, we can find the distance from the test mass to a star in terms of a and θ. Because of the inverse square law of gravity, a is going to be squared. Trig also gets us the vertical component of that force in terms of θ.

On the other hand, centrifugal force depends on the distance to the center of mass, l. But because we only want the vertical component, the actual location of the center of mass is irrelevant and all we need is h, which again can be found in terms of a and θ. As before, centrifugal force also depends on the inverse cube of a, so some canceling of exponents means it's the inverse square of a that shows up.

Because both expressions depend on the square of a, we can get rid of it. Both forces are also equally dependent on the sum of the masses of the two stars, so we can cancel the mass terms, too. This means our equation is now defined entirely in terms of θ. After a little algebra, we can arrive at the following equality:

sin(θ) = 1/2

Everything else in our equation is gone. All that matters is the angle between h and d. Now, I just happen to know that the sine of 30° is 1/2. This means the full interior angle at the test mass is 60°. With our guess that the test mass is halfway between the two stars, the only possibility is an equilateral triangle with interior angles of 60° and sides of length a. (A similar argument can be made for the horizontal components of the forces.)
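
If you'd like to check that "little algebra," here's the vertical balance run through sympy. The labels are my reading of the diagram (a for the stars' separation, θ for the angle between h and d), so treat it as a sketch of the algebra rather than a transcription of the figure:

    import sympy as sp

    a, theta = sp.symbols('a theta', positive=True)

    r = (a / 2) / sp.sin(theta)    # distance from the test mass to either star
    h = r * sp.cos(theta)          # height above the line joining the stars

    # Vertical gravity goes as cos(theta)/r^2; vertical centrifugal force goes as h/a^3.
    # The factors of G, the stars' masses, and the test mass cancel from both sides.
    ratio = sp.simplify((sp.cos(theta) / r**2) / (h / a**3))
    print(ratio)                   # 8*sin(theta)**3 -- setting this equal to 1 recovers sin(theta) = 1/2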

I should note that this doesn't prove that there aren't other Lagrange points forming different triangles when the test mass is not halfway between the two stars. To see that there can't be other points of stability (except on the line joining the two stars), you need to solve for the effective potential of the force fields at work in this system. That can't be done by guessing, but it can be done by drawing! Unfortunately, drawing equipotential surfaces would strain my artistic talents past their breaking point. Here's some computer art instead.

Credit: NASA / WMAP Science Team
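
If you'd rather generate your own computer art, contour plots like that come from the effective potential of the rotating frame: the two gravitational wells plus the centrifugal term. Here's a rough matplotlib sketch with toy masses I've picked just so the contours show up; it's an illustration of the idea, not a reproduction of the figure above:

    import numpy as np
    import matplotlib.pyplot as plt

    G, m1, m2, a = 1.0, 1.0, 0.1, 1.0            # toy values, not the real sun and Earth
    omega2 = G * (m1 + m2) / a**3                # Kepler's third law for the pair
    x1 = -m2 / (m1 + m2) * a                     # star positions, measured from the center of mass
    x2 = m1 / (m1 + m2) * a

    x, y = np.meshgrid(np.linspace(-1.5, 1.5, 500), np.linspace(-1.5, 1.5, 500))
    r1 = np.hypot(x - x1, y)
    r2 = np.hypot(x - x2, y)
    phi = -G * m1 / r1 - G * m2 / r2 - 0.5 * omega2 * (x**2 + y**2)   # effective potential

    plt.contour(x, y, phi, levels=np.linspace(-3.0, -1.4, 40))
    plt.gca().set_aspect('equal')
    plt.show()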

Wednesday, March 9, 2016

Lagrange Point 2: Newton's Redemption

This past November, I had the opportunity to tour Goddard Space Flight Center. Although we saw many cool operations (including a gigantic cryogenic chamber!), the most interesting was the under-construction James Webb Space Telescope. I had intended to write about the visit at the time, but I spent much of my fall semester trying not to hyperventilate instead. However, we just covered some relevant material in my theoretical astrophysics course, so let's take a look now.

A full-scale model. Credit: NASA
JWST gets called the successor to Hubble, but calling it the sequel would probably be more appropriate. It promises to explore material untouched by the first one, it's going to have even more spectacular visuals, and it's way over budget and behind schedule. The two features that most distinguish it from Hubble are its size (bigger) and its wavelengths of interest (longer).

Longer means infrared. Being an infrared telescope, JWST will see through dust, directly image planets, and peer further back in time at objects redshifted out of the visible range. But infrared telescopes come with some complications. On Earth, we don't do a lot of infrared astronomy, partly because the atmosphere absorbs too much of it, but also because stuff too cold to emit visible light (basically everything on Earth) is usually spilling out lots of infrared instead. We can't do IR astronomy on Earth for the same reason we can't do visible astronomy during the day: it's too bright.

That's why JWST will be in space. But even in space, the Earth and sun loom large. Keep the telescope too near the Earth, and the Earth warms it up, generating noise in the cameras. JWST must be kept cold, much colder than the objects it wants to look at. The only way to accomplish that is to put it far away from the Earth and hold up a shield to block the Earth and sun. The trick is that you want to be able to block both bodies at the same time, which wouldn't work if you just flung the satellite into any old orbit. The farther you get from the sun, the longer your year (Kepler's third law says the cube of your semi-major axis is proportional to the square of your year), so the sun and Earth will change relative positions in the sky.
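
To put a number on that drift, here's a tiny calculation using Kepler's third law in those units (years and AU), for an example orbit I've placed arbitrarily at 1.01 AU:

    # Period (in years) of a circular orbit at 1.01 AU, from P^2 proportional to a^3
    # with P in years and a in AU. The 1.01 AU is an arbitrary example of my choosing.
    P = 1.01 ** 1.5
    print(P)    # ~1.015 years: such a telescope slips about 5.5 days behind the Earth every year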

You need to find an orbit that's far away, stable, and lets you block two objects at once--tricky. Working out how three gravitating objects move around each other is known in celestial mechanics as the three-body problem, and it has a long history. When Newton first formulated his laws of motion and gravity, he was able to solve the one- and two-body problems. That is, he could tell you how a tiny, insignificant planet would orbit a gigantic star (the one-body problem) or how two comparable objects would orbit each other (the two-body problem), but he was not able to count any higher than 2. Newton reasoned that minuscule interactions from nearby planets would build up over time and slowly destabilize orbits, and he assumed the only solution was divine intervention.

Astronomers, physicists, and mathematicians spent a long time looking for more precise answers. It turns out there is no general closed-form solution to the three-body problem, no simple orbit that works for any configuration of three or more masses. Using perturbation theory, you can account for the infinitesimal, cumulative influences of many bodies over time, but in the long run (millions of years), orbits become chaotic. Chaotic doesn't necessarily mean that a planet will be flung from the solar system, just that we eventually can't say with any precision where in its orbit a planet will be at any given time.

A couple mathematicians were able to work out very specific periodic solutions to what gets called the restricted three-body problem, or the 2+1 body problem: two large gravitating masses, one tiny mass that is virtually insignificant. In just the right location relative to the big ones, the small one can be stable. Nowadays these are known as the Lagrange points, in honor of one of the mathematicians who worked them out (Euler already had enough named after him).

This seems perfect for JWST. If there's a line between the sun and the Earth, we want JWST to be on that line out past Earth. Can we find a Lagrange point there?

In space, lines are purple.
Well, first let's backtrack just a second. There isn't really a fixed line connecting the sun and the Earth, because the Earth is constantly moving about the sun at ~30 km/s, sweeping any such line around with it. The only way to draw that line and have it stay put is to imagine ourselves moving along at the same angular speed as the Earth, so that the Earth appears stationary.

Notice I said angular speed, which measures how quickly you sweep through a given angle rather than how quickly you cover a given distance. If you think about a spinning tire, the outer bits are moving faster than the inner bits, because the bigger the radius, the more circumference there is to cover in the same amount of time. But inner and outer bits are covering the same fraction of a circle in that time, and thus both have the same angular speed. If different bits moved at different angular speeds, they wouldn't keep the same relative positions and the tire would spin apart.
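
As a concrete example, with standard values I'm supplying myself: the Earth's angular speed about the sun is tiny, but multiplied by the enormous radius of its orbit it gives back the ~30 km/s quoted above.

    import math

    year = 365.25 * 24 * 3600    # seconds in a year
    a = 1.496e8                  # km, Earth's distance from the sun

    omega = 2 * math.pi / year   # angular speed, in radians per second
    print(omega)                 # ~2e-7 rad/s
    print(omega * a)             # ~30 km/s, the Earth's orbital speed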

We want our frame and JWST to be moving at the same angular speed as the Earth. But in establishing this frame of reference, we have invalidated Newton's laws of motion. We are no longer in an inertial frame, which is one moving at a constant velocity. Circular motion is not constant velocity, because velocity includes direction, and the direction of motion is constantly changing.

What does it mean for Newton's laws to be invalidated? It means that an object not experiencing any net force will nonetheless seem to accelerate away. In our rotating frame, maintaining circular motion requires a constant force toward the center of the circle. Tie a ball to the end of a string and spin the ball in a circle. The tension along the string is the radial force that maintains circular motion. If the ball comes loose, it will fly off in a straight line. But from the frame of the spinning string (which keeps spinning as long as you keep supplying a force), the ball will appear to curve away. This tendency to accelerate away from a spinning frame can be accounted for if we invent a fictitious force--centrifugal force--that acts in opposition to whatever force maintains circular motion--centripetal force.

So if we look at the Earth from a rotating frame, JWST will seem to experience a centrifugal force pushing it away from the Earth. In order to have the telescope remain stationary in our rotating frame, the force from gravity must balance the centrifugal force.

Doing physics really involves making diagrams like this.
So here's our three-body problem. JWST is pulled inward by the gravity of the sun at a distance of a+d and by the gravity of the Earth at a distance of just d. That sum is:

F_g = G m_sun m_jwst / (a+d)^2 + G m_earth m_jwst / d^2

And it's pulled outward by the centrifugal force, which results from the angular motion of the system. How do we characterize the centrifugal force? It's the square of the angular speed times the distance from the center of mass (essentially the sun, in this case) times the mass of the accelerating object. The angular speed is inversely proportional to the period, the Earth's year, so centrifugal force involves the inverse square of the period. Using Kepler's relation between period and semi-major axis, we can swap the period for the semi-major axis (a in our diagram). Doing some algebra, that gives us a centrifugal force of:

F_c = G (m_sun + m_earth) m_jwst (a+d) / a^3
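
Here's that substitution spelled out in sympy, using omega for the angular speed and P for the period--symbols the paragraph above describes but never writes down:

    import sympy as sp

    G, m_sun, m_earth, m_jwst, a, d = sp.symbols('G m_sun m_earth m_jwst a d', positive=True)

    P = sp.sqrt(4 * sp.pi**2 * a**3 / (G * (m_sun + m_earth)))   # Kepler's third law for the Earth-sun pair
    omega = 2 * sp.pi / P                                        # angular speed of the rotating frame
    F_c = sp.simplify(m_jwst * omega**2 * (a + d))               # centrifugal force on JWST
    print(F_c)    # G*m_jwst*(a + d)*(m_earth + m_sun)/a**3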

And we want F_g to equal F_c. If we cancel some stuff out (G and JWST's mass appear on both sides), we arrive at the following expression, which is defined purely in terms of the masses and the distances between them:

m_sun/(a+d)^2 + m_earth/d^2 = (m_sun + m_earth)(a+d)/a^3

We're trying to solve for d, the point at which all these forces cancel out. But there's a problem. If we were to multiply all these terms out (FOIL!), we'd find this was a quintic equation, which means there'd be a d^5. And there is no equivalent of the quadratic formula for quintic equations. So we have to make some approximations. We have to assume that the sun is so much more massive than the Earth (true, in this case) that the Earth can be ignored whenever the two masses are added together. And we also assume that d is much smaller than a, which lets us do some mathematical tricks (namely, expanding to first order in the small quantity d/a). If you make those approximations, and then do some more algebra, you eventually find that:

d = a (m_earth / (3 m_sun))^(1/3)

That is the location of the second Lagrange point (and the first one, but on the other side of the Earth). Plugging in the relevant numbers, d = 1.5 million km, which is curiously 1/100 of Earth's distance from the sun. The sun is a little more than a hundred times wider than the Earth, which means that from L2, the Earth and sun appear just about the same size--each roughly the size of the full moon as seen from Earth. And that means JWST can easily block both of them with the same shield. (The similarity in angular size really is a happy coincidence that has to do with an accidental congruence of densities, radii, and that factor of 3 up there. Try it with any other planet and it doesn't work.)
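
As a quick check (with standard values for the masses and the Earth-sun distance that I'm plugging in myself), we can solve the full force balance from above numerically and compare it to the cube-root approximation:

    from scipy.optimize import brentq

    m_sun, m_earth = 1.989e30, 5.972e24     # kg
    a = 1.496e8                             # km, Earth's distance from the sun

    # The full force balance (the quintic in disguise), solved numerically...
    f = lambda d: m_sun/(a + d)**2 + m_earth/d**2 - (m_sun + m_earth)*(a + d)/a**3
    d_exact = brentq(f, 1e5, 1e7)

    # ...versus the cube-root approximation.
    d_approx = a * (m_earth / (3 * m_sun))**(1/3)

    print(d_exact, d_approx)                # both roughly 1.5 million km
    print(1.39e6 / (a + d_exact))           # sun's angular size from L2, ~0.0092 rad
    print(1.27e4 / d_exact)                 # Earth's angular size from L2, ~0.0085 rad

Both distances come out near 1.5 million km, and the two angular sizes land around half a degree--about the size of the full moon seen from Earth.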

So there you have it. When the combined gravitational pull of the Earth and sun cancels out the centrifugal force pushing JWST away, the telescope remains stationary with respect to the Earth as both orbit the sun. It sits 1.5 million km behind the Earth and completes an orbit in a year despite being farther from the sun.

But that's not quite the end of the story. It turns out that L1, L2, and L3 (on the other side of the sun from the Earth) are only metastable, which means a slight push sends an object flying off into a new orbit. So we can put satellites there, but they require station keeping to prevent them from falling away. L4 and L5, which form equilateral triangles with the two big masses of the 2+1 problem, are stable. Consequently, we actually find families of asteroids called the Trojans at the Sun-Jupiter L4 and L5 points. Also, I’ve totally neglected the Coriolis effect here, which is another fictitious force that pops up when… oh dear, look at that word count.