Thursday, September 1, 2016

Live From Low-Earth Orbit!

It looks like I disappeared again. Or maybe I was just too faint to detect above the noise of the internet. Sorry about that. To make up for my absence, this post will have a whole bunch of pictures. After all, there is a favorable exchange rate between pictures and words.

What's brought me out of hiding today is a very cool new account on Twitter. Because the Hubble Space Telescope kind of belongs to the American public, it has started live tweeting where it's looking, what tools it's using to do that looking, and who told it to look there. So you get stuff like this:
The picture is not what Hubble was looking at right then but an image pulled from the Sloan Digital Sky Survey. Hubble can't usefully beam images directly to us, because everything Hubble (and all other telescopes) looks at has to be processed. This notion makes people grumble, because they want to see the raw, unmanipulated data in its purest form rather than rely on whatever artistic license NASA has exercised.

But raw images in astronomy (and raw data more generally in science) simply aren't useful. In fact, they don't even exist, because any contact with an instrument inevitably distorts the data. The purpose of processing images, then, is to remove the imprint of the instrument on the image and hopefully recover what's actually there.

The coolest part about Hubble_Live is that it tweets out this process, too. There are many ways astronomers attempt to extract the true signal from the data collected, but I want to talk about three of the big ones I've learned about and which Hubble employs. These are:

- the dark calibration (DARK), which measures the electrons generated by the instrument's own heat;
- the bias calibration (BIAS), which measures the electrons introduced by the voltage that keeps the CCD's electronics running; and
- the flat field calibration (DARK-EARTH), which measures how the detector's sensitivity varies from pixel to pixel.

Hubble performs these calibrations in order to figure out how it's interfering with the pictures it's taking. To see what these calibrations do, I want to show you some data my classmates and I took with a much smaller, terrestrial telescope last fall. We were looking at the Ring Nebula, which Hubble has an obnoxiously gorgeous picture of here for reference:

NASA, ESA, and the Hubble Heritage (STScI/AURA)-ESA/Hubble Collaboration
The Ring Nebula is faint, so to image it we tracked it for two minutes, letting the charge-coupled device (CCD) at the bottom of the telescope count up the photons streaming in from space. But a CCD is not really a camera. It's more accurate to think of a CCD as an electron counter.

At each pixel, there's the electric equivalent of a little bucket that collects electrons and converts them into a voltage that can be measured and manipulated digitally by a computer. Ideally, the way the CCD counts electrons is by having them knocked into the pixel bucket by incoming photons. But there are other sources of electrons, too. If you don't take them into account, you end up with an image that doesn't correspond to what you were looking at. So here's the raw data of the Ring Nebula taken by our telescope:
Ignore the numbers.
As you can see, well, err. Now, I said this is the raw data, not an image, because what I'm really showing you is a two-dimensional matrix where the intensity at each pixel is proportional to the number of electrons that were counted at that pixel. There's no sense in which this is representative of what a human would see if they had eyes as big as a telescope and could store light for two minutes. It's just a graphical representation of the electrons counted. All pictures you see--whether from Hubble or your smartphone--are just that. The difference is that sometimes we want to modify that matrix so it looks something like what people see.
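
If you want to poke at data like this yourself, here's a minimal sketch of how you might load and display a frame in Python, using the astropy and matplotlib libraries (the filename is just a placeholder):

    import matplotlib.pyplot as plt
    from astropy.io import fits

    # A FITS file is basically that matrix plus a header of metadata.
    # "ring_raw.fits" is a placeholder name for our two-minute exposure.
    raw = fits.getdata("ring_raw.fits").astype(float)

    print(raw.shape)              # e.g. (1024, 1024): rows and columns of pixels
    print(raw.min(), raw.max())   # the range of electron counts

    # Displaying the data just means mapping each count to a shade of gray.
    plt.imshow(raw, cmap="gray", origin="lower")
    plt.colorbar(label="counts (proportional to electrons)")
    plt.show()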

I'm being a little disingenuous here, though. The Ring Nebula is in this data, but because it is very faint compared to the brightest pixels in the image, it's not apparent. I can turn up the contrast by bounding the brightness levels you're allowed to see, and then the nebula does appear.
Color photography is so overrated.
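
That contrast trick is purely a display choice; the underlying matrix never changes. In matplotlib it amounts to picking a lower and upper cut for the gray scale, something like this sketch (the percentile cuts are made-up numbers, chosen by eye):

    import numpy as np
    import matplotlib.pyplot as plt
    from astropy.io import fits

    raw = fits.getdata("ring_raw.fits").astype(float)   # placeholder filename

    # Counts below vmin display as black, counts above vmax as white;
    # everything in between gets a shade of gray. The data itself is untouched.
    lo, hi = np.percentile(raw, [5.0, 99.0])
    plt.imshow(raw, cmap="gray", origin="lower", vmin=lo, vmax=hi)
    plt.colorbar(label="counts")
    plt.show()
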
But I haven't done anything scientific here. I haven't calibrated the data at all, just chosen what data to show you. This isn't a more accurate or useful representation of the data. To get a scientifically meaningful image, we have to account for all the extra electrons our CCD has picked up.

One electron source is the instrument itself, which, because it is not at a temperature of absolute zero, consists of vibrating molecules that can occasionally knock an electron into the pixel bucket. This is called the "dark current," because it shows up even when the telescope isn't looking at anything. The warmer your telescope, the larger the dark current will be, which means that weak signals can be lost in the noise of the telescope's heat. You can minimize that heat and detect faint signals by keeping your telescope cold (like, say, by putting it in space).

To determine the dark current, Hubble does a dark calibration, which essentially amounts to taking a picture of the same exposure length as your actual picture, but with the lens cap on. That way, the only electrons detected will be those coming from the heat of the instrument. Once you know how many electrons that heat typically contributes at each pixel, you can subtract them from the electron counts of your image. Here is an image of the dark frame from our observations:
Think TV static.
The intensity of our dark frame is about 60% of the intensity of our image, which means that by subtracting it from the image, we're losing a lot of information on faint sources. But if we don't subtract the dark current, we're overestimating the true brightness of the Ring Nebula by a factor of roughly 2.5, which would lead to some pretty bad science on our part.
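
In code, the dark correction is nothing fancier than subtracting one matrix from another, pixel by pixel. Here's a sketch, assuming the dark frame has the same exposure time as the science frame (filenames are placeholders):

    from astropy.io import fits

    raw  = fits.getdata("ring_raw.fits").astype(float)    # two-minute science exposure
    dark = fits.getdata("ring_dark.fits").astype(float)   # two-minute exposure, lens cap on

    # Remove the thermal electrons pixel by pixel. If the exposure times
    # differed, the dark would first be scaled by the ratio of the two.
    dark_subtracted = raw - dark

    # Rough sanity check: how much of the signal was just the telescope's heat?
    print("mean dark / mean raw:", dark.mean() / raw.mean())   # roughly 0.6 for our data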

Another source of electrons is the electronic components of the CCD. To operate properly, a CCD requires a certain voltage to be coursing through it constantly. For Hubble, this is the BIAS calibration, because you can think of the CCD voltage as being a bias introduced into the electronics in order to produce usable data. Telescopes acquire a bias frame by taking a zero-second exposure that doesn't let in dark current electrons or photoelectrons. Hubble does this separately from taking its dark calibration, but in certain situations you can also simply assume that your dark current includes the bias electrons. In that case, subtracting out the dark frame gets rid of the bias electrons, too. That was the case for the data we collected. If you look at what's left over after this subtraction, this is the image you get:
The Thumbprint Nebula (I've zoomed in a bit here.)
While this looks worse than the artificially contrasted version up above, the Ring Nebula pops right out once the telescope's heat and bias are removed, without any manual adjusting of the contrast. By fiddling with contrast, you can create spurious images that don't represent anything actually out there. That didn't happen here, but we can't be confident that the structure we see in the Ring Nebula is the true structure until the dark current and bias electrons have been removed.
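
For completeness, here's roughly how the bookkeeping changes when the bias is measured separately, the way Hubble does it (again, placeholder filenames):

    from astropy.io import fits

    raw  = fits.getdata("ring_raw.fits").astype(float)
    dark = fits.getdata("ring_dark.fits").astype(float)   # same exposure length as the science frame
    bias = fits.getdata("ring_bias.fits").astype(float)   # zero-second exposure

    # With a separate bias frame, subtract it from both the science and dark
    # frames; what's left in the dark frame is then purely thermal electrons.
    thermal_only = dark - bias
    calibrated = (raw - bias) - thermal_only

    # In our case the dark frame still contained the bias electrons, so the
    # single subtraction (raw - dark) removed both contributions at once.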

Finally (for our scenario), the individual pixels in the CCD might have varying levels of light sensitivity. Since we want each photon to count equally, we have to adjust for these differences. Correcting for the variable sensitivity is known as flat fielding, and you produce a flat field by shining a light of uniform intensity across the CCD. When you do this, the CCD should record the same number of electrons (more or less) at each pixel. If some regions of the CCD come out too bright or too dim, you know those regions are more or less sensitive than average. To remove this effect, you divide your image by the (normalized) flat field, so that each pixel's count is scaled by the inverse of its relative sensitivity.
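
The flat field correction itself is then just a pixel-by-pixel division, after normalizing the flat so that a typical pixel has a value of 1. Another sketch with placeholder filenames:

    import numpy as np
    from astropy.io import fits

    dark_subtracted = fits.getdata("ring_darksub.fits").astype(float)   # placeholder
    flat = fits.getdata("ring_flat.fits").astype(float)                 # uniformly lit exposure

    # Normalize so the typical flat pixel is 1: over-sensitive pixels end up
    # slightly above 1, under-sensitive pixels slightly below.
    flat_norm = flat / np.median(flat)

    # Dividing boosts the counts in under-sensitive pixels and knocks down
    # the counts in over-sensitive ones, so every photon counts equally.
    flat_fielded = dark_subtracted / flat_norm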

In space, unfortunately, it's difficult to shine a uniformly bright light on Hubble. You might think the Sun would work, but the Sun is way too bright and even a very short exposure would saturate Hubble’s sensors. Saturation causes electrons to bleed over into neighboring pixels and gives you electron counts that are not proportional to the number of photons detected. Instead, Hubble takes flat fields by looking at the Earth, which (with a lot of processing, aided by the fact that the Earth moves beneath Hubble very quickly, blurring any image it takes) can reproduce a flat field. So the DARK-EARTH calibration is Hubble's way of adjusting for the varying sensitivity of its equipment.

On Earth, flat fields are usually produced by shining a light on the inside of your observatory's dome and having the telescope look at that, or by pointing at the twilight sky before any stars become visible. Here's the flat field we produced:
I think the telescope has floaters.
I suspect we did a rather poor job of shining the light uniformly; I think you can see our light source positioned on the right side there. The smudges, however, are probably true variations in pixel sensitivity, so dividing by the flat field removes them. (The ring-like smudge in the middle is an eerie coincidence.) After dividing our image by the flat field, we get this picture:

Possibly the Eye of Sauron (More zooming done.)
The main visual advantage seems to be increased clarity of the inner region of the nebula.

None of these, of course, look like the beautiful pictures we see from Hubble or APOD. There are two reasons for this. One, our telescope simply doesn't have the resolution (or other exquisite features) that Hubble does, so there's a limit to how nice a picture it can take. The second reason, however, is that pretty pictures are created to be pretty, not for doing science. As I said above, this is just a representation of the data, but there are other representations.

In fact, one purpose of this lab was to determine the three dimensional structure of the nebula. That is, is it a donut or a shell? A picture alone can be deceiving. But other methods of interpreting the data might be more useful. So here's another representation, plotting the brightness of the nebula along a particular axis in different wavelengths of light:
Graphs are the best, you guys.
Doing some math on graphs like these, we were able to show that the Ring Nebula is probably more like a thin shell of material than a donut, despite its visual appearance. The ring is a bit of an illusion. But a graph like this is only accurate because of the processing done to remove the observational artifacts, even though that processing does not produce an aesthetically pleasing picture.
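
A profile like that is just a slice through the calibrated matrix, one per filter. Something like this sketch, where the row number and filenames are made up for illustration:

    import matplotlib.pyplot as plt
    from astropy.io import fits

    # Calibrated images taken through the two narrowband filters.
    h_alpha = fits.getdata("ring_halpha_cal.fits").astype(float)
    o_iii   = fits.getdata("ring_oiii_cal.fits").astype(float)

    row = 512   # hypothetical row cutting through the center of the nebula
    for img, label in [(h_alpha, "hydrogen (red)"), (o_iii, "oxygen (blue/green)")]:
        profile = img[row - 2 : row + 3].mean(axis=0)   # average five rows to beat down noise
        plt.plot(profile, label=label)

    plt.xlabel("pixel along the slice")
    plt.ylabel("calibrated brightness")
    plt.legend()
    plt.show()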

Nevertheless, what's astronomy without cool pictures? In addition to looking at the nebula with a clear filter, we also used filters that passed only red light from glowing hydrogen and blue/green light from doubly-ionized oxygen. When you clean up that data, assign a color to each filter, and plot them on top of each other, you get this:
Insert riff on Beyoncé lyrics here.
That's not really what the Ring Nebula looks like, but it is one way of seeing it.
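
And if you're curious how a composite like that gets assembled, the recipe is roughly: stretch each calibrated filter image, assign it to a color channel, and stack them. A sketch with made-up filenames:

    import numpy as np
    import matplotlib.pyplot as plt
    from astropy.io import fits

    h_alpha = fits.getdata("ring_halpha_cal.fits").astype(float)   # red light from hydrogen
    o_iii   = fits.getdata("ring_oiii_cal.fits").astype(float)     # blue/green light from oxygen

    def stretch(img):
        # Scale to the 0-1 range, clipping the faintest and brightest pixels.
        lo, hi = np.percentile(img, [5.0, 99.5])
        return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

    # Hydrogen goes in the red channel; oxygen fills both green and blue,
    # mirroring the colors those emission lines actually have.
    rgb = np.dstack([stretch(h_alpha), stretch(o_iii), stretch(o_iii)])

    plt.imshow(rgb, origin="lower")
    plt.axis("off")
    plt.show()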