You read that right, and no one has a concussion. When you look up at night and observe twinkling stars, what you're seeing is not an image but a bit of optical trickery as light interferes with itself inside your eye. It wasn't until the last 20 or 30 years that astronomers, employing a clever, delicate technique known as optical interferometry, began to see stars as the distant suns we know them to be. Until this technique matured, every star appeared as a featureless point source. You can blame the long delay on their stupendous distance and the reality of eyes that fit inside eye sockets.
Our sun is a short 150-million-kilometer hop away, with an angular size of half a degree. But if it left us behind and visited our nearest neighbor, Proxima Centauri, 4.25 light years away, its angular size would shrink. At that great distance, we would expect the sun to appear roughly 270,000 times smaller (6.7 milliarcseconds). But stars don't look that small in the sky. Imagine lining up a quarter million of those shrunken suns next to our own (or next to the moon, which has the same angular size); the whole line would just barely span its disk, and the sun is going to feel pretty insecure next to that line. Something weird is going on that makes stars appear larger than they should.
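A quick back-of-envelope check of these numbers. Since angular size scales inversely with distance, the shrink factor is just the ratio of the two distances (constants here are approximate):

```python
AU_KM = 1.496e8                  # Earth-sun distance in kilometers
LY_KM = 9.461e12                 # one light year in kilometers
PROXIMA_LY = 4.25                # distance to Proxima Centauri
SUN_ANGULAR_DEG = 0.5            # the sun's angular size from Earth

# Angular size falls off as 1/distance, so the shrink factor is
# simply (distance to Proxima) / (distance to Earth):
ratio = (PROXIMA_LY * LY_KM) / AU_KM
mas_at_proxima = SUN_ANGULAR_DEG * 3600 * 1000 / ratio  # milliarcseconds

print(f"shrink factor: {ratio:,.0f}x")                  # ~270,000x
print(f"angular size:  {mas_at_proxima:.1f} milliarcseconds")
```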
The problem is with the equipment we're using: our eyes. To really see a star (or anything), we need to know the incoming angles of all its light rays. The eye does this with a lens, which focuses the light that passes through the pupil onto the retina. Because the pupil is small (~5 millimeters in diameter), an incoming light wave interferes with itself as it passes through, projecting a diffraction pattern onto the retina. Instead of a single point of contact, the light wave spreads out over a small area.
|An Airy disk. Credit: Wisky, own work, CC BY-SA 3.0|
Okay, you think, we just need a bigger pupil to cut down on the diffraction. And that's what a telescope is: an optical system with a much larger pupil (aperture) than the human eye. But there's a problem. The diffraction limit of the human eye is about 20 arcseconds in theory (and usually much worse in practice), which is 3,000 times as large as our sun-at-Proxima-Centauri. To pull the interference apart, the telescope needs to be at least 15 meters across.
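You can sketch these limits with the Rayleigh criterion, theta ≈ 1.22 λ/D. The exact figures depend on the wavelength and resolution criterion you pick (green light at 550 nm is assumed here), so this lands in the same ballpark as the numbers above rather than exactly on them:

```python
import math

WAVELENGTH_M = 550e-9                   # green light (an assumed wavelength)
RAD_TO_ARCSEC = math.degrees(1) * 3600  # radians -> arcseconds

def rayleigh_limit_arcsec(aperture_m):
    """Rayleigh criterion: smallest resolvable angle, theta = 1.22 * lambda / D."""
    return 1.22 * WAVELENGTH_M / aperture_m * RAD_TO_ARCSEC

def aperture_for_limit_m(limit_arcsec):
    """Invert the criterion: aperture needed to resolve a given angle."""
    return 1.22 * WAVELENGTH_M * RAD_TO_ARCSEC / limit_arcsec

eye = rayleigh_limit_arcsec(5e-3)       # a 5 mm pupil: roughly 28 arcsec
needed = aperture_for_limit_m(6.7e-3)   # aperture to resolve 6.7 milliarcsec

print(f"eye's diffraction limit: {eye:.0f} arcsec")
print(f"aperture for the sun at Proxima: {needed:.0f} m")
```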
There aren't any that big (yet), and we can't really make mirrors that size without breaking them up into smaller segments. To image a star, then, we need a technique that bypasses the diffraction limit imposed by our frustratingly non-gargantuan apertures. Enter optical interferometry, which lets us create a virtual telescope as large as the distance between widely spaced individual ones.
To see how this works, let's first imagine we're just trying to find the position of a single, dimensionless dot of a star. Our interferometer is two regular telescopes set a good distance away from each other. By the time the star's light reaches us, it looks like a flat plane wave. If the star is directly overhead, the wavefront hits both telescopes at the same time. If it's at an angle, the wavefront hits one telescope before the other, and the two signals are out of phase.
|Interferometer diagram. Credit: ESO|
Push the star farther off-center and the signals drift further out of phase, until they cancel completely. Push it farther still and they cycle back into phase. The distance between the two telescopes, called the baseline, sets the pace of this cycle: a longer baseline squeezes a full cycle into a smaller angle, giving you finer angular measurements in the same way that a bigger telescope gives you better resolution. And since nudging a star around isn't really an option, in practice this cycling plays itself out in the interference fringes you see when the two signals are combined.
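The geometry behind this is simple enough to sketch: the wavefront travels an extra B·sin(θ) to reach the farther telescope, so the phase offset grows with both baseline and off-axis angle. This toy calculation (monochromatic light assumed, illustrative numbers only) shows the cancellation angle shrinking as the baseline grows:

```python
import math

WAVELENGTH_M = 550e-9   # assumed observing wavelength

def phase_difference_rad(baseline_m, angle_rad):
    """Phase offset between the two telescopes: the wavefront travels an
    extra B*sin(theta) to reach the farther one."""
    return 2 * math.pi * baseline_m * math.sin(angle_rad) / WAVELENGTH_M

# The off-axis angle that first drives the signals to cancel (an extra
# half wavelength of path, i.e. half a cycle of phase) shrinks as the
# baseline grows, which is why longer baselines resolve finer detail:
for baseline_m in (10, 100):
    angle = math.asin(WAVELENGTH_M / (2 * baseline_m))
    assert abs(phase_difference_rad(baseline_m, angle) - math.pi) < 1e-9
    mas = math.degrees(angle) * 3600 * 1000
    print(f"{baseline_m:>3} m baseline: cancellation ~{mas:.2f} mas off-axis")
```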
Because interference patterns are regular and cyclic, we can think of the baseline as sampling a particular "spatial frequency." This is a measure of the relevant physical scales of an image.
Imagine looking down on a dense forest from overhead. If the ground is obscured, then what you see changes from leaf to leaf, which represents a high spatial frequency. Now think about the same (deciduous) forest in the dead of winter. With the leaves gone, the scene changes from tree to tree—a lower spatial frequency. By sampling spatial frequencies, you can figure out the important sizes of whatever you’re looking at. That way, you don't (I'm sorry) miss the forest for the trees.
But what is the spatial frequency of a point source star? Let's mix up our nature metaphor and imagine a lone blade of grass in a vast, empty plain. If that blade is what we're looking for and it's the only thing there, then every length scale contributes equally to pinpointing it. So every spatial frequency—every baseline of our interferometer—will be strong, producing clear, regularly spaced interference fringes. A small object in space has a wide spatial frequency spectrum.
The bigger an object gets, however, the narrower its spectrum will be. A larger object means more light waves coming from slightly different directions, which creates more interference and messier fringes. The result is that you have to search around to find a baseline where all the waves add up in just the right way to produce a nice, regular set of fringes. So when you’re looking at the spatial frequency spectrum of an extended object (that is, a real, physical one that actually exists), it will be narrow and centered around only a few length scales.
This complementarity—wide in space, narrow in frequency (or the other way around)—is a property of Fourier transforms. A Fourier transform is a way of decomposing a function in one domain (space) into its constituent parts in another domain (frequency), or vice-versa. We have nifty computer algorithms that can work this out quickly and efficiently. The important part is that a function and its Fourier transform will always have this narrow-to-wide, wide-to-narrow pattern.
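You can see this complementarity numerically with one of those nifty algorithms, the fast Fourier transform. In this one-dimensional sketch, the point source ("lone blade of grass") has a flat spectrum where every frequency contributes equally, while a broad Gaussian blob transforms to something narrow and concentrated at low frequencies:

```python
import numpy as np

n = 1024
x = np.arange(n)

point = np.zeros(n)
point[n // 2] = 1.0                                 # the lone blade of grass
wide = np.exp(-0.5 * ((x - n // 2) / 50.0) ** 2)    # a broad, extended object

spec_point = np.abs(np.fft.fft(point))
spec_wide = np.abs(np.fft.fft(wide))

# Every spatial frequency of the point source is equally strong...
print(spec_point.std() / spec_point.mean())   # ~0: a perfectly flat spectrum
# ...while the extended object's power sits almost entirely at low
# frequencies, with high frequencies orders of magnitude weaker:
print(spec_wide[0] / spec_wide[10])
```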
So here's what you do to image a star. You point your interferometer at it and sample as many spatial frequencies as you can. Spatial frequencies are determined by the baseline length between telescopes, and the two main ways of adjusting this are (1) adding multiple telescopes at different distances from each other or (2) waiting for the earth to rotate, which changes the "projected baseline" of the interferometer (as seen from the star).
|Very Large Telescope Interferometer. Credit: ESO|
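The second trick, letting the earth's rotation sweep the projected baseline, can be sketched in a simplified setup: an east-west baseline watching a source on the celestial equator, where the projected length is just B·cos(hour angle). Real arrays track all three baseline components, but the idea is the same:

```python
import math

baseline_m = 100.0  # physical telescope separation (illustrative)

def projected_baseline_m(hour_angle_deg):
    """Baseline component perpendicular to the line of sight, for an
    east-west baseline and a source on the celestial equator."""
    return baseline_m * abs(math.cos(math.radians(hour_angle_deg)))

# Over a few hours of tracking, one fixed pair of telescopes sweeps
# through a whole range of effective baselines (spatial frequencies):
for h in (0, 30, 60, 80):
    print(f"hour angle {h:>2} deg -> projected baseline "
          f"{projected_baseline_m(h):6.1f} m")
```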
There are some complications, notably that you can never sample all spatial frequencies, which means some guesswork is required. So you have algorithms that interpolate what your star looks like by removing meaningless frequencies, erasing artifacts produced by the shape of the telescopes, and making assumptions about the star (it's probably not a giant monkey, say). Do all this right and optical interferometry gives you a picture like this:
|Pi1 Gruis. Credit: ESO|
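A toy version of that reconstruction problem makes the guesswork concrete. Here a one-dimensional "star" is observed at only the low spatial frequencies (short baselines), and the naive inverse transform recovers its position and rough size, but with the ringing artifacts that real reconstruction algorithms have to clean up:

```python
import numpy as np

n = 256
x = np.arange(n)
star = np.where(np.abs(x - n // 2) < 8, 1.0, 0.0)   # a toy 1-D stellar disk

spectrum = np.fft.fft(star)

# Pretend the interferometer only sampled the low spatial frequencies,
# as if limited to baselines below some maximum length:
k = np.fft.fftfreq(n)
measured = np.where(np.abs(k) < 0.1, spectrum, 0)

naive = np.real(np.fft.ifft(measured))

# The peak still lands at the disk's center, but the sharp edges are
# smeared and surrounded by ripples from the missing frequencies.
print(int(np.argmax(naive)))
```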