Saturday, March 21, 2015

Euler Unmasked

We're going from straight philosophy in my last post to straight math in this one. But if you're an ancient Greek thinker type person, math and philosophy are the same thing, anyway.

So about a year and a half ago, I made a post that touched briefly on the relationship between trig functions and exponential functions as a way of justifying my tendency to make things more complex than they need to be. I mentioned there that I didn't have a firm enough mathematical grasp to explain how these two mathy bits are related. Well, the topic of Euler's identity came up a little while ago in my writing group, so I decided to do some research and figure out just how it is that trig functions and exponential functions come together.

For those of you that don't click links, Euler's identity says:

$$e^{i\pi} + 1 = 0$$

This is a pretty remarkable and frankly incredible equation, but it's true. It manages to link probably the three most famous mathematical constants in a very simple way. The identity arises from Euler's formula, which says:

$$e^{ix} = \cos(x) + i\sin(x)$$

If you replace x with π, then i·sin(π) = 0 and cos(π) = -1, so with a little rearranging you can get Euler's identity. But this raises the question of why it should be true that exponential functions and trig functions are connected by the imaginary unit.
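If you'd like to check this numerically rather than take my word for it, a couple of lines of Python will do it (this is just a sanity check on my part, not part of the math):

```python
import cmath
import math

# Euler's identity: e^(i*pi) + 1 should be 0, up to floating-point rounding
print(cmath.exp(1j * math.pi) + 1)        # something like 1.2e-16j

# Euler's formula: e^(ix) = cos(x) + i*sin(x) for any x you like
x = 0.73                                  # arbitrary test value
print(cmath.exp(1j * x))
print(complex(math.cos(x), math.sin(x)))  # same number
```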

First, a quick primer for those who need it. In the common parlance, something that is "exponentially" better is "really super" better. This kind of talk tends to aggravate the mathematically aware, however. Really, exponential functions are ones where adding a constant increment to the input multiplies the output by a constant factor.

So if you hear something like, "Kyrgyzstan's GDP has doubled every year for the last ten years," then that's exponential growth. The factor is 2, and the increment is yearly. But this also applies to, say, the interest rate on your savings account, which as we all know is not exactly "really super" better than anything except possibly 0. There, your balance is getting multiplied by something like 1.0025 every year, which is every bit as exponential as Kyrgyzstan's doubling GDP (totally made up).
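If it helps, here's that definition as a few lines of Python, with a made-up starting balance to go along with the made-up GDP figure:

```python
# Exponential growth: a constant increment to the input (one year)
# multiplies the output by a constant factor.
balance = 1000.00  # hypothetical starting balance
factor = 1.0025    # the yearly multiplier

for year in range(1, 6):
    balance *= factor
    print(year, round(balance, 2))
# Swap the factor for 2 and you have Kyrgyzstan's (invented) doubling GDP instead.
```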

The point is, however, that exponential functions (with a factor greater than 1) demonstrate monotonic growth: if you increase the x value, the y value will increase, too.

Trig functions, on the other hand, are the realm of waves, which go up and down and up and down. They are all about rhythmic or periodic behavior. But as their name suggests, the trigonometric functions are actually based on the angles formed by triangles. Trig functions are really expressions of the Pythagorean formula, A² + B² = C². The relationship between this formula and periodic motion is that for some constant value of C, increasing A will decrease B, and vice versa.

So it's hard to see how exponential functions and trig functions could be related. As I hinted up above, the answer is through i.

i, the imaginary unit, is defined to be the square root of negative one. Imaginary numbers kind of get a bad rap, partly because of their name. They seem like something mathematicians just made up that couldn't possibly be real. The funny thing is people had the same opinion about negative numbers for a very long time. After all, how can you possibly have -3 apples? On this whole controversy, the great mathematician Carl Friedrich Gauss had this to say:
That this subject [imaginary numbers] has hitherto been surrounded by mysterious obscurity, is to be attributed largely to an ill-adapted notation. If, for instance, +1, -1, √-1 had been called direct, inverse, and lateral units, instead of positive, negative, and imaginary (or even impossible), such an obscurity would have been out of the question.
While his preferred terminology might seem somewhat opaque, it does lend itself very well to a geometric interpretation of numbers. If you look at a Cartesian plot, you can think of Gauss's direct, inverse, and lateral numbers this way.




The direct unit (+1) moves you one to the right on the graph. The inverse unit (-1) moves you one to the left. And the lateral unit (√-1) moves you up one. Rather than being on the number line we're used to, imaginary numbers can be thought of as being at right angles to it.

This idea lets you plot numbers that are a combination of "real" and "imaginary." So if you have the complex number 3 + 2i, that's just 3 units to the right and 2 units up.


As you see, plotting numbers this way means you can draw right triangles that are related to those numbers. This is the first way that we can connect imaginary numbers to the trig functions. Getting from imaginary numbers to exponential functions will take a little more work, though.
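Python happens to have complex numbers built in, so you can see the right triangle fall out of the number directly (same 3 + 2i example as above):

```python
import math

z = 3 + 2j               # 3 units to the right, 2 units up
print(z.real, z.imag)    # the two legs of the right triangle
print(abs(z))            # the hypotenuse, sqrt(3^2 + 2^2)
print(math.hypot(3, 2))  # the Pythagorean formula gives the same length
print(math.atan2(z.imag, z.real))  # the angle of the hypotenuse; trig is already showing up
```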

If i is the square root of -1, we can play around with exponentiation to find an interesting pattern. i² = (√-1)², which by definition equals -1. i³ = (√-1)³, or (√-1)·(√-1)², or i·(-1), which just comes out to -i. i⁴ = i²·i², or (-1)·(-1), which equals 1. Multiply that by i, and you of course have i again. So through exponentiation, we have discovered something of a pattern.

i¹ = i
i² = -1
i³ = -i
i⁴ = 1
i⁵ = i

The powers of i loop back in on themselves. You might even say they exhibit periodic behavior, like the trig functions.
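You can watch that cycle directly in Python, which writes the imaginary unit as j:

```python
# Integer powers of i repeat with period 4: i, -1, -i, 1, i, -1, ...
for n in range(1, 9):
    print(n, 1j ** n)
# (Depending on your Python version you may see -0.0 components; that's
# just floating-point bookkeeping, not a different answer.)
```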

Our next step is probably the toughest bit. Bear with me. So, if you recall from my foray into Fourier, many functions can be expressed as an infinite series of sines and cosines that eventually converge on a desired function. These infinite series turn out to be very useful to mathematicians, because not all patterns can be expressed as "elementary" functions, but only as infinite series of some other type of function. One type of infinite series is the power series, which looks like this:

$$f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 + \cdots$$

To get different functions, just plug in different values for the coefficients aₙ. The way you figure out which coefficients correspond to the function you want is basically by assuming your function can fit into some power series and then just playing around for a while until you find a pattern that fits. Let me demonstrate.

One of the defining features of the exponential function, e^x, is that it is its own derivative. This means that its rate of change is equal to its value. So the derivative of e^x is also e^x, and so on.

One of the first tools you learn in calculus is that the derivative of a power function like x⁴ is 4x³. You multiply by the exponent, and then lower the exponent by one. If the exponent is already 0, then your derivative is 0. So if you take the derivative of our above model power series, you get:

$$f'(x) = a_1 + 2a_2 x + 3a_3 x^2 + 4a_4 x^3 + \cdots$$

And if you take the derivative of that, you get:

$$f''(x) = 2a_2 + 6a_3 x + 12a_4 x^2 + \cdots$$

And if you take the derivative of that, you get:

$$f'''(x) = 6a_3 + 24a_4 x + \cdots$$

And one more time, because there's a pattern I want you to see:

$$f''''(x) = 24a_4 + 120a_5 x + \cdots$$

Now remember, all of these series are equal to the function e^x, because e^x is its own derivative. The missing ingredients are the values of aₙ. If we evaluate e^x at x = 0, we have e⁰, and anything to the 0th power is equal to 1. In the above series, when x is 0, everything except the leading term is also 0. So we have:

1 = a₀ = a₁ = 2a₂ = 6a₃ = 24a₄

and so on. So with a little bit of algebra, you can figure out the value of any aₙ. It's just 1 divided by the factor preceding the coefficient. But there's a pattern here. 24 = 4*3*2*1. 6 = 3*2*1. 2 = 2*1. The value of the coefficient is equal to 1 over the index of the coefficient multiplied by each integer lower than it. This is known as a factorial in mathematics and looks like this:

5! = 5*4*3*2*1 = 120
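So, spelling out the pattern we just found (same numbers as above, nothing new):

$$a_0 = 1, \quad a_1 = 1, \quad a_2 = \frac{1}{2!}, \quad a_3 = \frac{1}{3!}, \quad a_4 = \frac{1}{4!}, \quad \ldots, \quad a_n = \frac{1}{n!}$$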

With that information in hand, we know what the power series of the exponential function is:

$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots$$

I've gone through this process once so that you don't think I'm pulling this stuff out of a hat, but you can do the same thing to find the power series of a lot of different functions, including the trig functions. For example, the power series of sin(x) is:

$$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$$

And the power series of cos(x) is:

$$\cos(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots$$

Weirdly, the sine and cosine power series look kind of similar to the exponential function, but with terms missing and some negative signs thrown in. This curious fact turns out to be very important for connecting exponential and trig functions. Let's remember that the key to that connection is i.
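Those three series are easy to check side by side with partial sums (a quick Python comparison, nothing more):

```python
import math

def partial_sum(coeff, x, terms=20):
    """Sum the first `terms` terms of the power series with coefficients coeff(n)."""
    return sum(coeff(n) * x ** n for n in range(terms))

def exp_coeff(n):
    return 1 / math.factorial(n)

def sin_coeff(n):
    return 0 if n % 2 == 0 else (-1) ** (n // 2) / math.factorial(n)

def cos_coeff(n):
    return 0 if n % 2 == 1 else (-1) ** (n // 2) / math.factorial(n)

x = 1.3  # arbitrary test value
print(partial_sum(exp_coeff, x), math.exp(x))  # each pair agrees to many decimal places
print(partial_sum(sin_coeff, x), math.sin(x))
print(partial_sum(cos_coeff, x), math.cos(x))
```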

Let's see what happens if we try to find the power series of e^(ix) rather than e^x. To do that, we just replace all instances of x with ix in our series above. That gets us:

$$e^{ix} = 1 + ix + \frac{(ix)^2}{2!} + \frac{(ix)^3}{3!} + \frac{(ix)^4}{4!} + \frac{(ix)^5}{5!} + \cdots$$

Hey, that means we're finding powers of i. But we already did that up above. That follows a pattern, so we can just fill in from that pattern and get:

$$e^{ix} = 1 + ix - \frac{x^2}{2!} - \frac{ix^3}{3!} + \frac{x^4}{4!} + \frac{ix^5}{5!} - \cdots$$

Now, just for the heck of it, let's separate our series into terms without i and terms with i. So we have:

$$e^{ix} = \left(1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots\right) + i\left(x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots\right)$$

Look familiar? That's the power series for cosine plus i times the power series for sine. In other words...

$$e^{ix} = \cos(x) + i\sin(x)$$

Just as Euler told us.
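And if you'd rather not trust my bookkeeping of i's and minus signs, the same partial-sum check works on the complex series, too (Python again, purely as a sanity check):

```python
import cmath
import math

def exp_series(z, terms=30):
    """Partial sum of 1 + z + z^2/2! + z^3/3! + ... for a complex z."""
    return sum(z ** n / math.factorial(n) for n in range(terms))

x = 2.1  # arbitrary test value
print(exp_series(1j * x))                 # the series with z = ix
print(complex(math.cos(x), math.sin(x)))  # cos(x) + i*sin(x)
print(cmath.exp(1j * x))                  # the library's e^(ix); all three match
```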

All of this may seem like some kind of tedious mathematical trick. After all, how do we know that the power series representation of a function behaves identically to the function itself in all instances? The truth is, it doesn't, and that's one of the things you have to be careful of when finding series expansions. It does happen to work in this case, though: the series for e^x, sine, and cosine converge to their functions for every value of x, even complex values.

But there are ways in which this proof can help motivate understanding. One way to think of the idea is that the introduction of i into the exponential function breaks the function down into four interacting parts: one increasing in the direction of 1, another increasing in the direction of -1, and two others increasing in the direction of i and -i. Different values of x contribute more to one direction than another, and the whole thing repeats every time the exponent changes by 2πi (that is, every time x increases by 2π).

To see if this picture holds true, let's take another look at the powers of i. We saw that powers of i cycle from i to -1 to -i to 1 and then back to i again. But we were only looking at integer powers of i. What happens if we replace the integer with an unknown variable x? That is, how do we evaluate i^x?

A neat tool that can sometimes work in mathematics is to perform some operation on an expression and then also perform the inverse of that operation. Doing so doesn't change the expression, but it does let us look at it in a different light. So how about we take the natural log of i^x and then exponentiate the expression. That gets us:

$$i^x = e^{\ln(i^x)}$$

The laws of logarithms mean we can move that x to outside the log, giving us:

$$i^x = e^{x\ln(i)}$$

We know how to evaluate e^x, but it's not immediately clear how to evaluate ln(i). Here it's useful to remember what ln means. The natural log of some number is the power to which you must raise e in order to get that number. So if you have, say, ln(e²), then our answer is 2, because e to the power of 2 obviously equals e². So let's look at it this way: e to what power equals i?

Now we bring in Euler's formula again.

e^(ix) = i when cos(x) = 0 and i·sin(x) = i

This is true for x = π/2, because cos(π/2) = 0 and sin(π/2) = 1.

So then ln(i) = iπ/2, which means that i^x = e^(iπx/2) = cos(xπ/2) + i·sin(xπ/2). With that conversion, we can evaluate i to any power at all, not just integer powers. But to reaffirm that this isn't some trick, let's go ahead and see what evaluating it to integer powers means.

$$i^1 = \cos(\pi/2) + i\sin(\pi/2) = 0 + i = i$$

$$i^2 = \cos(\pi) + i\sin(\pi) = -1 + 0i = -1$$

$$i^3 = \cos(3\pi/2) + i\sin(3\pi/2) = 0 - i = -i$$

$$i^4 = \cos(2\pi) + i\sin(2\pi) = 1 + 0i = 1$$

This is the exact same pattern we saw above, but this time through the lens of Euler's formula rather than the logic of manipulating √-1. For non-integer values of x, you get complex numbers that, when treated as vectors on the complex plane, are all a distance of 1 from the origin, creating a circle of radius 1. Through purely algebraic means, this connects back up with the geometrical interpretation of imaginary numbers suggested by Gauss.
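Here's that last claim in Python: raise i to assorted powers directly and compare against e^(iπx/2). The two agree, and every result sits a distance of 1 from the origin:

```python
import cmath
import math

for x in (1, 2, 3, 4, 0.5, 1.3):
    direct = 1j ** x                             # Python's own complex exponentiation
    via_euler = cmath.exp(1j * x * math.pi / 2)  # cos(x*pi/2) + i*sin(x*pi/2)
    print(x, direct, via_euler, abs(direct))     # abs(...) is 1 (up to rounding): the unit circle
```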

Okay, I'm done now. I hope this sheds some light on the interconnectedness of math, which can be demonstrated by taking the rules you're familiar with and applying them to unfamiliar situations. When people speak of the beauty of math, this is it. In the real world, we often find depth and meaning through metaphors that connect disparate ideas. That's what art and literature are all about. Math does the same thing, but with numbers, letters, and funny symbols.

(On the other hand, I may have written this post just to play around with LaTeX.)

Friday, March 13, 2015

On Dumbledorean Realism

I wrote a paper this week for my literature in philosophy course discussing the dream argument. Because it's been a little while since my last post, I think I'll reproduce the paper here (with a few changes) just for the heck of it. I procrastinated, though, which means I wasn't quite able to make my point as well as I had intended.

The gist of my argument is that there is no way to define a concept of a "real world" that resembles the world we inhabit (and are comfortable calling the real world) while simultaneously excluding the possibility of "unreal worlds." This leaves us with two possible conclusions: (1) if we do actually inhabit an "unreal" world, then unreal worlds are what reality actually is; or (2) we inhabit an unreal world and real worlds are nothing at all like the type of world we live in.

When talking about the world we seem to live in, I lean toward option 1 because I think it allows us to do some work ontologically. That is to say, I think we can feel justified in calling real many things that might not seem to be real depending on your point of view (subatomic particles, ideas, time, etc.). When talking about my truly fundamental beliefs, however, I subscribe to a system that you might say is a combination of options 1 and 2. But that's a whole 'nother bag of beans (worms? shrimp? cats?--a quick googling doesn't settle this). Anyway, without further ado, here's my damn essay. Oh, also, spoiler alert for the final Harry Potter. But come on, I haven't even read the book and I know what happens.

Near the end of the final book in J. K. Rowling’s Harry Potter series, Harry Potter and the Deathly Hallows, Harry has a seemingly impossible conversation with his mentor Albus Dumbledore. The seeming impossibility of this conversation is predicated on both characters apparently being dead at the time. As the conversation draws to a close and Harry realizes that he might not actually be dead, he asks Dumbledore, “Is this real? Or has this been happening inside my head?” The ever clever Dumbledore answers, “Of course it is happening inside your head, Harry, but why on earth should that mean that it is not real?”
This brief exchange alludes to a problem that philosophers have wrestled with at least since Descartes and to a plot device employed in many works of fiction, from Borges’ short story The Circular Ruins on through to contemporary films such as The Matrix and Inception. The problem is this: what is the difference between the real world and one only inside our head, or one that is illusory or fictitious? To get to the heart of the matter, the question is often posed thusly: how do you know that you are not dreaming or being dreamt? If we could answer this question succinctly, then we would have a clear conception of what the real world is and whether or not we are in it.
I think it might be useful, however, to tackle this question from the opposite direction. So the question might instead be posed: how do you know that you are dreaming? That is to say, if we assume that you are dreaming, what could happen in the dream world that would allow you to correctly conclude that you are, in fact, dreaming? There is an easy but unsatisfactory answer that immediately comes to mind—you could wake up. Unfortunately, all this tells you is that you were dreaming; it gives you no information about what’s happening to you in the moment.
In fact, waking up doesn’t even tell you that you’re not dreaming, because it is not entirely uncommon to have a “dream within a dream” à la Inception. That phrase may be something of a misnomer, though, for what it describes seems no different than moving from one dream to another, an experience with which many of us are also familiar. It is more accurate to say, then, that dreaming can be followed by the apparent experience of waking up, regardless of whether or not we actually do wake up.
Rather than focusing on waking up, it might be useful to examine elements of dreams that strike us as particularly dream-like. But if we’re dispensing with waking up, we can generalize dreaming to include other types of unreal experiences, such as being simulated, fictional, dreamt, or imagined. The common thread that binds these experiences is an apparent disconnect between our subjective awareness and what the real world truly is. It may seem something of a leap to lump in these other concepts, however, because all of us have had the subjective experience of dreaming but few of us would claim to have ever been a fictional character. In comparing these disparate types of unreality, then, we must consider not what it feels like to be that way but what elements are common to our conception of unreal worlds.
I posit that there are four features we might say are characteristic of various forms of unreality. These are abrupt changes, rule violations, missing information, and absurd scenarios. To get an idea of what I mean by these terms, a few examples might be necessary.
We’ve already seen examples of abrupt changes just a few paragraphs up. If you move from one dream to another, then the steady flow of reality has been altered, continuity broken. You may have been dreaming of playing in the World Series and then suddenly shifted to a dream of your wedding day. More generally, abrupt changes abound in our unreal creations. In chapter 6 the main character may decide to take a trip across the country, and in chapter 7 the main character may arrive without the intervening journey having been written by the author.
Rule violations would seem to be the most obvious feature of unreality. Natural laws apparently govern what we are comfortable calling the real world, so an unreal world should not feel bound to obey said laws. Stories taking place in a fantasy or science fiction setting are often rife with events that could not happen according to the laws as we know them. Dreams very often involve impossible happenings, such as reunions with long-dead relations or the ability to fly by flapping your arms. The only limit to what may happen in an unreal world is our imagination, and I can imagine a being possessing a far greater imagination than I have.
Our next unreal attribute is a little harder to pin down. Missing information is the fact that unreal worlds are often insufficiently detailed. An author may write a mundane, temporally continuous story where nothing out of the ordinary happens, but it is very unlikely that the author will describe, unless motivated to do so by story concerns, how that character’s internal organs function, or what’s happening on the other side of the world. This might not seem troubling; after all, I am not constantly aware of everything happening inside my body. But if a fictional character can have a subjective experience produced by the work of fiction that character inhabits, does that character have internal organs not written about? Worse still, if a fictional character is in a room described as merely “plain” or “having four walls,” how rich are the perceptions of that character regarding the room? This is missing information.
Finally, unreal worlds are very often absurd. What constitutes absurdity can certainly be a matter of opinion, especially because I am distinguishing this from scenarios that explicitly contravene physical laws. So for our purposes, absurd scenarios are ones that are prohibited by no natural laws but that we are confident would never happen in reality due to their implausibility. I may dream that I am trapped in an elevator playing Monopoly with all of my ex-girlfriends; this is a deeply unlikely scenario, but no law ever conceived of by Newton says it cannot happen. Absurdist fiction follows similar lines. Look to any TV sitcom such as Seinfeld for examples of situations that may not be physically impossible, but certainly aren’t likely.
With the features of unreality defined, are we now equipped to correctly conclude, if we’re dreaming, that we are? Unfortunately, we are not. If these four elements are common to unreality, then I can identify three possible scenarios we associate with the real world that could explain these elements.
The first is this: in what we are comfortable calling the real world, our scope is limited. Humans are finite, non-omniscient beings. We gather up our experiences of the world through our senses and derive much more, but not everything, from our capacity to reason and imagine. I mentioned earlier that the impossibility of unreal worlds can be thought of as a product of our seemingly unlimited imagination. And it may be true that our imagination is infinite. But even if it is, infinity is not everything. For example, it can be shown that there is an infinite quantity of rational numbers between 0 and 1 (1/2, 1/3, 1/4 … 1/10,327,452, etc.), and yet none of those numbers is the number 2 (or any other number greater than 1, of which there are an infinite number). So even granting an unlimited imagination, a human’s experience of the world is not all of the world.
Thus we are very often apt to encounter events we have failed to anticipate, events which may seem to violate the laws of the universe or be absurd. Consider the first Native Americans to witness European colonists sailing in giant wooden ships, riding horses, and firing guns. No experience had by a Native American up to that point could have prepared them for such an encounter, and yet it happened and was real. Or consider what it might have been like if an asteroid comparable to the one that killed the dinosaurs had struck the Earth during the course of human history but before the advent of telescopes. The world would have changed abruptly, and the change brought about would have been absurd and seemingly in violation of the natural laws taken for granted. The real world is certainly not a place that can suddenly be engulfed in flames, tidal waves, and blackened skies, we would have thought. But we would have been wrong.
From this we can see that our expectation of what is absurd or impossible is a consequence of the limited scope through which we view the world. It is highly dependent on what we have experienced or imagined so far.
The second scenario in which the defining qualities of the unreal world become insufficient is one in which our senses deceive us. All of us are aware that we can be fooled by optical illusions or that we can hallucinate. We think of such instances as being exceptional, but increasingly research in neuroscience points to our being fooled as the norm. This fact can account for abrupt changes and missing information, to say nothing of hallucinations in which absurd or impossible events occur. A real example of an abrupt change in the world is that which occurs during a bout of dreamless sleep. It is night outside, and then suddenly it is light and eight hours have passed. We excuse the continuity break only because it happens every day. A further illustration is highway hypnosis, in which we can be in one place at one time and then another place at another time with no conscious awareness of what occurred in between.
Missing information manifests in our shoddy attention to the world around us. Cognitive scientists have great fun demonstrating our inattentional blindness by having us watch videos in which we can miss wardrobe changes, people swapping, or gorillas. All of this demonstrates that we can completely fail to be aware of the real world out there and yet have no sense that we do not inhabit a richly detailed world.
This conception, however, is predicated on there being a real world which we can somehow know despite what our senses tell us. Much of this view arises out of modern science, which has allowed us to build up a representation of the world that is free of illusions and hallucinations but also only marginally connected to what we observe empirically. So while we may see color and shape and contrast, what we know from physics tells us that light is just electromagnetic radiation of a particular wavelength, governed by Maxwell's equations.
But this modern notion is ultimately borne out of experiments performed and reason applied to the observed results of those experiments. In other words, observation has taught us that observation is flawed. But our observations of the real world and our observations about our observations are flawed in the same way: we do not connect directly to the world but build up an image that is filtered through our senses and constructed by our brain. More abstractly, there is a real world, and there is our experience of that world; they are not the same thing. Here it would be wise to remember Morpheus from The Matrix, who tells Neo, “If you're talking about what you can feel, what you can smell, what you can taste and see, then 'real' is simply electrical signals interpreted by your brain.”
Finally, all manner of unreal occurrences can be accounted for if we live in a world governed by supernatural entities. This is the famous evil demon present in Descartes' Meditations. But it is also a world governed by any kind of god whatsoever. If we live in a world in which miracles can occur, then we live in a world in which the laws of physics can be flouted, abrupt changes can occur, and absurd events can transpire. Rather than being evidence that we are dreaming or fictitious, miracles would be evidence in favor of a particular supernatural entity.
Moreover, if something exists that is supernatural, the implication is that two kinds of world exist: the natural and the supernatural. Superficially, miracles connote a world that very much seems to resemble an unreal world. If we are dreaming, dreamt, fictitious, imagined, or simulated, then there is some person or entity which is responsible for and has created the unreal world of which we are a part. We could call such an entity a god.
Some might object here by arguing that this is not what fictional universes are generally like. If an author writes a fantasy novel, there may be gods in that novel, but the author is not usually one of them. And yet it is not inconceivable that such a story could be written. It would be no trouble at all for me to write a story about characters in a world created by the god Ori Vandewalle, who sets forth such and such laws and demands such and such prayers. In a slightly less vain direction, science fiction author Greg Egan has written a trilogy of books, beginning with The Clockwork Rocket, that takes place in an alternate universe with laws of physics different from our own. If we are positing the reality of fictional characters, he has created a new universe subordinate to and different from our own.
So then we have failed to identify criteria sufficient for determining that we are dreaming. But this failure is not a result of dreaming being too slippery a phenomenon to get a handle on; rather, the conclusion is that the type of awareness that comes from existing in an unreal world is indiscernible from the type of awareness that comes from existing in a real world. That is to say, there is no difference between real and unreal. An "unreal world" is one in which a creator in the "real world" imposes an incomplete, incongruent, potentially impossible image on the inhabitants of the unreal world, an image which may not be empirically similar to the real world. Our real world, on the other hand, is one in which we construct an image of the world from the information that falls into us, and the image we form may be incomplete, incongruent, potentially impossible, and ultimately controlled by a supernatural entity.
We cannot know if we are awake because there is no difference between being awake and dreaming. Or rather, if we are forever dreaming, or being dreamt, or fictional or simulated or imagined, then that’s what it is to be real. We might call this Dumbledorean Realism. Yes, it may all be in our heads, but that doesn’t mean it isn’t real. To say otherwise, to say that being a fictional character is not what it is to be real, is to say that a true real world is one in which unreal elements cannot impose themselves—a world that could not have been made by a creator, where subjective experiences map directly onto the world perfectly, and where all inhabitants are omniscient and could only fail to anticipate that which could not happen anyway.

Sunday, March 1, 2015

Fun with Fourier

Here's the moment you've all been waiting for, folks, when I get off my philosophical soapbox and return to regaling you with exciting tales of studying math and physics. Oh yeah!

Because it's been awhile since I've done one of these explain-what-I-just-learned-about posts, I'm gonna cover a lot of (too much) ground here. This explainer of mine is going to run through Fourier analysis (learned in my math methods course), quantum degeneracy pressure (learned in my thermo class from last semester), and the fate of stars (learned in Astro 121). Whew. So let's get started.

If you've ever seen an orchestra in concert, you know that before the orchestra begins playing, the conductor has the musicians tune their instruments. One person will play a note, and the rest will adjust their instruments to match that note. Listening to this process, a thought may have occurred to you: if all those instruments are playing the same note, why do they each sound different?

This is a complicated question, but the relatively simple answer is that a musical note, along with being described by a frequency (pitch) and an amplitude (loudness), can also be described by its quality or timbre. But what timbre represents can get us into some meaty and far-reaching math.

Say an instrument of some sort plays a Concert A. That means it produces a sound wave of 440 Hz. 440 Hz just means some process repeats 440 times per second. And a sound wave is just a repeated change in air pressure. With no other distracting information, we could graph such a phenomenon like this:

[Figure: a graph of the 440-per-second pressure spikes. Original caption: "Fun with Excel."]
But there are a couple of problems with this graph, some physical and some mathematical. Let's talk about the physical problems first. Sound is a wave that travels through a medium: air. Air is known for being something of a pushover; you walk right through it all day long as if it weren't even there. But if you've ever encountered a stiff breeze, you know that air is, in fact, there.

Even if the wind isn't blowing, however, air molecules are still going to resist your attempts to push them along. You will have to accelerate them, and you will have to keep pushing the air as each molecule bumps into the next one, transfers its momentum, and loses some energy along the way. The end result is that while your musical instrument may produce some momentary impulse exactly 440 times per second (unlikely), the air's density and viscosity are going to smear out those pressure changes into something more wave-like:

[Figure: the smeared-out, wave-like version of the same signal. Original caption: "Thanks, Wikipedia."]
Let's get into sound's wave properties a little more. Waves operate under the principle of superposition, which says that you can find the amplitude of any wave phenomenon (loudness for sound, brightness for light, etc.) at any point in space by adding up the amplitudes of all the relevant waves at that point in space. This is why the acoustics of a concert hall matter. If the crest of one wave meets the trough of another wave, then your waves cancel out and you're left with a dead spot. Alternatively, if two crests meet, they combine to be louder than either wave individually. This will become important in a bit, so keep it in mind.
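Here's a bare-bones numerical sketch of superposition with two 440 Hz waves: add them in phase and you get twice the amplitude, shift one by half a cycle and you get a dead spot (numpy used purely for convenience):

```python
import numpy as np

t = np.linspace(0, 0.01, 1000)   # 10 milliseconds of time
f = 440.0                        # Hz

wave = np.sin(2 * np.pi * f * t)
in_phase = wave + np.sin(2 * np.pi * f * t)              # crest meets crest
out_of_phase = wave + np.sin(2 * np.pi * f * t + np.pi)  # crest meets trough

print(in_phase.max())              # ~2: twice as loud
print(np.abs(out_of_phase).max())  # ~0: a dead spot
```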

The mathematical objection to the above graph goes like this. If I look at a limited portion of the graph, how do I know what the frequency of the wave is?

[Figure: a small, ambiguous slice of the waveform. Original caption: "Not so useful."]
The answer is that I don't know. In fact, the smaller a segment of time I look at, the less I can know about the definite frequency of the wave, which means the more possible frequencies the wave could have.

That right there is an interesting way of phrasing things: the more possible frequencies the wave could have. Why, that almost makes it sound as if the wave could have multiple frequencies. Does that even make sense, though? It does, for the reason we talked about above: the principle of superposition. When two waves meet in one place, they combine into one wave. This happens even if the waves have different frequencies.

The discontinuous impulse above, then, could just be many waves on top of each other, with many different frequencies combining in such a way as to cancel out almost everywhere except at precise points. Does this rescue our perfect Concert A? Not quite.

The next question that springs to mind is, where are all these different frequencies coming from? And the answer is that a musical instrument does not produce a note at a single frequency of 440 Hz but many tones at frequencies (harmonics) related to the fundamental of 440 Hz. There will be a tone at 2×440 Hz, 3×440 Hz, 4×440 Hz, and so on, all at different amplitudes depending on the properties of the instrument. The combination of these many harmonics into a single sound is the main component of the timbre, or quality, of a note.

All these different sound waves add together, shifting a wave away from a perfect sinusoid and toward something with a sharp peak. But to get that sharp peak, you need a lot of waves at a lot of different frequencies and very high amplitudes. A musical instrument is only going to provide strong amplitudes at specific overtones of the fundamental, so you're very unlikely to get the original graph up above.
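Roughly, that addition looks like this in code: a 440 Hz fundamental plus a few harmonics at integer multiples, with amplitudes I've simply invented to stand in for some instrument.

```python
import numpy as np

t = np.linspace(0, 0.01, 2000)
fundamental = 440.0
amplitudes = [1.0, 0.5, 0.25, 0.125]   # made up; a real instrument sets these

note = sum(a * np.sin(2 * np.pi * fundamental * (n + 1) * t)
           for n, a in enumerate(amplitudes))
# "note" is no longer a pure sinusoid; its shape (and therefore its timbre)
# depends on which harmonics are present and how strong they are.
```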

Mathematically, the process of decomposing a single wave into its constituent waves is known as Fourier analysis. In fact, you can represent any periodic signal--or even any "well-behaved" function at all (and some not so well-behaved ones)--as a series of sinusoids of varying frequency and amplitude. You can even perform what's known as a Fourier transform which produces a power spectrum, a graph of the strength of each frequency present in a signal.

The perfect sine wave, which has one well-defined frequency, will look like a spike when you take its Fourier transform, the power spectrum. On the other hand, the sharp impulse, which is made up of many different frequencies, will have a Fourier transform that is spread out. It is impossible to have a signal that is a spike both in time and in frequency. There's a minimum level of uncertainty across the two representations.
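numpy's FFT makes a handy stand-in here if you want to see the difference between the two power spectra for yourself (my example parameters, nothing canonical):

```python
import numpy as np

rate = 44100                        # samples per second
t = np.arange(0, 0.1, 1 / rate)     # a tenth of a second of signal

pure = np.sin(2 * np.pi * 440 * t)  # one well-defined frequency
impulse = np.zeros_like(t)
impulse[len(t) // 2] = 1.0          # a single sharp spike in time

for name, signal in (("pure sine", pure), ("impulse", impulse)):
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1 / rate)
    print(name, "strongest bin at", freqs[np.argmax(power)], "Hz;",
          "fraction of total power in that bin:", power.max() / power.sum())
# The sine puts essentially all its power in a single bin near 440 Hz;
# the impulse smears its power across every frequency bin.
```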

Uncertainty, you say? Yes, like Heisenberg's principle. Heisenberg's uncertainty principle can be looked at as arising from the wave nature of all matter. A wave cannot have an absolutely precise location in space while also having an absolutely precise wavelength (which is related to frequency). This comes directly out of the observation made up above: the smaller a slice of time you look at, the less information there is about a wave's frequency, which means the more possible frequencies a wave can have.

A century's worth of experiment has revealed that matter is, in fact, composed of waves. Just as sound waves can interfere with each other to produce acoustic dead spots, electrons can interfere with each other, too. While there are very small and precise experiments such as the double slit that bear this out, there is a rather stunning example that exists on a cosmic scale, too.

So, another interesting fact about electrons is that they obey the Pauli exclusion principle, which says that no two electrons can occupy the same state. Why this is true and what exactly it means is complicated and beyond my current knowledge level, but fundamentally it means that as you compress matter to a denser and denser state, each electron present has fewer and fewer allowed states. This means the uncertainty in the position of each electron goes down, which means the uncertainty in its frequency goes way up. An electron's frequency is tied to its momentum, so the more you compress an electron, the faster it will move.

For particularly dense matter, like the kind you might find in a white dwarf star, this momentum creates pressure which prevents the star from collapsing. However, there is a limit to this pressure. An electron cannot travel faster than the speed of light, which means that as a star gets denser and denser, the increase in electron degeneracy pressure slows down.

In normal stars, the denser it gets, the hotter it gets, and the hotter it gets, the more the star pushes back against gravity, which subsequently cools the star. But degeneracy pressure doesn't come from temperature; it comes from the quantum nature of matter. So as the star gets denser, it gets hotter, eventually leading to a runaway fusion process that annihilates the star in a supernova--a spectacular explosion that can outshine a galaxy and typically leaves nothing of the white dwarf behind.

The limit imposed by the speed of light leads to a maximum possible mass for a white dwarf, about 1.4 solar masses, known as the Chandrasekhar limit. A white dwarf cannot exist with a mass any greater than that, and sure enough, no white dwarfs with a greater mass have ever been found. But what's more, because (almost) all white dwarf supernovae happen at 1.4 solar masses, they all look pretty much identical. In fact, the characteristic explosion of a white dwarf supernova is so reliable that it gives astronomers a standard candle by which to measure distances across the universe. And this reliability is a direct consequence of the wavelike nature of matter.

So there you go: from music to cosmology, by way of Fourier analysis. By the way, if you want to combine music and cosmology, check out this guy's site. Without getting into hairy mathematics, he talks about the power spectrum (Fourier transform) of the cosmic microwave background, and how in a very real sense this can be thought of as the sound of the early universe. It's fun stuff.