Wednesday, December 30, 2015

The War on Stars

This post contains spoilers for both Star Wars: Episode VII The Force Awakens and my academic semester. Read on at your own peril.

As always, I must begin by apologizing for not having posted in months. My academic load this semester, combined with my work schedule, was probably about the limit of what I could handle and didn't leave me with a lot of time left over for blogging (or sleeping, for that matter). To remedy that, during winter break I'm going to try to find time to write about the classes I took, maybe posting every week or so. We're starting off today with my observational astronomy course.

But we're getting there through Star Wars. To begin, I enjoyed the movie a great deal (all three times). I also go into a Star Wars movie turning off the part of my brain that cares about scientific plausibility or consistency. In fact, I'm partial to the idea that Star Wars is science fantasy rather than science fiction, whatever that distinction may signify. Yet looking at media through a scientific lens is a fun way for me to analyze it, and it might even be educational. We'll see.

So, of course, TFA has a galaxy's worth of scientific errors, but there's one visual in particular I'd like to take a look at, because I think it gets at something important in astronomy. When the First Order fires the weapon from Starkiller Base at the New Republic, Finn on Takodana (Maz Kanata's planet) sees the beam split up and strike different planets in the Hosnian system. This is an impossible image, given the assumption that Starkiller Base, Takodana, and the Hosnian system all orbit different stars. The reason this image is so impossible is that, as the great Douglas Adams informed us, space is big, really big.

Now, I'm not thinking about the fact that light travels at a finite speed and there wouldn't have been time for the image to show up in Takodana's atmosphere. This is a universe with faster-than-light travel, so let's just mumble something about hyperspace and ignore that. Imagine that it did take years for the light of the beam to stretch across the lightyears; it still wouldn't look like it does.

The problem is that you can see multiple beams at all, that they can be resolved as striking different places. In astronomical terms, the angular separation between the beams is absurdly large. This point can be made with a simple trigonometric argument. If we imagine two lines connecting Finn's eyes and the planets struck by the beams, and another line connecting those two planets, we can make a little triangle.


What we're looking for is the angle between lines C and A. For our purposes, the relative lengths of A and C don't matter and we can just call one of those lines the distance between Takodana and the Hosnian system. Treating this as a right triangle (which is fine, since the angle is tiny), trig gives us the formula sin θ = B/C. But in astronomy we make use of the small-angle approximation a lot, which says that for very small θ, the sine of θ is approximately θ. So then we have θ = B/C.

The significant part of this formula is that, for astronomical purposes, staring up at the sky only gives us θ, not B (the size of the thing we’re looking at) or C (the distance to the thing we’re looking at). This means, without other factors, we can’t tell if we’re looking at a big object far away or a small object nearby.

Digging around Wookieepedia and starwars.com, it seems that Takodana is supposed to be in the Mid Rim of the galaxy and Hosnian Prime in the Core. If we assume that this galaxy is about the same size as ours (not necessarily a great assumption, but published maps show something like a spiral galaxy), then halfway out of the Core gets us a distance of 25,000 lightyears. We don't know the distances between the planets in the system, but if we make the very generous assumption that they are as far apart as Earth and Neptune, we get a distance of 4 lighthours. Plugging those numbers into the above formula (B=4 lighthours, C=25,000 lightyears), our angular separation is 2×10⁻⁸ radians, which converts to 4 milliarcseconds (mas). 1 mas is 1/1000 of an arcsecond, which is 1/60 of an arcminute, which is 1/60 of a degree. By comparison, the moon has an angular size of 31 arcminutes, over 400,000 times bigger.
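
If you want to check my arithmetic, here's the whole estimate as a few lines of Python (the inputs are my rough guesses from above, not anything canonical):

```python
HOURS_PER_YEAR = 365.25 * 24            # hours in a year
B = 4                                   # planet separation, in lighthours (a guess)
C = 25_000 * HOURS_PER_YEAR             # distance to the system, in lighthours (a guess)

theta_rad = B / C                       # small-angle approximation: theta = B/C
MAS_PER_RAD = 206_265 * 1000            # milliarcseconds per radian
theta_mas = theta_rad * MAS_PER_RAD

moon_mas = 31 * 60 * 1000               # the moon's 31 arcminutes, in milliarcseconds

print(f"beam separation: {theta_mas:.1f} mas")               # ~3.8 mas
print(f"moon is ~{moon_mas / theta_mas:,.0f} times bigger")  # ~490,000
```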

So the beams wouldn't appear that far apart. In fact, you wouldn't be able to tell them apart at all. Okay, but why am I fussing about this? Because it gets into some interesting aspects of observational astronomy having to do with the wave nature of light. Specifically, when light waves enter an aperture, they diffract around the edges and form interference patterns. It's inevitable and must be taken into account no matter what type of observation you're doing.

When light diffracts through a perfectly circular aperture, it forms the following interference pattern, called an Airy disk.
"Airy pattern" by Sakurambo at English Wikipedia
That is, if you were to shine a laser pointer through a circular hole, instead of a dot on the other side, you would get the above pattern. However, trying this with a store-bought laser pointer and a hole punch won’t get you much, because the pattern is very sensitive to the wavelength of light used and the size of the hole.

In the case of the Starkiller beam, the aperture we're talking about is your pupil. The human pupil can change in size based on lighting conditions, but a good average diameter is 5 mm. The wavelength of the beam's light is based on its color. The red light of the Starkiller beam is at the long end of the visible spectrum, so let's call it 700 nm. These two variables play into the size and spread of the interference fringes.

In the 19th century, Lord Rayleigh proposed a criterion for determining the limits of image resolution. He said that if the centers of two images are closer together than the angular distance to the first minimum of the diffraction pattern, then you can't resolve them as two objects. This is arbitrary, but not entirely made up. If you add together the intensities of two interference patterns separated by less than that minimum, this is the difficult-to-interpret graph you get. Are you looking at one object or two?


The pattern of the Airy disk is described by a Bessel function, which is a special function invented to be the solution to some common differential equations. The first minimum of the Airy disk is the point where the function goes to 0 for the first time and happens at an angular distance of θ = 1.22λ/D, where λ is the wavelength of light, D is the diameter of the aperture, and 1.22 is a rounded-off figure for a number that goes on forever, because Bessel functions aren't very nice functions.

In fact, my observational astronomy professor explained that if we're going to use 1.22, we might as well memorize a few more digits because that number only comes up with perfectly circular apertures anyway, and 1.22 is not much greater than 1, so you're not gaining much precision as it is. In most cases, making the approximation that θ = λ/D works well enough. The interesting thing to note about this criterion is that fine angular resolution results from small wavelength or large aperture. This is why radio telescopes are much bigger than optical telescopes. Radio telescopes are looking at very large wavelengths (centimeters to meters compared to hundreds of nanometers), so to be able to resolve images, they need much larger apertures.
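
To put some numbers on that, here's a quick Python comparison using the θ = λ/D approximation. The 5 mm pupil and 700 nm light are the guesses I'm about to use below; the 25 m dish looking at 21 cm radio waves is my own made-up comparison point, not anything from the class.

```python
ARCSEC_PER_RAD = 206_265

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    """Approximate angular resolution, theta = lambda / D, converted to arcseconds."""
    return wavelength_m / aperture_m * ARCSEC_PER_RAD

print(diffraction_limit_arcsec(700e-9, 5e-3))   # human pupil, red light: ~29 arcsec
print(diffraction_limit_arcsec(0.21, 25))       # 25 m radio dish at 21 cm: ~1,700 arcsec
```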

Since I just made up the wavelength of our beam and I'm assuming the pupil is exactly 5 mm, let's leave off the .22. In that case, our minimum angular resolution is 700 nm/5 mm = 1.4×10⁻⁴ radians, which comes out to 29 arcseconds. This limit is ~7000 times higher than our estimated angular separation of 4 mas for the Starkiller beams. To our eyes, the split beams would look like one beam.

...if they looked like anything at all. If you remember, Finn also saw the beams during the daytime. And as you may also remember, the only celestial object we tend to see during the day is the Sun (and the moon depending on its phase, and occasionally some planets and stars near sunrise and sunset). We intuitively know why this is: the Sun washes out dimmer objects. Even the reflected light of the Sun in the atmosphere is bright enough to wash out dim objects.

But why should that be? If the patch of sky where a star sits contains both star and atmosphere, shouldn't it be a smidgen brighter than atmosphere alone? And shouldn't we be able to tell the difference? It turns out we can't, and the reason why is preserved in an ancient system for judging the brightness of stars that has persisted to this day with a few modifications.

The Greek astronomer Hipparchus set about cataloging the fixed stars a little more than two thousand years ago, managing to compile the position and brightness of several hundred of them. He called the brightest ones “stars of the first magnitude,” the second brightest “stars of the second magnitude,” and so on down to the dimmest stars visible to his naked eye, which he placed at magnitude six. Many an astronomy student today curses Hipparchus for giving lower numbers to brighter stars, but the system has stuck nonetheless.

In the 19th century, the English astronomer Norman Pogson realized that with a little fudging, it looked like 1st magnitude stars were 100 times brighter than 6th magnitude stars. You can divide this up a little further and discover that a magnitude jump of 1 represents a change in brightness of about 2.5 (2.5⁵ ≈ 100). But to our eyes, 1st magnitude stars don't seem to be 100 times brighter than 6th magnitude stars. They're not necessarily 6 times brighter either, but that's much closer to what we perceive than the physical reality. That's because human eyes don't respond to light in a linear fashion, but on a logarithmic or power scale instead (the details are messy and beyond my understanding).

If one star is twice as bright as another star, the above relation tells us that the magnitude difference is less than 1. In other words, Hipparchus might not even have noticed. The gist is that very small changes in brightness don't register to us if they are below a threshold called the just-noticeable difference. So while star+atmosphere is slightly brighter than atmosphere alone, it's not enough of a difference for our eyes to notice. And if the Starkiller beams shine with the brightness of a star (which seems about right given that Starkiller Base seems to explode into a star), then we wouldn't be able to see the beams at all during the day, let alone tell them apart.
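
For the numerically inclined, the modern version of Pogson's relation says the magnitude difference between two objects is 2.5 times the log of their brightness ratio. A quick check in Python:

```python
import math

def magnitude_difference(brightness_ratio):
    """Pogson's relation: magnitudes separating two objects, given their brightness ratio."""
    return 2.5 * math.log10(brightness_ratio)

print(magnitude_difference(2))     # ~0.75 mag: twice as bright barely registers
print(magnitude_difference(100))   # 5.0 mag: the full 1st-to-6th magnitude span
```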

But this isn't a problem just for human eyes. We don't point our telescopes at the sky during the day for the same reason. Modern telescopes pipe their images down to CCDs, digital devices that convert photons into electrons and count them up at each pixel. We can tell we've found something in a CCD if there's a signal that is significantly more intense than the background. But the background is noisy, and if the fluctuations from noise are greater than the difference between the background and the signal, then we can't tell if we've actually found anything at all.
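
Here's a toy illustration of that (my own made-up numbers, not anything from the class). Photon counts roughly follow Poisson statistics, so a background of N counts fluctuates by about √N, and a signal smaller than those fluctuations gets lost:

```python
import numpy as np

rng = np.random.default_rng(42)

background = 10_000          # mean background counts in a pixel
signal = 50                  # extra counts contributed by a faint source

empty_pixel = rng.poisson(background)            # a pixel with just background
source_pixel = rng.poisson(background + signal)  # a pixel with background + source

print(f"empty pixel: {empty_pixel}, source pixel: {source_pixel}")
print(f"signal = {signal}, typical noise ~ {np.sqrt(background):.0f}")
# The 50 extra counts are buried in ~100-count fluctuations, so one exposure
# can't tell these two pixels apart.
```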

Returning to Hipparchus for a moment, early astronomers noticed that brighter, lower magnitude stars appeared bigger than dimmer stars. We now know that the biggest stars are about a thousand times wider than our Sun. Yet we don’t see any stars in the sky that are a thousand times bigger than any other stars. In fact, it turns out the star with the largest angular size is R Doradus at 0.057 arcseconds. This is still tiny, with the moon about 30,000 times wider. But it doesn’t seem plausible that we could line up 30,000 stars as we see them in the night sky across the face of the moon.

The answer goes back to diffraction. To the naked eye, all stars are too small to have a resolvable disk. Instead, while the width of the central peak of a diffraction pattern is a function only of the aperture size and wavelength, the intensity across that width depends on the overall brightness of the object. As such, brighter stars appear bigger to our eyes: the whole Airy pattern is brighter, so more of that not-point-like pattern rises above the threshold we can detect. Thus the size of a star in the night sky is not directly related to its physical size except insofar as bigger usually means brighter. We are not seeing the physical disk of the star itself, only the illusory Airy disk that results from diffraction.
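
To convince myself of that last point, here's a little sketch. It evaluates the same Airy pattern at two different overall brightnesses and asks how far out each one stays above a fixed detection threshold. All the numbers are arbitrary, and the Bessel function comes from scipy.

```python
import numpy as np
from scipy.special import j1

def airy_intensity(theta, peak, wavelength=550e-9, aperture=5e-3):
    """Airy-pattern intensity at angle theta (radians) for a circular aperture."""
    x = np.pi * aperture * theta / wavelength
    return peak * (2 * j1(x) / x) ** 2

# Start just above zero to avoid dividing by zero at the exact center.
theta = np.linspace(1e-9, 5e-4, 200_000)   # radians
threshold = 1.0                            # arbitrary "dimmest thing the eye notices"

for peak in (10, 1000):                    # a dim star and a bright star
    visible = theta[airy_intensity(theta, peak) > threshold]
    print(f"peak {peak}: visible out to ~{visible.max() * 206_265:.0f} arcsec")
```

The brighter star's pattern stays above the (arbitrary) threshold several times farther out than the dim star's, so it looks bigger, even though the underlying pattern has exactly the same shape.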

Anyway, I think that’s enough nerding out over Star Wars and astronomy for now, what with me passing the 2000-word mark. I'll have more to say about observational astronomy later because I want to touch on image processing, which was a big chunk of the class. Next up will probably be quantum physics, though, because that’s a demon I’ve yet to exorcise.

Friday, September 4, 2015

Here's Where the Fun Begins

Hey guys. Remember me? Yeah, I haven't done any writing (fiction, blogging, or otherwise) in quite a while due to life being somewhat chaotic of late. I'd like that to change, so here's a quick blog post just to make sure I haven't forgotten how to type.

So I'm almost done with my first week of class, and I have now been to (or watched) at least one lecture for all of my classes. In the order in which I did so, here's a brief summary of said lectures followed by some general commentary. Man, this sounds exciting. I wish it were possible for Statcounter to track the exact paragraph in which my readers decide to leave the page.

Monday morning I had Quantum Physics I, which is an introductory course in quantum mechanics. Intro QM courses often seek to get students to develop some intuition for the quantum realm, which is quite counter-intuitive compared to the well-known land of blocks sliding across inclined planes. To develop this intuition, professors have students solve the Schrödinger equation again and again and again until their dreams are nothing but operators and wavefunctions.

To that end, the textbook we're using is Griffiths, which is apparently the text almost all intro QM classes use. Page one of that book writes down the Schrödinger equation and simply plows ahead from there. My professor thinks this is actually kind of a dumb way to go about things, so we're beginning the semester with the story of how quantum mechanics came to be.

Now, having been a science dude for quite some time, this is a story I've heard a lot. I'm getting some math to go along with it this time, but in general the story of quantum mechanics goes something like this:

Near the end of the 19th century, the kingdom of physics was at peace. Two centuries earlier, the father of physics, the great Sir Isaac Newton, had discovered the Stone of Counting, Calculus (this is a really funny joke), and used it to tame the very moon itself. Later, Maxwell forged together electricity and magnetism to bring light to the world. And Boltzmann conquered heat with entropy. Plus maybe some other things happened in the intervening two centuries.

But then evil blackbody radiation from the quantum realm brought about the ultraviolet catastrophe. Using classical thermodynamics, physicists predicted that hot objects would emit an infinite amount of energy at short wavelengths. Oh no! But then Planck saved the day by creating oscillators that only emitted and absorbed radiation in discrete chunks. Forced to obey a Boltzmann distribution, these oscillators were too few in number at short wavelengths to bring about divergent infinities.
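
For the curious, here's a quick numerical version of that plot twist (my own sketch, not something from lecture): the classical Rayleigh-Jeans formula blows up as the wavelength shrinks, while Planck's formula turns over and stays finite. The two agree at long wavelengths, which is part of why the catastrophe snuck up on everybody.

```python
import math

h = 6.626e-34   # Planck's constant (J*s)
c = 2.998e8     # speed of light (m/s)
k = 1.381e-23   # Boltzmann's constant (J/K)
T = 5000        # temperature of our hot object (K)

def rayleigh_jeans(wavelength):
    """Classical spectral radiance: grows without bound as wavelength shrinks."""
    return 2 * c * k * T / wavelength**4

def planck(wavelength):
    """Planck's spectral radiance: finite everywhere."""
    return (2 * h * c**2 / wavelength**5) / (math.exp(h * c / (wavelength * k * T)) - 1)

for nm in (10_000, 1000, 100, 10):
    wl = nm * 1e-9
    print(f"{nm:>6} nm   classical: {rayleigh_jeans(wl):.2e}   Planck: {planck(wl):.2e}")
```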

Yet this was a false peace. Where did these quantized oscillators come from, and why did they only act in multiples of Planck's constant? Tune in next time to find out. (That's as far as we got in lecture. The rest of the story involves the photoelectric effect, emission lines, and some other stuff, but this post is already 8 paragraphs long and I'm only on my first class. Maybe I'll write a children's book about quantum mechanics.)

Tuesday morning (I have another class on Monday, but it's a discussion section and didn't meet the first week) I had Philosophy of Physics. This class is taught by a Distinguished University Professor who got a PhD in Mathematical Physics several centuries ago but then decided to go into philosophy for some reason. It turns out this class is mostly going to be talking about the "weirdness of quantum mechanics," which should make it a nice complement to that other class where I'm just going to "shut up and calculate."

Weirdness, though, is not about how maybe we're all really connected and you can change the world just by looking at it and other quantum woo like that. To this professor, the weirdness of quantum mechanics arises from an SAT-like analogy. Relativity is to space-time as quantum mechanics is to information. That is, Einstein taught us that space and time aren't what our intuition leads us to think they are, and QM does the same for information. Information, which has roots in probability theory, works differently than we think it does and the consequence is that quantum stuff can be correlated in ways that classical stuff can't. I think this is going to be pretty interesting.

Both my quantum classes were prefaced with a quote from Feynman about how nobody understands quantum mechanics. My QM professor thinks this isn't really true anymore and that the results of QM speak for themselves, whereas my philosophy professor thinks we might be getting close to an understanding via thinking about information theory.

Right after that I had Ancient Philosophy. I'm taking this class mostly because I need a history of philosophy credit for my philosophy minor, but also because I want to learn about some of the lesser known ancient Greek philosophers (Pre-Socratics, Stoics, Epicureans, etc.). And the text is chock full of readings from/about those philosophers. It was a shame, then, to learn that the instructor will mostly be teaching us about the moral philosophies of Plato and Aristotle. Yeah, that's good stuff. But doesn't everyone know that Plato's utopia is a dictatorial city-state run by wise philosopher kings? Sigh.

After a morning of philosophy came Observational Astronomy, which is the next required course in the astro sequence. This course is less about what's out there in the universe and more about how we come to learn about what's out there. We'll be studying optics, image processing, celestial coordinates, statistics of signal and noise, and how CCDs work. The biggest chunk of this class grade-wise is some observational projects where we have to take data from the observatory and process it into something useful and meaningful. That's pretty awesome.

Wednesday morning was another lecture of QM. Wednesday afternoon I had Solar System Astronomy. Like ancient philosophy, I'm taking this course mainly because I need a number of upper level astronomy courses to fulfill my major. I'm not super-interested in solar system stuff, but for some reason I'm trying to graduate next spring (it might have something to do with me turning 30 in a couple months...), which means I kind of have to take what's available. Also like Ancient Philosophy, I learned during the first class that this won't be a wide-ranging course about all aspects of the solar system, but will focus mostly on planetary geology, delving into the planets and other rocky bodies that inhabit our sun's domain.

After some thought, I realized I'm actually pretty okay with this. The one novel for which I have something approaching a rough draft spends a lot of time on Ceres and Europa, two big spheres about which I am not all that qualified to say much, despite the number of Wikipedia articles I've read. So, you know, getting a grounding in how these kinds of worlds really work might improve my ability to write about the things I'm already writing about. Or it just might make my infodumps that much more painful. We'll see. Either way, this course involves a term paper about some topic in solar system astronomy, so I'll definitely be writing.

Thursday was identical to Tuesday, and I'm writing this Friday morning, but Friday is essentially identical to Wednesday. The only class I haven't talked about is an online one, Theory of Knowledge. This is an intro philosophy course in epistemology. The course probably technically started Monday, but due to some technical glitches (the course is being hosted on the professor's personal website, which he coded himself), I wasn't able to watch the first video lecture until Thursday evening.

During that video, the professor talked about the benefits of online courses, such as the freedom to edit lectures into conveniently sized chunks by excising parts that aren't helpful. Also during that video, the professor gave instructions on how to access his site in a video that his students could only be watching if they had successfully accessed his site.

Anyway, I'm pretty excited about this course. Epistemology is a fascinating subject to me as it acts as a bridge between thinking about the world and knowing about it. The basic stance of modern epistemology is that knowledge is "justified true beliefs." But how do we know if a belief is true? And how can we justify our beliefs? And what does it actually mean to believe something? Epistemology asks and attempts to answer all these questions, and it does so in surprisingly technical ways, invoking psychology, neuroscience, Bayesian statistics, and other pretty modern tools.

Week 9 of the course examines the philosophy of psychedelic transformations. (But it's a 15 week course, so I'm okay with a brief excursion into eye-rolling territory.)

And that about does it. This is going to be my busiest, toughest semester since I returned to school for real in 2012. I've got 19 credits of 300 and 400 level classes. Plus I'm working.

As far as general commentary, I have two things to say. The first is a pattern that may be a coincidence or may be indicative of what happens at this level. My observational astronomy, quantum physics, and epistemology classes are all prereqs for more advanced topics. And all of those professors are covering a lot of ground that is necessarily going to be somewhat outside of their precise areas of expertise.

On the other hand, my ancient philosophy, philosophy of physics, and solar system astronomy courses mostly stand on their own and don't lead explicitly to anything else. And my instructors in those classes have chosen to focus on a particular branch of each field that happens to coincide with their research interests. Coincidence? Probably not. But it does mean I may want to pay more attention to which teachers are teaching which classes when I decide to take free-standing, upper level courses.

My other comment is that most of my instructors (this semester and previously) talk pretty openly about pedagogy, which I think is a good sign. One of the stereotypes of college is the ancient professor who stares at the blackboard with chalk in hand, talking nonstop for the duration of the lecture and paying little heed to any students who might also be occupying the classroom. My college career thus far has been largely absent that phenomenon, and I suspect the apparently institutionalized focus on pedagogy is partly responsible for that. So yay.

Thursday, May 14, 2015

Why Am I Writing This Paper?

I went another month without posting. Sorry about that. I have half a dozen things I'd like to write about, but instead I've been swamped with end of semester stuff--term papers, lab reports, studying, etc.

So instead, like I did before to keep your attention, here's one of my philosophy papers. I did not want to write this paper because it deals with a question that (a) has an answer I think is plainly true, (b) is depressing, and (c) brings to mind a lot of the terrible arguments I had with those close to me when I was super depressed.

Consequently, I procrastinated writing this paper and wasn't able to get started on it before I found a way to make it funny. But I did manage to conceive of a fairly novel (to me) argument while writing it, which is kind of the point, so that's good. Unfortunately, in the first draft (which I turned in), that novel argument was kind of muddled. I cleaned things up a bit for this post. So here's hoping my TA tries to find out whether or not I plagiarized anybody and ends up stumbling onto my second draft.

Before writing a paper, one should always figure out why one is writing it. However, to save time, I have decided to answer this question while writing it. More broadly put, the question I’m considering here is whether writing this paper is in some sense a meaningful thing to do. In asking this question, I will also be forced to wonder whether anything at all—up to and including being alive—is meaningful. A cursory examination of my thoughts reveals three potential reasons why I might want to write this paper: to get a good grade, to have some sort of positive impact on the world, and to give my life value. A detailed exploration will reveal that none of these are sufficient reasons for paper-writing and that it’s overwhelmingly unlikely that completing this assignment could be considered at all meaningful. And yet there is no possible way for me to reach this conclusion without analytically contemplating the question itself—without writing the paper. I could have come to a different conclusion, so it would appear that any necessary first step in finding meaning in life is looking for it.
The most compelling reason for writing this paper is that I want to get a good grade on it. When we ask whether something is meaningful in this sense, we’re inquiring as to the point or purpose of doing it. Here I am asking to what ends paper-writing is a means. While it may not always be easy to elucidate the motivation for any particular action, it seems clear that anything we end up doing was motivated by something. Thus the motivation for writing this paper—the meaning in doing so—is that I wish to excel academically. Within the context of academic excellence, it is easy to find meaning in paper-writing.
Where trouble arises is that the goal of getting good grades is itself embedded in broader contexts. So we might be tempted to ask why doing well in school is a meaningful activity. After all, if maintaining my GPA is not meaningful, it’s hard to argue that any task geared toward GPA maintenance is also meaningful in a deep sense. So we can follow a causal chain up from paper-writing that goes something like this: I’m writing this paper to get a good grade; I want good grades so that I can get a degree; I want a degree so that I can find a satisfying, well-paying job; I want a satisfying, well-paying job so that I can live a happy, moral life; I want a happy, moral life so that… well… here’s where our chain runs into some problems.
Why do I want to live a happy, moral life? It might be so that I can raise happy, moral children who will raise happy, moral children, and so on. There’s no escape from the chain in that direction. I might want to live this kind of life because I am motivated to do so psychologically. If I am merely a machine in a clockwork universe, then my desire to live such a life can be understood as a tool of biological evolution for producing viable offspring, like the kind of animal life described by Taylor in “The Meaning of Human Existence.” Happiness is meaningful only insofar as I am a more efficient tool when happy; morality is meaningful because social cohesion provides a better environment for rearing children.
We might be tempted to stop here and find meaning in being happiness-generating biological machines, but doing so forces us to admit other features of the natural world we find less palatable. We are also motivated to kill competitors, to steal mates, and to enslave our inferiors. In fact, any action we take can be rationalized as psychologically-motivated and thus ultimately stemming from biological urges. Not only does this seem to grant legitimacy to terrible actions, but it also doesn’t leave room for degrees of meaningfulness. If writing this paper is just as meaningful as binge-watching House of Cards (consuming popular media signals to others that I am a member of the group, increasing my social status and apparent reproductive fitness, or something), then there’s no positive reason to perform any particular action at all.
If we continue on down the causal chain, we must engage in some reductionism. Biology is nothing more than the chemistry of self-replicating, homeostatic, organic molecules. Chemistry is nothing more than the physics of very large chunks of atoms. And physics is nothing more than a fundamental description of reality. From this vantage, why we engage in any particular action such as paper-writing can be summed up rather neatly: because thermodynamics, or because the fine-structure constant is 0.0072973525698.
While these might be accurate descriptions of why we do what we do, they are not altogether satisfying as explanations. The reason is that there doesn’t appear to be any deeper significance to the laws of physics. It’s difficult to say that the purpose of writing a paper is to conserve angular momentum. In fact, such a statement hardly even seems intelligible, which casts doubt on it being meaningful. At the end of this causal chain, we’re left not with motivations for actions but abstract descriptions of them.
The way out that many take here is to suppose that the underlying rules do exist for a reason, and that reason is God. If there is a transcendent entity who makes all the rules, including the rules that govern what is meaningful or moral, then acting in accordance with the purpose laid out by this being would be a meaningful way to spend one’s life, as Wolf alludes to in “The Meanings of Lives.” In that case, all I have to do is figure out whether or not me writing this paper is part of God’s plan.
Ah, but which God? Throughout the span of human history, we have described (either via revelation or invention) a great many possible gods. It’s unlikely that I’m going to be able to settle on the correct one before completing this paper. In fact, it’s not even clear how one might go about proving that a particular god is the correct one, because many who profess such knowledge claim that it is a subjective matter of faith. I might be tempted to find one specifically devoted to paper-writing, but that seems somewhat self-serving.
In the absence of any definitive proof about which gods are real, I am forced to abandon my search for meaning down the path of purposes and points. While there is certainly meaning within limited contexts, there is not a clear way toward objective meaning by focusing on the reasons for acting a particular way.
Perhaps the meaning of a thing is not found in the reason for it but in the significance of it. Perhaps me writing this paper will have an impact on the world or be felt in some way. This sense of meaningfulness is divorced from notions of what is good about paper-writing and instead focuses on the lasting effects of paper-writing. Something is meaningful if its creation adds to the world, changes the course of things, or leaves a mark. Here, meaning is found in the positive features of a thing—its extent and shape.
From this perspective, it’s easy to see how my paper will be meaningful. It will have a significant impact on the way its grader spends a half hour. Rather than binge-watching House of Cards, the person deciding my grade will read my paper, mark it up, complain about its inanity to sympathetic ears, and be forced to wrestle with ELMS in order to record my grade for all time. There are two possible objections one might make to this conception of meaning: it’s rather permissive, and our intuitive sense of meaning is of something grander.
Meaning as impact is permissive in that significance is lacking qualification. Everything I do has an impact on the world. Every breath I take rearranges the positions of billions and billions of air molecules. Given the sheer number of states that can be occupied by the atoms around me, everything I do ensures a permanent change. That is, after I act, nothing will ever be exactly the way it was before. Every tap of the keyboard makes microscopic changes in the structure of the keys themselves. These are all lasting changes to the world brought about by my direct intervention, but few would describe any of it as meaningful. Yes, from this perspective, writing papers is meaningful, but so is scratching my head or yawning.
So then we must be discerning about what qualifies as significant if we wish to exclude the trivial. One possible criterion is that actions must be noticed for them to be significant and meaningful. Because I have no direct awareness of how my actions change the molecules around me, my breathing is not noticeable and thus not significant. This qualification still permits my paper to be meaningful because someone else will be forced to read it, which might be okay. We can say that my paper would be more meaningful if it were read by more people, if its brilliant philosophical insights changed the way millions thought, if it were referenced in Wikipedia articles, if undergraduate students taking introductory philosophy courses a thousand years from now were required to read it. This sense of meaning gets at the grandeur lacking from simply capturing the attention of a grader for a short while.
We can object to this notion of meaning in two ways. First, meaning as a noticeable impact on the world is grounded concretely in the limitations of human awareness. These limitations can be overcome by advances in observational tools. For example, we could imagine a world in which robots with exquisite sensors monitor the microstates of air molecules in my house and broadcast that information across the internet for all to consume. Under such a scenario, my breathing has once again become meaningful. But in the opposite direction, that which too few of us are aware of is not meaningful. We can imagine another world in which the prosperity of our civilization rests on slave labor that is hidden from us. We would all find it to be very significant indeed if the weight of our world were carried on the backs of the impoverished, and it seems incongruous to believe that the meaningfulness of this notion depends on our being aware of it. It’s also reasonable to believe a hidden slave population should be meaningful to more than just the slaves, especially because it is easy to conceive of a world in which they are unaware of why they labor.
The second objection picks away at the seeming grandness of what we are capable of doing. Having my paper appear on the reading list of future generations is about as significant as paper-writing can get. We can move up in scope and ask what possible significance my life in general could have. History is certainly peppered with great men and women who have done awesome and terrible things that echo in the present. Many historians might quibble with the idea that great people are ultimately responsible for the changes we see, but it’s probably possible to have a lasting impact on human civilization.
Yet here we are faced with the inevitable absurdity of human life. History is doubtless populated by countless significant figures we remain forever unaware of. But beyond that, the extent of our possible significance is quite literally infinitesimal. Virtually every human event in history has taken place inside a sphere with a radius under 6,400 km. The distance to the nearest star is 6 billion times that; the distance to the nearby Andromeda galaxy is half a million times that; the known size of the universe is tens of thousands of times as large as that; and the universe in all its unknown extent may be infinite. Geological records indicate that most species don't persist longer than a few million years. Even if we beat the odds, in five billion years the Sun will swallow the Earth. If we somehow manage to escape that, the heat death of the universe will eventually erase any contribution we make. And long after we are gone, the universe will continue to exist for a span that is possibly trillions of times longer than its current age.
In “The Absurd,” Nagel objects to this notion of absurdity by pointing out that if nothing we do now will matter in a million years, then it doesn’t matter now that nothing we do will matter in a million years. But this misses the importance of meaning as significance. What’s important about this conception of meaning is persistence, whether through time or space. Binge-watching television isn’t meaningless because it happens not to be important years from now, but because its effects don’t persist through those years. It captures my attention while I am engaged in it but has no effect beyond its limited scope. So if the condition for meaningfulness is persistent significance on a large scale, then everything we could do ultimately fails.
Finally, we are left with a definition of meaning that most closely resembles more traditional meanings of the word meaning. It is possible that writing a philosophy paper could give my life value. That is, writing this paper may be an expression of who I am, a tool that others could use to gain knowledge about me. This is what it means for something to have meaning. The dictionary definition of a word tells you what a word is about; similarly, this paper may tell you what I am about and consequently be meaningful. In this sense, something is meaningful if it builds up some representation of an object that lets us understand something about that object.
From this notion alone, we can again naively conclude that paper-writing is clearly a meaningful activity. Anyone who reads this paper will gain some measure of insight into how my mind works. Similarly, anything I end up doing with my life can be meaningful if the events of my life create a narrative which tells you about me. Yet that presents us with a problem, because our intuition tells us that some lives might be more meaningful than others and that this should depend on what you end up doing with your life. It shouldn’t depend on the quality of the representation that can be built up based on your life.
As an example of why not all representations we can construct about a thing are meaningful, consider lightning. We could imagine a picture of lightning as being a manifestation of Zeus' anger over the fact that we build skyscrapers. You could even argue that Zeus has reason to be mad at trees and sometimes even people. This is a description of lightning which may match what we observe, but we would not say that it is a meaningful description of lightning. It does not correspond to what we now know lightning to really be about—electricity, ions, and the like. So what we might say is that some things you do with your life—such as going to the bathroom or watching television—might not be meaningful because they don't correspond to what we really know life is about.
Once again we are confronted with our sense of what is meaningful. That is, if we sense that some life activity is meaningful, our belief is that the activity accurately maps on to the person. If we follow the sense analogy, we can consider two ways in which we can sense what is out there in the world. On the one hand, our eyes see in color. It might seem obvious to believe that color inheres in objects, but the mechanism by which eyes work suggests something else. Rather, our eyes detect the intensity of light around three wavelength bands and then construct colors based on that information and a variety of other contextual clues. Color is not something that really exists but something our minds make a posteriori because it is useful for distinguishing between objects.
On the other hand, we also sometimes see objects that resemble triangles. Triangles, rather than being something we experience, are things we can construct a priori based on the formal rules of geometry. When we see a triangle in the world, we are comparing it to the Platonic triangle that is a product of our reason.
The parallel with our sense of meaning is this: is meaning a useful tool we build up from experiences, or is it an abstract entity that we see reflected in the world? If I write a philosophy paper and others see something meaningful in it, does that meaning arise from a psychologically-motivated heuristic about what’s important in life, or from a formal system that deductively defines human experience? If it is the former, then that meaning may not necessarily connect to what’s out there in the world—namely me. If the latter, then perhaps my paper is a true reflection of me and the sense of meaningfulness accurately signals this.
Unfortunately, there are no problem-free theories about what humans really are. Are we invested with souls? Are we rational agents or just animals possessing the illusion of control? What is the essence of being a human? What is consciousness? Does personal identity persist over time? Many of the questions regarding what it means to be human come down to what Nagel calls the subjective character of experience, a problem some consider unsolvable. We can never really know what is going on inside another person’s head because qualia are simply not objective. This leads us to the conclusion that it is very unlikely our haphazardly constructed brains have stumbled upon a sense of meaningfulness that is logically sound, so our sense is not a reliable indicator of whether what someone does with their life reflects who they really are. This does not rule out the possibility that people can do meaningful things, but it does rule out our knowing about it. And it might not make sense to say that something can be meaningful if no one gets the meaning, in which case nothing is meaningful.
From all this I can conclude that there is no point to writing this paper, that doing so will have no lasting impact on the world, and that it does not say anything meaningful about who I am. I clearly shouldn’t have wasted any time on it. However, it cannot be ignored that I could not have reached this conclusion without carefully considering what it means to be meaningful. While the arguments I present show that life does not appear to be meaningful, they do not prove that life could not be meaningful. This leaves open the possibility that we may discover some meaning in the future, and the only path toward that meaning is through thinking about it. So writing papers about meaning is not meaningful, but it might be a prerequisite for meaning.

Sunday, April 12, 2015

If the Sequence Fits...

Okay, we're doing an old-fashioned blog post today, wherein I recount one of my recently completed labs. The lab portion of this semester's classes comes from my astrophysics course. This might seem a little weird, because we don't all have telescopes at our lab benches.

Hello, Edwin Hubble.
Instead, we're given data that we must analyze via Matlab. Interestingly, this is probably a bit closer to what real astronomers do, because astronomy today is less peering through a telescope in the wee hours of the night and more writing code to make sense of numbers sent to you from an observatory in New Mexico or Chile or space.

Hello, Hubble Space Telescope.
I've decided to blog this particular lab because I think it has the most interesting plots, which might be just the kind of statement required to turn away what few readers I have left. Specifically, we're looking at Hertzsprung-Russell diagrams, which are a very peculiar kind of graph astronomers use to confuse laypeople. Here's what they look like according to wiki:

Thanks, Wikipedia.
So the x-axis represents temperature, and higher temperatures are to the left. On the y-axis we have luminosity, which increases as you go up. What makes these diagrams strange is that it's not immediately clear what they tell you. Are you looking at different classes of stars? The same star at different times in its life? Stars at different distances (and thus ages) spread out all over the place? The answer is yes.

If you simply point your telescope at the sky, find a bunch of stars, and plot them on an H-R diagram, the only thing you will know with any certainty is that they're not all the same star. To get useful information from this diagram, you have to be specific about what you're looking at.

For this lab, we were looking at open star clusters, which are groups of stars that all formed from the same giant molecular cloud (real term). If that's true, then you can assume that all of the stars in the cluster are roughly the same age and roughly the same distance away from you. If you plot a cluster on an H-R diagram, a particular feature suddenly pops out: that big diagonal line called the main sequence.

From astrophysical theories, we know that stars on the main sequence are those that are burning hydrogen in their cores. This is what our star is doing; it's what most stars that we look at are doing. Eventually, as a star gets older, it burns through all of the available hydrogen in its core and moves off of the main sequence (top right-ish) and becomes a giant of some sort, and then much later stops fusing at all and becomes a stellar remnant like a white dwarf (bottom left-ish).

What the existence of something like the main sequence means is that if a star is burning hydrogen in its core, and it's at some particular temperature T, then it will also be at some particular luminosity L. One demands the other. There is a pretty concrete relationship--for a main sequence star--between its mass, temperature, luminosity, and lifetime. Bigger stars burn brighter and hotter, go through their fuel more quickly, and thus leave the main sequence sooner.

But as I said earlier, if you just point your telescope at a bunch of stars, it's hard to know what you're looking at. In fact, the only information you get from a telescope about a star is how bright it is, and brightness is a result of a star's intrinsic luminosity as well as its distance from you. The farther away a star is, the dimmer it is. Because of that, you don't always know if you are looking at a bright star far away or a dim star close to you. So how are we able to figure out a star's luminosity and temperature?

By restricting how we look at the star. Another difference between the popular image of astronomers and the reality is that the telescopes astronomers use today don't just indiscriminately collect all the light that hits them. In fact, some telescopes don't collect visible light at all. Some, like the Arecibo Observatory in Puerto Rico or the Very Large Array in Contact, for example, collect radio waves.

From APOD.
These telescopes look very different from visible light telescopes because light at different wavelengths has different properties that determine how that light moves. This necessitates different equipment. You know this just from looking at a prism. We all know a prism splits white light into a rainbow, but the reason it does this is because different wavelengths of light (different colors) bend at different angles depending on the medium they're moving through.

If this has an effect just between different colors of visible light, imagine the effect between visible light and radio waves and x-rays, for example. But at the visible light level, this discrepancy between how light behaves at different wavelengths means that you can collect more accurate information about an object if you look at it through filters that only pass specific ranges of wavelengths. This way you can calibrate your machinery just for those wavelengths and not worry about anything else.

There are a lot of filters astronomers use to look at stars. For this lab, we looked at stars through B and V filters, which eye-rollingly stand for blue and visual filters. It's enough to know that the B filter looks at bluer (shorter wavelength) light and the V filter looks at redder (longer wavelength) light. If a star is brighter in the B filter than the V filter, this corresponds to a hotter star. That's because stars roughly follow Wien's law, which says that a blackbody's peak wavelength--the wavelength at which it emits the most light--is inversely proportional to its temperature. So the more light at shorter wavelengths, the higher the temperature.
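
To make that concrete, here's Wien's law as a one-liner in Python (the constant is from memory, so treat the outputs as ballpark figures):

```python
WIEN_B = 2.898e-3   # Wien's displacement constant, in meters * Kelvin

def peak_wavelength_nm(temperature_k):
    """Wavelength (nm) at which a blackbody of this temperature emits the most light."""
    return WIEN_B / temperature_k * 1e9

print(peak_wavelength_nm(5800))    # Sun-ish star: ~500 nm
print(peak_wavelength_nm(3500))    # cool red star: ~830 nm, out in the infrared
print(peak_wavelength_nm(10000))   # hot blue star: ~290 nm, into the ultraviolet
```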

This observation lets us construct a particular H-R diagram called a Color-Magnitude diagram. For boring and annoying reasons (blame Hipparchus), astronomers measure the brightness of objects with the magnitude system, where smaller values represent brighter objects. For our CMD, the y-axis is the magnitude of light coming through the V filter (so higher on the graph is brighter, which means lower magnitudes). The x-axis, which is supposed to be temperature, is instead the quantity B-V.

Recall, if there's more blue light than red light, the star is hotter. More blue light means a lower B magnitude than V magnitude, which means hot stars will have a low B-V. Since temperature is plotted from hot to cold on the H-R diagram, this means we go from low B-V to high B-V on the x-axis.
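
The actual lab was done in Matlab, but for the sake of illustration, here's a minimal Python sketch of what building a CMD amounts to, with made-up magnitudes standing in for the real cluster data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up B and V magnitudes for a handful of stars; the real lab used
# measured photometry for every star in the cluster.
B = np.array([8.1, 9.3, 10.2, 11.5, 12.8, 14.0])
V = np.array([8.0, 9.0, 9.8, 10.9, 12.0, 13.0])

color = B - V                  # low B-V = hot and blue, high B-V = cool and red

plt.scatter(color, V)
plt.gca().invert_yaxis()       # smaller magnitude = brighter, so flip the axis
plt.xlabel("B - V")
plt.ylabel("V magnitude")
plt.title("Color-magnitude diagram (toy data)")
plt.show()
```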

So now we are plotting the B and V filter magnitudes of stars in the cluster M41, which we're assuming are all roughly the same age and distance from us. Here's the plot:


Hey, that looks kind of similar to wiki's H-R diagram! There's a clearly visible main sequence starting in the top left and moving down and to the right, and then there's a weird branch in the middle. Those are giants of some variety or another that have turned off of the main sequence. We can predict that this is a relatively young star cluster because it doesn't seem to have much in the way of stellar remnants (stars below the main sequence). What else can this CMD tell us?

For the purposes of the lab, we engaged in a process known as main sequence fitting that lets us figure out the age of and distance to a cluster.

As I mentioned earlier, brighter, hotter stars burn faster than dimmer, cooler stars; they leave the main sequence more quickly. So if all of the stars in a cluster form at roughly the same time, this means young clusters will have a pretty even spread of hot and cool stars, but old clusters will mostly have cool stars, because the hot stars will have stopped burning long ago. On an H-R diagram, this means that the main sequence of a cluster will slowly shrink over time, beginning with the stars in the top left. So where the main sequence ends, called the turn off point, corresponds to the youngest age a cluster could be. If it were any younger, then you would see hotter, shorter-lived stars farther up the main sequence.

This can be taken a step further. Through stellar evolution models (produced by computer simulations), you can plot the absolute magnitudes of various types of stars at a particular age. These models are called isochrones, because they show you a line of stars at a constant age. If you can match the features of your isochrone (such as the turn off point) to the features of your real cluster, you can date the cluster. In our lab, we had isochrones ranging from 100 million years old to 11 billion years old.

So let's date M41. First, let's compare it to the 11 billion year old isochrone (in red).


As you can see, this clearly doesn't fit. It's way farther to the right and way higher up than M41. But let's think about something for a moment. Being way farther to the right means it only has cold stars, which are old stars. We predicted above, because of the lack of stellar remnants, that M41 was probably young, so this makes sense.

But why is the isochrone so much brighter than M41? Here we can be fooled. We are seeing the cluster as bright as our telescopes see it, but the isochrone is a computer model which plots stars as bright as they would be if they were 10 parsecs (about 32.6 lightyears) away. Something seen at 10 pc is said to be seen at "absolute magnitude" for uninteresting historical reasons. If we were to adjust the magnitude of the isochrone, moving it up and down the y-axis, then we would also be adjusting the distance at which we saw it--the farther down the y-axis, the higher the magnitude, the dimmer the isochrone, the farther away it is.

We won't bother with that here, because this isochrone is obviously too old for our cluster. With some fiddling, we can find an isochrone that does fit. Specifically, the 300 million year isochrone.



This looks to have the right shape but is way too bright. So we know that our cluster is farther away than 10 pc. If we adjust the magnitude of our isochrone, we can get a better fit.


This isn't perfect, but the very nice alignment with the main sequence is encouraging. To get this match, we adjusted the magnitude of the isochrone by 9.2, which doesn't mean anything to anybody not steeped in dreadfully tedious astrometrics.

People steeped in dreadfully tedious astrometrics.
But here's the gist. Magnitude is a logarithmic scale, which in this case means that increasing the magnitude of an object by 5 decreases the brightness by a factor of 100. Because light gets dimmer with the square of your distance from it, an object 100 times dimmer is 10 times farther away. Doing the math, this means a 9.2 magnitude difference works out to the cluster being 69 times farther away than the isochrone, or 690 parsecs from us.
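
For anyone who wants the arithmetic spelled out, that shift is just the standard distance modulus relation, m - M = 5 log₁₀(d / 10 pc), solved for d. In Python:

```python
def distance_in_parsecs(distance_modulus):
    """Distance implied by an apparent-minus-absolute magnitude difference."""
    # m - M = 5 * log10(d / 10 pc)  =>  d = 10 pc * 10^((m - M) / 5)
    return 10 * 10 ** (distance_modulus / 5)

print(distance_in_parsecs(9.2))   # ~690 parsecs for M41
```

(The same formula with the 9.7 shift we find for M67 below gives about 870 parsecs.)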

Looking up M41 on wiki (reliable?), I find a distance of 710 parsecs and an age of 190 to 240 million years. Not bad.

We then did the same thing for cluster M67. With many more stellar remnants (bottom-left), it looks like M67 is probably older.


After another round of main sequence fitting, this is our closest match.


An isochrone 3.5 billion years old with a distance modulus of 9.7, corresponding to 870 parsecs. Wiki says M67 is 3.2-5 billion years old and 800-900 parsecs away. Again, not bad. In fact, a better fit.

So that's main sequence fitting, one rung in the cosmic distance ladder (real term) astronomers use to show us how insignificant we are (by demonstrating the vast scale of the universe).

Saturday, March 21, 2015

Euler Unmasked

We're going from straight philosophy in my last post to straight math in this one. But if you're an ancient Greek thinker type person, math and philosophy are the same thing, anyway.

So about a year and a half ago, I made a post that touched briefly on the relationship between trig functions and exponential functions as a way of justifying my tendency to make things more complex than they need to be. I mentioned there that I didn't have a firm enough mathematical grasp to explain how these two mathy bits are related. Well, the topic of Euler's identity came up a little while ago in my writing group, so I decided to do some research and figure out just how it is that trig functions and exponential functions come together.

For those of you that don't click links, Euler's identity says:

e^(iπ) + 1 = 0

This is a pretty remarkable and frankly incredible equation, but it's true. It manages to link probably the three most famous mathematical constants in a very simple way. The identity arises from Euler's formula, which says:

e^(ix) = cos(x) + i·sin(x)

If you replace x with π, then i·sin(π) = 0 and cos(π) = -1, so with a little rearranging you can get Euler's identity. But this raises the question of why it should be true that exponential functions and trig functions are connected by the imaginary unit.
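
You can even get a computer to check the identity, up to floating-point rounding. A quick sanity check with Python's built-in cmath module:

```python
import cmath
import math

print(cmath.exp(1j * math.pi))        # (-1 + 1.2e-16j): -1 plus a speck of rounding error
print(cmath.exp(1j * math.pi) + 1)    # effectively 0, which is Euler's identity

x = 0.7                               # Euler's formula holds for any x
print(cmath.exp(1j * x))
print(complex(math.cos(x), math.sin(x)))   # same number, built from cos and sin
```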

First, a quick primer for those who need it. In the common parlance, something that is "exponentially" better is "really super" better. This kind of talk tends to aggravate the mathematically aware, however. Really, exponential functions are ones where adding a constant increment to the input multiplies the output by a constant factor.

So if you hear something like, "Kyrgyzstan's GDP has doubled every year for the last ten years," then that's exponential growth. The factor is 2, and the increment is yearly. But this also applies to, say, the interest rate on your savings account, which as we all know is not exactly "really super" better than anything except possibly 0. There, your balance is getting multiplied by something like 1.0025 every year, which is every bit as exponential as Kyrgyzstan's doubling GDP (totally made up).

The point is, however, that exponential functions (with a factor greater than 1) demonstrate steady, monotonic growth. If you increase the x value, the y value will increase, too.
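
Here's that defining property in code form, using the boring savings-account factor from above: no matter where you start, moving the input up by one multiplies the output by the same factor.

```python
def balance(years, rate=1.0025):
    """Exponential growth: the balance multiplier after compounding once a year."""
    return rate ** years

for year in (0, 1, 10, 100):
    print(year, balance(year + 1) / balance(year))   # always 1.0025, give or take float rounding
```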

Trig functions, on the other hand, are the realm of waves, which go up and down and up and down. They are all about rhythmic or periodic behavior. But as their name suggests, the trigonometric functions are actually based on the angles formed by triangles. Trig functions are really expressions of the Pythagorean formula, A² + B² = C². The relationship between this formula and periodic motion is that for some constant value of C, increasing A will decrease B, and vice versa.

So it's hard to see how exponential functions and trig functions could be related. As I hinted up above, the answer is through i.

i, the imaginary unit, is what the square root of negative one is defined to be. Imaginary numbers kind of get a bad rap, partly because of their name. They seem like something mathematicians just made up that couldn't possibly be real. The funny thing is people had the same opinion about negative numbers for a very long time. After all, how can you possibly have -3 apples? On this whole controversy, the great mathematician Carl Friedrich Gauss had this to say:
That this subject [imaginary numbers] has hitherto been surrounded by mysterious obscurity, is to be attributed largely to an ill-adapted notation. If, for instance, +1, -1, √-1 had been called direct, inverse, and lateral units, instead of positive, negative, and imaginary (or even impossible), such an obscurity would have been out of the question.
While his preferred notation might seem somewhat opaque, it does lend itself very well to a geometric interpretation of numbers. If you look at a Cartesian plot, you can think of Gauss's direct, inverse, and lateral numbers this way. 




The direct unit (+1) moves you one to the right on the graph. The inverse unit (-1) moves you one to the left. And the lateral unit (√-1) moves you up one. Rather than being on the number line we're used to, imaginary numbers can be thought of as being at right angles to it.

This idea lets you plot numbers that are a combination of "real" and "imaginary." So if you have the complex number 3 + 2i, that's just 3 units to the right and 2 units up.


As you see, plotting numbers this way means you can draw right triangles that are related to those numbers. This is the first way that we can connect imaginary numbers to the trig functions. Getting from imaginary numbers to exponential functions will take a little more work, though.
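As a quick worked example with the 3 + 2i from above: the legs of its triangle are 3 and 2, so the hypotenuse is

|3 + 2i| = √(3^2 + 2^2) = √13 ≈ 3.6

and the angle it makes with the horizontal axis is arctan(2/3) ≈ 33.7°. Those are exactly the kinds of quantities the trig functions traffic in.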

If i is the square root of -1, we can play around with exponentiation to find an interesting pattern. i^2 = (√-1)^2, which by definition equals -1. i^3 = (√-1)^3, or (√-1)·(√-1)^2, or i·(-1), which just comes out to -i. i^4 = i^2 · i^2, or (-1)·(-1), which equals 1. Multiply that by i, and you of course have i again. So through exponentiation, we have discovered something of a pattern.

i^1 = i
i^2 = -1
i^3 = -i
i^4 = 1
i^5 = i

The powers of i loop back in on themselves. You might even say they exhibit periodic behavior, like the trig functions.
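If you want to watch the cycle happen, Python's built-in complex numbers will do it for you (a little sketch I'm adding, not part of the original post; Python writes i as 1j):

```python
# Powers of the imaginary unit cycle with period 4: i, -1, -i, 1, i, ...
for n in range(1, 9):
    print(n, 1j ** n)
```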

Our next step is probably the toughest bit. Bear with me. So, if you recall from my foray into Fourier, many functions can be expressed as an infinite series of sines and cosines that eventually converges on the desired function. These infinite series turn out to be very useful to mathematicians, because not all patterns can be expressed as "elementary" functions, but only as infinite series of some other type of function. One type of infinite series is the power series, which looks like this:

f(x) = a_0 + a_1·x + a_2·x^2 + a_3·x^3 + a_4·x^4 + ...

To get different functions, just plug in different values for the coefficients a_n. The way you figure out which coefficients correspond to the function you want is basically by assuming your function can fit into some power series and then playing around for a while until you find a pattern that fits. Let me demonstrate.

One of the defining features of the exponential function, e^x, is that it is its own derivative. This means that its rate of change is equal to its value. So the derivative of e^x is also e^x, and so on.

One of the first tools you learn in calculus is that the derivative of a power function like x^4 is 4x^3. You multiply by the exponent, and then lower the exponent by one. If the exponent is already 0, then your derivative is 0. So if you take the derivative of our above model power series, you get:

f'(x) = a_1 + 2a_2·x + 3a_3·x^2 + 4a_4·x^3 + ...

And if you take the derivative of that, you get:

f''(x) = 2a_2 + 6a_3·x + 12a_4·x^2 + ...

And if you take the derivative of that, you get:

f'''(x) = 6a_3 + 24a_4·x + ...

And one more time, because there's a pattern I want you to see:

f''''(x) = 24a_4 + 120a_5·x + ...

Now remember, all of these series are equal to the function e^x, because e^x is its own derivative. The missing ingredients are the values of a_n. If we evaluate e^x at x = 0, we have e^0, and anything to the 0th power is equal to 1. In each of the series above, when x is 0, everything except the leading term is also 0. So we have:

1 = a_0 = a_1 = 2a_2 = 6a_3 = 24a_4

and so on. So with a little bit of algebra, you can figure out the value of any a_n. It's just 1 divided by the factor preceding the coefficient. But there's a pattern here. 24 = 4*3*2*1. 6 = 3*2*1. 2 = 2*1. Each coefficient is 1 over the product of its index and every positive integer below it. That product is known as a factorial in mathematics and looks like this:

5! = 5*4*3*2*1 = 120
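Put compactly, the pattern is just

a_n = 1/n!

so a_0 = 1, a_1 = 1, a_2 = 1/2, a_3 = 1/6, a_4 = 1/24, and so on (with the convention that 0! = 1).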

With that information in hand, we know what the power series of the exponential function is:

e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + ...

I've gone through this process once so that you don't think I'm pulling this stuff out of a hat, but you can do the same thing to find the power series of a lot of different functions, including the trig functions. For example, the power series of sin(x) is:

sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ...

And the power series of cos(x) is:

cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + ...

Weirdly, the sine and cosine power series look kind of similar to the exponential function, but with terms missing and some negative signs thrown in. This curious fact turns out to be very important for connecting exponential and trig functions. Let's remember that the key to that connection is i.

Let's see what happens if we try to find the power series of e^{ix} rather than e^x. To do that, we just replace all instances of x with ix in our series above. That gets us:

e^{ix} = 1 + ix + (ix)^2/2! + (ix)^3/3! + (ix)^4/4! + (ix)^5/5! + ...

Hey, that means we're finding powers of i. But we already did that up above. That follows a pattern, so we can just fill in from that pattern and get:

e^{ix} = 1 + ix - x^2/2! - ix^3/3! + x^4/4! + ix^5/5! - ...

Now, just for the heck of it, let's separate our series into terms without i and terms with i. So we have:

e^{ix} = (1 - x^2/2! + x^4/4! - ...) + i(x - x^3/3! + x^5/5! - ...)

Look familiar? That's the power series for cosine plus i times the power series for sine. In other words...

e^{ix} = cos(x) + i·sin(x)

Just as Euler told us.

All of this may seem like some kind of tedious mathematical trick. After all, how do we know that the power series representation of a function behaves identically to the function itself in all instances? The truth is, it doesn't always, and that's one of the things you have to be careful about when finding series expansions. It does happen to work here, though: the series for e^x, sine, and cosine converge to their functions for every value of x.
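If you'd rather check numerically than take the algebra on faith, here's a small Python sketch (my addition, not from the original post) that sums a truncated version of the e^{ix} series and compares it against cos(x) + i·sin(x):

```python
import math

def exp_ix_series(x, terms=30):
    # Partial sum of 1 + (ix) + (ix)^2/2! + (ix)^3/3! + ..., cut off after `terms` terms
    return sum((1j * x) ** n / math.factorial(n) for n in range(terms))

x = 1.7  # arbitrary test value; the series converges for every x
series_value = exp_ix_series(x)
euler_value = complex(math.cos(x), math.sin(x))  # cos(x) + i*sin(x)
print(series_value, euler_value, abs(series_value - euler_value))  # difference is tiny
```

With 30 terms, the truncated series and Euler's formula agree to machine precision for modest values of x.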

But there are ways in which this proof can help motivate understanding. One way to think of the idea is that the introduction of i into the exponential function breaks the function down into four interacting parts: one growing in the direction of 1, another in the direction of -1, and two more in the directions of i and -i. Different values of x contribute more to one direction than another, and the whole thing repeats with a period of 2π in x (equivalently, the exponential function has a period of 2πi in its exponent).

To see if this picture holds true, let's take another look at the powers of i. We saw that powers of i cycle from i to -1 to -i to 1 and then back to i again. But we were only looking at integer powers of i. What happens if we replace the integer with an unknown variable x? That is, how do we evaluate i^x?

A neat tool that can sometimes work in mathematics is to perform some operation on an expression and then also perform the inverse of that operation. Doing so doesn't change the expression, but it does let us look at it in a different light. So how about we take the natural log of i^x and then exponentiate the expression. That gets us:

i^x = e^{ln(i^x)}

The laws of logarithms mean we can move that x to outside the log, giving us:

i^x = e^{x·ln(i)}

We know how to evaluate e^x, but it's not immediately clear how to evaluate ln(i). Here it's useful to remember what ln means. The natural log of some number is the power to which you must raise e in order to get that number. So if you have, say, ln(e^2), then our answer is 2, because e to the power of 2 obviously equals e^2. So let's look at it this way: e to what power equals i?

Now we bring in Euler's formula again.

e^{ix} = i when cos(x) = 0 and i·sin(x) = i

This is true for x = π/2, because cos(π/2) = 0 and sin(π/2) = 1.

So then ln(i) = iπ/2, which means that i^x = e^{iπx/2} = cos(πx/2) + i·sin(πx/2). With that conversion, we can evaluate i to any power at all, not just integer powers. But to reaffirm that this isn't some trick, let's go ahead and see what evaluating it at integer powers means.

i^1 = cos(π/2) + i·sin(π/2) = 0 + i = i
i^2 = cos(π) + i·sin(π) = -1 + 0 = -1
i^3 = cos(3π/2) + i·sin(3π/2) = 0 - i = -i
i^4 = cos(2π) + i·sin(2π) = 1 + 0 = 1

This is the exact same pattern we saw above, but this time through the lens of Euler's formula rather than the logic of manipulating √-1. For non-integer values of x, you get complex numbers that, when treated as vectors on the complex plane, are all a distance of 1 from the origin, creating a circle of radius 1. Through purely algebraic means, this connects back up with the geometrical interpretation of imaginary numbers suggested by Gauss.
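And once more, Python will happily confirm this picture (another sketch of my own, using the standard cmath module):

```python
import cmath

# Non-integer powers of i land on the unit circle too.
for x in (0.5, 1.25, 1.9):
    z = 1j ** x
    print(x, z, abs(z), cmath.phase(z))  # abs(z) is 1 (up to rounding); the angle is x*pi/2

# cmath agrees that ln(i) = i*pi/2 (on its principal branch).
print(cmath.log(1j))  # prints 1.5707963267948966j
```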

Okay, I'm done now. I hope this sheds some light on the interconnectedness of math, which can be demonstrated by taking the rules you're familiar with and applying them to unfamiliar situations. When people speak of the beauty of math, this is it. In the real world, we often find depth and meaning through metaphors that connect disparate ideas. That's what art and literature are all about. Math does the same thing, but with numbers, letters, and funny symbols.

(On the other hand, I may have written this post just to play around with LaTeX.)