We make decisions based on the data we see. One restaurant serves higher-quality food than another. One presidential candidate aligns more appropriately with our values. One surgical technique yields better outcomes. One applicant submits a stronger job application than a competitor. From these data, we decide what course of action to take. In many cases, these decisions are inconsequential. In others, however, a poor decision may lead to dangerous results. Let’s consider danger.

Imagine you are a surgeon. A patient arrives in your clinic with a particular condition. Let us call this condition, for illustrative purposes, phantasticolithiasis. The patient is in an immense amount of pain. After reviewing the literature on phantasticolithiasis, you discover that this condition can be fatal if left untreated. The review also describes two surgical techniques, which we shall call “A” and “B” here. Procedure A, according to the review, has a 69% success rate. Procedure B, however, seems much more promising, having a success rate of 81%. Based on these data, you prepare for Procedure B. You tell the patient the procedure you will be performing and share some of the information you learned. You tell a few colleagues about your plan. On the eve of the procedure, you call your old friend, a fellow surgeon practicing on another continent. You tell him about this interesting disease, phantasticolithiasis, what you learned about it, and your assessment and plan. There is a pause on the other end of the line. “What is the mass of the lesion?” he asks. You respond that it is much smaller than average. “Did you already perform the procedure?” he continues. You tell him that you didn’t and that the procedure is tomorrow morning.

“Switch to procedure A.”

Confused, you ask your friend why this could be true. He explains the review a bit further. The two procedures were performed on various categories of phantasticolithiasis. However, what the review failed to mention was that procedure A was more commonly performed on the largest lesions, and procedure B on the smallest lesions. Larger lesions, as you might imagine, have a much lower success rate than their smaller counterparts. If you separate the patient population into two categories for the large and small lesions, the results change dramatically. In the large-lesion category, procedure A has a success rate of 63% (250/400) and procedure B has a success rate of 57% (40/70). For the small lesions, procedure A is 99% successful (88/89) and procedure B is 88% successful (210/240). In other words, when controlling for the category of condition, procedure A is always more successful than procedure B. You follow your friend’s advice. The patient’s surgery is a success, and you remain dumbfounded.
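To see the reversal in the numbers themselves, here is a minimal Python sketch using the success counts quoted above (the variable names are mine, purely for illustration):

```python
# Success counts from the (fictional) phantasticolithiasis review:
# (successes, attempts) for each procedure within each lesion category.
data = {
    "A": {"large": (250, 400), "small": (88, 89)},
    "B": {"large": (40, 70),   "small": (210, 240)},
}

for proc, strata in data.items():
    total_success = sum(s for s, n in strata.values())
    total_attempts = sum(n for s, n in strata.values())
    by_stratum = {size: f"{s / n:.1%}" for size, (s, n) in strata.items()}
    print(proc, by_stratum, "overall:", f"{total_success / total_attempts:.1%}")

# A {'large': '62.5%', 'small': '98.9%'} overall: 69.1%
# B {'large': '57.1%', 'small': '87.5%'} overall: 80.6%
# A wins within every lesion category, yet B wins in the aggregate.
```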

What’s happening here is something called Simpson’s paradox. The idea is simple: When two variables are considered (for example, two procedures), one association results (procedure B is more successful). However, upon conditioning on a third variable (lesion size), the association reverses (procedure A is more successful). This phenomenon has far-reaching implications. For example, since 2000, the median US wage has increased by 1% when adjusted for inflation, a statistic many politicians like to boast about. However, within every educational subgroup, the median wage has decreased. The same can be said for the gender pay gap. In both of his campaigns, Barack Obama fought against the gap, reminding us that women make only 77 cents for every dollar a man earns. However, the problem is more than just a paycheck, and the differences change and may even disappear if you control for job sector or level of education. In other words, policy changes to reduce the gap need to be more nuanced than a campaign snippet. A particularly famous case of the paradox arose at UC Berkeley, where the school was sued for gender bias: it admitted 44% of its male applicants but only 35% of its female applicants. However, upon conditioning on each department, it was found that women applied more often to the departments with lower rates of admission. In 2/3 of the departments, women had a higher admission rate than men.

The paradox seems simple. When analyzing data and making a decision, simply control for other variables and the correct answer will emerge. Right? Not exactly. How do you know which variables should be controlled? In the case of phantasticolithiasis, how would you know to control for lesion size? Why couldn’t you just as easily control for the patient’s age or comorbidities? Could you control for all of them? If you do see the paradox emerge, what decision should you then make? Is the correct answer that of the conditioned data or that of the raw data? The paradox becomes complicated once again.

Judea Pearl wrote an excellent description of the problem and proposed a solution to the above questions. He cites the use of “do-calculus,” a technique rooted in the study of Bayesian networks. Put more simply, his methods identify the causal relationships among a set of variables. In doing so, one can determine which variables to condition on and can then decide whether the conditioned data or the raw data are best for decision-making. The variables identified by the causal structure are the ones that should be used. If you are interested in the technique and have some experience with the notation, I recommend this brief review on arXiv.
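For the curious, here is a hedged sketch of the adjustment that Pearl’s machinery formalizes. If lesion size really is the right variable to control for, the interventional success rate P(success | do(procedure)) is the per-stratum success rate averaged over how common each lesion size is in the whole patient population, not within each procedure’s patients. This is the textbook backdoor-adjustment formula applied to the numbers above, not Pearl’s own code:

```python
# Backdoor adjustment: P(success | do(proc)) = sum over z of
# P(success | proc, z) * P(z), where z is lesion size.
counts = {
    ("A", "large"): (250, 400), ("A", "small"): (88, 89),
    ("B", "large"): (40, 70),   ("B", "small"): (210, 240),
}

# P(z): how common each lesion size is across the whole study population.
n_z = {"large": 400 + 70, "small": 89 + 240}
total = sum(n_z.values())
p_z = {z: n / total for z, n in n_z.items()}

for proc in ("A", "B"):
    adjusted = sum(
        (counts[proc, z][0] / counts[proc, z][1]) * p_z[z]
        for z in ("large", "small")
    )
    print(f"P(success | do({proc})) ~= {adjusted:.1%}")

# Roughly 77.5% for A and 69.6% for B: once lesion size is adjusted for,
# procedure A comes out ahead, in line with the friend's advice.
```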

Of course, rapid and rather inconsequential decisions need not be based on such formalities. On the other hand, it serves all of us well if we at least consider the possibility of Simpson’s paradox on a day-to-day basis. Be skeptical when reading the paper, speaking with colleagues, and making decisions. Finally, if you’re ever lucky enough to be the first patient with phantasticolithiasis, opt for procedure A.

Pops, Shocks, Impulses

January 8, 2014


Take a balloon, blow it up, and quickly stick a needle through the rubber. What do you hear? You wouldn’t be surprised to hear a loud “pop” immediately after piercing the once-inflated balloon. Try it again with a few balloons of different radii. What do you hear now? Chances are that you will notice a slight change in frequency. Additionally, depending upon the room in which you pop the balloons, you may notice additional reverberation.

What is happening here? Why do different balloons sound slightly different? What is the reverberation, and is it useful?

First, why does a balloon make a loud popping sound? When a balloon pops, the rubber suddenly contracts. This leaves a discontinuity in the air pressure. Pressure outside the balloon is equal to atmospheric pressure, but the balloon’s internal pressure is often a couple hundred pascals higher than that of the surrounding air. Upon retraction of the rubber, this high-pressure region meets the lower pressure of the atmosphere. The newly formed pressure wave spreads outward from the center of the late balloon’s location as a weak shock wave. This abrupt change in pressure, as it spreads outward, acts like an impulse in the air, a point we will return to later. The balloon’s weak shock wave is similar to the strong shock wave from a jet plane, though the equations that govern the two differ.

As the peak in pressure propagates outward, something more fascinating is unveiled. Air is accelerated outward due to the sudden difference in pressure, and it will overshoot due to inertia. This leaves a region of low pressure behind the high pressure wave. Air will then accelerate inward in response and will once again overshoot, but this time it will do so in the opposite direction. The process continues, creating an oscillation in the air with a characteristic frequency. The frequency of this oscillation depends upon the radius of the former balloon. Thus, smaller balloons will have a more “shallow” sound, and larger balloons will sound “deeper.”
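To make the radius dependence concrete, here is a back-of-the-envelope sketch. Treating the ringing as a standing wave whose wavelength is proportional to the balloon’s radius is an assumption on my part (as is the factor of 4); the real acoustics are messier, and only the inverse scaling with radius is the point:

```python
# Crude scaling estimate: treat the ringing as a standing wave whose
# wavelength is proportional to the balloon's radius. The factor of 4 is an
# assumption for illustration; only the 1/R scaling matters here.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def ring_frequency(radius_m, wavelength_factor=4.0):
    return SPEED_OF_SOUND / (wavelength_factor * radius_m)

for r in (0.05, 0.10, 0.20):  # small party balloon up to a large one
    print(f"radius {r:.2f} m -> ~{ring_frequency(r):.0f} Hz")

# Halving the radius doubles the estimated frequency: smaller balloons ring
# higher ("shallower"), larger balloons ring lower ("deeper").
```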

The question becomes far more interesting when considering that initial weak shock wave. As mentioned previously, this discontinuity acts like an impulse in the air. Impulses are powerful tools in that they contain all frequency information. If one wished to find the resonant frequencies of a room, one could play sounds at various frequencies and find those which reflected most loudly off the walls of the room. However, this is a time-consuming and altogether impractical process. An impulse, however, contains all frequencies. If an acoustical engineer were to supply an impulse at different locations in a room and place a microphone somewhere else, that engineer could calculate which frequencies are best reflected/selected by that room’s architecture. This could be done by firing a starter pistol or by clapping one’s hands (try it out). One could also pop a balloon. The balloon’s pop provides an impulse. The room (unless it is anechoic) will respond at particular frequencies. What this means is quite fascinating. The sound of the balloon’s pop is the sound of a recording studio, the sound of a theater, the sound of a living room, and the sound of a cafeteria.

The pop is the sound of the room itself.
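If you want to hear this for yourself in numbers, here is a minimal sketch of the idea: record a pop (or a clap) in a room, then look at its spectrum; the strongest low- and mid-frequency peaks are candidates for frequencies the room reinforces. The file name is a placeholder, the recording is assumed to be mono, and NumPy/SciPy are assumed to be available:

```python
# Sketch: estimate a room's frequency response from a recorded impulse
# (e.g., a balloon pop). Assumes a mono WAV file; the name is a placeholder.
import numpy as np
from scipy.io import wavfile

rate, pop = wavfile.read("balloon_pop_in_room.wav")  # hypothetical recording
pop = pop.astype(float)
pop /= np.max(np.abs(pop))  # normalize

spectrum = np.abs(np.fft.rfft(pop))
freqs = np.fft.rfftfreq(len(pop), d=1.0 / rate)

# The strongest peaks are candidates for room modes, i.e., frequencies the
# room "selects" and reinforces.
strongest = np.argsort(spectrum)[-10:][::-1]
for i in strongest:
    print(f"{freqs[i]:8.1f} Hz  relative level {spectrum[i]:.2f}")
```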

A Musician on Mars

December 11, 2013

[Image: a Mars analog landscape]

Welcome to Mars. As one of the first colonists on the fourth planet from the Sun, you endeavor to make it your new home. On Earth, you filled your time in numerous ways, but your real passion was music. Luckily, the Indian Space Research Organisation (ISRO) allowed you to bring your prized possession: a Steinway grand piano. Excited to play for the first time in months, you squeeze into your ISRO-issued space suit and wheel the piano onto the Martian surface. It’s noon near the equator. The temperature is around 25ºC (77ºF). You stretch out your arms, relax, and strike your first key. The sound is… quiet and out of tune. Assuming the piano needs to be retuned, you wheel it back into your pressurized vessel, take off your suit, and tune it yourself. Satisfied, you wheel the piano onto the surface again. The Martian surface is quiet, and you notice the colors of the sky are a lot redder than you had seen in NASA photographs. Again, you begin to play. It again sounds too quiet.

What is happening here? Why might a piano sound different when played on the Martian surface? This is a fairly involved question. Luckily, we are considering an instrument with taut strings rather than one that depends more directly upon atmospheric conditions, such as a trombone or pipe organ. Furthermore, the equatorial temperature is Earth-like. Why, then, might a piano sound different on Mars?

When tuning and subsequently playing a piano, the frequency you perceive (or pitch) depends upon the tension, length, and mass of the strings within the piano. Since the temperature is about the same as before, and since you did not physically exchange the strings, these properties remain fairly constant. However, the fluid surrounding the strings does play a role. Like any oscillator, a string is loaded by the fluid in which it is immersed, and that load shifts the frequency at which it resonates; how large the shift is depends on the density of the fluid. On Mars, the atmosphere is more rarefied, with a mean pressure of 600 Pa at the surface. Compare this with a pressure of over 100,000 Pa at sea level on Earth. This reduced loading by air results in a bias toward slightly higher frequencies (or a higher pitch). If you retuned the piano in a pressurized cabin and then played the newly tuned piano on the Martian surface once again, it would still sound out of tune. A simple solution is to retune the piano while on the surface.
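To make the direction of the effect concrete, here is a toy sketch. The ideal-string formula is standard physics; the air-loading term (treating the surrounding air as a little extra mass dragged along by the string) and every number in it are crude assumptions of mine, meant only to show that thinner air means less loading and hence a slightly higher pitch:

```python
import math

def string_frequency(tension_n, length_m, mass_per_length,
                     air_density=0.0, string_radius=0.0005, loading_coeff=1.0):
    """Ideal vibrating-string fundamental with a crude air-loading term.

    The loading term (extra effective mass ~ loading_coeff * rho_air * pi * r^2)
    is a toy assumption for illustration, not a real acoustics result.
    """
    mu_eff = mass_per_length + loading_coeff * air_density * math.pi * string_radius ** 2
    return (1.0 / (2.0 * length_m)) * math.sqrt(tension_n / mu_eff)

# A middle-register piano string (representative, made-up numbers).
T, L, mu = 700.0, 0.65, 0.006  # newtons, metres, kilograms per metre

f_earth = string_frequency(T, L, mu, air_density=1.2)    # sea-level air, kg/m^3
f_mars = string_frequency(T, L, mu, air_density=0.020)   # thin Martian air

print(f"Earth: {f_earth:.2f} Hz, Mars: {f_mars:.2f} Hz")
# Mars comes out marginally higher: less air to drag along, so a slightly
# higher pitch. The size of the shift reflects the made-up numbers above;
# the direction of the shift is the point.
```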

However, this is not the only problem with playing music on the Martian surface. Remember that Mars has a lower-pressure atmosphere. Sound, as you may recall, propagates as an oscillation of pressure in some medium (like air). If the mean pressure is lower, this presumably changes the ability of sound to propagate over longer distances. Without going into too many details here, what happens is that sound will not propagate very far on Mars, and there is an effect such that high frequencies are heavily attenuated. Before, the pitch was shifted slightly higher. Here, on the other hand, higher frequencies will sound softer than lower frequencies, and all frequencies will sound quieter. This means that not only does the piano sound out of tune, but it also sounds muted. The question of sound propagation is so interesting that an acoustics researcher simulated sound on Earth, Mars, and Titan. She found that a scream which may travel over one kilometer on Earth would only carry 14 meters on Mars!

Your out-of-tune, muted piano probably wouldn’t be audible to a nearby audience on the Martian surface.

Radiation Risks

November 8, 2013

A recent discussion with a colleague on the Neurodome project centered on the acquisition of data by computed tomography (CT). Specifically, we sought volunteers for a non-medical imaging study. Volunteers were difficult, if not impossible, to obtain. Not only did we hope to find a person without cavities or implants, but we needed someone who was willing to be exposed to a certain dosage of radiation. Our conversation rapidly evolved into a treatise on radiation exposure and health risks. CT, which exposes patients to X-rays, indeed carries a certain health risk. What are these risks? How significant are they? I’d like to attempt to answer some (but not all) of these questions here.

To properly elucidate the health risks of radiation exposure, we must answer a number of important questions. What is the person’s health status? Does this person have any underlying genetic mutations? What type of study is being performed (that is, what is being scanned and for how long, along with the width/shape/angle of the beam)? For the purposes of this discussion, let us consider an otherwise healthy human undergoing standard scans.

In the links I provide below, you will note two units of radiation exposure. I’d like to clarify these so that you can more easily explore the topic independently of this post. The first unit, the gray (Gy), measures absorbed dose: the energy deposited per kilogram of tissue. The second, the sievert (Sv), measures equivalent (or effective) dose: the absorbed dose weighted for the biological effect of the radiation type and the tissue exposed. This weighting is rooted in models by medical physicists that attempt to adjust for the effects of radiation on different tissue types. In these models, you might be surprised by what is most likely to cause cancer after radiation exposure. While direct DNA damage is not insignificant, the major contributor is water. When water molecules absorb ionizing radiation (all radiation in this post is ionizing, unless otherwise specified), they produce free radicals (usually hydroxyl radicals, •OH), which in turn have damaging effects. Thus, tissues with more water tend to sustain more damage per unit of energy absorbed.

How much radiation are you exposed to in a given period of time? This handy chart should answer most of those questions. You might be surprised by the amount of exposure from certain activities. A chest CT is ~7 mSv, which is not much greater than the 4 mSv exposure from background radiation in a given year. As a former resident of central Pennsylvania, I was surprised to see that radiation exposure from the Three Mile Island incident resulted in an average of only 80 µSv. On the other hand, workers at Fukushima were exposed to a dose of 180 mSv! These doses are interesting from an academic point of view, but what real risks do they carry?

When we talk about radiation exposure, health risks come in two flavors. The first, deterministic effects, are those that result from an accumulation of radiation exposure. Below a certain threshold, adverse effects are minimal or non-existent. Above this threshold, health problems arise. The threshold differs from person to person and depends on the health effect in question. Examples include hair loss, skin necrosis, sterility, and death. The second, stochastic effects, are those effects that have an increased probability of occurring with increased radiation exposure. The best example of this is cancer. With low exposure, one has a lower risk of cancer. With high exposure, this risk increases. We often consider this to be a linear relationship, in that a unit increase in radiation exposure results in a unit increase in cancer risk. For example, 100 mSv of radiation increases one’s lifetime risk of cancer by about 0.5%. Unlike deterministic effects, there is no threshold associated with stochastic effects. There is controversy over the linear model of cancer risk, and more research is needed.
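Using the figure quoted above (roughly 0.5% added lifetime cancer risk per 100 mSv) and the linear assumption, the arithmetic is a one-liner; the doses are the ones mentioned in this post:

```python
# Linear no-threshold sketch: excess lifetime cancer risk scales linearly
# with dose, using the ~0.5% per 100 mSv figure quoted above.
RISK_PER_MSV = 0.005 / 100  # 0.5% per 100 mSv

def excess_lifetime_risk(dose_msv):
    return dose_msv * RISK_PER_MSV

for label, dose in [("chest CT", 7), ("annual background", 4),
                    ("Fukushima worker", 180)]:
    print(f"{label:>18}: {dose:5.0f} mSv -> ~{excess_lifetime_risk(dose):.3%} excess risk")

# chest CT: ~0.035%, annual background: ~0.020%, Fukushima worker: ~0.900%
# -- all under the (contested) assumption that the linear model holds down
#    to low doses.
```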

An example against the linear model of cancer risk is exposure to radiation at high altitudes. Though this differs from a CT scan in many ways, one would still expect an increased risk of cancer to be associated with exposure to radiation at higher altitudes. However, those who live at high altitudes or those who work at high altitudes (like commercial airline pilots) do not exhibit a greater prevalence of cancer. To put this into perspective, a single round-trip flight across the continental United States results in the same radiation exposure as a chest X-ray. This raises an interesting question: How much risk do medical scans carry?

The answer, as you can see, is fairly complicated. If you want to know how much radiation exposure a particular study carries, there’s a great resource to calculate this. This website assumes the linear no-threshold hypothesis to be true and, as I pointed out, it very well might not be true. That being said, any stochastic risks associated with medical scans are often far outweighed by the risks of ignoring a medical condition. In the case of Neurodome, the opposite is sadly true.

That being said, I’d be a happy volunteer.

Plug Me In

October 8, 2013

I’m going to have a bit more fun with this blog post. For this thought experiment, I’d like you to suspend your disbelief. Imagine, for a moment, that someone offered you the chance to “plug” your body into a standard outlet and let yourself “charge.” All of your energy would be gathered from this charging process. You would eat nothing. How long must you remain connected to the outlet? How much will it cost?

Where do we start? There are a few ways to approach this, but I’ll start with the basal metabolic rate for an average adult male. For a 70 kg male, this is typically around 1,600-1,700 Calories (kilocalories). If you would like to do more than just sit against a wall, you will need a bit more energy. Let’s round that up to 2,000 Calories. Converting this to units with which we can work, this comes to 8.36 megajoules (MJ). As with most thought experiments, it is easier to work in orders of magnitude, so we will round this up to 10 MJ.

We now know how much energy we need, but how long will it take to draw this energy from an outlet? Every outlet has a maximum power draw, but very few appliances, if any, reach this maximum value. We denote the amount of power drawn, in joules per second, in watts (W). On average, microwaves draw 1,450 W, vacuum cleaners 630 W, computers 240 W (though, as I type this, I am drawing <100 W), and alarm clocks 2 W. In other words, it’s variable. If we were to charge ourselves like a microwave oven, it would take almost 2 hours. However, if we used a laptop charger (100 W), it would take 28 hours! A laptop charger would thus not suffice, since we would not acquire our necessary daily energy within a given day. All of this energy would be expelled as heat, and you would be a blob of meat plugged into a wall outlet. That’s not a fantastic way to live.
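Here is that arithmetic as a short sketch, using the 10 MJ daily figure and the power draws quoted above:

```python
# How long to "charge" a human who needs ~10 MJ per day, at various draws.
DAILY_ENERGY_J = 10e6  # ~2,000 Calories, rounded up to 10 MJ

draws_w = {
    "microwave oven": 1450,
    "vacuum cleaner": 630,
    "desktop computer": 240,
    "laptop charger": 100,
    "alarm clock": 2,
}

for name, watts in draws_w.items():
    hours = DAILY_ENERGY_J / watts / 3600
    print(f"{name:>16}: {hours:7.1f} hours")

# microwave ~1.9 h, laptop charger ~27.8 h (more than a full day!),
# alarm clock ~1,389 h -- don't even try.
```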

In case you are wondering, architectural engineers model heat production from humans as if they were 100 W light bulbs. This is eerily similar to our 100 W laptop charger that provides just enough energy to get us through a single day!

If you tried all of the above with one of Tesla’s new 10 kW chargers, you’d be ready for your day in about a quarter of an hour!

What about the cost? Two apples provide approximately 200 Calories of energy (note that the energy yield from eating is not 100%, so you will actually receive less than 200 Calories from an apple). The cost of the apple varies based upon season, region, type, and quality of the fruit. Let’s say the two apples cost you $1.00 for ease of comparison. You spend $1.00 for 200 Calories of fresh, delicious apple. How does this compare to energy cost from your wall outlet? In the United States, the average is $0.12 per kWh. The energy from those apples, then, would cost you less than three cents. Over the course of a year, you would spend less than $200 to keep yourself more than fully charged! Imagine spending that much on food in a given year.
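And the cost side, sketched the same way, using the US average electricity price quoted above:

```python
# Cost of a day's (and a year's) worth of energy from the wall.
DAILY_ENERGY_KWH = 10e6 / 3.6e6   # 10 MJ expressed in kWh (~2.78 kWh)
PRICE_PER_KWH = 0.12              # US average quoted above, in dollars

daily_cost = DAILY_ENERGY_KWH * PRICE_PER_KWH
print(f"per day:  ${daily_cost:.2f}")
print(f"per year: ${daily_cost * 365:.2f}")

# Roughly $0.33 per day and ~$120 per year -- comfortably "less than $200",
# and the 200-Calorie apple snack alone works out to about three cents.
```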

Do not try this at home, or while in the Navy.

Here’s a conundrum for you: Using only technology available hundreds of years ago, how could you determine the speed at which light travels? We know now that light travels at 299,792,458 m/s, or, to put it simply, “very, very fast.” In fact, we are so sure of this value that we use it to define the meter, where one meter is equal to the distance that light travels in 1/299,792,458 of a second. Today, we have access to technology which allows us to calculate this value. Time-of-flight devices pulse bright flashes of light which are reflected off a mirror, and the difference in time (down to nanoseconds) combined with the distance between the source/detector and the mirror provides an accurate measurement of the speed of light. Additionally, one can take advantage of cavity resonators or interferometers to obtain the same value. However, these devices did not always exist, yet estimates for the speed of light predate their existence. How was this accomplished?

In a first account of the discussion on light propagation, Aristotle incorrectly disagreed with Empedocles, who claimed that light takes a finite amount of time to reach Earth. Descartes, too, claimed that light traveled instantaneously. Galileo, in Two New Sciences, observed that light appears to travel instantaneously, but noted that the only real conclusion one can draw is that light travels much faster than sound:

Everyday experience shows that the propagation of light is instantaneous; for when we see a piece of artillery fired, at great distance, the flash reaches our eyes without lapse of time; but the sound reaches the ear only after a noticeable interval.

To determine the speed of light, Galileo devised a time-of-flight experiment similar to the one described above, in which two individuals with lanterns would stand at a distance, uncover and re-cover their lanterns upon seeing a flash from the opposing partner, and calculate the times between flashes. By starting very close to account for reaction times and eventually moving very far away, one could see whether there is a noticeable change in latency. However, this experiment is challenging, to say the least. Is there a simpler method?

Enter Danish astronomer Ole Roemer. Known in his time for accuracy in measurement, arguments over the Gregorian calendar, and firing all the police in Copenhagen, he is best known for his measurement of the speed of light in the 17th century.

While at the Paris Observatory, Roemer carefully studied the orbit of Io, one of Jupiter’s moons. Io orbits Jupiter every 42 and a half hours, a steady rate. This discovery was made by Galileo in 1610 and well-characterized over the following years. During each orbit, Io is eclipsed by Jupiter: it disappears into Jupiter’s shadow for a time and then reemerges sometime later. However, Roemer noticed that, unlike the steady rate of Io’s orbit, the times of disappearance and reemergence did change. In fact, Roemer predicted that an eclipse in November 1676 would be 10 minutes behind schedule. When he was proved right, his colleagues at the observatory were flabbergasted. Why was this the case?

The figure above, from Roemer’s notes, highlights Earth’s orbit (HGFEKL) around the sun (A). Io’s eclipses occur in the zone (C to D) defined by Jupiter’s (B) shadow. For a period of time, at point H, one cannot observe all eclipses of Io, since Jupiter blocks the path of light. However, when Earth is at positions L or K, one can observe the disappearances of Io, while at positions G and F, one can observe the reemergences of Io. Even if you didn’t follow any of that, note simply that while Io’s orbit does not change, the Earth’s position relative to Jupiter/Io does change as it orbits the Sun. One observing Io’s eclipse at point L or G is closer to Jupiter than one observing an eclipse when the Earth is at point K or F. If light does not travel instantaneously, observations at points K and F will lag, because light takes a bit longer to reach Earth from Io.

In order to calculate the speed of light from this observation, Roemer needed information from his colleagues on the distance from the Earth to the Sun. Additionally, there are other complications. Nonetheless, using the measured distance from the Earth to the Sun at the time (taking advantage of parallax), Roemer announced that the speed of light was approximately 220,000 km/s. While this is more than 25% below the actual speed of light, it remains astounding that one could estimate this speed using nothing but a telescope, a moon, and a notebook.
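As a rough reconstruction of the arithmetic: the figure usually attributed to Roemer’s data is about 22 minutes for light to cross the diameter of Earth’s orbit, and the Earth-Sun distance of the day (from parallax) was roughly 140 million km. Both numbers are my assumptions for illustration, not taken from Roemer’s notes directly:

```python
# Rough reconstruction of a Roemer-style estimate of the speed of light.
# Assumptions (mine, for illustration): light takes ~22 minutes to cross the
# diameter of Earth's orbit, and the 17th-century Earth-Sun distance estimate
# was ~140 million km.
AU_17TH_CENTURY_KM = 140e6
CROSSING_TIME_S = 22 * 60  # seconds to cross the orbit's *diameter*

speed_km_s = (2 * AU_17TH_CENTURY_KM) / CROSSING_TIME_S
print(f"~{speed_km_s:,.0f} km/s")
# Roughly 212,000 km/s -- in the ballpark of the 220,000 km/s Roemer announced.
```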

Giovanni Cassini, a contemporary of Roemer, was not convinced at first. However, Isaac Newton noted the following in his Principia, from Roemer’s observations:

“For it is now certain from the phenomena of Jupiter’s satellites, confirmed by the observations of different astronomers, that light is propagated in succession and requires about seven or eight minutes to travel from the sun to the earth.” 

In other words, philosophers now began to accept that light travels in a finite amount of time.

Over the course of many years, others continued to estimate the speed of light using creative methods. James Bradley, in 1728, noticed that the apparent positions of stars shift slightly as the Earth moves along its orbit (stellar aberration, an effect often explained by analogy with the way falling rain appears to slant when you are moving), and he used these observations to estimate the speed of light with great accuracy (Bradley: 185,000 miles/second; speed of light: 186,282 miles/second). Around 1850 in France, Fizeau and Foucault designed time-of-flight apparatuses like the one described in the opening paragraph. Rather than modern electronics, Fizeau’s apparatus used a rapidly rotating toothed wheel to chop a beam of light into pulses. With a wheel of one hundred teeth moving at one hundred rotations per second, the speed of light could be calculated to within the accuracy of Bradley’s observations. Albert Michelson, in the 1870s, repeated the measurements on a larger scale, again with a series of mirrors.
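For the curious, the toothed-wheel idea can be sketched with a little geometry: light escapes through a gap, travels to a distant mirror and back, and is first blocked when the wheel has meanwhile rotated by half a tooth spacing. Using the hundred-teeth, hundred-rotations-per-second figures above, one can back out how long the light path must be for that first extinction; this is my own simplification of Fizeau’s apparatus:

```python
# Fizeau-style toothed-wheel sketch: the returning light is first blocked when
# the wheel has advanced by half a tooth spacing during the round trip.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

teeth = 100
rotations_per_s = 100

# Time for the wheel to advance by half a tooth spacing.
blocking_time = 1.0 / (2 * teeth * rotations_per_s)

# Round-trip distance light covers in that time, and the one-way baseline.
baseline_m = SPEED_OF_LIGHT * blocking_time / 2
print(f"Required mirror distance: ~{baseline_m / 1000:.1f} km")
# ~7.5 km -- comparable to the roughly 8 km baseline Fizeau actually used,
# which is why a spinning wheel in Paris could pin down the speed of light.
```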

What can be gleaned from this story is a powerful lesson. At times, the simplest observations can result in the most compelling findings. What it required in this case was careful note-taking and a bit of intellect. Even without those, the value of simple observation cannot be overstated.

Would you accept US$1,000,000 to solve a maths problem? Apparently, no one has managed to claim it yet. A prize of this amount was recently announced for a proof of Beal’s Conjecture. Originally offered with a prize of US$5,000 in 1997, Beal’s conjecture remains unsolved. Today, the Beal Prize has been increased to one million dollars, according to an announcement from the American Mathematical Society.

But what is Beal’s Conjecture? Let’s instead start with something better known, Fermat’s Last Theorem. Pythagoras originally proposed a formula for the right-angled triangle, where a^2 + b^2 = c^2. This equation has an infinite number of natural-number, or positive-integer, solutions. However, Fermat claimed that the analogous equation with any integer exponent greater than 2 has no positive-integer solutions in a, b, and c. Fermat was kind enough to prove his claim for an exponent of 4, but he left the rest unproved. Sir Andrew Wiles announced a proof in 1993, which was found to contain a flaw; the corrected proof, published in 1995, ran to over 100 pages and represented seven years of work. His story, and the story of the theorem, is a fantastic one, and I recommend reading more on it.

In 1993, two years prior to the final proof of Fermat’s Last Theorem and five years into Wiles’ self-imposed seclusion, Andrew Beal proposed another conjecture. It is an extension of the aforementioned theorem. He claimed that the equation a^x + b^y = c^z, with a, b, c, x, y, and z being positive integers and x, y, z > 2, may only have solutions if a, b, and c have a common factor. As mentioned above, he promised US$5,000 to anyone who could provide a proof or counterexample of his conjecture. Put less abstractly, if we say that a=7, b=7, x=6, and y=7, then we have 7^6 + 7^7 = 941,192 = 98^3, so c = 98 and z = 3. Note that x, y, and z are all integers greater than 2. Thus, Beal would claim that a, b, and c must have a common factor. In this case they do, considering that 98 is divisible by 7 (98/7 = 14). There are many (possibly infinitely many) examples like this, but we still need a proof or counterexample of the conjecture. To date, it remains unsolved, and a solution will be rewarded with one million dollars.
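Here is the worked example checked in code, together with a tiny brute-force search for more identities of the same shape (a toy search over small numbers, not an attempt at the conjecture itself; the ranges are arbitrary):

```python
from math import gcd

# Verify the worked example: 7^6 + 7^7 = 98^3, with 7, 7, 98 sharing a factor.
assert 7**6 + 7**7 == 98**3 == 941_192
assert gcd(gcd(7, 7), 98) == 7

# Tiny brute-force search for identities a^x + b^y = c^z with x, y, z > 2.
# Every hit found this way shares a common factor, as Beal's conjecture predicts.
limit = 200
powers = {}  # value -> (base, exponent)
for c in range(2, limit):
    for z in range(3, 8):
        v = c**z
        if v < 10**10:
            powers[v] = (c, z)

for a in range(2, 40):
    for x in range(3, 8):
        for b in range(2, 40):
            for y in range(3, 8):
                s = a**x + b**y
                if s in powers:
                    c, z = powers[s]
                    print(f"{a}^{x} + {b}^{y} = {c}^{z}, "
                          f"common factor: {gcd(gcd(a, b), c)}")
```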

In addition to the Beal Prize, the Clay Mathematics Institute offers US$1,000,000 for a solution to any of seven listed problems. As of this post, only one has been solved, and no money has yet been accepted. These Millennium Prize Problems continue to baffle mathematicians. It is fascinating to consider that there are so many open problems in mathematics, including those integral to number theory, such as Hilbert’s eighth problem.

Not everyone reading this post is a mathematician. Many of us, including myself, think visually. We like pictures, and we like problems we can solve, or at least ones that currently have solutions. So I’ll introduce one! I’m going to turn this post now to a classic problem that began to lay the foundations for graph theory and topology. (For a related post on topology, I recommend my post Diving through Dimensions.) Some of you may be aware of this problem, and I hope I do it justice. Let us begin by traveling to the capital of Prussia, Königsberg. The city (now Kaliningrad) was set on both sides of the Pregel River. On this river sat two islands. Thus, we have four land masses. Connecting these regions were seven bridges, as laid out below in red:

[Figure: the seven bridges of Königsberg]

The people of Königsberg posed a question: Is it possible to traverse the city, crossing all bridges once and only once? Let us assume that one cannot dig under the ground, fly through the sky, swim or boat across the river, or use teleportation technology. One may only access each land mass by crossing bridges. Additionally, one may begin and end at any point. The only requirement is that each bridge must be crossed and that it cannot be crossed more than once.

Leonhard Euler proposed a solution to this problem in 1735. He began by first reducing each land mass to a point. The only relevant information is found in the bridges and where they connect; the areas of the land masses are irrelevant. This combination of nodes (points) and edges (lines) is commonly used in graph theory. Euler noticed that when one reaches a node by one bridge, one must leave that node by another bridge, so any node that is merely passed through consumes an even number of bridges. Thus, all nodes that are passed over (that is, they are neither the beginning nor the end) must have an even number of edges. There may be at most two nodes with an odd number of edges, and these nodes must serve as the beginning and/or the end of our journey.

Now, take a look at our bridges as Euler may have drawn them:

[Figure: the bridges reduced to a graph of nodes and edges]

In this case, we see that the top and bottom nodes on the left each have three bridges, the rightmost node has three bridges, and the middle node on the left has five bridges. In other words, all four nodes have an odd number of edges. This violates our requirement that no more than two nodes may have an odd number of edges. As a result, Euler demonstrated that there is no way to traverse each of the Prussian bridges once and only once. This requirement can be applied to any drawing similar to the one above. I recommend trying it out and testing Euler’s proposal. It is quite rewarding.
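Euler’s criterion is easy to check by machine. Below is a sketch of the Königsberg layout as a list of bridges between the four land masses (the node names are mine), counting how many bridges touch each one:

```python
from collections import Counter

# The seven bridges of Königsberg, as edges between four land masses.
# Node names are mine: N = north bank, S = south bank,
# I = the central island (Kneiphof), E = the eastern land mass.
bridges = [
    ("N", "I"), ("N", "I"),  # two bridges from the north bank to the island
    ("S", "I"), ("S", "I"),  # two from the south bank to the island
    ("N", "E"), ("S", "E"),  # one each from the north and south banks to the east
    ("I", "E"),              # one from the island to the east
]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd_nodes = [n for n, d in degree.items() if d % 2 == 1]
print("degrees:", dict(degree))        # {'N': 3, 'I': 5, 'S': 3, 'E': 3}
print("odd-degree nodes:", odd_nodes)  # all four of them

# A walk crossing every bridge exactly once exists only if the graph has
# zero or two odd-degree nodes; with four, the walk is impossible.
print("walk possible:", len(odd_nodes) in (0, 2))
```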

If you are really interested, take a gander at the current layout on Google Maps:

[Screenshot: the current bridge layout in Kaliningrad, via Google Maps]

It seems that the people of Kaliningrad demolished two of our bridges! The Königsberg bridge problem now has a solution. A part of me likes to think that the bridges were demolished for no other reason than to provide a solution!

As mentioned above, Euler’s solution laid the framework for what we call graph theory. Graph theory, or the study of graphs like the one shown above, has myriad applications. It is used in computer science to model networking. Linguists take advantage of graphs to examine semantic structure. Chemists represent atoms and bonds using graphs. Social network analysis is described in the same terminology, using the same theory, where each person or group may be a node. In biology, we use it to model migration or the spread of disease. More relevant to my work, weighted graphs are used in network analysis, and computational neuroscientists may recognize graphs like the one above when modeling neural networks.

What we thus see is something fantastic. Abstract open problems like the one Euler solved and those proposed by Beal and the Clay Mathematics Institute provide foundational tools that can (and often do) advance our knowledge in numerous fields. Euler’s work propelled us into graph theory. A solution to the Navier-Stokes open problem will advance our understanding of fluid flow. Even if the abstract does not become practical, the journey is delightful.

A shrill whine is engulfing the east coast of the United States. Bluish-black cicadas, specifically Magicicada septendecim, will emerge by the millions per acre. These are not the Biblical locusts, which are more closely related to grasshoppers, though the cicadas have often been likened to them. When Brood II emerges and dies off a few weeks later, we can rest assured that the next emergence will not be until 2030. Magicicada, or periodical cicadas, operate on a 13- or 17-year cycle. Nearly the entire lifespan of each cicada is spent underground as a juvenile before the 4-6 week emergence as an adult, usually at high densities (over 300 per square meter).

[Image: a periodical cicada brood emergence]

Why do cicadas emerge en masse? The behavior is linked to an adaptation known as predator satiation. In other words, the high population density of cicadas ensures a low probability of any individual being eaten by a predator. Birds, a main predator of cicadas, can only feast on so many until satiated, allowing the cicadas free rein for the first week or so of adulthood. Oak trees display a similar behavior through masting. Masting, in a general sense, refers to the production of fruit by trees. In some cases, a mass eruption of fruit occurs after a long quiescent period. Oak trees, whose fruit is food for animals, do so in what is called a mast year, in which an abundance of such fruit is produced. Since this provides food for rodents, that “predator” is satiated. However, the populations of such rodents rise during mast years. Nonetheless, oak trees are able to generate enough fruit to reproduce, due simply to the mass production of said fruit.

As mentioned before, though, cicadas operate on 13- or 17-year cycles. Why does this matter? To answer this question, let’s talk about snowshoe hares. These hares operate on a 10-year population cycle. The rise and fall of snowshoe hare populations coincides with a slightly out-of-phase rise and fall of Canadian lynx populations. These predator-prey dynamics are striking, and they apply to many species. The key point here is that the population cycles of predators and their prey can coincide, as they do for the snowshoe hare. Any predator with a one-, two-, five-, or ten-year cycle could align perfectly with the hare’s cycle, as the lynx does. (This was recently featured in the New Yorker.)

Cicadas, however, operate a little differently. Their 13- and 17-year cycles are prime numbers. Since their cycles are only divisible by the cycle’s length (13 or 17) and one, it becomes difficult for predators to align their population cycles with the cicada’s. A predator with, let’s say, a three-year cycle should only align with the 17-year brood every 51 years (note that many predators have 2-5 year life cycles). Additionally, broods of different cycle lengths can rest assured that they will not overlap and thus compete for resources. This should only occur every 221 years, which I would argue is a rare brood overlap. Thus, there are two benefits to the use of prime numbers in cicada population cycles. First, predator population cycles are unlikely to align with large prime numbers. Second, different broods will rarely overlap.
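The arithmetic behind both benefits is just least common multiples. Here is a quick sketch comparing the real 13- and 17-year cycles with a hypothetical non-prime 12-year cycle (the predator cycle lengths are illustrative):

```python
from math import lcm  # Python 3.9+

# How often would a predator's population cycle line up with a cicada brood?
predator_cycles = [2, 3, 4, 5, 6]
for brood in (12, 13, 17):  # 12 is a hypothetical non-prime cycle for contrast
    overlaps = {p: lcm(p, brood) for p in predator_cycles}
    print(f"{brood}-year brood aligns with predators every: {overlaps}")

# A 12-year brood meets 2-, 3-, 4-, and 6-year predators every 12 years,
# while a 17-year brood holds them off for 34-102 years. And two prime broods
# of 13 and 17 years only coincide every lcm(13, 17) = 221 years.
print("13- and 17-year broods overlap every", lcm(13, 17), "years")
```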

In preparation for Brood II, I provide a recording from Brood X in 2004 of cicada calls. It is both enchanting and annoying. Please enjoy.

Mass Casualty Incidents

April 27, 2013

In the wake of the 2013 Boston Marathon bombings, I feel it is helpful to reflect, as many have already done, on the situation. Atul Gawande, for example, has already provided an excellent review of the preparedness of Boston’s hospitals. The city was prepared due to extensive and flexible training, along with the assistance of myriad bystanders. With three deaths among more than 260 casualties, the mortality rate was about 1%. This is remarkable, and numerous factors may have played a role in addition to any disaster plan.

What could have led this mortality rate to be so low? First, consider the location. This was a major public event, with medical personnel readily available. Additionally, the timing of the bombings was unique. The attacks occurred not only on a holiday, when operating rooms may not be at capacity, but also shortly before the 3:00 p.m. shift change at nearby hospitals. This created a situation where space was available and twice the usual number of medical teams were on hand. The location itself was beneficial, too. Boston is home to world-renowned hospitals with seven trauma centers. In other words, capacity and quality peak at this location. Even the location of the bomb was beneficial. Indoor attacks tend to produce greater injuries due to concentrated blast waves. The shoddy pressure-cooker bombs in this open venue produced fewer primary blast injuries than they would have indoors. Finally, we can see the effects of military medical knowledge making its way into trauma centers in the United States. For example, Boston’s EMS chief hosted a conference to discuss the effects of blast injuries and terrorist attacks across the globe, taking advantage of this military knowledge. Taken together, a combination of superb disaster preparedness with key factors unique to this situation may have played a role in reducing the mortality of this attack.

This is not intended to detract from the severity of the Boston Marathon bombings. Like all attacks, this was a national tragedy that struck a chord with many of us. Numerous media outlets reporting on the attack described mass casualties. It is this term that I would like to clarify in this blog post. What does mass casualty mean? Having spoken with numerous colleagues and friends, I find that this term needs some clarification.

First, one must define casualty. This term has both military and civilian usages. A military casualty is any person who is deemed unfit for duty, typically due to death, injury, illness, or capture. A civilian casualty, in the broadest sense, refers to both injuries and deaths in some incident. I bring this up because the term casualty is oftentimes incorrectly used synonymously with fatality, with the latter referring only to deaths, while casualties include both fatal and non-fatal injuries.

But what is a mass casualty? The New York Post may want you to believe that this refers to a large number of fatalities, but we already know that a casualty is not a fatality. Is it then a large number of people injured or killed? The answer is more nuanced than that. A mass casualty incident is a term used by emergency medical services, and it does not refer solely to the number of people injured or killed. It is better to think of it as a protocol that must be followed in a situation where the number of casualties (or potential casualties) outweighs the resources available. A mass casualty incident could thus range from a major explosion with hundreds injured to a single building with a carbon monoxide leak where there are not yet any injuries. It comes down to the difference between the number of people who must be triaged and the supplies available. This term may be related to total casualties, but it is not a measure of them.

In a mass casualty incident, a protocol of triage, treatment, and transport is followed. First, all persons in the vicinity are triaged. This means that medical personnel look at each person to see who requires the most immediate medical attention. Then, triaged patients (color coded) are taken to appropriate treatment areas. In extreme cases, an on-site morgue is set up to handle the worst cases. This happened in Boston, where an EMS tent acted as the morgue. Finally, after initial treatment, patients are transported to hospitals for care. This protocol has specific guidelines for each member of the team, creating a situation for efficient triage, treatment, and transport.

It is disheartening, then, to hear cries of mass casualties used loosely in the media, often implying a large number of civilian deaths. This creates an air of panic and fear. While we must report incidents like this with the gravity they deserve, we must also do so accurately, and the misuse of this term is just one of many mistakes the media made in this scenario. Precision in language is important. More importantly, we must remain accurate and vigilant in both the reporting and the understanding of breaking news.

Though I only focused on one term, I hope this provides a general lesson to all. Misuse of terms in national media misinforms. We must take it upon ourselves to remain educated, promote vigilance in the reading of such reports, and educate others on what we learned.

When looking at pictures of animals in the wild, one may ask the question: Why are most of these animals brown or black, and why do we see very few colorful creatures? This question can be approached from multiple angles. I will ignore selective pressures that provide an advantage to certain colors. Instead, I will focus on the mechanism behind these colors.

Let’s begin with mammals. When we talk about most creatures being brown or black, we are usually considering mammals, since colors are vibrant across other classes (e.g., reptilia, amphibia, aves). Colors in skin and fur arise from two pigments, both forms of melanin: one brown-black (eumelanin) and one reddish-yellow (pheomelanin). I challenge you to use this color palette to make the color green. There are evolutionary advantages to being brown. Early mammals were presumably small, rat-like creatures living on land. It was best to blend into the environment and to invest energy in escape mechanisms. Amphibians, on the other hand, were not limited to the brown dirt of land and were able to develop a green color. This does not answer how they could do this, so let me delve into that.

This is where it gets interesting.

It turns out that birds, amphibians, and reptiles are unable to generate pigments for green (and many cannot generate blue). Most of the tetrapod (four-legged) world is like this. How, then, could they possibly look so vibrant? The colors arise not from pigments, but from structural coloration: microscopic structures that refract and scatter light. The underlying pigments are still only two: black and yellow-red. Thus, when a chameleon changes color, it is not depositing pigments. Instead, it is changing the arrangement of its light-reflecting cells to alter the structural effect.


In bird feathers, the mechanism is a bit more complicated, but it is the same idea. If you have time, try this experiment: Take some flour and mix it with water. Mix until the flour is suspended evenly in the water. Now, move the glass to a well-lit area. What color do you see? You will notice that the suspension looks bluish-white as opposed to just white. This is due to light scattering. Shorter-wavelength blue light is more easily scattered than longer-wavelength red light. What you then witness is this scattered blue light, giving the suspension a blue tint. Another example is found in compact discs. If you stare at the bottom, you will see not just a silver-coated disc, but an array of colors reflected off the CD’s surface. This is a related structural effect, in this case diffraction of light from the closely spaced tracks on the CD. The spacing of ridges and the orientation of pigment granules in bird feathers generate a similar effect. Though the pigments are still black and yellow-red, this scattering provides vibrant colors. If you want to learn more about this, I recommend reading this article on peacock feather coloration.

The aforementioned scattering is known as the Tyndall effect. Simply put, when a suspension of particles is exposed to light, some of this light is scattered and produces interesting colors. The flour/water example I provided was just one. You will also see a blue tint in the smoke from the exhaust of some automobiles, again due to a suspension similar to our flour and water. Simply put, longer wavelengths (e.g., reds) are transmitted, while shorter wavelengths (e.g., blues) are scattered back. I should note that this is a different mechanism from the light scattering seen in the sky at sunset. Whereas scattering in our atmosphere is usually by very small particles (Rayleigh scattering), scattering in colloidal suspensions is by relatively larger particles (Mie scattering). This is important to note because the effects of scattering from larger particles are more vibrant and less subtle.
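To put a number on "shorter wavelengths scatter more," here is a quick sketch using the Rayleigh scaling mentioned above (scattered intensity proportional to 1/λ^4); the two wavelengths are just representative values for blue and red light:

```python
# Rayleigh scattering scales as 1/wavelength^4: compare blue vs. red light.
blue_nm, red_nm = 450.0, 650.0  # representative wavelengths, in nanometres

ratio = (red_nm / blue_nm) ** 4
print(f"Blue light is scattered roughly {ratio:.1f}x more strongly than red.")
# ~4.4x in the Rayleigh regime of very small particles (as in the sky).
```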

Mammals are not completely brown and black. An exception to the boring colors of mammals arises in the irises of our eyes. The iris still relies on melanin, as before. However, the density of melanin determines how opaque or translucent the upper layer of the iris becomes. When it is more translucent, light can pass through this upper layer and be backscattered by the layers below. As before, shorter wavelengths of light are more likely to be scattered. This, in turn, results in blue irises.

Like the example above, many questions in this world are nuanced, and these nuances make the questions (and answers) more interesting.