Archives For Science

Pops, Shocks, Impulses

January 8, 2014 — 1 Comment


Take a balloon, blow it up, and quickly stick a needle through the rubber. What do you hear? You wouldn’t be surprised to hear a loud “pop” immediately after piercing the once-inflated balloon. Try it again with a few balloons of different radii. What do you hear now? Chances are that you will notice a slight change in frequency. Additionally, depending upon the room in which you pop the balloons, you may notice additional reverberation.

What is happening here? Why do different balloons sound slightly different? What is the reverberation, and is it useful?

First, why does a balloon make a loud popping sound? When a balloon pops, the rubber suddenly contracts. This leaves a discontinuity in the air pressure. Pressure outside the balloon is equal to the atmospheric pressure, but the balloon’s internal pressure is often a couple hundred Pascals higher than that of the surrounding air. Upon retraction of the rubber, this high pressure region meets the lower pressure of the atmosphere. This newly-formed pressure wave spreads outward from the center of the late balloon’s location as a weak shock wave. This abrupt change in pressure, as it spreads outward, acts like an impulse in the air, a point we will return to later. The balloon’s weak shock wave is similar to the strong shock wave from a jet plane, though the equations that govern the two differ.

As the peak in pressure propagates outward, something more fascinating is unveiled. Air is accelerated outward by the sudden difference in pressure, and it overshoots due to inertia. This leaves a region of low pressure behind the high-pressure wave. Air then accelerates inward in response and once again overshoots, this time in the opposite direction. The process continues, creating an oscillation in the air with a characteristic frequency that depends upon the radius of the former balloon. Thus, smaller balloons sound higher-pitched, and larger balloons sound deeper.

The question becomes far more interesting when considering that initial weak shock wave. As mentioned previously, this discontinuity acts like an impulse in the air. Impulses are powerful tools in that they contain all frequency information. If one wished to find the resonant frequencies of a room, one could play sounds at various frequencies and find those which reflected most loudly off the walls. However, this is a time-consuming process and wholly impractical. An impulse, by contrast, contains all frequencies at once. If an acoustical engineer were to supply an impulse at different locations in a room and place a microphone somewhere else, that engineer could calculate which frequencies are best reflected (selected) by that room's architecture. This could be done by firing a starter pistol or by clapping one's hands (try it out). One could also pop a balloon. The balloon's pop provides an impulse. The room (unless it is anechoic) will respond at particular frequencies. What this means is quite fascinating. The sound of the balloon's pop is the sound of a recording studio, the sound of a theater, the sound of a living room, and the sound of a cafeteria.

The pop is the sound of the room itself.
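
If you would like to see why an impulse is such a useful probe, here is a minimal numerical sketch (using NumPy, with a made-up "room" of three echoes standing in for real architecture): the magnitude spectrum of an ideal impulse is perfectly flat, so whatever spectrum you record after the pop is, up to scale, the room's own frequency response.

```python
import numpy as np

fs = 44_100                    # sample rate (Hz)
impulse = np.zeros(fs)         # one second of silence...
impulse[0] = 1.0               # ...with a unit impulse at t = 0

# An ideal impulse contains every frequency with equal energy:
spectrum = np.abs(np.fft.rfft(impulse))
print(spectrum.min(), spectrum.max())   # both 1.0 -- a flat spectrum

# A toy "room": the direct sound plus two weaker reflections.
# (These delays and gains are invented for illustration.)
room = np.zeros(fs)
room[0], room[2_000], room[5_500] = 1.0, 0.6, 0.3

# What the microphone records is the pop convolved with the room.
recorded = np.convolve(impulse, room)[:fs]

# Because the source was an impulse, the recording's spectrum
# is the room's frequency response.
response = np.abs(np.fft.rfft(recorded))
print(response.min(), response.max())   # no longer flat: the room's signature
```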


A Musician on Mars

December 11, 2013 — 1 Comment


Welcome to Mars. As one of the first colonists on the fourth planet from the Sun, you endeavor to make it your new home. On Earth, you filled your time in numerous ways, but your real passion was music. Luckily, the Indian Space Research Organisation (ISRO) allowed you to bring your prized possession: a Steinway grand piano. Excited to play for the first time in months, you squeeze into your ISRO-issued space suit and wheel the piano onto the Martian surface. It’s noon near the equator. The temperature is around 25ºC (77ºF). You stretch out your arms, relax, and strike your first key. The sound is… quiet and out of tune. Assuming the piano needs to be retuned, you wheel it back into your pressurized vessel, take off your suit, and tune it yourself. Satisfied, you wheel the piano onto the surface again. The Martian surface is quiet, and you notice the colors of the sky are a lot redder than you had seen in NASA photographs. Again, you begin to play. It again sounds too quiet.

What is happening here? Why might a piano sound different when played on the Martian surface? This is a fairly involved question. Luckily, we are considering an instrument with taut strings rather than one that depends more heavily upon atmospheric conditions, such as a trombone or pipe organ. Furthermore, the equatorial temperature is Earth-like. Why, then, might a piano sound different on Mars?

When tuning and subsequently playing a piano, the frequency you perceive (the pitch) depends upon the tension, length, and mass of the strings within the piano. Since the temperature is about the same as before, and since you did not physically exchange the strings, these properties remain fairly constant. However, the fluid surrounding the strings also plays a role. Like any oscillator, a vibrating string is loaded by the fluid in which it is immersed, and that load shifts the frequency at which it resonates. On Mars, the atmosphere is far more rarefied, with a mean surface pressure of about 600 Pa. Compare this with a pressure of over 100,000 Pa at sea level on Earth. The reduced loading by air biases the strings toward slightly higher frequencies (a higher pitch). If you retuned the piano in a pressurized cabin and then played it on the Martian surface once again, it would still sound out of tune. A simple solution is to retune the piano while on the surface.
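
To put a rough number on that bias, here is a minimal sketch using the ideal-string formula plus the classic inviscid "added mass" approximation for a cylinder (the surrounding fluid adds roughly ρπa² of mass per unit length). The string parameters are invented for illustration, not real Steinway specifications:

```python
import math

def fundamental_hz(tension_n, length_m, mu_kg_per_m, rho_fluid, radius_m):
    """Ideal-string fundamental f = sqrt(T/mu)/(2L), with the fluid's
    added mass (~ rho * pi * a^2 per unit length) folded into mu."""
    mu_eff = mu_kg_per_m + rho_fluid * math.pi * radius_m**2
    return math.sqrt(tension_n / mu_eff) / (2 * length_m)

# Hypothetical middle-C-ish string: 700 N tension, 0.65 m long,
# 6 g/m linear density, 0.5 mm radius.
T, L, mu, a = 700.0, 0.65, 0.006, 0.0005

rho_earth = 1.2     # kg/m^3, sea-level air
rho_mars  = 0.02    # kg/m^3, ~600 Pa CO2 atmosphere

f_earth = fundamental_hz(T, L, mu, rho_earth, a)
f_mars  = fundamental_hz(T, L, mu, rho_mars, a)
print(f_earth, f_mars, f_mars - f_earth)   # Mars comes out very slightly sharp
```

In this bare-string toy model the shift is tiny; a real piano also couples its strings to a soundboard that feels the atmosphere, which this sketch ignores.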

However, this is not the only problem with playing music on the Martian surface. Remember that Mars has a lower-pressure atmosphere. Sound, as you may recall, propagates as an oscillation of pressure in some medium (like air). If the mean pressure is lower, the ability of sound to propagate over long distances changes. Without going into too many details here, sound will not propagate very far on Mars, and high frequencies are heavily attenuated. Before, the pitch was shifted slightly higher. Here, on the other hand, higher frequencies will sound softer than lower frequencies, and all frequencies will sound quieter. This means that not only does the piano sound out of tune, but it also sounds muted. The question of sound propagation is so interesting that an acoustics researcher simulated sound on Earth, Mars, and Titan. She found that a scream which may travel over one kilometer on Earth would only carry 14 meters on Mars!

Your out-of-tune, muted piano probably wouldn't be audible to a nearby audience on the Martian surface.

Plug Me In

October 8, 2013 — Leave a comment

I’m going to have a bit more fun with this blog post. For this thought experiment, I’d like you to suspend your disbelief. Imagine, for a moment, that someone offered you the chance to “plug” your body into a standard outlet and let yourself “charge.” All of your energy would be gathered from this charging process. You would eat nothing. How long must you remain connected to the outlet? How much will it cost?

Where do we start? There are a few ways to approach this, but I'll start with the basal metabolic rate for an average adult male. For a 70 kg male, this is typically around 1,600-1,700 Calories (kilocalories) per day. If you would like to do more than just sit against a wall, you will need a bit more energy. Let's round that up to 2,000 Calories. Converting this to units with which we can work, this comes to 8.36 megajoules (MJ). As with most thought experiments, it is easiest to work in orders of magnitude, so we will round this up to 10 MJ.

We now know how much energy we need, but how long will it take to draw this energy from an outlet? Every outlet has a maximum power draw, but very few appliances, if any, reach this maximum value. We denote power drawn in joules per second as watts (W). On average, microwaves draw 1,450 W, vacuum cleaners 630 W, computers 240 W (though, as I type this, I am drawing <100 W), and alarm clocks 2 W. In other words, it's variable. If we charged ourselves at the rate a microwave oven draws power, it would take almost 2 hours. If we used a 100 W laptop charger, it would take 28 hours! A laptop charger would thus not suffice, since we would not acquire our necessary daily energy within a given day. All of this energy would be expelled as heat, and you would be a blob of meat plugged into a wall outlet. That's not a fantastic way to live.
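
The arithmetic is simple enough to script. A minimal sketch, using the rough average wattages quoted above:

```python
DAILY_ENERGY_J = 10e6        # ~2,000 Calories, rounded up to 10 MJ

appliances_w = {
    "microwave": 1450,
    "vacuum cleaner": 630,
    "desktop computer": 240,
    "laptop charger": 100,
    "alarm clock": 2,
}

for name, watts in appliances_w.items():
    hours = DAILY_ENERGY_J / watts / 3600
    print(f"{name:>16}: {hours:8.1f} hours to 'charge'")
# microwave: ~1.9 h; laptop charger: ~27.8 h; alarm clock: ~58 days
```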

In case you are wondering, architectural engineers model heat production from humans as if they were 100 W light bulbs. This is eerily similar to our 100 W laptop charger that provides just enough energy to get us through a single day!

If you tried all of the above with one of Tesla's new 10 kW chargers, you'd be ready for your day in about a quarter of an hour!

What about the cost? Two apples provide approximately 200 Calories of energy (note that the energy yield from eating is not 100%, so you will actually receive less than 200 Calories from an apple). The cost of an apple varies with season, region, type, and quality of the fruit. Let's say the two apples cost you $1.00 for ease of comparison. You spend $1.00 for 200 Calories of fresh, delicious apple. How does this compare to the cost of energy from your wall outlet? In the United States, electricity averages $0.12 per kWh. The same 200 Calories drawn from the outlet would cost you less than three cents. Over the course of a year, you would spend less than $200 to keep yourself more than fully charged! Imagine spending only that much on food in a given year.
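
Again, a quick sketch of the comparison, using the $0.12/kWh average quoted above:

```python
KWH_PRICE = 0.12                    # US average, $/kWh
J_PER_KWH = 3.6e6

apple_kwh = 200 * 4184 / J_PER_KWH  # 200 Calories expressed in kWh
print(apple_kwh * KWH_PRICE)        # ~$0.028 -- under three cents

daily_kwh = 10e6 / J_PER_KWH        # our 10 MJ/day budget
print(daily_kwh * KWH_PRICE * 365)  # ~$122 per year of outlet "food"
```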

Do not try this at home, or while in the Navy.

Here's a conundrum for you: Using only technology available hundreds of years ago, how could you determine the speed at which light travels? We know now that light travels at 299,792,458 m/s, or, to put it simply, "very, very fast." In fact, we are so sure of this value that we use it to define the meter, where one meter is equal to the distance that light travels in 1/299,792,458 of a second. Today, we have access to technology which allows us to measure this value directly. Time-of-flight devices pulse bright flashes of light off a mirror, and the difference in time (down to nanoseconds), combined with the distance between the source/detector and the mirror, provides an accurate measurement of the speed of light. Additionally, one can take advantage of cavity resonators or interferometers to obtain the same value. However, these devices did not always exist, yet estimates for the speed of light predate them. How was this accomplished?

In one of the first accounts of the discussion on light propagation, Aristotle disagreed, incorrectly, with Empedocles, who claimed that light takes a finite amount of time to reach Earth. Descartes, too, claimed that light traveled instantaneously. Galileo, in Two New Sciences, observed that light appears to travel instantaneously, and that the only safe conclusion is that light must travel much faster than sound:

Everyday experience shows that the propagation of light is instantaneous; for when we see a piece of artillery fired, at great distance, the flash reaches our eyes without lapse of time; but the sound reaches the ear only after a noticeable interval.

To determine the speed of light, Galileo devised a time-of-flight experiment similar to the one described above, where two individuals with lanterns would stand at a distance, uncover and recover them upon seeing a flash from the opposing partner, and calculate times between flashes. By starting very close to account for reaction times and eventually moving very far away, one could see if there is a noticeable change in latency. However, this experiment is challenging, to say the least. Is there a simpler method?

Enter Danish astronomer Ole Roemer. Known in his time for accuracy in measurement, arguments over the Gregorian calendar, and firing all the police in Copenhagen, he is best known for his measurement of the speed of light in the 17th century.

While at the Paris Observatory, Roemer carefully studied the orbit of Io, one of Jupiter's moons. Io orbits Jupiter every 42 and a half hours, at a steady rate. This was discovered by Galileo in 1610 and well characterized over the following years. During each orbit, Io is eclipsed by Jupiter: it disappears into Jupiter's shadow and reemerges some time later. However, Roemer noticed that, unlike the steady rate of Io's orbit, the observed times of disappearance and reemergence did change. In fact, Roemer predicted that an eclipse in November 1676 would run 10 minutes behind schedule. When he was proved right, his fellow astronomers were flabbergasted. Why was this the case?

The figure above, from Roemer's notes, highlights Earth's orbit (HGFEKL) around the Sun (A). Io's orbit is also drawn, with the stretch of it eclipsed by Jupiter's (B) shadow marked C to D. For part of the year, near point H, one cannot observe all eclipses of Io, since Jupiter blocks the path of light. However, when Earth is at positions L or K, one can observe the disappearances of Io, while at positions G and F, one can observe the reemergences of Io. Even if you didn't follow any of that, note simply that while Io's orbit does not change, the Earth's position relative to Jupiter/Io does change as the Earth orbits the Sun. One observing an eclipse of Io at point L or G is closer to Jupiter than one observing an eclipse when the Earth is at point K or F. If light does not travel instantaneously, observations at points K and F will lag, because light takes a bit longer to reach Earth from Io.

In order to calculate the speed of light from this observation, Roemer needed information from his colleagues on the distances from the Earth to the Sun. Additionally, there are other complications. Nonetheless, using the measured distance from the Earth to the Sun at the time (taking advantage of parallax), Roemer announced that the speed of light was approximately 220,000 km/s. While more than 25% lower than the actual speed of light, it remains astounding that one could estimate this speed using nothing but a telescope, a moon, and a notebook.
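
You can redo Roemer's arithmetic in a few lines. A sketch using the modern Earth-Sun distance and the Roemer-era figure of roughly 22 minutes for light to cross the diameter of Earth's orbit (his announced value was lower mainly because the 17th-century measurement of that distance was cruder):

```python
AU_M = 1.496e11            # modern mean Earth-Sun distance, meters
DELAY_S = 22 * 60          # ~22 minutes to cross the orbit's diameter

c_estimate = 2 * AU_M / DELAY_S
print(c_estimate / 1000)   # ~227,000 km/s, in the neighborhood of
                           # Roemer's announced 220,000 km/s
```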

Giovanni Cassini, a contemporary of Roemer, was not convinced at first. However, Isaac Newton noted the following in his Principia, from Roemer’s observations:

“For it is now certain from the phenomena of Jupiter’s satellites, confirmed by the observations of different astronomers, that light is propagated in succession and requires about seven or eight minutes to travel from the sun to the earth.” 

In other words, philosophers now began to accept that light travels in a finite amount of time.

Over the course of many years, others continued to estimate the speed of light using creative methods. James Bradley, in 1728, noticed that the apparent positions of stars shift slightly over the course of the year, an effect (stellar aberration) often explained by analogy to falling rain, which appears to slant when you run through it. He used these observations to estimate the speed of light with great accuracy (Bradley: 185,000 miles/second; speed of light: 186,282 miles/second). Around 1850 in France, Fizeau and Foucault designed time-of-flight apparatuses like the one described in the opening paragraph. In place of modern electronics, Fizeau's apparatus used a rapidly rotating toothed wheel to chop a beam of light into blips. With a wheel of one hundred teeth moving at one hundred rotations per second, the speed of light could be calculated to within the accuracy of Bradley's observations. Albert Michelson, in the 1870s, repeated the measurements on a larger scale, again with a series of mirrors.
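
The toothed wheel reduces to one formula: the returning beam goes dark when, during the light's round trip, the wheel advances by half a tooth period, that is, 1/(2N) of a revolution. Setting 2d/c equal to 1/(2Nf) gives c = 4dNf. A sketch, using the parameters commonly reported for Fizeau's 1849 run (an 8,633 m path, a 720-tooth wheel, first extinction near 12.6 revolutions per second):

```python
def toothed_wheel_c(distance_m, n_teeth, rev_per_s):
    """Speed of light from the first blackout of a Fizeau wheel:
    round-trip time 2d/c equals the time to advance half a tooth
    period, 1/(2*N*f), so c = 4*d*N*f."""
    return 4 * distance_m * n_teeth * rev_per_s

print(toothed_wheel_c(8_633, 720, 12.6))   # ~3.13e8 m/s, a few percent high
```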

What can be gleaned from this story is a powerful lesson. At times, the simplest observations can yield the most compelling findings. All it required in this case was careful note-taking and a bit of intellect. The power of simple observation cannot be overstated.

Would you accept US$1,000,000 to solve a math problem? Apparently, not everyone would say yes. A prize of this amount was recently announced for a proof (or disproof) of Beal's Conjecture. Originally offered with a prize of US$5,000 in 1997, Beal's Conjecture remains unsolved. Today, the Beal Prize has been increased to one million dollars, according to an announcement from the American Mathematical Society.

But what is Beal's Conjecture? Let's instead start with something better known, Fermat's Last Theorem. Pythagoras originally proposed a formula for the right triangle, where a^2 + b^2 = c^2. This equation has an infinite number of natural number, or positive integer, solutions. Fermat, however, claimed that the analogous equation a^n + b^n = c^n has no positive integer solutions for any integer exponent n greater than 2. Fermat was kind enough to prove his claim for the exponent 4, but he left the rest unsolved. In 1993, Sir Andrew Wiles announced a proof, over 100 pages representing seven years of work; a flaw was found in it, which he repaired, and the corrected proof was published in 1995. His story, and the story of the theorem, is a fantastic one, and I recommend reading more on it.

In 1993, two years prior to the final proof of Fermat's Last Theorem and five years into Wiles' seclusion, Andrew Beal proposed another conjecture, an extension of the aforementioned theorem. He claimed that the system a^x + b^y = c^z, with a, b, c, x, y, and z being positive integers and x, y, z > 2, may only have solutions if a, b, and c have a common factor. As mentioned above, he promised US$5,000 to anyone who could provide a proof or counterexample of his conjecture. Put less abstractly, if we say that a = 7, b = 7, x = 6, and y = 7, then we have 7^6 + 7^7 = 941,192 = 98^3. Note that x, y, and z are all integers greater than 2. Thus, Beal would claim that a, b, and c must have a common factor. In this case they do: 98 is divisible by 7 (98/7 = 14). There are many (possibly infinitely many) examples like this, but we still need a proof or a counterexample of the conjecture. To date, it remains unsolved, and a solution will be rewarded with one million dollars.
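
You can hunt for examples (or, if you are very lucky, a counterexample) with a brute-force search. A small sketch; the search bounds are arbitrary:

```python
from math import gcd

# Verify the example from the text: 7^6 + 7^7 = 98^3.
assert 7**6 + 7**7 == 98**3 == 941_192

# Search small bases and exponents > 2 for perfect-power sums.
for a in range(2, 30):
    for b in range(a, 30):                 # b >= a avoids mirror duplicates
        for x in range(3, 8):
            for y in range(3, 8):
                s = a**x + b**y
                for z in range(3, 8):
                    c = round(s ** (1 / z))
                    for cand in (c - 1, c, c + 1):   # guard float rounding
                        if cand > 1 and cand**z == s:
                            # Beal predicts this gcd is always > 1.
                            print(a, x, b, y, cand, z, gcd(gcd(a, b), cand))
```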

In addition to the Beal Prize, the Clay Mathematics Institute offers US$1,000,000 for a solution to any of seven listed problems. As of this post, only one has been solved, and no money has yet been accepted. These Millennium Prize Problems continue to baffle mathematicians. It is fascinating to consider that there are so many open problems in mathematics, including those integral to number theory, such as Hilbert’s eighth problem.

Not everyone reading this post is a mathematician. Many of us, including myself, think visually. We like pictures, and we like problems we can solve, or at least ones that currently have solutions. So I'll introduce one! I'm going to turn this post now to a classic problem that began to lay the foundations for graph theory and topology. (For a related post on topology, I recommend Diving Through Dimensions.) Some of you may be aware of this problem, and I hope I do it justice. Let us begin by traveling to the old Prussian city of Königsberg. The city (now Kaliningrad) was set on both sides of the Pregel River, and on the river sat two islands. Thus, we have four land masses. Connecting these regions were seven bridges, as laid out below in red:

[Figure: the seven bridges of Königsberg, highlighted in red]

The people of Königsberg posed a question: Is it possible to traverse the city, crossing all bridges once and only once? Let us assume that one cannot dig under the ground, fly through the sky, swim across the water, or use teleportation technology. One may only move between land masses by crossing bridges. Additionally, one may begin and end at any point. The only requirement is that each bridge must be crossed and that no bridge may be crossed more than once.

Leonhard Euler proposed a solution to this problem in 1735. He began by reducing each land mass to a point. The only relevant information is found in the bridges and where they connect; the sizes of the land masses are irrelevant. This combination of nodes (points) and edges (lines) is commonly used in graph theory. Euler noticed that when one reaches a node by one bridge, one must leave that node by another bridge, so a full pass through a node consumes an even number of bridges. Thus, every node that is merely passed through (that is, it is neither the beginning nor the end) must have an even number of edges. There may be at most two nodes with an odd number of edges, and these nodes must serve as the beginning and/or the end of our journey.

Now, take a look at our bridges as Euler may have drawn them:

[Figure: Euler's graph of the bridges, with each land mass reduced to a node]

In this case, we see that the top and bottom nodes on the left each have three bridges, the rightmost node has three bridges, and the middle node on the left has five bridges. In other words, all four nodes have an odd number of edges. This violates our requirement that no more than two nodes may have an odd number of edges. As a result, Euler demonstrated that there is no way to traverse each of the Prussian bridges once and only once. This requirement can be applied to any drawing similar to the one above. I recommend trying it out and testing Euler’s proposal. It is quite rewarding.
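
Euler's parity test is easy to mechanize. A minimal sketch, labeling the north bank N, the south bank S, and the two islands A and B (the labels are mine, not Euler's):

```python
from collections import Counter

# The seven bridges as edges between the four land masses.
bridges = [("N", "A"), ("N", "A"), ("S", "A"), ("S", "A"),
           ("N", "B"), ("S", "B"), ("A", "B")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd_nodes = [n for n, d in degree.items() if d % 2 == 1]
print(dict(degree))         # A: 5; N, S, B: 3 apiece -- all four are odd
print(len(odd_nodes) <= 2)  # False: no walk crosses every bridge exactly once
```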

If you are really interested, take a gander at the current layout on Google Maps:

[Figure: the bridges as they stand today, via Google Maps]

It seems that the people of Kaliningrad demolished two of our bridges! The Königsberg bridge problem now has a solution. A part of me likes to think that the bridges were demolished for no other reason than to provide a solution!

As mentioned above, Euler’s solution laid the framework for what we call graph theory. Graph theory, or the study of graphs like the one shown above, has myriad applications. It is used in computer science to model networking. Linguists take advantage of graphs to examine semantic structure. Chemists represent atoms and bonds using graphs. Social network analysis is described in the same terminology, using the same theory, where each person or group may be a node. In biology, we use it to model migration or the spread of disease. More relevant to my work, weighted graphs are used in network analysis, and computational neuroscientists may recognize graphs like the one above when modeling neural networks.

What we thus see is something fantastic. Abstract open problems like the one Euler solved and those proposed by Beal and the Clay Mathematics Institute provide foundational tools that can (and often do) advance our knowledge in numerous fields. Euler’s work propelled us into graph theory. A solution to the Navier-Stokes open problem will advance our understanding of fluid flow. Even if the abstract does not become practical, the journey is delightful.

A shrill whine is engulfing the east coast of the United States. Bluish-black cicadas, specifically Magicicada septendecim, will emerge in staggering numbers, at times more than a million per acre. These are not the Biblical locusts, which are closer kin to grasshoppers, but they have been likened to them. When Brood II emerges and dies off a few weeks later, we can rest assured that the next emergence will not be until 2030. Magicicada, or periodical cicadas, operate on a 13- or 17-year cycle. Nearly the entire lifespan of each cicada is spent underground as a juvenile before the 4-6 week emergence as an adult, usually at high densities (over 300 per square meter).


Why do cicadas emerge en masse? The behavior is linked to an adaptation known as predator satiation. In other words, the high population density of cicadas ensures a low probability that any individual is eaten by a predator. Birds, a main predator of cicadas, can only feast on so many until satiated, allowing the cicadas free rein for the first week or so of adulthood. Oak trees display a similar behavior through masting. Masting, in a general sense, refers to the production of fruit by trees; in some species, a mass crop erupts after a long quiescent period. Oak trees, whose acorns are food for animals, do so in what is called a mast year, when an abundance of such fruit is produced. Since this floods rodents with food, that "predator" is satiated, although rodent populations do rise during mast years. Nonetheless, oak trees are able to generate enough fruit to reproduce, due simply to the sheer mass of fruit produced.

As mentioned before, though, cicadas operate on 13- or 17-year cycles. Why does this matter? To answer this question, let's talk about snowshoe hares. These hares operate on a 10-year population cycle. The rise and fall of snowshoe hare populations coincides with a slightly out-of-phase rise and fall of Canadian lynx populations. These predator-prey dynamics are striking, and they apply to many species. The key point is that the population cycles of predators and their prey can lock together. Any predator with a one-, two-, five-, or ten-year cycle could align perfectly with the hare's, as the lynx's does. (This was recently featured in the New Yorker.)

Cicadas, however, operate a little differently. Their 13- and 17-year cycles are prime numbers. Since their cycle lengths are divisible only by themselves (13 or 17) and one, it becomes difficult for predators to align their population cycles with the cicadas'. A predator with, say, a three-year cycle should align with a 17-year brood only every 51 years (note that many predators have 2-5 year life cycles). Additionally, broods of different cycle lengths can rest assured that they will rarely overlap and compete for resources; a 13-year and a 17-year brood should co-emerge only every 221 years, which I would argue is a rare brood overlap. Thus, there are two benefits to prime-numbered population cycles. First, predator population cycles are unlikely to align with large primes. Second, different broods will rarely overlap.
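
The arithmetic behind both claims is just the least common multiple. A quick sketch (Python 3.9+ for math.lcm):

```python
from math import lcm

# Years between alignments of a predator's cycle and a 17-year brood:
for predator_years in range(2, 6):               # typical 2-5 year cycles
    print(predator_years, lcm(predator_years, 17))   # 34, 51, 68, 85

# Years between co-emergences of a 13-year and a 17-year brood:
print(lcm(13, 17))                               # 221
```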

In preparation for Brood II, I provide a recording from Brood X in 2004 of cicada calls. It is both enchanting and annoying. Please enjoy.

When looking at pictures of animals in the wild, one may ask the question: Why are most of these animals brown or black, and why do we see very few colorful creatures? This question can be approached from multiple angles. I will ignore selective pressures that provide an advantage to certain colors. Instead, I will focus on the mechanism behind these colors.

Let's begin with mammals. When we talk about most creatures being brown or black, we are usually considering mammals, since colors are vibrant across other classes (e.g. reptilia, amphibia, aves). Colors in skin and fur arise from two pigments: eumelanin, which yields brown-black, and pheomelanin, which yields reddish-yellow. I challenge you to use this color palette to make green. There are evolutionary advantages to being brown. Early mammals were presumably small, rat-like creatures living on land, and it was best to blend into the environment and invest energy into escape mechanisms. Amphibians, on the other hand, were not limited to the brown dirt of land and were able to develop a green color. This does not answer how they could do this, so let me delve into that.

This is where it gets interesting.

It turns out that birds, amphibians, and reptiles are unable to generate green pigments (and many cannot generate blue). Most of the tetrapod (four-legged) world is like this. How, then, could they possibly look so vibrant? The colors arise not from pigments alone, but from a structural mechanism in which microscopic structures refract and scatter light. The underlying pigments are still just black and yellow-red. Thus, when a chameleon changes color, it is not depositing pigments. Instead, it is changing the arrangement of its light-refracting cells in order to alter the structural effect.


In bird feathers, the mechanism is a bit more complicated, but it is the same idea. If you have time, try this experiment: take some flour and mix it with water until the flour is suspended evenly. Now, move the glass to a well-lit area. What color do you see? You will notice that the suspension looks bluish-white as opposed to just white. This is due to light scattering: shorter-wavelength blue light is more easily scattered than longer-wavelength red light. What you witness is this scattered blue light, giving the suspension a blue tint. Another example is found in compact discs. If you stare at the bottom of one, you will see not just a silver-coated disc, but an array of colors reflected off the CD's surface. This is a related structural effect, here arising from the closely spaced tracks on the disc. The spacing of ridges and the orientation of pigment granules in bird feathers generate the same kind of effect. Though the pigments are still black and yellow-red, this scattering provides vibrant colors. If you want to learn more, I recommend reading this article on peacock feather coloration.

The aforementioned scattering is known as the Tyndall effect. Simply put, when a suspension of particles is exposed to light, some of that light is scattered and produces interesting colors. The flour-and-water example was just one instance. You will also see a blue tint in the exhaust smoke of some automobiles, again due to a suspension similar to our flour and water. Simply put, longer wavelengths (e.g. reds) are transmitted, while shorter wavelengths (e.g. blues) are scattered back. I should note that this is a different mechanism from the light scattering seen in the sky at sunset. Whereas scattering in our atmosphere is usually by very small particles (Rayleigh scattering), scattering in colloidal suspensions is by relatively larger particles (Mie scattering). This is important to note because the effects of scattering from larger particles are more vibrant and less subtle.

Mammals are not completely brown and black. An exception to the boring colors of mammals arises in the irises of our eyes. The iris is still pigmented by melanin, as before. However, the density of melanin determines how opaque or translucent the upper layer of the iris is. When it is more translucent, light can pass through this upper layer and be backscattered by the layers below. As before, shorter wavelengths of light are more likely to be scattered. This, in turn, results in blue irises.

Like the example above, many questions in this world are nuanced, and these nuances make the questions (and answers) more interesting.

A Troubling Divorce

March 23, 2013 — Leave a comment

The unhappy marriage between the United States government and science (research, education, outreach) ended this month. We’ve known for years now that the relationship was doomed to fail, with shouting matches in Washington and fingers pointed in all directions. I would more likely describe an end to the relationship between elected officials and human reason, but that would be harsh, and I still have hope for that one. Sadly, this generation of congresspeople signed the paperwork for a divorce with science.

America's love affair with science dates back to its origins. Later, Samuel Slater's factory system fueled the Industrial Revolution. Thomas Edison battled Nikola Tesla in the War of the Currents. It was a happy marriage, yielding many offspring. The Hygienic Laboratory of 1887 grew into the National Institutes of Health approximately 50 years later. We, the people, invented, explored, and looked to the stars. Spurred by a heavy dose of Sputnik envy, Eisenhower formed the National Aeronautics and Space Administration (NASA) in July 1958. We, the people, then used our inventions to explore the stars.

Since then, generations of both adults and children have benefited from the biomedical studies at the NIH, the basic science and education at the NSF, and the inspiration and outreach from NASA. From Goddard's first flight through Curiosity's landing on Mars, citizens of the United States have benefited not only directly from spin-offs, but also through NASA's dedication to increasing participation in the STEM (science, technology, engineering, mathematics) fields. Informed readers will know that although the STEM crisis may be exaggerated, these fields create jobs, counting the benefits to manufacturing and related careers. Such job multipliers should be seen as beacons of hope in troubling times.

Focusing on the NIH, it should be obvious to readers that biomedical science begets health benefits. From Crawford Long's (unpublished and thus uncredited) first use of ether in the 19th century through great projects like the Human Genome Project, Americans have succeeded in this realm. However, as many know, sustaining a career in academia is challenging. Two issues compound the problem. First, principal investigators must "publish or perish." Similar to a consulting firm where you must be promoted or be fired ("up or out"), researchers must continue to publish their results on a regular basis, preferably in high-impact journals, or risk being denied tenure. The second problem lies in funding. Scientists must apply for grants and, in the case of biomedical researchers, these typically come from the NIH. With funding cuts in recent years, research grants (R01) have been reduced both in amount per award and in number awarded. Additionally, training grants (F's) and early career awards (K's) have been reduced. Money begets money, and reductions in these training and early career grants make it even more difficult to compete with veterans when applying for research grants. Thus, entry into the career pathway becomes ever more difficult, approaching an era where academia may be an "alternative career" for PhD graduates.

The United States loved science. The government bragged about it. We shared our results with the world. Earthrise, one of my favorite images from NASA, showed a world without borders. The astronauts of Apollo 8 returned to a new world after their mission in 1968. This image, the one of the Earth without borders, influenced how we think about this planet. The environmental movement began. As Robert Poole put it, "it is possible to see that Earthrise marked the tipping point, the moment when the sense of the space age flipped from what it meant for space to what it means for Earth." It is no coincidence that the Environmental Protection Agency was established two years later. A movement that began with human curiosity raged onward.

Recently, however, the marriage between our government and its science and education programs began to sour. Funding was cut across the board through multiple bills. Under our current administration, NASA’s budget was reduced to less than 0.5% of the federal budget, before the cuts I am about to describe. The NIH has been challenged too, providing fewer and fewer grants to researchers, forcing many away from the bench and into new careers. Funding for science education and outreach subsequently fell, too. Luckily, other foundations, such as the Howard Hughes Medical Institute, picked up part of the bill.

I ran into this problem when applying for a grant through the National Institutes of Health and discussing the process with my colleagues. I should note as a disclaimer that I was lucky enough to have received an award, but that luck is independent of the reality we as scientists must face. The process is simple. Each NIH grant application is scored, and a committee determines which grants are funded based upon that score and the funds available. With less money coming in, fewer grants are awarded. Thus, with cuts over the past decade, grant success rates plummeted from ~30% to 18% in 2011. When Congress decided to cut its ties with reality in March and allow the sequester, it was estimated that this number would drop even further. (It should be noted that a drop in success rate can also reflect an increase in the number of applications, and a large part of that decrease over 10 years was due to the 8% rise in applications received.) This lack of funding creates barriers. Our government preaches that STEM fields are the future of this country, yet everything it has done in recent history counters this notion. As an applicant for a training grant, I found myself in a position where very few grants might be awarded, and some colleagues went unfunded due to recent funding cuts. This was troubling for all of us, and I am appalled at the contradiction between the rhetoric in Washington and the annual budget.

Back to NASA. As we know, President Obama was never a fan of the organization when writing his budget, yet he spoke highly of the agency when NASA succeeded. Cuts proposed in 2011 by both the White House and Congress, part of a $1.2 trillion reduction over 10 years, were already in place. These were enough to shut down many programs, reduce the number of people employed, and leave many of NASA's buildings in ruin. However, the sequester, an across-the-board cut, hit NASA very hard as well. As of yesterday, all of its science education and outreach programs were suspended. This was the moment that Congress divorced science.

All agencies are hit hard by these issues, and not just those in science, education, and outreach. Yet, speaking firsthand, I can say that these cuts are directly affecting those of us on the front line, trying to enter the field and pursue STEM-related careers. Barriers are rising as the result of a dilapidated system. I have seen numerous F, K, and R applications from friends and colleagues fail simply due to budget constraints (meaning that their scores would have been funded in a previous year, but the payline was lowered to fund fewer applications), and I have seen children around New York who are captivated by science education but sit within a system without the funds to fuel them. I can comfortably claim that we are all the forgotten children of a failed marriage.

Whether due to the issues raised in this post or your own concerns related to the sequester, remember that this is a bipartisan issue. There are no winners in this game, except for those congresspeople whose paychecks went unaffected by the sequester. I urge you to contact your elected officials. Perhaps we can rekindle this relationship.

Diving Through Dimensions

February 10, 2013 — 1 Comment

I recently purchased a hand-blown Klein bottle. For those not familiar with the concept, a Klein bottle is a non-orientable surface that can be constructed by sewing two Möbius strips together. These objects are interesting in that the glass model appears to have two surfaces; closer inspection, however, reveals that the two "sides" are one and the same surface. The model is a projection of a four-dimensional surface into our three dimensions, which is why it appears to pass through itself. If you are interested in these, I recommend a beautiful little short story by A.J. Deutsch, "A Subway Named Mobius."

Another projection that may interest you is known as a hypercube, or tesseract. This is not the same tesseract from Madeleine L'Engle's A Wrinkle in Time, but parallels could be drawn. A hypercube is a four-dimensional object, here projected onto three dimensions. Within the hypercube, one should see eight cubical cells. Look closely at the projection in the link above. There is a large cube, a small cube, and six distorted "cubes" connecting them. This distortion is a byproduct of the projection onto a lower number of dimensions. To better illustrate it, consider a three-dimensional cube projected as a wireframe onto two dimensions. Instead of searching for eight cubical cells, we look for six "square" cells. There are two true squares, one in the front and one in the back, connected by four additional distorted "squares." This projection of a three-dimensional cube onto a two-dimensional surface follows the same concept as the four-dimensional hypercube projected into three-dimensional space.
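
If you would like to generate such a projection yourself, here is a minimal sketch. It builds the 16 vertices and 32 edges of a tesseract and applies two successive perspective projections (4D to 3D, then 3D to 2D); the "eye" distance is an arbitrary choice:

```python
import itertools

# The 16 vertices of a tesseract: every combination of (+/-1) in 4D.
vertices = list(itertools.product((-1, 1), repeat=4))

# Two vertices share an edge when they differ in exactly one coordinate.
edges = [(i, j) for i, j in itertools.combinations(range(16), 2)
         if sum(a != b for a, b in zip(vertices[i], vertices[j])) == 1]
print(len(edges))   # 32

def project(v, eye=3.0):
    """Perspective projection 4D -> 3D -> 2D. The 'inner cube' of the
    familiar drawing is simply the half of the tesseract farther from
    the eye, shrunk by the perspective divide."""
    x, y, z, w = v
    s = eye / (eye - w)          # drop the 4th coordinate
    x, y, z = s * x, s * y, s * z
    s = eye / (eye - z)          # drop the 3rd coordinate
    return s * x, s * y

points = [project(v) for v in vertices]   # ready to plot with the edge list
```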

However, we cannot visualize four spatial dimensions. This makes the concept of additional dimensions quite confusing. Should we believe that such dimensions exist? Another interesting story on this topic is that of a world known as Flatland. The story, written in the 19th century, describes a world of only two dimensions. Males are placed into social classes by the number of sides in their figure, with circles forming the highest order, the priests. Females are line segments and, as you can imagine, are quite dangerous if approached from the "front." The novella delves into the natural laws, the communities, the buildings, and the social norms of this world. The story then focuses on a Square, who is visited by a Sphere in his dreams. The Sphere describes the third dimension (Spaceland) to the Square, but the Square cannot understand it. Only by introducing the Square to Lineland and Pointland can the Sphere get him to begin to believe in a place called Spaceland. It is a wonderfully entertaining pamphlet, and I highly recommend reading it.

Let us assume, however, that in another iteration of Flatland, one that follows all the same natural laws of our three-dimensional Spaceland, the Square is not visited by the Sphere. For some reason, the Square is deluded into the heresy that another dimension exists. Without knowledge from some higher-order Sphere, how can he, the Square, demonstrate the existence of a third dimension? Is it even possible?

We need to make two assumptions. First, this version of Flatland follows all the rules of our world. Second, Flatland is a sheet within our world, meaning that there is space above and below Flatland, but the inhabitants of Flatland are unaware of “up” and “down.” Taking these into account, we can then answer this question quite simply. The Square can perform a fairly simple experiment. I must state, however, that this experiment will only provide evidence of a third dimension, and other models of the Flatland Universe could reach the same conclusion. That being said, bear with me.

In our world, at everyday scales (not very small, and not very large), the forces of gravity and electromagnetism between two objects propagate through three-dimensional space. This results in a reduction of the forces the objects exert upon one another as the distance R between them increases. The law they follow is an inverse-square law, where the force is proportional to 1/R^2. In a universe limited to two dimensions, however, assuming isotropy, there would be no additional spreading into a third dimension, and the force would follow a simple inverse law, proportional to 1/R. If the Square took two magnets of reasonable size, measured the force acting between them as the separation changed, and made a plot of force versus distance, the relationship would presumably follow an inverse-square law rather than an inverse law, and the Square would have evidence that a third dimension exists! Again, this would be met with scrutiny from the Circles.
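
Here is a minimal sketch of the Square's analysis. The "measurements" are synthetic, generated with an inverse-square law to stand in for data collected in a Flatland embedded in a three-dimensional world; on a log-log plot, a power law F proportional to R^n is a straight line of slope n:

```python
import numpy as np

# Synthetic force-versus-distance data (playing the role of the
# Square's magnet measurements in a 3D-embedded Flatland).
R = np.linspace(1.0, 10.0, 20)
F = 5.0 / R**2                       # arbitrary strength constant

# Fit a line to log(F) vs log(R); its slope is the exponent n.
slope, _ = np.polyfit(np.log(R), np.log(F), 1)
print(slope)   # ~ -2.0: inverse-square, evidence of a third dimension.
               # A purely two-dimensional universe would give ~ -1.
```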

Though we cannot always visualize additional dimensions or scales, we can perform experiments to not only demonstrate their existence, but to observe phenomena at an otherwise unobservable scale. This is an aspect of experimentation that I find fascinating. I hope my introduction to dimensional projections, if nothing else, will bring a new perspective on observations around you.

 

Keeping it Random

January 7, 2013 — Leave a comment

When iTunes "shuffle" was introduced, Apple received many complaints. Some songs seemed to come up again and again, and customers felt that this "random" shuffle algorithm was not truly random. Apple changed the algorithm, and it works a bit better now. However, the change actually made the process non-random; the previous iteration of the software was random. Why, then, did the complaints arise?

If you take a carton of toothpicks and throw them across the room in a truly random manner, you will notice that the toothpicks will start to form clusters. This “clumping” occurs due to the nature of a Poisson point process, or a Cox family of point processes. Simply put, the process tends to create clusters around certain locations or values when it is truly random. The same also occurred in World War II. The Germans were randomly bombing Britain. However, the randomness led to the same type of clustering one would see in iTunes. Certain targets were bombed more often than others. This led the British to think that the Germans had some strategy to their bombing when, in fact, the process was purely random. We tend to think that a random process would be evenly distributed, and when the reality defies our logic, we no longer see the randomness in the random process. Apple decided to change their algorithm to a less random but more evenly distributed one, and customers remained happy.
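
You can watch this clumping happen in a few lines. A sketch that throws 100 "toothpicks" uniformly at random into 100 bins (the seed is arbitrary, just for reproducibility):

```python
import random
random.seed(1)   # arbitrary, for a repeatable demo

bins = [0] * 100
for _ in range(100):
    bins[random.randrange(100)] += 1   # each toothpick lands uniformly

print(max(bins))       # typically 4 or more land in a single bin
print(bins.count(0))   # while roughly a third of the bins stay empty
# Uniform randomness produces clusters and gaps, not an even spread.
```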

I can discuss different types of randomness fairly extensively, but I would rather touch upon two different types of random number generation: pseudo-random number generators and true random number generators. Pseudo-random number generators use mathematical formulae or tables to produce numbers that appear random. This process is efficient, and it is deterministic rather than stochastic. The problem is that these generators are periodic: they eventually cycle through the same set of pseudo-random numbers. While they may be excellent for pulling random numbers on small scales, they fall prey to significant problems in large-scale simulations. The lack of true randomness creates artifacts in data and confounds proper analysis.
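
A linear congruential generator, one of the oldest pseudo-random schemes, makes the periodicity easy to see. A sketch with a deliberately tiny modulus so that the cycle fits on one line:

```python
def lcg(seed, a, c, m):
    """Linear congruential generator: x -> (a*x + c) mod m.
    The output can never have a period longer than m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=42, a=5, c=3, m=16)     # toy parameters
print([next(gen) for _ in range(20)])  # the 16-value cycle repeats
```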

True random number generators, on the other hand, use real data. Typically, data from physical observations, such as weather patterns or radioactive decay, are extracted and used to generate random values. The lavarand generator, for example, used images of lava lamps to generate random numbers. These true random number generators are nondeterministic and do not suffer from the periodicity of pseudo-random number generators.

This distinction is important in the simulation of data. How can one best generate random numbers? If the system clock is used to seed the generator, but you are iterating through some code thousands of times, a periodicity dependent upon the computation time may result and generate artifacts. The use of atmospheric noise could overcome this, though pulling the data takes time and could slow down computation.
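
The clock pitfall is easy to reproduce. A sketch of the anti-pattern, re-seeding from a coarse clock inside a loop (the loop runs in microseconds, so every iteration sees the same second and thus the same seed):

```python
import random
import time

draws = []
for _ in range(5):
    random.seed(int(time.time()))   # same second -> same seed
    draws.append(random.random())

print(draws)   # five identical "random" numbers
# Seed once at startup instead (or draw entropy from the OS).
```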

The world around us is filled with processes both random and nonrandom. It is a challenge to generate artificial random processes, and it is surprising that truly random processes often appear nonrandom to human observers.