Archives For History

Here’s a conundrum for you: Using only technology available hundreds of years ago, how could you determine the speed at which light travels? We now know that light travels at 299,792,458 m/s, or, to put it simply, “very, very fast.” In fact, we are so sure of this value that we use it to define the meter, where one meter is equal to the distance that light travels in 1/299,792,458 of a second. Today, we have access to technology that allows us to measure this value directly. Time-of-flight devices pulse bright flashes of light that are reflected off a mirror, and the time delay (measured down to nanoseconds), combined with the distance between the source/detector and the mirror, provides an accurate measurement of the speed of light. Additionally, one can take advantage of cavity resonators or interferometers to obtain the same value. However, these devices did not always exist, yet estimates for the speed of light predate their existence. How was this accomplished?
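To make the time-of-flight idea concrete, here is a minimal sketch of the arithmetic: the light’s round trip covers twice the distance to the mirror, so dividing that by the measured delay gives the speed. The distance and timing values below are invented for illustration, not taken from any real instrument.

```python
# Time-of-flight sketch: speed = round-trip distance / measured delay.
# The numbers below are made up for illustration.
distance_to_mirror_m = 1500.0   # one-way distance from the source/detector to the mirror
round_trip_delay_s = 10.0e-6    # measured delay between emitting and detecting the pulse

speed_of_light_m_per_s = 2 * distance_to_mirror_m / round_trip_delay_s
print(f"{speed_of_light_m_per_s:.3e} m/s")  # ~3.0e8 m/s with these numbers
```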

In one of the first recorded accounts of the debate over light propagation, Aristotle disagreed, incorrectly, with Empedocles, who claimed that light takes a finite amount of time to reach Earth. Descartes, too, claimed that light traveled instantaneously. Galileo, in Two New Sciences, observed that light appears to travel instantaneously, but conceded that the only real observation is that light must travel much faster than sound:

Everyday experience shows that the propagation of light is instantaneous; for when we see a piece of artillery fired, at great distance, the flash reaches our eyes without lapse of time; but the sound reaches the ear only after a noticeable interval.

To determine the speed of light, Galileo devised a time-of-flight experiment similar to the one described above: two individuals with lanterns would stand at a distance, uncover and re-cover them upon seeing a flash from the opposing partner, and record the times between flashes. By starting very close, to account for reaction times, and then moving very far apart, one could see whether there is a noticeable change in latency. However, this experiment is challenging, to say the least. Is there a simpler method?

Enter Danish astronomer Ole Roemer. Known in his time for his accuracy in measurement, his arguments over the Gregorian calendar, and for firing the entire Copenhagen police force, he is best remembered today for his measurement of the speed of light in the 17th century.

While at the Paris Observatory, Roemer carefully studied the orbit of Io, one of Jupiter’s moons. Io orbits Jupiter every 42 and a half hours, at a steady rate; this was discovered by Galileo in 1610 and well characterized over the following years. During each orbit, Io is eclipsed by Jupiter: it disappears into Jupiter’s shadow and reemerges some time later. However, Roemer noticed that, unlike the steady rate of Io’s orbit, the times of disappearance and reemergence did change. In fact, Roemer predicted that an eclipse in November 1676 would run 10 minutes behind schedule. When he was proved right, his colleagues at the Royal Observatory were flabbergasted. Why was this the case?

The figure above, from Roemer’s notes, highlights Earth’s orbit (HGFEKL) around the sun (A). The eclipses of Io (DC) are shown, defined by Jupiter’s (B) shadow. For a period of time, at point H, one cannot observe Io’s eclipses, since Jupiter blocks the path of light. However, when Earth is at positions L or K, one can observe the disappearances of Io, while at positions G and F, one can observe its reemergences. Even if you didn’t follow any of that, note simply that while Io’s orbit does not change, the Earth’s position relative to Jupiter and Io does change as the Earth orbits the Sun. An observer watching Io’s eclipse at point L or G is closer to Jupiter than one watching an eclipse when the Earth is at point K or F. If light does not travel instantaneously, observations at points K and F will lag, because light takes a bit longer to reach Earth from Io.

In order to calculate the speed of light from this observation, Roemer needed information from his colleagues on the distance from the Earth to the Sun, and there were other complications besides. Nonetheless, using the Earth–Sun distance as measured at the time (taking advantage of parallax), Roemer arrived at a speed of light of approximately 220,000 km/s. While more than 25% lower than the actual value, it remains astounding that one could estimate this speed using nothing but a telescope, a moon, and a notebook.
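As a rough reconstruction of the arithmetic: a figure often quoted in later retellings is that light needs about 22 minutes to cross the full diameter of Earth’s orbit, and dividing that diameter by the lag gives a speed in the right neighborhood. The numbers below are assumptions for illustration, not values taken from Roemer’s own notes.

```python
# Roemer-style estimate: (diameter of Earth's orbit) / (accumulated lag).
# The 22-minute lag and the modern Earth-Sun distance are assumed for illustration.
AU_KM = 1.496e8                 # Earth-Sun distance in km (modern value)
orbit_diameter_km = 2 * AU_KM
lag_s = 22 * 60                 # commonly quoted lag across the full orbital diameter

speed_km_per_s = orbit_diameter_km / lag_s
print(f"{speed_km_per_s:,.0f} km/s")  # ~227,000 km/s, close to the 220,000 km/s figure above
```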

Giovanni Cassini, a contemporary of Roemer, was not convinced at first. However, drawing on Roemer’s observations, Isaac Newton noted the following in his Principia:

“For it is now certain from the phenomena of Jupiter’s satellites, confirmed by the observations of different astronomers, that light is propagated in succession and requires about seven or eight minutes to travel from the sun to the earth.” 

In other words, philosophers now began to accept that light travels in a finite amount of time.

Over the course of many years, others continued to estimate the speed of light using creative methods. James Bradley, in 1728, noticed that the apparent positions of stars shift slightly as the Earth moves along its orbit (stellar aberration, often explained by analogy with rain that appears to fall at an angle when you walk through it), and used these observations to estimate the speed of light with great accuracy (Bradley: 185,000 miles/second; actual speed of light: 186,282 miles/second). Around 1850 in France, Fizeau and Foucault designed time-of-flight apparatus like the one described in the opening paragraph. In place of modern electronics, the apparatus used a rapidly rotating toothed wheel to chop the light into pulses. With a wheel of one hundred teeth spinning at one hundred rotations per second, the speed of light could be calculated to roughly the accuracy of Bradley’s estimate. Albert Michelson, in the 1870s, repeated the measurements on a larger scale, again with a series of mirrors.
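For a sense of the toothed-wheel arithmetic, here is a minimal sketch. The parameter values are the ones usually quoted for Fizeau’s setup (roughly 720 teeth, an 8.6 km path to the mirror, and a first blocking of the return beam near 12.6 rotations per second); they are assumptions for illustration, not figures from this post.

```python
# Fizeau-style toothed-wheel estimate. The returning beam is first blocked
# when, during its round trip, the wheel advances by one tooth-or-gap width,
# i.e. 1 / (2 * teeth) of a revolution.
teeth = 720                 # number of teeth on the wheel (assumed)
distance_m = 8633.0         # one-way distance to the distant mirror (assumed)
blocking_rate_hz = 12.6     # rotation rate at which the return beam first disappears (assumed)

round_trip_time_s = 1.0 / (2 * teeth * blocking_rate_hz)
speed_m_per_s = 2 * distance_m / round_trip_time_s
print(f"{speed_m_per_s:.3e} m/s")  # ~3.1e8 m/s, a few percent above the true value
```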

What can be gleaned from this story is a powerful lesson: at times, the simplest observations can lead to the most compelling findings. All it required in this case was careful note-taking and a bit of intellect. The value of simple observation cannot be overstated.


Would you accept US$1,000,000 to solve a maths problem? Apparently, not everyone would say yes. A prize of this amount was recently announced for a proof or disproof of Beal’s Conjecture. Originally backed by a US$5,000 prize in 1997, the conjecture remains unsolved, and the Beal Prize has now been increased to one million dollars, according to an announcement from the American Mathematical Society.

But what is Beal’s Conjecture? Let’s start with something better known, Fermat’s Last Theorem. Pythagoras originally proposed a formula for the right-angled triangle, a^2 + b^2 = c^2. This equation has an infinite number of natural-number, or positive integer, solutions. Fermat, however, claimed that the analogous equation with any integer exponent greater than 2 (rather than the 2 of Pythagoras’ theorem) has no positive integer solutions in a, b, and c. Fermat was kind enough to prove his claim for the exponent 4, but he left the rest unsolved. In 1995, Sir Andrew Wiles published his proof of Fermat’s Last Theorem, over 100 pages of work developed across seven years (an earlier version, announced in 1993, contained a flaw that took more than a year to repair). His story, and the story of the theorem, is a fantastic one, and I recommend reading more on it.

In 1993, two years prior to the publication of Wiles’ proof and five years into his self-imposed seclusion, Andrew Beal proposed another conjecture, an extension of the aforementioned theorem. He claimed that if a^x + b^y = c^z, where a, b, c, x, y, and z are positive integers and x, y, and z are all greater than 2, then a, b, and c must have a common factor. As mentioned above, he promised US$5,000 to anyone who could provide a proof or counterexample of his conjecture. Put less abstractly, if we take a = 7, b = 7, x = 6, and y = 7, then 7^6 + 7^7 = 941,192 = 98^3, so c = 98 and z = 3. Note that x, y, and z are all integers greater than 2, so Beal’s conjecture says that a, b, and c must have a common factor. In this case they do: 98 is divisible by 7 (98 = 7 × 14). There are many (possibly infinitely many) examples like this, but we still need a proof or a counterexample of the conjecture. To date, it remains unsolved, and a solution will be rewarded with one million dollars.
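A quick brute-force search (an illustration only, nothing resembling a proof) makes the pattern easy to see: every small solution with exponents greater than 2 turns out to share a common factor. The function name and bounds below are arbitrary choices of mine.

```python
from math import gcd

def beal_examples(max_base=20, max_exp=7):
    """Yield (a, x, b, y, c, z, g) with a**x + b**y == c**z, all exponents > 2,
    and g the greatest common factor of a, b, and c. Bounds are arbitrary."""
    for a in range(1, max_base + 1):
        for b in range(a, max_base + 1):
            for x in range(3, max_exp + 1):
                for y in range(3, max_exp + 1):
                    total = a**x + b**y
                    for z in range(3, max_exp + 1):
                        c = round(total ** (1.0 / z))
                        for cand in (c - 1, c, c + 1):  # guard against float rounding
                            if cand > 0 and cand**z == total:
                                yield a, x, b, y, cand, z, gcd(gcd(a, b), cand)

for a, x, b, y, c, z, g in beal_examples():
    print(f"{a}^{x} + {b}^{y} = {c}^{z}, common factor {g}")
# The output includes 7^6 + 7^7 = 98^3, and every example found shares a common factor > 1.
```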

In addition to the Beal Prize, the Clay Mathematics Institute offers US$1,000,000 for a solution to any of seven listed problems. As of this post, only one has been solved, and no money has yet been accepted. These Millennium Prize Problems continue to baffle mathematicians. It is fascinating to consider that there are so many open problems in mathematics, including those integral to number theory, such as Hilbert’s eighth problem.

Not everyone reading this post is a mathematician. Many of us, myself included, think visually. We like pictures, and we like problems we can solve, or at least ones that already have solutions. So I’ll introduce one! I’m now going to turn this post to a classic problem that began to lay the foundations for graph theory and topology. (For a related post on topology, I recommend my post Diving through Dimensions.) Some of you may be aware of this problem, and I hope I do it justice. Let us begin by traveling to the Prussian city of Königsberg. The city (now Kaliningrad) was set on opposite sides of the Pregel River, and on this river sat two islands. Thus, we have four land masses. Connecting these regions were seven bridges, laid out below in red:

[Figure: The seven bridges of Königsberg]

The people of Königsberg posed a question: Is it possible to traverse the city, crossing every bridge once and only once? Let us assume that one cannot dig under the ground, fly through the sky, cross the water by boat, or use teleportation technology; one may only move between land masses by crossing bridges. One may begin and end at any point. The only requirement is that each bridge must be crossed, and that no bridge may be crossed more than once.

Leonhard Euler proposed a solution to this problem in 1735. He began by reducing each land mass to a point. The only relevant information lies in the bridges and where they connect; the areas of the land masses are irrelevant. This combination of nodes (points) and edges (lines) is now commonplace in graph theory. Euler noticed that when one reaches a node by one bridge, one must leave that node by another bridge, so each full pass through a node uses an even number of bridges. Thus, every node that is merely passed through (that is, one that is neither the beginning nor the end of the walk) must have an even number of edges. There may be at most two nodes with an odd number of edges, and those nodes must serve as the beginning and/or the end of the journey.

Now, take a look at our bridges as Euler may have drawn them:

[Figure: Euler’s graph of the Königsberg bridges, with land masses as nodes and bridges as edges]

In this case, we see that the top and bottom nodes on the left each have three bridges, the rightmost node has three bridges, and the middle node on the left has five bridges. In other words, all four nodes have an odd number of edges. This violates our requirement that no more than two nodes may have an odd number of edges. As a result, Euler demonstrated that there is no way to traverse each of the Prussian bridges once and only once. This requirement can be applied to any drawing similar to the one above. I recommend trying it out and testing Euler’s proposal. It is quite rewarding.
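This parity check is easy to carry out by hand, but as a small illustration in code, here is the same argument applied to the Königsberg graph. The node labels A–D are my own (A is the central island), not Euler’s notation.

```python
from collections import Counter

# The seven bridges of Königsberg, with each land mass given a made-up label:
# A = the central island, B and C = the two river banks, D = the eastern land mass.
bridges = [
    ("A", "B"), ("A", "B"),   # two bridges between the island and one bank
    ("A", "C"), ("A", "C"),   # two bridges between the island and the other bank
    ("A", "D"),               # island to the eastern land mass
    ("B", "D"), ("C", "D"),   # each bank to the eastern land mass
]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd_nodes = [node for node, d in degree.items() if d % 2 == 1]
print(dict(degree))                          # {'A': 5, 'B': 3, 'C': 3, 'D': 3}
print(f"{len(odd_nodes)} odd-degree nodes")  # 4 > 2, so no walk crosses every bridge exactly once
```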

If you are really interested, take a gander at the current layout on Google Maps:

[Figure: Google Maps view of the bridges’ current layout in Kaliningrad]

It seems that the people of Kaliningrad demolished two of our bridges! The Königsberg bridge problem now has a solution. A part of me likes to think that the bridges were demolished for no other reason than to provide a solution!

As mentioned above, Euler’s solution laid the framework for what we call graph theory. Graph theory, or the study of graphs like the one shown above, has myriad applications. It is used in computer science to model networking. Linguists take advantage of graphs to examine semantic structure. Chemists represent atoms and bonds using graphs. Social network analysis is described in the same terminology, using the same theory, where each person or group may be a node. In biology, we use it to model migration or the spread of disease. More relevant to my work, weighted graphs are used in network analysis, and computational neuroscientists may recognize graphs like the one above when modeling neural networks.

What we thus see is something fantastic. Abstract open problems like the one Euler solved, and those posed by Beal and the Clay Mathematics Institute, provide foundational tools that can (and often do) advance our knowledge across numerous fields. Euler’s work propelled us into graph theory. A solution to the Navier-Stokes open problem would advance our understanding of fluid flow. And even if the abstract never becomes practical, the journey is delightful.

Flexner and Curricular Reform

November 19, 2012

While working with our medical school on curricular reform, I often hear mention of one piece of literature: the Flexner Report. Most, if not all, of those on the committees know what it is and what it entails. However, those with whom I discuss the reform outside of the committees are often left dumbfounded. Many understand the need to reform medical curricula, but far fewer know the history behind their structure in the United States.

Prior to the 20th century, American medical education was dominated by three systems: an apprenticeship system, a proprietary school system, and a university system. The lack of standardization inevitably resulted in a wide range of expertise, and the best students left the United States to study in Paris or Vienna. In response, the American Medical Association established the Council on Medical Education (CME) in 1904. The council’s goal was to standardize medical training and to develop an ‘ideal’ curriculum. It asked the Carnegie Foundation for the Advancement of Teaching to survey medical schools across the United States.

Abraham Flexner, a secondary school teacher and principal with no medical background, led the project. In a year and a half, Flexner visited over 150 U.S. medical schools, examining their entrance requirements, the quality of their faculty, the size of their endowments and tuition, the quality of their laboratories, and their teaching hospitals (if present). He released his report in 1910. He found that most medical schools did not adhere to a strict scientific curriculum, and he concluded that many were acting more as businesses out to make money than as institutions educating students:

“Such exploitation of medical education […] is strangely inconsistent with the social aspects of medical practice. The overwhelming importance of preventive medicine, sanitation, and public health indicates that in modern life the medical profession is an organ differentiated by society for its highest purposes, not a business to be exploited.”

In response, the Federation of State Medical Boards was established in 1912. The group, together with the CME, enforced a number of accreditation standards that are still in use today. As the ‘ideal’ curriculum, they implemented two years of basic sciences followed by two years of clinical rotations. The quality of faculty and teaching hospitals had to meet certain standards, and admissions requirements were standardized. As a result, many schools shut down: prior to the formation of the CME there were 166 medical schools in the United States; by 1930 there were 76. One negative consequence was an immediate reduction in new physicians to treat disadvantaged communities, and those with less privilege also found it more difficult to obtain a medical education, creating yet another barrier for the socioeconomically disadvantaged in America. Nonetheless, the report and its follow-up actions were key in reshaping medical curricula in the United States to embrace scientific advancement.

Today, medical schools across the country embrace the doctrines established 100 years ago. Most schools continue to follow the curriculum imposed back then, and scientific rigor remains a key component. However, medical educators are currently realigning curricula to embrace modern components of medicine and to focus on the service aspect of medicine that is central to the doctor-patient relationship.

In 2010, the Commission on Education of Health Professionals for the 21st Century was launched, one century after the release of the Flexner Report. By the turn of the 21st century, gaps within and between countries had become glaring. Health systems struggle to keep up with new infectious agents, epidemiological transitions, and the complexities and costs of modern health care. Medical education has once again become fragmented. There is a mismatch between professional competencies and the needs of populations. We focus on hospitals over primary care. Leadership in medicine is lacking. The interdisciplinary nature of medicine requires that we no longer act as isolated professions. As a result, a redesign of the curriculum is required.

The Commission surveyed the 2,420 medical schools and 467 public health schools worldwide. The United States, India, Brazil, and China, each having over 150 medical schools, were the most heavily sampled. In contrast, 36 countries had no medical schools at all. Across the globe, it costs approximately US$116,000 to train each medical graduate and US$46,000 to train each nurse, though the costs are greatest in North America. There is little to no standardization between countries, similar to the disjointed situation within the United States in the early 20th century. The globalization of medicine thus requires reform.

Reform of medical education did not stop with Flexner. After the science-based curriculum introduced by the report, the mid-20th century saw a focus on problem-based learning. However, a new reform is now required that seeks a global perspective. A number of core professional skills were recommended by the Commission, and these must be implemented in medical curricula across the globe.

Within the United States, medical educators seek to reform curricula to be more in line with the global perspective of the modern era, focusing more on global health initiatives and service learning. Additionally, health care reform in America will bring with it new challenges, and medical school curricula must keep up. How this will be accomplished is still under heavy discussion.

When considering any reform, it is helpful to remind oneself of its historical context. In this case, the disjointed structure within the United States at the time of Flexner parallels the disjointed global structure seen today. Though the changes will be of a very different nature, the motivations remain the same.

The recent results from the Lance Armstrong case compel me to write about doping in some fashion. Many articles have been written on the subject lately, so I will take a different angle. Doping is widespread, and it has been the focus of numerous debates. What else can be said about the issue? I will bring to light another approach: a historical account of doping. To do so, let’s flash back to the 1904 Olympics.

Thomas Hicks was a marathon runner born in the United Kingdom but representing the United States at the 1904 Summer Olympics in St. Louis. He crossed the finish line second, behind Fred Lorz, who was disqualified for covering much of the course in his manager’s car after succumbing to a bout of exhaustion. The judges therefore deemed Hicks the gold medalist in the event.

However, Mr. Hicks was not himself an innocent man. He had been “doping” with approximately 1 milligram of strychnine, raw eggs, and brandy. After his second dose of strychnine, Hicks collapsed just after crossing the finish line. Some sources state that another dose might have killed him, but the lethal dose (at least in dogs and cats) is cited as 0.75 mg/kg, which for a man of Hicks’s size would be well over an order of magnitude more than he received. Nonetheless, we can conclude that Mr. Hicks would have been disqualified by today’s standards.
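As a back-of-the-envelope check of that claim (Hicks’s body mass is not given anywhere in this post, so the 70 kg figure below is purely an assumption for illustration):

```python
# Rough dosage comparison. All figures here are assumptions for illustration:
# the 0.75 mg/kg value is the dog/cat figure cited above, the body mass is a
# guessed round number, and the received dose assumes two ~1 mg doses.
lethal_dose_mg_per_kg = 0.75
assumed_body_mass_kg = 70
received_dose_mg = 2 * 1.0

lethal_dose_mg = lethal_dose_mg_per_kg * assumed_body_mass_kg
print(f"Estimated lethal dose: {lethal_dose_mg:.0f} mg")                      # ~52 mg
print(f"Roughly {lethal_dose_mg / received_dose_mg:.0f}x the dose received")  # ~26x
```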

Considering that doping has been so widespread, why do we not allow it? Why do we allow performance-enabling medications, such as painkillers, but not performance-enhancing ones? With the advent of new technology, should we simply accept doping as a component of competition?

A recent article reviewed the effects of doping on at-risk populations; I recommend reading it to get a grasp of the different methods of doping. The authors argue that doping among international athletes has spread, and will continue to spread, to adolescent populations. In addition to their use in sport, adolescents use the same products for cosmetic and other non-athletic purposes. For example, steroids and human growth hormone (hGH) have been used by high school girls to reduce fat and increase muscle tone. Considering the wide range of general health and developmental dangers associated with anabolic steroids, hGH, erythropoietin (EPO), anti-estrogens, stimulants, and more, we must focus our efforts on education for these populations.

Still, the debate continues. A consequence that is all too often ignored is the one I described above: what is not dangerous for professional athletes may prove detrimental to adolescents. And even if these substances were considered safe for all populations, their use can lead to abuse without proper education in place.