For the first part of this series and to learn a bit more about 3D reconstruction of computed tomography (CT) slices, check out NEURODOME I: Introduction and CT Reconstruction. Our Kickstarter is now LIVE!

“As I stand out here in the wonders of the unknown at Hadley, I sort of realize there’s a fundamental truth to our nature. Man must explore. And this is exploration at its greatest.” – Cdr. David Scott, Apollo 15


It is official. Our Kickstarter for NEURODOME has launched. I have already described a bit about my role in the project, along with CT reconstruction. Future posts will delve into fMRI imaging and reconstruction, along with additional imaging modalities and perhaps a taste of medical imaging in space. You might be surprised at the number of challenges astronauts have faced aboard rockets, shuttles, and the ISS. All of this will be part of the NEURODOME series.

With our launch, we hope to raise enough funds to develop a planetarium show that illustrates our desire to explore. To do so, real data will be used in the fly-throughs. Our first video, The Journey Inward, provides a basic preview of what you might expect.

I will continue to post about this project but, for now, read about NEURODOME on our website and, if you can, help fuel our mission!

A Troubling Divorce

March 23, 2013

The unhappy marriage between the United States government and science (research, education, outreach) ended this month. We’ve known for years now that the relationship was doomed to fail, with shouting matches in Washington and fingers pointed in all directions. I would sooner describe it as an end to the relationship between elected officials and human reason, but that would be harsh, and I still hold out hope for that one. Sadly, this generation of congresspeople signed the paperwork for a divorce with science.

America’s love affair with science dates back to its origins. Later, Samuel Slater’s factory system fueled the Industrial Revolution. Thomas Edison battled Nikola Tesla in the War of the Currents. It was a happy marriage, yielding many offspring. The Hygienic Laboratory of 1887 grew into the National Institutes of Health approximately 50 years later. We, the people, invented, explored, and looked to the stars. Spurred by a heavy dose of Sputnik envy, Eisenhower formed the National Aeronautics and Space Administration (NASA) in July 1958. We, the people, then used our inventions to explore the stars.

Since then, generations of both adults and children have benefited from the biomedical studies at the NIH, the basic science and education at the NSF, and the inspiration and outreach from NASA. From Goddard’s first flight through Curiosity’s landing on Mars, citizens of the United States have benefited not only directly from spin-offs but also through NASA’s dedication to increasing participation in the STEM (science, technology, engineering, mathematics) fields. Informed readers will know that although the STEM crisis may be exaggerated, these fields create jobs, with multiplier effects in manufacturing and related careers. Such job multipliers should be seen as beacons of hope in troubling times.

Focusing on the NIH, it should be obvious to readers that biomedical science begets health benefits. From Crawford Long’s (unpublished and thus uncredited) first use of ether in the 19th century through great projects like the Human Genome Project, Americans have succeeded in this realm. However, as many know, holding a career in academia is challenging. Two issues compound the problem. First, principal investigators must “publish or perish.” Similar to a consulting firm where you must be promoted or be fired (“up or out”), researchers must continue to publish their results on a regular basis, preferably in high-impact journals, or risk losing tenure. The second problem lies in funding. Scientists must apply for grants and, in the case of biomedical researchers, these typically come from the NIH. With funding cuts occurring throughout the previous years, research grants (R01) have been reduced both in compensation per award and in number awarded. Additionally, training grants (F’s) and early career awards (K’s) have been reduced. Money begets money, and reductions in these training and early career grants make it even more difficult to compete with veterans when applying for research grants. Thus, entry into the career pathway becomes ever more difficult, approaching an era where academia may be an “alternative career” for PhD graduates.

The United States loved science. The government bragged about it. We shared our results with the world. Earthrise, one of my favorite images from NASA, showed a world without borders. The astronauts of Apollo 8 returned to a new world after their mission in 1968. This image, the one of the Earth without borders, influenced how we think about this planet. The environmental movement began. As Robert Poole put it, “it is possible to see that Earthrise marked the tipping point, the moment when the sense of the space age flipped from what it meant for space to what it means for Earth.” It is no coincidence that the Environmental Protection Agency was established two years later. A movement that began with human curiosity raged onward.

Recently, however, the marriage between our government and its science and education programs began to sour. Funding was cut across the board through multiple bills. Under our current administration, NASA’s budget was reduced to less than 0.5% of the federal budget, even before the cuts I am about to describe. The NIH has been challenged as well, providing fewer and fewer grants to researchers and forcing many away from the bench and into new careers. Funding for science education and outreach subsequently fell, too. Luckily, other foundations, such as the Howard Hughes Medical Institute, picked up part of the bill.

I ran into this problem when applying for a grant through the National Institutes of Health and discussing the process with my colleagues. I should note, as a disclaimer, that I was lucky enough to receive an award, but that luck is independent of the reality we as scientists must face. The process is simple. Each NIH grant application is scored, and a committee determines which grants are funded based upon that score and the funds available. With less money coming in, fewer grants are awarded. Thus, with cuts over the past decade, grant success rates plummeted from ~30% to 18% in 2011. When Congress decided to cut its ties with reality in March and allow the sequester, it was estimated that this number would drop even further. (It should be noted that a drop in success rate can also reflect an increase in the number of applications; a large part of the decrease over those ten years came from an 8% rise in applications received.) This lack of funding creates barriers. Our government preaches that STEM fields are the future of this country, yet everything it has done in recent history counters this notion. As an applicant for a training grant, I found myself in a position where very few awards could be made, and some colleagues went unfunded due to recent funding cuts. This was troubling for all of us, and I am appalled at the contradiction between Washington’s rhetoric and its annual budgets.

Back to NASA. As we know, President Obama was never a fan of the agency when writing his budgets, yet he spoke highly of it when NASA succeeded. Cuts proposed by both the White House and Congress in 2011, part of a $1.2 trillion reduction in federal spending over 10 years, are already in place. They were enough to shut down many programs, reduce the number of people employed, and leave many of the agency’s buildings in disrepair. The sequester, an across-the-board cut, then hit NASA even harder. As of yesterday, all science education and outreach programs were suspended. This was the moment that Congress divorced Science.

All agencies are hit hard by these issues, and not just those in science, education, and outreach. Yet, speaking firsthand, I can say that these cuts are directly affecting those of us on the front line, trying to enter the field and pursue STEM-related careers. Barriers are rising as the result of a dilapidated system. I have seen numerous F, K, and R applications from friends and colleagues fail simply due to budget constraints (meaning their scores would have been funded in a previous year, but the payline was lowered to fund fewer applications), and I have seen children around New York who are captivated by science education but sit within a system without the funds to fuel them. I can comfortably claim that we are all the forgotten children of a failed marriage.

Whether due to the issues raised in this post or to your own experiences with the sequester, remember that this is a bipartisan issue. There are no winners in this game, except for those congresspeople whose paychecks went unaffected after the sequester. I urge you to contact your elected official. Perhaps we can rekindle this relationship.

Those who work closely with me know that I am part of a project entitled Neurodome. The concept is simple. To better understand our motivations to explore the unknown (e.g. space), we must look within. To accomplish this, we are creating a planetarium show using real data: maps of the known universe, clinical imaging (fMRI, CT), and fluorescent imaging of brain slices, to name a few. From our website:

Humans are inherently curious. We have journeyed into space and have traveled to the bottom of our deepest oceans. Yet no one has ever explained why man or woman “must explore.” What is it that sparks our curiosity? Are we hard-wired for exploration? Somewhere in the brain’s compact architecture, we make the decision to go forth and explore.

The NEURODOME project is a planetarium show that tries to answer these questions. Combining planetarium production technology with high-resolution brain imaging techniques, we will create dome-format animations that examine what it is about the brain that drives us to journey into the unknown. Seamlessly interspersed with space scenes, the NEURODOME planetarium show will zoom through the brain in the context of cutting-edge astronomical research. This project will present our most current portraits of neurons, networks, and regions of the brain responsible for exploratory behavior.

To embark upon this journey, we are launching a Kickstarter campaign next week, which you will be able to find here. Two trailers and a pitch video showcase our techniques and our vision. For now, you can see our “theatrical” trailer, which combines some real data with CGI, below. Note that the other trailer I plan to embed in a later post will include nothing but real data.

I am both a software developer and a curator of clinical data in this project. This involves acquisition of high-resolution fMRI and CT data, followed by rendering of these slices into three-dimensional objects that can be used for our dome-format presentation. How do we do this? I will begin by explaining how I reconstructed a human head from sagittal sections of CT data. In a later post, I will describe how we can take fMRI data of the brain and reconstruct three-dimensional models by a process known as segmentation.

How do we take a stack of images like this:



and convert it into three-dimensional objects like these:

These renders allow us to transition, in a large-scale animation, from imagery outside the brain to fMRI segmentation data and finally to high-resolution brain imaging. The objects are beneficial in that they can be imported into most animation suites. To render stacks of images, I created a simple script in MATLAB. A stack of 131 sagittal sections, each with 512×512 resolution, was first imported. After importing the data, the script defines a rectangular grid in 3D space. The pixel data from each CT slice is interpolated and mapped to the 3D mesh. For example, we can take a 512×512 two-dimensional slice and interpolate it so that the new resolution is 2048×2048. Note that this does not create new data, but instead creates a smoother gradient between adjacent points. If there is interest, I can expand upon the process of three-dimensional interpolation in a later post.
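
As a rough illustration of that upsampling step, here is a minimal bilinear interpolation in Python. This is only a sketch of the idea (the project's actual script is in MATLAB): new in-between points are weighted averages of their neighbors, so the gradient smooths without any new information being created.

```python
def bilinear_upsample(img, factor):
    """Upsample a 2D grid by linearly interpolating between neighboring pixels.

    This smooths the gradient between adjacent samples; it does not add
    any new information to the data.
    """
    h, w = len(img), len(img[0])
    H, W = (h - 1) * factor + 1, (w - 1) * factor + 1
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        y = i / factor                      # position in the original grid
        y0 = min(int(y), h - 2)
        dy = y - y0
        for j in range(W):
            x = j / factor
            x0 = min(int(x), w - 2)
            dx = x - x0
            out[i][j] = (img[y0][x0] * (1 - dy) * (1 - dx)
                         + img[y0 + 1][x0] * dy * (1 - dx)
                         + img[y0][x0 + 1] * (1 - dy) * dx
                         + img[y0 + 1][x0 + 1] * dy * dx)
    return out

# A tiny 2x2 "slice" upsampled 2x: original corner values are preserved,
# and each new in-between point is an average of its neighbors.
slice2d = [[0.0, 2.0], [2.0, 4.0]]
up = bilinear_upsample(slice2d, 2)
```

The same weighting extended to a third axis gives trilinear interpolation, which is the natural analogue for a full CT volume.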

I then take this high-resolution structure mapped to the previously defined three-dimensional grid and create an isosurface. MATLAB’s isosurface function takes volume data in three dimensions and a certain isovalue. An isovalue in this case corresponds to a particular intensity of our CT data. The script searches for all of these isovalues in three dimensions and connects the dots. In doing so, a surface in which all of the points have the same intensity is mapped. These vertices and faces are sent to a “structure” in our workspace. The script finally converts this structure to a three-dimensional “object” file (.obj). Such object files can then be used in most animation suites, such as Maya or Blender. Using Blender, I was able to create the animations shown above. Different isovalues correspond to different parts of the image. For example, a value of ~1000 corresponds to skin in the CT data, and a value of ~2400 corresponds to bone. Thus, we can take a stack of two-dimensional images and create beautiful structures for exploration in our planetarium show.

In summary, the process is as follows:

  1. A stack of sagittal CT images is imported into MATLAB.
  2. The script interpolates these images to increase the image (but not data) resolution.
  3. A volume is created from the stack of high-resolution images.
  4. The volume is “sliced” into a surface corresponding to just one intensity level.
  5. This surface is exported to animation suites for your viewing pleasure.
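
The steps above can be sketched in miniature. The toy Python below is an illustrative analogue of the MATLAB pipeline, not the project's actual code: it locates isovalue crossings along one axis by linear interpolation and emits Wavefront OBJ vertex lines. A real routine, such as MATLAB's isosurface or marching cubes generally, would also connect these vertices into triangular faces.

```python
def isovalue_crossings(volume, isovalue):
    """Find points where the intensity crosses `isovalue` along the z axis.

    Toy version: only locates vertices by linear interpolation between
    adjacent voxels; a full isosurface routine would also build faces.
    """
    verts = []
    for x, plane in enumerate(volume):
        for y, column in enumerate(plane):
            for z in range(len(column) - 1):
                a, b = column[z], column[z + 1]
                if (a - isovalue) * (b - isovalue) < 0:  # crossing between z and z+1
                    t = (isovalue - a) / (b - a)         # linear interpolation
                    verts.append((x, y, z + t))
    return verts

def to_obj(verts):
    """Emit Wavefront OBJ vertex lines ('v x y z'), the format used to
    hand geometry to animation suites such as Blender or Maya."""
    return "\n".join("v %g %g %g" % v for v in verts)

# A tiny 2x2x3 volume whose intensity ramps from 0 to 2000 along z;
# the isovalue-500 surface sits a quarter of the way up each column.
vol = [[[0.0, 1000.0, 2000.0] for _ in range(2)] for _ in range(2)]
verts = isovalue_crossings(vol, 500)
obj = to_obj(verts)
```

Because every column crosses 500 at the same fraction, the extracted vertices form a flat sheet at constant z, which is exactly the "surface of equal intensity" described above.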

This series will continue in later posts. I plan to describe more details of the project, and I will delve into particulars of each post if there is interest. You can find more information on this project at

Diving Through Dimensions

February 10, 2013

I recently purchased a hand-blown Klein bottle. For those not familiar with the concept, a Klein bottle is a non-orientable surface that can be constructed by sewing two Möbius strips together. These surfaces are interesting in that we have a three-dimensional structure that appears to have two surfaces. However, closer inspection reveals that these two “sides” are one and the same surface. The glass model is thus a projection of a higher-dimensional object into a lower number of dimensions. If you are interested in these, I recommend a beautiful little short story by A.J. Deutsch, “A Subway Named Möbius.”

Another projection that may interest you is the hypercube, or tesseract. This is not the same tesseract from Madeleine L’Engle’s A Wrinkle in Time, but parallels could be drawn. A hypercube is a four-dimensional object projected onto three dimensions. Within the hypercube, one should see eight cubical cells. Look closely at such a projection: there is a large cube, a small cube, and six distorted “cubes” connecting them. This distortion is a byproduct of the projection onto a lower number of dimensions. To better illustrate it, consider a three-dimensional cube projected as a wireframe onto two dimensions. Instead of searching for eight cubical cells, we now look for six “square” cells. There are two squares, one in the front and one in the back. These are then connected by four additional distorted “squares.” This projection of a three-dimensional cube onto a two-dimensional surface follows the same concept as the four-dimensional hypercube projected into three-dimensional space.
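
The cube-to-square projection is easy to compute. The sketch below (illustrative Python, with an assumed viewpoint on the z axis) perspective-projects the eight vertices of a cube onto a plane, producing the large near square and the small far square described above.

```python
from itertools import product

def project(vertex, d=3.0):
    """Perspective-project a 3D point onto the z=0 plane, viewed from (0,0,d).

    The same divide-by-depth, applied one dimension up, turns a 4D
    hypercube into the familiar cube-within-a-cube tesseract model.
    """
    x, y, z = vertex
    s = d / (d - z)          # points nearer the eye are scaled up
    return (x * s, y * s)

cube = list(product((-1.0, 1.0), repeat=3))   # the 8 vertices of a unit cube
shadow = [project(v) for v in cube]

# The z=+1 face projects to a big square, the z=-1 face to a small one;
# the four remaining "squares" joining them come out as distorted trapezoids.
front = [p for v, p in zip(cube, shadow) if v[2] == 1.0]
back = [p for v, p in zip(cube, shadow) if v[2] == -1.0]
```

With the viewpoint at d = 3, the front square's corners land at ±1.5 and the back square's at ±0.75, which is exactly the large-square/small-square wireframe in the text.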

However, we cannot visualize four spatial dimensions. This makes the concept of additional dimensions quite confusing. Should we believe that such dimensions exist? Another interesting story on this topic is that of a world known as Flatland. The story, written in the 19th century, describes a world where only two dimensions exist. Males are placed into social classes by the number of their sides, with circles, the highest order, serving as priests. Females are line segments and, as you can imagine, are quite dangerous if approached from the “front.” The novella delves into this world’s natural laws, communities, buildings, and social norms. The story then focuses on a Square, who is visited by a Sphere in his dreams. The Sphere describes the third dimension (Spaceland) to the Square, but he cannot understand it. Only by introducing the Square to Lineland and Pointland can the Sphere lead him to believe in a place called Spaceland. It is a wonderfully entertaining pamphlet, and I highly recommend reading it.

Let us assume, however, that in another iteration of Flatland, one that follows all the same natural laws of our three-dimensional Spaceland, the Square is not visited by the Sphere. For some reason, the Square is deluded into the heresy that another dimension exists. Without knowledge from some higher-order Sphere, how can he, the Square, demonstrate the existence of a third dimension? Is it even possible?

We need to make two assumptions. First, this version of Flatland follows all the rules of our world. Second, Flatland is a sheet within our world, meaning that there is space above and below Flatland, but the inhabitants of Flatland are unaware of “up” and “down.” Taking these into account, we can answer this question quite simply. The Square can perform a fairly simple experiment. I must state, however, that this experiment will only provide evidence of a third dimension; other models of the Flatland universe could explain the same observations. That being said, bear with me.

In our world, at ordinary spatial scales (not very small, not very large), the forces of gravity and electromagnetism between two objects propagate through three-dimensional space. This results in a reduction of the forces the objects exert upon one another as the radius between them increases. The law they follow is an inverse-square law: the force is proportional to 1/R^2. In a universe limited to only two dimensions, however, assuming isotropy, there would be no additional spreading into a third dimension, and the force would follow a simple inverse law, proportional to 1/R. If the Square took two magnets of reasonable size and separation and measured the forces acting upon them as the radius changed, he could make a plot of force versus radius. Given our assumptions, the relationship would follow an inverse-square law, and the Square would have evidence that a third dimension exists! Again, this would be met with scrutiny from the Circles.
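
The Square's experiment amounts to estimating the exponent of a power law from his measurements. A small sketch (Python, with hypothetical force data) fits the slope of log(force) versus log(radius): a slope near -2 betrays a third dimension, while -1 is what a self-contained Flatland would show.

```python
import math

def power_law_exponent(radii, forces):
    """Least-squares slope of log(F) vs log(R): the exponent n in F ∝ R^n."""
    xs = [math.log(r) for r in radii]
    ys = [math.log(f) for f in forces]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

radii = [1.0, 2.0, 4.0, 8.0]

# Forces as the Square would measure them if the field spreads in 3D...
forces_3d = [10.0 / r ** 2 for r in radii]
# ...versus a field confined to a purely two-dimensional Flatland.
forces_2d = [10.0 / r for r in radii]
```

With real, noisy measurements the fitted slope would scatter around -2 or -1 rather than hit it exactly, but the two hypotheses remain easy to tell apart on a log-log plot.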

Though we cannot always visualize additional dimensions or scales, we can perform experiments to not only demonstrate their existence, but to observe phenomena at an otherwise unobservable scale. This is an aspect of experimentation that I find fascinating. I hope my introduction to dimensional projections, if nothing else, will bring a new perspective on observations around you.


The text below is modified from a document another Director and I wrote regarding our free clinic based in New York City. I feel it is necessary to disseminate this information in order to dispel beliefs that nearly all those living in the United States will have access to healthcare in the next 5-7 years.

Our clinic has a mission “to provide high-quality, accessible healthcare to uninsured adults through consultation, treatment, preventative care, and referral services, at little or no cost.”  The signing of the Affordable Care Act (ACA) in 2010, colloquially known as “Obamacare,” redefines the population of uninsured adults in the United States. However, a significant portion of this group will remain uninsured, and free clinics will continue to provide a safety net for this population.

We currently admit uninsured adults who earn less than 400% of the Federal Poverty Level.  Thus, our clinic provides services to those who do not have access to healthcare and cannot afford the options available.  The ACA is often portrayed as near “universal coverage,” especially in the popular media.  Unfortunately, this portrayal does not reflect reality. The Congressional Budget Office estimates that from 2014 through 2019, the number of uninsured adults in the United States will be reduced by about 32 million through mandates and subsidies.  However, this still leaves over 23 million uninsured by 2019. While a significant reduction, it leaves a wide gap that must be filled by safety-net programs.  About 4-6 million will pay some penalty in 2016, with over 80% earning less than 500% of the Federal Poverty Level. (Note that the increase in the CBO’s estimate of the remaining uninsured from 23 to 26 million reflects changes in Medicaid legislation.) The uninsured population will include, but will not be limited to, undocumented immigrants, those who opt to pay penalties, and those who cannot afford premiums (often those earning less than 500% of the Federal Poverty Level). Thus, many may still be unable to afford the options available, and free clinics will continue to welcome them.

The gap in healthcare access will narrow over the coming years.  Even so, the number of uninsured adults will remain large. Community healthcare programs across the country will continue to provide care for those who need it most.


  1. Congressional Budget Office, “Selected CBO Publications Related to Healthcare Legislation, 2009-2010.”
  2. Congressional Budget Office, “Another Comment on CBO’s Estimates for the Insurance Coverage Provisions of the Affordable Care Act.”
  3. Congressional Budget Office, “Estimates for the Insurance Coverage Provisions of the Affordable Care Act Updated for the Recent Supreme Court Decision.“
  4. Chaikand et al, “PPACA: A Brief Overview of the Law, Implementation, and Legal Challenges.”

Keeping it Random

January 7, 2013

When iTunes “shuffle” was introduced, Apple received many complaints. A number of songs seemed to play over and over, and customers felt that the shuffle algorithm was not truly random. Apple changed the algorithm, and it works a bit better now. However, the change actually made the process non-random; the previous iteration of the software was random. Why, then, did the complaints arise?

If you take a carton of toothpicks and throw them across the room in a truly random manner, you will notice that the toothpicks will start to form clusters. This “clumping” occurs due to the nature of a Poisson point process, or a Cox family of point processes. Simply put, the process tends to create clusters around certain locations or values when it is truly random. The same also occurred in World War II. The Germans were randomly bombing Britain. However, the randomness led to the same type of clustering one would see in iTunes. Certain targets were bombed more often than others. This led the British to think that the Germans had some strategy to their bombing when, in fact, the process was purely random. We tend to think that a random process would be evenly distributed, and when the reality defies our logic, we no longer see the randomness in the random process. Apple decided to change their algorithm to a less random but more evenly distributed one, and customers remained happy.
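
The toothpick clumping is easy to reproduce. A short sketch in Python (the fixed seed is chosen purely so the illustration is repeatable):

```python
import random

random.seed(7)  # fixed seed for reproducibility only

# Throw 20 "toothpicks" uniformly at random into 20 equal bins.
bins = [0] * 20
for _ in range(20):
    bins[random.randrange(20)] += 1

# A perfectly even spread would put exactly one toothpick in each bin,
# but a truly random throw almost always clumps: some bins catch several
# toothpicks while others stay empty (a birthday-problem effect).
clumped = max(bins)       # size of the largest cluster
empty = bins.count(0)     # bins left bare
```

The chance that 20 uniform throws land one-per-bin is 20!/20^20, roughly 2 in 100 million, so clumping is the overwhelmingly typical outcome of a genuinely random process.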

I can discuss different types of randomness fairly extensively, but I would rather touch upon two different types of random number generation. These are pseudo-random number generators and true random number generators. Pseudo-random number generators use mathematical formulae or tables to pull numbers that appear random. This process is efficient, and it is deterministic rather than stochastic. The problem is that these generators are periodic and will eventually cycle through the same set of pseudo-random numbers. While they may be excellent for pulling random numbers on small scales, they fall prey to significant problems in large-scale simulations. The lack of true randomness creates artifacts in data and confounds proper analysis.

True random number generators, on the other hand, use real data. Typically, data from physical observations, such as weather patterns or radioactive decay, are extracted and used to generate random values. The lavarand generator, for example, used images of lava lamps to generate random numbers. These true random number generators are nondeterministic and do not suffer from the periodicity of pseudo-random number generators.

This distinction is important in the simulation of data. How can one best generate random numbers? If an internal clock is used to generate random numbers, but you are iterating through some code thousands of times, a periodicity dependent upon the computation time may result and generate artifacts. The use of atmospheric noise could overcome this, though pulling the data takes time and could slow down computation.
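
The clock-periodicity artifact can be demonstrated directly. A sketch in Python, using the standard library's Mersenne Twister (the same artifact would appear with any generator re-seeded from a coarse clock inside a tight loop):

```python
import random
import time

# Re-seeding from a coarse clock on every iteration is a classic mistake:
# every iteration within the same second reuses the same seed, so the
# "random" draw repeats and the output inherits the clock's period.
draws = []
for _ in range(1000):
    rng = random.Random(int(time.time()))  # seed = whole seconds
    draws.append(rng.random())

distinct = len(set(draws))  # far fewer distinct values than draws
```

The fix is equally simple: seed once (or let the library seed itself from the operating system's entropy pool) and keep drawing from the same generator.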

The world around us is filled with processes both random and nonrandom. It is a challenge to generate artificial random processes, and it is surprising that truly random processes often appear nonrandom to human observers.

This time of year is a busy one, made busier with my additional work on a science outreach project. I will post details on this project once our Kickstarter page goes live. It will be an exciting one, and I promise to provide details on the techniques I employed for my portion of the promo video. This busy time of year has led me to write fewer posts, but do not fret. Today I discuss a topic slightly removed from science and medicine, and that topic is science fiction. This should provide a nice reprieve for the holiday season.

A friend recently asked, “What genre do you read the most? And what is your opinion of science fiction?” Both of those questions require complex answers, and I am not an authority on the latter topic. However, I’ll tell you what draws me to science fiction, even though most of my reading is on PubMed or arXiv and the novels I read are rooted more in ethics and philosophy than in science fiction or fantasy (see: “Zen and the Art of Motorcycle Maintenance”, “Ishmael”). Science fiction is more than phasers, hyperdrive, ansibles, and soylent green.

The genre begins at our present reality and extends it. Concepts from science, medicine, and even politics are nudged to new heights, and a story is born. Suspension of disbelief is often required. Unlike fantasy or even magical realism, the story is deemed plausible, as explanations are required from the author. For example, faster-than-light communication, a technology that breaks our current understanding of the universe, requires some mechanism. This extension of reality allows writers to do something wonderful. They explore social structures, morality, religion, and more. It is this that makes the genre wonderful. While I may not agree with the science in science fiction, that word, “science,” implies a level of critical thinking. The memorable stories from the genre apply such critical thinking to contemporary issues, and they delve into fundamental questions in philosophy. This is not a requirement for the genre, but it is what draws me to its best works.

Every genre has traits like this. Biographies, for example, relay information about a person’s life experiences. However, these books may also impart wisdom through lessons gleaned by the protagonist. In Team of Rivals, we learn that a former President was inspired by a cabinet with whom he disagreed. In one of Richard Feynman’s memoirs, we learn lessons of love and humility. For example, he tells the story of a pen commissioned by NASA that could write in microgravity. After months of work and significant money spent, the team revealed their “space pen” to the Soviets. Moscow responded, stating that they had solved the problem by using pencils! This story, gleaned from a memoir, taught me a valuable lesson. This function of biographies is what raises their quality and timelessness. Fantasy provides similar critiques of society, yet it functions as an escape mechanism from the challenges of a difficult life.

Nonfiction educates, yet it is limited by the constraints of reality. Science fiction takes reality and extends it. Star Trek asked, “What makes us human?” Ender’s Game delved into questions of militarism and genocide. Many writers, such as Orwell, Huxley, and Bradbury, created dystopias where a small shift in decision making led to a frightening world. These were rooted in the contexts of their time, and we still reference such works when critiquing current societal measures.

So, what is my take on science fiction? While myriad laws are broken in the writing of these novels, I am drawn to them. These novels apply the scientific method in a work of fiction. They ask a question about an alternate reality, create and experiment with this reality with an artistic license, and draw a set of conclusions from the simulations they employ. We can debate the lessons learned from such novels. That debate alone is further evidence that the works initiated a conversation.

However, remember this: Orson Scott Card really has no idea how time dilation works.

Every year, I read an article written in 1972 by P.W. Anderson, “More is Different.” This exercise serves two functions. On one hand, it is a kind of ritualistic experience through which I can reflect on the past year. On the other, it allows me to revisit the paper with an expanded knowledge base. The paper revisits an age-old discussion in science: are less fundamental fields of research simply applied versions of their more fundamental counterparts?

In 1965, V.F. Weisskopf, in an essay entitled In Defence of High Energy Physics, delineated two types of fields. One, which he called intensive, seeks fundamental laws. The other, extensive, uses those fundamental laws to explain various phenomena. In other words, extensive research is simply applied intensive research. Some fields sit closer to the fundamental laws than others. Most of neuroscience is more fundamental than psychology, in that it is reduced to smaller scales and focuses on simpler parts of a more complex system. Psychology, in turn, is closer to its fundamental laws than the social sciences: where psychology focuses on the workings of individuals and small-group dynamics, the social sciences use many of those laws to explain their own phenomena. Molecular biology is seen as more fundamental than cell biology. Chemistry is less fundamental than many-body physics, which is less fundamental than particle physics. Weisskopf’s argument seems to hold when fields are ranked by size scale.

Changes in size scale, however, lead us to a discussion of symmetry. Anderson begins his discussion with the example of ammonia. This molecule forms a pyramid, with one nitrogen at its ‘peak’ and three hydrogens forming the base. A problem arises, however. When discussing a nucleus, we find that it has no electric dipole moment, no net direction of charge. The negative nitrogen and positive hydrogens, however, form a structure that seems to disobey this law, or so one might think. It turns out that symmetry is preserved through tunneling of the nitrogen, which flips the structure back and forth and yields a net dipole moment of zero. Simply put, symmetry is preserved. Weisskopf’s argument continues to hold, even with the change in scale.

However, when the molecule becomes very large, such as the sugars made by living systems, this inversion no longer occurs, and the symmetry is broken. The symmetry that held at the level of the nucleus no longer holds here. Additionally, one can ask: knowing only the symmetry arguments for a nucleus, could we then infer the behavior of ammonia, glucose, crystal lattices, or other complex structures? The fundamental laws, while still applying to the system, do not capture the behavior at this new scale. On top of that, very large systems break the symmetry entirely.

Anderson goes on to discuss a number of other possibilities. In addition to structure, he analyzes time dependence, conductivity, and the transfer of information. In particular, consider the crystal that carries information in living systems: DNA. Here we have a structure that need not be symmetric, and new laws of information transfer arise from this structure and its counterparts that would not be predicted from particle physics or many-body physics alone. Considering the DNA example, we must then ask ourselves: can questions in the social sciences, psychology, and biology be explained by DNA alone? We are often tempted, and often rightly, to reduce these complex systems to changes in DNA structure. But can we predictably rebuild the same social psychology from such a simple code? With the addition of epigenetics, we are trying, but I argue that we are not there yet. In fact, I argue that we never will be.

The message here is that larger, more complex systems, while built upon the fundamental laws of their reduced counterparts, display unique phenomena of their own. We can continue to reduce complex systems to smaller scales, but in doing so, the complexities and phenomena of the larger systems are lost. Starting only with knowledge of the fundamental laws, can we predict all of the phenomena of the larger scales, without prior knowledge of those phenomena? Probably not. This is another kind of broken symmetry: traversing a field in the intensive direction leads one toward the formulation of fundamental laws, but traversing in the extensive direction from those laws opens up more and more possibilities, which need not include the system from which we started. As scales grow, so too does the probability of broken symmetry.

Thus, when stating that “X is just applied Y,” remember ammonia.


While working on my current grant application, I was astounded by the prevalence of hearing impairment in the United States. It also raised a question: is hearing impairment currently underdiagnosed, overdiagnosed, or neither? After perusing the literature, I found the answer to be fairly complicated. While presbycusis (age-related hearing loss) is believed to be underdiagnosed in the U.S., the prevalence of hearing loss appears fairly high in this country compared with worldwide statistics (over 30 million in the United States, against about 275 million around the world). Is this due to relatively better diagnosis in the U.S., or is something else going on? Here, I'll delve into that question through the following:

  • Statistics on hearing impairment of all types in the United States
  • Statistics on hearing impairment of all types in various other countries
  • Comparison of screening and management in these regions

It is helpful to consider these basic data before trying to determine any real differences between countries. When discussing changes in the prevalence or incidence of a disease, it is important to first account for diagnostic bias. Regardless, I hope readers will leave with the impression that hearing loss is a major problem, one that will become more apparent as our population ages.

For the purposes of this post, remember that hearing loss is defined as a hearing threshold greater than 25 dB HL, where 0 dB HL at a given frequency is the sound pressure at which young, healthy listeners detect that frequency 50% of the time. Functional impairment begins when hearing loss interferes with understanding conversational speech, which occurs at levels of 50-60 dB. A recent review of hearing loss in the United States estimated that over 10% of the population has bilateral hearing loss (>25 dB HL), and over 20% has at least unilateral hearing loss. This staggering statistic rises to over 55% in those aged 70 and older, and to nearly 80% by age 80. With an aging population in the United States, this is a major public health concern.
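To make those threshold definitions concrete, here is a minimal sketch in Python of how an audiogram might be reduced to a pure-tone average and checked against the cutoffs used in this post. Averaging thresholds at 0.5, 1, 2, and 4 kHz is common audiometric practice, but the grade labels and boundaries below are illustrative, not a clinical standard:

```python
def pure_tone_average(thresholds_db):
    """Average hearing threshold (dB HL) across the standard
    speech frequencies: 0.5, 1, 2, and 4 kHz."""
    freqs = (500, 1000, 2000, 4000)
    return sum(thresholds_db[f] for f in freqs) / len(freqs)

def classify(pta):
    """Apply the cutoffs from this post: >25 dB HL counts as hearing
    loss; conversational speech sits at 50-60 dB, so thresholds above
    ~50 dB begin to impair everyday understanding."""
    if pta <= 25:
        return "normal"
    elif pta <= 50:
        return "mild-to-moderate loss"
    else:
        return "functionally impairing loss"

# Hypothetical audiogram: high-frequency loss typical of early presbycusis
audiogram = {500: 15, 1000: 20, 2000: 30, 4000: 55}
pta = pure_tone_average(audiogram)
print(pta, classify(pta))  # 30.0 mild-to-moderate loss
```

Note how a listener can have substantial high-frequency loss (55 dB HL at 4 kHz here) while the average still lands in a milder grade, which is one reason single-number prevalence figures understate the problem.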

The causes of hearing impairment include genetic, drug-induced, and noise-induced hearing loss. With increasing exposure to loud noise, noise-induced hearing loss has become more prevalent over time. Nonsyndromic and syndromic genetic hearing loss, however, accounts for about 50% of impairments in children; the remaining environmental causes include "TORCH" organisms and other neonatal infections. Whatever the cause, the problem is a vast one, and it will grow as the population ages.

Considering the vastness of this problem, how well do we screen for it? Poorly. Only 9% of internists offer screening to those aged 65 and older, and only 25% of those whose hearing impairment could be treated with hearing aids actually use them. This is a failure of both screening and management. We must reiterate the prevalence of this condition and do what we can to improve the current state of underdiagnosis and undertreatment. So, to answer the question above: no, we do not do a stellar job of screening for hearing loss.

How do we compare with other countries? Is hearing loss more prevalent in the United States, even though our screening programs are not ideal? Actually, the opposite is true. Hearing loss is more prevalent in middle- and lower-income countries, but screening there is so poor that the numbers are staggeringly underreported. Compared with the rate of about 10-20% in the United States, prevalence rises to over 25% in southeast Asia, 20-25% in sub-Saharan Africa, and over 20% in Latin America. The WHO reports about 275 million people with moderate to severe hearing impairment (note that the values listed above are for mild impairment) and estimates that approximately 80% of them live in less wealthy nations. If we include everyone with any degree of hearing impairment (including mild, >25 dB HL), the number rises to 500-700 million people worldwide (with 30-40 million in the United States). There is also very little information on hearing aid use in low- and middle-income countries (excluding Brazil), though management in many of these countries appears even worse than in the United States.

When discussing the "global burden of disease," hearing impairment is a prime example. It is a health condition that affects all countries in much the same way. Though prevalence is lower in high-income countries, consider that roughly 1 in 5 people will develop some form of hearing loss. We must therefore raise our standards for screening and management of this condition.

Flexner and Curricular Reform

November 19, 2012 — 1 Comment

While working with our medical school on curricular reform, I often hear mention of the Flexner Report. Most, if not all, of those on the committees know what it is and what it entails. However, those with whom I discuss the reform outside of the committees are often left dumbfounded. Many understand the need to reform medical curricula, but far fewer know the history of its structure in the United States.

Prior to the 20th century, American medical education was dominated by three systems: an apprenticeship system, a proprietary school system, and a university system. The lack of standardization inevitably resulted in a wide range of expertise, and the best students left the United States to study in Paris or Vienna. In response, the American Medical Association established the Council on Medical Education (CME) in 1904. The council's goal was to standardize medical education and to develop an 'ideal' curriculum. It asked the Carnegie Foundation for the Advancement of Teaching to survey medical schools across the United States.

Abraham Flexner, a secondary school teacher and principal with no medical background, led the project. In a year and a half, Flexner visited over 150 U.S. medical schools, examining their entrance requirements, the quality of their faculty, the size of their endowments and tuition, the quality of their laboratories, and their teaching hospitals (if present). He released his report in 1910. He found that most medical schools did not adhere to a strict scientific curriculum, and he concluded that they were acting more as businesses to make money than as institutions to educate students:

“Such exploitation of medical education […] is strangely inconsistent with the social aspects of medical practice. The overwhelming importance of preventive medicine, sanitation, and public health indicates that in modern life the medical profession is an organ differentiated by society for its highest purposes, not a business to be exploited.”

In response, the Federation of State Medical Boards was established in 1912. The group, with the CME, enforced a number of accreditation standards that are still in use today. The 'ideal' curriculum they implemented comprised two years of basic science followed by two years of clinical rotations. The quality of faculty and teaching hospitals had to meet certain standards, and admissions requirements were standardized. As a result, many schools shut down: before the formation of the CME, there were 166 medical schools in the United States; by 1930, there were 76. One negative consequence was an immediate reduction in new physicians to treat disadvantaged communities. Those with less privilege in America also found it more difficult to obtain a medical education, creating yet another barrier for the socioeconomically disadvantaged. Nonetheless, the report and its follow-up actions were key in reshaping medical curricula in the United States to embrace scientific advancement.

Today, medical schools across the country embrace the doctrines established 100 years ago. Most schools continue to follow the curriculum previously imposed. Scientific rigor is a key component. However, medical educators are currently realigning curricula to embrace modern components of medicine and to focus on the service component of medicine that is central to the doctor-patient relationship.

In 2010, one century after the release of the Flexner Report, the Commission on Education of Health Professionals for the 21st Century was launched. By the turn of the 21st century, gaps within and between countries were glaring. Health systems struggle to keep up with new infectious agents, epidemiological transitions, and the complexities and costs of modern health care. Medical education has once again become fragmented. There is a mismatch between professional competencies and the needs of populations. We focus on hospitals over primary care. Leadership in medicine is lacking. The interdisciplinary structure of medicine requires that we no longer act as isolated professions. As a result, a redesign of the curriculum is required.

The Commission surveyed the 2,420 medical schools and 467 public health schools worldwide. The United States, India, Brazil, and China, each with over 150 medical schools, were the most heavily sampled; in contrast, 36 countries had no medical school at all. Across the globe, it costs approximately US$116,000 to train each medical graduate and US$46,000 to train each nurse, though costs are highest in North America. There is little to no standardization between countries, much like the disjointed landscape within the United States in the early 20th century. The globalization of medicine thus requires reform.

Reform of medical education did not stop with Flexner. After the science-based curriculum introduced by the report, the mid-20th century saw a shift toward problem-based learning. Now a new reform is required, one that takes a global perspective. The Commission recommended a number of core professional skills, and these must be implemented in medical curricula across the globe.

Within the United States, medical educators seek to bring curricula more in line with the global perspective of the modern era, focusing more on global health initiatives and service learning. Additionally, health care reform in America will bring with it new challenges, and medical school curricula must keep up. How this will be accomplished is still under heavy discussion.

When considering any reform, it is helpful to remind oneself of its historical context. In this case, the disjointed structure within the United States at the time of Flexner parallels the disjointed global structure of the world seen today. Though changes will be of a very different nature, motivations remain the same.