Archives For Education

We make decisions based on the data we see. One restaurant serves higher-quality food than another. One presidential candidate aligns more appropriately with our values. One surgical technique yields better outcomes. One applicant submits a stronger job application than a competitor. From these data, we decide what course of action to take. In many cases, these decisions are inconsequential. In others, however, a poor decision may lead to dangerous results. Let’s consider danger.

Imagine you are a surgeon. A patient arrives in your clinic with a particular condition. Let us call this condition, for illustrative purposes, phantasticolithiasis. The patient is in an immense amount of pain. After reviewing the literature on phantasticolithiasis, you discover that this condition can be fatal if left untreated. The review also describes two surgical techniques, which we shall call “A” and “B” here. Procedure A, according to the review, has a 69% success rate. Procedure B, however, seems much more promising, having a success rate of 81%. Based on these data, you prepare for Procedure B. You tell the patient the procedure you will be performing and share some of the information you learned. You tell a few colleagues about your plan. On the eve of the procedure, you call your old friend, a fellow surgeon practicing on another continent. You tell him about this interesting disease, phantasticolithiasis, what you learned about it, and your assessment and plan. There is a pause on the other end of the line. “What is the mass of the lesion?” he asks. You respond that it is much smaller than average. “Did you already perform the procedure?” he continues. You tell him that you didn’t and that the procedure is tomorrow morning.

“Switch to procedure A.”

Confused, you ask your friend why this could be true. He explains the review a bit further. The two procedures were performed on various categories of phantasticolithiasis. However, what the review failed to mention was that procedure A was more commonly performed on the largest lesions, and procedure B on the smallest lesions. Larger lesions, as you might imagine, have a much lower success rate than their smaller counterparts. If you separate the patient population into two categories for the large and small lesions, the results change dramatically. In the large-lesion category, procedure A has a success rate of 63% (250/400) and procedure B has a success rate of 57% (40/70). For the small lesions, procedure A is 99% successful (88/89) and procedure B is 88% successful (210/240). In other words, when controlling for the category of condition, procedure A is always more successful than procedure B. You follow your friend’s advice. The patient’s surgery is a success, and you remain dumbfounded.
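If you want to convince yourself that nothing fishy is going on with the arithmetic, a few lines of MATLAB (a quick sketch using only the counts quoted above) reproduce both the stratified and the pooled rates:

```matlab
% Success counts and totals from the review, split by lesion size.
% Columns: [large small]
successA = [250  88];   totalA = [400  89];   % procedure A
successB = [ 40 210];   totalB = [ 70 240];   % procedure B

perStratumA = successA ./ totalA    % 0.63 and 0.99 -> A wins within each stratum
perStratumB = successB ./ totalB    % 0.57 and 0.88

pooledA = sum(successA) / sum(totalA)   % 338/489 ~ 0.69
pooledB = sum(successB) / sum(totalB)   % 250/310 ~ 0.81 -> B "wins" once strata are pooled
```

The reversal appears because procedure A was disproportionately applied to the large, hard-to-treat lesions, so pooling mixes unlike populations.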

What’s happening here is something called Simpson’s paradox. The idea is simple: when two variables are considered (for example, two procedures), one association results (procedure B is more successful). However, upon conditioning on a third variable (lesion size), the association reverses (procedure A is more successful). This phenomenon has far-reaching implications. For example, since 2000, the median US wage has increased by 1% when adjusted for inflation, a statistic many politicians like to boast about. However, within every educational subgroup, the median wage has decreased. The same can be said for the gender pay gap. Barack Obama fought against the gap in both of his campaigns, reminding us that women make only 77 cents for every dollar a man earns. However, the problem is more than just a paycheck, and the differences change and may even disappear if you control for job sector or level of education. In other words, policy changes to reduce the gap need to be more nuanced than a campaign snippet. A particularly famous case of the paradox arose at UC Berkeley, where the school was sued for gender bias. The school admitted 44% of its male applicants and only 35% of its female applicants. However, upon conditioning on department, it was found that women applied more often to the departments with lower rates of admission. In two-thirds of the departments, women had a higher admission rate than men.

The paradox seems simple. When analyzing data and making a decision, simply control for other variables and the correct answer will emerge. Right? Not exactly. How do you know which variables should be controlled? In the case of phantasticolithiasis, how would you know to control for lesion size? Why couldn’t you just as easily control for the patient’s age or comorbidities? Could you control for all of them? If you do see the paradox emerge, what decision should you then make? Is the correct answer that of the conditioned data or that of the raw data? The paradox becomes complicated once again.

Judea Pearl wrote an excellent description of the problem and proposed a solution to the above questions. He cites the use of “do-calculus,” a technique rooted in the study of Bayesian networks. Put more simply, his methods identify the causal relationships among a set of variables. In doing so, one can determine which variables to condition on and then decide whether the conditioned data or the raw data are best for decision-making. The variables that lie on the causal pathways are the ones that should be used. If you are interested in the technique and have some experience with the notation, I recommend this brief review on arXiv.
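To give a flavor of the machinery (a sketch only, not Pearl’s full treatment): one of the simplest results in that framework is the back-door adjustment. If a set of variables Z satisfies the back-door criterion relative to a treatment X and an outcome Y, the causal effect of X on Y is obtained by adjusting for Z:

P(Y = y | do(X = x)) = Σ_z P(Y = y | X = x, Z = z) · P(Z = z)

In the surgical story, lesion size would play the role of Z, assuming it influences both the choice of procedure and the outcome; that is exactly why the stratified numbers, rather than the pooled ones, are the right basis for choosing a procedure.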

Of course, rapid and rather inconsequential decisions need not be based on such formalities. On the other hand, it serves all of us well if we at least consider the possibility of Simpson’s paradox on a day-to-day basis. Be skeptical when reading the paper, speaking with colleagues, and making decisions. Finally, if you’re ever lucky enough to be the first patient with phantasticolithiasis, opt for procedure A.

For the first part of this series and to learn a bit more about 3D reconstruction of computed tomography (CT) slices, check out NEURODOME I: Introduction and CT Reconstruction. Our Kickstarter is now LIVE!

“As I stand out here in the wonders of the unknown at Hadley, I sort of realize there’s a fundamental truth to our nature. Man must explore. And this is exploration at its greatest.” – Cdr. David Scott, Apollo 15

Kickstarter

It is official. Our Kickstarter for NEURODOME has launched. I have already written a bit about my role in the project and described CT reconstruction. Future posts will delve into fMRI acquisition and reconstruction, along with additional imaging modalities and perhaps a taste of medical imaging in space. You might be surprised at the number of challenges astronauts have faced aboard rockets, shuttles, and the ISS. All of this will be part of the NEURODOME series.

With our launch, we hope to raise enough funds to develop a planetarium show that illustrates our desire to explore. To do so, real data will be used in the fly-throughs. Our first video, The Journey Inward, provides a basic preview of what you might expect.

I will continue to post about this project but, for now, read about NEURODOME on our website and, if you can, help fuel our mission!

A Troubling Divorce

March 23, 2013

The unhappy marriage between the United States government and science (research, education, outreach) ended this month. We’ve known for years now that the relationship was doomed to fail, with shouting matches in Washington and fingers pointed in all directions. I would sooner describe it as an end to the relationship between elected officials and human reason, but that would be harsh, and I still have hope for that one. Sadly, this generation of congresspeople signed the paperwork for a divorce from science.

America’s love affair with science dates back to its origins. Samuel Slater’s factory system fueled the Industrial Revolution. Thomas Edison battled Nikola Tesla in the War of the Currents. It was a happy marriage, yielding many offspring. The Hygienic Laboratory of 1887 grew into the National Institutes of Health approximately 50 years later. We, the people, invented, explored, and looked to the stars. Spurred by a heavy dose of Sputnik envy, Eisenhower formed the National Aeronautics and Space Administration (NASA) in July 1958. We, the people, then used our inventions to explore the stars.

Since then, generations of both adults and children have benefited from the biomedical studies at the NIH, the basic science and education at the NSF, and the inspiration and outreach from NASA. From Goddard’s first flight through Curiosity’s landing on Mars, citizens of the United States have benefited not only directly from spin-offs but also from NASA’s dedication to increasing participation in the STEM (science, technology, engineering, mathematics) fields. Informed readers will know that although the STEM crisis may be exaggerated, these fields create jobs, with additional benefits flowing to manufacturing and related careers. Such job multipliers should be seen as beacons of hope in troubling times.

Focusing on the NIH, it should be obvious to readers that biomedical science begets health benefits. From Crawford Long’s (unpublished and thus uncredited) first use of ether in the 19th century through great projects like the Human Genome Project, Americans have succeeded in this realm. However, as many know, holding a career in academia is challenging. Two issues compound the problem. First, principal investigators must “publish or perish.” Similar to a consulting firm where you must be promoted or be fired (“up or out”), researchers must continue to publish their results on a regular basis, preferably in high-impact journals, or risk being denied tenure. The second problem lies in funding. Scientists must apply for grants and, in the case of biomedical researchers, these typically come from the NIH. With funding cuts over the previous years, research grants (R01s) have been reduced in both award size and number awarded. Additionally, training grants (F’s) and early-career awards (K’s) have been reduced. Money begets money, and the reduction in these training and early-career grants makes it even more difficult to compete with veterans when applying for research grants. Thus, entry into the career pathway becomes ever more difficult, approaching an era where academia may be an “alternative career” for PhD graduates.

The United States loved science. The government bragged about it. We shared our results with the world. Earthrise, one of my favorite images from NASA, showed a world without borders. The astronauts of Apollo 8 returned to a new world after their mission in 1968. This image, the one of the Earth without borders, influenced how we think about this planet. The environmental movement began. As Robert Poole put it, “it is possible to see that Earthrise marked the tipping point, the moment when the sense of the space age flipped from what it meant for space to what it means for Earth.” It is no coincidence that the Environmental Protection Agency was established two years later. A movement that began with human curiosity raged onward.

Recently, however, the marriage between our government and its science and education programs began to sour. Funding was cut across the board through multiple bills. Under our current administration, NASA’s budget was reduced to less than 0.5% of the federal budget, before the cuts I am about to describe. The NIH has been challenged too, providing fewer and fewer grants to researchers, forcing many away from the bench and into new careers. Funding for science education and outreach subsequently fell, too. Luckily, other foundations, such as the Howard Hughes Medical Institute, picked up part of the bill.

I ran into this problem when applying for a grant through the National Institutes of Health and discussing the process with my colleagues. I should note as a disclaimer that I was lucky enough to have received an award, but that luck is independent of the reality we as scientists must face. The process is simple. Each NIH grant application is scored, and a committee determines which grants are funded based upon that score and the funds available. With less money coming in, fewer grants are awarded. Thus, with cuts over the past decade, grant success rates plummeted from ~30% to 18% in 2011. When Congress decided to cut its ties with reality in March and allow the sequester, it was estimated that this number would drop even further. (It should be noted that a drop in success rate could also be due to an increase in the number of applications, and a large part of that decrease in success rate over 10 years was due to the 8% rise in applications received.) This lack of funding creates barriers. Our government preaches that STEM fields are the future of this country, yet everything it has done in recent history has countered this notion. As an applicant for a training grant, I found myself in a position where very few grants might be awarded, and some colleagues went unfunded due to recent funding cuts. This was troubling for all of us, and I am appalled at the contradiction between the rhetoric in Washington and the annual budget.

Back to NASA. As we know, President Obama was never a fan of the agency when writing his budgets, yet he spoke highly of it when NASA succeeded. Cuts to NASA proposed by both the White House and Congress in 2011, part of a $1.2 trillion reduction over 10 years, were already in place. They were enough to shut down many programs, reduce the workforce, and leave many buildings in disrepair. The sequester, an across-the-board cut, then hit NASA even harder. As of yesterday, all science education and outreach programs were suspended. This was the moment that Congress divorced Science.

All agencies are hit hard by these issues, and it isn’t just the fields of science, education, and outreach. Yet, speaking firsthand, I can say that these cuts are directly affecting those of us on the front line, trying to enter the field and pursue STEM-related careers. Barriers are rising as the result of a dilapidated system. I have seen numerous F, K, and R applications from friends and colleagues fail simply because of budget constraints (meaning that their scores would have been funded in a previous year, but the payline was lowered to fund fewer applications), and I have seen children around New York who are captivated by science education but stuck in a system without the funds to fuel them. I can comfortably claim that we are all the forgotten children of a failed marriage.

Whether because of the issues raised in this post or your own concerns related to the sequester, remember that this is a bipartisan issue. There are no winners in this game, except for those congresspeople whose paychecks went unaffected by the sequester. I urge you to contact your elected official. Perhaps we can rekindle this relationship.

Those who work closely with me know that I am part of a project entitled NEURODOME (www.neurodome.org). The concept is simple. To better understand our motivations to explore the unknown (e.g., space), we must look within. To accomplish this, we are creating a planetarium show using real data: maps of the known universe, clinical imaging (fMRI, CT), and fluorescent imaging of brain slices, to name a few. From our website:

Humans are inherently curious. We have journeyed into space and have traveled to the bottom of our deepest oceans. Yet no one has ever explained why man or woman “must explore.” What is it that sparks our curiosity? Are we hard-wired for exploration? Somewhere in the brain’s compact architecture, we make the decision to go forth and explore.

The NEURODOME project is a planetarium show that tries to answer these questions. Combining planetarium production technology with high-resolution brain imaging techniques, we will create dome-format animations that examine what it is about the brain that drives us to journey into the unknown. Seamlessly interspersed with space scenes, the NEURODOME planetarium show will zoom through the brain in the context of cutting-edge astronomical research. This project will present our most current portraits of neurons, networks, and regions of the brain responsible for exploratory behavior.

To embark upon this journey, we are launching a Kickstarter campaign next week, which you will be able to find here. Two trailers and a pitch video showcase our techniques and our vision. For now, you can see our “theatrical” trailer, which combines some real data with CGI, below. Note that the other trailer I plan to embed in a later post will include nothing but real data.

I am both a software developer and a curator of clinical data on this project. This involves acquisition of high-resolution fMRI and CT data, followed by rendering of these slices into three-dimensional objects that can be used for our dome-format presentation. How do we do this? I will begin by explaining how I reconstructed a human head from sagittal sections of CT data. In a later post, I will describe how we can take fMRI data of the brain and reconstruct three-dimensional models by a process known as segmentation.

How do we take a stack of images like this:

[Animated GIF: a scrolling stack of sagittal CT slices]

and convert it into three-dimensional objects like these:

These renders allow us to transition, in a large-scale animation, from imagery outside the brain to fMRI segmentation data and finally to high-resolution brain imaging. The objects are beneficial in that they can be imported into most animation suites. To render stacks of images, I created a simple script in MATLAB. A stack of 131 sagittal sections, each with 512×512 resolution, was first imported. After importing the data, the script defines a rectangular grid in 3D space. The pixel data from each of these CT slices are interpolated and mapped to the 3D mesh. For example, we can take a 512×512 two-dimensional slice and interpolate it so that the new resolution is 2048×2048. Note that this does not create new data, but instead creates a smoother gradient between adjacent points. If there is interest, I can expand upon the process of three-dimensional interpolation in a later post.
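As a rough illustration of that step, here is a sketch of the import and interpolation, assuming the slices have been exported as numbered image files. The file names, data type, and upsampling factor are placeholders, not the project’s actual script:

```matlab
% Import the stack of sagittal CT slices (hypothetical file names).
nSlices = 131;
raw = zeros(512, 512, nSlices, 'single');
for k = 1:nSlices
    raw(:,:,k) = single(imread(sprintf('ct_sagittal_%03d.png', k)));
end

% Upsample each slice in-plane from 512x512 to 2048x2048. This adds no new
% information; it only smooths the gradient between neighboring pixels.
% (Note: at this size the volume occupies a few gigabytes of memory.)
[X,  Y ] = meshgrid(1:512, 1:512);
[Xq, Yq] = meshgrid(linspace(1, 512, 2048), linspace(1, 512, 2048));
vol = zeros(2048, 2048, nSlices, 'single');
for k = 1:nSlices
    vol(:,:,k) = single(interp2(X, Y, double(raw(:,:,k)), Xq, Yq, 'linear'));
end
```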

I then take this high-resolution volume, mapped to the previously defined three-dimensional grid, and create an isosurface. MATLAB’s isosurface function takes volume data in three dimensions and a particular isovalue. An isovalue in this case corresponds to a particular intensity of our CT data. The script searches the volume for all points at this isovalue and connects the dots. In doing so, a surface on which all of the points have the same intensity is mapped. The resulting vertices and faces are stored in a “structure” in our workspace. The script finally converts this structure to a three-dimensional “object” file (.obj). Such object files can then be used in most animation suites, such as Maya or Blender. Using Blender, I was able to create the animations shown above. Different isovalues correspond to different parts of the image. For example, an isovalue of ~1000 corresponds to skin in the CT data, and an isovalue of ~2400 corresponds to bone. Thus, we can take a stack of two-dimensional images and create beautiful structures for exploration in our planetarium show.
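Here is roughly what the surface-extraction and export step looks like; again, a sketch rather than the exact NEURODOME script, and the bare-bones OBJ writer below is hand-rolled (MATLAB has no built-in OBJ exporter):

```matlab
% Extract a surface at a single intensity value (~1000 for skin, ~2400 for
% bone in this data set). isosurface returns a struct with .vertices and .faces.
skinValue = 1000;
fv = isosurface(vol, skinValue);

% Write a minimal Wavefront OBJ file: one "v" line per vertex and one "f"
% line per triangular face (OBJ and MATLAB both index faces from 1).
fid = fopen('skin.obj', 'w');
fprintf(fid, 'v %.4f %.4f %.4f\n', fv.vertices');
fprintf(fid, 'f %d %d %d\n', fv.faces');
fclose(fid);
```

The resulting skin.obj can then be imported into Blender or Maya for animation.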

In summary, the process is as follows:

  1. A stack of sagittal CT images is imported into MATLAB.
  2. The script interpolates these images to increase the image (but not data) resolution.
  3. A volume is created from the stack of high-resolution images.
  4. The volume is “sliced” into a surface corresponding to just one intensity level.
  5. This surface is exported to animation suites for your viewing pleasure.

This series will continue in later posts. I plan to describe more details of the project and will delve into the particulars of each step if there is interest. You can find more information on this project at http://www.neurodome.org.

Flexner and Curricular Reform

November 19, 2012

While working with our medical school on curricular reform, I hear frequent mention of the Flexner Report. Most, if not all, of those on the committees know what this is and what it entails. However, those with whom I discuss the reform outside of the committees are often left dumbfounded. Many understand the need to reform medical curricula, but far fewer know the history of its structure in the United States.

Prior to the 20th century, American medical education was dominated by three systems: an apprenticeship system, a proprietary school system, and a university system. Lack of standardization inevitably resulted in a wide range of expertise. Additionally, the best students left the United States to study in Paris or Vienna. In response, the American Medical Association established the Council on Medical Education (CME) in 1904. The council’s goal was to standardize medicine and to develop an ‘ideal’ curriculum. It asked the Carnegie Foundation for the Advancement of Teaching to survey medical schools across the United States.

Abraham Flexner, a secondary school teacher and principal with no ties to medicine, led the project. In a year and a half, Flexner visited over 150 U.S. medical schools, examining their entrance requirements, the quality of their faculty, the size of their endowments and tuition, the quality of their laboratories, and their teaching hospitals (if present). He released his report in 1910. He found that most medical schools did not adhere to a strict scientific curriculum, and he concluded that medical schools were acting more as businesses to make money than as institutions to educate students:

“Such exploitation of medical education […] is strangely inconsistent with the social aspects of medical practice. The overwhelming importance of preventive medicine, sanitation, and public health indicates that in modern life the medical profession is an organ differentiated by society for its highest purposes, not a business to be exploited.”

In response, the Federation of State Medical Boards was established in 1912. The group, together with the CME, enforced a number of accreditation standards that are still in use today. As the ‘ideal’ curriculum, they implemented two years of basic sciences followed by two years of clinical rotations. The quality of faculty and teaching hospitals had to meet certain standards, and admissions requirements were standardized. As a result, many schools shut down. Prior to the formation of the CME, there were 166 medical schools in the United States. By 1930, there were 76. The negative consequence was an immediate reduction in new physicians to treat disadvantaged communities. Those with less privilege also found it more difficult to obtain a medical education, creating yet another barrier for the socioeconomically disadvantaged in America. Nonetheless, the report and its follow-up actions were key in reshaping medical curricula in the United States to embrace scientific advancement.

Today, medical schools across the country embrace the doctrines established 100 years ago. Most schools continue to follow the curriculum previously imposed. Scientific rigor is a key component. However, medical educators are currently realigning curricula to embrace modern components of medicine and to focus on the service component of medicine that is central to the doctor-patient relationship.

In 2010, the Commission on Education of Health Professionals for the 21st Century was launched, one century after the release of the Flexner Report. By the turn of the 21st century, gaps within and between countries were glaring. Health systems struggle to keep up with new infectious agents, epidemiological transitions, and the complexities and costs of modern health care. Medical education has once again become fragmented. There is a mismatch between professional competencies and the needs of populations. We focus on hospitals over primary care. Leadership in medicine is lacking. The interdisciplinary structure of medicine requires that we no longer act in isolated professions. As a result, a redesign of the curriculum is required.

The Commission surveyed the 2,420 medical schools and 467 public health schools worldwide. The United States, India, Brazil, and China, each having over 150 medical schools, were the most heavily sampled. In contrast, 36 countries had no medical schools. Across the globe, it costs approximately US$116,000 to train each medical graduate and US$46,000 to train each nurse, though the figures are greatest in North America. There is little to no standardization between countries, similar to the disjointed nature of medical education within the United States in the early 20th century. The globalization of medicine thus requires reform.

Reform of medical education did not stop with Flexner. After the science-based curriculum introduced by the report, the mid-20th century saw a focus on problem-based learning. However, a new reform is now required that seeks a global perspective. A number of core professional skills were recommended by the Commission, and these must be implemented in medical curricula across the globe.

Within the United States, medical educators seek to reform curricula to be more in line with the global perspective of the modern era, focusing more on global health initiatives and service learning. Additionally, health care reform in America will bring with it new challenges, and medical school curricula must keep up. How this will be accomplished is still under heavy discussion.

When considering any reform, it is helpful to remind oneself of its historical context. In this case, the disjointed structure within the United States at the time of Flexner parallels the disjointed global structure of the world seen today. Though changes will be of a very different nature, motivations remain the same.

As promised before, I plan to write on topics related to my experience in medical school, graduate school, and the combination of the two. For those who do not know, I am a student in an MD-PhD program (thus the “MudPhud” in the title of the blog). The classic paradigm follows a 2-4-2 model of training. Our particular program follows this pattern:

  • 2 years of medical school – These are the preclinical years, where we study biochemistry, histology, pathology, physiology, pharmacology, and related topics. The curriculum is mostly lecture-based, though our school also uses a problem-based learning (PBL) model. A few graduate school courses are taken in parallel with medical school.
  • 3.5-4.5 years of graduate school – We then transition to graduate school, where a few courses are taken in the first year (the third year in the program). After rotating through multiple labs during the previous years, we settle into a lab and perform research for the following years. This phase ends with the defense of a doctoral dissertation.
  • 1.5 years of medical school – These are the clinical years, where students practice on the wards in each of the required fields. This portion of training culminates in graduation from the medical school and thus the MD-PhD program.
  • After the program – Students take multiple paths, ranging from medical residency to a postdoctoral fellowship to work in industry. Most will go on to residency.

The transition from medical school to graduate school is not an easy one. In medical school, one must acquire large quantities of information and share this knowledge at regular intervals (usually on written exams). You could think of it as a very fast treadmill where you do not have access to the controls. The treadmill will continue to push you, and you may or may not feel challenged to keep up. To be honest, I did not find the pace too fast; the challenge for me was the lack of control over my schedule, which wore on me emotionally. During this time, you build a rapport with a large group of classmates who will later become colleagues. The shared experience of medical school creates solidarity among this group.

In graduate school, things change. You are on your own, once again at the bottom rung. It is exciting on one hand, because you can now choose what to study and how to direct your education. On the other hand, you may feel lost. As opposed to a treadmill, this is more like jogging through a forest, where visibility is limited. You can take breaks to reorient yourself, and you can move at your own pace. However, it is difficult to know whether you are making progress, how fast you should be moving, or whether you are completely lost. Your friends in medical school are moving on, and you no longer share the rapport you previously had with them. This creates a distance, and it is often emotionally trying.

As challenging as the transition to graduate school might be, the benefits outweigh the drawbacks. You are now able to study what truly fascinates you. You have control over your schedule, and you determine your own pace. You have access to a vast array of resources, and you can take on additional projects outside of your program. For example, I found myself volunteering with mentorship programs, science fairs, and even a community clinic. The challenges you face in graduate school make each success far more rewarding than it would be if it came easily. A simple rotation or a year-long research project cannot create the same level of suspense, mostly because of its limited timeline and more structured scope. Failure begets learning. Success begets inspiration.

As of this post, I have spent approximately 6-12 hours per day over the past few weeks attempting to solve minor issues in our PID controller. This has meant repeating the same calibration trials daily while I focus on writing a script for data analysis. It’s not challenging, and I’m used to it. However, it reveals a common issue in graduate school: we spend quite a bit of time on minutiae. For some reason, we also enjoy it.

It is this concept of delving into the abyss that I find fascinating about graduate school.