Archives For Medicine

We make decisions based on the data we see. One restaurant serves higher-quality food than another. One presidential candidate aligns more appropriately with our values. One surgical technique yields better outcomes. One applicant submits a stronger job application than a competitor. From these data, we decide what course of action to take. In many cases, these decisions are inconsequential. In others, however, a poor decision may lead to dangerous results. Let’s consider danger.

Imagine you are a surgeon. A patient arrives in your clinic with a particular condition. Let us call this condition, for illustrative purposes, phantasticolithiasis. The patient is in an immense amount of pain. After reviewing the literature on phantasticolithiasis, you discover that this condition can be fatal if left untreated. The review also describes two surgical techniques, which we shall call “A” and “B” here. Procedure A, according to the review, has a 69% success rate. Procedure B, however, seems much more promising, having a success rate of 81%. Based on these data, you prepare for Procedure B. You tell the patient the procedure you will be performing and share some of the information you learned. You tell a few colleagues about your plan. On the eve of the procedure, you call your old friend, a fellow surgeon practicing on another continent. You tell him about this interesting disease, phantasticolithiasis, what you learned about it, and your assessment and plan. There is a pause on the other end of the line. “What is the mass of the lesion?” he asks. You respond that it is much smaller than average. “Did you already perform the procedure?” he continues. You tell him that you didn’t and that the procedure is tomorrow morning.

“Switch to procedure A.”

Confused, you ask your friend why this could be true. He explains the review a bit further. The two procedures were performed on various categories of phantasticolithiasis. However, what the review failed to mention was that procedure A was more commonly performed on the largest lesions, and procedure B on the smallest lesions. Larger lesions, as you might imagine, have a much lower success rate than their smaller counterparts. If you separate the patient population into two categories for the large and small lesions, the results change dramatically. In the large-lesion category, procedure A has a success rate of 63% (250/400) and procedure B has a success rate of 57% (40/70). For the small lesions, procedure A is 99% successful (88/89) and procedure B is 88% successful (210/240). In other words, when controlling for the category of condition, procedure A is always more successful than procedure B. You follow your friend’s advice. The patient’s surgery is a success, and you remain dumbfounded.
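These figures are easy to verify. Here is a quick sketch in Python, using the success counts quoted above, that reproduces both the per-stratum rates and the pooled rates from the review:

```python
# Success counts (successes, attempts) from the hypothetical review,
# split by lesion size.
data = {
    "A": {"large": (250, 400), "small": (88, 89)},
    "B": {"large": (40, 70), "small": (210, 240)},
}

for proc, strata in data.items():
    successes = sum(s for s, _ in strata.values())
    attempts = sum(n for _, n in strata.values())
    for size, (s, n) in strata.items():
        print(f"{proc} ({size}): {s}/{n} = {s / n:.1%}")
    # Pooling the strata is what reverses the ordering.
    print(f"{proc} (pooled): {successes}/{attempts} = {successes / attempts:.1%}")
```

Procedure A wins within every stratum yet loses on the pooled data, which is the paradox in miniature.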

What’s happening here is something called Simpson’s paradox. The idea is simple: when two variables are considered (for example, two procedures), one association results (procedure B is more successful). However, upon conditioning on a third variable (lesion size), the association reverses (procedure A is more successful). This phenomenon has far-reaching implications. For example, since 2000, the median US wage has increased by 1% when adjusted for inflation, a statistic many politicians like to boast about. However, within every educational subgroup, the median wage has decreased. The same can be said for the gender pay gap. In both of his campaigns, Barack Obama fought against the gap, reminding us that women make only 77 cents for every dollar a man earns. However, the problem is more than just a paycheck, and the differences change and may even disappear if you control for job sector or level of education. In other words, policy changes to reduce the gap need to be more nuanced than a campaign snippet. A particularly famous case of the paradox arose at UC Berkeley, where the school was sued for gender bias. The school admitted 44% of its male applicants and only 35% of its female applicants. However, upon conditioning on department, it was found that women applied more often to the departments with lower rates of admission. In two-thirds of the departments, women had a higher admission rate than men.

The paradox seems simple. When analyzing data and making a decision, simply control for other variables and the correct answer will emerge. Right? Not exactly. How do you know which variables should be controlled? In the case of phantasticolithiasis, how would you know to control for lesion size? Why couldn’t you just as easily control for the patient’s age or comorbidities? Could you control for all of them? If you do see the paradox emerge, what decision should you then make? Is the correct answer that of the conditioned data or that of the raw data? The paradox becomes complicated once again.

Judea Pearl wrote an excellent description of the problem and proposed a solution to the above questions. He cites the use of “do-calculus,” a technique rooted in the study of Bayesian networks. Put simply, his methods identify the causal relationships among a set of variables. In doing so, one can find the proper conditioning variables and then decide whether the conditioned data or the raw data are best for decision-making: the variables that confound the causal relationship are the ones that should be conditioned on. If you are interested in the technique and have some experience with the notation, I recommend this brief review on arXiv.
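For our toy example, the adjustment formula behind this idea can be written out directly. Assuming lesion size is the only confounder (a modeling assumption about the fictional data, not something the review tells us), the interventional success rate is P(success | do(proc)) = Σ over sizes of P(success | proc, size) · P(size). A sketch:

```python
# Backdoor adjustment for the hypothetical review data, assuming lesion
# size is the only confounder of the procedure -> success relationship.
data = {
    "A": {"large": (250, 400), "small": (88, 89)},
    "B": {"large": (40, 70), "small": (210, 240)},
}

# Marginal distribution of lesion size over the whole patient population.
n_total = sum(n for strata in data.values() for _, n in strata.values())
p_size = {
    size: sum(data[proc][size][1] for proc in data) / n_total
    for size in ("large", "small")
}

# P(success | do(proc)) = sum over sizes of P(success | proc, size) * P(size)
for proc, strata in data.items():
    effect = sum((s / n) * p_size[size] for size, (s, n) in strata.items())
    print(f"P(success | do({proc})) ~= {effect:.3f}")
```

Under this adjustment, procedure A comes out ahead (roughly 0.77 versus 0.70), matching the friend’s advice; with a different causal graph, the raw data could instead be the right ones to use.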

Of course, rapid and rather inconsequential decisions need not be based on such formalities. On the other hand, it serves all of us well if we at least consider the possibility of Simpson’s paradox on a day-to-day basis. Be skeptical when reading the paper, speaking with colleagues, and making decisions. Finally, if you’re ever unlucky enough to be the first patient with phantasticolithiasis, opt for procedure A.


Radiation Risks

November 8, 2013

A recent discussion with a colleague on the Neurodome project centered on the acquisition of data by computed tomography (CT). Specifically, we sought volunteers for a non-medical imaging study. Volunteers were difficult, if not impossible, to obtain. Not only did we hope to find a person without cavities or implants, but we needed someone who was willing to be exposed to a certain dosage of radiation. Our conversation rapidly evolved into a treatise on radiation exposure and health risks. CT, which exposes patients to X-rays, indeed carries a certain health risk. What are these risks? How significant are they? I’d like to attempt to answer some (but not all) of these questions here.

To properly elucidate the cancer risks of radiation exposure, we must answer a number of important questions. What is the person’s health status? Does this person have any underlying genetic mutations? What type of study is being performed (that is, what is being scanned and for how long, along with the width, shape, and angle of the beam)? For the purposes of this discussion, let us consider an otherwise healthy human undergoing standard scans.

In the links I provide below, you will note two units of radiation exposure. I’d like to clarify these so that you can more easily explore the topic independently of this post. The first unit, the gray (Gy), measures absorbed dose: the energy deposited per kilogram of tissue. The second, the sievert (Sv), measures equivalent dose: the absorbed dose weighted for the biological effect of the radiation and the tissue type. This weighting is rooted in models by medical physicists that attempt to adjust for how different tissues respond to radiation. In these models, you might be surprised by what is most likely to cause cancer after radiation exposure. While direct DNA damage is not insignificant, the major contributor is water. When water molecules absorb ionizing radiation (all radiation in this post is ionizing, unless otherwise specified), they produce free radicals (usually the hydroxyl radical, •OH), which in turn damage the cell. Thus, tissues with higher water content often sustain a higher damaging dose per unit energy.

How much radiation are you exposed to in a given period of time? This handy chart should answer most of those questions. You might be surprised by the amount of exposure from certain activities. A chest CT is ~7 mSv, which is not much greater than the 4 mSv exposure from background radiation in a given year. As a former resident of central Pennsylvania, I was surprised to see that radiation exposure from the Three Mile Island incident resulted in an average of only 80 µSv. On the other hand, workers at Fukushima were exposed to a dose of 180 mSv! These doses are interesting from an academic point of view, but what real risks do they carry?

When we talk about radiation exposure, health risks come in two flavors. The first, deterministic effects, are those that result from an accumulation of radiation exposure. Below a certain threshold, adverse effects are minimal or non-existent. Above this threshold, health problems arise. The threshold varies from person to person and with the health condition we are considering. Examples include hair loss, skin necrosis, sterility, and death. The second, stochastic effects, are those whose probability of occurring increases with radiation exposure. The best example of this is cancer. With low exposure, one has a lower risk of cancer; with high exposure, this risk increases. We often model this as a linear relationship, in which each unit increase in radiation exposure results in a proportional increase in cancer risk. For example, 100 mSv of radiation increases one’s lifetime risk of cancer by 0.5%. Unlike deterministic effects, there is no threshold associated with stochastic effects. There is controversy over the linear model of cancer risk, and more research is needed.
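Under this linear no-threshold assumption, the arithmetic is a one-liner. A sketch in Python, using the 0.5%-per-100-mSv figure quoted above and the doses mentioned in this post (keeping in mind that the linearity itself is contested):

```python
# Stochastic risk under the linear no-threshold (LNT) model:
# excess lifetime cancer risk scales linearly with dose, with no threshold.
RISK_PER_MSV = 0.005 / 100  # 0.5% excess lifetime risk per 100 mSv

def excess_lifetime_risk(dose_msv: float) -> float:
    """Excess lifetime cancer risk implied by the LNT model."""
    return dose_msv * RISK_PER_MSV

for label, dose_msv in [("chest CT", 7.0),
                        ("annual background", 4.0),
                        ("Fukushima worker", 180.0)]:
    print(f"{label}: {excess_lifetime_risk(dose_msv):.4%} excess lifetime risk")
```

A single chest CT works out to a few hundredths of a percent, while the 180 mSv Fukushima worker dose approaches a full percent.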

An example against the linear model of cancer risk is exposure to radiation at high altitudes. Though this differs from a CT scan in many ways, one would still expect exposure to radiation at higher altitudes to be associated with an increased risk of cancer. However, those who live or work at high altitudes (like commercial airline pilots) do not exhibit a greater prevalence of cancer. To put this into perspective, a single round-trip flight across the continental United States results in the same radiation exposure as a chest X-ray. This raises an interesting question: How much risk do medical scans carry?

The answer, as you can see, is fairly complicated. If you want to know how much radiation exposure a particular study carries, there’s a great resource to calculate this. That website assumes the linear no-threshold hypothesis to be true and, as I pointed out, it very well might not be. That being said, any stochastic risks associated with medical scans are often far outweighed by the risks of ignoring a medical condition. In the case of Neurodome, where there is no medical condition at stake, the opposite is sadly true.

Even so, I’d be a happy volunteer.

Plug Me In

October 8, 2013

I’m going to have a bit more fun with this blog post. For this thought experiment, I’d like you to suspend your disbelief. Imagine, for a moment, that someone offered you the chance to “plug” your body into a standard outlet and let yourself “charge.” All of your energy would be gathered from this charging process. You would eat nothing. How long must you remain connected to the outlet? How much will it cost?

Where do we start? There are a few ways to approach this, but I’ll start with the basal metabolic rate for an average adult male. For a 70 kg male, this is typically around 1,600-1,700 Calories (kilocalories). If you would like to do more than just sit against a wall, you will need a bit more energy. Let’s round that up to 2,000 Calories. Converting this to units with which we can work, this comes to 8.36 megajoules (MJ). Like most thought experiments, it is easier to work in orders of magnitude, so we will round this up to 10 MJ.

We now know how much energy we need, but how long will it take to draw this energy from an outlet? Every outlet has a maximum power draw, but very few appliances, if any, reach this maximum value. Power, the rate of energy drawn in joules per second, is measured in watts (W). On average, microwaves draw 1,450 W, vacuum cleaners 630 W, computers 240 W (though, as I type this, I am drawing <100 W), and alarm clocks 2 W. In other words, it’s variable. If we were to charge ourselves like a microwave oven, it would take almost 2 hours. However, if we used a 100 W laptop charger, it would take 28 hours! A laptop charger would thus not suffice, since we would not acquire our necessary daily energy within a single day. All of this energy would eventually be expelled as heat, and you would be a blob of meat plugged into a wall outlet. That’s not a fantastic way to live.
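The charging times above are just energy divided by power. A sketch of the arithmetic, using the average wattages quoted above:

```python
# Time needed to draw one day's energy (~10 MJ) at various power levels.
DAILY_ENERGY_J = 10e6  # ~2,000 Calories, rounded up to 10 MJ

appliance_watts = {
    "microwave": 1450,
    "vacuum cleaner": 630,
    "desktop computer": 240,
    "laptop charger": 100,
    "alarm clock": 2,
}

for name, watts in appliance_watts.items():
    hours = DAILY_ENERGY_J / watts / 3600  # J / (J/s) -> seconds -> hours
    print(f"{name} ({watts} W): {hours:,.1f} h")
```

Only the microwave-level draw finishes comfortably within a day; at 100 W you fall about four hours short, and the alarm clock would take the better part of two months.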

In case you are wondering, architectural engineers model heat production from humans as if they were 100 W light bulbs. This is eerily similar to our 100 W laptop charger that provides just enough energy to get us through a single day!

If you tried all of the above with one of Tesla’s new 10 kW chargers, you’d be ready for your day in about 17 minutes (10 MJ at 10,000 J/s is 1,000 seconds)!

What about the cost? Two apples provide approximately 200 Calories of energy (note that the energy yield from eating is not 100%, so you will actually absorb somewhat less than 200 Calories). The cost of an apple varies with the season, region, type, and quality of the fruit, so for ease of comparison let’s say the two apples cost you $1.00. You spend $1.00 for 200 Calories of fresh, delicious apple. How does this compare to the cost of energy from your wall outlet? In the United States, the average is $0.12 per kWh. The energy in those apples, then, would cost you less than three cents. Over the course of a year, you would spend less than $200 to keep yourself more than fully charged! Imagine spending only that much on food in a given year.
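The cost comparison is equally quick to check, assuming the $0.12/kWh average and a 10 MJ day:

```python
# Cost of a day's energy from the wall, at the U.S. average electricity price.
J_PER_KWH = 3.6e6
PRICE_PER_KWH = 0.12   # USD, U.S. average
DAILY_ENERGY_J = 10e6  # ~2,000 Calories, rounded up to 10 MJ

daily_cost = DAILY_ENERGY_J / J_PER_KWH * PRICE_PER_KWH
snack_j = 200 * 4184   # 200 Calories (two apples) in joules
snack_cost = snack_j / J_PER_KWH * PRICE_PER_KWH

print(f"per day: ${daily_cost:.2f}, per year: ${365 * daily_cost:.2f}")
print(f"200 Calories from the outlet: {100 * snack_cost:.1f} cents")
```

A day of wall-outlet energy runs about a third of a dollar, so the yearly total lands comfortably under $200, and the two-apple snack really does cost under three cents.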

Do not try this at home, or while in the Navy.

Mass Casualty Incidents

April 27, 2013

In the wake of the 2013 Boston Marathon bombings, I feel it is helpful to reflect, as many have already done, on the situation. Atul Gawande, for example, has already provided an excellent review of the preparedness of Boston’s hospitals. The city was prepared thanks to extensive and flexible training, along with the assistance of myriad bystanders. With three deaths among more than 260 casualties, the mortality rate was about 1%. This is remarkable, and numerous factors beyond any disaster plan may have played a role.

What could have led this mortality rate to be so low? First, consider the setting. This was a major public event, with medical personnel readily available. The timing of the bombings was also unusual. The attack occurred not only on a holiday, when operating rooms may not be at capacity, but also shortly before the 3:00 p.m. shift change at nearby hospitals. Space was therefore available, and twice the usual number of medical teams were on hand. The location itself was beneficial, too. Boston is home to world-renowned hospitals, with seven trauma centers; in other words, capacity and quality peak at this location. Even the placement of the bombs mattered. Indoor attacks tend to produce greater injuries due to concentrated blast waves; the shoddy pressure-cooker bombs in this open venue produced fewer primary blast injuries than they would have indoors. Finally, we can see the effects of military medical knowledge making its way into trauma centers in the United States. For example, Boston’s EMS chief hosted a conference to discuss the effects of blast injuries and terrorist attacks across the globe, taking advantage of this military knowledge. Taken together, superb disaster preparedness combined with factors unique to this situation may have reduced the mortality of this attack.

This is not intended to detract from the severity of the Boston Marathon bombings. Like all attacks, this was a national tragedy that struck a chord with many of us. Numerous media outlets reporting on the attack described mass casualties. It is this term that I would like to clarify in this blog post. What does mass casualty mean? Having spoken with numerous colleagues and friends, I find that this term needs some clarification.

First, one must define casualty. This term has both military and civilian usages. A military casualty is any person deemed unfit for duty, typically due to death, injury, illness, or capture. A civilian casualty, in the broadest sense, refers to anyone injured or killed in an incident. I bring this up because the term casualty is often used, incorrectly, as a synonym for fatality; the latter refers only to deaths, while casualties include both fatal and non-fatal injuries.

But what is a mass casualty? The New York Post may want you to believe that this refers to a large number of fatalities, but we already know that a casualty is not a fatality. Is it then a large number of people injured or killed? The answer is more nuanced than that. A mass casualty incident is a term used by emergency medical services, and it does not refer solely to the number of people injured or killed. It is better understood as a protocol that must be followed whenever the number of casualties (or potential casualties) outweighs the resources available. A mass casualty incident could thus range from a major explosion with hundreds injured to a single building with a carbon monoxide leak where no one has yet been hurt. It comes down to the difference between the number of people who must be triaged and the supplies available. The term may be related to total casualties, but it is not a measure of them.

In a mass casualty incident, a protocol of triage, treatment, and transport is followed. First, all persons in the vicinity are triaged: medical personnel assess each person to see who requires the most immediate attention. Triaged patients (color-coded) are then taken to appropriate treatment areas. When necessary, an on-site morgue is set up to handle the dead; this happened in Boston, where an EMS tent served that purpose. Finally, after initial treatment, patients are transported to hospitals for care. The protocol assigns specific guidelines to each member of the team, creating the conditions for efficient triage, treatment, and transport.

It is therefore disheartening to hear cries of “mass casualties” used loosely in the media, often implying a large number of civilian deaths. This creates an air of panic and fear. Events like this must be reported with gravity, but also with accuracy, and the misuse of this term is just one of many mistakes the media made in this scenario. Precision in language matters. More importantly, we must remain accurate and vigilant in both the reporting and the understanding of breaking news.

Though I only focused on one term, I hope this provides a general lesson to all. Misuse of terms in national media misinforms. We must take it upon ourselves to remain educated, promote vigilance in the reading of such reports, and educate others on what we learned.

The text below is modified from a document another Director and I wrote regarding our free clinic based in New York City. I feel it is necessary to disseminate this information in order to dispel beliefs that nearly all those living in the United States will have access to healthcare in the next 5-7 years.

Our clinic has a mission “to provide high-quality, accessible healthcare to uninsured adults through consultation, treatment, preventative care, and referral services, at little or no cost.”  The Affordable Care Act (ACA), signed in 2010 and colloquially known as “Obamacare,” redefines the population of uninsured adults in the United States. However, a significant portion of this group will remain uninsured, and free clinics will continue to provide a safety net for this population.

We currently admit uninsured adults who earn less than 400% of the Federal Poverty Level.  Thus, our clinic provides services to those who do not have access to healthcare and cannot afford the options available.  The ACA is often portrayed as near “universal coverage,” especially in the popular media.  Unfortunately, this portrayal does not reflect reality. The Congressional Budget Office (CBO) estimates that from 2014 through 2019, mandates and subsidies will reduce the number of uninsured adults in the United States by about 32 million.  However, this still leaves over 23 million uninsured in 2019. While a significant reduction, this leaves a wide gap that must be filled by safety-net programs.  About 4-6 million people will pay some penalty in 2016, with over 80% of them earning less than 500% of the Federal Poverty Level. (Note that the increase in the CBO’s estimate of the uninsured from 23 million to 26 million reflects changes in Medicaid legislation.) The uninsured population will include, but will not be limited to, undocumented immigrants, those who opt to pay penalties, and those who cannot afford premiums (often those earning less than 500% of the Federal Poverty Level). Thus, many may still be unable to afford the options available, and free clinics will continue to welcome them.

The gap in healthcare access will narrow over the coming years.  Even so, millions of adults will remain uninsured, and community healthcare programs across the country will continue to provide care for those who need it most.


  1. Congressional Budget Office, “Selected CBO Publications Related to Healthcare Legislation, 2009-2010.”
  2. Congressional Budget Office, “Another Comment on CBO’s Estimates for the Insurance Coverage Provisions of the Affordable Care Act.”
  3. Congressional Budget Office, “Estimates for the Insurance Coverage Provisions of the Affordable Care Act Updated for the Recent Supreme Court Decision.”
  4. Chaikand et al., “PPACA: A Brief Overview of the Law, Implementation, and Legal Challenges.”

While working on my current grant application, I was astounded by the prevalence of hearing impairment in the United States. This raised a question: Is hearing impairment currently underdiagnosed, overdiagnosed, or neither? After perusing the literature, I found the answer to be fairly complicated. While presbycusis (age-related hearing loss) is believed to be underdiagnosed in the U.S., the prevalence of hearing loss appears to be fairly high in this country when compared with worldwide statistics (over 30 million affected in the United States, out of about 275 million around the world). Is this due to relatively better diagnosis in the U.S., or is something else going on? Here, I’ll delve into that question through the following:

  • Statistics on hearing impairment of all types in the United States
  • Statistics on hearing impairment of all types in various other countries
  • Comparison of screening and management in these regions

It is helpful to consider these basic data before trying to determine any real differences between countries. When discussing changes in the prevalence or incidence of a disease, one must first account for diagnostic bias. Regardless, I hope readers will leave with the impression that hearing loss is a major problem, one that will become more apparent as our population ages.

For the purposes of this post, remember that hearing loss is defined as a hearing threshold greater than 25 dB, where 0 dB is defined as the sound pressure at which young, healthy listeners can hear a given frequency 50% of the time. Functional impairment begins when hearing loss interferes with understanding conversation, which typically occurs at thresholds of 50-60 dB. A recent review of hearing loss in the United States estimated that over 10% of the population has bilateral hearing loss (>25 dB HL), and over 20% have at least unilateral hearing loss. This staggering statistic rises to over 55% in those aged 70 and older, and to nearly 80% by the age of 80. With an aging population in the United States, this becomes a major public health concern.

The causes of hearing impairment include genetic, drug-induced, and noise-induced hearing loss. With increased exposure to loud noise, noise-induced hearing loss has become more prevalent over time. However, nonsyndromic and syndromic genetic hearing loss accounts for about 50% of impairments in children. Remaining environmental causes include the “TORCH” organisms and other neonatal infections. The problem is a vast one, and it will grow as the population ages.

Considering the vastness of this problem, how well do we screen for it? The answer is: poorly. Only 9% of internists offer screening to those aged 65 and older, and only 25% of those whose hearing impairment could be treated with hearing aids actually use them. This is a failure of both screening and management. We must therefore reiterate the prevalence of this condition and do what we can to improve the current state of underdiagnosis and undertreatment. To answer the question above: we still do not do a stellar job of screening for hearing loss.

How do we compare with other countries? Is hearing loss more prevalent in the United States, even though our screening programs are not ideal? It’s actually the opposite. Hearing loss is more prevalent in middle- and lower-income countries, but screening there is so poor that the numbers are staggeringly underreported. Compared with the rate of about 10-20% in the United States, prevalence rises to over 25% in southeast Asia, 20-25% in sub-Saharan Africa, and over 20% in Latin America. The WHO reports about 275 million people with moderate to severe hearing impairment (note that the values listed above are for mild impairment) and estimates that approximately 80% of them live in less wealthy nations. If we include everyone with any degree of hearing impairment (including mild, >25 dB HL), the number rises to 500-700 million people around the world (with 30-40 million in the United States). There is also very little information on hearing aid use in low- and middle-income countries (excluding Brazil), as many of these countries tend toward worse management than the United States.

When discussing the “global burden of disease,” hearing impairment is a prime example. It is a health condition that affects all countries, and in much the same way. Even though prevalence is lower in high-income countries, consider that 1 in 5 people will develop some form of hearing loss. We must therefore raise our standards for the screening and management of this condition.

Flexner and Curricular Reform

November 19, 2012

While working with our medical school on curricular reform, I find that one piece of literature is mentioned again and again: the Flexner Report. Most, if not all, of those on the committees know what it is and what it entails. However, those with whom I discuss the reform outside of the committees are often left dumbfounded. Many understand the need to reform medical curricula, but far fewer know the history of its structure in the United States.

Prior to the 20th century, American medical education was dominated by three systems: an apprenticeship system, a proprietary school system, and a university system. The lack of standardization inevitably resulted in a wide range of expertise, and the best students left the United States to study in Paris or Vienna. In response, the American Medical Association established the Council on Medical Education (CME) in 1904. The council’s goal was to standardize medical education and to develop an ‘ideal’ curriculum. It asked the Carnegie Foundation for the Advancement of Teaching to survey medical schools across the United States.

Abraham Flexner, a secondary school teacher and principal with no background in medicine, led the project. In a year and a half, Flexner visited over 150 U.S. medical schools, examining their entrance requirements, the quality of their faculty, the size of their endowments and tuition, the quality of their laboratories, and their teaching hospitals (if present). He released his report in 1910. He found that most medical schools did not adhere to a strict scientific curriculum, and he concluded that many were operating as businesses to make money rather than to educate students:

“Such exploitation of medical education […] is strangely inconsistent with the social aspects of medical practice. The overwhelming importance of preventive medicine, sanitation, and public health indicates that in modern life the medical profession is an organ differentiated by society for its highest purposes, not a business to be exploited.”

In response, the Federation of State Medical Boards was established in 1912. The group, together with the CME, enforced a number of accreditation standards that are still in use today. The ‘ideal’ curriculum they implemented comprised two years of basic science followed by two years of clinical rotations. Faculty and teaching hospitals had to meet certain quality standards, and admissions requirements were standardized. As a result, many schools shut down: prior to the formation of the CME, there were 166 medical schools in the United States; by 1930, there were 76. One negative consequence was an immediate reduction in new physicians to treat disadvantaged communities, and the socioeconomically disadvantaged themselves found it more difficult to obtain a medical education, creating yet another barrier for them. Nonetheless, the report and the actions that followed were key in reshaping medical curricula in the United States to embrace scientific advancement.

Today, medical schools across the country embrace the doctrines established 100 years ago. Most schools continue to follow the curriculum previously imposed. Scientific rigor is a key component. However, medical educators are currently realigning curricula to embrace modern components of medicine and to focus on the service component of medicine that is central to the doctor-patient relationship.

In 2010, one century after the release of the Flexner Report, the Commission on Education of Health Professionals for the 21st Century was launched. By the turn of the 21st century, gaps within and between countries had become glaring. Health systems struggle to keep up with new infectious agents, epidemiological transitions, and the complexities and costs of modern health care. Medical education has once again become fragmented. There is a mismatch between the competencies of graduates and the needs of populations. We focus on hospitals over primary care. Leadership in medicine is lacking. The interdisciplinary structure of medicine requires that we no longer act as isolated professions. As a result, a redesign of the curriculum is required.

The Commission surveyed the world’s 2,420 medical schools and 467 schools of public health. The United States, India, Brazil, and China, each with over 150 medical schools, were the most heavily sampled; in contrast, 36 countries had no medical school at all. Across the globe, it costs approximately US$116,000 to train each medical graduate and US$46,000 to train each nurse, with costs highest in North America. There is little to no standardization between countries, similar to the disjointed state of American medical education in the early 20th century. The globalization of medicine thus requires reform.

Reform of medical education did not stop with Flexner. After the science-based curriculum introduced by the report, the mid-20th century saw a focus on problem-based learning. However, a new reform is now required that seeks a global perspective. A number of core professional skills were recommended by the Commission, and these must be implemented in medical curricula across the globe.

Within the United States, medical educators seek to reform curricula to be more in line with the global perspective of the modern era, focusing more on global health initiatives and service learning. Additionally, health care reform in America will bring with it new challenges, and medical school curricula must keep up. How this will be accomplished is still under heavy discussion.

When considering any reform, it is helpful to remind oneself of its historical context. In this case, the disjointed structure within the United States at the time of Flexner parallels the disjointed global structure of the world seen today. Though changes will be of a very different nature, motivations remain the same.

The recent results from the trial of Lance Armstrong compel me to write about doping in some fashion. Many articles have been written on the topic lately; doping is widespread, and it has been the focus of numerous debates. What else can be said about the issue? I will take a different approach: a historical account of doping. To do so, let's flash back to the 1904 Olympics.

Thomas Hicks was a marathon runner born in the United Kingdom but representing the United States at the Summer Olympics in St. Louis. He crossed the finish line second behind Fred Lorz, who was disqualified for covering a majority of the course in his manager's car after succumbing to a bout of exhaustion. The judges deemed Hicks the gold medalist in the event.

However, Mr. Hicks was not himself an innocent man. He had been “doping” with approximately 1 milligram of strychnine, raw eggs, and brandy. After his second dose of strychnine, Hicks collapsed just across the finish line. Some sources state that another dose might have killed him, but the lethal dose (at least in dogs and cats) is cited at 0.75 mg/kg; at that rate, a lethal dose for a man of Hicks’s size would be more than an order of magnitude greater than what he received. Nonetheless, we can conclude that Mr. Hicks would have been disqualified by today’s standards.
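The order-of-magnitude claim can be sanity-checked with back-of-the-envelope arithmetic. This sketch assumes a body mass of roughly 60 kg for Hicks (not stated in the source) and that the 0.75 mg/kg figure cited for dogs and cats carries over to humans:

```python
# Rough check: how far below the cited lethal dose was Hicks's intake?
LETHAL_DOSE_MG_PER_KG = 0.75   # cited lethal dose (dogs and cats)
BODY_MASS_KG = 60.0            # assumed mass for a marathon runner
DOSE_RECEIVED_MG = 2.0         # two doses of roughly 1 mg each

lethal_dose_mg = LETHAL_DOSE_MG_PER_KG * BODY_MASS_KG   # 45 mg
ratio = lethal_dose_mg / DOSE_RECEIVED_MG               # ~22x

print(f"Estimated lethal dose: {lethal_dose_mg:.0f} mg")
print(f"Dose received: {DOSE_RECEIVED_MG:.0f} mg, about {ratio:.0f}x below lethal")
```

Even with generous error bars on the assumed body mass, the estimated lethal dose stays well over ten times what Hicks ingested, which is what makes the "another dose may have killed him" claim doubtful.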

Considering that doping has been so widespread, why do we not allow it? Why do we permit performance-enabling medications, such as painkillers, but forbid performance-enhancing ones? With the advent of new technology, should we simply accept doping as a component of competition?

In a recent article, the effects of doping on at-risk populations were reviewed. I recommend reading it to get a grasp of the different methods of doping. The authors argue that widespread doping among international athletes has spread, and will continue to spread, to adolescent populations. Beyond sport, adolescents use the same products for cosmetic and other non-athletic purposes. For example, steroids and human growth hormone (hGH) have been used by high school girls to reduce fat and increase muscle tone. Considering the wide range of general health and developmental dangers associated with anabolic steroids, hGH, erythropoietin (EPO), anti-estrogens, stimulants, and more, we must focus our efforts on educating these populations.

Still, the debate continues. A consequence that is all too often ignored is the one I described above: what is not dangerous for professional athletes may prove detrimental to adolescents. And even if these substances were considered safe for all populations, their use can lead to abuse without proper education in place.

Recently, The Lancet published yet another article on Obama’s Global Health Initiative. In it, the writer points out the numerous failures of the GHI. The $63 billion budget was not new money; it was a new label for funds already budgeted elsewhere. Where the GHI differed was in its goal to place all of the leadership under one organization. A central office was created, but it was shut down in July. The article then focuses on the tensions that arose when USAID took over leadership of the program. I could go on about the successes and failures of global health initiatives, but I would prefer to focus on a more basic question: what are the GHIs? It is my belief that productive debate will arise if and only if we are adequately informed.

The Global Health Initiatives focus mainly on infectious disease and on strengthening health care systems around the world. Prior to the Obama administration, they were many separate organizations (and, if we are honest, they still act as such). PEPFAR and the Global Fund to Fight AIDS focused on HIV/AIDS. The Global Fund also targeted tuberculosis and malaria. The GAVI Alliance put its efforts into immunization. The World Bank’s MAP dealt with AIDS and nutrition. Not all of these are United States programs, and Obama’s plan called for a comprehensive effort, similar to (and including) these programs, that would combine their work to improve their effectiveness.

I will instead describe the current administration’s global health initiative, without a critique. In November 2009, the goals of the GHI were to double US aid for global health to approximately $16 billion per year by 2011, to establish goals for the US to assist in addressing the Millennium Development Goals, and to scale up domestic health efforts. The six areas of focus included HIV, tuberculosis, malaria, reproductive health, health systems, and neglected tropical diseases. The November report made three recommendations. First, the group wished to define measurable, US-specific GHI targets focused on the delivery of care. Second, they recommended that funding be increased to $95 billion over six years, an increase from the original budget. Finally, they recommended that the GHI focus on outcomes and be people-based. Overall, the recommendations were subtle and not clearly defined, but they hinted at the theme of the GHI: to provide a comprehensive program through which the United States could better address global health. This was sold as a change from the disease-specific nature of Bush’s programs to one focused on health systems and delivery.

In July 2012, the GHI office was officially closed by the Obama administration. The closure was touted as a productive shift, but in reality it was due to myriad problems encountered by the program. The program lacked core leadership, and those in the developing world had trouble knowing what defined a GHI project. While it had a huge budget, the office had only four full-time employees. The idea remained, but the office did not.

There is far more to this story, but that is what you should know about Obama’s GHI. It was and still is an interesting idea, but it remained an idea. What we need are solutions with better focus.

Two years ago, I wrote a piece for a public health group based on a project from medical school. An excerpt follows:

“In June 2009, President Barack Obama signed into law the Family Smoking Prevention and Tobacco Control Act (HR 1256). This legislation would require all tobacco products and advertising to have a graphic warning covering 50 percent of the front and back of the package. The FDA has proposed a number of graphic designs […] The proposed designs include grotesque imagery in an attempt to dissuade smoking in the United States. According to the Centers for Disease Control and Prevention, smoking accounts for approximately 443,000 deaths per year in the US, including deaths from lung cancer, cardiovascular disease, COPD, and numerous other morbidities. It is thus apparent that smoking is a public health concern, and these new warning labels hope to address the concern by deterring such behavior. However, though the proposed graphic labels may be more effective than the previous Surgeon General’s Warning (a text-only message on the side of the package), these labels can be greatly improved through what will be defined as a gains-based message as opposed to the proposed loss-based message. In doing so, the labels would not only educate the public on the dangers of smoking, but it will be argued that they will encourage smoking prevention and cessation behavior. In fact, it is argued that the currently proposed labels may do more harm than good. To make this argument, three assumptions must be made. First, as hinted above, smoking is a public health concern. Second, tobacco warning labels are designed to result in human behaviors of smoking cessation and prevention. Finally, human behavior is, in some circumstances, predictable.

This is not to say that the proposed labels by the FDA or those currently being used around the world are completely ineffective. In fact, the graphic labels may be more effective than the small text-only Surgeon General’s Warning. However, there is a wide margin for improvement. The proposed labels appear to be far too grotesque. Though admittedly fear-inducing, this negative emotion will most likely lead to reactance behavior. Expect sales of slip covers to increase, along with the possibility of some smokers increasing their smoking behavior. Smoking rates may continue to decline, but the rate of this decline may not yet be optimal. Data from other countries, along with numerous experimental studies, have demonstrated that confounding factors can contribute to the decline in these countries, and grotesque imagery can result in maladaptive behavior. A truly effective label would be designed with a positive, gain-framed message. It would be designed to motivate behavioral change and encourage self-efficacy. Data from others who were able to quit can enhance subjective norms. Imagery depicting the benefits of quitting or those who were able to quit can further eliminate reactance. All of this would then be coupled with resources on quitting, such as phone numbers, web sites, and support groups. This is a war that cannot be fought with fire. As demonstrated every time a cigarette is lit, fire is only good for lighting up.”

First, I believe that this claim still holds, and my predictions, while probably still true, mean little when compared to the growing need to reform health care. However, a greater concern lies in how we approach public health research. Within basic science, and especially physics, we like to break larger systems down into their components, analyze them, and search for unifying hypotheses. That is one method, but the concept is simple: focus on the rules of a system in order to predict its behavior.

Often, this concept is lost in public health research. A perusal of the literature reveals that such concepts appear as hypotheses in the discussion sections of papers, but one sees very few examples where they are actually applied. In the case of tobacco warning labels, concepts from behavioral psychology may be applied to predict the behavior resulting from labeling campaigns. The validity of these models remains to be seen, owing to small sample sizes. Nonetheless, studies have been performed: phone surveys revealed results similar to what the theory of reactance predicts.

Briefly, the theory of reactance predicts a cycle in human behavior. One begins with some level of (1) freedom, which is then (2) threatened. The person will (3) react and undergo a (4) restoration of this freedom. Types of reactance include those exhibited by certain groups (trait reactance) and those provoked by particular threats to freedom (state reactance). These can be measured through fear, anxiety, disgust, and the like, all of which are predicted to increase with the level of reactance. The restoration of freedom is key, and it occurs through avoidance, acting out, and similar behaviors. In the case of smoking, this would manifest as increased rates of smoking, a reduced desire to quit, downplaying of harmful effects, and avoidance through the purchase of slip covers. A study by Dillard et al. in 2005 pointed out such concepts, and phone surveys on tobacco noted such reactions.

Another concept is message framing. There are two types of messages in this theory, gain-framed and loss-framed. A gain-framed message focuses on the benefit of performing a task, while a loss-framed message focuses on the risks associated with not performing the task. Loss-framed messages include those pointing out the risks of skipping regular mammograms or other early-detection methods. Gain-framed messages include those highlighting the benefits of exercise and sunscreen use. The current tobacco warning labels would be improved by avoiding reactance through the use of gain-framed messages, as Tamera Schneider pointed out in the Journal of Applied Psychology in 2001.

This highlights an important issue in public health and other epidemiological studies: they often fail to cite and properly utilize foundational psychological or basic science research. This shapes policy in a less-informed manner, sometimes leading to unforeseen negative outcomes. By improving communication between the basic sciences and epidemiology, policy changes may become more effective.