Archives For Communication

We make decisions based on the data we see. One restaurant serves higher-quality food than another. One presidential candidate aligns more appropriately with our values. One surgical technique yields better outcomes. One applicant submits a stronger job application than a competitor. From these data, we decide what course of action to take. In many cases, these decisions are inconsequential. In others, however, a poor decision may lead to dangerous results. Let’s consider danger.

Imagine you are a surgeon. A patient arrives in your clinic with a particular condition. Let us call this condition, for illustrative purposes, phantasticolithiasis. The patient is in an immense amount of pain. After reviewing the literature on phantasticolithiasis, you discover that this condition can be fatal if left untreated. The review also describes two surgical techniques, which we shall call “A” and “B” here. Procedure A, according to the review, has a 69% success rate. Procedure B, however, seems much more promising, having a success rate of 81%. Based on these data, you prepare for Procedure B. You tell the patient the procedure you will be performing and share some of the information you learned. You tell a few colleagues about your plan. On the eve of the procedure, you call your old friend, a fellow surgeon practicing on another continent. You tell him about this interesting disease, phantasticolithiasis, what you learned about it, and your assessment and plan. There is a pause on the other end of the line. “What is the mass of the lesion?” he asks. You respond that it is much smaller than average. “Did you already perform the procedure?” he continues. You tell him that you didn’t and that the procedure is tomorrow morning.

“Switch to procedure A.”

Confused, you ask your friend why this could be true. He explains the review a bit further. The two procedures were performed on various categories of phantasticolithiasis. However, what the review failed to mention was that procedure A was more commonly performed on the largest lesions, and procedure B on the smallest lesions. Larger lesions, as you might imagine, have a much lower success rate than their smaller counterparts. If you separate the patient population into two categories for the large and small lesions, the results change dramatically. In the large-lesion category, procedure A has a success rate of 63% (250/400) and procedure B has a success rate of 57% (40/70). For the small lesions, procedure A is 99% successful (88/89) and procedure B is 88% successful (210/240). In other words, when controlling for the category of condition, procedure A is always more successful than procedure B. You follow your friend’s advice. The patient’s surgery is a success, and you remain dumbfounded.
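The reversal is easy to verify with a short computation. The sketch below (pure Python, using the illustrative success counts from the fictional review) recomputes both the pooled and the per-category success rates:

```python
# Success counts from the (fictional) phantasticolithiasis review,
# stored as (successes, total) per procedure and lesion size.
data = {
    "A": {"large": (250, 400), "small": (88, 89)},
    "B": {"large": (40, 70), "small": (210, 240)},
}

def rate(successes, total):
    return successes / total

for proc, groups in data.items():
    pooled_s = sum(s for s, _ in groups.values())
    pooled_n = sum(n for _, n in groups.values())
    print(f"Procedure {proc}: pooled {rate(pooled_s, pooled_n):.1%}, "
          f"large {rate(*groups['large']):.1%}, "
          f"small {rate(*groups['small']):.1%}")

# Pooled, B (80.6%) beats A (69.1%); within each lesion-size
# category, A beats B -- Simpson's paradox.
```

The pooled rates match the review's 69% and 81%, yet procedure A wins in every subgroup.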

What’s happening here is something called Simpson’s paradox. The idea is simple: when two variables are considered (for example, two procedures), one association results (procedure B is more successful); when a third variable (lesion size) is conditioned on, however, the association reverses (procedure A is more successful). This phenomenon has far-reaching implications. For example, since 2000, the median US wage has increased by 1% when adjusted for inflation, a statistic many politicians like to boast about. Within every educational subgroup, however, the median wage has decreased. The same can be said for the gender pay gap. In both of his campaigns, Barack Obama fought against the gap, reminding us that women make only 77 cents for every dollar a man earns. But the problem is more than just a paycheck: the differences change, and may even disappear, if you control for job sector or level of education. In other words, policy changes to reduce the gap need to be more nuanced than a campaign snippet. A particularly famous case of the paradox arose at UC Berkeley, which was sued for gender bias after admitting 44% of its male applicants but only 35% of its female applicants. Upon conditioning on department, however, it was found that women applied more often to the departments with lower admission rates; in two-thirds of the departments, women actually had a higher admission rate than men.

The paradox seems simple. When analyzing data and making a decision, simply control for other variables and the correct answer will emerge. Right? Not exactly. How do you know which variables should be controlled for? In the case of phantasticolithiasis, how would you know to control for lesion size? Why couldn’t you just as easily control for the patient’s age or comorbidities? Could you control for all of them? And if you do see the paradox emerge, what decision should you then make? Is the correct answer that of the conditioned data or that of the raw data? The paradox becomes complicated once again.

Judea Pearl has written an excellent description of the problem and proposed a solution to the above questions. He advocates “do-calculus,” a technique rooted in the study of Bayesian networks. Put simply, his methods uncover the causal relationships among a set of variables. With the causal structure in hand, one can identify the proper conditioning variables and then decide whether the conditioned data or the raw data are best for decision-making; it is the variables that carry the causal influence that should be used. If you are interested in the technique and have some experience with the notation, I recommend this brief review on arXiv.

Of course, rapid and rather inconsequential decisions need not be based on such formalities. On the other hand, it serves all of us well if we at least consider the possibility of Simpson’s paradox on a day-to-day basis. Be skeptical when reading the paper, speaking with colleagues, and making decisions. Finally, if you’re ever lucky enough to be the first patient with phantasticolithiasis, opt for procedure A.


Mass Casualty Incidents

April 27, 2013

In the wake of the 2013 Boston Marathon bombings, I feel it is helpful to reflect, as many have already done, on the situation. Atul Gawande, for example, has already provided an excellent review of the preparedness of Boston’s hospitals. The city was prepared thanks to extensive and flexible training, along with the assistance of myriad bystanders. With three deaths among more than 260 casualties, the mortality rate was about 1%. This is remarkable, and numerous factors beyond any disaster plan may have played a role.

What could have led this mortality rate to be so low? First, consider the location. This was a major public event, with medical personnel readily available. The timing of the bombings was also unique: they occurred not only on a holiday, when operating rooms may not be at capacity, but also shortly before the 3:00 p.m. shift change at nearby hospitals. As a result, space was available, and twice the usual number of medical teams were on duty. The location itself was beneficial, too. Boston is home to world-renowned hospitals, including seven trauma centers; capacity and quality both peak there. Even the location of the bombs mattered. Indoor attacks tend to produce greater injuries because blast waves are concentrated; the shoddy pressure-cooker bombs in this open venue produced fewer primary blast injuries than they would have indoors. Finally, we can see the effects of military medical knowledge making its way into trauma centers in the United States. For example, Boston’s EMS chief hosted a conference to discuss the effects of blast injuries and terrorist attacks across the globe, taking advantage of this military knowledge. Taken together, superb disaster preparedness combined with factors unique to this situation may have reduced the mortality of this attack.

This is not intended to detract from the severity of the Boston Marathon bombings. Like all attacks, this was a national tragedy that struck a chord with many of us. Numerous media outlets reporting on the attack described “mass casualties,” and it is this term that I would like to clarify in this blog post. What does mass casualty actually mean? Having spoken with numerous colleagues and friends, I find that the term is widely misunderstood.

First, one must define casualty. The term has both military and civilian usages. A military casualty is any person deemed unfit for war, typically due to death, injury, illness, or capture. A civilian casualty, in the broadest sense, refers to anyone injured or killed in some incident. I bring this up because casualty is often used, incorrectly, as a synonym for fatality; the latter refers only to deaths, whereas casualties include both fatal and non-fatal injuries.

But what is a mass casualty? The New York Post may want you to believe that this refers to a large number of fatalities, but we already know that a casualty is not a fatality. Is it then a large number of people injured or killed? The answer is more nuanced than that. A mass casualty incident is a term used by emergency medical services, and it does not refer solely to the number of people injured or killed. It is better understood as a protocol to follow in any situation where the number of casualties (or potential casualties) exceeds the resources available. A mass casualty incident could thus range from a major explosion with hundreds injured to a single building with a carbon monoxide leak where no one has yet been hurt. It comes down to the difference between the number of people who must be triaged and the supplies available. The term may correlate with total casualties, but it is not a measure of them.

In a mass casualty incident, a protocol of triage, treatment, and transport is followed. First, all persons in the vicinity are triaged: medical personnel assess each person to determine who requires the most immediate attention. Triaged patients, color-coded by severity, are then taken to appropriate treatment areas. In extreme situations, an on-site morgue is set up; this happened in Boston, where an EMS tent served as the morgue. Finally, after initial treatment, patients are transported to hospitals for care. The protocol assigns specific guidelines to each member of the team, enabling efficient triage, treatment, and transport.

It is disheartening, then, to hear “mass casualties” used loosely in the media, often implying a large number of civilian deaths. This creates an air of panic and fear. While we must report events like this with appropriate gravity, we must also do so accurately, and the misuse of this term is just one of many mistakes the media made in this scenario. Precision in language is important, and we must remain accurate and vigilant in both the reporting and the understanding of breaking news.

Though I only focused on one term, I hope this provides a general lesson to all. Misuse of terms in national media misinforms. We must take it upon ourselves to remain educated, promote vigilance in the reading of such reports, and educate others on what we learned.

Those who work closely with me know that I am part of a project entitled Neurodome (www.neurodome.org). The concept is simple. To better understand our motivations to explore the unknown (e.g. space), we must look within. To accomplish this, we are creating a planetarium show using real data: maps of the known universe, clinical imaging (fMRI, CT), and fluorescent imaging of brain slices, to name a few. From our web site:

Humans are inherently curious. We have journeyed into space and have traveled to the bottom of our deepest oceans. Yet no one has ever explained why man or woman “must explore.” What is it that sparks our curiosity? Are we hard-wired for exploration? Somewhere in the brain’s compact architecture, we make the decision to go forth and explore.

The NEURODOME project is a planetarium show that tries to answer these questions. Combining planetarium production technology with high-resolution brain imaging techniques, we will create dome-format animations that examine what it is about the brain that drives us to journey into the unknown. Seamlessly interspersed with space scenes, the NEURODOME planetarium show will zoom through the brain in the context of the cutting edge of astronomical research. This project will present our most current portraits of neurons, networks, and regions of the brain responsible for exploratory behavior.

To embark upon this journey, we are launching a Kickstarter campaign next week, which you will be able to find here. Two trailers and a pitch video showcase our techniques and our vision. For now, you can see our “theatrical” trailer, which combines some real data with CGI, below. Note that the other trailer I plan to embed in a later post will include nothing but real data.

I am both a software developer and the curator of clinical data on this project. This involves acquiring high-resolution fMRI and CT data, then rendering these slices into three-dimensional objects that can be used in our dome-format presentation. How do we do this? I will begin by explaining how I reconstructed a human head from sagittal sections of CT data. In a later post, I will describe how we can take fMRI data of the brain and reconstruct three-dimensional models by a process known as segmentation.

How do we take a stack of images like this:

[animated GIF of the sagittal CT image stack]

and convert it into three-dimensional objects like these:

[rendered animations of the reconstructed head]

These renders allow us to transition, in a large-scale animation, from imagery outside the brain to fMRI segmentation data and finally to high-resolution brain imaging. The objects are convenient because they can be imported into most animation suites. To render stacks of images, I created a simple script in MATLAB. A stack of 131 sagittal sections, each with 512×512 resolution, is first imported. The script then defines a rectangular grid in 3D space, and the pixel data from each CT slice are interpolated and mapped onto this 3D mesh. For example, we can take a 512×512 two-dimensional slice and interpolate it so that the new resolution is 2048×2048. Note that this does not create new data; it only creates a smoother gradient between adjacent points. If there is interest, I can expand upon three-dimensional interpolation in a later post.

I then take this high-resolution volume, mapped to the previously defined three-dimensional grid, and create an isosurface. MATLAB’s isosurface function takes volume data in three dimensions and a chosen isovalue, which here corresponds to a particular intensity in our CT data. The function finds all points at that intensity in three dimensions and connects the dots, producing a surface on which every point has the same intensity. The resulting vertices and faces are stored in a “structure” in the workspace, which the script finally converts to a three-dimensional “object” file (.obj). Such object files can be used in most animation suites, such as Maya or Blender; using Blender, I created the animations shown above. Different isovalues correspond to different parts of the image: a value of ~1000 corresponds to skin in the CT data, and a value of ~2400 corresponds to bone. Thus, we can take a stack of two-dimensional images and create beautiful structures for exploration in our planetarium show.
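The core idea behind isosurface extraction can be shown in one dimension. The sketch below is not the project’s MATLAB code; it is a minimal pure-Python analogue (the helper name `crossings` and the toy intensity profile are mine) that locates where an intensity profile crosses a chosen isovalue by detecting sign changes and interpolating between samples:

```python
def crossings(row, isovalue):
    """Find fractional positions where a 1D intensity profile crosses
    the isovalue -- the one-dimensional analogue of isosurface extraction."""
    points = []
    for i, (a, b) in enumerate(zip(row, row[1:])):
        if (a - isovalue) * (b - isovalue) < 0:  # sign change between samples
            t = (isovalue - a) / (b - a)         # linear interpolation
            points.append(i + t)
    return points

# A toy intensity profile: background, "skin"-like, "bone"-like, back down.
profile = [0, 500, 1500, 2600, 2600, 1500, 500, 0]
print(crossings(profile, 1000))  # -> [1.5, 5.5]
print(crossings(profile, 2400))  # two crossings flanking the high-intensity plateau
```

In three dimensions, the same crossing-detection runs along every grid edge and the crossing points are stitched into triangles, which is what produces the vertex and face lists mentioned above.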

In summary, the process is as follows:

  1. A stack of sagittal CT images is imported into MATLAB.
  2. The script interpolates these images to increase the image (but not data) resolution.
  3. A volume is created from the stack of high-resolution images.
  4. The volume is “sliced” into a surface corresponding to just one intensity level.
  5. This surface is exported to animation suites for your viewing pleasure.
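The interpolation in step 2 can be sketched in a few lines. This is not the project’s MATLAB script; it is a minimal pure-Python analogue (the helper name `upsample2x` is mine) that roughly doubles a slice’s resolution by inserting linearly interpolated values between existing pixels:

```python
def upsample2x(slice_2d):
    """Roughly double the resolution of a 2D slice by linear interpolation.

    New values are averages of their neighbors: no new information is
    created, only a smoother gradient between existing points.
    """
    def interp_rows(grid):
        out = []
        for row in grid:
            new_row = [row[0]]
            for a, b in zip(row, row[1:]):
                new_row.extend([(a + b) / 2, b])  # insert the midpoint
            out.append(new_row)
        return out

    once = interp_rows(slice_2d)                       # interpolate along rows
    cols = [list(c) for c in zip(*once)]               # transpose
    twice = interp_rows(cols)                          # interpolate along columns
    return [list(r) for r in zip(*twice)]              # transpose back

ct = [[0, 4],
      [8, 12]]
print(upsample2x(ct))  # -> [[0, 2.0, 4], [4.0, 6.0, 8.0], [8, 10.0, 12]]
```

The 2×2 toy slice becomes a 3×3 grid whose new entries are simple averages, which is exactly why the 512×512 → 2048×2048 step in the pipeline smooths the data without adding any.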

This series will continue in later posts. I plan to describe more details of the project, and I will delve into particulars of each post if there is interest. You can find more information on this project at http://www.neurodome.org.

For those not aware, peer review is the process by which members of a field evaluate the work of other members of the same field as a form of regulation, increasing credibility and, presumably, quality within the field. The term is broad: it can refer to the review of manuscripts for publication, the review of teaching methods by other educators, or the creation and maintenance of health care standards within the medical profession. For the purposes of this post, I will focus on scholarly peer review in publication and peer review in the clinical setting; technical peer review in fields like engineering, and standardization within education, will not be discussed here. Remember, though, that “peer review” encompasses many fields.

In 1665, Henry Oldenburg created the first scientific journal to undergo peer review, the Philosophical Transactions of the Royal Society. Peer review in that journal differed from the peer review we see today: whereas professionals in the same field, often in competing labs, review today’s articles, articles in the Transactions were reviewed by the Council of the Society. The journal created a foundation for the papers we see today, disseminating peer-reviewed work and archiving it for later reference. In the 18th century, peer review developed into a process in which other professionals, often experts in the field, performed the review, as opposed to the editorial review described above. This form of scholarly peer review did not become institutionalized until closer to the 20th century. Professional peer review, however, such as that performed by physicians, dates back to the 9th and 10th centuries, when one physician would comment on the ethical decisions or procedures of another.

Since that time, scholarly peer review has become a mainstay of academic publication. It is amazing to think that this regulatory process has been firmly established for less than a century. The procedure, however, does not escape significant criticism. (Though what topic in science is not heavily criticized?)

First, though, let us consider the benefits of scholarly peer review. Chief among them is the improved quality of published work. Simply put, peer review presents a barrier that authors must overcome in order to be published, and critiques from reviewers are then addressed by authors to improve the quality of a manuscript. These suggestions may include additional experiments that further test the work. The process filters out scientific error, improving the accuracy of published information, and poor-quality work is rejected outright. Additionally, work is stratified by journal quality, and the process routes papers to the correct tier. In sum, peer review is at the heart of scientific critique.

One of the most common critiques of peer review is that it remains untested, as argued in a 2002 article in JAMA. The Cochrane Collaboration in 2003 (reconfirmed in 2008) concluded that there is “little empirical evidence to support the use of editorial peer review as a mechanism to ensure quality of biomedical research, despite its widespread use and costs.” Additionally, a study in BMJ took an article about to be published, purposely added a number of errors, and measured the error detection rate to be about 25%, with no reviewer correcting more than 65% of the errors. Finally, single-blind peer review is open to bias, whether related to nationality, language, specialty, gender, or competition, along with a common bias toward positive results. Double-blind review may help to overcome this critique.

Alternatives to single-blind review include double-blind review, post-publication review, and open review. In double-blind review, neither the authors nor the reviewers know the other party, which would presumably reduce the biases mentioned above; surveys have shown a preference for double-blind review. Post-publication review would be an excellent supplement to the current system, improving the rate of error correction in publications. Finally, open peer review, where the reviewer is known, might also reduce bias; however, a reviewer may be less willing to critique work by a senior author in the field, and the pilot by Nature in 2006 was far from successful.

At this stage, the system is the best we have, and problems lie less in the peer review process and more in the access to scholarly work without a costly subscription. Discontent in the field does not translate to a desire for one of the alternative methods described. Nonetheless, we should be critical of our process, much in the same way the process itself is critical.

Orwellian Semantics

September 30, 2012

I have numerous issues with the bad habits of modern writing, including overuse of the passive voice, laziness through metaphor, and an abundance of technical jargon and pretentious vocabulary. My own writing often falls victim to these habits, especially my overuse of the passive voice. The problem becomes serious in medical or technical communication, when jargon and abbreviations render listeners incapable of understanding. In political speech, we hear perversions of metaphorical language, with a bit of Latin thrown in where a Saxon word would suffice. This lack of precision is a real problem, one George Orwell discussed in his fierce essay, ‘Politics and the English Language.’

I’ve often heard, in casual conversation or, worse, in public arenas, the phrase ‘it’s only semantics’ or something of that nature. This implies that the speaker is either lazy in speech or uninformed. I like to think that our downfall is sloth rather than lack of knowledge, so I will assume that these people know what semantics is, what it implies, and why it is so very important. If that is true, then the speaker is simply tired of attending to the disconnect in language that has been pointed out.

Let’s assume one hasn’t read the precision of Ernest Hemingway, the frugality of E. B. White, or the aforementioned essay by Orwell. I’ll relate semantics to the study of information: information theory. Let’s trace a message from you to me, as if we were speaking or writing to one another. A message begins at its source, such as a thought or argument in your brain. You must encode this message into language: written, spoken, signed by hand, or some variation. That message is transmitted, by air, by telecommunications, visually, or the like, to me. I must then, as the receiver, decode the message. Thus, information passes from you to me. A problem at any level leads to a breakdown in our conversation and, according to that optimist Orwell, the decline of civilization.

Semantics is the study of how we communicate with and understand one another. In terms of the information-theory example above, it concerns the encoding and decoding of speech. The words we use attempt to convey information; if the words are not precise, the information will be lost or misunderstood. If you heard me state that ‘John is a wild card, but Jane is solid,’ would you call this precise? Sure, in context you might understand the message, but that is no excuse; it is laziness of speech. If instead you heard ‘John’s exam performance varies with his mood, but Jane always performs well,’ you can already see the improvement in precision. We should remain as precise as possible, no matter the context.

In communication among those in medicine, science, economics, and countless other fields, I see this lack of attention to semantics. We become lazy and resort to metaphor, jargon, or vague, undescriptive terms. We then become annoyed when someone starts to focus on the meaning behind individual words or phrases, and we declare that it is only semantics. Thus, I believe the phrase stems from sloth.

I am a horrid writer. Specifically, I mean that I tend to use the passive voice too often, use unnecessary words to balance the flow of a sentence, and often lack precision. However, I believe very strongly in proper communication. Many say this has to do with listening, but the act of listening is limited by the quality of the message transmitted.

This post stems both from issues in revising my research proposal and from my position at the meeting point between medical and graduate students. Quite often, the two groups don’t understand each other, and my split personality feels the cognitive dissonance. I urge those in any field to practice precision and to recode their speech for those outside the field. I’m working on it, too; it is difficult to break bad habits.

I am currently in the later stages of preparing my thesis research proposal, which I will defend in our version of a Ph.D. qualifying exam before the end of the year. The proposal follows the format of an NRSA F30 application, a fellowship for dual-degree students. This seems a great opportunity to discuss the possible components of a research proposal. Not all of these sections belong in every proposal, and the list can be adapted for projects in both clinical and basic science research. The sections I included were:

  1. Motivation – Here, we provide a brief background to describe our motivation for the project and, more importantly, to capture the attention of the reader while laying a broad foundation. This section should be limited in length.
  2. Theoretical Framework – This does not apply to all studies but is helpful for laying out the problem statement. Briefly, the line of inquiry should be addressed, and the variables within the project and their interrelated concepts laid out. In social science and basic science research alike, the framework makes the project’s assumptions explicit. The results of a project can be generalized, but the framework bounds how far that generalization can be taken. It also provides a foundation for later discussions of the project and its results.
  3. Problem Statement – This is a brief description, within the context of the theoretical framework, of what is to be addressed. It is best if we describe not only what is sought, but why we wish to seek it. This is often incorporated into the above sections and rarely stands alone.
  4. Specific Aims – In either a list or a series of paragraphs, the aims of the project should be outlined. These can be hypothesis-driven or purely exploratory. It is best to group the aims into broad “sub-projects,” where each aim informs the next. The NIH states that these should “describe concisely and realistically what the proposed research is intended to accomplish.” This is an expansion of the problem statement into tangible goals. For each aim, specifically state each hypothesis, and describe any experiments to be performed. Once again, though, keep the aims brief.
  5. Literature Review – A full literature review could span countless pages. However, a research proposal’s review must be focused. Each of the studies referenced here should be linked back to the problem statement. For example, if one wishes to determine the effects of aspirin on vascular outcomes, it would be beneficial to focus on studies of the mechanisms of aspirin and various determinants of vascular outcomes. However, it would be less useful to provide background on the various alternatives to aspirin. Keeping this focused and relating papers back to the problem statement will add to the overall understanding of the proposal.
  6. Methodology – Papers typically include a methods section, but the methodology section in a research proposal should be much more expansive. The purpose is to describe how each of the aims will be addressed, with a plan of the experiments and expected results. Doing so demonstrates competency in the project at hand and provides readers with evidence that the project is sound. Go into detail with the methods, but be sure to relate them back to the specific aims.
  7. Preliminary Data – Preliminary data may be sparse, but such data are useful in showing that the project is realistic. These data should follow the previous section on methodology. Unlike a thesis, they need not yet tell a complete story, which makes sense for a research proposal. Nonetheless, be sure to discuss the results briefly in order to demonstrate competency and to show that the project can be done. Clinical studies may have less preliminary data in early proposals; these data could be as simple as a survey. For basic science work, the preliminary data are often slightly more involved.
  8. Budget – Operating costs for a project vary, and the budgets depend on the type of application. A training fellowship (e.g., F series) should include costs of tuition, whereas a project grant (e.g., R series, K series) would focus on the expenditures for the lab.
  9. References

This differs from a thesis in that the thesis will go into detail when displaying results, discussing the data, and formulating conclusions.

Clinical trials often include schematics where various hypotheses are tracked, following alternative routes in methodology. Some proposals will need to discuss ethical issues which may arise in the course of the study. Nonetheless, the general pattern of specific aims -> literature review -> research plan -> preliminary data holds for most proposals, and it is this pattern that I followed in mine.

Of course, at my stage, who am I to say what is the right way to write these things? If you want an accurate depiction of what is expected for grants (which are basically proposals), check out the application formats published by funding agencies such as the NIH.

Two years ago, I wrote a piece for a public health group based on a project from medical school. An excerpt follows:

“In June 2009, President Barack Obama signed into law the Family Smoking Prevention and Tobacco Control Act (HR 1256). This legislation would require all tobacco products and advertising to have a graphic warning covering 50 percent of the front and back of the package. The FDA has proposed a number of graphic designs […] The proposed designs include grotesque imagery in an attempt to dissuade smoking in the United States. According to the Centers for Disease Control and Prevention, smoking accounts for approximately 443,000 deaths per year in the US, including deaths from lung cancer, cardiovascular disease, COPD, and numerous other morbidities. It is thus apparent that smoking is a public health concern, and these new warning labels hope to address the concern by deterring such behavior. However, though the proposed graphic labels may be more effective than the previous Surgeon General’s Warning (a text-only message on the side of the package), these labels can be greatly improved through what will be defined as a gains-based message as opposed to the proposed loss-based message. In doing so, the labels would not only educate the public on the dangers of smoking, but it will be argued that they will encourage smoking prevention and cessation behavior. In fact, it is argued that the currently proposed labels may do more harm than good. To make this argument, three assumptions must be made. First, as hinted above, smoking is a public health concern. Second, tobacco warning labels are designed to result in human behaviors of smoking cessation and prevention. Finally, human behavior is, in some circumstances, predictable.

This is not to say that the proposed labels by the FDA or those currently being used around the world are completely ineffective. In fact, the graphic labels may be more effective than the small text-only Surgeon General’s Warning. However, there is a wide margin for improvement. The proposed labels appear to be far too grotesque. Though admittedly fear-inducing, this negative emotion will most likely lead to reactance behavior. Expect sales of slip covers to increase, along with the possibility of some smokers increasing their smoking behavior. Smoking rates may continue to decline, but the rate of this decline may not yet be optimal. Data from other countries, along with numerous experimental studies, have demonstrated that confounding factors can contribute to the decline in these countries, and grotesque imagery can result in maladaptive behavior. A truly effective label would be designed with a positive, gain-framed message. It would be designed to motivate behavioral change and encourage self-efficacy. Data from others who were able to quit can enhance subjective norms. Imagery depicting the benefits of quitting or those who were able to quit can further eliminate reactance. All of this would then be coupled with resources on quitting, such as phone numbers, web sites, and support groups. This is a war that cannot be fought with fire. As demonstrated every time a cigarette is lit, fire is only good for lighting up.”

I believe that this claim still holds, but my predictions, while probably accurate, mean little when compared to the growing need to reform healthcare. However, a greater concern lies in how we approach public health research. In basic science, and especially physics, we like to break larger systems down into their components, analyze them, and search for unifying hypotheses. That is only one method, but the concept is simple: understand the rules of a system in order to predict its behavior.

This concept is often lost in public health research. A perusal of the literature reveals that such concepts appear as hypotheses in the discussion sections of papers, but one sees very few examples where they are actually applied. In the case of tobacco warning labels, concepts from behavioral psychology can be applied to predict the behavior resulting from labeling campaigns. The validity of these models remains to be established, in part because of small sample sizes. Nonetheless, some studies have been performed: phone surveys revealed results similar to what the theory of reactance predicts.

Briefly, the theory of reactance predicts a cycle in human behavior. One begins with some level of (1) freedom, which is then (2) threatened. The human will (3) react and undergo a (4) restoration of this freedom. Types of reactance include those exhibited by certain groups (trait reactance) and those provoked by particular threats to freedom (state reactance). These can be measured through fear, anxiety, disgust, and the like, all of which are predicted to increase with the level of reactance. The restoration of freedom is key, and it is accomplished through avoidance, acting out, or similar behaviors. In the case of smoking, this would manifest as increased rates of smoking, a reduced desire to quit, downplaying of harmful effects, and avoidance through the purchase of slip covers. A study by Dillard et al. in 2005 laid out such concepts, and phone surveys on tobacco noted such reactions.
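The four-step cycle described above can be sketched as a toy state machine. The stage names follow the description in this post; the code itself is purely illustrative and not drawn from any validated psychological instrument:

```python
# A minimal sketch of the reactance cycle: freedom -> threat ->
# reaction -> restoration, which loops back to freedom.
from enum import Enum


class Stage(Enum):
    FREEDOM = 1      # baseline level of perceived freedom
    THREAT = 2       # the freedom is threatened (e.g., a warning label)
    REACTION = 3     # fear, anxiety, disgust rise with reactance
    RESTORATION = 4  # avoidance or acting out restores the freedom


# The cycle in order; restoration returns the person to baseline freedom.
CYCLE = [Stage.FREEDOM, Stage.THREAT, Stage.REACTION, Stage.RESTORATION]


def next_stage(stage: Stage) -> Stage:
    """Advance one step through the cyclic reactance sequence."""
    i = CYCLE.index(stage)
    return CYCLE[(i + 1) % len(CYCLE)]
```

The point of the sketch is the loop: restoration does not end the process but returns the smoker to a baseline of perceived freedom, ready to be threatened again by the next label.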

Another concept is message framing. There are two types of messages in this theory: gain-framed and loss-framed. A gain-framed message focuses on the benefits of performing a task, while a loss-framed message focuses on the risks of not performing it. Loss-framed messages include those pointing out the risks of skipping regular mammograms or other early-detection screenings. Gain-framed messages include those touting the benefits of exercise and sunscreen use. The current tobacco warning labels would be improved by avoiding reactance through the use of gain-framed messages, as Tamera Schneider pointed out in the Journal of Applied Psychology in 2001.
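The gain/loss distinction can be made concrete with a toy keyword classifier. The cue lists here are my own invented illustrations, not a real instrument from the framing literature:

```python
# Illustrative-only cue words for each frame type.
GAIN_CUES = ("benefit", "improve", "protect", "gain")
LOSS_CUES = ("risk", "danger", "harm", "lose")


def frame_of(message: str) -> str:
    """Label a health message as gain-framed, loss-framed, or unframed."""
    text = message.lower()
    if any(cue in text for cue in GAIN_CUES):
        return "gain-framed"
    if any(cue in text for cue in LOSS_CUES):
        return "loss-framed"
    return "unframed"
```

Under this sketch, "Quitting smoking will improve your health" is gain-framed, while "Smoking raises your risk of lung cancer" is loss-framed; real framing research, of course, codes messages far more carefully than a keyword match.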

This highlights an important issue in public health and other epidemiological studies. Such studies often fail to cite, and properly build upon, the foundational psychological or basic-science research. This shapes policy in a less informed manner, sometimes leading to unforeseen negative outcomes. By increasing effective communication between the sciences and epidemiology, policy changes may become more effective.