Now, more than ever, media outlets cater to short attention spans and clickbait headlines. Many journalists are banking on the fact that you won’t read the entire article — especially when it comes to nutrition science.
But if you’re interested in reading medical studies to educate yourself or make wiser health decisions, peer-reviewed studies are one of the best resources available to get to the bottom of an issue.
Instead of taking other people’s word for what a study proves, it’s worth your time to examine the data and decide for yourself if the conclusion is valid.
While some people claim that reading studies is best left to doctors and other experts, the truth is that doctors and experts don’t agree on every single issue — and there are plenty of doctors who never read studies, or only read them occasionally[*][*].
According to a paper published in the prestigious British Medical Journal[*]:
Clinicians rarely accessed and used explicit evidence from research or other sources directly, but relied on…collectively reinforced, internalized, tacit guidelines. These were informed by brief reading but mainly by their own and their colleagues’ experience, their interactions with each other and with opinion leaders, patients, and pharmaceutical representatives, and other sources of largely tacit knowledge. [emphasis added]
As a non-expert, looking at the evidence for yourself can help you determine which experts to listen to, as well as empower you to make informed medical choices.
Also, if you pay taxes, your contribution to the government budget helps fund studies, so you may as well get your money’s worth.
In this article, you’ll learn the basic skills you need to read and understand a medical study without feeling lost, plus how to teach yourself more as time goes on.
Other topics covered include:
- Common study types and terminology
- The best legal ways to find full-text versions of studies without paying for them
- Five simple questions you can ask every time you read a study to ensure you think independently and learn as much as possible
- How to review studies more critically, and why you usually shouldn’t take them at face value
We’ll begin with the simplest way to read a study.
This section includes straightforward steps to get started reading and understanding medical studies.
First, though, two important things to remember:
- Always keep in mind why you’re reading a study.
Your motivations and reasons for reading a study can make a difference in how you read it.
For example, the amount of time and effort you dedicate to a study will likely vary depending on whether your purpose is to learn for fun, to win a debate with a friend, to investigate a claim from a health-related website, or to make an important personal health decision.
Unless you enjoy dedicating lots of time to learning new things, it’s not always necessary to go deep.
- You don’t need expert knowledge to benefit from reading a study.
The first few times you read a study, it’s easy to feel intimidated. Even within the scientific community, researchers often receive criticism for using unnecessary jargon and unclear writing styles[*].
But instead of being discouraged, you can use free internet resources to look up and try to understand concepts that you find confusing.
And if one part of a study is too overwhelming, skip it for now and instead focus on learning as much as possible (even if that’s not very much, yet).
As time goes on, you’ll get better at gleaning information from studies, and your knowledge base and understanding will grow.
Next, steps anyone can follow to make sense of studies.
Step 1: Read the Abstract
A study abstract is a brief study summary, usually 300 words or less, intended to provide an overview. Unlike the full-text versions of many studies, abstracts are usually easy to find and available for free.
At the bare minimum, an abstract should tell you whether or not a particular study is relevant to your needs.
Most of the time abstracts also include a summary of results, although the quality and usefulness of summaries can vary quite a bit.
For basic purposes, reading a study abstract by itself is sometimes good enough.
As an example, let’s say you were interested in nutritional topics (like the keto diet and the paleo diet) and became curious about how many carbohydrates hunter-gatherer tribes eat.
A study called Diets of modern hunter-gatherers vary substantially in their carbohydrate content depending on ecoenvironments: results from an ethnographic analysis looks like the perfect resource.
From reading the abstract, you can learn things like:
- The authors estimated carb intake as a percentage of total calories for 229 different hunter-gatherer diets around the world.
- Carb intake ranged widely, from 3% to 50% of total calorie intake.
- The median and mode for carb intake were both 16-22% of calories.
- Carb intake varied inversely with latitude, meaning that groups living farther from the equator tended to eat fewer carbs.
- Desert and tropical hunter-gatherers consumed the most carbs, tundra- and coniferous-pine-forest-dwelling hunter-gatherers the least, and groups in regions between consumed amounts somewhere in the middle.
- All hunter-gatherer diets included fewer carbs than modern health authorities’ guidelines recommend.
Basically, the authors wrote an excellent abstract that contains all the info most people need.
However, you might still want to read the full text for reasons like doubt about the validity of the findings or methods used, or simple curiosity.
And in other cases, abstracts are unavailable, or only helpful enough to identify the study topic, without conveying real information from the study itself.
Regardless, if you’re interested enough to go a level deeper, your next step is to obtain a full-text copy of the study.
Find the Full-Text Version

Sometimes, the full-text version of a study is available one or two clicks away, for free.
If there’s no link in the abstract, you can also try copying the name of the study into a regular internet search engine to see if a full-text version comes up.
Also, some journals are “open access,” meaning all their studies are free and available for download. You can use the DOAJ.org directory to find lots of open access journals.
Unfortunately, there’s often a paywall from a private corporation that will prevent you from accessing the study you want to read, even if it’s publicly funded. (This Guardian article provides good insight into the bizarre world of scientific publishing for those interested.)
If you find yourself blocked by a paywall, you can consider:
- Asking your local public library whether they have access to the study
- Going to the nearest college or university library, which may have access or own a copy of the journal
- Asking a student, professor, researcher, scientist, or doctor friend if they can help
- Emailing an author or contact person listed on the study to politely request a copy (many authors are very open to sharing their work, especially if you disclose the reason you’re interested)
- Learning about the mission of Alexandra Elbakyan, the “pirate queen of science”
As a last resort, you could pay a publishing corporation $20-50 or more to view or download a paper. Sometimes, you’ll only be able to pay for limited access, like a day or a week.
Step 2: Start With the Intro and Conclusion
Once you have full-text access, it’s usually best to start by reading the intro and then the conclusion of a study.
Sometimes the full-text introduction of a study is similar to the abstract, other times not.
If you’re perusing a study that didn’t have an abstract, reading the intro will quickly tell you whether the rest is worth your time.
And if you plan to spend more time with the study, reading the conclusion next is smart, because it usually contains a more thorough summary of the study results than the abstract.
As with the abstract, many times, you’ll get what you needed from the intro and conclusion.
But if you still want more details, it’s time to check the discussion section next.
Step 3: Check the Discussion
Most often, the section of the study paper directly above the conclusion contains an extended summary and discussion of results, sometimes broken up into different subsections.
These sections generally go into depth but are still in plain English. They’re a good place to get a better understanding of the results, as well as possible takeaways.
If you stop here, you’ll likely have a pretty solid understanding of the study.
However, keep in mind that you’re probably still reading an interpretation of the results, rather than the results themselves.
Therefore, if you’re looking at the study critically, or want to learn as much as possible, the final step is to roll up your sleeves and get your hands dirty in the actual data.
Step 4: Dive Into the Data and Methods
The data and methods sections, which can go by a variety of names, are usually located in the top half and middle of a study paper, after the intro and before the discussion and conclusion sections. (Sometimes, more extensive documentation, such as raw data or more details on the methods used, is also available in a separate downloadable file.)
If you’re new to reading studies, these sections will probably seem intimidating at first. And unless you’re an expert in a specific area, they’re very likely to contain information that’s difficult to understand.
But they’re the heart of the study, and unlike the other sections, they’re not about what the authors think the numbers and findings say.
In other words, they don’t contain opinions and, while they may be flawed, they’re available for you to interpret and decide for yourself.
As before, it’s important to keep in mind why you’re reading these parts of the study:
- If you’re reading out of simple curiosity, it’s still okay to only focus on the parts that interest you.
- If you want to learn as much as possible, slow down and take time to digest (or read introductory material elsewhere) as needed. Reading straight through or skipping around can both work, depending on your learning style.
- If you’re looking for specific information that wasn’t covered elsewhere in the study, you may want to skim and ignore the irrelevant parts for now (Ctrl-F, Cmd-F, or other ways of performing an on-page text search can also save time).
- If you’re skeptical of whether the methods are sound or whether the conclusions match the actual results, look for red flags or inconsistencies.
Overall, keep in mind that you don’t always have to read everything or learn all the information contained within a study paper.
Even professionals who work in scientific fields often skip over technical details unless they have a good reason to delve in.
And as with any skill, you’ll get better at reading studies with practice.
If you read enough studies, you’ll also learn that some of them contain errors, inconsistencies, or poorly written and illogical statements. Studies don’t always show what they purport to show, so don’t be afraid to check the methods and data if something seems fishy (more on that shortly).
Keep reading to learn some of the most common study types and terms used, followed by ways you can think more critically as you review studies.
You may come across dozens of study types and hundreds of specialized terms, but here are some of the more common ones you’re likely to encounter:
- Adverse event is a term that refers to any abnormal or unfavorable medical occurrence in a participant during a study, whether or not it is deemed related to the study.
- Animal models are animals used in experiments to attempt to understand human health and disease. Studies using animal models are called animal studies.
- Case studies or case series are in-depth, published analyses and descriptions of an individual or group. Typically case studies do not involve controls, and their purpose is to provide detailed information about a specific medical or health situation, often in a clinical setting.
- Clinical trials are research studies where human participants are assigned to one or more interventions to evaluate the effects of the interventions on health, medical, or other outcomes.
- Control group or control refers to any group in a clinical trial that receives no treatment, a standard treatment, or a placebo treatment instead of an experimental treatment. The purpose of controls is to help establish the effects of an intervention. When a clinical trial protocol uses control groups, it is said to be a controlled clinical trial.
- Correlation is a statistical relationship or connection between two or more variables in a study. In other words, it is an observation that two or more factors tend to change together. An observed correlation may suggest, but does not automatically prove, that one variable is causing another to change.
- Double-blind studies are trials in which neither the participants nor the experimenters know which group is receiving a particular treatment or intervention. For example, in a double-blind, placebo-controlled trial, no one knows who is receiving a placebo or active treatment.
- Effect size is a number that measures the strength of a relationship between two variables. A larger number indicates a stronger relationship. An effect size could be used, for example, to signify how well a treatment appeared to work in a study.
- Epidemiological studies or population studies are observational, non-interventional studies that compare differences between groups to learn more about health conditions or diseases.
- Interventions are treatments, procedures, or actions taken to prevent or treat diseases or improve health outcomes within a study protocol.
- In vitro is Latin for “within the glass.” In vitro studies, sometimes called “test tube studies” or “cell studies,” happen in isolation and outside of living organisms.
- In vivo is Latin for “within the living.” In vivo studies take place in living organisms (animals or humans).
- Open-label studies are trials in which researchers and participants are both aware of which treatment or intervention is provided.
- Outcomes are synonymous with results, or the variables that are monitored during a study to assess the impact of an intervention or other change.
- P-value is a statistical number, expressed as a decimal or a percent, that represents the probability of obtaining results at least as extreme as the study outcome if the null hypothesis were true. (The null hypothesis is the prediction that an observed effect is due to chance, experimental error, or existing differences between groups, as opposed to the intervention.) Most studies use a p-value of 0.05 or 5% as the cutoff for statistical significance, meaning that results with a p-value greater than 0.05 are treated as non-significant, or possibly due to chance.
- Placebo substances or treatments are designed to have no meaningful biological effects or therapeutic value, and are included in controlled clinical trials for comparison purposes to evaluate the effect of active treatments. Examples include sugar pills or saline injections.
- Protocols or study designs describe why and how a study will be conducted, including objectives, methodology, statistical calculations, organization, and safety.
- Randomized study protocols assign participants to groups by chance alone (randomization) to reduce bias and distribute participants equally to different groups.
- Sample size refers to the number of participants or subjects in a trial or study group, or the number of people observed in an epidemiological study. Larger sample sizes increase the relevance of the study results to the larger population of non-participants, and also increase the study’s power (ability to detect small but statistically significant changes).
- Single-blind studies are trials in which the experimenters or researchers know which groups are receiving which interventions or treatments, but the participants do not.
- Statistical significance is a determination that an outcome was extremely unlikely to have occurred due to chance (see also p-value).
- Systematic reviews gather and interpret all the available published research on a topic. A meta-analysis is a type of systematic review that combines findings from studies of a similar type to draw conclusions.
- Variables are factors that can cause change or may become changed in a study. Hidden variables that aren’t accounted for can easily skew study results.
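To make the p-value and statistical significance entries above more concrete, here’s a minimal sketch in Python. The coin-flip numbers are purely illustrative (not from any study): it simulates the null hypothesis directly and counts how often chance alone produces a result at least as extreme as the one observed.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical observation: a coin lands heads 60 times in 100 flips.
# Null hypothesis: the coin is fair (heads probability = 0.5).
observed_heads = 60
n_flips = 100
n_simulations = 20_000

# Simulate the null hypothesis: how often does a fair coin produce
# a result at least as extreme as the one observed?
at_least_as_extreme = 0
for _ in range(n_simulations):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    if heads >= observed_heads:
        at_least_as_extreme += 1

p_value = at_least_as_extreme / n_simulations
print(f"one-sided p-value: {p_value:.3f}")

# A p-value below the common 0.05 cutoff would typically be reported
# as "statistically significant" -- unlikely to be due to chance alone.
```

The p-value here works out to roughly 0.03, so under the usual 0.05 cutoff this imaginary result would count as statistically significant, even though the simulation makes clear it’s only a statement about chance, not about importance.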
No matter how experienced you become at reading studies, it pays to work on your critical thinking skills any time you’re reviewing data or conclusions.
Not only does critical reading keep your mind sharp, but you should also never assume a study conclusion is trustworthy, especially if you’re researching to make an important decision.
Today, in much of science and academia, it’s not possible to receive grants or promotions without publishing in journals. Unfortunately, rather than create incentives for high-quality studies, this situation appears to set the stage for sloppy methodologies and inappropriate conclusions.
As researcher John P. Ioannidis put it in his paper, Why Most Published Research Findings Are False, “claimed research findings may often be simply accurate measures of the prevailing bias”[*].
And an editor of the British Medical Journal once declared that only 1-5% of published research findings meet minimum standards of scientific soundness and clinical relevance[*].
This section includes ideas to help you think more critically and independently, and watch for red flags in studies.
Data Versus Interpretation
First and foremost, always keep in mind that the study data is not the same as the authors’ interpretation of that data.
For example, for decades, researchers recommended avoiding egg yolks because they were found to raise cholesterol, which researchers assumed would increase the risk of heart disease.
More recent research shows that while the data showing that eggs raise cholesterol (including LDL cholesterol) were correct, the interpretation that heart disease risk would increase was probably incorrect. The reason is that eggs raise HDL cholesterol and large-particle LDL cholesterol, neither of which is associated with increased heart disease risk[*].
It’s also true that flawed methods can lead to inaccurate data, and sometimes scientists intentionally falsify the data, but regardless, you’ll always think more clearly if you keep the data and interpretation separate in your mind.
Overextrapolation occurs when researchers attempt to draw overly broad conclusions from limited data, or from a data set that isn’t relevant to the conclusion being drawn.
The scientists who seem to do this most often are animal researchers, but it can also occur in other situations (such as studies with small sample sizes).
While animal models can potentially detect side effects or health problems associated with a drug treatment, they’re usually not the appropriate study method to determine which diets are healthiest for humans, for example.
Underextrapolation occurs when researchers are unable or unwilling to interpret their own data or draw conclusions from the existing body of data.
The most common example is the cliche concluding statement “further research is needed.” Besides being so general that it borders on meaningless, it can also signal an unwillingness to draw firm conclusions from the data.
One analysis found that 93% of Cochrane reviews recommended further research, even when there was already sufficient data to deem the intervention ineffective[*]!
Speaking Outside of Expertise (The Carpenter Fallacy)
The mathematician and former options trader Nassim Nicholas Taleb poses the question: if a carpenter builds a roulette wheel, can you also trust that carpenter to predict betting odds in a game of roulette?
The carpenter fallacy relates to domain expertise or the limits of knowledge within a specific field.
Building a roulette wheel is one area of skilled expertise, but it’s highly specific, and it doesn’t have anything to do with understanding the probabilities involved in playing roulette.
Similarly, be sure to watch for scientists speaking outside of their domain of expertise in studies.
One common example is when scientists document a circumstance, then make recommendations based on their observation.
While the initial observation may be correct, a scientist goes well outside his or her area of expertise as soon as he or she makes a philosophical claim or a public policy recommendation (not to mention statements that aren’t directly supported by the data).
Small Effect Sizes and Percentages of Percentages
Large studies are said to have high statistical power, which refers to their ability to detect small effect sizes that do not appear to be due to chance.
As an example, a very large clinical trial with the ability to detect a small effect may find that a drug treatment works better than a placebo, but it may only be a few absolute percentage points higher in effectiveness.
In some cases, this can make a beneficial difference, but most of the time, such a small difference is virtually meaningless to patients, even if it’s real.
Along with treatment effects, this same observation can also apply to risk factors. If 1 in 10,000 people is at risk of a serious health problem, and some risk factor increases the relative risk by 50%, that may sound scary, but for the average person, the absolute risk increase would be only 0.005%.
Put differently, just because a study shows a statistically significant effect doesn’t mean the effect is large enough to matter in real life. A percentage of a percentage often works out to be a small effect size.
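The relative-versus-absolute arithmetic from that example is easy to verify yourself. Here’s a quick sketch using the illustrative 1-in-10,000 numbers from above (not data from any particular study):

```python
# Baseline risk: 1 in 10,000 people affected.
baseline_risk = 1 / 10_000

# A risk factor that "increases risk by 50%" is a relative increase.
relative_increase = 0.50
new_risk = baseline_risk * (1 + relative_increase)

# The absolute increase is what actually matters to an individual.
absolute_increase = new_risk - baseline_risk

print(f"baseline risk:     {baseline_risk:.4%}")      # prints 0.0100%
print(f"risk with factor:  {new_risk:.4%}")           # prints 0.0150%
print(f"absolute increase: {absolute_increase:.4%}")  # prints 0.0050%
```

A “50% higher risk” headline and a 0.005% absolute increase describe the exact same numbers; only the framing differs.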
Value Risky Conclusions
Nassim Taleb, mentioned in a previous section, recommends valuing or heeding research conclusions that could put the reputation of the author at risk.
That is, as opposed to the mainstream idea that “scientific consensus” or majority opinion is likely to be correct, Taleb argues that scientists who put their reputation on the line are more likely to be telling the truth.
His logic is that people who are publishing for personal gain will always take the easy route of agreeing with the mainstream (or their sponsors), while people who risk their reputation or go against orthodoxy are more likely to be performing rigorous science.
While the popularity of a conclusion one way or the other is not a substitute for valid data or an actual argument, Taleb’s recommendation makes plenty of sense as far as accounting for the motivations of the study authors.
The Shotgun and the Sharpshooter
Watch out for the “shotgun approach” as well as the “sharpshooter fallacy.”
The shotgun approach happens when researchers measure a lot of different outcomes in hopes of finding at least some treatment benefit in their data. It’s especially common in the world of nutritional supplements.
For instance, if a study measures 30 different potential health outcomes including blood sugar, weight loss, fat loss, muscle gain, exercise performance, alertness, energy levels, cognitive function, and reaction time (and on and on), there’s a very good chance that they’ll find some minor benefit in at least one of those areas.
There are also ways of improperly counting such findings as statistically significant (through incompetence or dishonesty), which is one type of “P-hacking.”
Shotgunning for results may not sound so bad, but the main problem is that the results are unlikely to be repeatable in future studies. Essentially, it’s not real science at all, just a way of propping up a weak and ineffective treatment.
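The problem with measuring many outcomes can be quantified. Assuming 30 independent outcomes and the standard 0.05 significance cutoff (the same illustrative numbers as above), a short back-of-the-envelope calculation shows how likely at least one false positive becomes:

```python
# Each of 30 independent outcomes has a 5% chance of looking
# "significant" (p < 0.05) by chance alone.
alpha = 0.05
n_outcomes = 30

# Probability that at least one outcome is a false positive:
p_at_least_one = 1 - (1 - alpha) ** n_outcomes
print(f"chance of >=1 false positive: {p_at_least_one:.1%}")
# prints: chance of >=1 false positive: 78.5%
```

In other words, with enough outcomes measured, finding at least one “statistically significant” result is closer to the rule than the exception, which is exactly why such findings tend not to replicate.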
The sharpshooter fallacy comes into play during the conclusion phase of a study. It gets its name because the researchers draw a target around the positive results, so to speak, and pretend they intended to hit that target from the start.
Keeping with the previous example, if a nutritional supplement improved reaction time, the scientists could invent a plausible-sounding explanation after the results were in, while ignoring that the supplement didn’t do anything for blood sugar, weight loss, exercise performance, or any of the other outcomes they measured.
In blatantly dishonest cases, researchers may publish only the statistically significant, positive outcome, complete with their fake hypothesis of why it occurred.
Asking tough questions is the best way to engage deeply with scientific research.
Use the suggestions below as a starting point, but don’t stop there: always exercise your critical thinking faculties as much as possible any time you read a study or other research.
- Does the data support the authors’ conclusions or claims?
Inappropriate conclusions are likely much more common than erroneous or falsified data.
Deciding for yourself whether or not the conclusion or claims of a paper are supported by the data may be the most important critical thinking skill for reading studies.
- What are the unstated assumptions or arguments within the study paper?
Every published paper has unstated assumptions or arguments, even if they’re abstract or philosophical in nature.
For example, a nutrition study may contain the assumption that “the macronutrients of a diet are the most important variable influencing the health outcomes of that diet.”
Or a public health study may assume that “government intervention in matters of public health is noble and justified.”
You may agree with the assumptions, or they may seem like simple enough leaps that are necessary for the purposes of writing a scientific paper. Still, identify them anyway.
If you can identify underlying assumptions or arguments that aren’t spelled out, you can develop a more accurate perspective on the entire study, as well as think more independently.
- Do the authors sneak in statements or claims not supported by the data?
Usually, the answer is yes. Even if the statements are assumed to be “common sense,” most of the time a few unsupported statements make their way into study papers.
Unsupported doesn’t necessarily mean incorrect, but identifying unsupported statements can help you put the entire study into perspective. Sometimes, these statements are major influencing factors in the conclusion or other important aspects of a study.
Conversely, authors who are careful enough to avoid making unsupported statements may be more rigorous and trustworthy thinkers.
- What are the possible motivations for publishing this study?
No one publishes a study without one or more motivations.
While you can’t read the authors’ minds or know for sure, it’s a good thought experiment to consider what their motivations could be based on the nature of the study and other available information (such as funding and affiliations, to name two examples of many).
- Does the conclusion even matter or make a difference? (Is there backlash?)
Whether or not the conclusion is supported by the data provided, and no matter the quality of the authors’ logic or the nature of their motivations, does the conclusion matter at all or make any real difference?
As mentioned in the previous section, many times, effect sizes are incredibly small. A lot of medical research, even when the findings are technically valid, is simply unlikely to make a real difference in anyone’s life.
Another way to decide if the conclusions matter or make a difference is to look for backlash and controversy, especially when it spills over outside of scientific journals.
For example, if big corporations or bureaucrats fund public relations campaigns or pay experts speaking fees to dismiss the results of a study, the backlash could indicate that findings on a particular topic could have a high economic or social impact.
Like most human endeavors, science is messy and imperfect. But it’s still a useful tool for getting closer to the truth on many issues, as well as making informed decisions in your life.
Reading studies for yourself doesn’t require expertise or exceptional intelligence, but it does take patience and focus.
You’ll get better at reading and interpreting studies with practice, but try to stay open-minded and avoid the temptation to draw permanent conclusions.
In the end, science is a method for testing ideas and hypotheses using evidence and observations, not a collection of settled facts.