Learn ➜ Course ➜ Part 3

Separating Good Science From Recycled Fear

Published March 2026 | Updated March 2026


Two of the following statements are true, and four are false:

  • Aluminum baking sheets cause Alzheimer's

  • PEVA shower curtains leach harmful chemicals into the air

  • EMFs and metal bed frames cause cancer

  • You don’t need to worry about scented candles — the levels of endocrine disruptors are too low to matter

  • Phthalates in plastics disrupt your hormones

  • Formaldehyde off-gassing from furniture is toxic

This section goes over exactly which ones are true, which ones are false, and most importantly, why. The same research concepts that explain these six examples will work on any health claim you see, even beyond home products.

Scientific research gets misused, misread, and manipulated constantly: by industries protecting profits, by wellness brands selling fear, and by well-meaning people who just don't know how to read a study. Understanding how that happens is what lets you figure out which concerns are real and which ones are recycled noise.

First: Why Environmental Toxin Research Is So Contradictory

Environmental health research produces a lot of contradictory studies, and therefore a lot of confusion. This is partly because it's very difficult to prove cause and effect when it comes to toxins. We can't randomly assign people to be exposed to a chemical for thirty years and watch what happens, so instead we have to observe, measure, and notice patterns over decades. Disease takes decades to develop, which means the exposure that caused it happened long before diagnosis. People encounter dozens of chemicals simultaneously. Studies get funded by parties with interests in the results. And findings that show harm get published, while studies that find nothing often don't.

Do Aluminum Baking Sheets Cause Alzheimer's?

No, and we know because of two ideas: the difference between correlation and causation, and the Bradford Hill criteria:

Correlation Is Not Causation

"Correlation is not causation" simply means that two things happening together does not mean one is causing the other.

Ice cream sales and shark attacks rise and fall together, almost perfectly. If you plotted them on a graph, you'd see a tight correlation. But it's not causation, because both are driven by a third thing: hot weather. More people go to the beach, where sharks are, in summer. More people buy ice cream in summer. But buying ice cream doesn't cause a shark attack. The correlation is real. The causation is not.
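The ice-cream-and-sharks pattern is easy to reproduce with synthetic numbers. Here is a minimal Python sketch (all data invented for illustration): both series are driven by a shared temperature variable, and they end up tightly correlated even though neither causes the other.

```python
import random

random.seed(0)

# Synthetic daily temperatures over a year (the hidden common cause).
temps = [15 + 12 * random.random() for _ in range(365)]

# Ice cream sales and beach attendance (a proxy for shark encounters)
# both rise with temperature, plus independent noise.
ice_cream = [3 * t + random.gauss(0, 4) for t in temps]
beachgoers = [5 * t + random.gauss(0, 6) for t in temps]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Strongly positive, despite no causal link between the two series.
print(pearson(ice_cream, beachgoers))
```

Comparing only days with similar temperatures would make the ice-cream/shark correlation largely disappear, which is the signature of a confounder.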

This is a huge deal in environmental health because research in this field is mostly based on noticing patterns (it's observational). Some of those patterns represent real causal relationships, but many don't. Determining which associations are actually causal is where the Bradford Hill criteria come in.

The Bradford Hill Criteria: How We Know Something Is a Cause

This is a list of criteria used in epidemiology to judge whether an observed association is causal or just correlation. They are used throughout environmental health research — including, notably, to evaluate sixty years of aluminum and Alzheimer's research.

There are nine total, but here are the four that do the most work:

  • Temporality. The cause has to precede the effect. In chronic disease research this is murkier than it sounds — did the chemical exposure come before the illness, or was the illness already developing in ways that changed behavior and exposure?

  • Consistency. Has the finding been replicated? One research group finding an association is a starting point. Independent labs in different countries, using different methods, finding the same result is much more meaningful.

  • Biological plausibility. Is there a known mechanism? An association with no plausible biological explanation isn't impossible, but it raises the bar for how much replication is needed.

  • Dose-response relationship. If more exposure leads to more harm in a consistent, predictable way, that's strong supporting evidence. An erratic or absent dose-response relationship is a red flag.
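Of the four, the dose-response check is mechanical enough to sketch in code. A minimal Python example (all numbers invented for illustration) testing whether effect rates rise consistently with dose:

```python
# Two hypothetical exposure-outcome datasets (invented numbers).
# Each pair is (dose, observed effect rate).
consistent = [(1, 0.02), (5, 0.05), (10, 0.09), (20, 0.18), (40, 0.35)]
erratic = [(1, 0.12), (5, 0.03), (10, 0.20), (20, 0.04), (40, 0.11)]

def monotonic_dose_response(pairs):
    """True if the effect never decreases as the dose increases."""
    ordered = sorted(pairs)                # sort by dose
    effects = [effect for _, effect in ordered]
    return all(a <= b for a, b in zip(effects, effects[1:]))

print(monotonic_dose_response(consistent))  # True: supports causality
print(monotonic_dose_response(erratic))     # False: a red flag
```

Real epidemiology uses statistical trend tests rather than strict monotonicity, but the intuition is the same: more exposure should predictably mean more effect.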

Applying This to Aluminum and Alzheimer’s

The aluminum-Alzheimer's hypothesis is one of the best case studies in how research plays out in practice.

In 1965, researchers found neurofibrillary tangles in rabbit brains after injecting aluminum salts. Since tangles also appear in Alzheimer's brains, they suspected a connection. Advanced staining techniques quickly revealed these were entirely different types of tangles. The biological basis for the hypothesis was undermined almost immediately — but the fear had already spread.

In the 1970s, dialysis patients developed dementia after exposure to aluminum-contaminated dialysis fluid. Without functioning kidneys, aluminum accumulated to extraordinarily high levels — far beyond any dietary exposure — and crossed the blood-brain barrier. The association between aluminum and brain disease had now been reported twice, through two very different mechanisms.

Through the 1980s and 1990s, studies produced contradictory results. Some showed higher Alzheimer's risk with more aluminum in drinking water; others found the opposite. This inconsistency isn't poor methodology — it's what you see when you're measuring something that doesn't actually cause the outcome you're studying. The Bradford Hill consistency criterion kept failing to be met.

In 2014, a landmark paper titled "Is the Aluminum Hypothesis Dead?" applied all nine Bradford Hill criteria to sixty years of research. The result: zero of nine criteria were met. No consistent strength of association. No dose-response relationship. No confirmed biological mechanism at realistic exposure levels.

Verdict: Not supported. Use your aluminum baking sheets.

PEVA Shower Curtains

The Organism Problem in Environmental Research

A lot of alarming findings in environmental health come from animal studies, and most are misrepresented when they enter the wellness conversation.

Animal studies matter — they're often where we first identify that a substance might be harmful, and for ethical reasons we can't do equivalent experiments in humans. But translating findings from animals to humans requires careful judgment that gets routinely skipped.

Dose matters enormously. Many animal studies use doses orders of magnitude higher than any human would realistically encounter. Finding a tumor in a rat dosed at 100 times the expected human exposure doesn't establish that the substance causes tumors at household concentrations.

Species differences are real. Saccharin causing bladder cancer in rats is the canonical example: the mechanism required high concentrations of a protein in rat urine that doesn't exist in humans. Multiple rodent studies, never replicated in humans.

Route of exposure changes everything. Studies that inject chemicals directly into animals are not equivalent to realistic human exposures through food, air, or skin contact.

Animal studies are hypothesis-generating, not hypothesis-confirming. The jump from "harmful in a mouse study" to "avoid this product" requires considerably more scientific scaffolding than wellness content usually provides.

PEVA takes this even further, because the one study driving most of the alarm isn't even an animal study. It's a worm study.

A 2014 study exposed Lumbriculus variegatus — aquatic worms — to VOCs from PEVA shower curtains heated to 150°F in water. The PEVA-exposed worms showed stress reactions higher than unexposed controls, though lower than worms exposed to PVC. That finding has since circulated widely as evidence that PEVA is toxic.

To the best of my knowledge, no other PEVA toxicity studies exist. None conducted on humans at normal bathroom temperatures. The organisms in question process chemical exposures through entirely different biology than mammals. The exposure conditions — 150°F water, in a lab — don't reflect a bathroom shower. The finding hasn't been independently replicated. Measured against the Bradford Hill criteria, it meets essentially none of them.

PEVA probably isn't as safe as organic fabric, and it deserves more research. But the case against it currently rests on a single study, in worms, under conditions that don't represent realistic exposure.

Verdict: Unestablished. Not exonerated, but not a reason for alarm either. If you want to avoid it, fine — but know that you're acting on very thin evidence.

EMFs and Metal Bed Frames

The File Drawer Problem

Studies with alarming results are far more likely to get published than studies that find nothing. This is called publication bias, and it systematically distorts the picture of the evidence.

If ten research groups study the same chemical and nine find no effect while one finds a modest association, you're likely to see that one published prominently. The other nine sit in file drawers, never submitted or never accepted. The published literature then overstates the evidence for harm.
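The arithmetic of the file drawer is easy to simulate. A minimal Python sketch (synthetic numbers, purely illustrative): every study measures a chemical with no real effect, but only the estimates that look alarming get "published", and the published average tells a very different story from the full set.

```python
import random

random.seed(1)

TRUE_EFFECT = 0.0   # the chemical actually does nothing
NOISE = 1.0         # sampling noise in each study's estimate
N_STUDIES = 200

# Each study produces a noisy estimate of the true (zero) effect.
estimates = [random.gauss(TRUE_EFFECT, NOISE) for _ in range(N_STUDIES)]

# The "file drawer": only large positive estimates look exciting
# enough to submit and publish.
published = [e for e in estimates if e > 1.0]

all_mean = sum(estimates) / len(estimates)
pub_mean = sum(published) / len(published)

print(f"mean of all studies:       {all_mean:+.2f}")  # hovers near zero
print(f"mean of published studies: {pub_mean:+.2f}")  # looks like real harm
```

Anyone reading only the published literature here would conclude the chemical is harmful, even though the full body of evidence says otherwise. That gap is exactly what systematic reviews try to close.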

The metal bed frame story is a clean illustration. A Scientific American article claimed that metal bed frames and coil spring mattresses amplify radiation and increase cancer risk. The article spread widely in non-toxic content. Snopes investigated and found the underlying science didn't hold up: the physics didn't work, and the study it was based on was not what was claimed (see interiormedicine.com/bedframes and snopes.com/fact-check/coil-mattresses-cause-cancer-amplifying-radio-waves).

Systematic reviews and meta-analyses address publication bias by actively seeking unpublished studies, pre-registered trials, and negative results — which is part of why they're more reliable than individual papers.

What the EMF Evidence Actually Shows

EMF is worth treating carefully because it sits in a genuinely different category than aluminum — not as established as the chemicals that warrant real concern, but not as thoroughly refuted as aluminum-Alzheimer's either.

At very high intensities, EMFs cause documented harm through thermal effects. The real question is whether chronic, low-level non-thermal exposure — the kind from WiFi routers, cell phones, and household wiring — causes harm. Here the evidence is genuinely mixed and ongoing.

Part of what makes EMF research look alarming in aggregate is exactly the file drawer problem. Reporting that a large percentage of studies found significant biological effects sounds compelling until you recognize that "percentage of positive studies" is a publication-bias-inflated number, not a measure of evidence quality. The systematic reviews — specifically designed to correct for this bias — don't converge on harm at realistic residential exposure levels. WHO, ICNIRP, and most major health agencies have reviewed this literature and not concluded that current exposure levels cause harm.

The current regulatory standards are a legitimate target of criticism. Safety limits were established primarily on the basis of acute thermal effects, using short averaging windows rather than cumulative chronic exposure. Whether non-thermal effects at sustained low doses are adequately captured is a genuinely open question, acknowledged in the research literature.

There is also a condition called electromagnetic hypersensitivity, characterized by symptoms attributed to EMF exposure. The symptoms are real. But multiple double-blind provocation studies have consistently found that people with self-reported electromagnetic hypersensitivity cannot detect the presence of EMF at rates above chance, which means EMF is unlikely to be their direct cause.

Verdict: Genuinely uncertain, but not established. Reasonable low-cost precautions — distance from devices, not sleeping with your phone next to your head — make sense under the precautionary principle for an inadequately studied exposure. Treating EMF as a confirmed cause of cancer or neurodegeneration is not supported by the current evidence.

Scented Candles and Endocrine Disruptors

"Low Enough to Be Safe" Assumes a Lot

The reassurance that scented candles are fine because endocrine disruptors are present at levels too low to cause harm relies on assumptions the research literature has explicitly flagged as problematic.

Almost everything we know about chemical toxicity comes from studying one chemical at a time. Almost nothing reflects how humans actually encounter chemicals — simultaneously, in mixtures, over a lifetime.

You don't have just PFAS in your blood, or just BPA, or just phthalates. You have all of them, plus dozens of other compounds, interacting with each other and with your biology in ways that have barely been studied. Some combinations are additive — two chemicals affecting the same pathway roughly add up. Some are synergistic — the combination is worse than the sum of its parts. For most real-world combinations, we don't know which.

Regulatory standards are set for individual chemicals. A product meeting every individual safety threshold can still contribute to cumulative exposure that exceeds what those thresholds were designed to address. This is acknowledged explicitly in the research literature and remains one of the hardest open problems in the field.
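One way risk assessors approximate cumulative exposure for chemicals acting on the same pathway is a hazard index: the sum of each exposure divided by its individual safe level. A minimal sketch, with invented chemical names and numbers, of how every chemical can individually pass while the mixture does not:

```python
# Hypothetical exposures and individual safe thresholds, in the same units.
# All names and numbers below are invented for illustration.
exposures = {"chemical_a": 0.6, "chemical_b": 0.5, "chemical_c": 0.4}
thresholds = {"chemical_a": 1.0, "chemical_b": 1.0, "chemical_c": 1.0}

# Each chemical passes its own safety check...
individually_safe = all(exposures[c] <= thresholds[c] for c in exposures)

# ...but for chemicals acting on the same pathway, the additive hazard
# index (sum of exposure/threshold ratios) is what matters.
hazard_index = sum(exposures[c] / thresholds[c] for c in exposures)

print(individually_safe)          # True: every chemical is below its limit
print(round(hazard_index, 2))     # 1.5: the mixture exceeds the benchmark of 1
```

This additive model is itself a simplification — it can't capture synergistic combinations — but it already shows why "each ingredient passed its test" is weaker reassurance than it sounds.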

"Low-dose exposure to X is safe" is based on thresholds often set decades ago, for single chemicals, in healthy adults, tested in isolation — before we understood endocrine disruption, before mixture toxicity research existed, and before we were studying vulnerable populations like fetuses and children.

"No Evidence of Harm" vs. "Evidence of No Harm"

These sound similar. They are not, and the distinction is the most important one in this section.

"There is no evidence to support this" can mean two very different things.

It can mean: we've studied this extensively, across multiple independent research groups, using rigorous methods, and the evidence consistently fails to support the hypothesis. This is what it means for aluminum and Alzheimer's disease. The hypothesis has been examined and found lacking. This is the scientific process working correctly.

It can also mean: no one has looked carefully, the research that's been done is underpowered, or the regulatory system doesn't require it. This is the situation for the majority of the 80,000+ chemicals in commercial use. "No evidence of harm" for a compound that has never been adequately studied is not reassurance — it's a data gap.

Knowing which version you're dealing with requires asking: has this been studied? At what level? By whom?

Verdict: The reassurance is overconfident. Not "scented candles will harm you" — but "this individual chemical at this level is fine" doesn't account for the mixture problem or the quality of the underlying safety data.

Phthalates

What Justified Concern Actually Looks Like

So far this section has shown you four claims that range from false to overconfident. Here's what it looks like when the evidence genuinely holds up.

Phthalates are plasticizers — compounds added to plastics to make them flexible — found in flooring, shower curtains, food packaging, fragranced products, and medical devices. They're also in nearly every human being. The CDC's National Biomonitoring Program has detected phthalate metabolites in the urine of the vast majority of Americans tested.

Run phthalates through the Bradford Hill criteria and they perform differently than aluminum.

Biological plausibility: Phthalates interfere with androgen signaling — they bind to nuclear receptors and disrupt hormonal communication, particularly during developmental windows. The mechanism is well-characterized and consistent with the outcomes being studied.

Consistency: The findings replicate. Independent research groups in the US, Europe, and Asia studying different populations and using different study designs consistently find associations between phthalate exposure and outcomes including reproductive development, thyroid function, and metabolic markers.

Dose-response relationship: Higher phthalate exposure is associated with larger effects in multiple studies, and the dose-response is particularly pronounced in vulnerable populations — male fetuses during critical developmental windows, for example.

Strength of association: Effect sizes are meaningful at realistic exposure levels, not only at doses orders of magnitude above what people actually encounter.

Regulatory action reflects the evidence: phthalates have been restricted in children's toys and childcare articles in the US, EU, Canada, and elsewhere. The EU's REACH regulation has classified several as substances of very high concern. These aren't precautionary restrictions in the absence of evidence — they're downstream of a substantial, consistent body of research.

Verdict: Supported. Phthalate concern is proportional to the evidence. This is what it looks like when Bradford Hill criteria are met rather than failed.

Formaldehyde Off-Gassing

Justified Concern — and a Lesson in Who Funded What

Formaldehyde is a colorless gas that off-gasses from pressed wood products, flooring, insulation, adhesives, and some textiles. It's also a normal byproduct of human metabolism, which industry representatives have historically used to suggest it's harmless at environmental levels. That argument is worth examining closely, because the funding history here is instructive.

When early research began linking formaldehyde to cancer in the 1970s and 1980s, the industries producing formaldehyde-emitting products funded studies designed to challenge that link. The Formaldehyde Council — an industry trade group — actively promoted research minimizing risk and lobbied against regulatory action. This is documented, not contested.

Meanwhile, independent research continued accumulating. IARC classified formaldehyde as a Group 1 carcinogen — known to cause cancer in humans — in 2004, based primarily on evidence for nasopharyngeal cancer and leukemia in occupationally exposed workers. The US National Toxicology Program followed with its own designation. These classifications weren't made by advocates — they were made by agencies whose explicit function is reviewing the totality of evidence.

What makes the formaldehyde story useful alongside the funding lesson is that the independent replication is what moved the needle. Industry-funded research that minimized risk existed alongside independent research that found harm. The finding that held up regardless of who was paying for it was the one that determined the regulatory outcome. That's the funding principle in action: not "industry funding makes a study wrong," but "findings that hold up across funding sources are more reliable than findings that exist primarily in one camp."

Dose still matters. Occupational exposure — industrial workers with sustained high-level exposure — drove the original cancer findings. Residential off-gassing is lower. But for people living in newer construction with substantial pressed wood content, or with new flooring, or in manufactured housing, the exposures are non-trivial and the concern is legitimate.

Verdict: Supported. Formaldehyde off-gassing is a real concern, particularly in high-emission environments. It's also a case where asking "who funded that?" and then watching what happened when independent research answered the question gives you a clear picture of how the process is supposed to work.

Practical Tools

The Evidence Hierarchy

Not all research carries the same weight. From strongest to weakest:

  • Regulatory assessment. WHO · IARC · EPA IRIS · NTP · ATSDR.

  • Meta-analysis. Pools data across multiple studies.

  • Systematic review. Analyzes all available studies on a topic.

  • Peer-reviewed paper. Evaluated by other scientists before publication.

  • Single study. One research group, one finding.

Where to Look It Up

Googling a chemical name plus "toxic" returns peer-reviewed science, wellness blogs, industry PR, and recycled fear in no particular order. Use these instead.

  • PubMed (database): pubmed.ncbi.nlm.nih.gov

  • IARC (classifications): iarc.who.int

  • EPA IRIS (regulatory assessments): epa.gov/iris

  • ATSDR (agency toxicological profiles): atsdr.cdc.gov

  • NTP (testing program): ntp.niehs.nih.gov

  • WHO (global health): who.int

6 Questions to Ask

You don't need to read scientific literature to use this framework.

Next: Part 4 — What Your Home Exposes You To ➜
