By Simone Ruggeri
March 4, 2026

It's Just Placebo

"Is homeopathy just placebo?" sounds like a straightforward empirical question. It is not. Before the first trial is designed, before the first patient is randomized, the question has already smuggled in an entire philosophy of what healing is, what the patient is, and what counts as real. That philosophy deserves examination before anyone attempts an answer.

This page does three things. First, it exposes the assumptions concealed inside the placebo question. Second, it walks through what clinical trials actually show — including results that persist even under conditions designed to make homeopathy invisible. Third, it reframes the broader context within the epistemological framework that governs this site.

The Objection

The "just placebo" objection usually arrives as a package of claims that, taken together, sound compelling.

First, many of the conditions people bring to a homeopath — pain, fatigue, mood, sleep, gastrointestinal discomfort — fluctuate over time. People tend to seek care when symptoms are at their worst. Even without any effective intervention, symptoms often improve afterward due to regression to the mean and the natural course of illness.
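Regression to the mean can be made concrete with a minimal simulation. All parameters here are invented for illustration: each person has a stable baseline severity plus day-to-day noise, and "consulting" happens only on an unusually bad day. With no intervention at all, the follow-up measurement drifts back toward the baseline.

```python
import random

random.seed(0)

def symptom(baseline):
    """Daily symptom severity: stable baseline plus random fluctuation."""
    return baseline + random.gauss(0, 2.0)

# Simulate 10,000 people, each with a fixed baseline severity of 5.
people = [5.0] * 10_000

# Day 1: a person 'seeks care' only when severity exceeds 8 (a bad day).
consulters = []
for b in people:
    day1 = symptom(b)
    if day1 > 8:
        consulters.append((b, day1))

# Day 2: re-measure the same people with no intervention whatsoever.
day1_mean = sum(d for _, d in consulters) / len(consulters)
day2_mean = sum(symptom(b) for b, _ in consulters) / len(consulters)

print(f"severity at consultation: {day1_mean:.2f}")
print(f"severity at follow-up:    {day2_mean:.2f}")  # drifts back toward 5
```

The selected group improves on average simply because it was selected at an extreme, which is exactly the pattern the objection describes.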

Second, homeopathic case-taking is typically long and attentive. A skilled consultation can improve the patient's experience of care: feeling heard, understood, and accompanied through a coherent clinical narrative. In many areas of healthcare, that context alone can influence how symptoms are perceived and reported.

Third, placebo is not the same as imaginary. Expectation, conditioning, and meaning responses can create measurable changes in pain ratings, anxiety scores, sleep quality, and other patient-reported outcomes. The real ingredient, skeptics argue, is the therapeutic encounter — not the remedy.

Fourth, whenever homeopathy appears to work in trials, skeptics argue it is often in small studies where methodological weaknesses can inflate effect sizes: inadequate randomization, imperfect blinding, selective outcome reporting, and researcher degrees of freedom. When analyses focus only on larger, higher-quality trials, the apparent advantage narrows or disappears.

Finally: even if the consultation helps, the remedy itself is inert. Remove the consultation and nothing remains. Any observed benefit is attributed to context, expectations, and bias — not a specific effect of the preparation.

This is a serious objection. It deserves a serious response. But first, it deserves something it almost never receives: an examination of what it assumes.

What the Placebo Question Assumes

The question "Is it just placebo?" presents itself as neutral — a simple request for data. But every word in it carries philosophical freight.

The patient as mechanism. The placebo question assumes that the patient is a biochemical system whose responses can be fully described in molecular terms. A molecule acts on a receptor, a pathway is modulated, a biomarker shifts. If no molecule is identified, no real action has occurred. (The related objection that potentized remedies are "just water" rests on this same assumption.) The patient's role in the healing process — their participation, their vital responsiveness, their self-governance as a living organism — does not enter the equation.

Healing as chemical event. The question assumes that therapeutic action is fundamentally a chemical event: a substance binds to a receptor, a signal cascade follows, a measurable physiological change results. Anything not attributable to a specific molecular mechanism is classified as "just" context, expectation, or meaning — that is, not real in the way chemistry is real. This is a metaphysical commitment, not an empirical finding.

The hierarchy concealed in "just." The word "just" in "just placebo" reveals the deepest assumption. It implies that molecules are primary reality and that meaning, relationship, and the therapeutic encounter are lesser realities — epiphenomena to be controlled for, not forces to be understood. Massimo Scaligero captured the hidden logic: "Materialism is man's faith in matter, which he does not know how to experience through the concrete forces of thought. It is the most obscure mysticism, because it considers itself the opposite of mysticism."

The encounter as noise. The RCT treats the therapeutic relationship as a confound — something to be held constant so that the "real" variable (the chemical) can be isolated. But in homeopathic practice, the encounter is the medium through which the practitioner perceives the totality of symptoms and selects the simillimum. It is not noise. It is the primary act of clinical knowing. Treating it as a variable to be controlled for is like testing whether a language is meaningful by removing the grammar and analyzing the letters.

The individual case as epistemically worthless. The placebo question can only be answered, within the materialistic framework, by averaging across populations. But in homeopathic practice, the individual case is the primary unit of knowledge. The striking, singular, uncommon, and peculiar symptoms — what Hahnemann called the foundation of remedy selection (Organon, Aphorism 153) — are precisely what population averaging destroys.

None of these assumptions is self-evident. Each is a consequence of a specific philosophical inheritance — what the epistemological framework on this site traces to the Kantian conviction that human beings cannot know reality directly and must rely on statistical proxies. For the full account, see How We Know What We Know.

Understanding these assumptions does not make the clinical data irrelevant. It means we can read the data with clear eyes — recognizing what the RCT is designed to detect, what it is designed to exclude, and what it means when a signal persists despite a methodology that makes the phenomenon structurally difficult to see.

The RCT Evidence: A Signal That Persists

The randomized controlled trial strips away the consultation, blinds the practitioner, standardizes the remedy, averages across patients who may each need a different preparation, and measures only what its instruments can detect. It is a methodology designed for standardized pharmaceutical interventions — and when applied to an individualized, participatory medicine, it systematically eliminates the conditions under which that medicine operates.

And yet, even under these conditions, a signal persists. This is not proof in the sense that materialistic science demands. It is something more interesting: a trace left by a reality that the measuring instrument was not built to see.

Meta-Analyses: The Big-Picture View

The most informative evidence comes from systematic reviews that pool data across multiple trials. Three meta-analyses are central to this discussion.

Linde et al. (1997) published the largest meta-analysis of homeopathy in The Lancet, analyzing 89 placebo-controlled trials. The pooled odds ratio was 2.45 (95% CI: 2.05-2.93) — a result incompatible with the hypothesis that all effects are placebo. The authors concluded that the overall direction of the data was not consistent with the placebo-only explanation. A 1999 re-analysis by the same team, applying stricter quality filters, reduced the effect size but did not eliminate it.

Mathie et al. (2014) conducted the most rigorous systematic review of individualized homeopathy specifically, published in Systematic Reviews. Across 22 trials with extractable data (from 32 eligible RCTs), they reported a pooled odds ratio of 1.53 (95% CI: 1.22-1.91). In a sensitivity analysis restricted to three trials categorized as providing "reliable evidence" (low risk of bias), the odds ratio rose to 1.98 (95% CI: 1.16-3.38). The direction of effect was consistent, and the result crossed the threshold of statistical significance. This review is important because it isolates a core part of homeopathic practice: individualized remedy selection rather than one-size-fits-all prescribing.

Shang et al. (2005) reached the opposite conclusion in The Lancet. In their restricted analysis of larger, higher-quality homeopathy trials, the pooled odds ratio was 0.88 with a confidence interval crossing 1.0 — interpreted as no convincing evidence of superiority over placebo in that subset. However, as Witt et al. (2005) noted in their published response (Lancet 366:2081-2082), and as Ludtke and Rutten (2008) demonstrated in a formal re-analysis, the conclusion rested on a final subset of just 8 trials selected by size criteria. Different but equally defensible inclusion criteria produced different results. The Shang analysis is legitimate research, but it is considerably less definitive than its media coverage suggested.

The key observation is not that one meta-analysis wins. It is that meta-analytic conclusions can legitimately diverge depending on how quality is defined, how heterogeneity is handled, what gets included at the final step, and how much weight sensitivity analyses receive. This is a feature of the methodology, not a deficiency in the phenomenon being studied.
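The pooled odds ratios quoted above come from inverse-variance weighting of per-trial log odds ratios. The following is a minimal fixed-effect sketch of that calculation; the three trials are hypothetical numbers invented for illustration, not data from the cited reviews.

```python
import math

def pooled_odds_ratio(trials):
    """Fixed-effect inverse-variance pooling of log odds ratios.

    Each trial is (events_treatment, n_treatment, events_control, n_control).
    Returns (pooled OR, 95% CI lower bound, 95% CI upper bound).
    """
    num, den = 0.0, 0.0
    for a, n1, c, n2 in trials:
        b, d = n1 - a, n2 - c           # non-events in each arm
        log_or = math.log((a * d) / (b * c))
        var = 1/a + 1/b + 1/c + 1/d     # variance of the log odds ratio
        w = 1 / var                     # inverse-variance weight
        num += w * log_or
        den += w
    pooled = num / den
    se = math.sqrt(1 / den)
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))

# Hypothetical trials: (improved on remedy, n remedy, improved on placebo, n placebo)
trials = [(30, 50, 20, 50), (45, 80, 32, 80), (18, 40, 15, 40)]
or_, lo, hi = pooled_odds_ratio(trials)
print(f"pooled OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Dropping or adding a single trial changes both the point estimate and the confidence interval, which is the mechanical reason inclusion criteria matter so much at the final step of a meta-analysis.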

Individual RCTs: Specific Conditions

Beyond the meta-analytic picture, several individual RCTs are relevant because of their design quality or their specific population.

Bell et al. (2004) conducted a double-blind, randomized, placebo-controlled trial of individualized homeopathy in fibromyalgia. Participants received individualized LM potencies and were assessed on multiple outcomes, including tender point measures and patient-reported scales. The active group showed greater improvements than placebo on several primary outcomes. The sample size was modest (62 enrolled, 53 completers). But the design — where both groups receive equivalent consultations and only the remedy differs under blinding — directly tests whether the remedy contributes something beyond the encounter alone. It did.

Jacobs et al. (2003) combined data from three double-blind RCTs of individualized homeopathic treatment for childhood diarrhea in developing countries. The combined analysis of 242 children found a statistically significant reduction in duration of diarrhea in the treatment group compared to placebo. This population is relevant for reasons discussed below.

Populations Where Placebo Explanations Break Down

The placebo hypothesis has its cleanest explanatory power in adult humans who know they are receiving a treatment and can form expectations about it. It becomes considerably harder to sustain in populations where cognitive expectation is minimal or absent.

Animal studies. Bonamin and Endler (2010) published a critical review of animal models used in homeopathy and high-dilution research, discussing methodological variability, controls, and conceptual issues across the literature. Endler et al. (2010) conducted a bibliometric study examining replication patterns for fundamental research models using homeopathically prepared dilutions beyond 10^-23, finding that multiple independent groups had reported comparable results across different experimental systems. Clausen et al. (2011) reviewed the use of high potencies in basic research on homeopathy, cataloguing experimental approaches and their reproducibility. Within amphibian research specifically, Weber et al. (2008) investigated the effects of homeopathically prepared thyroxine on highland frogs, reporting measurable developmental effects compared to controls.

The frogs were not experiencing a "meaning response." They were not forming expectations about their treatment. Something acted. The methodological rigor varies across this literature, and not all studies have been independently replicated. But the collective data does not sit comfortably within an explanation that relies entirely on cognitive expectation. The question these results raise is not how to explain them within the materialistic framework, but whether that framework has the explanatory resources to account for what was observed.

Infant and young child studies. The Jacobs et al. (2003) childhood diarrhea data involved children ages six months to five years in double-blind conditions. While young children are not immune to all contextual effects — parental expectation, for instance, could influence caregiver-reported outcomes — the argument that remedies are acting purely through cognitive expectation becomes increasingly difficult to maintain with toddlers who do not know what a clinical trial is.

These populations do not settle the question on materialistic terms. They do something more important: they reveal that the placebo-only hypothesis is not a neutral empirical conclusion but a framework-dependent interpretation that runs into difficulties when applied to phenomena it was not designed to explain.

The Consultation and the Remedy

The objection often frames homeopathy as having to choose between two explanations: it is the consultation, or it is the remedy. In practice, homeopathy is a system where consultation and remedy are not two separable inputs but two aspects of a single participatory act. The consultation is how the practitioner perceives the totality of symptoms — the gestalt of the case, the meaningful whole that Hahnemann called the Inbegriff. The remedy is the expression of what that perception revealed. Separating them is like asking how much of a sentence's meaning comes from its nouns and how much from its syntax.

The RCT attempts this separation by holding the consultation constant and varying only the remedy. This design can detect whether the remedy contributes something measurable under blinding — and several trials show that it does. But the design simultaneously prevents the practitioner from knowing what they are doing, which is the epistemological opposite of what homeopathic practice requires. Hahnemann's ideal of the "unprejudiced observer" — the practitioner who approaches the case with "freedom from prejudice and sound senses" (Organon, Aphorism 83) — is a practitioner who has cultivated the capacity to know, not one who has been prevented from knowing.

The fact that a signal persists even under conditions that blind the practitioner, standardize the remedy, and average across patients who may each need something different is not a modest result. It is a trace of something the methodology was built to make invisible.

What the Methodology Cannot See

An honest reading of the evidence requires distinguishing between limitations in the evidence and limitations in the instrument.

The RCT was designed for standardized pharmaceutical interventions. It tests whether Drug X, given to everyone with Condition Y, outperforms a sugar pill on average. This is a legitimate question for a medicine that treats diagnostic categories with standardized chemicals. It is a structurally inappropriate question for a medicine that treats individuals with preparations selected on the basis of their unique totality of symptoms.

Small sample sizes reflect the field's resources, not the phenomenon's weakness. Homeopathy research has been chronically underfunded. The fact that most trials are small tells us something about institutional priorities and funding structures; it tells us nothing about the reality of the phenomenon being investigated.

Quality ratings reflect the RCT's own criteria. When a systematic review rates evidence quality as "low" or "unclear," it is applying quality criteria developed for pharmaceutical research — criteria such as standardization of the intervention, homogeneity of the study population, and completeness of blinding. A homeopathic trial that individualizes the prescription (as the practice requires) will inevitably score lower on "standardization." This is not a flaw in the trial. It is an artifact of applying a measuring instrument calibrated for one kind of medicine to a different kind of medicine.

Meta-analyses that conclude "no effect" reflect analytical choices. Shang et al. (2005) is the most prominent, and the NHMRC (2015) review in Australia reached a similar conclusion. I have discussed the methodological criticisms of Shang elsewhere. The NHMRC review excluded all trials with fewer than 150 participants — a threshold that eliminates most of the homeopathy trial literature. These analyses are not fabricated, but their conclusions are strongly determined by inclusion criteria. Different but equally defensible criteria produce different results.
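How strongly a size threshold can shape a pooled estimate is easy to demonstrate. The trial set below is entirely invented (it is not the Shang or NHMRC data); it simply mimics the pattern in which smaller trials report larger effects, so that filtering by sample size pulls the pooled odds ratio toward the null.

```python
import math

# Hypothetical trials: (sample size, observed log odds ratio, variance of log OR).
# Smaller trials here show larger effects, mimicking the pattern critics describe.
trials = [
    (40,  math.log(2.6), 0.40),
    (60,  math.log(2.1), 0.30),
    (90,  math.log(1.8), 0.22),
    (160, math.log(1.2), 0.12),
    (300, math.log(1.0), 0.07),
]

def pooled_or(subset):
    """Fixed-effect inverse-variance pooled odds ratio."""
    w_sum = sum(1 / v for _, _, v in subset)
    est = sum(lor / v for _, lor, v in subset) / w_sum
    return math.exp(est)

all_trials = pooled_or(trials)
large_only = pooled_or([t for t in trials if t[0] >= 150])

print(f"all trials:              OR = {all_trials:.2f}")
print(f"only trials with n>=150: OR = {large_only:.2f}")
```

Whether the size-effect correlation reflects bias in small trials or a real feature of the intervention cannot be decided by the filter itself; the filter only determines which answer the pooled number reports.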

Publication bias operates across all of medicine. Studies showing positive effects are more likely to be published than negative ones. This is a real concern, and it applies to pharmaceutical research at least as much as to homeopathy research. Some meta-analyses have attempted to assess publication bias (Linde et al. included funnel plot analysis). The concern is legitimate; it is not specific to homeopathy.

The pattern across all five of these observations is the same: what appear to be weaknesses in homeopathy's case are, on closer examination, consequences of applying an instrument designed for one kind of knowledge to a fundamentally different kind of knowledge. A metal detector that fails to find a wooden box has not proven the box does not exist. It has demonstrated the limits of metal detection.

The Broader Context: A Different Way of Knowing

The "just placebo" framing carries an assumption that runs deeper than methodology. It assumes that if a treatment cannot be shown to work independent of context — independent of the encounter, the meaning, the practitioner's perception, the patient's participation — then it has no real therapeutic value. Everything that is not a molecule acting on a receptor is noise.

This assumption is not an empirical discovery. It is the consequence of a specific philosophical inheritance — the conviction, traced in detail in How We Know What We Know, that human beings cannot know reality directly and that only what can be quantified, isolated, and statistically generalized counts as real. The RCT is this conviction's instrument. The evidence hierarchy that places meta-analyses at the apex and clinical experience at the nadir is this conviction's organizational chart.

But what if the encounter is not noise? What if the practitioner's perception of the totality of symptoms — the gestalt, the meaningful configuration of what is striking, singular, and peculiar in this patient — is itself a form of knowledge? What if the vital force, which Hahnemann describes as the self-governing dynamic principle of the living organism (Organon, Aphorism 9), is not a prescientific fantasy but an ontological reality that determines how the organism falls ill and how it heals?

In this framework — the participatory framework that runs through Goethe's delicate empiricism, Steiner's philosophy, Barfield's evolution of consciousness, the Chinese medical tradition, and Hahnemann's Organon — the relevant question is not "Does the remedy outperform an inert substance under conditions designed to eliminate all participatory knowing?" It is: "Does the full homeopathic encounter — consultation, perception, remedy selection, potentized preparation, follow-up — reliably restore health in the individual case?"

That question has been answered affirmatively by over two centuries of systematic clinical practice, by a materia medica built through the most participatory form of empirical research in any medical tradition (the proving), and by the lived experience of millions of patients worldwide. This is not anecdote. It is the accumulated knowledge of a tradition that takes the individual case seriously as the primary unit of medical knowledge — rather than treating it as noise to be averaged away.

Large observational studies confirm what practitioners have long observed. The EPI3 study in France found that patients of homeopathic GPs achieved comparable clinical outcomes with substantially lower use of NSAIDs and antibiotics. From a clinical and public health perspective, these outcomes have value — and the question of whether they are attributable to "the remedy" or "the encounter" is a question that only makes sense within the materialistic framework's insistence on separating what homeopathic practice holds to be inseparable.

Summary

The "just placebo" question is not a neutral empirical inquiry. It assumes that the patient is a mechanism, that healing is a chemical event, that the therapeutic encounter is noise, and that the individual case is epistemically worthless. Each of these assumptions is a philosophical commitment inherited from a specific tradition — a tradition that homeopathy's epistemology identifies and addresses.

Within the RCT framework — a framework designed for standardized pharmaceutical interventions — the pooled data nonetheless shows effects that are statistically distinguishable from placebo. The most rigorous recent review (Mathie et al., 2014) found a significant effect for individualized homeopathy. Individual trials in fibromyalgia (Bell et al., 2004) and childhood diarrhea (Jacobs et al., 2003) show positive results under double-blind conditions. Animal and infant studies reveal phenomena that the placebo hypothesis cannot accommodate without expanding into territory it was not designed to cover.

That a signal persists even under conditions that blind the practitioner, standardize what should be individualized, and average across patients who each need something different is not a modest finding. It is a trace left by a reality that the measuring instrument was built to make invisible. The appropriate response is not to keep recalibrating the metal detector. It is to recognize that different kinds of knowledge require different instruments — and that homeopathy possesses its own, grounded in over two centuries of systematic practice.

For the full epistemological framework underlying this analysis, see How We Know What We Know. For a broader view of the clinical research landscape including the meta-analyses discussed above, see the Evidence Overview.

Frequently Asked Questions

Has homeopathy ever outperformed placebo in a clinical trial?

Yes, multiple times. The Mathie et al. (2014) systematic review identified 32 eligible RCTs of individualized homeopathy and found a pooled effect significantly favoring homeopathy over placebo. Individual trials in childhood diarrhea (Jacobs et al., 2003), fibromyalgia (Bell et al., 2004), and other conditions have also reported statistically significant differences. The claim that homeopathy has never outperformed placebo is factually incorrect.

Why do meta-analyses disagree so much?

Because meta-analyses are not a single machine that produces "the truth." They involve choices: inclusion criteria, how trial quality is defined, how outcomes are standardized, and how heterogeneity is handled. Linde et al. (1997) pooled broadly and reported a positive overall effect. Shang et al. (2005) restricted attention to larger, higher-quality trials and reported weak evidence for a specific effect. Mathie et al. (2014) focused specifically on individualized homeopathy and found a small pooled effect with explicit caution about evidence quality. Different questions and different analytical filters can yield legitimately different results. The divergence tells us as much about the methodology as about the phenomenon.

How do you explain positive results in animal studies?

The animal studies present a challenge that the placebo hypothesis is not equipped to handle. Animals do not form cognitive expectations about treatment. Bonamin and Endler (2010) reviewed the animal research literature in homeopathy, and Endler et al. (2010) examined replication patterns across fundamental research models for ultra-high dilutions. Weber et al. (2008) found measurable developmental effects in highland frogs treated with homeopathically prepared thyroxine. These studies are not immune to all sources of bias — experimenter effects, for instance — but they reveal that the phenomenon exceeds the explanatory resources of a framework built entirely on cognitive expectation. The question is not how to fit these results into the materialistic paradigm, but what they tell us about that paradigm's limits.

Is it possible that the consultation, not the remedy, is responsible for the benefits?

This question assumes that the consultation and the remedy are separable components whose contributions can be independently weighed. In homeopathic practice, they are two aspects of a single act: the consultation is how the practitioner perceives the totality of symptoms; the remedy is the expression of what that perception revealed. The RCT attempts to separate them by holding the consultation constant and varying only the remedy. Several RCTs still show a difference between remedy and placebo groups under these conditions — which indicates that the remedy contributes something the encounter alone does not account for. But the deeper point is that asking "which one is really doing the work?" is like asking whether a sentence's meaning comes from its nouns or its syntax. The question misunderstands what it is examining.

What would it take to settle this debate?

The debate cannot be settled within a single paradigm. The materialistic framework lacks the conceptual resources to evaluate a participatory medicine on its own terms — just as Aristotelian physics lacked the conceptual resources to evaluate Copernicanism. Larger, better-funded RCTs would produce more data within the existing framework, and that data would continue to be read in contradictory ways depending on analytical choices. The path forward is methodological pluralism: research methods appropriate to what homeopathy actually is. This means whole-systems research that preserves individualization, case documentation that treats the individual as the primary unit of knowledge, n-of-1 trials, and pragmatic studies that measure real-world outcomes. Different paradigms require different methods of evaluation. The insistence that one methodology adjudicate all claims is not rigor — it is the unexamined assumption that one way of knowing is the only way of knowing.

References

  1. Linde, K., Clausius, N., Ramirez, G., et al. Are the clinical effects of homoeopathy placebo effects? A meta-analysis of placebo-controlled trials. The Lancet. 1997;350(9081):834-843.
  2. Linde, K., Scholz, M., Ramirez, G., et al. Impact of study quality on outcome in placebo-controlled trials of homeopathy. Journal of Clinical Epidemiology. 1999;52(7):631-636.
  3. Mathie, R.T., Lloyd, S.M., Legg, L.A., et al. Randomised placebo-controlled trials of individualised homeopathic treatment: systematic review and meta-analysis. Systematic Reviews. 2014;3:142.
  4. Shang, A., Huwiler-Muntener, K., Nartey, L., et al. Are the clinical effects of homoeopathy placebo effects? Comparative study of placebo-controlled trials of homoeopathy and allopathy. The Lancet. 2005;366(9487):726-732.
  5. Witt, C.M., Ludtke, R., Willich, S.N. Are the clinical effects of homoeopathy placebo effects? The Lancet. 2005;366(9503):2081-2082.
  6. Linde, K., Jonas, W. Are the clinical effects of homoeopathy placebo effects? The Lancet. 2005;366(9503):2081-2082.
  7. Ludtke, R., Rutten, A.L.B. The conclusions on the effectiveness of homeopathy highly depend on the set of analyzed trials. Journal of Clinical Epidemiology. 2008;61(12):1197-1204.
  8. Bell, I.R., Lewis, D.A. II, Brooks, A.J., et al. Improved clinical status in fibromyalgia patients treated with individualized homeopathic remedies versus placebo. Rheumatology (Oxford). 2004;43(5):577-582.
  9. Jacobs, J., Jonas, W.B., Jimenez-Perez, M., Stoll, D. Homeopathy for childhood diarrhea: combined results and meta-analysis from three randomized, controlled clinical trials. Pediatric Infectious Disease Journal. 2003;22(3):229-234.
  10. Bonamin, L.V., Endler, P.C. Animal models for studying homeopathy and high dilutions: conceptual critical review. Homeopathy. 2010;99(1):37-50.
  11. Endler, P.C., Thieves, K., Reich, C., et al. Repetitions of fundamental research models for homeopathically prepared dilutions beyond 10^-23: a bibliometric study. Homeopathy. 2010;99(1):25-36.
  12. Clausen, J., van Wijk, R., Albrecht, H. Review of the use of high potencies in basic research on homeopathy. Homeopathy. 2011;100(4):288-292.
  13. Weber, S., Endler, P.C., Welles, S.U., et al. The effect of homeopathically prepared thyroxine on highland frogs: influence of electromagnetic fields. Homeopathy. 2008;97(1):3-9.
  14. National Health and Medical Research Council. NHMRC Information Paper: Evidence on the effectiveness of homeopathy for treating health conditions. NHMRC, 2015.
  15. Grimaldi-Bensouda, L., Begaud, B., Rossignol, M., et al. Management of upper respiratory tract infections by different medical practices, including homeopathy, and consumption of antibiotics in primary care: the EPI3 cohort study in France 2007-2008. PLoS ONE. 2014;9(3):e89990.