this post was submitted on 14 Aug 2024

But Marks points out that the FDA typically follows the advice of its independent advisory committees — and the one that evaluated MDMA in June overwhelmingly voted against approving the drug, citing problems with clinical trial design that the advisers felt made it difficult to determine the drug’s safety and efficacy. One concern was about the difficulty of conducting a true placebo-controlled study with a hallucinogen: around 90% of the participants in Lykos’s trials guessed correctly whether they had received the drug or a placebo, and the expectation that MDMA should have an effect might have coloured their perception of whether it treated their symptoms.

Another concern was about Lykos’s strategy of administering the drug alongside psychotherapy. Rick Doblin, founder of the Multidisciplinary Association for Psychedelic Studies (MAPS), the non-profit organization that created Lykos, has said that he thinks the drug’s effects are inseparable from guided therapy. MDMA is thought to help people with PTSD be more receptive and open to revisiting traumatic events with a therapist. But because the FDA doesn’t regulate psychotherapy, the agency and advisory panel struggled to evaluate this claim. “It was an attempt to fit a square peg into a round hole,” Marks says.

[–] BackOnMyBS@lemmy.autism.place 12 points 2 months ago (2 children)

around 90% of the participants in Lykos’s trials guessed correctly whether they had received the drug or a placebo

I understand the logic with using a placebo comparison, but who cares if people got better solely because they know they took ecstasy?

[–] SynonymousStoat@lemmy.world 21 points 2 months ago (2 children)

I'm no scientist, but I don't really know how you can have a study of a psychoactive drug and the participants not be able to guess if they had the drug or the placebo.

[–] Nighed@sffa.community 9 points 2 months ago

Give them a different psychoactive drug I guess... Not really a true placebo though.

[–] WhatAmLemmy@lemmy.world 4 points 2 months ago* (last edited 2 months ago) (1 children)

These people are scientific bureaucrats who just go "computer says no". This is clearly a case where "the gold standard" fails and another approach is necessary. That's if they're not on the payroll of big pharma to hamstring adoption of alternatives they can't patent.

[–] ArcticDagger@feddit.dk 3 points 2 months ago (1 children)

I agree that it's a shame that it's so difficult to eliminate the placebo effect from psychoactive drugs. There are probably alternative ways of teasing out the effect, if any, of MDMA therapy, but human studies take a long time and, consequently, cost a lot of money. I'd imagine the researchers would love to do the studies, but don't have the resources for it

I think the critique about conflicts of interest seems a bit misguided. It's not the scientists who don't want to move further with this. It's the FDA

[–] lolcatnip@reddthat.com 3 points 2 months ago (1 children)

I don't think the idea of a placebo effect is even valid for a treatment for which no placebo exists. At best, it's a thought experiment, but IMHO it's more of a distinction without a difference.

[–] ArcticDagger@feddit.dk 0 points 2 months ago (1 children)

That's an interesting point. But maybe there are some compounds that can induce a state that fools people who've never tried psychoactive compounds? I've heard of studies using heavy water as a placebo for alcohol, as it induces some of the same effects:

Like ethanol, heavy water temporarily changes the relative density of cupula relative to the endolymph in the vestibular organ, causing positional nystagmus, illusions of bodily rotations, dizziness, and nausea. However, the direction of nystagmus is in the opposite direction of ethanol, since it is denser than water, not lighter.

https://en.m.wikipedia.org/wiki/Heavy_water

[–] WhatAmLemmy@lemmy.world 2 points 2 months ago* (last edited 2 months ago)

That example is not a placebo. It's the opposite of a placebo. A placebo is supposed to be the control, the baseline "truth" in a hypothesis. The entire idea of the placebo effect is that the individual's own psychology — their expectation of an effect — induces a physiological response, which pollutes the baseline hypothesis and all test data. Thus, the entire purpose of a double blind is to keep that bias from impacting the researcher, or the rat being studied.

That is fucking stupid when studying pretty much any drug people bother to take recreationally. They take them recreationally because they have an acutely noticeable effect. Unless you're a virgin amish person or child, you're gonna know when you're drunk or high; MDMA, LSD, or Psilocybin are on a whole other level, especially at the doses taken for psychiatric treatment. A placebo would only make sense if you were testing micro-doses that are so low they're widely considered to be imperceptible.

So no. The "gold standard" is wholly insufficient to adequately study drugs that induce a significant psychological response. These drugs need to be analyzed by people who hold a greater understanding of their effects, and our perception of reality, than bureaucrats who have zero experience with what they're studying. The only thing worse than a pseudo double blind would be rejecting significant drugs because they don't fit into our existing ape-like understanding of reality (or capitalism), resigning to "computer says no", and preventing millions of people from receiving an improvement in their quality of life; ignorance, stupidity, and maliciousness can cause the same level of damage.

[–] ArcticDagger@feddit.dk 12 points 2 months ago (2 children)

But if they know they're getting ecstasy, the improvement might originate from placebo which means that they're not actually getting better from ecstasy. They're just getting better because they think they should be getting better

[–] Hamartiogonic@sopuli.xyz 9 points 2 months ago (1 children)

Yeah, that’s the thing with placebo. It’s surprisingly effective, and separating the psychological effect from the actual chemistry can be very tricky. If most participants can correctly identify whether they’re being fed the real drug or a placebo, it becomes impossible to figure out how much each effect contributes to the end result. Ideally, you would only use effective medicine that does not need the placebo effect to actually work.

Imagine, if all medicine had lots of placebo effect in them. How would you treat patients who are in a coma or otherwise unconscious?

[–] rand_alpha19@moist.catsweat.com 2 points 2 months ago (2 children)

So, let's just use an example of a pill that treats headaches so I can understand, because I'm kinda stupid.

It works super well, and most patients taking it in double blind trials find it relieves headache pain considerably. Why is it a bad thing, to the point of rejecting it as a treatment, that the patient feels that the pill is working very well and has concluded on their own that this is probably not a placebo?

I can understand a patient being misled by coincidence, but surely a measurable, verifiable, and repeatable benefit to the patient compared to pills without medicinal ingredients would warrant a different conclusion, wouldn't it?

In your coma scenario, I'm sure there is a statistical analysis that can be performed to show with a degree of certainty that a specific medication has a higher likelihood of being effective than a placebo in a controlled experiment.

I commented on this same story a while ago when it first broke that it was likely to be rejected and I don't think anyone explained it in the thread.

[–] qaz@lemmy.world 3 points 2 months ago* (last edited 2 months ago) (1 children)

It works super well, and most patients taking it in double blind trials find it relieves headache pain considerably. Why is it a bad thing, to the point of rejecting it as a treatment, that the patient feels that the pill is working very well and has concluded on their own that this is probably not a placebo?

The problem is that it's not a double blind trial because the participants can tell whether they are on it. The placebo effect is also a problem because there is no real control group.

[–] rand_alpha19@moist.catsweat.com 3 points 2 months ago (1 children)

But why is that such a problem that it's worth rejecting what is otherwise widely considered an effective treatment?

I am fundamentally not understanding the inherent risk to patients resulting from the structure of the study that is apparently so harmful that it must not continue.

Why is being able to tell that your medication is working a negative thing in a study? And such a negative thing that it apparently negates all other positive aspects of the medication.

[–] qaz@lemmy.world 3 points 2 months ago (1 children)

The problem is that you can't tell if it's truly working due to the placebo effect.

[–] rand_alpha19@moist.catsweat.com 2 points 2 months ago (1 children)

Yeah, I understand that. But if there's a measurable difference between the efficacy of the 2 pills that even the patient is obviously aware of, why does that warrant extreme caution versus another pill that doesn't have this effect?

Like why is it better to have a study in which the patient literally can't tell the difference between treatments? Why is it not detrimental for a federal agency to unilaterally dismiss this?

I understand that people online aren't obligated to engage with me thoughtfully, but I was hoping for an actual explanation that is longer than 50 words from someone who is more knowledgeable than me regarding the validity of scientific experiments as they relate to pharmaceuticals.

[–] Hamartiogonic@sopuli.xyz 2 points 2 months ago* (last edited 2 months ago) (1 children)

The idea of modern medicine is to sell chemical compounds that actually have an effect. It’s a philosophical and ethical thing. All products have a unique psychological effect that gets intertwined with their biochemical effect. If you can’t study them individually, it’s impossible to tell if the biochemical effect even exists at all. If your medicine relies heavily, or even entirely, on the psychological side, it’s no different than homeopathy. The idea of modern medicine is to be better than the old stuff that preceded it.

I prefer to think of this as an equation like this: Pm+Bm=Pp+Bp

Pm=psychological effect, medicine

Bm=biochemical effect, medicine

Pp=psychological effect, placebo = surprisingly big

Bp=biochemical effect, placebo = 0

If these sides are equal, the medicine is just as effective as placebo. If the medicine side is bigger, you’ll want to know how much of it comes from the P and B terms. In order to figure that out, you would need to know some values. Normally, you can just assume that Pm=Pp, but if you can’t assume that, you’re left with two unknowns in one equation. In this case, you really can’t assume them to be equal, which means that your data won’t allow you to figure out how much of the total effect comes from psychological and biochemical effects. It could be 50/50, 10/90, who knows. That sort of uncertainty is a serious problem, because of the philosophical and ethical side of developing medicine.
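The two-unknowns problem above can be sketched numerically. All values here are made up purely for illustration; in a real trial only the sums (the observed group outcomes) are visible to the analyst:

```python
# Hypothetical numbers: each arm's observed improvement is the sum of a
# psychological term (P) and a biochemical term (B), but only the sums
# are observable.

Pp = 2.0   # psychological effect, placebo arm (surprisingly big)
Bp = 0.0   # biochemical effect, placebo arm
Bm = 3.0   # true biochemical effect of the medicine (what we want to know)

# Blinded trial: assume Pm == Pp, so subtracting the arms isolates Bm.
Pm = Pp
observed_medicine = Pm + Bm
observed_placebo = Pp + Bp
estimated_Bm = observed_medicine - observed_placebo
print(estimated_Bm)  # 3.0: the subtraction recovers the true effect

# Broken blinding: participants who guess they got the drug expect more,
# so Pm > Pp, and the same subtraction overstates the drug's effect.
Pm = 4.0
observed_medicine = Pm + Bm
estimated_Bm = observed_medicine - observed_placebo
print(estimated_Bm)  # 5.0: psychological and biochemical parts are confounded
```

Once Pm ≠ Pp you have two unknowns and one equation, so no arithmetic on the observed sums can split that 5.0 into its P and B parts.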

[–] qaz@lemmy.world 3 points 2 months ago

biochemical effect, placebo = 0

I'm not sure if the biochemical effects of a placebo are 0.

In conditioning, a neutral stimulus saccharin is paired in a drink with an agent that produces an unconditioned response. For example, that agent might be cyclophosphamide, which causes immunosuppression. After learning this pairing, the taste of saccharin by itself is able to cause immunosuppression, as a new conditioned response via neural top-down control. Such conditioning has been found to affect a diverse variety of not just basic physiological processes in the immune system but ones such as serum iron levels, oxidative DNA damage levels, and insulin secretion. Recent reviews have argued that the placebo effect is due to top-down control by the brain for immunity and pain. Pacheco-López and colleagues have raised the possibility of "neocortical-sympathetic-immune axis providing neuroanatomical substrates that might explain the link between placebo/conditioned and placebo/expectation responses". There has also been research aiming to understand underlying neurobiological mechanisms of action in pain relief, immunosuppression, Parkinson's disease and depression.

Shamelessly stolen from Wikipedia because I couldn't find the original source

[–] Hamartiogonic@sopuli.xyz 2 points 2 months ago

Statistical tests are very picky. They have been designed by mathematicians in a mathematical ideal vacuum void of all reality. The method works in those ideal conditions, but when you take that method and apply it in messy reality where everything is flawed, you may run into some trouble. In simple cases, it’s easy to abide by the assumptions of the statistical test, but as your experiment gets more and more complicated, there are more and more potholes for you to dodge. Best case scenario is, your messy data is just barely clean enough that you can be reasonably sure the statistical test still works well enough and you can sort of trust the result up to a certain point.

However, when you know for a fact that some of the underlying assumptions of the statistical test are clearly being violated, all bets are off. Sure, you get a result, but who in their right mind would ever trust that result?

If the test says that the medicine works, there’s clearly a financial incentive to believe it and start selling those pills. If it says that the medicine is no better than placebo, there’s a similar incentive to reject the test result and demand more experiments. Most of that debate goes out the window if you can be reasonably sure that the data is good enough and the result of your statistical test is reliable enough.
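As a toy illustration of how a violated blinding assumption poisons the comparison (all numbers invented, nothing to do with the actual Lykos data):

```python
import random

random.seed(0)

def arm_difference(n, biochem_effect, expectation_boost):
    """Mean improvement in the drug arm minus the placebo arm.
    expectation_boost models the extra placebo response of participants
    who correctly guess they received the drug (broken blinding)."""
    drug = [random.gauss(biochem_effect + expectation_boost, 1.0) for _ in range(n)]
    placebo = [random.gauss(0.0, 1.0) for _ in range(n)]
    return sum(drug) / n - sum(placebo) / n

# Biochemically inert drug, blinding intact: the difference stays near zero.
blinded = arm_difference(1000, biochem_effect=0.0, expectation_boost=0.0)

# Same inert drug, but participants guess their arm and expect to improve:
# the arms now separate even though the drug itself does nothing.
unblinded = arm_difference(1000, biochem_effect=0.0, expectation_boost=1.0)

print(round(blinded, 2), round(unblinded, 2))
```

A naive statistical test on the second trial would happily declare the inert drug effective, which is exactly the "underlying assumptions clearly violated, all bets are off" situation described above.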

[–] BackOnMyBS@lemmy.autism.place 3 points 2 months ago (1 children)

Yeah, that's my point. What does it matter that they got better because they think they should get better? To me, what matters is that they got better, regardless of the reason. Bonus: they got high on ecstasy while under medical supervision.

Option A: Take a pill that doesn't feel like ecstasy and no one gets better.

Option B: Don't tell patients that ecstasy makes them feel better. Give them ecstasy. 20% of patients get better.

Option C: Tell patients that ecstasy can make them feel better. Give them ecstasy. 40% of patients get better.

Personally, option C seems like the most effective and thus preferred option. I don't see any downside whatsoever.

[–] ArcticDagger@feddit.dk 2 points 2 months ago

To a certain extent I agree, but I also think it's a tricky topic that deals a fair bit with the ethics of medicine. The Atlantic has a pretty good article with arguments for and against: https://web.archive.org/web/20230201192052/https://www.theatlantic.com/health/archive/2011/12/the-placebo-debate-is-it-unethical-to-prescribe-them-to-patients/250161/

Yes, in your three situations, I'd agree that option C is the best one. But you're disregarding a major component of any drug: side effects. Presumably ecstasy has some nonnegligible side effects, so just looking at the improvement in the treated disease might not show the full picture.