This post was submitted on 27 May 2025
598 points (99.7% liked)

Science Memes


Welcome to c/science_memes @ Mander.xyz!

A place for majestic STEMLORD peacocking, as well as memes about the realities of working in a lab.



Rules

  1. Don't throw mud. Behave like an intellectual and remember the human.
  2. Keep it rooted (on topic).
  3. No spam.
  4. Infographics welcome, get schooled.

This is a science community. We use the Dawkins definition of meme.



 
17 comments
[–] FundMECFSResearch@lemmy.blahaj.zone 72 points 1 month ago (2 children)

Big respect to researchers who publish and share statistically insignificant results.

Instead of doing what is far too common in science: manipulating the data until you find “significance” through twisted interpretations.
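
A minimal sketch of that failure mode (an illustration, not something from the thread): screen enough unrelated noise variables at α = 0.05 and at least one will usually come out “significant” by luck alone, since 1 − 0.95^20 ≈ 0.64.

```python
# Illustration: "significance" hunting across many comparisons.
# Every predictor here is pure noise, yet some usually clear p < 0.05.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
outcome = rng.normal(size=100)            # outcome with no real structure
noise_vars = rng.normal(size=(20, 100))   # 20 predictors unrelated to it

p_values = [pearsonr(var, outcome)[1] for var in noise_vars]
print(f"smallest p-value: {min(p_values):.3f}")
print(f"'significant' hits at alpha=0.05: {sum(p < 0.05 for p in p_values)}")
```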

[–] prex@aussie.zone 6 points 1 month ago (1 children)
[–] Probius@sopuli.xyz 3 points 1 month ago (1 children)

Is it valid science if you re-test the one that had the link to see if it was a fluke?

[–] AHemlocksLie@lemmy.zip 6 points 1 month ago

Not just valid; I'd argue it's important. It doesn't make the most exciting headlines and doesn't attract funding very well, though, so it's not done nearly as often as it should be. A big part of science is not taking things at face value and verifying that there is sufficient proof for claims.

Plus, if both results agree, it sharply reduces the probability that the finding is a coincidence. The chance of a 5% fluke happening twice in a row is 0.25%, and three times in a row is 0.0125%, so repetition can make the results more certain.
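
A quick check of that arithmetic (a sketch assuming independent replications, each with a 5% false-positive rate):

```python
# Probability that a 5% fluke repeats across n independent replications.
alpha = 0.05
for n in (1, 2, 3):
    print(f"{n} replication(s) all flukes: {alpha**n:.4%}")
# -> 5.0000%, 0.2500%, 0.0125%
```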

[–] rikudou@lemmings.world 5 points 1 month ago

Biology papers and Photoshop, name a more iconic duo.

[–] chonomaiwokurae@lemm.ee 48 points 1 month ago (1 children)

Proving that something doesn’t work can be valuable data, too, especially in research close to industrial interests. Celebrate failures!

[–] drre@feddit.org 26 points 1 month ago

Well yeah, but in industry there is money in knowing what to avoid. In academia it's more like "why can't I reproduce this effect I read about in this fancy paper, am I stupid or what?", when maybe the original authors just got lucky, or had plenty of very reasonable analysis options to choose from, or simply fudged the numbers. I fear that in much of academia there is a huge incentive to publish at whatever cost.

[–] LibertyLizard@slrpnk.net 31 points 1 month ago

Make sure you publish that shit somehow so the next person doesn’t waste their time on the same experiment.

[–] rustydrd@sh.itjust.works 15 points 1 month ago (1 children)

Null results are still results!

[–] friendlymessage@feddit.org 4 points 1 month ago (1 children)

True, but try to get them published

[–] rustydrd@sh.itjust.works 4 points 1 month ago

I can say with some pride that I have at least co-authored papers with null results, and they did get published. I'm not arguing that what you suggest isn't true, but I have hope.

[–] conditional_soup@lemm.ee 14 points 1 month ago

I remember listening to an episode of TWiV where they bemoaned that more negative results weren't published. They're useful, too, just not nearly as cool and flashy as positive results.

[–] DannyBoy@sh.itjust.works 7 points 1 month ago (1 children)

I didn't know Richard Stallman did research.

[–] rustydrd@sh.itjust.works 2 points 1 month ago

Actually, this is Stannis Baratheon from The Witcher.

[–] miss_demeanour@lemmy.dbzer0.com 3 points 1 month ago

"Give me six lines of data harvested from the most honourable of men, and I will find an excuse in them to hang him."

-- Pileated Woodpecker Richdude

[–] peteypete420@sh.itjust.works 2 points 1 month ago

A scholar can never let mere wrongness get in the way of the theory

[–] fossilesque@mander.xyz 2 points 1 month ago

Lol I love this