this post was submitted on 08 Aug 2024
347 points (89.0% liked)

No Stupid Questions

I know MediaBiasFactCheck is not a be-all-end-all to truth/bias in media, but I find it to be a useful resource.

It makes sense to downvote it in posts that have great discussion -- let the content rise up so people can have discussions with humans, sure.

But sometimes I see it getting downvoted when it's the only comment there. Which does nothing, unless a reader has rules that automatically hide downvoted comments (but a reader would be able to expand the comment anyways...so really no difference).

What's the point of downvoting? My only guess is that there's people who are salty about something it said about some source they like. Yet I don't see anyone providing an alternative to MediaBiasFactCheck...

[–] FuglyDuck@lemmy.world 4 points 3 months ago (1 children)

> Since others have shared plenty of facts, made great arguments, and all you do is keep shifting the goalposts.

I shift the goalposts, yet am just repeating myself? Interesting.

In any case... as for my "claims," perhaps I've missed something. Again, from their own methodology page:

> The primary aim of our methodology is to systematically evaluate the ideological leanings and factual accuracy of media and information outlets. This is achieved through a multi-faceted approach that incorporates both quantitative metrics and qualitative assessments in accordance with our **rigorously defined criteria**.

Okay, so that's the high-level sales pitch. Emphasis mine.

Perhaps, just perhaps, I've missed where they spelled out what those defined criteria are. Let's keep reading.

> While the concept of bias is inherently subjective and lacks a universally accepted scientific formula, our methodology employs a series of objective indicators to approximate it. We utilize a visual representation—a yellow dot on a scale—to signify the extent of bias for each evaluated source. This scale is accompanied by a “Detailed Report” section which elaborates on the source’s characteristics and the basis for its bias rating.
>
> Our bias assessment encompasses various dimensions, including political orientation, factual integrity, and the utilization of credible, verifiable sources. It’s crucial to note that our bias scale is calibrated to the political spectrum of the United States, which may not align with the political landscapes of other nations.

Objective indicators? What indicators? Where? For you or me to understand how they're arriving at their analysis, I need to know what "objective indicators" they're using. They're not listed anywhere I can find. Perhaps I've missed it. I don't think I have, but perhaps I have.

Now, skipping down to the specific categories...

> The categories are as follows:
>
> - Biased Wording/Headlines- Does the source use loaded words to convey emotion to sway the reader. Do headlines match the story?
> - Factual/Sourcing- Does the source report factually and back up claims with well-sourced evidence.
> - Story Choices: Does the source report news from both sides, or do they only publish one side.
> - Political Affiliation: How strongly does the source endorse a particular political ideology? Who do the owners support or donate to?

Alright, now we're getting to the stuff I'm asking for! Maybe. Uh. Shit. It's just "Biased Wording/Headlines," at that. They have no list of common loaded words. (For example, is "Deadly Wildfire" okay but "Deadly Attack" not? Both describe events in which people presumably died.) What you, I, or anyone else perceives as "loaded" is going to be entirely different. You want to rigorously define criteria for bias? You're going to have to at least provide examples, and not just on the individual ratings. Protip: the *lack* of strong or emotional language is also an indication of bias. For examples of that, watch reports surrounding any cops that killed a subject; you're almost certainly going to see the pro-cop news agencies shy away from language that evokes anger.

They then get into their "comprehensive" analysis:

> For a thorough evaluation, we review a minimum of 10 headlines and 5 news stories from each source. Our methodology employs a variety of search techniques to ensure a comprehensive understanding of the source’s political affiliation and ideological leanings. This process can be time-consuming or very simple, depending on the source.

Yeah. Um. That's not "comprehensive". At all. MPR News, Minnesota Public Radio, just from today, just counting the articles that get highlighted, has 28 articles. From today. And that's not even looking at the massive number of MPR/NPR-affiliated podcasts and such being pumped out, sometimes 3 times a day.

Further, there's no information on which articles are selected, which can have a profound impact on whether or not a source gets a passing grade for factualness. If you're only checking ten articles out of literally thousands a year (or even a hundred out of thousands), how you select the articles to review is going to have a profound impact. Is it random? Is it by top rating? Are they cherry-picked? Top headlines from random dates?
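To make the selection-bias point concrete, here's a toy simulation (entirely my own; the numbers and the 5% error rate are made up, and nothing here reflects how MBFC actually samples):

```python
import random

random.seed(42)

# Hypothetical outlet: 2,000 articles a year, 5% contain a factual error.
# True means "this article has an error".
articles = [random.random() < 0.05 for _ in range(2000)]

def error_rate(sample):
    """Fraction of articles in the sample that contain an error."""
    return sum(sample) / len(sample)

# Random sampling is unbiased, but 10 articles gives a very noisy estimate:
# the reviewer could easily draw 0 bad articles, or 2, by pure luck.
random_10 = random.sample(articles, 10)

# Cherry-picking known-bad articles guarantees a failing grade,
# no matter how accurate the outlet is overall.
cherry_10 = sorted(articles, reverse=True)[:10]

print(error_rate(articles))   # true rate, roughly 0.05
print(error_rate(random_10))  # noisy: anywhere from 0.0 to 0.2 is plausible
print(error_rate(cherry_10))  # 1.0 by construction
```

Same outlet, same year of output, and the "factualness" grade swings from spotless to abysmal purely on how the ten reviewed articles were chosen. That's why an unstated selection procedure matters.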

And let's draw attention to that last line: "This process can be time-consuming or very simple, depending on the source." Meaning... it varies based on the source. Even if there's more to work with for a given source, the process should not be any more or less simple; the process should be the process. That's the purpose of a methodology.

Skipping the descriptions of their fact-check ratings... all I'm going to say here is that there's no objective standard for what "consistent" or "often" means, or for any acceptable miss rate on being factual. I will submit that, for example, VOA News should probably be given a low factual score based on this statement:

> A “Low” rating indicates the source is often unreliable and should be fact-checked for fake news, conspiracy theories, and propaganda.

You know, considering VOA is literally a state media outlet whose entire purpose is to pump out propaganda; yet it's given a "High" rating. But what do I know; they certainly weren't forbidden from broadcasting inside US borders because of their propagandist nature.

Their criteria for which fact-check services they use are useful:

> Our methodology incorporates findings from credible fact-checkers who are affiliated with the International Fact-Checking Network (IFCN). Only fact checks from the last five years are considered, and any corrected fact checks do not negatively impact the source’s rating.

IFCN is good. The date restriction is good. Explaining how corrected fact checks affect things... is good. I would like to see a comment about which fact-checkers they always use, or always use when relevant (for example, reviewing a French news service using, I dunno, a Taiwanese fact-checker seems kinda sketchy). Do they search all 115 current signatories and the other 54 that are in the renewal process? Do they search only those from the source's home country? When do they elect to expand beyond that? Do they only use one service at all?

I'd assume they use some sort of aggregator service to look for fact checks across all of them at once. Personally, my preferred choice would be an aggregation service combining all of them, searching for articles tagged as fact-checking the specific source rather than searching for each individual article being reviewed, then organizing the results into some sort of pass/mostly-pass/fail/epically-fail metric. But that's just me.
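A rough sketch of what I mean. Everything here is hypothetical: the outlet name, the verdict labels, and the grading thresholds are all invented for illustration, not taken from MBFC, NewsGuard, or any real fact-checker's API:

```python
from collections import Counter

# Hypothetical records aggregated from several IFCN-affiliated checkers,
# each tagged with the outlet it fact-checked and the verdict reached.
fact_checks = [
    {"outlet": "Example Daily", "verdict": "true"},
    {"outlet": "Example Daily", "verdict": "false"},
    {"outlet": "Example Daily", "verdict": "mostly-true"},
    {"outlet": "Example Daily", "verdict": "false"},
]

def grade(outlet, checks):
    """Bucket an outlet by the share of its checked claims rated false."""
    verdicts = [c["verdict"] for c in checks if c["outlet"] == outlet]
    if not verdicts:
        return "no data"
    fail_rate = Counter(verdicts)["false"] / len(verdicts)
    if fail_rate < 0.05:
        return "pass"
    if fail_rate < 0.25:
        return "mostly-pass"
    if fail_rate < 0.75:
        return "fail"
    return "epically-fail"

print(grade("Example Daily", fact_checks))  # 2 of 4 rated false -> "fail"
```

The point of structuring it this way: the grade is a stated function of a stated fail rate over all available fact checks of the outlet, so anyone can recompute it, which is exactly the transparency I'm not finding in MBFC's write-up.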

TL;DR? My goalpost has always been that their methodology is opaque and insufficient to determine whether it reasonably eliminates their own bias. That has never changed. They don't describe what acceptable error rates for factualness are (never mind the severity of the error; reporting that a person wore a green shirt when they wore a blue shirt might be factually incorrect, but does it really matter if the story isn't about what shirt they wore?). They don't describe, even in brief detail, what "loaded" or "biased" headlines actually look like. And they describe a literal propaganda service as "Least Biased".

They cite NewsGuard as a competitor (I'm not sure about that, but they're in the same space; from what I see on their website, they're selling their service to different audiences, like brands looking to advertise on a specific site). Let's look at NewsGuard's methodology page. I'm not going to go into detail, but you see how it's broken down? How specific it is? Each criterion is specifically listed, with reasons for passing or failing it and express explanations of what things mean as you're reading through. Not "we judge on bias... which means that we look for biased words...". For instance, one phrase you see there is "that a regular user would not likely see it on a daily basis".

Check their scoring process. They have a researcher (described as a trained journalist) research the website and make a report, then they write the article. That article is then put on pause for comment from the company in question, then it is reviewed by people ("at least one senior editor and Co-CEO"...) to check for factual accuracy and what have you. Only then is it published. I assume that MBFC has something similar, but that's an assumption; nowhere does it describe the editorial process. For all we know, it really is just one guy in a cat suit working an article his way while the lady in the dog suit does hers her way and the editorial staff are in a two-person horse suit searching for organic oats. I'd rather assume not, but again, that is an assumption on my part.

[–] finley@lemm.ee -3 points 3 months ago* (last edited 3 months ago)

With all due respect: I’m not reading that.

Ya know, I’ve had some great interactions with you here in the past, and generally we’re on the same page, but on this, we disagree. And I doubt we’re going to change each other’s minds, so I’m not really going to waste any more time on this discussion with you.

And, I know this is me repeating myself, but I again suggest that you just block the bot and move on. It’s not worth the energy you’re putting into it over a disagreement.

Peace, buddy