
Knowledge is like sunlight: it lets us see, but it hides the stars.

As humans, we have senses that allow us to receive information about the outside world. Very useful, but by default this is just raw information. When you see something, it’s just light-detecting cells in your eyes firing, and when you touch something, it’s just electricity moving up your spinal cord.

Your brain has all kinds of faculties to post-process this information into a more coherent picture, like stitching together the two-dimensional images each eye receives separately to create a three-dimensional experience.

Another example: your eyes can only detect the signal strength of three separate color ranges (roughly red, green and blue), and the brain mixes these signals so we can experience all the other colors. This is fun and all, but it is still relatively raw information, and says nothing about what is actually being sensed. How does your brain even know what it is sensing?

Your brain is smart, and it can learn things about the outside world. Imagine a small child being given a grape for the first time. The child knows nothing about the grape, and his mother gets him to eat it. The child enjoys the grape, and this creates an association in his brain, something like this, with every arrow representing processing done on the information:

My eye cells fire in this configuration -> I see a shiny green sphere -> This is a grape -> I can eat this grape and it will taste delicious.

This is knowledge, and without it we would be very dumb. This is a very abstract example, and it only works if your brain even has a concept of “delicious”, but luckily we are already born with some knowledge about how to process the outside world, like the fact that food is good for us.

This mechanism works for every sense, and often taps into multiple senses at the same time: experience something, make and/or refine an association, and use that association in the future to predict and manipulate the world. This learning process is always going on while we are conscious, even if we aren’t aware of it.

Now then, this all sounds extremely useful, so what is the dark side? I won’t deny its extreme usefulness: without it we couldn’t function at all, as everything would just be noise. But knowledge has some properties that can be problematic. What happens if the only thing we know about is grapes? With no other reference material, we will think that everything is a grape, because it is advantageous for us to do so. Things that resemble grapes (which, given our extremely limited knowledge, is everything) might be as delicious as the grape we had in the past. We don’t only use our knowledge in the context of the specific past experience it came from; we also tap into it in unknown situations, in the hope of getting the same result.
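
If you like code, here is a tiny Python sketch of my own (purely illustrative, nothing rigorous, and not a real cognitive model): an agent that stores associations from experience and, when it meets something unfamiliar, falls back on the closest thing it already knows. With only the grape association stored, everything comes out as a grape.

```python
# Toy illustration (my own sketch, not a real cognitive model): an agent that
# learns associations and reuses them to predict unfamiliar situations.

class Agent:
    def __init__(self):
        # maps a tuple of observed features to a predicted outcome
        self.associations = {}

    def experience(self, features, outcome):
        """Make or refine an association after an experience."""
        self.associations[features] = outcome

    def predict(self, features):
        """Predict the outcome for a situation, known or not."""
        if features in self.associations:
            return self.associations[features]
        # No exact match: fall back on the most similar known experience.
        best_match = max(
            self.associations,
            key=lambda known: len(set(known) & set(features)),
        )
        return self.associations[best_match]


agent = Agent()
agent.experience(("green", "shiny", "sphere"), "delicious grape")

print(agent.predict(("green", "shiny", "sphere")))   # delicious grape (learned)
print(agent.predict(("green", "round", "prickly")))  # delicious grape (overgeneralized: it's a cactus)
```

The interesting line is the fallback in predict: knowledge gets reused outside the context it came from, whether or not it actually fits.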

Keeping this in mind, what happens to a specialist who has studied volcanoes for 40 years? He will be extremely knowledgeable about volcanoes, obviously, and extremely helpful if you want information about volcanoes. But what happens if you ask him about something outside his own domain, something he has no knowledge about, like the question of what made the dinosaurs extinct? He might just say that volcanoes killed the dinosaurs, having been so immersed in this subject matter throughout his life.

Confronted with an unknown problem, it makes sense that he will tap into the pool of knowledge that has worked for him in the past, but you can see why this is problematic. It becomes hard to see problems in a broader context, as every experience is filtered through his knowledge, and that filter is difficult to shake off. I’ve been there myself.

Another problem with knowledge: categorization. Whatever is a grape is not something else. Whatever is orange is not red or yellow. Because knowledge creates categories, distinct borders form between concepts. Can we really look at something orange and see either red or yellow? I know I can’t, because those colors don’t fit what I define as orange. Categorization is incredibly useful for grasping the world, but it also separates, and it might lead us to miss the things that bridge categories to each other. It creates a dividing line that doesn’t necessarily exist in the real world.
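
As a toy illustration (the cut-off values below are completely made up, not perceptual science): two hues that are almost indistinguishable still land in different boxes the moment we put labels on them.

```python
# Toy illustration with arbitrary cut-offs: labeling a continuous hue with
# discrete category names draws hard borders the spectrum itself doesn't have.

def color_label(hue_degrees):
    """Map a hue angle (0-360) to a color name using made-up thresholds."""
    if hue_degrees < 20:
        return "red"
    if hue_degrees < 45:
        return "orange"
    if hue_degrees < 70:
        return "yellow"
    return "other"

print(color_label(19), color_label(21))  # red orange  (nearly identical hues, different categories)
print(color_label(44), color_label(46))  # orange yellow
```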

The final problem with knowledge in this article: faulty experience. What happens if you have a bad experience, like being hurt by a couple of people who happen to be bald? Because knowledge is used for future predictions, you might now think that all bald people are out to hurt you. Of course this is wrong, but because you now distrust bald people so badly, you choose not to engage with them anymore, and you never get the chance to learn that this knowledge is wrong.

It’s a deadly combo-wombo that leads to all kinds of problems and misery, and it’s something I’ve begun realizing and correcting in my own life recently, with some wild results. Not specifically with bald people, that’s just an example. Stay tuned for a more in-depth post about this.

This is also the case with learning new things: pick up bad knowledge that teaches you to seek out more bad knowledge, and you get stuck in a self-reinforcing loop. This is a big part of how echo chambers happen.
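
Here is a rough sketch of that loop in Python (my own toy numbers, not a real model of opinion dynamics): the belief decides which sources get read, and each source read nudges the belief a little further.

```python
import random

# Toy simulation (made-up numbers, not a real model of opinion dynamics):
# the belief filters which sources get read, and each source read nudges
# the belief a little further.

def run(steps=2000, diversity=0.0, seed=7):
    random.seed(seed)
    belief = 0.6  # how strongly the belief is held, between 0 and 1
    rate = 0.05   # how much a single source shifts the belief
    for _ in range(steps):
        if random.random() < diversity:
            # deliberately pick a source at random from the full mix (assumed 50/50 here)
            confirming = random.random() < 0.5
        else:
            # filter-bubble pick: the stronger the belief, the likelier a confirming source
            confirming = random.random() < belief
        observed = 1.0 if confirming else 0.0
        belief += rate * (observed - belief)  # nudge the belief toward what was just read
    return round(belief, 2)

print(run(diversity=0.0))  # no deliberate exposure: the belief runs away to one extreme and stays there
print(run(diversity=0.3))  # regular exposure to the full mix of views: it tends to hover around the middle
```

The exact numbers mean nothing; the point is the shape of the loop: what you already believe filters what you take in next.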

So, what can we do? I’d say that each of these three problems has a counter-strategy:

⚔️ Specialism - 🛡️ Generalism: Broadening our knowledge, getting informed about domains other than our own so we have more diverse knowledge to draw on for unknown problems. The more diverse the knowledge, the better, as it allows us to make better predictions.

⚔️ Categories - 🛡️ Similarities: Consciously looking for connections between categories, seeing if there are similarities, and nurturing those similarities. This one is difficult, as categorization is largely unconscious, and it often already happens anyway when learning new things; it’s then a matter of not dismissing the connection because of other differences, as far as that is reasonable.

⚔️ Faulty experience - 🛡️ New experience: This one sucks the most: we need to override our fears and put ourselves in situations we think might hurt us, to see whether our predictions are right or not. If they aren’t, they will slowly correct themselves based on our new experience. To cope with the fear, I like to see it as tuning myself, like you’d tune a car to work better, which makes the experience more detached. In the case of knowledge, this means deliberately exposing ourselves to different views whenever we seek out new information.

Interestingly, all the counter-strategies have to do with taking in new information. If you’ve made it this far, please leave feedback below; it is very welcome and helps a lot.

top 2 comments
[–] maporita@unilem.org 2 points 1 year ago (1 children)

This in a nutshell is why I despair of the tendency these days for people to seek out similar views to their own. We need to hear other opinions and viewpoints even if they contradict what we already believe. For one thing they sharpen our own arguments. Why do you believe the world is round? "It just is" is never an acceptable argument.

I completely agree with you, even if it is a bit ironic. On one hand it's very comforting to get affirmation from other people that your beliefs are correct, but on the other hand this can go so far that it mentally blinds you to things that go against those beliefs, just because of the way knowledge fundamentally works. Rejecting contradictions is often used as a way to keep beliefs coherent, but I've found that the opposite works better: actively seeking out contradictions creates far better mental models of the world.