Zalack

joined 1 year ago
[–] Zalack@startrek.website 4 points 1 year ago (1 children)

I like the idea of calling it "Known Network" and "Local"

[–] Zalack@startrek.website 6 points 1 year ago (3 children)

Federation isn't opt-in though. It would be VERY easy to spin up a bunch of instances with millions or billions of fake communities and use them to DDOS a server's search function.

Searching current active subscriptions helps mitigate that vector a little.

[–] Zalack@startrek.website 8 points 1 year ago* (last edited 1 year ago)

While that's true, we have to allow for the fact that our own intelligence, at some point, is an encoded model of the world around us. Probably not through something as rigid as precise statistics, but our consciousness is somehow an emergent phenomenon of the chemical reactions in our brains that on their own have no real understanding of the world either.

I do have to wonder if, at some point, consciousness will spontaneously emerge as we make these models bigger and more complex and -- maybe more importantly -- start layering specialized models on top of each other that handle specific tasks, then hand the result back to another model, creating feedback loops. I'm imagining a neural network that is trained on something extremely abstract, like figuring out, from the raw input data, which specialist model would be best suited to process that data, then, based on the result, which model would be best suited to refine it. Something we train to basically be an executive function with a bunch of sub-models available to it.
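The "executive function" idea above resembles a router-plus-specialists architecture. A minimal toy sketch of that control flow, with all names and the routing rule purely illustrative (a real system would learn the routing rather than hard-code it):

```python
# Toy sketch of an "executive" routing loop: a router picks a specialist
# to process the data, feeds the result back, and repeats. Everything
# here is a stand-in, not a real framework.

def summarize(text: str) -> str:
    # toy specialist: keep only the first sentence
    return text.split(".")[0] + "."

def shout(text: str) -> str:
    # toy specialist: emphasize the text
    return text.upper()

SPECIALISTS = {"summarize": summarize, "shout": shout}

def router(data: str, step: int) -> str:
    # stand-in for a trained executive network; a real one would
    # infer the best specialist from the data itself
    return "summarize" if step == 0 else "shout"

def executive_loop(data: str, max_steps: int = 2) -> str:
    for step in range(max_steps):
        name = router(data, step)
        data = SPECIALISTS[name](data)
    return data

print(executive_loop("hello world. more text."))  # → HELLO WORLD.
```

The feedback loop in the comment corresponds to feeding each specialist's output back through the router until processing stops.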

Could something like that become conscious without realizing it's "communicating" with us? The program executing the LLM might reflexively process data without any concept that it's text, yet still be emergently complex enough, reflecting on its own processes, to reach self-awareness. It wouldn't realize the data represents a link to other conscious beings.

As a metaphor, you could teach a very smart dog to respond to certain basic arithmetic problems. It would get things wrong the moment you prompted it with something outside its training, and it wouldn't understand it was doing math even when it got the answer "right", but it would still be sentient, if not sapient, despite that.

It's the opposite side of the philosophical zombie. A philosophical zombie behaves exactly as a human would, but is a surface-level automaton with no inner life.

But I propose that we also consider the inverse philosophical zombie: an entity that behaves like an automaton, but has an inner life that has not recognized its input data as evidence of an external world outside its own bounds. Something that might not even recognize it's executing a program, the same way we aren't consciously aware of the chemical reactions our brain is executing to make us think.

I don't believe current LLMs are anywhere near complex enough to give rise to that sort of thing, but they are also still pretty early in their development and haven't started to be heavily layered and interconnected the way I think they'll end up.

At the very least it makes for a fun Sci-fi premise.

[–] Zalack@startrek.website 8 points 1 year ago* (last edited 1 year ago) (1 children)

We really need to start redistributing how we spend money on health care. Public option, lower executive pay. More non-emergency long term facilities for patients with psych issues or rehabilitation, and chronic illness care. Better pay and shorter shifts for doctors and nurses. Subsidies for medical tech companies to offset end-user price. More government-funded research into medical tech.

Health care should realistically be our biggest industry akin to a military with the social status of being a soldier and the compensation of being a software developer. We have the wealth and technology to help most people live healthy lives. We need the government to incentivize allocating it correctly.

[–] Zalack@startrek.website 2 points 1 year ago

Yeah. Part of me has to wonder what -- if any -- backchanneled agreements there are between Gwynne Shotwell and the DoD for if/when Musk does something truly compromising.

[–] Zalack@startrek.website 9 points 1 year ago* (last edited 1 year ago) (1 children)

Looking past the technobabble...

The implications of quantum mechanics just reframe what it means to not have free will.

In classical physics, given the exact same setup, you make the exact same choice every time.

In quantum mechanics, given the exact same setup, you make the same choice some percentage of the time.

One is you being an automaton while the other is you being a flipped coin. Neither of those really feels like free will.

Except.

We are looking at this through a kind of implied metaphor: that the brain is some mechanism, separate from "us", that we are forced to think "through". That the mechanisms of the brain are somehow distorting or restricting what the underlying self can do.

But there is no deeper "self". We are the brain. We are the chemical cascade bouncing around through the neurons. We are the kinetic billiard balls of classical physics and the probability curves of quantum mechanics. It doesn't matter if the universe is deterministic and we would always have the same response to the same input, or if it's statistical and we just have a baked-in "likelihood" of that response.

The way we respond, or the biases that inform that likelihood, is still us making a choice, because we are that underlying mechanism. Whether it's deterministic or not is just an implementation detail of free will, not a counterargument.

[–] Zalack@startrek.website 21 points 1 year ago* (last edited 1 year ago)

And often if you box yourself into an API before you start implementing, it comes out worse.

I always learn a lot about the problem space once I start coding, and use that knowledge to refine the API of my system as I work.

[–] Zalack@startrek.website 2 points 1 year ago* (last edited 1 year ago)

Its original mandate was investigating financial fraud. Presidential protection came later and has always been in addition to their original mandate.

[–] Zalack@startrek.website 10 points 1 year ago

Even though things seem shitty now, I think that, on average, humanity's story is one of self-improvement. This Good Place quote comes to mind:

What matters isn't if people are good or bad. What matters is if they are trying to be better today than they were yesterday.

I think humanity is trying to be better today than it was yesterday. Human history is a story of more and more types of people being given more and more rights. Of slowly putting down our rocks and spears and guns and trying to live together. Of learning to care for nature while holding the power to destroy it. We've had backslides, but overall we've come a long way from the apes we once were.

I think humanity deserves the chance to keep trying to better itself. I hope we get to the point where we are good enough to give ourselves that chance. As another scene from Good Place put it:

Come on dummy, faster.

[–] Zalack@startrek.website 49 points 1 year ago* (last edited 1 year ago) (2 children)

This reminded me of an old joke:

Two economists are walking down the street with their friend when they come across a fresh, steaming pile of dog shit. The first economist jokingly tells the other, "I'll give you a million dollars if you eat that pile of dog shit." To his surprise, the second economist grabs it off the ground and eats it without hesitation. A deal is a deal, so the first economist hands over a million dollars.

A few minutes later they come across a second pile of shit. The second economist, wanting to give his peer a taste of his own medicine, says he'll give the first economist a million dollars if he eats it. The first economist agrees and does so, winning him a million dollars.

Their friend, rather confused, asks what the point of all this was: the first economist gave the second a million dollars, and then the second gave it right back. All they've accomplished is eating two piles of shit.

The two economists look rather taken aback. "Well sure," they say, "but we've grown the economy by two million dollars!"

[–] Zalack@startrek.website 4 points 1 year ago (1 children)

I just want you to know that I hate your username.

[–] Zalack@startrek.website 13 points 1 year ago (1 children)

I actually don't think that's the case for languages. Most languages start out from a desire to do some specific thing better than other languages rather than do everything.
