dudeami0

[–] dudeami0@lemmy.dudeami.win 5 points 1 year ago

Thermometers, like most measurement devices, are always accurate until you get two of them. Each device has a specified tolerance (or should, otherwise it's probably a horrible tolerance); for a grill thermometer this will look something like ±5C/10F. Additionally, everything used to take a measurement needs to be calibrated regularly to ensure proper function, otherwise readings cannot be trusted. For a thermometer, the easily accessible ways to calibrate are ice water (does it read 0C/32F?) and boiling water (does it read 100C/212F?). Using these constants will let you adjust your thermometer and get a (more) accurate reading.
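As a back-of-the-envelope example (just a sketch, the readings are made up), those two reference points give you a simple linear correction:

```python
# Two-point linear calibration: map raw readings onto the known reference
# points (ice water = 0C, boiling water = 100C at sea level).
def make_calibrator(raw_ice, raw_boil, ref_ice=0.0, ref_boil=100.0):
    scale = (ref_boil - ref_ice) / (raw_boil - raw_ice)
    return lambda raw: ref_ice + (raw - raw_ice) * scale

# Example: a thermometer that reads 2C in ice water and 97C in boiling water.
calibrate = make_calibrator(raw_ice=2.0, raw_boil=97.0)
print(round(calibrate(50.0), 1))  # a raw 50C reading corrects to ~50.5C
```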

[–] dudeami0@lemmy.dudeami.win -2 points 1 year ago (1 children)

I also fail to see how this applies here. What is the disinformation? Where is the Russian bias? If you are seeing something I am not, please elaborate, but the summary in the article is:

No one would blame Zelenskyy for choosing the lesser of two evils here: Western banks over Russian tanks. Yet, the grim fact remains that even if his nation succeeds in repelling the Russian invasion, the future in store for Ukraine is not necessarily one of sovereignty and self-determination but, most likely, one of Western economic tutelage.

Of course large global asset managers are going to see money signs in their eyes. The fact is that Ukrainians are being put between a rock and a hard place, and exploitation of that kind of situation is capitalism 101.

Also, if you are assuming this is Russian propaganda, why is it coming from a website run by a British political activist and funded by a British investor? It also seems to be "mostly factual". I'm failing to see where the tie to Russia is.

[–] dudeami0@lemmy.dudeami.win 2 points 1 year ago

In my humble opinion, we too are simply prediction machines. The main difference is how efficient our brains are at the huge number of tasks they're given, relative to their size and energy requirements. No matter how complex the network is, it is still a mapped outcome; the number of factors weighed is just extremely large, which gives a more intelligent response. You can see this with each increment in GPT models: larger and larger parameter sets give more and more intelligent answers. The fact we call these "hallucinations" shows how effective the predictive math is, and mimics humans' ability to just make things up on the fly when we don't have a solid knowledge base to back it up.

I do like this quote from the linked paper:

As we will discuss, we find interesting evidence that simple sequence prediction can lead to the formation of a world model.

That is to say, you don't need complex solutions to map complex problems; you just need to have learned how you got there. It's never purely random attempts at the problem, it's always predictive attempts that try to map the expected outcomes and learn by getting them right and wrong.

At this point, it seems fair to conclude the crow is relying on more than surface statistics. It evidently has formed a model of the game it has been hearing about, one that humans can understand and even use to steer the crow's behavior.

Which is to say that it has a predictive model based on previous games. This does not mean it must rigidly follow previous games, but that by playing many games it can see how each move affects the next. This is a simpler example because most board games are simpler than language, with fewer possible outcomes. This isn't to say that the crow is now a grandmaster at the game, but it has the reasoning to understand possible next moves, knows which moves are illegal, and knows to take the most advantageous move based on its current model. This is all predictive in nature, with "illegal" moves being assigned very low probability because, in the learned behavior, those moves never happen. This also allows for possible unknown moves that a different model wouldn't consider, while overall providing what is statistically the best move under its model. This lets the crow be placed into unknown situations and give an intelligent response instead of just going "I don't know this state, I'll do something random". The prediction won't always be correct, but it will more often than not be a valid and statistically reasonable move.
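As a toy illustration of that last point (not how the paper's model actually works, just the shape of the idea), picking a move is just taking the highest-probability option after the learned probabilities have pushed unseen/illegal moves toward zero:

```python
# Toy example: a learned distribution over possible opening moves in Othello.
# Moves the model has essentially never seen (illegal ones) end up near zero,
# so the argmax is always a plausible, statistically "best" move.
move_probs = {
    "d3": 0.45,    # learned as a strong, legal opening move
    "c4": 0.35,
    "f5": 0.19,
    "a1": 0.0001,  # essentially never seen in training -> effectively illegal
}

best_move = max(move_probs, key=move_probs.get)
print(best_move)  # "d3"
```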

Overall, we aren't totally sure what "intelligence" is; we are just organisms that have developed more and more capabilities to process information based on a need to survive. But getting down to it, we know neurons take inputs and give outputs based on what they perceive to be the best response for the given input, and when enough of these are added together we get "intelligence". In my opinion it's still all predictive; it's how the networks are trained, and how they gain meaning from the data, that isn't always obvious. It's only when you blindly accept any answer as correct that you run into the issues we've seen with ChatGPT.
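A crude sketch of that single-neuron "inputs in, output out" idea (the weights here are obviously made up):

```python
import math

# A single artificial neuron: weight the inputs, sum them, squash the result.
def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation, output in (0, 1)

# Example: three inputs with made-up weights. Stack enough of these together
# and train the weights, and you get the networks discussed above.
print(neuron([0.2, 0.9, 0.1], weights=[1.5, -0.8, 0.3], bias=0.05))
```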

Thank you for sharing the article, it was an interesting article and helped clarify my understanding of the topic.

[–] dudeami0@lemmy.dudeami.win 9 points 1 year ago (9 children)

Disclaimer: I am not an AI researcher, I just have an interest in AI. Everything I say is probably gibberish, and just my amateur understanding of the AI models used today.

It seems these LLMs use a clever trick to give words meaning via statistical probabilities of their usage. So any result is just a statistical chance that those words will work well with each other. The number of indexes used to index "tokens" (in this case words), along with the number of layers in the AI model used to correlate usage of these tokens, seems to drastically increase the "intelligence" of the responses. This doesn't seem able to overcome unknown circumstances, but it does what AI does and relies on probability to answer the question. So in those cases, the next closest thing from the training data is substituted and considered "good enough". I would think some confidence variable is what is truly needed for the current LLMs, as they seem capable of giving meaningful responses but give a "hallucinated" response when not enough data is available to answer the question.
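Something like this is what I mean by a confidence variable (a hand-wavy sketch, not an actual LLM API; the options, scores and threshold are made up):

```python
import math

# Hand-wavy sketch: turn raw model scores (logits) into probabilities and
# flag a low-confidence "best guess" instead of presenting it as fact.
def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def answer_or_hedge(options, logits, threshold=0.6):
    probs = softmax(logits)
    best = max(range(len(options)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return f"(low confidence) maybe: {options[best]}"
    return options[best]

# The model is fairly sure here, so the answer is given without a hedge.
print(answer_or_hedge(["Paris", "Lyon", "Nice"], [4.0, 1.0, 0.5]))
```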

Overall, I would guess this is a limitation in the LLM's ability to map words to meaning. Imagine reading everything ever written; you'd probably be able to make intelligent responses to most questions. Now imagine you were asked something you never read about, but were expected to respond with an answer. This is what I personally feel these "hallucinations" are: the LLM's best approximations. You can only answer what you know reliably, otherwise you are just guessing.

[–] dudeami0@lemmy.dudeami.win 9 points 1 year ago

Season 8 on most torrents probably refers to the new Hulu reboot. This is due to the disparity between the home-release seasons and the broadcast TV seasons. So most likely, if you have seasons 1-7 you have all the home-release versions of the show, and therefore the entire library.

[–] dudeami0@lemmy.dudeami.win 1 points 1 year ago

Sounds like some QoS software is also limiting LAN traffic, seeing as it still works if the internet is disconnected. I would check whether your router has "Adaptive QoS" or something similar enabled.
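One way to confirm it (a rough Python sketch, basically a poor man's iperf; the port number is arbitrary): run the server half on one LAN machine and the client half on another, once with the WAN cable plugged in and once without, and compare the numbers.

```python
import socket
import sys
import time

PORT = 5001          # arbitrary test port; assumed free on both machines
CHUNK = 64 * 1024    # send/receive in 64 KiB chunks
SECONDS = 5          # how long the client streams data

def server():
    # Accept one client and report the observed throughput.
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        total, start = 0, time.time()
        with conn:
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
        elapsed = time.time() - start
        print(f"{total * 8 / elapsed / 1e6:.0f} Mbit/s from {addr[0]}")

def client(host):
    # Stream zero-filled chunks at the server for a few seconds.
    payload = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        end = time.time() + SECONDS
        while time.time() < end:
            conn.sendall(payload)

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```

Run it as `python3 lanprobe.py server` on one box and `python3 lanprobe.py client <server-ip>` on the other; if the Mbit/s number jumps when the WAN is unplugged, something is shaping LAN traffic too.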

[–] dudeami0@lemmy.dudeami.win 14 points 1 year ago (5 children)

From Time (link: https://time.com/6133336/jan-6-capitol-riot-arrests-sentences/)

So far, the median prison sentence for the Jan. 6 rioters is 60 days, according to TIME’s calculation of the public records.

An additional 113 rioters have been sentenced to periods of home detention, while most sentences have included fines, community service and probation for low-level offenses like illegally parading or demonstrating in the Capitol, which is a misdemeanor.

Overall these people are getting less time than kids who get caught with some weed on them.

You can think you have rights, or you can know your rights, but when you violate the law, don't be surprised when one of the most pro-incarceration states around throws you in jail. Lots of protesters get arrested and prosecuted as a scare tactic. This is assuming these people didn't have seditious intentions, which does change things a bit. Overall it sounds like they fucked around and found out; at least protesters fighting for real causes are more prepared to get fucked with by the state than these jokers.

[–] dudeami0@lemmy.dudeami.win 1 points 1 year ago

Does the flash drive show up with the correct amount of space when you run lsblk? dd will overwrite the partition table, since it works directly with the underlying physical blocks of the device. If the flash drive isn't broken, you should be able to rebuild the partition table with parted (tutorial from linuxconfig.org on the matter).

[–] dudeami0@lemmy.dudeami.win 0 points 1 year ago (1 children)

Sounds like the cache possibly got corrupted? See if Ctrl+F5 clears up the issue, or try restarting your browser.

[–] dudeami0@lemmy.dudeami.win 3 points 1 year ago

As for the article, I think this is generally PR and corporate speak. Whatever their reasons were, they apparently didn't shut down the initial XMPP servers until 2022, so it was a reliable technology. Their "simplification" was bringing users into their ecosystem to more easily monetize their behaviour. This goes along with your last paragraph: at the end of the day, the corporation is a for-profit organization. We can't trust a for-profit organization to have the best of intentions; some manager is aiming to meet a metric that gets them their bonus. Is this what we really want dictating the services we use day to day?

[–] dudeami0@lemmy.dudeami.win 3 points 1 year ago* (last edited 1 year ago) (2 children)

Google tried to add support for it in their product

Is like saying that Google tried to add support for HTTP to their products. Google Talk was initially an XMPP chat server hosted at talk.google.com, source here.

Anyone that used Google Talk (me included) used XMPP, whether they knew it or not.

Besides this, it's only a story of an eager corporation adopting a protocol and selling its support for that protocol, only to abandon it because corporate interests got in the way (as they always do). It doesn't have to be malicious to be effective in fragmenting a community, because of the immense power those corporations wield to steer users in whatever direction they want once they abandon the product.

That being said, if Google Talk wasn't popular, why did they try to axe the product based on XMPP and replace it with something proprietary (aka Hangouts)? If chat wasn't popular among their users, this wouldn't have been needed. It could have been for internal reasons, it could have been to fragment the user base knowing they had the most users and would force convergence; we really can't be sure. The only thing we can be sure of is that we shouldn't trust corporations to have the best interests of their users in mind; they only have the best interests of their shareholders in the end.

[–] dudeami0@lemmy.dudeami.win 1 points 1 year ago

Q1: Correct

Q2: Not at present, but it's a highly requested feature. I would imagine it will be around soon^tm^ in one form or another.
