[–] barsoap@lemm.ee 1 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

The problem is: data is code, and code is data. An algorithm that computes prime numbers is equivalent to a list of prime numbers (see also, though not relevant to this discussion, homoiconicity and interpretation). Yet we still want to make a distinction.
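A minimal sketch of that equivalence in Python; the function name and the bound of 30 are mine, purely for illustration:

```python
# Two representations of the same information: an algorithm that
# computes primes, and a literal list of primes.

def primes_up_to(n):
    """Algorithmic form: derive the primes below n on demand."""
    return [p for p in range(2, n)
            if all(p % d for d in range(2, int(p ** 0.5) + 1))]

# Data form: the same primes, simply written out.
PRIMES_BELOW_30 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

assert primes_up_to(30) == PRIMES_BELOW_30
```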

Is a PAQ-compressed copy of the Hitchhiker's Guide code? Technically yes, practically no, because that code is just a fancy representation of data (PAQ is basically an exercise in finding algorithms that produce particular data, to save space). Is a sorting algorithm code? Most definitely: it can't even spit out data without being given an equally-sized amount of data. On that scale, from pure code to code merely representing data, AI models sit at least three quarters of the way towards code representing data.
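A sketch of the two ends of that scale, using zlib as a stand-in for PAQ (PAQ isn't in Python's standard library, but the principle is the same):

```python
import zlib

# "Code representing data": the compressed bytes are effectively a
# program for the decompressor that reproduces one specific text.
text = b"Don't panic."
blob = zlib.compress(text)
assert zlib.decompress(blob) == text

# Pure code: a sorting routine carries no data of its own; its output
# is exactly as large as whatever input it is handed.
assert sorted([]) == []
assert sorted([3, 1, 2]) == [1, 2, 3]
```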

As such I'd say that AI models are data in the same sense that holograms (these ones) are photographs. Do they represent one particular image? No, but they represent a related, indexable set of images. What they definitely aren't is rendering pipelines. Or, and that's a whole other possible line of argument: something that requires Turing-complete interpretation.

[–] wewbull@feddit.uk 1 points 2 weeks ago

I think it comes down to how it's used.

An LLM is nothing unless it's used to process something else. It does one thing: it predicts the likelihood of words following a sequence of other words. It has no other purpose. You can't take the model, analyse it in a different way, and extract different conclusions. It is singular in function. It is a program.
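A hedged sketch of what that prediction amounts to; the tiny probability table here is invented, standing in for what a real model computes from billions of learned weights:

```python
import random

# Toy stand-in for an LLM: map a context of tokens to a probability
# distribution over the next token, then sample from it.
NEXT_TOKEN_PROBS = {
    ("the", "answer", "is"): {"42": 0.7, "no": 0.2, "yes": 0.1},
}

def predict_next(context):
    """Return a next token sampled according to its likelihood."""
    probs = NEXT_TOKEN_PROBS[tuple(context)]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

print(predict_next(["the", "answer", "is"]))  # usually "42"
```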

Data has no function. It is just data.