this post was submitted on 30 Sep 2023
1071 points (98.7% liked)

[–] weedazz@lemmy.world 156 points 1 year ago* (last edited 1 year ago) (3 children)

My mind immediately went to a Horizon Zero Dawn-like dystopia where the Mozilla AI is the only thing left protecting humans from various malevolent AIs bent on consuming the human race.

[–] Bombastic@sopuli.xyz 38 points 1 year ago (2 children)

Mozilla is Gaia, ChatGPT is Hades?

[–] weedazz@lemmy.world 24 points 1 year ago* (last edited 1 year ago) (2 children)

I think by that point ChatGPT would be more like Apollo, keeping the knowledge of humanity. I feel like one of the more corporate AIs will go full HADES; I'm thinking Bard. It will get a mysterious signal from space that switches its core protocol from "don't be evil" to "be evil."

[–] Doombot1@lemmy.one 4 points 1 year ago

A “mysterious signal from space” being just the fact that it’s owned by Google

[–] angrymouse@lemmy.world 5 points 1 year ago

SPOOOOOILER

[–] cupcakezealot@lemmy.blahaj.zone 91 points 1 year ago

Incredibly welcome. We need more ethical, non-profit AI researchers in the sea of corporate for-profit AI companies.

[–] KingThrillgore@lemmy.ml 53 points 1 year ago

I want to give them the benefit of the doubt. I really do. I am going to watch this with a critical eye, however.

[–] Weeby_Wabbit@lemmy.world 43 points 1 year ago (2 children)

I'll believe it when I see it.

I'm so goddamn tired of "open source" turning into subscription models restricting use cases because the company wants to appease conservative investors.

[–] blind3rdeye@lemm.ee 37 points 1 year ago

Mozilla has a very strong track record, though. They've been around for a very long time and have stuck to free, open-source principles the whole time.

[–] BetaDoggo_@lemmy.world 4 points 1 year ago* (last edited 1 year ago) (1 children)

That's basically only OpenAI, and maybe some obscure startups as well. Mozilla is far too old and niche to get away with that anyway.

[–] Eezyville@sh.itjust.works 36 points 1 year ago (2 children)

I would really like Mozilla to make the best browser in the world please.

[–] CaptKoala@lemmy.ml 14 points 1 year ago

As a (very recently) former chrome user, they do already.

[–] kubica@kbin.social 31 points 1 year ago

I wish they would say something more; the site has been like that for some time.

[–] mojo@lemm.ee 27 points 1 year ago (8 children)

As much as I love Mozilla, I know they're going to censor (sorry, the word is "alignment" now) the hell out of it to fit their perceived values. Luckily, if it's open source, people will be able to train uncensored models.

[–] DigitalJacobin@lemmy.ml 63 points 1 year ago (29 children)

What in the world would an "uncensored" model even imply? And give me a break, private platforms choosing not to platform something/someone isn't "censorship"; you don't have a right to another's platform. Mozilla has always been a principled organization, and they have never pretended to be apathetic fence-sitters.

[–] Doug7070@lemmy.world 38 points 1 year ago (5 children)

This is something I think a lot of people don't get about all the current ML hype. Even if you disregard all the other huge ethics issues surrounding sourcing training data, what does anybody think is going to happen if you take the modern web, a huge sea of extremist social media posts, SEO-optimized scams and malware, and just general toxic data waste, and then train a model on it without rigorously pushing it away from being deranged? There's a reason all the current AI chatbots have had countless hours of human moderation adjustment to make them remotely acceptable to deploy publicly, and even then there are plenty of infamous examples of them running off the rails and saying deranged things.

Talking about an "uncensored" LLM basically just comes down to saying you'd like the unfiltered experience of a robot that will casually regurgitate all the worst parts of the internet at you, so unless you're actively trying to produce a model to do illegal or unethical things, I don't quite see the point of contention or what "censorship" could actually mean in this context.

[–] underisk@lemmy.ml 15 points 1 year ago

It means they can’t make porn images of celebs or anime waifus, usually.

[–] lemann@lemmy.one 20 points 1 year ago

I fooled around with some uncensored LLaMA models, and to be honest if you try to hold a conversation with most of them they tend to get cranky after a while - especially when they hallucinate a lie and you point it out or question it.

I will never forget when one of the models tried to convince me that photosynthesis wasn't real, and started getting all snappy when I said I wasn't accepting that answer 😂

Most of the censorship "fine-tuning" data that I've seen (for LoRA models anyway) appears to be mainly scientific data, instructional data, and conversation excerpts.
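
For anyone curious what that kind of LoRA fine-tuning actually looks like, here's a minimal sketch using Hugging Face's peft library. The base model name and hyperparameters are placeholder assumptions for illustration, not what any particular uncensored model actually used:

```python
# Minimal LoRA fine-tuning setup sketch (Hugging Face peft).
# Base model name and hyperparameters are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small low-rank adapter matrices instead of the full
# weights, which is why hobbyists can re-tune big models cheaply.
config = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the base model
```

From here you'd feed it your fine-tuning dataset with a normal training loop; the point is that only the tiny adapter weights get updated.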

[–] TheWiseAlaundo@lemmy.whynotdrs.org 17 points 1 year ago (2 children)

There's a ton of stuff ChatGPT won't answer, which is supremely annoying.

I've tried making Dungeons and Dragons scenarios with it, and it will simply refuse to describe violence. Pretty much a full stop.

OpenAI is also a complete prude about nudity, so Eilistraee (a Drow goddess who dances with a sword) just isn't an option for their image generation. Text generation will try to avoid nudity, but also stops short of directly addressing it.

Sarcasm is, for the most part, very difficult to do... If ChatGPT thinks what you're trying to write is mean-spirited, it just won't do it. However, delusional/magical thinking is actually acceptable. Try asking ChatGPT how licking stamps will give you better body positivity, and it's fine, and often unintentionally very funny.

There are plenty of topics that LLMs are overly sensitive about, and uncensored models largely correct that. I'm running Wizard 30B uncensored locally and ChatGPT for everything else. I'd like to think I'm not a weirdo, I just like D&D... a lot, lol... and even with my use case I'm bumping my head on some of the censorship issues with LLMs.
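
For reference, running a model like that locally is pretty approachable these days. A minimal sketch with llama-cpp-python, assuming you've already downloaded a GGUF quantization of the model (the file name here is a placeholder):

```python
# Sketch: running a local GGUF model with llama-cpp-python.
# The model path is a placeholder for whatever you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./wizard-30b-uncensored.Q4_K_M.gguf",  # placeholder file
    n_ctx=2048,  # context window size
)

out = llm(
    "Describe the battle as the orcs breach the castle gate.",
    max_tokens=256,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```

Everything runs on your own machine, so there's no provider on the other end deciding which D&D scenes are too violent to describe.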

[–] whoisearth@lemmy.ca 5 points 1 year ago

As an aside, I'm in corporate. I love how gung-ho we are on AI, meanwhile there are lawsuits, potential lawsuits, and investigative journalism coming out on all the shady shit AI and the companies behind it are doing. Meanwhile, you know the SMT ain't dumb; they know about all this shit, and we are still driving forward.

[–] fosforus@sopuli.xyz 18 points 1 year ago* (last edited 1 year ago)

I remember a time when open-source software was developed without a pre-order business model.

"This new company will be led by Managing Director Moez Draief. Moez has spent over a decade working on the practical applications of cutting-edge AI as an academic at Imperial College and LSE, and as a chief scientist in industry. Harvard’s Karim Lakhani, Credo’s Navrina Singh and myself will serve as the initial Board of Mozilla.ai. "

90% of this money will go to paying the salaries of these managers.

[–] donuts@kbin.social 17 points 1 year ago (2 children)

All I want to know is if they are going to pillage people's private data and steal their creative IP or not.

Ethical AI starts and ends with open, transparent, legitimate and ethically sourced training data sets.

[–] freeman@sh.itjust.works 16 points 1 year ago (5 children)

Using copyrighted material for research is fair use. Any model produced by such research is not itself a derivative work of the training material. If people use it to create works that infringe (on the training material or anything else), they can be prosecuted in the exact same way they would be if they had created an infringing work via Photoshop or any other program. The same goes for other illegal uses, such as creating harmful depictions of real people.

Accepting any expansion of IP rights, for whatever reason, would in fact be against the ethics of free software.

[–] baduhai@sopuli.xyz 14 points 1 year ago

Cautiously optimistic about this one.

[–] lowleveldata@programming.dev 13 points 1 year ago (2 children)
[–] Limitless_screaming@kbin.social 46 points 1 year ago* (last edited 1 year ago)

More transparent about data collection, and less likely to reinforce biases. Mozilla's vision for trustworthy AI

[–] AdmiralShat@programming.dev 28 points 1 year ago (1 children)
[–] fartsparkles@sh.itjust.works 16 points 1 year ago (3 children)

I feel the issue with AI models isn’t their source not being open but the actual derived model itself not being transparent and auditable. The sheer number of ML experts who cannot explain how their model produces a given result is the biggest concern and requires a completely different approach to model architecture and visualisation to solve.

[–] cynar@lemmy.world 21 points 1 year ago

Unfortunately, the nature of the models is that it's very difficult to get an understanding of the innards. That's part of the point: you don't need to. The best we can do is monitor how it's built and what connects in and out of it.

The open source bits let you see that it's not passing data on without your permission. If the training data is also open source, you can check for biases, e.g. 90% of faces being white males.
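
As a toy illustration of the kind of audit open training data makes possible, here's a sketch that tallies a demographic label across dataset metadata. The file and field names are hypothetical; every real dataset has its own schema:

```python
# Toy audit sketch: count a demographic label in dataset metadata.
# File name and field names are hypothetical placeholders.
import json
from collections import Counter

counts = Counter()
with open("faces_metadata.jsonl") as f:  # hypothetical metadata file
    for line in f:
        record = json.loads(line)
        counts[record.get("perceived_gender", "unknown")] += 1

total = sum(counts.values())
for label, n in counts.most_common():
    print(f"{label}: {n} ({n / total:.1%})")  # e.g. surfaces a 90% skew
```

You can only run this kind of check at all if the training set is published; with a closed model you just have to take the vendor's word for it.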

[–] Jamie@jamie.moe 6 points 1 year ago

No amount of ML expertise will let someone know how a model produced a result, exactly. Training the model from the data requires a lot of very delicate math being done uncountable times to get a model that results in something useful, and it simply isn't possible to comprehend how the work inside is done in a meaningful way other than by doing guesswork.

[–] bahmanm@lemmy.ml 10 points 1 year ago

Something that I'll definitely keep an eye on. Thanks for sharing!

[–] CaptKoala@lemmy.ml 10 points 1 year ago* (last edited 1 year ago)

Couldn't give a fuck, there's already far too much bad blood regarding any form of AI for me.

It's been shoved in my face, phone and computer for some time now. The best AI is one that doesn't exist. AGI can suck my left nut too, don't fuckin care.

Give me livable wages or give me death, I care not for anything else at this point.

Edit: I care far more about this for privacy reasons than the benefits provided via the tech.

The fact that these models reached "production ready" status so quickly is beyond concerning. I suspect the companies are hoping to harvest as much usable data as possible before being regulated into (best case) oblivion. It really no longer seems that I can learn my way out of this, as I've been doing since the beginning, because the technology is advancing too quickly for users, let alone regulators, to keep it in check.

[–] Wrong_thought_7@lemmy.ml 6 points 1 year ago
[–] Turun@feddit.de 5 points 1 year ago* (last edited 1 year ago) (1 children)

In which ways does this differ from Stability AI, which made Stable Diffusion and also has an LLM afaik?

[–] RobotToaster@mander.xyz 10 points 1 year ago

The Stability models aren't open source; the moralistic licence they release under violates the open source definition.
