I've been thinking about this for a while, and would love to have other, more knowledgeable (hopefully!) opinions on this:
I've been dwelling on how we might enforce some set of rules or widely agreed-upon "morals" on artificial general intelligence systems. Whatever the mechanism, it should almost certainly be distributed, in order to prevent any single entity (governments, private individuals, corporations, or AGI systems themselves) from seizing control of it. Ideally it would also allow a growing set of rules or directives that couldn't be edited or controlled by any single actor--at least in theory.
What other considerations would need to be made? Is this a plausibly good use of this technology?
Hey, I'm just a curious sort of person, so I was wondering: do you think there's any cause that's worth protesting?
If you'll humor me a little bit: say, completely hypothetically (and ridiculously), that there was a political group who, for some reason or another, absolutely despised people named Austin and wanted the legal right to arrest anyone named Austin they found out walking the streets. The bill they're currently championing would allow them to legally beat, arrest, and imprison any Austins discovered in public, and you also learn that there's enough support for this bill that it has a chance of passing. Do you think, in this weird, nightmarish scenario, you'd be willing to protest against the bill, even if doing so caused you and others some inconvenience?