I'm wondering about the Luigi line.
Post Trump, it seems as if there is no justice for the rich besides vigilante justice.
Would any of the below qualify for a Luigi? Where is the line? I find the cognitive and ethical dissonance around Luigi disconcerting.
The following list is very dark, and super cynical - I apologize in advance.
A pharma company has found a cure for cancer, but suppresses it to make money on treatment. Causing innumerable deaths.
A pharma company has found a cure for Alzheimer's - but suppresses it. Causing suffering.
A pharma company knows a drug treatment is ineffective for some major illness, but pushes it anyway, suppressing other research. Causing suffering.
A pharma company pushes a drug known to cause massive dependence, with insignificant benefit. Causing suffering.
A car company knows an airbag is defective, and does not fix it. Causing thousands of deaths.
An airplane manufacturer creates an airplane with faulty construction, knowingly, and thousands die.
A manufacturing company pollutes a town's water, causing birth defects, general sickness.
This list could go on forever, of course. But where is the line, post Luigi, post Trump non-trial? What makes one CEO at risk, and another not?
I understand and agree.
I have found that AI is super useful when I am already an expert in what it is about to produce. In a way, it just saves keystrokes.
But when I use it for specifics I am not an expert in, I invariably lose time. For instance, I needed to write an implementation of some audio classes using CoreAudio on the Mac. I thought I could use AI to fill in some code which, if I knew exactly what calls to make, would be obvious. Unfortunately the AI didn't know either, but gave solution after solution that "looked" like it would work. In the end, I had to tear out the AI code and just spend the 4-5 hours searching for the exact documentation I needed, with a real, functional, relevant example.
Another example is coding up some matrix multiplications plus other stuff using both Apple Accelerate and CUDA cuBLAS. I thought to myself, "Well, I have to cope with the change in row- vs. column-major ordering of the data, and that's going to be super annoying to figure out, and I'm sure 10,000 researchers have already used AI to figure this out, so maybe I can use that." Every solution was wrong. Strangely wrong. Eventually I just did it myself and spent the time. Then I started querying different LLMs via the ChatArena, to see whether I was just posing the question wrong or something. All of the answers were incorrect.
And it was a whole day lost. Doing it myself took only about 4 hours to go through everything, make sure it was right, and fix things with testers, etc. But after first spending a whole day in this psychedelic rabbit hole, where nothing worked but everything seemed like it should, it was really tough to take.
So...
In the future, I just have to remember that if I'm not an expert, I have to look at the real documentation. And that AI is really an amazing "confidence man": it inspires confidence no matter whether it is telling the truth or lying.
So yeah, do all the assignments by yourself. Then, after you are done, the testers are working, and everything is awesome, spend time in different AIs and see what they would have written. If it is web stuff, they will probably get it right, but if it's something more detailed, as of now, they will probably get it wrong.
Edited some grammar and words.