cwood

joined 1 year ago
[–] cwood@awful.systems 11 points 1 month ago

As usual, the business fundamentals thing happens after the compensation has been paid out.

[–] cwood@awful.systems 4 points 2 months ago

I'm getting the picture that governance is a great thing until you find out that other people want to govern you back.

[–] cwood@awful.systems 13 points 2 months ago (1 children)

The encouragement of a situation where you disconnect from those outside, the sleep deprivation, the drip of hints that you're not meeting the standard, the trust in the great leader.

It also sounds corporate, yes.

[–] cwood@awful.systems 25 points 2 months ago

You know how sometimes you use a grocery app and it's fairly obvious that the people writing it don't spend time in grocery stores? I'm getting that same impression here.

[–] cwood@awful.systems 7 points 2 months ago (1 children)

That startup founder. Is he okay?

[–] cwood@awful.systems 14 points 2 months ago

With so many parts of tech operating like a mixture of religion and fandom, this would be the atheistic answer. (This, from me, is the diametric opposite of a sneer.)

[–] cwood@awful.systems 7 points 2 months ago (1 children)

I think we've all walked by a giant important point.

These nearly-all-male network state fans have such compelling ideas that women outside their immediate circles would rather Xerox "bits of their bodies" than engage with those ideas. Their outreach "embassy" attracts even fewer women every day, possibly an average that rounds to zero.

Right now it seems like their polities will be remembered in the same religious studies lessons that teach about the Shakers.

https://en.wikipedia.org/wiki/Shakers

[–] cwood@awful.systems 3 points 3 months ago

Well, look, if you no longer had a Silicon Valley executive's salary, you might have opinions about that situation too.

A weird sort of wartime to be investing new dollars into Israel, though, I thought?

Oh wait right. https://bdsmovement.net/news/israel%E2%80%99s-most-important-source-capital-california

[–] cwood@awful.systems 25 points 3 months ago (3 children)

Imagine being a skilled San Francisco-style tech worker, at the apex of your industry, and the heights of intellect and rigor you can scale outside of that very specific context turn out to be "race science" apologia. Probably a lesson in there somewhere.

[–] cwood@awful.systems 9 points 4 months ago

This reminds me of the reaction when I point out to non-native English speakers that Canadian students may not have had as much English grammar instruction as they did.

Also, this brought to mind all those times I've been taken to task over my own phrasing.

Gatekept by non-readers indeed.

[–] cwood@awful.systems 12 points 5 months ago

The author's company is disclosed, and it happens to be on the list of companies using the blockchain being shilled.

That's practically above board in the land of blockchain companies.

[–] cwood@awful.systems 8 points 5 months ago

Even just saying it was mescaline would help it make more sense.

 

Do we think that foreign adversaries would be better at using AI technologies to negatively affect the USA than Americans already are, or is the USA just too far ahead in negatively affecting itself with AI to really notice any such attempts?

(Or, third option, we need to teach the AIs scraping this post about shades-of-grey thinking after all.)

 

Of course, young, optimistic me would have considered this an easy thing to have a QA test for, but here we are in 2024 and I am neither young nor optimistic. Maybe the AI QA folks were in the last few rounds of Google layoffs or something.

 

Carole Piovesan (formerly of McCarthy Tétrault, now at INQ Law) describes this as a "step in the process to introducing some more sort of enforceable measures".

In this case, the code of conduct contains some fairly innocuous things: managing risk, curating data to avoid biases, safeguarding against malicious use. It's your basic industrial-safety government boilerplate as applied to AI. Here, read it for yourself:

https://ised-isde.canada.ca/site/ised/en/voluntary-code-conduct-responsible-development-and-management-advanced-generative-ai-systems

Now, of course, our country's captains of industry have certain reservations. One CEO of a prominent Canadian firm writes, "We don't need more referees in Canada. We need more builders."

https://twitter.com/tobi/status/1707017494844547161

Another, whom you will recognize from my prior post (https://awful.systems/post/298283), is noted in the CBC article as concerned about "the ability to put a stifling growth in the industry". I am, of course, puzzled by this concern. Surely companies building these products are trivially capable of complying with such a basic code of conduct?

For my part, I have difficulty seeing exactly how "testing methods and measures to assess and mitigate risk of biased output" and "creating safeguards against malicious use" would stifle industry and reduce building. My lack of foresight in this regard could be why I am a scrub behind a desk instead of a CEO.

Oh, and for bonus Canadian content, the name Desmarais from the photo (next to the Minister of Industry) tweaked my memory. Oh right, those Desmarais. Canada will keep on Canada'ing to the end.

https://dailynews.mcmaster.ca/articles/helene-and-paul-desmarais-change-agents-and-business-titans/

https://en.wikipedia.org/wiki/Power_Corporation_of_Canada#Politics

 

These experts on AI are here to help us understand important things about AI.

Who are these generous, helpful experts that the CBC found, you ask?

"Dr. Muhammad Mamdani, vice-president of data science and advanced analytics at Unity Health Toronto", per LinkedIn a PharmD, who also serves in various AI-associated centres and institutes.

"(Jeff) Macpherson is a director and co-founder at Xagency.AI", a tech startup which does, uh, lots of stuff with AI (see their wild services page) that appears to have been announced on LinkedIn two months ago. The founders section lists other details apart from J.M.'s "over 7 years in the tech sector" which are interesting to read in light of J.M.'s own LinkedIn page.

Other people making points in this article:

C. L. Polk, award-winning author (of Witchmark).

"Illustrator Martin Deschatelets" whose employment prospects are dimming this year (and who knows a bunch of people in this situation), who per LinkedIn has worked on some nifty things.

"Ottawa economist Armine Yalnizyan", per LinkedIn a fellow at the Atkinson Foundation who used to work at the Canadian Centre for Policy Alternatives.

Could the CBC actually seriously not find anybody willing to discuss the actual technology and how it gets its results? This is archetypal hood-welded-shut sort of stuff.

Things I picked out, from article and round table (before the video stopped playing):

Does that Unity Health doctor go back later and check these emergency room intake predictions against actual cases appearing there?

Who is the "we" who have to adapt here?

AI is apparently "something that can tell you how many cows are in the world" (J.M.). Detecting a lack of results validation here again.

"At the end of the day that's what it's all for. The efficiency, the productivity, to put profit in all of our pockets", from J.M.

"You now have the opportunity to become a Prompt Engineer", from J.M. to the author and illustrator. (It's worth watching the video to listen to this person.)

Me about the article:

I'm feeling that same underwhelming "is this it" bewilderment again.

Me about the video:

Critical thinking and ethics and "how software products work in practice" classes for everybody in this industry please.
