YouKnowWhoTheFuckIAM
100% agreed. What terrifies me is that our friend here seems to see the word “science” and immediately assume impeccable faith and perfect knowledge
If I may refer you back to the book cited, the (made-up) fears of that time did in fact incorporate the difficulty of obtaining fissile material: amongst the worries of the period was that obtaining fissile material would not actually be that difficult. To simply state that biological and chemical warfare bear no resemblance is to depart from the lesson being related here and to start making excuses for the particular object you happen to be afraid of. In each case the fear being constructed will make its own allowances for the real or supposed facts on the ground, and in this case there was no need to assume that a bombmaker would have to make his own plutonium - you’re drawing attention to a distraction.
Another point which you’re glibly avoiding, with tellingly unnecessary recourse to insulting language, is that “CBRN” the construct cannot be so easily distinguished from the “practical and technical application” that the real enterprise has. Indeed the existence of the real enterprise is often driven in part by the made-up fears (which does not license the fears) - this happened, for example, with the security protocols around the management of fissile material. I refer you back to the same book, and to the rather famous data point about Bill Clinton’s interest in manufactured diseases.
For more on stuff like this, although again not on the subject of bioterrorism because I don’t have that material in front of me, I recommend the confluence of two chapters in The Merger of Knowledge with Power by Ravetz (as well as the whole book), namely “Recombinant DNA Research: Whose Risks?” and “Hardware and Fantasy in Military Technology”. This isn’t paranoid soapboxing from a teenage Chomsky fan; it’s just part of the fabric of industrial science and technology as a social phenomenon.
I expressly put “CBRN groups” in scare quotes to tag along with my line at the bottom: “I don’t want to be dismissive of genuine attempts…but the scale and scope of this is defined by politics, not by technical possibility”
You, however, have me saying “cbrn is made up to self justify” - of course if I had said any such thing, then one counter-example would have sufficed. Although actually it wouldn’t have sufficed, because in this context we’re talking about terroristic or otherwise chaotic release of a novel weapon. We’re not talking at all about bad powerful people deliberately employing chemical weapons they already have, for which of course CBRN is a worthy use and “genuine attempt at being ready”.
“CBRN groups”, here, operates at the level of rhetoric, and that’s what I tried to draw attention to. The context in which “CBRN groups”, the rhetorical and political device, emerged was one in which Bill Clinton could become so enthused by a sci-fi novel about bioterrorism that he had its author up in front of the Senate testifying as an expert on the subject. So on reflection, I should have deferred to Eisenhower’s original formulation: the military-industrial-congressional complex.
Edit: you could always try Alex Wellerstein for the aggressively obvious historical counterpoint to this whole fantasy. In his Restricted Data he provides a useful companion to Barriers to Bioweapons in a chapter discussing the notorious “backyard atomic bomb built from declassified material” cases. But because it’s a work of history we learn the most salient fact of all: the only reason anyone believed that the backyard bomb designs were viable was that somebody wanted them to believe it, or that they had some reason to want to believe it themselves.
Without that ingredient it was plain that the actual know-how just wasn’t there; but that fact was fundamentally obscured by the desire to believe, and so people saw viability where there was none, plugging the holes in their imaginary with meaningless verbiage about risk and but-what-if?
I think what’s going amiss here is that “CBRN groups” is very obviously and primarily shit made up by the military-industrial complex to justify itself after the Cold War
I don’t want to be dismissive of genuine attempts at being ready just in case, but the scale and scope of this is defined by politics, not by technical possibility
Too late! You already mean “moronarchy”
To be clear: it is all movie-plot threats. At the very forefront of the entire “existential threat” space is nothing but a mid-1990s VHS library. Frankly, if you want to understand like 50% of what goes on in AI at this point, my recommendation is just that you read John Ganz and listen to his podcast, because 90s pop culture and politics are the connective tissue of the whole fucking enterprise.
The great philosophical dialogues in English - those by Berkeley, Hume, Lakatos - are few and far between. Perhaps there is a general awareness that only these exceptional stylists could pull off the rare trick of not obviously putting words into their antagonists’ mouths, even where the author clearly took the view of his protagonist. Indeed there is still, two and a half centuries later, debate about whose view Hume actually took in his own dialogue - and when the rather obvious and straightforward alternative was just to write down “this is what I fucking think, alright?”, that aporetic flourish was precisely what justified writing it in dialogue form in the first place.
or confusing GWAS’ current inability to detect a gene with the gene not existing
This remarkable sleight of hand sticks out. The argument from the (or rather this particular) GWAS camp goes “we are detecting the genes, contrary to expectations”. There isn’t any positive presumption in favour of that camp, so the failure thus far to detect the gene is indeed supposed to count against its existence.
I like the implication that if LLMs are, as we all know to be true, near-perfect models of human cognition, then human behaviour of all sorts turns out to be irreducibly social, even behaviour that appears to be “fixed” from an early stage
While I agree with you about the economics, I’m trying to point out that physical reality also has constraints other than economic, many of them unknown, some of them discovered in the process of development.
Birds’ flight isn’t magic, or unknowable, or non-reproducible.
No. But it is unreproducible if you already have arms with shoulders, elbows, hands, and five stubby fingers. Human and bird bodies are sufficiently different that there is no close approximation available to humans which will reproduce flight as it is found in birds.
If it was, we’d have no sense of awe at learning about it, studying it. Imagine if human-like behavior of intelligence was completely unknowable. How would we go about teaching things? Communicating at all? Sharing our experiences?
To me, this is a series of non-sequiturs. It’s obvious that you can have awe for something without having a genuine understanding of it, but that’s beside the point. Similarly, the kind of knowledge required for humans to communicate with one another isn’t relevant - what we want to know is the kind of knowledge which goes into the physical task of making artificial humans. And you ride roughshod over one of the most interesting aspects of the human experience: human communication and mutual understanding are possible across vast gulfs of the unknown, which is itself rather beautiful.
But again I can’t work out what makes that particularly relevant. I think there’s a clue here though:
…but I also take care not to put humanity, or intelligence in a broad sense, in some special magical untouchable place, either.
Right, but that is a common (and mistaken) move which some people do make, which I’m not making, and which I have no desire to make. You’re replying here to people who affirm an implicit or explicit dualism about human consciousness and say that the answers to some questions are just out of reach forever. I’m not one of those people, and I’m referring specifically to the words I used to make my point, namely that there exist real physical constraints, repeatedly approached and arrived at in the history of technology, which demonstrate that not every problem has an ideal solution (and I refer you back to my earlier point about aircraft to show how that cashes out in practice).
There are no known problems that can’t theoretically be solved, in a sort of pedantic “in a closed system information always converges” sort of way
Perhaps. The problem of human flight was “solved” by the development of large, unwieldy machines driven by (relatively speaking, cf. pigeons) highly inefficient propulsion systems which are very good at covering long distances, oceans, and rough terrain quickly - the aim was Daedalus and Icarus, but aerospace companies are fortunate that the flying machine turned out to have advantages in strictly commercial and military use. It is completely undecided, physically, whether there is a solution to the problem of building human-like intelligence which does a comparable job to having sex - that is, to making humans the way we already make them - even with complete information about the workings of humans.