Out of curiosity, I went ahead and read the full text of the bill. After reading it, I'm pretty sure this is the controversial part:
SEC. 3. DUTY OF CARE. (a) Prevention Of Harm To Minors.—A covered platform shall act in the best interests of a user that the platform knows or reasonably should know is a minor by taking reasonable measures in its design and operation of products and services to prevent and mitigate the following:
(1) Consistent with evidence-informed medical information, the following mental health disorders: anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors.
As far as I can tell, the bill never specifies what actions a platform would be expected to take, nor the scope of what it would be expected to moderate. Does "operation of products and services" include recommender systems? If so, I could see someone using this language to argue that showing LGBTQ content to children promotes mental health disorders, and that it therefore shouldn't be recommended to them. Kids would still be able to find that content by searching for it, but I don't think that makes it any better.
Also, section 9 calls for forming a committee to study the feasibility of building age verification into the hardware and/or operating system of consumer devices. That seems like an invasion of privacy.
Reading through the rest of it, though, a lot of it seemed reasonable. For example, it would require sites to put children on safe default settings: keeping their personal information private, turning off addictive features designed to maximize engagement, and letting kids opt out of personalized recommendations. Those would be good changes, in my opinion.
If it weren't for those couple of sections, the bill would probably be fine, which may be why it's gotten bipartisan support. But as written, the bad seems to outweigh the good, so we should probably start calling our lawmakers if the bill keeps gaining traction.
apologies for the wall of text, just wanted to get to the bottom of it for myself. you can read the full text here: https://www.congress.gov/bill/118th-congress/senate-bill/1409/text
I disagree with your interpretation of how an AI works, but I think the mechanics of how AI works are pretty much irrelevant to the discussion in the first place. Your argument stands just the same regardless. Even if AI worked much like a human mind and was very intelligent and creative, I would still say that an AI's use of an idea without the consent of the original artist is fundamentally exploitative.
You can easily train an AI to launder an artist's works, with next to no human labor, by using the artist's own works as training data. There's no human input or hard work involved, and that's a factor in whether a work counts as transformative. I'd argue that if you can feed a work into a machine, type in a prompt, and get a new work out, you still haven't really transformed it. No matter how creative or novel the output is, the reality is that no human put any real effort into it, and it was built on the backs of unpaid and uncredited artists.
You could probably make an argument for being able to sell works made by an AI trained only on the public domain, but even then the output shouldn't be copyrightable IMO, because it's not a human creation.
TL;DR - No matter how creative an AI is, its works should not be considered transformative in a copyright sense, as no human did the transformation.