If only they applied the same rigor to big tech scraping the same content into large language models. I guess the Bypass Paywalls team wasn't big enough to afford the legion of lawyers that Sam Altman and co. can summon on demand. We can just wait for ChatGPT to serve those articles directly in our search results so nobody even visits their website, because we live in a world where stealing an article to read is illegal, while stealing all of them for profit is not.
Firefox
A place to discuss the news and latest developments on the open-source browser Firefox
This is actually a pretty bad ruling if you think about it. What does that make the Internet Archive?
If they don't want people to bypass the paywall they should require authentication
Tragic, but this functionality can still be replicated with the uBlock Origin and/or NoScript extensions.
Is there a list somewhere?
That is the list.
uBlock Origin requires filter lists, which you can select from a provided set, or enter/download yourself.
NoScript (likely redundant, but I don't know how to do this with uBlock) lets you select which scripts a site runs and disable whichever ones are necessary to clean the page of garbage/reveal the article.
Every page is different and changes over time, so it isn't perfect, but once you are familiar with how it works you can use the element picker to 'pick' blocking boxes and automatically create a filter that removes the box (on reload) if necessary.
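For illustration, filters made with the element picker end up as cosmetic rules in uBlock Origin's "My filters" pane. A hand-written set for a hypothetical paywalled site (example.com and the class names are placeholders, not any real site's markup) might look like:

```
! Hide the paywall overlay and nag banner on a hypothetical site
example.com##.paywall-overlay
example.com##.subscribe-banner
! Undo the scroll-lock the paywall script applies to the page body
example.com##body:style(overflow: auto !important)
```

The `##` rules hide matching elements, and the `:style()` operator is uBlock Origin's way of forcing a CSS property back, which is often needed because paywalls disable scrolling as well as covering the text.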
Spending 20 minutes fiddling with NoScript and uBlock rules to find out whether it's even possible to get in that way, for each new paywalled site you accidentally follow a link to, is no substitute for instantly fixing them all with one extension.
It's not nearly as bad as you make it sound, but you're right that the original extension is better than generic tools you have to configure yourself.
Ok, will just keep using archive.is
Until that gets shut down as well. This isn't a great precedent we're setting.