this post was submitted on 21 Jul 2023
Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ


I want to rip the contents of a paywalled website, but I have to log in through their web page to get access.

Does anyone have any good tools for Windows for that?

I'm guessing that any such tool must have a built-in browser, or be a browser plugin, for it to work.
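A built-in browser isn't strictly required: one common workaround is to export the logged-in session's cookies from your browser (several extensions write the Netscape `cookies.txt` format) and hand them to a ripper or script, which then sees the site as logged in. A minimal stdlib sketch of that idea, assuming a Netscape-format export; the domain and cookie values here are made up:

```python
import http.cookiejar
import tempfile
import urllib.request

# Hypothetical cookies.txt exported from a logged-in browser session
# (Netscape format; domain, cookie name, and value are invented).
COOKIES_TXT = "\n".join([
    "# Netscape HTTP Cookie File",
    ".example.com\tTRUE\t/\tFALSE\t2147483647\tsessionid\tabc123",
]) + "\n"

def load_browser_cookies(text: str) -> http.cookiejar.MozillaCookieJar:
    """Load Netscape-format cookie text into a jar urllib can send."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(text)
        path = f.name
    jar = http.cookiejar.MozillaCookieJar(path)
    jar.load()
    return jar

jar = load_browser_cookies(COOKIES_TXT)
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
# opener.open("https://example.com/members/page1.html") would now send the
# session cookie, so a crawler built on this opener sees the logged-in site.
print([c.name for c in jar])  # → ['sessionid']
```

The same exported `cookies.txt` also works with command-line rippers that accept a cookie file, so no plugin or embedded browser is needed.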

[–] zabadoh@lemmy.ml 1 points 1 year ago* (last edited 1 year ago) (1 children)

Okay, I found SurfOffline, which does the trick without too much hassle, but....

It's verrrrrrrry slooooooooow.

It uses Internet Explorer as a module, and it re-downloads each individual resource separately instead of copying files from IE's cache, which is weird and slow, especially when hundreds of images are involved.

And SurfOffline doesn't appear to be supported anymore, i.e. the support email's inbox is full.

edit: Aaaaand SurfOffline doesn't save to .html files with a directory structure!!! It stores everything in some kind of SQL database, and it only exports to .mht and .chm files, which are deprecated Microsoft formats (.chm is a compiled help format, .mht a web archive format)!!!

What it does have is a built-in web server that only works while the program is running.

So what I plan to do is have the program up but doing nothing, while I sic HTTrack on the 127.0.0.1 address where my ripped website is being served.

HTTrack will hopefully "extract" the website to .html format.

Whew, what a hassle!
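The localhost-extraction step above amounts to crawling 127.0.0.1 and writing each fetched page out as a plain .html file. A minimal stdlib sketch of the idea (not the actual SurfOffline/HTTrack setup — here a tiny throwaway local server with made-up page names stands in for SurfOffline's built-in server):

```python
import functools
import http.server
import pathlib
import tempfile
import threading
import urllib.request
from html.parser import HTMLParser

# Stand-in for SurfOffline's built-in server: serve two fake pages locally.
site = tempfile.mkdtemp()
pathlib.Path(site, "index.html").write_text('<a href="page2.html">next</a>')
pathlib.Path(site, "page2.html").write_text("<p>done</p>")

handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=site)
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

class LinkCollector(HTMLParser):
    """Collect href targets from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [v for k, v in attrs if k == "href"]

def mirror(base: str, start: str, out_dir: str) -> list:
    """Breadth-first copy of same-site pages to plain .html files."""
    out = pathlib.Path(out_dir)
    queue, saved = [start], []
    while queue:
        page = queue.pop(0)
        if page in saved:
            continue
        html = urllib.request.urlopen(f"{base}/{page}").read().decode()
        (out / page).write_text(html)  # plain .html on disk, no database
        saved.append(page)
        collector = LinkCollector()
        collector.feed(html)
        queue += [l for l in collector.links if not l.startswith("http")]
    return saved

pages = mirror(f"http://127.0.0.1:{port}", "index.html", tempfile.mkdtemp())
server.shutdown()
print(pages)  # → ['index.html', 'page2.html']
```

A real crawler has to handle subdirectories, images, CSS, and error pages, which is exactly what tools like HTTrack or WebCopy do for you.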

[–] zabadoh@lemmy.ml 1 points 1 year ago

To continue my travails:

HTTrack didn't do a great job: it was slow even when copying from the same machine, and it flattened the directory structure of the website it wrote out, making the result almost un-navigable.
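For context, flattening breaks a mirror because relative links still point at the original nested paths; the pages only stay navigable if those links are rewritten to match the new flat file names. A small sketch of that rewriting, with a hypothetical flattening scheme (this is not what HTTrack actually does, just an illustration of the problem):

```python
import re

def flatten_name(path: str) -> str:
    # Hypothetical scheme: "guides/intro/page.html" -> "guides_intro_page.html"
    return path.strip("/").replace("/", "_")

def rewrite_links(html: str) -> str:
    # Rewrite relative href/src values so they resolve after flattening;
    # absolute URLs and root-relative paths are left alone.
    return re.sub(
        r'(href|src)="(?!https?://|/)([^"]+)"',
        lambda m: f'{m.group(1)}="{flatten_name(m.group(2))}"',
        html,
    )

page = '<a href="guides/intro/page.html">intro</a> <img src="img/logo.png">'
print(rewrite_links(page))
# → <a href="guides_intro_page.html">intro</a> <img src="img_logo.png">
```

If the ripper flattens files without doing this rewrite, every internal link 404s, which is why the result was almost un-navigable.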

Here's where Cyotek WebCopy shines: it's copying the website from SurfOffline's database-backed web server quickly, so I should have the entire website re-extracted very soon!