Hoo boy.
So, first off, this might be something you've already considered, but you mentioned "your company" a couple of times. Chances are, if you've contributed to these knowledge bases as an employee, the content on those knowledge bases is owned by your employer. If you don't get permission to copy it and incorporate it into your portfolio, you're potentially asking for legal trouble.
So, I guess what I'm saying is: if you're not 100% sure your employer will be totally cool with it, try not to get caught. Also, depending on how on-the-ball the IT department is, they might notice the scraping itself, since you'll be sending a lot of extra traffic to those knowledge bases.
So, that out of the way, wget will most likely do it. But it'll probably take a lot of tinkering.
A few considerations. First, if you have to be authenticated to access the content you're wanting to scrape, that'll definitely add a layer of complexity to this endeavor.
Second, if the Zendesk application interface uses a lot of AJAX, it's likely to be much more of a bear, since wget only fetches the raw HTML and doesn't execute any JavaScript.
If you're using wget, a basic version of what you'll need is a command something like:
wget --mirror --tries 5 -o log --continue --show-progress --wait 2 --waitretry 2 --convert-links --page-requisites https://example.com/
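If the knowledge base lives under one path, you can keep wget from wandering into the rest of the site. A sketch, assembled as a string here so you can eyeball it before running it (example.com/hc/ is a stand-in for your actual help-center path):

```shell
# Scoped variant: --no-parent stops wget from climbing above the starting
# path, and --adjust-extension saves pages with .html suffixes so they open
# cleanly from disk. example.com/hc/ is a placeholder for your real URL.
CMD="wget --mirror --no-parent --adjust-extension --tries 5 -o log --continue \
--wait 2 --waitretry 2 --convert-links --page-requisites https://example.com/hc/"
echo "$CMD"
```

When it looks right, run it with: eval "$CMD"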
If you need to be authenticated to view the data you're wanting, you'll have to do some magic with cookies or some such. The way I'd deal with that:
- Log in to the application in a browser.
- Hit F12 to open developer tools.
- Hit the "Storage" tab. (I'm using Firefox; in Chromium-based browsers the equivalent lives under the "Application" tab.)
- Expand out "Cookies" on the left and select the site.
- Make a text file with one line per cookie. This bit is likely to be quite finicky. You'll have to use a text editor that saves plain text, not rich text. (Think Notepad, not Microsoft Word. Notepad++ will work nicely.) You'll probably need to use Unix-style line endings, which is why plain old Notepad may not cut it.
- Each line will need to follow the Netscape cookies.txt format, which is what wget's --load-cookies expects. Note in particular that the fields are separated by tab characters, not spaces.
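To make the finicky bit concrete, here's a sketch that writes a cookies.txt in that format. The domain, cookie names, and values are made-up placeholders; substitute what you copied out of your browser. Using printf guarantees real tab characters:

```shell
# Sketch: build a cookies.txt in the Netscape format wget expects.
# Fields (tab-separated): domain, include-subdomains flag, path, secure flag,
# expiry (Unix timestamp), cookie name, cookie value.
# The domain and cookie names/values below are placeholders -- swap in the
# real ones from your browser's dev tools.
{
  printf '# Netscape HTTP Cookie File\n'
  printf '.example.com\tTRUE\t/\tTRUE\t1999999999\t_zendesk_session\tabc123\n'
  printf '.example.com\tTRUE\t/\tTRUE\t1999999999\t_help_center_session\tdef456\n'
} > cookies.txt
```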
- Save that text to a file named "cookies.txt" in the directory where you're running wget, and add --load-cookies cookies.txt to your wget command, right before the URL.
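Putting the pieces together, the full invocation would look something like this (again assembled as a string so you can review it first; example.com stands in for your real knowledge-base URL):

```shell
# Full mirror command with cookie-based authentication. Assumes cookies.txt
# sits in the current directory; example.com is a placeholder URL.
CMD="wget --mirror --tries 5 -o log --continue --show-progress \
--wait 2 --waitretry 2 --convert-links --page-requisites \
--load-cookies cookies.txt https://example.com/"
echo "$CMD"
```

When it prints what you expect, run it with: eval "$CMD"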
If you're on Windows, you can get wget here.
And, honestly, all this is a little like programming, really. Unfortunately, I'm not aware of any friendlier apps for this sort of thing. Hopefully this ends up getting you what you're hoping for.