The server component had a small bug: it checked the wrong variable for emptiness before building the list of allowed zones.
When using a config without any Fqdns defined, this would result in the server refusing the client access to tunnel anything whenever zones were to be used.
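To illustrate the class of mistake, here is a simplified, self-contained sketch in Go; the type, field, and function names are made up for this example and do not come from the actual tupd source:

package main

import "fmt"

// Hypothetical illustration only; Config and allowedZones are invented
// names for this example, not the real tupd code.
type Config struct {
	Fqdns []string
	Zones []string
}

// allowedZones builds the list of zones a client is allowed to use.
func allowedZones(cfg Config) []string {
	// The bug was an emptiness check against the wrong field: it looked
	// at cfg.Fqdns instead of cfg.Zones, so a config with zones but no
	// Fqdns produced an empty allow list and the server denied access.
	if len(cfg.Zones) == 0 { // was: len(cfg.Fqdns) == 0
		return nil
	}
	return append([]string(nil), cfg.Zones...)
}

func main() {
	// A config with a zone but no Fqdns now yields that zone as allowed.
	cfg := Config{Zones: []string{"zone.domain.tld"}}
	fmt.Println(allowedZones(cfg))
}

Running this prints [zone.domain.tld]; with the old check against Fqdns the same config would have produced an empty allow list and printed [].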
tup proxies services on the local network to a remote gateway; all traffic between the remote server and the service on the local network is sent through a WireGuard tunnel.
Think of tup as an open-source, self-hosted alternative to ngrok and Cloudflare Tunnel.
tupd (the server) can be found at: https://drive.proton.me/urls/GEJM1HT0DW#aOop4p7zxaPA
the tup client can be found at: https://drive.proton.me/urls/63SE9PW020#GFzZrprg9wjZ
I also noticed that not all file extensions are directly inspectable in the Drive (even though everything is plain text files); I apologize for that, as I believe transparency is very important.
I've complemented the uploads with .diff files generated with diff from GNU diffutils.
There are 'full' diff files for both tup and tupd (ending with _full.diff), and there is also a diff file with only the changes between tupd-0.5 and tupd-0.6 (tupd-0.6.diff).
the 'full' diff files can also be applied to an empty directory with GNU patch like this:
mkdir tupd-0.6
patch --directory=tupd-0.6/ --strip=1 < tupd-0.6_full.diff
Since I haven't uploaded my project to any git hosting service, many people didn't look at how it can be used, so I want to give a few examples of the client. More explanations and examples can be found in the client's README.md and EXAMPLE.md.
Syntax: tup [-zone <zone>] [@][host]:[#]<port>
Examples:
tup :8080
this would proxy http://127.0.0.1:8080 onto a random subdomain of the default zone, for example: https://xyz123.zone.domain.tld
tup 192.168.1.11:8080
this would proxy http://192.168.1.11:8080 onto a random subdomain of the default zone
Syntax: tup -fqdn <domain> [@][host]:[#][@]<port>
Examples:
tup -fqdn sub.domain.tld :8080
this would proxy http://127.0.0.1:8080 directly onto https://sub.domain.tld
tup -fqdn sub.domain.tld 192.168.1.11:@8443
this would proxy https://192.168.1.11:8443 directly onto https://sub.domain.tld, skipping Caddy and its TLS termination on the server, the same as a raw TCP / SNI proxy
Syntax: tup -udp|-tcp [rport:][@][host]:<port>
Examples:
tup -udp :27015
this would proxy udp://127.0.0.1:27015 onto a random UDP port on the server
tup -udp 27016:27015
this would proxy udp://127.0.0.1:27015 onto UDP port 27016 on the server
tup -udp 27016:192.168.1.11:27015
this would proxy udp://192.168.1.11:27015 onto UDP port 27016 on the server
tup -tcp :3306
this would proxy tcp://127.0.0.1:3306 onto a random TCP port on the server
I also want to clarify that the code is released under the Unlicense, dedicating my software to the public domain.
I totally agree it is no different from a random untrusted git repo, so I believe no additional trust would be gained if I uploaded it to any of them.
I think doing version control this way is totally fine; every commit to the Linux kernel is mailed as a text diff on the various mailing lists.
As for trusting this or any security-related software, I believe you ultimately have to read and understand the software you are using, or someone you trust has to do it. I can't do that for you; I can only answer questions as they arrive.
I also agree unit tests are probably a good idea for those reasons as well. I don't have any right now, but I'm open to writing them at some point or to receiving patches with them.
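For what it's worth, a unit test for the zone bug mentioned above could be as small as the following; it reuses the made-up allowedZones sketch from earlier (placed next to it as a _test.go file), so the names are still hypothetical and not the real tupd code:

package main

import "testing"

// Hypothetical test against the made-up allowedZones sketch above; the
// real function and config fields in tupd may look different.
func TestAllowedZonesWithoutFqdns(t *testing.T) {
	cfg := Config{Zones: []string{"zone.domain.tld"}}
	got := allowedZones(cfg)
	if len(got) != 1 || got[0] != "zone.domain.tld" {
		t.Fatalf("expected the configured zone to be allowed, got %v", got)
	}
}

go test would fail with the old (wrong-variable) check and pass after the fix.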
I like your feedback, thanks for it
Git was literally written by Linus to manage the source of the kernel. Sure patches are proposed via mailing list, but the actual source is hosted and managed via git. It is literally the gold standard, and source control is a foundational piece of software development. Same with not just unit tests, but functional testing too. You absolutely should not be putting off testing.
I've done a lot of testing, I'm not skipping that. Writing automated tests is a whole different thing, however; it is not as straightforward and is, to be honest, very often skipped in a large number of projects.
Git was made to handle the sheer number of commits and people contributing to the Linux kernel. The first versions of Linux were just Linus uploading the code to an FTP server. Git is just a tool for Linus to quickly apply all the patches he gets from different channels to his local tree and to manage a large public repository.
Unlike Linus, I'm not planning to run a public development process for my software, so a VCS doesn't make much sense in my opinion.
Before git, it was far from standard to use a source control system for small projects that weren't going to have a public development process anyway. While git is the gold standard for source control today, I don't think one has to use source control on every software project, just as it wasn't required back then.