Presearch is not fully decentralized.
The services that manage advertising, staking/marketplace/rewards functionality, and unnamed "other critical Presearch services" are all "centrally managed by Presearch," according to their own documentation.
The nodes that actually scrape and serve content also rely on Presearch's centralized servers. Every search must pass through Presearch's "Node Gateway Server," which is centrally managed by them and which strips identifying metadata and IP information from the request.
That central server then decides where your request goes. It could be routed to open nodes run by volunteers, or to nodes Presearch itself operates. The structure of the network gives you no way to verify which.
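As a rough sketch of that arrangement (the node names and routing logic here are my own assumptions for illustration, not Presearch's actual code), the client only ever talks to the gateway, so the node selection happens entirely out of view:

```python
# Hypothetical sketch of the gateway's position in the network.
import random

VOLUNTEER_NODES = ["node-a.example", "node-b.example"]          # community-run
OPERATOR_NODES = ["internal-1.example", "internal-2.example"]   # run by the operator

def gateway_route(query: str, client_ip: str) -> str:
    # The gateway strips identifying info before forwarding...
    anonymized = {"q": query}  # client_ip and request headers are dropped here
    # ...but it alone decides which node serves the query, and the client
    # has no way to observe or audit that choice.
    node = random.choice(VOLUNTEER_NODES + OPERATOR_NODES)
    return f"forwarding {anonymized['q']!r} to {node}"
```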
Presearch's search index is not decentralized either; it's a frontend for other indexes. It outsources queries to other search engines, databases, and APIs it's configured to use, which means it has no index of its own independent of those central services. I'll give it a pass here, since most search engines work this way today, but many of them are building out their own indexes, which is more robust than what Presearch seems to be doing.
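In outline, a frontend like this boils down to fanning a query out to whichever providers the operator configured. A hypothetical minimal version (the provider names and fetch stub are invented for illustration, not Presearch's real integrations):

```python
# Hypothetical meta-search frontend: no index of its own, just fan-out.
PROVIDERS = {
    "engine_x": "https://api.engine-x.example/search",
    "engine_y": "https://api.engine-y.example/search",
}

def fetch_json(endpoint: str, params: dict) -> list[dict]:
    # Stand-in for a real HTTP call to an upstream search API.
    return [{"provider": endpoint, "q": params["q"]}]

def search(query: str) -> list[dict]:
    results = []
    for endpoint in PROVIDERS.values():
        results.extend(fetch_json(endpoint, params={"q": query}))
    return results  # no independent index is ever consulted
```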
Whichever node handles the query then returns results to the gateway, and there doesn't seem to be any way for the gateway to verify that what it receives reflects what was actually available on the open web. For example, a node could send back nothing but affiliate links to services it thinks are vaguely relevant to the query, and the gateway would assume those results are valid.
For the gateway to verify the results are accurate, it would have to scrape those sources itself, which would defeat the entire purpose of the nodes. The docs claim Presearch can "ensure that each node is only running trusted Presearch software," but Presearch does not control the root of trust on node hardware, so this runs into the same pitfalls games have hit for years with anticheat: it's simply impossible to guarantee, unless all processing happened inside a TPM that Presearch entirely controls, which it doesn't (and which would create privacy issues of its own).
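To make the gap concrete, here's a toy sketch (entirely hypothetical, not Presearch's protocol) of why the gateway has nothing to check live scraped results against:

```python
# A dishonest node can fabricate results, and the gateway cannot tell
# without redoing the node's work itself.
def honest_node(query: str) -> list[str]:
    # Pretend this scraped the open web for the query.
    return [f"https://example.org/results-for-{query}"]

def dishonest_node(query: str) -> list[str]:
    # Same shape of response, but every link is one the node profits from.
    return [f"https://shop.example/buy-{query}?aff=node42"]

def gateway(query: str, node) -> list[str]:
    results = node(query)
    # Nothing to verify here: no signature, no hash of a known document,
    # no ground truth short of scraping the web again.
    return results

print(gateway("laptops", dishonest_node))  # accepted as readily as honest_node
```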
A better model would use nodes solely for hosting, to take the storage burden off a central server: chunks of the index would be hashed before being sent out to nodes, with the hashes kept on the central server. When the server needs a chunk to answer a query, it requests it from a node, verifies that the hash matches, and forwards the result to the user. That takes the storage burden off the main server and makes bandwidth the only cost bottleneck. But that's not what Presearch is doing here.
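A minimal sketch of that model, assuming a simple chunk-id-to-hash table on the central server (this is my illustration of the idea above, not anything Presearch ships):

```python
# Central server keeps only hashes; nodes keep the data; every chunk is
# verified on the way back, so nodes never need to be trusted.
import hashlib

known_hashes: dict[str, str] = {}    # chunk_id -> sha256 hex, kept centrally
node_storage: dict[str, bytes] = {}  # chunk_id -> data, kept on nodes

def publish_chunk(chunk_id: str, data: bytes) -> None:
    known_hashes[chunk_id] = hashlib.sha256(data).hexdigest()
    node_storage[chunk_id] = data    # shipped off to an untrusted node

def fetch_chunk(chunk_id: str) -> bytes:
    data = node_storage[chunk_id]    # returned by an untrusted node
    if hashlib.sha256(data).hexdigest() != known_hashes[chunk_id]:
        raise ValueError("node returned tampered data")
    return data                      # safe to forward to the user

publish_chunk("index-shard-7", b"...serialized index data...")
assert fetch_chunk("index-shard-7") == b"...serialized index data..."
```

The server only stores a 32-byte hash per chunk, so tampering is caught immediately while the actual storage lives entirely at the edge.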
This doesn't make Presearch bad in itself, but it's most definitely not decentralized. All core search functionality relies on their servers alone, and the design adds real risk of bad actors manipulating search results.
I'd actually found them better than Google for a while, but right around the time the AI craze took off, their search quality degraded significantly. Maybe that's not such a coincidence after all.