In my experience google still seems to have the broadest collection, so I use it when I have a search that doesn’t return many results.
Usually I use DuckDuckGo, but mostly out of habit. It serves the same results as Bing, and I'm dubious about whether it's actually private, because I get location-specific results from it that come from Bing. Bing/DuckDuckGo isn't nearly as broad as Google, and it's dumb in the same way as Google because it has a bunch of AI spam.
For quality of ranking and human-written results, I have had good experiences with Marginalia (https://marginalia-search.com/). It only has about 1 billion pages in its index, so it won't be nearly as good for obscure things, but I'm fond of the quality of its results. I don't use it very often, though, and maybe I should.
This is a great list of search engines with independent indexes: https://seirdy.one/posts/2021/03/10/search-engines-with-own-...
I think it would be great if someone made a meta-search engine that used as many engines from that list as possible and focused on really good ranking. A tool like that could be the best for both obscurity and result quality. Unfortunately, the meta-search engines I know of don't use many sources, and they don't gather all the available results before re-ranking them.
Oh also, there are a surprising number of engines that shut down under suspicious circumstances. This is a good article about those ones: https://archive.org/details/search-timeline
> I think it would be great if someone made a meta-search engine that used as many engines from that list as possible and focused on really good ranking. A tool like that could be the best for both obscurity and result quality. Unfortunately, the meta-search engines I know of don't use many sources, and they don't gather all the available results before re-ranking them.
That's a really interesting idea.
Thanks. I’ve been thinking about doing something like this for a while. I was thinking it could be a program that runs locally, scrapes the search result pages, re-ranks them, and then builds an HTML page and opens it in a web browser.
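To make that concrete, here's a minimal sketch of the local pipeline. The engine names and result URLs are stand-ins for the scraping step, and the re-ranking uses reciprocal rank fusion as one possible choice; nothing here is a settled design.

```python
import html
from collections import defaultdict

def rrf_merge(rankings, k=60):
    """Merge several ranked result lists with reciprocal rank fusion.

    `rankings` maps engine name -> ordered list of result URLs.
    A URL ranked highly by several engines accumulates the most score.
    """
    scores = defaultdict(float)
    for urls in rankings.values():
        for rank, url in enumerate(urls, start=1):
            scores[url] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

def render_html(urls):
    """Build a bare-bones results page as a string."""
    items = "\n".join(
        f'<li><a href="{html.escape(u)}">{html.escape(u)}</a></li>'
        for u in urls
    )
    return f"<html><body><ol>\n{items}\n</ol></body></html>"

# Stubbed engine output stands in for the actual scraping step.
rankings = {
    "engine_a": ["https://x.example", "https://y.example"],
    "engine_b": ["https://y.example", "https://z.example"],
}
merged = rrf_merge(rankings)
page = render_html(merged)

# To open the page locally, something like:
#   import tempfile, webbrowser
#   with tempfile.NamedTemporaryFile("w", suffix=".html", delete=False) as f:
#       f.write(page)
#   webbrowser.open("file://" + f.name)
```

In this toy example `https://y.example` ends up first, since both engines returned it.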
I’ve done some small experiments, and some of the search engines take a while to get results from because they block you if you go faster than about 1 page per second, so you might end up waiting a few minutes before getting results.
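The roughly one-page-per-second limit could be handled with a simple per-host throttle; this is a sketch, and the host name below is made up (the demo uses a short delay so it runs quickly, where real use would set `delay=1.0` or more):

```python
import time

class HostThrottle:
    """Allow at most one request per `delay` seconds to each host."""

    def __init__(self, delay=1.0):
        self.delay = delay
        self.last = {}  # host -> timestamp of last request

    def wait(self, host):
        """Block until it is safe to hit `host` again, then record the hit."""
        now = time.monotonic()
        earliest = self.last.get(host, 0.0) + self.delay
        if now < earliest:
            time.sleep(earliest - now)
        self.last[host] = time.monotonic()

throttle = HostThrottle(delay=0.05)  # short delay so the demo finishes fast
t0 = time.monotonic()
for _ in range(3):
    throttle.wait("engine_a.example")  # hypothetical engine host
elapsed = time.monotonic() - t0  # three hits => two enforced gaps
```

With several engines queried concurrently, each host gets its own timer, so the total wait is bounded by the slowest single engine rather than the sum of all of them.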
Waiting a few minutes (possibly as long as ten, probably no shorter than two) would get you better and more complete results that you can go as deep into as you want, with no ads. Is this something you would be interested in?
Depends on what you search for, and in which language and region. For technical stuff in English, DDG mostly suffices; otherwise WP. For businesses, restaurants, clubs, and general venues, Apple Maps, with Google Maps as a fallback, depending on the region. For pure maps, bicycle routes, and pedestrian shortcuts, OSM, again depending on the region. There is no universally usable map.
Google
I agree and disagree
not google
I agree and disagree