- cross-posted to:
- 404media@ibbit.at
Archive: https://archive.is/lP0lT
Will cut the AI results out of your Google searches by switching the browser’s default search to Google’s web-only results (udm=14)…
I cannot tell you how much I love it.
Or better yet, ditch Google altogether.
I switched to Startpage, an EU-based search engine.
Not EU based, and not free, but I’ve been loving Kagi.
Seconding Kagi, it’s worth every penny.
Well, I’m glad. That being said, Startpage is a search engine located in the Netherlands that you can start using now. Just go to the site. Kagi is paid.
Yes! Startpage rocks hard. I wish Safari would let it be the default search engine (Firefox does with a simple plug-in).
I’ve been using startpage, but doesn’t it still rely on google results somehow?
Startpage! No shit. Used to be Ixquick, and I used that for years. Great site - thank you for reminding me it’s still there. :)
FWIW, wp:Startpage.
For Firefox on Android (which TenBlueLinks doesn’t have listed) add a new search engine and use these settings:
Name: Google Web
Search string URL: https://www.google.com/search?q=%s&udm=14

As @Saltarello@lemmy.world learned before I did, strip the number 25 from the string above so it looks more like this: `www.google.com/search?q=%s&udm=14`

Edit: Lemmy/Voyager formats this string with 25 at the end. Remove the 25 & save it as a browser search engine.

EDIT: There’s got to be a Markdown option for disabling the auto-formatting of links, right?? The escape backslash seems to not be working for this specifically.

EDIT II: Found a nasty hack that does the trick!
https[]()://www.example.com/search?q=%s

appears as:

https://www.example.com/search?q=%s
Lemmy also does code markup with `text`
Indeed, @SnotFlickerman@lemmy.blahaj.zone, like so!:

`https://www.google.com/search?q=%25s&udm=14`

Edit, oh is it buggy with parameters per downthread? Interesting
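For anyone puzzled by where the 25 keeps coming from: the %s in the search template is a placeholder the browser replaces with your query, and when a comment renderer percent-encodes the link, the % character itself becomes %25. A minimal Python sketch of that round trip, using the standard library’s urllib (the template is the one from the comments above):

```python
from urllib.parse import quote, unquote

# The search template the browser expects, with %s as the query placeholder.
template = "https://www.google.com/search?q=%s&udm=14"

# Percent-encoding turns "%" into "%25", which is how "%s" ends up
# displayed as "%25s" when Lemmy/Voyager re-renders the link.
encoded = quote(template, safe=":/?&=")
print(encoded)           # https://www.google.com/search?q=%25s&udm=14

# Decoding (i.e. "stripping the 25") restores the usable template.
print(unquote(encoded))  # https://www.google.com/search?q=%s&udm=14
```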
Thank you.
Oh thank you I’ve been looking for this
Yeah, switching search links will help, but it’s a band-aid. AI has stolen literally everyone’s work without any attempt at consent or remuneration, and the payoff is that your search is now 100 times faster, comes back with exactly something you can copy & paste, and you never have to dig through links or bat away confirmation boxes only to find out a page doesn’t have what you need.
It’s straight-up smash-n-grab. And it’s going to work. Just like everybody and their grandma gave up all their personal information to Facebook, so will your searches be done through AI.
The answer is to regulate the bejesus out of AI and ensure they haven’t stolen anything. That answer was rendered moot by electing trump.
I don’t know about you, but my results have been wrong or outdated at least a quarter of the time. When information is wrong as often as two coin flips both coming up heads, it’s outright useless. What’s the point in looking something up to maybe find the right answer? We’re entering a new dark age, and I hate it.
I’ve been asking a bunch of next-to-obvious questions about things that don’t really matter and it’s been pretty good. It still confidently lies when it gives instructions but a fair amount of time it does what I asked it for.
I’d prefer to not have it, because it’s ethically putrid. But it converts currency and weights and translates things as well as expected and in half the time I’d spend doing it manually. Plus I kind of hope using it puts them out of business. It’s not like I’d pay for it.
I refuse to believe that it’s in any way better or faster at unit and currency conversion than plain Google or DuckDuckGo. Literally type “100 EUR to USD” and you’ll get an almost instant answer. Same with units: “100 feet to meters”.
And if you’re using it, you’re helping their business. It’s as simple as that.
100%. Unit conversion is a solved problem, and it is impossible for an AI to be faster or more accurate than any of the existing converters.
I do not need an AI calculator, because I have no desire to need to double check my calculator.

Well spotted. I retract my notion that unit conversion was convenient. Clearly I should have switched to another tab to do the thing that is solved.
Removed by mod
But it converts currency and weights and translates things as well as expected and in half the time i’d spend doing it manually
So does qalc, and it can also do arithmetic and basic calculus quickly and (gasp) correctly!
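To make the point concrete: unit conversion is a fixed factor and a multiplication, nothing a language model can improve on. A minimal Python sketch with a hypothetical convert() helper (the factor table and function name are mine, not any particular library’s; currency is left out because it needs a live exchange-rate feed):

```python
# Deterministic unit conversion: look up an exact factor, multiply once.
FACTORS = {
    ("ft", "m"): 0.3048,        # exact by definition
    ("lb", "kg"): 0.45359237,   # exact by definition
    ("mi", "km"): 1.609344,     # exact by definition
}

def convert(value: float, src: str, dst: str) -> float:
    """Convert value from src to dst using the factor table, in either direction."""
    if (src, dst) in FACTORS:
        return value * FACTORS[(src, dst)]
    if (dst, src) in FACTORS:
        return value / FACTORS[(dst, src)]
    raise KeyError(f"no conversion factor for {src} -> {dst}")

print(convert(100, "ft", "m"))   # 30.48
print(convert(100, "lb", "kg"))  # 45.359237
```

Same answer every time, no confident lying involved.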
I eat out, and lately I’ve been overhearing people at other tables talking about how they find shit with ChatGPT, and it’s not a good sign.
They’ve stopped doing research the way it’s been done for the last 30 years or so.
I was chatting with some folks the other day and somebody was going on about how they had gotten asymptomatic long-COVID from the vaccine. When asked about her sources her response was that AI had pointed her to studies and you could viscerally feel everybody else’s cringe.
asymptomatic long-COVID
The hell even is that? Asymptomatic means no symptoms. Long-COVID isn’t a contagious thing, it’s literally a description of the symptoms you have from having COVID and the long term effects.
God that makes my freaking blood boil.
Damn @BigBenis@lemmy.world, that was a hell of a conversation you were having.
“Cool, send me the actual studies.”
*crickets*
Assuming this AI shit doesn’t kill us all and we make it to the conclusion that robots writing lies on websites perhaps isn’t the best thing for the internet, there’s gonna be a giant hole of like 10 years where you just shouldn’t trust anything written online. Someone’s gonna make a bespoke search engine that automatically excludes searching for anything from 2023 to 2035.
I can’t really fault them for it tbh. Google has gotten so fucking bad over the last 10 years. Half of the results are just ads that don’t necessarily have anything to do with your search.
Sure, use something else like Duckduckgo, but when you’re already switching, why not switch to something that tends to be right 95% of the time, and where you don’t need to be good at keywords, and can just write a paragraph of text and it’ll figure out what you’re looking for. If you’re actually researching something you’re bound to look at the sources anyway, instead of just what the LLM writes.
The ease of access of LLMs, and the complete and utter enshittification of Google, is why so many people choose an LLM.
I believe DuckDuckGo is just as bad. I think they changed their search to match Google. I’m not sure if you are allowed to exclude search terms, use quotes, etc.
I had a song intermittently stuck in my head for over a decade, couldn’t remember the artist, song name, or any of the lyrics. I only had the genre, language it was in, and a vague, memory-degraded description of a music video. Over the years I’d tried to find it on search engines a bunch of times to no avail, using every prompt I could think of. ChatGPT got it in one. So yeah, it’s very useful for stuff like that. Was a great feeling to scratch that itch after so long. But I wouldn’t trust an LLM with anything important.
LLM are good at certain things, especially involving language (unsurprisingly). They’re tools. They’re not the be-all-end-all like a lot of tech bros proselytize them as, but they are useful if you know their limitations
If you use them properly, they can be a valuable addition to one’s search for information. The problem is that I don’t think most users use them properly.
They’ve stopped doing research the way it’s been done for the last 30 years or so.
Was it really “like that” for any length of time? To me it seems like most people just believed whatever bullshit they saw on Facebook/Twitter/Insta/Reddit, otherwise it wouldn’t make sense to have so many bots pushing political content there. Before the internet it would be reading some random book/magazine you found, and before then it was hearsay from a relative.
I think that the people who did the research will continue doing the research. It doesn’t matter if it’s thru a library, or a search engine, or Wikipedia sources, or AI sources. As long as you know how to read the actual source, compare it with other (probably contradictory) information, and synthesize a conclusion for yourself, you’ll be fine; if you didn’t want to do that it was always easy to stumble upon misinfo or disinfo anyways.
One actual problem that AI might cause is if the actual scientists doing the research start using it without due diligence. People are definitely using LLMs to help them write/structure the papers ¹. This alone would probably be fine, but if they actually use it to “help” with methodology or other content… Then we would indeed be in trouble, given how confidently incorrect LLM output can be.
I think that the people who did the research will continue doing the research.
Yes, but that number is getting smaller. Where I live, most households rarely have a full bookshelf, and instead nearly every member of the family has a “smart” phone; they’ll grab the chance to use anything that would be easier than spending hours going through a lot of books. I do sincerely hope methods of doing good research are still continually being taught, including the ability to distinguish good information from bad.
The internet (via your smartphone) provides you with the ability to find any book, magazine or paper on any subject you want, for free (if you’re willing to sail under the right flag), within seconds. Of course no one has a full bookshelf anymore; the only reason to want physical books nowadays is sentimentality or some very specific old book that hasn’t been digitized yet (but in that case you won’t have it on your bookshelf and will have to go to the library anyway). The fastest and most accurate way of doing research today is getting a gist on Wikipedia, clicking through the source links and reading those, and combing through arXiv and Sci-Hub for anything relevant. If you are unfamiliar with the subject as a whole, you download the relevant book and read it. Of course no one wants to comb through physical books anymore; it’s a complete waste of time (provided, of course, they have been digitized).
(pasting a Mastodon post I wrote few days ago on StackOverflow but IMHO applies to Wikipedia too)
"AI, as in the current LLM hype, is not just pointless but rather harmful epistemologically speaking.
It’s a big word so let me unpack the idea with 1 example :
- StackOverflow, or SO for short.
So SO is cratering in popularity. Maybe it’s related to the LLM craze, maybe not, but in practice fewer and fewer people are using SO.
SO is basically a software developer social network that goes like this :
- hey I have this problem, I tried this and it didn’t work, what can I do?
- well (sometimes condescendingly) it works like this so that worked for me and here is why
then people discuss via comments, answers, votes, etc. until, hopefully, the most appropriate (which does not mean “correct”) answer rises to the top.
The next person with the same, or similar enough, problem gets to try right away what might work.
SO is very efficient in that sense but sometimes the tone itself can be negative, even toxic.
Sometimes the person asking did not bother to search much, sometimes they clearly have no grasp of the problem, so replies can be terse, if not worse.
Yet the content itself is often correct in the sense that it does solve the problem.
So SO in a way is the pinnacle of “technically right” yet being an ass about it.
Meanwhile, what if you could get roughly the same mapping between a problem and its solution but in a nice, even sycophantic, manner?
Of course the switch will happen.
That’s nice, right?.. right?!
It is. For a bit.
It’s actually REALLY nice.
Until the “thing” you “discuss” with has, as its main KPI, keeping you engaged (as its owner gets paid per interaction), regardless of how usable (let’s not even say true or correct) its answer is.
That’s a deep problem because that thing does not learn.
It has no learning capability. It’s not just “a bit slow” or “dumb” but rather it does not learn, at all.
It gets updated with a new dataset, fine-tuned, etc… but there is no action that leads to invalidating a hypothesis, generating a novel one, and then… setting up a safe environment to test it within (that’s basically what learning is).
So… you sit there until the LLM gets updated, but… with what? Now that fewer and fewer people bother updating your source (namely SO), how is your “thing” going to learn, sorry, to get updated, without new contributions?
Now if we step back not at the individual level but at the collective level we can see how short-termist the whole endeavor is.
Yes, it might help some, even a lot, of people to “vile code”, sorry I mean “vibe code”, their way out of a problem, but if:
- they, the individual
- it, the model
- we, society, do not contribute back to the dataset to upgrade from…
well I guess we are going faster right now, for some, but overall we will inexorably slow down.
So yes, epistemologically we are slowing down, if not worse.
Anyway, I’m back on SO, trying to actually understand a problem. Trying to actually learn from my “bad” situation and rather than randomly try the statistically most likely solution, genuinely understand WHY I got there in the first place.
I’ll share my answer back on SO, hoping to help others.
Don’t just “use” a tool, think, genuinely, it’s not just fun, it’s also liberating.
Literally.
Don’t give away your autonomy for a quick fix, you’ll get stuck."
originally on https://mastodon.pirateparty.be/@utopiah/115315866570543792
Most importantly, the pipeline from finding a question on SO that you also have, to answering that question after doing some more research, is now completely derailed: if you ask an AI a question and it doesn’t have a good answer, you have no way to contribute your eventual solution to the problem.
Maybe SO should run everyone’s answers through a LLM and revoke any points a person gets for a condescending answer even if accepted.
Give a warning and suggestions to better meet community guidelines.
It can be very toxic there.
Edit: I love the downvotes here. OP - AI is going to destroy the sources of truth and knowledge, in part because people stopped going to those sources because people were toxic at the sources. People: But I’ll downvote suggestions that could maybe reduce toxicity, while having no actual impact on the answers given.
I guess I’m a bit old school, I still love Wikipedia.
I use Wikipedia when I want to know stuff. I use chatGPT when I need quick information about something that’s not necessarily super critical.
It’s also much better at looking up stuff than Google. Which is amazing, because it’s pretty bad. Google has become absolute garbage.
deleted by creator
To get a decent result on Google, you have to wade through 2 pages of ads, 4 pages of sponsored content, and maybe the first good result is on page 10.
ChatGPT does a good job at filtering most of the bullshit.
I know enough to not just accept any shit from the internet at face value.
deleted by creator
Why the fuck are you defending google so hard lmao.
Google will absolutely put bad information front and center too.
And by using Google you make Google richer. In fact you get served far more ads using Google products than chatGPT.
What’s your fucking point lmao.
Why the fuck are you defending google so hard lmao.
Ah yes, when I said “use a different search engine” as a solution to Google having issues I’m certainly defending Google! What an endorsement right? “Use a completely different service” is free publicity for Google!
Other search engines are even worse than Google lmao. Brave consistently provides literally the worst results. DuckDuckGo, same.
Are you actually serious.
I think you missed a part of their comment:
Block ads and use a different search engine?
Both Ecosia and DuckDuckGo have served me pretty well. Kagi also seems somewhat interesting.
Ecosia is working with Qwant on their own index, the first version of which has already gone online, I believe. So they’re no longer exclusively relying on Bing/Google for their back-end.

I have yet to use an alternate search engine for any length of time (and I’ve tried a few) and think “ah yes, these were the kind of results I expected from my search”; they’re systematically worse than Google, which is an incredible achievement, considering what absolute garbage Google is nowadays.
Brave, which I’m using now, is atrocious with that. The amount of irrelevant bullshit it throws at you before getting to the stuff you are actually looking for is actually incredible.
Yep, that and occasionally Wiktionary, Wikidata, and even RationalWiki.
You’re right bro but I feel comfortable searching the old fashioned way!
Same but with Encyclopedia Britannica
AI will inevitably kill all the sources of actual information. Then all we’re going to be left with is the fuzzy learned version of information plus a heap of hallucinations.
What a time to be alive.
AI just cuts and pastes from websites like Wikipedia. The problem is when it gets information that’s old or from a sketchy source. Hopefully people will still know how to check sources; it should probably be taught in schools. Who’s the author, how old is the article, is it a reputable website, is there a bias? I know I’m missing some pieces.
You replied to OP while somehow missing the entire point of what he said lol
To be fair, I didn’t read the article
That’s not ‘being fair’ that’s just you admitting you’d rather hear your own blathering voice than do any real work.
To be fair calling that work is stretching it
Much of the time, AI paraphrases, because it is generating plausible sentences not quoting factual material. Rarely do I see direct quotes that don’t involve some form of editorialising or restating of information, but perhaps I’m just not asking those sorts of questions much.
Man, we hardly did that shit 20 years ago. Ain’t no way the kids doing that now.
At best they’ll probably prompt AI into validating if the text is legit
I’ve been meaning to donate to those guys.
I use their site frequently. I love it, and it can’t be cheap to keep that stuff online.
“With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work.”
I understand the donors aspect, but I don’t think anyone who is satisfied with AI slop would bother to improve wiki articles anyway.
The idea that there’s a certain type of person that’s immune to a social tide is not very sound, in my opinion. If more people use genAI, they may teach people who could have been editors in later years to use genAI instead.
That’s a good point, scary to think that there are people growing up now for whom LLMs are the default way of accessing knowledge.
Eh, people said the exact same thing about Wikipedia in the early 2000’s. A group of randos on the internet is going to “crowd source” truth? Absurd! And the answer to that was always, “You can check the source to make sure it says what they say it says.” If you’re still checking Wikipedia sources, then you’re going to check the sources AI provides as well. All that changes about the process is how you get the list of primary sources. I don’t mind AI as a method of finding sources.
The greater issue is that people rarely check primary sources. And even when they do, the general level of education needed to read and understand those sources is a somewhat high bar. And the even greater issue is that AI-generated half-truths are currently mucking up primary sources. Add to that intentional falsehoods from governments and corporations, and it already seems significantly more difficult to get to the real data on anything post-2020.
But Wikipedia actually is crowd sourced data verification. Every AI prompt response is made up on the fly and there’s no way to audit what other people are seeing for accuracy.
Hey! An excuse to quote my namesake.
Hackworth got all the news that was appropriate to his situation in life, plus a few optional services: the latest from his favorite cartoonists and columnists around the world; the clippings on various peculiar crackpot subjects forwarded to him by his father […] A gentleman of higher rank and more far-reaching responsibilities would probably get different information written in a different way, and the top stratum of New Chusan actually got the Times on paper, printed out by a big antique press […] Now nanotechnology had made nearly anything possible, and so the cultural role in deciding what should be done with it had become far more important than imagining what could be done with it. One of the insights of the Victorian Revival was that it was not necessarily a good thing for everyone to read a completely different newspaper in the morning; so the higher one rose in society, the more similar one’s Times became to one’s peers’. - The Diamond Age by Neal Stephenson (1995)
That is to say, I agree that everyone getting different answers is an issue, and it’s been a growing problem for decades. AI’s turbo-charged it, for sure. If I want, I can just have it yes-man me all day long.
Not me. I value Wikipedia content over AI slop.
Alternative for DuckDuckGo:
https://noai.duckduckgo.com/?q=%25s
Edit: Lemmy/Voyager formats this string with 25 at the end. Remove the 25 & save it as a browser search engine
Using backticks can help
https://noai.duckduckgo.com/?q

edit: How odd, the equal sign disappears
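For what it’s worth, all a browser does with that saved template is percent-encode your query and drop it in where the %s was. A small Python illustration using the standard library’s urlencode (the example query string is just made up):

```python
from urllib.parse import urlencode

# The no-AI front-end from the comment above.
base = "https://noai.duckduckgo.com/"
query = "wikipedia traffic decline"

# urlencode percent-encodes the query and builds the q= parameter,
# which is essentially what the browser does when filling in the template.
url = base + "?" + urlencode({"q": query})
print(url)  # https://noai.duckduckgo.com/?q=wikipedia+traffic+decline
```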
I asked a chatbot scenarios for AI wiping out humanity and the most believable one is where it makes humans so dependent and infantilized on it that we just eventually stop reproducing and die out.
So we get the Wall-e future…
Mudd explains that he broke out of prison, stole a spaceship, crashed on this planet, and was taken in by the androids. He says they are accommodating, but refuse to let him go unless he provides them with other humans to serve and study. Mudd informs Kirk that he and his crew are to serve this purpose and can expect to spend the rest of their lives there.
Tbh, I’d say that’s not a bad scenario all in all, and much more preferably than scenarios with world war, epidemics, starvation etc.
Because people are just reading AI-summarized explanations of their searches, many of which are derived from blogs and can’t be verified against an official source.
Or the ai search just rips off Wikipedia.
What?!? That’s crazy!!!

I’m surprised no-one has asked an LLM to produce a plausible version and just released that, claiming it’s a leak.
That is too bad. Wikipedia is important.
all websites should block ai and bot traffic on principle.
The problem is many no longer identify as bots and come from hundreds if not thousands of IPs.
Voight-Kampff them.
all websites should block ai and bot traffic on principle.
Increasing numbers do.
But there is no proof that the LLM trawling bots are willing to respect those blocks.
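For what it’s worth, about the only blocking most sites can do themselves is robots.txt rules (which polite crawlers may ignore) plus user-agent filtering along these lines. A rough Python sketch; the crawler tokens are the publicly documented ones, and is_ai_crawler() is just an illustrative helper, not any server’s real API:

```python
# Crude server-side filter: flag requests whose User-Agent contains a
# known AI-crawler token. Bots that spoof a normal browser UA, or crawl
# from thousands of residential IPs, slip right past this.
KNOWN_AI_CRAWLERS = (
    "GPTBot",         # OpenAI
    "ClaudeBot",      # Anthropic
    "CCBot",          # Common Crawl
    "PerplexityBot",  # Perplexity
    "Bytespider",     # ByteDance
)

def is_ai_crawler(user_agent: str) -> bool:
    """Return True if the User-Agent header matches a known AI-crawler token."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in KNOWN_AI_CRAWLERS)

print(is_ai_crawler("Mozilla/5.0 (compatible; GPTBot/1.2; +https://openai.com/gptbot)"))  # True
print(is_ai_crawler("Mozilla/5.0 (X11; Linux x86_64; rv:131.0) Gecko/20100101 Firefox/131.0"))  # False
```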
FWIW:
Wikipedia:Bot policy#Bot requirements
https://en.wikipedia.org/wiki/Wikipedia:Bot_policy#Bot_requirements
RationalWiki:Bots