
This year at Google I/O 2025, a new mode of search was announced: AI Mode. The idea behind it is simple: Google is going to add an AI-generated answer to search queries at the top of the results page so that users don’t have to click through websites to try to find the answer or information they are looking for.
This seems to be the general direction where search engines are heading. Google isn’t alone in its approach. Even DuckDuckGo has added AI to the top of its search results.
I am very torn on this. There are both positives and negatives to this approach and I can see it from both perspectives.
Positives
We’ll start with the positive aspects. This is actually a great feature for users. It should, in theory, save them time so that they can immediately get what they are looking for without having to manually click on links, close cookie banners, close newsletter modals, close chatbots and finally comb through the ad-infested content of a website just to realize the information they want isn’t there and they have to repeat the process on the next website.
The amount of enshittification that has occurred on so many websites in the name of marketing or “helping the user” is astounding and leads to a terrible user experience. That’s not even mentioning all of the keyword-driven SEO content written purely so that a website ranks in the search results without providing much real information.
In theory, AI answers should enable the user to skip all of this. It will have done the job of combing through websites’ contents for you and provide you with a nice, neat summary of the information you’re looking for. It’s a win for the user — in theory, if it works properly.
Negatives
As great as all that sounds, there is a nefarious side to it as well. We’ll start with the fact that AI’s reliability is currently abysmal: it frequently hallucinates information that is flat-out wrong and could be actively harmful to the user.
Imagine a user searching for the temperature considered to be a high fever for a baby. Instead of visiting an accredited source such as the UK’s NHS, which says a high fever is 38°C or above, the AI tells them a high fever is 40°C. That could have fatal consequences.
In essence, as AI stands now, users have to double-check the answers it provides, which defeats the entire purpose. They might as well just click through the websites themselves and save the extra step of reading the AI summary.
Part of the problem is that today’s AI, built on large language models (LLMs), can only regurgitate what those models have been trained on. As we all know, not every website on the internet is reliable or accurate, since anyone can write about anything whether they are an expert or not.
While not all AI hallucinations are a result of bad training data, there is so much inaccurate data on the internet that the models are bound to be full of it. This is unavoidable and makes the reliability of its summaries questionable at best.
That isn’t the only issue at play either. As a person who keeps multiple blogs, it is likely that I, like all other website owners, will see a significant drop in traffic. I write on my blogs for fun, not for profit, so the direct impact on me will be minimal. The real problem is for websites that rely on traffic to earn money. News organizations and commercial blogs will likely be the most heavily impacted by AI summaries, since they generally rely on ad revenue or subscriptions, both of which require users to actually visit the organization’s website for it to earn money.
This leads to a break in the current paradigm of how the internet works. To simplify it: a user searches for something and visits a website. Both the search engine and the website have now earned ad revenue. It’s a win-win. AI summaries have the potential to disrupt this as the user will stay on the search engine’s website and never visit the website with the content. The search engine therefore earns all the profit despite having used the other website’s content to train its models.
This is not only unfair to the producers of the content, it threatens the very production of content. If organizations can no longer earn a profit from producing content, they will go out of business and there won’t be any new content. That is a lose-lose for both the organizations and the search engines.
It seems awfully short-sighted on the part of the search engines to kill off the very content they rely on to train their models. In the end, everyone loses.
That isn’t even to mention the psychological impact on people like me who keep websites for fun. Essentially, I write free content for AI bots to train on. It’s free labor that these companies will generate revenue from: I don’t get paid, but I put in the work and they reap the profit. As you can imagine, I resent that.
Conclusion
As you may be able to tell, for me the negatives still far outweigh the positives. While I am torn, in that I think it would be wonderful to use AI summaries as a user, the unreliability and the potential impact on content creators are too great an issue to simply ignore. I can’t, in good conscience, use them exclusively.
That said, I’ve found AI can often get you pointed in the right direction. That is especially true if the information you’re looking for is obscure or you aren’t versed enough in the subject matter to formulate a decent query. I call it a hybrid approach: I use AI summaries to refine my query, then, once I’m confident I’m heading in the right direction, I start looking through websites to verify the answer the AI gave. Of course, that method is only really tenable for queries where you aren’t entirely sure how to phrase what you’re looking for; it’s too complex and unnecessary for simple searches.
I’m neither a doomsayer nor an AI enthusiast. It’s just another tool in the toolbox, and you have to figure out how to use it best for your purposes. I can certainly see the potential AI has to benefit users, but we have to remain wary of its reliability and its impact, and enjoy it with caution.