Search Startup Kagi Wants to Humanize the Web Using “Artificial Intelligence”

Why does Kagi’s founder repeat the lie that an information business can be apolitical? How can you humanize the web by coding algorithms to rewrite it? Make it make sense, because I need Kagi’s valuable search service to not suck

Caroline Delbert
7 min read · Apr 3, 2024

This piece is not about your personal search choices — I’m happy for you to use whatever you’re happy with, and no one needs to have discourse about it. I’m a paying Kagi user, and I have no immediate plans to change that.

Update on April 12: I’ve learned something puzzling that summarizes the situation well. Kagi founder Vlad Prelovac mines only three-star product reviews because he insists these are “unbiased.” This man has no understanding of data sampling, human behavior, or even the definitions of the words he’s using to dictate the course of his product.

I pulled three Amazon.com products I’ve ordered recently. Of all the reviews for each, ranging from about 100 to several thousand, the vast, vast majority are four- and five-star reviews. Only 8%, 6%, and 5% were three-star reviews. There’s no definition by which a slice of data this small can be called unbiased.
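For the arithmetic, here’s a minimal sketch in Python using hypothetical rating counts shaped like the J-curve distributions I saw. The product names and counts are invented for illustration; the point is how much data a three-star-only policy throws away.

```python
# Hypothetical star-rating counts for three products, shaped like the
# J-curve typical of Amazon reviews (most ratings are 4 and 5 stars).
products = {
    "product_a": {1: 6, 2: 4, 3: 10, 4: 25, 5: 80},
    "product_b": {1: 30, 2: 20, 3: 60, 4: 190, 5: 700},
    "product_c": {1: 100, 2: 50, 3: 150, 4: 700, 5: 2000},
}

for name, counts in products.items():
    total = sum(counts.values())
    kept = counts[3] / total
    print(f"{name}: {total} reviews, "
          f"{kept:.0%} kept as 'unbiased', {1 - kept:.0%} discarded")
```

Filtering to the three-star slice doesn’t remove bias; it trades the crowd’s skew for the selection bias of whoever bothers to write a lukewarm review.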

It’s a grim time to be a searcher. Over the last 25 years, Google created the paradigm that the entire internet hews to, then started to turn the screws: more and more intrusive technology, less and less effective searching, more and more side Google products of wildly varying quality. More ads. More sponsored results. More side widgets. More shopping results. More SEO-gaming content mills in the top results. JC Penney even got deindexed in 2011 for using black hat SEO. Google shaped many decisions companies made on the English-language internet.

Many of my friends and colleagues who heavily rely on searching — I’m a science journalist — agree that Google’s search results have gotten noticeably worse over time. Google started to assume it knew what I was really looking for, in spite of what I typed in the search box. My searches often turned up top results that didn’t even include the words in my search term. Google overlaid yellow highlighting onto other people’s websites and appended long, clumsy strings to their URLs.

Today, it’s almost unfathomable to untangle yourself from the Google ecosystem. But when Google’s search became more frustrating and unusable for me than not, I started to look around. I tried DuckDuckGo, a reskinning of Bing in a privacy costume. It just didn’t give me very good results, ever, and I found myself regularly opening a Google tab to close the gap. Then I learned about Kagi.

Kagi was founded in 2018 by Vladimir Prelovac, a web industry veteran who was at GoDaddy just prior. “I founded Kagi to create a novel search engine and web browser. I am dedicating this work to my three children, trying to make the web more humane and friendlier for them,” his website says. His Twitter bio reads, “Humanizing the web.”

Kagi has attracted users like me by offering a stripped-down search experience that looks like vintage Google. There are no ads, ever. User data is not saved. The site lets you blacklist domains from your results. I’ve been very happy with the user experience and search results on Kagi, after taking an initial pass through the settings and switching off some of the Google-inspired bells and whistles, like AI-powered preview clips that — just like Google’s — were notably giving me incorrect information. I pay $10 a month for unlimited Kagi searches, and I use it a lot.
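As a sketch of what that blocklist feature amounts to conceptually (this is not Kagi’s actual implementation, and the blocked domains here are hypothetical):

```python
from urllib.parse import urlparse

# Illustrative only: a per-user domain blocklist, not Kagi's real code.
BLOCKED = {"pinterest.com", "quora.com"}  # hypothetical user choices

def visible(result_url: str) -> bool:
    host = urlparse(result_url).hostname or ""
    # Hide the domain itself and any of its subdomains.
    return not any(host == d or host.endswith("." + d) for d in BLOCKED)

results = [
    "https://example.com/article",
    "https://www.pinterest.com/pin/123",
]
print([u for u in results if visible(u)])  # the Pinterest hit is dropped
```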

I’ve done 12 more Kagi searches since I took this screenshot 90 minutes ago.

There have been a few puzzling, troubling decisions during Kagi’s year or so on my radar. They fall into two major areas. First, founder Prelovac loves AI. I found out he originally wanted those terrible AI preview widgets shipped without a setting to switch them off. That might have turned me off Kagi completely. And second, Prelovac believes in being “apolitical.” He’s even said he believes public moral “politics” have hindered innovation, a claim he later walked back.

Like many Bay Area tech founders, Prelovac loves to claim that things should be “unbiased.” In the Kagi Discord server, he separates news stories into “constructive” or not — another key term. And he links to a 2013 Guardian editorial by Swiss entrepreneur Rolf Dobelli, himself a tech founder, about how reading the news is bad and you should not do it.

Dobelli’s piece is very specific in its scope: he hates “factoids” and what sounds like quick-hit news stories. The thing is, Dobelli doesn’t even really hate the news. He hates so-called “news junkies” who spend all day repeating the same inflammatory news headlines into a mirrorbox — a distaste I largely share. (Anecdotally, this is one of the big things people don’t miss after leaving Twitter.) He hates unethical journalism that implies false causality, or a neat narrative for the sake of clicks. And Dobelli concludes this way:

“Society needs journalism — but in a different way. Investigative journalism is always relevant. We need reporting that polices our institutions and uncovers truth. But important findings don’t have to arrive in the form of news. Long journal articles and in-depth books are good, too.” It sounds like Dobelli wants even more nuance, not less.

But Prelovac has additionally boiled this philosophy down to “constructive” versus “destructive” news, and decided that news should be “apolitical.” That’s fundamentally different. He suggested in the Kagi Discord, “But news should not only be about politics?” A user replied, “All things are political in nature.” In response, Prelovac said, “A coffee shop opening in your town?” (The user quickly supplied a lot of ways a coffee shop includes political ideas, from the oblique — fair trade coffee? Employee pay? — to the overt — the legal regulations that dictate what is built where.)

Even when Dobelli mentions bias, he’s talking about cognitive biases that apply to everyone based on the nature of the human brain — not the political sense of the word as Prelovac uses it. “Real balanced news would not discriminate between left and right, but between constructive and destructive,” Prelovac said in the Kagi Discord.

Prelovac is in the information business, and his decisions determine what his users find online. He’s very interested in “artificial intelligence,” a buzzword that these days usually refers to large language models (LLMs), a form of machine learning. These technologies are not intelligent in anything close to the human sense, but they mimic human language in order to be more palatable to users. They’re trained on data — often, user data sold by the very services Kagi is positioning itself against.

I will only say that it’s absurd to believe the mass-market LLMs Prelovac likes to tinker with are not biased or political. LLMs have propagated racist pseudoscience since their inception. Chatbots have turned racist very, very fast by mining the public internet they’re trained on. They pass racist judgment based on dialect. The peer-reviewed studies and, yes, news stories about this go on and on — you can look for yourself.

It does not make sense, intuitively or empirically, to believe that a global internet built by societies with huge biases based on race, gender, socioeconomic class, and more would somehow magically become unbiased because it’s been run through an additional layer of computer code. The people who build these systems — and all people are biased, to be clear — will always have to consciously find and account for those biases. And that work doesn’t happen in the ways Vlad Prelovac seems to suggest.

In fact, I find it hard to believe that someone like Prelovac really thinks that at all. I think he’s just repeating words like “unbiased” and “apolitical” as classic buzzwords of the right wing, especially in the tech sector. And he’s not alone. In a new preprint paper — meaning it isn’t yet peer reviewed — researchers found that “human intervention” in LLM results just made racism put on a less conspicuous outfit.

MIT Technology Review reported on the study. Previously, the five LLMs in the study used “suspicious,” “radical,” and “aggressive” more often when asked to describe samples of African-American Vernacular English (AAVE). Now, they’re more likely to use “dirty,” “lazy,” and “stupid.” And that’s after ten years of continuous effort by technical workers, whose attempts to course-correct overt racism and bigotry are called “alignment.”

All of this work — research studies, publications, and news coverage by outlets like the Verge and MIT Technology Review — falls squarely into the categories Rolf Dobelli said are valuable in the world. In Prelovac’s dichotomy of constructive and destructive, this is constructive. It surfaces a very troubling problem: technology that replicates the biases our society shows against marginalized groups.

Let’s go back to Vlad Prelovac’s tentpole statements about Kagi. He wants a humane internet. He wants to humanize the internet. These are such interesting word choices, because humane means compassionate and concerned with the pain of another. To humanize something is to give it the quality of being human or humane. Neither word carries a definition of defanging or bowdlerizing the internet.

All I want is a working search engine that lets me use boolean operators. And to be honest, Prelovac’s libertarian interference and devotion to LLMs are both huge turnoffs. He has chosen to invest in bigoted search products, and he has argued with customers who don’t like that or his spaghetti-throwing founder-brained ideas. Can’t we “take [Vlad’s] politics out” of Kagi, and just serve up the search results? As an interested, paying user — that sounds a lot more humane to me.

I have done 18 more Kagi searches, with the expected high-quality results, since annotating the image above.

Editing to add: I forgot that one of Vlad Prelovac’s most insistent points is that his search engine, unlike all the major search engines, will never respond to searches about suicide with a banner encouraging people in crisis to call a hotline. This kind of intervention works. Vlad has been arguing against it with users for nearly two years. Is this the humane internet?


Caroline Delbert

I'm a contributing editor at Popular Mechanics and an avid reader. Bylines at the Awl, Eater, GamesIndustry.biz, Scientific American, Unwinnable, and more.