We’ve Learned More About How to Appear in AI Answers

Thanks to some recent experiments and research in the marketing world (and an unfortunate, manipulative attack by a Reddit mod), we’ve learned a lot more about how to influence the answers AI tools provide. This is particularly relevant for those seeking to have their brand names (or websites) appear when ChatGPT, Copilot, Claude, Gemini, or others are asked for “the best product/service/provider” in a given space.

When those types of answers are sought, LLMs show a strong bias toward brands that frequently appear in documents across the web (even if those mentions aren’t on particularly relevant or trustworthy sources), that are referenced in recent documents (even if those documents carry false, post-hoc manipulated dates), and that have positive mentions specifically on Reddit and YouTube (both of which still seem to hold particular sway over AI answers).

If this sounds like a recipe for using spam techniques to change AI answers fast, well, my friend, you’re paying attention. As Wil Reynolds showed, a lot of SEO agencies are already using these tactics for themselves.

Transcript:

I think we’re learning a lot more about how to influence LLMs.

And to be honest, it’s pretty easy. My buddy Ross Hudgens, who’s a SparkToro investor (thank you, Ross), posted today on LinkedIn (I’ll link to it in the comments) about this idea of putting “best X software,” “best X agency,” best-X-in-your-product-category articles up all over the web: on your own website, and maybe on Substack and social media platforms, and how this works. And I replied and said, hey, I’ve noticed that some people are spamming these, essentially creating relatively low-quality articles, putting them everywhere, placing themselves at the top of the list, and seeing results in ChatGPT, in Perplexity, in Claude.

And Nick, who did the study with him, said that’s exactly what he’s seeing too. Then we saw this today from Chris Long: researchers put a fake publication date into their content and saw massively improved AI visibility. Essentially, they jumped hundreds of positions in what the LLM would show once they claimed the publication date was very recent. So it appears that, unlike Google, these LLMs don’t have the spam- and manipulation-detection signals to figure out when they’re being manipulated.
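For context, here’s a minimal, hypothetical sketch of what “a fake publication date in the content” can look like mechanically. It assumes the date lives in schema.org Article markup (JSON-LD), which is one common place a page declares its publication date; the transcript doesn’t specify exactly which on-page date the researchers changed, and the headline and dates below are invented for illustration (of the signal, not a recommendation to abuse it).

```python
# Hypothetical sketch of the date signal described above; not the study's code.
# Assumes the publication date is declared via schema.org Article JSON-LD,
# which is one common (but not the only) place crawlers read dates from.
import json
from datetime import date, timedelta

article_markup = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Best Audience Research Tools",  # made-up page for illustration
    "datePublished": "2022-03-14",               # the original, older date
    "dateModified": "2022-03-14",
}

# The manipulation the researchers reportedly tested: claim a very recent date.
recent = (date.today() - timedelta(days=2)).isoformat()
article_markup["datePublished"] = recent
article_markup["dateModified"] = recent

# The JSON-LD block a CMS would place in the page's <head>.
print('<script type="application/ld+json">')
print(json.dumps(article_markup, indent=2))
print("</script>")
```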

Now, what’s interesting is, you know, Rich Tatum here, whose comment caught my attention, says, hey, I don’t think this is how marketers should be behaving; we shouldn’t be abusing these signals. To which my response is: these AI tools, Google itself, the companies that own them, the investors, they all kind of suck. They have done highly unethical things.

Look, the world of AI art is completely unethical.

Why are we supposed to hold ourselves to standards that the AI tools don’t hold themselves to? I just wanna point out one more example that really drove this home for me: the story of CodeSmith. I recommend you read Lars Lofgren’s piece on it.

He looked into this and did some research on CodeSmith, a coding bootcamp startup that basically got destroyed, absolutely destroyed, by these Reddit comments. Can you see this here? I’ll blow it up. This is right on the first page of Google.

It also shows up in LLMs (Reddit is very popular with them), and this sort of campaign really destroyed CodeSmith. So you look at these and think, oh, those must be authentic. They’re not. The r/codingbootcamp subreddit is run by a moderator who is the co-founder of one of their competitors.

And Lars dug into the fact that this guy, Michael Novati, has been sending out a negative post a day on Reddit, on average, for five hundred days over the last two years. And you can see, right? He’s the co-founder of one of their competitors, and he just used his Reddit powers to destroy them, absolutely destroy them, across all platforms.

It’s horrifying. It cost them eighty percent of their revenue; dozens of jobs were lost. It’s a terrible thing, but it makes you realize how these systems work, and how quickly you can influence not just the AI tools but Google, Reddit, and popular opinion online, with accurate or inaccurate opinions, with spam or with real content.

I obviously am not gonna be spamming. I don’t find value in this, but you know what?

I also have it much easier because I can do this authentically. For example, if you search for “Rand Fishkin bio”, I don’t know if I’ve shown you this before, folks, but if you look right here, I take my bio and change the words and phrases used in it. So I say “SparkToro, makers of fine audience research software,” because that bio gets picked up on every YouTube video I do, every webinar, every conference that invites me, every podcast I speak on; they take that little snippet and they publish it. And so I can make sure that “makers of fine audience research software” is all over the web, which means that, you know, we rank very highly in LLMs and Google for this stuff.

I changed it up from “audience research tools” a little while ago because that was another thing people were looking for. So I don’t know if that’s spam or not, but hey, it works. This is how these systems work. And I would urge you to check these out.

I’ll link to all of these in the comments. I think they’re all worth reading and they will all give you a better understanding of how to appear in LLMs and how to appear prominently.