How AI Assistants Recommend Software: What We Learned from 42,000 Responses
Between January and February 2026, we ran 42,131 queries across ChatGPT, Gemini, and Perplexity for 64 B2B software brands. Each brand received 50 to 100 strategically designed queries — the same questions a potential buyer would ask an AI assistant when evaluating software in that brand's category. We collected every response, extracted every citation URL, tagged every brand mention, and scored every recommendation.
This is the largest structured analysis of how AI assistants recommend B2B software that we are aware of. The findings challenge several assumptions about what drives AI visibility — and reveal a new competitive landscape that most brands are not yet prepared for.
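For concreteness, here is a minimal sketch of what each scored response in a pipeline like this could reduce to. The field names are illustrative, not the actual schema — only brand_mentioned appears verbatim in the data excerpts later in this article.

```python
from dataclasses import dataclass, field

@dataclass
class ScoredResponse:
    """One AI response, reduced to the fields this analysis relies on.

    Field names are illustrative, not the pipeline's actual schema.
    """
    provider: str                     # "chatgpt", "gemini", or "perplexity"
    brand: str                        # the brand the query was designed around
    query: str                        # the buyer-style question asked
    citations: list[str] = field(default_factory=list)  # extracted citation URLs
    brand_mentioned: bool = False     # is the target brand named in the response?
    recommendation: str = "none"      # "none", "weak", or "strong"

# A hypothetical record:
example = ScoredResponse(
    provider="perplexity",
    brand="Make",
    query="What is the best workflow automation platform for small teams?",
    citations=["https://example.com/best-automation-tools"],
    brand_mentioned=True,
    recommendation="strong",
)
```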
Provider Differences Are Dramatic
The single most important finding from this research is that different AI providers behave in fundamentally different ways. A brand's visibility on ChatGPT can be completely different from its visibility on Gemini or Perplexity — and the sources each provider relies on are distinct.
OpenAI's ChatGPT draws heavily from Wikipedia and institutional sources. It mentions brands conservatively, but when it does mention a brand, it tends to cite the brand's own domain. Google's Gemini leans on G2 and Capterra review aggregators — third-party review platforms carry outsized weight in Gemini's recommendations. Perplexity is the most opinionated of the three, with the highest winner identification rate (63.1%), and it surfaces niche blogs that neither ChatGPT nor Gemini cites.
Over 25 brands in our dataset had provider mention-rate spreads exceeding 10 percentage points. For these brands, a single "AI visibility score" averaged across all providers is meaningless. The brand could be dominant on one provider and invisible on another.
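A per-provider spread like this is straightforward to compute from records of the shape sketched above; the function below is an illustration, not our production code.

```python
from collections import defaultdict

def mention_rate_spread(responses):
    """Gap between a brand's best and worst per-provider mention rates,
    in percentage points, given an iterable of ScoredResponse records."""
    mentioned, total = defaultdict(int), defaultdict(int)
    for r in responses:
        total[r.provider] += 1
        mentioned[r.provider] += int(r.brand_mentioned)
    rates = [100.0 * mentioned[p] / total[p] for p in total]
    return max(rates) - min(rates)
```

A brand mentioned in, say, 40% of its Perplexity responses but only 25% of its ChatGPT responses has a 15-point spread and falls into the group described above.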
The citation hierarchies tell the same story. Wikipedia is OpenAI's favourite source, cited 3,568 times — more than any individual brand domain. G2 is Gemini's top non-video source at 1,980 citations. Perplexity cites YouTube 5,784 times. Each provider has its own trusted source ecosystem, and brands need provider-specific strategies to achieve visibility across all three.
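Source hierarchies like these fall out of a simple per-provider domain tally. A sketch, again assuming the record shape above:

```python
from collections import Counter
from urllib.parse import urlparse

def top_sources(responses, provider, k=5):
    """Most frequently cited domains for one provider."""
    domains = Counter(
        urlparse(url).netloc
        for r in responses
        if r.provider == provider
        for url in r.citations
    )
    return domains.most_common(k)  # e.g. [("en.wikipedia.org", 3568), ...]
```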
The Recommendation Gap: AI Does Not Hedge
Traditional search results present ten blue links and let users choose. AI assistants make a recommendation — or they do not. Our data shows that AI recommendations are binary, not graduated.
Across all 42,131 responses, 48.1% contained no recommendation at all. The AI listed options, described the category, and moved on without endorsing any specific product. Another 27.1% contained a strong, clear recommendation — the AI identified a winner and explained why. Only 2.7% of responses gave a weak or hedged recommendation.
This means brands are either "the answer" or they are background context. There is almost no middle ground. The challenge for AI visibility is not just getting mentioned — it is crossing the threshold from "listed as an option" to "recommended as the solution."
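The bucketing behind these figures reduces to a simple tally. A minimal sketch, reusing the ScoredResponse shape from above:

```python
from collections import Counter

def recommendation_shares(responses):
    """Percentage of responses in each recommendation bucket."""
    counts = Counter(r.recommendation for r in responses)
    n = len(responses)
    return {bucket: round(100.0 * c / n, 1) for bucket, c in counts.items()}

# On the full dataset the headline buckets were:
# "none" 48.1, "strong" 27.1, "weak" 2.7 (percent of all responses).
```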
What Actually Gets Cited
We extracted 2,636 valid citation URLs from the dataset — the actual web pages that AI assistants referenced when constructing their responses. The page types that get cited are not what most marketers expect.
Pricing pages
This was one of the most surprising findings. Pricing pages are consistently cited across multiple brands and providers. Make's pricing page was cited 8 times. Linear, Squaretalk, and Findymail pricing pages were all cited. AI assistants love structured comparison data: tiers, features, prices. Substantive pricing pages with 1,000 or more words and detailed feature comparison tables get cited. "Contact sales" gated pricing pages get nothing.
Comparison and alternatives posts
These are the single most validated content strategy for driving citations in our dataset. Findymail's alternatives posts (covering Apollo, Instantly, RocketReach, and UpLead alternatives) were all cited by Perplexity. Lusha's alternatives directory was cited extensively. These pages work because they directly answer the questions people ask AI assistants: "What are alternatives to [product]?"
Developer documentation and help content
Developer docs and API documentation are cited for technical queries across multiple brands including Ceipal, Nextiva, Slack, and ThriveDesk. Help documentation provides the specific, question-answering content that AI assistants need to construct useful responses.
Community content
Make's community forum is a citation goldmine — over 30 unique community forum URLs were cited across the dataset. User-generated content in forums provides specific, real-world answers to questions that AI assistants surface directly. Most brands do not have active community forums, making this a significant differentiator for those that do.
The Competitive Content Threat
Across our dataset, 68.8% of AI responses contained competitor mentions. That number alone is striking, but the real story is about which competitor content gets cited — and what happens when it does.
Zapier's single blog post "best enterprise integration platforms" was cited 29 times across Make's AI queries — accounting for 39% of all competitive citations for Make. One piece of content from one competitor shaped nearly 40% of the competitive narrative.
When competitor comparison content is cited as a source, the effect on weak brands is devastating: complete erasure. In our data, weak brands showed brand_mentioned = false across every instance where competitor comparison content was the cited source. The competitor's content controlled the narrative entirely.
Strong brands, however, survive. Make was still mentioned positively even when Zapier's "make-alternatives" page or IFTTT's "ifttt-vs-make" page was the cited source. Brand strength acts as a defence against competitive content — but only if you have it.
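The underlying check is simple: condition a brand's mention rate on whether a known competitor comparison page appears among the citations. A sketch, where competitor_pages is a hypothetical hand-maintained set of URLs rather than part of our actual scoring code:

```python
def mention_rate_given_competitor_citation(responses, competitor_pages):
    """Of the responses that cite a known competitor comparison page,
    what fraction still mention the target brand?"""
    hits = [r for r in responses
            if any(url in competitor_pages for url in r.citations)]
    if not hits:
        return None  # no responses cited competitor content
    return sum(r.brand_mentioned for r in hits) / len(hits)

# For weak brands in our data this rate was 0.0: brand_mentioned was
# false in every response where competitor comparison content was cited.
```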
What Predicts Visibility (And What Does Not)
Strong predictors of AI visibility
Content depth per page is the single strongest signal. This is not about total site word count — it is about individual page depth. Dense pages with structured explanations, comparisons, and specifics give AI assistants something to work with. Thin pages with bullet points do not.
Clear, focused positioning matters. Brands that try to serve every market segment score lower. Brands with a clear ICP, defined use cases, and a narrow product category score higher. AI assistants need to categorise and recommend — ambiguity gets brands excluded.
Comparison content has direct citation impact. When a brand's own comparison page is cited, that brand is mentioned 100% of the time with neutral-to-positive sentiment. Substantive pricing pages drive citations. Help documentation and developer docs get cited for technical queries.
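As a rough proxy for the content-depth signal, one can correlate per-page word count with whether the page was ever cited. The sketch below uses a Pearson correlation against a binary outcome (a point-biserial correlation); it illustrates the direction of the analysis, not the study's actual model.

```python
from statistics import correlation  # requires Python 3.10+

def depth_citation_correlation(pages):
    """Correlation between page word count and citation status.

    `pages` is a list of (word_count, was_cited) pairs, where
    was_cited is a bool.
    """
    words = [float(w) for w, _ in pages]
    cited = [1.0 if c else 0.0 for _, c in pages]
    return correlation(words, cited)
```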
Weak or negative predictors
Trust badges and social proof indicators showed a negative correlation with visibility. Not because they actively hurt — but because lower-visibility brands over-invest in them to compensate. The presence of many G2 badges, Capterra badges, and trust seals is a symptom of struggling for credibility, not a cause of visibility.
Schema markup and technical SEO signals showed weak correlation. AI assistants do not crawl schema.org markup the way search engine spiders do. Having perfect technical SEO has minimal impact on whether an AI recommends you.
Blog posting frequency showed a weak signal. Volume matters far less than depth and topic selection. A brand with 10 deep, well-structured posts on relevant comparison topics outperforms one with 200 shallow posts.
AI Visibility Is a New Discipline
The data makes one thing clear: AI visibility is not a subset of traditional SEO. The strategies, metrics, and sources of influence are fundamentally different. Schema markup does not help. Blog frequency does not help. Trust badges do not help. What helps is content depth, clear positioning, comparison content, and substantive pricing pages.
The organic window for establishing AI visibility is narrowing. OpenAI has reportedly signalled plans for paid advertising within AI responses by late 2026. The brands that build strong organic visibility now will have a structural advantage. The brands that wait will face a paid landscape where establishing organic presence becomes significantly harder.
Every month of historical AI visibility data becomes irreplaceable once paid placements launch. The baseline you build now is the baseline you will be measured against. The time to start tracking and optimising AI visibility is before the landscape changes — not after.
Want to see your brand's AI visibility score?
Citaition tracks how ChatGPT, Gemini, and Perplexity recommend your brand — with provider-specific breakdowns, competitive intelligence, and actionable recommendations.
Start Free Trial