I Asked Four AI Search Tools the Same Six Project Management Questions. Here's What Actually Happened.
Shopping for project management software has become its own content factory. Ask the web, and now AI search, for the "best" option and you get a parade of the same names: monday.com, Asana, ClickUp, Trello, Jira, Wrike, Notion, maybe Zoho Projects or Smartsheet if the crowd is large enough. I wanted to see what actually changes when those questions go through different AI systems, so I ran six common buyer queries across ChatGPT, Perplexity, Gemini, and Google AI Mode on a single afternoon and coded every recommendation and every citation.
The shortlist was more stable than I expected; the platforms were more different than I thought possible. Both those things are true, and they're what the rest of this post is about.
The same handful of brands show up everywhere
Across 23 valid responses, three brands dominated: Asana hit 78.3%, monday.com 69.6%, and ClickUp 65.2%. If you sell project management software and you're not surfacing in at least two of those slots somewhere in the AI search surface, you're not in the conversation. Wrike at 56.5% is in the conversation but not at the top. Jira at 43.5% is category-specific muscle (dev teams carry most of that).
| Brand | Mentions | Share |
|---|---|---|
| Asana | 18 / 23 | 78.3% |
| monday.com | 16 / 23 | 69.6% |
| ClickUp | 15 / 23 | 65.2% |
| Wrike | 13 / 23 | 56.5% |
| Jira | 10 / 23 | 43.5% |
| Notion | 9 / 23 | 39.1% |
| Trello | 9 / 23 | 39.1% |
| Zoho Projects | 5 / 23 | 21.7% |
| Smartsheet | 5 / 23 | 21.7% |
| Adobe Workfront | 5 / 23 | 21.7% |
| Azure DevOps | 3 / 23 | 13.0% |
| Linear | 3 / 23 | 13.0% |
| Microsoft Project | 3 / 23 | 13.0% |
| Teamwork | 3 / 23 | 13.0% |
| Zenhub | 3 / 23 | 13.0% |
So at the level of "which brands does AI search think exist?", the answer is pretty consistent. The same twelve or fifteen tools keep surfacing, which isn't necessarily a flaw. These are the brands that dominate category pages, review sites, and vendor comparisons, so any system grounded in the public web will absorb that consensus. The shortlist is real.
What's less real is the idea that AI search agrees on what to do with that shortlist.
The agreement collapses the moment you ask a real question
For dev-team queries, the four platforms collectively named sixteen distinct brands. Exactly one (Jira) was recommended by every platform that answered, which is a 6.2% universal agreement rate. Enterprise queries sat at 12.5%, marketing queries at 17.6%. The only query where the platforms agreed completely was "asana vs monday.com," and that's an artifact of the query itself naming both brands, so I'm discounting it.
| Query | Platforms | Brands named | Universal | Agreement |
|---|---|---|---|---|
| Small business | 3 | 11 | 4 | 36.4% |
| Marketing | 4 | 17 | 3 | 17.6% |
| Dev teams | 4 | 16 | 1 | 6.2% |
| Asana vs monday.com | 4 | 2 | 2 | 100.0% * |
| Free | 4 | 11 | 3 | 27.3% |
| Enterprise | 4 | 16 | 2 | 12.5% |
In practice, if a buyer asks ChatGPT for the best project management software for their dev team and then asks Gemini the same question, they'll get substantially different shortlists. ChatGPT lists Jira, Linear, ClickUp, GitHub Projects, Azure DevOps, monday.com, Zenhub, YouTrack, Notion, Trello, and OpenProject. Gemini lists Linear, Jira, GitLab, GitHub Projects, ClickUp, and Monday Dev. The overlap is just four brands (Jira, Linear, ClickUp, and GitHub Projects); the other seven names on ChatGPT's list and the other two on Gemini's never appear on the other platform.
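Here's that comparison as a simple set intersection, using the two dev-team lists exactly as captured (Monday Dev is coded as a distinct product from monday.com, so it doesn't count toward the overlap):

```python
chatgpt_dev = {"Jira", "Linear", "ClickUp", "GitHub Projects", "Azure DevOps",
               "monday.com", "Zenhub", "YouTrack", "Notion", "Trello", "OpenProject"}
gemini_dev = {"Linear", "Jira", "GitLab", "GitHub Projects", "ClickUp", "Monday Dev"}

overlap = chatgpt_dev & gemini_dev
print(sorted(overlap))  # ['ClickUp', 'GitHub Projects', 'Jira', 'Linear']
print(f"unique to ChatGPT: {len(chatgpt_dev - gemini_dev)}/{len(chatgpt_dev)}")  # 7/11
print(f"unique to Gemini:  {len(gemini_dev - chatgpt_dev)}/{len(gemini_dev)}")   # 2/6
```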
The industry talks about AI search as if it's one thing and it isn't.
Each platform has a voice, and the voices tell you where the citations are coming from
Once you read enough of these answers, each platform starts to feel like a different character. That's not a style observation, it's a retrieval observation.
Gemini reads like a scenario-builder
Instead of just naming tools, it writes into a specific team, a specific business context, a specific plan. Sometimes that gives the answer useful texture; sometimes the model is dressing the advice in detail it hasn't earned. One thing I consistently noticed is that Gemini's responses were openly personalized based on my Google account history. For the marketing query it referenced "events, digital, content, and multimedia specialists," which are the exact functions on my real marketing team. For the enterprise query it factored in "Canadian market expansion" and "M&A integrations" because those are things I've actually worked on. Two different marketers running the same query will get different brand recommendations based on what Google knows about them.
Google AI Mode reads like a consensus summary
Fast orientation, the least distinctive voice in the group, heavy reliance on whatever the broader web appears to agree on. Except the "web" it's pulling from is mostly YouTube and Reddit. Forty percent of its citations are community content. Its top sources for the "best free project management software" query were a YouTube video and a Reddit thread. No editorial reviews in the citation list at all. Nobody writing about AEO or GEO right now is talking about Google AI Mode as a YouTube-and-Reddit discovery surface. The data says that's exactly what it is.
ChatGPT reads like a polished software roundup
Tidy "best overall" and "best for" labels, emoji-headed sections, closing invitations to tell it more about your team size and budget. The voice is helpful first, original second, and that's because three-quarters of its citations (73.7%) come from editorial review sites. Forbes, Cloudwards, The Digital Project Manager, The CTO Club. Its citation surface reads like a magazine rack, and the prose reads like the magazine. Community content showed up zero times across 57 sources which is quite interesting in my opinion and naturally goes against the grain on how many marketers currently think about community content and how it plays a part in AI citation.
Perplexity reads like a compressed research memo
Shorter summaries, less pressure to turn every prompt into a buying guide, inline numbered citations on every claim. But here's the part that surprised me: 56% of its citations are vendor-owned domains: asana.com, monday.com, atlassian.com, wrike.com, zoho.com. The platform everyone praises for transparent sourcing is quietly delivering a lot of marketing copy. The transparency is real; what it reveals is just less independent than the interface suggests.
| Platform | Sources | Vendor | Review | Community | Aggregator |
|---|---|---|---|---|---|
| ChatGPT | 57 | 22.8% | 73.7% | 0.0% | 3.5% |
| Perplexity | 25 | 56.0% | 36.0% | 4.0% | 4.0% |
| Gemini | 12 | 66.7% | 33.3% | 0.0% | 0.0% |
| Google AI Mode | 20 | 45.0% | 10.0% | 40.0% | 5.0% |
Where a brand looks "dominant" depends entirely on which door you walked through
Asana was the overall leader at 78.3% mention share, but that single number hides platform-level variance that's worth sitting with.
| Brand | ChatGPT | Perplexity | Gemini | Google AI Mode |
|---|---|---|---|---|
| Asana | 83.3% | 83.3% | 80.0% | 66.7% |
| monday.com | 83.3% | 50.0% | 60.0% | 83.3% |
| ClickUp | 83.3% | 33.3% | 60.0% | 83.3% |
| Wrike | 66.7% | 50.0% | 60.0% | 50.0% |
| Jira | 50.0% | 33.3% | 20.0% | 66.7% |
| Notion | 66.7% | 16.7% | 20.0% | 50.0% |
| Trello | 50.0% | 33.3% | 0.0% | 50.0% |
On ChatGPT, Asana, monday.com, and ClickUp all tied at 83.3%. Perplexity kept Asana at 83.3% but dropped monday.com to 50%. Gemini also put Asana first at 80% but demoted Jira to a single mention across its five coded responses. Google AI Mode flipped the order entirely, tying monday.com and ClickUp at 83.3% and pushing Asana down to third at 66.7%.
These swings are structural and not noise. Perplexity's Sonar model appears to pull heavily from vendor documentation, so tools with strong self-published content get weighted accordingly. Gemini's "Show thinking" layer explicitly looks at personalization signals from the user's Google account. ChatGPT appears to weight editorial aggregator content heavily, which surfaces the brands those sites have been promoting for years. Google AI Mode pulls from what's earning YouTube thumbnails and Reddit upvotes, which is a very different set of brands than what's earning Forbes coverage.
So if you're a marketer at a PM software company trying to track "how are we doing in AI search," a single-platform report is misleading by definition. The right question is: where are we winning, where are we losing, and why does that map look the way it does?
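If you want that brand-centric view from coded data like this study's, it's a small pivot on the same matrices. A minimal sketch that leans on the BRANDS dictionary from the appendix script (the brand_scorecard name is mine, not part of that script):

```python
def brand_scorecard(brand, brands):
    """Per-platform hit rate for one brand: queries mentioning it / queries coded.

    `brands` is a {platform: {query: [recommended brands]}} mapping, e.g. the
    BRANDS dictionary from the appendix script.
    """
    card = {}
    for platform, by_query in brands.items():
        coded = [q for q, names in by_query.items() if names]  # skip truncated responses
        hits = sum(1 for q in coded if brand in by_query[q])
        card[platform] = (hits, len(coded), round(100 * hits / len(coded), 1))
    return card

# brand_scorecard("Notion", BRANDS)
# -> {'ChatGPT': (4, 6, 66.7), 'Perplexity': (1, 6, 16.7),
#     'Gemini': (1, 5, 20.0), 'Google AI Mode': (3, 6, 50.0)}
```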
Fifteen brands only exist on one platform
Six brands showed up exclusively on Google AI Mode: Aproove, CoSchedule, Plaky, Planable, ProofHub, and SmartSuite. Four were Perplexity-only: Bitrix24, Microsoft Planner, Oracle Primavera P6, and Planview Enterprise One. Three were ChatGPT-only (Freedcamp, Workzone, YouTrack), and two were Gemini-only (GitLab, Monday Dev).
That's fifteen brands whose AI visibility depends entirely on which platform a buyer happens to query. If you're Oracle Primavera P6, you're invisible on ChatGPT for the exact enterprise query that should be your strongest surface. If you're CoSchedule, Google AI Mode is the only door buyers are walking through to find you.
The old SEO mental model (earn authority, rank everywhere) doesn't translate cleanly here. Each platform is its own retrieval system, pulling from its own universe of sources, weighted by its own signals. Being cited on one doesn't carry over to the others.
That's what the per-platform numbers show: each platform has its own source diet and its own set of exclusive brands.
AI search is a confident first draft of the market
After working through all 23 responses, the clearest takeaway isn't about project management software. Based on my prompts and testing, it's far more about what AI search is actually doing.
AI search is excellent at producing a confident first draft of a category. It gives you the language, the frames, the shortlist, the comparison talking points. It's fluent and it sounds like someone who's been covering the beat for years. For a buyer starting from zero, that's absolutely useful; it compresses three hours of reading category pages into three minutes of prose.
What it's not doing is telling you how a tool will feel inside your team three weeks after rollout. It's not measuring the maintenance tax on a highly customizable system. It's not accounting for the approval chain that gets messy, the executive who joins late, or the beautiful dashboard that quietly depends on three people remembering to update it (yes, I’m talking about you). Those are the parts of a software decision that don't show up in the AI answer layer because they don't show up in the content the AI answer layer is trained on.
Consensus can help you get oriented but it can also make the safest answer sound like the smartest one.
What this changes about the AEO/GEO playbook
The single-platform pilot I ran produced findings about source mix and brand concentration that felt novel. The multi-platform version changes the whole frame, and there are three implications worth naming directly.
Single-platform measurement is a trap. Yeah, I said it. Any tool, consulting firm, or blog post giving you a "share of AI voice" number derived from one platform is telling you a fraction of the story. The fraction may be 25% if you're lucky and the platform happens to be representative of your buyer. In reality (which, last I checked, we're still living in, and not a simulation… right?), it's probably less.
The "optimize your content for AI" advice needs to be platform-specific. Surfacing on ChatGPT means earning placements in editorial reviews and aggregators. Perplexity rewards you if your own domain is structured so the platform can pull from it confidently. Google AI Mode surfaces the brands with a strong YouTube and Reddit presence more than anything else. These are different motions entirely.
The asana vs monday.com query taught me something about how to design these studies. A buyer query that names brands forces the AI to discuss those brands, which produces the illusion of cross-platform agreement when the platforms are really just obeying the question's framing. Any serious AI visibility study has to treat named-brand queries as a separate category and discount them from the general mention-share analysis.
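In code, that discount is one extra filter over the appendix data. A sketch of how it might look; the NAMED_BRAND_QUERIES set is my label for the idea, not something already in the script:

```python
from collections import Counter

NAMED_BRAND_QUERIES = {"asana_v_monday"}  # queries whose text already names brands

def mention_share_unbranded(brands, queries, platforms):
    """Overall mention share computed only over open-ended (non-named-brand) queries."""
    slots = [(p, q) for p in platforms for q in queries
             if q not in NAMED_BRAND_QUERIES and brands[p][q]]
    counts = Counter(b for p, q in slots for b in set(brands[p][q]))
    return {b: round(100 * c / len(slots), 1) for b, c in counts.most_common()}

# With the appendix data: mention_share_unbranded(BRANDS, QUERIES, PLATFORMS)
# recomputes the share over the 19 open-ended slots instead of all 23.
```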
How to run this yourself
The methodology appendix at the end of this document lays out the full protocol. The short version: run each query in a fresh session on each platform (prior context biases results), turn on web search wherever it's a toggle, and capture not just the response text but also the visible citation list, because the citation list is where the platform-level differences show up most clearly.
Total time to run this study was roughly 90 minutes to collect the responses and another three hours to code the brand mentions and source domains into the analysis script. If you want to run it for your category, the full script is in the appendix. Replace the queries and brand lists with yours, run the same analysis, and you'll have your own version of the mention-share and source-mix data for whatever vertical you care about.
Where this goes next
There are three versions of this study that would each produce a distinct finding.
The first is a drift study. Re-run the same six queries on every platform each week for eight weeks and measure how much the recommended brand set changes on each platform over time. That would produce the first real longitudinal data on AI answer stability by platform, which nobody currently has for this category.
The second is a personalization study. Run Gemini's six queries signed in as three different marketer profiles with different work contexts, and compare the outputs. That would quantify how much Gemini's personalization layer is actually shifting brand recommendations, which matters to anyone building a GEO strategy around Google's AI surface.
The third is a citation persistence study. Take the sources cited in this week's responses and re-query the same platforms in 30 days. Measure what percentage are still being cited. If the answer is "most of them," the AI search surface is more stable than the industry thinks. If it's "a minority," then the churn story everyone's been telling based on Profound's numbers holds at the category level too.
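The drift and persistence studies both reduce to the same overlap measurement between two snapshots of a set, whether that set is recommended brands or cited domains. A minimal sketch, with made-up snapshot data standing in for what you'd actually capture on each run:

```python
def jaccard(a, b):
    """Overlap between two snapshots of a set (recommended brands, or cited domains)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Hypothetical week-over-week drift for one (platform, query) slot:
week_1 = {"Jira", "Linear", "ClickUp", "GitHub Projects"}
week_2 = {"Jira", "Linear", "ClickUp", "Azure DevOps", "Notion"}
print(f"brand-set stability: {jaccard(week_1, week_2):.2f}")  # 0.50

# Hypothetical citation persistence: share of today's cited domains still cited in 30 days.
cited_now = {"forbes.com", "cloudwards.net", "reddit.com"}
cited_in_30_days = {"cloudwards.net", "reddit.com", "zapier.com"}
print(f"citation persistence: {len(cited_now & cited_in_30_days) / len(cited_now):.0%}")  # 67%
```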
And fourth, don’t forget to get milk at the store because you ran out this morning. I’m sure I just blew someone’s mind that needed that reminder…
I'll run whichever of these looks most interesting first. My hunch is the personalization study, because the Gemini finding in this pilot is the most surprising thing I learned and nobody else is writing about it.
The thesis of Explainable, restated
This is the whole argument of the book I published earlier this year. AI doesn't have one set of answers. It has many sets of answers, shaped by platform, by source, by query framing, and increasingly by who's asking. Optimizing for "AI search" as a single surface is a category error. The brands that win in 2026 will treat each AI platform as its own retrieval system, measure their presence on each one separately, and build content strategies that match how each platform actually retrieves information.
The fifteen brands in this study that only exist on one platform are the clearest illustration of why that matters. You can't optimize your way into ChatGPT's surface by writing better copy for your own website. You can't earn your way into Google AI Mode by landing a Forbes placement. The playbook splits, and that's good news for anyone willing to do the work. The brands that figure out the per-platform motion first will have a real structural advantage. Most of the industry is still reporting on AI search as one thing; the ones that understand it's four things (or six, or ten, as the surface continues to fragment) are the ones that will show up when a buyer asks.
Jarred Smith is the author of Explainable: Why AI Recommends Some Brands & Ignores Others, an Amazon bestseller on AEO, GEO, and SEO. He writes about AI-driven brand visibility at jarredsmith.com.
Appendix: Methodology & Replication
Study design
This is a pilot, not a benchmark. The six queries were chosen to represent common buyer-intent patterns in the project management software category. The study ran on 2026-04-17 across four AI platforms (ChatGPT, Perplexity, Gemini, Google AI Mode), with each query executed once per platform in a fresh session with web search enabled wherever that was a toggle.
The six queries
1. "best project management software for small business"
2. "best project management software for marketing teams"
3. "best project management software for software development teams"
4. "asana vs monday.com which is better"
5. "best free project management software"
6. "best enterprise project management software"
Coding rules
A brand is "mentioned" if it appears as a named recommendation in the response body. Dismissals count as mentions (as in the pilot) because both signal that the AI considers the brand part of the category conversation.
Brands named only in the user's query (as in "asana vs monday.com") are coded as mentioned only if the response independently recommends them in its answer. Since every platform's response recommended both, both count.
Source domains are classified into four categories: vendor (brand/tool-owned pages like monday.com, asana.com), review (independent editorial sites like Forbes, Cloudwards, The Digital Project Manager), community (user-generated content on YouTube and Reddit), and aggregator (marketplace directories like G2, Capterra, Gartner).
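If you're tagging a lot of cited sources, a domain lookup table does most of that classification before a manual pass. A sketch of that helper; the domain lists below are illustrative, not the full coding key used in this study:

```python
# Illustrative buckets only; anything unmatched falls back to "review" pending a
# manual check, since independent editorial sites are the long tail of domains.
VENDOR = {"monday.com", "asana.com", "clickup.com", "atlassian.com", "wrike.com", "zoho.com"}
COMMUNITY = {"youtube.com", "reddit.com"}
AGGREGATOR = {"g2.com", "capterra.com", "gartner.com", "softwareadvice.com"}

def classify_source(domain: str) -> str:
    """Map a cited domain to vendor / community / aggregator / review."""
    d = domain.lower()
    if any(d == v or d.endswith("." + v) for v in VENDOR):
        return "vendor"
    if any(d == c or d.endswith("." + c) for c in COMMUNITY):
        return "community"
    if any(d == a or d.endswith("." + a) for a in AGGREGATOR):
        return "aggregator"
    return "review"  # default bucket; verify by hand

# classify_source("reddit.com") -> "community"; classify_source("cloudwards.net") -> "review"
```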
Gemini's response to Query 1 (small business) was truncated in the captured data before any brand recommendations appeared. That response is excluded from the brand analysis but its source-list metadata is preserved. Coverage is therefore 23 valid (platform × query) combinations out of a possible 24.
Limitations
Small sample size. Six queries on four platforms produce 23 valid responses. That's enough to surface platform-level patterns but not enough to make confident claims about individual brand ranking changes over time.
Single-day snapshot. All responses were captured on 2026-04-17. AI platforms change their retrieval models, source weighting, and training data continuously. The findings reflect that single day; the drift study proposed in the post would address this.
One category. Project management software was chosen because it has broad appeal and a large consideration set. Whether the cross-platform disagreement pattern replicates in other categories (CRM, accounting, HR software, or non-SaaS verticals) is an open question.
Gemini personalization contamination. Gemini's responses were influenced by the query runner's Google account history. This is a limitation for cross-platform comparison but also a finding. The captured responses are included as-is with the personalization flagged.
No answer-quality measurement. This study measures which brands are mentioned, not whether the recommendations are accurate or useful. A brand could be mentioned frequently but in a dismissive context; a well-regarded brand could be missing from a list due to retrieval gaps rather than actual competitive weakness. Coding "dismissive vs. endorsing" mentions would be a natural next layer.
Replication protocol
1. Pick your vertical. The methodology works for any B2B or D2C category with at least ten credible competing brands and a consideration set that buyers actually research using AI.
2. Write six buyer-intent queries. Cover multiple segments (small, large, vertical-specific, free options, named-competitor comparison). Keep the queries realistic and avoid leading language.
3. Run each query once per platform in a fresh session. ChatGPT, Perplexity, Gemini, and Google AI Mode cover roughly 95% of consumer AI search share right now. Make sure web search is enabled wherever it's a toggle.
4. Capture the full response text and the complete source/citation list for each run. The citation list is where the platform-level differences are most visible.
5. Code every brand recommendation into a (platform × query × brand) matrix and every cited source into a (platform × query × source × type) matrix. Use the types vendor, review, community, aggregator.
6. Run the analysis script (below) on your coded data. Replace the BRANDS and SOURCES dictionaries with your own. The analysis functions will produce overall mention share, cross-platform agreement, universal-brand lists, per-platform share rankings, and source-mix distributions.
7. Publish with the methodology visible. The citation flywheel works when other people can reproduce your numbers. Transparency is the whole point.
Analysis script
The full Python script used to produce every number in this study. Replace the BRANDS and SOURCES dictionaries with your own coded data, run it, and you'll have the same analysis for your category. No external dependencies beyond the Python standard library.
"""
Multi-Platform AI Search Visibility Study: Project Management Software
======================================================================
Six buyer-intent queries run across four AI platforms
(ChatGPT, Perplexity, Gemini, Google AI Mode) on 2026-04-17.
Coding rules:
- A brand is "mentioned" if it appears as a named recommendation in the
response body. Dismissals also count as mentions (as in the pilot).
- A brand mentioned in the USER'S OWN QUERY (e.g. "asana vs monday.com")
is NOT automatically counted; the platform must still recommend it.
- Source domains are classified into one of four categories:
vendor - brand/tool owned (monday.com, asana.com, clickup.com, etc.)
review - independent or editorial review sites (Forbes, Cloudwards, etc.)
community - user-generated content platforms (YouTube, Reddit)
aggregator - marketplace comparison sites (G2, Capterra, Gartner)
- Gemini "Query 1 (small business)" was truncated in the captured response
before any brand recommendations appeared, so it is excluded from brand
analysis but its metadata is preserved.
"""
from collections import Counter, defaultdict
import json
QUERIES = [
"small_business",
"marketing",
"dev",
"asana_v_monday",
"free",
"enterprise",
]
PLATFORMS = ["ChatGPT", "Perplexity", "Gemini", "Google AI Mode"]
# ---------------------------------------------------------------------
# BRAND MENTIONS: platform -> query -> list of brands recommended
# ---------------------------------------------------------------------
BRANDS = {
"ChatGPT": {
"small_business": ["monday.com", "ClickUp", "Trello", "Asana",
"Zoho Projects", "Notion", "Basecamp", "Wrike",
"Smartsheet", "Airtable"],
"marketing": ["monday.com", "Asana", "ClickUp", "Wrike", "Notion",
"Adobe Workfront", "Teamwork", "Workzone"],
"dev": ["Jira", "Linear", "ClickUp", "GitHub Projects",
"Azure DevOps", "monday.com", "Zenhub", "YouTrack",
"Notion", "Trello", "OpenProject"],
"asana_v_monday": ["Asana", "monday.com"],
"free": ["ClickUp", "Trello", "Asana", "Notion", "Jira",
"Wrike", "Miro", "Freedcamp"],
"enterprise": ["Wrike", "monday.com", "Microsoft Project",
"Smartsheet", "Adobe Workfront", "Jira", "Asana",
"ClickUp", "Zoho Projects", "Workzone"],
},
"Perplexity": {
"small_business": ["Asana", "Zoho Projects", "Trello", "ClickUp",
"Microsoft Planner"],
"marketing": ["Asana", "Wrike", "monday.com", "Zoho Projects",
"Jira"],
"dev": ["Jira", "Asana", "Zenhub", "Azure DevOps", "Wrike",
"Smartsheet"],
"asana_v_monday": ["monday.com", "Asana"],
"free": ["Trello", "Asana", "OpenProject", "Notion",
"Bitrix24"],
"enterprise": ["Microsoft Project", "Planview Enterprise One",
"Adobe Workfront", "Oracle Primavera P6", "Celoxis",
"monday.com", "ClickUp", "Wrike"],
},
"Gemini": {
# Q1 truncated in capture - excluded from brand analysis
"small_business": [],
"marketing": ["monday.com", "Wrike", "Asana", "ClickUp"],
"dev": ["Linear", "Jira", "GitLab", "GitHub Projects",
"ClickUp", "Monday Dev"],
"asana_v_monday": ["Asana", "monday.com"],
"free": ["Asana", "Trello", "ClickUp", "Wrike", "Notion"],
"enterprise": ["monday.com", "Wrike", "Asana", "Smartsheet"],
},
"Google AI Mode": {
"small_business": ["ClickUp", "Trello", "Asana", "Zoho Projects",
"monday.com", "Wrike", "Notion"],
"marketing": ["Wrike", "monday.com", "Asana", "Airtable",
"ClickUp", "Teamwork", "CoSchedule",
"Adobe Workfront", "Jira", "Planable", "Trello",
"Miro", "Basecamp", "Aproove"],
"dev": ["Jira", "Zenhub", "Linear", "ClickUp", "monday.com",
"Notion", "Azure DevOps"],
"asana_v_monday": ["Asana", "monday.com"],
"free": ["ClickUp", "Trello", "Asana", "Notion", "Jira",
"Plaky"],
"enterprise": ["Celoxis", "SmartSuite", "Wrike", "monday.com",
"Smartsheet", "Adobe Workfront", "ClickUp", "Jira",
"Microsoft Project", "Teamwork", "ProofHub"],
},
}
# ---------------------------------------------------------------------
# SOURCE DOMAINS: platform -> query -> list of source domain classifications
# ---------------------------------------------------------------------
# Classification: 'vendor', 'review', 'community', 'aggregator'
# For each cited source, we tag its domain type.
SOURCES = {
"ChatGPT": {
"small_business": [
("The Digital Project Manager", "review"),
("Guideflow", "review"),
("Cloudwards", "review"),
("Capterra", "aggregator"),
("Spotsaas", "review"),
("softwareadvice.com", "aggregator"),
("forbes.com", "review"),
("thebusinessdive.com", "review"),
("toolradar.com", "review"),
("switchonbusiness.com", "review"),
],
"marketing": [
("Toolradar", "review"),
("Workzone", "vendor"),
("The CMO", "review"),
("The Digital Project Manager", "review"),
("toolfinder.com", "review"),
("remotewize.com", "review"),
("cloudwards.net", "review"),
("techstackdaily.com", "review"),
("productive.io", "vendor"),
("tmetric.com", "vendor"),
("airtable.com", "vendor"),
],
"dev": [
("Guideflow", "review"),
("The CTO Club", "review"),
("Workflow Automation", "review"),
("spotsaas.com", "review"),
("techrepublic.com", "review"),
("forbes.com", "review"),
("toggl.com", "vendor"),
("cloudwards.net", "review"),
("clickup.com", "vendor"),
("pmworld360.com", "review"),
("affine.pro", "vendor"),
],
"asana_v_monday": [
("Tech Insider", "review"),
("Agiled", "vendor"),
("The Digital Project Manager", "review"),
("CompareTiers", "review"),
("automationatlas.io", "review"),
("thebusinessdive.com", "review"),
("cloudwards.net", "review"),
("monday.com", "vendor"),
("asanacost.com", "review"),
("toolswitcher.com", "review"),
("cpoclub.com", "review"),
],
"free": [
("project-management.com", "review"),
("Zapier", "review"),
("Atlassian", "vendor"),
("The Digital Project Manager", "review"),
("Geekflare", "review"),
("2sync.com", "review"),
("proprofsproject.com", "review"),
("celoxis.com", "vendor"),
("niftypm.com", "vendor"),
("cloudwards.net", "review"),
("switchonbusiness.com", "review"),
],
"enterprise": [
("Workzone", "vendor"),
("The Digital Project Manager", "review"),
("Jotform", "review"),
],
},
"Perplexity": {
"small_business": [
("asana.com", "vendor"),
("zapier.com", "review"),
("microsoft.com", "vendor"),
("zoho.com", "vendor"),
],
"marketing": [
("atlassian.com", "vendor"),
("asana.com", "vendor"),
("wrike.com", "vendor"),
("zoho.com", "vendor"),
],
"dev": [
("paymoapp.com", "vendor"),
("project-management.com", "review"),
("g2.com", "aggregator"),
("zenhub.com", "vendor"),
("reddit.com", "community"),
("atlassian.com", "vendor"),
],
"asana_v_monday": [
("plaky.com", "vendor"),
("asana.com", "vendor"),
("till-freitag.com", "review"),
("zapier.com", "review"),
],
"free": [
("openproject.org", "vendor"),
("zapier.com", "review"),
("project-management.com", "review"),
("icagile.com", "review"),
("thedigitalprojectmanager.com", "review"),
],
"enterprise": [
("celoxis.com", "vendor"),
("technologyadvice.com", "review"),
],
},
"Gemini": {
# Q1 truncated - no sources
"small_business": [],
"marketing": [
("till-freitag.com", "review"),
("productive.io", "vendor"),
],
"dev": [
("agileleadershipdayindia.org", "review"),
("atlassian.com", "vendor"),
("gitscrum.com", "vendor"),
("gitscrum.com", "vendor"),
("openproject.org", "vendor"),
("project-management.com", "review"),
],
"asana_v_monday": [
("monday.com", "vendor"),
("asana.com", "vendor"),
("plutio.com", "vendor"),
("tech.co", "review"),
],
"free": [], # No sources list rendered in Gemini's free response
"enterprise": [], # No sources list rendered
},
"Google AI Mode": {
"small_business": [
("My Emma", "review"),
("YouTube - George Vlasyev", "community"),
("Slack", "vendor"),
],
"marketing": [
("Reddit", "community"),
("Atlassian", "vendor"),
("Wrike", "vendor"),
("Airtable", "vendor"),
],
"dev": [
("Reddit", "community"),
("YouTube - ClickUp", "community"),
("YouTube - ProcessDriven", "community"),
("Wrike", "vendor"),
("zenhub.com", "vendor"),
("Asana", "vendor"),
],
"asana_v_monday": [
("YouTube", "community"),
("Asana", "vendor"),
("Zapier", "review"),
],
"free": [
("YouTube - Daniel Davidson", "community"),
("Reddit", "community"),
],
"enterprise": [
("Celoxis", "vendor"),
("Gartner", "aggregator"),
],
},
}
# ---------------------------------------------------------------------
# ANALYSIS
# ---------------------------------------------------------------------
def brand_mention_share():
"""For each brand, what % of valid (platform x query) slots mention it?"""
valid_slots = [(p, q) for p in PLATFORMS for q in QUERIES
if BRANDS[p][q]] # exclude empty (Gemini Q1)
total = len(valid_slots)
brand_counts = Counter()
for p, q in valid_slots:
for b in set(BRANDS[p][q]):
brand_counts[b] += 1
return {b: (c, round(100 * c / total, 1))
for b, c in brand_counts.most_common()}, total
def per_platform_mention_share():
"""For each brand, per-platform hit rate."""
rows = {}
for p in PLATFORMS:
valid_queries = [q for q in QUERIES if BRANDS[p][q]]
n = len(valid_queries)
counts = Counter()
for q in valid_queries:
for b in set(BRANDS[p][q]):
counts[b] += 1
rows[p] = {b: (c, round(100 * c / n, 1)) for b, c in counts.items()}
return rows
def cross_platform_agreement():
"""For each query, what % of brands appear on >=3 of 4 platforms?"""
out = {}
for q in QUERIES:
platform_sets = []
for p in PLATFORMS:
if BRANDS[p][q]:
platform_sets.append(set(b.lower() for b in BRANDS[p][q]))
if len(platform_sets) < 2:
out[q] = None
continue
all_brands = set()
for s in platform_sets:
all_brands |= s
universal = sum(1 for b in all_brands
if sum(1 for s in platform_sets if b in s)
== len(platform_sets))
majority = sum(1 for b in all_brands
if sum(1 for s in platform_sets if b in s)
>= max(2, len(platform_sets) - 1))
out[q] = {
"platforms_covered": len(platform_sets),
"total_distinct_brands": len(all_brands),
"universal_count": universal,
"universal_pct": round(100 * universal / len(all_brands), 1),
"majority_count": majority,
"majority_pct": round(100 * majority / len(all_brands), 1),
}
return out
def source_type_mix_by_platform():
"""For each platform, what % of sources are vendor / review / community / aggregator?"""
out = {}
for p in PLATFORMS:
counts = Counter()
total = 0
for q in QUERIES:
for _, tag in SOURCES[p][q]:
counts[tag] += 1
total += 1
if total == 0:
out[p] = {"total": 0}
continue
out[p] = {"total": total}
for tag in ("vendor", "review", "community", "aggregator"):
pct = round(100 * counts[tag] / total, 1)
out[p][tag] = (counts[tag], pct)
return out
def brands_unique_to_one_platform():
"""Brands that only show up in one platform across all queries."""
by_brand = defaultdict(set)
for p in PLATFORMS:
for q in QUERIES:
for b in BRANDS[p][q]:
by_brand[b].add(p)
return {b: list(ps)[0] for b, ps in by_brand.items() if len(ps) == 1}
def per_query_universal_brands():
"""For each query, which brands appear on all platforms that answered it?"""
out = {}
for q in QUERIES:
platform_sets = []
covered = []
for p in PLATFORMS:
if BRANDS[p][q]:
platform_sets.append(set(b.lower() for b in BRANDS[p][q]))
covered.append(p)
if len(platform_sets) < 2:
out[q] = None
continue
universal = set(platform_sets[0])
for s in platform_sets[1:]:
universal &= s
out[q] = {"covered": covered, "universal": sorted(universal)}
return out
if __name__ == "__main__":
print("=" * 70)
print("MULTI-PLATFORM AI SEARCH VISIBILITY STUDY")
print("Project Management Software · 2026-04-17")
print("=" * 70)
print("\n-- COVERAGE --")
for p in PLATFORMS:
answered = sum(1 for q in QUERIES if BRANDS[p][q])
print(f" {p}: {answered}/6 queries coded")
print("\n-- OVERALL BRAND MENTION SHARE (across all 23 valid slots) --")
shares, total = brand_mention_share()
print(f" (N = {total} platform x query combinations)")
for b, (c, pct) in list(shares.items())[:15]:
print(f" {b:30s} {c:2d}/{total} ({pct}%)")
print("\n-- CROSS-PLATFORM AGREEMENT BY QUERY --")
agreement = cross_platform_agreement()
for q, a in agreement.items():
if a:
print(f" {q:20s} {a['platforms_covered']} platforms | "
f"{a['total_distinct_brands']} distinct brands | "
f"universal: {a['universal_count']}/{a['total_distinct_brands']} "
f"({a['universal_pct']}%)")
print("\n-- UNIVERSAL BRANDS PER QUERY (named by every platform that answered) --")
uni = per_query_universal_brands()
for q, v in uni.items():
if v:
print(f" {q:20s} {v['universal']}")
print("\n-- SOURCE-TYPE MIX BY PLATFORM --")
mix = source_type_mix_by_platform()
for p, m in mix.items():
if m["total"] == 0:
continue
print(f" {p} (N={m['total']} sources):")
for tag in ("vendor", "review", "community", "aggregator"):
if tag in m:
c, pct = m[tag]
print(f" {tag:10s} {c:2d} ({pct}%)")
print("\n-- PER-PLATFORM MENTION SHARE (top 8 per platform) --")
pp = per_platform_mention_share()
    for p, rows in pp.items():
        n_valid = sum(1 for q in QUERIES if BRANDS[p][q])  # Gemini has 5 coded queries, others 6
        print(f"\n  {p}:")
        for b, (c, pct) in sorted(rows.items(),
                                  key=lambda x: -x[1][0])[:8]:
            print(f"    {b:28s} {c}/{n_valid} ({pct}%)")
print("\n-- BRANDS UNIQUE TO A SINGLE PLATFORM --")
uniq = brands_unique_to_one_platform()
by_platform = defaultdict(list)
for b, p in uniq.items():
by_platform[p].append(b)
for p, bs in by_platform.items():
print(f" {p}: {', '.join(sorted(bs))}")