Prompt Intelligence
How SolCrys builds prompt sets - the Golden Prompt Set methodology
SolCrys generates a Golden Prompt Set (GPS) for each product category by grounding it in four real-world signals: organic intent volume across major search and marketplace surfaces, trending questions from leading public community platforms, AI query volume signals, and live follow-up questions captured from rendered consumer surfaces such as ChatGPT and Rufus. Every prompt carries projected query volume and source provenance, so customers can prioritize the questions buyers actually run, not the ones marketers wish they ran. This page is the trust-building methodology document behind the GPS: it explains how prompts are chosen, why four-source grounding matters, how customer-supplied prompts blend in, how templates update, and what the GPS deliberately does not promise. Prompt selection is the single biggest determinant of AEO data quality, and we publish ours in full so buyers can evaluate before they sign.
Updated 2026-05-08
Questions this guide answers
- How do AI visibility platforms choose prompts to track?
- What is a Golden Prompt Set?
- How does SolCrys decide which prompts to monitor?
- Where do AEO prompt sets come from?
Direct answer
SolCrys generates a Golden Prompt Set (GPS) for each product category by grounding it in four real-world data sources: organic intent volume across major search and marketplace surfaces, trending questions from leading public community platforms, AI query volume signals, and live follow-up questions captured from the rendered consumer surfaces of answer engines like Rufus and ChatGPT. Each prompt in the GPS comes with projected query volume, so customers can prioritize the questions buyers actually run, not the ones marketers wish they ran.
Prompt selection is the single biggest determinant of AEO data quality. If a platform tracks the wrong prompts, every chart it produces is wrong - no matter how polished the dashboard or how accurate the per-prompt scoring.
Three failure modes the GPS is designed to avoid
We designed the GPS to avoid three patterns we have repeatedly observed in how AEO prompt selection goes wrong.
| Failure mode | What goes wrong | What buyers experience |
|---|---|---|
| Synthetic keyword lists | Tools pull SEO keywords and reformat them as questions ('best [X]?', '[X] vs [Y]?'). They look like prompts but don't match how buyers phrase questions to AI assistants. | 'Why is my AEO tool tracking 200 prompts but I never see traffic move when I fix things?' |
| Customer-only prompts | Tools rely entirely on customer-supplied prompts. Marketers tend to ask brand-flattering questions; real buyers ask harder, more skeptical ones. | 'Our SOV looks great but our pipeline isn't growing.' |
| LLM-generated synthetic prompts | Tools prompt an LLM to 'generate 100 questions a buyer might ask.' This produces plausible-sounding but volume-unverified queries - many are never actually asked. | 'We're tracking lots of prompts but only 3 of them seem to drive any decisions.' |
The four grounding sources
Every GPS is built from four grounded data sources. A prompt only enters the GPS if it can be evidenced from at least one source - preferably more than one.
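The admission rule above can be sketched in a few lines. This is a hypothetical illustration, not SolCrys's actual implementation: a candidate prompt is kept only if at least one of the four sources evidences it, and multi-source prompts are surfaced first. All names here (`CandidatePrompt`, `admit`) are invented for the example.

```python
from dataclasses import dataclass, field

# The four grounding sources named in this document.
SOURCES = {"intent_volume", "community", "ai_query_signals", "engine_followups"}

@dataclass
class CandidatePrompt:
    text: str
    evidence: set = field(default_factory=set)  # subset of SOURCES

def admit(candidates):
    """Keep prompts evidenced by >= 1 source; rank multi-source prompts first."""
    admitted = [c for c in candidates if len(c.evidence & SOURCES) >= 1]
    return sorted(admitted, key=lambda c: -len(c.evidence & SOURCES))
```

The ordering step reflects the "preferably more than one" clause: prompts with two or more evidence sources sort ahead of single-source prompts.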
Source 1: Organic intent volume across major search and marketplace surfaces
We start with organic intent-volume data across the customer's category, covering general web search and, for ecommerce categories, marketplace-side search behavior. The exact data composition draws on multiple aggregate sources and partnerships and is tuned per category; we treat the supplier mix as part of our internal methodology.
Intent volume is the closest public proxy to real demand. A query asked at high volume in conventional search is highly likely to be asked in similar form in AI assistants. The phrasing differs - AI prompts are longer and more conversational - but the underlying intent is the same. We convert each intent-volume query into a prompt-style phrasing and add buyer context, comparative framing, and follow-up structure that AI assistants tend to elicit.
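The conversion step described above can be pictured with a toy template. This is a deliberately simplistic sketch under assumed templates; the real rewriting is tuned per category and is not disclosed here.

```python
def to_prompt(query: str, buyer_context: str) -> str:
    """Toy example: wrap a short search query in conversational buyer context,
    approximating the longer phrasing AI assistants tend to receive."""
    core = query.strip().rstrip("?").capitalize()
    return (f"I'm {buyer_context}. {core}? "
            "What should I consider before deciding?")
```

For example, the keyword-style query "best crm for startups" becomes a first-person, multi-clause prompt with explicit buyer context and a follow-up clause.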
Source 2: Trending consumer questions from public community platforms
Intent volume tells us what people search for. Communities tell us how they actually phrase questions when asking other humans, which is much closer to how they ask AI.
We continuously monitor leading public community platforms in the customer's category - the discussion forums and Q&A sites where buyers ask category questions of other humans. The platform mix varies by industry and is tuned per category as part of our internal methodology. Public community Q&A is consistently among the most-cited source layers in AI answers; multiple third-party citation studies place community-discussion sites in the top tier of ChatGPT and Perplexity citations.
Real AI-search prompts are typically much longer than typed search queries: multi-clause, conversational, and explicit about the underlying problem. Community-grounded prompts close that gap because people phrase questions to other humans in communities much the way they phrase them to AI assistants.
Source 3: AI query volume signals
Direct AI query volume data is not yet fully public, but it is no longer guesswork. We aggregate signals from public engine disclosures, third-party research, and SERP-side trigger data, drawing on multiple aggregate sources rather than any single supplier. The specific input mix is part of our internal methodology.
Some queries are far more common in AI assistants than in traditional search - especially long, conversational, multi-clause questions. These queries don't show up in keyword-planner-style tools but dominate AI engine traffic. SEO-keyword-only prompt sets miss them entirely.
Source 4: Live follow-up questions from answer engines
This is the source most platforms don't use, and it is where the GPS gets its edge. When a user asks ChatGPT, Rufus, Perplexity, or Google AI Overviews a question, the engines themselves often suggest follow-up questions. These follow-ups reflect the engine's own model of 'what users ask next': they expose the prompt journey from question to comparison to risk validation to recommendation, and they surface buyer-stage transitions that pure search data cannot see.
We capture follow-ups from the rendered consumer-surface output of each engine, from marketplace-side AI assistant follow-up Q&A, and from 'People also ask' expansions on supported SERP surfaces. See the companion Visibility Measurement methodology for the broader capture approach.
What a Golden Prompt Set looks like in practice
A typical GPS for a B2B SaaS category includes 30 to 150 prompts depending on plan, grouped into six prompt types. The mix below is a design target for a generic B2B SaaS category, not a survey result. Actual mix is tuned per industry and per customer.
| Prompt type | Target mix | Illustrative example | Typical sources |
|---|---|---|---|
| Category leadership | ~20% | Who are the top CRM platforms for B2B SaaS in 2026? | Intent volume + AI query signals |
| Comparison | ~25% | How does HubSpot compare to Salesforce for a 50-person team? | Engine follow-ups + community signals |
| Use case fit | ~20% | Best CRM for SaaS with a 6-month sales cycle | Community signals + intent volume |
| Risk / objection | ~10% | What are the downsides of HubSpot for enterprise sales teams? | Community signals |
| Implementation | ~10% | How long does Salesforce implementation take for a mid-market company? | Intent volume + community |
| Brand-specific | ~15% | Is [your brand] worth the price? | Customer-supplied + brand mentions |
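Turning a target mix like the one above into whole prompt counts for a given plan size is a small rounding problem. A minimal sketch, using largest-remainder rounding so the counts sum exactly to the plan's prompt budget (the percentages mirror the table; the function and variable names are illustrative, not SolCrys's implementation):

```python
def allocate(mix: dict, total: int) -> dict:
    """Distribute `total` prompt slots across types per the target mix,
    giving leftover slots to the largest fractional remainders."""
    raw = {k: share * total for k, share in mix.items()}
    counts = {k: int(v) for k, v in raw.items()}
    leftover = total - sum(counts.values())
    for k in sorted(raw, key=lambda k: raw[k] - counts[k], reverse=True)[:leftover]:
        counts[k] += 1
    return counts

# Target mix from the table above (generic B2B SaaS design target).
mix = {"category": 0.20, "comparison": 0.25, "use_case": 0.20,
       "risk": 0.10, "implementation": 0.10, "brand": 0.15}
```

With a 100-prompt plan the shares divide evenly; at other plan sizes (say 50 prompts) the remainder handling keeps the total exact while staying close to the target proportions.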
What metadata each prompt carries
The examples above use well-known public products purely for illustration; they are not taken from any specific customer's prompt set, and real customer prompts remain private to each workspace. Each prompt in the GPS carries metadata that lets customers prioritize:
- Projected query volume - estimated monthly run frequency on AI engines.
- Buyer journey stage - awareness, consideration, decision, or risk validation.
- Engine relevance - which AI engines this prompt is most likely to appear on.
- Source provenance - which of the four grounding sources it came from.
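The four metadata fields above can be pictured as a plain record. This is a hypothetical schema sketch: the page specifies the fields but not an exact data model, so the names and validation here are assumptions.

```python
from dataclasses import dataclass

# Buyer journey stages named in this document.
STAGES = {"awareness", "consideration", "decision", "risk_validation"}

@dataclass(frozen=True)
class GpsPrompt:
    text: str
    projected_monthly_volume: int  # estimated runs/month on AI engines
    buyer_stage: str               # one of STAGES
    engine_relevance: tuple        # e.g. ("chatgpt", "rufus")
    source_provenance: tuple       # subset of the four grounding sources

    def __post_init__(self):
        if self.buyer_stage not in STAGES:
            raise ValueError(f"unknown stage: {self.buyer_stage}")
```

A record like this is enough to sort a dashboard by projected volume, filter by journey stage, or audit provenance per prompt.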
How customer-supplied prompts blend in
The GPS is the foundation, but every customer knows their buyers in ways we don't. SolCrys reserves a meaningful share of prompt slots for customer-supplied prompts. For Free Audit reports, that means five SolCrys-generated prompts paired with five customer-entered prompts. On paid plans, customers can replace any GPS prompt with their own, and an in-app prompt suggestion panel recommends additions based on gaps in the current set, trending discussion patterns, and engine follow-ups discovered in the customer's tracked responses.
When a customer adds a prompt, we run a sanity check that flags off-topic prompts, brand-flattering bias, and duplicates. The customer can override any flag - it is guidance, not gating.
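The advisory nature of that check matters: it returns flags but never blocks. A sketch with toy heuristics (the real checks are not described in detail here, so the keyword rules below are stand-ins for illustration):

```python
import re

def sanity_flags(prompt: str, existing: list, category_terms: set,
                 brand: str) -> list:
    """Return advisory flags for a customer-supplied prompt.
    Guidance only: the caller may ignore every flag."""
    flags = []
    words = set(re.findall(r"[a-z0-9]+", prompt.lower()))
    if not words & category_terms:
        flags.append("off-topic")
    p = prompt.lower()
    if brand.lower() in p and any(w in p for w in ("best", "leading", "top")):
        flags.append("brand-flattering")
    if any(prompt.strip().lower() == e.strip().lower() for e in existing):
        flags.append("duplicate")
    return flags
```

Note that the function only reports; the override lives entirely with the customer, matching the "guidance, not gating" rule above.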
How often the GPS updates
Buyer questions shift, community discussion patterns spike, and engines change which follow-ups they suggest. The GPS is not a static template.
| Update cadence | What changes |
|---|---|
| Weekly | Engine follow-up questions re-captured from the consumer-surface rendering. |
| Monthly | Community trending question signals refreshed. |
| Quarterly | Intent volume baselines re-grounded; full GPS template review per category. |
| On-trigger | Major news event or new entrant in a category triggers an out-of-cycle GPS refresh within 48 hours. |
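The cadence table above can be expressed as a scheduler configuration. This is a hypothetical sketch: the intervals mirror the table, and the 48-hour out-of-cycle SLA handles trigger events; all names are illustrative.

```python
from datetime import timedelta

# Refresh intervals from the cadence table (quarterly approximated as 91 days).
REFRESH_SCHEDULE = {
    "engine_followups": timedelta(weeks=1),
    "community_trends": timedelta(days=30),
    "intent_baselines": timedelta(days=91),
    "template_review":  timedelta(days=91),
}
TRIGGER_SLA = timedelta(hours=48)  # major news event or new category entrant

def next_refresh(last_run, task, triggered=False):
    """Return when a task is next due; trigger events jump the queue."""
    if triggered:
        return last_run + TRIGGER_SLA
    return last_run + REFRESH_SCHEDULE[task]
```

A trigger event thus pulls any task forward to within 48 hours, regardless of its normal interval.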
What the GPS is not
We are explicit about what we do not do.
- Not a guarantee of revenue lift. A perfect prompt set does not fix bad content or weak brand presence; it just measures the right things.
- Not a replacement for human judgment. The customer-supplied share exists because real buyers know nuances we don't.
- Not pulled from a single LLM. We do not ask one model to 'generate 100 questions' and call it research. Every prompt has external evidence.
- Not infinitely customizable in real time. Templates update on the cadence above, not on every customer request, to maintain quality.
Why we publish this methodology
Most AEO platforms keep their prompt-selection methodology opaque. We publish ours because trust is the product - if you don't trust how prompts were chosen, every chart we show you is meaningless. Methodology should be falsifiable: a customer should be able to look at any prompt in their GPS and see why it was selected. And buyers deserve to evaluate before they sign, not after.
FAQ
How is the Golden Prompt Set different from just using SEO keywords?
SEO keywords measure what people type into a search box. AI prompts are longer, more conversational, and increasingly bypass conventional search engines entirely. SEO keywords are one input to the GPS; the other three (community questions, AI query volume signals, engine follow-ups) capture the parts of buyer behavior that SEO data does not see.
How many prompts should I be tracking?
It depends on your category and buyer complexity. A solo founder or small SMB usually surfaces the meaningful 80% of buyer journeys with 15 to 30 prompts. Mid-market B2B teams typically run 50 to 100 prompts to cover personas, ICPs, and use cases. Enterprise and agency accounts often track 200+ prompts across product lines, regions, or client brands.
Can I track prompts in languages other than English?
Currently the GPS supports English-language prompts across English-dominant markets (US, UK, Canada, Australia). Multi-language GPS support is on the roadmap; please contact sales for current localization availability.
Do you share my custom prompts with anyone?
No. Customer-supplied prompts are private to your workspace and never used to update other customers' GPS templates. Industry-template GPS updates are derived from public sources only.
How do you verify a prompt is actually asked by real buyers and not just an SEO artifact?
We require evidence from at least one of the four grounding sources before a prompt enters the GPS, and we prioritize prompts evidenced by two or more sources. Synthetic SEO-only prompts get the lowest priority weighting in your dashboard, so you can see at a glance which prompts have strong evidence and which are speculative.
What happens to my historical data if the GPS template updates?
Your tracked prompts do not change unless you accept the update. Template changes appear as suggestions in your suggestion panel; you decide which to adopt. Historical trend lines for prompts you keep tracking stay continuous, with no breaks or recalibration needed.
Related guides
Prompt Intelligence
AI Search Prompt Set
A practical guide to building an AI search prompt set across category, comparison, risk, implementation, competitor, and brand-specific prompts.
Measurement
AI Visibility Measurement Methodology
How SolCrys captures AI visibility data: dual-channel measurement that combines consumer-surface capture from ChatGPT, Google AI Overviews, and Rufus with API capture for agents - every data point traceable to a prompt, platform, and timestamp.
Buyer Guides & Platform Decisions
Evaluate an AEO Platform's Data Methodology
A buyer's checklist for evaluating AEO and AI visibility platforms on data methodology. Seven questions that distinguish vendors with auditable, fidelity-first measurement from vendors with synthetic dashboards.
Free AI visibility audit
Find out where your brand is missing, miscited, or misrepresented.
SolCrys maps high-intent prompts to mentions, citations, answer accuracy, and content gaps so your team can prioritize the next pages to ship.