How SolCrys Works
SolCrys FAQ - 19 questions buyers ask about how we work
This page answers the 19 questions prospects ask us most often during evaluation - how we build prompt sets, how we capture visibility data, how we handle engine non-determinism, what the Free Audit includes, and how we document methodology. We've tried to give concrete, falsifiable answers; wherever a question deserves more depth, we link to the full methodology page. If a question you have isn't here, contact us - we'd rather answer hard questions than have buyers guess.
Updated 2026-05-09
Questions this guide answers
- How does SolCrys build prompt sets?
- How does SolCrys measure AI visibility?
- Can I trust SolCrys's data?
- How does SolCrys handle AI engine randomness?
- What is SolCrys's Free AI Visibility Audit?
- Does SolCrys export data?
- Which AI engines does SolCrys cover?
How we build prompt sets
Five questions about prompt grounding, customer prompts, and update cadence.
How does SolCrys build prompt sets?
We build a Golden Prompt Set (GPS) for every customer category, grounded in four real-world signals: intent volume across major search and marketplace surfaces, trending questions from public community platforms, AI query volume signals, and live follow-up questions we capture from the rendered consumer surface of each engine. Every prompt carries a projected query volume so you can prioritize what your buyers actually ask. You can replace any GPS prompt with your own at any time - typically 40 to 50 percent of your working set ends up customer-supplied. Full methodology: see our Golden Prompt Set methodology page.
Why doesn't SolCrys just use SEO keywords as prompts?
SEO keywords measure what people type into a search box. AI prompts are longer, more conversational, and increasingly bypass traditional search entirely (asked directly to ChatGPT, Claude, Perplexity, or Rufus). Synthetic-keyword prompt sets miss the multi-clause, problem-stated questions buyers actually ask AI assistants ('I run a 50-person SaaS team and our current CRM doesn't handle our 6-month sales cycle well - what should I look at?'). Our community-grounded layer specifically closes that gap.
Are SolCrys prompts AI-generated, or sourced from real demand?
Sourced. Every prompt entering the GPS requires evidence from at least one of the four grounding sources, and we prioritize prompts evidenced by two or more sources. We don't ask an LLM to 'generate 100 questions a buyer might ask' and call it research - synthetic-only prompts get the lowest priority weight in your dashboard.
How often does SolCrys update the prompt set?
Engine follow-up questions refresh weekly; community trending signals refresh monthly; intent-volume baselines and full GPS templates re-ground quarterly. When your category sees a major news event or new entrant, an out-of-cycle refresh runs within 48 hours. You see new prompts as suggestions - your historical trend lines stay continuous unless you accept the update.
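The cadences above can be summarized as a simple configuration. The structure below is purely illustrative (the key names are our paraphrases, not SolCrys's actual schema); the cadence values come from the answer above.

```python
# Illustrative refresh-cadence configuration. Key names are assumptions;
# the cadence values are taken from the FAQ answer above.
REFRESH_CADENCE = {
    "engine_follow_up_questions": "weekly",
    "community_trending_signals": "monthly",
    "intent_volume_baselines": "quarterly",
    "full_gps_templates": "quarterly",
}

# Out-of-cycle refresh window after a major category event or new entrant.
OUT_OF_CYCLE_SLA_HOURS = 48
```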
Can I bring my own prompts to SolCrys?
Yes. You can replace any GPS prompt with your own at any time. We provide a suggestion panel that flags gaps in your set (for example, 'you don't have any risk/objection prompts') and recommends additions. Your custom prompts are private to your workspace and never used to update other customers' GPS templates.
How we measure visibility and why you can trust the data
Five questions about measurement methodology, consumer-surface vs. API capture, engine non-determinism, reproducibility, and model changes.
How does SolCrys measure visibility, and why should I trust the data?
We measure across two complementary channels because real users interact with AI both ways. For ChatGPT, Google AI Overviews, and Amazon Rufus, we capture from the rendered consumer surface - the same view your buyer sees when they use the product. For agents and deep-research tools, we query the engines' current default consumer-grade models with live grounding enabled, matching how agents call the API in production. Every data point is traceable to a specific prompt, platform, and timestamp. Full methodology: see our Visibility Measurement methodology page.
Why is SolCrys's consumer-surface capture better than just calling the API?
A consumer engine like chatgpt.com is not a thin shell over the public API. It uses a specific default model, enables tools and post-processing the API does not, and renders UI elements (product cards, follow-up suggestions, retailer placements) the API never returns. Capturing from the rendered consumer surface is the only way to measure what a buyer actually sees on these surfaces - Google AI Overviews has no public API at all.
AI engines give different answers each time. How does SolCrys handle that?
By repeated capture and rolling-window reporting. A single snapshot is a noisy data point - engines like ChatGPT are non-deterministic by design, sampling probabilistically so the conversation feels natural. We re-run prompts on a recurring cadence and report rolling 7/30/90-day windows; movement within historical variance for that engine is flagged as 'within noise' rather than reported as a real change.
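As a concrete illustration of this kind of noise gate, here is a minimal sketch using a rolling mean and a z-score rule. The 2-sigma threshold and the statistics used are illustrative assumptions, not SolCrys's actual statistical model.

```python
from statistics import mean, stdev

def classify_movement(history, latest, z_threshold=2.0):
    """Flag a new visibility score as 'within noise' or 'real change'
    by comparing it against the rolling history for the same prompt
    and engine. The 2-sigma rule is an illustrative assumption.
    """
    if len(history) < 2:
        return "insufficient history"
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        # Perfectly stable history: any deviation counts as movement.
        return "within noise" if latest == mu else "real change"
    z = abs(latest - mu) / sigma
    return "within noise" if z <= z_threshold else "real change"
```

For example, against a history of `[0.42, 0.45, 0.40, 0.44]`, a new reading of `0.43` sits well within historical variance, while `0.10` would be flagged as a real change.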
Can I reproduce a specific data point SolCrys shows me?
Yes. Every chart drills back to the exact prompt, engine, region, timestamp, and captured response. Copy the prompt text, set your browser to the same region, submit it on the engine yourself within a short time window - the response should match in substance (small text-level variation between snapshots is expected; engines are noisy). If a result ever looks wrong, request the source artifact and we'll show you the captured page.
What happens when an engine changes its default model?
We monitor provider announcements and model deprecations on an ongoing basis. When a default changes, we update tracking within days and disclose the change in our platform changelog so you can interpret any trend-line discontinuities. When the engine discloses it, the model identifier is part of every per-data-point audit trail.
Free Audit and onboarding
Three questions about evaluating us before signing and getting started.
How do I see SolCrys data on my own brand before committing?
Request our Free AI Visibility Audit. We run 10 prompts (5 from your industry's prompt template plus 5 you supply) across 3 engines - ChatGPT, Gemini, and Google AI Overviews - and deliver a PDF report plus dashboard view within 24 to 48 hours. It includes a sample Deep Analysis with the top three recommended actions for your brand.
How long does SolCrys onboarding take?
For a standard workspace, the GPS for your category is pre-built and you can review or replace prompts quickly. Our team uses onboarding to tune prompts, configure competitors, validate the engine allowlist, and confirm which buyer surfaces matter most for your category.
How often does SolCrys re-run prompts?
The Free AI Visibility Audit is a single-snapshot read. Ongoing monitoring uses a recurring cadence agreed during onboarding, with rolling windows that separate real movement from normal engine noise. We do not want buyers to confuse a one-time audit with trend-grade measurement.
Comparing SolCrys to other options
Three questions about how we position ourselves and what to ask any vendor.
How does SolCrys compare to other AI visibility platforms?
We don't write public competitor comparisons by name. Instead, here is the honest framework: there are six methodology questions every buyer should ask any vendor in this category - about prompt sources, consumer-surface vs. API capture, model-version disclosure, randomness handling, per-data-point reproducibility, and response to engine default changes. Our methodology pages answer all six. Run the same questions with anyone you're evaluating.
What questions should I ask any AEO vendor before signing?
Send these in writing and score each answer 0 / 1 / 2 (no answer / vague / specific):
1. Where do prompts come from - specific evidence sources, not 'AI-generated'?
2. Consumer-surface capture or API only?
3. Which model versions, with a public registry?
4. How do you handle engine non-determinism - multi-shot averaging, confidence bands?
5. Can I reproduce a specific data point with full metadata?
6. What happens when an engine changes its default model?
A vendor below 8/12 isn't yet ready for production work. We try to score well on all six and we expect you to score us against this same standard.
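The rubric above can be applied mechanically. A minimal sketch follows; the question keys are our paraphrases of the six questions, and the 8/12 cutoff comes from the text above.

```python
def score_vendor(answers):
    """Score a vendor's written answers on the six methodology questions.

    answers: dict mapping each question key to 0 (no answer),
    1 (vague), or 2 (specific). Max score is 12; per the rubric
    above, anything below 8 is not yet production-ready.
    """
    questions = [
        "prompt_sources",
        "consumer_surface_vs_api",
        "model_version_registry",
        "non_determinism_handling",
        "data_point_reproducibility",
        "default_model_change_response",
    ]
    total = sum(answers.get(q, 0) for q in questions)
    return total, total >= 8

# Example: specific on four questions, vague on two.
total, production_ready = score_vendor({
    "prompt_sources": 2,
    "consumer_surface_vs_api": 2,
    "model_version_registry": 1,
    "non_determinism_handling": 2,
    "data_point_reproducibility": 2,
    "default_model_change_response": 1,
})
# total == 10, production_ready == True
```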
What's the single biggest red flag when evaluating any AEO vendor?
Any answer that boils down to 'the platform decides which model gets used' - that means data is non-reproducible, and you can't verify a result you cannot replicate. A close second: refusal to walk you from any chart to the underlying captured response. Both correlate with vendors hoping buyers won't ask harder questions.
Technical and compliance
Three questions about engine coverage, ToS compliance, and data export.
Which AI engines does SolCrys cover?
Coverage depends on the buyer surfaces that matter for your category and on whether each engine is technically reliable to track. We prioritize major AI answer engines and retail assistants where customers have real exposure, and we confirm the current engine allowlist during evaluation. We don't silently add or remove engines without disclosure.
Is SolCrys's data capture compliant with engine terms of service?
Yes. We comply with each provider's terms and use the access methods each provider supports - public-content access patterns on consumer surfaces, official APIs for the API channel, and SERP capture infrastructure for surfaces with no official API (most notably Google AI Overviews and AI Mode). For enterprise customers with strict compliance requirements, we provide written documentation of the access methods used per engine under NDA.
Can I export my SolCrys data?
Yes, where export is part of the customer workflow. Structured response data can include prompt, engine, timestamp, response text, citations, and extracted entities. If export is important to your team, we confirm the current format and access method during evaluation.
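For illustration, a structured export row carrying the fields named above might look like the sketch below. The field names track the text; the actual export schema and format are not specified here and should be confirmed during evaluation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExportedResponse:
    """One captured response row. Field names follow the FAQ text;
    the real export schema is an assumption to confirm with the vendor."""
    prompt: str
    engine: str
    timestamp: str          # e.g. ISO 8601: "2026-05-09T14:02:00Z"
    response_text: str
    citations: List[str] = field(default_factory=list)
    extracted_entities: List[str] = field(default_factory=list)

# Hypothetical example row (all values are placeholders).
row = ExportedResponse(
    prompt="best CRM for a 6-month sales cycle",
    engine="chatgpt",
    timestamp="2026-05-09T14:02:00Z",
    response_text="...",
    citations=["https://example.com/review"],
    extracted_entities=["SolCrys"],
)
```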
Why we publish this FAQ
Most platforms in our category describe their methodology in marketing-speak. We publish this page for three reasons. First, trust requires specifics - 'comprehensive AI visibility tracking across all major engines' tells you nothing useful, while the specific answers above can be checked, audited, and replicated. Second, buyers should evaluate before signing, not after - so this page exists for you to read before talking to sales. Third, methodology should be falsifiable - every claim above is something you can audit, replay, and challenge, and we expect you to.
FAQ
Something I want to know isn't on this page. What now?
Contact us. We'd rather answer a hard question directly than have you infer the answer from marketing copy. We also update this page as buyers send us questions we should have anticipated.
Where can I see SolCrys's full methodology?
We publish three companion methodology pages: Golden Prompt Set methodology (how we choose prompts), Visibility Measurement methodology (how we capture data), and the AEO platform methodology checklist (the questions we'd want any buyer to send to any vendor). Each is linked from the relevant section above.
Do these answers reflect what SolCrys does today, or what it plans to do?
They reflect how we work today. If a capability is on our roadmap rather than in production, we say so explicitly instead of burying it in positioning copy.
Can I share this FAQ with my procurement or compliance team?
Yes - we wrote this page for that purpose. If your team has additional questions (audit logs, SOC 2, vendor risk forms, regional data residency), contact us and we'll send the relevant documentation.
Related guides
Golden Prompt Set Methodology
We ground every AEO prompt set on real intent volume, public community questions, AI query signals, and live engine follow-ups - not synthetic keyword lists. Here's how we build it.
AI Visibility Measurement Methodology
How we capture your AI visibility data: dual-channel measurement combining consumer-surface capture from ChatGPT, Google AI Overviews, and Rufus with API capture for agents. Every data point traces back to a prompt, engine, region, and timestamp.
Evaluate an AEO Platform's Data Methodology
Six questions every buyer should send to every AEO platform - including us - before signing. We designed SolCrys to answer all six; here's how, and what to listen for from anyone you're evaluating.
Free AI visibility audit
Find out where your brand is missing, miscited, or misrepresented.
SolCrys maps high-intent prompts to mentions, citations, answer accuracy, and content gaps so your team can prioritize the next pages to ship.