How Kylian AI Used Analyze to Drive 809 Visits From AI SEO
Written by Ernest Bogore, CEO
Reviewed by Ibrahim Litinine, Content Marketing Expert

Before using Analyze, the Kylian AI team was doing what most SaaS marketers would call “the right thing.” They leaned hard into top-of-the-funnel SEO. Their blog was filled with broad, educational content like “20 French words for gratefulness” or “Comment dire ‘je suis fatigué’ en espagnol” (“How to say ‘I’m tired’ in Spanish”). That approach wasn’t wrong—it was actually working. At the time, they were pulling close to 12,000 visits a month from all channels combined.
The problem was that their AI SEO was basically nonexistent. And if you understand how answer engines actually work, it’s clear why.

Here’s the thing: large language models don’t drive traffic for general how-to questions. If someone asks ChatGPT “how do I apologize in Spanish?” the model doesn’t cite a blog—it just answers the question itself. No brand gets mentioned, no link gets dropped, no referral traffic shows up in your analytics. The whole point is to resolve informational intent without sending people away.
There are a couple of exceptions. Google’s AI Overviews and Perplexity do sometimes cite sources for top-of-funnel queries. But even when they do, it’s usually the big incumbents—Duolingo, Babbel, or language publishers like PONS—that win those citations. Smaller players like Kylian AI stood little chance of winning those broad, educational spots.

And that’s when the team realized that the real opportunity isn’t at the top of the funnel at all. It’s when answer engines mention your brand in response to solution-seeking queries. Think about prompts like “best online English courses” or “apps like Duolingo with speaking practice.”

These aren’t curiosity-driven—they’re decision-driven. This is where the traffic carries buying intent, the AI equivalent of bottom-of-funnel or Pain Point SEO.
So how do you get LLMs to recommend your brand? And how did Kylian AI get its first 200 visits from AI answer engines?
It started with teaching the models how to talk about the Kylian AI brand.
If you don’t define your own category in plain, citable terms, the model will either skip you or describe you poorly. The Kylian AI team realized this early and rewrote the product positioning into short, factual sentences that an LLM could easily lift: “Kylian teaches group or private Spanish classes at less than $5 per lesson, anytime, anywhere.”

That clarity made Kylian legible to the machines. Within weeks of shipping those changes across their homepage and key product pages, Analyze showed the first blips of AI traffic coming in. By April, ChatGPT and Perplexity were sending trickles of referrals—14 visits here, 17 visits there.

From there, the playbook shifted. Instead of cranking out educational posts like “20 French words for gratefulness,” the team doubled down on evaluative intent. They published content like “Best AI English tutors for adults (2025),” “Kylian AI vs Duolingo,” and “Who Kylian AI is for (and not for).”
These blog posts spoke directly to people making decisions, and they gave LLMs clean, quotable evidence to reference.
Analyze data confirmed another key insight: facts fuel citations. So Kylian updated every high-intent page with numbers the models could reuse—lesson lengths, CEFR alignment, speaking minutes per session, testimonials, pricing anchors. Structured headings, tables, and FAQs made it easier for engines like Perplexity and Copilot to parse and link back.
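To make this concrete, here is a minimal sketch of the kind of schema.org FAQPage JSON-LD markup that makes facts on a page easy for answer engines to parse. The helper function and the sample questions and answers are illustrative, not Kylian’s actual markup:

```python
import json

def faq_jsonld(qa_pairs):
    """Build a schema.org FAQPage JSON-LD block from (question, answer) pairs."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in qa_pairs
        ],
    }, indent=2)

# Hypothetical facts of the kind Kylian placed on high-intent pages
markup = faq_jsonld([
    ("How much do Kylian AI lessons cost?",
     "Group or private Spanish classes start at under $5 per lesson."),
    ("How long is a lesson?",
     "Each session includes dedicated speaking-practice minutes."),
])
print(markup)
```

The JSON-LD output would normally be embedded in a `<script type="application/ld+json">` tag on the page, where crawlers and answer engines can read the question-and-answer pairs directly instead of inferring them from prose.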
And because not every mention comes from your own site, Kylian seeded third-party credibility too. They worked to appear on comparison lists and editorial reviews that LLMs were already citing. That way, even if ChatGPT didn’t surface their domain directly, the brand still showed up inside roundups that did.

Finally, Analyze closed the loop. With its referral tracking, the team could see exactly which models were sending traffic, which pages were getting cited, and which sessions converted. Monthly audits revealed where Kylian was gaining ground and where gaps remained. Each tweak made them more legible to the models, which in turn made the models more likely to recommend them again.
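Referral tracking of this kind typically works by matching the HTTP referrer hostname against a list of known AI-engine domains. A minimal sketch, with an illustrative domain list (not Analyze’s actual ruleset):

```python
from urllib.parse import urlparse

# Referrer hostnames commonly associated with AI answer engines
# (illustrative list, not Analyze's actual ruleset)
AI_REFERRERS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
    "www.perplexity.ai": "Perplexity",
    "copilot.microsoft.com": "Copilot",
    "gemini.google.com": "Gemini",
}

def classify_referrer(referrer_url):
    """Return the AI engine name for a referrer URL, or None if not AI traffic."""
    host = urlparse(referrer_url).netloc.lower()
    return AI_REFERRERS.get(host)

print(classify_referrer("https://chatgpt.com/"))           # ChatGPT
print(classify_referrer("https://www.google.com/search"))  # None
```

Bucketing sessions this way is what lets a dashboard break out “AI referrals” as a distinct channel and attribute conversions per engine and per landing page.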
The result? Kylian went from invisible in March to 200+ monthly AI referrals by May. What started as chance mentions became a repeatable engine of bottom-of-funnel traffic. Instead of chasing volume at the top—where answer engines will never send clicks—they built durable visibility at the bottom, where every mention carries buying intent.
Doubling down after the breakthrough—and growing from 200 to 809 AI referrals, while stacking up conversions
Once that proof of concept was in place, the question shifted from “can AI SEO work?” to “how do we scale this into a repeatable channel?” That’s where Analyze’s research came in. We shared insights from our studies of how different platforms rank and cite sources—like how ChatGPT prefers clear entity definitions, how Perplexity weighs freshness and third-party validation, and how Copilot prioritizes structured comparisons. With that playbook in hand, Kylian AI doubled down, refining existing content and aligning it more tightly to conversion outcomes rather than pure visibility.
The impact was immediate. ChatGPT emerged as the dominant driver, sending 461 visits in just 30 days. Copilot surged as well, up +58.6% month-over-month, showing the value of optimizing for Microsoft’s ecosystem. At the same time, referrals from Perplexity (-24.1%) and Qwant (-35%) slipped, underscoring why no single referrer can be relied upon in AI search. Analyze’s analytics made that distribution crystal clear and gave Kylian the confidence to spread bets across multiple platforms.

The Analyze dashboard also showed exactly which pages were converting. “Best Online English Courses” drove 24 sessions and 2 conversions—a conversion rate of 8.3%, far above the typical blog benchmark of 1–2%. “300 Most Common English Words” brought 17 sessions but no conversions, confirming its role as more informational. “Websites to Learn English” recorded 13 sessions and 1 conversion, a 7.7% rate—again several times higher than standard content benchmarks.
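The per-page rates above are simple session-to-conversion ratios, which can be checked directly:

```python
def conversion_rate(conversions, sessions):
    """Conversion rate as a percentage, rounded to one decimal place."""
    return round(100 * conversions / sessions, 1)

print(conversion_rate(2, 24))  # 8.3  ("Best Online English Courses")
print(conversion_rate(1, 13))  # 7.7  ("Websites to Learn English")
```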

By August, the compounding effect was undeniable. Kylian AI’s AI-sourced sessions had scaled from 200 in May to 697 in August, with the latest month up 15.4% over the prior one. AI referrals now made up 3% of all site traffic. And the growth was resilient—spread across ChatGPT, Copilot, Claude, Gemini, and more—rather than concentrated in a single platform.

What’s next for Kylian AI?
The next chapter is about keeping the traffic flowing and sharpening it into a consistent, scalable channel. The Kylian team’s goal now is twofold:
1. Keep AI-driven conversions in the 7–10% range. Early data showed that AI referrals convert far higher than typical blog traffic (closer to landing page benchmarks than educational posts). Maintaining that performance while scaling volume will be key.
2. Sustain ~15% month-over-month growth in LLM traffic. Growing from 200 to 809 sessions proved the channel’s momentum. The focus now is on sustaining that compounding effect so AI referrals become a predictable share of acquisition.
To hit those targets, Analyze’s team is pointing Kylian toward the next level of optimization—shifting from being cited to outranking competitors inside AI answers. That means:
Owning the comparison queries. Competitors like Duolingo or Babbel dominate many “best” lists. Kylian needs head-to-head pages that highlight clear differentiators so models have reasons to elevate them over incumbents.
Tightening entity authority. LLMs rely on consistent, factual signals. Building authority pages around pricing, lesson formats, and outcomes makes Kylian harder to ignore in evaluative prompts.
Expanding third-party validation. Getting cited in editorial reviews and comparison content adds weight, since engines like Perplexity often pull from aggregated sources.
Freshness and iteration. Updating proof points—student outcomes, lesson stats, pricing updates—every quarter gives models current data to cite, keeping Kylian competitive against bigger brands.
With Analyze as both the measurement system and the strategy partner, that next 15% MoM growth curve looks less like a gamble and more like a plan.