A Thought on AI and Conversion Rate Optimization
I attended Conversion Boost in Copenhagen on March 17, 2026. During the event, Ole (host and organizer) asked speakers directly how they use AI in practice.

Some of the responses were skeptical, especially around hallucinations and reliability. I understand that concern. Hallucinations have been real, and many teams have experienced low-quality output from generic AI usage.
My view is that this is less a model question and more a systems question.
AI in CRO works best when the system is structured
If AI is used with weak context, it produces weak guidance. If AI is used with structured inputs, clear instructions, and quality controls, it becomes much more useful for CRO work.
For me, this has shifted the focus from prompt quality to workflow quality.
The setup we have built at Umbraco
In our setup, Umbraco is where the website lives, and Airtable is where all relevant data lives together so we can query it.
In Airtable, we collect and connect:
- Call transcripts
- Customer reviews (for example Trustpilot and G2)
- Market signals from sales and customer conversations
- CRO learnings from previous tests
- Hypotheses with outcomes and supporting evidence
- Page-level context used in content and optimization work
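To make the idea of connecting these records concrete, here is a minimal sketch in Python. The record shapes and field names are illustrative assumptions, not our actual Airtable schema; the point is that hypotheses are linked to the evidence behind them, so they can be filtered and queried.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str    # e.g. "Trustpilot review", "call transcript"
    excerpt: str   # the quote or signal itself
    page: str      # page-level context, e.g. "/pricing"

@dataclass
class Hypothesis:
    statement: str
    outcome: str   # e.g. "win", "loss", "inconclusive", "untested"
    evidence: list[Evidence] = field(default_factory=list)

def supported(hypotheses: list[Hypothesis], min_evidence: int = 2) -> list[Hypothesis]:
    """Keep only hypotheses backed by at least `min_evidence` pieces of evidence."""
    return [h for h in hypotheses if len(h.evidence) >= min_evidence]
```

In practice the linking happens inside Airtable via linked record fields, but the query logic is the same: a hypothesis with no attached evidence should not make it into a test backlog.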
We sync this data with Zapier in our current setup; tools like Make or n8n would work just as well.
Why this matters for analysis quality
With that foundation, I run agents in Claude or Codex to analyze across:
- Qualitative customer data
- Web analytics (Matomo or GA4)
- Paid media performance (LinkedIn and other channels)
- Experiment history and conversion patterns
This makes it easier to identify patterns, prioritize better hypotheses, and move faster from insight to action.
If you want the operational detail behind this approach, I have shared that in AI Agents for Marketing: Why Prompting Fails and Workflows Win.
On hallucinations and bias
Hallucinations and confirmation bias are still risks, especially in open-ended chat usage. But in agent-style workflows, those risks can be reduced through:
- Clear task scope
- Explicit evidence requirements
- Defined process steps
- QA checkpoints before decisions are used
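The steps above can be sketched as a simple QA checkpoint. This is a hypothetical illustration, not our production tooling: it assumes agent output is one claim per line and that evidence records carry IDs like [EV-12], then flags any claim that cites no evidence or cites an unknown record before the output is used for decisions.

```python
import re

def qa_check(agent_output: str, known_evidence_ids: set[str]) -> list[str]:
    """Flag claims that do not cite a known evidence record (IDs like [EV-12])."""
    problems = []
    for line in agent_output.splitlines():
        claim = line.strip()
        if not claim:
            continue
        cited = set(re.findall(r"\[EV-\d+\]", claim))
        if not cited:
            problems.append(f"no evidence cited: {claim}")
        elif not cited <= known_evidence_ids:
            problems.append(f"unknown evidence id: {claim}")
    return problems
```

A checkpoint like this does not eliminate hallucinations, but it turns "trust the output" into "trust the output that survives an evidence check", which is the shift from prompt quality to workflow quality.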
So I am not claiming that AI is always reliable now. I am saying that reliability improves meaningfully when the workflow is designed for it.
Closing thought
For CRO teams, AI is most valuable as an analysis and decision-support layer, not just a content generator.
My perspective after Conversion Boost is simple: the upside comes from combining AI with structured data, operational discipline, and clear safeguards.