Lessons from the Trenches: Building a Travel Insurance AI with Claude

I recently spent several weeks building CarriffTravelAgent — a conversational, FCA-compliant travel insurance platform — with Claude as my primary AI pair programmer (and Cursor, and occasionally ChatGPT when I needed a second opinion). This is not the polished "look what AI can do" article. This is the honest version.

Start with a Plan — and a Justification to Build It

Before writing a line of code, I used Claude to help produce a project plan and kickoff document. Not a traditional business plan — more of an internal justification for what I was building, why it was worth building, and what success would look like. It covered scope, the regulatory constraints (FCA), target users, the tech stack, and a phased delivery approach.

It sounds like overhead. It wasn't. Three weeks in, when the scope started to blur and new ideas started appearing, I had a document to go back to. It kept the project grounded and gave the whole thing a backbone. Claude produced the first draft in minutes. I shaped it into something that actually reflected what I was doing.

Lesson: Get Claude to help you write a project plan and kickoff justification before you build anything. It will save you from yourself later.

Do You Have Enough Data to Train Your AI? (It Might Not Matter Yet)

I designed the system around Google Document AI to parse travel itinerary PDFs. The idea was solid: upload your booking confirmation, the AI extracts your trip details, the journey kicks off. The reality was considerably more humbling.

Training Document AI requires labelled data. Good labelled data. In quantity. When you're building a new product with mock documents and a handful of real PDFs, you get results that are creative. The model confidently extracts the wrong dates, misses traveller names, and occasionally decides that "Heathrow" is a person.

I ended up switching to a Gemini-powered parser as the default — no training required, works well out of the box — with Document AI kept as a future upgrade path for when there's real document volume. Know your MVP from your end state.
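That upgrade path is easiest to keep open when the parser sits behind a single interface. A minimal sketch of the seam, under the assumption that the app only ever consumes structured trip details — the names (`TripDetails`, `GeminiItineraryParser`) are illustrative, not the actual CarriffTravelAgent code:

```typescript
// Illustrative parser seam: the app depends on one interface, with a
// Gemini-backed parser as today's default and a trained Document AI
// processor as a drop-in replacement once real document volume exists.

interface TripDetails {
  destination: string;
  startDate: string; // ISO date
  endDate: string;
  travellers: string[];
}

interface ItineraryParser {
  parse(pdfBytes: Uint8Array): Promise<TripDetails>;
}

// MVP: prompt a general-purpose model; no training data required.
class GeminiItineraryParser implements ItineraryParser {
  async parse(pdfBytes: Uint8Array): Promise<TripDetails> {
    // The real version would send the PDF to the Gemini API with an
    // extraction prompt and validate the structured response.
    throw new Error("not wired up in this sketch");
  }
}

// End state, same contract:
// class DocumentAiItineraryParser implements ItineraryParser { ... }

function createParser(): ItineraryParser {
  return new GeminiItineraryParser();
}
```

The point is that nothing upstream of `createParser` cares which engine did the extraction, so "future upgrade path" stays a one-file change rather than a refactor.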

Lesson: Don't build on ML models you're training yourself unless you have production data to train them with. Mock data will not save you. Get something working first.

GCP Is Not a Dev Environment

I went straight into Google Cloud Run for the .NET backend. GCP is great for production. For development? You're paying for Cloud SQL instances that sit idle most of the day, worrying about Terraform plan drift, and burning time on IAM permissions when you should be building features.

Vercel and SQLite exist for a reason. The frontend lived on Vercel from day one and cost next to nothing. The backend should have done the same — local, cheap, fast to iterate — and graduated to GCP only once the architecture was stable.

Lesson: Start cheap. Vercel, Railway, SQLite, .env files. Move to proper infrastructure once you know what you're actually building.
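The cheap-first setup can be as small as one function that picks the database connection from the environment, defaulting to a local SQLite file. A sketch — the variable names are assumptions, not the project's actual configuration:

```typescript
// Dev defaults to a zero-cost local SQLite file; production supplies a
// managed connection string (e.g. Cloud SQL) via the environment.
function databaseUrl(env: Record<string, string | undefined>): string {
  if (env.DATABASE_URL) return env.DATABASE_URL; // production override
  return "file:./dev.db"; // local development default, no idle cost
}
```

Graduating to GCP then means setting one environment variable, not rewriting data access.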

Use AI to Bridge the Gap While You Build Yours

The product needed to be demonstrable before my custom layers were ready. The solution was pragmatic: use OpenAI's GPT-4.1-mini for the conversational engine while the more specialised pieces matured. I built a provider abstraction from day one — swap between OpenAI, Vertex AI, Anthropic, or a scripted mock with a single environment variable.

This is probably the best architectural decision I made. The demo worked from week two. The switching cost to upgrade AI providers later is zero.

Lesson: Build AI provider switching into your architecture from the start. Use the best available model to get a working demo; optimise later.
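The abstraction itself doesn't need to be clever. A minimal sketch of the pattern — the interface and class names are hypothetical, and only the scripted mock is implemented here; real adapters would wrap the OpenAI, Vertex AI, or Anthropic SDKs:

```typescript
// One interface for every conversational engine.
interface ChatProvider {
  complete(messages: { role: string; content: string }[]): Promise<string>;
}

// Scripted mock: deterministic replies for demos and tests.
class MockProvider implements ChatProvider {
  private turn = 0;
  constructor(private script: string[]) {}
  async complete(): Promise<string> {
    const reply = this.script[Math.min(this.turn, this.script.length - 1)];
    this.turn++;
    return reply;
  }
}

// A single environment variable decides which provider the app talks to.
function createProvider(env: Record<string, string | undefined>): ChatProvider {
  switch (env.CHAT_PROVIDER ?? "mock") {
    case "mock":
      return new MockProvider(["Hello! Where are you travelling?"]);
    // case "openai":    return new OpenAiProvider(env.OPENAI_API_KEY!);
    // case "vertex":    return new VertexProvider(/* ... */);
    // case "anthropic": return new AnthropicProvider(/* ... */);
    default:
      throw new Error(`Unknown CHAT_PROVIDER: ${env.CHAT_PROVIDER}`);
  }
}
```

Because the rest of the app only ever sees `ChatProvider`, swapping GPT-4.1-mini for something else later is a configuration change, not a code change.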

Protect Your API Credits Before Anyone Else Does

Demo site or not, if your API key is live, your budget is at risk. I learned quickly that an unprotected chat endpoint is an open invitation — not necessarily from bad actors, but from curious people hitting refresh, automated crawlers, and your own testing loops getting out of hand.

I built per-user rate limits on the chat API, capped the number of messages per session, and set hard token limits on system prompts from the start. These aren't sophisticated protections — they're basic hygiene. But they're the kind of thing that's trivially easy to add early and genuinely painful to retrofit.

Lesson: Treat API cost protection as a day-one requirement, not a post-launch patch. Rate limit your chat endpoints, cap session length, and watch your usage dashboard.
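Both protections fit in a few dozen lines. A minimal in-memory sketch of the two checks described above — a fixed-window per-user rate limit and a per-session message cap. In production this state would live in Redis (in-memory state resets on every serverless cold start), and the cap of 40 messages is an assumption for illustration:

```typescript
// Fixed-window rate limiter: at most `limit` requests per `windowMs`.
type Window = { start: number; count: number };

class RateLimiter {
  private windows = new Map<string, Window>();
  constructor(private limit: number, private windowMs: number) {}

  allow(userId: string, now: number = Date.now()): boolean {
    const w = this.windows.get(userId);
    if (!w || now - w.start >= this.windowMs) {
      // New user or expired window: start a fresh window.
      this.windows.set(userId, { start: now, count: 1 });
      return true;
    }
    if (w.count >= this.limit) return false; // over the limit
    w.count++;
    return true;
  }
}

const MAX_MESSAGES_PER_SESSION = 40; // hard cap (illustrative value)

// Gate every chat message on both checks before spending any tokens.
function canAcceptMessage(
  limiter: RateLimiter,
  userId: string,
  sessionMessageCount: number
): boolean {
  if (sessionMessageCount >= MAX_MESSAGES_PER_SESSION) return false;
  return limiter.allow(userId);
}
```

The crucial property is that the check runs before the model call, so a runaway loop or a bored refresher costs you nothing past the cap.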

Complexity Arrives Without Knocking

An 18-step conversational journey — and a lot more to come. FCA compliance gap checks. Audit trails. PDF parsing. Suitability scoring. Vulnerability detection. Stripe integration. Terraform environments. Each individually manageable. Together, they silently devour your mental model of what's actually been built.

Around week three, I called a time-out. Not to add features — just to read. Architecture diagrams. The current state of the codebase. The CLAUDE.md file. I'd been so deep in individual pieces that I'd lost the thread of how they connected.

Claude is excellent at producing architecture documentation and context files that let you — or the AI — dive back into a complex system days later without losing an hour reorientating. Having this as a living document, not an afterthought, saved me multiple times.

Lesson: Every project needs a moment where you stop building and just look at what you've built. Invest in architecture documentation early, and keep it current.

The Spinning Cursor Is Productive Time

Claude takes time to think. Cursor takes time to generate. Large responses take time to stream. Early on, I'd just wait.

I stopped waiting. Now when Claude is mid-generation on a complex service class, I'm reviewing the previous output, writing tests, updating documentation — or, genuinely, doing the dishes. Context-switching at the human level while the AI handles the mechanical work turned out to be a meaningful productivity unlock.

The same applies to AI-generated content like this article. AI-written prose is recognisable: slightly too structured, slightly too even-toned, lacking the texture of actual experience. The trick is using AI to generate the skeleton, then filling in the honest bits yourself — while it's working on your next task.

Lesson: Plan what you'll do while Claude is thinking. Your time is the constraint, not the AI's.

When Claude Recommends a Tool

I hit a wall with rate limiting. The Next.js frontend ran on Vercel serverless. The .NET backend was on GCP Cloud Run. Vercel's functions can't reach a GCP VPC Redis instance without significant networking overhead.

Claude suggested Upstash — a serverless Redis provider with a free tier, no VPC required, works with Vercel out of the box — and walked through the setup step by step. Twenty minutes later, rate limiting was done. This kind of "I know a tool for that" moment is where Claude genuinely earns its keep. Not just what to build, but what to reach for.

The same goes for CLI commands. gcloud, terraform, docker — the flags, the syntax, the order of arguments. Nobody remembers all of it. When you can't remember the exact command to grant bucket permissions or apply a CORS policy, just ask. Claude knows them all.

The Token Incident

Upstash gives you two tokens: read-only and read-write. At some point, I copied the wrong one and spent considerably longer than reasonable debugging why the rate limit counter wasn't incrementing.

Your AI will probably work out the problem before you do — debugging is so much easier now.

Closing

Building with Claude — and Cursor, and the occasional ChatGPT for a second opinion — is genuinely different from building alone. The speed of scaffolding, the quality of architecture conversations, the willingness to generate test cases for edge cases I hadn't considered: it compresses what would take a week into a day.

But it still takes weeks. Because the hard parts aren't the code. They're the decisions: what to build, when to stop, where to invest in proper infrastructure. Those decisions are still yours.

I'll publish a deeper technical dive on the architecture soon. For now — go build something. And plan something to do while the AI is thinking.
