Accelerating Agency Delivery with Fantasy
How FM and Fantasy used an intentional "Human-in-the-Loop" collaboration model to deliver a massive multi-brand consolidation project in just six weeks.
In the current hype cycle of artificial intelligence, the narrative often centers on "autonomous agents" replacing developers. But when our partners at Fantasy approached us with a complex challenge for a global consumer care leader, we proved a different thesis: AI is most powerful not when it works alone, but when it is closely guided and informed by human expertise.
The brief was ambitious: unify disparate data streams and multiple sub-brands into a single, cohesive consumer destination. With a rigid six-week timeline from inception to MVP, traditional workflows were unlikely to succeed.
To meet this deadline without sacrificing the "best-in-class" experience Fantasy is known for, FM deployed a revolutionary workflow. We moved beyond the idea of "auto-pilot" to a collaborative Human + AI model, where AI provides velocity, and human experts provide direction, consistency, and architectural integrity.
The Challenge: Unification at Speed
The client needed to merge the digital presence of multiple distinct brands into a single, centralized web portal. The technical hurdles were significant:
- Complex Data Integration: Combining data streams from three different brands using three different data schemas into a centralized model to power a location-based provider finder.
- Legacy vs. Modern: Migrating to a modern stack (Next.js, Payload CMS) on a tight deadline.
- High Fidelity: The need to maintain rigorous design standards and performance metrics despite the tight timeline.
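To make the data-integration challenge concrete, the sketch below shows the general shape of folding three brand feeds with different schemas into one model for a provider finder. All type and field names here are illustrative assumptions, not the client's actual schemas:

```typescript
// Hypothetical raw records from three brand feeds (field names are illustrative).
type BrandARecord = { providerName: string; zip: string };
type BrandBRecord = { name: string; location: { postalCode: string } };
type BrandCRecord = { title: string; postcode: string };

// Unified model that powers the location-based provider finder.
interface Provider {
  name: string;
  postalCode: string;
  sourceBrand: "A" | "B" | "C";
}

// One small adapter per source schema keeps the mapping explicit and testable.
function fromBrandA(r: BrandARecord): Provider {
  return { name: r.providerName, postalCode: r.zip, sourceBrand: "A" };
}
function fromBrandB(r: BrandBRecord): Provider {
  return { name: r.name, postalCode: r.location.postalCode, sourceBrand: "B" };
}
function fromBrandC(r: BrandCRecord): Provider {
  return { name: r.title, postalCode: r.postcode, sourceBrand: "C" };
}

// Merge all feeds into one list, de-duplicated by name + postal code.
function unify(
  a: BrandARecord[],
  b: BrandBRecord[],
  c: BrandCRecord[]
): Provider[] {
  const all = [...a.map(fromBrandA), ...b.map(fromBrandB), ...c.map(fromBrandC)];
  const seen = new Map<string, Provider>();
  for (const p of all) seen.set(`${p.name}|${p.postalCode}`, p);
  return [...seen.values()];
}
```

The adapter-per-schema pattern is what made the "first pass" safe to delegate: the AI could draft each adapter, while a human reviewed the one merge function where correctness mattered most.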
The Philosophy: AI Output, Human Outcome
Our approach wasn't about letting AI run wild. It was about Intentional Task Parallelization. We structured the work so that AI agents handled the "first pass" of component generation and data migration, while a lean human team focused on strategy and refinement. As noted in our findings, "AI can deliver useful code and structure quickly, but it needs steady guidance to stay consistent."
The Human-Driven "Context" Layer
AI models often fail because they lack context. We solved this with Context7, a Model Context Protocol (MCP) server. While the AI did the searching, humans defined the boundaries.
- Bridging the Knowledge Gap: Because standard models lacked familiarity with the newest Payload CMS 3.0 patterns, we used Context7 to feed the agents current documentation.
- Prompt Engineering as Development: High-level human involvement was required for prompt definition and architecture review. We found that reuse of well-structured prompts in GitHub Actions and Claude Code led to consistently better results than ad-hoc requests.
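For readers unfamiliar with MCP, wiring a server like Context7 into Claude Code is typically a one-entry project config. The snippet below follows Context7's published setup; treat it as a sketch rather than our exact configuration:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```

With this in place (conventionally a `.mcp.json` at the repository root), the agent can pull current Payload CMS 3.0 documentation on demand instead of relying on stale training data.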
The "Glass Box" Reality: Why Autonomy Failed
We learned quickly that AI agents were not fully autonomous. Without human intervention, the AI’s "literal" interpretation of designs caused issues:
- Literal Translations: The AI often interpreted hidden Figma layers or unnecessary rotations as essential code, creating bloated markup. A human developer had to intervene to clean and consolidate these styles.
- Style Consistency: The AI struggled to create a cohesive typeface system, often duplicating CSS classes instead of reusing them. Human experts were essential to manually consolidate these styles and ensure the "pixel-perfect" finish Fantasy demands.
Scaling output without scaling understanding leads to rework, bugs, and frustration:
- Left to its own devices, an agent will make hidden assumptions and decisions while writing code. We evolved a workflow that surfaced those decisions and assumptions up front, persisted them in our issue tracker, and used any human edits to drive the next step. This significantly reduced the technical design and coding errors that are the main cause of expensive, inefficient re-work.
The Workflow: Hybrid Cloud & Local Control
To balance speed with control, we split the workflow based on the level of judgment required:
The Cloud Stream (High Volume, Low Risk):
We used Claude Code GitHub Actions for repetitive tasks. This allowed us to run background development work, but even here, humans remained in the loop—developers could review and trigger these tasks directly from mobile devices via the GitHub app, ensuring oversight even on the go.
The Local Stream (High Judgment):
For complex architectural decisions, we avoided automation. Developers used the Claude Code CLI locally. Initially we used interactive prompting and debugging where human intuition was required to guide the AI through nuanced logic. Eventually we progressed to repeated prompts and consistent workflows to create more predictability and reduce cognitive overhead and agent mistakes.
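As an illustration of how we made prompting repeatable, Claude Code lets a team check reusable prompts into `.claude/commands/` as custom slash commands. The command name and content below are hypothetical, but the mechanism is the one we leaned on:

```markdown
<!-- .claude/commands/build-component.md (hypothetical example) -->
Implement the component described in issue $ARGUMENTS.

Before writing code:
1. List every assumption and design decision you are making, and post them
   to the issue for human review.
2. Reuse existing typography and spacing utilities; do not create new CSS
   classes if an equivalent already exists.
3. Follow the Payload CMS 3.0 patterns available via Context7.
```

Because the same checked-in prompt runs every time, output quality stops depending on whoever typed the ad-hoc request that day.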
The "No-Fly" Zones:
We intentionally excluded certain areas from AI workflows. Content edits and product QA remained manual because human review was simply faster and more reliable than current AI QA tools for a fast-changing, early-stage product.
The Infrastructure: Safety Rails for AI
To allow for this rapid iteration, we needed a safety net. We utilized Vercel and Neon to create isolated preview environments for every single pull request.
- Isolated Database Branches: Neon automatically assigned a unique database branch to each preview.
- Risk-Free Iteration: This allowed the AI to attempt migrations or configuration updates in a sandbox. If the AI "broke" the database, it broke only its own branch, never production data or another agent's branch.
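In our setup the Vercel + Neon integration created these branches automatically, but the underlying mechanism can be sketched against Neon's public REST API. The project ID, key handling, and naming scheme below are assumptions for illustration:

```typescript
// Pure helper: a deterministic branch name per pull request.
function branchNameFor(prNumber: number): string {
  return `preview/pr-${prNumber}`;
}

// Sketch: create an isolated Neon database branch for a PR preview.
// Endpoint shape follows Neon's v2 API; parameters here are illustrative.
async function createPreviewBranch(
  projectId: string,
  apiKey: string,
  prNumber: number
): Promise<string> {
  const name = branchNameFor(prNumber);
  const res = await fetch(
    `https://console.neon.tech/api/v2/projects/${projectId}/branches`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      // Branching is copy-on-write, so each preview starts from current
      // data without ever touching the parent branch.
      body: JSON.stringify({ branch: { name } }),
    }
  );
  if (!res.ok) throw new Error(`Neon branch creation failed: ${res.status}`);
  return name;
}
```

Tearing the branch down when the PR closes keeps the sandbox count bounded; the integration handles that cleanup as well.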
The Results
By keeping humans firmly in the driver's seat while using AI as a high-powered engine, we achieved remarkable efficiency.
- Velocity: MVP delivered above spec in six weeks with only one designer and one developer.
- Quality: The site launched matching or improving on the original designs, with a Lighthouse Performance score of 97 and a Best Practices score of 100.
- Efficiency: We successfully implemented an observable data sync process combining multiple streams into a centralized model.
Key Takeaway
This project proved that the "Human in the Loop" is not a bottleneck—it is the safety valve that makes AI viable for enterprise production. By combining rapid AI output with steady human guidance, FM helped Fantasy deliver a complex platform at a speed and quality impossible with traditional methods.
Ready to accelerate your delivery?
Would you like to schedule a deep-dive to see how our Human + AI collaboration model can shorten your next project's timeline? Schedule a call to learn more.

