Intelligent Design: Building findable.ch with v0 and Cursor AI

By Franco on March 23, 2025

At findable, we thrive on experimenting with new technology. In our latest project, we combined the power of Cursor AI and v0 from Vercel to build our new website, findable.ch. When we built the first findable.ch, we were busy with edufind.ch and simply threw together a Squarespace site in a few hours. Now it was time for something new: a site built entirely with AI.

Starting Point

Our only resource was the existing Squarespace site. We started by taking a screenshot of this site and uploading it to v0. Using a simple command, we instructed v0 to "make it nicer," and let it work its magic.

A discarded "wilder" approach.

We cycled through three or four designs and picked one to start with. v0 carried over the team section and the other content from the old site, and we made a few minor adjustments.

Once satisfied, we downloaded the code into a local Next.js project and, from that point forward, focused solely on Cursor AI to shape and refine our website.

The version we went with.

Building with Cursor AI

After experimenting with Cursor for a few days, our real journey began with constructing a rules library (ghuntley’s stdlib approach). Alongside this, we integrated essential tools like BrowserTools MCP and GitHub MCP, which helped enormously during development.
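
The library itself is just a folder of markdown rule files under .cursor/rules/ that Cursor loads as project rules. A rough sketch of how such a library can be laid out (the file names here are illustrative, not our actual repo):

.cursor/rules/
  base.mdc          – general conventions that always apply
  git-workflow.mdc  – how we commit, open PRs and handle tickets
  testing.mdc       – how tests should be written and run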

One of the most exciting parts was holding planning sessions with the AI. In these sessions, the AI generated tickets in GitHub, complete with detailed acceptance criteria. We would then review these tickets, sometimes accepting them as is but more often trimming unnecessary items, and let the AI implement the approved changes. This loop of AI planning, human validation and correction, and AI execution has proven to be highly effective.
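
A planning prompt in one of these sessions might look something like this (paraphrased, not a verbatim prompt from the project):

Look at the current site and the open issues
Propose the next improvements as GitHub tickets, each with acceptance criteria and implementation hints
Do not implement anything yet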

Timid Start

Embarking on an AI-driven project required a cautious approach at first.

  • Start slow: Introduce changes gradually to maintain stability.
  • Check frequently: Constantly review progress and adjust as needed.
  • Go back and retry: Don’t be afraid to roll back—even big chunks—if something goes off track.
  • Add new rules: Continuously refine our rules library to improve outcomes.

When the AI veered off course—for example, when using BrowserTools to take a screenshot and fix an issue—the results could range from brilliant to wildly off the rails. In those moments, halting the process and re-focusing on one problem at a time was key. Commands like “Do not do anything else!” at the end of a prompt often worked wonders.

Don’t shy away from experimenting with rules when the AI repeatedly makes the same mistakes.

Embrace YOLO Mode

Once our foundation was built, we shifted into "YOLO mode" with Cursor. In this mode, the AI operates with far more freedom; the only safeguard we kept was blocking the critical rm command to prevent catastrophic errors. Our prompt in YOLO mode typically looks like:

Let’s start with ticket #N
Discuss the ticket and propose a plan for review
Execute the plan
Run precommit checks
Commit, update the ticket and create a PR

The AI is guided by rules that we have established. These rules specify our requirements for pre-commit checks and mandate that the AI cannot close tickets without an approved PR.
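
A trimmed-down sketch of what such a rule can say (illustrative wording, not our actual rule file):

Before committing, run yarn format:check && yarn lint && yarn tsc && yarn test and make sure everything passes
Every change goes through a PR that is linked to its ticket
Never close a ticket unless its PR has been reviewed and approved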

Lessons Learned

Throughout this journey, we discovered several insights that have refined our process and could help others embarking on a similar path:

GitHub MCP and Ticket Workflow:

Using GitHub MCP for ticket creation proved highly effective for the human-in-the-loop part of the workflow. The detailed tickets, with clear acceptance criteria and implementation hints (including code snippets and documentation links), helped the AI understand exactly what was needed and made refinement a smooth process.
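
A generated ticket typically had a shape like this (a made-up example, not a real ticket from our repo):

Title: Add a contact section to the landing page
Acceptance criteria: the section has name, email and message fields with basic validation, and the layout works on mobile and desktop
Implementation hints: reuse the existing form components; see the Next.js documentation on form handling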

BrowserTools MCP:

BrowserTools can sometimes produce brilliant results. A prompt like:

Look at the screenshot and fix the error

is always worth a try. But stop it as soon as it goes off the rails.

Pre-commit Check:

Implementing a pre-commit check using commands like 

yarn format:check && yarn lint && yarn tsc && yarn test

has been crucial. You can run it manually before each commit or add it as a git pre-commit hook, as sketched below.
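
A minimal version of such a hook, assuming a standard git setup (save it as .git/hooks/pre-commit and make it executable):

#!/bin/sh
# Abort the commit if formatting, linting, type checks or tests fail
yarn format:check && yarn lint && yarn tsc && yarn test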

When the AI Goes Off:

Don’t hesitate to roll back changes, even large ones; it’s not a loss, since you already came out ahead by using AI in the first place. When facing multiple failures, instruct the AI to focus on one file or test at a time. A clear, focused prompt often resolves issues more effectively than a broad command.
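
Rolling back does not need anything fancy; plain git is usually enough (standard commands, nothing AI-specific):

git log --oneline      # find the last commit that was still good
git reset --hard <sha> # discard everything after it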

Testing:

Testing with AI is still in its early days. The AI might overcomplicate tests or try to address too much at once. Sometimes, removing a problematic test is the best solution when the AI gets stuck. This is an area ripe for future exploration, perhaps in a dedicated blog post.

Recurring Mistakes:

Despite all efforts, the AI occasionally repeats similar mistakes. This signals a need to experiment with better rules and documentation—especially for testing—to ensure a more consistent and reliable output over time.

Key Takeaways

The future of coding is undeniably tied to AI advancements. AI will change the role of software engineers, though we probably can’t even imagine how.

What we can tell already is that communication will become even more critical: engineers need to tell the AI clearly what they need and make sure it aligns with project goals. They also need to understand and analyse the AI’s code to give useful feedback.

The AI's ability to write solid tests and debug code effectively is still in its infancy. However, if we can teach it these skills, AI coding will take a significant leap forward. It's a space worth keeping an eye on.