
How We Test a CRM Before Our Users Do

A behind-the-scenes look at how we approach quality at Founders Kit — and why clicking every button isn't enough.

Oana Clopotel

Most Software Gets Tested by Clicking Buttons. We Test by Running a Business.

There's a version of QA that looks like this: open every page, click every button, confirm nothing explodes. Ship it.

That's not how we test Founders Kit. We test it by pretending to be you — a founder managing deals, delegating tasks, checking your pipeline on your phone between meetings. If it breaks during a real workflow, it doesn't matter how many buttons we clicked.

I want to walk you through exactly how we approach quality. Not because our process is perfect, but because you deserve to know what happens before a feature reaches your account.

Why "It Works on My Machine" Is Never Good Enough

Early on, I'd test a feature on a wide monitor with fast internet. Everything looked great. Then someone would use it on their phone between meetings and the experience was completely different.

The problem was that we tested one version of reality. A founder uses your product on a laptop, a phone, a tablet — in different contexts, with different screen sizes, often while multitasking. If you only test the ideal scenario, you're just confirming your own assumptions.

If your testing doesn't match how your users actually use the product, you're just testing your assumptions.

Testing Like a User, Not a Developer

The biggest shift was moving from "does this page work?" to "does this workflow make sense?" A founder doesn't use isolated features — they take a call, log a deal, check their pipeline on their phone, and assign a follow-up, all within the same ten minutes.

So instead of testing features in isolation, we started testing the way founders actually work. End-to-end scenarios with realistic data. Different devices, different screen sizes. The same workflows that a real user would run through on a busy Tuesday morning.
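One way to picture that scenario-first approach is as a single check that chains several actions together, the way a founder would. The sketch below uses a toy in-memory pipeline; the class and method names are hypothetical illustrations, not Founders Kit's actual code.

```python
# A minimal sketch of an end-to-end workflow check against a toy in-memory
# CRM model. All names here (Pipeline, log_deal, etc.) are hypothetical
# illustrations, not Founders Kit's real API.
from dataclasses import dataclass, field

@dataclass
class Deal:
    name: str
    stage: str = "new"
    tasks: list = field(default_factory=list)

class Pipeline:
    def __init__(self):
        self.deals = {}

    def log_deal(self, name):
        self.deals[name] = Deal(name)

    def move(self, name, stage):
        self.deals[name].stage = stage

    def assign_followup(self, name, task):
        self.deals[name].tasks.append(task)

def test_busy_tuesday_workflow():
    # One scenario, several features in sequence -- the way a founder
    # actually works, not one page at a time.
    crm = Pipeline()
    crm.log_deal("Acme")                          # take a call, log the deal
    crm.move("Acme", "negotiation")               # update the stage
    crm.assign_followup("Acme", "send proposal")  # delegate a follow-up
    deal = crm.deals["Acme"]
    assert deal.stage == "negotiation"
    assert deal.tasks == ["send proposal"]

test_busy_tuesday_workflow()
```

The point isn't the toy model; it's that the assertion comes at the end of a realistic sequence, so a feature that works alone but breaks mid-workflow still gets caught.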

Every time we found something that didn't feel right, it became a permanent part of our process. Over time, that built up into a thorough set of checks that catches the things we'd otherwise miss.

The best testing process isn't the one that finds the most bugs. It's the one that finds the bugs your users would have found first.

The Edge Cases That Matter Most

The most valuable bugs aren't the obvious crashes. They're the subtle inconsistencies — a button that works in one context but silently fails in another, a notification that shows up sometimes but not always. Those erode trust faster than a missing feature.
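A concrete way to hunt those silent failures is to test the edge explicitly: the same action, in a context where it can't succeed, should fail loudly rather than no-op. The helper below is a made-up stand-in, not our real code, but it shows the shape of the check.

```python
# A sketch of an edge-case check: an action on a missing record must raise,
# not silently do nothing. archive_deal is a hypothetical stand-in helper.

def archive_deal(deals, name):
    # Silently ignoring a missing deal is exactly the subtle inconsistency
    # that erodes trust, so we fail loudly instead.
    if name not in deals:
        raise KeyError(f"no deal named {name!r}")
    deals[name]["archived"] = True

# Happy path: the action works in the normal context.
deals = {"Acme": {"archived": False}}
archive_deal(deals, "Acme")
assert deals["Acme"]["archived"]

# The edge: a deal that doesn't exist must raise, not no-op.
try:
    archive_deal(deals, "Globex")
except KeyError:
    pass
else:
    raise AssertionError("missing deal was silently ignored")
```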

If you've ever struggled with CRM adoption, you know that trust in the tool matters as much as the features themselves. Users don't file bug reports for small inconsistencies. They just stop using your product.

Test the edges, not just the happy path. Your users will find them whether you do or not.

What Real Users Taught Us That Testing Couldn't

No matter how thorough our checklists are, users find things we don't. One founder managed their entire pipeline through Kit instead of the UI — typing "move the Acme deal to negotiation" rather than clicking through screens. That feedback expanded our testing — now every audit includes conversational workflows alongside the traditional UI paths.
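A conversational-workflow check boils down to one assertion: a natural-language command must produce the same state change as the UI path. The tiny parser below is a hypothetical illustration of that idea, not how Kit actually works.

```python
# A toy illustration of a conversational-workflow check: parse a
# natural-language command and verify it maps to the intended action.
# This regex parser is a hypothetical stand-in, not Kit's real one.
import re

def parse_command(text):
    m = re.match(r"move the (\w+) deal to (\w+)", text.lower())
    if not m:
        return None
    return {"action": "move", "deal": m.group(1), "stage": m.group(2)}

# The conversational path should yield the same intent the UI path would.
cmd = parse_command("Move the Acme deal to negotiation")
assert cmd == {"action": "move", "deal": "acme", "stage": "negotiation"}

# And anything the parser can't handle should be rejected, not guessed at.
assert parse_command("archive everything") is None
```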

We also learned that solo founders have different needs than small teams. Features that made sense for teams felt cluttered for one person. That insight fed directly into how we built for teams of one.

Your users are trusting you with their data, their workflows, their business. The least you can do is test it like it matters.