SERVICE · MANUAL QA

Manual QA for startups that need real product judgment.

Mantis uses exploratory, human-led QA to uncover bugs, UX friction, edge cases, and broken flows that automated tests alone often miss — run by senior testers who actually use your product like a customer would.

Exploratory testing · UX & trust friction · Release readiness

Sample findings · last sprint
EXPLORATORY · 04 / 17
F-041 · Resend code button silently no-ops after 60s — onboarding stalls without feedback · HIGH
F-042 · Failed payment retry returns to empty cart — recovery flow broken · HIGH
F-043 · Empty inbox reads "Error 0" — misleading copy, not a real error · MED
F-044 · 2FA SMS arrives after input expires on slow networks · HIGH
FOUND BY JUDGMENT, NOT SCRIPTS · Across web & iOS
WHY MANUAL QA STILL MATTERS

Why thoughtful human QA still matters.

PREMISE · 01 / 06

Automation is necessary. It is not sufficient. Real products break in places that scripts were never written to check — and that's exactly where startups lose users.

Manual QA at Mantis is not a checklist run by junior testers. It's senior, product-aware testing — engineers who read the product, follow the user's intent, and notice when something feels wrong even before it technically fails.

We use exploratory testing alongside automation. Each catches different bugs, and the most important ones are almost always found by the human.

01

Automation tests scripts, not judgment

Automated suites verify what you already know to check. They don't notice the moment a flow becomes confusing, untrustworthy, or quietly broken.

02

New features need a human first

Before a feature is stable enough to script, someone has to try it the way a real user would — including the paths the product manager didn't write down.

03

UX friction is a human discovery

Misleading copy, broken empty states, and trust-damaging recovery flows rarely throw errors. They're found by people, not assertions.

04

Startups need testers, not checkboxes

Cheap manual QA executes a list. Product-aware manual QA reads the product, asks why, and surfaces the issues that change a release decision.

WHAT MANTIS MANUAL QA INCLUDES

What's actually in scope.

Five disciplines that compound. Each one is run by a senior engineer who has shipped real software — not a tester following someone else's checklist.

SCOPE · 02 / 06
01

Exploratory testing

Senior testers learn the product, form hypotheses about where it's likely to break, and probe those areas with the intent of a real user. Issues are found through judgment, not coverage.

SESSIONS · CHARTERS · HEURISTICS
02

Edge-case discovery

Bad networks, slow devices, weird inputs, interrupted flows, mid-state navigation, repeat actions. The conditions real users hit that scripted tests almost never reproduce.

BOUNDARY · INTERRUPT · RECOVERY
03

UX and trust-friction observations

Confusing copy, misleading empty states, silent failures, recovery flows that lose user data. Issues that don't throw errors but quietly cost you users — and the credibility you can't easily rebuild.

COPY · STATES · TRUST
04

Regression validation

Manual sweeps of release-critical flows on real devices and browsers. Catches the regressions automated suites miss because the assertion drifted or the test was never written.

FLOWS · DEVICES · BROWSERS
05

Release-readiness validation

Before each release, a focused pass on the riskiest changes with a clear ship / hold recommendation. Founders and engineering leads get a real signal, not a green checkmark.

SMOKE · RISK · SHIP-CALL
WHEN THIS IS MOST USEFUL

Where manual QA actually moves the needle.

Manual QA isn't a service every team needs every week. These are the moments where having a senior human in the loop pays off most.

CONTEXT · 03 / 06
01 · BEFORE RELEASE

New feature releases

Features that haven't stabilized yet are too volatile to script. Exploratory manual QA validates real behavior before you commit to automation — or to shipping.

Common signal: the team is debating whether the feature is "good enough" to ship.
02 · HIGH CADENCE

Fast-moving products

When you ship multiple times a week, automation can't keep up with the surface area. A human in the loop catches the regressions your suite hasn't grown to cover yet.

Common signal: releases happen faster than the test plan can be updated.
03 · TRUST-SENSITIVE

Onboarding, payment & trust flows

Sign-up, checkout, password reset, and account recovery don't tolerate silent failures. These are the flows where a bad UX moment becomes a churn moment.

Common signal: support tickets concentrate around the same 2–3 critical paths.
04 · NO INTERNAL QA

Teams without QA discipline

If engineers are testing their own work between commits, real issues are slipping through. Manual QA brings a fresh perspective and the rigor an in-house function would provide.

Common signal: every release surfaces an issue that should have been caught earlier.
HOW WE WORK

A simple, sharp process.

No long onboarding decks. No process theatre. Senior engineers ramp into the product, focus on what matters, and tell you what they see.

PROCESS · 04 / 06
01 · DAYS 1–2

Learn the product

We read the product, the docs, recent releases, and the support backlog. Before testing anything, we understand what's important and where the risk actually lives.

02 · DAY 2 ONWARD

Test the flows that matter

Coverage focuses on release-critical paths and the areas with the highest user-impact risk. We don't burn hours on what doesn't ship.

03 · ONGOING

Document meaningful issues

Bugs get severity, reproduction steps, expected behavior, and a clear note on the user impact. No noise, no "works on my machine" tickets, no padded counts. (A sketch of this report shape follows the process steps.)

04 · PER RELEASE

Support release decisions

Before each release we surface the risks worth knowing about and give engineering leads a real ship-or-hold view — not a green dashboard.

What you won't see: dashboards full of green checkmarks with no insight behind them. We report what the product is doing, what it should be doing, and what we'd ship — or wouldn't.
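For illustration, here is a minimal sketch of that report shape as a data structure. The field names are hypothetical, not an export of our tracker's schema; severity mirrors the HIGH / MED labels used in the sample findings above.

```typescript
// Illustrative shape only. Field names are hypothetical, not our
// tracker's schema; severity mirrors the HIGH / MED labels above.
type Severity = "HIGH" | "MED" | "LOW";

interface Finding {
  id: string;             // e.g. "F-042"
  title: string;          // one line a founder can read
  severity: Severity;
  reproduction: string[]; // numbered steps, every time
  expected: string;       // what the product should have done
  actual: string;         // what it actually did
  userImpact: string;     // why this changes a release decision
  missedBy?: string;      // e.g. "unit tests", "monitoring"
}

const example: Finding = {
  id: "F-042",
  title: "Failed payment retry returns to an empty cart",
  severity: "HIGH",
  reproduction: [
    "Add any item to the cart",
    "Pay with a card that will be declined",
    "Tap 'Retry payment' on the failure screen",
  ],
  expected: "Cart state is preserved so the user can retry",
  actual: "Cart is empty; no error or explanation is shown",
  userImpact: "Recovery is broken at the moment of purchase",
  missedBy: "unit tests",
};
```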
PROOF · EXAMPLE FINDINGS

The kinds of issues Mantis catches.

A representative sample of bugs found by Mantis manual QA across recent startup engagements. Anonymized, but every one is real — and every one was missed by the team's existing test coverage.

CATALOG · 05 / 06
BROKEN STATE · F-118 · WEB

"Save changes" appears active after the form silently fails to submit.

Network call returned 500. UI showed no error. User believed their changes were saved. Discovered by exploratory session, not by the form's unit tests.

MISSED BY AUTOMATION
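For the curious, a minimal sketch of the failure pattern behind F-118, assuming a hypothetical /api/profile endpoint: fetch only rejects on network failure, so an HTTP 500 resolves normally and the handler has to check response.ok itself.

```typescript
// Hypothetical UI helper, stubbed for the sketch.
declare function showToast(message: string): void;

// Buggy: fetch() resolves on an HTTP 500 (it only rejects on network
// failure), so the success path runs and the button looks fine.
async function onSaveBuggy(form: FormData): Promise<void> {
  await fetch("/api/profile", { method: "POST", body: form });
  showToast("Saved"); // reached even when the server returned 500
}

// Fixed: check response.ok and surface the failure to the user.
async function onSaveFixed(form: FormData): Promise<void> {
  const res = await fetch("/api/profile", { method: "POST", body: form });
  if (!res.ok) {
    showToast(`Save failed (HTTP ${res.status}). Please try again.`);
    return;
  }
  showToast("Saved");
}
```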
POOR RECOVERY · F-094 · iOS

Payment retry sends user back to an empty cart with no error.

After a declined card, the retry path lost cart state. No notification, no explanation. A clean re-add was the only way forward — and several users never made it.

MISSED BY UNIT TESTS
MISLEADING EMPTY · F-076 · WEB

Empty inbox renders the literal string "Error 0".

A valid empty state was being treated as an error by the UI layer. Not a server bug. Not a test failure. Just a user staring at a message that made them lose trust.

MISSED BY MONITORING
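One plausible mechanism for F-076, reconstructed as an assumption (the real code is under NDA): the API signals success with a numeric code of 0, and the UI only checks whether the field is present.

```typescript
// Hypothetical response shape: `code` is 0 on success, non-zero on
// error, and may be absent entirely on some endpoints.
interface InboxResponse {
  code?: number;
  messages: string[];
}

// Buggy: presence of `code` is treated as "there was an error", so a
// valid empty inbox with `code: 0` renders the literal "Error 0".
function renderInboxBuggy(res: InboxResponse): string {
  if (res.code !== undefined) return `Error ${res.code}`;
  return res.messages.length > 0 ? res.messages.join("\n") : "No messages yet";
}

// Fixed: 0 is success; only non-zero codes are errors, and the empty
// state gets honest copy instead of an error message.
function renderInboxFixed(res: InboxResponse): string {
  if (res.code !== undefined && res.code !== 0) {
    return `Something went wrong (code ${res.code})`;
  }
  return res.messages.length > 0 ? res.messages.join("\n") : "No messages yet";
}
```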
TRUST DAMAGE · F-058 · iOS

2FA SMS arrives after the in-app input has already timed out.

On slower carriers, the input expired before the code arrived. The retry button reset the timer but didn't resend the code. Repeated silent failures, no thrown errors.

MISSED BY E2E SUITE
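A minimal sketch of the F-058 retry pattern, assuming hypothetical requestSmsCode and restartCountdown helpers: the buggy handler restarts the countdown without ever re-requesting the code.

```typescript
// Hypothetical helpers, stubbed for the sketch.
declare function requestSmsCode(phone: string): Promise<void>;
declare function restartCountdown(seconds: number): void;

// Buggy: tapping "Retry" resets the 60-second window, but no new SMS
// is requested, so on a slow carrier every retry is a silent no-op.
function onRetryBuggy(): void {
  restartCountdown(60);
}

// Fixed: actually re-request the code, then restart the window.
async function onRetryFixed(phone: string): Promise<void> {
  await requestSmsCode(phone);
  restartCountdown(60);
}
```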
EDGE CASE · F-103 · WEB

Pasting a 10-digit phone number with spaces blocks the submit button.

Validation accepted the value visually but the model still saw the raw string. Button stayed disabled with no message. Real users paste from contacts apps all the time.

MISSED BY FORM TESTS
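The likely shape of F-103, sketched under the assumption that validation ran against the raw pasted string while the input mask cleaned up only the display value:

```typescript
// Buggy: validation runs on the raw pasted value, so "555 123 4567"
// fails the digits-only check and the submit button stays disabled
// with no visible message.
function canSubmitBuggy(rawInput: string): boolean {
  return /^\d{10}$/.test(rawInput);
}

// Fixed: normalize first. Contacts apps paste numbers with spaces,
// dashes, dots, and parentheses, so strip those before validating.
function canSubmitFixed(rawInput: string): boolean {
  const digits = rawInput.replace(/[\s\-().]/g, "");
  return /^\d{10}$/.test(digits);
}

// canSubmitBuggy("555 123 4567") // false: button blocked
// canSubmitFixed("555 123 4567") // true
```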
REGRESSION · F-129 · ANDROID

Back gesture after deep link returns to a different account's data.

Auth context wasn't being refreshed on navigation. The bug only surfaced when arriving via push notification. No automated test had a reason to walk that exact path.

MISSED BY UNIT TESTS
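A plausible reconstruction of F-129. The original was an Android app; the names here are hypothetical TypeScript stand-ins, but the pattern is the same: the deep-link handler rendered from a cached account id instead of re-resolving the session.

```typescript
// Hypothetical session and navigation helpers, stubbed for the sketch.
declare function getCachedAccountId(): string;               // last account rendered
declare function resolveSessionAccountId(): Promise<string>; // source of truth
declare function renderAccountScreen(accountId: string): void;

// Buggy: arriving via push notification, the handler renders whatever
// account was last in memory. The back gesture then walks a stack that
// was built for a different account's data.
function onDeepLinkBuggy(): void {
  renderAccountScreen(getCachedAccountId());
}

// Fixed: refresh the auth context at every navigation entry point,
// deep links included, before touching account-scoped screens.
async function onDeepLinkFixed(): Promise<void> {
  const accountId = await resolveSessionAccountId();
  renderAccountScreen(accountId);
}
```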

Customer names withheld under NDA. Bug IDs and copy lightly edited to protect product details. Severity and reproduction steps are tracked in full in our reporting.

GET IN TOUCH

Need QA that catches more than obvious bugs?

A 30-minute fit call is enough to know whether senior manual QA is the right next step for your product — and what coverage would actually look like.

Fit call · 30 min · No prep needed
Manual QA — Mantis