Testing & QA Strategy
Why QA is non-negotiable, the 30% QA rule, when QA should be involved, and how to evaluate test quality.
The Non-Negotiable Truth About QA
Without testing:
- Features break when you add new ones
- Users find bugs (embarrassing and costly)
- You lose credibility fast
- Fixing bugs post-launch costs 10x more
"We'll test it later" = "We'll never test it" - QA isn't optional for MVPs. It's the difference between "buggy prototype" and "real product."
The 30% QA Rule
What It Means
For every 3-4 hours of development, budget 1 hour of QA.
- Developer builds feature: 8 hours
- QA tests feature: 2-3 hours
- Developer fixes bugs: 1-2 hours
- QA retests: 30 minutes
Total QA time ≈ 25-35% of development time
Why This Ratio?
- Less than 25%: Corners get cut, bugs slip through
- More than 35%: Diminishing returns (over-testing)
- 30% is the sweet spot for MVP quality
Budget Impact
Example project:
- Development: 400 hours @ $40/hr = $16,000
- QA: 120 hours @ $25/hr = $3,000
- Total: $19,000 (QA adds ~19%)
If your quote doesn't include QA, add 20-30% to get the real cost.
When QA Should Be Involved
Too Early: Waste of Time
- ❌ Testing wireframes/mockups (nothing to test yet)
- ❌ Testing half-built features (will change anyway)
Just Right: Maximum Value
- ✅ Feature complete on dev environment
  - Developer thinks it's done
  - Basic manual testing by dev already done
- ✅ Before deploying to staging
  - Catch bugs before stakeholder demo
- ✅ Regression testing before production
  - Ensure new features didn't break old ones
Too Late: Damage Control
- 🚩 Testing in production (users find your bugs)
- 🚩 Testing after launch (reputation already at risk)
Types of Testing for MVPs
1. Manual Testing (Essential)
What it is: Human tester clicks through the app
When to use: Every feature, every time
What they test:
- Happy path (feature works as intended)
- Edge cases (empty fields, max values, special characters)
- User experience (buttons work, forms are clear)
- Visual bugs (misaligned elements, broken layouts)
Cost: $20-35/hour (QA specialist)
2. Exploratory Testing (Highly Valuable)
What it is: Tester tries to break things creatively
When to use: Before major releases
What they find:
- Unexpected bugs
- UX issues
- Security holes
- "What if user does THIS?" scenarios
3. Automated Testing (Optional for MVP)
What it is: Scripts that test code automatically
When to use:
- After MVP launch (when codebase stabilizes)
- For critical flows (login, payment)
- If budget allows
Pros:
- Fast regression testing
- Catches regressions immediately
Cons:
- Expensive to write (2-3x dev time)
- Needs maintenance when code changes
- Doesn't replace manual testing
For MVPs: Manual testing is enough. Add automation later if you have resources.
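To make the automation trade-off concrete, here is a minimal sketch of what an automated test for a critical flow (login) looks like, in pytest style. The `login` function and the `REGISTERED` store are hypothetical stand-ins for real authentication code, not part of any actual library.

```python
# Hypothetical login logic standing in for a real critical flow.
REGISTERED = {"test@gmail.com": "s3cret"}

def login(email: str, password: str) -> bool:
    """Return True when the credentials match a registered user."""
    return REGISTERED.get(email) == password

# Automated regression tests: once written, they re-check the whole
# flow on every release, which is where automation pays for itself.
def test_login_happy_path():
    assert login("test@gmail.com", "s3cret")

def test_login_wrong_password():
    assert not login("test@gmail.com", "wrong")

def test_login_unknown_user():
    assert not login("nobody@example.com", "s3cret")
```

Run with `pytest`: every release re-executes the same checks in seconds (the "fast regression testing" advantage), at the ongoing cost of writing and maintaining the tests when the code changes.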
4. User Acceptance Testing (UAT)
What it is: YOU test the app before launch
When to use: Final check before going live
What you test:
- Does it match your vision?
- Is it usable for your target audience?
- Any showstopper bugs?
Time commitment: 2-4 hours per major release
QA Deliverables: What to Expect
Bug Reports
Good bug report includes:
- Title: Clear, specific ("Login fails with Gmail" not "Login broken")
- Steps to reproduce: Numbered list
  1. Go to /login
  2. Enter email: test@gmail.com
  3. Click "Login"
  4. Error appears
- Expected result: User should be logged in
- Actual result: "Invalid credentials" error
- Screenshot/video: Visual proof
- Severity: Critical / High / Medium / Low
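A bug report with those fields can be modeled as a small structured record, which also makes it trivial to sort a backlog by severity. This is an illustrative sketch, not a real bug tracker's schema; the field names simply mirror the checklist above.

```python
from dataclasses import dataclass

# Severity levels in triage order: lower index = fix sooner.
SEVERITY_ORDER = ["Critical", "High", "Medium", "Low"]

@dataclass
class BugReport:
    title: str
    steps_to_reproduce: list
    expected: str
    actual: str
    severity: str  # one of SEVERITY_ORDER

    def rank(self) -> int:
        return SEVERITY_ORDER.index(self.severity)

backlog = [
    BugReport("Footer misaligned on mobile", ["Open any page on a phone"],
              "Footer spans the screen", "Footer overflows", "Low"),
    BugReport("Login fails with Gmail",
              ["Go to /login", "Enter email: test@gmail.com", "Click 'Login'"],
              "User is logged in", "'Invalid credentials' error", "Critical"),
]
backlog.sort(key=BugReport.rank)
print(backlog[0].title)  # Login fails with Gmail
```

Sorting by severity is exactly the triage step described later under "Bug Severity Levels": critical bugs surface first, cosmetic ones sink to the bottom.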
Test Reports
After each test cycle, QA should provide:
- Number of test cases executed
- Number of bugs found (by severity)
- Pass/fail summary
- Recommendation (ready to ship or needs fixes)
Test Cases (Optional but Helpful)
Document what will be tested:
- Feature: User Registration
  - Test case: Valid email registers successfully
  - Test case: Invalid email shows error
  - Test case: Existing email shows "already registered"
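The registration test cases above map naturally onto a table-driven check: one row per documented case. The `register` function, its validation rule, and its return strings are hypothetical stand-ins invented for this sketch.

```python
# Hypothetical registration logic mirroring the three test cases.
EXISTING_USERS = {"taken@example.com"}

def register(email: str) -> str:
    """Return a status string for a registration attempt."""
    # Crude validity check, enough for the illustration.
    if "@" not in email or "." not in email.split("@")[-1]:
        return "invalid email"
    if email in EXISTING_USERS:
        return "already registered"
    EXISTING_USERS.add(email)
    return "ok"

# Table-driven version of the documented test cases.
CASES = [
    ("new@example.com", "ok"),                    # valid email registers
    ("not-an-email", "invalid email"),            # invalid email shows error
    ("taken@example.com", "already registered"),  # existing email rejected
]

for email, expected in CASES:
    assert register(email) == expected
print("all registration test cases passed")
```

Keeping test cases in a table like this makes the QA documentation and the executable checks one and the same artifact.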
How Developers Should Self-Test
Before Marking "Done"
Developer checklist:
- ☐ Happy path works (main use case)
- ☐ Edge cases handled (empty, null, max values)
- ☐ Error messages are clear
- ☐ UI looks correct (no visual bugs)
- ☐ Works on mobile and desktop (if applicable)
- ☐ Didn't break existing features (quick smoke test)
Self-Testing vs QA Testing
| Developer Self-Test | QA Testing |
|---|---|
| Basic functionality | Comprehensive edge cases |
| Quick smoke test | Detailed test plan |
| 5-15 minutes | 1-3 hours per feature |
| Familiar with code | Fresh eyes, finds unexpected issues |
Developers should self-test to catch obvious bugs. QA finds the non-obvious ones.
Definition of Done Checklist
Feature Is NOT Done Until:
- ☐ Code is written
- ☐ Developer self-tested
- ☐ Code reviewed (by another developer or tech lead)
- ☐ QA tested and approved
- ☐ Deployed to staging
- ☐ Demoed to stakeholder (you)
- ☐ Any bugs fixed
- ☐ Regression tested (ensure nothing else broke)
Common "Fake Done" Scenarios
- 🚩 "Code is written but not tested" → NOT DONE
- 🚩 "It works on my machine" → NOT DONE (needs to work on staging)
- 🚩 "It's 90% complete" → NOT DONE (90% = 0%)
- 🚩 "Just needs minor fixes" → NOT DONE (fix them first)
"Done" means shippable. If it's not shippable, it's not done.
Bug Severity Levels
Critical (Fix Immediately)
- App crashes or won't load
- Data loss or corruption
- Security vulnerabilities
- Payment processing broken
- Core feature completely unusable
Action: Stop other work, fix now
High (Fix Before Launch)
- Major feature doesn't work
- Workaround exists but awkward
- Affects most users
- Embarrassing visual bugs
Action: Fix in current sprint
Medium (Fix Soon)
- Minor feature broken
- Affects some users in some scenarios
- Easy workaround exists
Action: Fix in next sprint or two
Low (Backlog)
- Cosmetic issues
- Rare edge cases
- Nice-to-have improvements
Action: Fix when you have time (or never)
QA Process Flow
Step-by-Step
1. Developer completes feature
   - Self-tests
   - Deploys to dev environment
   - Marks as "Ready for QA"
2. QA tests feature
   - Follows test cases (if they exist)
   - Tries to break it
   - Documents bugs
3. If bugs found:
   - QA logs bugs with details
   - Developer fixes bugs
   - Return to step 2 (QA retests)
4. If no bugs (or only low-severity):
   - QA approves
   - Feature moves to staging
5. Before production deployment:
   - QA runs regression tests (did new code break old features?)
   - You do UAT (final check)
   - Deploy to production
Hiring QA: In-House vs Outsourced
Developer Does QA (Not Recommended)
Pros:
- No extra cost
Cons:
- Developers are terrible at testing their own code (blind spots)
- Slows down development
- Usually skipped when under pressure
Verdict: Only for tiny budgets (under $10K)
Dedicated QA Specialist (Recommended)
Pros:
- Catches more bugs (fresh eyes)
- Developers stay focused on development
- Professional test process
Cons:
- Added cost (20-30%)
Rates: $20-35/hour (Eastern Europe)
Verdict: Essential for projects over $15K
Automated QA Tools (Optional)
Examples: Selenium, Cypress, Playwright
Use for:
- Post-MVP when codebase is stable
- Regression testing
- Frequent releases
Cost: 2-3x dev time to set up
Red Flags: Poor QA
From QA Team
- 🚩 Vague bug reports ("Login doesn't work" with no details)
- 🚩 Missing bugs that you find immediately
- 🚩 No test documentation (can't track what was tested)
- 🚩 Rubber-stamps everything ("Looks good!" with no real testing)
- 🚩 Takes too long (3-hour test takes 2 days)
From Development Process
- 🚩 No QA budget allocated ("Developers will test")
- 🚩 QA only involved at the end (too late to fix architecture issues)
- 🚩 Bugs logged but never prioritized (backlog grows forever)
- 🚩 Production deploys without QA approval (asking for trouble)
Testing Environments
Development (Dev)
- Purpose: Where developers build and test their own work
- Stability: Constantly changing, often broken
- Who uses: Developers only
Staging
- Purpose: Production-like environment for QA and demos
- Stability: Should be stable, only deploy tested features
- Who uses: QA, stakeholders (you), clients for UAT
Production
- Purpose: Live app that real users see
- Stability: Must be stable, only deploy QA-approved code
- Who uses: Real users
Never test in production. Always have staging environment.
How Much Testing Is Enough?
Minimum for MVP
- ✅ Manual testing of all features
- ✅ Regression testing before each release
- ✅ UAT by you before launch
- ✅ Bug fixing until no critical/high bugs remain
Over-Testing Red Flags
- ❌ Testing minor visual tweaks repeatedly
- ❌ Delaying launch to fix low-severity bugs
- ❌ Writing automated tests for unstable features
- ❌ Perfectionism ("One more test pass...")
Under-Testing Red Flags
- ❌ Users reporting bugs on day 1
- ❌ Features breaking when you add new ones
- ❌ "It works on my machine" (but not staging)
- ❌ No test documentation
QA Budget Calculator
Simple Formula
QA Hours = Development Hours × 0.30
Example 1: Small MVP
- Dev: 200 hours @ $40/hr = $8,000
- QA: 60 hours @ $25/hr = $1,500
- Total: $9,500
Example 2: Medium MVP
- Dev: 500 hours @ $40/hr = $20,000
- QA: 150 hours @ $30/hr = $4,500
- Total: $24,500
Example 3: Complex MVP
- Dev: 1000 hours @ $45/hr = $45,000
- QA: 300 hours @ $30/hr = $9,000
- Total: $54,000
If your development quote doesn't include QA, add 20-30% to the total cost.
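The formula above can be sketched as a small helper. The 0.30 ratio and the hourly rates are the illustrative figures from the examples in this section, not fixed market prices.

```python
def qa_budget(dev_hours: float, dev_rate: float, qa_rate: float,
              qa_ratio: float = 0.30) -> dict:
    """Estimate QA hours and total project cost from development hours.

    qa_ratio follows the 30% QA rule: ~1 hour of QA per 3-4 dev hours.
    """
    qa_hours = dev_hours * qa_ratio
    dev_cost = dev_hours * dev_rate
    qa_cost = qa_hours * qa_rate
    return {
        "qa_hours": qa_hours,
        "dev_cost": dev_cost,
        "qa_cost": qa_cost,
        "total": dev_cost + qa_cost,
    }

# Example 2 from above: 500 dev hours @ $40/hr, QA @ $30/hr
result = qa_budget(500, 40, 30)
print(f"QA hours: {result['qa_hours']:.0f}, total: ${result['total']:,.0f}")
# QA hours: 150, total: $24,500
```

Plugging in the other two examples (200h @ $40 with QA @ $25, and 1000h @ $45 with QA @ $30) reproduces the $9,500 and $54,000 totals above.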
Key Takeaways
- The 30% QA Rule: Budget 1 hour of QA for every 3-4 dev hours - it adds 20-30% to project cost but is essential
- Manual Testing Is Enough: For MVPs, skip expensive automated tests - use a manual QA specialist at $20-35/hr
- Definition of Done: Not done until coded, self-tested, QA approved, deployed to staging, and demoed - "90% done" = 0%
- Developers Can't Test Their Own Code: Blind spots mean bugs slip through - always use a separate QA person
- Fix Critical/High Before Launch: Medium/low bugs can wait - don't delay launch for cosmetic issues