
Use Exploratory Testing

Learn how to use AI-powered exploratory testing to autonomously discover bugs and edge cases in your application.

What Is Exploratory Testing?

Exploratory testing lets an AI agent autonomously:

  1. Discover pages — Navigate and find all pages in your app
  2. Test interactions — Try buttons, forms, links, flows
  3. Find bugs — Identify broken functionality
  4. Analyze failures — Categorize why tests failed
  5. Generate report — Provide actionable bug findings

Key difference from scripted tests:

  • Exploratory: “Explore the app and find bugs”
  • Scripted: “Execute these exact steps”

Running Your First Session

  1. Open Exploratory Testing — Dashboard → Exploratory Testing
  2. Click New Session — Enter a session name (e.g., 'Mobile app testing')
  3. Set Target URL — The app to test (must be publicly accessible)
  4. Configure Settings — Duration, persona (optional), risk focus area
  5. Start Exploration — Click 'Start Session'; the AI begins testing
  6. Monitor Progress — Watch real-time bug detection and session phases
  7. Review Results — Read the AI report and bug list after completion

Session Configuration

Basic Settings

Session Name: Any descriptive text

✅ "Checkout Flow Testing"
✅ "Mobile App Smoke Test"
✅ "Auth Edge Cases"

Target URL: Full URL where app is hosted

✅ https://example.com
✅ https://staging-app.example.com
✅ https://app.example.com/dashboard

Project: Which project to associate session with

Advanced Settings

Persona Selection

  • Guest (default) — Unauthenticated user

    • Tests public pages
    • Good for landing pages, public docs, auth flows
  • With Persona — Logged-in user

    • Tests authenticated pages
    • Tests user-specific features
    • Requires persona created in advance

Duration: 5-120 Minutes

| Duration | Use Case | Coverage |
|----------|----------|----------|
| 5-10 min | Quick smoke test | Main flows only |
| 20-30 min | Standard testing | Most flows, basic coverage |
| 45-60 min | Deep exploration | All major flows + edge cases |
| 90-120 min | Comprehensive audit | Complete coverage + variations |

Risk Focus Area

Focus AI testing on a specific area:

  • All — No specific focus, test everything equally
  • Authentication — Login, 2FA, session management, account flows
  • Payment — Checkout, billing, payment methods, subscriptions
  • User Data — Profile, settings, data deletion, preferences
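The settings above can be sanity-checked before a session is created. The sketch below is a client-side validator mirroring the documented limits (5-120 minute duration, four risk focus areas, a full target URL); the field names and focus-area identifiers are illustrative assumptions, not the product's actual schema.

```python
# Client-side validation of exploratory-session settings, mirroring the
# limits described above. Field names are assumptions, not a real API schema.
from urllib.parse import urlparse

VALID_FOCUS_AREAS = {"all", "authentication", "payment", "user_data"}

def validate_session_config(name: str, target_url: str,
                            duration_min: int, focus: str = "all") -> list:
    """Return a list of human-readable problems; an empty list means valid."""
    problems = []
    if not name.strip():
        problems.append("session name must not be empty")
    parsed = urlparse(target_url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        problems.append(f"target URL looks invalid: {target_url!r}")
    if not 5 <= duration_min <= 120:
        problems.append("duration must be between 5 and 120 minutes")
    if focus not in VALID_FOCUS_AREAS:
        problems.append(f"unknown focus area: {focus!r}")
    return problems

print(validate_session_config("Checkout Flow Testing",
                              "https://staging-app.example.com", 30, "payment"))
# → [] (valid)
```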

Monitoring Session Progress

Session Phases

Your session goes through 3 phases:

Scouting (20%)      → Discovering pages

Exploring (60%)     → Testing interactions

Reporting (20%)     → Analyzing findings

Completed           → Results ready
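If you are scripting against session progress, the phase split above (Scouting 20% / Exploring 60% / Reporting 20%) maps cleanly onto the overall progress percentage. The boundaries below are inferred from those stated proportions, not from a documented API:

```python
# Map overall progress percentage to the session phase, using boundaries
# inferred from the 20% / 60% / 20% split described above (an assumption).
def phase_for_progress(pct: float) -> str:
    if pct >= 100:
        return "Completed"
    if pct < 20:
        return "Scouting"
    if pct < 80:
        return "Exploring"
    return "Reporting"

for pct in (10, 45, 90, 100):
    print(pct, phase_for_progress(pct))
```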

Status indicators:

  • 🟢 Running — Session in progress
  • 🟡 Paused — Session paused (click Resume)
  • ✅ Completed — Session finished
  • ❌ Failed — Error occurred

Live Metrics

Watch in real-time as session runs:

  • Pages Discovered: Count of unique pages found
  • Interactions Tested: Number of actions attempted
  • Bugs Found: Live count increasing
  • Current Focus: What the AI is testing right now

Example progression:

Pages: 5 → 12 → 23 → 35
Bugs:  0 → 2  → 5  → 8
Progress: 20% → 45% → 80% → 100%

Reviewing Results

After Session Completes

AI-Generated Report

Natural language summary of findings:

"BugBrain discovered 12 bugs during exploration:

🔴 CRITICAL (2):
- Payment form doesn't validate card numbers
- Session expires while filling checkout form

🟠 HIGH (3):
- Search results page missing error handling
- Mobile menu doesn't close after clicking

🟡 MEDIUM (4):
- Form labels not properly centered
- Pagination buttons have low contrast

🟢 LOW (3):
- Typo on products page
- Broken favicon link"

Bug List

Sortable list of all detected bugs:

Sort by:

  • Severity (critical → low)
  • Category (selector, timeout, assertion, etc.)
  • Page (group by location)
  • Confidence (AI’s certainty)
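If you export the bug list, the same ordering (severity critical → low, then AI confidence descending) is easy to reproduce client-side. The dict keys below are illustrative; the real export format may differ:

```python
# Reproduce the severity-then-confidence sort order for an exported bug
# list. The dict fields are assumptions about the export format.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def sort_bugs(bugs):
    return sorted(bugs, key=lambda b: (SEVERITY_RANK[b["severity"]],
                                       -b["confidence"]))

bugs = [
    {"title": "Typo on products page", "severity": "low", "confidence": 0.9},
    {"title": "Card number not validated", "severity": "critical", "confidence": 0.8},
    {"title": "Session expires in checkout", "severity": "critical", "confidence": 0.95},
]
for b in sort_bugs(bugs):
    print(b["severity"], b["title"])
```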

Each bug shows:

  • Title and description
  • Steps to reproduce
  • Expected vs actual behavior
  • Screenshot/evidence
  • AI-suggested fix

Using Findings

For test case generation:

  1. Review bug list
  2. Create test cases for high-severity bugs
  3. Add to test plan
  4. Run regularly to catch regressions

For priority queue:

  1. Export bugs as issues (GitHub/Jira)
  2. Assign to team
  3. Schedule in sprint
  4. Track fixes
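For GitHub, step 1 amounts to building a payload for the "create an issue" REST endpoint (`POST https://api.github.com/repos/{owner}/{repo}/issues`, which takes `title`, `body`, and `labels`). The sketch below only constructs the payload; the bug dict fields are assumptions about the export format, and actually sending it would require an authenticated request:

```python
# Turn a detected bug into a GitHub issue payload. The payload keys
# (title/body/labels) match GitHub's create-issue REST endpoint; the
# input bug fields are assumptions about the export format.
import json

def bug_to_issue(bug: dict) -> dict:
    body = (f"{bug['description']}\n\n"
            f"**Steps to reproduce:**\n{bug['steps']}\n\n"
            f"**Expected:** {bug['expected']}\n"
            f"**Actual:** {bug['actual']}")
    return {
        "title": f"[{bug['severity'].upper()}] {bug['title']}",
        "body": body,
        "labels": ["exploratory-testing", f"severity:{bug['severity']}"],
    }

payload = bug_to_issue({
    "title": "Payment form doesn't validate card numbers",
    "severity": "critical",
    "description": "Invalid card numbers are accepted at checkout.",
    "steps": "1. Open checkout\n2. Enter '1234'\n3. Submit",
    "expected": "Validation error shown",
    "actual": "Form submits successfully",
})
print(json.dumps(payload, indent=2))
```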

For quality metrics:

  1. Compare sessions over time
  2. Track bug trends (increasing/decreasing)
  3. Measure improvement after fixes
  4. Report to stakeholders

Advanced Usage

Running Sessions Regularly

Schedule weekly exploratory sessions:

  1. Create session → Save as template
  2. Day & time → Off-peak hours (e.g., 2 AM)
  3. Frequency → Weekly or daily
  4. Notifications → Email on new critical bugs

Benefits:

  • Continuous bug detection
  • Catch regressions after updates
  • Maintain quality baseline

Using Knowledge Maps

After 2-3 sessions on the same URL, the Knowledge Map reaches 75%+ confidence:

  • AI understands app structure better
  • Finds more relevant bugs
  • Fewer duplicate findings
  • Better recommendations

To leverage knowledge:

  1. Run multiple sessions (3-4 weeks)
  2. Wait for high confidence score
  3. Auto-generate test cases from findings
  4. Generated test cases will be better targeted and more accurate

Comparing Sessions

Track bugs over time:

Session 1: 15 bugs
Session 2: 12 bugs (improvement ✓)
Session 3: 8 bugs  (more improvement ✓)
Session 4: 7 bugs  (stabilized, good quality)

Use trend charts to show QA progress to stakeholders.
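The trend shown above boils down to simple arithmetic on per-session bug totals, which is all a stakeholder report needs:

```python
# Compute session-over-session deltas and an overall trend direction
# from per-session bug totals.
def bug_trend(counts):
    deltas = [b - a for a, b in zip(counts, counts[1:])]
    total = sum(deltas)
    direction = ("improving" if total < 0
                 else "worsening" if total > 0 else "flat")
    return deltas, direction

deltas, direction = bug_trend([15, 12, 8, 7])
print(deltas, direction)  # → [-3, -4, -1] improving
```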

Parallel Sessions

Run multiple simultaneous sessions:

Session 1: Desktop app testing
Session 2: Mobile app testing
Session 3: API edge cases
Session 4: Authentication flows

All run in parallel, results aggregated.
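If you launch sessions from a script, the fan-out/aggregate pattern looks like this. `run_session` is a stub standing in for whatever call actually starts a session and waits for it; the real call and its return shape are assumptions:

```python
# Fan out several exploratory sessions concurrently and aggregate the
# results. run_session is a stub; a real implementation would start a
# session via the product's API and poll until completion.
from concurrent.futures import ThreadPoolExecutor

def run_session(name: str) -> dict:
    # Stub result; the real return shape is an assumption.
    return {"session": name, "bugs_found": 0}

targets = ["Desktop app testing", "Mobile app testing",
           "API edge cases", "Authentication flows"]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_session, targets))  # map preserves order

print(results)
```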

Troubleshooting

“Session Failed: Target URL Unreachable”

  • Verify URL is live
  • Check firewall/IP allowlisting
  • Ensure app isn’t behind authentication
  • Wait for app to restart if needed
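A quick stdlib-only pre-flight check can rule out the most common cause before you create a session. A host that responds at all (even with an error status) counts as reachable; an unreachable or refusing host returns False:

```python
# Pre-flight reachability check for the target URL, standard library only.
import urllib.error
import urllib.request

def url_reachable(url: str, timeout: float = 5.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except urllib.error.HTTPError:
        return True   # server responded, even if with a 4xx/5xx status
    except (urllib.error.URLError, ValueError, OSError):
        return False  # DNS failure, refused connection, bad URL, timeout
```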

“Session Timeout: Max Time Exceeded”

  • Session ran for full duration
  • Results are still available even if incomplete
  • Increase duration for next session if needed

“No Bugs Found”

Possible reasons:

  • App is very robust (good news!)
  • AI didn’t reach the relevant edge cases

What to try next:

  • Run a longer session or use a specific persona
  • Try a different risk focus area

“Too Many False Positives”

  • Some detected issues aren’t real bugs
  • Review findings with your team before filing issues
  • False positives help identify brittle or ambiguous UI states
  • Use findings to improve test robustness

Best Practices

DO:

  • Run sessions on staging environment
  • Disable notifications temporarily (avoid alert spam)
  • Start with short sessions (10-20 min) for quick validation
  • Run weekly or bi-weekly for continuous discovery
  • Review critical bugs within 24 hours
  • Create test cases from high/critical bugs

DON’T:

  • Run on production (could trigger monitors/alerts)
  • Use production user accounts
  • Expect 100% coverage
  • Ignore exploratory findings
  • Set unrealistic expectations (not a replacement for manual QA)

Example: Full Workflow

Week 1:
  Monday → Run exploratory session (2 hours)
  → Find 8 bugs
  → Create 3 test cases for critical bugs
  → Assign to dev team

Week 2:
  Monday → Run second session
  → Compare to week 1
  → Verify critical bugs fixed
  → Find 5 new edge cases
  → Add to backlog

Week 3:
  Monday → Run third session
  → Knowledge map now 85% confident
  → Auto-generate 10 test cases
  → Add to regression test plan

Week 4:
  Monday → Run fourth session
  → Bug count stabilized at 4-5
  → Quality improved significantly
  → Report trend to stakeholders

Pro Tip: Exploratory testing complements scripted tests perfectly. Use scripted tests for regression prevention, exploratory for new bug discovery.