
Configuring Exploratory Sessions

Learn how to set up exploratory testing sessions to autonomously discover bugs and edge cases in your application.

Quick Start

1. Go to Exploratory Testing: From the Dashboard, click the Exploratory Testing tab.
2. Click New Session: Enter a session name and target URL.
3. Configure Settings: Select a persona, duration, and risk focus area.
4. Start Exploration: The AI agent begins discovering pages and testing interactions.
5. Monitor Progress: Watch real-time bug detection and session phases.

Session Configuration Fields

| Field | Required | Options | Default |
|---|---|---|---|
| Session Name | Yes | Any descriptive text | |
| Target URL | Yes | Full URL (with https://) | |
| Project | Yes | Select existing project | |
| Persona | No | Pre-configured persona or guest | Guest (unauthenticated) |
| Duration (minutes) | Yes | 5-120 minutes | 30 |
| Risk Focus Area | Yes | Authentication / Payment / User Data / All | All |
| Max Pages | No | 10-500 pages | 100 |
| Max Interactions per Page | No | 5-50 interactions | 20 |
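If you create sessions programmatically, the defaults above can be applied in a small helper. This is an illustrative sketch only: the field names and the builder function are assumptions for the example, not the product's actual API.

```python
# Illustrative sketch: apply the documented defaults for optional fields.
# Field names are assumptions based on the table above, not an official API.
DEFAULTS = {
    "persona": None,                    # Guest (unauthenticated)
    "duration_minutes": 30,
    "risk_focus_area": "All",
    "max_pages": 100,
    "max_interactions_per_page": 20,
}

def build_session_config(name, target_url, project, **overrides):
    """Merge the required fields with defaults, then apply any overrides."""
    config = {"session_name": name, "target_url": target_url, "project": project}
    config.update(DEFAULTS)
    config.update(overrides)
    return config

config = build_session_config(
    "Payment Checkout Security",
    "https://staging.example.com",
    "checkout-app",
    risk_focus_area="Payment",
)
```

Unspecified optional fields fall back to the documented defaults, so only deliberate choices appear in your scripts.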

Field Explanations

Session Name

Short, descriptive name for your own reference.

Examples:

  • “Mobile App Shopping Flow”
  • “User Profile Editing”
  • “Payment Checkout Security”

Target URL

The starting URL for exploration. The agent will crawl from this page.

Must include the protocol:

  • https://example.com ✓
  • https://app.example.com/dashboard ✓
  • example.com ✗ (missing https://)
  • localhost:3000 ✗ (use http://localhost:3000 instead)
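The protocol requirement can be checked before submitting a session. A minimal sketch using Python's standard `urllib.parse` (the validator itself is hypothetical, not built into the product):

```python
from urllib.parse import urlparse

def is_valid_target_url(url: str) -> bool:
    """Check that a target URL has an explicit http(s) scheme and a host.

    Bare hosts like "example.com" or "localhost:3000" parse without a
    usable scheme/netloc pair, so they are rejected.
    """
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)
```

For example, `is_valid_target_url("http://localhost:3000")` passes, while `is_valid_target_url("localhost:3000")` does not.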

Project

Links the session to your project for organization and quota tracking.

Note: Exploratory sessions consume quota based on your subscription plan.

Persona Selection

What is a Persona? A pre-configured user account or login credentials for testing authenticated flows.

No Persona (Guest/Unauthenticated):

  • Tests public pages only
  • No login required
  • Good for: Public landing pages, authentication flows, public documentation

With Persona:

  • Tests authenticated pages and user-specific flows
  • Logs in automatically before exploration
  • Good for: Dashboard testing, payment flows, user-specific features

How to Create a Persona:

  1. Go to Settings → Personas
  2. Click New Persona
  3. Enter name, email, password (or MFA details)
  4. Save and use in exploratory sessions

Duration

How long the AI agent should explore (in minutes).

Recommended by Use Case:

  • Quick smoke test: 5-10 minutes (just main flows)
  • Standard exploration: 20-30 minutes (most flows)
  • Deep exploration: 45-60 minutes (all flows + edge cases)
  • Comprehensive audit: 90-120 minutes (all interactions + variations)

Note: Longer duration = more bugs found but higher cost (quota usage).

Risk Focus Area

The area the AI should prioritize while exploring.

Available Risk Areas:

| Area | What It Tests | Good For |
|---|---|---|
| Authentication | Login, password reset, 2FA, session handling | Identity/auth testing |
| Payment | Checkout, payment forms, billing updates | Fintech/e-commerce |
| User Data | Profile editing, data deletion, personal info | Privacy/security |
| All | Everything equally | General QA |

Example: If you select “Payment”, the AI will prioritize finding payment-related flows and edge cases (e.g., expired cards, invalid zip codes, currency conversion errors).

Monitoring Session Progress

Session States

| State | Meaning | Duration |
|---|---|---|
| Pending | Initializing, not yet started | 0-2 min |
| Scouting | Discovering pages (20% of progress) | ~5-10 min |
| Exploring | Testing interactions (60% of progress) | ~15-30 min |
| Reporting | Analyzing findings, generating report (20% of progress) | ~5-10 min |
| Completed | Session finished, results ready | |
| Failed | Error occurred (timeout, network issue) | |
| Cancelled | User stopped the session | |
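If you drive sessions from a script, you can poll until the session reaches a terminal state. A sketch under the assumption that `get_session_state` is some callable returning the current state string; it is a hypothetical stand-in, not a documented API:

```python
import time

# The three states from which a session never transitions further.
TERMINAL_STATES = {"Completed", "Failed", "Cancelled"}

def wait_for_session(get_session_state, poll_seconds=30, timeout_minutes=130):
    """Poll a session until it reaches a terminal state or the timeout elapses.

    get_session_state: hypothetical callable returning the current state string.
    """
    deadline = time.monotonic() + timeout_minutes * 60
    while time.monotonic() < deadline:
        state = get_session_state()
        if state in TERMINAL_STATES:
            return state
        time.sleep(poll_seconds)
    raise TimeoutError("session did not finish within the timeout")
```

The timeout is set slightly above the 120-minute maximum duration so a full-length session can finish before the poller gives up.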

Real-Time Metrics

While exploring, you’ll see:

  • Pages Discovered: Count of unique pages found (increases during Scouting phase)
  • Interactions Tested: Number of user interactions attempted (increases during Exploring phase)
  • Bugs Found: Live count of issues detected (increases throughout)
  • Session Progress: Visual progress bar (Scouting → Exploring → Reporting)

Bug Severity Distribution

As the session runs, bugs are categorized by severity:

| Severity | Color | Definition |
|---|---|---|
| 🔴 Critical | Red | Security vulnerability, data loss risk, core feature broken |
| 🟠 High | Orange | Major feature broken, significant user impact |
| 🟡 Medium | Yellow | Feature works but with issues, poor user experience |
| 🟢 Low | Green | Minor issues, cosmetic problems, edge cases |
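When triaging results, it helps to surface Critical issues first. A small sketch assuming each bug is represented as a dict with a `severity` field matching the labels above (the representation is an assumption for the example):

```python
# Map the documented severity labels to a sort rank (lower = more urgent).
SEVERITY_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

def triage(bugs):
    """Sort bugs so Critical issues come first, then High, Medium, Low."""
    return sorted(bugs, key=lambda bug: SEVERITY_ORDER[bug["severity"]])
```

`sorted` is stable, so bugs of equal severity keep their original (detection) order.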

Viewing Results

After Session Completes

  1. AI-Generated Report — Natural language summary of findings
  2. Bug List — Searchable, sortable list of all detected issues
  3. Bug Details — Each bug includes:
    • Description — What the issue is
    • Steps to Reproduce — How to recreate it
    • Evidence — Screenshots, error messages, network logs
    • Severity — Critical/High/Medium/Low rating
    • Suggested Fix — AI recommendation for resolution

Exporting Results

Export session results for your team:

PDF Report
  • Executive summary
  • Bug list with severity breakdown
  • Top 10 critical issues
  • Screenshots and evidence

CSV Data
  • All bugs with fields, severity, and timestamp
  • Suitable for import into a bug tracking system

JSON
  • Raw session data for tooling integration
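The CSV export can be filtered before importing into a bug tracker, for example to keep only the severities worth filing first. A sketch using the standard `csv` module; the column names (`id`, `title`, `severity`, `timestamp`) are hypothetical, so check the actual header of your export:

```python
import csv
import io

# Sample rows with assumed column names; a real export's header may differ.
SAMPLE = """id,title,severity,timestamp
1,Checkout total rounds incorrectly,Critical,2024-05-01T12:00:00Z
2,Tooltip overlaps button,Low,2024-05-01T12:05:00Z
"""

def high_priority_rows(csv_text, levels=("Critical", "High")):
    """Return only the exported bug rows at the given severity levels."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if row["severity"] in levels]
```

Running `high_priority_rows(SAMPLE)` keeps only the Critical row, ready to hand to an importer.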

Session Limits by Plan

| Plan | Sessions/Month | Max Duration | Max Pages/Session |
|---|---|---|---|
| Starter | 10 | 30 min | 50 |
| Growth | 50 | 60 min | 100 |
| Pro | Unlimited | 120 min | 500 |

Tip: Run exploratory sessions in staging/pre-production, not production. The agent will interact with your application extensively, which may trigger notifications, modify test data, or consume resources.

Best Practices

  1. Target Staging Environment — Always use staging or test environment
  2. Disable Notifications — Temporarily turn off Slack/email alerts during exploration
  3. Clear Test Data — Run on a fresh database or reset afterward
  4. Run Regularly — Schedule weekly or monthly exploratory sessions
  5. Act on Findings — Prioritize critical/high severity bugs for immediate fixes