Configuring Exploratory Sessions
Learn how to set up exploratory testing sessions to autonomously discover bugs and edge cases in your application.
Quick Start
Session Configuration Fields
| Field | Required | Options | Default |
|---|---|---|---|
| Session Name | Yes | Any descriptive text | — |
| Target URL | Yes | Full URL (with https://) | — |
| Project | Yes | Select existing project | — |
| Persona | No | Select pre-configured persona or guest | Guest (unauthenticated) |
| Duration (minutes) | Yes | 5-120 minutes | 30 |
| Risk Focus Area | Yes | Authentication / Payment / User Data / All | All |
| Max Pages | No | 10-500 pages | 100 |
| Max Interactions per Page | No | 5-50 interactions | 20 |
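The fields and constraints in the table above can be sketched as a small validation helper. This is illustrative only: the field names and `SessionConfig` class are assumptions for the sketch, not the product's actual API.

```python
from dataclasses import dataclass
from typing import List, Optional

VALID_RISK_AREAS = {"Authentication", "Payment", "User Data", "All"}

@dataclass
class SessionConfig:
    # Field names mirror the table above; this is not the product's real API.
    session_name: str
    target_url: str
    project: str
    persona: Optional[str] = None        # None = Guest (unauthenticated)
    duration_minutes: int = 30           # allowed range: 5-120
    risk_focus_area: str = "All"
    max_pages: int = 100                 # allowed range: 10-500
    max_interactions_per_page: int = 20  # allowed range: 5-50

    def validate(self) -> List[str]:
        """Return human-readable problems; an empty list means valid."""
        problems = []
        if not self.session_name:
            problems.append("Session Name is required")
        if not self.target_url.startswith(("http://", "https://")):
            problems.append("Target URL must include the protocol (https://)")
        if not 5 <= self.duration_minutes <= 120:
            problems.append("Duration must be 5-120 minutes")
        if self.risk_focus_area not in VALID_RISK_AREAS:
            problems.append(f"Unknown Risk Focus Area: {self.risk_focus_area}")
        if not 10 <= self.max_pages <= 500:
            problems.append("Max Pages must be 10-500")
        if not 5 <= self.max_interactions_per_page <= 50:
            problems.append("Max Interactions per Page must be 5-50")
        return problems
```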
Field Explanations
Session Name
Short, descriptive name for your own reference.
Examples:
- “Mobile App Shopping Flow”
- “User Profile Editing”
- “Payment Checkout Security”
Target URL
The starting URL for exploration. The agent will crawl from this page.
Must include protocol:
- ✓ https://example.com
- ✓ https://app.example.com/dashboard
- ✗ example.com (missing https://)
- ✗ localhost:3000 (use http://localhost:3000)
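The URL rules above amount to "a scheme and a host are required". A minimal sketch of that check, using only the standard library (the function name is ours, not part of the product):

```python
from urllib.parse import urlparse

def is_valid_target_url(url: str) -> bool:
    """Check a target URL against the rules above: an http:// or https://
    scheme is required, so bare hosts like example.com or localhost:3000
    are rejected until the protocol is written out explicitly."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    # A valid URL must also carry a host after the scheme.
    return bool(parsed.netloc)
```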
Project
Links the session to your project for organization and quota tracking.
Note: Exploratory sessions consume quota based on your subscription plan.
Persona Selection
What is a Persona? A pre-configured user account or login credentials for testing authenticated flows.
No Persona (Guest/Unauthenticated):
- Tests public pages only
- No login required
- Good for: Public landing pages, authentication flows, public documentation
With Persona:
- Tests authenticated pages and user-specific flows
- Logs in automatically before exploration
- Good for: Dashboard testing, payment flows, user-specific features
How to Create a Persona:
1. Go to Settings → Personas
2. Click New Persona
3. Enter name, email, password (or MFA details)
4. Save and use in exploratory sessions
Duration
How long the AI agent should explore (in minutes).
Recommended by Use Case:
- Quick smoke test: 5-10 minutes (just main flows)
- Standard exploration: 20-30 minutes (most flows)
- Deep exploration: 45-60 minutes (all flows + edge cases)
- Comprehensive audit: 90-120 minutes (all interactions + variations)
Note: Longer sessions tend to find more bugs but consume more quota.
Risk Focus Area
The area the AI should prioritize while exploring.
Available Risk Areas:
| Area | What It Tests | Good For |
|---|---|---|
| Authentication | Login, password reset, 2FA, session handling | Identity/auth testing |
| Payment | Checkout, payment forms, billing updates | Fintech/e-commerce |
| User Data | Profile editing, data deletion, personal info | Privacy/security |
| All | Everything equally | General QA |
Example: If you select “Payment”, the AI will prioritize finding payment-related flows and edge cases (e.g., expired cards, invalid zip codes, currency conversion errors).
Monitoring Session Progress
Session States
| State | Meaning | Duration |
|---|---|---|
| Pending | Initializing, not yet started | 0-2 min |
| Scouting | Discovering pages (20% progress) | ~5-10 min |
| Exploring | Testing interactions (60% progress) | ~15-30 min |
| Reporting | Analyzing findings, generating report (20% progress) | ~5-10 min |
| Completed | Session finished, results ready | — |
| Failed | Error occurred (timeout, network issue) | — |
| Cancelled | User stopped the session | — |
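The phase weights in the table (Scouting 20%, Exploring 60%, Reporting 20%) suggest how an overall progress figure could be derived. A sketch under that assumption; the function and its inputs are illustrative, not the product's real progress API:

```python
# Phase weights from the table above: Scouting 20%, Exploring 60%, Reporting 20%.
PHASE_WEIGHTS = [("Scouting", 0.20), ("Exploring", 0.60), ("Reporting", 0.20)]

def overall_progress(state: str, phase_fraction: float) -> float:
    """Map a session state plus how far through that phase the agent is
    (0.0-1.0) to an overall 0-100 progress figure. Illustrative only."""
    if state == "Pending":
        return 0.0
    if state == "Completed":
        return 100.0
    done = 0.0
    for name, weight in PHASE_WEIGHTS:
        if name == state:
            return round((done + weight * phase_fraction) * 100, 1)
        done += weight
    raise ValueError(f"No progress defined for state {state!r}")
```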
Real-Time Metrics
While exploring, you’ll see:
- Pages Discovered: Count of unique pages found (increases during Scouting phase)
- Interactions Tested: Number of user interactions attempted (increases during Exploring phase)
- Bugs Found: Live count of issues detected (increases throughout)
- Session Progress: Visual progress bar (Scouting → Exploring → Reporting)
Bug Severity Distribution
As the session runs, bugs are categorized by severity:
| Severity | Color | Definition |
|---|---|---|
| 🔴 Critical | Red | Security vulnerability, data loss risk, core feature broken |
| 🟠 High | Orange | Major feature broken, significant user impact |
| 🟡 Medium | Yellow | Feature works but with issues, poor user experience |
| 🟢 Low | Green | Minor issues, cosmetic problems, edge cases |
Viewing Results
After Session Completes
- AI-Generated Report — Natural language summary of findings
- Bug List — Searchable, sortable list of all detected issues
- Bug Details — Each bug includes:
  - Description — What the issue is
  - Steps to Reproduce — How to recreate it
  - Evidence — Screenshots, error messages, network logs
  - Severity — Critical/High/Medium/Low rating
  - Suggested Fix — AI recommendation for resolution
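When triaging, you will usually want the bug list ordered by severity. A small sketch of that sort; the dict shape is an assumption about what a bug record looks like, not the documented export schema:

```python
# Severity ranking matching the ratings above: Critical is most severe.
SEVERITY_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

def sort_bugs(bugs):
    """Sort bug records (dicts with a 'severity' key) most severe first.
    The record shape is an assumption, not the documented export schema."""
    return sorted(bugs, key=lambda bug: SEVERITY_ORDER[bug["severity"]])
```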
Exporting Results
Export session results for your team:
# PDF Report
- Executive summary
- Bug list with severity breakdown
- Top 10 critical issues
- Screenshots and evidence
# CSV Data
- All bugs with fields, severity, timestamp
- Suitable for bug tracking system import
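Loading the exported CSV for bug-tracker import takes only the standard library. The column names below (description, severity, timestamp) are assumptions about the export, not a documented schema:

```python
import csv
import io

def bugs_from_csv(csv_text: str):
    """Parse an exported bug CSV into a list of dicts, one per bug.
    Column names are assumed, not taken from a documented schema."""
    return list(csv.DictReader(io.StringIO(csv_text)))
```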
# JSON
- Raw session data for tooling integration
Session Limits by Plan
| Plan | Sessions/Month | Max Duration | Max Pages/Session |
|---|---|---|---|
| Starter | 10 | 30 min | 50 |
| Growth | 50 | 60 min | 100 |
| Pro | Unlimited | 120 min | 500 |
Tip: Run exploratory sessions in staging/pre-production, not production. The agent will interact with your application extensively, which may trigger notifications, modify test data, or consume resources.
Best Practices
- Target Staging Environment — Always use staging or test environment
- Disable Notifications — Temporarily turn off Slack/email alerts during exploration
- Clear Test Data — Run on a fresh database or reset afterward
- Run Regularly — Schedule weekly or monthly exploratory sessions
- Act on Findings — Prioritize critical/high severity bugs for immediate fixes