Analytics & Insights
Track test performance, identify trends, and make data-driven decisions about your testing strategy with BugBrain’s analytics dashboard.
What Can You Track?
Test Execution Metrics
- Total tests run
- Pass/fail rates
- Average execution time
- Tests run over time
Quality Trends
- Pass rate trends
- Flaky test detection
- Failure patterns
- Coverage gaps
Team Activity
- Tests created per member
- Execution activity
- Most active projects
- Collaboration metrics
Usage Statistics
- Quota consumption
- Feature usage
- Integration activity
- Cost tracking
Accessing Analytics
Dashboard View:
- Go to any project
- Click “Analytics” in the sidebar
- View project-specific metrics
Organization View:
- Click organization name
- Select “Analytics”
- View org-wide metrics
Key Metrics
Pass Rate
What it shows: Percentage of tests that pass
Formula: (Passed Tests / Total Tests) × 100
What’s good:
- 95%+ - Excellent
- 85-95% - Good
- 75-85% - Needs attention
- <75% - Critical issues
Tip: A sudden drop in pass rate often indicates new bugs or environmental issues. Investigate quickly!
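As a rough sketch, the pass-rate formula and bands above can be expressed as (the band cutoffs mirror the guidance in this section):

```python
def pass_rate(passed, total):
    """Pass rate as a percentage: (Passed Tests / Total Tests) x 100."""
    if total == 0:
        return 0.0
    return passed / total * 100

def pass_rate_band(rate):
    """Map a pass rate to the bands described above."""
    if rate >= 95:
        return "Excellent"
    if rate >= 85:
        return "Good"
    if rate >= 75:
        return "Needs attention"
    return "Critical issues"
```

For example, 188 passes out of 200 runs gives a 94% pass rate, which lands in the "Good" band.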
Execution Time
What it shows: How long tests take to run
Metrics:
- Average execution time
- Min/max times
- Time by test priority
- Trends over time
What to watch:
- Increasing times (performance degradation)
- Very long tests (timeout candidates)
- Inconsistent times (flaky tests)
Flaky Tests
What it shows: Tests that pass/fail inconsistently
Detection: A test passes on some runs and fails on others against the same code
Why it matters:
- Unreliable tests waste time
- They can hide real bugs
- They reduce confidence in the suite
Action: Investigate and fix flaky tests immediately
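The detection rule above (same code, mixed outcomes) can be sketched as follows. The run tuple shape is an assumption for illustration:

```python
from collections import defaultdict

def find_flaky_tests(runs):
    """Flag a test as flaky when the same (test, commit) pair has both
    passing and failing runs. `runs` is a list of
    (test_id, commit, passed) tuples (a hypothetical record shape)."""
    outcomes = defaultdict(set)
    for test_id, commit, passed in runs:
        outcomes[(test_id, commit)].add(passed)
    # Two distinct outcomes for one (test, commit) pair means inconsistency
    return sorted({test for (test, _), seen in outcomes.items() if len(seen) == 2})
```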
Test Coverage
What it shows: How much of your app is tested
Metrics:
- Pages with tests
- Flows with tests
- Features with tests
- Critical paths covered
Goal: 100% coverage of critical user journeys
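Measured against that goal, critical-path coverage reduces to a simple set calculation (a sketch, not BugBrain's internal metric):

```python
def critical_path_coverage(critical_journeys, tested_journeys):
    """Percentage of critical user journeys that have at least one test."""
    covered = set(critical_journeys) & set(tested_journeys)
    return len(covered) / len(critical_journeys) * 100
```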
Visualization Types
Line Charts
Best for: Trends over time
- Pass rate by day/week/month
- Execution count trends
- Duration trends
Bar Charts
Best for: Comparisons
- Tests by priority
- Pass/fail by project
- Execution by team member
Pie Charts
Best for: Distributions
- Test status breakdown
- Priority distribution
- Tag distribution
Heatmaps
Best for: Patterns
- Failure frequency by test
- Execution times by day/hour
- Activity by team member
Date Ranges
Filter analytics by time period:
- Last 7 days - Recent activity
- Last 30 days - Monthly view
- Last 90 days - Quarterly trends
- Custom range - Specific period
Filtering Options
Drill down into specific data:
By Priority:
- Critical tests only
- High priority
- All priorities
By Tag:
- smoke tests
- regression
- api tests
- Custom tags
By Status:
- Passed tests
- Failed tests
- All tests
By Team Member:
- Tests created by person
- Tests run by person
- Activity per person
Reports
Daily Report
Automated daily summary:
- Tests run yesterday
- Pass/fail rate
- New failures
- Flaky test alerts
Configure: Settings → Notifications → Daily Reports
Weekly Report
Comprehensive weekly analysis:
- Week-over-week trends
- Top failures
- Coverage improvements
- Team activity summary
Delivery: Email on Monday mornings
Monthly Report
Executive summary:
- Monthly metrics
- Quality trends
- Usage statistics
- Recommendations
Audience: Management and stakeholders
Pro Tip: Schedule reports to arrive when you need them. Many teams like Monday morning weekly reports to plan the week.
Key Insights
Test Health Score
Overall quality indicator (0-100):
Components:
- Pass rate (40%)
- Flakiness (30%)
- Coverage (20%)
- Execution speed (10%)
Score Ranges:
- 90-100: Excellent
- 75-89: Good
- 60-74: Needs work
- <60: Critical
Failure Analysis
Identify why tests fail:
Common Causes:
- Element not found (40%)
- Timeout (25%)
- Assertion failed (20%)
- Network error (10%)
- Other (5%)
Action: Focus fixes on top causes
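To focus fixes on the top causes, a frequency ranking over failure records is enough. A minimal sketch, assuming one cause string per failed run:

```python
from collections import Counter

def top_failure_causes(failures, n=3):
    """Rank failure causes by frequency so fixes target the biggest buckets."""
    return Counter(failures).most_common(n)
```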
Slowest Tests
Tests taking longest to execute:
Why it matters:
- Slow tests delay feedback
- Increase CI/CD time
- Cost more to run
Action: Optimize or break into smaller tests
Most Flaky Tests
Tests failing inconsistently:
Metric: Flakiness rate = (Inconsistent runs / Total runs) × 100
Threshold: >10% is considered flaky
Action: Fix or remove flaky tests
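The flakiness-rate formula and 10% threshold above translate directly to code:

```python
def flakiness_rate(inconsistent_runs, total_runs):
    """Flakiness rate = (Inconsistent runs / Total runs) x 100."""
    return inconsistent_runs / total_runs * 100

def is_flaky(inconsistent_runs, total_runs, threshold=10.0):
    """Apply the >10% threshold described above."""
    return flakiness_rate(inconsistent_runs, total_runs) > threshold
```

For example, 3 inconsistent runs out of 20 is a 15% flakiness rate, which crosses the threshold.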
Team Analytics
Member Activity
Track individual contributions:
- Tests created
- Tests run
- Executions initiated
- Discovery sessions
Use case: Performance reviews, workload balance
Collaboration Metrics
Measure team effectiveness:
- Shared tests
- Team response to failures
- Knowledge sharing
- Cross-project work
Project Ownership
See who owns what:
- Primary maintainers per project
- Test creation by project
- Activity levels
Usage Analytics
Quota Tracking
Monitor plan limits:
- AI generations used/remaining
- Discovery sessions used/remaining
- Team members vs limit
- Days until reset
Alerts: Get notified at 80%, 90%, 100% usage
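The 80/90/100% alert logic above can be sketched as a simple threshold check:

```python
def quota_alerts(used, limit, thresholds=(80, 90, 100)):
    """Return the alert thresholds (in percent) that current usage
    has crossed, matching the 80%/90%/100% alerts described above."""
    pct = used / limit * 100
    return [t for t in thresholds if pct >= t]
```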
Feature Adoption
See which features teams use:
- Most used features
- Unused features
- Feature usage trends
- ROI indicators
Integration Activity
Track integration usage:
- Notifications sent
- Issues created
- Webhook calls
- Integration errors
Cost Analytics
Execution Costs
Track testing expenses:
- Cost per execution
- Monthly testing costs
- Cost by project
- Cost optimization opportunities
AI Usage Costs
Monitor AI spending:
- Test generation costs
- AI chat usage
- Discovery analysis costs
- Budget vs actual
ROI Metrics
Measure testing value:
- Bugs found per dollar
- Time saved vs manual testing
- False positive rate
- Developer time freed up
Export Options
Download analytics data:
Formats:
- CSV - Spreadsheet data
- PDF - Visual reports
- JSON - API integration
- Images - Charts and graphs
Use cases:
- Share with stakeholders
- Present in meetings
- Import to other tools
- Archive records
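For the CSV export path, exported metric rows map naturally onto the standard library's `csv` module. A sketch, assuming rows arrive as dicts with identical keys:

```python
import csv
import io

def metrics_to_csv(rows):
    """Serialize metric rows (a list of dicts sharing the same keys)
    to CSV text, e.g. for spreadsheet import."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```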
API Access
Access analytics programmatically:
GET /api/analytics/metrics
GET /api/analytics/trends
GET /api/analytics/reports
Use cases:
- Custom dashboards
- Integration with BI tools
- Automated reporting
- Data warehouse sync
[Pro plan feature]
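A minimal request-building sketch for the endpoints listed above. The base URL host and the `range` query parameter are placeholders; consult the API reference for the real host, parameters, and authentication:

```python
from urllib.parse import urlencode

# Placeholder host for illustration only
BASE_URL = "https://app.bugbrain.example/api"

def analytics_url(endpoint, **params):
    """Build a request URL for an analytics endpoint
    (metrics, trends, or reports)."""
    query = f"?{urlencode(params)}" if params else ""
    return f"{BASE_URL}/analytics/{endpoint}{query}"
```

For example, `analytics_url("metrics", range="30d")` yields a metrics URL scoped to the last 30 days, which you would fetch with your HTTP client of choice plus your API credentials.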
Best Practices
1. Check Daily: Review key metrics each morning to catch issues early
2. Set Baselines: Establish normal ranges for your metrics
3. Investigate Anomalies: Sudden changes deserve immediate attention
4. Share Insights: Distribute reports to relevant stakeholders
5. Act on Data: Analytics are useless if you don’t act on findings
6. Track Improvements: Measure the impact of changes to your testing strategy
Common Patterns
Declining Pass Rate
Pattern: Pass rate drops over time
Causes:
- New bugs introduced
- Environmental instability
- Test maintenance neglected
Action: Prioritize bug fixes and test updates
Increasing Execution Time
Pattern: Tests take longer each week
Causes:
- Application performance degradation
- More complex tests added
- Browser/environment slowdown
Action: Optimize tests and investigate app performance
Spike in Flaky Tests
Pattern: Sudden increase in inconsistent tests
Causes:
- Infrastructure changes
- New dynamic UI elements
- Timing issues introduced
Action: Review recent changes and add stability
Troubleshooting
Metrics not updating?
- Check that tests are running
- Verify execution history exists
- Wait for daily aggregation (runs at 2 AM)
- Clear browser cache
Wrong data showing?
- Verify date range selection
- Check applied filters
- Ensure timezone is correct
- Refresh the page
Can’t export reports?
- Check your plan includes export
- Verify file permissions
- Try different format
- Contact support if persistent