How to Generate Tests with AI
Let AI create comprehensive test suites for you. This tutorial shows you how to use BugBrain’s AI test generation feature.
What You’ll Learn
By the end of this guide, you’ll know how to:
- Generate tests from natural language descriptions
- Use knowledge base items to improve accuracy
- Review and save AI-generated tests
- Enable advanced options (edge cases, security tests)
Prerequisites
- A BugBrain account
- At least one project created
- AI generation quota available (check Settings → Billing)
Step-by-Step Tutorial
Tips for Better Results
Be Specific: The more details you provide, the better the tests will be. Instead of “test login”, try “test that a user with a valid email and password can log in and see their personalized dashboard with their name displayed in the header.”
Good Descriptions
✅ Good: “Test the checkout flow for a guest user purchasing a single product. They should add to cart, enter shipping info, enter payment details, and see an order confirmation.”
✅ Good: “Verify that form validation works on the contact form. Required fields should show error messages when left empty. Invalid emails should be rejected.”
Poor Descriptions
❌ Too Vague: “Test the website”
❌ Too Broad: “Test everything”
❌ Unclear: “Make sure it works”
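The specificity rule of thumb above can be sketched as a tiny heuristic. This is illustrative only; the phrase list and word-count cutoff are assumptions for the example, not BugBrain’s actual scoring logic:

```python
# Illustrative heuristic only -- not BugBrain's actual logic.
# Flags test descriptions that are probably too vague to generate good tests.

VAGUE_PHRASES = {"test the website", "test everything", "make sure it works"}

def looks_too_vague(description: str) -> bool:
    """Return True if a description is likely too vague for AI generation."""
    text = description.strip().lower()
    if text in VAGUE_PHRASES:
        return True
    # Very short descriptions rarely name a flow, inputs, or an expected outcome.
    return len(text.split()) < 6

print(looks_too_vague("Test the website"))  # vague
print(looks_too_vague(
    "Test that a guest user can add a product to the cart, "
    "enter shipping info, and see an order confirmation"
))  # specific enough
```

A good description, like a good heuristic, names the actor, the flow, and the expected outcome.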
Using Knowledge Base
Adding knowledge base items dramatically improves test quality:
1. Before generating: Click “Add Knowledge Items”
2. Select relevant items: Choose documentation that describes:
   - The feature you’re testing
   - Business rules or validation logic
   - Common user workflows
3. Generate: AI will use this context to create more accurate tests
Learn more about Knowledge Base →
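If you script generation rather than using the UI, the knowledge-item step amounts to passing item IDs alongside your description. The request shape below is purely hypothetical; field names such as `knowledge_item_ids` are assumptions for illustration, not BugBrain’s documented API:

```python
# Hypothetical request payload -- endpoint and field names are assumptions,
# not BugBrain's documented API. Shown only to illustrate how knowledge
# items supply extra context to generation.
import json

def build_generation_request(description, knowledge_item_ids, edge_cases=False):
    """Assemble a JSON body for a hypothetical test-generation endpoint."""
    return json.dumps({
        "description": description,
        "knowledge_item_ids": knowledge_item_ids,  # context documents to include
        "options": {"edge_cases": edge_cases},
    })

payload = build_generation_request(
    "Verify contact form validation", ["kb-validation-rules"], edge_cases=True
)
print(payload)
```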
Advanced Options Explained
Edge Case Generation
Automatically creates tests for:
- Empty inputs
- Maximum length inputs
- Special characters
- Boundary conditions
When to use: For forms, inputs, and data validation
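To make these categories concrete, here is a sketch of the inputs an edge-case test exercises, using a toy Python validator as a stand-in for your form logic. The 100-character limit is an assumed example, not a BugBrain rule:

```python
# Toy validator standing in for real form logic; MAX_LEN is an assumed example.

MAX_LEN = 100

def validate_username(value: str) -> bool:
    """Accept non-empty names up to MAX_LEN characters."""
    return 0 < len(value) <= MAX_LEN

edge_cases = {
    "empty input": "",
    "maximum length": "a" * MAX_LEN,            # boundary: exactly at the limit
    "one past the limit": "a" * (MAX_LEN + 1),  # boundary: just over
    "special characters": "rené; DROP TABLE--",
}

for name, value in edge_cases.items():
    print(f"{name}: valid={validate_username(value)}")
```

Generated edge-case tests probe exactly these boundaries: the empty case, the value at the limit, and the value just past it.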
Security Test Generation
Creates tests for:
- XSS injection attempts
- SQL injection prevention
- Authentication bypass attempts
- Authorization checks
When to use: For login, forms, and sensitive operations
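An XSS test, for example, submits known attack payloads and asserts they never reach the page unescaped. The sketch below uses Python’s standard `html.escape` as a toy stand-in for your application’s output encoding:

```python
# Illustrative payloads only -- generated security tests run these through the UI.
# html.escape is a toy stand-in for your app's real output encoding.
import html

XSS_PAYLOADS = ["<script>alert(1)</script>", '"><img src=x onerror=alert(1)>']

def render_comment(text: str) -> str:
    """Escape user input before embedding it in HTML."""
    return html.escape(text)

for payload in XSS_PAYLOADS:
    rendered = render_comment(payload)
    # A passing security test: the raw payload never survives escaping.
    assert "<script>" not in rendered
    print(rendered)
```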
Accessibility Test Generation
Creates WCAG 2.1 compliance tests:
- Keyboard navigation
- Screen reader compatibility
- Color contrast
- ARIA labels
When to use: For public-facing applications and compliance requirements
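Of these checks, color contrast is the most mechanical: WCAG 2.1 defines an exact contrast-ratio formula, sketched here in Python. The functions and thresholds follow the standard; only the example colors are chosen for illustration:

```python
# Color-contrast check per the WCAG 2.1 formula; AA requires >= 4.5:1
# for normal-size text.

def relative_luminance(rgb):
    """Relative luminance of an sRGB color per WCAG 2.1."""
    def channel(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors, from 1:1 up to 21:1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background: the maximum possible ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```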
Quality Scores Explained
Each generated test has a quality score (0-100):
| Score Range | Interpretation | Action |
|---|---|---|
| 80-100 | Excellent quality | Ready to use as-is |
| 60-79 | Good quality | Minor adjustments may be needed |
| 40-59 | Fair quality | Review and refine steps |
| 0-39 | Needs improvement | Regenerate with more context |
Pro Tip: If tests have low scores, try adding more details to your description or selecting relevant knowledge base items.
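The score bands above map directly to a small lookup, sketched here in case you want to triage exported results in a script. The helper is illustrative only; it just restates the table:

```python
# Restates the quality-score table as a lookup helper (illustrative only).

def interpret_score(score: int) -> str:
    """Map a 0-100 quality score to the recommended action from the table."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 80:
        return "Ready to use as-is"
    if score >= 60:
        return "Minor adjustments may be needed"
    if score >= 40:
        return "Review and refine steps"
    return "Regenerate with more context"

print(interpret_score(85))  # Ready to use as-is
```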
What’s Next?
After generating tests:
- Run the tests to see them in action
- Create test plans to group related tests
- Set up notifications to get alerted when tests fail
- Provide feedback using the thumbs up/down buttons to help improve AI quality
Troubleshooting
Not enough quota?
- Check your plan’s AI generation limit in Settings → Billing
- Upgrade to a higher plan for more quota
- Wait until next month when quota resets
Tests not accurate?
- Add more details to your description
- Include knowledge base items with context
- Try regenerating with different wording
Can’t find the feature?
- Make sure you’re inside a project (not on the dashboard)
- Check that you have permission to create tests (member role or higher)