
How to Generate Tests with AI

Let AI create comprehensive test suites for you. This guide shows you how to use BugBrain’s AI test generation feature.

What You’ll Learn

By the end of this guide, you’ll know how to:

  • Generate tests from natural language descriptions
  • Use knowledge base items to improve accuracy
  • Review and save AI-generated tests
  • Enable advanced options (edge cases, security tests)

Prerequisites

  • A BugBrain account (sign up here)
  • At least one project created (create project)
  • AI generation quota available (check Settings → Billing)

Step-by-Step Tutorial

1. Navigate to AI Generation
From your project dashboard, click “Test Cases” in the sidebar, then click the “Generate with AI” button.

2. Describe What You Want to Test
In the description box, write a clear explanation of what you want to test. For example: “Test the user login flow. A user should be able to enter their email and password, click sign in, and see their dashboard.”

3. Add Context (Optional but Recommended)
Click “Knowledge Base” to select relevant documentation that will help the AI understand your application. Select any guides, API docs, or business rules.

4. Configure Advanced Options (Optional)
Expand “Advanced Options” to enable edge case generation, security tests, or accessibility tests if needed.

5. Click “Generate Tests”
Click the button and wait while the AI analyzes your description and generates test cases. This usually takes 10-30 seconds.

6. Review Generated Tests
You’ll see a list of generated tests with quality scores. Review each test to ensure it matches your needs. Tests with scores of 80 or above are typically ready to use.

7. Save the Tests You Want
Select the tests you want to keep and click “Save Selected Tests”. They’ll be added to your project’s test case library.

Tips for Better Results

Be Specific: The more details you provide, the better the tests will be. Instead of “test login”, try “test that a user with a valid email and password can log in and see their personalized dashboard with their name displayed in the header.”

Good Descriptions

Good: “Test the checkout flow for a guest user purchasing a single product. They should add to cart, enter shipping info, enter payment details, and see an order confirmation.”

Good: “Verify that form validation works on the contact form. Required fields should show error messages when left empty. Invalid emails should be rejected.”

Poor Descriptions

Too Vague: “Test the website”

Too Broad: “Test everything”

Unclear: “Make sure it works”

Using Knowledge Base

Adding knowledge base items dramatically improves test quality:

  1. Before generating: Click “Add Knowledge Items”
  2. Select relevant items: Choose documentation that describes:
    • The feature you’re testing
    • Business rules or validation logic
    • Common user workflows
  3. Generate: AI will use this context to create more accurate tests

Learn more about Knowledge Base →

Advanced Options Explained

Edge Case Generation

Automatically creates tests for:

  • Empty inputs
  • Maximum length inputs
  • Special characters
  • Boundary conditions

When to use: For forms, inputs, and data validation
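To make the categories above concrete, edge-case generation maps inputs onto boundary tests. The sketch below is illustrative, not actual BugBrain output; `validate_username` and its 3-20 character rule are hypothetical stand-ins for your application’s validation logic:

```python
def validate_username(name: str) -> bool:
    """Hypothetical rule: accept usernames of 3-20 characters."""
    return 3 <= len(name) <= 20

# Edge cases of the kinds listed above: empty input, boundary
# conditions on either side of the limits, and max-length overflow.
edge_cases = {
    "": False,        # empty input
    "ab": False,      # just below the minimum
    "abc": True,      # minimum boundary
    "a" * 20: True,   # maximum boundary
    "a" * 21: False,  # just above the maximum
}

for value, expected in edge_cases.items():
    assert validate_username(value) == expected
print("all edge cases pass")
```

Boundary pairs like 20/21 characters are exactly where off-by-one bugs hide, which is why the option tests both sides of every limit.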

Security Test Generation

Creates tests for:

  • XSS injection attempts
  • SQL injection prevention
  • Authentication bypass attempts
  • Authorization checks

When to use: For login, forms, and sensitive operations
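As an illustration of what an XSS-style check verifies, the sketch below asserts that a script payload is escaped before rendering. `render_comment` is a hypothetical stand-in for your app’s rendering code, not a BugBrain API:

```python
import html

def render_comment(text: str) -> str:
    # Stand-in for your app's rendering; escapes HTML special
    # characters so user input cannot inject markup.
    return "<p>" + html.escape(text) + "</p>"

payload = "<script>alert('xss')</script>"
rendered = render_comment(payload)

assert "<script>" not in rendered       # the raw tag must not survive
assert "&lt;script&gt;" in rendered     # it should arrive escaped
print("XSS payload neutralized")
```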

Accessibility Test Generation

Creates WCAG 2.1 compliance tests:

  • Keyboard navigation
  • Screen reader compatibility
  • Color contrast
  • ARIA labels

When to use: For public-facing applications and compliance requirements
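One of the checks above, color contrast, is fully mechanical: WCAG 2.1 success criterion 1.4.3 requires a contrast ratio of at least 4.5:1 for normal text. A minimal sketch of that computation (the luminance and ratio formulas come from the WCAG definition; the function names are illustrative):

```python
def relative_luminance(rgb):
    """WCAG relative luminance for an sRGB color given as 0-255 ints."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background is maximum contrast, 21:1.
assert round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1) == 21.0
# Light grey on white fails the 4.5:1 threshold for body text.
assert contrast_ratio((200, 200, 200), (255, 255, 255)) < 4.5
print("contrast checks pass")
```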

Quality Scores Explained

Each generated test has a quality score (0-100):

| Score Range | Interpretation | Action |
| --- | --- | --- |
| 80-100 | Excellent quality | Ready to use as-is |
| 60-79 | Good quality | Minor adjustments may be needed |
| 40-59 | Fair quality | Review and refine steps |
| 0-39 | Needs improvement | Regenerate with more context |
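These thresholds amount to a simple triage rule. A minimal sketch, assuming hypothetical test names and a `score` field (not BugBrain’s actual data model):

```python
# Illustrative triage of generated tests by quality score,
# mirroring the score ranges in the table above.
generated = [
    {"name": "login happy path", "score": 92},
    {"name": "password reset", "score": 71},
    {"name": "edge: empty email", "score": 45},
    {"name": "vague scenario", "score": 20},
]

def triage(score):
    if score >= 80:
        return "ready to use as-is"
    if score >= 60:
        return "minor adjustments may be needed"
    if score >= 40:
        return "review and refine steps"
    return "regenerate with more context"

for test in generated:
    print(f"{test['name']}: {triage(test['score'])}")
```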
💡 Pro Tip: If tests have low scores, try adding more details to your description or selecting relevant knowledge base items.

What’s Next?

After generating tests:

  1. Run the tests to see them in action
  2. Create test plans to group related tests
  3. Set up notifications to get alerted when tests fail
  4. Provide feedback using the thumbs up/down buttons to help improve AI quality

Troubleshooting

Not enough quota?

  • Check your plan’s AI generation limit in Settings → Billing
  • Upgrade to a higher plan for more quota
  • Wait until next month when quota resets

Tests not accurate?

  • Add more details to your description
  • Include knowledge base items with context
  • Try regenerating with different wording

Can’t find the feature?

  • Make sure you’re inside a project (not on the dashboard)
  • Check that you have permission to create tests (member role or higher)