How screenshot-to-test-case generation actually works
You upload a screenshot. Thirty seconds later, you have 15-20 test cases with steps, expected results, and edge cases. Here is what happens in between.
The input: a UI screenshot
BugBoard accepts any screenshot of your application UI. The AI works best with:
- Clear, high-resolution images
- Visible form fields, buttons, and interactive elements
- Readable text labels
- Standard UI patterns (forms, tables, modals, navigation)
You can screenshot a login page, a dashboard, a settings panel, or a checkout flow. The AI adapts to whatever it sees.
Step 1: visual analysis
The AI identifies UI elements in your screenshot:
- Form fields: text inputs, dropdowns, checkboxes, radio buttons
- Buttons: submit, cancel, navigation, toggles
- Text: labels, headings, error messages, placeholders
- Layout: sections, cards, modals, navigation menus
- State indicators: active tabs, selected items, loading states
For a login form, the AI recognizes: email input field, password input field, "Remember me" checkbox, "Login" button, "Forgot password" link, and any visible validation messages.
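To make the shape of this step concrete, here is a minimal sketch of the kind of structured record a visual-analysis pass might emit per detected element. The class and field names are illustrative assumptions, not BugBoard's actual schema:

```python
from dataclasses import dataclass

@dataclass
class UIElement:
    # Hypothetical fields for illustration only:
    kind: str    # "text_input", "checkbox", "button", "link", ...
    label: str   # visible label or placeholder text
    bbox: tuple  # (x, y, width, height) in screenshot pixels

# What the login-form example above might look like once analyzed:
login_form = [
    UIElement("text_input", "Email", (120, 80, 320, 40)),
    UIElement("text_input", "Password", (120, 140, 320, 40)),
    UIElement("checkbox", "Remember me", (120, 200, 20, 20)),
    UIElement("button", "Login", (120, 240, 320, 48)),
    UIElement("link", "Forgot password", (120, 300, 140, 20)),
]

print(len(login_form))  # 5 detected elements
```

Everything downstream (interaction mapping, test generation) operates on records like these rather than on raw pixels.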
Step 2: interaction mapping
Based on identified elements, the AI maps possible user interactions:
| Element | Possible Interactions |
|---------|----------------------|
| Email field | Enter valid email, enter invalid email, leave empty, enter SQL injection, enter XSS payload |
| Password field | Enter correct password, enter wrong password, leave empty, enter minimum length, enter maximum length |
| Remember me | Check, uncheck, verify persistence |
| Login button | Click enabled, click disabled, click while loading |
| Forgot password | Click, verify navigation |
Each element generates multiple test scenarios based on its type and context.
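The table above can be sketched as a simple lookup from element type to candidate interactions. This is a toy approximation under the assumption that type alone drives the expansion; the real system also weighs surrounding context:

```python
# Candidate interactions per element type, mirroring the table above.
INTERACTIONS = {
    "text_input": ["enter valid value", "enter invalid value", "leave empty",
                   "enter SQL injection", "enter XSS payload"],
    "checkbox": ["check", "uncheck", "verify persistence"],
    "button": ["click enabled", "click disabled", "click while loading"],
    "link": ["click", "verify navigation"],
}

def scenarios_for(kind: str, label: str) -> list[str]:
    """Expand one identified UI element into candidate test scenarios."""
    return [f"{label}: {action}" for action in INTERACTIONS.get(kind, [])]

print(scenarios_for("checkbox", "Remember me"))
# ['Remember me: check', 'Remember me: uncheck', 'Remember me: verify persistence']
```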
Step 3: test case generation
The AI produces structured test cases with:
Test case ID: Auto-generated unique identifier
Title: Descriptive name following "[Action] [Element] [Condition]" pattern
Steps: Numbered actions the tester performs
Expected result: What should happen after each step
Test data: Suggested input values
Priority: Based on criticality of the functionality
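As a rough data model, the fields listed above map onto a record like the following. Field names are assumptions for illustration, not BugBoard's export format:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str                  # auto-generated, e.g. "TC001"
    title: str                    # "[Action] [Element] [Condition]" pattern
    steps: list[str]              # numbered actions the tester performs
    expected: str                 # expected result
    test_data: dict = field(default_factory=dict)  # suggested input values
    priority: str = "medium"      # based on criticality of the functionality

tc = TestCase(
    case_id="TC001",
    title="Submit login with valid credentials",
    steps=["Navigate to login page", "Enter email", "Enter password",
           "Click Login button"],
    expected="User redirected to dashboard, session created",
    priority="high",
)
print(tc.case_id, len(tc.steps))  # TC001 4
```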
Example output for a login form screenshot
- TC001: Submit login with valid credentials
  1. Navigate to login page
  2. Enter "[email protected]" in email field
  3. Enter "ValidPass123!" in password field
  4. Click Login button
  - Expected: User redirected to dashboard, session created
- TC002: Submit login with invalid email format
  1. Navigate to login page
  2. Enter "not-an-email" in email field
  3. Enter any password
  4. Click Login button
  - Expected: Validation error displayed for email field
- TC003: Submit login with empty password
  1. Navigate to login page
  2. Enter "[email protected]" in email field
  3. Leave password field empty
  4. Click Login button
  - Expected: Validation error displayed for password field
- TC004: Attempt SQL injection in email field
  1. Navigate to login page
  2. Enter "' OR '1'='1" in email field
  3. Enter any password
  4. Click Login button
  - Expected: Input sanitized, login rejected, no database error exposed
- TC005: Verify remember me persistence
  1. Navigate to login page
  2. Enter valid credentials
  3. Check "Remember me" checkbox
  4. Click Login button
  5. Close browser
  6. Reopen browser and navigate to application
  - Expected: User remains logged in
...and 10-15 more covering boundary conditions, accessibility, and error states.
Step 4: edge case detection
The AI specifically looks for scenarios humans often miss:
- Boundary values: Maximum field lengths, minimum values, zero
- Special characters: Unicode, emojis, HTML entities
- Concurrency: Double-click prevention, race conditions
- Accessibility: Keyboard navigation, screen reader compatibility
- Security: Input validation, authentication bypass attempts
These edge cases represent 40-60% of the generated test cases and often catch bugs that manual test case writing overlooks.
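A toy generator for the boundary and special-character categories above might look like this. The categories mirror the list; the concrete values are my own examples, not the tool's actual corpus:

```python
def edge_case_inputs(max_length: int = 255) -> dict[str, list[str]]:
    """Illustrative edge-case inputs for a single text field."""
    return {
        # Boundary values: empty, single char, at and past the limit, zero
        "boundary": ["", "a", "a" * max_length, "a" * (max_length + 1), "0"],
        # Special characters: Unicode, emoji, HTML entities, markup
        "special": ["名前", "🙂", "&amp;", "<script>alert(1)</script>"],
        # Injection payloads
        "injection": ["' OR '1'='1", '"; DROP TABLE users; --'],
    }

cases = edge_case_inputs(max_length=5)
print(cases["boundary"])  # ['', 'a', 'aaaaa', 'aaaaaa', '0']
```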
Using the MCP tool
If you are using AI agents with BugBoard's MCP server, the \