Visual regression testing

Catch unintended UI changes before they reach production

Automatically detect unintended visual changes in your UI by comparing screenshots over time.

What you'll build

A visual testing system that:

  1. Captures baseline screenshots of your pages
  2. Compares new screenshots against baselines
  3. Alerts you when visual differences are detected
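
A suggested layout for the test files (the paths match the scripts used throughout this guide):

visual-tests/
├── pages.js             # pages and viewports to test
├── capture.js           # captures screenshots via the API
├── compare.js           # pixel-level diffing
├── report.js            # summarizes results and writes diff images
├── update-baselines.js  # refreshes approved baselines
├── baselines/           # approved reference screenshots (committed)
└── diffs/               # diff images from failed runs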

How it works

┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│  Baseline   │    │    New      │    │   Compare   │
│ Screenshot  │ => │ Screenshot  │ => │  & Report   │
└─────────────┘    └─────────────┘    └─────────────┘

Basic implementation

Define pages to test

// visual-tests/pages.js
export const pages = [
  { name: 'homepage', url: '/' },
  { name: 'pricing', url: '/pricing' },
  { name: 'login', url: '/login' },
  { name: 'dashboard', url: '/dashboard', auth: true },
  { name: 'settings', url: '/settings', auth: true },
];

export const viewports = [
  { name: 'desktop', width: 1920, height: 1080 },
  { name: 'tablet', width: 768, height: 1024 },
  { name: 'mobile', width: 375, height: 667 },
];

Capture screenshots

// visual-tests/capture.js
import { pages, viewports } from './pages.js';

const BASE_URL = process.env.TEST_URL || 'http://localhost:3000';

export async function captureScreenshots() {
  const screenshots = [];

  for (const page of pages) {
    for (const viewport of viewports) {
      const response = await fetch('https://api.allscreenshots.com/v1/screenshots', {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${process.env.ALLSCREENSHOTS_API_KEY}`,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({
          url: `${BASE_URL}${page.url}`,
          viewport: { width: viewport.width, height: viewport.height },
          format: 'png',
          waitUntil: 'networkidle',
          // Hide dynamic content that changes between runs
          hideSelectors: [
            '[data-testid="timestamp"]',
            '[data-testid="random-content"]',
          ],
        }),
      });

      if (!response.ok) {
        throw new Error(`Screenshot failed for ${page.name} (${response.status})`);
      }

      const buffer = await response.arrayBuffer();
      const filename = `${page.name}-${viewport.name}.png`;

      screenshots.push({
        name: `${page.name}/${viewport.name}`,
        filename,
        buffer,
      });
    }
  }

  return screenshots;
}

Compare with baselines

Use a library like pixelmatch (paired with pngjs to decode the images) or looks-same for the comparison:

// visual-tests/compare.js
import pixelmatch from 'pixelmatch';
import { PNG } from 'pngjs';

export async function compareScreenshots(baseline, current) {
  const baselineImg = PNG.sync.read(baseline);
  const currentImg = PNG.sync.read(current);

  // pixelmatch requires identical dimensions; treat a size change as a failure
  if (
    baselineImg.width !== currentImg.width ||
    baselineImg.height !== currentImg.height
  ) {
    return { passed: false, diffPercentage: 100, diffImage: null };
  }

  const { width, height } = baselineImg;
  const diff = new PNG({ width, height });

  const mismatchedPixels = pixelmatch(
    baselineImg.data,
    currentImg.data,
    diff.data,
    width,
    height,
    { threshold: 0.1 }
  );

  const totalPixels = width * height;
  const diffPercentage = (mismatchedPixels / totalPixels) * 100;

  return {
    passed: diffPercentage < 0.1, // Less than 0.1% difference
    diffPercentage,
    diffImage: PNG.sync.write(diff),
  };
}

Generate report

// visual-tests/report.js
import fs from 'node:fs/promises';

export async function generateReport(results) {
  const passed = results.filter(r => r.passed);
  const failed = results.filter(r => !r.passed);

  console.log(`\nVisual Regression Test Results`);
  console.log(`==============================`);
  console.log(`Passed: ${passed.length}`);
  console.log(`Failed: ${failed.length}`);

  if (failed.length > 0) {
    console.log(`\nFailed tests:`);
    for (const result of failed) {
      console.log(`  - ${result.name}: ${result.diffPercentage.toFixed(2)}% different`);
    }
  }

  // Save diff images for review
  await fs.mkdir('visual-tests/diffs', { recursive: true });
  for (const result of failed) {
    if (!result.diffImage) continue; // e.g. baseline/current size mismatch
    await fs.writeFile(
      `visual-tests/diffs/${result.name.replace('/', '-')}-diff.png`,
      result.diffImage
    );
  }

  return failed.length === 0;
}
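
A small runner can tie the three steps together. This is a sketch: it assumes captureScreenshots, compareScreenshots, and generateReport are exported as shown above, and that this file is what npm run test:visual invokes.

// visual-tests/run.js (a minimal glue script for `npm run test:visual`)
import fs from 'node:fs/promises';
import { captureScreenshots } from './capture.js';
import { compareScreenshots } from './compare.js';
import { generateReport } from './report.js';

async function run() {
  const screenshots = await captureScreenshots();
  const results = [];

  for (const screenshot of screenshots) {
    // Missing baselines throw here; create them first with the update script
    const baseline = await fs.readFile(`visual-tests/baselines/${screenshot.filename}`);
    const result = await compareScreenshots(baseline, Buffer.from(screenshot.buffer));
    results.push({ name: screenshot.name, ...result });
  }

  const allPassed = await generateReport(results);
  // Exit non-zero so the CI job fails when visual differences are found
  process.exit(allPassed ? 0 : 1);
}

run();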

Integration with CI/CD

GitHub Actions

# .github/workflows/visual-tests.yml
name: Visual Regression Tests

on:
  pull_request:
    branches: [main]

jobs:
  visual-test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci

      - name: Build application
        run: npm run build

      - name: Start application
        run: npm start &
        env:
          NODE_ENV: production

      - name: Wait for app to be ready
        run: npx wait-on http://localhost:3000

      - name: Run visual tests
        run: npm run test:visual
        env:
          ALLSCREENSHOTS_API_KEY: ${{ secrets.ALLSCREENSHOTS_API_KEY }}
          TEST_URL: http://localhost:3000

      - name: Upload diff images
        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: visual-diffs
          path: visual-tests/diffs/

Updating baselines

When intentional changes are made, update the baseline images:

# Update all baselines
npm run test:visual:update

# Update specific page
npm run test:visual:update -- --page=homepage

The update script re-captures every page and overwrites the stored baselines:

// visual-tests/update-baselines.js
import fs from 'node:fs/promises';
import { captureScreenshots } from './capture.js';

async function updateBaselines(filter) {
  await fs.mkdir('visual-tests/baselines', { recursive: true });
  const screenshots = await captureScreenshots();

  for (const screenshot of screenshots) {
    if (filter && !screenshot.name.includes(filter)) {
      continue;
    }

    await fs.writeFile(
      `visual-tests/baselines/${screenshot.filename}`,
      Buffer.from(screenshot.buffer)
    );
    console.log(`Updated baseline: ${screenshot.filename}`);
  }
}
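
To honor the --page flag from the npm script above, the script can pull the filter from its arguments (a sketch):

// Parse `--page=homepage` into the filter argument
const pageArg = process.argv.find(arg => arg.startsWith('--page='));
updateBaselines(pageArg?.split('=')[1]);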

Advanced techniques

Test different states

Capture each page in several states. The setup helpers below (clearData, mockSlowResponse, and so on) are placeholders for your own test fixtures:

const states = [
  { name: 'empty', setup: () => clearData() },
  { name: 'loading', setup: () => mockSlowResponse() },
  { name: 'error', setup: () => mockErrorResponse() },
  { name: 'populated', setup: () => seedTestData() },
];

for (const state of states) {
  await state.setup();
  await captureScreenshot(`${page.name}-${state.name}`);
}

Test dark mode

const themes = [
  { name: 'light', darkMode: false },
  { name: 'dark', darkMode: true },
];

for (const theme of themes) {
  const screenshot = await capture({
    url: pageUrl,
    darkMode: theme.darkMode,
  });
}

Test responsive breakpoints

const breakpoints = [
  { name: 'mobile-sm', width: 320 },
  { name: 'mobile-md', width: 375 },
  { name: 'mobile-lg', width: 425 },
  { name: 'tablet', width: 768 },
  { name: 'laptop', width: 1024 },
  { name: 'desktop', width: 1440 },
  { name: 'desktop-xl', width: 1920 },
];
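
These widths can be fed through the same capture loop, holding the height constant so diffs reflect only width-driven layout changes (a sketch reusing the capture helper from the examples above):

for (const breakpoint of breakpoints) {
  const screenshot = await capture({
    url: pageUrl,
    // Fixed height keeps diffs focused on responsive layout shifts
    viewport: { width: breakpoint.width, height: 1080 },
  });
}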

Handle dynamic content

Hide or stabilize dynamic content:

const screenshot = await capture({
  url: pageUrl,
  hideSelectors: [
    '.timestamp',
    '.random-avatar',
    '[data-dynamic]',
  ],
  customCss: `
    /* Disable animations */
    *, *::before, *::after {
      animation: none !important;
      transition: none !important;
    }

    /* Stabilize randomized content */
    .random-element {
      background: #ccc !important;
    }
  `,
});

Best practices

Visual tests work best when the environment is consistent. Use the same browser, viewport, and wait conditions for every run.
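
One way to enforce this is to centralize the capture options in a single module so no run drifts (a sketch; config.js is a hypothetical file):

// visual-tests/config.js (hypothetical shared defaults)
export const captureDefaults = {
  format: 'png',
  waitUntil: 'networkidle',
};

// Spread the defaults into every request body:
// body: JSON.stringify({ ...captureDefaults, url, viewport })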

Threshold tuning

  • Start with a low threshold (0.1%) and adjust based on false positives
  • Different pages may need different thresholds (see the sketch below)
  • Anti-aliasing can cause small pixel differences; account for this when tuning the per-pixel threshold
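
For per-page thresholds, a small lookup map works (a sketch; the overrides shown are hypothetical):

// Pages not listed fall back to the default threshold (percent difference)
const thresholds = {
  default: 0.1,
  dashboard: 0.5, // e.g. charts that render with minor variations
};

const thresholdFor = (pageName) => thresholds[pageName] ?? thresholds.default;

// compareScreenshots would need the page name passed in to use this:
// passed: diffPercentage < thresholdFor(pageName)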

Selective testing

Don't test everything—focus on:

  • Critical user flows
  • Components with complex styling
  • Recently changed pages
  • Pages with historical visual bugs

Test isolation

  • Use consistent test data
  • Mock external content (images, ads)
  • Disable analytics and third-party scripts
  • Use deterministic timestamps

For example, block third-party content and hide external images during capture:

const screenshot = await capture({
  url: pageUrl,
  blockLevel: 'pro', // Block third-party content
  customCss: `
    img[src*="external"] {
      visibility: hidden;
    }
  `,
});
