ScanU is a visual regression and cross-browser screenshot testing platform for professional web teams. It captures pixel-accurate renderings of every page you care about, across the browsers and viewports your users actually use, and surfaces the layout changes that unit and end-to-end tests silently miss.
What ScanU is
ScanU turns the last mile of release quality — the part your users actually see — into a signal your team can act on. It takes pixel-stable screenshots of your pages, stores an explicit baseline, and every subsequent run is compared against that baseline across the browsers and devices that matter.
Most teams already run linters, type checks, unit tests, and end-to-end flows. All of that can stay green while a button slips twelve pixels down, a modal loses its padding, a menu breaks on Safari, or a mobile breakpoint silently re-flows the hero. Functional tests verify behaviour; they do not verify what the page looks like. That is what a screenshot testing tool is for, and it is what ScanU is designed to do without ceremony.
Under the hood, ScanU drives real rendering engines — Chromium, Gecko, and WebKit — against the URLs or component previews you nominate. Each run produces a deterministic capture for every (browser, viewport) pair you configure. The first accepted run becomes the baseline. From there, every pull request, every deployment, every scheduled check is a diff against that baseline: any visual regression is highlighted as a reviewable change, the same way a code review surfaces a changed line.
The full comparison pipeline handles the details that make screenshot diffing actually useful in a production workflow: anti-aliasing tolerance, animation stabilisation, font loading, dynamic content masking, viewport scroll capture, and perceptual diff thresholds so that a sub-pixel font hinting change doesn’t drown out a real layout regression.
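In spirit, that browser-and-viewport matrix and its stabilisation options read like a declarative configuration. The sketch below is purely illustrative: every key name is an assumption for the sake of the example, not ScanU's actual schema.

```typescript
// Hypothetical configuration sketch. Key names and structure are
// illustrative assumptions, not ScanU's real schema.
interface CaptureConfig {
  engines: Array<"chromium" | "gecko" | "webkit">;
  viewports: Array<{ width: number; height: number }>;
  readiness: { fonts: boolean; networkIdleMs: number };
  diff: { perceptualThreshold: number; antiAliasTolerance: number };
  masks: string[]; // CSS selectors for dynamic regions excluded from the diff
}

const config: CaptureConfig = {
  engines: ["chromium", "gecko", "webkit"],
  viewports: [
    { width: 1440, height: 900 }, // desktop
    { width: 768, height: 1024 }, // tablet
    { width: 390, height: 844 },  // mobile
  ],
  readiness: { fonts: true, networkIdleMs: 500 },
  diff: { perceptualThreshold: 0.01, antiAliasTolerance: 8 },
  masks: ["[data-live-price]", ".relative-timestamp"], // invented selectors
};

// One capture per (engine, viewport) pair, every run.
const capturesPerRun = config.engines.length * config.viewports.length;
```

With three engines and three viewports, every run produces nine captures, each diffed against its own accepted baseline.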
ScanU is built for teams who treat the UI as a contract with their users. It is a visual regression testing tool, a cross-browser screenshot comparison tool, and a diff review surface — in one platform — sitting between your build and your release gate.
Deterministic rendering with font-loading waits, animation pinning, and dynamic content masking so diffs reflect real regressions, not flaky noise.
Chromium, Gecko, and WebKit engines driven in parallel, across the desktop, tablet, and mobile viewports your users actually browse from.
Side-by-side baseline, current, and highlighted diff for every change. Accept, reject, or update the baseline with an explicit action — no silent drift.
Runs inline with your pipeline. Status checks, PR comments, and artefact links sit where your engineers already review code.
Point ScanU at Storybook, component sandboxes, or live URLs. Token changes and component refactors get an honest visual audit before they ship.
Screenshots and metadata are stored on European infrastructure, with GDPR-aligned data handling — suitable for teams with strict data residency requirements.
Why it matters
The cost of a visual bug is rarely measured in code time. It is measured in the trust of a user who opens your product, sees something subtly wrong, and quietly decides the team does not sweat the details. That trust is difficult to rebuild, and it is almost always cheaper to prevent the bug.
Functional tests answer one question well: “does the button do the right thing when clicked?” They are silent on everything that happens before the click — whether the button is the right colour, the right size, in the right place, readable on a 360-pixel screen, or visible at all after the latest CSS refactor. This is the gap where most release-day incidents actually live.
Real regressions we see, and that ScanU is designed to catch, tend to cluster around a handful of predictable triggers: a CSS refactor that touches shared styles, a design-token or theme change that ripples through every component, a polyfill or dependency upgrade that behaves slightly differently in the wild than on the developer's machine, a browser engine that ships a feature on its own schedule, and a breakpoint tweak that quietly re-flows a mobile layout.
None of those break an assertion. All of them break a user. ScanU’s job is to make the second kind of breakage as easy to see, review, and block as a failing unit test. When the diff appears on the pull request, a designer can weigh in the same way a reviewer weighs in on a function signature — before the change merges, not after it reaches production.
That shift — from “hope someone notices in staging” to “review a visual change the way we review a code change” — is the core value of a visual regression testing tool in a modern release pipeline.
Cross-browser testing
Cross-browser testing is not a nostalgia tax from the IE6 era. Modern engines — Chromium, Gecko, and WebKit — still diverge on the details that your layouts quietly depend on, and a surprising number of production incidents trace back to a rendering difference nobody audited.
If a regression only shows up in one engine, it almost always means one of three things: a feature was used that ships later in some engines than others; a layout depends on a metric that differs by platform (scrollbar width, font baseline, smoothing algorithm); or a user-agent-conditional polyfill is doing something slightly different in the wild than it did on the developer’s machine. All three are invisible to an engine-agnostic test suite.
Typical examples we see during a cross-browser audit:
- Form inputs such as date, datetime-local, and select render with visibly different chrome on WebKit vs Chromium, occasionally pushing adjacent elements around by a few pixels.
- Newer CSS features — gap inside flex containers, aspect-ratio, and container queries — have shipped at slightly different moments across engines. If your build targets an older baseline, fallback paths may render differently than the primary path.

ScanU captures every page across the engines and viewports you configure, in a single run, with identical fixtures and timing hooks so that the diffs you see are real layout differences — not a side-effect of running each browser in a different harness. You can see the full cross-browser testing capabilities on the product page: the matrix you configure is the matrix captured on every run.
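The fallback-path divergence is easy to make concrete. A common aspect-ratio pattern (the class name below is illustrative) has two render paths, and engines take different ones depending on feature support:

```css
/* Illustrative: a 16:9 media box with a padding-hack fallback. Engines that
   support aspect-ratio take the second path; older engines take the first,
   and the two paths can render a few pixels apart. */
.card-media { position: relative; padding-top: 56.25%; } /* 16:9 fallback */

@supports (aspect-ratio: 16 / 9) {
  .card-media { padding-top: 0; aspect-ratio: 16 / 9; }
}
```

A single-engine test suite only ever exercises one of those paths; a cross-browser capture matrix exercises both.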
For teams shipping design systems, or any product whose visual identity is part of its brand, that matrix is not a nicety. It is the only way to know that what your customer sees in Safari is what you signed off on in Chrome.
| Browser | Viewport | Status |
|---|---|---|
| Chromium 124 | 1440 × 900 | Baseline match |
| Firefox 126 | 1440 × 900 | Baseline match |
| WebKit (Safari 17) | 1440 × 900 | Visual diff · review |
| Chromium 124 | 768 × 1024 | Baseline match |
| WebKit (Safari 17) | 768 × 1024 | Layout shift detected |
| Chromium 124 | 390 × 844 | New baseline pending |
How screenshot comparison works
A visual diff tool is only as useful as the signal-to-noise ratio of the diffs it produces. ScanU’s capture pipeline is engineered so that the only thing that changes between two runs is the thing you actually changed — not the wallpaper of fonts, animations, timestamps, or ad slots around it.
Every run is a three-step pipeline. First, ScanU navigates each browser engine to the page or component under test, waits for a configured set of readiness signals — fonts loaded, images decoded, network quiet, a data-ready attribute, or a custom JavaScript probe — and pins any CSS animations to their final frame. Second, it captures the page at every configured viewport in the same run, using scroll-stitching for long pages so you get the full artefact, not just the viewport fold. Third, it compares that capture to the accepted baseline and produces a per-(browser, viewport) result: identical, within tolerance, or meaningfully different.
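The stabilisation half of step one can be sketched as follows. The PageLike interface is a minimal Playwright-style stand-in, and the whole function is an assumption about shape for illustration; ScanU's internal driver is not public.

```typescript
// Sketch of the stabilisation step. PageLike is a minimal Playwright-style
// stand-in; ScanU's real driver is not public, so this shape is an assumption.
interface PageLike {
  addStyleTag(opts: { content: string }): Promise<void>;
  evaluate(expression: string): Promise<unknown>;
}

// CSS injected before capture: pin animations and transitions so that two
// runs of the same page render the same frame.
const pinAnimationsCss = [
  "*, *::before, *::after {",
  "  animation-play-state: paused !important;",
  "  transition: none !important;",
  "  caret-color: transparent !important;",
  "}",
].join("\n");

async function stabilise(page: PageLike): Promise<void> {
  // 1. Wait for web fonts: document.fonts.ready resolves once all declared
  //    fonts have loaded, so text metrics stop shifting between runs.
  await page.evaluate("document.fonts.ready");
  // 2. Freeze animations and hide the text caret for a deterministic frame.
  await page.addStyleTag({ content: pinAnimationsCss });
}
```

Only after this settles does the capture step run, which is why the same page produces the same bytes on consecutive runs.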
The comparison itself is not a naive pixel diff. ScanU applies a perceptual model that understands anti-aliasing, subpixel rendering, and colour-space nudges, so a WebKit kerning quirk does not masquerade as a bug. It also supports explicit masks for the regions you know will change every load — live data, relative timestamps, randomised content, carousels — so those areas contribute zero noise to the diff.
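A masked, tolerance-aware comparison can be illustrated with a deliberately naive sketch. ScanU's perceptual model is more sophisticated than a per-channel threshold, so treat the function below as a toy; every name in it is invented here.

```typescript
// Toy sketch of masked, tolerance-aware comparison over flat RGBA buffers.
// ScanU's perceptual model is more sophisticated; all names are invented.
type Rect = { x: number; y: number; w: number; h: number };

function diffRatio(
  base: Uint8ClampedArray,
  current: Uint8ClampedArray,
  width: number,
  height: number,
  masks: Rect[] = [],
  channelTolerance = 8, // absorbs anti-aliasing and sub-pixel nudges
): number {
  let changed = 0;
  let compared = 0;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      // Masked regions (live data, timestamps) contribute zero noise.
      if (masks.some(m => x >= m.x && x < m.x + m.w && y >= m.y && y < m.y + m.h)) {
        continue;
      }
      compared++;
      const i = (y * width + x) * 4;
      for (let c = 0; c < 3; c++) { // compare RGB channels, ignore alpha
        if (Math.abs(base[i + c] - current[i + c]) > channelTolerance) {
          changed++;
          break;
        }
      }
    }
  }
  return compared === 0 ? 0 : changed / compared;
}
```

A run is then flagged for review when the changed-pixel ratio exceeds the configured threshold for that page.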
When a real change appears, the platform shows you three synchronised panes: the baseline, the current capture, and a diff overlay that highlights exactly what moved. A single reviewable decision — accept, reject, or update the baseline — is recorded on each change, and that decision becomes part of the project’s history. There is no silent drift and no auto-acceptance: the baseline only moves when a human with permission says it should.
The same pipeline can be driven from three entry points: against live URLs (production, staging, PR previews), against a static site or built asset bundle, or against component previews in a Storybook-style sandbox. Most teams use the first for release gates, the second for deployment artefacts, and the third for design-system work — all feeding the same baseline store.
The last accepted render for this page, browser, and viewport. Frozen until a reviewer explicitly updates it.
The capture from this run. Same fixtures, same timing hooks, same viewport — only the code under test has changed.
Regions highlighted in amber show where the current render diverges from the baseline beyond the configured tolerance.
Who ScanU is for
ScanU is built for the teams whose work is judged by what the user sees. That is a broader group than it sounds — it is not only designers and frontend engineers, but anyone whose release decisions depend on a UI being correct on a given browser, at a given viewport, on a given day.
Frontend and full-stack engineers use ScanU as the last stage in a pull-request pipeline: the build goes green, the unit tests pass, the end-to-end suite is happy, and then a visual diff either confirms the change is what the designer expected, or surfaces a regression nobody would have caught by reading a patch. The result is less time spent firefighting post-deploy and more time spent on the work that actually moves the product.
QA engineers get a test tier that scales without rewriting selectors. Adding a new page to the visual suite is a URL and a baseline — not a week of brittle XPath. The tier that catches UI regressions is now a first-class part of the test pyramid rather than a post-hoc Slack message from a user.
Design-system and platform teams run ScanU against every component sandbox in their library. When a token changes, every component that depends on it gets a visual audit automatically. When a component is refactored, the downstream consumers get a diff they can review before the new version lands. This is how a design system avoids accumulating quiet regressions over years.
Product and engineering leaders use the review surface as evidence. Release notes can include a link to the accepted visual diff for each release. Incident reviews can reference whether a regression was caught, missed, or explicitly accepted. Over time, the platform becomes a record of how the UI has evolved and who signed off on each step.
Use cases
The same platform serves very different workflows. The common thread is that each of these teams ships UI on a schedule their customers care about, and each needs a way to tell — at a glance — whether the UI still looks the way it should.
Whatever the shape of the team, ScanU is typically adopted for one of five reasons: a recent production incident that a visual regression would have caught; a design-system migration where the blast radius of changes is impossible to audit manually; a CI/CD workflow that needs a release gate richer than “tests passed”; a client-facing deliverable that needs before/after evidence; or a compliance posture that prefers EU-hosted tooling for testing assets. A shared CI/CD integration sits behind all five — the same pipeline does the work whether it is triggered by a PR, a merge, or a nightly schedule.
Hand clients a weekly visual delta per site. Before/after evidence on every sprint, across every browser the contract calls for, in a single report.
Add a visual tier to the test pyramid. New pages enter the suite with a URL and a baseline — no brittle selectors, no flaky screenshot scripts to maintain.
Screenshot diffs appear inline on pull requests. Layout regressions are caught in review, not in production, and not in a Slack thread from a customer.
Move fast without breaking the homepage. A deterministic safety net that keeps the marketing site pixel-stable through redesigns, A/B tests, and token migrations.
Run visual coverage on every component sandbox. Token changes, theme swaps, and component refactors all get an automatic audit before the library publishes.
Validate breakpoints across mobile, tablet, and desktop in the same run. Responsive design testing stops being a screenshot-grabbing chore and becomes a review step.
CI/CD and release confidence
Release confidence is not a feeling. It is a pipeline that tells the truth about what is shipping — including the parts your test runner has no opinion about. ScanU plugs into that pipeline as a status check, a PR comment, and a stored artefact you can link to from any incident review.
The integration model is deliberately boring: run the ScanU capture step in the same job that already runs your tests; let it report a pass, a review-needed, or a fail on the PR; and treat that status like any other required check. Teams already know how to read green, yellow, and red from their pipeline — a visual regression check slotted into that rhythm adds a new dimension of safety without requiring a new mental model.
Under the hood, ScanU is designed for the realities of modern CI/CD: parallel capture across the full browser-and-viewport matrix, branch-scoped baselines for pull requests, status checks and PR comments that land where reviewers already work, and stored artefacts you can link to from any incident review.
The step-by-step CI/CD integration guide walks through the specific wiring for GitHub Actions, GitLab CI, Bitbucket Pipelines, and self-hosted runners. The common case is a single extra job; the more advanced case is a matrix of environments promoting a baseline forward through a staged release.
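For flavour, a GitHub Actions job could look like the sketch below. The action name scanu/capture-action and its inputs are hypothetical placeholders, not a published integration; the real wiring lives in the integration guide.

```yaml
# Hypothetical workflow sketch. The action name and its inputs are
# illustrative placeholders, not a published ScanU integration.
name: visual-regression
on: [pull_request]

jobs:
  scanu:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Capture and compare against baseline
        uses: scanu/capture-action@v1   # placeholder, not a real action
        with:
          project: my-site              # placeholder project id
          fail-on: layout-shift         # treat meaningful diffs as a failed check
```

Marking the job as a required status check makes the visual tier a release gate like any other.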
GDPR · EU hosting · privacy
Testing artefacts are data. Screenshots can incidentally capture personal data that ended up on a page — a name in a navigation bar, an email in a profile menu, an order reference on a confirmation screen. A responsible visual testing platform treats those artefacts with the same care as the production data they reflect.
ScanU is built and hosted in the European Union, with storage and processing located on EU-operated infrastructure. For teams with data-residency commitments — regulated industries, public-sector buyers, privacy-sensitive B2B customers — that is frequently a procurement requirement, not a preference. ScanU is designed to satisfy that requirement without forcing a compromise on product capability.
A few deliberate design choices follow from that posture: screenshots, metadata, and review history are stored and processed on EU infrastructure only; dynamic content masking can keep personal data out of captures in the first place; retention windows are explicit rather than open-ended; and every baseline change leaves an auditable trail.
For the full privacy posture — sub-processors, regional hosting, DPA, retention windows — see the GDPR and data-handling documentation. For most teams, the headline answer is the important one: your screenshots stay in the EU, and the platform is engineered to keep them that way.
What’s different
Visual testing is a crowded category. Much of what differentiates ScanU lies in the things it refuses to do — the shortcuts it will not take, the noise it will not forward to your review queue, and the opinions it holds about how a diff should be reviewed.
Generic screenshot tools tend to ship as a library you glue to your test runner, then leave you to wrestle with timing, fonts, animation, and the eternal question of whether a one-pixel shift is a real bug or a rounding artefact. ScanU takes a firmer position: if the platform cannot produce a deterministic capture, it is the platform’s problem to solve — not yours. That is why readiness signals, font loading, animation pinning, scroll stitching, perceptual diffing, and masked regions are built in rather than bolted on.
Pricing is built the same way. For a sensible team — a few dozen pages, three engines, three viewports, one pipeline — the cost of visual coverage should not be a line item that needs defending at a budget review. The full pricing page is written in plain numbers: what you get at each tier, what counts as a run, and what the upper bound is. No bespoke quote for teams that just want to ship the UI they designed.
Stabilisation, font readiness, animation pinning, and scroll stitching are platform features — not snippets you maintain in your own repo.
The diff engine understands anti-aliasing and subpixel rendering, so sub-visible nudges don’t drown out the regressions you actually care about.
Baselines only move when a human with permission accepts the change. Every update is auditable. There is no silent drift.
Screenshots, metadata, and review history live on EU infrastructure. No opt-in needed for teams that have to live in that posture.
What ScanU catches
Most of the bugs a visual regression platform catches are not exotic. They are the small, undramatic, easy-to-miss breakages that slip past reviewers because no assertion knows to look for them. Below is a sampler of the categories ScanU is specifically built to surface.
Visual bugs cluster into a handful of recurring patterns. Each one looks trivial in isolation; each one can ship to production without a single red test. All of them get surfaced as a diff on the next ScanU run, before the PR is merged.
- backdrop-filter: on Safari, a navigation bar suddenly renders opaque where it used to blur.
- text-wrap: balance is crisp on Chrome but still reads unbalanced on engines that have not shipped it. Without a cross-browser check, this is invisible on the developer's machine.

The feature overview walks through the detection primitives in more detail — perceptual tolerances, masked regions, viewport scroll capture, baseline branching, and how each maps to one of the categories above. For most teams, the shorter story is: if it changes visually, ScanU will notice; if it is expected, you accept it in one click; if it is not, you have caught it before anyone else does.
Next steps
The fastest way to decide whether visual regression testing belongs in your pipeline is to run it once against a project you already ship. ScanU is free to start, takes a handful of minutes to wire into an existing CI job, and produces its first diff on the very next commit.
Most teams start with a single high-traffic page on three browsers and three viewports. That is enough to see the platform work, and it is enough to catch the next real regression your current test suite would have missed.
From there, the scope grows with the team’s confidence: more pages, more breakpoints, component-level coverage on the design system, a required status check on every pull request. There is no call to book, no bespoke onboarding, and no minimum commitment.