A/B testing

Test variants of any widget setting against your live configuration. Daima splits traffic at a percentage you control, tracks conversions for both variants, and tells you when one is winning with statistical confidence.

A/B testing is a Pro feature, included in the Daima Pro plan ($9.99/mo flat). Free merchants can read this doc as a preview and upgrade from Billing in the admin nav. Free vs Pro details →

How it works

A test consists of variant A (your current widget settings, automatically captured from the live Widget Editor configuration) and variant B (overrides on one or more widget keys). Daima serves variant B to a configurable percentage of visitors, variant A to the rest, and tracks conversions separately for each cohort.

Tests don't affect existing subscribers — only new visitors who see the widget on a product page get assigned to a variant. The assignment is sticky per visitor (cookie-based), so a returning visitor sees the same variant they saw the first time.
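
The sketch below illustrates the mechanism: sticky, cookie-based assignment with a configurable split. The cookie name, helper functions, and 365-day lifetime are assumptions for illustration, not Daima's actual implementation.

```ts
// Sticky, cookie-based variant assignment: a minimal sketch.
// The cookie name and 365-day lifetime are assumed for illustration.
const COOKIE = "daima_ab_variant";

function readCookie(name: string): string | null {
  const match = document.cookie.match(new RegExp(`(?:^|; )${name}=([^;]*)`));
  return match ? decodeURIComponent(match[1]) : null;
}

function writeCookie(name: string, value: string, days: number): void {
  const expires = new Date(Date.now() + days * 86_400_000).toUTCString();
  document.cookie = `${name}=${encodeURIComponent(value)}; expires=${expires}; path=/`;
}

// trafficSplitB is the test's traffic-split percentage (visitors who see B).
function assignVariant(trafficSplitB: number): "A" | "B" {
  const existing = readCookie(COOKIE);
  if (existing === "A" || existing === "B") return existing; // returning visitor keeps their variant
  const variant: "A" | "B" = Math.random() * 100 < trafficSplitB ? "B" : "A";
  writeCookie(COOKIE, variant, 365); // assignment stays sticky across visits
  return variant;
}
```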

Creating a test

Open A/B Testing in the admin nav, then click New test. Configure:

  • Test name — internal label so you can identify the test in your history (e.g., "Headline copy: SS vs Save 15%")
  • Traffic split — percentage of visitors who see variant B. Default 50% (even split). High-traffic stores can lower it to limit how many visitors see the experimental variant, at the cost of slower results.
  • Variant B overrides — pick which widget settings to override. You can change the headline, CTA copy, default frequency, layout style, accent color, button radius, badge text, animation style, or any other widget setting.

Save the test. It starts as a draft — no traffic is served until you explicitly start it.
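
If it helps to picture what gets saved, here is a hypothetical sketch of a test record in TypeScript. The field names and the override value are illustrative; Daima's actual data model isn't documented here.

```ts
// A hypothetical shape for a saved test. Field names are illustrative.
interface AbTest {
  name: string;                               // internal label from "Test name"
  trafficSplitB: number;                      // % of visitors who see variant B
  variantBOverrides: Record<string, unknown>; // widget keys to override in B
  status: "draft" | "running" | "completed";
}

const headlineTest: AbTest = {
  name: "Headline copy: SS vs Save 15%",
  trafficSplitB: 50,                          // the default even split
  variantBOverrides: { headline: "Save 15% on every refill" }, // assumed key/value
  status: "draft",                            // no traffic served until started
};
```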

Starting and stopping a test

Click Start test on a draft to begin serving variants. The test transitions to running status and Daima starts logging widget views and conversions per variant.

You can stop a running test at any time. Stopping freezes the analytics — no further data is collected, but the historical results stay accessible. A stopped test transitions to completed.

Only one test runs at a time per shop. Daima won't let you start a second test while another is running — concurrent tests would cross-contaminate the variant assignment and invalidate the results. Stop the current one first, then start the new one.
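
The lifecycle and the one-test-at-a-time guard can be summarized in a short sketch. Function and field names are assumptions; this mirrors the rules above, not Daima's code.

```ts
type Status = "draft" | "running" | "completed";
interface Test { id: string; status: Status }

// Start a draft test; fails if any other test is already running.
function startTest(tests: Test[], id: string): void {
  if (tests.some((t) => t.status === "running")) {
    throw new Error("Another test is running. Stop it before starting a new one.");
  }
  const test = tests.find((t) => t.id === id);
  if (!test || test.status !== "draft") throw new Error("Only a draft test can be started.");
  test.status = "running"; // views and conversions are now logged per variant
}

// Stop a running test: analytics freeze, history stays accessible.
function stopTest(test: Test): void {
  if (test.status !== "running") throw new Error("Test is not running.");
  test.status = "completed";
}
```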

Reading test results

Each running or completed test shows:

  • Variant A and Variant B widget views — total impressions per cohort
  • Variant A and Variant B conversions — count of subscribe-and-save selections that led to a Subscription order
  • Conversion rate per variant
  • Conversion lift — variant B's lift over variant A in percentage terms
  • Statistical significance — Daima's confidence that the difference is real rather than noise. Daima targets 95% confidence; a sketch of the underlying math follows this list.
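
Daima's exact statistical method isn't specified in this doc, but a standard two-proportion z-test computes every metric above from the raw counts. Treat the following as an illustration of the math, not Daima's implementation.

```ts
// Computing the listed metrics from raw counts with a two-proportion z-test.
function erf(x: number): number {
  // Abramowitz & Stegun 7.1.26 polynomial approximation
  const sign = x < 0 ? -1 : 1;
  const t = 1 / (1 + 0.3275911 * Math.abs(x));
  const poly =
    ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t - 0.284496736) *
      t + 0.254829592) * t;
  return sign * (1 - poly * Math.exp(-x * x));
}

const normalCdf = (z: number): number => 0.5 * (1 + erf(z / Math.SQRT2));

function analyze(viewsA: number, convA: number, viewsB: number, convB: number) {
  const rateA = convA / viewsA;                       // conversion rate, variant A
  const rateB = convB / viewsB;                       // conversion rate, variant B
  const lift = (rateB - rateA) / rateA;               // B's lift over A
  const pooled = (convA + convB) / (viewsA + viewsB); // pooled rate under the null
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / viewsA + 1 / viewsB));
  const z = (rateB - rateA) / se;
  const confidence = 1 - 2 * (1 - normalCdf(Math.abs(z))); // two-sided
  return { rateA, rateB, lift, confidence, significant: confidence >= 0.95 };
}

// Example: 5,000 views per cohort, 200 conversions on A vs 251 on B
console.log(analyze(5000, 200, 5000, 251)); // confidence ≈ 0.986 → significant
```

In that example, a 4.0% vs 5.02% conversion rate is a 25.5% lift at roughly 98.6% confidence, clearing the 95% bar.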

Declaring a winner

Once a test reaches statistical significance, click Declare winner on the winning variant. Daima writes the winner's settings to your live Widget Editor configuration, ending the test and switching all traffic to the winning variant.

You can also declare a winner manually before significance is reached if you want to stop early — just be aware that the result might not be reliable. Daima warns you in the UI if you try to declare with low confidence.
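
Conceptually, declaring a winner is a merge: variant A already is the live configuration, so only a B win changes anything. A minimal sketch, with assumed names:

```ts
// Declaring a winner: merge the winning overrides into the live settings.
// Names are assumed; this mirrors the behavior described above.
function declareWinner(
  liveSettings: Record<string, unknown>,
  test: { variantBOverrides: Record<string, unknown>; status: string },
  winner: "A" | "B",
): Record<string, unknown> {
  test.status = "completed"; // the test ends; all traffic gets the winner
  if (winner === "A") return liveSettings; // A is already live: nothing to write
  return { ...liveSettings, ...test.variantBOverrides }; // B's overrides go live
}
```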

Designing a useful test

  • Test one thing at a time. If variant B changes the headline AND the CTA copy AND the layout, you won't know which change moved the needle. Pick a single variable per test.
  • Have at least 1,000 monthly visitors per product page. Below that volume, tests take months to reach significance and seasonal noise dominates. If your traffic is lower, focus on bigger UX changes (test a different layout style, not a button color). See the sample-size sketch after this list for a rough estimate of how much traffic you need.
  • Run for at least 14 days. Even with plenty of traffic, day-of-week patterns matter. A test that wins on a Tuesday-only sample might lose on a Saturday-only sample.
  • The biggest wins come from offer changes. Discount percentage, default frequency, badge text — these usually move conversion more than colors and fonts.
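
To see why low-traffic stores should test bigger swings, here is a back-of-envelope sample-size estimate for a two-proportion test at 95% confidence and 80% power. It's a rough planning formula under assumed inputs, not Daima's math.

```ts
// Per-variant sample size for detecting a relative lift at 95% confidence
// and 80% power (standard two-proportion formula; a planning estimate only).
function sampleSizePerVariant(baseRate: number, relativeLift: number): number {
  const zAlpha = 1.96; // two-sided 95% confidence
  const zBeta = 0.84;  // 80% power
  const pA = baseRate;
  const pB = baseRate * (1 + relativeLift);
  const pBar = (pA + pB) / 2;
  const numerator =
    zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
    zBeta * Math.sqrt(pA * (1 - pA) + pB * (1 - pB));
  return Math.ceil((numerator / (pB - pA)) ** 2);
}

// A 4% baseline conversion rate with a hoped-for 20% relative lift needs
// roughly 10,000 visitors per variant, which is months of traffic for a
// store with 1,000 monthly product-page visitors.
console.log(sampleSizePerVariant(0.04, 0.2)); // ≈ 10305
```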

Cleaning up old tests

Stopped/completed tests stay in your history forever. You can delete a completed test from its detail view if you want to clean up the list. Deleting removes the test record and its analytics — there's no undo.