Concept Reviewed
A/B Testing
Name variants
- English
- A/B Testing
- Katakana
- A/Bテスト
Quality / Updated / COI
- Quality: Reviewed
- Updated:
- Source: Citations & Trust
- COI: none
TL;DR
A/B testing compares two or more variants simultaneously to determine which performs better on a defined metric.
Definition
A/B testing randomly assigns users to different versions of a page, message, or feature and measures outcome differences. Statistical significance helps determine whether the effect is real or due to chance. Proper sample size, test duration, and metric selection are essential for valid decisions.
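The significance check described above is often done with a two-proportion z-test. A minimal sketch, using only the standard library; the conversion counts in the usage line are illustrative, not from the source:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0: no difference
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 4.8% vs 5.4% conversion on 10,000 users per arm.
z, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) indicates the observed difference is unlikely to be due to chance alone; it does not by itself say the difference is large or valuable.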
Decision impact
- Chooses which design or message to deploy based on evidence.
- Prioritizes roadmap changes with measurable impact.
- Evaluates marketing tactics with quantified lift.
Key takeaways
- Define the hypothesis and success metric before testing.
- Randomization and adequate sample size are required.
- Consider long-term effects, not only short-term conversion.
- Running too many tests at once can create interference.
- Document results to build institutional learning.
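The "adequate sample size" takeaway can be made concrete with the standard normal-approximation formula for two proportions. A sketch under assumed defaults (two-sided alpha = 0.05, power = 0.80); the baseline rate and minimum detectable effect in the usage line are illustrative:

```python
import math

def sample_size_per_arm(p_base, mde):
    """Approximate users needed per variant to detect an absolute lift `mde`
    over a baseline conversion rate `p_base` (normal-approximation formula)."""
    z_alpha = 1.96  # two-sided alpha = 0.05 (assumed)
    z_beta = 0.84   # power = 0.80 (assumed)
    p_new = p_base + mde
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / mde ** 2)

# E.g., detecting a 1-point absolute lift over a 5% baseline:
print(sample_size_per_arm(p_base=0.05, mde=0.01))
```

Note how the required sample grows quadratically as the detectable effect shrinks, which is why tests for small lifts need long durations or heavy traffic.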
Misconceptions
- Small sample results are enough to decide.
- Statistical significance means the change is always good.
- A single test is definitive (in practice, results can drift and re-testing may be needed).
Worked example
An e-commerce team tests two checkout layouts. After running the test for two weeks and reaching the required sample size, the new layout shows a 5% increase in completed purchases. The team monitors returns and customer support tickets to confirm the change did not create downstream issues.
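A confidence interval on the lift makes the worked example's decision auditable. A sketch with hypothetical counts consistent with a 5% relative increase (reading the example's "5%" as relative; the baseline rate and traffic volumes are assumptions):

```python
import math

# Hypothetical two-week totals: 10.0% vs 10.5% checkout completion.
n_old, conv_old = 100_000, 10_000
n_new, conv_new = 100_000, 10_500

p_old, p_new = conv_old / n_old, conv_new / n_new
diff = p_new - p_old
# 95% confidence interval for the difference (unpooled standard error)
se = math.sqrt(p_old * (1 - p_old) / n_old + p_new * (1 - p_new) / n_new)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"lift = {diff:.3%}, 95% CI = [{lo:.3%}, {hi:.3%}]")
```

An interval that excludes zero supports deploying the new layout; the team's follow-up on returns and support tickets then guards the metrics the test did not measure.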
Citations & Trust
- Principles of Marketing (OpenStax)