Most A/B Tests Don't Produce Significant Results

Experimenting remains a crucial aspect of marketing

Experimenting is a crucial part of media and marketing, but many internal tests fail to produce tangible effects.

About two-thirds of brand marketers use A/B testing to improve conversion rates, according to research by Econsultancy and Red Eye. Marketers also rely on A/B testing to optimize the landing pages of their ads. And publishers use A/B testing to personalize content for users and find headlines and images that drive traffic.

While A/B testing is common among publishers and marketers, most A/B tests fail to produce statistically significant results. According to a survey of 3,900 professionals worldwide by UserTesting, fewer than 20% of respondents reported that their A/B tests produce significant results 80% of the time.
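A standard way to judge whether a test's result is statistically significant is a two-proportion z-test on the conversion rates of the two variants. The sketch below is illustrative only; the surveys cited above don't specify a method, and the conversion counts are hypothetical.

```python
# Two-proportion z-test for an A/B test (illustrative; numbers are hypothetical).
from math import sqrt, erf

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Variant A: 50 conversions out of 1,000 visits; variant B: 60 out of 1,000.
p = ab_test_p_value(50, 1000, 60, 1000)
print(f"p-value: {p:.3f}")  # well above 0.05 -> not significant
```

At these traffic levels, a 20% relative lift in conversion rate still isn't distinguishable from noise at the conventional 0.05 threshold, which is one concrete way a "failed" test happens.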

A similar analysis by AppSumo concluded that only one of every eight A/B tests leads to a significant change. Although many A/B tests don’t produce significant results, it’d be irresponsible of marketers to eliminate A/B testing from their media plans, according to John Donahue, chief product officer of programmatic platform Sonobi.

“The benefits of A/B testing are undeniable,” Donahue said. “Developing any creative project, there are a lot of assumptions; A/B testing allows you to remove those assumptions.”

In some instances, A/B testing call-to-action features and ad headlines can save marketers 40% of their media budget on ad platforms like Facebook, according to Donahue. Part of the reason the listicle publisher Ranker is able to make money from buying traffic off Facebook is because Ranker frequently tests which audience targets it can reach at a low price.

Of course, it’d be unrealistic to expect every A/B test to produce meaningful results. And just as scientists learn from their failed experiments, marketers can learn from A/B tests that didn’t yield anything.

While some tests fail due to bad design, another reason many A/B tests don’t produce significant results is that the sample of traffic powering the test isn’t large enough to yield conclusive evidence, according to Mani Gandham, CEO of content marketing company Instinctive. Gandham said that if the size of the test’s sample isn’t properly put into context, marketers can end up with experiments that “result in rather fuzzy results, and a tiny relative difference in performance can easily be mistaken for a clear signal.”
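Gandham's point about sample size can be made concrete with a standard power calculation: the smaller the lift you want to detect, the more traffic each variant needs. The sketch below uses the textbook sample-size formula for a two-proportion test at a 5% significance level and 80% power; the baseline rate and lifts are hypothetical, not figures from the article.

```python
# Rough sample size per variant needed to detect a relative lift in
# conversion rate (two-proportion test, alpha = 0.05 two-sided, power = 0.80).
# All numbers are illustrative assumptions, not from the article.
from math import ceil, sqrt

Z_ALPHA = 1.96  # critical value for two-sided 5% significance
Z_BETA = 0.84   # critical value for 80% power

def sample_size_per_arm(base_rate, relative_lift):
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    n = ((Z_ALPHA * sqrt(2 * p_bar * (1 - p_bar))
          + Z_BETA * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return ceil(n)

# On a 2% baseline conversion rate, detecting a 50% relative lift takes
# a few thousand visitors per variant...
print(sample_size_per_arm(0.02, 0.50))
# ...while detecting a 5% relative lift takes hundreds of thousands.
print(sample_size_per_arm(0.02, 0.05))
```

A test that ends before reaching the required sample size is exactly the kind that produces the “fuzzy results” Gandham describes: the observed difference may be real, but the data can’t separate it from chance.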