
What Reduces “Not as Expected” Returns
The Challenge
DTC brands were seeing strong top-of-funnel traffic but inconsistent conversion and elevated returns tied to “not as expected” experiences. Stakeholders needed clarity on which proof points truly build trust in product claims (e.g., reviews, UGC, expert endorsements, third-party testing) and how those signals shift both purchase confidence and post-purchase satisfaction. Without a clear trust hierarchy across shopper types, creative optimization, PDP prioritization, and claims governance all stalled, making it harder to decide where to invest.
Our Approach
InnResearch designed an ad-hoc quantitative study to isolate the trust impact of common proof signals across the DTC purchase journey. We mapped proof points to key decision moments (ad → PDP → checkout) and evaluated which combinations best improved confidence while reducing expectation gaps that drive returns. The output enabled stakeholders to prioritize the highest-leverage trust builders, helped brands refine claims language and validation strategy, and delivered actionable insights for PDP, creative, and CX teams.
Key Insights
Third-party validation wins trust: The most persuasive trust stack combined verified reviews with independent testing/certifications; ~60–70% of shoppers rated it “highly confidence-building,” especially for higher-priced items.

UGC drives belief, but needs guardrails: Authentic UGC (before/after shots, real-life demos) increased belief in outcomes, but shoppers flagged overly polished creator content as less credible unless paired with transparent disclaimers and “results may vary” context.

Review quality beats review volume: Shoppers trusted detailed, specific reviews (fit, longevity, drawbacks) more than star ratings alone; review recency and “verified buyer” tags were consistent conversion accelerators.

Expectation gaps cause returns: Returns were most strongly linked to claims ambiguity (e.g., “works for everyone”) and missing “what to expect” guidance; clarifying usage, limitations, and comparison context reduced “not as expected” risk signals among ~55–65% of return-prone shoppers.
Impact
The study enabled stakeholders to re-prioritize PDP real estate toward the most trusted proof points and standardize a claims-validation playbook across categories. It supported decision-making on creative testing (UGC formats vs. expert proof), review strategy (verified + detailed templates), and post-purchase education to reduce expectation gaps. As a result, teams aligned on a tighter trust stack that helped brands improve conversion efficiency while mitigating preventable returns.
Conclusion
InnResearch delivered actionable insights that clarified which trust signals truly move DTC shoppers from consideration to purchase—and which reduce costly “not as expected” returns. By quantifying the trust hierarchy and identifying the highest-impact proof combinations, the work helped brands confidently optimize claims, creative, and PDP experiences to build credibility and sustain growth in the U.S. DTC market.

