Consumer feedback plays a critical role in shaping successful cosmetics and personal care product development strategies, but not all feedback carries equal weight. While incentivized reviews, collected through sampling campaigns, coupons, or rewards, can quickly generate buzz and bolster early-stage visibility, they may also introduce biases that skew insights.
According to Sogyel Lhungay, VP of Insights at consumer analytics platform Yogi, brands need to tread carefully when interpreting incentivized reviews. Drawing from the analysis of more than 50,000 beauty product reviews, Lhungay outlined when these reviews can provide value, and when they risk misleading development teams, marketers, and brand leaders.
In this CosmeticsDesign US Q&A, he explained the differences between incentivized and organic reviews, the biases at play, and best practices for responsibly incorporating consumer feedback into product innovation workflows.
CDU: Based on Yogi’s analysis, what key differences should beauty brands be aware of between insights gathered from incentivized reviews versus organic reviews during product development?
Sogyel Lhungay (SL): At the end of the day, decision makers at beauty brands are doing their best to create a more attractive product that beats out the competition. Depending on their positioning, they may want to create the longest-lasting product in their space or the one that smells the best.
They may want to include an exotic ingredient or formulate a product that can support a particular “free from” claim.
On the marketing side, they may want to maximize trial or focus on repurchases. They may want to provide a luxurious consumer experience or efficiently reduce costs so that their products are better priced than the alternatives.
All of these decisions, whether made by the product development team or the brand marketing team, need to be rooted firmly in the “true” experience of consumers – once product development teams have a firm grasp of the “truth,” they can work backwards from it to update a legacy product, launch a brand extension, or create a brand new product.
Unfortunately, the “truth” can be elusive when it comes to a large and diverse consumer set, and the discerning analyst needs to be aware of the strengths and weaknesses of the types of data they are investigating. Yogi has analyzed tens of millions of product reviews spanning the better part of a decade and has found a consistent discrepancy between incentivized and organic reviews.
Incentivized reviews introduce several notable forms of bias that make them suboptimal for making product development decisions. For example:
- Mental Accounting: Because the product cost the reviewer little or nothing, it is judged against a lower bar; most incentivized reviews are 5-star reviews.
- Reciprocity Bias: This is a common and powerful social norm where people feel obligated to “return a favor”. When someone receives a free sample or a coupon, they’re likely to feel a sense of indebtedness towards the company. This feeling can unconsciously (or even consciously) bias their review in a more positive direction than their genuine opinion. They might downplay negative aspects or exaggerate positive ones to “repay” the perceived favor.
- Social Desirability Bias: People generally want to be seen in a positive light. This can play out in a few ways: For example, since they are receiving a benefit from the company, reviewers might feel pressure to provide a favorable review to appear appreciative or avoid seeming ungrateful. But on the flipside, knowing that their review is clearly marked as incentivized, some consumers may give a lower star rating (e.g. 4 instead of 5) to appear unbiased.
- Moral Hazard: Consumers writing incentivized reviews do not expect to face any downsides from posting an inaccurate or overly positive review. On the one hand, that can lead to laziness – a common example is that incentivized reviews will repeat large chunks of the product’s existing claims, features, PDP, and/or packaging text – this distorts the truth and doesn’t introduce any novel or useful consumer feedback. On the other hand, reviewers may be under the impression that they will lose out on future freebies and discounts if they post a critical review, making it more attractive to keep their reviews drama-free. Some third-party review sites like Influenster have gamified point systems for each review written, which also encourages volume over quality.
- Selection Bias: The pool of incentivized reviewers is not a representative sample of the brand’s usual consumers.
  - Consumers are more likely to participate in the incentive program of a brand that they already like or are familiar with. Barring that, they may only participate because they are in the market for that product category (e.g. signing up for a sampling program for an unknown brand of fridges because they are shopping for a fridge).
  - The type of individual who signs up for an incentive program may diverge notably from the brand’s target consumer type (e.g. college students signing up to leave reviews for a luxury brand that targets middle-aged users).
CDU: At what stages of the product development process, if any, can incentivized reviews still be a valuable source of consumer feedback without leading to misleading conclusions?
SL: In almost all cases, I would advise against including incentivized reviews in the consumer feedback dataset during the product development process, for the reasons described in Question #1 above.
CDU: Conversely, when should product developers and brand teams be especially cautious about relying on incentivized review data, and why?
SL: Product developers and brand teams should avoid making any major decisions based on incentivized review data for the reasons described above in Question #1.
For questions 1-3, the following chart demonstrates the positive bias inherent in incentivized reviews for several key skincare subcategories. Note that the bias appears not to be too dramatic because of the diverse underlying dataset (100s of products over 7+ years).
If we instead look at the gap in ratings for a specific, recently launched product, it is much more common to see larger gaps in star ratings.
CDU: What best practices would you recommend for brands to more accurately interpret and balance incentivized review data alongside organic consumer feedback to inform better product development decisions?
SL: Start by separating your dataset into incentivized and organic (non-incentivized). Some incentivized review data can be difficult to identify, so use a conservative heuristic, for example: “Did the consumer providing this feedback receive a free sample, a coupon, ‘points’ towards a reward, a status/reputational boost, or any other incentive that may influence them to provide an overly rosy impression (or, less commonly, a hatchet job) of the product?”
If you have data that lives in the grey area (e.g. you do not have accurate metadata identifying an incentivized review), keep it in a third “null” category and ignore it for the purpose of this analysis.
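As a minimal sketch of this bucketing step in Python, assuming review data in a pandas DataFrame – the column names and incentive flags here are hypothetical, not any specific platform’s schema:

```python
import pandas as pd

# Toy review dataset; the incentive-flag columns are illustrative assumptions.
reviews = pd.DataFrame({
    "review_id": [101, 102, 103, 104],
    "stars": [5, 4, 5, 2],
    "free_sample": [True, False, None, False],
    "coupon": [False, False, None, False],
    "reward_points": [False, True, None, False],
})

INCENTIVE_COLS = ["free_sample", "coupon", "reward_points"]

def bucket(row) -> str:
    """Conservative heuristic: any known incentive -> 'incentivized';
    incomplete metadata -> 'null' (ignored for this comparison)."""
    flags = [row[c] for c in INCENTIVE_COLS]
    if any(pd.isna(f) for f in flags):
        return "null"          # grey area: metadata missing or ambiguous
    if any(flags):
        return "incentivized"  # sample, coupon, points, status boost, etc.
    return "organic"

reviews["bucket"] = reviews.apply(bucket, axis=1)
print(reviews[["review_id", "stars", "bucket"]])
```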
Compare the incentivized and organic feedback buckets in 3 primary ways:
Volume: What is the percentage mix of incentivized vs. organic? For a mature product that has been in the market for several years, organic reviews should make up at least 80% of consumer feedback. For a more recent product, a 50-50 split between organic and incentivized reviews is a good target at 6 months post-launch.
For a new product, a large early percentage of incentivized reviews is normal and beneficial, because a small number of reviews will make most consumers balk at purchasing.
Mature products whose reviews are mostly incentivized will be a yellow flag for more discerning eCommerce consumers, and as more organic reviews roll in, the average star rating will invariably start to drop as consumers who had their rosy expectations set by the incentivized reviews come crashing back to reality.
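Expressed as a simple check – the function name and return labels are illustrative, and the 80% and 50-50 cutoffs are the targets described above:

```python
def volume_mix_flag(n_organic: int, n_incentivized: int, mature: bool) -> str:
    """Apply the rough volume thresholds: >=80% organic for mature products,
    roughly 50-50 at about six months post-launch."""
    total = n_organic + n_incentivized
    if total == 0:
        return "no reviews yet"
    organic_share = n_organic / total
    if mature:
        return "ok" if organic_share >= 0.80 else "yellow flag: incentivized-heavy"
    return "ok" if organic_share >= 0.50 else "watch: incentivized-heavy for its age"

print(volume_mix_flag(900, 100, mature=True))   # ok
print(volume_mix_flag(300, 700, mature=True))   # yellow flag: incentivized-heavy
```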
Average Star Rating: What is the gap in average star rating between your product’s or brand’s incentivized and organic ratings?
If incentivized reviews are between 0.2 and 1.0 stars higher than organic reviews, this is very common across categories, and you should take the incentivized data with a grain of salt.
If incentivized reviews are rated similarly to or lower than organic reviews, this is a red flag and requires further investigation to discern the underlying problem with the product or the consumer experience. Our research indicates that average star ratings of incentivized reviews are rarely comparable to or lower than those of organic reviews.
If incentivized reviews are more than 1.0 stars higher than organic reviews, the incentivized review data is almost certainly misleading, and you should filter it out of your analytic datasets. In the chart above, sunscreen is a good example of this.
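These rules of thumb can be written as a small decision function – the name and return strings are illustrative, and the cutoffs mirror the thresholds above:

```python
def rating_gap_flag(avg_incentivized: float, avg_organic: float) -> str:
    """Interpret the star-rating gap between incentivized and organic reviews."""
    gap = avg_incentivized - avg_organic
    if gap > 1.0:
        # Almost certainly misleading: exclude incentivized data from analysis.
        return "exclude incentivized data"
    if gap >= 0.2:
        # The common case: usable, but with a grain of salt.
        return "typical positive bias"
    # Similar to or lower than organic: rare, and worth investigating.
    return "red flag: investigate product/experience"

print(rating_gap_flag(4.7, 4.2))  # typical positive bias
print(rating_gap_flag(4.9, 3.5))  # exclude incentivized data
```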
Compare the “shape” of the consumer conversation: Volume and ratings are only part of the picture. The kinds of topics that consumers are talking about – specifically, the relative percentage mix and the sentiment scores of those themes – are crucially important for better understanding your consumers.
For example, Product A is from an established brand of sunblock with thousands of reviews across several retailers. Consumers mention the rich texture & consistency of Product A in 20% of the reviews with a very high/positive sentiment score.
In contrast, the 10 most comparable alternatives to Product A average only 10% relative mentions of texture & consistency, and, on average, those mentions are neutral in sentiment.
Suppose the brand that makes Product A is launching a “new and improved” formula, Product B. This new product touts that it lasts twice as long between applications, and this “2x long-lasting” claim is plastered all over the PDP, advertisements, and packaging.
To support the launch of Product B, the brand invests in a successful campaign to generate 100s of early incentivized reviews.
However, when looking at the “shape” of the mostly incentivized consumer feedback of Product B, the brand leaders realize that only 10% of reviews are talking about texture & consistency, and instead, 20% are talking about long-lasting coverage.
In this case, it would be dangerous to assume that consumers of Product B are less excited about the product’s texture. The shift should be investigated further: relatively more consumers may be talking about long-lasting coverage than texture & consistency solely because the highly visible claims on the PDP nudged incentivized reviewers to talk about the “new” 2x long-lasting feature of the improved formula.
Here, the existence of incentivized reviews distorts the “true” reality that Product A has a hero feature that beats out its competition – its rich texture.
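A short sketch of this “shape” comparison, with counts that echo the Product A / Product B example; the topic tags themselves are assumed to come from an upstream text-analytics step:

```python
import pandas as pd

def topic_mix(topic_counts: dict) -> pd.Series:
    """Relative percentage mix of conversation themes for one product."""
    counts = pd.Series(topic_counts, dtype=float)
    return (counts / counts.sum() * 100).round(1)

# Illustrative counts: texture mentions drop from 20% to 10% of reviews,
# while long-lasting mentions jump, matching the scenario above.
product_a = topic_mix({"texture & consistency": 200, "long-lasting": 50, "other": 750})
product_b = topic_mix({"texture & consistency": 100, "long-lasting": 200, "other": 700})

shift = product_b - product_a  # positive = theme gained share in Product B
print(pd.DataFrame({"Product A %": product_a, "Product B %": product_b, "shift": shift}))
```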
CDU: How can beauty brands build more resilient product feedback loops that minimize the risk of “blind spots” created by overly positive incentivized reviews, especially when launching new products?
SL: Beauty brands should keep in mind that incentivized reviews are best suited for ginning up interest in a new product and reducing the barriers to trial. Consumers are more likely to purchase a new product with at least 20 reviews than one with few or no reviews.
If a beauty brand chooses to incorporate incentivized reviews into their product feedback loop, they should do so understanding the inherent positive bias in the data, and therefore take very seriously any criticisms highlighted in incentivized reviews, as those criticisms are breaking through the positive noise.
In contrast, any positive feedback in incentivized reviews should be discounted, and brand actions should not stem from it.
For a more resilient product feedback loop for new products, reviews should be only one piece of the puzzle, and brands should take a more universal approach to consumer feedback.
Brands should consolidate and analyze the following data sources pre- and post-launch:
- Surveys & Questionnaires: Implement short, targeted surveys immediately after launch, focusing on initial impressions, ease of use, and satisfaction. (Since these tend to be incentivized, this faces similar challenges in terms of positive bias.)
- Social Listening & Engagement: Actively monitor social media platforms, beauty forums, and review sites for mentions of the new product. Engage with comments and reviews, both positive and negative, to show you’re listening. Reddit and YouTube (both comments and content) are great forums for deep product discussions.
- Customer Care Integration: Capture and categorize product feedback received through calls, emails, and chats. Ensure this feedback is compared to other feedback channels and systematically share Customer Care data with the product development team.
- Early Access/Beta Testing Programs: For significant new product launches, consider offering early access to a select group of loyal customers or beauty enthusiasts in exchange for detailed feedback before the wider release.