Product Testing Deck_June 23

Page 1: Product Testing Deck_June 23

Product testing Design options

Page 2: Product Testing Deck_June 23

Design structure options

• Monadic Design

• Comparative Design

The main choice to be made is between a monadic design and a comparative design.

© 2010. Synovate Ltd.

Page 3: Product Testing Deck_June 23

Monadic vs Comparative Test

• Monadic Product Test
- A respondent tries and evaluates only one product
- "Absolute measurements" (without any comparison references)
- Each product is evaluated by an independent sample, then the results are evaluated against each other
- Single-product evaluation represents a more natural environment

• Comparative Test
- A respondent tests two or more products, one after the other
- Both products are evaluated by the same sample
- An equal number of respondents test each product first and second, in order to avoid any order bias
- Preference ratings alone, or both monadic ratings and preference ratings
- Desire greater ability to detect product differences by sensitizing consumers
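The practical consequence of this choice shows up at analysis time: monadic data are compared across independent samples, while comparative data yield within-respondent differences. A minimal sketch with invented 9-point ratings (the products and scores below are hypothetical):

```python
from statistics import mean

# Hypothetical 9-point overall-opinion ratings (illustrative only).
# Monadic design: two independent samples, one product each.
monadic_a = [7, 8, 6, 7, 9, 6, 8]
monadic_b = [6, 7, 5, 6, 7, 5, 6]

# Comparative design: the same respondents rate both products, so each
# respondent contributes a within-person (A - B) difference.
paired = [(7, 6), (8, 7), (6, 5), (7, 6), (9, 7), (6, 5), (8, 6)]

# Monadic analysis: compare the two independent group means.
monadic_gap = mean(monadic_a) - mean(monadic_b)

# Comparative analysis: average the per-respondent differences, which
# cancels respondent-level variation and makes the design more sensitive.
paired_gap = mean(a - b for a, b in paired)

print(round(monadic_gap, 2), round(paired_gap, 2))
```

Both designs estimate the same gap, but the paired differences carry less noise per observation, which is why the comparative design is described as more sensitive.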

Page 4: Product Testing Deck_June 23

Types of Comparative tests

• Paired Comparison
- Respondent tries two products, one after the other
- Respondent gives a direct comparison after usage of both products
- No absolute measurements

Sequence:
1. Screening
2. First Product Placement
3. Usage of first product
4. Second Product Placement
5. Usage of second product
6. Comparative evaluation of first and second product. No monadic ratings of each product.

Page 5: Product Testing Deck_June 23

Types of Comparative tests

• Proto Monadic
- Respondent evaluates two products. The first is evaluated using typical rating scales
- The second product is evaluated via direct comparison
- Incorporates the advantages of both monadic and paired comparison tests

Sequence:
1. Screening
2. First Product Placement
3. Usage of first product
4. Evaluation of first product (monadic measurement only)
5. Second Product Placement
6. Usage of second product
7. Comparative evaluation of first and second product. No monadic evaluation of second product.

Page 6: Product Testing Deck_June 23

Types of Comparative tests

• Sequential Monadic
- Respondent evaluates two products and evaluates each separately, followed by a direct comparison
- "Absolute measurements" for both products; preference after final evaluation

Sequence:
1. Screening
2. First Product Placement
3. Usage of first product
4. Evaluation of first product (absolute measurement only)
5. Second Product Placement
6. Usage of second product
7. Evaluation of second product (absolute measurement only)
8. Comparative evaluation of first and second product.

Page 7: Product Testing Deck_June 23

Comparative Design – Question

• The comparative design is more sensitive and is more likely to pick up differences, because the respondent is being asked to directly compare and contrast the two products she is testing:
- "Of the two products you tried, I would like you to tell me which one you prefer on xyz attribute"
- Prefer the product tried first
- Prefer the product tried second
- I noticed a difference but I had no preference
- I noticed no difference between them

Page 8: Product Testing Deck_June 23

Comparative Product Test for rationalization

• Critical factor in rationalization tests, where you want to be sure that any changes you have made to an existing product will NOT be noticed by consumers.

• If consumers cannot pick up differences between the current product and a cost-effective prototype even after sensitizing, then we can be more certain that the differences will not be picked up by consumers if the cost-effective version is launched.

Page 9: Product Testing Deck_June 23

Which Comparative Test to use for rationalization?

• Paired comparison design: If the only important decision is whether to adopt the new rationalized formulation instead of the current one, a paired comparison design should be used.

• Proto monadic design: If the decision is whether to adopt the new rationalized formulation instead of the current one, and there may be a need to re-formulate or optimize the cost-reduced product using diagnostics.

• Sequential monadic design: If it is important that detailed attribute data is provided for products in all positions, as well as preference data, the more comprehensive sequential monadic design should be used for diagnostic understanding.

Page 10: Product Testing Deck_June 23

Blind Evaluation

• Blinded Product Evaluation:
- Used in early-stage product testing
- The evaluation or ratings provided when the product is not identified, either through packaging or other labeling
- The purpose of the blinding is to remove from respondent consideration the effect of branding on the product evaluation. This is an attempt to obtain an evaluation that focuses on product characteristics unaffected by the influence of the brand and the image it conveys.
- At the end of the questionnaire, questions can be added to determine if respondents knew or "guessed" what the product was. This can be used to classify respondents to see if ratings were affected.

Page 11: Product Testing Deck_June 23

Branded Evaluation

• Branded Product Evaluation:
- Used for confirmatory tests
- The evaluation or ratings provided when the product is identified or branded, with the intent of allowing the brand and its associated imagery to affect or influence perceptions of the product characteristics
- The effects of branding will most likely mask or obscure the differences between products that are due to the influence of physical or sensory product characteristics

Page 12: Product Testing Deck_June 23

Central Location Test

• Central Location Test (CLT):

- Used for products that are usually consumed out of home and for impulse-purchase products
- Used for confectionery, mints, gums and chocolate studies
- In categories for impulse consumption, we need to capture reactions just after the experience. E.g., for Eclairs, CDM, Perk, etc. we would have to take reactions immediately after eating the product.

Page 13: Product Testing Deck_June 23

Final / In-Home Use Test

• Final use / In-Home Use Test (IHUT):

- A study conducted to evaluate the use and performance of a product in a setting more consistent with how the product might normally be used by consumers
- The data obtained from such an evaluation are considered to have good validity, given the natural setting in which the product is used
- Used for MFD studies; typically MFD products are placed for a week

Page 14: Product Testing Deck_June 23

MarketQuest helps you design better products…

Page 15: Product Testing Deck_June 23

MarketQuest helps you design better products with an integrated toolkit that aids key decisions and volume estimates for all stages of product development.

• MarketQuest offers:
- An integrated approach to developing successful products that uses a consistent, best-practices approach from initial assessment to final evaluation and launch
- International, consistent protocols and global expertise coupled with local market knowledge
- Innovative tools and thinking delivered in innovative ways

• It contains a suite of solutions:
- ConceptQuest: concept development and evaluation
- ProductQuest: evaluation of products throughout the lifecycle
- PackQuest: evaluation of package performance and impact
- PriceQuest: evaluation of pricing and impact on business decisions
- MVP: estimation of market potential

Page 16: Product Testing Deck_June 23

Analysis

Macro Level Analysis: Looks across products tested to develop a strategic view on how to proceed.

• Analytic approaches and statistical tools that focus on differences or variability between or among products.

• The analyses address issues of profiling and identifying variables that best differentiate or discriminate between products and which most strongly predict or "drive" changes among products.

Micro Level Analysis: Looks within an individual product to provide a tactical direction for an individual product.

• Analytic approaches and statistical tools that focus on data for one specific product.

• Attention is paid to examining and understanding differences among observations gathered for that product, such as between those respondents who found the product acceptable and those who did not.

Page 17: Product Testing Deck_June 23

Biplot Analysis (Hedonic Scales)

[Biplot of products A–F and H against attributes: Overall Likeability, Removes stubborn stains, Keeps clothes like new for longer, Cleans clothes thoroughly, Removes tough dirt well, Removes stains in the first wash itself, Keeps Whites Really White, Rinses off easily, Leaves no residue in bucket, Needs less scrubbing / rubbing, Is gentle on my hands, Leaves a long lasting fragrance, Makes clothes smell fresh and clean, Has a great fragrance, Dissolves easily, Lathers quickly and easily. The two dimensions explain 75.3% and 20.7% of the variance; 2D Fit = 96%.]

Key observations:
- Products A and H are the best-liked products and have similar profiles.
- "Removes stubborn stains" is the most discriminating aspect for the set of products tested.
- Product F and Product B have strong fragrance profiles.

Page 18: Product Testing Deck_June 23

How to Read a Biplot Overview

• Macro level analysis

– Focus on differences between products (used when comparing 3 or more products).

• There are several "rules" that should be used in reading biplots:

– The vectors or lines represent the product characteristics. The dots and letters represent the products.

– Products close to each other are more similar.

– The longer the vector, the more discriminating that characteristic is across the products.

– Vectors that are very close to each other are highly correlated.

  • Vectors at a 90° angle from each other are not correlated with each other.

  • Vectors at a 180° angle are negatively correlated with each other.

– A product's relationship to a vector is obtained by "projection."

  • Extend the vector of interest across the page.

  • Draw a line from a product to a vector of interest and note the place at which the two intersect at 90°. This represents the position (point of projection) of the product on the characteristic.

  • The closer the product is to the upper end of the vector, the more strongly it was rated on the characteristic represented by the vector.
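The projection rule above can be written as a dot product. A minimal sketch, assuming hypothetical 2-D biplot coordinates (the vector and product positions below are invented, not read from an actual map):

```python
# A minimal sketch of the projection rule on hypothetical coordinates.
def projection_score(product_xy, vector_xy):
    """Scalar projection of a product point onto an attribute vector;
    larger values mean the product sits further toward the vector's tip."""
    px, py = product_xy
    vx, vy = vector_xy
    length = (vx ** 2 + vy ** 2) ** 0.5
    return (px * vx + py * vy) / length

stain_vector = (0.9, 0.2)               # e.g. "Removes stubborn stains"
products = {"A": (0.8, 0.3), "B": (-0.5, 0.6)}

scores = {name: projection_score(xy, stain_vector)
          for name, xy in products.items()}
print(scores["A"] > scores["B"])  # True: A projects further up the vector
```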

Page 19: Product Testing Deck_June 23

Difference Driver Analysis Overview

– When there is a significant difference in overall opinion between two objects (concept/product or two products), a Difference Driver Analysis is conducted to further understand which attributes have the biggest impact on driving that difference.

– The analysis takes into account the difference between the objects on the dependent measure (e.g., overall opinion) and on the product attributes. The analysis considers each attribute one at a time. In addition, the analysis accounts for the correlation with the dependent measure to assess the impact of each attribute on the difference.
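One hedged reading of this description, sketched with invented per-respondent differences: an attribute's impact combines the size of its gap with its correlation to the gap on the dependent measure (the exact production algorithm is not shown in the deck):

```python
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation (no external libraries)."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented per-respondent (Product A - Product B) differences on the
# dependent measure (overall opinion) and on one attribute.
overall_diff = [-1, 0, -1, -2, 0, -1]
sticky_diff = [-1, 0, -1, -1, 1, -2]   # "not feeling sticky during day"

# Impact of the attribute on the overall-opinion gap: the size of the
# attribute gap weighted by its correlation with the dependent measure,
# assessed one attribute at a time.
impact = abs(mean(sticky_diff)) * pearson(sticky_diff, overall_diff)
```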


Page 20: Product Testing Deck_June 23

Explanation: Difference Driver Analysis (Product A vs Product B)

• Product A needs to improve its perception on 'not feeling sticky' if it is to reduce the significant gap with Product B.

• If Product B improves on 'not leaving white marks', it can further increase the gap that it has with Product A.

Mean scores and Impact Scores (Impact Score = % Reduction in the gap if Product A improves, or % Increase in the gap if Product B improves):

Attribute                          Product A   Product B   Impact Score
Overall Opinion                      3.67        4.01
Not feeling sticky during day        3.78        4.29        94 (% Reduction)
Not feeling sticky at application    3.66        4.30        87 (% Reduction)
Not feel too wet at application      3.78        4.31        63 (% Reduction)
Dries quickly when applied           3.70        3.93        34 (% Reduction)
Not feel wet during day              3.55        3.72        32 (% Reduction)
Does not feel greasy                 4.04        4.19        28 (% Reduction)
Can be washed off skin easily        3.39        3.51        18 (% Reduction)
Controls wetness                     4.29        4.37        13 (% Reduction)
Protects when physically active      3.64        3.76         4 (% Reduction)
Odour control lasts all day          4.35        4.40         2 (% Reduction)
Does not irritate skin               4.36        4.39         2 (% Reduction)
Not leave white mark on skin         4.42        4.05       109 (% Increase)
Not leave white mark on clothing     4.37        4.09        81 (% Increase)
Easy to apply right amount           4.15        4.05        23 (% Increase)
Feels smooth when applied            4.37        4.29        37 (% Increase)

Page 21: Product Testing Deck_June 23

Components of T-Plot Analysis Overview

• Macro level analysis

– The goal of the Components of T-Plot Analysis is to compare attribute performance between a concept and a product, or between two products.

– This analysis provides a graphic display of overall performance (i.e., top-2-box, top-box).

• How to read the chart

– If an attribute falls within the confidence bounds (represented by dashed lines), the concept/product or two products are considered to perform at parity for that attribute. (The "T" in T-plot is a reference to the t- or z-test used to assess significance.)

– Any attributes falling outside of the confidence bounds are considered to be significantly different (usually with 95% confidence) in performance from each other.
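The test behind the confidence bounds can be sketched as a two-proportion z-test on top-2-box scores; the percentages and sample sizes below are invented:

```python
from math import sqrt

def parity_z(p1, n1, p2, n2):
    """Two-proportion z statistic for a top-2-box comparison; attributes
    with |z| < 1.96 fall inside the 95% bounds and are read as parity."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Invented top-2-box scores for one attribute: concept 82%, product 71%,
# n = 150 respondents per cell.
z = parity_z(0.82, 150, 0.71, 150)
print(abs(z) > 1.96)  # True -> outside the bounds, not at parity
```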

Page 22: Product Testing Deck_June 23

T-Plot (Concept-Product Fit)

• The product delivery fell short of expectations generated by the concept on characteristics related to quality and efficacy.

[T-plot of Concept (vertical axis) vs Product (horizontal axis), both scaled 50–100. Attributes plotted above the confidence bounds are those on which the concept significantly outperformed the product.]

Attribute legend:
1 Overall Opinion
2 Does Not Leave White Marks, Residue Or Stains On Clothes
3 Does Not Leave White Marks, Residue Or Stains On Skin
4 Has A Light Fragrance
5 Has A Pleasant Fragrance
6 It Smells Cheap And/Or Bad
7 Dries Quickly
8 Leaves Fewer White Marks Or Residues Than Other Deodorants
9 Effectiveness
10 Feel Of Product During Application
11 Odor Control
12 Quality
13 Gives Me Day Long Protection Against Wetness
14 Is A High Quality Product
15 Is Suitable For Someone Like Me
16 Keeps Me Feeling Fresh All Day
17 Has A Long Lasting Fragrance
18 Leaves Underarm Skin Soft
19 Is Suitable For Everyday Use
20 Works Harder Than Other Antiperspirants/Deodorants
21 Does Not Irritate My Skin

Page 23: Product Testing Deck_June 23

Penalty Analysis Overview

• Micro level analysis

– Assessment of one product, not a comparison of performance between products.

• Penalty Analysis determines the "penalty" a product pays for not being just right on a particular characteristic, through the use of "just right" scales.

– Identifies product attributes which, if fixed, could improve overall opinion.

– Determines the direction for change of the attribute to increase overall opinion.

• A penalty reflects the decline in overall opinion between those who felt the product was "just right" on an attribute vs. those saying it was "too weak" or "too strong".

– Attributes are considered "just right" if scores are 80% or greater.

• Product testing experience suggests that penalties of concern are those where:

– 20% or more of the sample indicates the product is too weak or too strong, and the associated overall liking penalty is greater than .75 (when a 7-point overall opinion scale is used; greater than .50 when a 9-point scale is used).

– Sometimes there are relatively high percentages for both the "too much" and "too little" ends of the scale. There generally needs to be at least a 10 percentage point difference between the percentages to make a recommendation for modification on the characteristic.

– However, it is important to take the penalties into consideration. For example, if the penalty for being "too little" is very small but the penalty for being "too much" is large, the recommendation may be to refine the product by reducing the level of this characteristic.
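The penalty computation and the rule of thumb above can be sketched as follows, with invented just-right judgements and 7-point overall-opinion ratings:

```python
from statistics import mean

# Invented data: each respondent's just-right judgement on one attribute
# and their overall opinion on a 7-point scale.
responses = [
    ("just right", 6), ("just right", 6), ("just right", 5),
    ("just right", 6), ("just right", 7), ("just right", 6),
    ("too strong", 4), ("too strong", 5), ("too strong", 4),
    ("too weak", 5),
]

def penalty(responses, level):
    """Share of the sample at `level`, and the drop in mean overall
    opinion vs. the "just right" group (the penalty)."""
    just_right = [o for j, o in responses if j == "just right"]
    group = [o for j, o in responses if j == level]
    return len(group) / len(responses), mean(just_right) - mean(group)

share, pen = penalty(responses, "too strong")
# Penalty of concern: >= 20% of the sample and a drop > .75 on a
# 7-point overall-opinion scale.
print(share >= 0.20 and pen > 0.75)  # True
```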

Page 24: Product Testing Deck_June 23

Example: Penalty Analysis (Prototype A)

• Key Findings: Penalty analysis was performed using bipolar ratings to gain even further insights into opportunities for improving this prototype. Modifications should focus on increasing tomato flavor and decreasing pickle relish flavor.

[Chart: for each attribute, the share of respondents saying "too weak", "just right" and "too strong", alongside the associated penalties.]

Attribute                       Too Weak   Too Strong   Opportunity
Tomato Flavor Strength            -1.41      -2.61      Primary: tomato flavor strength
Pickle Relish Flavor Strength     -2.23      -2.06      Primary: pickle relish flavor
Spice Flavor Strength             -1.59      -1.86      Unclear
Saltiness                         -0.89      -1.40      No change ("just right")
Sweetness                         -2.33      -1.50      Increase sweetness

Legend: Primary Modification Opportunity / Secondary Modification Opportunity / Penalty Not Meaningful

Page 25: Product Testing Deck_June 23

Penalty Analysis

• Key Findings: Penalty analysis was performed using bipolar ratings to gain even further insights into opportunities for improving this prototype. Modifications should focus on increasing aroma and overall flavor, specifically sweetness and bitterness.

[Chart: for each attribute, the share of respondents saying "too weak", "just right" and "too strong", alongside the associated penalties.]

Attribute        Too Weak   Too Strong
Aroma              -2.1       -1.1
Color              -1.9       -1.4
Overall Flavor     -1.9       -1.2
Sweetness          -2.6       -1.5
Sourness           -1.0       -0.3
Bitterness         -1.4       -0.8

Page 26: Product Testing Deck_June 23

What is Attributable Effects?

• Micro level analysis

– Assessment of one product, not a comparison of performance between products.

• Used to identify product modifications that could maintain or improve overall acceptance.

• Identifies drivers of liking/purchase intent from two perspectives:

– Maintenance/Risk Drivers

  • Those characteristics where the product performs well and positive perceptions should be maintained to retain current levels of overall acceptance.

– Potential/Opportunity Drivers

  • Those characteristics where the product does not perform well and modification/improvement may lead to increased levels of overall acceptance.

Page 27: Product Testing Deck_June 23

What does Attributable Effects Tell Me?

• Each characteristic input into AE receives a Maintenance and a Potential statistic, which can be aligned with the marketing strategy:

– Maintenance: Focus on retaining and strengthening the current level of acceptance.

– Potential: Focus on improving the level of acceptance.

• Maintenance and Potential are influenced by:

– The absolute level of performance on the characteristic, and

– The strength of the relationship with acceptance (correlation).

• These values are calculated for each characteristic one at a time.
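A rough illustration of the two statistics with invented data; this sketch assumes Maintenance is read off current likers and Potential off current dislikers, one characteristic at a time, and simplifies away the correlation weighting described above:

```python
# Invented data: (rated the product top-2-box overall?, perceived the
# characteristic "tastes great" positively?). Hypothetical respondents.
respondents = [
    (True, True), (True, True), (True, False), (True, True),
    (False, False), (False, False), (False, True), (False, False),
]

likers = [positive for liker, positive in respondents if liker]
dislikers = [positive for liker, positive in respondents if not liker]

# Maintenance: share of current likers whose liking rests on positive
# perception of this characteristic.
maintenance = sum(likers) / len(likers)

# Potential: share of current dislikers who do not yet perceive the
# characteristic positively and so could convert if it improved.
potential = dislikers.count(False) / len(dislikers)

print(maintenance, potential)  # 0.75 0.75
```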

Page 28: Product Testing Deck_June 23

Attributable Effects

• Key Findings:

– Maintenance: Product A must continue to provide current perceptions of taste and quality ingredients to maintain the current level of overall opinion.

– Potential: Product A could improve perceptions of texture characteristics and flake flavor to increase the level of overall opinion.

[Chart: Attributable Effects for Product A. Overall Opinion: 51% top-2-box, 49% bottom-5-box.]

Maintenance = the percent (%) of current 'likers' who would become 'dislikers' if performance on these characteristics were not positively perceived:
- Provides The Right Balance Of Taste And Health: 100%
- Is Made With Whole Grain: 100%
- Overall Taste*: 97%
- Tastes Great: 96%
- Is Made With High Quality Ingredients: 91%

Potential = the percent (%) of current 'dislikers' who potentially could become 'likers' if positive perceptions of these characteristics were improved:
- Texture Of The Raisins: 70%
- Flavor Of The Flakes: 82%
- Overall Taste*: 80%
- Crunchiness After In Milk For A While: 70%
- Overall Sweetness: 63%
- Texture Of The Flakes: 55%
- Crunchiness When Milk Is First Added: 54%

Page 29: Product Testing Deck_June 23

Graphical Model Overview

• Micro level analysis

– Assessment of one product, not a comparison of performance between products.

• A graphical model is a micro level analysis that illustrates the path to impacting overall opinion by identifying the relationships between the attributes.

– Graphical modeling is a quantitative laddering technique.

– Partial correlations are used to uncover the underlying structure of relationships across performance ratings.

– Supplies a dictionary, helping to define each characteristic in terms of its association with other characteristics that may affect its ratings.
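The partial correlations that decide whether an edge is drawn can be sketched with the standard first-order formula; the three attribute rating vectors below are invented:

```python
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation (no external libraries)."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def partial_corr(x, y, z):
    """Correlation between x and y after removing the linear effect of z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / (((1 - rxz**2) * (1 - ryz**2)) ** 0.5)

# Invented rating vectors for three attributes of one product.
foam_amount = [5, 6, 4, 7, 5, 6, 3, 7]
foam_texture = [5, 7, 4, 6, 5, 6, 4, 7]
consistency = [4, 5, 6, 5, 4, 6, 5, 5]

# An edge is drawn between two attributes when this partial correlation
# is statistically significant; no edge means (near) independence.
pc = partial_corr(foam_amount, foam_texture, consistency)
print(round(pc, 2))
```

A full graphical model would control for all other ratings simultaneously; this first-order version conveys the idea with a single control variable.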


Page 30: Product Testing Deck_June 23

Graphical Model

[Network diagram: nodes for Overall Opinion and the attribute ratings (mocha flavor, creaminess, dairy flavor, foam texture, sweetness, foam amount, roast taste, flavor strength, richness, consistency, aroma, color), with edges for significant partial correlations.]

The edge (line) between "foam amount" and "foam texture" suggests the two attributes have a statistically significant relationship after the effects of all other ratings were statistically removed (partial correlation).

However, the lack of an edge between "foam texture" and "consistency" indicates these two attributes are unrelated. Consumer perceptions of the two attributes are considered independent.

Page 31: Product Testing Deck_June 23

Linking Attributable Effects and Graphical Models

Attributable Effects Analysis for Coffee Product

• Given that the objective is to identify opportunities to improve Overall Opinion, the focus of this analysis should be on the Potential statistic.

– Potential: Primary areas of improvement should focus on flavor strength, consistency, creaminess and foam amount.

[Chart: Attributable Effects for the coffee product. Top-2-box purchase intent: 38%; bottom-3-box purchase intent: 62%.]

Maintenance (percentage of those who rated purchase intent top-2-box and who would no longer remain top-2-box if the product did not perform well on these characteristics):
- Provides the caffeine "jolt" I need: 82%
- Wakes me up: 80%
- Tastes great: 78%
- Comes in the flavors I want: 72%

Potential (percentage of those who rated purchase intent bottom-3-box and who could become top-2-box if perceptions of these characteristics were improved):
- Is right for someone like you: 69%
- Good value for money: 65%
- Flavor strength: 55%
- Consistency: 51%
- Creaminess: 45%
- Foam amount: 41%

The attributes identified as areas of potential should help guide the path or paths to choose in the Graphical Model.

Page 32: Product Testing Deck_June 23

Linking Attributable Effects and Graphical Models

• Characteristics related to flavor strength, consistency, creaminess and foam amount were identified as primary areas of improvement in the Attributable Effects Analysis.

• Therefore, the team may consider focusing on the paths circled.

[Network diagram repeated from the Graphical Model slide, with the paths from flavor strength, consistency, creaminess and foam amount to Overall Opinion circled.]

Page 33: Product Testing Deck_June 23

Source Of Volume (SOV) Overview: Allocation of Chips

• A Source of Volume (SOV) is an analysis tool that:
- Is designed to provide guidance in estimating what portion of a test concept's volume is being drawn from each of the items in the competitive set as defined in the survey.
- This analytic can be used for a concept, product or a package.

• Other names for SOV
- When using a broad category list, SOV can be referred to as a "share of stomach" or "share of wallet" analysis.

• The Source of Volume is estimated using a pre/post constant sum (preference chip) exercise.

• Prior to being exposed to the new product/concept, consumers are asked to imagine they are making 11 purchases in the competitive set. They are instructed to allocate all or a portion of their 11 preferences (using chips) across the brands presented, totaling 11 chips.

• Following exposure to the new product/concept idea, consumers are then asked to complete the allocation exercise again, this time with the new product introduced into the exercise.

• The new product earns its own "share of preference" during this post-exposure allocation task.

© 2007 Synovate
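The pre/post chip logic can be sketched for a single respondent with invented allocations over 11 chips (the frequency weighting and the full aggregation algorithm are not reproduced here):

```python
# Invented pre- and post-exposure chip allocations for one respondent.
pre = {"Brand A": 5, "Brand B": 4, "Brand C": 2, "New Concept": 0}
post = {"Brand A": 4, "Brand B": 3, "Brand C": 2, "New Concept": 2}

# Both allocations use the full budget of 11 chips.
assert sum(pre.values()) == sum(post.values()) == 11

# Chips the new concept gained, and which brands they were sourced from.
gained = post["New Concept"] - pre["New Concept"]
sourced = {b: pre[b] - post[b]
           for b in pre if b != "New Concept" and pre[b] > post[b]}

# Share of the new concept's preference drawn from each brand.
sov_pct = {b: round(100 * lost / gained, 1) for b, lost in sourced.items()}
print(sov_pct)  # {'Brand A': 50.0, 'Brand B': 50.0}
```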

Page 34: Product Testing Deck_June 23

Source Of Volume (SOV) Overview: Shift in Preferences

• The analysis is conducted among concept acceptors at the respondent level. The SOV is estimated via an algorithm using both the pre-concept and the post-concept exposure preference allocation.

• The shifting of preferences is analyzed on a respondent-by-respondent basis, and then the results are aggregated.

• The chip allocation question in the survey usually specifies the number of chips to be assigned in order to maintain a controlled experiment.
- However, in a traditional marketplace environment, potential consumers of the test product might be heavier or lighter users of the product.
- To take this into account, each respondent's results from the algorithm are weighted by the anticipated frequency of purchasing the test product.

Page 35: Product Testing Deck_June 23

Source Of Volume (SOV) Overview: Brand List

• The Source of Volume analysis is highly dependent on the competitive brand list that is used. If a brand is not listed, its sourced volume is not measured.

• It is recommended that the brands listed account for 75-85% of the category.

• An option for "Would not select any of these brands" should also be included.
- If a respondent chooses not to select any of the brands listed, they should be required to allocate all of their chips to "would not buy".

• The brand lists for the pre and post chip allocation questions must be the same, with the exception that the new concept is added alphabetically to the post list (drawing no attention to the new item).

• It is recommended that items in the SOV list be priced.

• It is also recommended that (if possible) respondents' pre-chip allocations are presented back to them at the post-allocation question.

Page 36: Product Testing Deck_June 23

Source Of Volume (SOV) Overview: What Does The Output Look Like?

[Chart: pre-concept allocation of preferences (%), shift in preferences (pts, post-concept vs pre-concept), and SOV % by brand.]

Brand          Pre-Concept Allocation (%)   Shift in Preferences (pts)   SOV %
Brand A              40.0                        -6.4                    38.0
Brand B              10.0                        -5.0                    22.5
Brand C              18.8                        -3.6                    15.5
Brand D               8.4                        -1.0                    14.0
Brand E              22.8                        -2.0                    10.0
New Concept           0.0                       +18.0                     0.0**

Allocation of Preferences: Prior to being exposed to the concept, consumers are asked to imagine they are making 11 purchases within the competitive set. They are instructed to allocate their 11 preferences (using chips) across the brands presented. The first column represents the pre-concept exposure chip allocations.

Shift in Preferences: The shift in preferences is the difference between respondents' post-concept and pre-concept preference allocations (the post-concept exposure brand list includes the new concept). The algorithm examines the shifting of preferences on a respondent-by-respondent basis, with weights applied based on their anticipated frequency of purchase; results are then aggregated. The preference for the new "test" concept is displayed as a positive change or shift.

Source of Volume: The SOV percentages demonstrate what percent of the new concept's volume will be sourced from brands in the competitive set (as defined in the survey).

Page 37: Product Testing Deck_June 23

[Chart: SOV example for a flavoured water concept, showing pre-concept allocation of preferences (%), shift in preferences (pts, post vs pre) and SOV % for categories including unflavored carbonated water, unflavored non-carbonated water, carbonated water with fruit flavor, non-carbonated water with fruit flavor, water with flavor and additives (vitamins, herbs, tea), ready-to-drink bottled fruit juice, ready-to-drink ice teas, caffeine-containing lemonades/cola, lemonades, bitter lemonades, energy drinks, and others. The new concept's shift in preferences is +27.1 pts.]

Key findings:
- The new concept is expected to draw 76% of its volume from bottled waters.
- The new concept is expected to draw 21% of its volume from bottled waters with fruit (like Bonaqa Fruits).
- Beyond waters, the new concept will source from RTD bottled fruit juices and lemonades.

The SOV analysis is designed to provide guidance in estimating what portion of the test product's volume is being drawn from the brands in the competitive set; it assumes perfect awareness/distribution.

Page 38: Product Testing Deck_June 23

Thank You For Your Attention.

Our Curiosity Is All Yours.


