## Concept Testing

Experimental designs are the backbone of causal inference. True random assignment to conditions, followed by best practices to maintain equivalence across conditions on extraneous variables, remains the most compelling and conclusive research method for establishing causality.

What do you wish to test?

- Need to quantify the relative appeal of different messages?
- Need to simulate uptake of varying configurations of product features?
- Need to test the viability of a new concept on different user segments?
- Need to figure out the minimum sample size to detect an effect of meaningful magnitude?
- Need to build a well-balanced factorial design?

I can help with the above tasks and much more.

### A/B Testing

The most basic design for concept testing is the A/B test, or monadic design, in which subjects are randomly assigned to two or more alternative conditions. Based on monadic designs, we can quantify with confidence the extent to which a concept or feature makes a significant difference in the market space.
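
As a minimal sketch, suppose cells A and B of an A/B test yield different conversion counts (the counts below are invented for illustration); a two-proportion z-test, here via statsmodels, quantifies whether the observed difference is statistically significant:

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: 120/1000 conversions in cell A, 160/1000 in cell B
conversions = np.array([120, 160])
exposed = np.array([1000, 1000])

# Two-sided test of H0: the two conversion rates are equal
z_stat, p_value = proportions_ztest(conversions, exposed)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```

With these made-up counts the difference is significant at the conventional 0.05 level; real studies would, of course, fix the decision threshold and sample size in advance.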

An experiment with too small a sample could lack sufficient power to detect the target effect, whereas an experiment with too large a sample would result in unnecessary waste of resources. Power analysis is essential to determine the optimal sample size needed to uncover the anticipated treatment effect.

To this end, my approach to monadic designs is grounded in answers to key questions:

- Is the primary comparison between means or proportions?
- What is the outcome/endpoint measure? Is it an absolute threshold or a continuous scale?
- What is the minimum effect size deemed to be of meaningful magnitude?
- How many conditions are to be compared? Are different stimuli repeated on the same subject (within-subject design) or are distinct subjects randomly assigned to different conditions (between-subject design)?
- How many additional covariates are to be considered, such as gender, health status, socioeconomic status, or geographical location? And how many levels exist in each covariate?

Answers to the above questions determine the appropriate statistical test for analyzing the experimental data, and thus also identify the power analysis technique that should be invoked for sample size estimation. Although power computations are essentially functions of sample size, effect size, and alpha, the power analysis for sample size estimation is specific to the statistical test to be applied to the final data. That is, different algorithms must be invoked to run power analysis for t-tests, analysis of variance (ANOVA), and chi-square tests of differences.

### Factorial Designs

More complex combinations of concepts and features can be tested in factorial designs. Precise causal inferences on how new concepts or feature bundles affect uptake can be derived via Discrete Choice Modeling, which is grounded in Random Utility Theory (RUT), a longstanding theory of human choice behavior in the social sciences. As in monadic designs, when research participants are randomly assigned to view different sets of choice cards, we meet the fundamental principle of experimental design by ensuring equivalence across cells at the outset, prior to any treatment or exposure to stimuli. Hence, resulting differences between cells in the final output can be attributed to differences in concept or product features.

When models are based on Random Utility Theory (RUT), we build on the premise that there are latent constructs called “utilities” in people's minds that cannot be directly observed by researchers. These latent utilities can be reliably estimated in terms of two components: a systematic component and a random error component. The systematic component consists of attributes explaining differences among choice alternatives and covariates explaining differences in the choices made by potential customers, while the random error component accounts for unidentified elements. These error components play a critical role in estimating the external validity of parameter estimates from discrete choice models.
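
The RUT premise can be sketched in a few lines of simulation: with hypothetical systematic utilities and i.i.d. Gumbel errors (the standard logit assumption), the choice shares of simulated respondents converge to the familiar logit probabilities. All numbers here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical systematic utilities V for three product concepts
V = np.array([1.0, 0.5, 0.0])

# Each simulated respondent picks the alternative with the highest
# total utility U = V + epsilon, with i.i.d. Gumbel error terms
n_resp = 100_000
eps = rng.gumbel(size=(n_resp, 3))
choices = np.argmax(V + eps, axis=1)

observed_shares = np.bincount(choices, minlength=3) / n_resp

# Under the Gumbel assumption, shares follow the logit formula
logit_shares = np.exp(V) / np.exp(V).sum()
print(observed_shares.round(3), logit_shares.round(3))
```

The simulated shares match the closed-form logit shares to within sampling error, which is the mechanism that lets conditional logit models recover the latent utilities from observed choices.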

My approach to concept / feature testing with Discrete Choice Modeling:

- define & operationalize core outcome measure(s)
- define & operationalize concept or feature attributes and levels
- generate full factorial design
- derive optimally balanced fractional design(s); and blocks as needed
- finalize choice sets and corresponding feature cards
- melt and recast collected data for analysis
- run ANOVA or conditional logit models depending on variable type
- quantify relative importance, main effects, additive and interaction effects of each feature
- produce Web or Excel simulator (if needed)
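
The first design steps above can be sketched in Python; the attribute names and levels below are invented for illustration. A full factorial enumerates every combination of levels, from which a balanced fraction would then be selected (e.g., by orthogonality or D-efficiency criteria, typically with a dedicated design package):

```python
from itertools import product

# Hypothetical attributes and levels for a feature-testing study
attributes = {
    "price":    ["$9", "$19", "$29"],
    "warranty": ["1 yr", "3 yr"],
    "support":  ["email", "24/7 phone"],
}

# Full factorial: every combination of levels (3 x 2 x 2 = 12 profiles)
full_factorial = [dict(zip(attributes, combo))
                  for combo in product(*attributes.values())]
print(len(full_factorial))
```

With only three attributes the full factorial is small enough to field directly; as attributes and levels multiply, the fractional design and blocking steps keep the number of choice cards per respondent manageable.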

A well-designed experiment will attain high internal validity (sound causal inferences about treatment effects) but will not automatically attain high external validity; that is, the generalizability of the treatment effects may be limited to the subjects and conditions studied. Factorial designs can expand the generalizability of experimental results because they support concurrent testing of both main effects and interaction effects across a broader set of factors, beyond the core experimental design.

### Price Sensitivity Testing

Pricing research is technically a special case of feature testing. Astute business executives rarely commission pricing research without some a priori expectations. In fact, collection of new empirical data may not even be necessary if there are sufficient econometric data and analogous products in the existing market space.

Nonetheless, when companies need primary pricing research, it is always a fun and challenging endeavor. Web surveys allow interactive question designs that enhance classic Willingness to Pay (WTP) and Van Westendorp measures. When specific price points are to be tested, I have also produced monadic designs, price ladders, and discrete choice models to capture price sensitivity.
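
As a sketch of the Van Westendorp approach, the snippet below simulates per-respondent "too cheap" and "too expensive" price thresholds (the distributions are invented for illustration) and locates the optimal price point (OPP) where the two cumulative curves cross:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical Van Westendorp responses: each respondent reports a
# "too cheap" threshold and a higher "too expensive" threshold
too_cheap     = rng.normal(10, 2, n)
too_expensive = too_cheap + rng.gamma(4, 2, n)

# Cumulative curves over a price grid
grid = np.linspace(0, 40, 401)
pct_too_cheap     = np.array([(too_cheap >= p).mean() for p in grid])      # falling
pct_too_expensive = np.array([(too_expensive <= p).mean() for p in grid])  # rising

# Optimal price point (OPP): where the two curves intersect
opp = grid[np.argmin(np.abs(pct_too_cheap - pct_too_expensive))]
print(f"OPP approx. ${opp:.2f}")
```

In practice the other two Van Westendorp curves ("cheap" and "expensive") are computed the same way to bracket an acceptable price range around the OPP.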

I have worked on pricing research for several new prescription medications that required simulations across three levels of stakeholders, namely patients, clinicians, and health insurance payers, linking multi-layer considerations spanning WAC vs. net price points, access restrictions, and co-pay assistance.

I have also completed multiple projects on patients’ price sensitivity to varying levels of co-pays, along with their health cost perceptions, attitudes toward generics, compliance issues, and more; and on employers' price sensitivity toward comprehensive vs. piecemeal wellness solutions, for both preventive and disease management programs.

Another study I supported was based on in-depth interviews with hard-to-reach hospital administrators, capturing their price perceptions and expectations in response to varying attributes of an automated/robotic drug-dispensing solution. When needed, econometric revenue forecasts can be delivered as part of my research.

Beyond healthcare, I have also conducted pricing research in other areas, such as web consumers' evaluation of pricing structures and online product offerings, and bank customers' fee tolerance and price sensitivity to new and existing financial products.