A recent exchange of posts with the usual “liquid” CX influencers (“liquid” paraphrasing Zygmunt Bauman) prompts me to write this short article in the hope that I can finally enlighten these CX-thinking minds.

Important: what I am going to write are not my personal thoughts but scientifically established facts from academic studies that began in the nineteenth century and have developed to the present day.

The question was straightforward, and the answer was equally obvious statistically and mathematically: can we place a value on nonrespondents in a survey? In the discussion, since this was an NPS survey, there had been talk of arbitrarily assigning nonrespondents the value of detractors, which would heavily and negatively skew the observed result.

The reality was much simpler: having collected 100 interviews from a total of 500 customers, we have n=100 (the observed sample) and N=500 (the whole population), so the mean estimated from the sample carries a margin of error of +/- 8.77% at a 95% confidence level. Since this is a Net Promoter Score, the real margin is likely even higher, because we are most probably dealing with a non-normal distribution (see my article).
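As a sketch of where that 8.77% comes from, here is the standard margin-of-error formula for a proportion with the finite population correction, in Python. The 95% confidence level (z = 1.96) and the worst-case proportion p = 0.5 are conventional assumptions on my part, since the discussion did not state them explicitly.

```python
import math

# Margin of error for a proportion, with finite population correction (FPC).
# Assumptions: 95% confidence (z = 1.96) and the conservative worst-case
# proportion p = 0.5, the standard choice when the true proportion is unknown.
def margin_of_error(n, N, p=0.5, z=1.96):
    se = math.sqrt(p * (1 - p) / n)        # standard error of the proportion
    fpc = math.sqrt((N - n) / (N - 1))     # finite population correction
    return z * se * fpc

# n = 100 respondents out of a population of N = 500 customers
print(round(margin_of_error(100, 500) * 100, 2))  # → 8.77
```

The finite population correction matters here because the sample (100) is a sizable fraction of the population (500); without it, the margin would be about 9.8%.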

The same logic that gives me that margin of error of 8.77% tells me that, for a 95% confidence level, I would need a sample of 218 interviews to reach a margin of error of 4.99%. But how do we arrive at this margin of error? What is the statistical and mathematical logic behind it? I will try to explain it without mathematical formulas, in plain language for business people.
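To see how the figure of 218 can be derived, here is a minimal sketch that inverts the same formula: compute the sample size needed for a target margin of error at 95% confidence, then shrink it with the finite population adjustment. Again, p = 0.5 and z = 1.96 are assumed conventions, not values stated in the original discussion.

```python
import math

# Required sample size for a target margin of error, adjusted for a finite
# population. Assumptions: 95% confidence (z = 1.96), worst-case p = 0.5.
def required_sample_size(N, margin, p=0.5, z=1.96):
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / N)                 # finite population adjustment
    return math.ceil(n)

# Population of 500 customers, target margin of error of 5%
print(required_sample_size(500, 0.05))  # → 218
```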

Statistical sampling and statistical inference

Statistical sampling (which relies on sampling theory) is the basis of statistical inference, which is divided into two major chapters: estimation theory and hypothesis testing. Specifically, a survey is called a sample survey when it is useful for inference, that is, for inferring information about the whole population from the sample itself.

That last sentence is significant, and it explains the importance of the sample itself: the ultimate goal is to select a subgroup of the total population that is representative of the entire population, so that we can treat the results from the subset as valid for the whole population.

Obvious? No. I recently happened to participate in a conversation on LinkedIn where people wanted to assign arbitrary values to those who did not respond to an NPS survey. Those who did not respond cannot simply be given a default answer. In the same vein, why build and distribute a survey at all? Open Photoshop and create a dashboard with the results you like; you are insulting statistics and mathematics anyway, so if you want to do it, dare to do it all the way! It would be like saying: I asked a sample of the population for their height to find out the average height of the people, and to those who did not answer I arbitrarily attribute a height of 190 cm (!). Do you see at once that this does not work? I have no words in the face of such havoc!

What I have described is, to be precise, the procedure by which the characteristics of a population are induced from the observation of a part of it (the sample); it is called statistical inference and originated between the late nineteenth and early twentieth centuries from the studies of Pearson and Fisher. It is also called classical inference to distinguish it from Bayesian inference, which is based on Bayes’ theorem.

The error in the sample and its consequences

We should answer two main questions by analyzing the results of a study of a subgroup of the entire population:

1. Is the result obtained from the subgroup due to chance, or would it hold if the study were repeated? (confidence level: is the result significant?)
2. Is the sample genuinely representative of the population, or does it deviate from it in a way that creates an error I can quantify? (margin of error, or sampling error)

Don’t confuse “statistical significance” with “importance”!

Significant in statistics means that, with a certain confidence level, the observed result is not due to mere chance, and that if we repeated the observation n times on other subgroups of the population, we would get a comparable result in most cases. It has nothing to do with the practical importance of the result.

I won’t go into the details of the calculation. Just know that this type of analysis was proposed by Fisher and is based on the convention that we can accept a risk of 1 in 20 (alpha = 0.05): in general, the result is not due to chance in 95% of cases.

In other words, if you set this threshold, you are saying that for your study it is acceptable that one time in 20 the observed difference might be due to chance alone. The threshold can also be made stricter (e.g., 0.01). To be 100% sure, you would have to survey the entire population.

Simply put, when we say that a result is significant, it merely means that it is not the product of chance, given an accepted risk of error (commonly alpha = 0.05). If we repeat the same study with a different sample from the same population, in 95% of cases we will get a consistent result.
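What that 95% actually means can be illustrated with a small simulation (my own illustrative numbers, not from the discussion above): draw many samples from a population whose true proportion we know, build a 95% confidence interval around each sample estimate, and count how often the interval captures the truth. The coverage comes out close to 95%.

```python
import random

# Monte Carlo sketch of what "95% confidence" means. All numbers here
# (true proportion, number of repetitions, sample size) are illustrative.
random.seed(42)
TRUE_P, N_SAMPLES, SAMPLE_SIZE = 0.6, 2000, 200

covered = 0
for _ in range(N_SAMPLES):
    # Simulate one survey: each respondent "agrees" with probability TRUE_P
    hits = sum(random.random() < TRUE_P for _ in range(SAMPLE_SIZE))
    p_hat = hits / SAMPLE_SIZE
    # 95% interval around the sample estimate (normal approximation)
    moe = 1.96 * (p_hat * (1 - p_hat) / SAMPLE_SIZE) ** 0.5
    if p_hat - moe <= TRUE_P <= p_hat + moe:
        covered += 1

print(round(covered / N_SAMPLES, 3))  # close to 0.95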

But is the sample genuinely representative?

This is the operational scope of inferential statistics, which is aimed at the probabilistic induction of the unknown characteristics of a population. In other words, it is concerned with solving the so-called inverse problem: based on observations made on a sample of units representative of the entire population and selected by given procedures, it arrives at conclusions that can be generalized (inference), within given levels of error probability, to the whole population. Underlying inferential statistics are probability theory and sampling theory.

Very simply, given the mean observed in the sample, one attempts to estimate how far it may lie from the actual mean of the entire population by calculating the margin of error. This sampling error is the measure of the reliability of the sample.

There is a straightforward rule: the larger the sample, the more representative it will tend to be of the entire population. However, this also depends on other factors. For example, the more variability there is among the elements of the population, the larger the sample will need to be.
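To illustrate the point about variability, here is a sketch (with my own illustrative numbers) comparing the sample size needed for a 5% margin of error when the population is maximally variable (p = 0.5) versus lopsided (p = 0.9), using the standard infinite-population formula.

```python
import math

# For a proportion, variance p*(1-p) peaks at p = 0.5, so a 50/50 split in
# the population requires the largest sample; a lopsided population (p = 0.9)
# needs fewer interviews for the same margin of error.
# Assumptions: 95% confidence (z = 1.96), no finite population adjustment.
def size_for(p, margin=0.05, z=1.96):
    return math.ceil((z ** 2) * p * (1 - p) / margin ** 2)

print(size_for(0.5))  # → 385  (maximum variability)
print(size_for(0.9))  # → 139  (less variability, smaller sample)
```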

Here are three key terms you will need to understand to calculate the sample size and put it in context:

Population size – The total number of people in the group you are trying to analyze. If you take a random sample of people in the United States, the population will be about 317 million. Similarly, if the survey refers to your company, the population size will be the total number of employees.

Margin of error – A percentage that indicates how closely the survey results are expected to reflect the views of the total population. The smaller the margin of error, the closer the sample estimate is likely to be to the true population value at a given confidence level.

Sample confidence level – A percentage that reveals how confident you can be that the population would choose an answer within a given range. For example, a 95 percent confidence level means you can be 95 percent certain that the true result lies between the numbers x and y.

Several sites offer free sample size calculations based on various parameters. Just google “sample size calculator,” and you will receive a list of sites that offer this possibility.

In conclusion, statistics is an exact science: we are not talking about personal interpretations but about precise mathematical concepts. Reading comments like “What value do we place on nonrespondents?” or, even worse, “We don’t present margin-of-error values because they bore those listening to the survey results” is perfectly in line with the quality of discussion that pseudo-influencers and gurus are bringing to customer experience: practically close to zero, with a meager margin of error and 99.9% confidence.

Influencers, please, don’t mess with statistics!