FAQ: Why Doesn't Quantum Calculate Statistical Significance?

Statistical significance, typically represented numerically by a p-value (e.g., p = 0.03), is not reported for engagement surveys or pulse surveys, for reasons of both simplicity and statistical soundness. Here are a few of our top reasons for not including this element in our survey reporting:

  • The average employee does not have a statistical background. As such, a large portion of users would not know how to accurately interpret the meaning of a p-value, making it a number that would cause confusion and misinterpretation rather than offer clarity and guidance.
  • Statistical significance is not the same as practical importance. Reporting p-values could give users a false sense of direction or security, leading them to treat results on one side of an arbitrary threshold as more (or less) worth pursuing for positive organizational change, regardless of how large or meaningful the underlying difference actually is.
  • Organizations are populations. Statistical significance belongs to the branch of statistics known as inferential statistics, which focuses on inferring results about a population (e.g., a country, an industry) from a smaller group drawn from it, because it is rarely possible to collect data from an entire population. In the case of census engagement surveys, however, the organization is the population. Response rates for our engagement surveys tend to be quite high (>80%), so there is little room, and therefore little need, for statistical inference.
  • Statistical significance is strongly affected by group size. Results are less likely to be “statistically significant” for smaller groups and more likely for larger ones, even when the underlying difference is the same size. This relates directly to the second reason above: users could be misled by an artifact of the calculation (group size) rather than guided by practically important differences.
  • The p-value as a measurement has come under severe scrutiny within the broader scientific community, and its use is becoming increasingly controversial.
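To make the group-size point concrete, here is a minimal sketch (not part of our reporting, and using made-up numbers) of a two-sided z-test on a difference in mean favorability. The same 0.1-point gap on a 5-point scale is nowhere near “significant” for a small team but comfortably “significant” for a large division:

```python
import math

def two_sided_p(diff, sd, n_per_group):
    """Two-sided p-value for a difference in two group means,
    using a normal (z) approximation with equal group sizes."""
    se = sd * math.sqrt(2.0 / n_per_group)  # standard error of the difference
    z = diff / se
    # Phi(z) via the error function; p = 2 * (1 - Phi(|z|))
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# Identical 0.1-point favorability gap (5-point scale, sd = 1.0):
small = two_sided_p(diff=0.1, sd=1.0, n_per_group=20)    # small team
large = two_sided_p(diff=0.1, sd=1.0, n_per_group=2000)  # large division
print(f"n=20:   p = {small:.3f}")   # well above 0.05: "not significant"
print(f"n=2000: p = {large:.4f}")   # below 0.05: "significant", same gap
```

The practical difference between the groups is identical in both cases; only the head count changed the verdict.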

Instead of relying on statistical significance to determine whether a difference or change is important, we recommend focusing on relative differences or changes within your data. For example, if most survey questions increased in favorability since the previous survey but a few decreased, then, relatively speaking, the questions that decreased are the more practically important ones to focus on, and the items that decreased the most should receive the highest priority. Likewise, if one department has especially low overall favorability, that department is the most important to focus on. And if all departments have similarly low favorability, that suggests a strong organization-wide effort is required.
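The relative-change triage described above can be sketched in a few lines. The item names and scores below are entirely hypothetical, and this is only one simple way to rank items by how far favorability fell:

```python
# Hypothetical % favorable scores for a few survey items,
# comparing the previous survey to the current one.
previous = {"I trust leadership": 78, "My workload is manageable": 70,
            "I see growth opportunities": 65, "I feel recognized": 72}
current = {"I trust leadership": 80, "My workload is manageable": 61,
           "I see growth opportunities": 64, "I feel recognized": 66}

# Change per item; negative means favorability declined.
changes = {item: current[item] - previous[item] for item in previous}

# Items that declined, biggest drop first: these get the highest priority.
declined = sorted((i for i in changes if changes[i] < 0),
                  key=lambda i: changes[i])
for item in declined:
    print(f"{item}: {changes[item]:+d} pts")
```

No p-value is needed here: the ordering itself tells you where attention is most likely to matter.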