A recently published New York Times article describes a study at the University of Illinois comparing two research designs: the observational approach and the randomized controlled trial. The study, which had 5,000 participants, examined the benefits of workplace wellness programs.
It turns out the observational approach, which simply compared the actual program participants to the non-participants, showed incredible benefits. But when the study sample included the additional individuals who had no idea they were part of the study, the results were far different: there was no meaningful gain in health or health cost savings between the control group and the program participants.
The study appears to validate one of the biggest problems with observational studies: selection bias. Individuals sign on (or are selected) because they are already likely to succeed, so the sample does not reflect the overall population.
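A quick simulation makes the mechanism concrete. All the numbers below are invented for illustration: the program is given exactly zero true effect, yet the observational comparison still reports a large "benefit," simply because motivated people are more likely to sign up.

```python
import random
import statistics

random.seed(42)

N = 5_000  # hypothetical population, echoing the study's size

# Each person has an underlying "health motivation" score; we assume the
# wellness program itself has ZERO true effect on outcomes.
motivation = [random.gauss(0, 1) for _ in range(N)]

# Outcome (say, health cost savings) depends only on motivation plus noise.
outcome = [m + random.gauss(0, 1) for m in motivation]

# Observational design: motivated people are far more likely to sign up.
signed_up = [random.random() < (0.8 if m > 0 else 0.2) for m in motivation]
obs_effect = (statistics.mean(o for o, s in zip(outcome, signed_up) if s)
              - statistics.mean(o for o, s in zip(outcome, signed_up) if not s))

# Randomized design: a coin flip decides participation instead.
assigned = [random.random() < 0.5 for _ in range(N)]
rct_effect = (statistics.mean(o for o, a in zip(outcome, assigned) if a)
              - statistics.mean(o for o, a in zip(outcome, assigned) if not a))

print(f"observational 'benefit': {obs_effect:+.2f}")  # large, but spurious
print(f"randomized    'benefit': {rct_effect:+.2f}")  # near zero, the truth
```

The observational comparison shows a strong apparent benefit even though the program does nothing; the randomized comparison correctly finds essentially no effect.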
Why is this important? First, I expect many studies related to marketing science are based on observational rather than randomized selection processes. The reason comes down to cost and available sample sizes: it is much harder to pull together a truly randomized sample than it is to watch a few people closely.
Then add the problem of applying marketing science within our industry: the ridiculously small sample sizes for most company comparison tests. Allowing that a simple statistical study would require at least 500, and preferably 1,000 or even 5,000, projects, how will a contractor, architect or engineer find enough samples if the business designs or builds 10 to 30 projects a year, each worth $10 million to $100 million?
The sample size problem remains even if you are working in the larger mass residential markets. The sample will be bigger, of course, but it is unlikely to be big enough to provide meaningful results in any short time period. (And if you have the resources and patience to measure over years rather than months, how do you account for changing tastes, economic conditions, and new competitors?)
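The arithmetic behind these sample-size worries can be sketched with a textbook two-sample power calculation. The significance level (5%), power (80%), effect sizes, and the 20-projects-per-year figure are all illustrative assumptions, not numbers from the article.

```python
import math

def n_per_group(effect_size, z_alpha=1.96, z_beta=0.84):
    """Rough sample size per group for comparing two means:
    n = 2 * (z_alpha + z_beta)^2 / d^2, where d is the standardized
    effect size (difference in means divided by the standard deviation).
    Defaults: two-sided 5% significance (1.96) and 80% power (0.84)."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

projects_per_year = 20  # mid-range of the 10-to-30 projects mentioned above

for d in (0.2, 0.5, 0.8):  # conventionally "small", "medium", "large" effects
    total = 2 * n_per_group(d)  # both comparison groups combined
    years = total / projects_per_year
    print(f"effect d={d}: ~{total} projects (~{years:.0f} years at "
          f"{projects_per_year}/yr)")
```

Even a "medium" effect needs on the order of 125 projects to detect, which is six or more years of work for a firm building 20 projects a year; a small effect would take decades.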
The point here is that much of the science we follow and try to implement in developing our marketing approaches may be giving us false or, at best, meaningless results.
Should we ignore the scientific studies, then, and just go back to a seat-of-the-pants approach to construction marketing? Not necessarily. Some of the psychological and social science studies we follow have probably been done properly, with fully randomized research. (And we can be thankful that higher standards are the norm in many critical research areas, such as pharmaceuticals.) As well, there is often little harm in modifying our approaches based on non-randomized research; and even if our selection process is yielding artificially high results, does this matter if our hit, sales, or return-on-investment rates are improving?
Just take the latest scientific study with a grain of salt, at least until you check that it is based on a large enough randomized sample.