One of the standard marketing mantras is: test, test and test some more. This concept, that we should conduct quick and dirty tests of marketing ideas, campaigns, copywriting techniques, conversion models, and the like, predates the Internet, back when we relied on print media and direct-response (snail mail) marketing pieces. Then, testing was a challenge and took some time. We could, and still can, send out A/B samples to evaluate differential response and see which works best.
Today, at least for consumer and higher-volume business-to-business markets, the testing concept has reached a much higher level of immediacy, granular detail and effectiveness. Online campaigns can be devised with virtually immediate metrics, and meaningful sample sizes can be collected within days. At a large scale, organizations can test extremely tiny differentiators and achieve meaningful improvements. (This is the practice at Google, which constantly rolls out new ad formats, sizes, search models and the like to samples that are minuscule relative to the general public, yet can still represent hundreds, thousands or even tens of thousands of users.)
Fair enough. But how can these concepts help you when your market is designing or building schools or hospitals, you might have at most 10 to 12 projects a year, and the lead time to assess a project (from initial lead to final conclusion) may be three to five years, or longer? Clearly, you have two major problems: the sample size is far too small to yield any statistically meaningful data, and the long lead time means that conditions at a test's starting point could very well be invalid by its conclusion. (Imagine starting a test in 2005 that you could only conclude after the Great Recession began in 2008!)
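To see just how far short a 10-to-12-project pipeline falls, the standard two-proportion sample-size formula (a normal approximation, sketched here in Python) gives the number of pursuits needed per variant before a difference in win rate becomes statistically detectable. The 30% versus 40% win rates, and the conventional z-values for 95% confidence and 80% power, are purely illustrative.

```python
import math

def samples_per_variant(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Normal-approximation sample size per variant for detecting a
    difference between two proportions (two-sided 95% confidence,
    80% power by default)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Detecting an improvement from a 30% to a 40% proposal win rate
# requires hundreds of pursuits per variant, versus 10-12 projects a year.
n = samples_per_variant(0.30, 0.40)
```

Smaller differences require even larger samples, which is exactly why low-volume, long-cycle markets cannot rely on classical A/B testing alone.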
Even so, we can test some things, including uptake and open rates from your email marketing pieces. As well, as you build a "go/no go" model, you can include a number of control variables to reduce the risk of spinning your wheels on ineffective RFP responses. (Over a few years, you'll likely do enough of these to come up with some metrics. I recall well one California designer who found that the pre-RFP non-billable hours spent by the firm's principals were the key factor in determining RFP success, so that data became vital in deciding whether to push for the work or not.)
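A go/no-go model of the kind described can start as a simple weighted scorecard. The factors, weights and threshold below are hypothetical placeholders that a firm would calibrate against its own win/loss history; the pre-RFP hours metric from the California designer's experience appears as one such factor.

```python
# Hypothetical weighted go/no-go scorecard. The factors, weights and
# threshold are illustrative only; a firm would calibrate them against
# its own win/loss records.
WEIGHTS = {
    "existing_relationship": 0.40,  # known to the client before the RFP?
    "pre_rfp_hours": 0.30,          # principals' non-billable time invested
    "project_fit": 0.20,            # match to the firm's portfolio/expertise
    "capacity": 0.10,               # staff available to deliver the work
}
GO_THRESHOLD = 0.6

def go_no_go(scores):
    """scores: dict mapping each factor to a 0.0-1.0 rating.
    Returns ('go' or 'no-go', weighted total)."""
    total = sum(WEIGHTS[f] * scores.get(f, 0.0) for f in WEIGHTS)
    return ("go" if total >= GO_THRESHOLD else "no-go"), round(total, 2)

decision, score = go_no_go({
    "existing_relationship": 0.9,
    "pre_rfp_hours": 0.8,
    "project_fit": 0.7,
    "capacity": 0.5,
})
```

Even before enough history accumulates for formal testing, recording each factor at decision time gives you the raw data to refine the weights later.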
However, these approaches only provide a partial answer.
The practical solution, relatively obvious and easy to implement, is to combine testing where you can do it effectively with your own experience and history, plus an understanding of industry benchmarks and norms, which you may learn through data services and, more informally, through participation in industry associations such as the Society for Marketing Professional Services (SMPS). There, you'll be reminded that RFP success generally depends on relationships developed BEFORE the RFP begins, and that there is a continuum of relationships with ongoing and previous clients and through relevant community and client-focused associations. You may also discover value in developing your online and event content and your reputation as an expert, and you can measure the take-up of your various initiatives in this regard. (If you blog, for example, you can quickly find which content is read the most, and in some cases even correlate it to initial leads and inquiries.)
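Correlating content take-up with leads can begin as simply as a Pearson correlation between monthly blog-post reads and inquiries. The sketch below uses only the standard library; the monthly figures are invented for illustration.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical monthly figures: blog-post reads and resulting inquiries.
reads     = [120, 340, 210, 560, 480, 300]
inquiries = [2, 5, 3, 9, 7, 4]
r = pearson(reads, inquiries)  # a value near +1 suggests reads track leads
```

Correlation is not causation, of course, but even a rough signal like this helps decide which content topics deserve more investment.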
Have you developed your own testing/metrics models — or had difficulties in implementing the strategies? Please share your observations, either as a comment or through an email to me at email@example.com.