Mark Twain reportedly popularized the statement: "There are three kinds of lies: lies, damned lies, and statistics." The phrase refers to the persuasive power of numbers, particularly the ability to use statistics to bolster weak arguments. In the midst of an election season, we see this practice on display every day.
I confess a love of statistics--and research in general--as support for the advice I give to clients. And the technical professionals I serve seem to respond to the numbers as well. They give our conclusions the appearance of being objective, proven, reliable. As a consultant, they make my recommendations seem more authoritative.
Unfortunately, research (and the statistics it produces) is often not nearly as trustworthy as we might like to think. For one thing, it frequently conflicts: some studies indicate one thing, others point to a different conclusion. Then there's the human tendency to pick the data that best supports the point we were already trying to make (that's what Twain was referring to). If we're honest, the "proof" we're looking for can be elusive.
For example, one of my passions is raising the bar for client service among A/E firms. With almost 40 years of experience in our business, I've seen for myself plenty of evidence that there's room for improvement. But having a little data to support that observation is always helpful in persuading my clients.
I had it at one point: One survey indicated that only 16% of clients gave their A/E service providers an "A" grade for service. Another showed that 88% of A/E firm clients were open to switching from their current providers, the highest percentage among the professional service sectors included in the survey. These provided the compelling evidence I was looking for.
Alas, the two consulting firms that provided those numbers did another round of surveys, with remarkably different results. In one year, the percentage of clients giving their A/E firms an "A" rose to 56%! The firm that did the survey announced in a press release that "overall satisfaction with the industry's customer service has shown significant gains." I'm more inclined to conclude that their research, based on a relatively small sample of clients, is apparently flawed.
Likewise, the percentage of A/E firm clients open to switching dropped from 88% to 54%. To their credit, this firm was more guarded in their assessment of the apparent improvement. When I contacted them about the discrepancy, they admitted that the sample of A/E firm clients (you guessed it) was pretty small. The study was really oriented more towards the other professional service sectors covered.
What to make of this? Well, if you're among the many firms that use industry data for benchmarks and guidance, consider the limits of such "objective" standards. The numbers may not be representative for your firm or market, may come with a significant caveat, or may not be that accurate. The most frequently used benchmarks in our industry come from ZweigWhite and PSMJ, and in general I trust their numbers. They draw from reasonably large samples and their medians don't shift unpredictably from year to year.
Let me offer a few perspectives to ponder regarding the use of industry benchmarks and other numbers to guide your firm:
Use medians as references rather than goals. Many firms seem happy to aspire to be average. They target industry norms as goals and let off the gas when these are achieved, even as they sell themselves as being above average. Medians are helpful to determine where your firm stands relative to the industry. They may be appropriate goals if you're deficient in some areas. But overall, don't you want to do better than average? If you want to benchmark, try looking at the top performers in the business.
Recognize the norm may not be right for your firm. Reining in costs in a weak economy, firms often look to industry medians as guides. For example, on average, A/E firms have spent about 4-6% of net revenue on business development. I've known firms spending above that range that felt compelled to cut their expenditures. But ZweigWhite found that some of the most successful firms spent closer to 10%. Other research indicates that companies that make higher investments in marketing during a recession grow much faster when the economy begins to recover. Whatever you spend, the important metric is ROI.
Use the right formula. If you're tracking performance against industry norms, be sure you're comparing apples with apples. For example, the industry median for proposal win rate is supposedly around 40%. But most of the time when I encounter a firm doing that well or better, it turns out that they're fudging the number--including noncompetitive, sole-source proposals in the mix. That makes me wonder how accurate that benchmark, which is supposed to include only competitive submittals, really is. On the other hand, I wouldn't encourage a goal any lower.
Another illustration is firms that calculate utilization differently than everyone else (backing out overhead positions, for example). That makes it darn near impossible to determine how they're doing in comparison with their peers (which may have been the intent!).
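To see how much the definition matters, here's a minimal sketch using hypothetical numbers (the hours and proposal counts are mine, purely for illustration) of how mixing in sole-source wins inflates win rate, and how backing out overhead positions inflates utilization:

```python
# Hypothetical example: the same firm, measured two ways.

def win_rate(wins, submittals):
    """Proposal win rate as a fraction of submittals."""
    return wins / submittals

# Competitive proposals only -- the basis the ~40% benchmark assumes.
competitive_wins, competitive_bids = 12, 40
# Sole-source "proposals" are essentially guaranteed wins.
sole_source = 10

honest = win_rate(competitive_wins, competitive_bids)
fudged = win_rate(competitive_wins + sole_source,
                  competitive_bids + sole_source)

print(f"competitive-only win rate: {honest:.0%}")  # 30%
print(f"with sole-source mixed in: {fudged:.0%}")  # 44%

def utilization(billable_hours, total_hours):
    """Share of staff hours charged to projects."""
    return billable_hours / total_hours

# Firm-wide hours vs. the same hours with overhead positions backed out.
billable, technical_hours, overhead_hours = 70_000, 90_000, 30_000

firm_wide = utilization(billable, technical_hours + overhead_hours)
backed_out = utilization(billable, technical_hours)

print(f"firm-wide utilization: {firm_wide:.0%}")   # 58%
print(f"overhead backed out:   {backed_out:.0%}")  # 78%
```

Same firm, same performance--but the "adjusted" versions of both metrics look dramatically better, which is exactly why comparisons against a benchmark are meaningless unless the formula matches.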
Benchmark against your firm. The most important measure is progress. However your firm might compare with the industry, continued improvement should be your focus. In some cases, that may involve catching up with your peers. In others, it will mean going beyond the norm. The most valuable benchmarks are those you set for your firm, not what other firms are doing. As noted above, overemphasis on benchmarks can result in complacency once those thresholds are reached.
Don't limit your metrics to what everyone else is measuring. I do think it's helpful to benchmark against the industry, but don't stop there. There are a number of metrics that are not common (and therefore cannot be benchmarked) that are still worth consideration. I mentioned several such measures in this earlier post on tracking BD performance.
Also, as I've written about previously in this space, there is real value in measuring behaviors. This provides leading indicators of performance to complement the usual lagging indicators. I'm currently helping a client implement a behavior-based safety process which in part involves observing and recording safety behaviors. To date we've compiled a database of thousands of behaviors from which we can deduce trends and take preventive actions.
The moral of the story: Let us all strive to use statistics that do more to reveal truth than obscure it. At the same time, we should recognize the inherent limitations of so-called objective measures. After all, we are human. And the ramifications of how that affects our business are sometimes hard to calculate. Which makes it all the more interesting!