For those involved in benchmarking, there is a natural curiosity about how such initiatives are pursued elsewhere and what the data show. In the case of the United States, there is more than one answer.
Although organised benchmarking programmes have been around for decades, there has been limited agreement on how to coordinate those efforts. The most ambitious of these was the ICMA Centre for Performance Management, which began in 1994 and collected data on as many as 5,000 metrics, ranging from efficiency, timeliness, quality, and satisfaction to descriptive information about how services were being delivered or what policies might affect their administration.
While participation in that effort peaked at about 230 jurisdictions, it was hampered in part by the sheer comprehensiveness of its scope. Even when two jurisdictions each committed to reporting as many of the measures as they could, there was often little overlap in which measures those were, leaving them with very spotty results to compare.
To rectify that situation, a national Insights programme followed, paring the list of metrics to 950 and adding big-data visualisations and predictive analytics to support better understanding and forecasting. Even that number, however, proved too daunting for the majority of cities and counties, 59% of which were still not doing any internal measurement, let alone benchmarking with others.
In an attempt to lower the barriers to entry, ICMA has shifted to a new Open Access Benchmarking programme that is limited to 80 metrics. The list is not intended to be exhaustive, but rather to provide at least a sampling of comparison data across a range of key services. One of the other benefits of this approach is that it takes into account the proliferation of new