The recent OECD World Forum in Delhi showcased the plethora
of approaches now underway to go beyond GDP and measure what matters. The growth in popularity of these indicator
initiatives is undeniable: the number of projects ratchets up after each world
forum. But while there has been growth in activity, I don’t see as much growth in the maturity of the conversation about the indicators.
Far too much time is still being invested in discussions
around how to measure, far too little in what to do with the measures once you
get them (and what to do with the measures is the subject for a future post). Moreover, I’m increasingly convinced that the questions around how to measure are much less important than we statisticians think they are. One type of indicator will never suit all purposes, and we’d be much better off turning our minds to promoting the use of our new indicators for appropriate decision making, rather than searching for ever greater statistical perfection.
Take, for example, the debate around the merits of a composite indicator of progress (an average of other measures) versus a set of indicators. Few issues are likely to get a group of statisticians as hot under the collar.
Many government statisticians feel about composite indicators much as the Taliban look upon miniskirts. They express abhorrence but are fascinated by what lies beneath.
Any composite indicator requires its component parts to be
weighted together, which in turn requires judgments on the relative weights of
each component. And such judgments are often difficult to make on statistical
grounds alone. Now this is a genuinely important statistical issue, but I’d
argue it is given way too much prominence when we consider how that composite
indicator is going to be used. Most
composite indicators are not, and should never be, used directly to guide
decision makers.
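To make the weighting issue concrete, here is a minimal sketch of how a composite indicator is typically assembled: each component is rescaled to a common range and then combined using chosen weights. The component names, ranges and weights below are purely illustrative, not taken from any real index.

# A minimal sketch of building a composite indicator: rescale each
# component to a common 0-1 range, then combine with chosen weights.
# All names, ranges and weights here are hypothetical.

def normalize(value, minimum, maximum):
    # Min-max rescaling to the 0-1 range.
    return (value - minimum) / (maximum - minimum)

# Hypothetical raw scores for one country, with illustrative goalposts.
components = {
    "health":      normalize(72.0, 20.0, 85.0),   # life expectancy, years
    "education":   normalize(11.5, 0.0, 18.0),    # mean years of schooling
    "environment": normalize(40.0, 0.0, 100.0),   # air-quality score, 0-100
}

# The weights are a judgment call -- exactly the difficulty noted above.
weights = {"health": 0.4, "education": 0.4, "environment": 0.2}

composite = sum(weights[k] * components[k] for k in components)
print(f"Composite score: {composite:.3f}")

Shift the weights even slightly and the score moves with them, which is why the weighting judgment attracts so much scrutiny.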
If policies were set explicitly to achieve a certain value of a composite indicator, for instance, we’d have to be very sure we had the weights right. In reality, composite indicators are meant to raise awareness, reframe debate and challenge a prevailing mindset. They might not be useful for designing a policy, but they are great at summarizing a complicated set of data. Take the Human Development Index, for example. It has never claimed to be anything other than a fairly crude measure of development.
It doesn’t claim to have perfect weights, nor does it pretend to measure
everything that matters. Yet it is a
practical tool that is relatively easy for all countries to produce and for
users to interpret. And it is this
simplicity that has allowed it to challenge the hegemony of GDP and so help the
world to realize that development is not synonymous with
growth.
Likewise, those who support composite indicators do not
always see the value in a set of indicators.
“How can you communicate a simple story about change with 15 or 20 indicators?” they ask. “You can’t” is the honest answer.
But you can use a set of indicators to
encourage decision makers to look at policy through a broader well-being lens,
rather than one focused only on achieving growth.
A well-crafted set of indicators can help promote whole-of-government, cross-silo decision making, highlight the interface between the economic, social and environmental spheres, and make trade-offs and synergies more explicit.
Different approaches and different indicators are neither better
nor worse than each other. They are all
made to measure, but one size will not fit all purposes. We will make quicker progress when we accept
this, and start paying more attention to what those purposes are, and how to
ensure all this information is turned into action.
Jon Hall
Human Development Report Office, UNDP