The quality of composite indicators depends largely on the quality of the underlying indicators
It is usually not a lack of measures that hinders the evaluation of an institution's or country's performance, but the overwhelming abundance of potentially useful indicators. Ideally, indicators should be selected on the basis of their analytical soundness, measurability, relevance to the phenomenon being measured, and relationship to each other.
The following list contains some of the most obvious and most frequently cited criteria.
Policy relevance - Can the indicator be associated with one or several issues around which key policies are formulated? Unless the indicator can be linked by readers to critical decisions and policies, it is unlikely to motivate action.
Simplicity - Can the information be presented in an easily understandable, appealing way to the target audience? Even complex issues and calculations should eventually yield clearly presentable information that the public understands.
Validity - Is the indicator a true reflection of the facts? Were the data collected using scientifically defensible measurement techniques? Is the indicator verifiable and reproducible? Methodological rigor is needed to make the data credible for both experts and laypeople.
Time series data - Are time series data available that reflect the trend of the indicator over time? With only one or two data points, it is not possible to see the direction in which the country (or institution) may be heading in the near future.
Availability of affordable data - Is good quality data available at a reasonable cost or is it feasible to initiate a monitoring process that will make it available in the near future? Information tends to cost money, or at least time and effort from many individuals.
Sensitivity - Can the indicator detect a small change in the system? It has to be determined beforehand if small or large changes are relevant to monitoring.
Reliability - Will the same result be obtained by taking two or more measurements of the same indicator? Would two different researchers arrive at the same conclusions?
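A checklist like this can be operationalised as a simple screening matrix: each candidate indicator is scored against the criteria, and only those reaching a cut-off are retained for the composite. The sketch below illustrates the idea; the indicator names, 0-2 scores, and threshold are illustrative assumptions, not a prescribed methodology.

```python
# Illustrative screening of candidate indicators against the selection
# criteria listed above. Scores (0-2 per criterion) and the cut-off
# are hypothetical, as are the candidate indicators themselves.

CRITERIA = ["policy_relevance", "simplicity", "validity",
            "time_series", "affordability", "sensitivity", "reliability"]

# Hypothetical expert scores, one per criterion, in the order above.
candidates = {
    "R&D expenditure (% of GDP)":     [2, 2, 2, 2, 1, 1, 2],
    "Perceived innovation climate":   [2, 2, 1, 0, 2, 1, 0],
    "Patent applications per capita": [1, 1, 2, 2, 2, 2, 2],
}

def screen(scores, threshold=10):
    """Return True if the summed criterion scores reach the threshold."""
    return sum(scores) >= threshold

for name, scores in candidates.items():
    verdict = "keep" if screen(scores) else "review"
    print(f"{name}: total={sum(scores)} -> {verdict}")
```

In practice such scores would come from expert review or a structured assessment, and the criteria could be weighted rather than summed equally; the equal-weight sum here is only the simplest possible rule.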
Several problems are often encountered, however, in constructing a composite indicator. One major difficulty is the lack of relevant data. Statistics may be unavailable because a certain behaviour cannot be measured or no one has attempted to measure it. The data available may not be comparable across countries (institutions) or may exist only for a few countries. The indicators may be unreliable measures of the behaviour, or may not match the analytical concepts in question. Due to the expense and time involved in developing internationally comparable performance indicators, composites often rely on data sources of less than desirable quality. In the end, they may measure only the most obvious and easily accessible aspects of performance.
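The missing-data problem can be made visible with a simple coverage check: for each indicator, count the share of countries for which a value actually exists, and flag indicators that fall below a chosen cut-off. The country codes, indicator names, figures, and the 80% cut-off below are all illustrative assumptions.

```python
# Coverage check: flag indicators available for too few countries.
# All values are hypothetical placeholders (None = missing observation).

data = {
    "tertiary_graduates": {"AUT": 7.1,  "BEL": 10.5, "FIN": None, "SWE": 9.8},
    "venture_capital":    {"AUT": None, "BEL": None, "FIN": 0.2,  "SWE": 0.4},
    "patent_intensity":   {"AUT": 1.9,  "BEL": 1.4,  "FIN": 3.1,  "SWE": 2.9},
}

def coverage(series):
    """Share of countries with a non-missing observation."""
    values = list(series.values())
    return sum(v is not None for v in values) / len(values)

for indicator, series in data.items():
    c = coverage(series)
    flag = "" if c >= 0.8 else "  <- insufficient coverage"
    print(f"{indicator}: {c:.0%}{flag}")
```

A check like this does not solve the comparability problem, but it makes explicit which indicators would force imputation or the exclusion of countries before the composite is built.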
Because there is no single definitive set of indicators for any given purpose, the selection of data to incorporate in a composite can be quite subjective. Different indicators of varying quality could be chosen to monitor progress in the same performance or policy area. Due to a scarcity of full sets of comparable quantitative data, qualitative data from surveys or policy reviews are often used in composite indicators. The tendency to include "soft" qualitative data is another source of unreliability with regard to composites.
Changes in composite indicators over time are generally hard to interpret, which limits their value as a tool for identifying the determinants of a country's performance over time. One difficulty is obtaining data for points in time which are synchronised with measurements in other countries (e.g. selection of base years, mixing of years across indicators), which compounds the above-mentioned problems of missing values. Especially when the methodology and underlying data are not made public, it is virtually impossible for a reader to distinguish "real" performance (improvements or deteriorations in some or all areas) from the effects of method and data coverage: differences in country rankings over time may result from data improvements, different weighting approaches or other changes to the composite indicator make-up or methodology, rather than from any change in country performance. This is why composite indicators generally do not use time series data.
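One practical response to the synchronisation problem is to align all component indicators on a single reference year, namely the latest year for which every indicator has data, rather than mixing years across indicators. The indicator names and year coverage below are hypothetical; this is only a sketch of the alignment step.

```python
# Pick the latest year covered by every component indicator, so that a
# single reference year is used instead of mixing years across indicators.
# Indicator names and available years are hypothetical.

years_available = {
    "gdp_per_capita":   {2018, 2019, 2020, 2021},
    "school_enrolment": {2017, 2018, 2019, 2020},
    "broadband_access": {2019, 2020},
}

common = set.intersection(*years_available.values())
if common:
    base_year = max(common)
    print(f"Common reference year: {base_year}")
else:
    print("No common year; imputation or year-mixing would be needed.")
```

When the intersection is empty, the builder faces exactly the trade-off described above: impute missing years, mix reference years across indicators, or drop indicators, each of which affects the comparability of the resulting rankings over time.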
In general, the quality and accuracy of composite indicators should evolve in parallel with improvements in data collection and indicator development. From a statistical point of view, the construction of composite indicators can help identify priority indicators for development and weaknesses in existing data. The current trend towards constructing composite indicators of country performance in a range of policy fields may provide an impetus to improving data collection, identifying new data sources and enhancing the international comparability of statistics.