Indicators 2.0: From Ranks and Reports to Dashboards and Databanks
In September 2021, the World Bank Group’s management announced its decision to discontinue one of its most notable and controversial products – the Doing Business Report. Michael Riegner has welcomed the death of indicators as a technology of governance, noting that we are now in the era of “governance by data”. The proliferation of digital data, increased reliance on sensing technologies, the creation of digital products by international organizations, and the funding of large-scale digital infrastructure projects (e.g., e-government, e-health) by the multilateral development banks, including the World Bank, are ushering in new forms of global governance. Riegner suggests that this turn to digital technologies and computational capacity for big data analytics is one of the reasons for indicators’ demise:
“why use aggregated indicators based on expert surveys when you can digitally collect and process actual raw data, disaggregated all the way down to the smallest unit of relevance?”
If by this question Riegner intimates that indicators – understood as “named collection of rank-ordered data that purports to represent the past or projected performance of different units…[wherein] data are generated through a process that simplifies raw data about a complex social phenomenon” (see here) – can be written off as a technology of governance, his dismissal may be too swift. First, the kind of “raw” data that would be required to make accurate assessments may not be readily available. Moreover, if commensurability is to be achieved, one would require access to roughly similar types of data for each unit of analysis – no small feat given the unequal availability and distribution of data across countries, within countries, and between public and private actors. Second, even if global governance actors increasingly embrace differentiated governance that is tailored to specific actors or entities, there will be continued demand for metrics and representations that simplify and translate complex data into legible and comparable information. Third, as Riegner himself acknowledges, other prominent indicators like PISA, the Human Development Index, the Rule of Law Index, and Freedom Scores continue to exist. Whether their influence is declining, as Riegner suggests, remains to be seen.
The World Bank itself is showing no sign of giving up on the production of indicators. At the same time, how indicators are disseminated has changed: the World Bank has turned to dashboards as a means of presenting and contextualizing indicators, and has “datified” indicators, making them accessible as data through the DataBank. The Bank has also begun experimenting with new methodologies, embracing open-source “big data” to construct indicators.
These changes – dashboardization, datafication, and the turn to “big data” as a source for indicators – alter not only how indicators are produced and used (and by whom), but also how they govern, shifting and re-constituting the sites of expertise and power. The cancellation of the Doing Business report thus might not be evidence of the demise of indicators but a consequence of a shift (begun several years earlier) towards a different process of indicator construction and dissemination that, in turn, implicates different means by which governance effects are achieved.
The Dashboardization and Datafication of Indicators
The turn to dashboardization of indicators is exemplified in the changes to the World Development Indicators (WDI). Prior to 2018, the World Bank produced reports that analyzed and summarized the state of “world development”, so to speak, including the alignment with development goals (first, the Millennium Development Goals – MDGs – and subsequently the Sustainable Development Goals – SDGs). In 2018, the World Bank ceased production of these reports. The Bank continued to produce WDI indicators, even adding a new indicator – the Human Capital Index – but the presentation of the indicators shifted to a web-based dashboard (WDI Dashboard).
The WDI Dashboard contains photographs and illustrations, and features stories highlighting “key development data issues” (e.g., a story “World Immunization Week: Lessons from the fight to contain measles” presents uses of data to track immunizations and disease incidence as necessary tools in the fight against the disease). It also groups indicators by theme (e.g., Poverty & Inequality, People, Environment, etc.), leading users to separate websites that contain not only information on relevant indicators (e.g., methodologies, metadata) but also offer other qualitative information, including additional stories and short entries analyzing selected issues under pithy titles (e.g., “Are the poor catching up?”).
The WDI Dashboard provides tools for “bulk downloads” of the entire World Development Indicators database in Excel and CSV files, a function that, at the time of writing, appears to be accessible only via certain browsers. One can further access and download information about the indicators by following the link to the World Bank’s Open Data Portal, which also contains links to other webpages, including those that list “data updates and errata”. The indicators, including those that make up the WDI, can also be accessed through DataBank, a portal that aggregates various datasets across the World Bank Group, enabling users to interact with the sub-indicators as data. Users can query that data to receive the outputs of the indicators comprising the composite (e.g., the poverty headcount ratio at national poverty lines as a percentage of population for Algeria in 2011 is 5.5%). It also allows users to download all indicators comprising the WDI (or a specific set) for a country or a series of countries for select years in different formats (Excel, CSV, SDMX). Users can also use DataBank’s visualization tools to create a table, chart, or map.
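To make the datafication concrete: once indicators are exposed as data, a value like the Algeria example above can be retrieved and processed programmatically rather than read off a webpage. The sketch below parses a sample JSON payload shaped like the indicator responses of the World Bank’s Open Data API; the payload itself is a hypothetical stand-in constructed for illustration (not a live query), and the helper function is my own, not part of any Bank tooling.

```python
import json

# Hypothetical sample payload, shaped like a World Bank Open Data API
# indicator response: [metadata, [observations]]. The figures mirror the
# example in the text (Algeria, poverty headcount ratio, 2011 = 5.5%).
SAMPLE_RESPONSE = json.dumps([
    {"page": 1, "pages": 1, "per_page": 50, "total": 1},
    [
        {
            "indicator": {"id": "SI.POV.NAHC",
                          "value": "Poverty headcount ratio at national "
                                   "poverty lines (% of population)"},
            "country": {"id": "DZ", "value": "Algeria"},
            "date": "2011",
            "value": 5.5,
        }
    ],
])

def extract_series(payload: str) -> dict:
    """Map year -> indicator value, skipping years reported as null."""
    _meta, rows = json.loads(payload)
    return {row["date"]: row["value"] for row in rows
            if row["value"] is not None}

print(extract_series(SAMPLE_RESPONSE))  # {'2011': 5.5}
```

Note that even this minimal example inherits the portal’s choices: the user receives the indicator’s output, not the underlying survey data from which it was estimated.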
Users do not get access to the underlying data from which the indicators were produced. They are simply told what the general source of data was for a particular sub-indicator. Clicking on the “information” icon reveals a slightly longer description of the methodology, which includes a disclaimer that
“[t]his series only includes estimates that to the best of our knowledge are reasonably comparable over time for a country. Due to differences in estimation methodologies and poverty lines, estimates should not be compared across countries.”
As Shannon Mattern has written, dashboards represent a way of seeing, managing and governing an entity – whether agency, city, country or the world – in ways that highlight (and obscure) important (and, by extension, unimportant) variables. The WDI Dashboard reduces and simplifies, reinforcing the message contained in the indicators themselves: that poverty and development are but an aggregate of (specific) variables that can be measured and optimized. At the same time, the dashboard contextualizes and interprets the indicators by embedding them within photographs, featured stories and visualizations, creating a common expression of poverty and development. By datafying indicators and allowing (very limited) analytic functions to be performed by the users of the DataBank, the World Bank continues to project the technical and objective nature of indicators. Embedding the indicators within a dashboard filled with stories and photographs allows for their humanization, bringing indicators “to life”, so to speak.
The dashboardization and datafication of indicators thus not only promote a data-driven approach to governance but also reconstitute the potential users and uses of indicators. Engagement with indicators now implicates data scientists, software developers and potentially even the engineers of different products. Indeed, the World Bank itself has produced indicator-driven products, such as the EdStats DataFinder App, which allows users without technical capabilities or expertise (but with access to a smartphone) to conduct quick, predefined and pre-formatted queries of various education-related indicators, the output of which is presented as a chart, map, or country ranking. On the receiving end, bringing indicators “to life” not only feeds into the narrative that emphasizes the importance of data for improving the lives of the poor but also potentially expands the indicators’ user base.
The information about the indicators themselves is dispersed – with information about errors and updates scattered across different sites and methodologies embedded across layers of webpages (or made altogether invisible in the app) – thereby creating a distance between the knowledge produced by the indicators and the means of its production. This separation might not only reinforce the taken-for-granted effects of the indicators but also lower barriers to engagement with indicators, fostering the reification of indicators and data while stripping away the politics of their production and visualization.
Are these changes in indicator dissemination producing governance effects that are different from older modes of representation (e.g., as publicized ranks or reports)? It may be too early to tell and also more difficult to ascertain, as both uses and users are significantly more dispersed and potentially diversified. What can be said already is that the new forms of (re)presentation (whether via dashboards or apps) also implicate a different modality of governance that is at least layered over (if not displacing) the governance-by-indicator effects identified by scholars to date.
With the move to dashboards, databanks, and apps, the World Bank now (also) regulates the production and at least some uses of indicators by infrastructure, including through application programming interfaces (APIs) (apparently prone to misconfiguration, thus inadvertently preventing access to databases at the time of this writing), formats, query structures, visualizations, and other choices about the functionality of a portal or an app (for example, DataBank does not provide an option for the user to rank countries on the basis of certain sub-indicators). The effects of such regulation-by-infrastructure deserve deeper examination.
Apart from changes in how existing indicators are represented, there may also be a shift in the methodologies by which new indicators are produced. Traditional methods of indicator production have been deductive (i.e., a theory of what data is needed to provide evidence for the phenomena to be assessed was developed prior to the indicator’s creation). There is evidence that the World Bank is now experimenting with inductive indicator production (i.e., whereby indicators are built up from whatever data is available). For example, as part of its “big data challenge”, World Bank staff developed an index measuring city innovation capacity. The methodology behind the index involved identifying those proxy indicators for human capital, financial infrastructure, urban amenities, collaborative culture, and networking assets for which there was open-source data that could be updated frequently and rapidly. As a result, some dubious proxies emerged. For example, a choice to rely on data from Open Street Maps translated into (i) the number of universities being used as a proxy for the strength of human capital; (ii) the number of banks or ATMs in a city as a proxy for financial infrastructure; and (iii) the ubiquity of coffee shops, pubs and restaurants as approximations of urban amenities. The resulting index was presented in a Dashboard that also automatically ranked cities (alongside providing other visual representations). None of the built-in assumptions in the selected proxies were illuminated, examined or even mentioned anywhere on the Dashboard.
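Mechanically, an index of this kind reduces to a few lines of arithmetic: count proxies per city, normalize each dimension across cities, average the results, and sort. The sketch below is a generic illustration of that inductive recipe, with invented city names and counts; it is not the Bank’s actual methodology, weighting, or data.

```python
# Hypothetical proxy counts per city, standing in for features counted
# from open-source map data (the dimensions echo the proxies described
# above: universities, banks/ATMs, coffee shops and pubs).
CITIES = {
    "Alpha": {"universities": 12, "banks_atms": 340, "cafes_pubs": 1500},
    "Beta":  {"universities": 4,  "banks_atms": 520, "cafes_pubs": 900},
    "Gamma": {"universities": 20, "banks_atms": 150, "cafes_pubs": 2100},
}

def min_max_normalize(values):
    """Rescale raw counts to [0, 1] within one dimension."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.5] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(cities):
    """Equal-weight average of normalized dimensions, ranked high to low."""
    names = list(cities)
    dims = list(next(iter(cities.values())))
    normalized = {d: min_max_normalize([cities[n][d] for n in names])
                  for d in dims}
    scores = {n: sum(normalized[d][i] for d in dims) / len(dims)
              for i, n in enumerate(names)}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranking = composite_index(CITIES)
```

The ranking falls out of sorting the composite scores – which is also where the built-in assumptions (equal weights, the choice of proxies, what the underlying map data happens to record) silently do their work, invisible to anyone who sees only the resulting league table.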
One might be tempted to disregard these “experimentations” if not for the fact that their outputs were apparently enthusiastically welcomed by the Bank’s sub-national clients, with the Dashboard subsequently being piloted in Tanzania “as part of a broader initiative to understand entrepreneurial ecosystems, leading to the design and preparation of a US$100 million lending operation.” Apart from questioning the normative value of the proxies and the resulting index (which would go beyond the scope of this article), it is important to highlight that in this inductive approach to indicator production, the power to shape the governance effects of indicators is distributed to those who control what data is (and is not) made available and to those who control the infrastructure through which the data is produced.
These shifts in indicator production, dissemination and use require both additional theorizing and in-depth empirical examination that takes account of the complex interaction between digital data (big and small) and indicator production, with particular attention paid to the role that digital infrastructures might play in shaping governance effects. While Michael Riegner is undoubtedly correct in stating that international organizations need to implement data governance frameworks to regulate their data activities (beyond the GDPR-inspired data protection and privacy principles that have been proliferating), I would suggest that additional demands should also be placed on the organizations to provide means for contesting the ever-dispersing power behind indicator and data production, particularly where such power is dispersed among those who control the different components of the infrastructures through which data and indicators are produced.
Despite his proclamation that “indicators are dead”, one could read Riegner’s article as calling attention to the fact that the turn to digital technologies, including for indicator production, requires not only more sophisticated regimes to regulate the new modalities of global governance but also a rethinking and recalibrating of the roles, powers and responsibilities of international institutions vis-à-vis the various publics. With this, I wholeheartedly agree!