Using Alternative Data Sources to Validate International Surveys?
The narrative of misleading development statistics has gained traction in recent years. Morten Jerven’s well-known text, Poor Numbers: How We Are Misled by African Development Statistics and What to Do about It, challenges the reliability of development statistics and calls for new approaches to data collection. More recently, Michael Robbins and Noble Kuriakose have estimated that approximately 1 in 5 international surveys contains fabricated data.
By reviewing responses from more than 1,000 surveys, Robbins and Kuriakose identified 17% of these surveys as “likely to contain a significant portion of fabricated data. For surveys conducted in wealthy westernized nations, that figure drops to 5%, whereas for those done in the developing world it shoots up to 26%.”
The two researchers reached these conclusions by identifying duplicate and near-duplicate responses within surveys. One existing hypothesis is that many survey responses are biased by the data collection assistants themselves, who often gather responses door to door. Although some researchers have challenged Robbins and Kuriakose’s methods, many others believe the problem is even larger than the stated 17 percent.
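The duplicate-detection idea can be illustrated with a short sketch. This is my own illustration, not Robbins and Kuriakose’s actual code; the 85% cutoff and the data are assumptions for demonstration. For each respondent, it finds the highest share of identical answers with any other respondent and flags pairs whose overlap exceeds the threshold as possible duplicates.

```python
# A minimal sketch (illustrative only, not the researchers' actual method or
# code) of flagging near-duplicate survey responses by maximum percent match.

def max_percent_match(responses, threshold=0.85):
    """responses: list of equal-length answer lists, one per respondent.
    Returns (index, best_match_index, share) for each flagged respondent."""
    flagged = []
    n = len(responses)
    for i in range(n):
        best_share, best_j = 0.0, None
        for j in range(n):
            if i == j:
                continue
            # Count positions where the two respondents gave the same answer.
            matches = sum(a == b for a, b in zip(responses[i], responses[j]))
            share = matches / len(responses[i])
            if share > best_share:
                best_share, best_j = share, j
        if best_share >= threshold:
            flagged.append((i, best_j, best_share))
    return flagged

# Hypothetical answer data: respondents 0 and 1 agree on 7 of 8 questions.
surveys = [
    [1, 2, 3, 1, 4, 2, 5, 1],  # respondent 0
    [1, 2, 3, 1, 4, 2, 5, 2],  # respondent 1: near-duplicate of 0
    [5, 1, 2, 4, 3, 5, 1, 3],  # respondent 2: dissimilar
]
print(max_percent_match(surveys))  # → [(0, 1, 0.875), (1, 0, 0.875)]
```

A pairwise comparison like this is quadratic in the number of respondents, so real screening tools typically add shortcuts, but the flagging logic is the same in spirit.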
Clearly, there is a strong argument to be made that additional data sources should be used to validate survey responses. One of our objectives at the Big Data and Development Incubator is to understand how data has been effectively harnessed in the context of development. By furthering the conversation on data and development, we aim to support development professionals, practitioners, and scholars.
Note: This post was originally published on the OII's Big Data and Human Development project blog on . It might have been updated since then in its original location. The post gives the views of the author(s), and not necessarily the position of the Oxford Internet Institute.