Q&A on State of Aadhaar Report 2017–18

[Photo: One of our survey enumerators, Chandrani, administers the survey to a respondent in South 24 Parganas district, West Bengal.]

You have questions, we have answers! Here is version one of our Q&As based on the questions we have received thus far.

Feel free to contact us at stateofaadhaar@idinsight.org for any further questions.

1. What questions were asked in the State of Aadhaar survey (and why)?

  • Our questionnaire covered more questions than we were able to include in the full report. As with all reports, we could not include everything.
  • We did not cover the entire universe of questions that were of interest. As with all surveys, we had to prioritize questions within time and resource constraints. We intentionally chose breadth across multiple topics (enrolment, banking, PDS) over depth in any one, and we encourage other researchers to pursue these areas further.
  • Our full survey instrument will be available when our website is updated (May 24).

2. How did you choose your sample size? Is ~3000 households sufficient to make policy decisions?

  • Sample sizes are calculated by optimising for precision within operational constraints. Our economists calculated this sample size so that it is representative of the rural populations of the surveyed states with a high degree of precision.
  • The precision of our survey is reflected in the error bars on the bar graphs in the report, which represent 95% confidence intervals. As you will see, the confidence intervals are narrow in most cases, increasing our certainty in the estimates (see the illustrative sketch after this list).
  • This is also the largest survey on Aadhaar to date, and we welcome further studies from other researchers covering more states.
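For readers who want a sense of why a sample of roughly 3,000 households can yield narrow intervals, here is a minimal sketch of the textbook margin-of-error calculation for a proportion. The sample sizes and the p = 0.5 worst case below are illustrative assumptions rather than figures from the report, and the sketch ignores survey design effects (clustering, stratification, weighting) that real estimates must also account for.

```python
# Illustrative only: how the 95% margin of error for a simple random-sample
# proportion shrinks as the sample grows. The sample sizes and p = 0.5 are
# hypothetical, and design effects from clustering/stratification are ignored.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of an approximate 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000, 3000):
    moe = margin_of_error(p=0.5, n=n)  # p = 0.5 gives the widest interval
    print(f"n = {n:>5}: +/- {moe * 100:.1f} percentage points")
```

Under these simplifying assumptions, a sample of about 3,000 corresponds to a margin of error of roughly ±1.8 percentage points, which is consistent with the narrow error bars discussed above.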

3. Why don’t you extrapolate your estimates to the all-India level?

  • Our survey was designed to be representative of rural populations at the state level for Andhra Pradesh, Rajasthan, and West Bengal.
  • Given the diversity of the states and their use of Aadhaar, we feel the three-state average is indicative of national trends. However, it would be technically incorrect for us to explicitly extrapolate these numbers to the national level.

4. Why don’t you include population estimates for the percentage figures in the report?

  • With certain assumptions, almost every single percentage point in the report can be extrapolated to arrive at population figures for rural populations in the three states covered in our survey. External stakeholders are free to list out their assumptions and extrapolate.
  • However, we took a call to use extrapolations only where a) the assumptions required were reasonable, and b) the population number was essential to make a policy-relevant point. A purely illustrative example of such an extrapolation follows below.
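As a purely illustrative sketch of such an extrapolation (the percentage and the rural population figure below are hypothetical placeholders, not numbers from the report):

```python
# Hypothetical extrapolation of a survey percentage to a population count.
# Both inputs are placeholders, not figures from the State of Aadhaar report;
# a real extrapolation would also need to state which population base,
# census projection, and household-vs-individual assumptions it uses.
survey_share = 0.022           # e.g. 2.2% of respondents report some issue
rural_population = 50_000_000  # assumed rural population of the relevant state

affected_people = survey_share * rural_population
print(f"Implied number of people affected: {affected_people:,.0f}")  # 1,100,000
```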

5. Why does the report recommend that there should be a strong grievance redress mechanism for enrolling into and updating one’s Aadhaar?

  • We found that many people are unaware of the costs and processes for updating their Aadhaar information. This leads to people paying higher-than-required fees, an issue that deserves immediate attention.
  • We call for a strong awareness campaign on update fees and processes, and for making it easy to correct errors in one’s Aadhaar.
  • This recommendation should not be interpreted as the only feasible recommendation to improve the enrolment and update processes.

6. Why does the report recommend a push towards mobile-based financial services?

  • We found that only 17% of respondents stated that they recently used a microATM. We also cite other research outlining issues with the current business correspondent network.
  • This leads us to believe that alternatives should be explored, whether by strengthening the existing network or by trying other channels of access to financial services.
  • Other developing countries with similar economic indicators and literacy levels have had success with digital financial services (using feature phones). We believe this is an issue that can be explored further.

7. For Figure 5.5, how do you define “number of attempts” required for successful biometric authentication?

  • “Number of attempts” is the stated number of times an individual had to try their fingerprint or iris before successfully authenticating on the e-PoS machine.
  • Data on the number of visits to the ration shop was collected separately and will be published soon. We did not include time-use and wage/income questions, and can consider them for the next survey round.

8. In Figure 5.3, Aadhaar-related monthly exclusion for Rajasthan is 2.2%. From Figure 5.4, it seems that this only includes Aadhaar authentication and seeding issues (as those two issues also add up to 2.2%). Why are the other Aadhaar-issues in Figure 5.4 not part of the estimate in Figure 5.3?

  • This is an understandable but incorrect interpretation. The questions assessing exclusion under PDS were “select-all-that-apply,” and the note on Figure 5.4 (which covers a 3-month period) states that the numbers do not add up to the headline figures in the previous graph (which is a monthly average) for this reason.
  • In our calculation of Aadhaar-related exclusion, we also include ‘connectivity issues’ and ‘non-availability of PoSable member’; the 2.2% figure accounts for these reasons as well. The fact that the first two reasons in Figure 5.4 add up to 2.2% is purely coincidental. A toy illustration of why such percentages need not add up follows below.
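To make the select-all-that-apply point concrete, here is a toy illustration (the five households and their reasons are invented for illustration, not survey records): when a household can report more than one reason, the per-reason shares overlap, so they need not sum to the headline share of households excluded for any Aadhaar-related reason.

```python
# Toy illustration of why per-reason percentages from a select-all-that-apply
# question need not add up to the headline exclusion figure.
# The households and reasons below are invented, not survey data.
households = {
    "HH1": {"authentication failure", "connectivity issue"},
    "HH2": {"seeding issue"},
    "HH3": set(),   # not excluded for any Aadhaar-related reason
    "HH4": {"authentication failure", "seeding issue"},
    "HH5": set(),
}
n = len(households)

# Per-reason shares, as a chart like Figure 5.4 would plot them
for reason in ("authentication failure", "seeding issue", "connectivity issue"):
    share = sum(reason in selected for selected in households.values()) / n
    print(f"{reason}: {share:.0%}")

# Headline share: households reporting at least one Aadhaar-related reason
headline = sum(bool(selected) for selected in households.values()) / n
print(f"any Aadhaar-related reason: {headline:.0%}")

# Here the per-reason shares sum to 100%, yet the headline is only 60%,
# because households can select several reasons (and, in the report, the
# reference periods of the two figures also differ).
```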

9. The report says that non-availability of ration is not an Aadhaar-related factor. However, a principal claim of UIDAI is that “dealers can’t route rations to fake/ghost beneficiaries while claiming non-availability of rations.” Therefore, even non-availability of rations points to an Aadhaar weakness.

  • We point out that “ration not available” is a major reason for PDS exclusion. The argument that this is also attributable to Aadhaar is untenable: multiple academic articles and essays clearly argue that Aadhaar cannot play a role in reducing quantity fraud or addressing non-availability of ration. We endorse this fairly obvious point.
  • It is also argued that Aadhaar does not play a substantive role in increasing state capacity, the lack of which is the main reason for non-availability of ration. The large difference on this particular measure between Andhra Pradesh (where it accounts for less than 0.3 percentage points of the 1.1% PDS exclusion) and Rajasthan (where it is in the range of 6 percentage points) illustrates this.

10. Isn’t the high approval for mandatory linking of Aadhaar to government services contradictory to the stated preference for privacy (knowing how data is used)?

  • The survey gathered data on respondents’ preferences for knowing how their private information (demographic details, biometrics, and Aadhaar number) is used. Around 97% of respondents said that they find it important to know how their data is used. This is a key finding that should inform how public institutions and private companies interact with individuals.
  • However, we also found that many people approve of linking Aadhaar to various government and private services when asked directly.
  • This is suggestive evidence that respondents valued privacy but were comfortable providing their information in return for some benefit or service.
  • Many people hold complex views on privacy and think of tradeoffs between privacy and convenience in very different ways. As government and the private sector establish policies and privacy ‘defaults’ on these issues, they should understand that people expect transparency and safeguarding of privacy.
  • These results are striking and important to explore further, but not necessarily contradictory.

11. What’s the difference between demographic errors and demographic authentication errors on Aadhaar? And why did you ask about voter IDs?

  • We compare Aadhaar to voter ID as the latter is the most widely held form of identification after Aadhaar. In addition, both are comparable in that they are individual-based IDs and have similar (though not exactly the same) demographic data on them.
  • Our analysis focussed on comparing the self-reported demographic error rate in Aadhaar to the self-reported demographic error rate in voter IDs. We found that there are 1.5x as many self-reported errors in Aadhaar as in voter ID (across the three states).
  • The error rate for demographic authentication cannot be compared to the self-reported demographic error rate. The latter covers only a subset of what drives demographic authentication failures, which are also contingent on other factors.

12. Everyone knows people share photocopies of their Aadhaar — why is that “new”?

  • While it is widely known that providing photocopies is a common use, our survey allowed us to gather data on the extent of this practice. This lets us directly compare the prevalence of analog and digital uses of Aadhaar.
  • Given the prevalence of this use and the fact that individuals find value in the analog usage, we discuss the importance of adding security features to the paper document (SOAR 17–18, page 11).

13. Are questions on approval for mandatory linking biased? Would they result in different responses if asked differently?

  • Understanding opinions is notoriously hard. We added a section on privacy preferences and mandatory linking because we felt it was important and very little data is available to date. It is possible that individuals’ stated preferences may change with different phrasing, and it is also possible that their preferences change over time.
  • However, that does not negate the value in understanding how rural residents view the sharing of their information and whether or not they are comfortable with mandatory linking of Aadhaar to various benefits and services.
  • The question on mandatory linking was asked in as unbiased a way as possible:
“It is currently mandatory to have Aadhaar to access many government benefits, e.g. NREGA, PDS, pensions, mid-day meals. Do you approve or disapprove of the government’s decision to make Aadhaar mandatory to access government benefits?”
The enumerator read out all options (approve, neutral, or disapprove) for this question.