What we collected and did not collect
The survey had 54 questions, ranging from basic demographics (see below) to in-depth questions about tooling choices, technical preferences, and attitudes towards various professional practices. Completion was not required: respondents did not need to reach the end of the survey, and all questions were optional.
While we asked questions about how much experience developers have had, we did not ask respondents how old they were. Similarly, we did not collect demographics on race or gender identity. The goals of the survey did not justify collecting this data.
We did collect information on several demographic topics relevant to our goals:
We asked respondents “What country do you live in?” We received responses from 194 countries and territories; the top 30 are shown in the table below.
The worldwide distribution of npm users is hard to measure directly, so although some countries are over-represented relative to their populations, this may still be a reasonably accurate picture of npm’s user base. To check for bias in this sample, we compared these answers against an independent source of npm usage: web traffic to npmjs.com, as measured by Google Analytics.
Relative to the web traffic, there are some meaningful differences: Germany is #2 here but #5 in web traffic, China is #16 here but #4 on the web, and Japan is #26 here but #12 on the web. In the latter two cases it seems likely that the survey being available only in English was a factor, but we are not sure why residents of Germany would be more likely to answer a survey.
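One way to quantify the comparison above is the ratio of a country’s share of survey responses to its share of web traffic; values below 1 indicate under-representation among survey respondents. This is a sketch only, and the counts below are made up for illustration, not the survey’s actual data.

```python
# Hypothetical counts for illustration only -- not the survey's data.
survey_counts = {"Germany": 1800, "China": 400, "Japan": 250}
traffic_counts = {"Germany": 90_000, "China": 150_000, "Japan": 80_000}

survey_total = sum(survey_counts.values())
traffic_total = sum(traffic_counts.values())

def representation(country):
    """Ratio of survey share to traffic share.

    Values < 1 mean the country is under-represented among
    survey respondents relative to its npmjs.com traffic.
    """
    survey_share = survey_counts[country] / survey_total
    traffic_share = traffic_counts[country] / traffic_total
    return survey_share / traffic_share

for country in survey_counts:
    print(country, round(representation(country), 2))
```

With these illustrative numbers, Germany’s ratio comes out above 1 and China’s and Japan’s below 1, mirroring the pattern described above.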
We asked respondents their primary spoken language; twenty-two languages were represented. Because the survey was available only in English, the demographics of who responded were skewed: in particular (see above), we believe it under-samples China and Japan. The 16% answering “other” suggests our list of language options could be improved (see below). Our results are definitely biased towards English speakers.
We asked respondents if they are currently in full-time education. Respondents answering “yes” to this question were excluded from questions (see below) about company size and industry.
We asked all respondents, including those still in education, for the highest level of education they had completed. These results are in line with broader measures of education level, so we do not believe there is any bias on this axis.
Respondents were asked how many people worked at their employer, or if they were not currently working. The distribution of answers matches census data on company size, so we believe our results are not biased towards any particular size of company.
We asked respondents what industry they work in. In our previous survey “technology” dominated. We are aware that many companies, while being tech companies, are also part of one or more specific industries: for instance, Google is a tech company that is also in advertising, media, and arguably other industries. Airbnb is a tech company that is also in real estate. To attempt to correct for this, we first asked respondents whether the company they work for is considered a “tech” company:
We then had a second question which asked respondents to say what industries their company was in. Multiple industries were allowed, and the “tech” option was labelled as “It really is just tech, no specific industry”. Relative to last year’s results this produced a much wider range of responses, much less concentrated in tech.
Number of respondents
Of 33,478 responses collected, 25,034 were considered “complete” responses, meaning the respondent reached the final page of the survey and indicated they were finished answering. For questions that were answered both by respondents who completed the survey and those who did not, we compared results and did not find statistically meaningful differences between the groups, so we have included data from incomplete responses in the findings.
However, because questions later in the survey were more likely to be skipped by respondents who abandoned it, those questions received fewer responses. The lowest number of responses for a question given to all respondents was 24,781.
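A comparison like the one between complete and incomplete respondents can be done with a chi-squared test of homogeneity on each question’s answer counts. The following is a minimal stdlib-only sketch with made-up counts, not the survey’s actual data or exact method.

```python
# Sketch: test whether complete and incomplete respondents answered one
# question differently. Counts are illustrative, not the survey's data.

def chi_squared(table):
    """Chi-squared statistic for a 2-D contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# rows: complete / incomplete respondents; columns: answer options
table = [
    [1200, 800, 500],
    [300, 200, 125],
]
stat = chi_squared(table)
dof = (len(table) - 1) * (len(table[0]) - 1)  # = 2
# 5.991 is the 5% critical value for 2 degrees of freedom
print("no meaningful difference" if stat < 5.991 else "groups differ")
```

If the statistic stays below the critical value for every shared question, pooling complete and incomplete responses, as described above, is defensible.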
Some questions were asked conditionally based on previous responses, e.g. users who said they do not use TypeScript were not asked how often they use it. These questions have lower numbers of respondents for that reason, but we believe this is more likely to increase than decrease their accuracy.
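The conditional flow described above amounts to simple skip logic; the question names in this sketch are hypothetical, not the survey’s actual identifiers.

```python
# Sketch of conditional questioning: a follow-up question is shown only
# when a screening answer makes it relevant. Names are hypothetical.

def questions_for(answers):
    """Yield the questions a respondent should see, given prior answers."""
    yield "uses_typescript"            # screening question, asked of everyone
    if answers.get("uses_typescript") == "yes":
        yield "typescript_frequency"   # follow-up, asked only of TS users

print(list(questions_for({"uses_typescript": "no"})))
```

Respondents who answer “no” never see the follow-up, which is why such questions have lower respondent counts without being less accurate.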
Differences from 2017/2018 survey
We learned a great deal from our 2017/2018 survey, run from December 2017 to January 2018, and this was reflected in changes to our survey this year.
Desktop, mobile, and native apps
In last year’s survey our questions were ambiguous as to what was meant by “mobile”: does a mobile web app count, or a native mobile app? This year we asked multiple unambiguous questions about developers who write code “for browsers”, “for servers”, “native apps” and “embedded devices”, and included an “I do not write this kind of code” option for each. This gives us useful, unambiguous numbers on who is writing which kinds of apps.
As mentioned above, our question about industry last year was ambiguous in several ways: the unemployed, those in full-time education, and employees of companies competing in multiple industries had no clear option to choose. We believe our separate clarifying questions this year have done much to resolve these ambiguities.
Last year our list of web frameworks was smaller, so we expanded it this year. We also gave a single option for “Angular”, but learned from the Angular community that there are two separate and incompatible frameworks with that name: Angular 1.x (also known as AngularJS) and Angular 2.x+ (known as Angular). We gave separate options for these frameworks this year.
What we intend to improve
While we believe our survey significantly improves on last year’s questions, there are still opportunities for further improvement that we hope to make in subsequent years.
Order of questions
Respondents are less likely to answer questions further down the survey, resulting in a steady decline in the number of responses to later questions. Question order sometimes matters, but we may be able to randomize it to some degree to distribute respondents more evenly across questions.
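One approach, sketched here with hypothetical section and question names, is to keep section order fixed while shuffling questions within each section, so drop-off is spread across questions rather than always falling on the same ones.

```python
# Sketch: per-respondent randomization of question order within sections.
# Section and question names are hypothetical.
import random

sections = {
    "demographics": ["country", "language", "education"],
    "tooling": ["package_manager", "bundler", "framework"],
}

def survey_order(respondent_seed):
    """Question order for one respondent: sections stay in a fixed
    order, but questions shuffle within each section."""
    rng = random.Random(respondent_seed)  # reproducible per respondent
    order = []
    for questions in sections.values():
        shuffled = questions[:]
        rng.shuffle(shuffled)
        order.extend(shuffled)
    return order

print(survey_order(42))
```

Seeding per respondent keeps each person’s order stable if they resume the survey, while still varying the order across the population.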
Number of languages
Our list of spoken languages omitted the primary language of 15% of respondents; we should offer a longer list.
List of territories
There were some unfortunate errors in the names of some territories that reflected political conflicts, and some equally unfortunate omissions.
How respondents differ from the general population
What conclusions we can draw