The New Jersey Basic Income Experiment of the 60s-70s

Sara Bizarro
Jan 11, 2019

In the 60s and 70s, several Basic Income experiments were conducted in the USA. The first of these was known as the New Jersey Income Maintenance Experiment (NJIME). It was the “Founding Experiment” for Mathematica, a public policy research company that is still operating in Princeton, NJ, and recently celebrated its 50th anniversary. In this piece, I present a summary of the NJIME for those studying or otherwise interested in the idea of Basic Income.

How did the Income Maintenance Experiments start?

The idea of a negative income tax experiment was formally proposed to the Office of Economic Opportunity (OEO) by Heather Ross, William Baumol and Albert Rees in 1966 (Kershaw & Fair, 1976). The intention was that the OEO would fund Mathematica to run a negative income tax experiment. The relevant OEO department was headed by Joseph Kershaw, the father of David Kershaw, who became Mathematica’s first president. However, it was proposed that the contract should be given to an academic institution rather than a for-profit company, and the experiment was awarded to the University of Wisconsin’s Institute for Research on Poverty, which had been funded with OEO money. As soon as the grant was awarded, the Institute subcontracted with Mathematica to run the field operations and part of the research. The grant awarded to the University of Wisconsin in September 1967 was $620,068 for a study of 1,000 families. In 1968, Mathematica solicited bids from survey organizations to select and enroll families and administer quarterly interviews. That contract was granted to Opinion Research Corporation. At that point, David Kershaw was hired as the project director, and he later decided to do the survey work in house, forming the Urban Opinion Survey division of Mathematica in October 1968.

David Kershaw became president in 1975

In 1969, Mathematica was accused of welfare fraud, and a Mercer County Grand Jury was set up to investigate the charges. The claim was that Mathematica had told people who were receiving both welfare and negative income tax payments that they did not have to report those payments to the welfare department. The Grand Jury’s 1971 report was not made public; a final report followed in 1972 and no criminal indictments were made. The Grand Jury recommended that Mathematica improve its administrative procedures and the communication between the experimenters and the government. After this ruling, Mathematica also developed ways of preserving the confidentiality of the participants in its experiments.

The NIT experiments were the first randomized controlled trials in the social sciences (Marinescu, 2017: 9). They were the first large-scale social experiments to use randomly assigned human subjects and control groups, in a way similar to medical research, in order to explore policy efficiency. These experiments pioneered a new area of policy research; some even said they were experiments in how to run experiments (Widerquist, 2005: 51), and they may have had a larger influence on policy research in general than on this particular policy. In this piece I will discuss only the New Jersey Income Maintenance Experiment, but Mathematica ran several other Basic Income experiments: in New Jersey and Pennsylvania (1968–1972), in Iowa and North Carolina (1970–1972), in Gary, Indiana (1971–1974), and in Seattle and Denver (1970–1980). The Seattle-Denver experiment was the largest, with the biggest sample and the longest duration. Those experiments will not be analyzed in this piece.

Pilot Parameters

The first Basic Income experiment created the first set of parameters and a blueprint for others to follow. Several elements of the experiment were decided upon in the planning stages, and they constitute the bare bones of any future Basic Income experiment. Those elements were: 1) Duration; 2) Frequency; 3) Amount; 4) Subjects; 5) Evaluation; 6) Analysis.

Duration, 1), refers to the period the pilot program will last; the most common duration is 2 to 3 years, but it can vary. Frequency, 2), refers to how often the subjects are paid: every week, every two weeks, every month, or once a year in a lump-sum type of payment? Amount, 3), refers to how much subjects are paid and to variations within the sample; some can be paid more and others less, for instance. Subjects, 4), refers to the type of subjects that will be the focus of the experiment: will payments go to individuals or to households? Will the subjects be unemployed, employed, have a lower income, or will there be a broad range of incomes? How many subjects will be in the treatment group? How many in the control group? There are many possible variations in the subject parameter. Evaluation, 5), refers to how the program will be evaluated: what are we trying to test for? Do we want to know if it helps people work and break out of the welfare and poverty trap? Do we want to know if they will get better nourishment and be healthier? Will they work more or fewer hours? How do they use their money? The evaluation parameters have to be decided in the planning stage, as they may have implications for how we choose our subjects, the sample size, the amount of money, and so on. The final parameter, Analysis, 6), asks how the results will be analyzed: what methods will be used to treat the data that results from the experiment? In the next section, I will present how the New Jersey Income Maintenance Experiment decided on all these parameters. These are important because they can be used as guidelines for present and future Basic Income experiments.
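To make the six parameters concrete, here is a minimal sketch, entirely my own illustration rather than anything from the original study, of how these design choices might be written down as a configuration object, filled in with the NJIME values described in the next section:

```python
# A minimal, illustrative sketch (not from the original study) of the six
# design parameters as a configuration object.
from dataclasses import dataclass

@dataclass
class PilotDesign:
    duration_years: float           # 1) how long the pilot lasts
    payment_frequency: str          # 2) how often subjects are paid
    guarantee_levels: list[float]   # 3) amounts, here as shares of the poverty line
    subject_criteria: str           # 4) who is enrolled, and how the sample is drawn
    evaluation_outcomes: list[str]  # 5) what the pilot is testing for
    analysis_methods: list[str]     # 6) how the resulting data will be treated

njime = PilotDesign(
    duration_years=3,
    payment_frequency="every two weeks",
    guarantee_levels=[0.50, 0.75, 1.00, 1.25],
    subject_criteria="low-income families with an able-bodied male aged 18-58",
    evaluation_outcomes=["labor supply", "housing", "health", "consumption"],
    analysis_methods=["regression of outcomes on experimental variables"],
)
print(njime)
```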

Parameters of the New Jersey Income Maintenance Experiment

The general parameters presented in the previous section can be used as a guideline to understand the elements of different Basic Income experiments; we can use them to identify variation and similarity across past, present and future experiments. The New Jersey Income Maintenance Experiment, as the first Basic Income experiment with social science credentials, paved the way, so to speak, for all future experiments. It was the first step, and therefore the first reference on which later experiments built when they wanted to be considered “scientific experiments”.

Let’s look at how the parameters were specified in the NJIME. Regarding duration, parameter 1), the experiment lasted three years at each site, spread between July 1, 1967 and June 30, 1974. Not all locations began at the same time. There were four locations (three in NJ and one in PA). The Trenton experiment began in August 1967; Paterson-Passaic began in January 1969; the Jersey City experiment began in June 1969; and the Scranton, Pennsylvania experiment began in September 1969. In October 1969, 141 additional families in Trenton and Paterson-Passaic were added to the control group. Also of interest is that Mathematica considered this initial experiment the pilot of the pilots, so to speak: an opportunity to sharpen the edges for the future experiments the company performed.

Regarding frequency, parameter 2), payments were received every two weeks. Each family filled out an income report every four weeks, which formed the basis for calculating their payments. These transfers were ruled by the Internal Revenue Service to be non-taxable. It is interesting that the payments were not monthly but every two weeks, possibly to better complement the families’ incomes, which may have mostly been on a two-week pay schedule as well. Adjustments could be made every month to make sure that everyone was receiving the correct amount. There was some confusion in the initial stages of income reporting, since subjects were not sure whether they were supposed to report their income before or after taxes, but it was eventually resolved.

As for the amount received, parameter 3), in the NJIME it varied, since this was an NIT-type Basic Income experiment with several built-in variations. The guarantee was calculated as a percentage of the poverty line, at four levels, and combined with different marginal tax rates on earned income. The guarantee levels were 50, 75, 100 and 125 percent of the poverty line; the marginal tax rates on earned income were 30, 50 and 70 percent. The eight pairings tested were: 50% of the poverty line with a 30% marginal tax rate; 75% with 30%; 50% with 50%; 75% with 50%; 100% with 50%; 125% with 50%; 75% with 70%; and 100% with 70%. The 125% guarantee level was introduced later in the experiment. The differences in marginal tax rate, or “take-back” rate, were used to measure work disincentives and the poverty trap, but also the cost of the program (see Table 1.3 in Kershaw & Fair, 1976, p. 10).

The poverty line was measured approximately, and the Basic Income payments deviated somewhat from the actual poverty lines. For instance, in the first year, the SSA poverty line for two people was $2,130 per year and the NJIME payment was $2,000, while for four people the poverty line was $2,610 and the NJIME payment was $2,750.

Of the variations tested, only those at 100% of the poverty line and above guaranteed that recipients would always be at or above the poverty line. The variation with a 100% poverty-line guarantee and a 50% marginal tax rate, assuming a poverty line of $3,000 a year, would look like this:
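Here is a minimal sketch of that payment rule, using illustrative earnings levels: the $3,000 guarantee is reduced by 50 cents for every dollar earned, so the payment reaches zero at the break-even point of $6,000 (the guarantee divided by the tax rate).

```python
# Minimal sketch of the NIT payment rule under the 100% / 50% plan described
# above: payment = guarantee - tax_rate * earnings, floored at zero.
# The earnings levels below are illustrative, not from the experiment's data.

def nit_payment(earnings: float, guarantee: float = 3000.0, tax_rate: float = 0.5) -> float:
    """Negative income tax payment for a given level of earned income."""
    return max(0.0, guarantee - tax_rate * earnings)

for earnings in (0, 1000, 2000, 4000, 6000, 8000):
    payment = nit_payment(earnings)
    print(f"earned ${earnings:>5}: payment ${payment:>6.0f}, total ${earnings + payment:>6.0f}")
```

Note that total income always rises with earnings under this rule, which is how the NIT design avoids the 100% “take-back” implicit in traditional welfare.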

Since the payments went to households rather than individuals, there were also variations calculated according to the number of dependents: a sliding scale was used, so that every additional dependent increased the payment, but by a diminishing amount.

In addition to the regular payments, the families were paid $10 every two weeks in return for sending in their “Income Report Form”. The control families were paid $8 for sending a small card with their current address. Other payments included $5 for the one-hour interview administered to everyone every three months. These extra payments were considered taxable income, unlike the NIT payments, which were tax-free. All payments and income reports were made by mail. An effort was made to separate the payment operation from the team that conducted the interviews; they even operated under different names. The payments were designed to be as impersonal as possible, so that they would not depend on any of the subjects’ responses.

Parameter 4), the subjects, consisted of families with “able-bodied males” between the ages of 18 and 58, not in school full time, not institutionalized and not in the armed forces. The experiment used only families with males because “working-age men with no physical disabilities were the only people in American society who had never qualified for public assistance” (Kershaw & Fair, 1976, p. 9). Also, most single women with children qualified for AFDC, Aid to Families with Dependent Children, a program in place from 1935 to 1996, and the idea was that the Basic Income experiment should not clash with other similar or even better existing programs.

The subjects of the experiment were families who reported income no higher than 150% of the poverty line. Initially, there were 1,216 families enrolled in the experiment, 725 in the experimental groups and 491 in the control group. The number of families was expanded in October 1969, when an additional 141 families in Trenton and Paterson-Passaic were added to the control group. The families had a variety of ethnic backgrounds, described as “black, white and spanish”.

Regarding evaluation, parameter 5), the experiment administered surveys via interviews. The survey team was separate from the payment team and from the team that analyzed the data; as mentioned above, there was an effort to separate these teams so that financial considerations did not condition the answers. Interviews were one hour long, with a 20-minute section that was the same every time, about labor force status, and a 40-minute section about other types of economic behavior, such as expenditures, debt consolidation, health, and social behavior. Besides the quarterly interviews, three other interviews were conducted during the experiment: 1) a screening interview; 2) a pre-enrollment baseline interview; 3) a follow-up after the last transfer.

The interviews were intended to evaluate several behavioral aspects. The main goal of the experiment was to evaluate the labor supply response to an NIT payment; the main concern was to find out whether giving unconditional cash to people who were able to work would disincentivize participation in the workforce. The labor responses were divided into subcategories: 1) labor supply responses of husbands; 2) labor supply responses of wives; 3) educational and labor-supply responses of young adults; 4) effects on job turnover and unemployment duration; 5) impact on job selection. Other aspects analyzed in the surveys went beyond the labor response: housing, including home buying and home improvement; consumption behavior, that is, the way subjects changed what they consumed; health and utilization of medical care; social and psychological effects; social integration; leisure activities; media exposure and lifestyle enhancement; and fertility and household composition.

The final parameter, 6), is the analysis of the data. The experiment included 725 experimental families and 632 control families, and 8 different experimental negative income tax plans. Families were divided into three income strata: (I) incomes below the official poverty line, (II) incomes at the poverty line, and (III) incomes higher than 124% but no higher than 150% of the poverty line. Families were assigned to the various plans with a non-symmetrical design. Only 693 homogeneous families continued to the end of the experiment.
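As an illustration of what stratified, non-symmetrical assignment means in practice, here is a minimal sketch; the allocation weights below are hypothetical placeholders, not the experiment’s actual assignment model.

```python
# A hypothetical sketch of stratified, non-symmetrical random assignment:
# families are grouped by income stratum, and each stratum assigns different
# proportions of families to the eight NIT plans and the control group.
# The weights are illustrative placeholders, not the NJIME's actual model.
import random

random.seed(0)

GROUPS = [f"plan_{i}" for i in range(1, 9)] + ["control"]

# One row of assignment probabilities per income stratum (I, II, III).
WEIGHTS = {
    "I":   [0.10, 0.12, 0.10, 0.12, 0.08, 0.05, 0.05, 0.05, 0.33],
    "II":  [0.08, 0.10, 0.10, 0.12, 0.10, 0.07, 0.05, 0.05, 0.33],
    "III": [0.05, 0.08, 0.10, 0.12, 0.12, 0.10, 0.05, 0.05, 0.33],
}

def assign(stratum: str) -> str:
    """Draw a treatment plan (or control) for a family in the given stratum."""
    return random.choices(GROUPS, weights=WEIGHTS[stratum], k=1)[0]

for family_id in range(5):
    stratum = random.choice(["I", "II", "III"])
    print(f"family {family_id}: stratum {stratum} -> {assign(stratum)}")
```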

The analysis used several different labor force variables. The questions on labor force participation included: “What kind of work did you do? Were you absent from work? If so, why? Were you looking for work?” Regarding hours worked, the subjects were asked: “How many hours did you work? Did you take time off, not counting illness or holidays? Did you work overtime? Did you work more than one job?” Regarding earnings: “How much were your total earnings before taxes?” Non-earned income was another variable.

The statistical method used to analyze the results was the following: “an observed behavioral phenomenon was summarized in one or more quantifiable variables”, such as hours worked or days in the hospital, and these “dependent variables” were “then regressed against a number of potential explanatory variables: socioeconomic variables and experimental variables. The socioeconomic variables included a variety of indices such as income, education, financial assets and ethnic group. The experimental variables generally included two items: guaranteed level (the amount guaranteed to the family if its outside earnings were zero), and the marginal tax rate (the percentage of incremental outside earnings deducted from the support rate).” (Watts and Rees, 1977, pp. 6–7) The one exception was household composition, where a Markov-chain model was used to predict probability projections (see Watts and Rees, 1977).
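For readers who want a concrete picture of this regression approach, here is a schematic sketch with synthetic data; the covariates, coefficients and effect sizes are invented for illustration and have nothing to do with the experiment’s actual estimates.

```python
# Schematic illustration of the regression approach described above: a
# dependent variable (hours worked) regressed on socioeconomic controls and
# the two experimental variables (guarantee level and marginal tax rate).
# All data are synthetic; variable names and magnitudes are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
n = 693  # families that remained to the end of the experiment

# Synthetic covariates.
education = rng.normal(10, 2, n)        # years of schooling (socioeconomic)
pre_income = rng.normal(4000, 800, n)   # pre-experiment family income
guarantee = rng.choice([0.5, 0.75, 1.0, 1.25], n)  # guarantee as share of poverty line
tax_rate = rng.choice([0.3, 0.5, 0.7], n)          # marginal "take-back" rate

# Synthetic outcome: hours worked per week, with small effects baked in.
hours = (35 + 0.3 * education + 0.001 * pre_income
         - 1.0 * guarantee - 2.0 * tax_rate + rng.normal(0, 5, n))

# Ordinary least squares via numpy's least-squares solver.
X = np.column_stack([np.ones(n), education, pre_income, guarantee, tax_rate])
coef, *_ = np.linalg.lstsq(X, hours, rcond=None)
for name, b in zip(["intercept", "education", "pre_income", "guarantee", "tax_rate"], coef):
    print(f"{name:>10}: {b: .3f}")
```

In practice one would use a dedicated statistics package to obtain standard errors and significance tests, but the structure mirrors the description in Watts and Rees: outcomes on the left, socioeconomic and experimental variables on the right.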

Results of the NJIME experiment

The main objective of the NJIME was to find out whether there was any significant labor force reduction among Basic Income recipients, and the main conclusion was that there was no significant reduction in workforce participation for males. Regarding the wives, the study found that “white” wives worked fewer hours, while there was no significant change for “spanish speaking” and “black” wives. The study was not able to draw statistically relevant conclusions regarding wives’ mobility in and out of the labor force. As for husbands, the only groups of males that reduced labor hours were the very young and the very old who were assigned the highest subsidy levels. However, the young adults did tend to invest in education. Families with a lower NIT payment had more job flexibility, and lower-paid jobs were accepted.

Regarding other impacts of the NIT payment, it was difficult to identify statistically relevant results on health, social behavior, and so forth. The only significant effect was an investment in housing and furniture, which also involved taking on more debt. There were no other statistically relevant results in the other areas. The researchers summarized their findings this way: “The analysis of the experimental data reported here have confirmed neither the worst fears nor the highest hopes for a program of graduated work incentives. The experiment neither undermined the moral fiber of the recipients of support payments, neither did it transform them into paragons. By and large, it seems to have left their living patterns undisturbed with respect to health, social activities, and the number of children. The increased command over goods and services given them by the transfer payments was spent very much the same way as money received in other ways. The major change the program apparently produced was an improvement in housing standards and an increase in homeownership. That, in itself, is an achievement which should not be underestimated.” (Watts and Rees, 1977, p. 14)

My main impression, from reading through the internal reports at Mathematica, was that the team faced a difficult challenge in trying to find statistically significant effects. This difficulty may be explained by the small size of the sample combined with the eight Basic Income variations being tested. Furthermore, the limited duration of the experiment may not have been enough to analyze effects such as job turnover, health benefits, social behavior, and so on. The NJIME researchers concluded that the experiment had a large number of issues and limitations and that “future work, both with new data being generated and other experiments and panel surveys can help establish firmer limits to the generalizability of the results” (Vol. III, p. 465). However, the results were still said to “provide a substantial increment of our understanding of the behavior of families and individuals” (Vol. III, p. 465).

Lessons from the NJIM Experiment

The NJIM Experiment produced a clear result regarding the main objective of the study: there was no significant reduction in labor participation as a result of the unconditional cash payments over the three years. Other results were harder to pin down; although the effect on housing improvement was indeed significant, beyond that the researchers were not able to find many other significant results. The NJIME was followed by other experiments, including the Rural Income Maintenance Experiment (RIME) in Iowa and North Carolina, the Seattle-Denver Income Maintenance Experiment (SIME/DIME), which was the largest one, the Gary Income Maintenance Experiment and, in Canada, the Mincome experiment in Manitoba. I am only focusing on the first experiment, the NJIME, which was the pilot of the pilots, so to speak.

In this section, I will go through some of the possible reasons why the experiment did not find significant effects, some of the difficulties associated with Basic Income experiments, and then some general issues with BI pilots that should be kept in mind for anyone designing experiments today. The specific issues identified in this experiment will be divided mainly into 2 categories: 1) Limited sample; 2) Limited time. Other more general problems related to social experiments will be referred to, but not analyzed here.

The NJIME and the other experiments conducted in the 60s and 70s all targeted people on or near the poverty line. The reason for this was that Basic Income was being studied as a policy to eradicate poverty. However, as Karl Widerquist notes in his paper “A Failure to Communicate: What (If Anything) Can We Learn from the Negative Income Tax Experiments?” (2005), this is slightly strange, since “although the effect on poverty of most social policies (AFDC, TANF, EITC, job training, education, etc.) requires testing, the conclusion that an NIT with a guaranteed rate at the poverty line can eliminate poverty is true by definition” (Widerquist, 2005, p. 51). This pilot is better described not as a way to research the power of this policy to eradicate poverty, but as a study of its effects on low-income people and their workforce participation. Do low-income people reduce their workforce participation if given “free money”, or do they continue to work in low-paying jobs? The pilot concluded that they keep working, trying to create more security and to climb up the economic ladder.

However, the behavior of people living at the poverty line cannot be extrapolated to the behavior of all income levels. This mistake is usually called the “fallacy of composition”: the error of assuming that what is true for a member of a group is true for the group as a whole. By including only a limited social stratum in the sample, the study created serious limitations. Basic Income experiments can avoid this fallacy either by including a wider variety of subjects or by narrowing their focus to a very specific group and resisting the temptation to extrapolate the results from that group to the population in general.

The next issue regarding the limited sample is the number of subjects in the experiment combined with the number of variations tested. There may have been too many variations (eight) of the NIT for a sample that was not sufficiently large to produce statistically significant results. The number of variations “reduced the numbers of subjects receiving each type of treatment, and therefore reduced the statistical reliability of the results for each” (Widerquist, 2005, p. 55). Sample size and variations are important in Basic Income experiments (Marinescu, 2017, p. 11). This is a lesson current Basic Income experiments can put to use: samples have to be large enough for what is being tested, and variations should not unnecessarily shrink the cells and therefore the statistical significance of the results.

Regarding limited samples, Basic Income pilots should, on the one hand, be clear about the target group they will be analyzing: if they target only people at the poverty line, then their conclusions apply only to that target group, and for conclusions with a wider scope, more target groups need to be analyzed. On the other hand, in order to have statistically significant results, the sample must be large enough. Variations in the study should be carefully examined so as not to create unnecessary complexity, and researchers should remember that any increase in complexity will have to be matched by an increase in the number of subjects in order to produce statistically relevant results, as the rough calculation below illustrates.
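A back-of-the-envelope power calculation makes the point; the effect sizes and per-cell counts below are hypothetical illustrations, not figures from the NJIME design (which allocated families unevenly across plans).

```python
# Rough power calculation for a two-sided, two-sample comparison of means.
# The effect sizes and cell counts are hypothetical illustrations only.
from scipy.stats import norm

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> float:
    """Approximate sample size needed per group to detect a standardized effect."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

def min_detectable_effect(n_per_cell: float, alpha: float = 0.05, power: float = 0.8) -> float:
    """Smallest standardized effect detectable with a given cell size."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return (z_alpha + z_beta) * (2 / n_per_cell) ** 0.5

# Detecting a small effect (Cohen's d = 0.2) takes roughly 390 families per
# cell, while spreading ~725 treated families over 8 plans leaves about 90 per
# cell on average, enough only for fairly large effects (d of about 0.4+).
print(round(n_per_group(0.2)))              # ~392
print(round(min_detectable_effect(90), 2))  # ~0.42
```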

The second category of issues, 2) limited time, is more challenging in the case of Basic Income experiments. The fact that they are experiments and not permanent cash grants will have behavioral consequences that are crucially different from the results of implementing a full Basic Income. To see behavioral outcomes that are closer to a real implementation of a Basic Income grant, ideally, we should have much longer experiments, even lifetime experiments.

Let’s use as an example the labor market effect of the grant during the experiment. A limited-time grant can bias results in different directions. On the one hand, since the recipients know the grant is for a limited time only, they may tend to maintain their employment situation, as it would be unwise to quit a job because of limited-time cash assistance. On the other hand, since they know they will have support from the temporary cash grant, they may also take “time off” during that period and intend to return to work when the experiment is nearing its completion. Actions motivated by a temporary cash grant cannot be extrapolated to how people would act under a permanent cash grant; it is extremely difficult to go from the temporary effects of cash grants to permanent effects.

The only way to counter this limitation is to have longer-term, or ideally even lifetime, grant recipients in an experiment and see whether their behavior is radically different from the behavior of the short-term recipients. If it is consistently not significantly different, we would have good reason to think that the short-term results can be extrapolated to the long term. But any such conclusions must be drawn with great caution.

Something close to this idea was implemented in the SIME/DIME experiment, where some recipients were meant to receive a grant for 20 years. These recipients did not behave very differently from the rest of the experimental group (Robins, 1984). However, the funds were revoked and they only received the grant for 9 years, and one could argue that they never expected to receive the grant for the full 20 years, which may have affected their behavior. There are also studies of lottery winners and of permanent fund grants indicating that there is no significant retreat from the job market (Marinescu, 2017), but more work needs to be done in these areas.

The issue of short-term versus long-term research in social science is not exclusive to Basic Income. Imagine, for instance, that the United States wanted to test Universal Health Care by giving free care to a group of individuals for 3 years. Suppose the items being tested were whether subjects went to the doctor more or less often and whether their health improved during the period. If the subjects know that the experiment is supposed to last only three years and that after that they will return to paid health care, they will most likely use the service as much as they can during that period. By overusing health care services, they may also have more diseases identified than the control group. Such an experiment would seem to indicate that free health care leads to overuse of services and diminished health. This is an exaggerated example, but it illustrates the issue.

In Basic Income experiments we can have a similar effect: people know they have extra money for a specific amount of time, and they will use it as a bonus to do what they value most. In the NJIME it seems to have been house purchasing and furniture; today I would guess debt payment could be a good candidate. Regardless, the resulting behavior is quite different from a long-term result. The connection between short-term and long-term results may be particularly difficult in social experiments that are evaluated not by their direct effects but by the behavior they elicit, when that behavior can be affected by the fixed duration of the experiment. To overcome this difficulty, the experimental design needs to try to link short-term, intermediate, and long-term outcomes. One way to do this would be by comparing the behavior of short-term and long-term recipients. Another would be to test for very specific immediate impacts of the policy and not make claims about long-term impact. This is definitely an issue to keep in mind during experimental planning.

Another epistemic issue with these experiments has been raised by the philosopher Nancy Cartwright. She claims that there is a serious limitation to social experiments: what works in one place may not work at all in another. Cartwright argues convincingly that there is overconfidence in the social sciences’ reliance on “randomized controlled trials”, which are said to be the gold standard of social research. According to Cartwright, there are three different epistemic claims in play in these experiments: 1) it works somewhere; 2) it works in general; and 3) it works for us. She claims that RCTs only entitle us to state 1), and we cannot deduce 2) or 3) from the experimental results. If this is true of many social experiments, such as ways of fighting malnutrition, it is surely true of Basic Income pilots. These experiments are being conducted both in first-world countries and in developing countries, in countries that have universal health care and countries that don’t, in countries with very different cultural backgrounds, and so on. Results in one country cannot be extrapolated to another country, or perhaps even to another area of the same country. This limitation can, however, be challenged when a large number of experiments of a certain policy in different locations consistently give similar results. If, for instance, the same policy for fighting malnutrition works in many different countries, we have good reason to believe 2) and an indication that 3) may be true, although it is not guaranteed.

In view of all the limitations analyzed, are Basic Income pilots desirable at all? What can we do to avoid the pitfalls and limitations mentioned so far? Summing up, make sure the samples are large enough to match the complexity of the experimental design. Either test only short-term effects or try to connect with long-term effects by using a control group of long-term recipients. Finally, try not to extrapolate from results in one site to other sites, or from results within one group to results with other groups. Such general conclusions can only be tentatively drawn by analyzing data from many experiments.

It is important to keep these limitations in mind while designing Basic Income pilots, not only to be able to make significant claims after the pilots are executed, but also so that the experiments are not misused by political agents. For example, in the 60s and 70s Basic Income experiments in North America, even though no clear work disincentive was found, there were some cases of a slight reduction in work. The mere mention of this meant that “columnists across the country responded with a chorus of negative editorials decrying the guaranteed income and ridiculing the government for spending millions of dollars to find out whether people work less if you pay them not to work” (Widerquist, 2005, p. 67). The experiments need to be designed clearly enough that this kind of spin is not possible; the results must be strictly framed and the conclusions very carefully stated.

In order to avoid this kind of freewheeling spin on experimental results, pilots should make sure that their samples are large enough to draw statistically relevant conclusions, and that the scope of those conclusions is clearly stated in the experimental design, so that there is no room for free interpretation of the results. These are the lessons of the New Jersey Income Maintenance Experiment.

References

Burtless, G. (1986) “The Work Response to a Guaranteed Income: A Survey of Experimental Evidence”, in Munnell, A. H. ed., Lessons from the Income Maintenance Experiments, p. 22–52, Boston: Federal Reserve Bank of Boston.

Carcagno, G. J. & Walter S.C. (1976) “The Impact of a Negative Income tax on AFDC Recipients and State Welfare Expenditures.” Princeton, NJ: Mathematica Policy Research, February.

Carr, T. J. (1980) “The Effect of Local Labor Market Conditions on the Labor Supply Response to a Negative Income Tax.” Princeton, NJ: Mathematica Policy Research, November.

Cartwright, N. (2010), “Will This Policy Work for You? Predicting effectiveness better: How philosophy helps”, LSE and UCSD Presidential Address.

Fair, J. (1971) “Estimating the Administrative Costs of a National Income Maintenance Program.” Princeton, NJ: Mathematica Policy Research, May.

Fair, J. & Freeman, A. (1971) “The New Jersey/Pennsylvania Graduated Work Incentive Experiment Tax Rebate System.” Princeton, NJ: Mathematica Policy Research.

Garfinkel, I. (n.d.) “The Effect of Welfare on the Labor Supply Response on the Urban Graduated Work Incentives Experiment.” Princeton, NJ: Mathematica Policy Research.

Greenberg & Shroder (2004) The Digest of Social Experiments, Washington DC: Urban Institute.

Horner, D. (1976) “The Impact of Negative Taxes on the Labor Supply of Low Income Male Family Heads: Evidence from the Graduated Work Incentive Experiment.” Princeton, NJ: Mathematica Policy Research, April.

Kershaw, D. N. & Fair, J. (1976) The New Jersey Income Maintenance Experiment. Volume 1 — Operations, Surveys, and Administration, New York: Academic Press.

Kershaw, D.N. (1969) “The Negative Income Tax Experiment in New Jersey: General Discussion.” Princeton, NJ: Mathematica Policy Research, April.

Kershaw, D. N. (1972) “A Negative Income Tax Experiment.” Scientific American, vol. 227, no. 4, October 30, pp. 19–25.

Kershaw, D. N., and Skidmore, F. (1974) “The New Jersey Graduated Work Incentive Experiment.” Princeton, NJ: Mathematica Policy Research, July.

Levine et al (2005), “A Retrospective on the Negative Income Tax Experiments: Looking Back at the Most Innovative Field Studies in Social Policy.” in Widerquist, K., Lewis, M.A. & Pressman, S. eds., The Ethics and Economics of the Basic Income Guarantee, 95–106, New York: Ashgate.

Maxfield, M. Jr. (1978). “Program Cost, Adequacy of Support, and Induced Labor Supply Reduction of a Negative Income Tax.” Princeton, NJ: Mathematica Policy Research, January.

Moffitt, R. A. & Kehrer, K.C. (1978) “Estimating Labor Supply Disincentives of a Negative Income Tax: Some Results and Lessons from the Experiments.” Princeton, NJ: Mathematica Policy Research, December.

Ross, H. (1968) “An Experimental Study of the Negative Income Tax.” Princeton, NJ: Mathematica Policy Research, May.

Mathematica Policy Research (1971) “Procedures Manual for the New Jersey/Pennsylvania Negative Income Tax Experiment. Volume 1.” Princeton, NJ: Mathematica Policy Research, February.

Mathematica Policy Research (1971) “Procedures Manual for the New Jersey/Pennsylvania Negative Income Tax Experiment. Volume 2. Appendices” Princeton, NJ: Mathematica Policy Research, February.

Marinescu, I. (2017) “No Strings Attached: The Behavioral Effects of U.S. Unconditional Cash Transfer Programs”, Roosevelt Institute, online at: http://rooseveltinstitute.org/no-strings-attached/

Pechman & Timpane (1975), Work Incentives and Income Guarantees: The New Jersey Negative Income Tax Experiment, Washington DC: Brookings Institution.

Rossi and Lyall (1976), Reforming Public Welfare: A Critique of the Negative Income Tax Experiment, New York: Russell Sage Foundation.

Skidmore (1975) “Operational Design of the Experiment”, in Joseph A. Pechman and Michael Timpane, eds, Work Incentives and Income Guarantees: The New Jersey Negative Income Tax Experiment, 25–59, Washington DC: Brookings Institution.

U.S. Department of Health, Education, and Welfare, Mathematica Policy Research, and Institute for Research on Poverty (1973) “Summary Report: New Jersey Graduated Work Incentive Experiment.” Washington, DC: Office of Economic Opportunity, December.

Watts, H.W. & Rees, A. (1977) The New Jersey Income Maintenance Experiment. Volume 2. Labor-Supply Responses, New York: Academic Press.

Watts, H.W. & Rees, A. (1977) The New Jersey Income Maintenance Experiment. Volume 3 — Expenditures, Health, and Social Behavior; and the Quality of the Evidence, New York: Academic Press.

Widerquist, K. (2005) “A Failure to Communicate: What (If Anything) Can We Learn from the Negative Income Tax Experiments?”, Journal of Socio-Economics, 34 (1): 49–81.

