Random Magnitude Schedules of Reinforcement

Are organisms more sensitive to the rate of reinforcement or to reinforcer magnitude? So far, the results suggest that sensitivity to rate (ar = 0.8; Davison & McCarthy, 1988; McDowell, 2012) tends to be higher than sensitivity to magnitude (am = 0.65; Cording, McLean, & Grace, 2011), a pattern replicated by McDowell's computational model of behavior dynamics (McDowell, Popa, & Calvin, 2012).

Note, however, that many (if not all) experiments that examined sensitivity to rate and/or magnitude approached them asymmetrically. Consider Figure 1. When rate varies (left), reinforcers of fixed (from trial to trial) and equal (left/right lever) magnitudes are scheduled at variable (from trial to trial) and unequal (left/right) intervals. When magnitude varies (right), reinforcers of fixed and unequal magnitudes are scheduled at variable and equal intervals.

In other words, the time between reinforcers, either equal or unequal (left/right levers), is always variable (from trial to trial). Reinforcer magnitude, on the other hand, either equal or unequal (left/right levers), is always fixed (from trial to trial).

One way to make the comparison symmetrical (Figure 1, Scenario 1) is to “make time like magnitude”: deliver the reinforcers on Fixed Interval (FI) rather than Random Interval (RI) schedules. Another way (Scenario 2) is to “make magnitude like time,” by allowing magnitude to vary from reinforcer to reinforcer just as time intervals vary in a VI (or RI) schedule. I refer to these as concurrent Random Magnitude schedules (RM RM).
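To make the parallel concrete, here is a minimal Python sketch of how Scenario 2 could be arranged, assuming (as is common for RI schedules) that values are drawn from an exponential distribution; the function names and the specific means are placeholders of my own, not parameters from any published procedure.

```python
import random

def next_interval(mean_interval):
    # Random Interval (RI): the next inter-reinforcer interval is drawn from
    # an exponential distribution whose mean equals the scheduled value.
    return random.expovariate(1.0 / mean_interval)

def next_magnitude(mean_magnitude):
    # Random Magnitude (RM): the reinforcer's size is drawn the same way, so
    # magnitudes vary from delivery to delivery but average the scheduled value.
    return random.expovariate(1.0 / mean_magnitude)

# Hypothetical concurrent RM RM arrangement: equal RI means on both levers,
# unequal mean magnitudes (all numbers are illustrative placeholders).
left_interval, right_interval = next_interval(30.0), next_interval(30.0)
left_magnitude, right_magnitude = next_magnitude(2.0), next_magnitude(6.0)
```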

Conducting these experiments with non-human subjects could be technologically challenging, but they are easy to arrange in McDowell's model and in computerized human matching procedures (see Popa, 2013).

In the model, the magnitude of the reinforcer is manipulated by changing the mean (µ) of the parental selection function, which essentially regulates the relation between a behavior's fitness and its chance of being selected as a parent (McDowell, 2004; see also Popa & McDowell, 2016). Implementing the second scenario (Figure 1) would entail an additional computational step: instead of selecting parents based on the fixed value set by the experimenter (e.g., µ = 100), treat “100” as the mean of a random distribution of magnitudes (RM 100), just as one would treat it as the mean of an RI schedule. The fitness criterion would then differ from selection event to selection event; over time, these variable magnitudes would average 100 (in this example), just as the variable intervals between reinforcers in an RI schedule average around the scheduled RI value.
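A sketch of what that added step might look like, in Python, is below; it reflects my reading of the proposal rather than code from McDowell's model, and the exponential distribution is an assumption, chosen only because it is the usual generator for RI schedules.

```python
import random

SCHEDULED_MEAN = 100.0  # the value the experimenter would otherwise fix as mu

def selection_mu(random_magnitude=True):
    # Mean of the parental selection function for the current selection event.
    # Under the proposed RM arrangement, mu is resampled at every selection
    # event so that, across many events, it averages the scheduled value
    # (here 100), just as RI intervals average the scheduled RI value.
    if random_magnitude:
        return random.expovariate(1.0 / SCHEDULED_MEAN)  # RM 100
    return SCHEDULED_MEAN  # standard fixed-magnitude arrangement
```

In this sketch, selection_mu() would be called once per selection (reinforcement) event, before parents are chosen.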

The same idea applies to human procedures; it is just easier to implement, especially if one builds on an existing procedure (Popa, 2013).

Significance and research directions

This approach would be relevant to all mathematical models and conceptual developments (matching theory and its assumptions) that attempt to describe behavior frequency/allocation as a function of reinforcer magnitude, by itself or in conjunction with other variables such as rate or the changeover delay (COD).

Will matching in FI FI environments parallel matching in FM FM environments? Will both sensitivities vary around 0.6?

Will matching in RI RI environments parallel matching in RM RM environments? Will both vary around 0.8?

What would happen in concurrent RI RI schedules if magnitude varied as well? Would organisms match better? Worse? Would sensitivity vary around unity?

When both rate and magnitude vary on random schedules, would the two terms of Equation 1 (Baum & Rachlin, 1969; Killeen, 1972; see also Davison & McCarthy, 1988), rate × magnitude, contract to a single value term, and, by extension, ar and am to a single av?
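In symbols, and assuming Equation 1 refers to the concatenated generalized matching law (with B for behavior allocation, r for reinforcement rate, m for magnitude, and b for bias), the question is whether

```latex
\frac{B_1}{B_2} = b\left(\frac{r_1}{r_2}\right)^{a_r}\left(\frac{m_1}{m_2}\right)^{a_m}
\quad\stackrel{?}{\longrightarrow}\quad
\frac{B_1}{B_2} = b\left(\frac{v_1}{v_2}\right)^{a_v},
\qquad v_i = r_i \, m_i
```

that is, whether the two ratios and their separate exponents collapse into a single value ratio governed by a single sensitivity, av, when both rate and magnitude are sampled from random schedules.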

This manipulation would also bring the laboratory setting closer to the natural environment, in which reinforcers do not have fixed magnitudes from trial to trial; the point is perhaps most evident for social reinforcers, like “praise from parent 1” and “praise from parent 2”.

Collaboration opportunities

I am very interested in collaborating with like-minded scholars in exploring the effects of Random (or Variable) Magnitude Schedules of Reinforcement. If you’re interested, I can be reached at andreipopa515@gmail.com.

Also, if you know of studies that already explored this experimental manipulation, please let me know so I can give credit where credit is due.