Full DAO Testimony on Risk Assessment Instrument
These remarks were delivered on December 12, 2018.
Good morning. My name is Mike Lee and I am the Director of Government Affairs for the District Attorney’s Office of Philadelphia. Joining me today are Oren Gur, PhD, our Director of Research, and Joanna Kunz, an appeals attorney in our office. Thank you for the opportunity to contribute to the conversation on your ongoing efforts to improve sentencing practices.
We would like to thank the Sentencing Commission for its transparency and responsiveness in developing the Risk Assessment Instrument. We appreciate the Commission’s response to public comments following the publication of the Proposed Risk Assessment Instrument in April, and its continued receptiveness to follow-up comments.
The Philadelphia District Attorney’s Office believes that recidivism can be addressed through fair and just prosecution, investment in strengthening communities, and meaningful rehabilitative opportunities. We support the Sentencing Commission’s efforts to create an Instrument to aid in evaluating the relative risk that an offender will reoffend. The Commission has described the Risk Assessment Instrument as a “screening tool to prioritize those cases (high risk and low risk) where additional information may have the greatest impact in terms of public safety and resource utilization.”1 The Commission has recognized what current research shows — that “‘more intensive interventions should be reserved for higher risk offenders while lower risk offenders should normally receive minimal or no intervention. Whereas intensive interventions may decrease recidivism among high risk offenders, research has shown that the use of numerous or intensive interventions with low risk offenders can actually increase their likelihood of recidivism.’”2 We agree, and support leveraging research and data to increase the positive impact of our finite resources. The Risk Assessment Instrument as currently constructed, however, does not leverage the best available research and data; therefore, the Philadelphia District Attorney’s Office does not support this version of the Instrument.
Our remaining concerns with the proposed Instrument are largely structural. Namely, creating a standardized tool to evaluate individuals will invariably lead to inaccurate outcomes for some. We address those issues today in hopes of minimizing the risk that inaccuracy leads to injustice.
POINT 1: USING HISTORICAL DATA WILL INHERENTLY UNDERCUT REFORM EFFORTS
* The criminal justice system as it exists today reflects historic race- and class-based bias. Therefore, even if the Risk Assessment Instrument were based on current data, it would necessarily rely on these realities and risk perpetuating the current problems.
* However, the data used to create the Instrument is not current — it is over a decade old (based on 2004–2006 sentences). Many of the practices used to secure those sentences are now recognized as pillars of mass incarceration. For example:
* Three-strikes laws
* Mandatory minimums
* Truth-in-sentencing laws
* Since then, the Commonwealth has made significant advances in criminal justice reform, in recognition of the failed policies reflected in that historical data.
* We also testified in June that the tool treats a conviction the same regardless of the passage of time. A 30-year-old conviction should not have the same impact on recidivism risk as a 3-year-old conviction. The revised Recidivism Risk Scale still does not appear to account for the age of convictions.3 Including these very old convictions in the calculation of how likely a person is to recidivate risks further skewing the data. Not only does the tool rely on data from 2004–2006, it also relies on convictions from well before that, in an era when policing and prosecutorial practices were drastically different.
* What guarantees are there that — by using historical data from an era that saw our jail, prison, and probation populations swell — this Instrument will not recreate an era we are purposefully trying to end in Philadelphia and beyond?
* Is there a plan to reevaluate the tool in the future given that our understanding of these issues has changed significantly since 2006, and will likely deepen over time?
POINT 2: IF IMPLEMENTED, THE INSTRUMENT’S EFFECTIVENESS SHOULD BE EVALUATED AND IMPROVED AT REGULAR INTERVALS
* Updating data: in its responses to public comments, the Commission states that “a more recent data set will be constructed for any re-validation of the instrument.”4 What is the plan for updating the data?
* The Commission has stated that the tool is being independently reviewed by the Urban Institute. When will that external review be available, and how will it be used?
* How will the Instrument’s success be measured? We suggest measuring:
o Correlation between high/low risk designations and recidivism
o High/low risk designations by county
o Effect of high/low risk designations on sentence lengths
o Impact of a PSI on sentences
o Role of a PSI in recidivism and treatment plans
o Role of a PSI in victim satisfaction
* We further recommend, based on those outcomes, reevaluating on a regular basis whether and how the tool should continue to be employed.
POINT 3: IF IMPLEMENTED, COURTS MUST UNDERSTAND LIMITS OF THE RISK ASSESSMENT INSTRUMENT
* We worry about the risk of giving the appearance of a sanitary tool that is in fact not so clean.
* To mitigate that risk, the courts need to be aware of the basis and limits of this Instrument.
* In addition to the above (that the tool reflects the state of play over a decade ago), courts should be aware that:
o The accuracy rate of the tool is not terribly high (only 66% accurate for general recidivism, and not sufficiently accurate to predict whether a person is at high risk to commit crimes against persons).
* Therefore, courts must be clearly and explicitly cautioned not to use the tool for more than its express purpose — to aid in determining whether an offender would benefit from additional evaluation (a PSI).
* The Commission recognized this concern but has not implemented the change.5
o The tool does not account for “dynamic factors such as employment, education, and marital status,” factors that may be statistically significant.6
o The tool does not account for differences in practices by county, such as higher conviction rates or fewer diversionary programs.
o The tool does not parse by offense type (e.g., for a person with past drug convictions who has now committed a robbery, the instrument does not predict the likelihood of committing another robbery; it merely predicts the likelihood of re-arrest and conviction for something).
* In other words, a high- or low-risk designation is not a substitute for individualized sentencing consideration.
POINT 4: THE INSTRUMENT’S FOCUS SHOULD BE ON GOVERNMENT AGENCIES, NOT INDIVIDUALS
* A constructionist view of criminal justice system data posits that information collected by rate-producing organizations reflects both:
o 1) the behavior of the individual actors the agency comes into contact with, and
o 2) the behavior of the agency itself.
* For example, for many crimes, police data reflect how the police choose to spend their time, not the criminality of a given jurisdiction (e.g., choosing to enforce drug laws in some places but not others). Similarly, that we are not prosecuting people for certain behavior does not mean that behavior has ceased to exist.
* However, the sentencing risk assessment only considers the behavior of individuals who have come into contact with criminal justice systems in Pennsylvania, not the behavior of the criminal justice systems themselves. Just as people can change over time, systems change too.
* What data informs the instrument about similarities and differences between counties with regard to historical enforcement and sentencing practices that might help further explain the context in which an individual was processed?
o For example, someone arrested for cannabis possession in Philadelphia in 1990, when there was no diversion program, should be considered differently from someone arrested for cannabis possession in Philadelphia in 2014, when there was a diversion program, and differently again from someone arrested for the same conduct today, who is subject only to a civil infraction. A person criminally charged today would likely have engaged in conduct far different from the conduct charged in years past. And that example controls for charge and jurisdiction.
* To address this, sentencing risk assessment could:
o Statistically account for jurisdictional differences in the behavior of criminal justice agencies
o Inform judges of the jurisdictional differences in the behavior of criminal justice system agencies
* The Sentencing Commission has the information to conduct such a systems-level analysis. Absent such an approach, which we understand might be novel, the instrument accounts for and provides information only about how similarly situated individuals have fared, without considering the constantly changing systems in which people are enmeshed — systems that may be different from the jurisdiction where the individual before the court is being sentenced, and which also may have reformed their approach to a particular issue over time.
o Going back to the previous example of cannabis possession in Philadelphia in 1990, 2014, and today:
* The same behavior might have been responded to with a felony conviction in 1990, a diversion program in 2014, and no charges today.
* Or, different behavior might be responded to with the same charges over time: possession in Philadelphia in 1990 might have meant a joint, in 2014 an ounce, and now a far greater amount.
* Clearly, the behavior of government agencies in Philadelphia has changed over time, in turn impacting how individuals are treated.
* Data can tell us about people, systems, or both.
o The risk assessment currently focuses only on what data can tell us about people.
o It should also incorporate information about government agencies — in the form of county-level metrics — and inform judges of jurisdictional and statewide changes in practice over time.
o Systems change — oftentimes, as in the case of drug offenses described above, the systemic change mirrors the change in societal attitudes towards certain behaviors. If the behavior has been downgraded or upgraded by jurisdictions in the larger system, the instrument should include a method for measuring and applying these changes to the assessment of an individual.