
Discrimination in Ontario Automobile Insurance


Background

In 2018, two bills were introduced to end territorial rating, often referred to as “postal code discrimination,” in the automobile insurance industry. Bill 44, the Ending Automobile Insurance Discrimination in the Greater Toronto Area Act, was defeated in the Ontario legislature on November 1, 2018. Bill 42, the Ending Discrimination in Automobile Insurance Act, remains alive: it passed second reading last March and has been referred to the Standing Committee. Until Bill 42 becomes law, postal code, which is highly correlated with income, race, unemployment rate, and educational attainment, can legally be used in risk classification algorithms and systems in Ontario.

Risk classification was developed to group people according to the risks they represent, so that no one carries the expense of insuring someone far more risk-prone. However, the classification relies on algorithmic systems that perpetuate discrimination by assigning levels of risk based on people’s area of residency, gender, marital status, and more. Insurance companies justify such discrimination as “fair” because these characteristics are statistically significant indicators of risk. But what is “fair”? Actuarial fairness aims to function as a neutral, technical concept that serves the insurance market. It can conflict with human rights principles, however, when so-called “fair discrimination” perpetuates discrimination in society.

Returning to the example of postal code as a risk factor in Ontario automobile insurance: premiums are weighted against drivers in higher-risk neighbourhoods, and most of these higher-risk neighbourhoods are also underprivileged neighbourhoods. Algorithms that calculate premiums from postal codes are therefore likely to result in “digital redlining,” practices that create and perpetuate inequities between marginalized groups through digital technologies. The seemingly “fair” discrimination by postal code fails to justify its validity in a social and civic context.
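To make the mechanism concrete, here is a minimal sketch of how a territorial rating factor enters a premium calculation. The base rate, postal-code prefixes, and multipliers below are invented for illustration and do not represent any insurer’s actual filed rates.

```python
# Minimal sketch of territory-based premium rating.
# All figures and postal-code prefixes are hypothetical.

BASE_PREMIUM = 1500.0  # hypothetical province-wide base rate

# Hypothetical territorial factors keyed by postal-code prefix.
# In practice, insurers file many such territories with the regulator.
TERRITORY_FACTOR = {
    "X1X": 1.45,  # rated "high risk"; often also a lower-income area
    "Y2Y": 0.95,  # rated "low risk"; often also a higher-income area
}

def quote(postal_prefix: str, driving_record_factor: float) -> float:
    """Premium = base rate x territory factor x driving-record factor."""
    territory = TERRITORY_FACTOR.get(postal_prefix, 1.0)
    return round(BASE_PREMIUM * territory * driving_record_factor, 2)

# Two drivers with identical driving records pay very different
# premiums purely because of where they live.
print(quote("X1X", 1.0))  # 2175.0
print(quote("Y2Y", 1.0))  # 1425.0
```

Even in this toy model, the territory multiplier alone opens a $750 gap between two otherwise identical drivers, which is the mechanism behind the premium disparities discussed below.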

Postal code, or area of residency, is not the only problematic risk factor. According to the Financial Services Regulatory Authority of Ontario, a personal automobile insurance profile is presently created based on the type of vehicle, driving record, annual mileage, area of residency, age, gender, and marital status. Only three of the seven factors are directly related to driving. Age is a biological factor that people have no control over, and marital status largely depends on differing views of family values. None of these factors should be grounds for discriminatory insurance pricing, and they should be removed from automobile insurance algorithms.

Digital Redlining

In the 1930s, the US government sent appraisers to grade neighbourhoods in 239 cities, colour-coding them green for “best,” blue for “still desirable,” and yellow for “definitely declining.” The lowest grade, “hazardous,” was marked with red lines on maps. These “redlined” areas were the ones financial service providers deemed credit risks. Although the assessments nominally accounted for local amenities and home prices, a neighbourhood’s minority racial and ethnic demographics were central to its being rated undesirable. Redlining today refers to the systematic denial of various services in selected areas by loan and insurance companies, either directly or through the selective raising of prices. Because financial resources were harder to obtain in these areas, redlining created a vicious circle that has kept these neighbourhoods underprivileged to this day.

The effect of redlining in Ontario’s automobile insurance industry is perpetuated by modern insurance algorithms. In the age of Big Data, insurance risk is evaluated by data analytic models and machine learning algorithms that are less direct but potentially even more damaging than colour-coded maps: they are not limited by geographic boundaries, and their inner workings are often complex and uninterpretable:

“Software that makes decisions based on data like a person’s ZIP code can reflect, or even amplify, the results of historical or institutional discrimination.”

— ProPublica.org

We can see the effect in the automobile insurance market of the Greater Toronto Area, whose core, Toronto, is the largest and most multicultural city in Canada. According to Kanetix, the average Toronto insurance rate is $1,948. The highest estimated rate is $2,590 and the lowest is $1,533, a gap of $1,057.

Most expensive rates for car insurance in Toronto. (Kanetix, 2019)

The most expensive area consists of four postal codes covering eight of the 140 neighbourhoods in the City of Toronto. Based on the demographic information retrieved from the City of Toronto’s Neighbourhood Profiles, six of the eight neighbourhoods are designated Neighbourhood Improvement Areas (NIAs), vulnerable communities with socio-economic struggles recognized by the City of Toronto. Seven of the eight have a higher-than-average percentage of residents identified as visible minorities. Half have higher-than-average percentages of households in poverty. Seven of the eight have higher-than-average unemployment rates. Finally, all of the listed neighbourhoods have lower higher-education attainment than the City of Toronto as a whole.

Demographics of the neighbourhoods in the area with the highest estimated automobile insurance premium, sourced from the City of Toronto’s Neighbourhood Profiles. Bolded values indicate a disadvantage relative to the City average.

It is evident that postal codes are not just codes for mailing routes but codes for income, ethnicity, and educational attainment. A postal code interacts with multiple social dimensions and thus carries the baggage of historical and structural discrimination. Allowing postal code to be a risk factor allows discrimination by race, income, education level, and employment status.

Actuarial Algorithmic Discrimination

The postal code is not the only risk indicator that perpetuates discrimination. Section 22 of the Ontario Human Rights Code explicitly allows insurance contracts to make distinctions on four grounds (age, gender, marital and family status, and handicap) as long as they are bona fide and reasonable. Except for “handicap,” all of these exceptions are applied in Ontario automobile insurance. According to the Financial Services Regulatory Authority of Ontario, a personal automobile insurance profile is currently built not only on factors directly related to driving (type of vehicle, driving record, and annual mileage) but also on four discriminatory risk indicators: area of residency, age, gender, and marital status.

Age and Gender

Among these four factors, age and gender are innate characteristics that people have no control over. Although age and gender correlate with the number of automobile accidents, there is no evidence that either factor directly causes these risks. Yet male drivers under 25 are charged considerably higher premiums because they are seen as higher-risk drivers. If two drivers start driving at the same age, the older one will simply have accumulated more driving experience, which contributes to lower risk; age is merely a proxy for experience. Since there is no evidence that age itself causes higher risk, driving record alone should serve as the risk factor. Both age and gender are non-causal and immutable factors that should not be included in risk classification algorithms.
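The proxy argument can be illustrated with a toy simulation: suppose accident risk is generated purely by driving experience, with age playing no causal role at all. Grouping drivers by age still makes the under-25 cohort look riskier, because age and experience are statistically entangled. Every number below is an assumption made for illustration.

```python
# Toy simulation: risk depends ONLY on experience, yet correlates with age.
import random

random.seed(0)

drivers = []
for _ in range(10_000):
    age_licensed = random.randint(16, 30)       # age at first licence
    age = age_licensed + random.randint(0, 40)  # current age
    experience = age - age_licensed             # years of driving
    # Accident rate falls with experience and ignores age entirely.
    accident_rate = max(0.02, 0.15 - 0.01 * min(experience, 12))
    drivers.append((age, accident_rate))

def mean_rate(rows):
    return sum(rate for _, rate in rows) / len(rows)

young = [d for d in drivers if d[0] < 25]
older = [d for d in drivers if d[0] >= 25]

# The age split still shows a gap, even though age causes nothing here.
print(f"mean accident rate, under 25:  {mean_rate(young):.3f}")
print(f"mean accident rate, 25 and up: {mean_rate(older):.3f}")
```

An insurer looking only at these group averages would conclude that young drivers are riskier and price accordingly, even though, by construction, experience is the only causal factor in this model.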

Marital Status

Unlike age or gender, marital status is generally a product of choice. However, it largely depends on an individual’s personal values, attitudes towards cohabitation, and life stage, none of which should be burdened with discriminatory insurance prices. Traditionally, marriage is the favoured status because it is associated with stability and responsibility. Statistically, married individuals do have fewer accidents than single people, but marital status is not a direct cause of the risk. Algorithms that use marital status as a pricing variable reinforce the normative status of a single model of cohabitation and the stereotype that single individuals are unstable and irresponsible.

Area of residency, age, gender, and marital status have long featured in the statistics used to determine automobile insurance premiums in Ontario. Although insurance companies claim correlations between these characteristics and risk level, the causal connections are not proven. Instead of relying on an individual’s actual circumstances and behaviour, the algorithms assess individual risk from generalized assumptions and stereotypes about a group. As a result, discrimination based on marital status, family status, age, and gender is embedded and perpetuated in actuarial risk classification algorithms.

Alternatives

Area of residency, age, gender, and marital status have long been the features used in the actuarial algorithms that determine premium rates in Ontario’s automobile insurance industry; it is time to end such discriminatory practices. To change the status quo, four alternatives are proposed:

  1. The use of area of residency, age, gender, and marital status as risk factors in actuarial algorithms should be prohibited by law.
  2. Risk assessment should shift from a feature-based to a behaviour-based approach.
  3. All variables, and their weights in determining insurance premiums, should be made understandable and auditable.
  4. A legal definition of “fairness” in the automobile insurance industry, one that includes the welfare of society, should be established.

Since individuals are assigned to a predetermined number of risk groups, they have no opportunity to demonstrate the unique characteristics that distinguish them from the average member of their class. As an alternative, the industry should adopt a behaviour-based approach to risk assessment in place of the feature-based one. For example, “black box” (or telematics) insurance has risen in recent years. “Black box” here does not refer to the mysterious inner workings of an algorithm, but to a physical device mounted in a car for a period of time to measure and record vehicle speed, location, distance travelled, frequency of driving, and the time of day the car is in motion. Other driving performance indicators are also measured, including how hard the brakes are applied, how rapid the acceleration is, and how sharply corners are taken. In Ontario, black box insurance is mostly used to determine a “discount” on an already-determined premium. However, it is apparent that automobile insurance premiums should be largely, if not entirely, based on such personalized individual determinations.
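As a rough sketch of what behaviour-based scoring could look like, the example below converts recorded telematics events into a 0-100 driving score. The event types, normalization, and weights are assumptions chosen for illustration, not any insurer’s actual model.

```python
# Hypothetical telematics scoring: safer observed driving -> higher score.
from dataclasses import dataclass

@dataclass
class TripSummary:
    km_driven: float
    hard_brakes: int     # decelerations past some g-force threshold
    rapid_accels: int    # accelerations past some threshold
    night_km: float      # km driven late at night
    speeding_km: float   # km driven meaningfully over the limit

def behaviour_score(trips: list) -> float:
    """Return a 0-100 score from observed driving behaviour."""
    km = sum(t.km_driven for t in trips) or 1.0
    # Events are normalized per 100 km so scores are mileage-comparable.
    penalty = (
        4.0 * sum(t.hard_brakes for t in trips) / km * 100
        + 3.0 * sum(t.rapid_accels for t in trips) / km * 100
        + 0.5 * sum(t.night_km for t in trips) / km * 100
        + 1.0 * sum(t.speeding_km for t in trips) / km * 100
    )
    return max(0.0, 100.0 - penalty)

# One week of driving: 120 km, a couple of hard brakes, some speeding.
print(f"score: {behaviour_score([TripSummary(120.0, 2, 1, 0.0, 3.5)]):.1f}")
```

A premium tied to such a score reflects what a driver actually does on the road rather than who the driver is, which is precisely the shift from feature-based to behaviour-based assessment proposed above.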

Due to the complexity of actuarial algorithms and pricing models, transparency has been a challenge for consumers. The EU’s General Data Protection Regulation (GDPR) mandates that users have the right to demand explanations of algorithmic decisions made about them, including decisions by insurance risk systems. There are technical difficulties with transparency in modern algorithms, where sometimes not even an algorithm’s creators fully understand its inner workings. However, the industry can explicitly inform consumers which variables are considered in risk classification and how changes in those variables contribute to the final premium. By identifying the relationships between inputs and outcomes, we can detect possible biases and find a direction for fixing the transparency problem.
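The sketch below illustrates the kind of disclosure this implies: an explicit, auditable breakdown showing how each rating variable moves the premium. The variables and multiplicative weights are hypothetical; the point is that every step of the calculation is visible.

```python
# Hypothetical transparent rating: each factor's effect is printed,
# so a consumer or auditor can trace the premium step by step.

def explain_premium(base: float, factors: dict) -> float:
    premium = base
    print(f"base premium: ${base:,.2f}")
    for name, weight in factors.items():
        before = premium
        premium *= weight
        print(f"  {name:<15} x{weight:.2f} -> ${premium:,.2f} "
              f"({premium - before:+,.2f})")
    return round(premium, 2)

explain_premium(1500.0, {
    "vehicle_type":   1.10,
    "driving_record": 0.90,
    "annual_mileage": 1.05,
})
```

With this structure, removing a contested factor such as postal code, or testing how premiums shift across demographic groups, becomes a matter of inspecting the factor table rather than reverse-engineering a black box.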

Finally, the definition of “fairness” in the insurance industry should be expanded to include the welfare of society, not just the sustainability of the industry. Allowing discrimination simply on the basis of statistical averages only serves to perpetuate traditional stereotypes, with all their invidious prejudices; it is unfair and unethical. As long as they rely on historical data and the industry’s current definitions of fairness, the algorithms and the insurance products they support will keep re-establishing past norms, reflecting accumulated unfairness, and stalling social progress. Fairness cannot be examined on self-contained statistical grounds. It is a dynamic social and civic issue that should be continually audited, adapted, and debated by all members of society.

References

About FSRA. (n.d.). Retrieved November 18, 2020, from https://www.fsrao.ca/about-fsra

The American Academy of Actuaries. (2018). Big data and the role of the actuary. Retrieved November 12, 2020, from https://www.actuary.org/sites/default/files/files/publications/BigDataAndTheRoleOfTheActuary.pdf

City of Toronto. (2018, December 05). Neighbourhood Profiles. Retrieved November 16, 2020, from https://www.toronto.ca/city-government/data-research-maps/neighbourhoods-communities/neighbourhood-profiles/

Discussion paper: Human rights issues in insurance. (n.d.). Retrieved November 12, 2020, from http://www.ohrc.on.ca/en/discussion-paper-human-rights-issues-insurance

Financial Services Commission of Ontario (n.d.). About FSCO. Retrieved November 18, 2020, from http://www.fsco.gov.on.ca/en/About/Pages/default.aspx

Ito, J. (2019). Supposedly ‘Fair’ Algorithms Can Perpetuate Discrimination. Wired, April, 2.

Jan, T. (2018). Redlining was banned 50 years ago. It’s still hurting minorities today. Washington Post, 28.

Kagan, J. (2020, August 28). Black Box Insurance. Retrieved November 16, 2020, from https://www.investopedia.com/terms/b/black-box-insurance.asp

Kanetix. (2019, February 26). Kanetix.ca Reveals Ontario’s Most Expensive Cities for Auto Insurance. Retrieved November 12, 2020, from https://www.newswire.ca/news-releases/kanetix-ca-reveals-ontario-s-most-expensive-cities-for-auto-insurance-819924015.html

Kirchner, L. (2015). When big data becomes bad data. ProPublica, September, 2.

Understanding Auto Insurance Rates. (n.d.). Retrieved November 12, 2020, from https://www.fsrao.ca/consumers/auto-insurance/understanding-auto-insurance-rates

Wiegers, W. A. (1989). The Use of Age, Sex, and Marital Status as Rating Variables in Automobile Insurance. University of Toronto Law Journal, 39(2), 149.
