Analyzing the Social, Functional and P&L Pitfalls of Evil {Old} Data.

Breaking Down Algorithmic Accountability.

With great advancement comes great danger. As organizations migrate toward maximum efficiency and tech-assisted business models, it’s critical to respect the process and not entertain shortcuts during the migration period. Companies paired with the appropriate tech solutions can see a significant increase in value delivery. However, algorithms built on dirty data create pitfalls that will slowly eat away at your company’s code of ethics, day-to-day processes, and profit margin.

When Netflix recommends you watch “Grace and Frankie” after you’ve finished “Love,” an algorithm decided that would be the next logical thing for you to watch. And when Google shows you one search result ahead of another, an algorithm made a decision that one page was more important than the other. Oh, and when a photo app decides you’d look better with lighter skin, a seriously biased algorithm that a real person developed made that call. — TechCrunch, “Algorithmic Accountability”

Accountability is fundamentally about checks and balances on power. Journalism and public advocacy serve as tools to hold powerful institutions and individuals accountable. However, in a world of data and algorithms, accountability is often murky.

Companies of all sizes can gain significant benefits by sorting through vast troves of information. It has become common practice to mine customer data in search of trends, patterns, and hidden gems that can serve as a competitive advantage within a business strategy. Automated algorithms use a sequence of well-defined steps and instructions to generate categories for filtering information, based on assumptions about a desirable outcome. However, three characteristics of algorithmic ordering have made the problem particularly difficult to address: the data used may be inaccurate or inappropriate, algorithmic modeling may be biased or limited, and the uses of algorithms remain opaque in many critical sectors.
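To make the first of those characteristics concrete, here is a minimal sketch (in Python, with hypothetical field names and thresholds) of how a well-defined sequence of steps sorts records into categories, and how a single dirty field quietly changes a customer’s outcome:

```python
# A minimal sketch of an automated filtering rule. The field names
# (credit_score, years_on_file) and thresholds are hypothetical, chosen
# only to illustrate how ordered rules place records into categories --
# and how one inaccurate field silently changes the result.

customers = [
    {"name": "A", "credit_score": 720, "years_on_file": 9},
    {"name": "B", "credit_score": 0,   "years_on_file": 6},  # missing score recorded as 0
]

def categorize(record):
    """Apply fixed, ordered rules to place a record in a category."""
    if record["credit_score"] >= 700:
        return "preferred"
    if record["credit_score"] >= 600:
        return "standard"
    return "declined"  # customer B lands here because of a dirty field, not real risk

for c in customers:
    print(c["name"], "->", categorize(c))
```

The steps themselves are perfectly “correct”; the bad outcome comes entirely from the data fed into them.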

The strategist behind the algorithm may have no intention of producing discriminatory results.

Often, the elements of uncertainty, subjective interpretation, arbitrary choice, accidents, and other ingredients in the mix are rendered invisible; what is displayed to the end user is just the functionality of the technology. For example, Google, Bing, Yahoo, and other search engines effectively create “filter bubbles” around the results people see when they search, which can be problematic: some information is more visible to one individual than another based on the user profile the search engine keeps on each of them. How might algorithms affect the flow of educational materials or other types of information? Who, or which networks of stakeholders, are the arbiters of the algorithmic power that strongly influences information flows?
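Here is a toy illustration of that mechanism; the users, pages, and affinity scores are all made up. Two people issue the same query, but stored profile weights determine which result each of them sees first:

```python
# A toy sketch of profile-based re-ranking, the mechanism behind "filter
# bubbles." The profiles and scores are hypothetical: two users issue the
# same query, but stored interest weights push different pages to the top.

results = ["research-journal.org", "skeptic-blog.net", "news-site.com"]

profiles = {
    "user_1": {"research-journal.org": 0.9, "skeptic-blog.net": 0.1, "news-site.com": 0.5},
    "user_2": {"research-journal.org": 0.1, "skeptic-blog.net": 0.9, "news-site.com": 0.5},
}

def personalized_ranking(user, pages):
    # Higher affinity in the stored profile means higher placement.
    return sorted(pages, key=lambda p: profiles[user].get(p, 0.0), reverse=True)

for user in profiles:
    print(user, personalized_ranking(user, results))
# Same query, different "most visible" information for each user.
```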

In the past, individuals controlled business outcomes, and their judgment sometimes produced unfair, even evil, results. If a data set is built from those previous actions and then applied to modern technological systems, the algorithm will most likely reproduce the same imbalances.


Insurance providers, like many other companies, are not allowed to discriminate on the basis of protected classes. Yet insurance is, at times, fundamentally a mechanism of discrimination: insurers try to minimize risks and maximize profits. Marginalized populations, including protected classes of people, are often riskier to insure, in part because of how discrimination has historically made it harder for people in these groups to get access to high-quality medical care, favorable mortgages in low-risk communities, and educational opportunities. Thus, if insurers want to minimize their risks, they would often benefit by not covering many marginalized populations.

As insurance determinations are increasingly computed algorithmically, it becomes harder to determine whether a person is being discriminated against inappropriately. The strategist behind the algorithm may have no intention of producing discriminatory results, yet when a model mines outdated data sets, race can easily creep back in as a hidden factor even though it is never an explicit input.
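To see how race creeps back in without ever being an input, consider this hypothetical sketch. The rule looks only at ZIP code, but because ZIP code is correlated with race in the outdated records, the decisions still split along group lines:

```python
# A minimal sketch of proxy leakage, assuming a hypothetical historical
# data set. Race is never an input to the rule, yet because ZIP code is
# correlated with race in the (outdated) records, decisions still split
# along racial lines.

historical = [
    {"zip": "10001", "group": "A", "approved": True},
    {"zip": "10001", "group": "A", "approved": True},
    {"zip": "60629", "group": "B", "approved": False},
    {"zip": "60629", "group": "B", "approved": False},
]

# "Rule" learned from history: approve applicants from ZIPs with past approvals.
approved_zips = {r["zip"] for r in historical if r["approved"]}

def decide(applicant):
    return applicant["zip"] in approved_zips  # race is never consulted

applicants = [{"zip": "10001", "group": "A"}, {"zip": "60629", "group": "B"}]
for a in applicants:
    print(a["group"], "->", "approve" if decide(a) else "decline")
```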

It’s important to mitigate the dangerous outcomes that result from algorithms based on flawed data. If an algorithm screens out qualified customers simply because of a fundamentally invalid data point, revenue and earnings are compromised. It’s critical to have analysts test and verify that an algorithm is not vulnerable.
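One concrete check analysts might run is the “four-fifths” disparate-impact test, a widely cited rule of thumb from the EEOC’s selection guidelines. The decisions and group labels below are hypothetical:

```python
# One test analysts might run before shipping: the "four-fifths" disparate
# impact check. All numbers and group labels here are hypothetical; the
# 0.8 threshold is the EEOC's commonly cited rule of thumb.

decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def selection_rate(group):
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = selection_rate("A"), selection_rate("B")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"group A: {rate_a:.2f}, group B: {rate_b:.2f}, ratio: {ratio:.2f}")
if ratio < 0.8:
    print("FLAG: possible disparate impact -- review the data feeding this model")
```

A failing ratio doesn’t prove discrimination on its own, but it tells the team exactly where to start digging into the underlying data.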

Related Read: Some, including BuzzFeed, would argue that basing your data on psychographics as opposed to demographics is a far more lucrative practice. If you’re interested in learning more about how that works, click here.

If you enjoyed this story, please recommend and share to help others find it! Feel free to leave a comment as well.