Artificial intelligence and education

How AI can serve as the subject of an investigation or as a reporting tool for journalism

Florencia Coelho
JSK Class of 2019
4 min read · Apr 4, 2019


Flickr / GotCredit

Now, at the end of my second quarter at Stanford, I’m leaving behind my impostor syndrome and enjoying some “aha” moments as I pursue a broader understanding of different artificial intelligence (AI) opportunities.

As journalists, we must come to grips with the fact that professional challenges like AI are here to stay. And we may need AI itself to hold governments and corporations accountable.

Let’s brainstorm about government. Say an agency designs an algorithm, or hires a contractor to design one, that could ultimately discriminate against part of the population, or produce an output based on a mistake or an unfair assumption.

When we ask the government for information on such a case, via the Freedom of Information Act (FOIA), we receive replies like, “It’s math,” or “Computers don’t make mistakes,” or, “We can’t give you that information. The software is proprietary to a contractor.”

Journalists must insist on the premise that contracts between governments and private consultants should respect the public’s right to know, especially when constitutional rights could be threatened.

In working with AI, we should expect long discussions about how much information we can get, how an algorithm works, and which data the system was trained on.

And more questions arise. Will courts decide them on a case-by-case basis? How must legislation adapt to cover these new scenarios?

Algorithmic awareness and literacy, it seems, matter to policymakers, public officials, civil society, lawyers and journalists alike.

AI and Education

I’m collecting real-life examples for inspiration. Here are two cases from the education beat.

The first involves a machine learning algorithm, deployed by the Washington, D.C. school district, whose output produced unfair teacher evaluations.

a) Investigating AI: When machine learning algorithms are fed untruthful data

In the introduction to “Weapons of Math Destruction,” Cathy O’Neil tells the story of Sarah Wysocki, a teacher who was fired from a Washington, D.C. school because of the result of an algorithmic calculation.

In 2007, the new city administration in Washington, D.C. hired a consultant to develop a system to measure math and language teaching skills in the public schools.

The formula the algorithm used to grade the teachers was not publicly available, a typical example of a “black box” system. Ideally, the input data would include reviews from school administrators, feedback from the community, and students’ performance on standardized tests.

In the 2010–2011 school year, Sarah scored in the bottom 2% of the teacher rankings, which led to her dismissal from the D.C. district school.

She couldn’t understand how she had found herself in this situation; she had always been a valued teacher in the community, and there was no formal procedure to appeal the system’s decision.

To make a long story short, it appears the algorithm may have been fed fraudulent data. Suspiciously, bonuses were on offer for teachers and school administrators whose students outperformed their peers at other district schools. Her students’ previous teachers could have inflated their scores, making them seem more advanced than they really were. When Sarah graded the same students on their real performance, the year-to-year comparison sank.
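To see why inflated prior-year scores are so damaging in this kind of model, here is a toy illustration. The numbers and the scoring rule below are invented for the sake of the example; the actual D.C. evaluation formula was never made public.

```python
# Toy illustration of a year-over-year "value added" comparison.
# Scores and the scoring rule are hypothetical, not the real D.C. system.

# Scores recorded the previous year (possibly inflated by cheating)
prior_scores = {"Ana": 92, "Ben": 88, "Carla": 95}

# Honest scores the same students earn under the new teacher
current_scores = {"Ana": 74, "Ben": 70, "Carla": 78}

# "Value added" here is simply the average year-over-year change
gains = [current_scores[s] - prior_scores[s] for s in prior_scores]
value_added = sum(gains) / len(gains)

print(f"Average gain: {value_added:+.1f} points")
# Average gain: -17.7 points -> the honest teacher appears to have hurt
# her students, even though the drop comes from inflated prior scores.
```

In other words, a teacher who inherits inflated scores is almost guaranteed to look bad in a year-over-year comparison, no matter how well she actually teaches.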

Her case was reported by The Washington Post (in two stories) and by The Huffington Post.

Interested in more AI and education? I highly recommend watching the entire video of Cathy O’Neil’s presentation at the Ford Foundation, where she discusses: a) the Sarah Wysocki case (4:40 to 7:20); b) how tailored advertising by for-profit universities targets vulnerable people with over-promised expectations (18:35 to 19:11); and c) how two universities handle information differently when projecting which prospective freshmen might struggle in college (56:30 to 58:00).

b) AI as a reporting tool to generate leads for investigative stories

Another example was reported by Meredith Broussard in her book, “Artificial Unintelligence.” In this case, the author explored using AI as a reporting tool to find stories related to public affairs issues.

Although machine learning is currently the most popular subdomain of AI, I was immediately curious about how she used an older but still useful approach for the task: expert systems.

In a nutshell, “Why Poor Schools Can’t Win at Standardized Testing” showed that Philadelphia’s low-income schools didn’t have the textbooks required to answer specific test questions.

Meredith designed AI-based software to crunch available education data, identify possible stories, and display customizable data visualizations. She explained how she built the model, called the “Story Discovery Engine,” in an academic paper. Basically, the expert system worked as a lead generator, analyzing 15 datasets; the sketch below gives a rough sense of what that rule-based approach can look like.

Image: Meredith Broussard
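As a rough sketch of an expert-system-style lead generator, you can encode a reporter’s domain knowledge as explicit rules and flag the records that match. The field names, thresholds, and data below are hypothetical, not the actual Story Discovery Engine schema or its 15 real datasets.

```python
# Minimal sketch of a rule-based lead generator (invented data and rule).

schools = [
    {"name": "School A", "books_owned": 210, "books_required": 600},
    {"name": "School B", "books_owned": 440, "books_required": 450},
]

def flag_leads(schools, min_coverage=0.9):
    """Apply a hand-written rule: flag schools whose textbook
    coverage falls below the threshold a reporter cares about."""
    leads = []
    for s in schools:
        coverage = s["books_owned"] / s["books_required"]
        if coverage < min_coverage:
            leads.append((s["name"], round(coverage, 2)))
    return leads

print(flag_leads(schools))
# [('School A', 0.35)] -> a starting point for reporting, not a finished story
```

The point of a system like this is not to compute an answer, but to surface the places a journalist should go look.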

The software did not answer Meredith’s question about poor schools. But it served as a source for original investigative ideas.

My apologies for the tripod fail in the next video. The content rocks!

Image Credit: Meredith Broussard

I interviewed Meredith at NYU, and we talked about the process behind her project, including why she decided to use an expert system and what kinds of datasets she analyzed with the tool.

In the meantime, I’ll continue saving bookmarks on my Pinboard. And I’m also available at fcoelho@stanford.edu. Stay tuned!


JSK Stanford Fellow. Class of 2019. LA NACION (Argentina). #neverstoplearning