CS @Harvard | I write about fairness & ethics in AI/ML for @fairbytes | Storyteller, hacker, innovator | Visit me at www.catherinehyeo.com

Building a web application that transforms photographs into artworks of one’s chosen style

A photo of Harvard transformed into the style of Leonid Afremov

This article was produced as part of the final project for Harvard’s AC215 Fall 2021 course. Our team consists of Benjamin Wu, Catherine Yeo, and Zev Nicolai-Scanio.

Introduction

When planning their artwork, artists often have difficulty deciding on a style or visualizing the final piece they want to…

A technique to explain how black-box machine learning classifiers make predictions

Photo by Joshua Hoehne on Unsplash

Needless to say, machine learning is powerful.

At the most basic level, machine learning algorithms can be used to classify things. Given a collection of cute animal pictures, a classifier can separate the pictures into buckets of ‘dog’ and ‘not a dog’. …
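The 'dog' vs. 'not a dog' idea can be sketched in a few lines. This is a minimal, hypothetical illustration (not the classifier from any of the articles below): each picture is reduced to a made-up two-number feature vector, and a new picture is assigned to whichever class's average example (centroid) it sits closer to.

```python
# Toy nearest-centroid classifier for 'dog' vs. 'not a dog'.
# The two features per picture are hypothetical (e.g. ear shape, snout length).

def centroid(points):
    """Component-wise mean of a list of 2D feature vectors."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def classify(x, dog_examples, other_examples):
    """Label x by whichever class centroid is closer (squared distance)."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    c_dog = centroid(dog_examples)
    c_other = centroid(other_examples)
    return 'dog' if dist2(x, c_dog) <= dist2(x, c_other) else 'not a dog'

dogs = [(0.9, 0.8), (1.0, 0.7), (0.8, 0.9)]    # labeled dog pictures
others = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.3)]  # labeled non-dog pictures

print(classify((0.85, 0.75), dogs, others))  # → dog
print(classify((0.15, 0.25), dogs, others))  # → not a dog
```

Real image classifiers learn far richer features automatically, but the core idea is the same: map inputs to a feature space, then separate the classes there.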

Elmo, Bert, and Marge (Simpson) aren’t just your favorite TV characters growing up — they’re also machine learning & NLP models

Photo by Stefan Grage on Unsplash

Bart. Elmo. Bert. Kermit. Marge. What do they have in common?

They’re all beloved fictional characters from TV shows many of us watched when we were young. But that’s not all — they’re also all AI models.

In 2018, researchers at the Allen Institute for AI published the language model ELMo. The…

A guide for college students on factors to consider and options for what to do in the next school year

In the last month, many US universities have announced their fall reopening plans.

Some universities have proposed a hybrid model, with some students returning to campus for in-person classes and others staying home, or holding a select number of small classes in person. …

Papers, books, and resources to learn about fairness in vision, NLP, and more

Word cloud generated by titles in this reading list

Recent discussion in the machine learning community has brought to light the importance of understanding not just machine learning itself, but also the considerations of bias and fairness behind every algorithm’s use.

“This isn’t a call for ‘diversity’ in datasets or ‘improved accuracy’ in performance — it’s a call…

An overview of how to use counterfactual fairness to quantify the social bias of crowd workers

Photo by Edwin Andrade on Unsplash

Crowdsourcing is widely used in machine learning as an efficient way to annotate datasets. Platforms like Amazon Mechanical Turk allow researchers to collect data or outsource data-labeling tasks to individuals all over the world.

However, crowdsourced datasets often contain significant social biases, such as gender or…
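One way to probe such bias is a counterfactual check: swap a sensitive attribute (here, gendered words) in an input and see whether the label changes. The sketch below is a simplified, hypothetical setup, not the method from the article; the word-swap list and the toy annotator are both assumptions for illustration.

```python
# Counterfactual fairness probe (toy version): an annotator is fair on a
# sentence if it assigns the same label to the sentence and to its
# gender-swapped counterfactual.

SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his",
         "man": "woman", "woman": "man"}

def counterfactual(sentence):
    """Produce the counterfactual sentence by swapping gendered words."""
    return " ".join(SWAPS.get(w, w) for w in sentence.lower().split())

def is_counterfactually_fair(annotate, sentences):
    """True iff every sentence and its counterfactual get the same label."""
    return all(annotate(s) == annotate(counterfactual(s)) for s in sentences)

# A deliberately biased toy annotator: labels sentences starting with
# "he" as "competent" and everything else as "neutral".
biased = lambda s: "competent" if s.split()[0] == "he" else "neutral"

print(is_counterfactually_fair(biased, ["he is a leader"]))  # → False
```

A disagreement on any counterfactual pair flags the annotator (or, in the crowdsourcing setting, the crowd worker) as a potential source of social bias in the labeled dataset.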

Despite its impressive performance, the world’s newest language model reflects societal biases in gender, race, and religion

Last week, OpenAI researchers announced the arrival of GPT-3, a language model that blew away its predecessor. GPT-2 was already widely regarded as the state-of-the-art language model; GPT-3 dwarfs it with 175 billion parameters, more than 100x GPT-2’s 1.5 billion.

GPT-3 achieved impressive…

Catherine Yeo
