Exploring a Creative, Safe Introduction to Machine Learning
We’re excited to share Face Sensing blocks as a new experimental extension on Scratch Lab.
With these blocks, you can create projects that respond to your eyes, nose, and other parts of your face.
For instance, you can make a game that uses your head movement as a controller — or virtually wear some funny glasses and a hat.
How does it work?
Face Sensing blocks use your webcam and run entirely and securely in your browser. For any given webcam image, the extension can tell whether or not a face is present, and where eyes, ears, a nose, and a mouth are located on that face.
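To make this concrete, here is a small sketch of the kind of data a browser face-detection model produces and how a project might use it. The exact shape of the `predictions` object is an assumption for illustration (modeled loosely on common browser face-detection libraries), not the extension's actual internal format.

```javascript
// Hypothetical detection result, shaped like the output of a
// browser face-detection model (an assumption for illustration):
// each prediction has a confidence score, a bounding box, and
// landmark coordinates for facial features.
const predictions = [
  {
    probability: 0.97,
    topLeft: [120, 80],       // [x, y] of the box's top-left corner
    bottomRight: [320, 300],  // [x, y] of the box's bottom-right corner
    landmarks: {
      rightEye: [170, 140],
      leftEye: [270, 140],
      nose: [220, 200],
      mouth: [220, 250],
    },
  },
];

// A project can ask two simple questions of this data:
// "is a face present?" and "where is a given feature?"
function faceIsPresent(preds, threshold = 0.5) {
  return preds.some((p) => p.probability >= threshold);
}

function featurePosition(preds, feature) {
  if (!faceIsPresent(preds)) return null;
  return preds[0].landmarks[feature]; // [x, y] in image coordinates
}

console.log(faceIsPresent(predictions));           // true
console.log(featurePosition(predictions, "nose")); // [220, 200]
```

Notice that nothing in this data identifies *whose* face it is; the model only reports that a face-shaped pattern exists and where its features are.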
At Scratch, a key priority is maintaining children’s privacy and safety. The extension can tell that a face exists, but it is not able to identify who the face is. Face images and data are never stored or sent anywhere, not on your own computer and not on Scratch servers.
The technology underlying Face Sensing blocks is referred to as machine learning. What exactly does “machine learning” mean, and how does a machine “learn”?
Machine learning models like this one are pre-trained on large amounts of data. In this case, developers provided a learning model with many images of faces. The model then looks for patterns and predicts answers based on what it has already seen.
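The "learn from examples, then predict" idea can be sketched in a few lines. This toy nearest-neighbor classifier is not how the actual face model works (real models learn from millions of images), but it shows the core pattern: a prediction is just an answer based on the most similar examples the model has already seen. The feature vectors and labels here are invented for illustration.

```javascript
// Toy sketch of "learning from examples" (not the actual face model):
// each training example pairs a two-number description of an image
// with a label a person provided in advance.
const trainingData = [
  { features: [0.9, 0.8], label: "face" },
  { features: [0.8, 0.9], label: "face" },
  { features: [0.1, 0.2], label: "no face" },
  { features: [0.2, 0.1], label: "no face" },
];

// Squared distance between two feature vectors: smaller = more similar.
function distance(a, b) {
  return a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0);
}

// "Prediction" = the label of the most similar training example.
function predict(features) {
  let best = trainingData[0];
  for (const example of trainingData) {
    if (distance(features, example.features) < distance(features, best.features)) {
      best = example;
    }
  }
  return best.label;
}

console.log(predict([0.85, 0.85])); // "face"
console.log(predict([0.15, 0.15])); // "no face"
```

This also makes the limits of machine learning visible: the model can only ever echo the patterns in its training data, which is exactly why the choice of that data matters so much.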
Machine Learning and Scratch
Machine-learning technologies are already part of our everyday lives. Specifically, face-sensing technologies are present in many photo programs, messaging applications, and social media platforms.
What if young people could not only interact with machine-learning technology, but use it to build projects that reflect their own interests?
At Scratch, we’re committed to providing young people with opportunities to engage with and learn about exciting and personally meaningful technologies on their own terms — not only in ways defined by large software companies.
Challenges with Machine Learning
Because machine-learning technologies are expensive and complex to make, large companies and research teams currently lead the charge. As a result, the underlying learning models and training data tend to be proprietary, hidden, and created exclusively by a limited group of people.
Scratch believes that the responsibility to create equitable experiences begins with the people who make and design the technology.
We know that tools and processes reflect the values and perspectives of their creators. Machine-learning models only “know” what they have been built and trained to do. The more limited the training data or development team, the more bias and inaccuracy the technology will exhibit across racial and gender groups.
To learn more about racial bias in machine learning, check out this video from researcher Joy Buolamwini:
Face Sensing technology will open up new creative possibilities and learning opportunities for young people using Scratch.
We’re so excited for children to explore this technology. But before we introduce any new features to Scratch, we must ensure they align with our core principles:
Safety: Maintain the privacy and safety of children
Fairness: Only choose models that have a published fairness evaluation
Responsibility: Gather feedback on questions of ethics and equity from advocates and domain experts
Transparency: Publicly acknowledge the social concerns about face sensing technologies
Accountability: Use feedback from our community to inform any future Face Sensing updates
To help us reach our goals in these areas, it was crucial to open up dialogue with thought leaders and experts in the field. We reached out to:
- AI researchers
- AI ethics advocates
- Engineers developing these technologies
- Children’s privacy experts
Then, we searched for a face-detection model that aligned most closely with our core principles, using criteria such as:
- The model is freely available and open source
- The authors tested its fairness by recording and comparing the model’s accuracy on different images
- The authors published quantified results of their fairness evaluation
While no model was a perfect fit, we built the current Face Sensing blocks using Google’s BlazeFace model. You can learn more about BlazeFace’s fairness metrics here.
We are open to changing our approach as this technology evolves to ensure we are using the fairest model possible.
Bringing AI Ethics Into the Classroom
We are inspired by the advocacy of groups like the Algorithmic Justice League. As educators, activists, and other advocates continue to question technology companies, we are beginning to see improvements in corporate transparency and accountability.
At Scratch, we hope for more children to engage with relevant technologies in empowered, inclusive ways.
Face Sensing blocks offer creative possibilities and opportunities for critical thinking:
- Kids can tinker with the bounds of “how computers see.”
- Kids can explore new kinds of inclusive projects, such as assistive interfaces.
- Kids can create projects in a space that usually isn’t available to them.
Face Sensing blocks provide a meaningful opportunity to open up conversations with your students about AI ethics and how these concepts will shape our future. If you are interested in exploring this in your classroom, these resources and curricula can help facilitate your discussion:
- Blakeley H. Payne, MIT AI Ethics Curriculum
- Joy Buolamwini, AI, Ain’t I a Woman?
- Allied Media Projects, A People’s Guide to AI
- Algorithmic Justice League, Drag vs. AI
As we navigate this new space, we’ll be listening to your feedback.
In order to release Face Sensing blocks as an official extension in the Scratch community, we’ll need more robust internal standards to measure fairness. This includes both quantitative evaluations and qualitative feedback from young people.
We hope that this is the beginning of a dialogue with our broader Scratch community: children, families, and educators around the world. It’s important that these topics persist in public conversation and in our collective understanding, so more voices can shape the future of machine learning policies.