United States Air Force and Space Force members visit MIT for immersive learning experience

During the three-day Learning Machines Training, the USAF and USSF members learned how AI works and the best practices for applying it to their profession safely and ethically.

MIT Open Learning
4 min read · Jun 3, 2024


Learning Machines Training participants calibrate their drones to navigate targets. Photo: Stephen Nelson

By Stephen Nelson

How do you set and lead learning machine strategies and tasks? That’s what 50 members of the United States Air Force (USAF) and United States Space Force (USSF) learned at a three-day training at MIT. They visited campus from more than a dozen bases to elevate their knowledge about AI, the responsible design and use of AI systems, and AI leadership.

The annual Learning Machines Training was held at the MIT Media Lab and featured lectures led by members of the Media Lab’s Personal Robots Group and MIT Responsible AI for Social Empowerment and Education (RAISE), an initiative spearheaded by the Media Lab and MIT Open Learning. The workshop design followed a constructionist approach where participants learned about AI through creative hands-on learning in teams. Activities included coding with drones, computer games, and robotics. Participants also engaged in facilitated group discussions on AI ethics, policy, and organizational change topics.

Teaching drones to fly

The training kicked off with an energetic welcome from Cynthia Breazeal, director of the Personal Robots Group at the MIT Media Lab, director of MIT RAISE, and dean for digital learning at MIT Open Learning. “We’re here to learn, but also here to have fun,” she said before outlining the course objectives, which included: demystifying AI algorithms, identifying challenges and opportunities in autonomous systems and machine learning, and exploring ethical issues and policy considerations in real-world AI applications.

Day one included several tasks designed by MIT’s Sharifa Alghowinem to prepare participants to teach a drone to fly itself, that is, to autonomously create its own path using data. For example, participants used supervised machine learning to program their drone to recognize certain colors, shapes, or patterns and navigate to those features when prompted. Participants were given balloons as targets and worked in teams to pop their balloon the fastest. During this exercise, USAF and USSF participants showed their first hints of competitiveness, a recurring theme throughout the three-day training. Each exercise increased in complexity as the day progressed.
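The exercise itself used MIT’s own drone tooling, but the underlying supervised-learning idea, label some examples, classify what the camera sees, and act on the prediction, can be sketched in a few lines of Python. The RGB values, labels, and navigation commands below are entirely hypothetical:

```python
import math

# Hypothetical training data: mean RGB color of a camera frame -> target label.
# (A stand-in for whatever features the actual drone exercise used.)
TRAINING = [
    ((220, 40, 40), "red_balloon"),
    ((200, 60, 50), "red_balloon"),
    ((40, 60, 210), "blue_balloon"),
    ((50, 70, 200), "blue_balloon"),
]

def classify(rgb):
    """Nearest-neighbor classifier: label a frame by its closest training example."""
    return min(TRAINING, key=lambda example: math.dist(example[0], rgb))[1]

def navigate(rgb, goal):
    """Head toward the target only when the current frame matches the goal label."""
    return "approach" if classify(rgb) == goal else "search"

print(navigate((210, 50, 45), "red_balloon"))  # approach
print(navigate((45, 65, 205), "red_balloon"))  # search
```

The point of the sketch is the workflow, not the classifier: the drone’s behavior is driven entirely by labeled examples rather than hand-written rules, which is what made the balloon-popping race a machine learning exercise.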

Invited guest speaker Julie Shah, professor and head of the Department of Aeronautics and Astronautics, spoke about the autonomy paradox: the notion that automating basic tasks doesn’t eliminate the need for humans; it simply changes their roles, often requiring more training and higher skills in other fields. “Used responsibly,” Shah said, “generative AI can increase flexibility and transparency to help overcome issues with increased technology in the workforce.”

Day one wrapped up with a session led by MIT’s Dong Won Lee on conversational AI and large language models. These lessons served as a foundation for the rest of the training, asking participants to consider which technologies are promising and why, and encouraging them to think critically about the sometimes unbelievable benefits touted by industry.

Making autonomous robots responsibly

MIT’s Matt Taylor led attendees in using Scratch, a block-based coding platform created at MIT, to train the behavior of an AI player in a game. The activity showed participants that minor adjustments to the gameplay help the AI player learn from experience, through reinforcement learning, which movements maximize point values within the game. The better the adjustments, the faster the AI player learned to score higher with each task.

Nathaniel Hanson from MIT Lincoln Laboratory then taught a deep-dive session on how to train and evaluate autonomous agents using deep reinforcement learning. The key learning objective for this session was for students to appreciate the intricacies of hyperparameter setup and how it can determine overall system performance. The learners explored several deep reinforcement learning algorithms in interactive Jupyter notebooks. “An important part of this exercise is to clearly show that learning algorithms that work in one problem space with a set of hyperparameters do not necessarily translate to a different environment,” Hanson said. “There is no master algorithm.”
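Hanson’s notebooks aren’t shown here, but the hyperparameter sensitivity he describes can be illustrated with a toy tabular Q-learning agent: the same algorithm on the same task either solves it or fails outright depending on a single hyperparameter, the exploration rate. The chain environment and all values below are hypothetical:

```python
import random

def train(epsilon, episodes=300, max_steps=60, seed=0):
    """Tabular Q-learning on a 4-state chain (start at state 0, goal at state 3)."""
    rng = random.Random(seed)
    n_states, goal = 4, 3
    actions = [-1, 1]  # step left / step right
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    alpha, gamma = 0.5, 0.9  # learning rate, discount factor
    for _ in range(episodes):
        s = 0
        for _ in range(max_steps):
            # Epsilon-greedy action selection: explore with probability epsilon.
            if rng.random() < epsilon:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda x: Q[(s, x)])
            s2 = max(min(s + a, n_states - 1), 0)
            r = 1.0 if s2 == goal else 0.0
            # Standard Q-learning update rule.
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
            s = s2
            if s == goal:
                break
    # Greedy policy for the non-goal states.
    return {s: max(actions, key=lambda x: Q[(s, x)]) for s in range(goal)}

def follows_to_goal(policy, goal=3, max_steps=10):
    """Does following the greedy policy from the start reach the goal?"""
    s = 0
    for _ in range(max_steps):
        s = max(min(s + policy[s], goal), 0)
        if s == goal:
            return True
    return False

print(follows_to_goal(train(epsilon=0.3)))  # with exploration, the agent solves the task
print(follows_to_goal(train(epsilon=0.0)))  # without exploration, it never finds the goal
```

With no exploration the agent never stumbles onto the reward, so its value estimates stay at zero and its policy never improves; the same code with a nonzero exploration rate solves the task. Scaled up to deep reinforcement learning, this is the sensitivity Hanson warns about: hyperparameters that work in one environment do not necessarily transfer to another.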

Following these coding activities, participants were introduced to AI ethics and policy by MIT’s Daniella DiPaola and Anastasia Ostrowski. DiPaola, who helped create MIT’s free Day of AI curriculum, which weaves ethical, social, and policy considerations throughout its technical explanations, spoke briefly about the moral repercussions of lifting guardrails completely. “I tend to think that by becoming more informed about AI technologies and having a clear picture of their capabilities as well as their shortcomings, it will help us make decisions on how we can best integrate them into schools and the workplace,” DiPaola said.

Day two concluded with a training session on AI leadership, culture, and change, conducted by MIT’s Brandon Leshchinsky. The overarching theme conveyed the need for leaders to grasp the complexity of AI and distill it into bite-sized lessons that the entire organization, from top to bottom, can comprehend.

Learning practical applications

The training concluded with a day on generative AI and its practical applications in coding and media, with an eye toward responsible use and ethical implications. MIT’s Safinah Ali and Ayat Abodayeh taught participants about image and code generation, which led to participants competing, evaluating, and eventually playing interactive games created collaboratively with AI. Anastasia Ostrowski led a session on applying responsible design to participants’ own work and introduced the concept of techno-solutionism: the tendency to solve problems with technology without fully evaluating the overall outcomes.

RAISE is an MIT-wide initiative headquartered in the MIT Media Lab and run in collaboration with the MIT Schwarzman College of Computing and MIT Open Learning.


