Neural Networks and The Robotic Dream of Mobility

Advancements in robotic mobility, powered by neural network algorithms and LIDAR sensors, span from household robotics to self-driving cars, including Boston Dynamics’ quadrupeds and humanoid robots.

Pedro Uria-Recio
ILLUMINATION

Chapter 14 of the book “Machines of Tomorrow: From AI Origins to Superintelligence & Posthumanity.” How AI Will Shape Our World.

Buy “Machines of Tomorrow” on Amazon

Review it on Amazon or Goodreads

Subscribe to our newsletter to receive all the chapters

Machines of Tomorrow: From AI Origins to Superintelligence & Posthumanity. How AI Will Shape Our World.

“This is nothing. In a few years, that bot will move so fast that you’ll need a strobe light to see it. Sweet dreams…”

Elon Musk

American Billionaire

Post on X, referring to the acrobatics of Boston Dynamics’ humanoid robot Atlas. [Musk and Medina]

2017

Elon Musk’s post on X highlights the remarkable mobility of today’s robots. Robot mobility relies on neural networks, a type of artificial intelligence that mimics the human brain’s neural architecture, as discussed in Chapters 4 and 5. Neural networks are interconnected layers of artificial neurons designed to analyze vast volumes of data, identify patterns, and reach well-informed decisions. Data-driven learning has improved mobility across a wide range of robotic applications, accelerating progress toward replicating, and eventually surpassing, human capability.

The intricate web-like layers of artificial neurons require substantial computational power for efficient data processing and swift decision-making. It was not until the 2000s that hardware capabilities attained the necessary speed and robustness to handle their computational demands effectively.

Modern robots rely heavily on sensors like cameras, LIDAR, radar, and GPS to perceive their surroundings. Neural networks process this sensory data, enabling self-driving cars and warehouse robots to make real-time decisions on navigation, obstacle avoidance, and predicting the actions of other objects, whether they are robots, humans, or vehicles. In quadrupedal or bipedal robots, neural networks assist in refining their movements, maintaining balance, and responding effectively to unexpected challenges like navigating rough terrain or recovering from stumbles.
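
To make the idea concrete, here is a deliberately tiny sketch, in Python/NumPy, of the kind of mapping such a network performs: a few obstacle-distance readings go in, a steering command comes out. The sensor layout and hand-picked weights are hypothetical; a real robot would use far larger networks whose weights are learned from data.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical, hand-picked weights; a real system would learn these from data.
W1 = np.array([[ 0.8, -0.5,  0.1],
               [-0.2,  0.9, -0.4]])      # 2 hidden neurons, 3 sensor inputs
b1 = np.array([0.0, 0.1])
W2 = np.array([[ 0.7, -0.6]])            # 1 output neuron (steering command)
b2 = np.array([0.0])

def steering_from_sensors(distances):
    """Map [left, center, right] obstacle distances (meters) to a steering value
    in roughly [-1, 1]: negative steers left, positive steers right."""
    h = relu(W1 @ distances + b1)                 # hidden layer of artificial neurons
    return float(np.tanh(W2 @ h + b2)[0])         # squash to a bounded command

print(steering_from_sensors(np.array([2.0, 0.4, 3.0])))  # obstacle close ahead -> steer away
```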

We now turn to the topic of neural networks in robotic development and how the creation of self-propelled and self-guided movement represents the first meaningful intersection of the development lines of AI and robotics.

Machines of Tomorrow: From AI Origins to Superintelligence & Posthumanity. How AI Will Shape Our World.

Link to the book: Machines of Tomorrow

Robots’ Remarkable Agility in Combat Competitions

The rising popularity of robots, characterized by their mobility and agility, has given birth to captivating robot combat competitions [Stone]. These events draw large in-person crowds and millions of online viewers. The allure of robot combat lies in its distinctive fusion of inventive engineering, strategic gameplay, and exhilarating battles. Contestants delight in the opportunity to design and build robots for competitive showdowns, putting their creativity and problem-solving skills to the test. The spectacle of these mechanical clashes, often featuring sparks and airborne components, offers entertainment unlike anything else and has fostered a dynamic, continually evolving subculture [Berry].

The first robot combat competition, called “Critter Crunch,” was organized in 1987 in Denver during a science fiction convention. At that time, neural networks existed, but as we explained, they were not yet practical to use. Instead, robot builders remotely controlled their machines, battling ingeniously to disable their opponents.

Little by little, robot combat emerged from its niche in local communities of geek enthusiasts and passionate students. In 1990, the Turing Institute organized the First Robot Olympics in Glasgow, with competitors from different countries [Guinness], and in 1994, the first major US event, called “Robot Wars,” was organized in San Francisco. Its success was overwhelming and caught the attention of the British BBC, which eventually produced the TV series “Robot Wars.” In 1999, a new competition called “BattleBots” began as an Internet broadcast and quickly evolved into a weekly television program on Comedy Central in 2000. From that point onward, robot competitions proliferated worldwide, ranging from “Robotica” in 2001 to “Robot Combat League” in 2013. Television shows like the revival of “BattleBots” on ABC in 2015 and “Robot Wars” in 2016 further contributed to the growth of robot combat events.

At the same time, builders started using increasingly advanced algorithms, depending on each robot’s design and capabilities. One notable example of a robot that employed a neural network algorithm is “Bronco” from “BattleBots” in 2015 [Bryant]. Bronco was still primarily remote-controlled by its human operators, but the neural network allowed it to make more precise decisions regarding its pneumatic flipping arm, improving its ability to strategize and execute effective flips against opponents in combat.

Sweeping Changes in Household Robotics

Combat competition robots were primarily remote-controlled. In contrast, Roomba, introduced by iRobot in 2002, was the first commercially successful, fully autonomous robot outside industrial contexts. iRobot was co-founded by MIT Professor Rodney Brooks, a prominent figure known for his role in the Nouvelle AI movement and for later establishing Rethink Robotics, a cobot manufacturer. It is remarkable how some individuals reappear again and again with meaningful contributions throughout the history of AI and robotics.

Roomba is a small, circular robot designed for vacuuming and floor cleaning. Its circular shape and low profile allow it to move under furniture and reach hard-to-clean areas. It garnered global attention because of its autonomous navigation, obstacle detection, and efficient maneuvering within indoor spaces.

Roomba did not employ neural networks but simpler algorithms for navigation and cleaning. These algorithms were primarily rule-based and sensor-driven. They included obstacle-avoidance routines that used infrared sensors to detect objects in the robot’s path, allowing it to maneuver around furniture and other obstacles. Bump sensors helped Roomba register collisions with walls or objects, prompting it to change direction, and cliff sensors were another critical feature, preventing it from tumbling down stairs or off ledges.
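
A rough sketch of what such rule-based, sensor-driven logic might look like is shown below. The sensor names, thresholds, and actions are invented for illustration and do not reflect iRobot’s proprietary firmware.

```python
import random

def next_action(sensors):
    """Pick an action from simple prioritized rules (hypothetical sensor fields)."""
    if sensors["cliff_detected"]:           # cliff sensors: never drive off a ledge
        return "back_up_and_turn"
    if sensors["bump_triggered"]:           # bump sensors: we hit something
        return "reverse_then_rotate"
    if sensors["ir_obstacle_cm"] < 10:      # infrared proximity: something ahead
        return "rotate_in_place"
    if sensors["dirt_level"] > 0.7:         # extra passes where debris is detected
        return "spot_clean"
    return "drive_forward"

# Example tick of the control loop with fake sensor readings.
reading = {
    "cliff_detected": False,
    "bump_triggered": False,
    "ir_obstacle_cm": random.uniform(5, 50),
    "dirt_level": 0.2,
}
print(next_action(reading))
```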

While the first Roomba models followed relatively random navigation, later iterations employed advanced mapping algorithms. This enabled them to create detailed maps of rooms, allowing for more systematic and efficient cleaning patterns and the ability to resume cleaning after recharging. Additionally, some Roomba models incorporated more advanced techniques, such as “Dirt Detect” algorithms that focus on areas with higher debris concentrations, enhancing cleaning efficiency.

Roomba’s pioneering success opened the door to a market for household robots, inspiring innovations like robotic lawnmowers, pool cleaners, and window washers that followed a similar design concept. Roomba marked the beginning of a new era in household robotics, enriching our daily lives and buying us newfound time in various ways.

The Building Blocks of Self-Driving Cars

Self-driving cars are one of the most fascinating cases of robotic mobility. Self-driving vehicles promise to reduce traffic congestion in urban areas, optimize routes, facilitate efficient car-sharing arrangements, and revolutionize parking management. This transformation could lead to reduced air pollution and improved urban planning. Additionally, self-driving cars can enhance safety by minimizing human errors, provide accessibility solutions for those with limited mobility, and offer a practical alternative to air travel for mid-range distances. Furthermore, these autonomous vehicles can reshape how people commute and work, allowing passengers to engage in productive and enjoyable activities.

Self-driving cars are enabled by LiDAR sensors and by deep neural networks of two main kinds: convolutional neural networks (CNNs) and recurrent neural networks (RNNs).

LiDAR is a technology dating back to the 1960s, but it was in the 2000s that LiDAR revolutionized mobility for driverless cars and other mobile robots [Taranovich]. Mountable nearly anywhere on a car due to their small form factor, LiDAR sensors work by emitting rapid sequences of laser pulses that bounce back to the sensor after hitting objects, enabling precise distance calculations. LiDAR’s ability to provide accurate information in diverse weather and lighting conditions makes it a critical component for enhancing the safety and autonomy of self-driving vehicles.
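
The underlying time-of-flight arithmetic is simple, as this small sketch illustrates (the pulse timing is a made-up example value):

```python
# LiDAR principle: emit a laser pulse, time the round trip, convert time to distance.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(t_seconds):
    """Round-trip time -> one-way distance (divide by 2: the pulse travels out and back)."""
    return SPEED_OF_LIGHT * t_seconds / 2.0

# A return pulse arriving 200 nanoseconds after emission corresponds to roughly 30 m.
print(f"{distance_from_round_trip(200e-9):.1f} m")
```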

Deep neural networks, in turn, are crucial for processing the data those sensors produce. We discussed them in detail in Chapter 6. Self-driving cars leverage both main kinds: convolutional networks (CNNs) and recurrent networks (RNNs).

CNNs excel at machine vision. By harnessing the capabilities of CNNs, self-driving cars can identify and interpret the complex visual cues necessary for safe navigation. Moreover, CNNs are able to digest high-resolution 3D environment maps from the LIDAR sensors, including the positions of other vehicles, pedestrians, obstacles, road signs, and lane markings. Self-driving car algorithms rely on this detailed perception of the surroundings to make informed decisions about navigation, obstacle avoidance, lane keeping, and compliance with traffic rules.
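
Purely as an illustration, the following PyTorch snippet sketches a toy CNN that scores a handful of hypothetical object classes from a single-channel bird’s-eye-view grid derived from LiDAR returns. The architecture, grid size, and class list are invented assumptions and bear no relation to any production self-driving stack.

```python
import torch
import torch.nn as nn

class TinyPerceptionCNN(nn.Module):
    def __init__(self, num_classes=4):   # e.g. vehicle, pedestrian, sign, free space
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, num_classes),   # assumes a 64x64 input grid
        )

    def forward(self, x):
        return self.classifier(self.features(x))

bev_grid = torch.rand(1, 1, 64, 64)        # fake 64x64 occupancy grid built from LiDAR
logits = TinyPerceptionCNN()(bev_grid)
print(logits.shape)                        # torch.Size([1, 4])
```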

That is when RNNs come into play for handling sequential data and decision-making processes. RNNs are used for trajectory prediction, route planning, and real-time control. RNNs enable self-driving cars to analyze the temporal aspects of driving, such as predicting the future movements of other vehicles and pedestrians and continuously adjusting their actions to ensure safe and efficient driving.
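
Again as a hedged sketch rather than any vendor’s real model, a minimal recurrent network for trajectory prediction might look like this: it takes a short history of observed (x, y) positions of a nearby vehicle or pedestrian and predicts the next position. All sizes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class TinyTrajectoryRNN(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)    # predict the next (x, y)

    def forward(self, track):
        # track: (batch, time_steps, 2) sequence of past positions
        out, _ = self.lstm(track)
        return self.head(out[:, -1, :])          # use the last time step's state

past_track = torch.cumsum(torch.rand(1, 10, 2), dim=1)   # a fake 10-step track
next_position = TinyTrajectoryRNN()(past_track)
print(next_position.shape)                                # torch.Size([1, 2])
```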

The Military Origins of Self-Driving Cars

The concept of autonomous vehicles first appeared in 1939 when General Motors (GM) astounded the world at the New York World’s Fair with “Futurama.” This concept showcased an automated highway with autonomous cars, providing a glimpse of a future where machines could drive on their own [Geddes].

However, it was the US Military — more specifically, DARPA (Defense Advanced Research Projects Agency) — that first seriously recognized the potential of autonomous vehicles to perform critical missions without risking human lives. Autonomous vehicles could be deployed in dangerous environments for reconnaissance missions or supply deliveries, reducing the need for human intervention in high-risk scenarios.

From 1984 onwards, DARPA started funding Carnegie Mellon University self-driving car research projects [Wallace et al.]. These projects made rapid progress. In 1985, a driverless car reached speeds of 19 mph on two-lane roads. Breakthroughs in obstacle avoidance followed in 1986, and by 1987, their vehicles could operate off-road day and night [Pomerleau]. Notably, in 1995, a Carnegie Mellon car achieved a remarkable feat, becoming the first autonomous vehicle to travel nearly 3,000 miles across the US, from Pittsburgh to San Diego, autonomously covering 98% of the journey at an average speed of 60 mph [Carnegie Mellon].

In Europe, the University of the Federal Armed Forces Munich spearheaded similar advancements. In 1995, one of their cars, with robot-controlled throttle and brakes, traveled over 1,000 miles from Munich to Copenhagen and back, reaching speeds of up to 120 mph. Given the speed and the distance on crowded roads, the vehicle periodically performed overtaking maneuvers, with a safety driver intervening only in critical situations.

Progress had been underway for autonomous cars in the preceding decades, but the definitive turning point occurred when DARPA orchestrated the DARPA Grand Challenge in 2004 [Buehler]. This competition brought together teams from universities and private companies tasked with developing autonomous vehicles capable of navigating a challenging 150-mile route through the Mojave Desert. DARPA’s primary goal was to leverage emerging technology to enhance military capabilities and safety.

Unfortunately, none of the participating autonomous vehicles completed the entire course in 2004, and DARPA repeated the competition in 2005, resulting in the victory of “Stanley,” a modified Volkswagen Touareg. Stanley’s success was attributed to various innovations, including an AI algorithm trained on the driving behaviors of real-world humans and the integration of five LIDAR laser sensors. This technological arsenal enabled the car to detect objects within an 80-foot range in front of it and to react to them adequately. After Stanley’s success, LIDAR became a vital component of virtually all subsequent robotic vision systems for automobiles. The runner-up was a Carnegie Mellon team called the “Red Team.”

The third edition of the DARPA Grand Challenge, known as the “Urban Challenge,” was held at a logistics airport in California in 2007 [Markoff]. The competition covered a 60-mile course through urban terrain, requiring participants to adhere to traffic regulations, navigate around other vehicles and obstacles, and seamlessly merge into traffic. A team from Carnegie Mellon called “Tartan” won the race with a modified Chevy Tahoe, while second place went to a team from Stanford University with a Volkswagen Passat under the name “Stanford Racing.” The teams behind these four cars — “Stanley,” “Red Team,” “Tartan,” and “Stanford Racing” — would make history.

Twenty-year-olds Build the Self-driving Car Industry

Many participants in the DARPA challenges went on to establish their own startups in the field of autonomous cars. For example, David Hall, a Grand Challenge competitor, turned his company Velodyne into the leading supplier of LiDAR (Light Detection and Ranging) sensors for the field. Concurrently, technology giants and automakers started hiring participants from the DARPA challenges and making substantial investments in autonomous vehicle research. Many members of the “Stanley,” “Red Team,” and “Stanford Racing” teams joined Google, leading to the launch of Google’s self-driving car project in 2009. One of those hires was Anthony Levandowski, a controversial figure we will return to at length in Chapter 29, particularly regarding his “Church of AI.”

Between 2009 and 2015, Google invested $1.1 billion in its self-driving car research and operationalization [Ohnsman], and by 2012, Google’s cars had logged over 300,000 miles of autonomous driving on public roads, marking significant progress [Rosen]. Google also obtained the first driverless car license in Nevada [Ryan]. In 2016, the project was rebranded as Waymo and became a separate entity within Alphabet. “Waymo” was derived from “a new WAY forward in MObility” [Sage].

At the outset of the self-driving car program, Google utilized LIDAR systems from Velodyne. A significant technological advancement occurred in 2017 when Waymo introduced its own set of sensors and chips developed in-house, which were more cost-effective to manufacture than Velodyne systems. This led to a 90% reduction in costs, and Waymo applied this technology to its expanding fleet of cars [Amadeo]. As of January 2020, Waymo had achieved an impressive 20 million miles of autonomous driving on public roads, and its progress has continued.

However, Google was not the only player in the field of autonomous cars. In 2015, under the leadership of Elon Musk, Tesla introduced the Autopilot feature, offering advanced driver-assistance functions based on a combination of cameras, radar, and ultrasonic sensors. Tesla also provided over-the-air software updates to enhance and expand Autopilot’s capabilities [Associated Press].

Uber and Google’s competition intensified and eventually spilled into the courts. In 2016, Anthony Levandowski left Google, created his own self-driving startup, Otto, and sold it to Uber almost immediately [Statt and Merendino], making a windfall in the process. The acquisition resulted in legal disputes between Waymo and Uber; in 2019, Levandowski was charged with 33 federal counts of stealing trade secrets related to self-driving cars, and he was later sentenced to 18 months in prison. However, he was pardoned on the last day of then-US President Donald Trump’s presidency [Byford et al.]. Eventually, Uber quit the race for self-driving cars and in 2020 sold its self-driving unit to Aurora Innovation, a self-driving car company whose roots trace back to the “Red Team” of the DARPA Grand Challenge.

It is remarkable to consider that teams of twenty-year-olds who convened in a university competition ultimately played such a pivotal role in shaping the self-driving car industry.

By 2016, the traditional automotive players started following the twenty-year-olds and got into the game. General Motors, which had historically been the US automotive leader in AI and robotics (it also owned Hughes Electronics), moved strategically into self-driving cars by acquiring Cruise Automation, a San Francisco-based startup with valuable autonomous vehicle technology. Cruise became a GM subsidiary, and in 2017, GM introduced Super Cruise, a hands-free driver-assistance system enabling limited autonomous driving on specific mapped highways, one of the early semi-autonomous systems in production vehicles. In 2020, GM unveiled the Cruise Origin, a self-driving electric car designed for ride-sharing and autonomous mobility services, notable for its lack of traditional driver controls and its emphasis on full autonomy. Following GM, other conventional automakers, including Ford, BMW, and Audi, also entered the field of autonomous vehicles with ambitious plans.

Autonomous Vehicles and the Promise that Never Comes

Elon Musk famously stated in 2015 that fully autonomous “anywhere” driving would be available within two or three years, and Lyft CEO John Zimmer forecast in 2016 that car ownership would “all but end” by 2025. However, former Waymo CEO John Krafcik cautioned in 2018 that autonomous robot cars would take longer than anticipated. The reality, as of 2024, is that cities will not see self-driving cars on their streets at any meaningful scale anytime soon [Mims].

One critical challenge in scaling autonomous vehicles lies in addressing the myriad of unpredictable scenarios on the road, such as sudden weather changes or unexpected human behaviors. Achieving autonomy that seamlessly adapts to these dynamic conditions is a formidable task for AI. A robust communication infrastructure, including Vehicle-to-Everything (V2X) communication, is required to enable vehicles to communicate with each other and with intelligent infrastructure elements like traffic lights and road signs, enhancing safety and efficiency [Dow]. Furthermore, vehicles must incorporate redundant systems to ensure safety: if one system fails, backup mechanisms should take control and bring the car to a safe stop. Additionally, the need for extensive infrastructure changes, comprehensive regulatory frameworks, and robust connectivity between vehicles and their environment pushes the deployment timeline for autonomous cars out even further. The industry’s shift towards prioritizing safety over rapid deployment, particularly in light of notable accidents, indicates that fully self-driving cars are likely decades from becoming commonplace [Devulapalli].
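
The redundancy principle can be sketched in a few lines: a supervisor trusts the primary driving system only while it is healthy and reporting in, and otherwise falls back to a minimal-risk maneuver. The function and field names below are invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class SystemStatus:
    healthy: bool
    last_heartbeat_ms: int   # time since the primary system last reported in

HEARTBEAT_TIMEOUT_MS = 200

def select_command(primary: SystemStatus, primary_command: str) -> str:
    """Use the primary system's command unless it is faulty or silent."""
    if primary.healthy and primary.last_heartbeat_ms < HEARTBEAT_TIMEOUT_MS:
        return primary_command
    # Fallback path: ignore the primary and execute a minimal-risk maneuver.
    return "controlled_stop_in_lane"

print(select_command(SystemStatus(healthy=True, last_heartbeat_ms=50), "follow_planned_route"))
print(select_command(SystemStatus(healthy=False, last_heartbeat_ms=500), "follow_planned_route"))
```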

In the meantime, semi-autonomous cars, also known as Conditional Automation, will be the norm, a stepping stone toward the end state. These vehicles can handle most driving tasks, much like a plane on autopilot, but may require human intervention in specific situations [Dow].

Transforming Logistics with Robots at Amazon

While participants geared up for the DARPA Grand Challenges in self-driving cars on the US West Coast, a wave of innovation was unfolding in logistics on the East Coast, centered at MIT.

In 2003, a robotics startup called Kiva Systems was founded in Boston by Mick Mountz, an MIT alumnus [Guizzo]. Kiva engineered a fleet of small, wheeled robots called AGVs (Automated Guided Vehicles). These AGVs autonomously navigated inside warehouses and transported shelving units to human workers, dramatically reducing the time and effort needed for order fulfillment. The AGVs employed a simple yet effective approach: lifting an entire shelving unit, transporting it to a designated picking station, and presenting the required items to human workers. This streamlined the order-picking process, eliminated the need for employees to traverse long distances within increasingly larger-scale warehouses, and improved order accuracy. Kiva’s system utilized grid-based navigation, allowing robots to follow predefined paths on the warehouse floor.
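
Conceptually, grid-based navigation of this kind reduces to shortest-path search on a grid of open and blocked cells, as in the illustrative sketch below. The warehouse layout is made up, and real AGV fleets layer traffic management, cell reservations, and charging logistics on top of such a planner.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # walk the parent links back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

warehouse = [                      # 0 = open aisle, 1 = shelving rack
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
]
print(shortest_path(warehouse, (0, 0), (2, 4)))
```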

Amazon, the world’s largest e-tailer, faced steadily compressing margins across most of its selection and had an ongoing need to improve the efficiency of its vast network of warehouses and fulfillment centers. Amazon recognized the potential of Kiva’s technology and acquired the company in 2012, rebranding it as Amazon Robotics. This acquisition marked a turning point in the warehousing industry. Amazon Robotics expanded upon Kiva’s foundation, leading to the development of what are generically called Autonomous Mobile Robots (AMRs). Amazon prefers to refer to these robots as “Amazon Drive Units” or simply “drives.” Amazon Drives are improved AGVs. Some of Amazon’s most prominent Drive models are the Pegasus, Xanthus, and Hercules [The Economist].

Amazon started equipping Drives with sensors, cameras, and LiDAR — the same technology used by self-driving cars. This allowed them to navigate the warehouse autonomously while avoiding obstacles, including humans. Unlike Kiva’s AGVs, Drives do not rely on fixed infrastructure like magnetic strips; they use advanced AI algorithms for path planning, offering greater flexibility in adapting to changing warehouse form factors, layouts, and tasks. As a result, Drives were able to optimize travel paths and minimize congestion, resulting in faster order processing and higher throughput.

Moreover, while Kiva’s AGVs primarily focused on goods-to-person workflows, Drives are more versatile and can be configured for inventory replenishment and other warehouse operations. Drives run more sophisticated algorithms, particularly deep neural networks — both CNNs and RNNs — used much as self-driving cars employ them. Drives also implement advanced coordination algorithms, enabling them to collaborate with other robots and with human workers. Amazon’s drives are designed to work seamlessly with the workforce, enhancing workers’ capabilities rather than merely delivering products to them. This collaboration results in a more dynamic and efficient fulfillment process.

This collaboration between humans and robots was already one of the mantras in industrial robotics. Collaborative robots called cobots had already appeared in the late 1990s, and we covered their history in Chapter 12.

The significance of warehouse robots became even more pronounced with the introduction of Prime Day in 2015, now the largest single online shopping event in the world, during which over 375 million individual items are ordered, processed, and shipped in a short timeframe, underscoring the imperative for efficiency in fulfillment. As Amazon’s customer base expanded, the company maintained substantial investments in robot research and development, continuously improving its robots’ adaptability and efficiency.

Furthermore, in 2019, Amazon Robotics strategically acquired Canvas Technology, a company offering a unique and highly advanced technology that could make robots even more autonomous than Amazon’s existing warehouse drive fleet. Canvas robotic carts were equipped with cutting-edge computer vision, AI, and depth-sensing technologies, enabling them to perceive and interact with their surroundings in real-time and create 3D maps. Unlike traditional drives, Canvas carts required no predefined maps and could adapt to changing environments using computer vision technology. They could work alongside human workers in shared spaces, performing tasks requiring skill and perception, such as bin picking and quality control.

Although Drive Units are certainly the most iconic of all Amazon robots, Amazon employs many other specialized industrial robots for specific tasks in its logistics centers, such as retrieving, sorting, picking, and packing. Many of them are advanced iterations of the initial Unimate design by George Devol and Joseph Engelberger, which we also discussed in Chapter 12.

The proliferation of robots within Amazon’s operations has been remarkable. Following the acquisition of Kiva, Amazon had already deployed 15,000 robots in its warehouses by 2014 [Shead]. Fast forward to 2019, and Amazon boasted over 200,000 robots; by 2023, the number had surged to 750,000 robots worldwide [Knight].

Last-Mile Delivery: One Litmus Test for Robot Acceptance

The so-called “last mile problem” appears persistently in every scaled delivery industry, from broadband, which essentially involves delivering data bytes over and over, to shipping, which involves repeatedly delivering physical goods, including food bites. Solving this problem is a recognized key to unlocking value at the level of the overall ecosystem. As a thin-margin business, Amazon has depended for its profitability on lowering costs and raising efficiency across the last physical steps of getting packages to customers. Amazon’s delivery strategy advanced significantly by incorporating innovative robotic solutions and delivery drones, helping to improve profitability.

Amazon Scout, the company’s autonomous delivery robot, debuted in early 2019 and has been deployed in various locations across the US. Scout is a fully electric, autonomous six-wheeled robot, approximately the size of a small cooler, that navigates sidewalks and residential areas independently, relying on an array of sensors, cameras, and AI algorithms. These robots can transport a selection of packages and are carefully designed to operate safely alongside pedestrians and pets. When a package is nearing its destination, customers receive a notification, enabling them to collect it directly from the robot. This last-mile delivery solution accelerates delivery times and minimizes the environmental impact typically associated with traditional delivery methods.

In addition to ground-based robots, Amazon has invested heavily in drone technology to improve last-mile delivery. Prime Air, Amazon’s drone delivery service, uses drones with vertical takeoff and landing capabilities, allowing them to seamlessly switch between flying and hovering modes. These drones have advanced computer vision, LiDAR, and GPS systems, enabling safe navigation, obstacle avoidance, and precise delivery location identification. Designed to accommodate packages of various sizes and weights, these drones offer versatility in delivering a wide range of products. Amazon envisions utilizing drones for ultra-fast, same-day deliveries in urban and suburban areas, providing customers with a convenient and efficient delivery option.

On the negative side, a growing number of vandalism incidents against delivery robots, starting in 2023, has cast a shadow on this emerging technology’s acceptance and adoption. Deliberate acts of damage and theft not only disrupt the efficiency of autonomous deliveries but also recall the vandalism against robots portrayed in Steven Spielberg’s 2001 film “A.I.,” in which the introduction of robots met widespread societal rejection.

The Zenith of Robot Agility with Boston Dynamics

Beyond autonomous cars and warehouse or delivery robots, humanoids are undoubtedly the quintessential example of robot mobility, and Boston Dynamics is the most emblematic firm. Founded by Marc Raibert, Boston Dynamics emerged in 1992 from the Leg Laboratory at MIT, which laid the scientific foundation for the company.

Boston Dynamics robots are well-known for their exceptional balance and skill in performing various physical tasks. They have achieved impressive feats such as traversing rough terrain, performing acrobatics, and carrying heavy payloads. That kind of movement is what prompted Elon Musk to tweet in 2017: “This is nothing. In a few years, that bot will move so fast that you’ll need a strobe light to see it. Sweet dreams…”

Boston Dynamics is renowned for two kinds of robots: quadruped robots, inspired by animals’ agile movements, and bipedal humanoid robots. We will cover their journey with quadruped robots first.

This journey began with the introduction of two robotic dogs funded by DARPA: BigDog and LittleDog. BigDog was a groundbreaking robot that showcased the company’s early ambition to create a quadruped capable of traversing challenging terrain. BigDog was designed to serve as a pack mule for soldiers. Its defining feature was its ability to carry heavy loads, up to 340 pounds, while navigating steep inclines and rocky landscapes. BigDog marked a significant leap in mobility and load-bearing capabilities for quadruped robots [Degeler]. LittleDog, considerably smaller, was not developed for a specific commercial or industrial application but rather as a research tool for improving the understanding of legged locomotion, navigation, and control algorithms. Despite an operational time limited to about 30 minutes by its lithium polymer batteries, it could crawl across rocky terrain, serving as a testbed for robotics experimentation.

The AlphaDog Proto, introduced in 2011, represented the next generation of quadrupeds. AlphaDog Proto was geared completely toward military applications. With DARPA and the US Marine Corps funding, AlphaDog Proto was engineered to carry heavy payloads, weighing up to 450 pounds, over a 20-mile mission through diverse terrains, reducing the logistical challenges in remote locations. It incorporated an internal combustion engine that significantly reduced noise, making it more suitable for military missions.

One year later, in 2012, Boston Dynamics unveiled the Legged Squad Support System (LS3), which increased the robot’s versatility and robustness. LS3 was equipped with sensors that allowed it to follow its human leader, particularly in military operations while navigating rough terrain and avoiding obstacles. Perhaps one of its most impressive features was its ability to right itself if tipped over, further enhancing its adaptability in real-world scenarios [Shachtman].

2013 marked another milestone as BigDog returned with an articulated arm resembling a long neck. The new BigDog could pick up a 40-pound cinder block and throw it up to 16 feet away. BigDog was trained to leverage its legs and its single arm to open doors, tow loads, and work in construction and disaster-response applications, where the robot could assist in lifting and moving heavy objects in challenging environments.

These first models were focused mainly on non-weaponized military operations. In 2015, however, Boston Dynamics started diversifying its robots into a broader range of industries by introducing Spot.

Spot is an electrically powered and hydraulically actuated quadruped robot [Howley]. Weighing just 180 pounds, Spot is considerably smaller than its predecessors, making it more versatile for indoor and outdoor activities. Spot’s head incorporates sensors that enable it to navigate rocky terrains and avoid obstacles during transit. Its ability to climb stairs and ascend hills further highlights its agility and adaptability. Spot finds applications in industries such as construction and agriculture, where it can perform inspections in challenging environments and provide valuable data for decision-making [Wessling].

In 2016, SpotMini was introduced as a smaller version of Spot, weighing in at 70 pounds. SpotMini was the first all-electric quadruped robot from Boston Dynamics, eliminating the need for hydraulics. This innovation extended its operational time to 90 minutes on a single charge. Equipped with advanced sensors, SpotMini demonstrates improved navigation capabilities and the ability to perform basic tasks autonomously. Additionally, it is equipped with an optional arm and gripper, like the larger Spot, enabling it to pick up fragile objects and regain balance if it encounters obstacles. SpotMini’s smaller size allows it to access tight areas, making it particularly useful for indoor and more confined spaces such as commercial inspections, security patrols, and healthcare settings where space may be limited.

2017 brought forth an improved version of SpotMini with enhanced fluid movements and robustness, even when faced with external disturbances, showcasing its reliability and adaptability in real-world environments. In 2018, Boston Dynamics introduced improved autonomous navigation capabilities into SpotMini, equipping it with a sophisticated navigation system such that it could autonomously traverse Boston Dynamics’ offices and labs, following a path previously mapped during manual operation.

It is important to note that several advances in core technology supported the integrative work done by Boston Dynamics and others during this period. LiDAR (Light Detection and Ranging) was introduced to measure distances and create precise 3D maps of the environment. This helped robots better navigate and perceive their surroundings. Innovations in cameras and depth sensors were utilized, specifically high-resolution cameras, combined with depth sensors like stereo cameras or structured light sensors, enabling Boston Dynamics’ robots to visually perceive the world and understand the depth of objects.

Robot functionality also advanced in this period due to specific advances in Machine Learning and AI Algorithms. Developers employed deep learning algorithms for tasks such as object recognition, obstacle avoidance, and path planning. Neural networks were trained on massive movement-oriented datasets to enable the robots to adapt and learn from their environment. These neural networks represent a level of complexity that surpasses, by far, those employed in self-driving cars. Some of the robots even utilized reinforcement learning techniques to improve their motor skills and movements. This involved learning from trial and error, with the robot receiving feedback on its actions.
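
The trial-and-error idea can be illustrated with a toy example: tabular Q-learning on a made-up “stay upright” task, where the agent learns from reward feedback alone which corrective nudge keeps it balanced. This is only a sketch of the general technique and says nothing about Boston Dynamics’ actual controllers, which are vastly more sophisticated.

```python
import random

ACTIONS = (-1, +1)                      # nudge left, nudge right
Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}   # state = discretized lean angle, 2 = upright
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    drift = random.choice((-1, 0, 1))                 # unmodeled disturbance
    new_state = min(4, max(0, state + action + drift))
    if new_state in (0, 4):
        return new_state, -1.0, True                  # fell over: episode ends
    return new_state, (1.0 if new_state == 2 else 0.0), False

for _ in range(2000):                                 # training episodes
    state, done = 2, False
    while not done:
        action = random.choice(ACTIONS) if random.random() < epsilon \
                 else max(ACTIONS, key=lambda a: Q[(state, a)])
        new_state, reward, done = step(state, action)
        best_next = max(Q[(new_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = new_state

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in (1, 2, 3)})  # learned corrective nudges
```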

Advances in Dynamic Balancing and Control systems also contributed to the rapid evolution of robots. For example, Inertial Measurement Units (IMUs) were incorporated to measure accelerations and angular rates and provide crucial data for stabilizing the robot and maintaining balance, along with advanced control algorithms that help robots to better maintain balance during dynamic movements. Furthermore, hydraulic actuators for precise and powerful movements were also introduced, contributing to the robots’ ability to perform dynamic and agile motions.
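
A classic building block here is the PID controller, which turns the lean angle estimated from IMU data into a corrective torque. The sketch below uses invented gains and a crude one-dimensional “inverted pendulum” purely to show the principle; real balance controllers are model-based and far more sophisticated.

```python
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = PID(kp=40.0, ki=2.0, kd=8.0)
angle, angular_velocity, dt = 0.2, 0.0, 0.01          # start leaning 0.2 rad

for _ in range(300):                                  # simulate 3 seconds of a toy inverted pendulum
    torque = controller.update(0.0 - angle, dt)       # setpoint: perfectly upright (0 rad)
    angular_velocity += (9.81 * angle + torque) * dt  # gravity tips it over, torque pushes back
    angle += angular_velocity * dt

print(f"final lean angle: {angle:.4f} rad")
```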

But perhaps the most important technology enabling the growth of robotics in this period was the use of more powerful processing units. Boston Dynamics equipped its robots with advanced CPUs and GPUs to handle complex computations; this enabled the robots to make movement decisions in real time, maintain balance on uneven terrain, respond to dynamic environmental changes, and execute intricate physical movements. It can be said of robotics — and of AI, too — that conceptual advances have generally run ahead of the processing power necessary to put them into practice. In some ways, the gasoline for the stories of both AI and robotics has been the parallel development of processing power.

All of these core technologies helped companies like Boston Dynamics develop proprietary movement-related intellectual property, creating a formidable barrier to entry for newcomers to the space.

Athletic and Acrobatic Humanoids

Boston Dynamics is equally famous for its bipedal humanoid robots, which perform incredible acrobatics on social media. The company’s flagship bipedal humanoid is Atlas; an earlier prototype from 2011, called Petman, was never commercialized and was used only for R&D purposes [Thomson].

Atlas was unveiled in 2013. DARPA initially funded this robot as well. Atlas marked a significant leap forward regarding agility, autonomy, and versatility. Standing approximately 180 cm tall and weighing 150 pounds, Atlas boasted an array of sensors, including stereo vision and a LIDAR system, enabling it to perceive and navigate its environment effectively. Atlas’s most groundbreaking aspect was its dynamic balance and mobility. It could walk, run, jump over obstacles, and perform backflips and other impressive acrobatics with remarkable precision.

Atlas has been continually updated and improved, enhancing its agility, reducing its size, and expanding its capabilities. This ongoing innovation paved the way for the exploration of various real-world applications. Atlas finds prominent use in search and rescue missions, navigating complex terrain, reaching locations inaccessible to humans, and relaying critical information. It excels in hazardous environments, including nuclear facilities, and offers potential in logistics and delivery services. Additionally, Atlas can collaborate with humans in various industries beyond the military, thanks to its agility and ability to mimic human movements, enabling tasks like manufacturing assistance or medical procedures.

Many other companies are also in the business of creating humanoids. Tesla is developing a general-purpose humanoid robot called the Tesla Bot, or Optimus. Elon Musk, the CEO of Tesla, sees the robot as a multipurpose tool to one day perform jobs that people find either objectionable or too dangerous. When used in factories or for street cleaning, the Tesla Bot could significantly reduce manual labor and boost output. In the future, the majority of factory workers and garbage collectors may well be humanoid robots, as we can easily envision their output and productivity exceeding that of humans. Additionally, the Tesla Bot’s agility and dexterity could be extremely helpful in dangerous environments during rescue operations. Its capacity to move objects and negotiate difficult terrain makes it an invaluable tool for emergencies like earthquake relief operations.

During the initial business announcement, Musk asserted that Optimus might eventually become more important than Tesla’s auto business. By 2024, Tesla hopes to have a working prototype, and by 2025, it hopes to have the robot ready for mass production. The 5 ft 8 in tall, 125 lb. Tesla Bot will be operated by the same AI system that powers Tesla vehicles. Tesla has displayed partially working prototypes that can move their arms, walk, and sort colors.

In this chapter, while we explored civilian applications for robotics, we noted the influence of the military extending widely within the sector, with DARPA in the US playing a significant role in funding numerous robotic projects such as self-driving cars and much of the output of Boston Dynamics. In the next chapter, we focus more directly on discussions about military weaponized applications.

Buy “Machines of Tomorrow” on Amazon

Review it on Amazon or on Goodreads

Author: Pedro Uria-Recio

Check us out at machinesoftomorrow.ai

Machines of Tomorrow: From AI Origins to Superintelligence & Posthumanity. How AI Will Shape Our World.
