3 Artificial Intelligence Achievements of 2020 That Will Blow Your Mind

Sofia Merenych
Published in Geek Culture
Apr 9, 2021 · 8 min read

The pandemic and the development of Covid-19 vaccines dominated the news in 2020. But that doesn’t mean the rest of the scientific world stood still.

In the field of artificial intelligence, 2020 was rich in discoveries and inventions. Today, we review three stunning achievements of AI that no one believed were possible just a few years ago: the solution to the protein folding problem, a language model with more background knowledge than any living being could ever have, and the most advanced self-driving system available in consumer cars.

AlphaFold introduces a solution to the protein folding problem

Protein folding has been a major challenge in biology for half a century. Virtually every process in every living organism depends on proteins and their specific functions, which derive from their structure.

Proteins are basically chains of amino acids that can have various lengths. About 500 naturally occurring amino acids have been identified, and 20 of them are encoded in the human genome. Amino acid chains spontaneously fold into three-dimensional shapes, and it’s truly hard to predict the structure of the folded protein. The molecular biologist Cyrus Levinthal famously estimated that a typical protein has around 10^300 possible conformations, and only one of this astronomical number is the native state for a specific amino acid sequence: the state in which the protein properly executes its function. A malfunctioning protein can cause various pathologies or affect other proteins in the organism.
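
To get a feel for where a number like 10^300 comes from, here’s a back-of-the-envelope version of Levinthal’s argument in Python. The chain length and the per-residue conformation count below are illustrative assumptions, not Levinthal’s exact figures:

```python
# Back-of-the-envelope Levinthal estimate (illustrative numbers).
residues = 300                  # length of a medium-sized protein chain
conformations_per_residue = 10  # assumed rough value per amino acid

total = conformations_per_residue ** residues
print(f"~10^{len(str(total)) - 1} possible conformations")
# ~10^300, yet a real chain folds to its single native state in milliseconds.
```

Checking each conformation one by one would take longer than the age of the universe, which is exactly why a fast predictive model matters.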

Scientists study proteins in laboratories and can determine the 3D structure of a folded protein with around 90% accuracy, but investigating a single protein may take around two years. Given that a few hundred million amino acid sequences are already known (and new proteins are discovered every year), the picture isn’t inspiring. Or at least it wasn’t until 2020, when DeepMind first published the achievements of AlphaFold.

Statistics of CASP competitions between 2008 and 2020 (source: DeepMind)

Since 1994, the scientific world has followed the biennial Critical Assessment of Structure Prediction (CASP), an international competition between software built to predict protein structures. Computational methods were only able to predict a protein’s structure with 30% to 40% accuracy until 2018, when the first version of AlphaFold made a breakthrough and crossed the 50% threshold.

Two years later, in 2020, the team achieved an incredible median score of 92.4 on CASP’s 100-point accuracy scale, a level of accuracy comparable to that of expensive and time-consuming experimental methods.
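
A note on what that 92.4 means: CASP scores predictions with the Global Distance Test (GDT), roughly the percentage of amino acid residues whose predicted positions fall within set distances of the experimentally determined ones. Here’s a minimal sketch of the common GDT_TS variant, assuming the two structures are already superimposed (real scoring also searches over superpositions, which is omitted here):

```python
import numpy as np

def gdt_ts(predicted: np.ndarray, actual: np.ndarray) -> float:
    """GDT_TS: mean fraction of residues within 1, 2, 4, and 8 angstroms.

    Both arguments are (N, 3) arrays of C-alpha atom coordinates,
    assumed to be already optimally superimposed.
    """
    distances = np.linalg.norm(predicted - actual, axis=1)
    fractions = [(distances <= cutoff).mean() for cutoff in (1.0, 2.0, 4.0, 8.0)]
    return 100.0 * float(np.mean(fractions))

# A perfect prediction scores 100; AlphaFold's CASP14 median was 92.4.
```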

How does AlphaFold work and what does its success mean for the world?

The DeepMind engineers behind AlphaFold trained their neural network on the publicly available 3D structures of around 170,000 known proteins. The training process took a few weeks and involved around 200 graphics processors.

Having learned from all of this data, AlphaFold can now predict the 3D structure of a new protein within hours, compared to the years that lab analysis takes.

This is a step toward understanding many diseases. For example, the new virus SARS-CoV-2 consists of about 30 proteins, and at the time, 10 of them had not yet been studied. AlphaFold was able to predict the structures of these proteins, enabling scientists to better study the virus without waiting for laboratory results. This deep learning solution to the protein folding problem gives us the opportunity to react to new diseases faster and more effectively in the future.

The actual and AlphaFold-predicted 3D structures of two proteins (source: DeepMind)

However, this technology will find applications outside of the healthcare industry as well. For example, understanding protein folding regularities may help in finding enzymes that can break down industrial waste.

You can read more about the protein-folding challenge and DeepMind’s solution in this blog article about AlphaFold’s success.

GPT-3 gets ready to take over any language task

OpenAI is a major player in the artificial intelligence industry, releasing impressive solutions one after another. In June 2020, they introduced GPT-3, a new-generation language model with outstanding capabilities. Its power lies in its background knowledge: GPT-3 was trained on a huge portion of the text available on the internet. The entirety of English Wikipedia accounts for only about 3% of its training data.

All of this makes OpenAI’s creation not only knowledgeable but also able to apply that knowledge in context.

For example, after analyzing the whole of Shakespeare’s work, GPT-3 can write new passages in Shakespeare’s manner that even careful readers struggle to tell from the original. It mimics his writing style, archaic language, and plot devices. Sounds fascinating, doesn’t it?

What is GPT-3 capable of doing?

The possibilities of this language model are impressive.

  • Text generation
    GPT-3 can write any type of text such that you’ll struggle to guess whether it was generated by a machine. The Guardian has even published an article about GPT-3 written by GPT-3. Check it out and tell us you’re not impressed.
  • Translation
    We’re not talking about the poor machine translation that inspires memes. GPT-3 produces adequate, context-aware translations into many languages, including low-resource ones (see the API sketch after this list).
  • Software development
    Surprised to see it here? You shouldn’t be: code is also a language, and a language model can learn it. Give GPT-3 instructions in plain English describing a layout, and it will design and code a page for you. It works with the most popular programming languages, but the code it writes isn’t production-ready yet. You’ll still need a software engineer to polish it, but we see GPT-3 as a major step toward software development automation.
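
For a sense of what working with GPT-3 looks like, here’s a minimal sketch using OpenAI’s Python client as it worked at the time of writing. The prompt and parameter values are illustrative, and you need your own API key:

```python
import openai

openai.api_key = "sk-..."  # your own secret key

# Ask the base GPT-3 engine to continue a few-shot translation prompt.
response = openai.Completion.create(
    engine="davinci",
    prompt="Translate English to French:\n\nsea otter => loutre de mer\ncheese =>",
    max_tokens=16,
    temperature=0.3,
    stop=["\n"],
)
print(response.choices[0].text.strip())  # e.g. "fromage"
```

The same completion endpoint handles all of the tasks above; only the prompt changes, which is exactly what makes the model so flexible.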

The OpenAI language model can also create music, write text summaries, provide customer support, brainstorm content-relevant ideas for the given context, etc. You can find out more about possible applications of GPT-3 technology in this article.

In January 2021, OpenAI introduced DALL·E, an image generation tool based on GPT-3. It’s quite simple to use: give it text instructions and the neural network will generate a set of images. Even though this tool isn’t available for wide use yet, a brief look at these pictures suggests that designers and illustrators may have reason to fear that machines will take their jobs.

Examples of avocado-shaped chairs generated by DALL·E (source: OpenAI blog)

GPT-3 isn’t perfect yet. For one thing, it learned from content scraped from across the internet, including Twitter and fake news websites. Text generated by GPT-3 is not always factually correct, and it is sometimes offensive. But the model is still improving, and we hope this technology will soon be able to recognize and avoid inappropriate output. At any rate, GPT-3 is a breakthrough in AI that’s already worth talking about.

Tesla presents Full Self-Driving in beta — the best level 2 driving automation so far

In October 2020, a few Tesla vehicles received the long-awaited Full Self-Driving (FSD) feature.

However, despite the name, Tesla is still far from creating a fully autonomous car. Some explanation is needed.

There are six levels of driving automation, from 0 to 5:

  • Level 0: The system can send warnings and notifications but doesn’t control the vehicle.
  • Level 1 (hands-on): The system can adjust the vehicle’s speed, assist with parking, etc., but the driver must be constantly involved in driving and keep their hands on the wheel.
  • Level 2 (hands-off): The system can control the vehicle, but the driver must be ready to retake control at any time. Some vehicles with level 2 driving automation monitor hand contact with the wheel or track the driver’s eyes to make sure they’re paying attention and are ready to intervene.
  • Level 3 (eyes off): The driver can text or watch videos while the system takes full control of the vehicle. However, the driver must be ready to return to manual control within a limited amount of time if needed.
  • Level 4 (mind off): The driver can leave the driver’s seat or go to sleep — no attention is required. However, this is allowed only in limited areas under special circumstances. When the car leaves one of these areas or the conditions change, the system automatically parks the car if the driver doesn’t retake control.
  • Level 5 (steering wheel optional): The fully autonomous system requires no manual control.

Tesla is currently at level 2: the system still requires the driver to monitor the situation on the road and be ready to intervene if it fails. Still, this is the most advanced technology currently available to ordinary consumers.

What is Tesla’s updated FSD currently capable of?

Tesla’s new Full Self-Driving technology can detect other vehicles on the road, pedestrians, trees, and other objects. However, it still has issues with recognizing the true dimensions of some objects. For example, sometimes Full Self-Driving fails to distinguish between a box truck and a semi-truck.

Nonetheless, vehicles with Full Self-Driving recognize road markings, traffic lights and signs, parked cars and moving vehicles, pedestrians, and other objects. The system can also successfully perform right and left turns in various traffic conditions. This makes Tesla FSD ready for residential streets.

Why is FSD available only for a limited number of users?

Tesla’s Full Self-Driving is based on machine learning, so the system has to learn. Elon Musk says Tesla is going to use millions of “cars that are providing feedback, and specifically feedback on strange corner-case situations that you just can’t even come up with in simulation.”

The more FSD is used in real-world conditions, the better it gets. Tesla gathers data from cars driving on the road, analyzes it, and uses it to determine what safe behavior looks like when FSD is engaged. Over time, the system generalizes from this data and will be able to navigate even roads that no Tesla has driven before.
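
Tesla hasn’t published this pipeline, so the following is purely a hypothetical sketch of the feedback loop described above; none of these names correspond to real Tesla software:

```python
from dataclasses import dataclass, field

@dataclass
class Car:
    """Hypothetical stand-in for one fleet vehicle (not a real Tesla API)."""
    model_version: int = 0
    flagged_clips: list = field(default_factory=list)

    def drive(self, situation: str, handled_well: bool) -> None:
        # Cars flag corner cases the current model handled poorly.
        if not handled_well:
            self.flagged_clips.append(situation)

def fleet_learning_step(fleet: list, dataset: list, version: int) -> int:
    # 1. Collect flagged corner cases from the whole fleet.
    for car in fleet:
        dataset.extend(car.flagged_clips)
        car.flagged_clips.clear()
    # 2. Retrain on the grown dataset (a real system would train a network here).
    version += 1
    # 3. Ship the updated model back to every car over the air.
    for car in fleet:
        car.model_version = version
    return version
```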

Some users who have had a chance to test FSD claim that the car behaves much like a human driver would. For example, “the vehicle inched out into the opposite lane of traffic to assert itself before making the turn,” one Tesla user shared.

Elon Musk claims that Tesla is going to test the Autopilot upgrade on the streets very cautiously. Owners of compatible Tesla vehicles will receive the update gradually.

Despite the pandemic, 2020 was successful for artificial intelligence and brought us a few developments that we believed were still decades away. I’m already curious what I’ll be writing about one year from now.

What AI surprises await us in 2021?
