Configuration reduction to reach sub-nanometer scale

Tom van de Ven
Technology Pioneers
3 min readNov 4, 2022

The machines used in the semiconductor industry play their game in the world of nanometers and are creeping towards Angstroms. Configurations of these machines involve hundreds of parameters, which makes them a great environment for configuration reduction. The theory of configuration reduction with evolutionary algorithms is explained in part 1 of this article series.

An Angstrom-sized client case in the semiconductor industry

The chips and memory modules in your laptop or mobile phone are made from silicon. The manufacturing process for creating chips at nanometer scale is mind-bogglingly accurate. Chip manufacturers like Intel or QUALCOMM operate multiple machines that can position layer upon layer within a chip with nanometer precision, at a volume that tries to keep up with current demand. These are so-called lithography machines. They communicate with a variety of other machines in a chip-making factory: pieces of silicon enter the factory and, after multiple passes, the finished chips exit it. This process runs 24/7 at that nanometer accuracy.

You can imagine that a software update for the lithography machines in a factory is frowned upon. To keep the factory running smoothly, operators want as few disturbances as possible. If an update is needed, the factory wants the confidence that the update works in their factory, with their specific setup and configuration.

Empirical data

There are up to 1000 parameters that can be set on a single machine in a factory. The combination with other machines adds even more variables to the mix. Setting up a test environment that covers not only the situation at Intel or QUALCOMM but also all the other factories these lithography machines operate in is a big challenge. Brute-force calculation of 5 test environments that cover as much as possible of an installed base of 800–1000 machines is simply not feasible (as shown at the beginning of this article, on the order of billions of years of computing power would be needed).
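To get a feel for the scale, here is a rough back-of-the-envelope sketch in Python. The sizes used (1000 machines, 1000 parameters, 5 test environments) are the illustrative orders of magnitude from this article, not the client's actual data:

```python
# Rough estimate of the brute-force search space (illustrative numbers only).
from math import comb

installed_base = 1000   # machines in the field (order of magnitude from the article)
n_params = 1000         # configurable parameters per machine
n_envs = 5              # test environments we can afford to build

# Even just enumerating every possible choice of 5 machines as test environments...
candidate_subsets = comb(installed_base, n_envs)             # ~8.25e12 subsets

# ...and scoring each subset against every parameter of every machine...
comparisons = candidate_subsets * installed_base * n_params  # ~8e18 comparisons

print(f"{candidate_subsets:.2e} candidate subsets, ~{comparisons:.2e} comparisons")
# And this is only subset selection: exploring combinations of parameter *values*
# grows exponentially (e.g. 2^1000 for binary parameters), which is where the
# billions of years of computing power come from.
```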

This is where AI comes into play. We use an evolutionary algorithm to realise two things:

- calculate a set of 1000 parameters that is representative of the installed base

- ensure the result is a working combination of parameters

The evolutionary algorithm is particularly useful in this situation. Evolutionary algorithms give good approximate solutions to problems that cannot easily be solved with other techniques. Many optimisation problems fall into this category: it is too computationally intensive to find an exact solution, but a near-optimal solution is sufficient. That is exactly what we want here.
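As a minimal sketch of how such an algorithm can be set up (not the actual implementation used for the client), consider selecting 5 machine configurations from the installed base so that they match as many parameter values in the field as possible. The coverage definition and all sizes below are simplified assumptions for illustration:

```python
import random

# Simplified assumption: a configuration is a vector of discrete parameter values,
# and "coverage" is the fraction of parameter values in the installed base that
# also occur in at least one selected test environment. Sizes are kept small so
# the sketch runs quickly; the client case is ~1000 machines x up to 1000 parameters.
N_MACHINES, N_PARAMS, N_ENVS = 100, 50, 5
installed_base = [[random.randint(0, 9) for _ in range(N_PARAMS)]
                  for _ in range(N_MACHINES)]          # placeholder data

def coverage(selection):
    """Fraction of installed-base parameter values matched by the selected environments."""
    hits = sum(
        1
        for machine in installed_base
        for p, value in enumerate(machine)
        if any(installed_base[env][p] == value for env in selection)
    )
    return hits / (N_MACHINES * N_PARAMS)

def evolve(pop_size=20, generations=40, mutation_rate=0.3):
    # An individual is a set of N_ENVS machine indices proposed as test environments.
    population = [random.sample(range(N_MACHINES), N_ENVS) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=coverage, reverse=True)    # fitness = coverage
        survivors = population[: pop_size // 2]        # keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)         # crossover of two parents
            child = list(dict.fromkeys(a[: N_ENVS // 2] + b[N_ENVS // 2 :]))
            while len(child) < N_ENVS:                 # repair dropped duplicates
                candidate = random.randrange(N_MACHINES)
                if candidate not in child:
                    child.append(candidate)
            if random.random() < mutation_rate:        # swap one environment at random
                child[random.randrange(N_ENVS)] = random.randrange(N_MACHINES)
            children.append(child)
        population = survivors + children
    best = max(population, key=coverage)
    return best, coverage(best)

best_envs, best_coverage = evolve()
print(f"Selected environments: {best_envs}, coverage: {best_coverage:.1%}")
```

Each generation keeps the subsets with the highest coverage and recombines them, so the search converges on a small set of representative environments in minutes rather than the astronomical time a brute-force search would need.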

How we arrive at such a set of configurations is explained in the search for the optimal configuration set (chapter 2.2).

For our customer in the lithography industry, we used this algorithm to create a set of 5 representative test environments. This set covers 93% of the installed base, and the calculation time needed to arrive at it is on the order of a couple of minutes.

Conclusion

An evolutionary algorithm is a great mechanism for computationally intensive problems with large populations. A 1000-parameter problem with an installed base of roughly 1000 devices can be covered up to 93% with 5 test environments. The calculation time is 10 minutes at most.

With brute-force calculation, 100% coverage would be possible, but the calculation time would be near infinite. In this case the evolutionary algorithm is the best alternative for generating test environments. Running automated test sets in these 5 environments gives a chip manufacturer the confidence to update the software of its lithography machines without breaking production.
