Oftentimes, we find ourselves faced with uncertainty. Our day-to-day choices and decisions are often random and sporadic, much to the chagrin of mathematicians and scientists eager to develop foolproof, deterministic models and formulas that perfectly explain the world around us. Uncertainty is inevitable! It is often difficult to build accurate, precise models for unexplainable, exceedingly complex, or unquantifiable scenarios. For problems of this nature, mathematicians and statisticians develop simulations, hypothetical sandboxes of sorts, which try to capture the uncertainty in the data's parameters as best as possible. Enter: Monte Carlo Simulations (MCS).
Development of MCS
In the early 1930s, the physicist Enrico Fermi began using mathematical modeling techniques to study neutron diffusion. He found it entertaining to impress his colleagues by making “too-good-to-believe” predictions of their experimental results before they came out. He then explained to his peers how he used statistical sampling over numerous iterations to make accurate predictions from the data he had. This technique was later recognized as the first generation of the Monte Carlo Simulation!
Although the simulation Fermi created isn't widely used today, it was instrumental in the creation of the first modern, computational iteration of the MCS, proposed in the '40s by Stanislaw Ulam and his colleagues. Ulam was inspired by his endless addiction to solitaire. He tried to find the probability of winning a game of Canfield solitaire by plotting every outcome and building a distribution from that data. Once Ulam realized the potential this random sampling method could have in mathematical physics, he shared his ideas with John von Neumann. Together they developed a version of the Monte Carlo simulation many scientists still use. During its development, they chose a code name to keep their breakthrough under wraps: Monte Carlo, after the Monte Carlo Casino in Monaco, where random outcomes are central to victory, just as they would be in the simulation!
How Does it Work?
The essence of MCS is taking advantage of random sampling and modified inputs to extract output data. In these situations, we assume that many unpredictable and/or unknown factors produce a particular outcome. Instead of fully deciphering the intricacies of these factors, we can simply poll the problem or unknown distribution at hand with varying inputs and observe the resulting outcomes or trends.
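To make this concrete before we get to dice, here is the textbook illustration of random sampling extracting an answer: estimating pi by throwing random points at the unit square and counting how many land inside the quarter circle. This is a minimal sketch; the sample count and seed are arbitrary choices.

```python
import random

def estimate_pi(n_samples=100_000, seed=42):
    """Estimate pi by sampling random points in the unit square and
    counting the fraction that fall inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # Area of quarter circle / area of square = pi/4
    return 4 * inside / n_samples

print(estimate_pi())
```

No geometry is "solved" here; we simply poll the system with random inputs, and the trend in the outputs converges on the answer as the sample count grows.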
Suppose you have a weighted 6-sided die. It would be impractical to develop a theoretical model that predicts the probability of rolling each number by meticulously analyzing the die's varying density and the intricacies of the surface created by its divots, let alone calculating the bounce based on the density of the landing surface and the air pressure at a given moment. A wiser approach is to bypass all these factors and simply run an experiment: roll the die enough times, several hundred to a thousand, to get a good estimate of the probabilities. In this example, both the expected outcome and the probability distribution are unknown, but random sampling can be used to collect useful data.
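The die experiment can be sketched in a few lines. Here the hidden weights stand in for all the physics we refused to model; in a real experiment they would be unknown, and everything below (weights, roll counts, seed) is an illustrative assumption.

```python
import random
from collections import Counter

# Hypothetical hidden bias of the loaded die: in a real experiment
# these weights are unknown and only revealed through rolling.
TRUE_WEIGHTS = [0.25, 0.10, 0.15, 0.10, 0.10, 0.30]

def roll(rng):
    """One roll of the loaded die (faces 1-6)."""
    return rng.choices(range(1, 7), weights=TRUE_WEIGHTS)[0]

def estimate_probabilities(n_rolls=10_000, seed=0):
    """Estimate each face's probability from empirical frequencies."""
    rng = random.Random(seed)
    counts = Counter(roll(rng) for _ in range(n_rolls))
    return {face: counts[face] / n_rolls for face in range(1, 7)}

print(estimate_probabilities())
```

With a few hundred rolls the estimates are rough; by ten thousand they sit within a percentage point or two of the true weights, which is exactly the "roll it enough times" intuition above.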
MCS also involves randomly modifying the input parameters. Continuing with our loaded die, suppose we now have thousands of dice to test and want to find the best two for rolling snake eyes. Suddenly, our problem has become immensely harder! How could we possibly roll each die hundreds or thousands of times? Since we only care about rolling snake eyes, we can bypass this hurdle by simply adding yet another layer of randomness to the simulation!
The essence of Monte Carlo Simulation is to abstract information from a model by taking advantage of random sampling and modifying inputs
You could crudely divide the dice into groups, roll each group as a collection, and keep the groups that roll ones most often. The more rolls you perform per group before filtering, the more likely you are to identify the best groups. By dividing, testing, and filtering over and over again, you would eventually end up with some of the strongest candidates for rolling snake eyes. Of course, chances are this will not be the absolute best solution there is; this inherent limitation of MCS is a consideration for many mathematicians and scientists who use it.
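The divide-test-filter loop above can be sketched as follows. Everything here is a hypothetical setup: we invent a population of loaded dice with random hidden biases, score each die by how often it rolls a one, keep the better half, and spend more rolls per die as the field narrows. The function names, pool size, and roll budget are all illustrative choices, not a prescribed algorithm.

```python
import random

def make_loaded_dice(n_dice, rng):
    """Hypothetical population: each die gets its own random face weights."""
    dice = []
    for _ in range(n_dice):
        w = [rng.random() for _ in range(6)]
        total = sum(w)
        dice.append([x / total for x in w])
    return dice

def ones_frequency(weights, n_rolls, rng):
    """Fraction of n_rolls that land on face 1 for a die with these weights."""
    rolls = rng.choices(range(1, 7), weights=weights, k=n_rolls)
    return rolls.count(1) / n_rolls

def filter_best_dice(n_dice=1000, survivors=10, seed=0):
    """Divide-test-filter: keep the half that rolls ones most often,
    doubling the number of test rolls each round."""
    rng = random.Random(seed)
    pool = make_loaded_dice(n_dice, rng)
    n_rolls = 20
    while len(pool) > survivors:
        scored = sorted(pool,
                        key=lambda w: ones_frequency(w, n_rolls, rng),
                        reverse=True)
        pool = scored[: max(survivors, len(pool) // 2)]
        n_rolls *= 2  # spend more rolls as the field narrows
    return pool

best = filter_best_dice()
print([round(w[0], 2) for w in best])  # true P(rolling a 1) of the survivors
```

Note the trade-off the prose describes: early rounds use few rolls per die, so a genuinely good die can be unluckily eliminated, which is why the survivors are strong candidates rather than a guaranteed optimum.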
Many methods of the MCS exist, each catered to a particular field or problem. The dice example is a simple illustration, but in the past century Ulam's fantasies have come to fruition: the framework used to create a MCS is applicable to nearly any field of study. Quantum physics, aerodynamics, microelectronics, climate change, AI, finance, machine learning, weather forecasting, and search and rescue are just a few. This wide variety of applications and methods makes MCS very useful for predicting outcomes in situations that involve many inherent random variables and multiple degrees of freedom. Many biotech firms, such as Macromoltek, use Monte Carlo simulations for computational biology. Because MCS are so pervasive in research, we'll explore a specific type of Monte Carlo simulation in our next blog, and possibly other important methods in future posts.
Links and Citations:
- Gregg C. Giesler, “MCNP Software Quality: Then and Now,” Los Alamos National Laboratory, LA-UR-00–2532, 16 October 2000. URL: http://library.lanl.gov/cgi-bin/getfile?00393668.pdf
Looking for more information about Macromoltek, Inc? Visit our website at www.Macromoltek.com
Interested in molecular simulations, biological art, or learning more about molecules? Subscribe to our Twitter and Instagram!