# Millennium Breakthrough in Probabilistic Computing

# The Summary of The Breakthrough

In Probabilistic Computing, the rest of the world answers queries by averaging over an implicit space of probabilistic models. That is their way of doing ‘Automatic’ Probabilistic Inference: generating models as an aggregation of a ‘Canned’, ‘Pre-Selected’ assortment of models.

Instead of using a Meta-Model Aggregation of Models, we Build/Generate a Universal Model using Probabilistic Inference, and we answer all questions using that. That is our way of doing Probabilistic Inference.

That’s how we achieve Turing Complete, Model Complete, Logic Complete Solutions.

# Let’s Start From Scratch — How Did Probabilistic Computing Start?

The First and Foremost thing, the foundational element, is Bayes’ Theorem (Conditional Probability).
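
Bayes’ theorem says P(H|E) = P(E|H) · P(H) / P(E). As a toy illustration (the prevalence and test-accuracy numbers below are invented for the example, not taken from this article):

```python
# Toy illustration of Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E).
# The numbers (1% prevalence, 99% sensitivity, 95% specificity) are made up.

def posterior(prior, likelihood, likelihood_given_not_h):
    """Return P(H|E) given P(H), P(E|H), and P(E|~H)."""
    evidence = likelihood * prior + likelihood_given_not_h * (1 - prior)
    return likelihood * prior / evidence

# A positive result from a 99%-sensitive, 95%-specific test,
# for a condition with 1% prevalence:
p = posterior(prior=0.01, likelihood=0.99, likelihood_given_not_h=0.05)
print(round(p, 3))  # ~0.167: the posterior is far below the test's accuracy
```

The counterintuitive output is the whole point: conditioning on evidence combines the likelihood with the prior, rather than replacing it.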

# The First Applications of Bayes’ Theorem — Naive Bayes Classifiers
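
A naive Bayes classifier applies Bayes’ theorem with the “naive” assumption that features are independent given the class. A minimal sketch, with a tiny invented corpus purely for illustration:

```python
# Minimal naive Bayes text classifier sketch; the toy "corpus" is invented.
from collections import Counter
import math

def train(docs):
    """docs: list of (tokens, label). Returns label counts and per-label word counts."""
    labels = Counter(label for _, label in docs)
    words = {label: Counter() for label in labels}
    for tokens, label in docs:
        words[label].update(tokens)
    return labels, words

def predict(tokens, labels, words, vocab_size):
    total = sum(labels.values())
    best, best_score = None, float("-inf")
    for label, count in labels.items():
        # log prior + sum of log likelihoods, with add-one (Laplace) smoothing
        score = math.log(count / total)
        denom = sum(words[label].values()) + vocab_size
        for t in tokens:
            score += math.log((words[label][t] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

docs = [(["win", "cash", "now"], "spam"),
        (["meeting", "at", "noon"], "ham"),
        (["cash", "prize", "now"], "spam"),
        (["lunch", "at", "cafe"], "ham")]
labels, words = train(docs)
vocab = {t for toks, _ in docs for t in toks}
print(predict(["cash", "now"], labels, words, len(vocab)))  # spam
```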

# How Were Bayesian Models Built?

- Define a model: This is usually a family of functions or distributions specified by some unknown model parameters.
- Pick a set of data.
- Run a learning algorithm: This means using the data to choose a value for the unknown model parameters.
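
The three steps above can be sketched for the simplest possible case: a coin with unknown bias theta (the model family), a list of observed flips (the data), and conjugate Bayesian updating as the learning algorithm. The data values are invented for illustration.

```python
# 1. Define a model: flips ~ Bernoulli(theta), with a Beta(1, 1) prior on theta.
alpha, beta = 1.0, 1.0

# 2. Pick a set of data (1 = heads).
flips = [1, 1, 0, 1, 0, 1, 1, 1]

# 3. Run a learning algorithm: the Beta-Bernoulli conjugate update.
alpha += sum(flips)
beta += len(flips) - sum(flips)

posterior_mean = alpha / (alpha + beta)
print(posterior_mean)  # 0.7
```

Here “learning” amounts to a closed-form update; for richer models this step is exactly what requires a derived inference algorithm.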

Bayesian learning is used in a variety of industry settings where there are few data and uncertainty quantification is critical, including marketing, advertising, medical product development, pharmaceutical statistics, drug discovery and development, technical recruiting, and computer system A/B testing and tuning.

# Then Came Probabilistic Programming

- Probabilistic Programming Languages provide a way for users to write down a Bayesian model, including the generative process, unknown model parameters, and prior beliefs about these parameters.
- Allow a user to specify a dataset of interest.
- Automatically compute and return the result distribution over model parameters.
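
What those three bullets amount to can be mimicked by hand with a grid approximation of the posterior, which is roughly the computation a PPL automates for you. The model and data here are invented for illustration.

```python
import math

# Model: y ~ Bernoulli(theta); prior belief: theta ~ Uniform(0, 1) on a grid.
grid = [i / 100 for i in range(1, 100)]
prior = [1 / len(grid)] * len(grid)

# The dataset of interest.
data = [1, 0, 1, 1, 1, 0, 1]

# The "automatic" inference step: weight each grid point by its likelihood,
# then renormalize.
def likelihood(theta, ys):
    return math.prod(theta if y else 1 - theta for y in ys)

unnorm = [p * likelihood(t, data) for t, p in zip(grid, prior)]
z = sum(unnorm)
post = [w / z for w in unnorm]

# Posterior mean of theta, close to the analytic Beta(6, 3) mean of 2/3:
mean = sum(t * p for t, p in zip(grid, post))
print(round(mean, 2))
```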

In the past, each time you wrote down a new Bayesian model, you would need to mathematically derive an inference algorithm — i.e., the learning algorithm that computes the final distribution over beliefs given the data. This process required (often a great deal of) expert human work for each new model. Now, you simply write down the model in your Probabilistic Programming Language and it returns the result automatically, with minimal human work.

# The Benchmark — In Generative Modelling

**BayesDB is a probabilistic programming platform that provides built-in non-parametric Bayesian model discovery. BayesDB makes it easy for users without statistics training to search, clean, and model multivariate databases using an SQL-like language.**

BayesDB is based on probabilistic programming, an emerging field based on the insight that probabilistic models and inference algorithms are a new kind of software, and therefore amenable to radical improvements in accessibility, productivity, and scale. Unfortunately, most probabilistic programming systems require users to write probabilistic programs by hand. Instead, BayesDB provides a built-in probabilistic program synthesis system that builds generative models for multivariate databases via inference over programs given a non-parametric Bayesian prior. BayesDB also enables statisticians to override these programs with custom statistical models when appropriate.

# So What Is The Problem? And How Do BayesDB and Others Solve It?

The Three Most Important Constituents of Probabilistic Programming are

- The Bayesian Model Skeleton Which You Create Initially
- The Model Parameters Which Are NOT Known In The Beginning
- Your Prior Beliefs About These Parameters, Which You Decide In The Beginning Itself

And Then The Probabilistic Programming Language/Framework Generates The Model

Which means The Generation is ONLY AS GOOD AS the three things you selected in the first place above.
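
A tiny demonstration of that point: with the same data, different prior choices give different answers, so the result is only as good as the specification. The numbers are illustrative.

```python
# Beta(alpha, beta) prior + Bernoulli data -> Beta posterior (conjugate update).
def beta_posterior_mean(alpha, beta, heads, tails):
    return (alpha + heads) / (alpha + beta + heads + tails)

data = (3, 1)  # 3 heads, 1 tail

weak_prior = beta_posterior_mean(1, 1, *data)      # vague Beta(1, 1) prior
strong_prior = beta_posterior_mean(50, 50, *data)  # confident "fair coin" prior
print(round(weak_prior, 3), round(strong_prior, 3))
```

The vague prior lets the data dominate (mean near 0.667), while the confident prior barely moves from 0.5; neither is wrong, but the answer depends on choices made before any data arrived.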

While the results obtained this way beat a lot of benchmarks — better than almost all Traditional Machine Learning Models, and often even better than Deep Learning Models — it is still not the theoretical best we could ever achieve.

So When BayesDB and Other Tools/Platforms Try To Generate Models of Arbitrary Data…

They always start with a pre-selected portfolio of pre-existing models, and the generated model is then created as an aggregate combination of this portfolio.

This again works quite well, but it is not the absolute theoretical best we could ever achieve.

# So How Did We Solve It?

We did something which nobody else could do. We succeeded where everyone else failed. We invented The Universal Model Generation Algorithm.

We don’t start with any canned Portfolio of Pre-Existing Models. We make no assumptions about the data or the model we would like to derive, in absolutely any way.

We actually do the following, in this order…

- We Generate The Universal Model
- With Universal Parameters
- With Universal Parameter Beliefs

And we do this such that the model is an exact model of the data and is ~100% accurate, limited only by Information Theory (how much information is contained in the data, and how much of the model does it allow us to accurately generate?).

The Model Hence Generated by us is…

- Model Complete
- Turing Complete
- Logic Complete

And that’s why this is a Millennium Invention.

The Demo is available to all our Qualified Prospective Customers TODAY! And it will be ready for production in 30–45 days’ time, after elaborate testing and bug fixing.

Please Note:

*** We are NOT using any existing Probabilistic Programming Language. We use our own Distributed Probabilistic Programming Platform.

*** We don’t use any Stochastic/Approximate/Variational/MCMC Inference Algorithms. Our Algorithm is Deterministic. And NP-Complete. And Executes in Polynomial Time.

Time to light a cigar…

# Where Can I Learn More About Probabilistic Computing?

In recent years, there has been a surge in the popularity and development of probabilistic programming languages (PPLs) or frameworks (such as Stan, PyMC, Pyro, Edward, Infer.NET, WebPPL, Anglican, and many more).

# About us

This is our website: http://automatski.com