Ryan Burn

89 Followers


Published in Towards Data Science · Mar 10, 2022

Logistic Regression and the Missing Prior

How to reduce the bias of logistic regression using Jeffreys Prior — Suppose X denotes a matrix of regressors and y denotes a vector of target values. If we hypothesize that the data is generated from a logistic regression model, then our belief in the weights w after seeing the data is given by…

Statistics

11 min read
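
A rough sketch of the setup this teaser leads into, assuming the standard Fisher information for logistic regression (the article's exact notation may differ): Jeffreys prior is the square root of the determinant of the Fisher information, so the posterior over the weights takes the form

    p(w \mid X, y) \propto \Bigl[ \prod_{i=1}^{n} p_i^{y_i} (1 - p_i)^{1 - y_i} \Bigr] \sqrt{\det I(w)},
    \qquad p_i = \frac{1}{1 + e^{-x_i^\top w}},
    \qquad I(w) = X^\top \operatorname{diag}\bigl(p_i (1 - p_i)\bigr)\, X.

Because the prior depends on the design matrix through I(w), it acts as a data-dependent penalty that counteracts the small-sample bias of plain maximum-likelihood logistic regression.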


Published in Towards Data Science · Feb 23, 2022

How to Build a Bayesian Ridge Regression Model with Full Hyperparameter Integration

How do we handle the hyperparameter that controls regularization strength? — In this blog post, we’ll describe an algorithm for Bayesian ridge regression where the hyperparameter representing regularization strength is fully integrated over. An implementation is available at github.com/rnburn/bbai. Let θ = (σ², w) denote the parameters for a linear regression model with weights w and normally distributed errors of variance…

Statistics

12 min read
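
A sketch of what "fully integrated over" amounts to, assuming a single hyperparameter λ controls the regularization strength (the article's priors and parameterization may differ): rather than fixing λ by cross-validation, the posterior for θ = (σ², w) averages over it,

    p(\theta \mid y) = \int p(\theta \mid y, \lambda)\, p(\lambda \mid y)\, d\lambda,
    \qquad p(\lambda \mid y) \propto p(\lambda) \int p(y \mid \theta, \lambda)\, p(\theta \mid \lambda)\, d\theta.

For ridge regression with normally distributed errors the inner integral has a closed form, which is what makes integrating the hyperparameter out tractable.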


Published in ITNEXT · Dec 27, 2020

Why Standard C++ Math Functions Are Slow

Performance has always been a high priority for C++, yet there are many examples both in the language and the standard library where compilers produce code that is significantly slower than what a machine is capable of. …

Cplusplus

4 min read


Published in Towards Data Science · Jun 11, 2020

How to Build a Warped Linear Regression Model

We use the module peak-engines to fit monotonic transformations to data — Ordinary Least Squares (OLS) fits a linear regression model to a dataset so as to maximize likelihood under the assumption that errors are normally distributed. Normality can come about naturally when error terms break down into sums of independent identically distributed components, but for many problems the assumption is unrealistic.

Machine Learning

4 min read
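
A sketch of the general warped-likelihood idea (the particular family of monotonic transformations that peak-engines fits is not spelled out here): a monotonic map ψ is applied to the targets and the likelihood is maximized in the transformed space, which adds a Jacobian term:

    \log L(w, \sigma, \psi) = \sum_{i=1}^{n} \Bigl[ \log \phi\!\Bigl( \frac{\psi(y_i) - x_i^\top w}{\sigma} \Bigr) - \log \sigma + \log \psi'(y_i) \Bigr],

where φ is the standard normal density. If the identity is included in the family of transformations, OLS is recovered as a special case, so warping can only improve the maximized likelihood.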


Published in Towards Data Science · Mar 13, 2020

What to Do When Your Model Has a Non-Normal Error Distribution

How to use warping to fit arbitrary error distributions — One of the most important things a model can tell us is how certain it is in a prediction. An answer to this question can come in the form of an error distribution. …

Machine Learning

13 min read


Published in Towards Data Science · Dec 20, 2019

What Form of Cross-Validation Should You Use?

Optimize the right proxy for out-of-sample prediction error — Cross-validation partitions a dataset, trains and validates models on complementary subsets, and averages prediction errors in such a way that each datapoint is validated once as an out-of-sample prediction. By averaging errors of out-of-sample predictions across the whole dataset, we hope that the cross-validation error acts as a proxy for…

Machine Learning

8 min read
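
As a minimal sketch of the procedure described above (illustrative only; the helper names are not from the article), k-fold cross-validation trains on each complementary subset and validates every point exactly once:

    import numpy as np

    def kfold_cv_error(X, y, fit, predict, k=5, seed=0):
        """Average squared error of out-of-sample predictions; each point is
        validated once against a model trained on the complementary subset."""
        n = len(y)
        idx = np.random.default_rng(seed).permutation(n)
        errors = np.empty(n)
        for fold in np.array_split(idx, k):
            train = np.setdiff1d(idx, fold)      # complementary training subset
            model = fit(X[train], y[train])      # train
            errors[fold] = (predict(model, X[fold]) - y[fold]) ** 2  # validate
        return errors.mean()

    # Example: validate an ordinary least-squares model.
    # fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
    # predict = lambda w, X: X @ w
    # print(kfold_cv_error(X, y, fit, predict, k=10))

Setting k = n gives leave-one-out cross-validation; the article's question is which of these choices is the best proxy for true out-of-sample prediction error.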


Published in Towards Data Science · Dec 17, 2019

How to Do Ridge Regression Better

Use an optimizer to find the best performing regularization matrix — Let X and y represent a sample of training data where X is a matrix with n rows of feature vectors and y is a vector of n corresponding target values. If 𝐱′ is an out-of-sample feature vector with unknown target value y′, then we might fit a linear model…

Machine Learning

8 min read
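
For context, a hedged sketch of the estimator this teaser is setting up, with a general regularization matrix Γ in place of the usual scalar (the article's notation may differ):

    \hat{w} = \bigl( X^\top X + \Gamma \bigr)^{-1} X^\top y.

Ordinary ridge regression is the special case Γ = λI; per the subtitle, the idea is to let an optimizer tune the regularization matrix for out-of-sample performance rather than fixing a single λ.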


May 5, 2019

How to make your C++ code faster with an extended stack

Allocating memory from the stack in C++ is blazingly fast, but also restrictive: it costs only a few arithmetic operations, but the allocation cannot persist beyond its function’s scope, and it must be small enough to fit into the program’s stack. …

Programming

2 min read


Published in OpenTracing · Aug 7, 2018

Announcing Lua OpenTracing

OpenTracing now provides an API to trace code written in Lua. Given Lua’s easy interoperability with C and C++, OpenTracing also provides a bridge tracer that implements the API on top of the OpenTracing C++ API, allowing you to immediately use existing tracers from Jaeger, LightStep, DataDog, Zipkin, or any…

Lua

1 min read


Published in OpenTracing · Jul 30, 2018

How to enable NGINX for distributed tracing

NGINX is a versatile and popular application. Perhaps best known as a web server, it can be used to serve static file content but is also commonly used together with other services as a component in a distributed system where it functions as a reverse proxy, load balancer, or API…

Nginx

3 min read


Ryan Burn


Mathematical Engineer | buildingblock.ai

Following
  • Kevlin Henney
  • Matt Klein
  • Jingles (Hong Jing)
  • In the Stacks
  • Hanan Ahmed
