Scott Lundberg

2.1K Followers


Published in Towards Data Science · May 8, 2023

The Art of Prompt Design: Prompt Boundaries and Token Healing

This (written jointly with Marco Tulio Ribeiro) is part 2 of a series on the art of prompt design (part 1 here), where we talk about controlling large language models (LLMs) with guidance. In this post, we’ll discuss how the greedy tokenization methods used by language models can introduce a…

NLP · 7 min read



Published in Towards Data Science · May 2, 2023

The Art of Prompt Design: Use Clear Syntax

Explore how clear syntax can enable you to communicate intent to language models, and also help ensure that outputs are easy to parse — This is the first installment of a series on how to use guidance to control large language models (LLMs), written jointly with Marco Tulio Ribeiro. We’ll start from the basics and work our way up to more advanced topics. In this post, we’ll show that having clear syntax enables you…

NLP · 10 min read



Published in Towards Data Science · May 17, 2021

Be Careful When Interpreting Predictive Models in Search of Causal Insights

A careful exploration of the pitfalls of trying to extract causal insights from modern predictive machine learning models. — A joint article about causality and interpretable machine learning with Eleanor Dillon, Jacob LaRiviere, Jonathan Roth, and Vasilis Syrgkanis from Microsoft. Predictive machine learning models like XGBoost become even more powerful when paired with interpretability tools like SHAP. These tools identify the most informative relationships between the input features and…

Causality · 16 min read
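The trap is easy to fall into even in a toy setting. The sketch below (an illustration under my own assumptions, not the article's code) trains XGBoost on a feature and a near-duplicate proxy of it: SHAP faithfully reports that the model leans on both, yet intervening on the proxy would change nothing.

    import numpy as np
    import xgboost
    import shap

    rng = np.random.default_rng(0)
    cause = rng.normal(size=2000)
    proxy = cause + 0.1 * rng.normal(size=2000)  # correlated but non-causal
    X = np.column_stack([cause, proxy])
    y = 2 * cause + rng.normal(size=2000)

    model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)
    shap_values = shap.TreeExplainer(model).shap_values(X)

    # Credit is split across "cause" and its proxy; a policy that moves the
    # proxy alone would not move y, despite the proxy's apparent importance.
    print(np.abs(shap_values).mean(axis=0))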



Published in Towards Data Science · Mar 2, 2020

Explaining Measures of Fairness

Avoid the black-box use of fairness metrics in machine learning by applying modern explainable AI methods to measures of fairness. — This hands-on article connects explainable AI with fairness measures and shows how modern explainability methods can enhance the usefulness of quantitative fairness metrics. By using SHAP (a popular explainable AI tool) we can decompose measures of fairness and allocate responsibility for any observed disparity among each of the model’s input…

Interpretability · 11 min read
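The core trick is compact enough to sketch. Assuming a fitted model, its SHAP values, and a boolean protected-group indicator (all hypothetical names, none from the article), the gap in mean model output between groups decomposes exactly into per-feature terms:

    import numpy as np

    def fairness_decomposition(shap_values, group):
        """Per-feature share of the demographic parity difference.

        shap_values: (n_samples, n_features) attributions for the model output.
        group: boolean array marking the protected group.
        The returned terms sum to the gap in mean prediction between the two
        groups, since the SHAP base value cancels in the subtraction.
        """
        return shap_values[group].mean(axis=0) - shap_values[~group].mean(axis=0)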



Published in Towards Data Science · Apr 17, 2018

Interpretable Machine Learning with XGBoost

This is a story about the danger of interpreting your machine learning model incorrectly, and the value of interpreting it correctly. …

Machine Learning · 10 min read
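A two-minute version of the danger (a sketch under my own toy setup, not the article's code): XGBoost's built-in importances and mean absolute SHAP values can disagree about which feature matters most, and the article's point is that only the latter ranking is consistent.

    import numpy as np
    import xgboost
    import shap

    # Toy data where feature 0 acts alone and features 1 and 2 interact.
    X = np.random.RandomState(0).normal(size=(1000, 3))
    y = X[:, 0] + 0.5 * X[:, 1] * X[:, 2]

    model = xgboost.XGBRegressor(n_estimators=100, max_depth=3).fit(X, y)

    print(model.feature_importances_)        # built-in (gain-based) ranking
    shap_values = shap.TreeExplainer(model).shap_values(X)
    print(np.abs(shap_values).mean(axis=0))  # global ranking from SHAP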


Scott Lundberg

Senior Researcher at Microsoft Research
