Jean Czerlinski Whitmore Ortega

- How gaming incentives is like risky foraging (Sep 15)
  Gaming by desperate people is similar to high-risk foraging by starving animals
- Constraining the gaming that scales (May 1)
  When one person’s gaming scales into crisis
- Why incentive advice diverges: Is failure likely but acceptable? (Mar 11)
  Unattainable goals with unacceptable consequences spur workers to game incentives rather than give up
- When models get gamed: changing incentives to fool a model (Jul 31, 2023)
  Spam bots, credit card fraud, and the history of search engines
- Two ways human bias gets into — and out of — ChatGPT (May 30, 2023)
  The role of humans in training large language models
- Strategic classification in the dark (a review) (Feb 9, 2023)
  When public algorithms are more robust to manipulation than secret algorithms
- Spotlight on the bias-variance trade-off (Sep 6, 2022)
  Why more data is not always better
- How millions of parameters can avoid overfitting (Jan 3, 2022)
  Both linear regression and deep learning can leverage a massive number of mis-specified features
- Social chess: stakeholders with impactful feedback loops (Nov 8, 2021)
  How machine learning models are embedded in a web of strategic moves
- When model builders can — and can’t — rest on their laurels (Nov 2, 2021)
  Jennifer Lopez’ dress was exogenous change while image spam was endogenous change