Prediction rackets, fiddle games, and science!

I’ve really got to stop writing these things on replicability, but they’re just so charming!

The prediction scam

A letter arrives. Inside there is a plain sheet of folded printer paper. Written in magic marker is “Thursday. Saints.”
You throw it in the recycling bin.
Later that week a friend mentions the score of the Saints/Falcons game. Saints 21, Falcons 17.
Next week, the letter is back. “Thursday. Seahawks.”
And sure enough, the Seahawks win.
“Dolphins.”
“Bengals.”
“Jets.”
Five in a row.
The next letter is early, and contains no team:
“Want another? $5. P.O. Box 2153, Cambridge, MA 02139.”
That’s MIT’s zip code. The betting line on Titans/Jaguars has a huge underdog. You send the $5. You get another letter: “Titans.” You bet something small, $100.
The Titans win. You claim your winnings.
A new letter:
“Want another? $100. P.O. Box 2153, Cambridge, MA 02139.”

Do you send it?

How the prediction scam works

The confidence trick described above is called a “perfect prediction game.” From the mark’s perspective, it works just as described: a mysterious letter seems to have an uncanny ability to predict future events. Eventually it starts asking for money, but its predictions continue to prove accurate.

Until they don’t!

The trick of the PPG is that the con artist sends out many, many predictions.

On the first week, when the PPG called the Saints/Falcons game, the artist sent out 1000 letters, 500 with “Saints,” 500 with “Falcons.” The next week, they sent out 500: this time only to the people who got a winning prediction last week. In the second round of predictions, again half said “Seahawks” and half said “49ers.” In week 3, they sent out letters to the 250 winners from week 2. In week 4, they sent out 125 letters. In week 5, about 60.

Finally, in week 6, they ask for money, but only from the people who have seen them call 5 games in a row! These people are already pretty convinced that the predictor is a damn wizard by now. The other 970 or so people who saw a wrong prediction have forgotten about the weird letters.

An elegant twist: ask for a small amount of money, then ramp up. This lets the marks self-select, leaving only the people most susceptible to the scam. They paid money and got accurate predictions back! They can even get into situations where they’ve won more money than they’ve paid!

In fact, by starting with 1000, the artist is fairly likely to have one person who’s seen them call 10 games in a row, and has been winning money for four of those games.
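The arithmetic is small enough to check. Here’s a minimal sketch in Python, assuming the post’s numbers (1,000 letters and coin-flip games); the week-by-week counts and the odds of a ten-game streak fall straight out:

```python
# Back-of-the-envelope check of the perfect prediction game described above.
# The 1,000 starting letters and 50/50 games are the post's assumptions,
# not data from any real scam.
pool = 1000
for week in range(1, 6):
    print(f"Week {week}: send {pool} letters")
    pool //= 2  # only the marks who saw a correct call hear from us again

print(f"Week 6: ask the remaining {pool} marks for money")  # ~31
print(f"Marks who saw at least one miss: {1000 - pool}")    # ~969

# Chance that at least one of the original 1,000 sees ten straight hits:
p_streak = 1 - (1 - 0.5 ** 10) ** 1000
print(f"P(someone sees 10 in a row) ≈ {p_streak:.2f}")      # ≈ 0.62
```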

Start with the mailing list of a sports-betting website and hope for a high roller, young con artist.

Ok! Now Replicability!

The replicability project/open science/science crisis crowd is essentially making the argument that current scientific practices amount to a prediction racket.

Scientists conduct a large number of experiments: most fail. A few succeed and are published.

However, given the large pool of failures, it’s overwhelmingly likely that the successful ones are lucky rather than right. As soon as the con artist in the prediction scam misses a prediction, they stop contacting the mark. Their mistakes are private. Their marks see only the Texas sharpshooter’s painted-on bullseye. Think of it as the Teddy Roosevelt effect.

Bullmoosed!

This is a charged topic because it suggests that most successful scientists are (generously) blessed with an outrageous degree of dumb luck, or (more pointedly) con artists.

There’s money in it too. If this is right, giving grants to scientists with a proven record of publication is like investing with a financial advisor because they “beat the market.” Half of all financial advisors beat the market in any given year. One in thirty or so has beaten the market 5 years running. A lottery system would do a better job and get rid of the dreadful study-section chore.
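Same coin-flip arithmetic as the letters, sketched here for the advisor example (the “half beat the market each year” premise is the post’s simplification, not market data):

```python
# If beating the market in a given year were a coin flip (the post's
# simplification), a five-year streak is unremarkable.
p_streak = 0.5 ** 5
print(f"P(5-year streak by luck) = {p_streak:.3f}")                       # 0.031, about 1 in 32
print(f"Lucky streak-holders per 1,000 advisors: {1000 * p_streak:.0f}")  # ~31
```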

OSF purports to cure the prediction scam by showing all the letters. Pre-registered studies have to show all their predictions, and are judged based on their overall success, rather than the success of the successful.

But I don’t think the prediction racket analogy is the sole culprit.

The Fiddle Scam

You are the maître d’ of a fancy restaurant. One day, a violinist stops by carrying his instrument. He eats, but then tells you he’s lost his wallet. He offers to leave his fiddle as collateral while he runs for his billfold. He tells you it’s an antique and promises it’s worth much more than the cost of his meal.
With few options, you assent. The fiddler vanishes. Minutes pass.
A new diner enters, and notices the violin resting on your station. His eyes grow wide. He asks politely if he might have a look at the fiddle in question.
Gently, he lifts it out of its case, handling it as if it were a baby made of jewels. He looks closely for manufacturer’s marks and picks at the felt of the case.
He draws the bow across it and it lets out a single pure tone that vibrates in the air.
The new diner sighs contentedly.
“In all my years, I never thought I’d handle one. Where on earth did you come by an authentic Stradivarius?”
“What? Some fiddler just left it here to cover the cost of his meal.”
“You have robbed him. I am an antique dealer. I could sell that piece at auction for at least $350k. If Antonio made it, it could be worth millions. Name your price.”
“It’s not mine though, the fiddler is coming back later to claim it.”
“Could you please give him my card when he does? I must have it.”
The dealer leaves. The fiddler returns. You casually ask him about the violin.
“Oh this? My uncle found it in an antique store years ago and left it to me in his will. It’s of immense sentimental value. I dunno though, I think it sounds mediocre compared to newer ones.”

Do you try to buy it off him?

How the fiddle scam works

By now you’re all probably wise to it. The fiddle is a cheap generic that the fiddler bought at the corner store. The dealer is an accomplice. The con artists are banking on the dishonesty of their mark: they bet he will overpay for the fiddle, thinking he will receive a huge payday when he sells it on to the dealer.

The fiddle scam is based on convincing the mark to overvalue an item. It’s a particularly ingenious version of that con because it makes the mark an accomplice, relying upon his own desire to swindle the fiddler.

Ok! Now Replicability!

I think unreliability in modern science owes more to the fiddle scam than to the prediction scam.

One of the most famous and depressing areas of non-replicability is cancer studies. Amgen (which has a monetary incentive to get things right) could replicate only 6 of 53 landmark, big-journal cancer studies. It’s a big problem if pharma is going to be starting drug development with only an 11% chance of a real effect. This is quite a bit lower than the still-mediocre 36% replication rate of the replicability project.

Why is cancer research less reliable than social psych?

Fiddles.

At its most basic level, empiricism is about letting the world speak for itself. Contra The Secret, the world does not care what we want or desire. It does not change its truth status to match what we hope for. It’s not malevolent either; it just doesn’t care. Our investment in the success of research means nothing to the uncaring forces of the universe. Our concern for cancer patients in no way tips the scales toward a cure.

But it sure does move funding and public interest! Tell a stranger you’re a cancer researcher and you’re a hero. Tell them you study bees and you’re a weird shut-in misappropriating public funds. This means that more cancer research will be done than bee research, but doing more research does not imply doing more successful research. The very fact that we don’t know how to cure cancer means we don’t know if the cure involves bees.

Think back to the prediction scam. An artist who sends out 10,000 letters can make more correct predictions, but they also make more incorrect predictions. I’m amenable to the idea that unreliability per se is due to cherry-picking successful research.

But if you believe this, then it follows that you believe areas with more research are less reliable. Directing funds to particular causes will make those areas bigger and lower quality.
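One way to make that claim concrete is the standard false-positive arithmetic. This is only a sketch with made-up numbers (the alpha, the power, and the two “fraction of hypotheses that are true” figures are illustrative assumptions, not figures from the post): if funding swells a field faster than nature supplies true leads, the fraction of tested hypotheses that are real drops, and so does the fraction of published positives you can trust.

```python
# Hedged sketch: why a bigger field chasing the same pool of true effects
# publishes a less reliable literature. All numbers are illustrative.
alpha, power = 0.05, 0.8  # assumed false-positive rate and statistical power

def ppv(prior):
    """Fraction of positive (publishable) results that reflect a real effect."""
    true_pos = prior * power
    false_pos = (1 - prior) * alpha
    return true_pos / (true_pos + false_pos)

print(f"Lean field, 1 in 4 hypotheses true:     PPV = {ppv(0.25):.0%}")  # ~84%
print(f"Crowded field, 1 in 50 hypotheses true: PPV = {ppv(0.02):.0%}")  # ~25%
```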

But this is certainly not what scientists tell the public. I have never heard a scientist say

“this obsessive focus on science in America is a huge drag on actual discovery.”

Or

“We need to spend dramatically less on scientific research, especially on problems with large social impact, such as medicine.”

Or

“It sure would be nice to have greater representation of women in STEM fields. Maybe we could fire all the men?”

When a reporter comes to talk to me about my research I should say: “it’s actually not that interesting, thanks though.” Instead, I play the fiddle game: “My work on memory has contributed to developing some cognitive tests that could be useful as early diagnostics for Alzheimer’s disease! If we can identify precursors to dementia we can intervene before the disease progresses too far!”

I’m not exactly lying. I’m using a combination of coyness, obfuscation, and complex language to conceal the plain truth: my research probably isn’t of broad popular interest. It probably isn’t that valuable. It’s a creative product like pop music: inevitably mostly crap. That’s the point. Try enough junk, sometimes we get a winner!

Yet, funding, citations, and public interest are the energy that keeps science running. Scientists use a selection process (i.e., the prediction game) to adapt their research to the values of the current public environment. But scientists also use rhetoric (i.e., the fiddle game) to adapt the values of the public environment to be consistent with scientific values.

The current poor record of scientific replicability cannot be understood without owning up to the temptation to sell the public an inflated value of science.

I don’t blame public institutions. The public is not a good judge of scientific accuracy and veracity. It’s impossible for someone to be a good judge of highly technical work outside their areas of expertise. The onus for honesty is on us.

Scientists have a professional duty to quote the exact price of the fiddle even if it means they lose funding. I think it’s rotten that honesty and public engagement have perverse incentives, but so long as people keep asking for diet advice, the mark of a good scientist will be how infrequently they give interviews. Including on replicability.

How to report the real value of science in plain language.

Go and get a lay person right now. Go to the front page of Science and click on the very first article. Explain the result to the lay person in simple language, using plain numbers that map to everyday experience. I bet you $5 that either it’s impossible or they won’t be impressed (and they’ll be right not to care). Here was the one for me today: Math at home adds up to achievement at school.

Lay description: The most common result of playing the Bedtime Learning app with your kid is that you stop playing the Bedtime Learning app with your kid. For those who stick with it for half an hour, every night, for six months, you can expect math test scores to go up by about 1.5 points on a 100-point scale. Reading will not improve. Because kids’ scores drift around so much, you won’t notice an increase this small anyway. I bet you can think of a better use for that half hour.

This is super hard to dig out of the study, by the way: it doesn’t report raw data, it uses several different ways to carve up the sample, and it only reports extremely processed statistics about outcomes rather than interpretable data. Likely because if it framed the study as 1.5 points over six months, it wouldn’t have made it into Science.
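For what it’s worth, here is a rough sketch of why a 1.5-point bump is invisible to any individual parent. The 8-point test-to-test spread is an assumed number for illustration, not one taken from the paper:

```python
# Why a 1.5-point average gain on a 100-point scale disappears into noise.
# The ~8-point test-to-test spread is an assumption, not a figure from the study.
import random

random.seed(0)
gain, spread, n_kids = 1.5, 8.0, 10_000

no_app = [random.gauss(0, spread) for _ in range(n_kids)]       # score change, no app
with_app = [random.gauss(gain, spread) for _ in range(n_kids)]  # score change, app

print(f"Kids who gain 1.5+ points with no app at all: {sum(x > 1.5 for x in no_app) / n_kids:.0%}")  # ~43%
print(f"App users whose scores went up at all:        {sum(x > 0 for x in with_app) / n_kids:.0%}")  # ~57%
```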

I used to teach psych for the educational opportunities program, which is an intensive section of classes designed to support and power up kids from traditionally underrepresented groups at college. In this class I had my favorite student ever.

Not because of her scores, they sucked.

Not because of her brilliant intellectual engagement in the big ideas of psychology (which is impossible for any student at the 100 level because there’s nothing interesting at the 100 level).

She was the best because she fought the hardest. Every day she came to office hours, she did every optional quiz. Tears when she didn’t hit her own very high standards. Not sure if she slept.

I was doing teaching-and-learning research at the time, and we were tracking all these students’ scores at the question-by-question level on tests. Her test-score derivative, a measure of her improvement over the course of the class, crushed, CRUSHED, every other student’s. Over the course of the semester she gained nearly 4 points on a 100-point scale!

But from her perspective, she went from a C on her first test to a C on her last test.

Science!
