My Thoughts on Altruism
About a year and a half ago I started my first startup. I was a senior in college at the time. The idea was to provide more detailed student reviews of colleges. The tricky part of this sort of business is getting enough content.
As for how I’d get this content, plan A was to just go on Facebook and ask everyone I knew from high school to a) answer questions and b) spread the word a bit at their schools. My predictions:
- Maybe 75–80% would spend like 10–20 minutes answering questions.
- 20–40% would spend a few hours answering questions: a) to help me out, b) because they think this resource should exist, and c) because it’s sorta fun!
- 5–10% would spread the word a bit. Ie. post to Facebook and tell their friends about it.
- 1% would really love what I’m doing, and maybe go to some greater lengths to help spread the word.
Before I continue, why don’t you try your hand at predicting what’ll happen?
It turns out that my predictions were way way off. Maybe like 1% of people answered any questions at all. And no one did anything in the second, third or fourth bullet points.
I thought that this was really selfish of them. My reasoning: the cost to them is way less than the benefit to me + the benefit to potential users of the site. To throw some rough numbers around, the benefit is probably like 50–100x what the cost is to them. And yes, I’m referring to the marginal benefit, not what the benefit would be if the site took off. It’s that disproportionate.
The benefit to me is a small increase in the chance that the site takes off. And the benefit to future users is basically an expected value calculation which takes into account a) the small increase in the chances that the site succeeds, b) the marginal benefit my site would have compared to what they’d otherwise use, and c) a*b multiplied across all the potential users. I won’t go into more detail than this, but keep in mind that there are some large magnitudes at play here, and that humans have a hard time reasoning about large magnitudes like this.
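To make the shape of that calculation concrete, here’s a minimal sketch in code. Every number in it (the probability bump, the per-user benefit, the user count) is a made-up placeholder, not an estimate I’d defend:

```typescript
// A rough expected-value sketch. Every number is a made-up placeholder;
// only the structure of the calculation matters.

const costToBob = 1;                // Bob's cost of answering, in arbitrary utility units
const benefitToMe = 100;            // direct marginal benefit to me (the ~100x claim)

const deltaPSuccess = 0.0001;       // (a) tiny bump in the chance the site takes off
const marginalBenefitPerUser = 10;  // (b) how much better the site is than the alternatives
const potentialUsers = 1_000_000;   // (c) number of potential users

// Expected benefit to future users: a * b, multiplied across all potential users
const expectedUserBenefit = deltaPSuccess * marginalBenefitPerUser * potentialUsers;

console.log(expectedUserBenefit);                             // 1000
console.log((benefitToMe + expectedUserBenefit) / costToBob); // 1100x Bob's cost
```

Even with a tiny probability bump, multiplying across a million potential users ends up dominating the calculation. That’s exactly the kind of large-magnitude reasoning we’re bad at doing intuitively.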
Quantifying Altruism
So let’s say that Bob made the decision to ignore my request to answer questions. And let’s say that the benefit to me would have been 100x the cost to Bob (without even getting into the benefit to potential users). I think that would mean that Bob cares about himself >= 100 times more than he cares about me. I’m not trying to get philosophical, but something along those lines is probably true.
Following this line of thought, perhaps you could quantify altruism? Let’s say that Bob had a preference ratio of 100:1 relative to me. Maybe you could quantify someone’s altruism by saying what their preference ratios are across all people. Like maybe Bob’s preference ratios look something like this:
- An ant: 10^100:1
- A monkey: 10^20:1
- Person in Africa: 1,000,000,000:1
- Me: 100:1
- Tier 2 friend: 20:1
- Close friend: 3:1
- Mom: 1:1
- Girlfriend: 1:2
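If you take this model literally, the decision rule is simple enough to write down. Here’s a minimal sketch; the ratios and the rule are assumptions for illustration, not measured psychology:

```typescript
// A toy model of preference ratios. A ratio of r:1 means "the benefit to
// this person has to exceed r times Bob's cost before Bob will act."

const bobsRatios: Record<string, number> = {
  ant: 1e100,
  monkey: 1e20,
  strangerFarAway: 1e9,
  me: 100,
  tier2Friend: 20,
  closeFriend: 3,
  mom: 1,
  girlfriend: 0.5, // 1:2, i.e. he cares about her twice as much as himself
};

// Under this model, Bob helps whenever the benefit to the other person
// exceeds his own cost scaled by his preference ratio for that person.
function wouldHelp(person: string, costToBob: number, benefitToPerson: number): boolean {
  return benefitToPerson > costToBob * bobsRatios[person];
}

console.log(wouldHelp("me", 1, 100));         // false: 100x doesn't clear a 100:1 ratio
console.log(wouldHelp("closeFriend", 1, 10)); // true: 10 > 1 * 3
```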
In reality, preference ratios probably depend on magnitude. Ie. if I was walking with Bob in a desert, and he was a little thirsty but I was really thirsty, he probably would let me take the drink. He probably wouldn’t wait until the point where I’m 100x more thirsty than he is, even though he cares about himself 100x more than me in other situations.
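One way to patch the model, and this functional form is entirely invented, is to let the effective ratio shrink as the other person’s stakes grow:

```typescript
// Assumed refinement: the effective preference ratio collapses toward 1:1
// as the other person's stakes grow. The formula is invented for illustration.

function effectiveRatio(baseRatio: number, othersStakes: number): number {
  return Math.max(1, baseRatio / Math.max(1, othersStakes));
}

console.log(effectiveRatio(100, 1));   // 100: an everyday favor, the full ratio applies
console.log(effectiveRatio(100, 50));  // 2: really thirsty in the desert
console.log(effectiveRatio(100, 500)); // 1: life-or-death, nearly 1:1
```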
System 1 and Revealed Preference Theory
I realize that I’ve been working under the assumption that revealed preference theory is true. Revealed preference theory basically says that your preferences are revealed by your actions. Like if you have a dollar and can choose between an apple and an orange, your choice would reveal which you prefer.
I’ve been using revealed preference theory to say, “Bob chose not to answer the questions, even though doing so would benefit me 100x more than not doing so benefits him. Thus, he must care about himself >= 100x more than me”. However, I know that this isn’t entirely true.
In reality, people don’t perform these cost-benefit calculations, especially for small decisions like this one. They use System 1: they respond to the intuitive impulses their brains produce, and they lean on rough heuristics.
So it wasn’t like Bob said, “I care about myself 500x more than Adam. This might benefit him like 50–100x more than me… but that isn’t nearly enough to get me to do this”. It’s more like, “First off, I haven’t seen or spoken to this kid in like 3 years. Who is he to request a favor of me on Facebook? Second, I don’t really care about what he’s doing, and I’ve got like 8 other tabs open that are more important to me right now. x-out.”
So, I don’t think that the intention was as bad as the action would imply. I think that intention probably matters a good deal more than action (although it’s obviously pretty complicated). But action still does matter. What comes to my mind is how the law differentiates between intentional and negligent crimes. When you kill someone on purpose, it’s realllly bad. But when you do so out of negligence… it’s still bad.
Am I a good person?
It’s an interesting question. People seem to think that I’m a pretty good person. The real answer is complicated though.
In short, I think my preference ratios are probably a good deal more altruistic than average. But in practice, the reason why people think I’m a good person is because I perform way more cost-benefit analyses than is typical. Ie. if someone else had the same preference ratios as me, they probably wouldn’t act as altruistically because they wouldn’t perform as many cost-benefit analyses.
For example (this is what got me thinking and led me to write this…), I just graduated from a coding bootcamp and am looking for jobs now. I have some leads and will probably make a decision in the next week or two. There’s an Angular meet-up a week from today. I sort of want to go to it.
Positives:
- It might be a good opportunity to network. Meaning, it might lead me to a job opportunity that I otherwise wouldn’t have been exposed to. However, this isn’t that big a deal because I almost have too much on my plate at this point.
- I do want to see Google’s headquarters.
- I’ve never been to a tech meet-up, and am curious to see what it’s all about.
- It’d be nice to see my friends from the bootcamp again.
Negatives:
- It’s a schlep. I live on Long Island, it’s like a 1.5–2 hour commute, it’s cold outside…
- It’d probably be a lot of small talk/mingling. I don’t like small talk/mingling.
- I probably won’t learn much from the talks/presentations.
- I’d have to be at least a half-hour late (ironically, it’s because I’m interviewing with Google…). The whole thing only lasts for like an hour and a half anyway. The commute-to-being-there ratio is pretty bad.
So all in all, I sort of want to go, but it isn’t that big a deal. Note: I have a ticket to the meet-up and tickets are scarce. I recognize this, and I recognize that there are probably people at my bootcamp who really want to go. And so the benefit to them would probably be at least a few multiples of what it is to me. Given my preference ratios, I figured I’d let everyone know that I have a ticket and that I sort of want to go, but that if anyone really wants to go, I’d give my ticket to them. It turns out that someone did really want to go, and so I gave my ticket to her. This made me happy for two reasons: 1) I got to help someone out, and 2) it’s comforting to me when things work out efficiently.
My Dark Side
Everyone thought that this was such a nice thing for me to do. Was it really? Your preference ratios don’t have to be that altruistic to make such a decision. So why does it make me such a nice person?
Consider another example that came to mind today. I had a doctor’s appointment this morning and they had to draw blood. I really don’t like having blood drawn. I don’t faint or anything… but I definitely don’t like it.
The point I want to get at is that I don’t donate blood, and that this might say some pretty bad things about my preference ratios. My aversion to having blood drawn isn’t that strong. It’s nothing compared to the benefits it’d provide someone who needs it. I hesitate to even postulate numbers here. Let’s be nice (to me) and say that the benefit to the dying person is 1,000,000x the cost to me. Am I saying, “I care about myself ONE MILLION times more than I care about you!”?
Maybe. Probably. I don’t know.
It’s not like that thought enters my mind in the moment. But I’m a smart enough person to recognize the implications, so by now it has entered my mind.
In the interest of dealing with reality, not fantasy, I’ll say it — there is definitely something to be said about my revealed preferences here. It’s probably true that I care about myself many orders of magnitude more than I care about these random people who are dying.
For the record, I don’t think I’m alone here. I think that this is the overwhelming norm, and that it’s what our biology urges, maybe even forces us to do. “Forces” is probably too strong a word; our brains are very adaptable to the environment. But I think it’d actually be pretty difficult to care that much about random people; it’s just not what we’re wired to do.
My Really Dark Side
Speaking of what we’re wired to do… we’re wired to seek happiness. Aka desirable experience. Consider the following thought experiment. I’ll use it to give an operational definition of happiness.
Imagine that you are a blank slate. You have no memory of ever experiencing anything, and you’re unconscious. Now imagine that you experience something. I find it helpful to think of it as an emotion to a certain magnitude — an emotion-magnitude. Let’s call this emotion-magnitude A.
Now imagine that you experience emotion-magnitude B. Having experienced two emotion-magnitudes, you could now compare them. You could say that A was preferable to B, or vice versa.
Imagine that you experience emotion-magnitude C. You could now sort A, B and C according to how preferable they are. If you keep experiencing emotion-magnitudes until you’ve experienced everything, you create an ordered list of how preferable each one is. These… are Your Preferences.
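In code terms, the thought experiment amounts to sorting experiences by a pairwise “which would you rather?” comparison. A minimal sketch, with placeholder emotion-magnitudes:

```typescript
// "Your Preferences" as a sorted list. The experiences and the preference
// function are placeholders; the point is that pairwise comparisons
// induce an ordering.

type EmotionMagnitude = { label: string; pleasantness: number };

const experienced: EmotionMagnitude[] = [
  { label: "A", pleasantness: 3 },
  { label: "B", pleasantness: -1 },
  { label: "C", pleasantness: 7 },
];

// prefer(x, y) < 0 means x is preferable to y (x sorts earlier)
function prefer(x: EmotionMagnitude, y: EmotionMagnitude): number {
  return y.pleasantness - x.pleasantness;
}

// Your Preferences: every emotion-magnitude experienced, most preferable first
const yourPreferences = [...experienced].sort(prefer);
console.log(yourPreferences.map(e => e.label)); // ["C", "A", "B"]
```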
So, what matters to people? In a way, that’s like asking, “what are your preferences?”. The answer is of course… Your Preferences. People prefer what they prefer, and I’m skeptical as to how malleable Preferences really are. In practice, acting instrumentally rationally (pursuing Your Preferences) is hard enough; trying to reshape Your Preferences and then achieve them seems very, very difficult.
But there’s an important point to make — Your Preferences are different than Your Goals. Your goals are what you would choose if given the choice. Imagine that you could choose between A and B. Say that A is Preferred to B (ie. it’d make you happier). You could still say, “I want B”. Your Goal doesn’t have to be to achieve Your Preferences.
This raises the question of what Your Goals should be. I haven’t figured this one out yet. When you tear the question apart, my understanding is that you end up just messing around with semantics and axioms. That “should” requires an axiom, and axioms are arbitrary.
Anyway, my temporary conclusions are that:
- Preferences are selfish.
- Goals are arbitrary.
Personally, My Goals happen to be almost as selfish as My Preferences.
- Take school for example. You get good grades, to get into a good college, to get a good job, to get lots of “stuff”… to be happy. It appears to me that regardless of the situation, when you look for the terminal goal by continuing to ask “why?”… it’s to be happy.
- And so in recognizing this, I admit that My Goal is to be happy.
- But I’m not sure. I notice confusion. Things like Truth and Altruism might actually matter (independent of my happiness). I don’t know, and it really never is relevant in my everyday life. But if I was faced with a situation where I had to reveal my true preferences… I’d hedge my bets, but still act pretty strongly as if Being Happy was my goal.
I used the term “Dark Side”, because my preference ratios get pretty selfish when things get really serious. Like once you hit the point where my life is being threatened… I start to act really really selfishly (in my thought experiments). In this sense, I’m way way below average as far as altruism goes. It’s not uncommon for people to sacrifice their life for things/people they really care about. Not me though. (Again, I’m not too too sure. Like I said, I’d hedge my bets a bit.)
So, am I a good person?
- In everyday life I think that I’m quite empathetic and considerate.
- In most scenarios, my utility function is quite intertwined with that of others. Ie. helping others makes me happy.
- In fact, my “life goal” is to have as big a positive impact on the world as I can. And I do make “sacrifices” to pursue this. Eg. I work hard when I could be playing. And I could pursue more stable and higher paying jobs instead of the “high risk” high reward startup path I’m on.
- However, I don’t care that much. There is a limit to how much I’m willing to sacrifice to pursue this goal of doing good.
- And helping others does/would make me happy. It’s the way I’m wired. So in the end, I really am just pursuing the path that I think will make me most happy.
- But is this all outweighed by my “dark side”? Is there some sort of point such that if you cross it… you’re just not a good person? Have I (in my thought experiments) crossed this line? I don’t know. I don’t believe in god, but for the sake of articulation, let me anthropomorphize: whoever this “god” guy is (you know, the guy that bootstrapped reality and set up its rules and conditions)… he gave us some pretty fucked up and complicated questions to deal with. Couldn’t he have been nicer?