"On caring" review

I give my thoughts on the post "On caring" by Nate Soares.

I recommend reading the post: On Caring

My relationship to the post

After participating in the Winter Applied Rationality Camp 2022, I learned about the Effective Altruism movement and about organizations such as LessWrong, 80,000 Hours, and Open Philanthropy. The ideas, goals, and values these organizations hold, and the ways they go about achieving those goals, have intrigued me ever since and kept me looking to answer my own doubts. I can say that I gained a better understanding of my place in the world.

This led me to discover vast amounts of information online and to hear and learn about ideas such as expected value in the context of decision-making, scope insensitivity, counterfactual impact, Bayesian reasoning, and Popperian critical rationalism/falsification. These and many more kept me intrigued and hooked on learning as much as I could.

I had the opportunity to read the post twice while applying for the Atlas Fellowship and once through the Non-trivial Fellowship. Each time I gleaned some insight I hadn't caught on previous readings.

The very first post I read about this philosophy was "On caring" by Nate Soares. Although it was the first thing I read, it has stuck with me the most.

Summary & Initial thoughts

The post pushes back against traditional ways of doing good and points to important changes needed in our attitude towards caring about pressing problems. It is one of my go-to posts to share, because it has a high probability of being internalized and well understood by the reader.

Us and numbers

The post urges us to act on pressing problems because it's the right thing to do, rather than guilting us into it. One of the first remarks the author makes is that we, here in the present, are responsible for the coming generations, of which there will be millions. He repeats that we should not feel guilty for being unable to save all of them, or for being unable to care about all of them, but we should try caring even when we don't feel it.

He also looks at another type of insensitivity: care insensitivity. Again, this is a concept that people often refuse to accept, and with good reason. We as a species are built such that we get satisfaction and "happy chemicals" in our brains from direct, visible impact and action. It's hard to get as good a feeling from helping someone in a developing country you will probably never meet as from helping someone immediately in your vicinity.

Some people intuit that our brains are sufficiently well-tuned tools that act almost perfectly rationally with respect to what we know. This is false in many cases. Often our brain takes shortcuts, "hides" information from us, and does things that aren't aligned with our actual goals but rather with some shallow pleasure-optimizing goal; for example, eating junk food caters purely to our survival instincts rather than our dietary goals.

Boxes

Our brains like to sort things into neat boxes, so when quantities get too large we can't allocate any label except "enormous". Sometimes when we're faced with huge numbers, our brain just chalks them up to a box labeled "HUGE NUMBER" and has no good system in place for dealing with even bigger numbers. I personally can't internalize the difference between a million and a billion. This can be extremely detrimental to our decision-making.

The post gives the example of people who were willing to pay nearly the same amount of money to save 2,000 birds from oil spills as to save 200,000 birds. This is astounding. It blew my mind the first time I read it. I immediately questioned a lot of my decision-making up to that point, because I felt guilty of similar poor reasoning. Had I been truly thoughtful about my decisions when dealing with big numbers? I was also frustrated that I had only now formalized a disturbing feeling I'd had for a while.

Although astounding, this reasoning fallacy is to be expected, since for most of human history we didn't have much use for such enormous numbers. Apples, rocks, etc. came in magnitudes of tens, hundreds, and at most thousands. The post highlights that people truly can't "care" about more than roughly 150 people at any point. Someone open to thinking critically about these numbers discovers they can't care about problems far beyond that scope, because we in fact can't!

The idea is that even though we can't internalize the concept of helping that many people, or feel the direct impact of our help, that does not mean we can't and shouldn't do the caring.

Impact is cumulative

An important aspect I want to explore here is not only the number of people we're helping but also the amount of help we're giving. Rarely can any one person significantly help a large number of people; such a person would have to be incredibly powerful and well-off, which is not the case for most. The next mental blockade that stops us from helping is the apparent insignificance of our impact.

If you donate $50 to starving people in Yemen, you might feed a family for a few days, but in the grand scheme of things that doesn't make a big difference. Instead, you should donate to organizations tasked with effectively helping those people, which can do far more with the same amount of money and have a greater positive impact in the long term. Here, "long term" is the key phrase. An example of a good long-term decision such an organization could make is to invest in a project that will help people in Yemen for years to come, rather than just giving them food for a few days. Examples include deworming, building wells, or teaching people how to farm sustainably.

In general, one of the greatest human reasoning biases is the conflict between short-term and long-term thinking. We often think that we can most significantly help people in the short term, but in reality, the long-term impact of our actions is much greater, if done intelligently. The post highlights that we should not be discouraged by the fact that we can't help everyone now, but rather focus on the impact we have tomorrow.

In our day-to-day lives we frequently encounter these dilemmas: inner mechanisms fighting against our System 1, e.g. "You shouldn't throw that piece of trash there, since it pollutes the environment."

Decaying impact

Although long-term impact is crucial, it's interesting to think about how we discount impact into the future. For example, in reinforcement learning (a subfield of machine learning) you usually discount future rewards by a factor $\gamma$ (gamma), which is typically set to 0.9 or 0.99.

Reward discounting in RL refers to reducing the influence of future rewards when evaluating the value of current actions. It's most commonly done by multiplying future rewards by powers of a discount factor. This means that the further away a reward is, the less impact it has on the decision-making process now. The same can be said for our decisions: the further away the impact, the less we care about it. But the $\gamma$ factor is not the same for all decisions, and for most people it is usually too small.

The cumulative reward (also called the return) at time step $t$ is defined as: \[G_t = r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \ldots = \sum_{k=0}^{\infty} \gamma^k r_{t+k}\] where $r_t$ is the reward at time step $t$ and $\gamma \in [0,1]$ is the discount factor (controls decay rate). If $\gamma=0$ then the agent only cares about the immediate reward, and if $\gamma=1$ then the agent cares about all future rewards equally.
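As a small sketch (the reward values below are made up for illustration, not from the post), the return formula above can be computed efficiently by working backwards through a finite reward sequence, using the recursion $G_t = r_t + \gamma G_{t+1}$:

```python
def discounted_return(rewards, gamma=0.99):
    """Compute the discounted return G_0 for a finite list of rewards.

    Works backwards through the sequence using G_t = r_t + gamma * G_{t+1},
    which is equivalent to summing gamma^k * r_k over all steps k.
    """
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# With gamma = 0.5: 1 + 0.5*1 + 0.25*1 = 1.75
print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # prints 1.75

# gamma = 0 keeps only the immediate reward; gamma = 1 weights all steps equally.
print(discounted_return([5.0, 9.0], gamma=0.0))  # prints 5.0
```

The backwards recursion avoids recomputing powers of $\gamma$ and mirrors how the return is actually bootstrapped in RL algorithms.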

Policy makers (in countries such as the US, UK, and similar) usually capture impact through QALYs (quality-adjusted life years), DALYs (disability-adjusted life years), and similar metrics. These are used to measure quality of life and the impact of health interventions, for example.

Takeaways

My main takeaway was: "Courage isn't about being fearless; it's about being able to do the right thing even if you're afraid," along with the notion to try anyway even when there is not enough money, time, or effort to do what must be done. It isn't about being afraid of something material, but about taking risks that could ultimately have a much greater counterfactual net positive impact on the world.

Another key takeaway is to be deeply analytical about all things both directly mathematical and not, such as everyday decisions. The post urges us to think critically about our decisions and the impact they have on the world, rather than just going with the flow or following our instincts.

More personal reflections

Reading the book "80,000 Hours" and getting to the section about being "incredibly lucky" to live in a developed country, I couldn't immediately relate. I live in Bosnia and Herzegovina, a country in the Balkans with less than one tenth the average GDP of the US. The situation isn't great by any means, but we're not 100 times less developed than the US like, say, Somalia or Eritrea. Yet the majority of people live paycheck to paycheck. Altruism is much riskier for people in such countries than anywhere else: the chance of war could ultimately make your last bit of spending the crucial decider of whether you survive the next year.

But the situation isn't too bad and I'm lucky enough to have made my own opportunities to live better without relying on my parents' income. I hope to be in a position where I could give a portion of my earnings back to other people and continue the cycle. Importantly, give where it matters.
