I give to charity — but never to people on the street. Is that wrong?

Your Mileage May Vary is an advice column offering you a new framework for thinking through your ethical dilemmas and philosophical questions. This unconventional column is based on value pluralism — the idea that each of us has multiple values that are equally valid but that often conflict with each other. Here is a Vox reader’s question, condensed and edited for clarity.

I think I have a duty to help people much poorer than me, and I give 10 percent of my salary to charities that I think are effective at preventing early death due to poverty. I also live in a city with a lot of visible homelessness, and am often solicited for money. My brain says that this is not an effective way to help people; the people asking might not be the neediest among the homeless in my city, and the people I’m sending malaria bednets and pills to are even needier. At the same time, I feel callous simply ignoring all these requests. What should I do?

Dear Would-Be Optimizer,

Nine times out of ten, when someone’s got an ethical dilemma, I think it’s because a couple of their core values are conflicting with each other. But you’re that tenth case. I say that because I don’t actually believe your question represents a battle royale between two different values. I think there’s one core value here — helping people — and one strategy that’s masquerading as a value.

That strategy is optimization. I can tell from your phrasing that you’re really into it. You don’t just want to help people — you want to help people as effectively as possible. Since extreme poverty is concentrated in developing countries, and since your dollar goes much further there than it would in your home country, your optimizing impulse is telling you to send your charity money abroad. 

Optimization started as a technique for solving certain math problems, but our society has elevated it to the status of a value — arguably one of the dominant values in the Western world. It’s been on the rise since the 1700s, when utilitarian thinkers seeded the idea that both economics and ethics should focus on maximizing utility (meaning, happiness or satisfaction): Just calculate how much utility each action would produce, and choose the one that produces the most.

You can see this logic everywhere in modern life — from work culture, with its emphasis on productivity hacks and agile workflows, to wellness culture, with its emphasis on achieving perfect health and optimal sleep. The mandate to “Live your best life!” is turbocharged by Silicon Valley, which urges us to quantify every aspect of ourselves with Fitbits, Apple Watches, and Oura Rings, because the more data you have on your body’s mechanical functions, the more you can optimize the machine that is you. 

Optimization definitely has its place, including in the world of charity. Some charitable organizations are much more effective than others at achieving the same goal. All else being equal, we don’t want to blow all our money on the wildly ineffective ones. Effective altruists, members of the utilitarian-flavored social movement that aims to do the most good possible, are fond of noting that the most effective charities out there produce 100 times more benefit than the average ones. Why not get the biggest bang for your buck?

The problem is that we’ve stretched optimization beyond its optimal limits. We try to apply it to everything. But not every domain in life can be optimized, at least not without compromising on some of our values. 

In your case, you’re trying to optimize how much you help others, and you believe that means focusing on the neediest. But “neediest” according to what definition? You could assume that financial need is the only type that counts, so you should focus first on lifting everyone out of extreme poverty, and only then help people in less dire straits. But are you sure that only the brute poverty level matters?

Consider an insight from the Jewish tradition. The ancient rabbis were exquisitely sensitive to the psychological needs of poor people, and they argued that these needs should also be taken into account. So they decreed that you shouldn’t only give poor people enough money to survive on — they need to have more than that so they themselves can give charity to others. As Rabbi Jonathan Sacks writes, “On the face of it, the rule is absurd. Why give X enough money so that he can give to Y? Giving to Y directly is more logical and efficient. What the rabbis understood, however, is that giving is an essential part of human dignity.” 

The rabbis also figured that those who used to be well-off but who fell into poverty might feel an especially acute sense of shame. So they suggested helping these people save face by offering them not just bare necessities, but also — when possible — some of the nicer things that graced their former lifestyles. In the Talmud, we hear about one rabbi who gave a newly poor person a fancy meal, and another who acted as the person’s servant for a day! Clearly, the ancient rabbis weren’t only aiming to alleviate poverty. They were also alleviating the shame that can accompany it.

The point is that there are many ways to help people and, because they’re so different, they don’t submit to direct comparison. Comparing poverty and shame is comparing apples to oranges; one can be measured in dollars, but the other can’t. Likewise, how can you ever hope to compare preventing malaria with alleviating depression? Saving lives versus improving them? Or saving the life of a kid versus saving the life of an adult? 

Yet if you want to optimize, you need to be able to run an apples-to-apples comparison — to calculate how much good different things do in a single currency, so you can pick the best option. But because helping people isn’t reducible to one thing — it’s lots of incommensurable things, and how to rank them depends on each person’s subjective philosophical assumptions — trying to optimize in this domain will mean you have to artificially simplify the problem. You have to pretend there’s no such thing as oranges, only apples.

And when you try to do that, an unfortunate thing happens. You end up rushing past all the unhoused people in your city and, as you put it, you “feel callous simply ignoring all these requests.” Ignoring these human beings comes at a cost, not only to them, but to you. It has a damaging effect on your moral conscience, which feels moved to help but is being told not to.

Even some leaders in effective altruism and the adjacent rationalist community recognize this as a problem and advise people not to silence that part of themselves. Rationalist Eliezer Yudkowsky, for example, says it’s okay to donate some money to causes that make us feel warm and fuzzy but that aren’t producing maximum utility. His advice is to “purchase fuzzies and utilons separately” — meaning, devote one pot of money to pet causes and another (much bigger) pot of money to the most cost-effective charities. You can, he says, get your warm fuzzies by volunteering at a soup kitchen and “let that be validated by your other efforts to purchase utilons.”

I would also suggest diversifying your giving portfolio, but not because I think you need to “validate” the warm fuzzies. It’s because of another value: integrity.

When 20th-century British philosopher and critic of utilitarianism Bernard Williams talked about integrity, he meant it in the literal sense of the word, which has to do with a person’s wholeness (think of related words like “integration”). He argued that moral agency does not sit in a contextless vacuum — it is always some specific person’s agency, and as specific people we have specific commitments. 

For example, a mother has a commitment to ensuring her kid’s well-being, over and above her general wish for all kids everywhere to be well. Utilitarianism says she has to consider everyone’s well-being equally, with no special treatment for her own kid — but Williams says that’s an absurd demand. It alienates her from a core part of herself, ripping her into pieces, wrecking her wholeness — her integrity. 

It sounds like that’s what you’re feeling when you pass a person experiencing homelessness and ignore them. Ignoring them makes you feel bad because it alienates you from the part of you that is moved by this person’s suffering — that sees the orange but is being told there are only apples. That core part of you is no less valuable than the optimizing part, which you liken to your “brain.” It’s not dumber or more irrational. It’s the part that cares deeply about helping people, and without it, the optimizing part would have nothing to optimize! 

So rather than trying to override it, I would encourage you to honor your wish to help in all its fullness. You won’t be able to run a direct apples-to-apples comparison, but that’s okay. Different types of help are each useful in their own way, and you can divvy up your budget among them, even though there’s no perfect formula to spit out the “optimal” allocations.

Diversifying your giving portfolio might look something like this: You keep a small amount of cash or gift cards on you to hand out directly to unhoused people you encounter. You put aside a larger amount to donate to a local or national charity with a strong track record. And you devote another amount to a highly effective charity abroad.

You might feel annoyed that there’s no universal mathematical formula that can tell you the best thing to do. If so, I get it. I want the magic formula too! But I know that desire is distinct from the core value here. Don’t let optimization eat the real value you hold dear. 

Bonus: What I’m reading

I recently read Optimal Illusions, a book by mathematician Coco Krumme that traces the roots of optimization’s overreach. As she puts it, “Over the past century, optimization has made an impressive epistemic land grab.”

When torn between competing moral theories, does it make sense to diversify your donations in proportion to how much you believe in each theory? Some philosophers argue against that view, but Michael Plant and coauthors defend it in this new paper.

This gorgeously written essay by anthropologist Manvir Singh introduced me to the term “cooperating without looking” (or, because it’s a New Yorker essay, “coöperating without looking”). This “tendency to willfully ignore costs and benefits when helping others” — to help without calculating what you’ll gain from the altruistic act — is “a key feature of both romantic love and principled behavior.” When we help this way, people trust us more.
