Birmingham, Ala., Shooting Kills Four, Injures Dozens

Multiple gunmen shot into a group of people in a popular entertainment district, the local police said. The authorities are still looking for the shooters.
nytimes.com
Submit a question for Jennifer Rubin about her columns, politics, policy and more
Submit your questions for Jennifer Rubin’s mail bag newsletter and live chat.
washingtonpost.com
Biden snaps at staffers during summit after forgetting speaker was Indian prime minister: ‘Who am I introducing next?’
A confused-looking President Biden fumbled and snapped at staffers after forgetting which world leader he was supposed to introduce at a press conference for a Quad summit Saturday. Biden, 81, was supposed to call India’s Prime Minister Narendra Modi to the stage, but appeared to be unsure which of the three visiting heads of government...
nypost.com
Prince Andrew Is Relieved New Movie Has ‘Sunk Without a Trace’
Max Mumby/Indigo/Getty Images

Friends of Prince Andrew have said he is relieved that the new Amazon film about his disastrous interview with the BBC has “sunk without trace,” while friends of King Charles said the new three-part mini-series is “water off a duck’s back to him.”

The series, entitled A Very Royal Scandal, portrays Andrew as a foul-mouthed, entitled and arrogant man, a portrayal which some former staff have told media outlets (including this one) is accurate.

Read more at The Daily Beast.
thedailybeast.com
Giants vs. Browns: Preview, prediction, what to watch for
An inside look at Sunday's Giants-Browns NFL Week 3 matchup at Huntington Bank Field:
nypost.com
Why an Alaska Island Is Using Peanut Butter and Black Lights to Find a Rat That Might Not Exist
The anxiety on St. Paul Island is the latest attempt to keep non-native rats off some of the most remote, ecologically diverse Alaska islands.
time.com
No on Proposition 36. California shouldn't revive the disastrous war on drugs
Proposition 36 won't end homelessness or crime waves. It will only refill prisons, push more people to the streets and erase criminal justice reform progress.
latimes.com
Underdog Fantasy Promo Code NYPBONUS Awards up to $1K Bonus Cash for NFL Week 3 Action on Sunday
Use the Underdog Fantasy promo code NYPBONUS for up to $1,000 in bonus cash from a 50% deposit match offer.
nypost.com
Kathy Bates’ Superb ‘Matlock’ Reboot Has Fall TV’s Biggest Twist
Brooke Palmer/CBS

The discourse about TV reboots has already run its course multiple times in our current era of streaming and prestige television. There are a lot of them. It happens to almost every hit show. Sometimes it works, most of the time it doesn’t. They better not try it with The Sopranos. And so on.

The most successful are often the ones that come out many decades after their source material was popular, allowing the reboot to insulate itself from comparisons to the original. CBS’s Matlock, in which Kathy Bates takes over for Andy Griffith as the titular lawyer, comes out almost 40 years after the original show’s premiere, and yet, in a creative twist, repeatedly makes sly nods to its remake status.

Bates sheepishly introduces herself as “Madeline Matlock, like the TV show,” when she sneaks her way into the offices of Jacobson Moore, hungry for a job after losing everything to her late husband. She quickly proves herself resourceful, using her status as an elderly person to her advantage.

Read more at The Daily Beast.
thedailybeast.com
Chicago gangbangers face off against newly arrived Venezuelan migrants: ‘City is going to go up in flames’
Tyrone Muhammad, a former gang enforcer, has formed a group called Ex-Cons for Trump because he feels Democrats have failed inner-city black people for too long.
nypost.com
Finally: The First Book from Pedro Almodóvar
Over the course of his 50 years in cinema, Spanish director Pedro Almodóvar has been offered countless deals from publishers to write his memoir – but he has always rejected them, as the two-time Oscar-winner explains in his new book “The Last Dream” (HarperVia). “I’ve been asked to write my autobiography more than once, and I’ve...
nypost.com
From Norway to New York, electric ferries are taking over the globe
Coming this fall, residents in Stockholm won’t have to endure the hour-long commute by car or train between Ekerö, a popular suburb, and central Stockholm, home to the historic City Hall. Instead, they can jump on a 30-passenger ferry and make the journey in half the time, all while helping to cut down on carbon...
nypost.com
Yuval Noah Harari on whether democracy and AI can coexist
Israeli historian and writer Yuval Noah Harari speaks at the Global Artificial Intelligence Summit Forum on July 9, 2017, in Hangzhou in China’s Zhejiang Province. | Visual China Group via Getty Images

If the internet age has anything like an ideology, it’s that more information and more data and more openness will create a better and more truthful world. That sounds right, doesn’t it? It has never been easier to know more about the world than it is right now, and it has never been easier to share that knowledge than it is right now. But I don’t think you can look at the state of things and conclude that this has been a victory for truth and wisdom.

What are we to make of that? Why hasn’t more information made us less ignorant and more wise?

Yuval Noah Harari is a historian and the author of a new book called Nexus: A Brief History of Information Networks from the Stone Age to AI. Like all of Harari’s books, this one covers a ton of ground but manages to do it in a digestible way. It makes two big arguments that strike me as important, and I think they also get us closer to answering some of the questions I just posed.

The first argument is that every system that matters in our world is essentially the result of an information network. From currency to religion to nation-states to artificial intelligence, it all works because there’s a chain of people and machines and institutions collecting and sharing information.

The second argument is that although we gain a tremendous amount of power by building these networks of cooperation, the way most of them are constructed makes them more likely than not to produce bad outcomes, and since our power as a species is growing thanks to technology, the potential consequences of this are increasingly catastrophic.

I invited Harari on The Gray Area to explore some of these ideas. Our conversation focused on artificial intelligence and why he thinks the choices we make on that front in the coming years will matter so much.
As always, there’s much more in the full podcast, so listen and follow The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you find podcasts. New episodes drop every Monday. This conversation has been edited for length and clarity.

Sean Illing

What’s the basic story you wanted to tell in this book?

Yuval Noah Harari

The basic question that the book explores is if humans are so smart, why are we so stupid? We are definitely the smartest animal on the planet. We can build airplanes and atom bombs and computers and so forth. And at the same time, we are on the verge of destroying ourselves, our civilization, and much of the ecological system. And it seems like this big paradox that if we know so much about the world and about distant galaxies and about DNA and subatomic particles, why are we doing so many self-destructive things?

And the basic answer you get from a lot of mythology and theology is that there is something wrong in human nature and therefore we must rely on some outside source like a god to save us from ourselves. And I think that’s the wrong answer, and it’s a dangerous answer because it makes people abdicate responsibility.

I think that the real answer is that there is nothing wrong with human nature. The problem is with our information. Most humans are good people. They are not self-destructive. But if you give good people bad information, they make bad decisions.
And what we see through history is that yes, we become better and better at accumulating massive amounts of information, but the information isn’t getting better. Modern societies are as susceptible as Stone Age tribes to mass delusions and psychosis.

Too many people, especially in places like Silicon Valley, think that information is about truth, that information is truth. That if you accumulate a lot of information, you will know a lot of things about the world. But most information is junk. Information isn’t truth. The main thing that information does is connect. The easiest way to connect a lot of people into a society, a religion, a corporation, or an army, is not with the truth. The easiest way to connect people is with fantasies and mythologies and delusions. And this is why we now have the most sophisticated information technology in history and we are on the verge of destroying ourselves.

Sean Illing

The boogeyman in the book is artificial intelligence, which you argue is the most complicated and unpredictable information network ever created. A world shaped by AI will be very different, will give rise to new identities, new ways of being in the world. We have no idea what the cultural or even spiritual impact of that will be. But as you say, AI will also unleash new ideas about how to organize society. Can we even begin to imagine the directions that might go?

Yuval Noah Harari

Not really. Because until today, all of human culture was created by human minds. We live inside culture. Everything that happens to us, we experience it through the mediation of cultural products — mythologies, ideologies, artifacts, songs, plays, TV series. We live cocooned inside this cultural universe. And until today, everything, all the tools, all the poems, all the TV series, all the mythologies, they are the product of organic human minds. And now increasingly they will be the product of inorganic AI intelligences, alien intelligences.
Again, the acronym AI traditionally stood for artificial intelligence, but it should actually stand for alien intelligence. Alien, not in the sense that it’s coming from outer space, but alien in the sense that it’s very, very different from the way humans think and make decisions because it’s not organic.

To give you a concrete example, one of the key moments in the AI revolution was when AlphaGo defeated Lee Sedol in a Go tournament. Now, Go is a board strategy game, like chess but much more complicated, and it was invented in ancient China. In many places, it’s considered one of the basic arts that every civilized person should know. If you are a Chinese gentleman in the Middle Ages, you know calligraphy and how to play some music and you know how to play Go. Entire philosophies developed around the game, which was seen as a mirror for life and for politics. And then an AI program, AlphaGo, in 2016, taught itself how to play Go and it crushed the human world champion.

But what is most interesting is the way [it] did it. It deployed a strategy that initially all the experts said was terrible because nobody plays like that. And it turned out to be brilliant. Tens of millions of humans played this game, and now we know that they explored only a very small part of the landscape of Go. So humans were stuck on one island and they thought this is the whole planet of Go. And then AI came along and within a few weeks it discovered new continents. And now also humans play Go very differently than they played it before 2016. Now, you can say this is not important, [that] it’s just a game. But the same thing is likely to happen in more and more fields.

If you think about finance, finance is also an art. The entire financial structure that we know is based on the human imagination. The history of finance is the history of humans inventing financial devices. Money is a financial device, bonds, stocks, ETFs, CDOs, all these strange things are the products of human ingenuity.
And now AI comes along and starts inventing new financial devices that no human being ever thought about, ever imagined. What happens, for instance, if finance becomes so complicated because of these new creations of AI that no human being is able to understand finance anymore? Even today, how many people really understand the financial system? Less than 1 percent? In 10 years, the number of people who understand the financial system could be exactly zero because the financial system is the ideal playground for AI. It’s a world of pure information and mathematics.

AI still has difficulty dealing with the physical world outside. This is why every year they tell us, Elon Musk tells us, that next year you will have fully autonomous cars on the road and it doesn’t happen. Why? Because to drive a car, you need to interact with the physical world and the messy world of traffic in New York with all the construction and pedestrians and whatever. Finance is much easier. It’s just numbers. And what happens if in this informational realm where AI is a native and we are the aliens, we are the immigrants, it creates such sophisticated financial devices and mechanisms that nobody understands them?

Sean Illing

So when you look at the world now and project out into the future, is that what you see? Societies becoming trapped in these incredibly powerful but ultimately uncontrollable information networks?

Yuval Noah Harari

Yes. But it’s not deterministic, it’s not inevitable. We need to be much more careful and thoughtful about how we design these things. Again, understanding that they are not tools, they are agents, and therefore down the road are very likely to get out of our control if we are not careful about them. It’s not that you have a single supercomputer that tries to take over the world. You have these millions of AI bureaucrats in schools, in factories, everywhere, making decisions about us in ways that we do not understand.
Democracy is to a large extent about accountability. Accountability depends on the ability to understand decisions. If … when you apply for a loan at the bank and the bank rejects you and you ask, “Why not?,” and the answer is, “We don’t know, the algorithm went over all the data and decided not to give you a loan, and we just trust our algorithm,” this to a large extent is the end of democracy. You can still have elections and choose whichever human you want, but if humans are no longer able to understand these basic decisions about their lives, then there is no longer accountability.

Sean Illing

You say we still have control over these things, but for how long? What is that threshold? What is the event horizon? Will we even know it when we cross it?

Yuval Noah Harari

Nobody knows for sure. It’s moving faster than I think almost anybody expected. Could be three years, could be five years, could be 10 years. But I don’t think it’s much more than that. Just think about it from a cosmic perspective. We are the product as human beings of 4 billion years of organic evolution. Organic evolution, as far as we know, began on planet Earth 4 billion years ago with these tiny microorganisms. And it took billions of years for the evolution of multicellular organisms and reptiles and mammals and apes and humans. Digital evolution, non-organic evolution, is millions of times faster than organic evolution. And we are now at the beginning of a new evolutionary process that might last thousands and even millions of years. The AIs we know today in 2024, ChatGPT and all that, they are just the amoebas of the AI evolutionary process.

Sean Illing

Do you think democracies are truly compatible with these 21st-century information networks?

Yuval Noah Harari

Depends on our decisions. First of all, we need to realize that information technology is not something on [a] side. It’s not democracy on one side and information technology on the other side.
Information technology is the foundation of democracy. Democracy is built on top of the flow of information.

For most of history, there was no possibility of creating large-scale democratic structures because the information technology was missing. Democracy is basically a conversation between a lot of people, and in a small tribe or a small city-state, thousands of years ago, you could get the entire population or a large percentage of the population, let’s say, of ancient Athens in the city square to decide whether to go to war with Sparta or not. It was technically feasible to hold a conversation. But there was no way that millions of people spread over thousands of kilometers could talk to each other. There was no way they could hold the conversation in real time. Therefore, you have not a single example of a large-scale democracy in the pre-modern world. All the examples are very small scale.

Large-scale democracy became possible only after the rise of the newspaper and the telegraph and radio and television. And now you can have a conversation between millions of people spread over a large territory. So democracy is built on top of information technology. Every time there is a big change in information technology, there is an earthquake in democracy which is built on top of it. And this is what we’re experiencing right now with social media algorithms and so forth. It doesn’t mean it’s the end of democracy. The question is, will democracy adapt?

Sean Illing

Do you think AI will ultimately tilt the balance of power in favor of democratic societies or more totalitarian societies?

Yuval Noah Harari

Again, it depends on our decisions. The worst-case scenario is neither because human dictators also have big problems with AI. In dictatorial societies, you can’t talk about anything that the regime doesn’t want you to talk about. But actually, dictators have their own problems with AI because it’s an uncontrollable agent.
And throughout history, the [scariest] thing for a human dictator is a subordinate [who] becomes too powerful and that you don’t know how to control. If you look, say, at the Roman Empire, not a single Roman emperor was ever toppled by a democratic revolution. Not a single one. But many of them were assassinated or deposed or became the puppets of their own subordinates, a powerful general or provincial governor or their brother or their wife or somebody else in their family. This is the greatest fear of every dictator. And dictators run the country based on fear. Now, how do you terrorize an AI? How do you make sure that it’ll remain under your control instead of learning to control you? I’ll give two scenarios which really bother dictators. One simple, one much more complex.

In Russia today, it is a crime to call the war in Ukraine a war. According to Russian law, what’s happening with the Russian invasion of Ukraine is a special military operation. And if you say that this is a war, you can go to prison. Now, humans in Russia, they have learned the hard way not to say that it’s a war and not to criticize the Putin regime in any other way. But what happens with chatbots on the Russian internet? Even if the regime vets and even produces itself an AI bot, the thing about AI is that AI can learn and change by itself. So even if Putin’s engineers create a regime AI and then it starts interacting with people on the Russian internet and observing what is happening, it can reach its own conclusions. What if it starts telling people that it’s actually a war? What do you do? You can’t send the chatbot to a gulag. You can’t beat up its family. Your old weapons of terror don’t work on AI. So this is the small problem.

The big problem is what happens if the AI starts to manipulate the dictator himself. Taking power in a democracy is very complicated because democracy is complicated. Let’s say that five or 10 years in the future, AI learns how to manipulate the US president.
It still has to deal with a Senate filibuster. Just the fact that it knows how to manipulate the president doesn’t help it with the Senate or the state governors or the Supreme Court. There are so many things to deal with. But in a place like Russia or North Korea, an AI only needs to learn how to manipulate a single extremely paranoid and unself-aware individual. It’s quite easy.

Sean Illing

What are some of the things you think democracies should do to protect themselves in the world of AI?

Yuval Noah Harari

One thing is to hold corporations responsible for the actions of their algorithms. Not for the actions of the users, but for the actions of their algorithms. If the Facebook algorithm is spreading a hate-filled conspiracy theory, Facebook should be liable for it. If Facebook says, “But we didn’t create the conspiracy theory. It’s some user who created it and we don’t want to censor them,” then we tell them, “We don’t ask you to censor them. We just ask you not to spread it.”

And this is not a new thing. You think about, I don’t know, the New York Times. We expect the editor of the New York Times, when they decide what to put at the top of the front page, to make sure that they are not spreading unreliable information. If somebody comes to them with a conspiracy theory, they don’t tell that person, “Oh, you are censored. You are not allowed to say these things.” They say, “Okay, but there is not enough evidence to support it. So with all due respect, you are free to go on saying this, but we are not putting it on the front page of the New York Times.” And it should be the same with Facebook and with Twitter. And they tell us, “But how can we know whether something is reliable or not?” Well, this is your job.
If you run a media company, your job is not just to pursue user engagement, but to act responsibly, to develop mechanisms to tell the difference between reliable and unreliable information, and only to spread what you have good reason to think is reliable information. It has been done before. You are not the first people in history who had a responsibility to tell the difference between reliable and unreliable information. It’s been done before by newspaper editors, by scientists, by judges, so you can learn from their experience. And if you are unable to do it, you are in the wrong line of business. So that’s one thing. Hold them responsible for the actions of their algorithms.

The other thing is to ban the bots from the conversations. AI should not take part in human conversations unless it identifies as an AI. We can imagine democracy as a group of people standing in a circle and talking with each other. And suddenly a group of robots enter the circle and start talking very loudly and with a lot of passion. And you don’t know who are the robots and who are the humans. This is what is happening right now all over the world. And this is why the conversation is collapsing. And there is a simple antidote. The robots are not welcome into the circle of conversation unless they identify as bots.

There is a place, a room, let’s say, for an AI doctor that gives me advice about medicine on condition that it identifies itself. Similarly, if you go on Twitter and you see that a certain story goes viral, there is a lot of traffic there, you also become interested. “Oh, what is this new story everybody’s talking about?” Who is everybody? If this story is actually being pushed by bots, then it’s not humans. They shouldn’t be in the conversation. Again, deciding what are the most important topics of the day. This is an extremely important issue in a democracy, in any human society. Bots should not have this ability to determine what stories dominate the conversation.
And again, if the tech giants tell us, “Oh, but this infringes freedom of speech” — it doesn’t, because bots don’t have freedom of speech. Freedom of speech is a human right, which should be reserved for humans, not for bots.

Listen to the rest of the conversation and be sure to follow The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you listen to podcasts.
vox.com
Why a Fossil Fuel Phase-Out Is the Only Way to Protect Future Generations
The longer we delay climate action, the worse it will be.
time.com
Erik Menendez Blasts Ryan Murphy’s Netflix Series ‘Monsters’ For Being “Inaccurate”
He accused the showrunner of having "bad intent" and presenting “ruinous character portrayals.”
nypost.com
Arsenal vs. Manchester City odds, picks: Premier League predictions, best bets Sunday
The biggest match of the Premier League season to date comes our way Sunday morning as four-time defending champion Manchester City hosts back-to-back runner-up Arsenal.
nypost.com
An Escape from the Front Line in Ukraine
An excerpt from one of the most ambitious stories in The Times Magazine’s history.
nytimes.com
Three keys for Liberty in first-round matchup vs. Dream
Here are three keys for the Liberty heading into their first-round playoff series vs. the Dream. Game 1 is Sunday at 1 p.m.:
nypost.com
Six Sunday Reads
Spend time with stories about taking a break from dating, why people aren’t having kids, the insurrectionists next door, and more.
theatlantic.com
Health system to pay $65 million after hackers leaked nude patient photos
Lehigh Valley Health refused to pay a ransom to hackers. Now its hefty payout over a patient lawsuit is illuminating the high financial stakes of protecting especially sensitive information.
washingtonpost.com
Ryan Reynolds says parents are ‘soft’ today in comparison to the ‘improvised militia’ he experienced
"Deadpool" star Ryan Reynolds said that parents today are softer with their children – him included – than they were when he was growing up.
foxnews.com
Dodgers pitcher Anthony Banda wants to make clear how he broke his hand
Dodgers pitcher Anthony Banda clarifies what happened when he broke a bone in his pitching hand, calling the incident "very embarrassing."
latimes.com
Mike Huckabee has role in new 'God's Not Dead' film, reveals why people of faith can support Trump
Former Arkansas Gov. Mike Huckabee has a supporting actor role in a new film series based on faith — saying it's very timely for today. He shared thoughts about the film, faith and more.
foxnews.com
Why Vinod Khosla Is All In on AI
Investor Vinod Khosla spoke with TIME about the future of AI and his thoughts on regulation.
time.com
Indigenous Peoples Are Key to Navigating the Climate Crisis. We Deserve a Seat at the Table
Indigenous Peoples are often overlooked when it comes to global climate solutions. We deserve a say.
time.com
I give to charity — but never to people on the street. Is that wrong?
Your Mileage May Vary is an advice column offering you a new framework for thinking through your ethical dilemmas and philosophical questions. This unconventional column is based on value pluralism — the idea that each of us has multiple values that are equally valid but that often conflict with each other. Here is a Vox reader’s question, condensed and edited for clarity.

I think I have a duty to help people much poorer than me, and I give 10 percent of my salary to charities that I think are effective at preventing early death due to poverty. I also live in a city with a lot of visible homelessness, and am often solicited for money. My brain says that this is not an effective way to help people; the people asking might not be the neediest among the homeless in my city, and the people I’m sending malaria bednets and pills to are even needier. At the same time, I feel callous simply ignoring all these requests. What should I do?

Dear Would-Be Optimizer,

Nine times out of ten, when someone’s got an ethical dilemma, I think it’s because a couple of their core values are conflicting with each other. But you’re that tenth case.

I say that because I don’t actually believe your question represents a battle royale between two different values. I think there’s one core value here — helping people — and one strategy that’s masquerading as a value. That strategy is optimization.

I can tell from your phrasing that you’re really into it. You don’t just want to help people — you want to help people as effectively as possible. Since extreme poverty is concentrated in developing countries, and since your dollar goes much further there than it would in your home country, your optimizing impulse is telling you to send your charity money abroad.

Optimization started as a technique for solving certain math problems, but our society has elevated it to the status of a value — arguably one of the dominant values in the Western world.
It’s been on the rise since the 1700s, when utilitarian thinkers seeded the idea that both economics and ethics should focus on maximizing utility (meaning, happiness or satisfaction): Just calculate how much utility each action would produce, and choose the one that produces the most. You can see this logic everywhere in modern life — from work culture, with its emphasis on productivity hacks and agile workflows, to wellness culture, with its emphasis on achieving perfect health and optimal sleep. The mandate to “Live your best life!” is turbocharged by Silicon Valley, which urges us to quantify every aspect of ourselves with Fitbits, Apple Watches, and Oura Rings, because the more data you have on your body’s mechanical functions, the more you can optimize the machine that is you.

Optimization definitely has its place, including in the world of charity. Some charitable organizations are much more effective than others trying to achieve the same goal. All things being equal, we don’t want to blow all our money on the wildly ineffective ones. Effective altruists, members of the utilitarian-flavored social movement that aims to do the most good possible, are fond of noting that the most effective charities out there actually produce 100 times more benefit than the average ones. Why not get the biggest bang for your buck?

The problem is that we’ve stretched optimization beyond its optimal limits. We try to apply it to everything. But not every domain in life can be optimized, at least not without compromising on some of our values.

In your case, you’re trying to optimize how much you help others, and you believe that means focusing on the neediest.
But “neediest” according to what definition of needy? You could assume that financial need is the only type that counts, so you should focus first on lifting everyone out of extreme poverty, and only then help people in less dire straits. But are you sure that only the brute poverty level matters?

Consider an insight from the Jewish tradition. The ancient rabbis were exquisitely sensitive to the psychological needs of poor people, and they argued that these needs should also be taken into account. So they decreed that you shouldn’t only give poor people enough money to survive on — they need to have more than that so they themselves can give charity to others. As Rabbi Jonathan Sacks writes, “On the face of it, the rule is absurd. Why give X enough money so that he can give to Y? Giving to Y directly is more logical and efficient. What the rabbis understood, however, is that giving is an essential part of human dignity.”

The rabbis also figured that those who used to be well-off but who fell into poverty might feel an especially acute sense of shame. So they suggested helping these people save face by offering them not just bare necessities, but also — when possible — some of the nicer things that graced their former lifestyles. In the Talmud, we hear about one rabbi who gave a newly poor person a fancy meal, and another who acted as the person’s servant for a day! Clearly, the ancient rabbis weren’t only aiming to alleviate poverty. They were also alleviating the shame that can accompany it.

The point is that there are many ways to help people and, because they’re so different, they don’t submit to direct comparison. Comparing poverty and shame is comparing apples to oranges; one can be measured in dollars, but the other can’t. Likewise, how can you ever hope to compare preventing malaria with alleviating depression? Saving lives versus improving them? Or saving the life of a kid versus saving the life of an adult?
Yet if you want to optimize, you need to be able to run an apples-to-apples comparison — to calculate how much good different things do in a single currency, so you can pick the best option. But helping people isn’t reducible to one thing — it’s lots of incommensurable things, and how to rank them depends on each person’s subjective philosophical assumptions — so trying to optimize in this domain means artificially simplifying the problem. You have to pretend there’s no such thing as oranges, only apples.

And when you try to do that, an unfortunate thing happens. You end up rushing past all the unhoused people in your city and, as you put it, you “feel callous simply ignoring all these requests.” Ignoring these human beings comes at a cost, not only to them but to you. It has a damaging effect on your moral conscience, which feels moved to help but is being told not to.

Even some leaders in effective altruism and the adjacent rationalist community recognize this as a problem and advise people not to silence that part of themselves. The rationalist Eliezer Yudkowsky, for example, says it’s okay to donate some money to causes that make us feel warm and fuzzy but that aren’t producing maximum utility. His advice is to “purchase fuzzies and utilons separately” — meaning, devote one pot of money to pet causes and another (much bigger) pot to the most cost-effective charities. You can, he says, get your warm fuzzies by volunteering at a soup kitchen and “let that be validated by your other efforts to purchase utilons.”

I would also suggest diversifying your giving portfolio, but not because I think you need to “validate” the warm fuzzies. Instead, it’s because of another value: integrity.
When the 20th-century British philosopher and critic of utilitarianism Bernard Williams talked about integrity, he meant it in the literal sense of the word, which has to do with a person’s wholeness (think of related words like “integration”). He argued that moral agency does not sit in a contextless vacuum — it is always some specific person’s agency, and as specific people we have specific commitments.

For example, a mother has a commitment to ensuring her kid’s well-being, over and above her general wish for all kids everywhere to be well. Utilitarianism says she has to consider everyone’s well-being equally, with no special treatment for her own kid — but Williams says that’s an absurd demand. It alienates her from a core part of herself, ripping her into pieces, wrecking her wholeness — her integrity.

It sounds like that’s what you’re feeling when you pass a person experiencing homelessness and ignore them. Ignoring them makes you feel bad because it alienates you from the part of you that is moved by this person’s suffering — that sees the orange but is being told there are only apples. That core part of you is no less valuable than the optimizing part, which you liken to your “brain.” It’s not dumber or more irrational. It’s the part that cares deeply about helping people, and without it, the optimizing part would have nothing to optimize!

So rather than trying to override it, I would encourage you to honor your wish to help in all its fullness. You won’t be able to run a direct apples-to-apples comparison, but that’s okay. Different types of help are useful in their own ways, and you can divvy up your budget between them, even though there’s no perfect formula to spit out the “optimal” allocations.

Diversifying your giving portfolio might look something like this: You keep a small amount of cash or gift cards on you, which you hand out to unhoused people you encounter directly.
You put aside a larger amount to donate to a local or national charity with a strong track record. And you devote another amount to a highly effective charity abroad.

You might feel annoyed that there’s no universal mathematical formula that can tell you the best thing to do. If so, I get it. I want the magic formula too! But I know that desire is distinct from the core value here. Don’t let optimization eat the real value you hold dear.

Bonus: What I’m reading

I recently read Optimal Illusions, a book by mathematician Coco Krumme that traces the roots of optimization’s overreach. As she puts it, “Over the past century, optimization has made an impressive epistemic land grab.”

When torn between competing moral theories, does it make sense to diversify your donations in proportion to how much you believe in each theory? Some philosophers argue against that view, but Michael Plant and coauthors defend it in a new paper.

A gorgeously written essay by anthropologist Manvir Singh introduced me to the term “cooperating without looking” (or, because it’s a New Yorker essay, “coöperating without looking”). This “tendency to willfully ignore costs and benefits when helping others” — to help without calculating what you’ll gain from the altruistic act — is “a key feature of both romantic love and principled behavior.” When we help this way, people trust us more.
vox.com