
A possible Trump re-election: What would it mean for science?

The Republican presidential candidate denies climate change and otherwise shows little command of the facts. What should science expect if he is re-elected?
Read full article on: tagesspiegel.de
Singer-songwriter Huey Lewis on seeing his songs come to life on stage
Singer-songwriter Huey Lewis joins "CBS Mornings" to talk about his new Broadway musical, "The Heart of Rock and Roll," and working through hearing loss.
1m
cbsnews.com
How a woke military has created a recruiting crisis — and put Americans in danger
Fox News host Pete Hegseth tackles the issue in his new book “The War on Warriors: Behind the Betrayal of the Men and Women Who Keep Us Free.”
nypost.com
These Anti-Wrinkle Serums Soften Fine Lines and Combat Sun Damage
Scouted selects products independently. If you purchase something from our posts, we may earn a small commission. As we navigate the ever-evolving landscape of ‘anti-aging’ skincare products, searching for the right active serum to suit your specific skin goals can be challenging. Whether your aim is to soften fine lines and crow’s feet, remove UV-induced hyperpigmentation, or smooth out texture and the appearance of enlarged pores, there’s a targeted formula for everything nowadays. Of course, not all anti-aging serums are created equal. To help you narrow down the best one for you (and your skin type), we’ve rounded up some of our favorite skin-rejuvenating serums to help correct and prevent multiple signs of aging on the skin. From potent retinoid-forward serums to damage-erasing (and preventing) vitamin C formulas, these serums will help you achieve a radiant, youthful complexion. Read more at The Daily Beast.
thedailybeast.com
Man Shocks With 100-Burrito Meal Prep System That 'Changed the Game'
"For this specific video, it was one marathon of a day," Tom Walsh told Newsweek. "I made a little over 100 burritos."
newsweek.com
Donald Trump Rails Against Sentencing Date His Own Lawyer Agreed To
Defense asked for a "mid to late July" sentencing date, court transcripts show.
newsweek.com
Selena Gomez says she chooses to be friends with ‘levelheaded people’: ‘Girls are mean’
“It’s a cliché, but girls are mean,” the "Love On" singer, 31, said. “I love having levelheaded people around that couldn’t give two f--ks about what I do."
nypost.com
Trump Begs Supreme Court for Help as He Awaits Hush-Money Sentencing
Donald Trump has called on the Supreme Court to weigh in on his hush-money case as his sentencing looms next month. The former president, who was convicted on 34 felony counts of falsifying business records, is set to be sentenced on July 11, four days before the beginning of the Republican National Convention in Milwaukee. He has vowed to appeal his history-making conviction on charges related to his efforts to unlawfully influence the 2016 election with a scheme to cover up a hush-money payment made to porn star Stormy Daniels. “The ‘Sentencing’ for not having done anything wrong will be, conveniently for the Fascists, 4 days before the Republican National Convention,” Trump wrote on his Truth Social platform on Sunday evening. Read more at The Daily Beast.
thedailybeast.com
Social Security Update: Why You Won't Be Getting a Payment This Week
Because of the number of weeks in the month, there are slight changes to the usual payment schedule in June.
newsweek.com
Florida Condo Owners in Race Against Time Before Hurricane Season
A new program will offer Florida condo associations the opportunity to get public funding to harden their buildings as hurricane season kicks off.
newsweek.com
One in Three Republicans Now Think Donald Trump Was Wrong Candidate Choice
A new poll has revealed changing attitudes toward Trump among his Republican supporters.
newsweek.com
Michael Douglas visits Israel to show solidarity as war in Gaza continues
Actor Michael Douglas paid a solidarity visit to an Israeli kibbutz that was hit hard in the Oct. 7 Hamas attack that sparked Israel's war against the Islamic militant group.
cbsnews.com
Mohamed Hadid claims he’s the ‘victim’ in bitter feud with lender after filing fifth bankruptcy
Financially strapped real estate developer Mohamed Hadid -- the celebrity dad of supermodels Gigi and Bella Hadid -- claimed he's the "victim" of a predatory lender after filing for bankruptcy over a prized California property, The Post has learned.
nypost.com
Family sues butcher who slaughtered pet pigs when he went to wrong house
Natalie and Nathan Gray say Port Orchard, Wash., butcher Jonathan Hines “recklessly” caused their family harm. Hines said he apologized to the Grays.
washingtonpost.com
Will ‘boots on the ground’ be the next red line crossed in Ukraine?
Until now, the West has ruled out sending troops to Ukraine. France’s Emmanuel Macron has other ideas.
washingtonpost.com
There's a man, a woman and a dog. But don't call 'Colin From Accounts' wacky
Harriet Dyer and Patrick Brammall created, star in and produce the Australian romantic comedy.
latimes.com
Aileen Cannon Playing 'Dangerous Game' in Donald Trump Trial: Attorney
Former President Donald Trump has been making statements that could put FBI lives at risk, said Joyce Vance.
newsweek.com
American complacency is Trump’s secret weapon
Popular culture instills the idea that good ultimately triumphs over evil. Real life begs to differ.
washingtonpost.com
China Claims Arrest of Spies Turned by US Ally
China's Ministry of State Security is continuing a monthslong campaign of spy wars against the West.
1 h
newsweek.com
Women Turn Up at Airport for Flight, Make Embarrassing Realization
Social media users were amused by the scene in the viral clip, with one wondering "how does this even happen."
1 h
newsweek.com
The campaign dichotomy in one newsletter 🙂
In today’s edition … Hunter Biden’s trial set to start today … Sen. Menendez’s wife remains key figure in trial even in her absence.
1 h
washingtonpost.com
Europeans Are Watching the U.S. Election Very, Very Closely
American allies see a second Trump term as all but inevitable. “The anxiety is massive.”
1 h
theatlantic.com
Elon Musk, America’s richest immigrant, is angry about immigration. Can he influence the election?
The most financially successful immigrant in the U.S. — the third-richest person in the world — has frequently repeated his view that it is difficult to immigrate to the U.S. legally but “trivial and fast” to enter illegally.
1 h
latimes.com
Op-comic: What one doctor learned as a guinea pig for AI
I was skeptical of bringing artificial intelligence into the exam room, but it promised to reduce my screen time and shift the focus back to the patients.
1 h
latimes.com
What would the great George Balanchine do? L.A. ballet director thinks he has the answers
It's provocative to aspire to slip into the mind of one of ballet’s great masters, but Lincoln Jones sees it as a progression in his long devotion to George Balanchine’s art.
1 h
latimes.com
They cut their water bill by 90% and still have a 'showstopping' L.A. garden
A Los Angeles couple tore out 1,150 square feet of thirsty lawn, replacing it with a showstopping mix of low-water California native plants.
1 h
latimes.com
The U.S. Drought Monitor is a critical tool for the arid West. Can it keep up with climate change?
New research raises questions about the familiar map's ability to address long-term drying trends, including persistent dry spells across the American West.
1 h
latimes.com
Forget the trendy juice bars. This is the place to go for green juice
TK
1 h
latimes.com
Santa Monica sci-fi museum controversy: A child porn conviction, delays and angry ‘Star Trek’ fans
Questions surround Santa Monica’s Sci-Fi World as staff and volunteers quit and claim that its founder, who was convicted of possessing child pornography, remains active in the museum.
1 h
latimes.com
After 13 years, a homeless Angeleno broke into her old, vacant home and wants to stay forever
Maria Merritt has faced addiction, death of loved ones and other tragedies. A publicly owned home in El Sereno she had, lost, then regained gives her the strength to go on.
1 h
latimes.com
The transformative joys (and pains) of painting your own house
I self-impose and prolong my chaotic paint experiments because collectively, they form a promise: that one day I’ll be able to live happily in the house I’ve always wanted.
1 h
latimes.com
'Resident Alien' star Alan Tudyk is in no hurry to return to his home planet
'Mork and Mindy,' Looney Tunes and Mel Brooks all helped shape the actor as a young person.
1 h
latimes.com
WeHo Pride parade-goers talk joy and inclusivity, trans rights and a thread of fear
Threats against queer people didn't quell the joyful celebration at this year's West Hollywood Pride Parade.
1 h
latimes.com
Who should be the next LAPD chief? Public shrugs as city asks for input
As the Police Commission continues its citywide listening tour to hear about what residents want to see in the department's next leader, many of the stops have seen a low turnout.
1 h
latimes.com
Newsom finally gets moving on fixing California's homeowner insurance crisis
California Gov. Gavin Newsom has proposed urgency legislation to expedite the hiking of homeowner insurance rates. It’s about time. Because the alternative for many is no insurance at all.
1 h
latimes.com
Letters to the Editor: A lifeguard who can't tolerate the LGBTQ+ Pride flag shouldn't be a lifeguard
The lifeguard so upset by the presence of an LGBTQ+ Pride flag that he's suing L.A. County might want to find another line of work.
1 h
latimes.com
Letters to the Editor: California's new electricity billing scheme discourages conservation. That's crazy
A flat fee of $24.15 on most utility customers. Reduced per-kilowatt-hour rates. How is this supposed to encourage power conservation?
1 h
latimes.com
Biden and Trump share a faith in import tariffs, despite inflation risks
Both candidates’ trade plans focus on tariffs on imported Chinese goods even as economists warn they could lead to higher prices.
1 h
washingtonpost.com
Caltrans' lapses contributed to 10 Freeway fire, Inspector General finds
For over 15 years, Caltrans failed to enforce safety at its property where a fire broke out last year, shutting down the 10 Freeway.
1 h
latimes.com
13 essential LGBTQ+ television shows (and a parade) to watch during Pride Month
Here’s a guide to queer TV shows, from 'Dead Boy Detectives' to 'Veneno' to 'The L Word,' to make your Pride Month merry.
1 h
latimes.com
Senate Democrats to unveil package to protect IVF as party makes reproductive rights push
The package comes as Senate Majority Leader Chuck Schumer has outlined plans for the chamber to put reproductive rights "front and center" this month.
1 h
cbsnews.com
Hunter Biden's federal gun trial to begin today
Hunter Biden faces three felony charges related to his purchase and possession of a gun while he was a drug user.
1 h
cbsnews.com
Home buyers beware: Buying a property with unpermitted structures can lead to hefty fines
California realtors advise buyers to understand a property's history and structural condition before finalizing a purchase, saving them the headache and cost of future fixes.
1 h
latimes.com
The internet peaked with “the dress,” and then it unraveled
If you were on the internet on February 26, 2015, you saw The Dress. Prompted by a comment on Tumblr, BuzzFeed writer Cates Holderness posted a simple low-quality image of a striped dress, with the headline “What Colors Are This Dress?” The answers: blue and black or white and gold. The URL: “help-am-i-going-insane-its-definitely-blue.” Do you really need me to tell you what happened next?

In just a few days, the BuzzFeed post got 73 million page views, inspiring debate across the world. Seemingly every news outlet (including this one) weighed in on the phenomenon. How was it possible that this one image divided people so neatly into two camps? You either saw — with zero hint of variability — the dress as black and blue, or white and gold. There was no ambiguity. Only a baffling sense of indignation: How could anyone see it differently?

Looking back, the posting of “the dress” represented the high-water mark of “fun” on the mid-2010s internet. Back then, the whole media ecosystem was built around social sharing of viral stories. It seemed like a hopeful path for media. BuzzFeed and its competitors Vice and Vox Media (which owns this publication) were once worth billions of dollars. The social-sharing ecosystem made for websites that would, for better or worse, simply ape each other’s most successful content, hoping to replicate a viral moment. It also fostered an internet monoculture. Which could be fun! Wherever you were on the internet, whatever news site you read, the dress would find YOU. It was a shared experience. As were so many other irreverent moments (indeed, the exact same day as the dress, you probably also saw news of two llamas escaping a retirement community in Arizona).

Since 2015, the engines of that monoculture have sputtered. Today, BuzzFeed’s news division no longer exists; the company’s stock is trading at around 50 cents a share (it debuted at about $10). Vice has stopped publishing on its website and laid off hundreds of staffers. Vox Media is still standing (woo!), but its reported value is a fraction of what it used to be (sigh).

The dress brought us together. It was both a metaphor and a warning about how our shared sense of reality can so easily be torn apart. Whether you saw gold and white or black and blue, the meme revealed a truth about human perception. Psychologists call it naive realism. It’s the feeling that our perception of the world reflects its physical truth. If we perceive a dress as looking blue, we assume the actual pigments inside the dress generating the color are blue. It’s hard to believe it could be any other color. But it’s naive because this is not how our perceptual systems work. I’ve written about this a lot at Vox. The dress and other viral illusions like the similarly ambiguous “Yanny” vs. “Laurel” audio reveal the true nature of how our brains work. We’re guessing. As I reported in 2019:

Much as we might tell ourselves our experience of the world is the truth, our reality will always be an interpretation. Light enters our eyes, sound waves enter our ears, chemicals waft into our noses, and it’s up to our brains to make a guess about what it all is.
Perceptual tricks like … “the dress” … reveal that our perceptions are not the absolute truth, that the physical phenomena of the universe are indifferent to whether our feeble sensory organs can perceive them correctly. We’re just guessing.

Yet these phenomena leave us indignant: How could it be that our perception of the world isn’t the only one?

Scientists still haven’t figured out precisely why some people see the dress in one shade and some see it in another. Their best guess so far is that different people’s brains are making different assumptions about the quality of the light falling on the dress. Is it in bright daylight? Or under an indoor light bulb? Your brain tries to compensate for the different types of lighting to make a guess about the dress’s true color.

Why would one brain assume daylight and another assume indoor bulbs? A weird clue has arisen in studies that try to correlate the color people assume the dress to be with other personal characteristics, like how much time they spend in daylight. One paper found a striking correlation: The time you naturally like to go to sleep and wake up — called a chronotype — could be correlated with dress perception. Night owls, or people who like to go to bed really late and wake up later in the morning, are more likely to see the dress as black and blue. Larks, a.k.a. early risers, are more likely to see it as white and gold.

In 2020, I talked to Pascal Wallisch, a neuroscientist at New York University who has researched this topic. He thinks the correlation is rooted in life experience: Larks, he hypothesizes, spend more time in daylight than night owls. They’re more familiar with it. So when confronted with an ill-lit image like the dress, they are more likely to assume it is being bathed in bright sunlight, which has a lot of blue in it, Wallisch points out. As a result, their brains filter it out. Night owls, he thinks, are more likely to assume the dress is under artificial lighting, and filtering that out makes the dress appear black and blue. (The chronotype measure, he admits, is a little crude: Ideally, he’d want to estimate a person’s lifetime exposure to daylight.)

Other scientists I talked to were less convinced this was the full answer (there are other potential personality traits and lifetime experiences that could factor in as well, they said). Even if there’s more to this story than chronotype, there’s an enduring lesson here. Our differing life experiences can set us up to make different assumptions about the world than others. Unfortunately, as a collective, we still don’t have a lot of self-awareness about this process. “Your brain makes a lot of unconscious inferences, and it doesn’t tell you that it’s an inference,” Wallisch told me. “You see whatever you see. Your brain doesn’t tell you, ‘I took into account how much daylight I’ve seen in my life.’”

Moments like the dress are a useful check on our interpretations. We need intellectual humility to ask ourselves: Could my perceptions be wrong? The dress was an omen because, in many ways, since 2015, the internet has become a worse and worse place to do this humble gut check (not that it was ever a great place for it). It’s become more siloed. Its users are seemingly less generous to one another (not that they were ever super generous!). Shaming and mocking are dominant conversational forms (though, yes, irreverence and fun can still be had). This all matters because our shared sense of reality has fractured in so many important ways.
There were huge divides on how people perceived the pandemic, the vaccines that arose to help us through it, the results of the 2020 election. Not all of this is due to the internet, of course. A lot of factors influence motivated reasoning and motivated perceptions, the idea that we see what we want to see. There are leaders and influencers who stoke the flames of conspiracy and misinformation. But in a similar way to how our previous experiences can motivate us to see a dress in one shade or another, they can warp our perception of current events, too.

Though, I will admit: Maybe my perception of a more siloed internet is off! It’s hard to gauge. Algorithm-based feeds today are more bespoke than ever before. I can’t know for sure whether my version of the social internet is like anyone else’s. My TikTok feed features a lot of people retiling their bathrooms. That can’t possibly be the average user’s experience, right? I have no idea if we’re all seeing the same things — and even less of an idea if we’re interpreting them the same way.

More chaos is coming, I fear. AI tools are making it easier and easier to manipulate images and videos. Every day, it gets easier to generate content that plays into people’s perceptual biases and confirms their prior beliefs — and easier to warp perceptions of the present and possibly even change memories of the past.

The dress represents, arguably, a simpler time on the internet, but also offers a mirror to some of our most frustrating psychological tendencies. What I wonder all the time is: What piece of content is out there, right now, generating different perceptual experiences in people, but we don’t even know we’re seeing it differently?
1 h
vox.com
How the self-care industry made us so lonely
Where were you the first time you heard the words “bath bomb?” What about “10-step skin care routine?” Perhaps you have, at some point, canceled plans in order to “unplug,” drink some tea, and take a bit of “me time.” Maybe you’ve ordered an assortment of candles meant to combat anxiety and stress or booked a rage room to exorcise your demons.  A warped notion of self-care has been normalized to the point where everyday activities like washing yourself and watching TV are now synonymous with the term. Generally understood as the act of lovingly nursing one’s mind and body, a certain kind of self-care has come to dominate the past decade, as events like the 2016 election and the Covid pandemic spurred collective periods of anxiety layered on top of existing societal harms. It makes sense that interest in how to quell that unease has steadily increased.  More from This Changed Everything The last 10 years, explained The internet peaked with “the dress,” and then it unraveled Serial transformed true crime — and the way we think about criminal justice Brands stepped forward with potential solutions from the jump: lotions, serums, journals, blankets, massagers, loungewear, meditation apps, tinctures. Between 2014 and 2016, Korean beauty exports to the US more than doubled. The Girls’ Night In newsletter was founded in 2017, with a mission to share “recommendations and night-in favorites … all focused on a topic that could use a bigger spotlight right now: downtime.” YouTube was soon saturated with videos of sponsored self-care routines. By 2022, a $5.6 trillion market had sprung to life under the guise of helping consumers buy their way to peace.  As the self-care industry hit its stride in America, so too did interest in the seemingly dire state of social connectedness. In 2015, a study was published linking loneliness to early mortality. In the years that followed, a flurry of other research illuminated further deleterious effects of loneliness: depression, poor sleep quality, impaired executive function, accelerated cognitive decline, cardiovascular disease, higher risk of coronary heart disease and stroke. US Surgeon General Vivek Murthy classified the prevalence of loneliness as an epidemic. By 2018, half of the country reported feeling lonely at least sometimes, according to a Cigna survey, a number that has only grown.  There is no singular driver of collective loneliness globally. A confluence of factors like smartphones, social media, higher rates of anxiety and depression, vast inequality, materialism, and jam-packed schedules have been identified as potentially spurring the crisis. But one practice designed to relieve us from the ills of the world — self-care, in its current form — has pulled us away from one another, encouraging solitude over connection.  How self-care became a commercial product The self-care of decades past was decidedly less individualistic and capitalist. In the 1950s, self-care was a term used in health care contexts: activities patients and their families could perform to promote their health and well-being separate from the care of medical professionals. “To me, self-care is a subjective and dynamic process aimed at maintaining health and preventing diseases or managing diseases when they appear,” says Michela Luciani, an assistant professor of nursing at the University of Milano-Bicocca. In this context, self-care can encompass everything from getting annual medical screenings to eating well.  
In the years that followed, the Black Panthers stressed the importance of caring for oneself as a political act amid the civil rights movement. Through community efforts like free food programs for children and families as well as free health clinics, the Black Panthers focused on collective well-being. “[This] image of caring for your people and self-care,” says Karla D. Scott, a professor of communication at Saint Louis University, “evoked the African phrase ‘I am because we are’: ubuntu.” For Black activists, partaking in rejuvenating rituals was crucial in order to survive within and to fight against racist, classist, and sexist systems. This approach to self-care is especially evident in the works of bell hooks and Audre Lorde, who is often referenced in the context of self-care: “Caring for myself is not self-indulgence,” she wrote, “it is self-preservation, and that is an act of political warfare.” This definition of self-care emphasizes the importance of engaging with others. Not only do we receive support from family, friends, and neighbors, but communing itself is a form of care. People report high levels of well-being while spending time with their friends, romantic partners, and children. Social interaction with trusted companions has been found to help stave off depression. Even chatting with acquaintances and strangers promotes happiness and belonging. Buy a new eyeshadow, a bullet journal, Botox, a vacation to fill the need for care that never seems to abate By the late 1960s, wellness entered the lexicon. Beyond simply avoiding illness, “wellness” as a concept centered the pursuit of a higher level of existence: a more emotional, spiritual, physical, and intellectual way of living. A wellness resource center opened in California in 1975; nearly a decade later, a wellness-focused newsletter from the University of California Berkeley helped legitimize the concept. This model of well-being features individuals, not communities, moving toward their “ever-higher potential of functioning,” as posited by Halbert L. Dunn, who helped popularize the contemporary idea of wellness. (Dunn also includes the “basic needs of man” — communication, fellowship with other people, and love — as integral to wellness.)  The ethos of wellness soon became synonymous with a sullied version of self-care, one that mapped neatly to the rising fitness culture of the ’80s through the early 2000s and the concept of “working on yourself.”  The Great Recession of 2008 marked a shift in how Americans viewed their health and well-being. In her book Fit Nation: The Gains and Pains of America’s Exercise Obsession, Natalia Mehlman Petrzela argues that fitness became “a socially acceptable form of conspicuous consumption” during this time when social media and boutique fitness classes allowed people to broadcast their lavish spending in pursuit of their health. Gwyneth Paltrow’s wellness brand Goop was founded the same year, espousing occasionally unfounded health advice and recommending (and selling) “aspirational products which embody and encourage restriction, control, and scarcity,” according to one academic paper. Commoditized self-care was here to stay, reaching mass saturation right around the time Trump was elected to office. 
Young people, disillusioned by polarized politics, saddled with astronomical student loan debt, and burned out by hustle culture, turned to skin care, direct-to-consumer home goods, and food and alcohol delivery — aggressively peddled by companies eager to capitalize on consumers’ stressors. While these practices may be restorative in the short term, they fail to address the systemic problems at the heart of individual despair.  Thus, a vicious, and expensive, cycle emerges: Companies market skin care products, for example, to prevent the formation of fine lines, supposedly a consequence of a stressful life. Consumers buy the lotions to solve this problem, lather themselves in solitude, and feel at peace for a little while. Once the anxiety, the exhaustion, and the insufficiency creeps in again, as it inevitably does, the routine begins anew. Buy a new eyeshadow, a bullet journal, Botox, a vacation to fill the need for care that never seems to abate.  Because buying things does not solve existential dread, we are then flooded with guilt for being unable to adequately tend to our minds and bodies. We just have to self-care harder, and so the consumerism masquerading as a practice that can fix something broken becomes another rote to-do list item. Individualistic approaches to wellness promote isolation This isn’t to say that solitary activities can’t be effective forms of self-care. Many people are easily depleted by social interaction and take solace in regular quiet evenings alone; solo time is indeed integral to a balanced social regimen. Conversely, people who are constantly surrounded by others can still feel lonely. However, when companies market genuinely vitalizing practices as individualized “solutions” to real problems (like burnout) requiring structural change (such as affordable child care), we increasingly look inward. “I worry that because of this ideology we live in, rugged individualism,” Scott says, “it lands in a way where folks feel that they’re deficient. It is deflating.” Pooja Lakshmin, a psychiatrist and clinical assistant professor at George Washington University, calls this self-soothing capitalist version of self-care “faux self-care” in her best-selling book Real Self-Care: A Transformative Program For Redefining Wellness. Faux self-care manifests in two ways: I deserve to splurge on Doordash and binge Netflix because I’m so burned out and I’m going to push myself so hard in this spin class because I need to be the best. Secluding oneself by summoning sustenance to our doorstep comes at the expense of the worker earning paltry wages to deliver you that food. The doors of our apartments quite literally separate those who can afford to “care” for themselves and those who cannot. While this form of restoration appears to be more isolating, the hyper-competitive version of faux self-care is equally as confining, Lakshmin says. “They’re not engaging or present,” she says. “They’re competing with themselves.”  While many surveys and reports outline a recent rise in loneliness, researchers lack sufficient longitudinal data to definitively say whether people are lonelier now than in the past, says Luzia Heu, an assistant professor in interdisciplinary social sciences at Utrecht University. However, people in wealthier societies have more opportunities to spend time alone now, she says, whether through remote work, living alone, or participating in solitary hobbies. “We spend more time alone and we are more isolated,” Heu says. 
“That is where people immediately assume that loneliness must also have increased a lot.” Whether or not loneliness has grown compared to historical accounts, recent statistics show that individuals are reporting higher levels of loneliness over the last decade, especially in the wake of the pandemic. “Self-care transformed into self-obsession”  America’s loneliness epidemic is multifaceted, but the rise of consumerist self-care that immediately preceded it seems to have played a crucial role in kicking the crisis into high gear — and now, in perpetuating it. You see, the me-first approach that is a hallmark of today’s faux self-care doesn’t just contribute to loneliness, it may also be a product of it. Research shows self-centeredness is a symptom of loneliness. But rather than reaching out to a friend, we focus on personalized self-care and wonder why we might not feel fulfilled. Another vicious cycle. “Instead of self-care being this mechanism to take care of yourself so that you can then show up for others,” says psychologist Maytal Eyal and co-founder of women’s health company Gather, “self-care transformed into self-obsession.”  The wellness industry wouldn’t be as lucrative if it didn’t prey on our insecurities. It must imagine new insufficiencies for us to fixate on, new elixirs and routines — like colostrum and 75 Hard — simultaneously meant to improve your mind and body by keeping them occupied in solitude.  That isolation is detrimental to the self and to society. When people are lonely, they tend to distrust others — they’re on the lookout for social threats and expect rejection. Being so disconnected and suspicious of their neighbors, their communities, and institutions could impact their propensity to cooperate with others and act in prosocial ways. A lack of social belonging has been linked to a person’s increased likelihood of voting for populist candidates. Similarly, social rejection can lead one toward extremist views. This is especially good news for political figures who wish to sow discontent and chaos. A secluded electorate is an unengaged one. Those in positions of power have it in their best interests to keep workers, neighbors, and citizens separate, self-centered, and distracted. As Scott mentioned, the tradition of American individualism doesn’t help. When people are told they are solely responsible for their own happiness and well-being, they increasingly seek it out via solitary means. If they’re lonely to begin with — if they feel disappointed in their relationships or don’t feel understood — they have a stronger tendency to withdraw, says Heu, the social and behavioral science professor. Perhaps they seek out a form of commodified self-care to cope, but “it’s not something that tackles the cause of your loneliness,” Heu says. “For many people, the cause of the loneliness will be something else.” For women, to whom self-care is most aggressively targeted, the source of their loneliness may be tied to the demands of their lives. Even when they earn the same as their male partners, women in heterosexual relationships still do the lion’s share of housework, according to a Pew Research Center study. Women also spend more time on caregiving than their husbands, the survey found. An expensive candle won’t ease the burdens of home life or allow for more time to connect with peers outside of the household.  
The narrative that the only one we can depend on, and thus should prioritize, is ourselves perpetuates the idea of the personal above the collective — and reinforces the notion of self-sufficiency. Self-care is individual, says Luciani, the nursing professor: No one else can force us to get enough sleep or go to the gym. But it shouldn’t be individualistic. “Self-care is influenced by the support from others,” she says, like a partner who cooks dinner and cares for the children while you lie down with a headache, or a friend who advocates for you at medical appointments. Communal self-care means creating space for others to tend to their needs and supporting them when necessary.  Despite the powerful forces working against us, we can reclaim self-care. We can choose to ignore compelling advertisements promising quick fixes. We can partake in revitalizing communal practices, whether they be a yoga class or a movie night with friends. We can avoid blaming ourselves for feeling stressed and scared and despondent in a violent, tumultuous, and unjust world. We can get to the root of our loneliness. True self-care involves connecting with others. Showing up for a friend in need or exchanging a few kind words with a stranger is more fulfilling than a face mask anyway. 
1 h
vox.com
The last 10 years, explained
The past decade was filled with so many unexpected turning points: moments big and small that we now understand to be truly important. These events ignited real change, warned of a not-so-far-off future, or had surprising effects that we couldn’t have imagined at the time. We started thinking about this particular time period because Vox just happened to turn 10 this year, but 2014 saw much more than the birth of our news organization. It was an incredibly divisive year kicking off an incredibly divisive decade. This was the year the police killings of Michael Brown and Eric Garner mainstreamed the Black Lives Matter movement; this was also the year of Gamergate, a harassment campaign that became entwined with the ascendant alt-right. It was a wildly online year, too, that set all sorts of attitudes and behaviors in motion (see: BLM and Gamergate, but also The Fappening and Kim Kardashian’s special brand of virality, below). Our reporters set out to explain the last 10 years of indelible moments — the good, the bad, the fascinating — in a series of pieces you can find across the site. If you want to understand how we got to where we are in 2024, read on. When nude leaks went from scandal to sex crime It’s been trendy lately to talk about how differently we now treat women, particularly famous women, than we did in the aughts. We talk about how today, we understand that it was wrong for tabloids to harass Britney Spears and publish all those upskirt photos and ask teen pop stars if their boobs were real on live TV.  There’s a specific moment, though, when we saw that much-remarked-upon evolution tip into reality, the purity culture of the 2000s coming up against the feminist outrage of the 2010s and crumbling.  More from This Changed Everything How the self-care industry made us so lonely The “racial reckoning” of 2020 set off an entirely new kind of backlash 10 big things we think will happen in the next 10 years The grossly named Fappening occurred on August 31, 2014, when one hacker’s stash of nearly 500 celebrity nudes (including Jennifer Lawrence, then at the height of her fame) leaked out to the mainstream internet. They became the fodder for a thousand op-eds about what was just beginning to be called revenge porn. (Ten years later, 2014’s cutting-edge term is now considered inaccurate, putting too much emphasis on the intent of the perpetrator and trivializing the severity of the crime being committed.) The previous decade had a playbook in place for talking about leaked photos of naked stars. You talked about them as something titillating for you, the viewer, to look at without apology, and something shameful for the woman (it was always a woman) pictured to apologize for.  For some media outlets, it seemed only natural to continue the playbook of the 2000s into 2014. “#JenniferLawrence phone was hacked her #nude pics leaked Check them out in all their gloriousness,” tweeted Perez Hilton, publicizing a post that reproduced the uncensored pictures of Lawrence.  But instead of getting the traffic windfall he might have expected, Perez was slammed with outrage across social media. He had to apologize for his post and replace it with a censored version. As Hilton and his cohort scrambled to catch up, the rest of the media was allying itself fiercely on the side of the hacking victims, denouncing anyone who looked at the leaked nudes. That included outlets that had previously covered every nipslip and upskirt photo to hit the internet with panting eagerness.  
“We have it so easy these days,” the pop culture website Complex had mused in 2012 in a roundup of recent celeb nude leaks. “Who do you want to see naked?”  When the Fappening happened two years later, Complex changed its mind. “Consider this,” the website declared. “These women, regardless of their public persona, are entitled to privacy and to express their sexuality however they wish. It’s their basic human right. These women have lives, too.” It’s hard to say exactly what swung the discourse quite so hard against the hackers this time around. Perhaps it was the ubiquity of camera phones, which had made nudes so inescapable: that feeling that it could happen to you. Perhaps it was because the media at the time was obsessed with Jennifer Lawrence, like everyone else was, and they wanted to be on her side. Perhaps the collective hive mind had just decided the time had come for feminism to trend upward. Whatever the reason, the press had now established a new narrative it could use to talk about sex crimes in the social media era, especially sex crimes that involved famous and beloved actresses. Three years later, it would put that knowledge to use to break a series of stories about Harvey Weinstein as the decade-old Me Too movement re-energized itself. Me Too saw reputational losses and criminal charges wielded against powerful men who for decades had been able to get away with sexual violence with impunity. It was able to do that because of what we all learned from The Fappening. —Constance Grady A fringe, racist essay foretold the fate of a MAGAfied Republican Party In 2016, a then-minor conservative writer named Michael Anton wrote what would become the defining case for electing Donald Trump. In Anton’s view, a Clinton victory would doom the country to collapse — primarily, albeit not exclusively, due to “the ceaseless importation of Third World foreigners with no tradition of, taste for, or experience in liberty.” Whatever Trump’s faults, he alone stood in the way of national suicide. Therefore, all true conservatives must get behind him. This sort of rhetoric may seem normal now: the kind of thing you hear every day from Trump and his deputies in the conquered Republican Party. At the time, it was immensely controversial — so much so that Anton originally published it under a pseudonym (Publius Decius Mus). But it became so influential on the pro-Trump right that Anton would be tapped for a senior post in President Trump’s National Security Council. The essay’s emergence as the canonical case for Trumpism marked a turning point: the moment when the conservative movement gave into its worst impulses, willing to embrace the most radical forms of politics in the name of stopping social change. The anti-establishment Trumpers have become the establishment The title of Anton’s essay, “The Flight 93 Election,” points to its central conceit. United Airlines Flight 93 was the one flight on September 11 that did not hit its ultimate target, crashing in a field in Pennsylvania thanks to a passenger uprising. Anton argued that Americans faced a choice analogous to that of Flight 93’s passengers: either “charge the cockpit” (elect Trump) or “die” (elect Hillary).  Anton spends much of his essay castigating the conservative movement — what he calls “Conservatism, Inc” or the “Washington Generals” of politics — for refusing to acknowledge that immigration has made the electoral stakes existential. Trump “alone,” per Anton, “has stood up to say: I want to live. I want my party to live. 
I want my country to live. I want my people to live. I want to end the insanity.” The racism in Anton’s view of “Third World foreigners” is unmistakable. Yet there is no doubt that his basic theses are now widespread among the Republican Party and conservative movement. The anti-establishment Trumpers have become the establishment. Anton’s essay was ahead of the curve, clearly articulating where the movement was heading under Trump. “The Flight 93 Election” marked the moment in which the unstated premises of the conservative movement’s most radical wings came out into the open. That those premises are now widely shared goes to show what the movement has become — and why Anton, and many others like him, would later rationalize an attempt to overturn an American election. —Zack Beauchamp The number that made the extinction crisis real Scientists have known for decades that plants and animals worldwide are in peril — the tigers and frogs, wildflowers and beetles. But it wasn’t until recently that the true gravity of the problem, dubbed the biodiversity crisis, started sinking in with the public.  That shift happened largely thanks to a single number, published in 2019.  In spring of that year, an intergovernmental group of scientists dedicated to wildlife research, known as IPBES, released a report that found that roughly one million species of plants and animals are threatened with extinction. In other words, much of the world’s flora and fauna is at risk of disappearing for good.  “The health of ecosystems on which we and all other species depend is deteriorating more rapidly than ever,” Robert Watson, IPBES’s director for strategic development and former chair, said when the report was published. “We are eroding the very foundations of our economies, livelihoods, food security, health and quality of life worldwide.” Extinction is far from the only important metric for measuring the health of the planet. Some scientists argue that it obscures other signs of biodiversity loss, such as shrinking wildlife populations, that typically occur long before a species goes extinct.  Yet this number marked an evolution in the public’s understanding of biodiversity loss.  Extinction is an easy concept to grasp, and it’s visceral. And so the number — calculated based on estimates of the total number of species on Earth, and how threatened different groups of them are — hit especially hard. It not only raised awareness but inspired an unprecedented wave of conservation action.  World leaders have since used the IPBES number to justify major efforts to protect nature, including a historic global deal, agreed on by roughly 190 countries in 2022, to halt the decline of wildlife and ecosystems. It has also been cited by government hearings, state resolutions, corporate actions, and hundreds of scientific papers — not to mention countless news reports.  The concept of biodiversity loss is vague. This number made it concrete, and more urgent than ever. —Benji Jones One state’s chilling ban was the beginning of the end for abortion access in America In May 2019, Alabama banned almost all abortions. It was the most aggressive abortion law passed by a state in decades, and clearly flouted the protections set forth in Roe v. Wade.  With no exceptions for rape or incest, the Alabama law hit a new level of restrictiveness amid a slate of state abortion bans passed in 2018 and 2019. 
These measures marked a major change in anti-abortion strategy: After 10 years of pushing smaller restrictions aimed at closing clinics or requiring waiting periods for patients, abortion opponents had begun aiming squarely at the landmark 1973 Supreme Court decision establishing Americans’ right to terminate a pregnancy.  Emboldened by Donald Trump’s presidency and two new conservative Supreme Court justices, these activists believed that the constitutional right to an abortion was finally vulnerable. They were correct.  The Alabama abortion ban was expressly designed as a challenge to Roe, with sponsor and Alabama state Rep. Terri Collins telling the Washington Post, “What I’m trying to do here is get this case in front of the Supreme Court so Roe v. Wade can be overturned.”   At first, Alabama’s ban, along with six-week bans in Georgia and elsewhere, were tied up in lower courts. In 2020, however, Justice Ruth Bader Ginsburg died and a third Trump nominee, Amy Coney Barrett, was confirmed, creating a rock-solid conservative majority on the Supreme Court. Less than two years later, the court held in Dobbs v. Jackson Women’s Health Organization that Roe “must be overruled.” With the federal right to an abortion gone, Alabama’s ban went into effect. While it was once the most restrictive in the country, now more than a dozen other states have instituted near-total bans. Several more have imposed gestational limits at 15 weeks or earlier. Alabama was once again on the front lines of reproductive health restrictions in February of this year, after a  judge ruled that frozen embryos used in IVF count as “children” under state law.   The landscape of reproductive health law in America has been utterly remade, and anti-abortion activists are far from finished. While the Alabama ban once seemed to many like radical legislation that would never survive the courts, it was in fact an early look at where the country was headed, and at the extreme circumstances under which millions of Americans are living today.  —Anna North  Avengers: Endgame forced an entirely new era of storytelling There will probably never be another movie like Avengers: Endgame, the 2019 film with a lead-up that wholly altered the movie industry and even the way stories are told. For over a decade, Marvel told one central story — Earth’s mightiest heroes working to defeat the great villain Thanos — through the MCU’s plethora of interlocking superhero blockbusters. In that era, each film, with its Easter eggs and credit scenes, built toward the culmination known as Endgame.  By signaling to its audience that all 23 movies mattered to the larger story, Marvel ensured each was a financial success, including a slew — Black Panther, Captain Marvel, Avengers: Infinity War — of billion-dollar worldwide box offices. Marvel’s grand design made Endgame the second-biggest movie in history. It’s not surprising that seemingly everyone in Hollywood tried to replicate this triumph, often at the expense of creative achievement. Studio heads hoped that they could grow and cash in on properties with extant popularity, the way Marvel had with its comic book characters, and began investing in sequels and spinoffs of proven IP. Marvel’s parent company Disney developed countless new Star Wars projects and capitalized on its hits like Frozen and Moana by lining up continuations of those stories. The company created Disney+ not just to sell its existing properties, but to house its avalanche of spinoff TV shows. 
Amazon’s take on Tolkien’s Lord of the Rings franchise and HBO’s interest in multiple Game of Thrones spinoffs could certainly be seen as trying to capture Marvel’s magic.   Across the board, the people in charge of the purse strings became less interested in original ideas, as well as in mid-budget films; it was blockbuster or… bust.  Marvel changed what kind of stories were being told, but also how they were being told. Competitors became convinced that audiences wanted a connected cinematic universe, a format that mirrored comic book structure. Warner Bros., which owns the rights to Superman, Batman, and Wonder Woman, tried its hand at creating the DC superhero universe. Universal also played with the idea, teasing a linked movie world featuring classic monsters like Dracula. Neither of those fully panned out — some movies were critical flops, others didn’t find large audiences — signaling how difficult it is to execute what Marvel had. This tactic spread beyond the big studios too; indie darling A24, for example, has tapped into connected worlds and multiverses to tell expansive stories. Marvel’s other innovations have also lodged themselves firmly in the pop culture firmament. Easter eggs — embedding “secrets” into art — are commonplace today (see: Swift, Taylor), and foster fan loyalty. Post-credits scenes have been added to all kinds of films.  Perhaps the real testament to Endgame’s singularity, though, is that it wasn’t only rivals who were unable to replicate what Marvel was able to do. None of the studio’s post-Endgame movies have had pre-Endgame box office results, and Marvel is no longer an unstoppable force. The studio’s cinematic universe looks as vulnerable as ever.  What Marvel didn’t realize was that Endgame was truly the end of the game. In its wake — for better or worse — we’re left with new ideas about what kind of stories we tell and why we tell them.  —Alex Abad-Santos The manifesto that changed China’s place in the world Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era — a definitive manifesto known as Xi Jinping Thought for short — first appeared in China in 2017, laying out the ideology and priorities not just of China’s president Xi, but of the entire Chinese Communist party-state under him. Xi’s policies and consolidation of power didn’t start with the document itself; they were developed over time, starting even before Xi became president in 2012. But given the opacity of the Chinese political apparatus and increasing censorship, the compiled doctrine provided a historic window into how Xi sees the world and his own place in it.  And what he wants is a dominant China that harks back to its former greatness, with himself at the center. Xi Jinping Thought is, according to the document, the roadmap for “a Chinese solution for world peace and human progress, and of landmark significance in the history of the rejuvenation of the Chinese nation, the history of the development of Marxism, and the progress of human society.” Xi Jinping Thought articulates a vision that harnesses military development and aggressive diplomacy Rather than lay low and just use its economic growth and the decline of US influence to propel China to world power status — as the country’s former president Deng Xiaoping advocated — Xi Jinping Thought articulates a vision that harnesses military development and aggressive diplomacy as critical factors in China’s dominance. 
That has translated to China deploying its increasing military might to assert dominance in the South China Sea and the Taiwan Strait and cracking down on pro-democracy protests in Hong Kong, in addition to further opening the economy and becoming a global investment powerhouse via the Belt and Road initiative. It has also meant taking significant geopolitical leadership positions — expanding the BRICS economic bloc, brokering a deal to restore relations between Iran and Saudi Arabia, and attempting to negotiate peace between Ukraine and Russia. Arguably, China and the US would have ended up on a collision course with or without Xi Jinping Thought. But that tension really kicked into higher gear after the introduction of the doctrine at the start of Xi’s second term, according to Neil Thomas, Chinese politics fellow at the Asia Society — and after the US itself started to explicitly attempt to contain China’s rise. “Looking from Beijing, you start to see this big pushback,” starting with the Trump administration’s trade war and continuing under Biden. “That has fed into a much more securitized view of the world in China,” Thomas said, as well as the notion that geopolitics “was increasingly zero sum.” Lately the aggressive policy seems to be faltering due to China’s economic troubles. Thomas says Xi has “become increasingly aware of the costs of war [to] diplomacy and has adjusted his tactics to pursue the same ambitious strategic goals but with a more sensible strategy that focuses more on making friends than making enemies.” That has not deterred the US military buildup in Asia to counter China, though diplomatic relations between the two countries have warmed somewhat in recent months. But cutting down the bluster doesn’t mean a change in priorities, just a change in tactics — for now, anyway. —Ellen Ioanes The 2016 election made us realize we know nothing about class Since Donald Trump eked out an electoral college victory in 2016 and reshaped the GOP, journalists, academics, and politicians have been trying to explain what, exactly, happened. One  prevailing narrative is that Trump spoke directly to a forgotten voting bloc — poor and working-class white people, especially those living in rural America. There’s a kernel of truth in that theory; Trump did indeed outperform previous Republican candidates among that demographic. But the stereotype of the average Trump voter that’s been born out of that narrative — the blue-collar union worker who hasn’t seen a meaningful raise in decades — is misleading at best. In fact, one of the lessons of 2016 was that there is no universal definition of what constitutes a “working-class” voter, and that class solidarity is still deeply misunderstood. As it turns out, Trump’s biggest, most reliable voting block wasn’t the downtrodden white worker; it was largely white people from middle- and high-income households. When voters were divided up by income in various exit polls, Trump was only able to beat Joe Biden in one of three tiers: those making over $100,000 a year.  Trump’s win wasn’t a high-water mark for the role of class in elections like many thought, but rather for the media focus on the role of class in elections. Even still, we haven’t really figured out how to measure our class divides or even talk about them. 
This lack of clarity has underscored a big problem in American politics: We have categories that are used as proxies for class — like someone’s college education level or union membership status — but they are imprecise substitutes that blur the bigger picture of the US electorate.   As a result, analysis has exaggerated, or even distorted, reality, painting the Democratic Party, for example, as a political organization that’s growing more and more elitist and out of touch, and the GOP as the party that’s winning over the working class.  That’s despite the fact that Democrats have embraced the most ambitious anti-poverty agenda since Lyndon Johnson’s presidency — championing programs that are, by and large, supported by the poor — while Republicans continue to advocate for programs that almost exclusively benefit the wealthiest members of American society. Trump’s victory may have turned people’s attention to class politics, but there’s still a long way to go before Americans get a clearer picture of how class will shape, or even determine, the election in November — and those in the years to come. —Abdallah Fayyad  A photograph of a 3-year-old refugee’s death altered global opinion on migrants  There are certain photographs that stop the world. The “Tank Man” of Tiananmen Square. A nine-year-old girl, on fire from napalm during the Vietnam War. A migrant woman in 1936 California. To this list we can add the image of the body of a three-year-old refugee boy, face down in the sand of Bodrum, Turkey, after drowning in the Mediterranean Sea on September 2, 2015. Alan Kurdi was fleeing the Syrian civil war, one of an estimated million people who were seeking safe refuge in Europe. Minutes after Kurdi and his family left the Turkish city of Bodrum in the early hours, hoping to reach the Greek island of Kos and European territory, their overloaded rubber dinghy capsized. Kurdi, along with his brother Ghalib and mother Rehana, slipped beneath the waves. That morning the Turkish photographer Nilüfer Demir came upon what she would later call a “children’s graveyard” on the beach. Alan’s body had washed up on the shore, his sneakers still on his tiny feet. Demir took the photograph. Alan Kurdi was only one of an estimated 3,700 other asylum seekers who drowned in the eastern Mediterranean that year, desperately trying to reach Europe. But Demir’s photograph, shared on social media by Peter Bouckaert of Human Rights Watch, spread to every corner of the world, where it was viewed by an estimated 20 million people. At a moment when Europeans seemed unsure whether to accept the unprecedented flow of asylum seekers, the image of a three-year-old left to die on the very edge of Europe galvanized political leaders, opening up a route for hundreds of thousands of refugees to find safety in the EU.   But the story doesn’t end there, for the compassion for asylum seekers generated by Kurdi’s image proved to have a short half-life. In the years since 2015, Europe has largely turned against asylum seekers, tightening its borders and closing off the Mediterranean. A little more than a year after Kurdi’s death, Donald Trump would win the White House, leading to a sharp reduction in the number of asylum seekers admitted into the US. That same year the UK voted for Brexit, in large part over concerns about immigration and asylum policy. The European Parliament elections held later this year are expected to cement policies that will make the EU even less welcoming to migrants and asylum seekers.   
Yet with some 114 million people around the world forcibly displaced from their homes, nothing will stop the flow of refugees. We know there will be more Alan Kurdis in the future. And they will likely be met with less compassion than his photographed death generated.
—Bryan Walsh
What Kim Kardashian wrought when she “broke the internet”
One of the most circulated images of the past decade is of a reality star’s rear end. In November 2014, Paper Magazine unveiled its winter issue starring Kim Kardashian with a photoshoot centered around her most notable asset and the ambitious goal of “break[ing] the internet.” On one cover, Kardashian creates a champagne fountain with her curvaceous body, unleashing foam into a glass perched on her Photoshopped backside. (The image is a recreation of controversial photographer Jean-Paul Goude’s 1976 “Carolina Beaumont, New York” photo, but drew even more fraught comparisons to Sarah Baartman, an enslaved South African woman who was made into a freak-show attraction in 19th-century Europe for her large buttocks.) The other cover, however — where Kardashian flashes her impossibly small waist and cartoonishly round butt — is what we mainly associate with the issue. She’s wearing nothing but pearls and a self-aware smile. What was once a source of mockery for Kardashian in tabloids had now become the culture’s most coveted possession. Lest we forget, these photos arrived at the tail end of a year all about butts. White artists like Miley Cyrus, Iggy Azalea, and even Taylor Swift were incorporating twerking into their music videos and performances. Hit songs like Meghan Trainor’s “All About That Bass,” Jennifer Lopez’s “Booty,” and Nicki Minaj’s “Anaconda” were exalting curvy bodies. These moments contributed to the “slim-thick” physique becoming more accepted and desired outside Black and brown communities. (Twerking and voluptuous “video vixens” have long been features of rap videos.) However, it was Kardashian and, later, her sisters, who would come to represent the social complications this “trend” posed regarding the fetishization of Black bodies, cultural appropriation, and plastic surgery. The American Society of Plastic Surgeons found a 90 percent increase in Brazilian butt lift procedures from 2015 to 2019. The surgery, where patients’ stomach fat is injected into their butts, has a sordid history embedded in Brazil’s eugenics movement and the hyper-sexualization of the mixed-race Black woman, known as the “mulata.” BBLs have also garnered headlines for their deadly health risks, mainly a result of fat embolisms. Nevertheless, it became hard not to notice the number of Instagram influencers who had apparently gotten the surgery or, at least, were digitally enhancing their butts. Then, just as quickly, it seemed like the tide had turned once again for the “ideal” female body. In 2022, a controversial New York Post article declared that “heroin chic” was back in. Social media observers also began noticing that Kardashian was suddenly a lot smaller. At the same time, the diabetes drug Ozempic emerged as Hollywood’s latest weight-loss craze. Thus, the media eagerly questioned whether the BBL era was “over,” despite the surgery’s persisting popularity. The question illuminated the ways Black people — their culture, their aesthetics, their literal bodies — are objectified and easily discarded under the white gaze.
As Rachel Rabbit White wrote, “to celebrate the supposed ‘end of the BBL’ is synonymous with the desire to kill the ways in which Black women, especially Black trans women, and especially Black trans sex workers, have shaped the culture.” Writer Ata-Owaji Victor pondered where the rejection of this trend leaves “Black women and people who naturally have the ‘BBL’ body.” The answer is seemingly: in the same position Black women have always been put — useful until they’re not.
—Kyndall Cunningham
The sweeping strike that put power back in teachers’ hands
In 2018, roughly 20,000 educators went on strike in West Virginia, protesting low pay and high health care costs. Their historic nine-day labor stoppage led to a 5 percent pay increase for teachers and school support staff. With organizers galvanized by the victory in West Virginia, labor actions in states like Oklahoma, Kentucky, North Carolina, Colorado, and Arizona soon followed. According to federal statistics, more than 375,000 education workers engaged in work stoppages in 2018, bringing the total number of strikers that year to 485,000 — the largest since 1986. The uprising sparked national attention and enthusiasm both about the future of school politics and the possibility of resurging worker activism more broadly. It went by the shorthand “Red for Ed” — a reference to the red clothing educators and their allies wore every time they took to the streets. The momentum continued the next year: In 2019, more than half of all workers in the US who went on strike came from the education sector, with new teacher actions spreading to states like Arkansas, Indiana, and Illinois. To be sure, the movement didn’t create lasting change in all aspects of education policy. Average teacher pay has stayed flat for decades, and fewer people are entering the teaching profession. Union membership writ large has continued to decline. And despite educators’ pushback against school privatization, conservatives managed to push through new expansions of public subsidies for private and religious schools following the pandemic. But the teacher uprising earned the support of parents and the public, who reported strong backing in surveys for the educators’ organizing and for increased teacher pay. This strengthened support likely helped explain why parents largely stood by their kids’ teachers during the tough months of the pandemic, when educators again banded together for stronger mitigation standards to reduce the spread of Covid-19. During the Obama era, a powerful bipartisan coalition for education reform spent much of its time attacking educators and their unions — treating them as scapegoats for public education’s problems, a framing most people ultimately did not buy. Red for Ed changed the national political narrative around teachers, and in many ways drove the final nail into that movement’s coffin.
—Rachel Cohen
Malaria in Maryland (and Florida, and Texas, and Arkansas) showed that the future of climate change is now
Last year, for the first time in two decades, mosquitoes transmitted malaria on American soil. The geographic range was unprecedented, with cases in Florida, Texas, Maryland, and Arkansas. 2023 was the hottest year on record since 1850, and for the mosquitoes that spread malaria, heat is habitat; the US cases occurred amid an uptick in malaria infections on a global scale.
Scientists have been warning us for years that without more public health resources, climate change was bound to push infectious threats into environments and populations unprepared for their consequences. Malaria’s reappearance in the US signaled to many that the future has arrived. Wild weather turns previously inhospitable areas into newly suitable habitats for lots of so-called vector insects. It’s not just different species of mosquitoes whose migration is changing disease trends. Ticks — different species of which spread diseases like Lyme, babesiosis, and ehrlichiosis — have progressively moved into new parts of the US in recent years as those regions have warmed. Changing weather patterns also cause many of these insects to reproduce in higher numbers in their usual habitats. Insect habitats aren’t the only ones affected by climate change. Weather is pushing animals that serve as disease reservoirs into new environments, which can lead to more “spillover” events where germs get spread from one species to another. That’s thought to explain, at least in part, the fatal borealpox infection transmitted to an Alaska man by a vole bite last year; it’s also a concern when it comes to rabies transmission. Extreme and unseasonable heat waves are also turning a progressively larger part of the US into newly comfortable digs for fungi — including molds that cause severe lung and other infections in healthy people. Warming fresh and sea waters more frequently become home to noxious blooms of toxic algae and bacteria. What’s more, the heat is kicking pathogens’ evolution into overdrive: The microorganisms that can survive it are more likely than ever to also survive in our bodies, making them more likely to cause disease — and harder to fight. As with many health risks, the consequences of climate-related infectious threats land hardest on the people with the fewest resources — and are almost incomparably worse in lower-resource countries than inside the US. There’s a lot we still don’t understand about how climate change interacts with communicable diseases, including malaria. Some of the shifts caused by severe weather may reduce certain risks even as they amplify others. And disentangling the effects of severe weather from changes in policy, behavior, and human immunity, especially during and after a pandemic, is a formidable task. Still, the comeback — or debut — of peculiar pathogens on American shores makes understanding these links viscerally urgent. Our warming planet isn’t going to wait until we’ve reformed and funded our public health system, seamlessly integrated disease surveillance into health care, renewed public trust in vaccines, and realigned incentives for novel antibiotic production before the fallout of climate change quite literally bites us in the ass.
—Keren Landman
Letting language models learn like children tipped the AI revolution
Imagine you have a little kid. You want to teach them all about the world. So you decide to strap them to a chair all day, every day, and force them to stare at endless pictures of objects while you say, “That’s a banana, that’s a car, that’s a spaceship, that’s…” That’s not (I hope!) how you would actually teach a kid, right? And yet it’s the equivalent of how researchers initially tried to teach AI to understand the world. Until a few years ago, researchers were training AIs using a method called “supervised learning.” That’s where you feed the AI carefully labeled datasets.
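To make “carefully labeled datasets” a little more concrete, here is a minimal, purely illustrative sketch in Python. The toy features, the banana-versus-spaceship labels, and the simple nearest-centroid “model” are all assumptions for illustration, not how any production system mentioned in this piece actually works; the point is only that every training example arrives with a human-supplied label, and the model’s sole job is to learn the mapping from inputs to those labels.

# A minimal sketch of supervised learning on a toy labeled dataset.
# Everything here (features, labels, the nearest-centroid "model") is
# hypothetical and chosen only to illustrate the idea of labeled training data.
from collections import defaultdict
import math

# Each training example is a (feature vector, human-provided label) pair.
labeled_data = [
    ((0.9, 0.2), "banana"),
    ((0.8, 0.3), "banana"),
    ((0.1, 0.9), "spaceship"),
    ((0.2, 0.8), "spaceship"),
]

def train_nearest_centroid(examples):
    # "Training" here just means averaging the examples for each label.
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for (x, y), label in examples:
        sums[label][0] += x
        sums[label][1] += y
        counts[label] += 1
    return {label: (s[0] / counts[label], s[1] / counts[label])
            for label, s in sums.items()}

def predict(centroids, point):
    # Assign whichever label's centroid is closest to the new point.
    return min(centroids, key=lambda label: math.dist(point, centroids[label]))

centroids = train_nearest_centroid(labeled_data)
print(predict(centroids, (0.85, 0.25)))  # prints "banana"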
It actually yielded some decent results, like teaching AI models to tell apart a banana and a spaceship. But it’s very labor-intensive because humans have to label every bit of data. Then some researchers tried a different method: “unsupervised learning,” where the AI learns more like a real child does, by exploring the world freely, vacuuming up tons of unlabeled data, and gradually picking out the patterns in it. It figures out that bananas are those yellow oblong-shaped things without ever explicitly being told that. Turns out this leads to much more powerful AI models, like OpenAI’s ChatGPT and Google’s Gemini, which can explain complicated topics better and use language more naturally than the older, clunkier models. Of course, AIs are not actually kids, and there’s a lot we still don’t understand about what’s happening inside the models. Yet when these companies realized that the key to unlocking progress wasn’t spoon-feeding AI every bit of information but letting it play around until it figured things out, they ushered in the AI revolution we’re seeing today. Alison Gopnik, a developmental psychologist at Berkeley, was an early voice arguing that studying kids can give us useful hints about how to build intelligent machines. She’s compared children and AIs — for instance, by putting four-year-olds and AIs in the same online environments to see how each is able to learn — and found that the kids make much better inferences. Others are catching on. A team at NYU released a study this year in which a baby wore a helmet camera, and whatever the baby saw and heard provided the training data for an AI model. From a total of just 61 hours of data, the AI learned how to match words to the objects they refer to — the word “banana,” say, to that yellow oblong fruit. Researchers are pinpointing some of the qualities that make kids such amazing learning machines: they’re embodied, they’re curious, and they’re able to interact socially with others. Perhaps that’s why the researchers are now trying to create embodied multimodal AIs that can take in not just text, but sights, sounds, touch, and movement. They are, maybe without realizing it, embarking on an effort to replicate what evolution already did in making babies. —Sigal Samuel  The drug that supercharged a crisis also spelled a synthetic destiny  When doctors began liberally prescribing opium and morphine to Civil War veterans for everything from amputations to diarrhea, they inadvertently kicked off the opioid epidemic over 150 years ago. It wasn’t until pharmaceutical companies started pushing prescription opioids as painkillers in the 1990s that the problem escalated to a national emergency. By 2015, pills and heroin had already made the opioid epidemic the deadliest drug crisis in US history. Then came the fentanyl boom.  The synthetic and extremely potent opioid, introduced in the 1960s, has been used as pain medication for decades. In 2016, it became responsible for the majority of overdose deaths. It pushed the number of US drug overdoses above 100,000 in 2022, more than doubling 2015’s death toll. Because of fentanyl’s potency, it takes much less to overdose: A fatal dose fits on the tip of a sharpened pencil. Fentanyl put an already dire crisis into hyperdrive. But its spread also marked a deadlier, more prolific era of drugs where synthetics reign supreme.  Fentanyl’s rise hinges on its synthetic nature. It can be made from just a few chemicals, while heroin and opium require the slow cultivation of poppy flowers. 
Compared to oxycodone — considered a “semi-synthetic” because its production involves chemically modifying natural opioids rather than brewing them from scratch — fentanyl is roughly 60 times more potent. Fentanyl is also up to 50 times stronger than heroin, which makes smuggling it much easier since doses require far less of the actual drug. In the mid-2010s, Mexican cartels trafficking opioids began “cutting” drugs with fentanyl to save money, since it provided a similar high with less volume. In some cities where heroin use was widespread, suppliers have altogether replaced it with fentanyl, leaving users little choice but to switch. Before fentanyl, overdose deaths were concentrated among opioid users. But fentanyl can be found as a filler in cocaine and MDMA supplies, spreading the overdose crisis into new terrain. Variations of fentanyl — of which there are now more than 1,400 — are already making their way into the illicit drug supply. Take carfentanil, which was developed to sedate large animals like elephants, but is now showing up in thousands of human overdoses. Carfentanil is estimated to be 100 times more potent than fentanyl itself. Pure synthetics like fentanyl are where drug development is headed. Despite progress along many measurable dimensions, life in the 21st century will remain painful and unhealthy and full of ways to kill us. The incentive to keep developing legions of new synthetic drugs will stay as strong as ever, and that work will keep turning up cheaper and easier-to-make substances. As those make their way to patients, the risk of adding novel, more powerful drugs to the illicit drug supply will follow. Rising awareness of fentanyl’s harms has driven some progress, from reducing production and investing in harm reduction strategies like testing strips to combating
1 h
vox.com
The overlooked conflict that altered the nature of war in the 21st century
On the second day of the 2020 Armenia-Azerbaijan war, the Armenian military posted a video of one of its surface-to-air missile systems shooting down a surprising enemy aircraft: an Antonov AN-2 biplane. As it turned out, it wasn’t a sign of desperation on Azerbaijan’s part that its military was flying a plane first produced in the Soviet Union in 1947, and today used mostly for crop-dusting. Azerbaijan had converted several AN-2s into unmanned aircraft and used them as so-called bait drones. After the Armenians shot down the planes, revealing the positions of their anti-aircraft systems, their forces came under attack from more modern drones. It seems strangely fitting that what was also known as the Second Nagorno-Karabakh War, a conflict that has been called “the first war won primarily with unmanned systems” and even the “first postmodern conflict,” could also end up being the last one in which biplanes played a significant role. The conflict between these two former Soviet republics in the Caucasus, on the border between Europe and Asia, was the culmination of tensions that had been building for more than 25 years and intercommunal dynamics that were far older than that. It was in some sense a throwback to a traditional type of war — two nation-state armies fighting over disputed territory — that was far more prevalent in previous centuries. But it was also a hypermodern war where unmanned systems played an unprecedented role on the battlefield, and social media played an unprecedented role off it. Though it got relatively little coverage in the international media at the time — coming as it did at the height of the Covid-19 pandemic, a wave of global protests, and a bitter US presidential election campaign — it was in some ways a preview of the much larger war that would break out in Ukraine just two years later, and may yet be seen as the harbinger of a new and potentially devastating era of international conflict.
A frozen conflict heats up
The Armenia-Azerbaijan dispute is one of the so-called frozen conflicts left over from the collapse of the Soviet Union. Nagorno-Karabakh, often referred to as Artsakh by Armenians, is an ethnically Armenian region within the borders of neighboring Azerbaijan. Violence in the region erupted in the 1980s when authorities in Nagorno-Karabakh demanded to be transferred to Armenia. (At the time, all were part of the Soviet Union.) After the Soviet collapse, when both Armenia and Azerbaijan became independent, full-scale war broke out, resulting in more than 30,000 deaths and the displacement of hundreds of thousands of people, mainly Azeris. The first war ended with a Russian-brokered ceasefire in 1994 that left Nagorno-Karabakh as a semi-independent — but internationally unrecognized — territory surrounded by Azerbaijan, and Armenia retained control of some of the nearby areas. Effectively, it was an Armenian victory. In the years that followed, the ceasefire was frequently violated by both sides and the underlying issues never resolved. Then on September 27, 2020, Azerbaijan’s forces launched a rapid dawn offensive, beginning 44 days of war. This time, Azerbaijan scored a resounding success, retaking all of the Armenian-held territory around Nagorno-Karabakh as well as about a third of the enclave itself.
At least 6,500 people were killed before the two sides agreed to a Russian-monitored ceasefire, and only a winding mountain road was left connecting Armenia and Nagorno-Karabakh. (Though Russia, the preeminent military power in the region, is a traditional ally of Armenia, it has been hedging its bets more in recent years, particularly since the 2018 protests that brought a Western-inclined, democratic government to power in Armenia.) Finally, in 2023 — with Russia distracted and bogged down by its war in Ukraine — Azerbaijan launched a blockade of Nagorno-Karabakh, eventually seizing the region and causing the majority of its Armenian population to flee. The Republic of Nagorno-Karabakh was dissolved in 2024.
A glimpse of the future of war
What made Azerbaijan’s rapid victory possible? One major factor was Turkey’s strong military support for Azerbaijan, a fellow Muslim, Turkic-speaking nation that Turkey saw as a key ally in extending its influence into the Caucasus. Another related factor was Azerbaijan’s deployment of unmanned drones, particularly the Turkish-made Bayraktar TB-2 attack drone, as well as several models of exploding drones purchased from Israel. These weapons proved stunningly effective at destroying the tanks and air defense systems of the Armenian and Nagorno-Karabakh forces. “The Armenians and Nagorno-Karabakh had their forces dug in in advantageous positions, and they might have won if this war had unfolded the way it did in 1994, but it didn’t,” Sam Bendett, a senior fellow at the Center for a New American Security and an expert on drone warfare, told Vox. “The Azeris understood that they couldn’t dislodge the Armenians in any other way than to send drones rather than piloted aircraft.” As much as this war was fought under the global radar, these tactics caused a tectonic shift in the prevailing perception of drones as a weapon. From the beginning of the 20-year-long US war on terrorism, unmanned aircraft played an important role, but they were primarily multimillion-dollar machines like the Predator and Reaper that were employed mostly for remote strikes against specific targets away from declared battlefields. The Nagorno-Karabakh war showed how large numbers of simple, replaceable drones could turn the tide on the battlefield in a conventional war. As the military analyst Michael Kofman wrote at the time, “Drones are relatively cheap, and this military technology is diffusing much faster than cost-effective air defense or electronic warfare suitable to countering them.” Lessons learned in the Nagorno-Karabakh conflict were employed in the Ukraine war, when Ukrainian forces made effective use of cheap drones — including, once again, the Turkish TB-2 — to negate the invading Russians’ advantages in mass and firepower. Over time, the evolving use of masses of cheap drones for strikes and surveillance by both sides in Ukraine has made traditional maneuver warfare vastly more difficult, another dynamic predicted by the conflict in Nagorno-Karabakh. Two years into the war, drones are one major reason why the front line often appears stuck in place. Another way Nagorno-Karabakh seemed to be a harbinger of conflicts to come was in the role of social media in shaping global perceptions of the war.
As the media scholar Katy Pearce wrote in 2020, “Armenians and Azerbaijanis in country and those who have settled elsewhere have long battled on social media, and this escalated during the war … For Armenians and Azerbaijanis, whether still in the region or part of the wider diaspora, social media provided a way to participate, and feel engaged.”  As with Ukraine two years later, this was a war with an extraordinary amount of battlefield footage that was available to the public, and where that footage was captured by the participants themselves via drone camera or smartphone, rather than conventional (and more impartial) war reporters. This allowed both sides to shape public perceptions of what was happening on the battlefield, a phenomenon we’re seeing again with the Israel-Hamas war and the way social media images have driven coverage of that conflict. Journalists attempting to write objectively about the conflict often came under attack online from partisans who objected to what they saw as biased or unduly negative coverage.  For Armenia, this may have backfired. When Prime Minister Nikol Pashinyan finally signed the ceasefire deal, he faced mass protests and accusations that he had sold the country, in part because many Armenians hadn’t actually believed they were losing the war — until they lost the war.  A new age of conquest?  Azerbaijan’s offensive was not a straightforward land grab. Nagorno-Karabakh’s independence was not recognized by any other country on earth — technically, not even Armenia — and as far as international law was concerned, Armenian troops were occupying part of Azerbaijan’s territory. There are many such unresolved border disputes and unrecognized semi-sovereign territories around the world today.  Still, as Thomas De Waal, a senior fellow at the Carnegie Endowment for International Peace and author of one of the definitive books on the conflict, told Vox, “Azerbaijan’s war of 2020 broke a pattern in European security where the assumption was that all these unresolved conflicts across Europe had to be resolved peacefully. Azerbaijan rewrote the rulebook, used force, and as far as it was concerned, got away with it.”  De Waal suggests the relatively muted international reaction to the war — the US called for a ceasefire but did not sanction Azerbaijan despite calls from some members of Congress to do so — may have been one of a number of factors that led Russia’s government to believe, two years later, that “there was a more permissive international environment for the use of force and there wasn’t going to be as much pushback [to invading Ukraine] as there might have been a decade before.” Was this brief conflict in the Caucasus a sign of a larger shift? In recent decades, wars of territorial conquest have been rare, and successful ones even rarer. The best-known examples — North Korea’s attempt to conquer South Korea in 1950, or Saddam Hussein’s invasion of Kuwait in 1990 — have prompted massive international interventions to protect international borders. Wars within states, sometimes drawing in international intervention, have been more common.  “For a very long time after the Second World War, there was a pretty widespread understanding on how the use of force is not a legitimate means of resolving territorial disputes,” Nareg Seferian, a US-based Armenian political analyst and writer, told Vox. 
“I don’t think many people realize that until at least the First World War, if not the Second, that was just a really normal thing.” The bloody and ongoing international conflict in Ukraine is in many ways quite rare. If that starts to change, a month-and-a-half-long war in the Caucasus in 2020 could eventually be remembered as a pivotal turning point — not just in how wars are fought, but why.
1 h
vox.com
How to Keep Watch
With smartphones in our pockets and doorbell cameras cheaply available, our relationship with video as a form of proof is evolving. We often say “pics or it didn’t happen!”—but meanwhile, there’s been a rise in problematic imaging including deepfakes and surveillance systems, which often reinforce embedded gender and racial biases. So what is really being revealed with increased documentation of our lives? And what’s lost when privacy is diminished?In this episode of How to Know What’s Real, staff writer Megan Garber speaks with Deborah Raji, a Mozilla fellow, whose work is focused on algorithmic auditing and evaluation. In the past, Raji worked closely with the Algorithmic Justice League initiative to highlight bias in deployed AI products.Listen to the episode here:Listen and subscribe here: Apple Podcasts | Spotify | YouTube | Pocket CastsThe following is a transcript of the episode:Andrea Valdez: You know, I grew up as a Catholic, and I remember the guardian angel was a thing that I really loved that concept when I was a kid. But then when I got to be, I don’t know, maybe around seven or eight, like, your guardian angel is always watching you. At first it was a comfort, and then it turned into kind of like a: Are they watching me if I pick my nose? Do they watch me?Megan Garber: And are they watching out for me, or are they just watching me?Valdez: Exactly. Like, are they my guardian angel or my surveillance angel? Surveillance angel.Valdez: I’m Andrea Valdez. I’m an editor at The Atlantic.Garber: And I’m Megan Garber, a writer at The Atlantic. And this is How to Know What’s Real.Garber: I just got the most embarrassing little alert from my watch. And it’s telling me that it is, quote, “time to stand.”Valdez: Why does it never tell us that it’s time to lie down?Garber: Right. Or time to just, like, go to the beach or something? And it’s weird, though, because I’m realizing I’m having these intensely conflicting emotions about it. Because in one way, I appreciate the reminder. I have been sitting too long; I should probably stand up. But I don’t also love the feeling of just sort of being casually judged by a piece of technology.Valdez: No, I understand. I get those alerts, too. I know it very well. And you know, it tells you, “Stand up; move for a minute. You can do it.” Uh, you know, you can almost hear it going, like, “Bless your heart.”Garber: “Bless your lazy little heart.” The funny thing, too, about it is, like, I find myself being annoyed, but then I also fully recognize that I don’t really have a right to be annoyed, because I’ve asked them to do the judging.Valdez: Yes, definitely. I totally understand. I mean, I’m very obsessed with the data my smartwatch produces: my steps, my sleeping habits, my heart rate. You know, just everything about it. I’m just obsessed with it. And it makes me think—well, I mean, have you ever heard of the quantified-self movement?Garber: Oh, yeah.Valdez: Yeah, so quantified self. It’s a term that was coined by Wired magazine editors around 2007. And the idea was, it was this movement that aspired to be, quote, unquote, “self-knowledge through numbers.” And I mean, it’s worth remembering what was going on in 2007, 2008. You know, I know it doesn’t sound that long ago, but wearable tech was really in its infancy. And in a really short amount of time, we’ve gone from, you know, Our Fitbit to, as you said, Megan, this device that not only scolds you for not standing up every hour—but it tracks your calories, the decibels of your environment. 
You can even take an EKG with it. And, you know, when I have my smartwatch on, I’m constantly on guard to myself. Did I walk enough? Did I stand enough? Did I sleep enough? And I suppose it’s a little bit of accountability, and that’s nice, but in the extreme, it can feel like I’ve sort of opted into self-surveillance.Garber: Yes, and I love that idea in part because we typically think about surveillance from the opposite end, right? Something that’s done to us, rather than something that we do to ourselves and for ourselves. Watches are just one example here, right? There’s also smartphones, and there’s this broader technological environment, and all of that. That whole ecosystem, it all kind of asks this question of “Who’s really being watched? And then also, who’s really doing the watching?”Valdez: Mm hmm. So I spoke with Deb Raji, who is a computer scientist and a fellow at the Mozilla Foundation. And she’s an expert on questions about the human side of surveillance, and thinks a lot about how being watched affects our reality.—Garber: I’d love to start with the broad state of surveillance in the United States. What does the infrastructure of surveillance look like right now?Deborah Raji: Yeah. I think a lot of people see surveillance as a very sort of “out there in the world,” physical-infrastructure thing—where they see themselves walking down the street, and they notice a camera, and they’re like, Yeah, I’m being surveilled. Um, which does happen if you live in New York, especially post-9/11: like, you are definitely physically surveilled. There’s a lot of physical-surveillance infrastructure, a lot of cameras out there. But there’s also a lot of other tools for surveillance that I think people are less aware of.Garber: Like Ring cameras and those types of devices?Raji: I think when people install their Ring product, they’re thinking about themselves. They’re like, Oh, I have security concerns. I want to just have something to be able to just, like, check who’s on my porch or not. And they don’t see it as surveillance apparatus, but it ends up becoming part of a broader network of surveillance. And then I think the one that people very rarely think of—and again, is another thing that I would not have thought of if I wasn’t engaged in some of this work—is online surveillance. Faces are sort of the only biometric; uh, I guess, you know, it’s not like a fingerprint. Like, we don’t upload our fingerprints to our social media. We’re very sensitive about, like, Oh, you know, this seems like important biometric data that we should keep guarded. But for faces, it can be passively collected and passively distributed without you having any awareness of it. But also, we’re very casual about our faces. So we upload it very freely onto the internet. And so, you know, immigration officers—ICE, for example—have a lot of online-surveillance tools, where they’ll monitor people’s Facebook pages, and they’ll use sort of facial recognition and other products to identify and connect online identities, you know, across various social-media platforms, for example.Garber: So you have people doing this incredibly common thing, right? Just sharing pieces of their lives on social media. And then you have immigration officials treating that as actionable data. Can you tell me more about facial recognition in particular?Raji: So one of the first models I actually built was a facial-recognition project. And so I’m a Black woman, and I noticed right away that there were not a lot of faces that look like mine. 
And I remember trying to have a conversation with folks at the company at the time. And it was a very strange time to be trying to have this conversation. This was like 2017. There was a little bit of that happening in the sort of natural-language processing space. Like, people were noticing, you know, stereotyped language coming out of some of these models, but no one was really talking about it in the image space as much—that, oh, some of these models don’t work as well for darker-skinned individuals or other demographics. We audited a bunch of these products that were these facial-analysis products, and we realized that these systems weren’t working very well for those minority populations. But also definitely not working for the intersection of those groups. So like: darker skin, female faces.Garber: Wow.Raji: Some of the ways in which these systems were being pitched at the time, were sort of selling these products and pitching it to immigration officers to use to identify suspects.Gaber: Wow.Raji: And, you know, imagine something that’s not 70 percent accurate, and it’s being used to decide, you know, if this person aligns with a suspect for deportation. Like, that’s so serious.Garber: Right.Raji: You know, since we’ve published that work, we had just this—it was this huge moment. In terms of: It really shifted the thinking in policy circles, advocacy circles, even commercial spaces around how well those systems worked. Because all the information we had about how well these systems worked, so far, was on data sets that were disproportionately composed of lighter-skin men. Right. And so people had this belief that, Oh, these systems work so well, like 99 percent accuracy. Like, they’re incredible. And then our work kind of showed, well, 99 percent accuracy on lighter-skin men.Garber: And could you talk a bit about where tech companies are getting the data from to train their models?Raji: So much of the data required to build these AI systems are collected through surveillance. And this is not hyperbole, right? Like, the facial-recognition systems, you know, millions and millions of faces. And these databases of millions and millions of faces that are collected, you know, through the internet, or collected through identification databases, or through, you know, physical- or digital-surveillance apparatus. Because of the way that the models are trained and developed, it requires a lot of data to get to a meaningful model. And so a lot of these systems are just very data hungry, and it’s a really valuable asset.Garber: And how are they able to use that asset? What are the specific privacy implications about collecting all that data?Raji: Privacy is one of those things that we just don’t—we haven’t been able to get to federal-level privacy regulation in the States. There’s been a couple states that have taken initiative. So California has the California Privacy Act. Illinois has a BIPA, which is sort of a Biometric Information Privacy Act. So that’s specifically about, you know, biometric data like faces. In fact, they had a really—I think BIPA’s biggest enforcement was against Facebook and Facebook’s collection of faces, which does count as biometric data. So in Illinois, they had to pay a bunch of Facebook users a certain settlement amount. Yeah. So, you know, there are privacy laws, but it’s very state-based, and it takes a lot of initiative for the different states to enforce some of these things, versus having some kind of comprehensive national approach to privacy. 
That’s why enforcement or setting these rules is so difficult. I think something that’s been interesting is that some of the agencies have sort of stepped up to play a role in terms of thinking through privacy. So the Federal Trade Commission, FTC, has done these privacy audits historically on some of the big tech companies. They’ve done this for quite a few AI products as well — sort of investigating the privacy violations of some of them. So I think that that’s something that, you know, some of the agencies are excited about and interested in. And that might be a place where we see movement, but ideally we have some kind of law. Garber: And we’ve been in this moment—this, I guess, very long moment—where companies have been taking the “ask for forgiveness instead of permission” approach to all this. You know, so erring on the side of just collecting as much data about their users as they possibly can, while they can. And I wonder what the effects of that will be in terms of our broader informational environment. Raji: The way surveillance and privacy works is that it’s not just about the information that’s collected about you; it’s, like, your entire network is now, you know, caught in this web, and it’s just building pictures of entire ecosystems of information. And so, I think people don’t always get that. But yeah; it’s a huge part of what defines surveillance.__Valdez: Do you remember Surveillance Cameraman, Megan? Garber: Ooh. No. But now I’m regretting that I don’t. Valdez: Well, I mean, I’m not sure how well it was known, but it was maybe 10 or so years ago. There was this guy who had a camera, and he would take the camera and he would go and he’d stop and put the camera in people’s faces. And they would get really upset. And they would ask him, “Why are you filming me?” And, you know, they would get more and more irritated, and it would escalate. I think the meta-point that Surveillance Cameraman was trying to make was “You know, we’re surveilled all the time—so why is it any different if someone comes and puts a camera in your face when there’s cameras all around you, filming you all the time?” Garber: Right. That’s such a great question. And yeah, the sort of difference there between the active act of being filmed and then the sort of passive state of surveillance is so interesting there. Valdez: Yeah. And you know, that’s interesting that you say active versus passive. You know, it reminds me of the notion of the panopticon, which I think is a word that people hear a lot these days, but it’s worth remembering that the panopticon is an old idea. So it started around the late 1700s with the philosopher named Jeremy Bentham. And Bentham, he outlined this architectural idea, and it was originally conceptualized for prisons. You know, the idea was that you have this circular building, and the prisoners live in cells along the perimeter of the building. And then there’s this inner circle, and the guards are in that inner circle, and they can see the prisoners. But the prisoners can’t see the guards. And so the effect that Bentham was hoping this would achieve is that the prisoners would never know if they’re being watched—so they’d always behave as if they were being watched. Garber: Mm. And that makes me think of the more modern idea of the watching-eyes effect. This notion that simply the presence of eyes might affect people’s behavior. And specifically, images of eyes. 
Simply that awareness of being watched does seem to affect people’s behavior. Valdez: Oh, interesting. Garber: You know, beneficial behavior, like collectively good behavior. You know, sort of keeping people in line in that very Bentham-like way. Valdez: We have all of these, you know, eyes watching us now—I mean, even in our neighborhoods and, you know, at our apartment buildings. In the form of, say, Ring cameras or other, you know, cameras that are attached to our front doors. Just how we’ve really opted into being surveilled in all of the most mundane places. I think the question I have is: Where is all of that information going? Garber: And in some sense, that’s the question, right? And Deb Raji has what I found to be a really useful answer to that question of where our information is actually going, because it involves thinking of surveillance not just as an act, but also as a product.—Raji: For a long time when you—I don’t know if you remember those, you know, “complete the picture” apps, or, like, “spice up my picture.” They would use generative models. You would kind of give them a prompt, which would be, like—your face. And then it would modify the image to make it more professional, or make it better lit. Like, sometimes you’ll get content that was just, you know, sexualizing and inappropriate. And so that happens in a nonmalicious case. Like, people will try to just generate images for benign reasons. And if they choose the wrong demographic, or they frame things in the wrong way, for example, they’ll just get images that are denigrating in a way that feels inappropriate. And so I feel like there’s that way in which AI for images has sort of led to just, like, a proliferation of problematic content. Garber: So not only are those images being generated because the systems are flawed themselves, but then you also have people using those flawed systems to generate malicious content on purpose, right? Raji: One that we’ve seen a lot is sort of this deepfake porn of young people, which has been so disappointing to me. Just, you know, young boys deciding to do that to young girls in their class; it really is a horrifying form of sexual abuse. I think, like, when it happened to Taylor Swift—I don’t know if you remember; someone used the Microsoft model, and, you know, generated some nonconsensual sexual images of Taylor Swift—I think it turned that into a national conversation. But months before that, there had been a lot of reporting of this happening in high schools. Anonymous young girls dealing with that, which is just another layer of trauma, because you’re like—you’re not Taylor Swift, right? So people don’t pay attention in the same way. So I think that that problem has actually been a huge issue for a very long time.—Garber: Andrea, I’m thinking of that old line about how if you’re not paying for something in the tech world, there’s a good chance you are probably the product being sold, right? But I’m realizing how outmoded that idea probably is at this point. Because even when we pay for these things, we’re still the products. And specifically, our data are the products being sold. So even with things like deepfakes—which are typically defined as, you know, using some kind of machine learning or AI to create a piece of manipulated media—even they rely on surveillance in some sense. 
And so you have this irony where these recordings of reality are now also being used to distort reality.Valdez: You know, it makes me think of Don Fallis: this philosopher who talked about the epistemic threat of deepfakes and that it’s part of this pending infopocalypse. Which sounds quite grim, I know. But I think the point that Fallis was trying to make is that with the proliferation of deepfakes, we’re beginning to maybe distrust what it is that we’re seeing. And we talked about this in the last episode. You know, “seeing is believing” might not be enough. And I think we’re really worried about deepfakes, but I’m also concerned about this concept of cheap fakes, or shallow fakes. So cheap fakes or shallow fakes—it’s, you know, you can tweak or change images or videos or audio just a little bit. And it doesn’t actually require AI or advanced technology to create. So one of the more infamous instances of this was in 2019. Maybe you remember there was a video of Nancy Pelosi that came out where it sounded like she was slurring her words.Garber: Oh, yeah, right. Yeah.Valdez: Really, the video had just been slowed down using easy audio tools, and just slowed down enough to create that perception that she was slurring her words. So it’s a quote, unquote “cheap” way to create a small bit of chaos.Garber: And then you combine that small bit of chaos with the very big chaos of deepfakes.Valdez: Yeah. So one, the cheat fake is: It’s her real voice. It’s just slowed down—again, using, like, simple tools. But we’re also seeing instances of AI-generated technology that completely mimics other people’s voices, and it’s becoming really easy to use now. You know, there was this case recently that came out of Maryland where there was a high-school athletic director, and he was arrested after he allegedly used an AI voice simulation of the principal at his school. And he allegedly simulated the principal’s voice saying some really horrible things, and it caused all this blowback on the principal before investigators, you know, looked into it. Then they determined that the audio was fake. But again, it was just a regular person that was able to use this really advanced-seeming technology that was cheap, easy to use, and therefore easy to abuse.Garber: Oh, yes. And I think it also goes to show how few sort of cultural safeguards we have in place right now, right? Like, the technology will let people do certain things. And we don’t always, I think, have a really well-agreed-upon sense of what constitutes abusing the technology. And you know, usually when a new technology comes along, people will sort of figure out what’s acceptable and, you know, what will bear some kind of safety net. Um, and will there be a taboo associated with it? But with all of these new technologies, we just don’t have that. And so people, I think, are pushing the bounds to see what they can get away with.Valdez: And we’re starting to have that conversation right now about what those limits should look like. I mean, lots of people are working on ways to figure out how to watermark or authenticate things like audio and video and images.Garber: Yeah. And I think that that idea of watermarking, too, can maybe also have a cultural implication. You know, like: If everyone knows that deepfakes can be tracked, and easily, that is itself a pretty good disincentive from creating them in the first place, at least with an intent to fool or do something malicious.Valdez: Yeah. But. 
In the meantime, there’s just going to be a lot of these deepfakes and cheap fakes and shallow fakes that we’re just going to have to be on the lookout for.—Garber: Is there new advice that you have for trying to figure out whether something is fake?Raji: If it doesn’t feel quite right, it probably isn’t. A lot of these AI images don’t have a good sense of, like, spatial awareness, because it’s just pixels in, pixels out. And so there’s some of these concepts that we as humans find really easy, but these models struggle with. I advise people to be aware of, like—sort of trust your intuition. If you’re noticing weird artifacts in the image, it probably isn’t real. I think another thing, as well, is who posts.Garber: Oh, that’s a great one; yeah.Raji: Like, I mute very liberally on Twitter; uh, any platform. I definitely mute a lot of accounts that I notice [are] caught posting something. Either like a community note or something will reveal that they’ve been posting fake images, or you just see it and you recognize the design of it. And so I just knew that kind of content. Don’t engage with those kind of content creators at all. And so I think that that’s also like another successful thing on the platform level. Deplatforming is really effective if someone has sort of three strikes in terms of producing a certain type of content. And that’s what happened with the Taylor Swift situation—where people were disseminating these, you know, Taylor Swift images and generating more images. And they just went after every single account that did that—you know, completely locked down her hashtag. Like, that kind of thing where they just really went after everything. Um, and I think that that’s something that we should just do in our personal engagement as well.—Garber: Andrea, that idea of personal engagement, I think, is such a tricky part of all of this. I’m even thinking back to what we were saying before—about Ring and the interplay we were getting at between the individual and the collective. In some ways, it’s the same tension that we’ve been thinking about with climate change and other really broad, really complicated problems. This, you know, connection between personal responsibility, but also the outsized role that corporate and government actors will have to play when it comes to finding solutions. Mm hmm. And with so many of these surveillance technologies, we’re the consumers, with all the agency that that would seem to entail. But at the same time, we’re also part of this broader ecosystem where we really don’t have as much control as I think we’d often like to believe. So our agency has this giant asterisk, and, you know, consumption itself in this networked environment is really no longer just an individual choice. It’s something that we do to each other, whether we mean to or not.Valdez: Yeah; you know, that’s true. But I do still believe in conscious consumption so much as we can do it. Like, even if I’m just one person, it’s important to me to signal with my choices what I value. And in certain cases, I value opting out of being surveilled so much as I can control for it. You know, maybe I can’t opt out of facial recognition and facial surveillance, because that would require a lot of obfuscating my face—and, I mean, there’s not even any reason to believe that it would work. But there are some smaller things that I personally find important; like, I’m very careful about which apps I allow to have location sharing on me. You know, I go into my privacy settings quite often. 
I make sure that location sharing is something that I’m opting into on the app while I’m using it. I never let apps just follow me around all the time. You know, I think about what chat apps I’m using, if they have encryption; I do hygiene on my phone around what apps are actually on my phone, because they do collect a lot of data on you in the background. So if it’s an app that I’m not using, or I don’t feel familiar with, I delete it.Garber: Oh, that’s really smart. And it’s such a helpful reminder, I think, of the power that we do have here. And a reminder of what the surveillance state actually looks like right now. It’s not some cinematic dystopia. Um, it’s—sure, the cameras on the street. But it’s also the watch on our wrist; it’s the phones in our pockets; it’s the laptops we use for work. And even more than that, it’s a series of decisions that governments and organizations are making every day on our behalf. And we can affect those decisions if we choose to, in part just by paying attention.Valdez: Yeah, it’s that old adage: “Who watches the watcher?” And the answer is us.__Garber: That’s all for this episode of How to Know What’s Real. This episode was hosted by Andrea Valdez and me, Megan Garber. Our producer is Natalie Brennan. Our editors are Claudine Ebeid and Jocelyn Frank. Fact-check by Ena Alvarado. Our engineer is Rob Smierciak. Rob also composed some of the music for this show. The executive producer of audio is Claudine Ebeid, and the managing editor of audio is Andrea Valdez.Valdez: Next time on How to Know What’s Real: Thi Nguyen: And when you play the game multiple times, you shift through the roles, so you can experience the game from different angles. You can experience a conflict from completely different political angles and re-experience how it looks from each side, which I think is something like, this is what games are made for. Garber: What we can learn about expansive thinking through play. We’ll be back with you on Monday.
1 h
theatlantic.com
On D-Day, the U.S. Conquered the British Empire
For most Americans, D-Day remains the most famous battle of World War II. It was not the end of the war against Nazism. At most, it was the beginning of the end. Yet it continues to resonate 80 years later, and not just because it led to Hitler’s defeat. It also signaled the collapse of the European empires and the birth of an American superpower that promised to dedicate its foreign policy to decolonization, democracy, and human rights, rather than its own imperial prestige.

It is easy to forget what a radical break this was. The term superpower was coined in 1944 to describe the anticipated world order that would emerge after the war. Only the British empire was expected to survive as the standard-bearer of imperialism, alongside two very different superpower peers: the Soviet Union and the United States. Within weeks of D-Day, however, the British found themselves suddenly and irrevocably overruled by their former colony.

That result was hardly inevitable. When the British and the Americans formally allied in December 1941, the British empire was unquestionably the senior partner in the relationship. It covered a fifth of the world’s landmass and claimed a quarter of its people. It dominated the air, sea, and financial channels on which most global commerce depended. And the Royal Navy maintained its preeminence, with ports of call on every continent, including Antarctica.

The United States, by contrast, was more of a common market than a nation-state. Its tendency toward isolationism has always been overstated. But its major foreign-policy initiatives had been largely confined to the Western Hemisphere and an almost random collection of colonies (carefully called “territories”), whose strategic significance was—at best—a point of national ambivalence.

In the two years after Pearl Harbor, the British largely dictated the alliance’s strategic direction. In Europe, American proposals to take the fight directly to Germany by invading France were shelved in favor of British initiatives, which had the not-incidental benefit of expanding Britain’s imperial reach across the Mediterranean and containing the Soviet Union (while always ensuring that the Russians had enough support to keep three-quarters of Germany’s army engaged on the Eastern Front).

Things changed, however, in November 1943, when Winston Churchill and Franklin D. Roosevelt held a summit in Cairo. The British again sought to postpone the invasion of France in favor of further operations in the Mediterranean. The debate quickly grew acrimonious. At one point, Churchill refused to concede on his empire’s desire to capture the island of Rhodes, then an Italian possession. George Marshall, the usually stoic U.S. Army chief of staff, shouted at the prime minister, “Not one American is going to die on that goddamned beach!” Another session was forced to end abruptly after Marshall and his British counterpart, Sir Alan Brooke, nearly came to blows.

With the fate of the free world hanging in the balance, a roomful of 60-year-old men nearly broke out into a brawl, because by November 1943, America had changed. It was producing more than twice as many planes and seven times as many ships as the whole British empire. British debt, meanwhile, had ballooned to nearly twice the size of its economy. Most of that debt was owed to the United States, which leveraged its position as Britain’s largest creditor to gain access to outposts across the British empire, from which it built an extraordinary global logistics network of its own.

[From the April 2023 issue: The age of American naval dominance is over]

Having methodically made their country into at least an equal partner, the Americans insisted on the invasion of France, code-named “Operation Overlord.” The result was a compromise, under which the Allies divided their forces in Europe. The Americans would lead an invasion of France, and the British would take command of the Mediterranean.

Six months later, on June 6, 1944, with the D-Day invasion under way, the British empire verged on collapse. Its economic woes were exacerbated by the 1.5 million Americans, and 6 million tons of American equipment, that had been imported into the British Isles to launch Operation Overlord. Its ports were jammed. Inflation was rampant. Its supply chains and its politics were in shambles. By the end of June 1944, two of Churchill’s ministers were declaring the empire “broke.”

The British continued to wield considerable influence on world affairs, as they do today. But after D-Day, on the battlefields of Europe and in international conference rooms, instead of setting the agenda, the British found themselves having to go along with it.

In July 1944, at the Bretton Woods Conference, the British expected global finance to remain headquartered in London and transacted at least partially in pounds. Instead, the International Monetary Fund and what would become the World Bank were headquartered in Washington, and the dollar became the currency of international trade. In August 1944, America succeeded in dashing British designs on the eastern Mediterranean for good in favor of a second invasion of France from the south. In September 1944, the increasingly notional British command of Allied ground forces in Europe was formally abandoned. In February 1945, at a summit in Yalta, Churchill had little choice but to acquiesce as the United States and the Soviet Union dictated the core terms of Germany’s surrender, the division of postwar Europe, and the creation of a United Nations organization with a mandate for decolonization.

How did this happen so quickly? Some of the great political historians of the 20th century, such as David Reynolds, Richard Overy, and Paul Kennedy, have chronicled the many political, cultural, and economic reasons World War II would always have sounded the death knell of the European imperial system. Some British historians have more pointedly blamed the Americans for destabilizing the British empire by fomenting the forces of anti-colonialism (what D. Cameron Watt called America’s “moral imperialism”).

Absent from many such accounts is why Britain did not even try to counterbalance America’s rise, or use the extraordinary leverage it had before D-Day to win concessions that might have better stabilized its empire. The French did precisely that with far less bargaining power at their disposal, and preserved the major constituents of their own empire for a generation longer than the British did. The warning signs were all there. In 1941, Germany’s leading economics journal predicted the rise of a “Pax Americana” at Britain’s expense. “England will lose its empire,” the article gloated, “to its partner across the Atlantic.”

[Read: How Britain falls apart]

The American defense-policy scholar and Atlantic contributing writer Kori Schake recently made a persuasive case that Britain came to accept the role of junior partner in the Atlantic alliance, rather than seek to balance American power, because the two countries had become socially, politically, and economically alike in all the ways that mattered. Britain, in other words, had more to lose by confrontation. And so it chose friendship.

The argument makes sense to a point, especially given how close the United Kingdom and the United States are today. But the remembered warmth of the “special relationship” in the 1940s is largely a product of nostalgia. British contempt for American racism and conformist consumerism seethed especially hot with the arrival in the U.K. of 1.5 million Americans. And American contempt for the British class system and Britain’s reputation for violent imperialism likewise made any U.S. investment in the war against Germany—as opposed to Japan—a political liability for Roosevelt.

The British elite had every intention of preserving the British empire and European colonialism more generally. In November 1942, as Anglo-American operations began in North Africa, Churchill promised France that its colonies would be returned and assured his countrymen, “I have not become the King’s First Minister in order to preside over the liquidation of the British Empire.”

The British assumed that America’s rise was compatible with that goal because they grossly miscalculated American intentions. This was on stark display in March 1944, just over two months before D-Day, when Britain’s Foreign Office circulated a memorandum setting out the empire’s “American policy.” Given how naive the Americans were about the ways of the world, it said, Britain should expect them to “follow our lead rather than that we follow theirs.” It was therefore in Britain’s interest to foster America’s rise so that its power could be put to Britain’s use. “They have enormous power, but it is the power of the reservoir behind the dam,” the memo continued. “It must be our purpose not to balance our power against that of America, but to make use of American power for purposes which we regard as good” and to “use the power of the United States to preserve the Commonwealth and the Empire, and, if possible, to support the pacification of Europe.”

It is easy to see why members of Britain’s foreign-policy elite, still warmed by a Victorian afterglow, might discount Americans’ prattling on about decolonization and democracy as empty wartime rhetoric. If anything, they thought, Americans’ pestering insistence on such ideals proved how naive they were. Churchill often grumbled with disdain about Americans’ sentimental affection for—as he put it—the “chinks” and “pigtails” fighting against Japan in China, scornful of the American belief that they could be trusted to govern themselves.

And the face America presented to London might have compounded the misapprehension. Roosevelt was expected to choose George Marshall to be the American commander of Operation Overlord, a position that would create the American equivalent of a Roman proconsul in London. Instead, he picked Dwight Eisenhower.

Roosevelt’s reasons for choosing Eisenhower remain difficult to pin down. The president gave different explanations to different people at different times. But Eisenhower was the ideal choice for America’s proconsul in London, and in Europe more generally, if the goal was to make a rising American superpower seem benign.

Eisenhower had a bit of cowboy to him, just like in the movies. He was also an Anglophile and took to wearing a British officer’s coat when visiting British troops in the field. He had a natural politician’s instinct for leaving the impression that he agreed with everyone. And he offered the incongruous public image of a four-star general who smiled like he was selling Coca-Cola.

He was also genuinely committed to multilateralism. Eisenhower had studied World War I closely and grew convinced that its many disasters—in both its fighting and its peace—were caused by the Allies’ inability to put aside their own imperial prestige to achieve their common goals. Eisenhower’s commitment to Allied “teamwork,” as he would say with his hokey Kansas geniality, broke radically from the past and seemed hopelessly naive, yet it was essential to the success of operations as high-risk and complex as the D-Day invasion.

Eisenhower, for his part, was often quite deft in handling the political nature of his position. He knew that to be effective, to foster that teamwork, he could never be seen as relishing the terrifying economic and military power at his disposal, or the United States’ willingness to use it. “Hell, I don’t have to go around jutting out my chin to show the world how tough I am,” he said privately.

On D-Day, Eisenhower announced the invasion without mentioning the United States once. Instead, he said, the landings were part of the “United Nations’ plan for the liberation of Europe, made in conjunction with our great Russian allies.” While the invasion was under way, Eisenhower scolded subordinates who issued reports on the extent of French territory “captured.” The territory, he chided them, had been “liberated.”

The strategy worked. That fall, with Paris liberated, only 29 percent of French citizens polled felt the United States had “contributed most in the defeat of Germany,” with 61 percent giving credit to the Soviet Union. Yet when asked where they would like to visit after the war, only 13 percent were eager to celebrate the Soviet Union’s contributions in Russia itself. Forty-three percent said the United States, a country whose air force had contributed to the deaths of tens of thousands of French civilians in bombing raids.

In rhetoric and often in reality, the United States has continued to project its power not as an empire, but on behalf of the “United Nations,” “NATO,” “the free world,” or “mankind.” The interests it claims to vindicate as a superpower have also generally not been its imperial ambition to make America great, but the shared ideals enshrined soon after the war in the UN Charter and the Universal Declaration of Human Rights.

Had the D-Day invasion failed, those ideals would have been discredited. Unable to open the Western Front in France, the Allies would have had no choice but to commit to Britain’s strategy in the Mediterranean. The U.S. military, and by extension the United States, would have lost all credibility. The Soviets would have been the only meaningful rival to German power on the European continent. And there would have been no reason for the international politics of national prestige and imperial interest to become outmoded.

Instead, on D-Day, American soldiers, joined by British soldiers and allies from nearly a dozen countries, embarked on a treacherous voyage from the seat of the British empire to the shores of the French empire on a crusade that succeeded in liberating the Old World from tyranny. It was a victory for an alliance built around the promise, at least, of broadly shared ideals rather than narrow national interests. That was a radical idea at the time, and it is becoming a contested one today. D-Day continues to resonate as much as it does because, like the battles of Lexington and Concord, it is an almost-too-perfect allegory for a decisive turning point in America’s national story: the moment when it came into its own as a new kind of superpower, one that was willing and able to fight for a freer world.
1 h
theatlantic.com