Tools
Change country:
The Atlantic
The Atlantic
How to Keep Watch
With smartphones in our pockets and doorbell cameras cheaply available, our relationship with video as a form of proof is evolving. We often say “pics or it didn’t happen!”—but meanwhile, there’s been a rise in problematic imaging including deepfakes and surveillance systems, which often reinforce embedded gender and racial biases. So what is really being revealed with increased documentation of our lives? And what’s lost when privacy is diminished?In this episode of How to Know What’s Real, staff writer Megan Garber speaks with Deborah Raji, a Mozilla fellow, whose work is focused on algorithmic auditing and evaluation. In the past, Raji worked closely with the Algorithmic Justice League initiative to highlight bias in deployed AI products.Listen to the episode here:Listen and subscribe here: Apple Podcasts | Spotify | YouTube | Pocket CastsThe following is a transcript of the episode:Andrea Valdez: You know, I grew up as a Catholic, and I remember the guardian angel was a thing that I really loved that concept when I was a kid. But then when I got to be, I don’t know, maybe around seven or eight, like, your guardian angel is always watching you. At first it was a comfort, and then it turned into kind of like a: Are they watching me if I pick my nose? Do they watch me?Megan Garber: And are they watching out for me, or are they just watching me?Valdez: Exactly. Like, are they my guardian angel or my surveillance angel? Surveillance angel.Valdez: I’m Andrea Valdez. I’m an editor at The Atlantic.Garber: And I’m Megan Garber, a writer at The Atlantic. And this is How to Know What’s Real.Garber: I just got the most embarrassing little alert from my watch. And it’s telling me that it is, quote, “time to stand.”Valdez: Why does it never tell us that it’s time to lie down?Garber: Right. Or time to just, like, go to the beach or something? And it’s weird, though, because I’m realizing I’m having these intensely conflicting emotions about it. Because in one way, I appreciate the reminder. I have been sitting too long; I should probably stand up. But I don’t also love the feeling of just sort of being casually judged by a piece of technology.Valdez: No, I understand. I get those alerts, too. I know it very well. And you know, it tells you, “Stand up; move for a minute. You can do it.” Uh, you know, you can almost hear it going, like, “Bless your heart.”Garber: “Bless your lazy little heart.” The funny thing, too, about it is, like, I find myself being annoyed, but then I also fully recognize that I don’t really have a right to be annoyed, because I’ve asked them to do the judging.Valdez: Yes, definitely. I totally understand. I mean, I’m very obsessed with the data my smartwatch produces: my steps, my sleeping habits, my heart rate. You know, just everything about it. I’m just obsessed with it. And it makes me think—well, I mean, have you ever heard of the quantified-self movement?Garber: Oh, yeah.Valdez: Yeah, so quantified self. It’s a term that was coined by Wired magazine editors around 2007. And the idea was, it was this movement that aspired to be, quote, unquote, “self-knowledge through numbers.” And I mean, it’s worth remembering what was going on in 2007, 2008. You know, I know it doesn’t sound that long ago, but wearable tech was really in its infancy. And in a really short amount of time, we’ve gone from, you know, Our Fitbit to, as you said, Megan, this device that not only scolds you for not standing up every hour—but it tracks your calories, the decibels of your environment. 
You can even take an EKG with it. And, you know, when I have my smartwatch on, I’m constantly on guard to myself. Did I walk enough? Did I stand enough? Did I sleep enough? And I suppose it’s a little bit of accountability, and that’s nice, but in the extreme, it can feel like I’ve sort of opted into self-surveillance.Garber: Yes, and I love that idea in part because we typically think about surveillance from the opposite end, right? Something that’s done to us, rather than something that we do to ourselves and for ourselves. Watches are just one example here, right? There’s also smartphones, and there’s this broader technological environment, and all of that. That whole ecosystem, it all kind of asks this question of “Who’s really being watched? And then also, who’s really doing the watching?”Valdez: Mm hmm. So I spoke with Deb Raji, who is a computer scientist and a fellow at the Mozilla Foundation. And she’s an expert on questions about the human side of surveillance, and thinks a lot about how being watched affects our reality.—Garber: I’d love to start with the broad state of surveillance in the United States. What does the infrastructure of surveillance look like right now?Deborah Raji: Yeah. I think a lot of people see surveillance as a very sort of “out there in the world,” physical-infrastructure thing—where they see themselves walking down the street, and they notice a camera, and they’re like, Yeah, I’m being surveilled. Um, which does happen if you live in New York, especially post-9/11: like, you are definitely physically surveilled. There’s a lot of physical-surveillance infrastructure, a lot of cameras out there. But there’s also a lot of other tools for surveillance that I think people are less aware of.Garber: Like Ring cameras and those types of devices?Raji: I think when people install their Ring product, they’re thinking about themselves. They’re like, Oh, I have security concerns. I want to just have something to be able to just, like, check who’s on my porch or not. And they don’t see it as surveillance apparatus, but it ends up becoming part of a broader network of surveillance. And then I think the one that people very rarely think of—and again, is another thing that I would not have thought of if I wasn’t engaged in some of this work—is online surveillance. Faces are sort of the only biometric; uh, I guess, you know, it’s not like a fingerprint. Like, we don’t upload our fingerprints to our social media. We’re very sensitive about, like, Oh, you know, this seems like important biometric data that we should keep guarded. But for faces, it can be passively collected and passively distributed without you having any awareness of it. But also, we’re very casual about our faces. So we upload it very freely onto the internet. And so, you know, immigration officers—ICE, for example—have a lot of online-surveillance tools, where they’ll monitor people’s Facebook pages, and they’ll use sort of facial recognition and other products to identify and connect online identities, you know, across various social-media platforms, for example.Garber: So you have people doing this incredibly common thing, right? Just sharing pieces of their lives on social media. And then you have immigration officials treating that as actionable data. Can you tell me more about facial recognition in particular?Raji: So one of the first models I actually built was a facial-recognition project. And so I’m a Black woman, and I noticed right away that there were not a lot of faces that look like mine. 
And I remember trying to have a conversation with folks at the company at the time. And it was a very strange time to be trying to have this conversation. This was like 2017. There was a little bit of that happening in the sort of natural-language processing space. Like, people were noticing, you know, stereotyped language coming out of some of these models, but no one was really talking about it in the image space as much—that, oh, some of these models don’t work as well for darker-skinned individuals or other demographics. We audited a bunch of these products that were these facial-analysis products, and we realized that these systems weren’t working very well for those minority populations. But also definitely not working for the intersection of those groups. So like: darker skin, female faces.Garber: Wow.Raji: Some of the ways in which these systems were being pitched at the time, were sort of selling these products and pitching it to immigration officers to use to identify suspects.Gaber: Wow.Raji: And, you know, imagine something that’s not 70 percent accurate, and it’s being used to decide, you know, if this person aligns with a suspect for deportation. Like, that’s so serious.Garber: Right.Raji: You know, since we’ve published that work, we had just this—it was this huge moment. In terms of: It really shifted the thinking in policy circles, advocacy circles, even commercial spaces around how well those systems worked. Because all the information we had about how well these systems worked, so far, was on data sets that were disproportionately composed of lighter-skin men. Right. And so people had this belief that, Oh, these systems work so well, like 99 percent accuracy. Like, they’re incredible. And then our work kind of showed, well, 99 percent accuracy on lighter-skin men.Garber: And could you talk a bit about where tech companies are getting the data from to train their models?Raji: So much of the data required to build these AI systems are collected through surveillance. And this is not hyperbole, right? Like, the facial-recognition systems, you know, millions and millions of faces. And these databases of millions and millions of faces that are collected, you know, through the internet, or collected through identification databases, or through, you know, physical- or digital-surveillance apparatus. Because of the way that the models are trained and developed, it requires a lot of data to get to a meaningful model. And so a lot of these systems are just very data hungry, and it’s a really valuable asset.Garber: And how are they able to use that asset? What are the specific privacy implications about collecting all that data?Raji: Privacy is one of those things that we just don’t—we haven’t been able to get to federal-level privacy regulation in the States. There’s been a couple states that have taken initiative. So California has the California Privacy Act. Illinois has a BIPA, which is sort of a Biometric Information Privacy Act. So that’s specifically about, you know, biometric data like faces. In fact, they had a really—I think BIPA’s biggest enforcement was against Facebook and Facebook’s collection of faces, which does count as biometric data. So in Illinois, they had to pay a bunch of Facebook users a certain settlement amount. Yeah. So, you know, there are privacy laws, but it’s very state-based, and it takes a lot of initiative for the different states to enforce some of these things, versus having some kind of comprehensive national approach to privacy. 
That’s why enforcement or setting these rules is so difficult. I think something that’s been interesting is that some of the agencies have sort of stepped up to play a role in terms of thinking through privacy. So the Federal Trade Commission, FTC, has done these privacy audits historically on some of the big tech companies. They’ve done this for quite a few AI products as well—sort of investigating the privacy violations of some of them. So I think that that’s something that, you know, some of the agencies are excited about and interested in. And that might be a place where we see movement, but ideally we have some kind of law.Garber: And we’ve been in this moment—this, I guess, very long moment—where companies have been taking the “ask for forgiveness instead of permission” approach to all this. You know, so erring on the side of just collecting as much data about their users as they possibly can, while they can. And I wonder what the effects of that will be in terms of our broader informational environment.Raji: The way surveillance and privacy works is that it’s not just about the information that’s collected about you; it’s, like, your entire network is now, you know, caught in this web, and it’s just building pictures of entire ecosystems of information. And so, I think people don’t always get that. But yeah; it’s a huge part of what defines surveillance.__Valdez: Do you remember Surveillance Cameraman, Megan?Garber: Ooh. No. But now I’m regretting that I don’t.Valdez: Well, I mean, I’m not sure how well it was known, but it was maybe 10 or so years ago. There was this guy who had a camera, and he would take the camera and he would go and he’d stop and put the camera in people’s faces. And they would get really upset. And they would ask him, “Why are you filming me?” And, you know, they would get more and more irritated, and it would escalate. I think the meta-point that Surveillance Cameraman was trying to make was “You know, we’re surveilled all the time—so why is it any different if someone comes and puts a camera in your face when there’s cameras all around you, filming you all the time?”Garber: Right. That’s such a great question. And yeah, the sort of difference there between the active act of being filmed and then the sort of passive state of surveillance is so interesting there.Valdez: Yeah. And you know, that’s interesting that you say active versus passive. You know, it reminds me of the notion of the panopticon, which I think is a word that people hear a lot these days, but it’s worth remembering that the panopticon is an old idea. So it started around the late 1700s with the philosopher named Jeremy Bentham. And Bentham, he outlined this architectural idea, and it was originally conceptualized for prisons. You know, the idea was that you have this circular building, and the prisoners live in cells along the perimeter of the building. And then there’s this inner circle, and the guards are in that inner circle, and they can see the prisoners. But the prisoners can’t see the guards. And so the effect that Bantham was hoping this would achieve is that the prisoners would never know if they’re being watched—so they’d always behave as if they were being watched.Garber: Mm. And that makes me think of the more modern idea of the watching-eyes effect. This notion that simply the presence of eyes might affect people’s behavior. And specifically, images of eyes. 
Simply that awareness of being watched does seem to affect people’s behavior.Valdez: Oh, interesting.Garber: You know, beneficial behavior, like collectively good behavior. You know, sort of keeping people in line in that very Bentham-like way.Valdez: We have all of these, you know, eyes watching us now—I mean, even in our neighborhoods and, you know, at our apartment buildings. In the form of, say, Rng cameras or other, you know, cameras that are attached to our front doors. Just how we’ve really opted into being surveilled in all of the most mundane places. I think the question I have is: Where is all of that information going?Garber: And in some sense, that’s the question, right? And Deb Raji has what I found to be a really useful answer to that question of where our information is actually going, because it involves thinking of surveillance not just as an act, but also as a product.—Raji: For a long time when you—I don’t know if you remember those, you know, “complete the picture” apps, or, like, “spice up my picture.” They would use generative models. You would kind of give them a prompt, which would be, like—your face. And then it would modify the image to make it more professional, or make it better lit. Like, sometimes you’ll get content that was just, you know, sexualizing and inappropriate. And so that happens in a nonmalicious case. Like, people will try to just generate images for benign reasons. And if they choose the wrong demographic, or they frame things in the wrong way, for example, they’ll just get images that are denigrating in a way that feels inappropriate. And so I feel like there’s that way in which AI for images has sort of led to just, like, a proliferation of problematic content.Garber: So not only are those images being generated because the systems are flawed themselves, but then you also have people using those flawed systems to generate malicious content on purpose, right?Raji: One that we’ve seen a lot is sort of this deepfake porn of young people, which has been so disappointing to me. Just, you know, young boys deciding to do that to young girls in their class; it really is a horrifying form of sexual abuse. I think, like, when it happened to Taylor Swift—I don’t know if you remember; someone used the Microsoft model, and, you know, generated some nonconsensual sexual images of Taylor Swift—I think it turned that into a national conversation. But months before that, there had been a lot of reporting of this happening in high schools. Anonymous young girls dealing with that, which is just another layer of trauma, because you’re like—you’re not Taylor Swift, right? So people don’t pay attention in the same way. So I think that that problem has actually been a huge issue for a very long time.—Garber: Andrea, I’m thinking of that old line about how if you’re not paying for something in the tech world, there’s a good chance you are probably the product being sold, right? But I’m realizing how outmoded that idea probably is at this point. Because even when we pay for these things, we’re still the products. And specifically, our data are the products being sold. So even with things like deepfakes—which are typically defined as, you know, using some kind of machine learning or AI to create a piece of manipulated media—even they rely on surveillance in some sense. 
And so you have this irony where these recordings of reality are now also being used to distort reality.Valdez: You know, it makes me think of Don Fallis: this philosopher who talked about the epistemic threat of deepfakes and that it’s part of this pending infopocalypse. Which sounds quite grim, I know. But I think the point that Fallis was trying to make is that with the proliferation of deepfakes, we’re beginning to maybe distrust what it is that we’re seeing. And we talked about this in the last episode. You know, “seeing is believing” might not be enough. And I think we’re really worried about deepfakes, but I’m also concerned about this concept of cheap fakes, or shallow fakes. So cheap fakes or shallow fakes—it’s, you know, you can tweak or change images or videos or audio just a little bit. And it doesn’t actually require AI or advanced technology to create. So one of the more infamous instances of this was in 2019. Maybe you remember there was a video of Nancy Pelosi that came out where it sounded like she was slurring her words.Garber: Oh, yeah, right. Yeah.Valdez: Really, the video had just been slowed down using easy audio tools, and just slowed down enough to create that perception that she was slurring her words. So it’s a quote, unquote “cheap” way to create a small bit of chaos.Garber: And then you combine that small bit of chaos with the very big chaos of deepfakes.Valdez: Yeah. So one, the cheat fake is: It’s her real voice. It’s just slowed down—again, using, like, simple tools. But we’re also seeing instances of AI-generated technology that completely mimics other people’s voices, and it’s becoming really easy to use now. You know, there was this case recently that came out of Maryland where there was a high-school athletic director, and he was arrested after he allegedly used an AI voice simulation of the principal at his school. And he allegedly simulated the principal’s voice saying some really horrible things, and it caused all this blowback on the principal before investigators, you know, looked into it. Then they determined that the audio was fake. But again, it was just a regular person that was able to use this really advanced-seeming technology that was cheap, easy to use, and therefore easy to abuse.Garber: Oh, yes. And I think it also goes to show how few sort of cultural safeguards we have in place right now, right? Like, the technology will let people do certain things. And we don’t always, I think, have a really well-agreed-upon sense of what constitutes abusing the technology. And you know, usually when a new technology comes along, people will sort of figure out what’s acceptable and, you know, what will bear some kind of safety net. Um, and will there be a taboo associated with it? But with all of these new technologies, we just don’t have that. And so people, I think, are pushing the bounds to see what they can get away with.Valdez: And we’re starting to have that conversation right now about what those limits should look like. I mean, lots of people are working on ways to figure out how to watermark or authenticate things like audio and video and images.Garber: Yeah. And I think that that idea of watermarking, too, can maybe also have a cultural implication. You know, like: If everyone knows that deepfakes can be tracked, and easily, that is itself a pretty good disincentive from creating them in the first place, at least with an intent to fool or do something malicious.Valdez: Yeah. But. 
In the meantime, there’s just going to be a lot of these deepfakes and cheap fakes and shallow fakes that we’re just going to have to be on the lookout for.—Garber: Is there new advice that you have for trying to figure out whether something is fake?Raji: If it doesn’t feel quite right, it probably isn’t. A lot of these AI images don’t have a good sense of, like, spatial awareness, because it’s just pixels in, pixels out. And so there’s some of these concepts that we as humans find really easy, but these models struggle with. I advise people to be aware of, like—sort of trust your intuition. If you’re noticing weird artifacts in the image, it probably isn’t real. I think another thing, as well, is who posts.Garber: Oh, that’s a great one; yeah.Raji: Like, I mute very liberally on Twitter; uh, any platform. I definitely mute a lot of accounts that I notice [are] caught posting something. Either like a community note or something will reveal that they’ve been posting fake images, or you just see it and you recognize the design of it. And so I just knew that kind of content. Don’t engage with those kind of content creators at all. And so I think that that’s also like another successful thing on the platform level. Deplatforming is really effective if someone has sort of three strikes in terms of producing a certain type of content. And that’s what happened with the Taylor Swift situation—where people were disseminating these, you know, Taylor Swift images and generating more images. And they just went after every single account that did that—you know, completely locked down her hashtag. Like, that kind of thing where they just really went after everything. Um, and I think that that’s something that we should just do in our personal engagement as well.—Garber: Andrea, that idea of personal engagement, I think, is such a tricky part of all of this. I’m even thinking back to what we were saying before—about Ring and the interplay we were getting at between the individual and the collective. In some ways, it’s the same tension that we’ve been thinking about with climate change and other really broad, really complicated problems. This, you know, connection between personal responsibility, but also the outsized role that corporate and government actors will have to play when it comes to finding solutions. Mm hmm. And with so many of these surveillance technologies, we’re the consumers, with all the agency that that would seem to entail. But at the same time, we’re also part of this broader ecosystem where we really don’t have as much control as I think we’d often like to believe. So our agency has this giant asterisk, and, you know, consumption itself in this networked environment is really no longer just an individual choice. It’s something that we do to each other, whether we mean to or not.Valdez: Yeah; you know, that’s true. But I do still believe in conscious consumption so much as we can do it. Like, even if I’m just one person, it’s important to me to signal with my choices what I value. And in certain cases, I value opting out of being surveilled so much as I can control for it. You know, maybe I can’t opt out of facial recognition and facial surveillance, because that would require a lot of obfuscating my face—and, I mean, there’s not even any reason to believe that it would work. But there are some smaller things that I personally find important; like, I’m very careful about which apps I allow to have location sharing on me. You know, I go into my privacy settings quite often. 
I make sure that location sharing is something that I’m opting into on the app while I’m using it. I never let apps just follow me around all the time. You know, I think about what chat apps I’m using, if they have encryption; I do hygiene on my phone around what apps are actually on my phone, because they do collect a lot of data on you in the background. So if it’s an app that I’m not using, or I don’t feel familiar with, I delete it.Garber: Oh, that’s really smart. And it’s such a helpful reminder, I think, of the power that we do have here. And a reminder of what the surveillance state actually looks like right now. It’s not some cinematic dystopia. Um, it’s—sure, the cameras on the street. But it’s also the watch on our wrist; it’s the phones in our pockets; it’s the laptops we use for work. And even more than that, it’s a series of decisions that governments and organizations are making every day on our behalf. And we can affect those decisions if we choose to, in part just by paying attention.Valdez: Yeah, it’s that old adage: “Who watches the watcher?” And the answer is us.__Garber: That’s all for this episode of How to Know What’s Real. This episode was hosted by Andrea Valdez and me, Megan Garber. Our producer is Natalie Brennan. Our editors are Claudine Ebeid and Jocelyn Frank. Fact-check by Ena Alvarado. Our engineer is Rob Smierciak. Rob also composed some of the music for this show. The executive producer of audio is Claudine Ebeid, and the managing editor of audio is Andrea Valdez.Valdez: Next time on How to Know What’s Real: Thi Nguyen: And when you play the game multiple times, you shift through the roles, so you can experience the game from different angles. You can experience a conflict from completely different political angles and re-experience how it looks from each side, which I think is something like, this is what games are made for. Garber: What we can learn about expansive thinking through play. We’ll be back with you on Monday.
1 h
theatlantic.com
On D-Day, the U.S. Conquered the British Empire
For most Americans, D-Day remains the most famous battle of World War II. It was not the end of the war against Nazism. At most, it was the beginning of the end. Yet it continues to resonate 80 years later, and not just because it led to Hitler’s defeat. It also signaled the collapse of the European empires and the birth of an American superpower that promised to dedicate its foreign policy to decolonization, democracy, and human rights, rather than its own imperial prestige.It is easy to forget what a radical break this was. The term superpower was coined in 1944 to describe the anticipated world order that would emerge after the war. Only the British empire was expected to survive as the standard-bearer of imperialism, alongside two very different superpower peers: the Soviet Union and the United States. Within weeks of D-Day, however, the British found themselves suddenly and irrevocably overruled by their former colony.That result was hardly inevitable. When the British and the Americans formally allied in December 1941, the British empire was unquestionably the senior partner in the relationship. It covered a fifth of the world’s landmass and claimed a quarter of its people. It dominated the air, sea, and financial channels on which most global commerce depended. And the Royal Navy maintained its preeminence, with ports of call on every continent, including Antarctica.The United States, by contrast, was more of a common market than a nation-state. Its tendency toward isolationism has always been overstated. But its major foreign-policy initiatives had been largely confined to the Western Hemisphere and an almost random collection of colonies (carefully called “territories”), whose strategic significance was—at best—a point of national ambivalence.In the two years after Pearl Harbor, the British largely dictated the alliance’s strategic direction. In Europe, American proposals to take the fight directly to Germany by invading France were tabled in favor of British initiatives, which had the not-incidental benefit of expanding Britain’s imperial reach across the Mediterranean and containing the Soviet Union (while always ensuring that the Russians had enough support to keep three-quarters of Germany’s army engaged on the Eastern Front).Things changed, however, in November 1943, when Winston Churchill and Franklin D. Roosevelt held a summit in Cairo. The British again sought to postpone the invasion of France in favor of further operations in the Mediterranean. The debate quickly grew acrimonious. At one point, Churchill refused to concede on his empire’s desire to capture the Italian island of Rhodes. George Marshall, the usually stoic U.S. Army chief of staff, shouted at the prime minister, “Not one American is going to die on that goddamned beach!” Another session was forced to end abruptly after Marshall and his British counterpart, Sir Alan Brooke, nearly came to blows.With the fate of the free world hanging in the balance, a roomful of 60-year-old men nearly broke out into a brawl because by November 1943, America had changed. It was producing more than twice as many planes and seven times as many ships as the whole British empire. British debt, meanwhile, had ballooned to nearly twice the size of its economy. 
Most of that debt was owed to the United States, which leveraged its position as Britain’s largest creditor to gain access to outposts across the British empire, from which it built an extraordinary global logistics network of its own.[From the April 2023 issue: The age of American naval dominance is over]Having methodically made their country into at least an equal partner, the Americans insisted on the invasion of France, code-named “Operation Overlord.” The result was a compromise, under which the Allies divided their forces in Europe. The Americans would lead an invasion of France, and the British would take command of the Mediterranean.Six months later, on June 6, 1944, with the D-Day invasion under way, the British empire verged on collapse. Its economic woes were exacerbated by the 1.5 million Americans, and 6 million tons of American equipment, that had been imported into the British Isles to launch Operation Overlord. Its ports were jammed. Inflation was rampant. Its supply chains and its politics were in shambles. By the end of June 1944, two of Churchill’s ministers were declaring the empire “broke.”The British continued to wield considerable influence on world affairs, as they do today. But after D-Day, on the battlefields of Europe and in international conference rooms, instead of setting the agenda, the British found themselves having to go along with it.In July 1944, at the Bretton Woods Conference, the British expectation that global finance would remain headquartered in London and transacted at least partially in pounds was frustrated when the International Monetary Fund and what would become the World Bank were headquartered in Washington and the dollar became the currency of international trade. In August 1944, America succeeded in dashing British designs on the eastern Mediterranean for good in favor of a second invasion of France from the south. In September 1944, the more and more notional British command of Allied ground forces in Europe was formally abandoned. In February 1945, at a summit in Yalta, Churchill had little choice but to acquiesce as the United States and the Soviet Union dictated the core terms of Germany’s surrender, the division of postwar Europe, and the creation of a United Nations organization with a mandate for decolonization.How did this happen so quickly? Some of the great political historians of the 20th century, such as David Reynolds, Richard Overy, and Paul Kennedy, have chronicled the many political, cultural, and economic reasons World War II would always have sounded the death knell of the European imperial system. Some British historians have more pointedly blamed the Americans for destabilizing the British empire by fomenting the forces of anti-colonialism (what D. Cameron Watt called America’s “moral imperialism”).Absent from many such accounts is why Britain did not even try to counterbalance America’s rise or use the extraordinary leverage it had before D-Day to win concessions that might have better stabilized its empire. The French did precisely that with far less bargaining power at their disposal, and preserved the major constituents of their own empire for a generation longer than the British did. The warning signs were all there. In 1941, Germany’s leading economics journal predicted the rise of a “Pax Americana” at Britain’s expense. 
“England will lose its empire,” the article gloatingly predicted, “to its partner across the Atlantic.”[Read: How Britain falls apart]The American defense-policy scholar and Atlantic contributing writer Kori Schake recently made a persuasive case that Britain came to accept the role of junior partner in the Atlantic alliance, rather than seek to balance American power, because the two countries had become socially, politically, and economically alike in all the ways that mattered. Britain, in other words, had more to lose by confrontation. And so it chose friendship.The argument makes sense to a point, especially given how close the United Kingdom and the United States are today. But the remembered warmth of the “special relationship” in the 1940s is largely a product of nostalgia. British contempt for American racism and conformist consumerism seethed especially hot with the arrival in the U.K. of 1.5 million Americans. And American contempt for the British class system and its reputation for violent imperialism equally made any U.S. investment in the war against Germany—as opposed to Japan—a political liability for Roosevelt.The British elite had every intention of preserving the British empire and European colonialism more generally. In November 1942, as Anglo-American operations began in North Africa, Churchill assured France that its colonies would be returned and assured his countrymen, “I have not become the King’s First Minister in order to preside over the liquidation of the British Empire.”The British assumed that America’s rise was compatible with that goal because they grossly miscalculated American intentions. This was on stark display in March 1944, just over two months before D-Day, when Britain’s Foreign Office circulated a memorandum setting out the empire’s “American policy.” Given how naive the Americans were about the ways of the world, it said, Britain should expect them to “follow our lead rather than that we follow theirs.” It was therefore in Britain’s interest to foster America’s rise so that its power could be put to Britain’s use. “They have enormous power, but it is the power of the reservoir behind the dam,” the memo continued. “It must be our purpose not to balance our power against that of America, but to make use of American power for purposes which we regard as good” and to “use the power of the United States to preserve the Commonwealth and the Empire, and, if possible, to support the pacification of Europe.”It is easy to see why members of Britain’s foreign-policy elite, still warmed by a Victorian afterglow, might discount Americans’ prattling on about decolonization and democracy as empty wartime rhetoric. If anything, they thought, Americans’ pestering insistence on such ideals proved how naive they were. Churchill often grumbled with disdain about Americans’ sentimental affection for—as he put it—the “chinks” and “pigtails” fighting against Japan in China, scornful of the American belief that they could be trusted to govern themselves.And the face America presented to London might have compounded the misapprehension. Roosevelt was expected to choose George Marshall to be the American commander of Operation Overlord, a position that would create the American equivalent of a Roman proconsul in London. Instead, he picked Dwight Eisenhower.Roosevelt’s reasons for choosing Eisenhower remain difficult to pin down. The president gave different explanations to different people at different times. 
But Eisenhower was the ideal choice for America’s proconsul in London and Europe more generally, if the goal was to make a rising American superpower seem benign.Eisenhower had a bit of cowboy to him, just like in the movies. He was also an Anglophile and took to wearing a British officer’s coat when visiting British troops in the field. He had a natural politician’s instinct for leaving the impression that he agreed with everyone. And he offered the incongruous public image of a four-star general who smiled like he was selling Coca-Cola.He was also genuinely committed to multilateralism. Eisenhower had studied World War I closely and grew convinced that its many disasters—in both its fighting and its peace—were caused by the Allies’ inability to put aside their own imperial prestige to achieve their common goals. Eisenhower’s commitment to Allied “teamwork,” as he would say with his hokey Kansas geniality, broke radically from the past and seemed hopelessly naive, yet was essential to the success of operations as high-risk and complex as the D-Day invasion.Eisenhower, for his part, was often quite deft in handling the political nature of his position. He knew that to be effective, to foster that teamwork, he could never be seen as relishing the terrifying economic and military power at his disposal, or the United States’ willingness to use it. “Hell, I don’t have to go around jutting out my chin to show the world how tough I am,” he said privately.On D-Day, Eisenhower announced the invasion without mentioning the United States once. Instead, he said, the landings were part of the “United Nations’ plan for the liberation of Europe, made in conjunction with our great Russian allies.” While the invasion was under way, Eisenhower scolded subordinates who issued reports on the extent of French territory “captured.” The territory, he chided them, had been “liberated.”The strategy worked. That fall, with Paris liberated, only 29 percent of French citizens polled felt the United States had “contributed most in the defeat of Germany,” with 61 percent giving credit to the Soviet Union. Yet, when asked where they would like to visit after the war, only 13 percent were eager to celebrate the Soviet Union’s contributions in Russia itself. Forty-three percent said the United States, a country whose Air Force had contributed to the deaths of tens of thousands of French civilians in bombing raids.In rhetoric and often in reality, the United States has continued to project its power, not as an empire, but on behalf of the “United Nations,” “NATO,” “the free world,” or “mankind.” The interests it claims to vindicate as a superpower have also generally not been its imperial ambition to make America great, but the shared ideals enshrined soon after the war in the UN Charter and the Universal Declaration of Human Rights.Had the D-Day invasion failed, those ideals would have been discredited. Unable to open the Western Front in France, the Allies would have had no choice but to commit to Britain’s strategy in the Mediterranean. The U.S. military, and by extension the United States, would have lost all credibility. The Soviets would have been the only meaningful rival to German power on the European continent. 
And there would have been no reason for the international politics of national prestige and imperial interest to become outmoded.Instead, on D-Day, American soldiers joined by British soldiers and allies from nearly a dozen countries embarked on a treacherous voyage from the seat of the British empire to the shores of the French empire on a crusade that succeeded in liberating the Old World from tyranny. It was a victory for an alliance built around the promise, at least, of broadly shared ideals rather than narrow national interests. That was a radical idea at the time, and it is becoming a contested one today. D-Day continues to resonate as much as it does because, like the battles of Lexington and Concord, it is an almost-too-perfect allegory for a decisive turning point in America’s national story: the moment when it came into its own as a new kind of superpower, one that was willing and able to fight for a freer world.
1 h
theatlantic.com
The Dalai Lama Is Landing in the Middle of the 2024 Election
In early September of 2020, Joe Biden, then the Democratic nominee for president, promised to put values—values held in contempt, he argued, by the man he would go on to defeat—at the center of American foreign policy. To act on his promise, he said, he would do something Donald Trump had neglected to do. “I’ll meet with His Holiness the Dalai Lama,” Biden said.For American presidents, meeting the 14th Dalai Lama can bring tension and discord, because Communist Party leaders in Beijing consider Tibet to be a part of China. They consider any recognition of the Dalai Lama—a Mandela-level icon, a symbol of Tibet’s will to survive, and also (by the way) a living Buddha, a bodhisattva, to his millions of followers—a terrible insult to Chinese sensitivities. (To be fair, Chinese leaders are omnidirectionally offended, by supporters of Taiwanese independence and Hong Kong democracy; by Christians and Uyghurs and Mongols; and by anyone else who threatens their Middle Kingdom sense of imperial entitlement.)More than three years into his term, Biden has not made good on his promise, though he has a plausible excuse: The Dalai Lama is 88 years old and in declining health, and he seldom leaves his home in exile in Dharamsala, in the Himalayan foothills of India. But the Dalai Lama’s age now provides a path for Biden to keep his promise: The bodhisattva has bad knees and has decided, after much procrastination, to come to New York this summer to investigate the possibility of replacement.A visit by Biden to the Dalai Lama’s hospital—or an after-surgery invitation to the White House—would signal continuing American concern over the oppression of Tibet and Tibetans, as well as support for one of the most heroic and pacific humanitarian leaders of our age. Such a visit would also have the benefit of signaling to the Chinese government that a U.S. president makes decisions independent of Chinese Communist feelings. (American CEOs are particularly feeble at signaling such independence.) A call on the Dalai Lama couldn’t possibly hurt Biden’s standing among voters, especially considering the Dalai Lama’s previous lack of interest in meeting with Trump when he was president. Five years ago, when I visited the Dalai Lama at his monastery in Manali, he told me that he did not look favorably on Trump’s jingoistic “America First” rhetoric. “Everyone first,” he said, laughing. “A much better idea.”The exact timing of his trip to the United States—his first in seven years—has not yet been decided, but it will follow another event of some significance: a visit later this month to Dharamsala by Representative Nancy Pelosi, the former speaker of the House, and a congressional delegation. Pelosi has championed the Tibetan cause for decades, and, to her credit, she is loathed by Beijing for her comprehensive criticism of China’s human-rights record. 
In one of Pelosi’s earliest meetings with the Dalai Lama, she was so ferocious in her criticism of China’s human-rights abuses that the Dalai Lama said, impishly, “Now let us all pray so that we could rid Nancy of her negative attitudes.” (Pelosi’s trip has not yet been announced, and her spokesperson declined to comment, citing security concerns; news of the Dalai Lama’s proposed visit this summer was confirmed to me by sources involved in planning the trip.)The reemergence of the Dalai Lama into American politics in the months preceding the 2024 presidential election is good news for the unfortunate Tibetan cause, constantly steamrollered as it is by the raw deployment of Chinese power. In Dharamsala, the seat of the Tibetan government in exile, fear is ever present that the Dalai Lama’s eventual demise will make even more marginal the cause of Tibetan cultural and political independence. (As is implied by his status as the 14th Dalai Lama, the discovery of a 15th Dalai Lama is likely, though he will be reincarnated, according to Tibetan Buddhist tradition, as a small child, not as someone ready for international diplomacy. And the Chinese government has its own plan to identify and elevate a quisling lama.)Two months ago, I visited Dharamsala with, among others, Arthur Brooks, the Atlantic columnist and frequent writing collaborator of the Dalai Lama’s. We both experienced a religious leader who, though hobbled by knee pain and slowed by age, was still lucid and eloquent on the great subjects of freedom and happiness. I called Arthur today to ask him what he makes of this news.“In a contentious election year, it’s good to remind Americans of our core values as a people, and among those values are religious freedom and standing up for the dignity of all people around the world,” he said. “His Holiness the Dalai Lama, as we saw in Dharamsala in April, still has the ability to remind people around the world of what is good and true. For a Tibetan monk, he has an uncanny gift for bringing out the best of what it means to be a person and an American. This is an opportunity that President Biden cannot and should not miss.”
theatlantic.com
No Miracle
it could’ve been an email,or a knife gliding over the bruise of an apple,a surgical sweetness.it could’ve been a pebble,a vagrant lullaby,a slow walk through the neighborhoodwhen spring let looseand buckled through the field,throwing its head back.delight will not ruin me.i walk over the melting roof,watch the space between the buildings. and none of this, no scent, no miracle,is original.
theatlantic.com
The Most Overlooked Organ in the Human Body
This article was originally published by Undark Magazine.When Mana Parast was a medical resident in 2003, she had an experience that would change the course of her entire career: her first fetal autopsy.The autopsy, which pushed Parast to pursue perinatal and placental pathology, was on a third-trimester stillbirth. “There was nothing wrong with the baby; it was a beautiful baby,” she recalls. We’re not done, she remembers her teacher telling her. Go find the placenta.The placenta, a temporary organ that appears during pregnancy to help support a growing fetus, didn’t look as it should. Instead, it “looked like a rock,” Parast says. As far as they could tell, no one had ever examined this patient’s placenta through her pregnancy, and it was her fifth or sixth stillbirth, Parast recalls.Every year, there are approximately 5 million pregnancies in the United States. One million of those pregnancies end in miscarriage, and more than 20,000 end in stillbirth. Up to half of these pregnancy losses have unidentified causes. Recent and ongoing research, though, suggests that the placenta may hold the key to understanding and preventing some pregnancy complications, such as preterm birth and maternal and infant mortality. A closer look at the placenta—including its size and function—may have a significant impact on stillbirth rates.The placenta and its pathologies have largely been understudied, some clinicians say. There are multiple reasons: the difficulties in studying a fleeting and dynamic organ, the limitations in researching pregnant people, a lack of scientific consensus, few prospective studies, and the absence of standardized pathology reports on placentas.Some groups are working to change that. The placenta “is this complex organ that’s critical to support fetal development, so you would think we know everything about it,” says David Weinberg, the project lead for the Human Placenta Project, or HPP, an initiative by the National Institute of Child Health and Human Development. The project has awarded studies more than $101 million from 2014 to 2023 to develop better assessment tools for the placenta while it is growing inside a pregnant person.Placental research is an area of obstetrics that is sorely lacking, according to Weinberg. Although limited research has been done on abnormal placentas after delivery, the HPP research teams realized in early meetings that if they wanted to improve outcomes, they’d need to know more about what a normal placenta does over the course of pregnancy. They are one of several U.S.-based teams tackling this issue.The shift in research is a welcome one for Parast, who is now director of the Perinatal Pathology Service and a co-director of the Center for Perinatal Discovery at UC San Diego, and has received HPP funding for some of her work. But more should be done, she adds, including adopting a more cooperative approach to applying new findings: “If we’re going to do this right, we have to come at it with this mindset.”The human placenta does a lot of work for the fetus; it is, effectively, the fetal lungs, kidneys, and digestive tract. It’s also one of the only organs in the animal world that consists of two separate organisms—with tissues from both the mother and fetus—as well as the only temporary organ.The placenta evolves across a pregnancy, too, continuing to support the developing fetus while interacting with the maternal environment, Weinberg says. 
The research has, so far, shown that issues with the placenta—its size, its placement, its microbiome—can signal health problems with both pregnant person and fetus, such as preeclampsia, gestational diabetes, preterm birth, and stillbirth.[Read: The mystery of Zika’s path to the placenta]As researchers have tried to develop ways to observe the placenta throughout the course of an entire pregnancy, they’re facing challenges, though. It’s difficult, for instance, to study the organ before a birth, because of potential risks both for the woman and for her developing fetus. Pregnant women have been historically excluded from most pharmacological and preventative trials according to the National Institutes of Health Office of Research on Women’s Health. The potential reasons include the threat of legal liability should the study harm the fetus, and the complex physiology of the pregnant body.Because research on pregnant women faces so many restrictions, most placental research has been done after birth in a pathology lab. Here, the organ is typically examined only after a poor pregnancy outcome, such as stillbirth or placental abruption, in which the placenta pulls away from the uterus wall and causes heavy bleeding.Placental pathology, though, has also long had limitations. “No one in their right mind was studying placentas,” says Harvey Kliman, the director of the Yale School of Medicine’s Reproductive and Placental Research Unit, recalling the early years of his pathology training in the 1980s, when the organ was particularly understudied. As a medical student, he says, “I was discouraged from going into OB-GYN. I was told you can’t really do research on pregnant women. This is still basically true.” Conducting OB-GYN research can be particularly challenging compared with other fields of medicine, he adds.Although the advanced pathology residents were working on cancer, Kliman says that newer residents started in the basement morgue performing autopsies on placentas and fetuses. Even today, there is a hierarchy in pathology, and placental pathology is at the bottom, he says, akin to “scrubbing toilet bowls in the Navy.”“A placenta review after loss can take up to six months, because there’s no priority—there’s no patient on the table,” Kliman says. Most pathologists, he adds, “don’t see the human side of this at all. I deal with patients every day. This is very real to me.”Parast says that the culture of pathology is partly responsible for the lack of placental recognition, because pathologists often work in isolation from one another: “If there’s a perinatal pathologist, they’re the only one. So few people are doing this.”Historically, getting pathologists to come together and agree on the details of placenta work is difficult; to change that, Parast has been working with Push for Empowered Pregnancy, a nonprofit that aims to end preventable stillbirths, along with other advocacy groups such as Star Legacy Foundation. Parast has also pushed the Society for Pediatric Pathology to come together and standardize the way placental autopsy reports are written. This is a big complaint among obstetricians and advocates, she says, because when it comes to the reports as they are now, “no one understands them.” She adds that clinicians also need more training on how to interpret them.Placenta research is also hampered because of how science is done more broadly, says Michelle Oyen, a biomedical-engineering professor at Washington University in St. Louis. 
Competitive grant proposals and funding incentives can dissuade collaboration and methodology sharing. But building improved obstetrical outcomes requires collaboration between engineers and ob-gyns, she explains. Historically, she adds, there hasn’t been a relationship between those fields, unlike other areas of medicine, such as orthopedics or cardiology.Also at issue are shame and stigma around pregnancy loss—and women’s health in general. “It’s not just about the science, it’s about the fact that these problems are much bigger than most people understand,” Oyen says, referring to the systemic, gender-based obstacles in medicine. And NIH funding, when used to study diseases that primarily affect one gender, disproportionately goes to those that affect men, according to a 2021 study published in the Journal of Women’s Health.Furthermore, a 2021 study in the journal Science showed that female teams of inventors are much more likely to pioneer inventions in women’s health than majority-male teams. With the majority of patents being held by men, “there is a balance problem there,” Oyen says.[Read: A Fitbit for your placenta]That may be changing. “Women’s health is having a moment. Those of us who have been working quietly on this for 25 years are laughing about it,” she adds. “Like we’ve been doing this this whole time, and suddenly, you’re really interested in it.”Research efforts like the Human Placenta Project aim to build a new research base on the ephemeral organ. Now, 10 years into the HPP, researchers have a better understanding of the organ and its role in pregnancy outcomes. They are developing tools to monitor the placenta noninvasively, Weinberg says, such as advances in magnetic resonance imaging and ultrasounds, both of which can help better visualize the placenta and its blood flow.“We’re at a point of clinical validation,” he says. “Researchers think they have a measure that can indicate whether or not a fetus may be a risk.” Prospective studies are the next step.Unfortunately, none of these projects will be market-ready in the near future, he says, although he argues that the project has brought national attention to the placenta.“I do believe the HPP raised global awareness,” Weinberg says. “Things that seemed sci-fi not that long ago are now a possibility.”Still, some clinicians and advocates are disheartened by what they feel is slow progress with big projects such as the HPP, including Kliman and the advocacy groups Push and Measure the Placenta. Kliman’s placental research has highlighted the role of a small placenta as the leading cause of stillbirth. An unusually small placenta, he says, is a stillbirth risk because fetuses can grow too large for it; this may cause the fetus’s growth to stagnate, or make the organ simply give out.Diagnosing a small placenta is “low-hanging fruit,” he says, estimating that it could prevent 7,000 stillbirths a year.A recent study that Kliman co-authored in the journal Reproductive Sciences showed that in the pregnancy losses they studied, one-third of previously unexplained stillbirths was associated with a small placenta. His team reviewed clinical data and placental pathology for more than 1,200 unexplained pregnancy losses and determined that the most common feature of stillbirth was a small placenta. 
This article has hopefully opened up a door to confirming where these losses are coming from, he says.In 2009, together with his father, an electrical engineer and a mathematician who has since died, Kliman developed a 2-D-ultrasound measurement tool called Estimated Placental Volume which takes about 30 extra seconds at a routine ultrasound. But although the tool launched 15 years ago, getting it implemented has proved difficult.Whether or not his EPV tool will become standard across obstetrics is still uncertain, he says. “We’re dealing with a paradigm change, and there’s a lot of resistance to changing the paradigm.”Other groups are also developing new tools for placental health. Oyen, for instance, is part of In Utero, a $50 million program funded by Wellcome Leap, which aims to halve stillbirth rates globally. For research on the placenta—and maternal and fetal health more broadly—the stakes are particularly high, she says: “Right now, all of the statistics on maternal and fetal mortality are going in the wrong direction in this country.” Although fetal mortality rates have held relatively steady in the most recent years for which there are data, Oyen emphasizes that stagnation is not improvement.Oyen’s team is working to develop new ways to see how oxygen flows in and out of the placenta, using high-resolution imaging and modeling. The models could help determine how the placenta is working and, ultimately, detect if there is growth restriction.The project follows a collaborative model with teams around the world made up of biomedical engineers, clinicians, and computer scientists. Because of this, Oyen argues, the project is more nimble than traditional research: “We have all these data-sharing agreements. We share techniques; we share information within this program. This is a model for how we have to move forward.”Getting obstetricians to implement these new findings in placental research will be the next big push, and in the U.S., that means taking the consensus to the American College of Obstetricians and Gynecologists—the herald of standard of care practices and guidelines for ob-gyns.Professional societies need to develop guidelines, Parast says: “Obs need to come out and say ‘We need this.’ If there’s a little bit of a push from the obs, our societies will catch on.”More than 20 years ago, when Parast processed her first placenta, the one that looked more like a rock than an organ, she and her teacher identified an accumulation of protein-containing material that indicated an underlying condition, possibly autoimmune, she says, which may have restricted the fetus’s growth. Had someone looked at this patient’s placentas sooner, Parast says, her multiple stillbirths may have been prevented with treatment.
The Rise of Poverty Inc.
In 1964, President Lyndon B. Johnson declared “unconditional war on poverty,” and since then, federal spending on anti-poverty initiatives has steadily ballooned. The federal government now devotes hundreds of billions of dollars a year to programs that exclusively or disproportionately benefit low-income Americans, including housing subsidies, food stamps, welfare, and tax credits for working poor families. (This is true even if you exclude Medicaid, the single-biggest such program.)
That spending has done a lot of good over the years—and yet no one would say that America has won the War on Poverty. One reason: Most of the money doesn’t go directly to the people it’s supposed to be helping. It is instead funneled through an assortment of private-sector middlemen.
Beginning in the 1980s, the U.S. government aggressively pursued the privatization of many government functions under the theory that businesses would compete to deliver these services more cheaply and effectively than a bunch of lazy bureaucrats. The result is a lucrative and politically powerful set of industries that are fueled by government anti-poverty programs and thus depend on poverty for their business model. These entities often take advantage of the very people they ostensibly serve. Today, government contractors run state Medicaid programs, give job training to welfare recipients, and distribute food stamps. At the same time, badly designed anti-poverty policies have spawned an ecosystem of businesses that don’t contract directly with the government but depend on taking a cut of the benefits that poor Americans receive. I call these industries “Poverty Inc.” If anyone is winning the War on Poverty, it’s them.
Walk around any low-income neighborhood in the country and you’re likely to see sign after sign for tax-preparation services. That’s because many of the people who live in these neighborhoods qualify for the federal earned-income tax credit, which sent $57 billion toward low-income working taxpayers in 2022. The EITC is a cash cow for low-income-tax-prep companies, many of which charge hundreds of dollars to file returns, plus more fees for “easy advance” refunds, which allow people to access their EITC money earlier and function like high-interest payday loans. In the Washington, D.C., metro area, tax-prep fees can run from $400 to $1,200 per return, according to Joseph Leitmann-Santa Cruz, the CEO and executive director of the nonprofit Capital Area Asset Builders. The average EITC refund received in 2022 was $2,541.
Tax preparers might help low-income families access a valuable benefit, but the price they extract for that service dilutes the impact of the program. In Maryland, EITC-eligible taxpayers paid a total of at least $50 million to tax preparers in 2022, according to Robin McKinney, a co-founder and the CEO of the nonprofit CASH Campaign of Maryland—or about $1 of every $20 the program paid out in the state. “That’s $50 million not going to groceries, rent, to pay down student debt, or to meet other pressing needs,” McKinney told me.
[Annie Lowrey: The war on poverty is over. Rich people won.]
Low-income tax prep is just one of many business models premised on benefiting indirectly from government anti-poverty spending. Some real-estate firms manage properties exclusively for tenants receiving federal housing subsidies. Specialty dental practices cater primarily to poor children on Medicaid.
The “dental practice management” company Benevis, for example, works with more than 150 dental practices nationwide, according to its website, and reports that more than 80 percent of its patients are enrolled in either Medicaid or the Children’s Health Insurance Program. (In 2018, Benevis and its affiliated Kool Smiles clinics agreed to pay $23.9 million to settle allegations of Medicaid fraud brought by federal prosecutors. The companies did not admit wrongdoing.)
A second crop of companies that make up Poverty Inc. are the contractors paid directly to deliver services on the government’s behalf. The 1996 welfare-reform legislation repealed a federal prohibition on contracting out for welfare services. Barely a month after President Bill Clinton signed it into law, behemoths such as Lockheed Martin, Andersen Consulting, and Electronic Data Systems were vying for multimillion-dollar contracts to run state welfare systems. Today, the sector is dominated by firms like Maximus, a full-service contractor that, among other things, operates the state of Texas’s entire welfare system. Over the years, Maximus has been hit with multiple lawsuits and investigations, including a 2007 federal prosecution resulting in a $30.5 million settlement over allegations of Medicaid fraud and a 2023 federal class-action suit alleging that a data breach exposed the personal information of 612,000 Medicare beneficiaries. In 2023, Maximus reported revenues of $4.9 billion and gross profits of $1 billion. Its CEO made nearly $7 million in total compensation last year (including $5 million in stock).
Contractors also deliver most government-funded job-training programs, which have a well-deserved reputation for ineffectiveness. One reason is the abundance of companies that are approved to receive federal funds as “eligible training providers” despite showing unimpressive results. In California, that includes institutions such as Animal Behavior College in Valencia, which offers an online dog-grooming course for a total cost of $6,298.87—and whose graduates were making median quarterly earnings of just $5,000 six months after graduation, according to state data.
Perhaps the greatest damage that Poverty Inc. inflicts is through inertia. These industries don’t benefit from Americans rising out of poverty. They have a business interest in preserving the existing structure of the government programs that create their markets or provide their cushy contracts. The tax-prep industry, for instance, has spent millions over the past 20 years to block the IRS from offering a free tax-filing option to low-income taxpayers. The irony is that this kind of rent-seeking is exactly what policy makers thought they were preventing when they embraced privatization 40 years ago.
In his second term, President Ronald Reagan empaneled the President’s Commission on Privatization, which recommended the wholesale transfer of major government functions to the private sector, including Medicare, jails and prisons, public schools, and even air-traffic control. Privatization advocates were heavily influenced by “public-choice theory,” posited by the Nobel-winning economist James M. Buchanan. According to Buchanan, government agencies are as motivated by self-interest as any other entity. Instead of serving the public good, Buchanan argued, bureaucrats act to preserve their own status by maximizing their budgets and job security.
Insulated from competition, they become inefficient and detached from the public interest.
Privatization was supposed to pop that bubble of bureaucratic indolence. Instead, it merely shifted it from government agencies to corporate boardrooms.
Perhaps the clearest example of public-choice theory turned on its head is Job Corps, a $1.8 billion job-training program for young adults that, unlike most War on Poverty initiatives, has been contracted out since its inception in 1964. Decades of evidence suggest that the program accomplishes very little. It served barely 50,000 students a year before the pandemic, meaning it cost about $34,000 a student. (Job Corps largely shut down during the pandemic and hasn’t fully restored operations since.) In one 2018 audit, the Department of Labor’s inspector general concluded that the program “could not demonstrate beneficial job training outcomes.” Another investigation, by the Government Accountability Office, noted more than 13,500 safety incidents from 2016 to 2017 at Job Corps centers, nearly half of them drug-related episodes or assaults. In 2015, two students were murdered in separate campus-related crimes. Critics have also questioned the value of running an expensive residential program in mostly rural areas, far from actual jobs.
[K. Sabeel Rahman: Fix America by undoing decades of privatization]
Nevertheless, Job Corps administrators manage to hang on to government contracts for decades. (One such company notes on its website that it won its first Job Corps contract in 1964.) Today, the biggest operator is the Management & Training Corporation, a Utah-based company that runs 20 Job Corps centers nationwide. In 2022, MTC won three multiyear contracts, worth a total of about $263 million, to run Job Corps centers in Nevada, New Jersey, and Hawaii. The program remains popular in Congress, especially in districts where centers are located. The Friends of Job Corps Congressional Caucus, organized by a lobbying organization for Job Corps contractors, has 80 members. (MTC’s president serves on the organization’s board.)
Contractors’ longevity stems in part from their ability to outlast administrations—and the simple fact that, once a contract is awarded, the company that wins it often becomes a de facto monopoly. When the next contract rolls around, there may be no credible competitors.
In short, an effort to curtail Big Government has instead preserved the worst of both worlds: all the spending and bloat of government, with none of the public accountability. No wonder, then, that poverty sticks around. There’s simply too much demand for it.