
Social media platforms aren’t equipped to handle the negative effects of their algorithms abroad. Neither is the law.

An illustration of a soldier looking out at burned trees and building wreckage with bright orange flames. This scene is all contained within a stack of digital devices.
Franco Zacha for Vox

Because of one law, the internet has no legal duty of care when it comes to hate speech. Just take a look at what happened in Myanmar.

Just after the clock struck midnight, a man entered a nightclub in Istanbul, where hundreds of revelers welcomed the first day of 2017. He then swiftly shot and killed 39 people and injured 69 others — all on behalf of the Islamic State of Iraq and Syria (ISIS).

Among those killed was Jordanian citizen Nawras Alassaf. In response, his family filed a civil suit later that year against Facebook, Twitter, and Google, which owns YouTube. They believed that these tech companies knowingly allowed ISIS and its supporters to use each platform’s “recommendation” algorithms for recruiting, fundraising, and spreading propaganda, normalizing radicalization and attacks like the one that took their son’s life.

Their case, Twitter v. Taamneh, argued that tech companies profit from algorithms that selectively surface content based on each user’s personal data. While these algorithms neatly package recommendations in newsfeeds and promoted posts, continuously serving hyper-specific entertainment for many, the family’s lawyers argued that bad-faith actors have gamed these systems to further extremist campaigns. Noting Twitter’s demonstrated history of online radicalization, the suit anchored on this question: If social media platforms are being used to promote terrorist content, does their failure to intervene constitute aiding and abetting?

The answer, decided unanimously by the Supreme Court last year, was no.

The Court insisted that without ample evidence that these tech companies offered explicit special treatment to the terrorist organization, failure to remove harmful content could not constitute “substantial assistance.” A similar case in the same Supreme Court term, Gonzalez v. Google, which concerned a 2015 ISIS attack in Paris, was decided the same way.

Both decisions hinged on 26 words from a nearly three-decade-old law: “[N]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Known as Section 230 of the Communications Decency Act, the law fundamentally encoded the regulation — or lack of it — of speech on the internet. According to the logic of Section 230, which dates back to 1996, the internet is supposed to act something like a bookstore. A bookstore owner isn’t responsible for the content of the books they sell. The authors are. It means that while online platforms are free to moderate content as they see fit — just as a bookstore owner can choose whether or not to sell certain books — they are not legally responsible for what users post.

Such legal theory made sense back in 1996, when fewer than 10 million Americans were regularly using the internet and online speech had very little reach, be it a forum post or a direct message on AOL. That’s simply not the case today, when more than 5 billion people are online globally and anything on the internet can be surfaced to people who weren’t the intended audience, warped, and presented without context. Targeted advertisements dominate most feeds, “For you” pages tailor content, and a handful of platforms control the competition. Naturally, silos and echo chambers emerged.

But when a thirst for personalization exacerbates existing social tensions, it can amplify potential harm. It’s no surprise that the US, where social media platforms have intensified partisan animosity, has experienced one of the sharpest rises in political polarization of any developed democracy over the past four decades. And given that most platforms are based in the US and prioritize English speakers, moderation in other languages tends to be neglected, especially in smaller markets, which can make matters even worse outside the US.

Investments follow competition. Without it, ignorance and negligence find space to thrive.

Such myopic perspectives end up leaving hate speech and disinformation undetected in most parts of the world. When translation algorithms fail, explicitly hateful speech slips through the cracks, not to mention more indirect and context-dependent forms of inciting content. Recommendation algorithms then surface such content to users with the highest likelihood of engagement, ultimately fueling further polarization of existing tensions.
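
The mechanism the paragraph describes can be sketched in a few lines. This is a minimal, hypothetical illustration of engagement-based ranking, not any platform’s actual system: the `Post` fields, the model’s predicted-engagement numbers, and the scoring weights are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float    # a model's estimate for one specific user
    predicted_shares: float
    predicted_comments: float

def engagement_score(post: Post) -> float:
    # Weight stronger engagement signals (shares, comments) more heavily,
    # as engagement-optimizing rankers typically do. Weights are made up.
    return (1.0 * post.predicted_clicks
            + 3.0 * post.predicted_shares
            + 5.0 * post.predicted_comments)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Surface whatever the model predicts the user will engage with most,
    # regardless of what the content actually says.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("benign local news", 0.2, 0.01, 0.02),
    Post("inflammatory rumor", 0.6, 0.30, 0.40),
])
print(feed[0].text)  # the inflammatory post ranks first
```

The point of the sketch is that nothing in the ranking function inspects content at all; if inflammatory posts reliably draw more clicks, shares, and comments from a given user, the optimizer surfaces them by design.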

Speech is not the crux of the issue; where and how it appears is. A post may not directly call for the death of minorities, but its appearance in online groups that share similar sentiments can insinuate as much, if not help identify people interested in enacting such violence. Insular social media communities have played a sizable role in targeted attacks, civil unrest, and ethnic cleansing over the past decade, from the deadly riots that erupted from anti-Muslim online content in Sri Lanka to the targeted killings publicized online in Ethiopia’s Tigray War.

Of course, the US Supreme Court doesn’t have jurisdiction over what a person in another country posts. But what it has effectively accomplished through Section 230 is a precedent of global immunity for social media companies that, unlike the Court, do act globally. Platforms can’t be held responsible for human rights abuses, even if their algorithms seem to play a role in such atrocities.

One notable instance would be Meta’s alleged role in the 2017 Rohingya genocide, when Facebook’s recommendation algorithms and targeted advertising amplified hateful narratives in Myanmar, playing what the UN later described as a “determining role” in fueling ethnic strife that instigated mass violence against the Rohingya Muslim minority in Myanmar’s Rakhine state. While the company has since taken steps to improve the enforcement of its community standards, it continues to escape liability for such disasters under Section 230 protection.

One thing is clear: To see regulation only as an issue of speech or content moderation would mean disregarding any and all technological developments of the past two decades. Considering the past and ongoing social media-fueled atrocities, it is reasonable to assume that companies know their practices are harmful. The question initially posed by Twitter v. Taamneh then becomes a two-parter: If companies are aware of how their platforms cause harm, where should we draw the line on immunity?

Myanmar’s walled garden and the many lives of online speech

The rapid adoption of Facebook when it entered Myanmar in the 2010s offers a poignant example of the pitfalls of unbridled connectivity.

Until fairly recently, Myanmar was one of the least digitally connected states on the planet. Its telecommunications sector was largely state-owned, and government censorship and propaganda were prevalent. But in 2011, the deregulation of telecommunications made phones and internet access much more accessible, and Facebook found instant popularity.

“People were using Facebook because it was well-suited to their needs,” anthropologist Matt Schissler said. By 2013, Facebook was Myanmar’s de facto internet, Schissler added. In 2016, the Free Basics program, an app that provided “free” internet access via a Facebook-centric version of the internet, was launched.

Myanmar is a military-ruled, Buddhist-majority state with a demonstrated history of human rights abuses — and in particular, a record of discrimination against Muslim populations since at least 1948, when Myanmar, then called Burma, gained independence. As a result, the Rohingya — the largest Muslim population in the country — have long been a target of persecution by the Myanmar government.

In the process of connecting millions of people in just a few years, anthropologists and human rights experts say Facebook inadvertently helped exacerbate growing tensions against the Rohingya. It took very little time for hateful posts — often featuring explicit death threats — to proliferate.

Then came the Rohingya genocide that began in 2017 — an ongoing series of military-sanctioned persecutions against the Rohingya that have resulted in over 25,000 deaths and an exodus of over 700,000 refugees. Anti-Rohingya posts on Facebook were gaining traction, and at the same time, reports from the Rohingya of rape, killings, and arson by security forces grew. Myanmar’s military and Buddhist extremist groups like the MaBaTha were among the many anti-Muslim groups posting false rape accusations and calling the Rohingya minority “dogs,” among other dehumanizing messages.

In a 2022 report, Amnesty International accused Facebook’s newsfeed ranking algorithms of acting to significantly amplify hateful narratives, actively surfacing “some of the most egregious posts advocating violence, discrimination, and genocide against the Rohingya.”

The Amnesty International report heavily referenced findings from the UN’s Independent International Fact-Finding Mission on Myanmar, outlining how Facebook’s features, along with the company’s excessive data-mining practices, not only enabled bad-faith actors to target select groups but also created financial incentives for anti-Rohingya clickbait.

“Facebook’s signature features played a central role in the creation of an environment of violence,” said Pat de Brún, the report’s author and the head of big tech accountability and deputy director at Amnesty International. “From the Facebook Files leaked by Frances Haugen, we found that Facebook played a far more active and substantial role in facilitating and contributing to the ethnic cleansing of Rohingya.”

Facebook, which hosted nearly 15 million active users in Myanmar at the time, also operated with a malfunctioning translation algorithm and only four Burmese-speaking content moderators — a disastrous combination. Drowning in the sheer volume of posts, moderators failed to detect or remove most of the explicitly anti-Rohingya disinformation and hate speech on the platform.

In one case, a post in Burmese that read “Kill all the kalars that you see in Myanmar; none of them should be left alive” was rendered by Facebook’s English translation algorithm as “I shouldn’t have a rainbow in Myanmar.” (“Kalar” is a commonly used slur in Myanmar for people with darker skin, including Muslims like the Rohingya.) Unless the moderator who encountered such a post happened to be one of the company’s four Burmese speakers, a post that was equally if not more inflammatory would go undetected and circulate freely.

Facebook’s reported failure to detect hate speech was only one small part of the platform’s role in the Rohingya genocide, according to the report. Facebook’s recommendation algorithms acted to ensure that whatever slipped through the cracks in moderation found an audience. According to Amnesty International’s investigation, Facebook reportedly surfaced hateful content to insulated online communities seeking affirmations for their hateful positions — all in the service of engagement. Between Facebook’s market entry and the mass atrocities of 2017, the UN’s investigation found that some of the most followed users on the platform in Myanmar were military generals posting anti-Rohingya content.

Hate speech was not the only type of speech that engagement-optimizing algorithms amplified. “There’s hate speech, but there’s also fear speech,” said David Simon, director of the genocide studies program at Yale University.

Forcing formerly neutral actors to take sides is a common tactic in genocidal campaigns, Simon said. Core to the Burmese military’s information operations was “targeting non-Rohingya Burmese who had relationships with Rohingya people.” In doing so, militant groups framed violence against the Rohingya as acts of nationalism — and, consequently, inaction as treason. A 2018 Reuters investigation reported that individuals who resisted campaigns of hate were threatened and publicly targeted as traitors. By forcing affiliations, the Burmese military was able to normalize violence against the Rohingya.

“It’s not a matter of making everyone a perpetrator,” Simon told Vox. “It’s making sure bystanders stay bystanders.”

The context-dependent nature of fear speech manifested most notably in private channels, including direct texting and Facebook Messenger groups. In an open letter to CEO Mark Zuckerberg, six Myanmar civil society organizations reported a series of chain messages on Facebook’s messaging platform that were sent to falsely warn Buddhist communities of “jihad” attacks, while simultaneously notifying Muslim groups about anti-Muslim protests.

While hate speech, considered in isolation, explicitly violates Facebook’s community guidelines, fear speech, taken out of context, often does not. “Fear speech would not get picked up by automatic detection systems,” Simon said.
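
Simon’s distinction can be illustrated with a deliberately naive sketch. This is a hypothetical, toy keyword filter — real moderation systems use trained classifiers, and the word list and messages below are illustrative placeholders — but it shows why detection tuned to explicit markers misses context-dependent fear speech.

```python
# Toy stand-in for an explicit-incitement blocklist (illustrative only).
EXPLICIT_MARKERS = {"kill", "exterminate"}

def flags_post(text: str) -> bool:
    # Flag a post only if it contains an explicit marker word.
    words = text.lower().split()
    return any(marker in words for marker in EXPLICIT_MARKERS)

explicit = "kill them all"
fear = "they are planning an attack on our village tonight"  # a false warning

print(flags_post(explicit))  # True: caught, it names the violence outright
print(flags_post(fear))      # False: no banned word, so it slips through
```

The fear-speech message contains no prohibited term; its harm lies entirely in context — who “they” are, who receives the warning, and what action it is meant to provoke — which is exactly what a content-only check cannot see.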

Nor can Meta claim it had no advance notice of what might unfold in Myanmar. Prior to the 2017 military-sanctioned attacks in northern Rakhine state, Meta reportedly received multiple direct warnings from activists and experts flagging ongoing campaigns of hate and cautioning of an emergent mass atrocity in Myanmar. These warnings were made as early as 2012 and persisted until 2017, taking shape in meetings with Meta representatives and conferences with activists and academics at Meta’s Menlo Park headquarters.

Meta, the parent company of Facebook, has published several reports in the years since about current policies and updates in Myanmar, including that it significantly increased investments there to help with moderation, in addition to banning the military (Tatmadaw) and other military-controlled entities from Facebook and Instagram.

The internet is nothing like a bookstore

The Rohingya are not recognized as an official ethnic group and have been denied citizenship since 1982. A majority of stateless Rohingya refugees (98 percent) live in Bangladesh and Malaysia. Being a population with little to no legal protection, the Rohingya have very few pathways for reparations under Myanmar law.

On the international stage, issues of jurisdiction have also complicated Meta’s liability. Not only is Myanmar not a signatory to the Rome Statute (the treaty that established the International Criminal Court, or ICC, to address acts of genocide, among other war crimes and crimes against humanity), but the ICC is also not designed to try corporations. Ultimately, the closest anyone can get to corporate accountability is in the US, where most of these platforms are based but are effectively protected under Section 230.

Section 230 was written for an internet that did not have recommendation algorithms or targeting capabilities, and yet, many platform regulation cases today cite Section 230 as their primary defense. The law grounds itself in the analogy of a bookseller and a bookstore, which is now a far cry from the current state of our internet.

In the landmark First Amendment case Smith v. California, which involved a man convicted of violating a Los Angeles ordinance against possessing obscene books at a bookstore, the Supreme Court ruled in 1959 that expecting a bookstore owner to be thoroughly knowledgeable about all the contents of their inventory would be unreasonable. The court also ruled that making bookstore owners liable for the material they sell would drive precautionary censorship that ultimately limits the public’s access to books.

The internet in 1996, much like a bookstore, had a diverse abundance of content, and then-Reps. Chris Cox and Ron Wyden, of California and Oregon respectively, saw a meaningful parallel. They decided to take the Court’s bookstore analogy one step further when they framed Section 230: Not only should online platforms have free rein to moderate, but pitting websites with better, “safer” curations against each other would also create monetary incentives for moderation.

Today, the concentration of users on a handful of social media platforms shows that real competition is long gone. Social media companies, without such competition, lose incentive to maintain safe environments for site visitors. Instead, they’re motivated to monetize attention and keep users on the platform for as long as possible, whether via invasive ad targeting or personalizing recommended information.

These developments have complicated the original analogy. If entering a platform like Facebook were akin to entering a bookstore, that bookstore would only have a personalized display shelf available, stocked with selections based on personal reading histories.

Today, the bounds of Section 230 are painfully clear, yet that law still effectively bars activist groups, victims, and even countries from trying to hold Meta accountable for its role in various human rights abuses.

Section 230 has prevented the landscape of platform regulation from expanding beyond a neverending debate on free speech. It continues to treat social media companies as neutral distributors of information, failing to account for the multifaceted threats of data-driven targeted advertising, engagement-based newsfeed rankings, and other threatening emergent features.

Although platforms do voluntarily enforce independently authored community guidelines, legally speaking, there is little to no theory of harm for social media platforms and thus no duty of care framework. In the same way landlords are responsible for providing lead-free water for their tenants, social media platforms should have the legal duty to protect their users from the weaponization of their platforms, alongside disinformation and harmful content — or in the case of Myanmar, military-driven information operations and amplified narratives of hate. Social media companies should be legally obligated to conduct due diligence and institute safeguards — beyond effective content moderation algorithms — before operating anywhere, akin to car manufacturers installing and testing road safety features before putting a car on the market.

“It’s not that companies like Facebook intentionally want to cause harm,” Schissler said. “It’s just that they’re negligent.”

The way forward

What needs to change is both our awareness of how social media companies work and the law’s understanding of how platforms cause harm.

“Human rights due diligence as it is currently practiced focuses narrowly on discrete harms,” said André Dao, a postdoctoral research fellow studying global corporations and international law at Melbourne Law School. He said internationally recognized frameworks designed to prevent and remedy human rights abuses committed in business operations only address direct harms and overlook indirect but equally dire threats.

In a Business for Social Responsibility (BSR) report that Meta commissioned in 2018 about its operations in Myanmar, BSR — a corporate consultancy — narrowly attributed human rights abuses to Meta’s limited control over bad actors and Myanmar’s allegedly low rate of digital literacy. The report recommended better content moderation systems, neglecting a core catalyst of the genocide: Facebook’s recommendation algorithms.

Giving users more agency, as de Brún notes in the Amnesty report, is also critical in minimizing the effects of personalized echo chambers. He advocates for more stringent data privacy practices, proposing a model where users can choose whether to let companies collect their data and whether the collected data is fed into a recommendation algorithm that curates their newsfeeds. To de Brún, the bottom line is effective government regulation: “We cannot leave companies to their own devices. There needs to be oversight on how these platforms work.”

Between fueling Russia’s propaganda campaigns and amplifying extremist narratives in the Israel-Hamas war, the current lack of social media regulation rewards harmful and exploitative business practices. It leaves victims no clear paths for accountability or remediation.

Since the Rohingya genocide began in 2017, much of the internet has changed: Hyperrealistic deepfakes proliferate, and the internet has started sharing much of its real estate with content generated by artificial intelligence. Technology is developing in ways that make verifying information more difficult, even as social media companies are doubling down on the same engagement-maximizing algorithms and targeting mechanisms that played a role in the genocide in Myanmar.

Then, of course, there’s the concern about censorship. As Vox has previously reported, changes to Section 230 might engender an overcorrection: the censorship of millions of social media users who aren’t engaging in hate speech. “The likelihood that nine lawyers in black robes, none of whom have any particular expertise on tech policy, will find the solution to this vexing problem in vague statutes that were not written with the modern-day internet in mind is small, to say the least,” wrote Vox’s Ian Millhiser.

But to an optimistic few, programmable solutions that address the pitfalls of recommendation algorithms can make up for the shortfalls of legal solutions.

“If social media companies can design technology to detect copyright infringement, they can invest in content moderation,” said Simon, referencing his research for Yale’s program on mass atrocities in the digital era. He said these new technologies shouldn’t be limited to removing hate speech, but should also be used in detecting potentially harmful social trends and narratives.

ExTrac, an intelligence organization using AI to detect and map emerging risks online, and Jigsaw, a Google incubator specialized in countering online violent extremism, are among the many initiatives exploring programmable solutions to limit algorithmic polarization.

“Tech isn’t our savior, law isn’t our savior, we’re probably not our own saviors either,” Simon said. “But some combination of all three is required to inch toward a healthier and safer internet.”


Read full article on: vox.com
    foxnews.com
  46. When Will the Hottest Bridgerton Finally Get His Own Season? Liam Daniel/NetflixDearest gentle reader, I recognize that the laws of attraction can be fickle. The very same features that might make one person quiver beneath her bodice might, in fact, stir nothing but disgust for another. The flirtatious habits that once made a bachelor’s eyes sparkle during one season might bore him to tears in another. Nevertheless, I must ask: What the hell is going on with the hottest brother in Netflix’s Bridgerton? No, I’m not talking about Anthony, the dreamy (now married) Viscount played by Jonathan Bailey. I’m talking about his younger brother, Benedict, whose love life has been a hot, sexy mess for three seasons. Sex appeal might be subjective, but as far as this writer is concerned our boy Benny checks all the right boxes: He’s got gorgeous blue eyes, witty ballroom comments, and a soul-meltingly mischievous smile. Also, he’s the most reliable of his many, many siblings. Too bad Bridgerton can’t seem to decide what to do with him. Next season will likely belong either to Benedict or his younger sister, Eloise, who already made a catastrophic debut on the Ton’s marriage mart in Season 2. But if we really are moving on to Benedict, it’s going to take some real finessing to stick the landing.Read more at The Daily Beast.
    thedailybeast.com
  47. Renting is Increasingly Cheaper Than Buying a Home Mortgage rates remain high throughout the U.S. and a lack of supply is keeping home prices up.
    newsweek.com
  48. Taiwan’s New President Extends an Olive Branch to Beijing. It Matters Little For William Lai’s inauguration speech, the priority seemed to sound the right notes and avoid any language that could be construed as an affront.
    time.com