Protests continue to bring upheaval to colleges nationwide

New York officials are facing scrutiny after an NYPD officer accidentally discharged a firearm while police were clearing pro-Palestinian protesters from a building at Columbia University earlier this week. Meanwhile, officers clashed with protesters who occupied a library at Portland State University, and at Brown, administrators and students peacefully reached a deal to dismantle an encampment. Lilia Luciano has the latest on the nationwide protests.
cbsnews.com
Election 2024 latest news: Biden heading west on fundraising swing; Trump back in court
Live updates from the 2024 campaign trail with the latest news on presidential candidates, polls, primaries and more.
washingtonpost.com
Robert F. Kennedy Jr. says he suffered from parasitic brain worm
Robert F. Kennedy Jr., an independent candidate for president, confirmed doctors found a dead worm in his brain over a decade ago, but his spokesperson says he is now in good health. CBS News medical contributor Dr. Celine Gounder explains more about it.
cbsnews.com
Undercover sting operation renews attention on need to protect kids on social media
An undercover sting operation in New Mexico is renewing calls by law enforcement for social media platforms to do more to protect kids online. Bodycam video exclusively obtained by CBS News shows how police took down a suspected sexual predator allegedly targeting kids on Facebook Messenger.
cbsnews.com
House rejects effort to oust Speaker Mike Johnson from leadership role
The House of Representatives on Wednesday overwhelmingly rejected Rep. Marjorie Taylor Greene's effort to oust Speaker Mike Johnson from his leadership role. Fewer than a dozen Republicans joined Greene in the vote as Democratic leaders showed their support for Johnson.
cbsnews.com
Biden says the U.S. won't supply weapons to Israel for Rafah invasion
In a major shift in U.S. policy, President Biden said the U.S. will not supply weapons for any invasion of Rafah as Israel considers a full-scale assault on the southern Gaza city.
cbsnews.com
Harvey Weinstein expected to appear in court today — just days after being transferred to Rikers Island
Disgraced movie mogul Harvey Weinstein will appear in a New York City courtroom Thursday – just days after he was transferred back to Rikers Island from a cushy private room at Bellevue Hospital. Weinstein, 72, is expected at a procedural hearing over a writ of extradition filed by California in the wake of the New York State...
nypost.com
Worth the wait? The Beatles’ farewell film ‘Let It Be’ hits streaming 54 years later: review
Whether or not you’ve already watched the tedious-at-times “Get Back,” The Beatles’ farewell film “Let It Be,” now streaming on Disney+, asks for only 80 minutes of your time rather than eight hours.
nypost.com
Noncitizen voting is rare. Republicans are focusing on it anyway.
GOP leaders at the state and federal level are pushing measures to ban noncitizen voting even though it is already illegal in nearly all cases.
washingtonpost.com
Column: Newsom's appointees should stop delaying this great climate solution
Community solar is an easy winner. The Public Utilities Commission should make it happen.
latimes.com
How TikTok Shop ads turned an obscure, inaccurate book into a bestseller
Was this book the reason TikTok is getting banned in the US? No, but ads saying so sold a lot of copies.

If you’ve spent enough time scrolling through TikTok, you might have seen a video from an account like @tybuggyreviews, a handle with half a million followers that exclusively posts videos selling products through the TikTok Shop. The creator, whose verified Instagram account identifies him as Tarik Garrett, used the @tybuggyreviews account to pitch viewers on supplements, water flossers, earbuds, workout machines, Bible study guides, probiotics for women to help “that smell down there,” watch bands, inspirational hoodies, inspirational T-shirts, face massagers, foot massagers, rhinestone necklaces, oil pulling kits, and colon cleanses. In the TikTok Shop, creators earn a commission for each sale linked to their account.

Garrett’s product videos got tens of thousands of views. A few even topped a million views. But nothing from his account took off quite like his sales pitch for an obscure 2019 publication called The Lost Book of Herbal Remedies.

“Now I see why they’re trying to remove TikTok. This book right here? This book of herbal remedies? They do not want us to see this book,” Garrett said at the beginning of one Shop video, referring to a new US law that requires TikTok’s Chinese parent company to either sell the app or face a ban. TikTok is challenging the law in court, arguing that lawmakers citing national security concerns as a reason to pass the bill did not adequately argue why those concerns should supersede the First Amendment. The law, to be clear, does not cite The Lost Book of Herbal Remedies’s availability on the TikTok Shop as a reason for banning the platform.

Garrett posted his pitch for the book on April 15. As of May 7, the video had more than 16 million views. Garrett opened the book and showed pages of its recommendations, urging users to take screenshots (and purchase a copy of their own) before it’s too late. The camera lingered on a list of plants that, the book claimed, were treatments for cancer, drug addiction, heart attacks, and herpes.

As of Wednesday, the listing for The Lost Book of Herbal Remedies that Garrett linked to has more than 60,000 sales on the TikTok Shop. To put that number in perspective, appearing on a bestseller list generally requires 5,000–10,000 sales in a week. And that interest isn’t staying exclusively on TikTok. Google search interest in the book’s title spiked on the same day Garrett posted his video. The Lost Book of Herbal Remedies was, as of Wednesday, May 8, ranked 10th on Amazon’s bestseller list for books, and has appeared toward the top of Amazon’s bestseller rankings for the past three weeks.

I sent a handful of Garrett’s videos advertising the book, along with about a half dozen additional widely viewed videos from other creators promoting The Lost Book of Herbal Remedies, to TikTok for comment. A spokesperson for TikTok said that videos linking to Shop products must abide by both the community guidelines, which ban medical misinformation, and Shop policies, which do not allow misleading content. If a video violates only the Shop policies, they said, they’ll simply remove the link to the Shop but keep the content up. If it violates community guidelines, the video comes down. The violations were enough for TikTok to remove Garrett’s product review account. Garrett did not respond to a series of emailed questions.
How e-commerce took over TikTok

TikTok has long been good at guessing what its users might want to see, but less good at monetizing that trick. When the platform launched its Shop feature in the United States last fall, the For You page shifted, pushing video after video like those made by @tybuggyreviews in the hope that users will start buying the products that go viral on TikTok directly from its store. The result is a For You page with constant interruptions from random product pitches.

Right now, for instance, my For You page shows me a bunch of creators dancing to a German song about rhubarb, a bunch of pet birds behaving poorly, chaotic nonbinary people, and lots of ads from alternative wellness creators trying to sell me oils, mushrooms, and books. The Shop ads I see, like much of the content pushed to me on TikTok, are personalized, though my TikTok Shop recommendations are heavily influenced by my reporting on stories like this one. Your results may differ.

And yet, it is clear that TikTok has catapulted the Remedies book into relevance beyond a niche audience. The company earns money off the explosion of sales on the Shop, some of which come from creators who are explicitly promoting unproven cancer “cures” and conspiracy theories about the platform. Like the Shadow Work Journal, a workbook that went super viral on TikTok Shop several months ago as a mental health tool — despite its dubious effectiveness — The Lost Book of Herbal Remedies is part of a swell of wellness creators, brands, and products that have found success reaching new audiences on TikTok Shop.

Shop videos have become a sort of “loophole” for health misinformation on TikTok, said Evan Thornburg, a bioethicist who posts on TikTok as @gaygtownbae and studies mis/disinformation and public health. Creators, and those with something to sell, know that Shop videos will get privileged on For You pages. Some creators may use those videos to promote dangerous health claims. In other cases, Thornburg noted, “the creator promoting the material isn’t necessarily spouting off disinformation, but the material that they’re convincing people to purchase is.”

A recipe for misinformation

The Lost Book of Herbal Remedies appears to be a case of both: The book contains misleading information, and creators are circulating misleading health claims in order to sell books.

A video with nearly 1 million views promoting the book’s TikTok Shop listing is basically a series of ominous, AI-generated images with an AI voiceover. The video claims that the book contains secrets previously locked away in an ancient book located in the “Vatican library,” and that The Lost Book of Herbal Remedies was previously only available on the “dark web” before surfacing on TikTok. (Not true: The book is for sale on Amazon and on the author’s website, and it appears to be available through some academic and public library systems.) Another Shop video with more than 1 million views is captioned, “Cure for over 550 diseases, even cancer.”

I scanned through a copy of The Lost Book of Herbal Remedies this week.
The 300-page book contains a disclaimer noting that it’s intended to “provide information about natural medicine, cures, and remedies that people have used in the past,” that it is not medical advice, and that some of the “remedies and cures found within do not comply with FDA guidelines.” It’s split into two parts: an alphabetical listing of ailments and conditions alongside the plants that the authors believe can cure or treat them, and an alphabetical list of plants, sorted by region, with instructions on how to prepare them.

The list of ailments includes proposed treatments for cancer, several STDs, mental health disorders, and digestive issues, among many other things. A few stand out: The book lists cures for smallpox, strep, and staph infections. There’s an emergency medicine section that includes plant remedies for serious medical conditions like internal bleeding and poisoning.

Flip to the entries for the plants and you’ll find lists of claims referring to research that is not cited. An entry promoting Ashwagandha’s “anti tumor effects” and ability to “kill ... cancerous cells” refers to “research,” but does not note that, while there is some indication that Ashwagandha can slow the growth of cancer cells, these studies were conducted on rodents and have yet to be replicated in humans.

Nicole Apelian, one of the book’s authors, did not reply to an emailed request for comment. While she is active on TikTok, it’s not her main social media presence. Her TikTok bio encourages her 17,000 followers there to check her out on Instagram, where she has 100,000 followers. Apelian also runs Nicole’s Apothecary, an herbal shop mentioned in the book that sells some of the tinctures she recommends; she also sells memberships to an online “Academy” for fans of her book and advertises her paid appearances and workshops.

The endless whack-a-mole

As a journalist, there’s a pattern that becomes evident when writing about health misinformation on social media: Something gets views, you assess the real or potential harm and try to understand its context, and you contact the company to ask about the harmful thing. Maybe the video or post or group is taken down, maybe it’s not. The company gives you a statement, refers you to its policies on misinformation, and then you publish the article. This happens over and over and over and over because writing about misleading health information is a game of whack-a-mole that feels harder and harder to win.

Thornburg, the bioethicist, noted a couple of reasons why I can’t climb out of this purgatory. First, meaningful moderation of a platform like TikTok is somewhat implausible. Social media companies are “never going to prioritize the amount of labor that would need to consistently be put into misinformation management,” they said. Most sites rely on a combination of human moderators and AI, and it’s difficult to create automated moderation tools that don’t also censor allowed content. For example: Health misinformation targeting minority communities often taps into legitimate distrust of medical professionals and institutions, a distrust with roots in recent history. An AI tool designed to moderate keywords associated with this sort of targeted misinformation might also sweep up criticism of health care systems in general.

And second, the creators who profit off health misinformation are really good at figuring out what they can say where, and at what Thornburg calls “life boating” their audiences from one platform to another as needed.
“You will have people who will drive interest in something through TikTok because the virality and the algorithm are aggressive,” Thornburg said. Then, their profile will link out to their Instagram or Linktree or YouTube channel. Health misinformation on social media is a million cross-pollinating moving targets. TikTok Shop is a hot spot right now. Later, it might be something else on another platform.

Chasing this content from platform to platform, harm to harm, viral video to viral video, is exhausting. I am exhausted. At the end of our interview, Thornburg shared the question that drives a lot of their work in this space: “Who do we consider accountable for these things that are harmful and regulate them or hold them to certain standards?” Often, it’s not really the person behind the individual piece of content who is driving the incentives for making it.

As a result of my reporting, Garrett’s account was taken down, along with a few other popular videos advertising a book that has already sold tens of thousands of copies. As long as the incentives remain, it won’t be long until the next product promising a miracle starts polluting my For You page.
vox.com
Diamondbacks vs. Reds prediction: MLB odds, picks, best bets for Thursday
Stitches predicts Hunter Greene will lead the host Reds to a victory over the Diamondbacks on Thursday.
nypost.com
NYC’s $50M QC NY spa on Governors Island announces an expansion set to open this summer
Governors Island’s buzzy, and Instagrammable, Italian day spa QC NY is expanding, with a new building and restaurant set to open this July.
nypost.com
Bike in style with the BirdBike, over half off with free shipping
Get active for less!
nypost.com
Trump ‘hush money’ NYC trial live updates: Stormy Daniels back on witness stand after admitting she hates ex-president
Follow the Post’s live updates for the latest news, analysis and photos from the Trump trial in NYC.
nypost.com
Social media platforms aren’t equipped to handle the negative effects of their algorithms abroad. Neither is the law.
Because of one law, the internet has no legal duty of care when it comes to hate speech. Just take a look at what happened in Myanmar.

Just after the clock struck midnight, a man entered a nightclub in Istanbul, where hundreds of revelers welcomed the first day of 2017. He then swiftly shot and killed 39 people and injured 69 others — all on behalf of the Islamic State of Iraq and Syria (ISIS). Among those killed was Jordanian citizen Nawras Alassaf. In response, his family filed a civil suit later that year against Facebook, Twitter, and Google, which owns YouTube. They believed that these tech companies knowingly allowed ISIS and its supporters to use each platform’s “recommendation” algorithms for recruiting, fundraising, and spreading propaganda, normalizing radicalization and attacks like the one that took their son’s life.

Their case, Twitter v. Taamneh, argued that tech companies profit from algorithms that selectively surface content based on each user’s personal data. While these algorithms neatly package recommendations in newsfeeds and promoted posts, continuously serving hyper-specific entertainment for many, the family’s lawyers argued that bad-faith actors have gamed these systems to further extremist campaigns. Noting Twitter’s demonstrated history of online radicalization, the suit anchored on this question: If social media platforms are being used to promote terrorist content, does their failure to intervene constitute aiding and abetting?

The answer, decided unanimously by the Supreme Court last year, was no. The Court insisted that without ample evidence that these tech companies offered explicit special treatment to the terrorist organization, failure to remove harmful content could not constitute “substantial assistance.” A similar case in the same Supreme Court term, Gonzalez v. Google, concerning a 2015 ISIS attack in Paris, ended in the same decision as Twitter v. Taamneh.

Both decisions hinged on 26 words from a nearly three-decades-old law: “[N]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Known as Section 230 of the Communications Decency Act, the law fundamentally encoded the regulation — or lack of it — of speech on the internet.

According to the logic of Section 230, which dates back to 1996, the internet is supposed to act something like a bookstore. A bookstore owner isn’t responsible for the content of the books they sell; the authors are. It means that while online platforms are free to moderate content as they see fit — just as a bookstore owner can choose whether or not to sell certain books — they are not legally responsible for what users post.

Such legal theory made sense back in 1996, when fewer than 10 million Americans were regularly using the internet and online speech had very little reach, be it a forum post or a direct message on AOL. That’s simply not the case today, when more than 5 billion people are online globally and anything on the internet can be surfaced to people who weren’t the intended audience, warped, and presented without context. Targeted advertisements dominate most feeds, “For you” pages tailor content, and a handful of platforms control the competition. Naturally, silos and echo chambers emerged. But when a thirst for personalization exacerbates existing social tensions, it can amplify potential harm.
It’s no surprise that the US, where social media platforms have intensified partisan animosity, has experienced one of the largest increases in political polarization of any developed democracy over the past four decades. And given that most platforms are based in the US and prioritize English speakers, moderation in other languages tends to be neglected, especially in smaller markets, which can make the situation even worse outside the US. Investments follow competition; without it, ignorance and negligence find space to thrive. Such myopic perspectives end up leaving hate speech and disinformation undetected in most parts of the world. When translation algorithms fail, explicitly hateful speech slips through the cracks, not to mention more indirect and context-dependent forms of inciting content. Recommendation algorithms then surface such content to the users with the highest likelihood of engagement, ultimately further inflaming existing tensions.

Speech is not the crux of the issue; where and how it appears is. A post may not directly call for the death of minorities, but its appearance in online groups sharing similar sentiments might insinuate that, if not help identify people who might be interested in enacting such violence. Insular social media communities have played a sizable role in targeted attacks, civil unrest, and ethnic cleansing over the past decade, from the deadly riots that erupted from anti-Muslim online content in Sri Lanka to the targeted killings publicized online in Ethiopia’s Tigray War.

Of course, the US Supreme Court doesn’t have jurisdiction over what a person in another country posts. But what it has effectively accomplished through Section 230 is a precedent of global immunity for social media companies that, unlike the Court, do act globally. Platforms can’t be held responsible for human rights abuses, even if their algorithms seem to play a role in such atrocities. One notable instance is Meta’s alleged role in the 2017 Rohingya genocide, when Facebook’s recommendation algorithms and targeted advertising amplified hateful narratives in Myanmar, playing what the UN later described as a “determining role” in fueling ethnic strife that instigated mass violence against the Rohingya Muslim minority in Myanmar’s Rakhine state. While the company has since taken steps to improve the enforcement of its community standards, it continues to escape liability for such disasters under Section 230 protection.

One thing is clear: To see regulation only as an issue of speech or content moderation would mean disregarding any and all technological developments of the past two decades. Considering the past and ongoing social media-fueled atrocities, it is reasonable to assume that companies know their practices are harmful. The question initially posed by Twitter v. Taamneh then becomes a two-parter: If companies are aware of how their platforms cause harm, where should we draw the line on immunity?

Myanmar’s walled garden and the many lives of online speech

The rapid adoption of Facebook when it entered Myanmar in the 2010s offers a poignant example of the pitfalls of unbridled connectivity. Until fairly recently, Myanmar was one of the least digitally connected states on the planet. Its telecommunications market was largely state-owned, and government censorship and propaganda were prevalent. But in 2011, the deregulation of telecommunications made phones and internet access much more accessible, and Facebook found instant popularity.
“People were using Facebook because it was well-suited to their needs,” anthropologist Matt Schissler said. By 2013, Facebook was Myanmar’s de facto internet, Schissler added. In 2016, the Free Basics program, an app that provided “free” internet access via a Facebook-centric version of the internet, launched in the country.

Myanmar is a military-ruled, Buddhist-majority state with a demonstrated history of human rights abuses — and in particular, a record of discrimination against Muslim populations since at least 1948, when Myanmar, then called Burma, gained independence. As a result, the Rohingya — the largest Muslim population in the country — have long been a target of persecution by the Myanmar government.

In the process of connecting millions of people in just a few years, anthropologists and human rights experts say, Facebook inadvertently helped exacerbate growing tensions against the Rohingya. It took very little time for hateful posts — often featuring explicit death threats — to proliferate. Then came the Rohingya genocide that began in 2017: an ongoing series of military-sanctioned persecutions against the Rohingya that have resulted in over 25,000 deaths and an exodus of over 700,000 refugees. Anti-Rohingya posts on Facebook were gaining traction, and at the same time, reports from the Rohingya of rape, killings, and arson by security forces grew. Myanmar’s military and Buddhist extremist groups like the MaBaTha were among the many anti-Muslim groups posting false rape accusations and calling the Rohingya minority “dogs,” among other dehumanizing messages.

In a 2022 report, Amnesty International accused Facebook’s newsfeed ranking algorithms of significantly amplifying hateful narratives, actively surfacing “some of the most egregious posts advocating violence, discrimination, and genocide against the Rohingya.” The Amnesty International report heavily referenced findings from the UN’s Independent International Fact-Finding Mission on Myanmar, outlining how Facebook’s features, along with the company’s excessive data-mining practices, not only enabled bad-faith actors to target select groups but also created financial incentives for anti-Rohingya clickbait.

“Facebook’s signature features played a central role in the creation of an environment of violence,” said Pat de Brún, the report’s author and the head of big tech accountability and deputy director at Amnesty International. “From the Facebook Files leaked by Frances Haugen, we found that Facebook played a far more active and substantial role in facilitating and contributing to the ethnic cleansing of Rohingya.”

Facebook, which hosted nearly 15 million active users in Myanmar at the time, also operated with a malfunctioning translation algorithm and only four Burmese-speaking content moderators — a disastrous combination. Drowning in the sheer quantity of posts, moderators more often than not failed to detect or remove the majority of the explicitly anti-Rohingya disinformation and hate speech on the platform. In one case, a post in Burmese that read, “Kill all the kalars that you see in Myanmar; none of them should be left alive,” was translated to “I shouldn’t have a rainbow in Myanmar” by Facebook’s English translation algorithm. (“Kalar” is a commonly used slur in Myanmar for people with darker skin, including Muslims like the Rohingya.) If a moderator who encountered such a post wasn’t one of the company’s four Burmese speakers, an equally if not more inflammatory post would go undetected, freely circulating.
Facebook’s reported failure to detect hate speech was only one small part of the platform’s role in the Rohingya genocide, according to the report. Facebook’s recommendation algorithms acted to ensure that whatever slipped through the cracks in moderation found an audience. According to Amnesty International’s investigation, Facebook reportedly surfaced hateful content to insulated online communities seeking affirmation for their hateful positions — all in the service of engagement. Between Facebook’s market entry and the mass atrocities of 2017, the UN’s investigation found that some of the most followed users on the platform in Myanmar were military generals posting anti-Rohingya content.

Hate speech was not the only type of speech that engagement-optimizing algorithms amplified. “There’s hate speech, but there’s also fear speech,” said David Simon, director of the genocide studies program at Yale University. Forcing formerly neutral actors to take sides is a common tactic in genocidal campaigns, Simon said. Core to the Burmese military’s information operations was “targeting non-Rohingya Burmese who had relationships with Rohingya people,” he said. In doing so, militant groups framed violence against the Rohingya as acts of nationalism — and, consequently, inaction as treason. A 2018 Reuters investigation reported that individuals who resisted campaigns of hate were threatened and publicly targeted as traitors. By forcing affiliations, the Burmese military was able to normalize violence against the Rohingya. “It’s not a matter of making everyone a perpetrator,” Simon told Vox. “It’s making sure bystanders stay bystanders.”

The context-dependent nature of fear speech manifested most notably in private channels, including direct texting and Facebook Messenger groups. In an open letter to CEO Mark Zuckerberg, six Myanmar civil society organizations reported a series of chain messages on Facebook’s messaging platform that falsely warned Buddhist communities of “jihad” attacks while simultaneously notifying Muslim groups about anti-Muslim protests. While hate speech, considered in isolation, explicitly violates Facebook’s community guidelines, fear speech, taken out of context, often does not. “Fear speech would not get picked up by automatic detection systems,” Simon said.

Nor can Meta claim it had no advance notice of what might unfold in Myanmar. Prior to the 2017 military-sanctioned attacks in northern Rakhine state, Meta reportedly received multiple direct warnings from activists and experts flagging ongoing campaigns of hate and cautioning of an emergent mass atrocity in Myanmar. These warnings were made as early as 2012 and persisted until 2017, taking shape in meetings with Meta representatives and conferences with activists and academics at Meta’s Menlo Park headquarters. Meta, the parent company of Facebook, has published several reports in the years since about its current policies and updates in Myanmar, including that it significantly increased investments there to help with moderation, in addition to banning the military (Tatmadaw) and other military-controlled entities from Facebook and Instagram.

The internet is nothing like a bookstore

The Rohingya are not recognized as an official ethnic group in Myanmar and have been denied citizenship since 1982. A majority of stateless Rohingya refugees (98 percent) live in Bangladesh and Malaysia. As a population with little to no legal protection, the Rohingya have very few pathways for reparations under Myanmar law.
On the international stage, issues of jurisdiction have also complicated Meta’s liability. Not only is Myanmar not a signatory of the Rome Statute (the treaty that established the International Criminal Court, or ICC, to address acts of genocide, among other war crimes and crimes against humanity), but the ICC is also not designed to try corporations. Ultimately, the closest anyone can get to corporate accountability is in the US, where most of these platforms are based but are effectively protected under Section 230.

Section 230 was written for an internet that did not have recommendation algorithms or targeting capabilities, and yet many platform regulation cases today cite Section 230 as their primary defense. The law grounds itself in the analogy of a bookseller and a bookstore, which is now a far cry from the current state of our internet. In the landmark First Amendment case Smith v. California, which involved a man convicted of violating a Los Angeles ordinance against possessing obscene books at a bookstore, the Supreme Court ruled in 1959 that expecting a bookstore owner to be thoroughly knowledgeable about all the contents of their inventory would be unreasonable. The court also ruled that making bookstore owners liable for the material they sell would drive precautionary censorship that ultimately limits the public’s access to books.

The internet in 1996, much like a bookstore, had a diverse abundance of content, and then-Reps. Chris Cox and Ron Wyden, of California and Oregon respectively, saw a meaningful parallel. They decided to take the Court’s bookstore analogy one step further when they framed Section 230: Not only should online platforms have free rein to moderate, but pitting websites with better, “safer” curation against one another would also create monetary incentives for moderation.

Today, the concentration of users on a handful of social media platforms shows that real competition is long gone. Without such competition, social media companies lose the incentive to maintain safe environments for site visitors. Instead, they’re motivated to monetize attention and keep users on the platform for as long as possible, whether via invasive ad targeting or personalized recommendations. These developments have complicated the original analogy. If entering a platform like Facebook were akin to entering a bookstore, that bookstore would have only a personalized display shelf available, stocked with selections based on personal reading histories.

The bounds of Section 230 are now painfully clear, yet the law still effectively bars activist groups, victims, and even countries from trying to hold Meta accountable for its role in various human rights abuses. Section 230 has prevented the landscape of platform regulation from expanding beyond a never-ending debate on free speech. It continues to treat social media companies as neutral distributors of information, failing to account for the multifaceted threats of data-driven targeted advertising, engagement-based newsfeed rankings, and other threatening emergent features. Although platforms do voluntarily enforce independently authored community guidelines, legally speaking, there is little to no theory of harm for social media platforms, and thus no duty of care framework.
In the same way landlords are responsible for providing lead-free water to their tenants, social media platforms should have a legal duty to protect their users from the weaponization of their platforms, as well as from disinformation and harmful content — or, in the case of Myanmar, military-driven information operations and amplified narratives of hate. Social media companies should be legally obligated to conduct due diligence and institute safeguards — beyond effective content moderation algorithms — before operating anywhere, akin to car manufacturers installing and testing safety features before putting a car on the market. “It’s not that companies like Facebook intentionally want to cause harm,” Schissler said. “It’s just that they’re negligent.”

The way forward

What needs to change is both our awareness of how social media companies work and the law’s understanding of how platforms cause harm. “Human rights due diligence as it is currently practiced focuses narrowly on discrete harms,” said André Dao, a postdoctoral research fellow studying global corporations and international law at Melbourne Law School. Internationally recognized frameworks designed to prevent and remedy human rights abuses committed in business operations, he said, address only direct harms and overlook indirect but equally dire threats. In a Business for Social Responsibility (BSR) report that Meta commissioned in 2018 about its operations in Myanmar, BSR — a corporate consultancy — narrowly attributed human rights abuses to Meta’s limited control over bad actors and Myanmar’s allegedly low rate of digital literacy. The report recommended better content moderation systems, neglecting a core catalyst of the genocide: Facebook’s recommendation algorithms.

Giving users more agency, as de Brún notes in the Amnesty report, is also critical to minimizing the effects of personalized echo chambers. He advocates for more stringent data privacy practices, proposing a model where users can choose whether to let companies collect their data and whether the collected data is fed into a recommendation algorithm that curates their newsfeeds. To de Brún, the bottom line is effective government regulation: “We cannot leave companies to their own devices. There needs to be oversight on how these platforms work.”

Between fueling Russia’s propaganda campaigns and amplifying extremist narratives in the Israel-Hamas war, the current lack of social media regulation rewards harmful and exploitative business practices. It leaves victims no clear paths for accountability or remediation. Since the Rohingya genocide began in 2017, much of the internet has changed: Hyperrealistic deepfakes proliferate, and the internet has started sharing much of its real estate with content generated by artificial intelligence. Technology is developing in ways that make verifying information more difficult, even as social media companies double down on the same engagement-maximizing algorithms and targeting mechanisms that played a role in the genocide in Myanmar.

Then, of course, there’s the concern about censorship. As Vox has previously reported, changes to Section 230 might engender an overcorrection: the censorship of millions of social media users who aren’t engaging in hate speech.
“The likelihood that nine lawyers in black robes, none of whom have any particular expertise on tech policy, will find the solution to this vexing problem in vague statutes that were not written with the modern-day internet in mind is small, to say the least,” wrote Vox’s Ian Millhiser.

But to an optimistic few, programmable solutions that address the pitfalls of recommendation algorithms can make up for the shortfalls of legal solutions. “If social media companies can design technology to detect copyright infringement, they can invest in content moderation,” said Simon, referencing his research for Yale’s program on mass atrocities in the digital era. These new technologies, he said, shouldn’t be limited to removing hate speech; they should also be used to detect potentially harmful social trends and narratives. ExTrac, an intelligence organization using AI to detect and map emerging risks online, and Jigsaw, a Google incubator specializing in countering online violent extremism, are among the many initiatives exploring programmable solutions to limit algorithmic polarization.

“Tech isn’t our savior, law isn’t our savior, we’re probably not our own saviors either,” Simon said. “But some combination of all three is required to inch toward a healthier and safer internet.”
vox.com
Jelly Roll continues weight loss journey by completing first 5K run
Jelly Roll became emotional after completing his first 5K run. The country star wiped away tears at the finish line and later revealed that he has lost "50 to 70" pounds.
foxnews.com
6 last-minute gifts for Mother's Day that will get to her on time
We've rounded up six Mother's Day gift ideas that say everything but last-minute.
foxnews.com
18 bodies — 9 left with messages — found in Mexico
Nine men were found dead in the city of Morelos in Zacatecas — a day after nine bodies were found on an avenue in the city of Fresnillo.
cbsnews.com
Live updates: Stormy Daniels to continue testimony in Trump’s hush money trial
Stormy Daniels is expected to return to the stand to continue her testimony in Donald Trump’s trial on allegations of business fraud related to hush money payments.
washingtonpost.com
Sneak peek: The Day My Mother Vanished
When her mother disappears, 7-year-old Nicki Bates begins a lifelong search to find her and bring her killer to justice. "48 Hours" correspondent Peter Van Sant reports Saturday, May 11 at 10/9c on CBS and streaming on Paramount+.
cbsnews.com
Putin defends Russia's planned tactical nuclear weapons drill, calling exercise 'nothing unusual'
The planned exercise involving the practice deployment of tactical nuclear weapons in southern Russia is nothing unusual, according to Russian President Vladimir Putin.
foxnews.com
CA regulators to vote on divisive energy bill proposal
California regulators will vote on whether to allow the state's utility companies to add a fixed charge to power bills in exchange for lowering the price of electricity.
foxnews.com
Scientists discover thick atmosphere enveloping rocky so-called 'super Earth' planet
Researchers have discovered a thick atmosphere enveloping a planet called 55 Cancri e, which is twice the size of Earth and located in a nearby solar system.
foxnews.com
The tension in the Islanders’ parting comments before an offseason that promises changes
Asked whether he’s nervous about changes to the roster, Mat Barzal replied, “Definitely. It’s been three years now where we haven’t got over the hump in the first round.”
nypost.com
Giants sign veteran receiver Allen Robinson to bolster offense
The Giants added 10 years of experience to their receivers unit.
nypost.com
Liz Cheney joins old foe Trump in public slam of Biden's latest move in Israel: 'Wrong and dangerous'
Former Wyoming Congresswoman Liz Cheney and former President Trump both publicly criticized President Biden on social media this week.
foxnews.com
Steakhouse customer accused of pulling teen’s skirt down at restaurant said she was ‘applauded’ by other patrons in new body-cam footage
Newly emerged video shows the enraged Utah diner who went wildly viral for yanking down a teenager’s skirt pleading her case to cops after her arrest, telling them that she was “applauded” by others.
nypost.com
Biden set to tighten asylum access at US-Mexico border, sources say
WASHINGTON – The Biden administration is set to tighten access to asylum at the U.S.-Mexico border via a new regulation that could be issued as soon as Thursday, four sources familiar with the matter said, in a targeted move aimed at reducing illegal crossings. The regulation would require migrants to be assessed at an initial...
nypost.com
Saudi authorities approve lethal force to clear residents from land for futuristic eco-city: report
A report from the BBC claims that the government of Saudi Arabia has approved the use of lethal force against residents of land set aside for an eco-friendly city development project.
foxnews.com
Boeing 737 Crash During Takeoff Leaves 11 Injured
A Boeing 737 plane skidded off the runway during takeoff at an airport in Senegal, the country’s transport minister said Thursday, leaving multiple people injured. Transport Minister El Malick Ndiaye said 10 people were injured when the Air Sénégal flight operated by TransAir went off the tarmac at Blaise Diagne International Airport, which serves the capital of Dakar. Local media reports citing press communications from the airport say 11 people were injured, including four seriously. Intense video footage shared on Facebook by the Malian musician Cheick Siriman Sissoko, purportedly showing the aftermath of the crash, depicts a panicked scene of people evacuating the plane, which he also said had “caught fire.” People can be heard screaming while flames are visible on one side of the aircraft. Read more at The Daily Beast.
thedailybeast.com
Boeing whistleblower says he had to downplay issues when inspecting planes: ‘It was just a matter of time’
A former quality manager of Boeing supplier Spirit AeroSystems says he was pressured to downplay defects he discovered while inspecting the troubled planes — and that he always felt it was "just a matter of time before something bad happened."
nypost.com
Elon Musk’s Neuralink encounters problem with first in-human brain implant
Elon Musk’s brain-chip startup said Wednesday that its first-ever implant has malfunctioned. Neuralink’s brain-computer interface, known as a BCI, was implanted into 29-year-old patient Nolan Arbaugh’s brain back in January. The device is designed to help patients with paralysis control external technology using only their minds. Arbaugh — who is paralyzed from the shoulders down due to a...
nypost.com
David Axelrod pummels Biden's defiant stance on economy following CNN interview: A 'terrible mistake'
Former Obama adviser David Axelrod panned President Biden's dismissal of polls about his handling of the economy during a CNN interview, calling it a "terrible mistake."
foxnews.com
Ascension health care network disrupted by cyberattack
Ascension said it responded immediately, and access to some systems has been interrupted, with remediation efforts in progress.
cbsnews.com
Passengers flee fiery Boeing 737 that skidded off runway in Senegal
One of the 78 passengers was seen scrambling away from the aircraft as it went up in flames.
nypost.com
‘The Idea of You’ Proves It’s Time Nicholas Galitzine Graduates to the Big Hollywood Movies
Like, ones that don't go straight to streaming.
nypost.com
Rangers’ run 30 years later has plenty of echoes back to 1994
New Yorkers have been talking for weeks about the similarities between this Rangers Stanley Cup chase and the one in 1994, which ended the franchise’s 54-year curse.
nypost.com
Stormy Daniels to resume testimony in Trump trial
Donald Trump's criminal hush money trial continues in New York. Follow here for the latest live news updates, analysis and more.
edition.cnn.com
Tom Brady Netflix roast was the 'worst piece of garbage,' radio legend says
Radio legend Christopher "Mad Dog" Russo lambasted the Tom Brady Netflix special on Wednesday. He said it was "awful" and wondered how Brady "could subject himself to that nonsense."
foxnews.com
Putin claims there’s ‘nothing unusual’ about Russian nuclear weapons drill
"There is nothing unusual here, this is planned work," Putin said, state news agency TASS reported. "It is training."
nypost.com
Feds have 'significant safety concerns' about Ford fuel leak recall and demand answers about the fix
Federal investigators say they have “significant safety concerns” about a Ford SUV recall repair that doesn’t fix gasoline leaks that can cause engine fires.
abcnews.go.com
Couple goes viral on TikTok for planting their own wedding flowers, expert offers tips for DIY approach
A Tennessee couple saved thousands by deciding to grow their own wedding flowers. The 2023 bride spoke with Fox News Digital about her and her husband's experience.
foxnews.com
Feds Have ‘Significant Safety Concerns’ About Ford Fuel Leak Recall. What to Know
Federal investigators say they have concerns about a Ford SUV recall repair that doesn't fix gasoline leaks that can cause engine fires.
time.com
Jerry Seinfeld begs Howard Stern to forgive him after ‘insulting’ his ‘comedy chops’
Jerry Seinfeld apologized for saying Howard Stern lacks "comedy chops" and has been "outflanked."
nypost.com
Exclusive: Mom speaks out after Air Force rescued her son at sea from a cruise ship
Angela Bridges spoke exclusively to ABC News about how her 12-year-old son was rescued at sea by the Air Force.
abcnews.go.com
Australia and Tuvalu's new security deal clarifies 'veto power' over defense agreements with other countries
Australia and the Pacific island nation of Tuvalu came to a new security agreement that eased the latter's concerns over the deal's effects on its sovereignty.
foxnews.com
Bravo fans mistake Kelly Osbourne for Kim Zolciak after body-contouring treatment, hair transformation
Social media users made the comparison in the comments section of Osbourne’s latest Instagram post, repeatedly tagging the “Real Housewives of Atlanta” alum.
nypost.com
What does Los Angeles owe the people who lost their homes in Chavez Ravine? More than an apology
A community was uprooted for the land that became home to Dodger Stadium. The city of Los Angeles should make amends to the displaced families of Chavez Ravine.
latimes.com