ChatGPT can talk, but OpenAI employees sure can’t 
Sam Altman (left), CEO of artificial intelligence company OpenAI, and the company’s co-founder and then-chief scientist Ilya Sutskever speak together at Tel Aviv University in Tel Aviv on June 5, 2023. | Jack Guez/AFP via Getty Images

Why is OpenAI’s superintelligence team imploding?

On Monday, OpenAI announced exciting new product news: ChatGPT can now talk like a human. It has a cheery, slightly ingratiating feminine voice that sounds impressively non-robotic, and a bit familiar if you’ve seen a certain 2013 Spike Jonze film. “Her,” tweeted OpenAI CEO Sam Altman, referencing the movie in which a man falls in love with an AI assistant voiced by Scarlett Johansson.

But the product release of GPT-4o was quickly overshadowed by much bigger news out of OpenAI: the resignation of the company’s co-founder and chief scientist, Ilya Sutskever, who also led its superalignment team, as well as that of his co-team leader Jan Leike (whom we put on the Future Perfect 50 list last year).

The resignations didn’t come as a total surprise. Sutskever had been involved in the boardroom revolt that led to Altman’s temporary firing last year, before the CEO quickly returned to his perch. Sutskever publicly regretted his actions and backed Altman’s return, but he’s been mostly absent from the company since, even as other members of OpenAI’s policy, alignment, and safety teams have departed.

But what really stirred speculation was the radio silence from former employees. Sutskever posted a pretty typical resignation message, saying “I’m confident that OpenAI will build AGI that is both safe and beneficial…I am excited for what comes next.” Leike ... didn’t. His resignation message was simply: “I resigned.” After several days of fervent speculation, he expanded on this on Friday morning, explaining that he was worried OpenAI had shifted away from a safety-focused culture.

Questions arose immediately: Were they forced out?
Is this delayed fallout of Altman’s brief firing last fall? Are they resigning in protest of some secret and dangerous new OpenAI project? Speculation filled the void because no one who had once worked at OpenAI was talking.

It turns out there’s a very clear reason for that. I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.

If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI “due to losing confidence that it would behave responsibly around the time of AGI,” has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.

While nondisclosure agreements aren’t unusual in highly competitive Silicon Valley, putting an employee’s already-vested equity at risk for declining or violating one is. For workers at startups like OpenAI, equity is a vital form of compensation, one that can dwarf the salary they make. Threatening that potentially life-changing money is a very effective way to keep former employees quiet. (OpenAI did not respond to a request for comment.)

All of this is highly ironic for a company that initially advertised itself as OpenAI — that is, as committed in its mission statements to building powerful systems in a transparent and accountable manner. OpenAI long ago abandoned the idea of open-sourcing its models, citing safety concerns.
But now it has shed the most senior and respected members of its safety team, which should inspire some skepticism about whether safety is really the reason OpenAI has become so closed.

The tech company to end all tech companies

OpenAI has long occupied an unusual position in tech and policy circles. Its releases, from DALL-E to ChatGPT, are often very cool, but by themselves they would hardly attract the near-religious fervor with which the company is often discussed.

What sets OpenAI apart is the ambition of its mission: “to ensure that artificial general intelligence — AI systems that are generally smarter than humans — benefits all of humanity.” Many of its employees believe that this aim is within reach; that with perhaps one more decade (or even less) — and a few trillion dollars — the company will succeed at developing AI systems that make most human labor obsolete.

Which, as the company itself has long said, is as risky as it is exciting. “Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems,” a recruitment page for Leike and Sutskever’s team at OpenAI states. “But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction. While superintelligence seems far off now, we believe it could arrive this decade.”

Naturally, if artificial superintelligence in our lifetimes is possible (and experts are divided), it would have enormous implications for humanity. OpenAI has historically positioned itself as a responsible actor trying to transcend mere commercial incentives and bring AGI about for the benefit of all. And it has said it is willing to do that even if that requires slowing down development, missing out on profit opportunities, or allowing external oversight.
“We don’t think that AGI should be just a Silicon Valley thing,” OpenAI co-founder Greg Brockman told me in 2019, in the much calmer pre-ChatGPT days. “We’re talking about world-altering technology. And so how do you get the right representation and governance in there? This is actually a really important focus for us and something we really want broad input on.”

OpenAI’s unique corporate structure — a capped-profit company ultimately controlled by a nonprofit — was supposed to increase accountability. “No one person should be trusted here. I don’t have super-voting shares. I don’t want them,” Altman assured Bloomberg’s Emily Chang in 2023. “The board can fire me. I think that’s important.”

(As the board found out last November, it could fire Altman, but it couldn’t make the move stick. After his firing, Altman made a deal to effectively take the company to Microsoft, before being ultimately reinstated with most of the board resigning.)

But there was no stronger sign of OpenAI’s commitment to its mission than the prominent roles of people like Sutskever and Leike, technologists with a long history of commitment to safety and an apparently genuine willingness to ask OpenAI to change course if needed. When I said to Brockman in that 2019 interview, “You guys are saying, ‘We’re going to build a general artificial intelligence,’” Sutskever cut in. “We’re going to do everything that can be done in that direction while also making sure that we do it in a way that’s safe,” he told me.

Their departure doesn’t herald a change in OpenAI’s mission of building artificial general intelligence — that remains the goal. But it almost certainly heralds a change in OpenAI’s interest in safety work; the company hasn’t announced who, if anyone, will lead the superalignment team. And it makes it clear that OpenAI’s concern with external oversight and transparency couldn’t have run all that deep.
If you want external oversight and opportunities for the rest of the world to play a role in what you’re doing, making former employees sign extremely restrictive NDAs doesn’t exactly follow.

Changing the world behind closed doors

This contradiction is at the heart of what makes OpenAI profoundly frustrating for those of us who care deeply about ensuring that AI really does go well and benefits humanity. Is OpenAI a buzzy, if midsize, tech company that makes a chatty personal assistant, or a trillion-dollar effort to create an AI god?

The company’s leadership says they want to transform the world, that they want to be accountable when they do so, and that they welcome the world’s input into how to do it justly and wisely. But when there’s real money at stake — and there are astounding sums of real money at stake in the race to dominate AI — it becomes clear that they probably never intended for the world to get all that much input.

Their process ensures former employees — those who know the most about what’s happening inside OpenAI — can’t tell the rest of the world what’s going on. The website may have high-minded ideals, but their termination agreements are full of hard-nosed legalese. It’s hard to exercise accountability over a company whose former employees are restricted to saying “I resigned.”

ChatGPT’s new cute voice may be charming, but I’m not feeling especially enamored.

A version of this story originally appeared in the Future Perfect newsletter.
vox.com
Joy Behar warns Jennifer Lopez to keep her ‘mouth shut’ about Ben Affleck as divorce rumors loom
Behar's "View" co-host Sunny Hostin said the pair still "love each other," saying, "Stop hating on true love because you don't have it in your life."
nypost.com
TikTok says it's testing letting users post 60-minute videos
TikTok is allowing select users to upload longer-form videos as the social media app looks to compete with YouTube.
cbsnews.com
‘Oversight After Dark’: Lawmakers Hurl Insults at Session
In an after-hours session on Capitol Hill, insults from right-wing Republican Marjorie Taylor Greene led to a raucous exchange with Democrats, featuring jabs about personal appearance, intellect and more.
nytimes.com
'I blew it big time.' Former Facebook DEI head gets 5 years in prison for stealing millions
Barbara Furlow-Smiles pleaded guilty in December to stealing more than $5 million from her jobs at Facebook and Nike from 2017 to 2023. Her lawyer had asked the court to impose no time behind bars.
latimes.com
Melania Trump styles Gucci with Dior for son Barron’s graduation
The former first lady looked chic in a blazer and skirt as she watched her only child graduate from the exclusive Oxbridge Academy in West Palm Beach.
nypost.com
‘The View’s Ana Navarro Compares Jennifer Lopez To Elizabeth Taylor Amid Divorce Rumors: “She’s Addicted To Marriage”
"She’s wonderful for the marriage industry."
nypost.com
Alito says wife displayed upside-down flag after argument with insulting neighbor
Supreme Court Justice Samuel Alito tells Fox News that his wife flew an upside-down American flag outside their home in 2021 following insults from neighbors.
foxnews.com
James Brown opens up on ‘sadness’ of Boomer Esiason and Phil Simms’ exiting ‘NFL Today’
With the NFL Today on CBS crew broken up, studio host James Brown has expressed disappointment in the exiling of colleagues Boomer Esiason and Phil Simms.
nypost.com
Boeing whistleblower John Barnett's cause of death revealed as coroner releases official findings
The cause and manner of death for Boeing whistleblower John Barnett have been revealed, weeks after he was found dead amid a lawsuit against the company.
foxnews.com
Mike Tyson-Jake Paul press conference takes sexual turn: ‘I had an erection’
Things got weird at a Mike Tyson-Jake Paul pre-fight press conference on Thursday. 
nypost.com
Taylor Swift and Travis Kelce pose for loved-up pics, kiss on romantic boat ride in Italy
The pair, who started dating last summer, looked every bit in love as they posed for a few loved-up snaps before enjoying their final dinner together in Italy.
nypost.com
Israeli military finds bodies of 3 hostages in Gaza, including Shani Louk, killed at music festival
The Israeli military says it found the bodies of three hostages in Gaza, including German-Israeli Shani Louk, who was killed at a music festival.
latimes.com
‘Race To Survive: New Zealand’: Watch The First Three Minutes Of The Pulse Pounding Premiere [EXCLUSIVE]
"Welcome to the adventure of a lifetime."
nypost.com
Man with Rolex mugged, beaten unconscious moments before catching elevator, suspect in LAPD custody
Los Angeles police arrested 25-year-old Pablo Garcia for a robbery that took place in downtown Los Angeles on April 28, 2024, in which the victim was brutally beaten.
foxnews.com
Billie Eilish packs a punch, again, on ‘Hit Me Hard and Soft’: review
Billie Eilish's third album “Hit Me Hard and Soft” shows that the singer is just way ahead of her peers even when she misses.
nypost.com
Harrison Butker’s sexist, anti-LGBTQ commencement speech condemned by Benedictine College’s nuns
"One of our concerns was the assertion that being a homemaker is the highest calling for a woman," the Benedictine Sisters of Mount St. Scholastica said.
nypost.com
Dallas suburb residents crush developer’s dreams of turning a historic farm into a strip mall
In Plano, community opposition to a zoning proposal for turning a historic farm into a mixed-use development made the developer withdraw his plans.
nypost.com
Sunny Hostin Clashes With Alyssa Farah Griffin Over Congress Chaos On ‘The View’: “You Gotta Go Low, Alyssa”
"Going high doesn't work anymore."
nypost.com
Dems call Marjorie Taylor Greene racist, suggest she was drunk after wild House Oversight meeting
A pair of House Democrats accused far-right Rep. Marjorie Taylor Greene Friday of making racist comments during a wild House Oversight Committee meeting the previous evening — even suggesting that alcohol may have been behind the "Jerry Springer"- style verbal throwdown.
nypost.com
Fans freak out over Travis Kelce’s Cupid shirt during Taylor Swift date night in Lake Como, Italy
Swifties took to social media to share their thoughts on the romantic print.
nypost.com
Francis Ford Coppola’s New Movie Took 41 Years to Make. It Might Take as Long to Understand.
Megalopolis will leave you speechless. That may not be a good thing.
slate.com
The Sad Desk Salad Is Getting Sadder
Every day, the blogger Alex Lyons orders the same salad from the same New York City bodega and eats it in the same place: her desk. She eats it while working so that she can publish a story before “prime time”—the midday lunch window when her audience of office workers scrolls mindlessly on their computers while gobbling down their own salad. Lyons is the protagonist of Sad Desk Salad, the 2012 novel by Jessica Grose that gave a name to not just a type of meal but a common experience: attempting to simultaneously maximize both health and productivity because—and this is the sad part—there’s never enough time to devote to either.

The sad desk salad has become synonymous with people like Lyons: young, overworked white-collar professionals contemplating how salad can help them self-optimize. Chains such as Sweetgreen and Chopt have thrived in big coastal cities, slinging “guacamole greens” and “spicy Sonoma Caesars” in to-go bowls that can be picked up between meetings. The prices can creep toward $20, reinforcing their fancy reputation.

But fast salad has gone mainstream. Sweetgreen and similar salad chains have expanded out of city centers into the suburbs, where they are reaching a whole new population of hungry workers. Other salad joints are selling salad faster than ever—in some cases, at fast-food prices. Along the way, the sad desk salad has become even sadder.

Anything can make for a sad desk lunch, but there’s something unique about salads. Don’t get me wrong: They can be delicious. I have spent embarrassing amounts of money on sad desk salads, including one I picked at while writing this article. Yet unlike, say, a burrito or sushi, which at least feel like little indulgences, the main reason to eat a salad is because it’s nutritious. It’s fuel—not fun. Even when there isn’t time for a lunch break, there is always time for arugula.

[Read: Don’t believe the salad millionaire]

During the early pandemic, the sad desk salad seemed doomed.
Workers sitting at a desk at home rather than in the office could fish out greens from the refrigerator crisper drawer instead of paying $16. Even if they wanted to, most of the locations were in downtown cores, not residential neighborhoods.

But the sad desk salad has not just returned—it’s thriving. Take Sweetgreen, maybe the most well-known purveyor. It bet that Americans would still want its salads no matter where they are working, and so far, that has paid off. The company has been expanding to the suburbs since at least 2020 and has been spreading ever since. In 2023, it opened stores in Milwaukee, Tampa, and Rhode Island; last week, when Sweetgreen reported that its revenue jumped 26 percent over the previous year, executives attributed that growth to expansion into smaller cities. Most of its locations are in the suburbs, and most of its future stores would be too.

Sweetgreen is not the only company to have made that gamble. Chopt previously announced that it would open 80 percent of its new stores in the suburbs; the Minnesota-based brand Crisp & Green is eyeing the fringes of midwestern cities. Salad has become so entrenched as a lunch option that even traditional fast-food giants such as Wendy’s and Dairy Queen have introduced salad bowls in recent years. Maybe the most novel of all is Salad and Go, an entirely drive-through chain that sells salads for less than $7. It opened a new store roughly every week last year, and now has more than 100 locations across Arizona, Nevada, Oklahoma, and Texas, with plans to expand to Southern California and the Southeast. Its CEO, Charlie Morrison, has positioned it as a cheap and convenient alternative to unhealthy options: a rival not to Sweetgreen, but to McDonald’s.

Indeed, sad desk salads can be made with shocking speed. According to Morrison, you can drive off with your salad in less than four minutes. Other chains including Just Salad and Chopt are opening up drive-through lanes to boost convenience.
Sweetgreen, which has also dabbled with the drive-through, has installed salad-assembling robots in several locations, which can reportedly make 500 salads an hour.

[Read: Your fast food is already automated]

Greater accessibility to salad, in general, is a good thing. America could stand to eat a lot more of it. No doubt some salads will be consumed outside of work: on a park bench with friends, perhaps, or on a blanket at the beach—a girl can dream! But surely many of them will be packed, ordered, and picked up with frightening speed, only to maximize the time spent working in the glow of a computer screen, the crunching of lettuce punctuated by the chirping of notifications.

As I lunched on kale and brussels sprouts while writing this story, my silent hope was that they might offset all the bad that I was doing to my body by sitting at my desk for almost eight hours straight. Dining while distracted makes overeating more likely; sitting for long stretches raises the risk of diabetes and heart disease. People who take proper lunch breaks, in contrast, have improved mental health, less burnout, and more energy. No kind of cheap, fast salad can make up for working so fervidly that taking a few minutes off to enjoy a salad is not possible or even desirable.

Earlier this month, Sweetgreen introduced a new menu item you can add to its bowls: steak. The company’s CEO said that, during testing, it was a “dinnertime favorite.” That the sad desk salad could soon creep into other mealtimes may be the saddest thing yet.
theatlantic.com
Bodies of three hostages recovered by Israeli forces in Gaza
The Israel Defense Forces recovered the bodies of Shani Louk, Amit Bouskila and Itshak Gelernter in Gaza, Rear Adm. Daniel Hagari said.
cbsnews.com
Draya Michele, 39, gives birth to third baby, her first with NBA star Jalen Green, 22
The 39-year-old "Basketball Wives" alum revealed in a March Instagram upload that she and the Houston Rockets player, 22, had a little one on the way.
nypost.com
Citi Bike rider who viciously attacked Orthodox Jewish boys is shown in new NYPD video
The victims, 11 and 13 – dressed traditionally as they played with several others on Franklin Avenue near Myrtle Avenue in Bedford-Stuyvesant – were both targeted Sunday night when the menace spotted them as he pedaled down the street.
nypost.com
“I lost trust”: Why the OpenAI team in charge of safeguarding humanity imploded
Sam Altman is the CEO of ChatGPT maker OpenAI, which has been losing its most safety-focused researchers. | Joel Saget/AFP via Getty Images

Company insiders explain why safety-conscious employees are leaving.

For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.

Ilya Sutskever and Jan Leike announced their departures from OpenAI, the maker of ChatGPT, on Tuesday. They were the leaders of the company’s superalignment team — the team tasked with ensuring that AI stays aligned with the goals of its makers, rather than acting unpredictably and harming humanity.

They’re not the only ones who’ve left. Since last November — when OpenAI’s board tried to fire CEO Sam Altman only to see him quickly claw his way back to power — at least five more of the company’s most safety-conscious employees have either quit or been pushed out.

What’s going on here? If you’ve been following the saga on social media, you might think OpenAI secretly made a huge technological breakthrough. The meme “What did Ilya see?” speculates that Sutskever, the former chief scientist, left because he saw something horrifying, like an AI system that could destroy humanity.

But the real answer may have less to do with pessimism about technology and more to do with pessimism about humans — and one human in particular: Altman. According to sources familiar with the company, safety-minded employees have lost faith in him. “It’s a process of trust collapsing bit by bit, like dominoes falling one by one,” a person with inside knowledge of the company told me, speaking on condition of anonymity.

Not many employees are willing to speak about this publicly. That’s partly because OpenAI is known for getting its workers to sign offboarding agreements with non-disparagement provisions upon leaving. If you refuse to sign one, you give up your equity in the company, which means you potentially lose out on millions of dollars.
One former employee, however, refused to sign the offboarding agreement so that he would be free to criticize the company. Daniel Kokotajlo, who joined OpenAI in 2022 with hopes of steering it toward safe deployment of AI, worked on the governance team — until he quit last month.

“OpenAI is training ever-more-powerful AI systems with the goal of eventually surpassing human intelligence across the board. This could be the best thing that has ever happened to humanity, but it could also be the worst if we don’t proceed with care,” Kokotajlo told me this week.

OpenAI says it wants to build artificial general intelligence (AGI), a hypothetical system that can perform at human or superhuman levels across many domains.

“I joined with substantial hope that OpenAI would rise to the occasion and behave more responsibly as they got closer to achieving AGI. It slowly became clear to many of us that this would not happen,” Kokotajlo told me. “I gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit.”

And Leike, explaining in a thread on X why he quit as co-leader of the superalignment team, painted a very similar picture Friday. “I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” he wrote.

OpenAI did not respond to a request for comment in time for publication.

Why OpenAI’s safety team grew to distrust Sam Altman

To get a handle on what happened, we need to rewind to last November. That’s when Sutskever, working together with the OpenAI board, tried to fire Altman. The board said Altman was “not consistently candid in his communications.” Translation: We don’t trust him.

The ouster failed spectacularly. Altman and his ally, company president Greg Brockman, threatened to take OpenAI’s top talent to Microsoft — effectively destroying OpenAI — unless Altman was reinstated. Faced with that threat, the board gave in.
Altman came back more powerful than ever, with new, more supportive board members and a freer hand to run the company. When you shoot at the king and miss, things tend to get awkward.

Publicly, Sutskever and Altman gave the appearance of a continuing friendship. And when Sutskever announced his departure this week, he said he was heading off to pursue “a project that is very personally meaningful to me.” Altman posted on X two minutes later, saying that “this is very sad to me; Ilya is … a dear friend.”

Yet Sutskever has not been seen at the OpenAI office in about six months — ever since the attempted coup. He has been remotely co-leading the superalignment team, tasked with making sure a future AGI would be aligned with the goals of humanity rather than going rogue. It’s a nice enough ambition, but one that’s divorced from the daily operations of the company, which has been racing to commercialize products under Altman’s leadership. And then there was a since-deleted tweet, posted shortly after Altman’s reinstatement.

So, despite the public-facing camaraderie, there’s reason to be skeptical that Sutskever and Altman were friends after the former attempted to oust the latter.

And Altman’s reaction to being fired had revealed something about his character: His threat to hollow out OpenAI unless the board rehired him, and his insistence on stacking the board with new members skewed in his favor, showed a determination to hold onto power and avoid future checks on it. Former colleagues and employees came forward to describe him as a manipulator who speaks out of both sides of his mouth — someone who claims, for instance, that he wants to prioritize safety, but contradicts that in his behaviors.

For example, Altman was fundraising with autocratic regimes like Saudi Arabia so he could spin up a new AI chip-making company, which would give him a huge supply of the coveted resources needed to build cutting-edge AI. That was alarming to safety-minded employees.
If Altman truly cared about building and deploying AI in the safest way possible, why did he seem to be in a mad dash to accumulate as many chips as possible, which would only accelerate the technology? For that matter, why was he taking the safety risk of working with regimes that might use AI to supercharge digital surveillance or human rights abuses?

For employees, all this led to a gradual “loss of belief that when OpenAI says it’s going to do something or says that it values something, that that is actually true,” a source with inside knowledge of the company told me.

That gradual process crescendoed this week. The superalignment team’s co-leader, Jan Leike, did not bother to play nice. “I resigned,” he posted on X, mere hours after Sutskever announced his departure. No warm goodbyes. No vote of confidence in the company’s leadership.

Other safety-minded former employees quote-tweeted Leike’s blunt resignation, appending heart emojis. One of them was Leopold Aschenbrenner, a Sutskever ally and superalignment team member who was fired from OpenAI last month. Media reports noted that he and Pavel Izmailov, another researcher on the same team, were allegedly fired for leaking information. But OpenAI has offered no evidence of a leak. And given the strict confidentiality agreement everyone signs when they first join OpenAI, it would be easy for Altman — a deeply networked Silicon Valley veteran who is an expert at working the press — to portray sharing even the most innocuous of information as “leaking,” if he was keen to get rid of Sutskever’s allies.

The same month that Aschenbrenner and Izmailov were forced out, another safety researcher, Cullen O’Keefe, also departed the company. And two weeks ago, yet another safety researcher, William Saunders, wrote a cryptic post on the EA Forum, an online gathering place for members of the effective altruism movement, who have been heavily involved in the cause of AI safety.
Saunders summarized the work he’s done at OpenAI as part of the superalignment team. Then he wrote: “I resigned from OpenAI on February 15, 2024.” A commenter asked the obvious question: Why was Saunders posting this? “No comment,” Saunders replied. Commenters concluded that he is probably bound by a non-disparagement agreement.

Putting all of this together with my conversations with company insiders, what we get is a picture of at least seven people who tried to push OpenAI to greater safety from within, but ultimately lost so much faith in its charismatic leader that their position became untenable.

“I think a lot of people in the company who take safety and social impact seriously think of it as an open question: is working for a company like OpenAI a good thing to do?” said the person with inside knowledge of the company. “And the answer is only ‘yes’ to the extent that OpenAI is really going to be thoughtful and responsible about what it’s doing.”

With the safety team gutted, who will make sure OpenAI’s work is safe?

With Leike no longer there to run the superalignment team, OpenAI has replaced him with company co-founder John Schulman. But the team has been hollowed out. And Schulman already has his hands full with his preexisting full-time job ensuring the safety of OpenAI’s current products. How much serious, forward-looking safety work can we hope for at OpenAI going forward? Probably not much.

“The whole point of setting up the superalignment team was that there’s actually different kinds of safety issues that arise if the company is successful in building AGI,” the person with inside knowledge told me. “So, this was a dedicated investment in that future.”

Even when the team was functioning at full capacity, that “dedicated investment” was home to a tiny fraction of OpenAI’s researchers and was promised only 20 percent of its computing power — perhaps the most important resource at an AI company.
Now, that computing power may be siphoned off to other OpenAI teams, and it’s unclear if there’ll be much focus on avoiding catastrophic risk from future AI models.

To be clear, this does not mean the products OpenAI is releasing now — like the new version of ChatGPT, dubbed GPT-4o, which can have a natural-sounding dialogue with users — are going to destroy humanity. But what’s coming down the pike?

“It’s important to distinguish between ‘Are they currently building and deploying AI systems that are unsafe?’ versus ‘Are they on track to build and deploy AGI or superintelligence safely?’” the source with inside knowledge said. “I think the answer to the second question is no.”

Leike expressed that same concern in his Friday thread on X. He noted that his team had been struggling to get enough computing power to do its work and generally “sailing against the wind.”

Most strikingly, Leike said, “I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics. These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.”

When one of the world’s leading minds in AI safety says the world’s leading AI company isn’t on the right trajectory, we all have reason to be concerned.
vox.com