
Your phone can tell when you’re depressed

AI-powered apps may be able to use your data (including selfies) to predict your current mental state. | Jaap Arriens/NurPhoto via Getty Images

Emerging apps use AI to guess when you’ll be sad. Can they also help you feel better?

If you have a sore throat, you can get tested for a host of things — Covid, RSV, strep, the flu — and receive a pretty accurate diagnosis (and maybe even treatment). Even when you’re not sick, vital signs like heart rate and blood pressure give doctors a decent sense of your physical health.

But there’s no agreed-upon vital sign for mental health. There may be occasional mental health screenings at the doctor’s office, or notes left behind after a visit with a therapist. Unfortunately, people lie to their therapists all the time (one study estimated that over 90 percent of us have lied to a therapist at least once), leaving holes in their already limited mental health records. And that’s assuming someone can connect with a therapist — roughly 122 million Americans live in areas without enough mental health professionals to go around.

But the vast majority of people in the US do have access to a cellphone. Over the last several years, academic researchers and startups have built AI-powered apps that use phones, smart watches, and social media to spot warning signs of depression. By collecting massive amounts of information, AI models can learn to spot subtle changes in a person’s body and behavior that may indicate mental health problems. Many digital mental health apps only exist in the research world (for now), but some are available to download — and other forms of passive data collection are already being deployed by social media platforms and health care providers to flag potential crises (it’s probably somewhere in the terms of service you didn’t read).

The hope is for these platforms to help people affordably access mental health care when they need it most, and intervene quickly in times of crisis. Michael Aratow — co-founder and chief medical officer of Ellipsis Health, a company that uses AI to predict mental health from human voice samples — argues that the need for digital mental health solutions is so great, it can no longer be addressed by the health care system alone. “There’s no way that we’re going to deal with our mental health issues without technology,” he said.

And those issues are significant: Rates of mental illness have skyrocketed over the past several years. Roughly 29 percent of US adults have been diagnosed with depression at some point in their lives, and the National Institute of Mental Health estimates that nearly a third of US adults will experience an anxiety disorder at some point.

While phones are often framed as a cause of mental health problems, they can also be part of the solution — but only if we create tech that works reliably and mitigates the risk of unintended harm. Tech companies can misuse highly sensitive data gathered from people at their most vulnerable moments — with little regulation to stop them. Digital mental health app developers still have a lot of work to do to earn the trust of their users, but the stakes around the US mental health crisis are high enough that we shouldn’t automatically dismiss AI-powered solutions out of fear.

How does AI detect depression?

To be formally diagnosed with depression, someone needs to experience at least five symptoms (like feeling sad, losing interest in things, or being unusually exhausted) for at least two consecutive weeks.

But Nicholas Jacobson, an assistant professor in biomedical data science and psychiatry at the Geisel School of Medicine at Dartmouth College, believes “the way that we think about depression is wrong, as a field.” By only looking for stably presenting symptoms, doctors can miss the daily ebbs and flows that people with depression experience. “These depression symptoms change really fast,” Jacobson said, “and our traditional treatments are usually very, very slow.”

Even the most devoted therapy-goers typically see a therapist about once a week (and with sessions starting around $100, often not covered by insurance, once a week is already cost-prohibitive for many people). One 2022 study found that only 18.5 percent of psychiatrists sampled were accepting new patients, leading to average wait times of over two months for in-person appointments. But your smartphone (or your fitness tracker) can log your steps, heart rate, sleep patterns, and even your social media use, painting a far more comprehensive picture of your mental health than conversations with a therapist can alone.

One potential mental health solution: Collect data from your smartphone and wearables as you go about your day, and use that data to train AI models to predict when your mood is about to dip. In a study co-authored by Jacobson this February, researchers built a depression detection app called MoodCapture, which harnesses a user’s front-facing camera to automatically snap selfies while they answer questions about their mood, with participants pinged to complete the survey three times a day. An AI model correlated their responses — rating in-the-moment feelings like sadness and hopelessness — with these pictures, using their facial features and other context clues like lighting and background objects to predict early signs of depression. (One example: a participant who looks as if they’re in bed almost every time they complete the survey is more likely to be depressed.)
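
To make that concrete, here is a minimal, hypothetical sketch of the general approach: pair features extracted from each auto-captured selfie with the user’s in-the-moment survey rating, then fit a simple classifier. This is not MoodCapture’s actual code; the feature extractor, the synthetic data, and the model choice are all stand-ins for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def extract_features(image) -> np.ndarray:
    # Stand-in for a real vision pipeline that would encode facial expression,
    # head pose, lighting, and background context as a numeric feature vector.
    return rng.normal(size=32)

# Synthetic stand-in for consented study data: one selfie per survey, paired
# with whether the participant reported depression symptoms at that moment.
selfies = range(200)
reported_symptoms = rng.integers(0, 2, size=200)

X = np.stack([extract_features(img) for img in selfies])
y = reported_symptoms

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```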

The model doesn’t try to flag certain facial features as depressive. Rather, it looks for subtle changes within each user, like shifts in their facial expressions or how they tend to hold their phone. MoodCapture identified depression symptoms with about 75 percent accuracy (in other words, of 100 people who have depression, the model should correctly flag about 75) — the first time such candid images have been used to detect mental illness in this way.
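
One way to picture that within-person framing (an illustrative assumption on my part, not the study’s published method) is to score each new observation against that user’s own baseline, so the signal is “change for this person” rather than an absolute facial trait:

```python
import numpy as np

def personal_baseline_deltas(feature_history: np.ndarray) -> np.ndarray:
    # Z-score every check-in against this user's own history, so a flag means
    # "unusual for them," not "matches a population-wide template."
    mean = feature_history.mean(axis=0)
    std = feature_history.std(axis=0) + 1e-8  # avoid division by zero
    return (feature_history - mean) / std

rng = np.random.default_rng(1)
history = rng.normal(size=(30, 8))  # 30 check-ins, 8 hypothetical features
history[-1] += 3.0                  # simulate a sudden shift at the latest check-in

deltas = personal_baseline_deltas(history)
print("largest deviation at latest check-in:", round(float(np.abs(deltas[-1]).max()), 2))
```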

In this study, the researchers only recruited participants who were already diagnosed with depression, and each photo was tagged with the participant’s own rating of their depression symptoms. Eventually, the app aims to use photos captured when users unlock their phones using face recognition, adding up to hundreds of images per day. This data, combined with other passively gathered phone data like sleep hours, text messages, and social media posts, could evaluate the user’s unfiltered, unguarded feelings. You can tell your therapist whatever you want, but enough data could reveal the truth.

The app is still far from perfect. MoodCapture was more accurate at predicting depression in white people because most study participants were white women — generally, AI models are only as good as the training data they’re provided. Research apps like MoodCapture are required to get informed consent from all of their participants, and university studies are overseen by the campus’s Institutional Review Board (IRB). But if sensitive data is collected without a user’s consent, the constant monitoring can feel creepy or violating. Stevie Chancellor, an assistant professor in computer science and engineering at the University of Minnesota, says that with informed consent, tools like this can be “really good because they notice things that you may not notice yourself.”

What technology is already out there, and what’s on the way?

Of the roughly 10,000 (and counting) digital mental health apps recognized by the mHealth Index & Navigation Database (MIND), 18 passively collect user data. Unlike the research app MoodCapture, none use auto-captured selfies (or any type of data, for that matter) to predict whether the user is depressed. A handful of popular, highly rated apps like Bearable — made by and for people with chronic health conditions, from bipolar disorder to fibromyalgia — track customized collections of symptoms over time, in part by passively collecting data from wearables. “You can’t manage what you can’t measure,” Aratow said.

These tracker apps are more like journals than predictors, though — they don’t do anything with the information they collect, other than show it to the user to give them a better sense of how lifestyle factors (like what they eat, or how much they sleep) affect their symptoms. Some patients take screenshots of their app data to show their doctors so they can provide more informed advice. Other tools, like the Ellipsis Health voice sensor, aren’t downloadable apps at all. Rather, they operate behind the scenes as “clinical decision support tools,” designed to predict someone’s depression and anxiety levels from the sound of their voice during, say, a routine call with their health care provider. And massive tech companies like Meta use AI to flag, and sometimes delete, posts about self-harm and suicide.

Some researchers want to take passive data collection to more radical lengths. Georgios Christopoulos, a cognitive neuroscientist at Nanyang Technological University in Singapore, co-led a 2021 study that predicted depression risk from Fitbit data. In a press release, he expressed his vision for more ubiquitous data collection, where “such signals could be integrated with Smart Buildings or even Smart Cities initiatives: Imagine a hospital or a military unit that could use these signals to identify people at risk.” This raises an obvious question: In this imagined future world, what happens if the all-seeing algorithm deems you sad?
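
For a sense of what “Fitbit data” might look like before it reaches any risk model, here is a hypothetical sketch of turning raw minute-level wearable streams into daily features; the columns and aggregations are illustrative and not taken from the study’s pipeline.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
minutes = pd.date_range("2024-01-01", periods=7 * 24 * 60, freq="min")
raw = pd.DataFrame({
    "steps": rng.poisson(5, size=len(minutes)),          # simulated step counts
    "heart_rate": rng.normal(70, 8, size=len(minutes)),  # simulated heart rate
}, index=minutes)

# Aggregate to one row per day; a downstream model would consume these features.
daily = pd.DataFrame({
    "total_steps": raw["steps"].resample("D").sum(),
    "resting_hr": raw["heart_rate"].resample("D").quantile(0.05),
    "hr_variability": raw["heart_rate"].resample("D").std(),
})
print(daily.head())
```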

AI has improved so much in the last five years alone that it’s not a stretch to say that, in the next decade, mood-predicting apps will exist — and if preliminary tests continue to look promising, they might even work. Whether that comes as a relief or fills you with dread, as mood-predicting digital health tools begin to move out of academic research settings and into the app stores, developers and regulators need to seriously consider what they’ll do with the information they gather.

So, your phone thinks you’re depressed — now what?

It depends, said Chancellor. Interventions need to strike a careful balance: keeping the user safe, without “completely wiping out important parts of their life.” Banning someone from Instagram for posting about self-harm, for instance, could cut them off from valuable support networks, causing more harm than good. The best way for an app to provide support that a user actually wants, Chancellor said, is to ask them.

Munmun De Choudhury, an associate professor in the School of Interactive Computing at Georgia Tech, believes that any digital mental health platform can be ethical, “to the extent that people have an ability to consent to its use.” She emphasized, “If there is no consent from the person, it doesn’t matter what the intervention is — it’s probably going to be inappropriate.”

Academic researchers like Jacobson and Chancellor have to jump through a lot of regulatory hoops to test their digital mental health tools. But when it comes to tech companies, those barriers don’t really exist. Laws like the US Health Insurance Portability and Accountability Act (HIPAA) don’t clearly cover nonclinical data that can be used to infer something about someone’s health — like social media posts, patterns of phone usage, or selfies.

Even when a company says that it treats user data as protected health information (PHI), it’s not protected by federal law — data only qualifies as PHI if it comes from a “healthcare service event,” like medical records or a hospital bill. Text conversations via platforms like Woebot and BetterHelp may feel confidential, but crucial caveats about data privacy (while companies can opt into HIPAA compliance, user data isn’t legally classified as protected health information) often wind up where users are least likely to see them — like in lengthy terms of service agreements that practically no one reads. Woebot, for example, has a particularly reader-friendly terms of service agreement, but at a whopping 5,625 words, it’s still far more than most people are willing to engage with.

“There’s not a whole lot of regulation that would prevent folks from essentially embedding all of this within the terms of service agreement,” said Jacobson. De Choudhury laughed about it. “Honestly,” she told me, “I’ve studied these platforms for almost two decades now. I still don’t understand what those terms of service are saying.”

“We need to make sure that the terms of service, where we all click ‘I agree’, is actually in a form that a lay individual can understand,” De Choudhury said. Last month, Sachin Pendse, a graduate student in De Choudhury’s research group, co-authored guidance on how developers can create “consent-forward” apps that proactively earn the trust of their users. The idea is borrowed from the “Yes means yes” model for affirmative sexual consent, because FRIES applies here, too: a user’s consent to data usage should always be freely given, reversible, informed, enthusiastic, and specific.

But when algorithms (like humans) inevitably make mistakes, even the most consent-forward app could do something a user doesn’t want. The stakes can be high. In 2018, for example, a Meta algorithm used text data from Messenger and WhatsApp to detect messages expressing suicidal intent, triggering over a thousand “wellness checks,” or nonconsensual active rescues. Few specific details about how the algorithm works are publicly available. Meta has said that it uses pattern-recognition techniques based on many training examples, rather than simply flagging words relating to death or sadness — but not much else.
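
To illustrate the distinction Meta is drawing, here is a toy contrast (not Meta’s system, whose details aren’t public) between a fixed keyword list and a classifier trained on labeled examples, which can weigh surrounding context rather than individual words:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

KEYWORDS = {"hopeless", "goodbye"}  # a naive, fixed word list

def keyword_flag(message: str) -> bool:
    # Flags any message containing a listed word, regardless of context.
    return any(word in message.lower().split() for word in KEYWORDS)

# Tiny fabricated training set, purely for illustration (1 = concerning in context).
messages = [
    "I can't see a way forward anymore",
    "saying goodbye to my favorite show tonight",
    "I feel hopeless about everything lately",
    "that was a hopeless defense by the home team",
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(messages, labels)

test = "there is no way forward for me"
print("keyword flag:", keyword_flag(test))  # False: contains no listed word
print("classifier score:", round(float(clf.predict_proba([test])[0][1]), 3))
```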

These interventions often involve police officers (who carry weapons and don’t always receive crisis intervention training) and can make things worse for someone already in crisis (especially if they thought they were just chatting with a trusted friend, not a suicide hotline). “We will never be able to guarantee that things are always safe, but at minimum, we need to do the converse: make sure that they are not unsafe,” De Choudhury said.

Some large digital mental health groups have come under fire over their irresponsible handling of user data. In 2022, Crisis Text Line, one of the biggest mental health support lines (and often provided as a resource in articles like this one), was caught using data from people’s online text conversations to train customer service chatbots for its for-profit spinoff, Loris. And last year, the Federal Trade Commission ordered BetterHelp to pay a $7.8 million fine after the company was accused of sharing people’s personal health data with Facebook, Snapchat, Pinterest, and Criteo, an advertising company.

Chancellor said that while companies like BetterHelp may not be operating in bad faith — the medical system is slow, understaffed, and expensive, and in many ways, they’re trying to help people get past these barriers — they need to more clearly communicate their data privacy policies with customers. While startups can choose to sell people’s personal information to third parties, Chancellor said, “no therapist is ever going to put your data out there for advertisers.”

Someday, Chancellor hopes that mental health care will be structured more like cancer care is today, where people receive support from a team of specialists (not all doctors), including friends and family. She sees tech platforms as “an additional layer” of care — and at least for now, one of the only forms of care available to people in underserved communities.

Even if all the ethical and technical kinks get ironed out, and digital health platforms work exactly as intended, they’re still powered by machines. “Human connection will remain incredibly valuable and central to helping people overcome mental health struggles,” De Choudhury told me. “I don’t think it can ever be replaced.”

And when asked what the perfect mental health app would look like, she simply said, “I hope it doesn’t pretend to be a human.”

