A chatbot that’s cautious about GPT, AI pronouncements, and a VC looks to Israel

You’re reading the web edition of STAT Health Tech, our guide to how tech is transforming the life sciences. Sign up to get this newsletter delivered to your inbox every Tuesday and Thursday.

The mental health chatbot that doesn’t want GPT (yet)


Wysa launched its AI-powered mental health chatbot long before ChatGPT fueled enthusiasm for technologies that seem to think and talk like humans. But while other companies are racing to incorporate generative AI into health care, Wysa is taking a much more cautious approach to the technology, the company’s co-founder and president, Ramakant Vempati, told me.

Wysa’s interactive bot uses techniques from cognitive behavioral therapy to help people manage anxiety, stress, and other common issues. But under the hood it doesn’t share ChatGPT’s DNA: The bot uses natural language processing to interpret input from users, but it always delivers one of its pre-written and vetted responses. No generative responses means no potentially unsafe content.
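
For a sense of what that design looks like in practice, here is a minimal, hypothetical sketch in Python (not Wysa’s actual code; the intents, canned responses, and keyword matching below are illustrative stand-ins). The point is structural: user input is only classified, and every reply is drawn from a fixed library of vetted responses rather than generated on the fly.

    # Hypothetical illustration of a retrieval-style chatbot: replies are looked up
    # from a vetted library, never generated from the user's text.
    VETTED_RESPONSES = {
        "anxiety": "It sounds like you're feeling anxious. Want to try a short grounding exercise?",
        "stress": "Stress can pile up quickly. Would a two-minute breathing exercise help right now?",
        "fallback": "I'm not sure I followed. Could you tell me a bit more about how you're feeling?",
    }

    def classify_intent(message: str) -> str:
        # Stand-in for an NLP intent classifier; simple keyword matching for illustration only.
        text = message.lower()
        if "anxious" in text or "anxiety" in text:
            return "anxiety"
        if "stress" in text or "overwhelmed" in text:
            return "stress"
        return "fallback"

    def reply(message: str) -> str:
        # The user's words select a response but never appear in one,
        # so nothing unvetted can reach the user.
        return VETTED_RESPONSES[classify_intent(message)]

    print(reply("I've been feeling really anxious lately"))

The trade-off Vempati describes below falls out of this structure: the fixed response library is what keeps the bot safe, and it is also what can make its conversation feel dry and repetitive.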


It’s a formula that’s been working so far for Wysa, which announced a Series B funding round last year and says 6 million people have tried its app. The app is free for consumers, with paid content options, and is also used by the U.K.’s National Health Service and by U.S. employer groups and insurers.

Vempati said the company has fielded a lot of questions about ChatGPT and is in active conversations with a handful of customers about possible use cases. But as it outlined in a recent guide to generative AI, the company isn’t comfortable releasing updates it isn’t completely sure will perform safely and reliably. Still, with proper guardrails and testing, Vempati said he believes there’s an opportunity to use generative AI to help translate the company’s scripts into other languages or to make the bot’s conversation less dry and repetitive. He’s clear, however, that the company hasn’t embarked on any such updates yet.

Vempati said that the hype around ChatGPT has created an openness to chat as a delivery mechanism for mental health care, but has also raised the bar for quality.

“Expectations have increased in terms of what the service should and can do, which is I think probably a call to action for us saying it needs to start actually delivering a very human like conversation — sometimes Wysa does not,” he said. “So how do you balance safety as well as the demand of the client?”

AI pronouncements galore

Speaking of AI hype, the current buzz has generated the need, it seems, for storied institutions to take public positions or otherwise organize around the idea of doing AI safely and ethically. This week alone we have seen:

  • Stanford Medicine announced the launch of Responsible AI for Safe and Equitable Health, or RAISE-Health, which will be co-led by the school’s dean Lloyd Minor and computer science professor Fei-Fei Li. According to the release, the effort will “establish a go-to platform for responsible AI in health and medicine; define a structured framework for ethical standards and safeguards; and regularly convene a diverse group of multidisciplinary innovators, experts and decision makers.”
  • At its annual meeting, American Medical Association leaders called for “greater regulatory oversight of insurers’ use of AI in reviewing patient claims and prior authorization requests,” citing a ProPublica investigation that revealed Cigna was using technology to enable doctors to reject huge numbers of claims without reading patient files. And earlier this year, a STAT investigation found that Medicare Advantage plans use AI to cut off care for seniors.
  • Nature Medicine, the Lancet, PNAS, and other publishers are working together to develop standards for the “ethical use and disclosure of ChatGPT in scientific research.” In an email, a representative said there are concerns generative AI use might lead to plagiarism and derivative work, but that an outright ban on the technology could be short-sighted.

General Catalyst’s health partnerships expand into Israel

Venture giant General Catalyst, the backer behind companies like Warby Parker and Airbnb, is growing its slate of partner health systems that pilot and use technology developed by its portfolio companies. Sheba Medical Center is the first Israeli partner to join the 15 health systems GC already works with, including HCA, Jefferson, and Intermountain.

They’re all part of what GC calls its “health assurance ecosystem,” which it plans to grow further by adding payers and potentially pharma companies, GC’s Daryl Tol, who heads that division, told STAT’s Mohana Ravindranath. Formal partnerships with these outside groups help GC bridge the gap between the conservative, regulated pace of traditional health care and the venture and startup world, which is “ad hoc, fast moving, not always nearly as systematic,” he said.

The goal is not only to potentially embed U.S. technology at Sheba, but also to tap into products emerging from Israeli startups. “The more we create a global capability, a global economy that can smooth over [cultural and regulatory] differences the more successful those startup companies can be,” he said.

Proposal to keep better track of medical devices fails

A panel of experts that advises the federal government voted not to recommend a series of updates to Medicare claims forms, including a proposal that would have added medical device identifiers to the paper trail. These unique ID numbers are attached to all medical devices, but are rarely added to health records, making it harder to recall faulty products.

As STAT’s Lizzy Lawrence writes, Medicare claims forms have not been updated since 2009, and the National Committee on Vital and Health Statistics voted not to push forward with revisions now, owing to technical hurdles. The Centers for Medicare and Medicaid Services has been complaining about the difficulty of adding the identifiers since at least 2015.

“It’s a setback in patient safety and surveillance,” said Sanket Dhruva, a device safety expert and cardiologist at the University of California, San Francisco. “It will leave us with an insufficient regulatory system for identifying unsafe devices and performing comparative evaluations.”

Read more here.