The SF AppWorks Blog

Designing responsible AI frameworks with Leslie Witt

Written by Andrew Greenstein | May 4, 2023

The mental health crisis in the US is alarming – one in five adults experiences mental illness. Yet, due to a lack of affordable, high-quality resources, only half of those who need care actually receive it.

Fortunately, technology is increasingly being used to make mental health care and wellness more accessible and inclusive. But as someone who’s gone both the in-person and digital routes for my own mental health and wellness care, I wonder: Can technology really be as effective as “traditional” mental health care? And how exactly do you design and develop a digital product that makes people feel better?

On the latest episode of The Next Great Thing podcast, I talked with Leslie Witt, Chief Product and Design Officer at Headspace Health, about the challenges of designing and building digital products for mental health and wellness. And, like most technology conversations these days, we touched on the potential of generative AI in this space – and where exactly large language models and ChatGPT fit into the future of mental health technology.

Check out the episode on our website, Apple Podcasts, Spotify, or anywhere you listen to podcasts.

Check out all podcast episodes.

One great thing I learned: While generative AI has the potential to revolutionize mental health care (and beyond), product teams first need to create guardrails that allow for safe experimentation and protect against catastrophes big and small.

Every day I read about another tech leader urging us to be more responsible with generative AI: we have to “be careful,” or – hold on a minute! – we need a six-month pause. Of course, plenty of organizations and cross-industry consortia are working to make this technology more responsible. That’s a good thing. But many core attributes of tech’s ethos and operating model still escape oversight. Look at the negative impact social media has had over the past decade. If digital products aren’t designed responsibly from the get-go, it may be too late by the time we get serious.

That’s why so much of the responsibility falls to product and design leaders: they have to create guardrails and principles while still encouraging innovation and experimentation with generative AI. That’s especially critical when building digital products and experiences for improving mental health and wellness.

As both a product and design leader, Leslie is creating a culture that’s both innovative and responsible, built on “systems of control in tandem with new ideas.” First, her team adheres to a framework covering seven core principles – including explainability, bias, oversight, and autonomy – that guides decisions about experimenting and innovating with AI. One of her favorite core principles is beneficence: does the product actually make the user feel better, rather than simply increase engagement?

“A benefit to the user is not engagement,” Leslie explains. “Engagement can be a mechanism to, say, form a healthy habit, but you need to ensure that it actually did that. Part of that measurement framework that we put in place sits there as a kind of auditory balance in order to be able to say, ‘Hey, I had you do this. Did it help you?’”
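
To make that concrete, here’s a minimal sketch of what a beneficence check like this could look like in code. It’s purely hypothetical – the data model, the follow-up question, and the threshold are my own assumptions for illustration, not anything from Headspace Health’s actual systems – but it captures the distinction Leslie is drawing: engagement is just an input, while “did it help you?” is the signal that matters.

```python
from dataclasses import dataclass

@dataclass
class NudgeOutcome:
    """One AI-driven nudge and the user's self-reported result.

    Hypothetical data model: 'helped' records the user's answer to
    the audit question, "I had you do this. Did it help you?"
    """
    nudge_id: str
    engaged: bool               # did the user act on the nudge?
    helped: bool | None = None  # None until the user answers the follow-up

def beneficence_score(outcomes: list[NudgeOutcome]) -> float | None:
    """Share of engaged, answered nudges that the user said helped.

    Engagement alone never raises this score – only "it helped" does.
    Returns None when there isn't enough follow-up data to judge.
    """
    answered = [o for o in outcomes if o.engaged and o.helped is not None]
    if not answered:
        return None
    return sum(o.helped for o in answered) / len(answered)

MIN_BENEFICENCE = 0.6  # assumed threshold for illustration, not a clinical standard

def may_continue_experiment(outcomes: list[NudgeOutcome]) -> bool:
    """Gate further experimentation on reported benefit, not engagement."""
    score = beneficence_score(outcomes)
    # No verdict yet means "keep measuring," not "stop."
    return score is None or score >= MIN_BENEFICENCE
```

The specifics don’t matter; the point is that the audit question becomes a first-class metric, kept separate from clicks and streaks.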

In tandem with hackathons, her team also holds “safety salons” to innovate and experiment safely within pre-established guardrails for responsible AI. The salons bring together clinical experts to discuss data science, self-policing, principles-based frameworks, and other topics. She also brings in other experts – like Headspace Health’s Chief Diversity Officer, who has a clinical background – to help develop oversight and systems of control around bias and inclusivity.

“It's very easy to either be super utopian or to demonize [generative AI],” Leslie explains. “As you get into concrete use cases, I think it is simpler to say, ‘Okay, that's something where I can say, go experiment with impunity. Here's something where I can say, proceed with caution.’ We need to be very intentional about how it is that we govern and oversee, even early experimentation… As you look at specific use cases executed with intention, it gets less scary.”
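
In the same hypothetical spirit, here’s a quick sketch of how a team might encode that “experiment with impunity” versus “proceed with caution” triage. The use-case categories and tiers are invented for illustration – they’re not Headspace Health’s actual policy – and the conservative default does the real work:

```python
from enum import Enum

class RiskTier(Enum):
    EXPERIMENT_FREELY = "go experiment with impunity"
    PROCEED_WITH_CAUTION = "proceed with caution"
    CLINICAL_REVIEW_REQUIRED = "requires clinical oversight before any testing"

# Hypothetical triage table – illustrative categories, not actual policy.
USE_CASE_TIERS = {
    "internal_brainstorming": RiskTier.EXPERIMENT_FREELY,
    "marketing_copy_drafts": RiskTier.PROCEED_WITH_CAUTION,
    "clinician_reviewed_content": RiskTier.PROCEED_WITH_CAUTION,
    "direct_user_facing_guidance": RiskTier.CLINICAL_REVIEW_REQUIRED,
}

def triage(use_case: str) -> RiskTier:
    """Anything unclassified defaults to the most conservative tier."""
    return USE_CASE_TIERS.get(use_case, RiskTier.CLINICAL_REVIEW_REQUIRED)
```

Spelling it out like this is what lets a team say “go wild here, slow down there” instead of treating generative AI as all-or-nothing.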

At SF AppWorks, we recently held our own internal hackathon focused on generative AI (you can watch the video here!). It was amazing how much creativity it inspired. The winning team built a tool that helps children recognize and understand emotions using AI-generated animal characters. Another team created storybooks that put children at the center of the story, based on elements from their own lives. It was exciting to see these teams explore the potential of generative AI to help children in meaningful ways. But as exciting as generative AI is, every dev and design team needs to establish rules of the road and principles to design and build by – for experimenting, for innovating, and, ultimately, for deploying.

How are your product and design teams building a responsible AI framework that encourages safe experimentation and innovation for specific, user-centric use cases?