The first Amazon Web Services (AWS) Machine Learning Summit on June 2 will bring together customers, developers, and the science community to learn about advances in the practice of machine learning (ML). The event, which is free to attend, will feature four audience-focused tracks, including Science of Machine Learning.
The science track is focused on data scientists and advanced practitioners, and will highlight the work AWS and Amazon scientists are doing to advance machine learning. The track comprises six 30-minute sessions and a 45-minute fireside chat.
In the coming weeks, Amazon Science will feature interviews with speakers from the Science of Machine Learning track. For the third edition of the series, we spoke to Philip Resnik, a professor at the University of Maryland in the Department of Linguistics and at the Institute for Advanced Computer Studies. Over the past three decades, Resnik’s work has focused on advancing the state of the art in natural language processing (NLP) by finding the right balance between data-driven computational modeling and expert domain knowledge.
Resnik received an Amazon Machine Learning Research Award (MLRA) in 2018 and 2019. Along with his colleagues, he is applying machine learning techniques to social media data in an attempt to make predictions about important aspects of mental health, with a focus on the problem of suicide risk.
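To give a concrete, if deliberately simplified, sense of what applying machine learning to social-media language can look like, here is a minimal sketch of a text-classification baseline in Python with scikit-learn. The placeholder posts, labels, and model choice are illustrative assumptions for this article, not a description of Resnik's actual methods or data.

```python
# Minimal, illustrative sketch only -- not Resnik's pipeline or data.
# It assumes a hypothetical, de-identified set of posts labeled for
# elevated risk, and trains a simple bag-of-words baseline to show the
# general shape of the task: fit a text classifier, then score new posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data (placeholder text, not real posts).
posts = [
    "example post expressing hopelessness and withdrawal",
    "example post about an everyday weekend activity",
    "example post describing isolation and distress",
    "example post sharing good news with friends",
]
labels = [1, 0, 1, 0]  # 1 = flag for clinician review, 0 = no flag

# TF-IDF features plus logistic regression: a common, interpretable baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score an unseen post. In any real system this score would be one input
# to human clinical judgment, never an automated decision on its own.
print(model.predict_proba(["example post hinting at distress"])[0][1])
```

Real work in this space relies on far richer models, carefully governed data, and the privacy safeguards Resnik describes below; the toy example is only meant to convey the basic pattern of learning predictive signal from everyday language.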
Q. What is the subject of your upcoming talk at the ML Summit?
I’ll be talking about using NLP and machine learning to tackle high-impact problems in mental health, particularly related to suicide. Many of us have had contact with someone suffering from mental health issues. However, we often lack an understanding of the true scope and scale of the problem.
The global cost of mental health problems is right up there with cardiovascular disease, and the annual economic burden is more than the combined cost of cancer, diabetes, and respiratory diseases. Even before the pandemic, suicide was already a global tragedy in its own right. In the wake of COVID-19, there’s a worsening “echo pandemic” as people struggle with isolation, stress, and sustained disruptions in their day-to-day lives.
Q. Why is this topic especially relevant within the science community today?
Developing a nuanced understanding of language and the signal it contains is critical to mental healthcare. There are no blood tests for mental health problems. Even though brain imaging technology is improving rapidly, we can’t simply peer inside people’s heads and see the markers of mental illness.
However, language can provide an essential window into a person’s well-being. Mental health providers assess a patient’s condition in clinical interviews, and psychotherapy itself is a language-based process. In the gaps between these clinical encounters, people’s everyday use of language offers crucial insight into their experiences, behavior, and mental state. To use a term coined by Glen Coppersmith, an increasing quantity of language in that “clinical whitespace” is now available online, and accessible in ways it hasn’t been before.
At the same time, human language is at the heart of the current machine learning revolution. Consider the most exciting developments in NLP and machine learning, such as advances in our ability to represent meaning, make plausible inferences, discover patterns, and predict behavior. These advances are driven by a trifecta: text available in enormous quantities, new computational ideas for exploiting that text, and the computing power to apply those ideas at scale.
Improving mental health with data, natural language processing, and machine learning is a relevant and intriguing topic for the scientific community. It’s both an exciting problem space and an underappreciated opportunity for technological research to translate into social good.
Q. What are some of the challenges in using NLP and machine learning to tackle high-impact problems in mental health?
One key challenge involves doing this work in ways that respect privacy.
The most meaningful progress in machine learning takes place when a large number of research teams explore different approaches to solving a problem using a common problem definition and a shared dataset. However, mental health data — and data in healthcare more generally — is very sensitive, which makes that kind of community-level focus hard to accomplish.
To tackle this problem, I have adopted the idea of the data enclave: instead of sending datasets out to researchers, you bring the researchers to the data. The work takes place entirely within a secure infrastructure, and nothing leaves the platform without careful review.
I’ve been working with collaborators at NORC at the University of Chicago to develop the UMD/NORC Mental Health Data Enclave, a secure environment where researchers can deploy the full arsenal of NLP and machine learning techniques to make progress on problems involving sensitive mental health data.
Another challenge is that thinking about technology alone is not enough. Too often, as scientists and technologists, we’re not spending enough time asking the question, “Then what?” Even if the technology works wonderfully, how would we integrate it into the mental healthcare ecosystem in a way that is appropriately respectful of ethical issues, the practical considerations for providers, and the needs of patients?
We can’t just work on the technical aspects of the problem and expect other people to figure out the right way to use it. “Then what?” needs to be guiding our thinking from the very beginning. For this to happen, effective collaboration between technologists and mental health experts is a must.
You can learn about Resnik's research here, and watch his talk at the virtual AWS Machine Learning Summit on June 2 by registering at the link below.