New mental health AI is a violation of privacy

Detecting depressive language is great, but at what cost?

Recently, a U of A computing scientist developed a program that’s more effective at detecting depressive language on social media than previous approaches.

This is, in isolation, a good thing. The idea is that this kind of technology can look at the social media profiles of vulnerable people and catch warning signs of depression and suicidal behaviour. That’s great! A lot of people suffer from mental health issues, and it’s not something we pay enough attention to. 

Many people point to social media as part of the problem, so I’m glad there are people looking for ways technology can help. But this program is built on a system that has serious issues. Although the program might be a good idea on its own, I can’t give it my full support. In a better world, one with stronger digital privacy, it would be impossible to implement.

The problem with this program is that it relies on the continued violation of our right to digital privacy.

These days, the mass surveillance and automated analysis of our online interactions are ubiquitous, but that doesn’t make them okay. It’s important that our right to privacy is protected, and right now, well… it’s not. Data is constantly being collected on our habits, our beliefs, what we like, who we talk to, and so much more. It’s bad enough that it’s collected at all, but it’s worse that it goes into the hands of people we have no control over, and who have no motivation to use that data ethically.

This kind of mass data collection is necessary for an AI like this one to exist, and its existence lends legitimacy to a system that surveils us in unethical ways. Corporate data hoarding is still common practice, even after the massive Cambridge Analytica scandal of 2018 (Cambridge Analytica itself didn’t even go away; it re-formed as a new company under a different name doing the same thing). When it comes time to truly challenge massive tech companies on this, projects like this one are what they’ll point to in order to justify it. They’ll say we need these systems in place to keep you safe, or to keep you healthy. While they can sometimes help do that, our privacy is more important.

The longer this kind of surveillance goes on, the more normalized it will become, and the harder it will be to keep our lives private from private interests. Projects like this one make it seem as though these practices are okay.

I’m not saying we shouldn’t implement these depression-detecting systems. Some sites already have them in place, and since the data is being collected anyway, we might as well use it for something positive. But even as we do, we all need to remember that the way the internet works right now isn’t normal, nor is it okay, and someday we’ll need to get rid of everything that relies on it — even the things that are helping people. This depression-detecting AI could do a lot of good, but not enough to justify the system it’s built on.
