What the Eating Disorder Chatbot Disaster Tells Us About AI

By Luke Gralia

Maybe we shouldn’t let bots influence our relationship to food. Or expect them to have any empathy.

I firmly believe that, just as every sci-fi film has predicted, AI is disastrous for humanity. Even if it doesn’t directly turn on us and start killing us like M3GAN, it certainly has the power to get us to turn on each other and harm ourselves, not unlike, say, social media. And it’s already coming for our jobs. First, AI came for McDonald’s drive-thru. Then it came for my job, with ChatGPT and Jasper promising to write provocative prose about the human condition despite having no experience with it whatsoever. Now, AI is trying to replace crisis counselors, but in a not-so-shocking turn of events, the bots appear to lack the empathy the job requires.

In March, the staffers of the National Eating Disorders Association (NEDA) crisis hotline voted to unionize, NPR reports. Days later, they were all fired and replaced with an AI chatbot named Tessa. While NEDA claimed the bot was programmed with a limited set of responses (and thus wouldn’t start, I don’t know, spewing racial slurs like many chatbots of the past), it is still not without its problems. Most importantly, it seems the tech doesn’t quite serve its intended purpose. Fat activist Sharon Maxwell revealed on Instagram that during her interactions with Tessa, the bot actually encouraged her to engage in disordered eating.
