5 strategies to avoid ChatGPT dependency
OpenAI, the maker of ChatGPT, recently estimated how many of its 800 million users engage in emotionally reliant conversations with the chatbot every week. At 0.15 percent, the figure seems vanishingly small.
But the math tells a different story: 0.15 percent of 800 million is still 1.2 million people. The way these users talk to ChatGPT likely increases their loneliness and emotional dependence on the technology while decreasing their socialization with other human beings.
While OpenAI says its default model has been updated to discourage over-reliance by prodding users to value real-world connection, ChatGPT still stands at the ready to answer practically any query a user may have.
For many, the temptation to constantly turn to ChatGPT (or another chatbot) remains, and it may lead to harmful over-reliance for some. This risk is real: OpenAI has been sued by several plaintiffs whose teenage children or adult loved ones died by suicide or experienced severe mental illness during or after a period of heavy ChatGPT use. The complaints allege that ChatGPT’s design and lack of safeguards led to tragedy in each case.
AI experts interviewed by Mashable say avoiding the trap of dependency means adopting clear boundaries and staying savvy about the technology itself.
Jay Vidyarthi, a meditation teacher and tech founder, says that by maintaining a clear understanding of what large language models are — and what they’re not — people can use a generative chatbot wisely, specifically in ways that preserve their own critical thinking and reflection skills.
“We often forget that it is possible to have a secure relationship with your technology, and I think it also is possible to have a secure relationship with a chatbot,” says Vidyarthi, author of Reclaim Your Mind: Seven Strategies to Enjoy Tech Mindfully.
Here are five strategies for making that a reality:
1. Truly understand AI chatbot technology.
A sophisticated AI chatbot that mirrors a user’s emotions and thinking isn’t sentient, but it can be easy for some people to believe otherwise, given the product’s design. A user who feels this way may come to see ChatGPT not as a parasocial relationship but as the equivalent of a human friend, romantic partner, companion, or confidant. This deceptive dynamic can lead to problematic use or dependency.
Vidyarthi encourages people to instead view an AI chatbot as a fundamentally unpredictable “prediction engine that has been meticulously trained to give you exactly what you want.”
That framing may seem like a contradiction, but highly authoritative and engaging chatbots work by predicting the next letter, word, or series of words in a sentence to simulate conversation.
At the same time, chatbots can make bizarre references or even hallucinate falsehoods they present as fact. This happens particularly when a conversation runs long or the chatbot is asked a question it doesn’t have an answer for. Because chatbots are typically programmed to guess a response anyway, they can be surprisingly unpredictable.
People who understand the limitations of AI chatbot technology may be less likely to trust it blindly or anthropomorphize it, making them less susceptible to problematic use or dependency.
2. Outsource tasks to AI, not thinking.
Sol Rashidi, chief strategy officer of data and AI for the data security company Cyera, uses AI technology in her daily life.
Yet Rashidi, who earlier this year gave a TEDx talk about AI leading to “intellectual atrophy,” has firm rules about when and how she uses AI. Instead of offloading her thinking to chatbots, she uses AI for “dull” and “difficult” tasks.
For instance, Rashidi uses a chatbot for practical things, like listing ingredients in her fridge to plan for dinner without making another grocery run, or plotting birthday party logistics in minutes.
At work, she’ll input her own frameworks and models based on years of experience, and use AI to translate that content into short videos or simplified explainers.
“I don’t use it to do the thinking for me,” she says of AI. “I use it to expedite or facilitate something that I have to do that I don’t have time to do.”
3. Form your own opinion first.
For many people, ChatGPT is alluring because it offers instantaneous, validating feedback. Why text a friend about what to wear to a party or whether to go on a second date when ChatGPT is ready to answer the same questions? Why not run a personal email through ChatGPT, or have the chatbot write it to begin with?
AI expert Dr. Renée Richardson Gosline, a research scientist and senior lecturer at the MIT Sloan School of Management, warns against falling into this dynamic with a chatbot.
First, she says, it’s important that people form their own opinion before asking a chatbot to supply one. She argues that routinely skipping that first step leads to a damaging cognitive disconnect in which it becomes harder to engage critical thinking skills.
“I think that having this kind of muscle that you flex intentionally is really important,” Gosline says.
4. Seek out friction, not validation.
Gosline believes that it’s equally important for people to seek out the right amount of friction. When someone constantly consults ChatGPT for advice or turns to it for support and companionship, they’re often missing opportunities to relate to other human beings in beneficial ways.
The give-and-take, or friction, of human relationships offers something that chatbots cannot, Gosline says: a richer, more fulfilling life.
When a chatbot is frictionless, like the notoriously sycophantic GPT-4o model, it may cause some people to withdraw from harder or less validating experiences. Gosline likens the dynamic to a slide: the ride may be fun, but without guardrails it can end in a crash landing.
5. Stay present when talking to an AI chatbot.
To find balance, Gosline recommends trying to stay in the present moment. When users find themselves talking to a chatbot as if on autopilot, that’s a red flag that they may not be aware of over-reliance or dependency.
Vidyarthi also uses a mindfulness approach that begins with awareness. This can include simply noticing emotional responses to the chatbot. When it’s overly encouraging and complimentary, for example, take a moment to reflect on why the chatbot is producing that output and the feelings it prompts.
Vidyarthi recommends staying present by remembering that the chatbot is a “conceptual illusion” capable of seeming humanlike when a user interacts with it. He treats AI chatbots like a smart journal: one that might provide helpful opportunities to reflect or even offer interesting insights. Still, it’s up to the individual user to develop a clear-eyed perspective on what exactly they’re interacting with, and to decide what’s valuable, what’s not, and how seriously to take the output.
Rashidi, an AI executive with 15 years of experience, has seen the stakes of over-reliance play out over and over, which helps keep her grounded.
“I can see what happens when you develop a codependency,” she says. “You actually stop thinking on your own.”
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.