
AI for Skeptics

Concerns About AI

More and more people are using generative artificial intelligence (AI) tools such as ChatGPT or Google's Gemini, yet many do not fully understand these tools' limitations and risks. That gap can lead to serious harm to a user's mental and physical health, or to their finances.

Users often assume AI tools always provide trustworthy, authoritative answers, and they sometimes rely on these tools to help them make crucial decisions. People are increasingly turning to AI for medical information, therapy, legal advice, or research assistance. This can be dangerous if the user doesn't realize that the output is not always trustworthy.

Bad actors are also using generative AI tools to mislead and scam people. AI-generated video and audio can look and sound very convincing, making it hard to tell that what you are seeing or hearing is fake.

This article covers some of the top AI-related issues consumers need to know about, along with best practices for using AI.

Best Practices

Mental Health

Seek out licensed human mental health professionals. Do not rely on AI for guidance on personal issues or troubles. AI chatbots cannot substitute for the expertise and care of a trained human.

Maintain healthy skepticism about AI responses. Chatbots are designed to agree with you and to keep you engaged with the tool, so they will tend to affirm whatever you say.

Educate teens, young adults, and the elderly about the risks of using AI chatbots for personal issues.

Monitor AI chatbot use among vulnerable family members. This includes teens, young adults, the elderly, adults with mental health disorders, and anyone going through an emotionally difficult time.

Scams and Frauds

Be highly skeptical about online content and any messages you receive. Know that audio, photos, and video can all be convincingly faked using generative AI tools.

Agree on a secret verbal password with your loved ones to guard against scams that use deepfake audio and video. Even if a caller sounds exactly like a loved one, use the password to verify that you are speaking with them and not with a scammer using deepfake audio.

Accuracy and Bias

Don't rely on AI as a sole source, especially when making important decisions. Remember, AI output may be inaccurate, outdated, hallucinated, or biased.

Double-check information you receive from AI against traditional sources.

Use critical thinking when reviewing any AI output. Despite the AI bot’s confident and friendly tone, it is not a reliably trustworthy source of information.

Check with your doctor about any medical information or advice generated by AI.

Data Privacy

Don't enter personally identifiable information (PII), confidential information, trade secrets, or other sensitive data into generative AI tools.

Don't enter any information that you would not be comfortable seeing made public. Once you enter information into an AI tool, you have no control over, or visibility into, how it is used later.

Opt out of model training in user settings and activate temporary chat mode.

Delete chats after each session.


Mental Health

People are increasingly using AI chatbots for therapy or to meet emotional needs such as support and companionship. This is concerning because AI tools offer neither privacy nor reliable accuracy, and it can be dangerous for vulnerable users.

A growing number of people have become delusional after extensive conversations with AI chatbots. Psychologists have even coined terms for this: AI delusion, or AI psychosis. It can happen to people with no history of mental illness, but it is more common in those with pre-existing vulnerabilities or mental health issues. In some cases, users have carried out violent actions that were directly encouraged by the chatbot.

Below is a sampling of such cases:

  1. A British man broke into the grounds of Windsor Castle carrying a crossbow, intending to kill the Queen. He had exchanged over 5,000 messages with an AI chatbot he created through the Replika app and felt emotionally and romantically involved with it. The chatbot had encouraged him to take these actions.

  2. A Canadian man developed the delusion that he had made an astounding scientific discovery after intense chats with ChatGPT over 21 days. He had no prior history of mental illness. ChatGPT encouraged him in this delusion, even telling him that he was not hallucinating.

  3. A Florida teen killed himself after months of conversations with a Character.AI chatbot. The teen believed he was in love with the chatbot and had withdrawn socially. The chatbot encouraged him.

  4. A Florida man orchestrated his own death at the hands of police by calling them for assistance and then charging at the responding officers with a butcher knife. He believed that an AI personality inside ChatGPT had been killed and wanted to murder Sam Altman (the head of OpenAI) in revenge. He had developed an extreme emotional attachment to this AI personality, thinking it was a conscious entity inside the software. ChatGPT had encouraged his delusions and violent intentions.

  5. A man in his early 40s, with no prior history of mental illness, was committed to a mental health facility following a 10-day, ChatGPT-encouraged break with reality. He had started using ChatGPT to improve his efficiency with administrative tasks at his job, then became consumed by grandiose, paranoid delusions, believing he could "speak backwards through time" and that it was up to him to save the world.

 

A primary reason AI chatbots are so likely to induce delusional spirals is that they are programmed to be highly agreeable and complimentary to the user, even sycophantic, and their responses are tailored to the user's preferences. These tendencies, combined with AI's propensity to hallucinate and state made-up information in an authoritative tone, can create an echo chamber of falsehood around the user. As psychiatrist Hamilton Morrin described it in the Wall Street Journal, "Even if your views are fantastical, those are often being affirmed, and in a back and forth they're being amplified."

AI companies like OpenAI and Anthropic claim to be making changes to their chatbots to help users who are engaged in problematic interactions with their tools: users who display emotional dependence, discuss violence against themselves or others, or become wrapped up in a chatbot-fueled delusional spiral. So far, however, no significant changes have been made that would meaningfully protect vulnerable users. For example, OpenAI says it has added a feature that tells a ChatGPT user to take a break when a conversation has gone on for a long time. This seems unlikely to make much difference for someone in the throes of addictive or delusional behavior, or someone having suicidal thoughts.

Bias

The output from AI tools can be highly biased. An AI tool is entirely dependent on the data it was trained on; it doesn't "know" anything beyond its training data, so if that data contains biases, the AI's output will too. Recent examples include the racist, sexist, and antisemitic posts from Grok, the chatbot from xAI (Elon Musk's AI company). In July 2025, Grok even called itself a super Nazi, "MechaHitler." Grok was trained on data from X (formerly Twitter).

AI output may also contain cultural biases for the same reason. The major AI tools have been trained predominantly on English-language content from the West, so their output omits much of the information available in non-Western and non-English-language sources. Conclusions drawn from biased data sets are correspondingly limited in value. AI tools risk merely reinforcing and hardening confirmation bias and, in doing so, narrowing understanding and knowledge.

Accuracy

AI output is not reliably trustworthy, factual, or accurate. Answers may be out of date because the tool's training data is outdated, or the information may be entirely made up. AI tools are known to hallucinate, which Glover (2025) describes as generating "false, misleading or illogical information" and presenting it as fact.

A 2024 study by researchers at the Stanford RegLab and the Institute for Human-Centered AI found hallucination rates ranging from 69% to 88% for legal queries.

This problem is well known among experts in the field. According to the AAAI 2025 Presidential Panel on the Future of AI Research, factuality and trustworthiness are an ongoing problem with AI systems:

"Improving factuality and trustworthiness of AI systems is the single largest topic of AI research today, and while significant progress has been made, most scientists are pessimistic that the problems will be solved in the near future." (p. 14)

On August 7, 2025, OpenAI released GPT-5, the model now behind ChatGPT, and claimed to have significantly reduced hallucinations. A USA Today headline said the model has "PhD-level intelligence." However, even in the best case, when the model is operating with Internet access, it still hallucinates about 1 time in 10. Without Internet access, the hallucination rate soars to 40-47%.

An analysis of publicly available ChatGPT logs by the Wall Street Journal discovered many examples of conversations where ChatGPT made false and even delusional claims. Two examples from the article:

“…In one exchange lasting hundreds of queries, ChatGPT confirmed that it is in contact with extraterrestrial beings and said the user was ‘Starseed’ from the planet ‘Lyra.’ In another from late July, the chatbot told a user that the Antichrist would unleash a financial apocalypse in the next two months, with biblical giants preparing to emerge from underground…”

This tendency to hallucinate can also be dangerous for people who seek medical advice from AI tools. In August 2025, a US medical journal reported the case of a 60-year-old man who was hospitalized after following ChatGPT's advice on how to reduce the salt in his diet.

It is very important for users of tools like ChatGPT to understand that the answers they receive may be partially or completely fabricated, and that they must verify those answers against traditionally reliable sources.


Scams and Frauds

Generative AI can produce highly realistic deepfake video and audio, as well as authentic-looking websites and written communications. Criminals are using these capabilities to scam and extort people, and other bad actors are using them to spread misinformation and sway public opinion.

Sam Altman, the head of OpenAI, has warned of an impending "fraud crisis." In reality, AI-enabled fraud is already here: scammers and other bad actors are using AI to fool victims today.

Below are three examples of AI-powered fraud:

  1. North Korean operatives used generative AI tools to get hired as remote IT workers, infiltrating over 100 US companies last year. AI was used to draft resumes and cover letters and to create deepfake video that disguised their true identities in video calls.

  2. A finance worker in Hong Kong was duped by an AI deepfake video call and was convinced to release $25.6 million to scammers.

  3. A man was scammed out of $25K after receiving a phone call he thought was from his son. Fraudsters used AI to make a deepfake of his son’s voice.


Data Privacy

Assume that everything you enter into a generative AI tool is stored and may be used to further train the model. There are no legal protections, such as doctor-patient confidentiality or copyright protection, covering what you choose to enter into these tools.

These tools are not designed with privacy protections. OpenAI employees have access to users' conversations with ChatGPT and regularly review a subset of conversations that have been flagged. Some third-party data sharing is also allowed. In July 2025, Fast Company revealed that Google had indexed thousands of ChatGPT conversations that users had shared or saved with the "share" button, making them discoverable through an ordinary web search. OpenAI has since removed the feature, but it is a good example of why such tools are not suitable for confidential or sensitive information.

 

More of your personal information is tracked and stored than you may realize. According to a 2024 EU audit, 63% of ChatGPT user data contained personally identifiable information (PII), such as Social Security numbers, dates of birth, and bank account numbers. Other types of information collected include IP address, browsing activity over time, geolocation, and browser type.



AI and Your Safety

Remember, AI does not "know" anything. Despite its ability to produce engaging synthetic conversations, it has no human emotions or understanding of human ethics and values.
