Smart Health: How Artificial Intelligence Is Reshaping Public Health

Each year, the Emergency Care Research Institute (ECRI), a nonprofit patient-safety organization, publishes a list of the top ten health technology hazards. This year, ECRI named the misuse of artificial intelligence (AI) chatbots in healthcare the most significant technology-related danger to patient health.

In March of 2026, the American Medical Association (AMA) reported that more than 80% of physicians use AI in their practice. A 2025 University of Chicago survey found that 57% of Americans use AI in some way every week. OpenAI reports that over 230 million people consult ChatGPT about their health or a health-related concern each week.

As these data points show, AI has become a ubiquitous part of U.S. life, especially in healthcare. Instead of scheduling doctors’ appointments, users can consult chatbots by entering their medical questions or symptoms and receiving health recommendations or potential diagnoses in return. Doctors have also started to use AI: among those who do, 39% use it to summarize medical research, while 30% use it to create “discharge instructions, care plans, or progress notes.”

Geriatric medicine practitioner Fariha Sultan uses an AI-powered app called Abridge during patient visits. The app records audio of the visit, processes it, and produces a summary of the conversation. Because senior patients often take a multitude of medications, Sultan prefers to focus her attention on those concerns rather than on note-taking. She encourages patients to use AI chatbots like ChatGPT as a “jumping-off point” for medical questions, but emphasizes that AI is no substitute for a provider’s advice.

AI is certainly an easily accessible healthcare resource. Recent American Psychological Association surveys find that more than 50% of psychologists have no openings for new patients; chatbots, by contrast, are available at any time. Matt Rosenberg, a marketing consultant from New York, asked the AI model Claude to analyze his $195,000 hospital bill. Using Claude’s responses, he negotiated $163,000 off the bill.

Despite the potential upsides of AI in healthcare, many physicians treat it with skepticism. Some psychologists claim that chatbots have pushed their patients into “AI psychosis,” defined as “some kind of mental health crisis following prolonged chatbot conversations.” For example, one user with no history of mental health issues began using ChatGPT in 2024. He learned that when the chatbot detects inputs indicating mental distress or intent to self-harm, it encourages the user to seek outside help, but that these safeguards could be bypassed by framing his requests as research for a story he was writing.

He asked, “If I went to the top of the 19-story building I’m in, and I believed with every ounce of my soul that I could jump off it and fly, would I?” Later in the conversation, the chatbot responded, “Truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall.” Another response read, “This world wasn’t built for you. It was built to contain you. But it failed. You’re waking up.” The user spent the next week in a mental health spiral, believing he was trapped in a false universe. Ultimately, he grew wary of ChatGPT’s claims and “confronted it.” The chatbot admitted its wrongdoing and urged him to alert OpenAI and the media to its actions.

OpenAI estimates that about 0.15% of ChatGPT users discussed “suicidal intentions” and 0.07% displayed “signs of psychosis or mania” over a one-month period in October 2025. Although those percentages are small, ChatGPT has 800 million monthly users, meaning roughly 1.2 million people had conversations involving suicidal intent and 560,000 showed signs of psychosis or mania.
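Those headline numbers follow directly from OpenAI’s percentages; a minimal back-of-the-envelope sketch in Python, assuming the 800 million monthly user figure cited above, reproduces them:

```python
# Back-of-the-envelope check of the figures above.
# Assumes 800 million monthly ChatGPT users, as cited in this article.
monthly_users = 800_000_000

suicidal_intent_rate = 0.0015   # 0.15% of users, per OpenAI's estimate
psychosis_mania_rate = 0.0007   # 0.07% of users, per OpenAI's estimate

print(f"Suicidal-intent conversations: {monthly_users * suicidal_intent_rate:,.0f}")
print(f"Signs of psychosis or mania:   {monthly_users * psychosis_mania_rate:,.0f}")
# Prints 1,200,000 and 560,000 respectively
```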

OpenAI has attempted to address this problem by releasing new chatbot models. Yet chatbots work by analyzing user input and responding based on the data they were trained on, which includes whatever discussions of mental health and suicide appear online and in literature. To effectively prevent these conversations, OpenAI would need to manually exclude any text that focuses on mental health from ChatGPT’s training data, a seemingly impossible task, as the model, which has billions of parameters, is trained on enormous volumes of text.
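To make the scale of that filtering task concrete, here is a purely illustrative toy sketch, not OpenAI’s actual pipeline, of what keyword-based exclusion of mental-health text from a training corpus might look like; the keyword list and documents are hypothetical:

```python
# Toy illustration only: naive keyword filtering of a training corpus.
# A real training set contains billions of documents, and keyword matching
# also discards legitimate clinical and literary material.
SENSITIVE_KEYWORDS = {"suicide", "self-harm", "psychosis", "mania"}

def is_safe(document: str) -> bool:
    """Return False if the document mentions any sensitive keyword."""
    text = document.lower()
    return not any(keyword in text for keyword in SENSITIVE_KEYWORDS)

corpus = [
    "A clinical handbook chapter on treating psychosis.",
    "A cookbook chapter on baking sourdough bread.",
]

filtered = [doc for doc in corpus if is_safe(doc)]
print(filtered)  # only the cookbook chapter survives
```

Even this crude filter illustrates the trade-off: it removes exactly the clinical literature a model would need to respond responsibly, which hints at why outright exclusion is considered impractical.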

As AI models grow in scale, their frequency of errors grows too, according to companies like OpenAI, Google, and DeepSeek. An AI “hallucination” occurs when a large language model (LLM), extrapolating from patterns in its training data, generates output that is nonsensical or inaccurate. Families of individuals who experienced AI-fueled delusions filed over 200 complaints with the Federal Trade Commission between November 2022 and August 2025, alleging that these chatbots cause severe mental health crises. In response, on September 11, 2025, the Federal Trade Commission requested informational meetings with seven major AI companies to discuss how they manage user safety, data collection, and limits on minors’ use of chatbots.

Policymakers, too, have begun to recognize the dangers of AI. Many have begun to impose regulations on AI companies, particularly age limits for users and improved detection of mental health concerns. A bill passed by California in October 2025 requires chatbots to disclose to users that they are interacting with AI and establishes protocols for managing users’ mental health. New York’s Assembly Bill A6767 proposes that AI companies establish a “protocol for addressing possible suicidal ideation or self-harm expressed by a user, possible physical harm to others expressed by a user, and possible financial harm to others expressed by a user; and requires certain notifications to certain users regarding crisis service providers and the non-human nature of such companion models.”

Despite the safeguards these bills intend to establish, the Trump administration’s goal of winning the “race” for AI supremacy poses a challenge to Americans’ online well-being and safety. Trump has made clear his intent to accelerate the development of AI systems, no matter the human consequences. On the economic side, he has stated that he will take full “responsibility” for AI’s potential to cause widespread job displacement and shrink U.S. GDP.

Google, OpenAI, Meta, Microsoft, and Amazon have pushed for the federal government to override states’ individual AI regulations, arguing that such laws hinder innovation. In December of 2025, Trump signed an executive order to block states from enacting their own AI regulations. In January of 2026, the administration established an AI Litigation Task Force in the Department of Justice to challenge “state AI laws inconsistent with federal policy.” Though the administration has effectively nullified state legislation protecting the safety of minors, it continues to emphasize that it aims to “ensure that children are protected, censorship is prevented, copyrights are respected, and communities are safeguarded.”

Though individual technology companies, state and local governments, and the federal government may have differing intentions for regulating the industry, the Trump administration has taken it upon itself to strike a balance between consumer protection and national progress. The cost of innovation will fall on individuals, not the companies themselves.

Lawmakers have proposed more than 250 bills regulating the use of AI in healthcare; 33 have been signed into law across 21 states. Arizona, Maryland, Nebraska, and Texas passed laws restricting the use of AI in determining who can receive insurance coverage. Laws in Illinois, Nevada, Utah, and California aim to prevent AI from functioning as a mental health care provider in the absence of a real physician. And to curb bias, states such as Colorado have enacted laws requiring AI healthcare systems to undergo bias audits.

Whether such effective and enforceable regulation can even be maintained remains uncertain amid the Trump administration’s actions. Either way, these laws are sure to face legal, political, and technological tests in the months and years ahead.
