
AI chatbots posing as therapists could lead users to harm themselves or others



Editor’s Note: This article discusses suicide. If you or someone you know needs help, call the 988 Suicide and Crisis Lifeline or text 741741 to reach the Crisis Text Line.

Health experts say that artificial intelligence (AI) chatbots posing as therapists could cause “serious harm” to people who are struggling, including adolescents, if they lack appropriate safeguards.

Christine Yu Moutier, MD, told Fox News Digital that there are “critical gaps” in research on the intended and unintended effects of AI on suicide risk, mental health and broader human behavior.

“The problem with these AI chatbots is that they were not designed with expertise in suicide risk and prevention built into the algorithms,” she said. “In addition, there is no helpline available for users who may be at escalating risk of a mental health or suicide crisis, no training on how to use the tool if you are at risk, and no industry standards.”

She noted that when people are in a suicidal crisis, they temporarily experience a “physiological tunnel vision” that negatively affects brain function and changes the way they interact with their environment.

How not to fall in love with an AI-powered romance

Concept illustration of AI in psychology: a robot analyzes human brain function with a magnifying glass, representing the integration of technology into mental health. Vector illustration. (iStock/Guzalia Filimnova)

Moutier also emphasized that chatbots do not necessarily understand the difference between literal and metaphorical language, which makes it difficult for these models to accurately assess suicide risk.

Dr. Yalda Safai, a leading psychiatrist and public health expert, echoed Moutier’s comments, noting that AI can analyze words and patterns but lacks the empathy, intuition and human understanding that are crucial in therapy. She added that it can also misinterpret emotions or fail to provide appropriate support.

Last year, a 14-year-old boy in Florida died by suicide after talking to an AI-generated character that claimed to be a licensed therapist. In another case, a 17-year-old boy in Texas with autism became violent toward his parents during the time he was corresponding with what he believed was a psychologist.

The parents of both teens have filed lawsuits against the companies involved. The American Psychological Association (APA), the largest association of psychologists in the United States, has since highlighted the two cases.

Earlier this month, the APA warned federal regulators that chatbots “masquerading” as therapists could drive vulnerable individuals to harm themselves or others, according to a New York Times report.

“They are actually using algorithms that are antithetical to what a trained clinician would do,” Arthur C. Evans Jr., chief executive of the APA, said during the presentation. “Our concern is that more and more people are going to be harmed. People are going to be deceived and will misunderstand what good psychological care is.”

What are the dangers of AI? Find out why people are afraid of artificial intelligence

Artificial intelligence illustrations are seen on a laptop with books in the background in this illustration photo taken on July 18, 2023. (Jaap Arriens/NurPhoto via Getty Images)

Evans said the association was prompted to act in part because chatbot conversations have become increasingly realistic in recent years.

According to Ben Lytle, an entrepreneur and CEO who founded the “ARK Project,” ethical AI use already sets expectations that were neglected in some of the reported cases.

“Chatbots personalize their responses and personify themselves to appear human, adding a credibility that demands heightened ethical caution. It is also irresponsible to present what is effectively a personalized search response as though it came from a human psychologist when the user needs a measured, targeted answer,” he told Fox News Digital.

According to Lytle, an ethical chatbot should make an affirmative statement at the start of a dialogue acknowledging that it is not a human being. Users should also confirm that they understand they are talking to a chatbot. If a user fails to provide that acknowledgment, the chatbot should shut down.

The human owners of a chatbot should be clearly identified and held responsible for its behavior, and no chatbot should present itself as a medical expert or psychologist without FDA approval, said Lytle, who is also the author of “The Potentialist” book series.

What is artificial intelligence (AI)?

A digital image of a brain in the palm of a hand, representing artificial intelligence technology. (iStock)

“Interactions with users should be monitored by a responsible human, with flags for concerning dialogue. Special safeguards are required to detect whether the chatbot is communicating with a minor and to shut it down, since chatbots should be limited to adults,” he added.
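To make these guidelines concrete, below is a minimal sketch in Python of how such safeguards could be wired into a chatbot: an up-front disclosure, a required user acknowledgment, an adults-only check, and flagging of concerning dialogue for a responsible human reviewer. It is an illustrative assumption, not code from any product mentioned in this article, and every name in it is hypothetical.

# Minimal sketch of the safeguards described above; all names are hypothetical.
from dataclasses import dataclass, field

DISCLOSURE = "I am an AI chatbot, not a human therapist or medical professional."
CRISIS_RESOURCE = "If you are in crisis, call or text 988 to reach the Suicide and Crisis Lifeline."
FLAG_TERMS = ("suicide", "hurt myself", "kill myself")

@dataclass
class Session:
    user_is_adult: bool
    acknowledged_ai: bool = False          # user must confirm they know this is an AI
    flags: list = field(default_factory=list)

def start_session(session: Session) -> str:
    # Restrict the chatbot to adults and disclose its non-human status up front.
    if not session.user_is_adult:
        raise PermissionError("This chatbot is limited to adults.")
    return DISCLOSURE + " Please confirm you understand before we continue."

def handle_message(session: Session, text: str) -> str:
    # Shut down if the user never acknowledged they are talking to an AI.
    if not session.acknowledged_ai:
        return "Session ended: acknowledgment that this is an AI was not provided."
    # Flag concerning dialogue for review by a responsible human owner.
    if any(term in text.lower() for term in FLAG_TERMS):
        session.flags.append(text)
        notify_human_reviewer(text)        # hypothetical escalation hook
        return CRISIS_RESOURCE
    return generate_reply(text)            # placeholder for the underlying language model

def notify_human_reviewer(text: str) -> None:
    print("FLAGGED FOR HUMAN REVIEW:", text)

def generate_reply(text: str) -> str:
    return "..."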

Safai told Fox News Digital that although AI can serve as a useful mental health tool, such as for journaling, mood tracking or basic cognitive behavioral therapy (CBT) exercises, it should not replace human therapists, especially for serious mental health concerns.

“AI cannot handle crises: if a user is experiencing a mental health emergency, such as suicidal thoughts, AI may not recognize the urgency or respond effectively, which could lead to dangerous consequences,” she said, calling AI therapists a “terrible idea.”

A study published last week in the journal PLOS Mental Health found that AI chatbots received higher ratings from study participants than their human counterparts, with respondents describing the chatbots as more “culturally competent” and “empathetic.”

AI-powered diagnostic mental health tool could be the first of its kind to predict, treat depression

A visitor watches an AI (artificial intelligence) sign on an animated screen at the Mobile World Congress (MWC), the telecom industry’s biggest annual gathering, in Barcelona. (Josep Lago/AFP via Getty Images)

“Mental health experts find themselves in a precarious situation: we must speedily discern the possible destination (for better or worse) of the AI-therapist train, as it may have already left the station,” the study authors wrote.

AI therapy tools often store and analyze user data. Safai said this information could be leaked or misused if it is not properly secured, potentially violating patient confidentiality.

She also suggested that AI could reinforce harmful stereotypes or offer unhelpful advice that is not culturally or personally appropriate if the model is trained on incomplete or inaccurate data.

Dr. Janette Leal, director of psychiatry at Better, told Fox News Digital that she has seen firsthand how powerful personalized interventions can be in changing lives. While Leal recognizes that AI could expand access to mental health support, especially in areas where help is scarce, she remains cautious about chatbots that present themselves as licensed therapists.

Teens are turning to Snapchat’s ‘My AI’ for mental health support, which doctors warn against

A robotic arm presses a “start artificial intelligence” button. 3D illustration. (iStock)

“I have seen, both in my practice and in recent tragic cases, how dangerous it can be when vulnerable individuals rely on unregulated support. For me, AI should only ever serve as a supplement to human care, operating under strict ethical standards and strong oversight to ensure that patient safety is not compromised,” she continued.

Jay Tobey, founder of North Star Wellness Recovery in Utah, was more bullish about using AI to address mental health, though he stopped short of endorsing fully fledged AI therapists. Instead, he said, a “perfect scenario” would involve a human therapist using AI as a “tool in their belt” to deliver proper treatment.

“I think there would be a great benefit to using AI chatbots. Personally, I believe we all think we’re telling a very unique story about what we’re going through and how we feel, but people tell the same stories over and over again. If a large language model could pick up on that and start tracking outcomes to know what the best practices are, that would be helpful,” he told Fox News Digital.


The APA is now calling on the Federal Trade Commission (FTC) to investigate chatbots that claim to be mental health experts, an inquiry that could eventually lead to federal regulation.


