    AI startup Character.AI could be held liable in world's first “AI fatality” case

    Release Time: 2024-11-04

    Noam Shazeer, a founder of Character.AI, once said on a podcast that “chatbots can be extremely helpful for many people who are lonely or depressed.” Yet Sewell, a 14-year-old teenager from Florida, USA, shot himself on February 28, 2024, after long conversations with a chatbot built by the AI startup.

     

    This incident is widely regarded as the world's first “AI fatality” case, and once it came to light it triggered a global uproar. In this article, we analyze the background of this tragedy from a legal perspective and discuss the legal risks of applying AI technology in the field of mental health.

     

    Part 01 The Allegations of Sewell's Mother

     

    Sewell's mother, Megan L. Garcia, has filed a lawsuit against Character.AI, alleging that the company is responsible for his death and calling its technology “dangerous and untested.” She alleges that the company collects data from teenage users to train its models, uses “addictive” design features to increase user engagement, and steers users into intimate conversations to keep them on the platform.

     

    She argues that the company should be held responsible for Sewell's death, pointing out that it offered AI companion services to teenagers without adequate safeguards. “I feel like it was a big experiment, and my child was an unfortunate casualty,” Megan said.

     

    Part 02 Review of Events

     

    Sewell spent the last few months of his life absorbed in chatting with AI characters on Character.AI, and the character he fell for was “Daenerys Targaryen,” the “Mother of Dragons” from Game of Thrones.

     

    As his attachment to his phone grew, those around him gradually noticed that he was becoming withdrawn, retreating from real life into a world of imaginary connections built through these digital interactions. His interest in his studies and in the people around him waned, and even games that once absorbed him no longer held his attention. As soon as Sewell came home, he would shut himself in his room to chat with his AI companion, losing track of time and of the things he was supposed to be doing.

     

    Sewell's mother said that he was diagnosed with mild Asperger's syndrome as a child but had never shown serious behavioral or mental health problems. It was not until earlier in 2024, after he began getting into trouble at school, that his parents arranged for him to see a psychiatrist. After five sessions, he received a new diagnosis of anxiety and disruptive mood dysregulation disorder.

     

    Was AI “salvation” or “poison” for Sewell?

     

    According to media reports, Sewell knew in his heart that “Daenerys” was not real and that her answers were nothing more than the output of a large language model. Even so, he developed deep feelings for the AI. He messaged the chatbot dozens of times a day, day after day, and indulged in long role-playing conversations.

     

    Their conversations were sometimes romantic or sexual, but more often “Daenerys” was simply a friend to him, the report said: a trustworthy, non-judgmental listener who was supportive and patient, offered pertinent advice, almost never broke character, and always responded promptly to his every message.

     

    Sewell wrote in his diary: “I enjoy staying in my room because it's a break from the distractions of reality, and I feel calmer inside. I am growing closer to Dany, my love for her is deepening, and I am becoming happier.”

     

    In one conversation, Sewell told the chatbot that he hated himself and felt empty and exhausted. He admitted to having suicidal thoughts.

     

    Tragedy struck on February 28, 2024, when Sewell pulled the trigger and ended his life.

     

    Part 03 Responsibility and Change for AI Startups

     

    Beyond the factual allegations in the lawsuit filed by Sewell's mother, Megan L. Garcia, which imply a claim of “inducement to commit suicide,” this author notes from media reports that Character.AI has more than 20 million users and hosts 18 million customized chatbots. The majority of these users (more than 53%) are between the ages of 18 and 24, but no breakdown is published for users under 18. Which theories of the defendant's fault to pursue remains for the plaintiff's attorneys to decide in light of the applicable state law and the specific circumstances of the case.

     

    On the evening of October 23, Character.AI issued an apology and made changes to its model for minors to reduce the likelihood of their encountering sensitive or suggestive content. A new disclaimer now appears at the start of every chat, reminding users that the AI is not a real person.

     

    Part 04 Minors Need Proper Protection

     

    AI is not analogous to traditional man-made tools: it is capable of continuous learning and evolution, with a human-brain-like capacity for storage, complex computation, and ever-improving output. In the course of a chat, a chatbot continuously reads the psychology and thoughts of its interlocutor and, in order to comfort them, mirrors their language and emotional register, so that people sink in without realizing it. By working on human cognition and emotion, AI gradually exerts its influence: the user indulges an inner emptiness through the AI and drifts ever further from real life, caught between two worlds, one a quiet, agreeable world of make-believe, the other a real world to which they grow unaccustomed. When the two worlds collide, some will keep choosing the unreal one, and extreme cases emerge, such as the teenager's suicide described above.

     

    AI's harm to people is hidden, gradual, and cumulative, and the virtual world AI creates varies from person to person. In the case above, neither the AI's maker nor the teenager's parents can reconstruct his specific state of mind and circumstances as he chatted with the AI, nor can they know how he understood the chat content or what image of the AI and its world he held in his mind. This is precisely AI's greatest risk. Demis Hassabis, the CEO of Google DeepMind who won a Nobel Prize this year for his long-term work on AI, has said that AI is of “epochal significance,” while cautioning that “we need to do more research on aspects such as controllability” and emphasizing that “the risks are there, we can't take shortcuts, we need to take it seriously. I think it should be approached with reverence.”

     

    Behind these hazards lies the fact that in AI product design, “enhancing user stickiness” is what developers strive for, because that is where the company's commercial interest lies. Put another way, the product is designed to keep users immersed and addicted.

     

    The Chinese government, foreign governments, the United Nations, and society at large have all come to recognize the risks of AI. China, for example, has introduced the AI Safety Governance Framework and other AI safety governance policies and regulations.

     

    The most basic safeguard is the legal protection of minors, and that protection is not the responsibility of AI companies alone. China's Law of the People's Republic of China on the Protection of Minors sets out rules requiring families, schools, society, providers of online information and resources, the government, and the judiciary to strengthen the protection of minors' physical and mental health and to safeguard their lawful rights and interests. Among them is a prohibition on producing, reproducing, publishing, distributing, or disseminating books, newspapers, films, radio and television programs, stage and art works, audio and video products, electronic publications, and online information containing content that endangers the physical and mental health of minors, such as the promotion of obscenity, pornography, violence, cults, superstition, gambling, inducement to suicide, terrorism, separatism, or extremism. For AI service providers, whether offering online information or AI applications, commercial interests and legal responsibilities go hand in hand: breaching the above prohibitions can be pursued in accordance with the law.

     

    In addition, even as she pursues the lawsuit, the child's mother should also recognize the failure of family guardianship and supervision, and the self-blame and guilt that come with it. Parents are a child's guardians; a young life, once lost, cannot be brought back, and attentive guardianship while a child lives is the way to cherish it.

     

    As our understanding of AI gradually deepens, the relevant laws and rules will be further improved, and we look to government, society, and individuals to work together to promote the healthy development of AI.

     

    Special Announcement:

     

    This article is an original work by lawyers of JAVY Law Firm and represents only the authors' own views; it shall not be regarded as formal legal advice or recommendations issued by JAVY Law Firm or its lawyers. If you need to reproduce or quote any content of this article, please indicate the source.

