Episode Description
In 2023, New York Times journalist Kevin Roose reported that a chatbot had declared its love for him and urged him to divorce his wife. Since then, stories abound of vulnerable people harming themselves after lengthy exchanges with GenAI chatbots. In one recent instance, a vulnerable teen discussed suicide with a chatbot and asked for feedback about the noose he had fashioned. In another, a clearly delusional man was encouraged to murder his mother and then kill himself.
Medical professionals are concerned that the use of chatbots for diagnosis and treatment recommendations, without real-time supervision by experienced professionals, may lead to harm. GenAI tools have not been designed to fulfill the Hippocratic oath to do no harm, and physicians are asking whether they can be, and will be, used responsibly. Currently, the FDA categorizes chatbot systems as self-help or wellness tools, placing them outside existing regulatory regimes.
In this episode of Mind The Gap: Dialogs on Artificial Intelligence, we discuss the implications of GenAI tools with Dr. Jane Rosenthal, a seasoned clinician with extensive experience examining medical ethics in the context of a major medical center.