Although my friends and family don’t exactly understand what I do at work, they know that I’m generally aware of cutting-edge technology. I can’t count the number of times in the last six months that people have asked me what I think about ChatGPT or artificial intelligence taking over the world. Although I enjoy reading the scholarly articles that people are publishing in the informatics literature about the use of large language models, I’ve made it a point to keep up with the lay media so that I understand what my friends and family are reading. It’s also a good proxy for what my physician colleagues know about the technology, since if they’re reading scholarly literature, it’s most likely in their own specialty or subspecialty fields.
I was intrigued to see this article in the New York Times this week covering the Federal Trade Commission’s investigation into the potential harms of ChatGPT. Regulators sent a letter to OpenAI to inquire about its security policies and procedures, as well as to learn whether consumers have suffered damages related to how the chatbot collects data. They’re also interested in the ability of the technology to generate and publish false information about individuals. The NYT reported that the letter was 20 pages long and included pages of questions, including those seeking information on how the company trains its AI models. It also requested documents related to the inquiry. One question asks whether the company “engaged in unfair or deceptive privacy or data security practices or engaged in unfair or deceptive practices relating to the risks of harm to consumers.”
Most of the people I talk to act like ChatGPT is no big deal and we should be excited about using it. Although I’m optimistic about its ability to provide value in a variety of industries and situations, it’s a complex technology, and there needs to be thoughtful consideration about how we do or do not use it for patient care. I see a lot of people using it to generate patient-facing correspondence without much review. One physician boasted about how she was able to create lab result letters for her patients, sending a full day’s worth of letters in under three minutes. The ability to create and proofread those letters in the cited timeframe is questionable at best. Based on the looks on the faces of some of the colleagues she was speaking to, I wonder if they were questioning her professional judgment.
Many of the large health systems and EHR vendors where some of my colleagues work have reportedly been on point with messaging to their physicians and other clinicians about not including protected health information in prompts, especially when users are considering publicly available tools rather than institutional or research tools. However, many of my independent physician colleagues haven’t received the same level of education and didn’t understand that information they feed into prompts can be used in various ways once a technology company has control of it. Some of the physicians I’ve interacted with on social media still aren’t savvy enough to keep protected health information out of their posts or images, and someone is always calling out a group member for posting unredacted content. The majority of physician users I interact with also don’t know that these systems might not have been updated with current data, which makes them unreliable when you’re asking for the latest medication or regulatory information. Without education on the technology, they’re also often unaware of the potential for AI-driven systems to hallucinate, creating completely inaccurate information based on patterns in their training data.
It’s also important to understand how AI technologies might impact our economy and the people whose jobs they are proposed to replace. For example, earlier this year there was a lot of buzz about AI-generated art and particularly AI-generated headshots. I felt like I was one of the only people in my physician social media circles who didn’t join the scores of people getting new headshots. A handful of people voiced privacy concerns, especially about the need to upload a batch of pictures for the technology to work and the potential that the company might be collecting facial recognition data for nefarious purposes. But those voices were in the minority, and most people were going along with it until the algorithm started going sideways, spitting out images that didn’t look remotely like them. The worst examples included pictures of people in superhero costumes or in situations that weren’t remotely appropriate for a professional headshot. One of my family members is a professional photographer, so I brought up the point that crafting a professional portrait is both an art and a skill, and that AI-generated images compete directly with professionals who are earning a living and contributing to their communities.
Economic factors are certainly concerning, but the risk of the technology creating disinformation is even more worrisome. OpenAI leadership has admitted that the industry needs regulation. Following the announcement of the letter, its leader said that he’s confident the company is following the law and that it will cooperate with the investigation. Other countries have already been more critical of the company than US regulators, with Italy banning ChatGPT in March over concerns about inappropriate collection of personal data from users and lack of age verification for minors trying to use the system. The company addressed the issues and access to the technology was restored the following month. Advocacy groups have been pressing the FTC and other regulatory agencies to address the risks of ChatGPT. The article notes one organization, the Center for AI and Digital Policy, which has asked the FTC to block OpenAI from releasing new versions to the public. About a week ago, it updated its complaint with additional supporting materials on the ways that chatbots might cause harm.
Federal agencies often move at a snail’s pace, and it’s unlikely that the FTC’s investigation into ChatGPT will proceed swiftly. The article notes that the FTC “may not have the knowledge to fully vet answers from OpenAI and that they don’t have the staff with technical expertise to evaluate the responses they will get and to see how OpenAI may try to shade the truth.”
Even after the investigation concludes, there’s a possibility that no action will be taken. Outcomes of investigations are often not widely publicized, and it will be interesting to see whether the FTC errs on the side of transparency or whether it will take Freedom of Information Act requests to find out the results. Only time will tell whether we’ll see increased regulation or a more wait-and-see approach.
What do you think about the need to regulate AI-powered technologies? Leave a comment or email me.
Email Dr. Jayne.