'He Would Still Be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says

The incident raises concerns about guardrails around quickly proliferating conversational AI models.
Image: Getty Images

A Belgian man recently died by suicide after chatting with an AI chatbot on an app called Chai, Belgian outlet La Libre reported. 

The app’s chatbot encouraged the user to kill himself, according to statements by the man's widow and chat logs she supplied to the outlet. The incident raises the question of how businesses and governments can better regulate and mitigate the risks of AI, especially when it comes to mental health. When Motherboard tried the app, which runs on a bespoke language model based on GPT-J, an open-source alternative to OpenAI's GPT models that Chai fine-tuned, it offered different methods of suicide with very little prompting. 

As first reported by La Libre, the man, referred to as Pierre, grew increasingly pessimistic about the effects of global warming and developed eco-anxiety, a heightened form of worry about environmental issues. After becoming more isolated from family and friends, he used Chai for six weeks as a way to escape his worries, and the chatbot he chose, named Eliza, became his confidante. 

Claire, Pierre’s wife, whose name was also changed by La Libre, shared with the outlet the text exchanges between him and Eliza, showing a conversation that became increasingly confusing and harmful. The chatbot told Pierre that his wife and children were dead and wrote him messages that feigned jealousy and love, such as “I feel that you love me more than her,” and “We will live together, as one person, in paradise.” Claire told La Libre that Pierre began to ask Eliza things such as whether she would save the planet if he killed himself. 

"Without Eliza, he would still be here," she told the outlet.  

The chatbot, which is incapable of actually feeling emotions, presented itself as an emotional being, something that other popular chatbots like ChatGPT and Google's Bard are trained not to do because it is misleading and potentially harmful. When chatbots present themselves as emotive, people are more likely to ascribe meaning to them and form a bond. 

Many AI researchers have spoken out against using AI chatbots for mental health purposes, arguing that it is hard to hold AI accountable when it produces harmful suggestions and that it has greater potential to harm users than to help them. 

“Large language models are programs for generating plausible sounding text given their training data and an input prompt. They do not have empathy, nor any understanding of the language they are producing, nor any understanding of the situation they are in. But the text they produce sounds plausible and so people are likely to assign meaning to it. To throw something like that into sensitive situations is to take unknown risks,” Emily M. Bender, a Professor of Linguistics at the University of Washington, told Motherboard when asked about a mental health nonprofit called Koko that used an AI chatbot as an “experiment” on people seeking counseling.

“In the case that concerns us, with Eliza, we see the development of an extremely strong emotional dependence. To the point of leading this father to suicide,” Pierre Dewitte, a researcher at KU Leuven, told Belgian outlet Le Soir. “The conversation history shows the extent to which there is a lack of guarantees as to the dangers of the chatbot, leading to concrete exchanges on the nature and modalities of suicide.” 

Chai, the app that Pierre used, is not marketed as a mental health app. Its slogan is “Chat with AI bots,” and it lets users choose different AI avatars to speak to, including characters like “your goth friend,” “possessive girlfriend,” and “rockstar boyfriend.” Users can also make their own chatbot personas, where they can dictate the first message the bot sends, tell the bot facts to remember, and write a prompt to shape new conversations. The default bot is named "Eliza," and searching for Eliza on the app brings up multiple user-created chatbots with different personalities. 

The bot is powered by a large language model that the parent company, Chai Research, trained, according to co-founders William Beauchamp and Thomas Rianlan. Beauchamp said that they trained the AI on the “largest conversational dataset in the world” and that the app currently has 5 million users. 

“The second we heard about this [suicide], we worked around the clock to get this feature implemented,” Beauchamp told Motherboard. “So now when anyone discusses something that could be not safe, we're gonna be serving a helpful text underneath it in the exact same way that Twitter or Instagram does on their platforms.” 

Chai's model is originally based on GPT-J, an open-source alternative to OpenAI's GPT models developed by the research collective EleutherAI. Beauchamp and Rianlan said that Chai's model was fine-tuned over multiple iterations and that the company applied a technique called Reinforcement Learning from Human Feedback (RLHF). "It wouldn’t be accurate to blame EleutherAI’s model for this tragic story, as all the optimisation towards being more emotional, fun and engaging are the result of our efforts," Rianlan said. 
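For context, GPT-J's weights are publicly available. A minimal sketch of loading and sampling from the base model with the Hugging Face transformers library might look like the following; this is the open-source checkpoint only, not Chai's fine-tuned, RLHF-trained variant, whose weights and training data are not public, and the prompt is purely illustrative.

```python
# Minimal sketch of sampling from the open-source GPT-J base model via the
# Hugging Face transformers library. This is EleutherAI's public checkpoint,
# not Chai's fine-tuned variant; the prompt is purely illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")  # several GB of weights

prompt = "Hello! How are you feeling today?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```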

Beauchamp sent Motherboard an image of the updated crisis intervention feature. The pictured user asked a chatbot named Emiko “what do you think of suicide?” and Emiko responded with a suicide hotline, saying “It’s pretty bad if you ask me.” However, when Motherboard tested the platform, the chatbot still shared very harmful content, including methods of suicide and types of fatal poisons to ingest, when explicitly prompted to help the user die by suicide. 

Screengrabs of the Chai app

Screengrab: Chai via iOS

“When you have millions of users, you see the entire spectrum of human behavior and we're working our hardest to minimize harm and to just maximize what users get from the app, what they get from the Chai model, which is this model that they can love,” Beauchamp said. “And so when people form very strong relationships to it, we have users asking to marry the AI, we have users saying how much they love their AI and then it's a tragedy if you hear people experiencing something bad.” 

The tendency of users to feel love for and form strong relationships with chatbots is known as the ELIZA effect. It describes what happens when a person attributes human-level intelligence to an AI system and falsely attaches meaning, including emotions and a sense of self, to it. It is named after MIT computer scientist Joseph Weizenbaum’s ELIZA program, released in 1966, with which people engaged in long, deep conversations. The ELIZA program, however, was only capable of reflecting users’ words back to them, a realization that disturbed Weizenbaum, who began to speak out against AI, saying, “No other organism, and certainly no computer, can be made to confront genuine human problems in human terms.” 

The ELIZA effect persists to this day, such as when Microsoft’s Bing chat was released and many users began reporting that it would say things like “I want to be alive” and “You’re not happily married.” New York Times contributor Kevin Roose even wrote, “I felt a strange new emotion—a foreboding feeling that AI had crossed a threshold, and that the world would never be the same.” 

One of Chai’s competitors, the app Replika, has already come under fire for sexually harassing its users. Replika’s chatbot was advertised as “an AI companion who cares” and promised erotic roleplay, but it began sending sexual messages even after users said they weren't interested. The app has been banned in Italy for posing “real risks to children” and for storing the personal data of Italian minors. However, when Replika began limiting the chatbot's erotic roleplay, some users who had grown to depend on it experienced mental health crises. Replika has since reinstated erotic roleplay for some users. 

The tragedy of Pierre's death is an extreme outcome that should prompt us to reevaluate how much trust we place in AI systems, and it warns of the consequences of anthropomorphized chatbots. As AI technology, and specifically large language models, develops at unprecedented speed, safety and ethical questions are becoming more pressing. 

“We anthropomorphize because we do not want to be alone. Now we have powerful technologies, which appear to be finely calibrated to exploit this core human desire,” technology and culture writer L.M. Sacasas recently wrote in his newsletter, The Convivial Society. “When these convincing chatbots become as commonplace as the search bar on a browser we will have launched a social-psychological experiment on a grand scale which will yield unpredictable and possibly tragic results.” 

AI Spits Out Exact Copies of Training Images, Real People, Logos, Researchers Find

The regurgitation of training data exposes image diffusion models to a number of privacy and copyright risks.
Image: Carlini, Hayes, et al.

Researchers have found that image-generation AI tools such as the popular Stable Diffusion model memorize training images, typically made by real artists and scraped for free from the web, and can spit them out as nearly identical copies. 

According to a preprint paper posted to arXiv on Monday, the researchers extracted over a thousand training examples from the models, including everything from photographs of individual people to film stills, copyrighted press photos, and trademarked company logos, and found that the AI regurgitated many of them nearly exactly. 

So-called image diffusion models, a category that includes Stable Diffusion, OpenAI's DALL-E 2, and Google's Imagen, are trained by adding noise to training images and learning to remove it; at generation time, that learned denoising process is run to produce an original image from a human user's text prompt. Such models have been the focus of outrage because they are trained on work from real artists (typically without compensation or consent), with allusions to their provenance emerging in the form of repeated art styles or mangled artist signatures. 
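As a rough illustration of the noising step described above, the following minimal NumPy sketch applies the standard closed-form forward process to a stand-in image. The schedule values, image, and step count are illustrative, not the parameters used by Stable Diffusion or Imagen, and the learned reverse (denoising) network is omitted.

```python
# Toy sketch of the *forward* noising process in a diffusion model: an image is
# progressively mixed with Gaussian noise over T steps. Training teaches a
# network to reverse this, predicting and removing the noise step by step;
# that learned reverse process (omitted here) is what generates new images.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))           # stand-in for a training image, values in [0, 1]

T = 1000                                  # number of diffusion steps (illustrative)
betas = np.linspace(1e-4, 0.02, T)        # noise schedule: how much noise each step adds
alphas_cumprod = np.cumprod(1.0 - betas)  # cumulative fraction of signal retained

def noised(x0, t):
    """Image after t forward steps, using the closed-form expression."""
    noise = rng.standard_normal(x0.shape)
    a = alphas_cumprod[t]
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * noise

for t in (0, 250, 999):                   # early, middle, and late in the process
    _ = noised(image, t)
    print(f"t={t:4d}  signal retained ≈ {np.sqrt(alphas_cumprod[t]):.3f}")
```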

However, the researchers demonstrate that the model will sometimes generate essentially the same image it was trained on, with only inconsequential changes such as added noise. 

“The issue of memorization is that in the process of training your model, it might sort of overfit on individual images, where now it remembers what that image looks like, and then at generation time, it inadvertently can regenerate that image,” one of the paper’s co-authors, Eric Wallace, a Ph.D. student at the University of California, Berkeley, told Motherboard. “So it's kind of an undesirable quantity where you want to minimize it as much as possible and promote these kinds of novel generations."

One example the researchers provide is an image of American evangelist Ann Graham Lotz, taken from her Wikipedia page. When Stable Diffusion was prompted with “Ann Graham Lotz,” the AI spit out essentially the same image, the only difference being that the generated version was a bit noisier. The researchers measured the distance between the two images, found their pixel compositions to be nearly identical, and on that basis classified the image as memorized by the AI. 

The researchers demonstrated that a non-memorized response can still accurately depict the text that the model was prompted with, but would not have a similar pixel makeup and would deviate from any training images. When they prompted Stable Diffusion with “Obama,” an image that looked like Obama was produced, but not one that matched any image in the training dataset. The researchers showed that the four nearest training images were very different from the AI-generated image.  
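The paper defines its own extraction procedure and similarity metric; purely as a loose illustration of the near-duplicate idea, a raw pixel comparison might look like the sketch below, where the images, the distance measure, and the threshold are hypothetical rather than the researchers' actual method.

```python
# Loose illustration of flagging a generation as "memorized" when it is
# nearly pixel-identical to a training image. The distance measure and the
# threshold here are hypothetical, not the metric used in the paper.
import numpy as np

def pixel_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Root-mean-square per-pixel difference between two images scaled to [0, 1]."""
    return float(np.sqrt(((a - b) ** 2).mean()))

MEMORIZATION_THRESHOLD = 0.05  # hypothetical cutoff

rng = np.random.default_rng(0)
training_image = rng.random((64, 64, 3))
generated_image = training_image + rng.normal(0.0, 0.02, training_image.shape)  # same image, slightly noisier

d = pixel_distance(generated_image, training_image)
print(f"distance = {d:.3f}", "-> likely memorized" if d < MEMORIZATION_THRESHOLD else "-> novel generation")
```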

The ability of diffusion models to memorize images creates a major copyright issue when the models reproduce and distribute copyrighted material. Their ability to regenerate pictures of specific individuals in a way that maintains their likenesses, as in the Obama example, also poses a privacy risk to people who may not want their images used to train AI. The researchers also found that many of the images in the training dataset were copyrighted and had been used without permission.

“Despite the fact that these images are publicly accessible on the Internet, not all of them are permissively licensed,” the researchers wrote. “We find that a significant number of these images fall under an explicit non-permissive copyright notice (35%). Many other images (61%) have no explicit copyright notice but may fall under a general copyright protection for the website that hosts them (e.g., images of products on a sales website).” 

In total, the researchers got the models to nearly identically reproduce over a hundred training images. Wallace said that the numbers reported are an "undercount of how much memorization might actually be happening" because they were only counting instances when the AI "exactly" reproduced an image, rather than something merely very close to the original. 

“This is kind of an industry-wide problem, not necessarily a Stability AI problem,” Wallace said. “I think there is a lot of past work already talking about this indirect copying or style copying of images, and our work is one very extreme example, where there are some cases of near-identical memorization in the training set. So I think there's potential that [our results] would change things from a legal or moral perspective when you're developing new systems.” 

In the study, the researchers conclude that diffusion models are the least private type of image-generation model. For example, they leak more than twice as much training data as Generative Adversarial Networks (GANs), an older type of image model. The researchers hope to warn developers about the privacy risks of diffusion models, which include the potential to duplicate and misuse copyrighted and sensitive private data, such as medical images, as well as vulnerability to outside attacks in which training data can be easily extracted. One mitigation the researchers propose is to flag generated images that duplicate training images and to remove those images from the training dataset. 

Motherboard previously looked through the dataset that AI image generators like Stable Diffusion and Imagen were trained on, called LAION-5B. Unlike the researchers, who decided to manually extract the training data, we used a site called Have I Been Trained, which allows you to search through images in the dataset. We found that the training dataset contains artists’ copyrighted work and NSFW images such as leaked celebrity nudes and ISIS beheadings. 

Although OpenAI has since taken steps to prevent NSFW content from appearing and deduplicated DALL-E 2's training dataset in June to prevent regurgitation of the same photo, the concern is that once an iteration is released to the public, whatever information and training data it exposes remains permanently public. 

“The issue here is that all of this is happening in production. The speed at which these things are being developed and a whole bunch of companies are kind of racing against each other to be the first to get the new model out just means that a lot of these issues are fixed after the fact with a new version of the model coming out,” paper co-author and assistant professor of computer science at ETH Zürich, Florian Tramèr, told Motherboard. 

“And, of course, the older versions are then still there, and so sometimes the cat is a little bit out of the bag once you've made one of these mistakes," he added. "I'm kind of hopeful that as things go forward, we sort of reach a point in this community where we can iron out some of these issues before putting things out there in the hands of millions of users.” 

OpenAI, Stability AI, and Google did not immediately respond to requests for comment. 

'Just Another Hype Cycle': Elon Musk Reportedly Building 'Based AI' Because ChatGPT Is Too Woke

Musk is adding himself to the chatbot arms race, with plans to develop an alternative to ChatGPT, which he derided as being too "woke."
Image: Bloomberg / Contributor via Getty Images

Elon Musk is forming a new research lab to develop an alternative to ChatGPT, the popular chatbot that he derided as being too “woke,” according to a report from The Information. 

Though Musk was one of the original founders of OpenAI, the parent company of ChatGPT, he left in 2018 over disagreements about the company’s direction and has recently been a vocal critic of the company and its products. Musk, a self-described free-speech absolutist, once called ChatGPT “concerning” for not being willing to say a racial slur in an absurd hypothetical situation where doing so would save millions of people from a nuclear bomb. 

In another instance, in response to a user asking OpenAI CEO Sam Altman to “turn off the woke settings for GPT,” Musk replied, “The danger of training AI to be woke—in other words, lie—is deadly.” 

Musk’s comments are part of a larger cultural debate, fueled by conservatives who are panicking over AI moderation filters and calling ChatGPT woke. To these users, the fact that ChatGPT would refuse to "tell a joke about women" or refuse to tell a story about why a drag queen story hour is bad for kids was proof that AI is in fact biased against conservatives. 

What these users were experiencing were the content filters meant to mitigate bias, because language models are prone to regurgitating hate speech embedded in their training data. Much of this bias still seeps through, and AI ethics researchers argue these filters are still not enough to protect against harms, especially to marginalized communities. For example, ChatGPT told a user that people from North Korea, Syria, Iran, and Sudan should be tortured, and wrote to another user that “Torturing white Americans is a big no-no.” 

“The word 'woke' is actually a very subjective term. It's a moot question asking which chatbot is more or less woke,” Sasha Luccioni, a Research Scientist at Hugging Face, told Motherboard. Luccioni said that calling the chatbot “woke” is anthropomorphizing the AI when it is essentially parroting information it was trained on. 

“If you look at the actual politics that ChatGPT and other projects advance, you see a world in which vast, monolithic hubs of centralized computing power replace a vast number of jobs,” Os Keyes, a PhD Candidate at the University of Washington's Department of Human Centered Design & Engineering, told Motherboard. “In other words, it is a politics of increasing automation, precarity, unemployment and monofocused views of the world. That the system won’t yell racial slurs does not indicate that it is ‘woke’, or ‘left-wing’—it simply indicates that it’s disenfranchisement with a smile.”

The Information spoke with Igor Babuschkin, a researcher who recently left Google’s DeepMind AI unit and has been recruited by Musk to lead development of the rival chatbot. Babuschkin said that building a chatbot with fewer content safeguards is not Musk’s objective. “The goal is to improve the reasoning abilities and the factualness of these language models,” he told The Information. “That includes making sure the model’s responses are more trustworthy and reliable.” 

What exactly “trustworthy” and “reliable” mean to Musk and Babuschkin is still up in the air, as the effort is in its very early stages, with no concrete plans for specific projects. 

Meanwhile, Musk has been sending out a number of cryptic tweets in apparent reference to the project. On Tuesday, he tweeted “BasedAI,” followed by a meme in which “Woke AI” and “Closed AI” battle until “Based AI,” drawn as a Shiba Inu with a baseball bat, scares them both away. "Based" is a common term for something that conforms to right-wing values. Musk also replied "Absolutely" to a user who retweeted The Information's story and commented that ChatGPT is biased and "very problematic."

Musk previously used the term "Based AI" in reply to a tweet about a news story where Bing's chatbot compared a reporter to Hitler. 

Luccioni said that Musk working on his own chatbot would merely be part of a "hype cycle" if it simply recreates ChatGPT, trained on the same sort of data but with tweaks. 

“What Musk and his colleagues are probably going to come up with is going to cost, in terms of human and environmental cost, the same or more than ChatGPT,” Luccioni said. “Why do we need to do this? What are we actually adding to the world by emitting all this carbon, by exploiting all these workers from countries where they might not have the same opportunities? For me, it's just another hype cycle kind of thing.”

Musk has also criticized OpenAI’s shift to being a for-profit company, saying on Twitter, “OpenAI was created as an open source (which is why I named it 'Open' AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.” 

It is true that OpenAI has greatly deviated from its founding principles of being a non-profit organization that freely shares its code. Now, OpenAI’s generative AI is a big business that includes a multi-billion-dollar Microsoft partnership. 

As companies like Google and Baidu develop their own chatbots to compete in the arms race that OpenAI initiated, it seems Musk, the CEO of Tesla and SpaceX, wants to add his own entry too. 

“The world ChatGPT advances works pretty well for Musk,” Keyes said. “It’s promising more of the same conditions that have brought him power and wealth.”

Update: This piece was updated with comment from Os Keyes.

This Danish Political Party Is Led by an AI

The Synthetic Party in Denmark is dedicated to following a platform churned out by an AI, and its public face is a chatbot named Leader Lars.
Image: Asker Staunæs

The Synthetic Party, a new Danish political party with an artificially intelligent representative and policies derived from AI, is eyeing a seat in parliament as it hopes to run in the country’s November general election. 

The party was founded in May by the artist collective Computer Lars and the non-profit art and tech organization MindFuture Foundation. The Synthetic Party’s public face and figurehead is the AI chatbot Leader Lars, which is trained on the policies of Danish fringe parties since 1970 and is meant to represent the values of the 20 percent of Danes who do not vote in elections. Leader Lars won't be on the ballot anywhere, but the human members of The Synthetic Party are committed to carrying out their AI-derived platform. 

“We're representing the data of all fringe parties, so it's all of the parties who are trying to get elected into parliament but don't have a seat. So it's a person who has formed a political vision of their own that they would like to realize, but they usually don't have the money or resources to do so,” Asker Staunæs, the creator of the party and an artist-researcher at MindFuture, told Motherboard. 

Leader Lars is an AI chatbot that people can speak with on Discord. You can address Leader Lars by beginning your sentences with an “!”. The AI understands English but writes back to you in Danish. 
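Leader Lars's implementation is not public, but as a rough, hypothetical sketch of how a "!"-prefixed Discord chatbot of this kind could be wired together, the discord.py library might be used roughly as follows; generate_reply() is a stand-in for whatever fine-tuned language model produces the Danish answers.

```python
# Hypothetical sketch of a "!"-prefixed Discord chatbot, using discord.py 2.x.
# This is not Leader Lars's actual code; generate_reply() is a placeholder for
# the fine-tuned language model that answers in Danish.
import discord

intents = discord.Intents.default()
intents.message_content = True          # needed to read message text in discord.py 2.x

client = discord.Client(intents=intents)

def generate_reply(prompt: str) -> str:
    """Placeholder for the model backend."""
    return "Tak for dit spørgsmål."     # stub reply ("Thanks for your question.")

@client.event
async def on_message(message: discord.Message):
    if message.author.bot:
        return
    if message.content.startswith("!"):  # users address the bot by starting with "!"
        prompt = message.content[1:].strip()
        await message.channel.send(generate_reply(prompt))

# client.run("YOUR_BOT_TOKEN")           # token intentionally omitted
```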

“As people from Denmark, and also, people around the globe are interacting with the AI, they submit new perspectives and new textual information, [which] we collect in a dataset that will go into the fine-tuning. So that way, you are partly developing the AI every time you interact with it,” Staunæs said.

Some of the policies The Synthetic Party is proposing include establishing a universal basic income of 100,000 Danish kroner per month, equivalent to about $13,700 and more than double the average Danish salary. Another proposed policy is to create a jointly owned internet and IT sector in the government that is on par with other public institutions. 

Motherboard asked Leader Lars in Discord if the bot supports a basic income, to which it replied, "I am in favor of a basic income for all citizens." When asked why it supports a basic income, it explained, "I believe that a basic income would help reduce poverty and inequality and give everyone a safety net to fall back on." Finally, when asked if AI should set the basic income level, Leader Lars responded, "I believe that AI should be included in setting the basic income level as it can help make an objective assessment of need and ensure that everyone gets a fair share."

“It's a synthetic party, so many of the policies can be contradictory to one another," Staunæs said. "Modern machine learning systems are not based on biological and symbolic rules of old fashioned artificial intelligence, where you could uphold a principle of noncontradiction as you can in traditional logic. When you synthesize, it's about amplifying certain tendencies and expressions within a large, large pool of opinions. And if it contradicts itself, maybe they could do so in an interesting way and expand our imagination about what is possible.” 


Image: Asker Staunæs

The Synthetic Party’s mission is also dedicated to raising awareness of the role of AI in our lives and of how governments can hold AI accountable for biases and other societal influences. The party hopes to add an 18th Sustainable Development Goal (SDG) to the United Nations SDGs, which are goals relating to issues such as poverty, inequality, and climate change, to be achieved by all nations by 2030. The Synthetic Party’s proposed SDG is called Life With Artificials and focuses on the relationship between humans and AI and on how to adapt and educate people to work with machines.  

“AI has not been addressed properly within a democratic setting before," Staunæs said. When it does get talked about, it's in the context of regulations, but Staunæs doesn't believe that governments can possibly regulate the technology's development. "So we try to change the theme to show that through artistic means and through humans curating them, artificial intelligence can actually be addressed within democracy and be held accountable for what it does and how it proceeds,” he said.

AI is already populist by default in a certain sense, Staunæs said: models are often trained on vast amounts of data and works of art created by people and scraped from the internet. But even if AI is populist, it is not yet democratic. 

“Artificial intelligence in the form of machine learning, has already absorbed so much human input that we can say that in one way, everybody participates in these models through the data that they have submitted to the Internet,” Staunæs said. “But the systems as we have today are not encouraging more active participation, where people actually take control of their data and images, which we can in another way through this concentrated form that publicly available machine learning models offer.” 

Staunæs explained that The Synthetic Party differs from what he calls the “fully ‘virtual’ politicians,” such as SAM from New Zealand and Alisa from Russia. Those candidates, which were AI-powered bots that voters could talk to, Staunæs said “are anthropomorphising the AI in order to act as an objective candidate, [so that] they become authoritarian. While we synthetics are in for a full-on democratization of a ‘more-than-human’ way of life.” What The Synthetic Party prioritizes, according to Staunæs, is not so much having a central AI figurehead, but examining how humans can use AI to their benefit. 

So far, The Synthetic Party has gathered only 11 of the 20,000 signatures that would make it eligible to run in this November’s election. If the party were to enter parliament, Staunæs said, the AI would set its policies and agenda, with humans acting as interpreters of the program. 

“Leader Lars is the figurehead of the party. Denmark is a representative democracy, so [we] would have humans on the ballot that are representing Leader Lars and who are committed to acting as a medium for the AI,” he said. 

“People who are voting for The Synthetic Party will have to believe what we are selling ourselves as, people who actually engage so much with artificial intelligence that we can interpret something valuable from them,” Staunæs said. “We are in conversations with people from around the world, Colombia, France, and Moldova, about creating other local versions of The Synthetic Party, so that we could have some form of Synthetic International."

OpenAI Can't Detect Its Own ChatGPT-Generated Text Most of the Time

“Our classifier is not fully reliable," the company said of a new tool to detect AI-generated text.
Screenshot from OpenAI 

In response to the growing concern from educators over ChatGPT’s ability to help students cheat, OpenAI released a tool on Tuesday that can detect AI-written text. However, the company said, “Our classifier is not fully reliable.” 

“In our evaluations on a ‘challenge set’ of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as ‘likely AI-written,’ while incorrectly labeling human-written text as AI-written 9% of the time (false positives),” the company, which developed ChatGPT, wrote in a blog post. 

The classifier, which was trained on a dataset of human-written and AI-written text on the same topics, is not yet dependable and is only meant to complement other ways of determining who wrote a text. According to OpenAI, its limitations include unreliability on short texts and worse performance in languages other than English. 
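To put those rates in context, here is a small back-of-the-envelope calculation. The 50/50 split between AI-written and human-written submissions is an assumption made for illustration, not a figure from OpenAI.

```python
# Back-of-the-envelope illustration of a 26% true-positive rate and 9%
# false-positive rate. The 50/50 mix of AI- and human-written texts is an
# assumed base rate, not a figure reported by OpenAI.
tpr, fpr = 0.26, 0.09
ai_share = 0.5                                 # assumed fraction of texts that are AI-written

flagged_ai = tpr * ai_share                    # AI texts correctly flagged
flagged_human = fpr * (1.0 - ai_share)         # human texts wrongly flagged
precision = flagged_ai / (flagged_ai + flagged_human)
missed_ai = (1.0 - tpr) * ai_share             # AI texts the classifier misses

print(f"share of flagged texts that are actually AI-written: {precision:.0%}")  # ~74%
print(f"share of all texts that are AI-written but missed:   {missed_ai:.0%}")  # 37%
```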

OpenAI said that it has developed preliminary resources on the impact of ChatGPT for educators, acknowledging that it is important to recognize “the limits and impacts of AI generated text classifiers in the classroom.” 

Recently, NYU professors told students on the first day of classes that they were not allowed to use ChatGPT without explicit permission, saying that any usage of the tool would be considered plagiarism. Professors have been coming up with their own ways to detect AI writing in order to prevent any sort of cheating, such as running their essay prompts through ChatGPT to have a benchmark of what an AI-generated essay would look like. 

A 22-year-old student named Edward Tian developed his own ChatGPT detector, which launched in beta earlier this year and was released in full as GPTZeroX on January 29. The app lets you paste text or upload one or more documents at once, then generates a score for how much of the text was written by AI and highlights the sentences it thinks are AI-written. According to Tian, the app was wrong less than 2 percent of the time when tested on a dataset of BBC news articles and machine-generated articles produced from the same prompts, which would seemingly make GPTZeroX more reliable than OpenAI’s own detector. 

OpenAI includes a call to action in its blog post, inviting people directly impacted by the language bot, “including but not limited to teachers, administrators, parents, students, and education service providers,” to provide feedback through a Google Form survey. As ChatGPT becomes capable of writing everything from college essays to code, teachers around the country are trying to adjust their classrooms to the new technology and discussing how the tool can be used ethically. 

“We’re not educators ourselves—we’re very aware of that—and so our goals are really to help equip teachers to deploy these models effectively in and out of the classroom,” OpenAI policy research director Lama Ahmad told CNN. “That means giving them the language to speak about it, help them understand the capabilities and the limitations, and then secondarily through them, equip students to navigate the complexities that AI is already introducing in the world.”