Generative AI grabbed headlines this year. Here’s why and what’s next

An easy-to-use interface, lots of training data and public access raised the tech’s profile

[Photo: a person’s hand using an AI chatbot on a touch screen]

People are increasingly using chatbots that respond in humanlike ways. As with any technology, there are benefits and drawbacks.

Laurence Dutton/E+/Getty Images Plus

Ask ChatGPT “Why is the sky blue?” and seconds later, it will tell you: “The blue color of the sky is primarily due to a phenomenon called Rayleigh scattering,” which the chatbot goes on to explain in a textbook-like, six-paragraph response. Follow up with, “Explain like I am 5 and make it short, please,” and back will come: “The sky is blue because tiny things in the air make the blue light from the sun bounce around and come to our eyes.”

ChatGPT is a type of generative AI. It’s a computer model that taps into language patterns to predict the next words in a sentence, answering a user’s prompt with a humanlike response. The model is structured as many layers of interconnected nodes, vaguely inspired by neural connections in the brain. During training, the model worked through billions of pieces of writing scraped from the internet, learning patterns by adjusting the strengths of the connections between nodes. Other types of generative AI have been trained to make images, videos and more.
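The core idea of predicting the next word from learned patterns can be illustrated with a toy sketch. The Python below is purely illustrative and is not how ChatGPT works internally: it only counts which word tends to follow which in a tiny made-up corpus, whereas a real model learns far richer patterns by adjusting billions of connection strengths. But the generate-one-word-at-a-time loop is the same in spirit.

```python
from collections import Counter, defaultdict

# Toy "training data" standing in for the billions of documents a real
# model sees. Everything here is a hypothetical illustration.
corpus = (
    "the sky is blue because tiny particles scatter blue light . "
    "the sky is blue because of rayleigh scattering . "
    "the ocean is blue because water absorbs red light ."
).split()

# Learn a very crude pattern: for each word, count which words follow it.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def generate(prompt_word: str, length: int = 8) -> str:
    """Repeatedly predict the most common next word, starting from a prompt."""
    words = [prompt_word]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:  # no known continuation; stop early
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # -> "the sky is blue because tiny particles scatter blue"
```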

Released late last year, ChatGPT quickly captured the public imagination, raising the visibility of generative AI. More chatbots, such as Google’s Bard, followed. But amid the buzz, critics have warned of generative AI’s inaccuracies, biases and plagiarism (SN: 4/12/23). And in mid-November, Sam Altman, CEO of OpenAI, the company that developed ChatGPT and other generative AI models such as DALL-E 3, was fired and then rehired days later; as part of his return, most of the board that had ousted him stepped down. The upheaval sparked widespread discussion about the rush to commercialize generative AI without building in safeguards to ensure the technology doesn’t cause harm.

To understand how generative AI came to dominate headlines and what’s next, Science News spoke with Melanie Mitchell of the Santa Fe Institute, one of the world’s leading AI experts. This interview has been edited for length and clarity.

SN: Why was generative AI big this year?

Mitchell: We have had language models for many years. But the breakthrough with systems like ChatGPT is that they had much more training to be a dialogue partner and assistant. They were trained on much more data, and they had many more connections, on the order of billions to trillions. They were also presented to the public with a very easy-to-use interface. Those things were really what made them take off, and people were just amazed at how humanlike they seemed.

SN: Where do you think generative AI will have the greatest impact?

Mitchell: That’s still a big open question. I can put in a prompt to ChatGPT, say, “Please write an abstract for my paper that has these points in it,” and it will spit out an abstract that’s often pretty good. As an assistant, it is incredibly helpful. For generative images, systems can produce stock images. You can just say, “I need an image of a robot walking a dog,” and it will generate that. But these systems are not perfect. They make mistakes. They sometimes “hallucinate.” If I ask ChatGPT to write an essay on some topic and also to include some citations, sometimes it will make up citations that don’t exist. And it may also generate text that is just not true.
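The prompt-and-response workflow Mitchell describes can also be scripted rather than typed into a chat window. Below is a minimal sketch that assumes OpenAI’s official Python client (the openai package) and an API key set in the environment; the model name and prompt are illustrative placeholders, not a recommendation.

```python
# Minimal sketch of prompting a chatbot programmatically. Assumes the
# openai package is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any available chat model works
    messages=[
        {
            "role": "user",
            "content": "Please write a short abstract for a paper "
                       "making these points: X, Y, Z.",
        },
    ],
)

print(response.choices[0].message.content)
```

Given the hallucination problem Mitchell notes, any output from such a script, especially citations and factual claims, needs to be checked by a person before use.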

SN: Are there other concerns?

Mitchell: They require a lot of energy. They run in giant data centers with huge numbers of computers that need a lot of electricity and use a lot of water for cooling. So there is an environmental impact.

These systems have also been trained on human language, and human society has a lot of biases that get reflected in the language these systems have absorbed — racial, gender and other demographic biases.

There was an article recently that described how people were trying to get a text-to-image system to generate a picture of a Black doctor treating white children. And it was very hard to get it to generate that.

There are a lot of claims about these systems having certain capabilities in reasoning, like being able to solve math problems or pass standardized tests like the bar exam. We don’t really have a sense of how they are doing this reasoning, or whether that reasoning is robust. If you change the problem a little bit, will they still be able to solve it? It’s unclear whether these systems can generalize beyond what they have been trained on or whether they are just relying very heavily on the training data. That’s a big debate.

SN: What do you think about the hype?

Mitchell: People have to be aware that AI is a field that has tended to get hyped ever since its beginning in the 1950s, and they should be somewhat skeptical of claims. We have seen again and again that those claims are very much overblown. These are not humans. Even though they seem humanlike, they are different in many ways. People should see them as a tool to augment our human intelligence, not replace it — and make sure there’s a human in the loop rather than giving them too much autonomy.

SN: What implications might the recent upheaval at OpenAI have for the generative AI landscape?

Mitchell: [The upheaval] shows something that we already knew. There is a kind of polarization in the AI community, both in terms of research and in terms of commercial AI, about how we should think about AI safety — how fast these AI systems should be released to the public and what guardrails are necessary. I think it makes it very clear that we should not be relying on big companies in which power is concentrated right now to make these huge decisions about how AI systems should be safeguarded. We really do need independent people, for instance, government regulation or independent ethics boards, to have more power.

SN: What do you hope happens next?

Mitchell: We are in a bit of a state of uncertainty about what these systems are, what they can do and how they will evolve. I hope that we figure out some reasonable regulation that mitigates possible harms but doesn’t clamp down too hard on what could be a very beneficial technology.

About Ananya

Ananya is a freelance science writer, journalist and translator with a research background in robotics. She covers algorithms, robots, animals, oceans, urban science and the people involved in these fields.