Readers discuss cholesterol treatments and AI
Get low (cholesterol’s version)
An experimental genetic treatment called VERVE-101 can deactivate a cholesterol-raising gene in people with hypercholesterolemia, Meghan Rosen reported in “Base editing can lower cholesterol” (SN: 1/27/24, p. 8).
Rosen wrote that researchers are testing which dose of VERVE-101 is most effective. Given that the treatment edits a gene, reader Linda Ferrazzara wondered why the dose matters.
Too low a dose may mean that not enough VERVE-101 makes it to the liver, where it turns off the gene, Rosen says. If too few cells have the gene switched off, patients will not experience the drug’s cholesterol-lowering effects. If cholesterol levels remain high after an initial treatment, a second infusion of the drug may help, Rosen says. But the developers would prefer the treatment to work as a single dose.
Reader Jack Miller asked whether VERVE-101 affects germ cells, which give rise to sperm and egg cells.
In mice, scientists have found that most of the drug ends up in the liver, and none goes to the germ line, Rosen says. The offspring of treated mice are also unaffected by the drug. So if the children of treated patients also have hypercholesterolemia, those kids would still need their own treatment, she says.
AI etiquette
To develop better safeguards, scientists are studying how people have tricked AI chatbots into answering harmful questions that the bots have been trained to decline, such as how to build a dangerous weapon, Emily Conover reported in “Chatbots behaving badly” (SN: 1/27/24, p. 18).
Reader Linda Ferrazzara wondered if AI chatbots have been trained on languages other than English.
AI chatbots like ChatGPT are based on large language models, or LLMs, a type of generative AI typically trained on vast swaths of internet content. Many of the biggest, most capable LLMs right now are tailored to English speakers, Conover says. Although those LLMs have some ability to write in other languages, most of their training data is in English. But there are language models designed to use other languages, she says. Efforts so far have focused on languages that are widely spoken and written, and for which large amounts of training data are available, such as Mandarin.
Ferrazzara also asked if boosting computing power could help the bots better defend against trickery.
LLMs already use a lot of computing power, and that demand will only grow as people rely on them more, Conover says. And even if more power would make safeguards easier to build, greenhouse gas emissions linked to such energy-intensive calculations contribute to climate change, she says. “The time and energy needed to respond to a chatbot query is not something we want to overlook while waiting for computers to improve.”
Many of the defensive techniques described in the story screen incoming questions for harmful content. Reader Mike Speciner wondered whether filtering the chatbots’ responses instead would be easier.
Filters like this are already in place on some chatbots, Conover says. For example, Microsoft’s Bing AI tends to cut off its answers if it wades into forbidden territory. These filters are general-purpose rather than targeted at one kind of attack. “To avoid letting bad stuff slip through, they may cast too wide of a net, filtering out innocuous answers as well as dangerous ones and making the user’s experience worse,” Conover says. What’s more, an attacker who knows how the LLM’s self-filtering works may figure out a way to fool that filter.
Correction
“Saving lives with safe injection” incorrectly described Elizabeth Samuels of UCLA as an epidemiologist and emergency medicine physician (SN: 2/10/24, p. 16). She is an emergency and addiction medicine physician. That story also mistakenly stated that drug policy consultant Edward Krumpotich helped write the 2023 legislation in Minnesota that authorized funding for an overdose prevention center. He advocated for that legislation but did not help write it.