
Science communication isn’t just about translating research into simpler language or explaining unfamiliar concepts. It is, at its core, an “exchange of information and viewpoints” in science, as the National Academies of Sciences, Engineering, and Medicine defines it. That exchange plays out in public, where interpretations are presented, compared, and often left to compete for attention. Whether it’s securing the first expert quote or building a recognizable editorial identity, newsrooms are shaped by pressures to maintain audience trust, publishing speed, and credibility. But that landscape now includes a different kind of competitor: one that can generate explanations instantly and at scale. The race to explain science is no longer just between newsrooms; it’s between humans and algorithms.
I noticed this shift while posting an explainer about optical illusions, only to scroll past a dozen nearly identical ones. They followed the same structure, used the same examples, even the same stock image, appearing in my feed almost in unison. It wasn’t an isolated case. Across topics, explainers began to converge on the same formats and framings, many generated or assisted by AI systems. These AI explainers felt polished and nearly indistinguishable from human work, distilling complex topics like quantum mechanics into elementary-level explanations within seconds. The result wasn’t just speed, but uniformity. It felt like an environment where explanation itself had become standardized.
These patterns point toward a shift in how knowledge is produced and circulated. If explanation itself is no longer scarce, what remains uniquely valuable about the human science communicator?
The answer is judgment—not just voice and storytelling, but the ability to connect facts to real-world stakes, anticipate misunderstanding, and build trust over time.
Why Accuracy Isn’t Enough
Most people think “good” science communication is characterized by correct information that’s presented clearly for a general audience, but it’s more than a transfer of facts. In practice, accurate information is rarely fixed. Scientific findings are often probabilistic, evolving, or context-dependent, and the way they’re summarized can strip away that nuance. What gets presented as a clear takeaway may reflect interpretation as much as fact.
Science communication therefore isn’t just a transfer of facts. It involves engagement: conversations between writers, researchers, and the public, where audiences can ask questions, offer perspectives, and shape how information is understood.
The Communicator’s Role
This limitation is reflected in how the public perceives scientists themselves. A 2024 Pew Research Center survey found that 45% of U.S. adults describe scientists as good communicators, compared to 65% who view them as honest and 71% who see them as skilled. In other words, while the majority of the public recognizes scientific expertise and integrity, communication remains a relative weakness. A science communicator’s job is therefore not just to convey accurate information, but to bridge expertise and public understanding. They shape how information is framed, interpreted, and used, sparking readers’ curiosity to investigate further.
AI and the Limits of Explanation
What AI Can Do Well
AI’s strength lies in conveying accurate information. Large language models like ChatGPT have been trained on hundreds of billions of words and refined through extensive human feedback to improve accuracy.
A 2025 study of vascular patient education, in fact, demonstrated how LLM-generated materials improved understanding in a clinical setting: patients rated AI summaries of physician instructions as clearer than physician-written materials. By simplifying discharge summaries, these systems reduced linguistic complexity without degrading core medical information. Notably, though, the models served only as assistive drafting and simplification tools under human oversight, not as autonomous communicators.
If surface-level accuracy is no longer scarce, it can no longer define effective science communication.
Where AI Stops
This is where human judgment separates explanation from interpretation. AI excels at summarizing, but it can’t independently determine what uncertainty to preserve, what to emphasize, or how competing interpretations should be framed. It can generate clever analogies or humor, but not with the intentionality that makes a reader pause and reconsider an idea. Effective science communication depends on these choices: changing how audiences perceive the meaning of a concept, emphasizing the stakes of a new discovery, or fostering excitement for what they’re learning. These choices also sustain the kind of exchange the National Academies emphasizes: one where audiences aren’t just receiving information, but participating in how it’s understood.
The Responsibility of the Communicator
If AI exposes the limits of explanation, it also clarifies the fundamental responsibilities of human science communicators. We decide which findings, and which excerpts from papers, are relevant to the public and necessary for them to know. In doing so, we position audiences in relation to the science we’re explaining, encouraging them to contribute fresh perspectives or even participate in the process itself. At the same time, our work requires us to take responsibility for our tone, emphasis, and the implications of our words. Our audience absorbs not only the information we share, but also our way of engaging them; that is where the implied connection and trust between author and reader stem from.
Judgment in Practice
This kind of judgment isn’t abstract; it’s how some of the most influential science communicators shaped public understanding.
Carl Sagan, the renowned astronomer and science communicator, encouraged both wonder and skepticism when explaining the cosmos; he wanted his audience to use their own critical thinking to detect lies and pseudoscience in what they learned. An apple pie stood in for the chemical origins of our universe, and the iconic “Pale Blue Dot” photograph connected that cosmos to humanity’s shared responsibility. These weren’t just meant to be memorable; they were deliberate choices about how audiences should understand science and their place within it.
Jane Goodall, the world’s foremost expert on chimpanzees and another renowned science communicator, drew on the empathetic, storytelling side of science communication. Through the stories of individual chimpanzee families, even giving her subjects names, she showed the “interconnectedness” of animals, humans, and the environment. For broader audiences, this offered a window into the “why” behind her decades-long research, something that might not have resonated in a traditional article. More than presenting findings, she shaped how audiences interpreted their significance and their relationship to the natural world.
Sagan’s and Goodall’s influence came not from explaining more, but from deciding how science should be understood.
As explanations become easier to generate at scale, what distinguishes human communicators isn’t speed or clarity. It’s their ability to decide what matters, earn their readers’ trust, and situate those readers in a broader, more mindful context. AI can assist with access and efficiency, but human communicators remain responsible for using their judgment to convey uncertainty, nuance, and evolving evidence to the public.
Sources
- National Academies of Sciences, Engineering, and Medicine. Communicating Science Effectively: A Research Agenda. https://www.nationalacademies.org/read/23674/chapter/2
- Pew Research Center. Public Trust in Scientists and Views on Their Role in Policymaking (2024). https://www.pewresearch.org/science/2024/11/14/public-trust-in-scientists-and-views-on-their-role-in-policymaking/
- “Large language models improve readability of patient education materials on vascular conditions.” ScienceDirect (2025). https://www.sciencedirect.com/science/article/pii/S2949912725001357
