If Your Snark Be a Boojum
In The Hunting of the Snark, Lewis Carroll narrates the quest by nine men and a beaver for an elusive creature no one has ever seen. A Snark, the poem tells us, is known only through five ill-assorted characteristics: It is crispy when cooked, gets up late, can’t take a joke, has a fondness for bathing machines, and is ambitious. The Snark never appears in the poem, though one of the hunters mysteriously vanishes on meeting the Snark’s evil alter ego, a Boojum.
Trust in science has lately emerged as the Snark of American politics. Analysts as motley in their disciplinary identities as Carroll’s hunters devote endless energy to identifying the causes of a phenomenon that remains hard to pin down except through statistical means that notoriously create the very phenomena they claim to be studying. “The Strange New Politics of Science” (Issues, Spring 2025), by M. Anthony Mills and Price St. Clair, adds to this venerable genre.
Loss of confidence in science, the authors argue, has emerged as a new axis of polarization in America, especially since the COVID-19 pandemic. Not only have Republicans and Democrats changed places in their relative degrees of trust in science, but the disparity has grown more extreme. According to a Pew Research Center survey, Republicans “remain 22 percentage points less likely than Democrats to express a ‘great deal’ or a ‘fair amount’ of confidence in scientists.” This widening split, Mills and St. Clair conclude, threatens the legitimacy of government and destabilizes society as a whole.
Yet in spite of the best efforts of data collectors, the object at the center of their quest remains strangely undefined. It is unclear from the growing literature on trust in science what exactly the public disavows: trust in scientists, in expert claims, in specific bodies of knowledge, in the institution of science, or in the authority of scientists to steer public policy.
Turning to solutions, Mills and St. Clair rightly reject simplistic explanations for the rift between Republicans and Democrats, such as public ignorance or hostility to government, but their own proposals grasp the wrong end of the stick. Leaning on the work of the British sociologist Anthony Giddens, they suggest the problem lies in “abstract institutions” that have lost contact with the citizenry. Maybe so, but then “re-embedding” experts in relationships with their institutional clients may not be the right response.
Research shows that America, more than any other industrial society, tries to resolve political problems as if they were fundamentally scientific. It is hardly surprising, then, that when politics becomes intransigent, science also proves vulnerable. One can’t rely on re-embedded experts to paper over deep-seated political differences concerning the appropriate distribution of risks and benefits or the minimum levels of public support owed to citizens in a society.
Like many predecessors, Mills and St. Clair suggest that producing more buy-in to science will lead to a more stable society and politics. The evidence suggests to the contrary that a more trustworthy politics leads to more buy-in for expertise. If we lose sight of the quality of our politics, then our institutions, like Carroll’s unfortunate Baker, may “vanish away” when the Snark of skepticism toward science turns out to be the Boojum of a decaying democracy.
Sheila Jasanoff
Pforzheimer Professor of Science and Technology Studies
John F. Kennedy School of Government
Harvard University
M. Anthony Mills and Price St. Clair provide valuable insight for addressing poor public trust in science, but more attention is needed to how the structure of expertise itself contributes to the polarization that they document.
Research on self-reinforcing orientations to expertise by the communications scholar Benjamin Lyons gives reason to think Mills and St. Clair’s proposed solutions to public distrust of science may be structurally inadequate. Negative prior experiences create cognitive frameworks that make individuals more resistant to expert claims and more susceptible to counternarratives. Thus, subsequent improved expert-public interactions may not be enough to disembed that frame.
This insight challenges the authors’ optimism about rebuilding trust through improved communication and political diversity. Their historical comparison to the 1970s is instructive but incomplete; the reforms they praise (creation of the Office of Technology Assessment, increased oversight of research) succeeded not only because they improved expert-public communication, but because they restructured the relationship between expertise and democratic authority. These reforms acknowledged that expert legitimacy requires more than technical competence; it requires institutional mechanisms that make expertise accountable to democratic processes and values.
Lyons’s research suggests that current forms of scientific expertise may be systematically generating negative orientations to expertise among a significant portion of Americans. The professionalization processes that ensure technical quality—peer review, credentialing, disciplinary boundaries—also create experiences of exclusion and dismissal for those whose knowledge, values, and concerns fall outside professional jurisdiction. These experiences accumulate into stable orientations that condition how individuals interpret subsequent expert claims.
This legitimacy problem has been addressed through efforts to democratize science via public participation. But this creates an inescapable tension: Public participation chips away at what makes scientific knowledge special and what makes specialists most suited to make technical judgments in their areas of expertise. This paradox puts expert systems and democratic values into frequent conflict, since science cannot be populist, nor should it be elitist.
This crisis of public trust in science seems to require more substantial changes than better communications and diversity initiatives launched after the political attacks on DEI—diversity, equity, and inclusion—programs. We are challenged to rethink how expertise itself is organized and legitimized. This could include developing forms of “democratic expertise” that maintain technical rigor while systematically incorporating broader participation in defining problems and evaluating solutions.
In sum, rebuilding trust in science may require not just better ambassadors for existing forms of expertise, but fundamental reconsideration of how expertise operates in democratic societies where citizens hold diverse values and worldviews.
Maya J. Goldenberg
Professor, Department of Philosophy
University of Guelph, Ontario, Canada
I appreciated the article by M. Anthony Mills and Price St. Clair and the interview with Celinda Lake and Emily Garner, “Who’s Afraid to Share Science in Their Listserv?” in the Spring 2025 Issues. The articles’ reading of the current moment—that it is less about distrust in science and more about distrust of the institutions and elites that do science—resonated. My mother and I share guardianship of my sister, who has an intellectual disability. My experience with relevant research is rich, rewarding, and personal. I work in a university, so I can have coffee with experts, explore ideas, and ask questions. My mother served on an advisory board for the organization in which my sister lives and works—and where my mother’s experience of research was distant, impersonal, and disempowering. A new regulation would arrive, based on research to which she had no input, and she would have to do the work of figuring out how to comply.
As the articles suggest, my mom’s distrust of research isn’t a deficit problem: she doesn’t need someone to explain the research more clearly. It’s not an invisible hand problem: it won’t help to recount progress enabled by research. It is, as Mills and St. Clair describe, a “relational problem”: My mother didn’t get to know or interact with researchers, had no input to research agendas, never got asked about what she knew, and wasn’t part of translating research into policy.
Solving the relational problems means inviting people from all walks of life to interact with scientists, set research agendas, contribute their knowledge, and participate in science translation and application. We have lots of ideas for how to do this.
Upscaling citizen science and participatory governance would allow more people to collect and use data to make personal and civic decisions. Universities could expand their extension and clinical models so that any city or community-based organization, no matter how small or far away, had a research partner. Community liaisons who listen for research ideas, open calls for topics and ideas, and community-designed requests for proposals could help drive new research agendas. Simplified application and reporting processes would open research to new organizations and free up time for collaboration. Participatory budgeting and expanded review panels would allow people from all walks of life to help make funding decisions. User-friendly open-access science publications and new use-focused products would give more people access to the research they are supporting. Meta-research could show how to maximize the benefits of research for everyone. Patients’ rights approaches give us a model for expanding participation on regulatory and advisory boards. Expanded scientific fellowships could offer nonpartisan and responsive research support to judges, juries, and lawmakers.
There is no better time to explore these and other approaches. Recognizing and responding to the current distrust of institutions and elites compels us to scale them up in ways that support every American’s ability to guide, participate in, and benefit from research.
Rajul Pandya
Executive Director and Professor of Practice
Mary Lou Fulton College for Teaching and Learning Innovation
Arizona State University
Some 50 years ago, the German chancellor Helmut Schmidt reminded scientists of their Bringschuld to society—their obligation to continuously justify public investments in the research enterprise. But as M. Anthony Mills and Price St. Clair remind us yet again, delivering on this obligation still proves challenging for scientists as a social group whose politics, religious views, and social values do not align with those of significant proportions of the electorate.
Public skepticism about extended school closings and mask mandates during the COVID-19 pandemic and the perceived role that science played in informing those policy interventions has not made things any easier. Moreover, the political and scientific dynamics emerging from the pandemic have prompted only limited introspection within the scientific community about our own contributions to eroding science-public interfaces and about the questions we must grapple with as science hopes to (re)build public trust.
First, how much trust in science is democratically desirable? Of course, enlightened, liberal democracies would not be sustainable with large portions of their electorate rejecting the idea that science is society’s best tool for systematically creating, curating, and communicating knowledge. At the same time, societal endorsement of various scientific breakthroughs without careful public reflection on their potentially disruptive impacts is equally undesirable for any democratic system.
Second, has science done what it needs to do to deserve widespread public trust? At the height of the pandemic, scientists faced the unenviable task of correcting information they knew to be wrong with emerging findings that they were unsure would hold up to future scrutiny. When faced with public pushback, the scientific community failed to meaningfully engage the public on the inevitable uncertainties surrounding research that was being conducted and published—literally and figuratively—at warp speed. Instead, many in the scientific community shifted the blame by doubling down on “just follow the science” mantras and blaming public “anti-science” sentiment simply on widespread misinformation or a lack of trust in experts.
This highlights an important lesson: Public debates about emerging science have less to do with the technical merits or risks of the science itself than with the social values its applications are perceived to be in conflict with. Public trust, as a result, will increasingly be driven by the willingness of scientists to engage with the values-based concerns that different publics see as most pressing, rather than the probabilistic or technical questions that scientists might see as most relevant. Should we as a society continue pushing for artificial general intelligence, for instance, or approve editing the human germline for widespread therapeutic use? These questions cannot be conclusively answered by science alone.
Nor should they. Public decisionmaking about these questions requires careful weighing of scientific, moral, political, and social considerations. As a result, the more scientists insist on being able to provide authoritative recommendations on what are, in essence, value or policy disagreements, the more corrosive their public communications will be to public trust in science as an institution. We cannot prioritize being right over being heard, nor ideological righteousness over political compromise. Trust is the currency in which we would pay for those mistakes.
Dietram A. Scheufele
Morgridge Institute for Research
Madison, Wisconsin
M. Anthony Mills and Price St. Clair detail the complex relationship between partisanship and trust in science and argue for the importance of diversifying scientific organizations along multiple dimensions. They note that although scientists are aware of the underrepresentation of women and people of color within their ranks, efforts to broaden participation of religious or politically conservative people in science are largely absent.
Interestingly, women and people of color tend to be more religious but also more politically liberal than are men and white people. Recruiting women and people of color into science may therefore increase the proportion of religious individuals while simultaneously decreasing the proportion of conservatives. This is important because religious and conservative people make up substantial shares of the US population, and if they remain underrepresented in science, scientific organizations will continue to be dominated by people and groups seen as espousing values that diverge from the public’s values.
Scientists are more liberal and less religious than the average American. Thus, in the short term, any attempt to explicitly recruit religious or conservative individuals into scientific fields may be met with resistance from scientists themselves. After all, many scientists believe that religious and conservative people are “anti-science,” a belief that has the capacity to foster a hostile environment for these groups. Why engage with science when members of one’s group are seen as uninterested or even adversarial toward science? Given this context, it is unsurprising that religious and conservative individuals shy away from scientific fields.
We, as social scientists who study religion, are acutely aware of the skepticism many scientists hold toward religion research and religious individuals in science. Mills and St. Clair acknowledge that the data on public trust in science are more nuanced than commonly portrayed, and we encourage the scientific community to attend to these nuances, rather than envision religious and conservative individuals as monoliths. For instance, our research has shown that most religious individuals (Christians in particular) don’t see a conflict between religion and science. It is the non-religious who are most likely to believe religion and science are in conflict and to favor science. We expect that many scientists—a highly secular group—hold these beliefs. Furthermore, although trust in scientists is dropping among conservatives, it is still rather high compared with trust in other individuals embedded within elite institutions (e.g., politicians). Informing scientists of their own biases, if done strategically, could reduce their antipathy toward religious people and conservatives.
Once scientists feel comfortable recruiting religious and conservative individuals into scientific fields, they may begin by explicitly mentioning “religion” (and in some cases “ideology”) as forms of diversity in science job advertisements, similar to how gender, racial, and ethnic diversity are often mentioned. Doing so may provide a signal to religious and conservative individuals that their identities are welcomed. Science should not be closed to anyone, even those whose values we may not understand or endorse.
Cameron D. Mackey
Postdoctoral Researcher, University of Wisconsin-Madison
Kimberly Rios
Professor of Psychology, University of Illinois Urbana-Champaign
We should be more strategic and precise when discussing trust in science. Becoming more intentional can help us see opportunities to demonstrate the scientific community’s trustworthiness, bolstering society’s willingness to turn to science when making decisions and to trust science-informed advice. Being precise about trust and trustworthiness can also help us identify areas where the scientific community can become more worthy of trust.
The science communication literature does not typically discuss the four items that Celinda Lake and Emily Garner identify as dimensions of trustworthiness. Some, such as efficacy, are distinct variables in models used to understand people’s willingness to engage in science-related behaviors (e.g., Ajzen’s Theory of Planned Behavior). Instead, the most common way to think about trust is to distinguish between the behavior of putting one’s trust in someone else (i.e., behavioral trust as a willingness to make oneself vulnerable) and the trustworthiness beliefs (i.e., perceptions) that make such behavior more likely. These beliefs typically include beliefs about a trustee’s ability, benevolent motivations, and integrity. Some recent discussions have suggested that beliefs about openness, including a willingness both to listen and to share, may also matter. These sub-dimensions go by different names in various literatures, but the underlying concepts are similar. For example, researchers who focus on credibility suggest that people will trust a source if they see it as having expertise, goodwill, and honesty.
We agree with Lake and Garner that a prevailing “us vs. them” feeling (e.g., “In this house, we believe in science” yard signs) impacts trust in science. We also agree with concerns about the entanglement between (political) ideology and trust: that science has become a vehicle for asserting political agendas. Our concern is that scientists and their universities, at least in the United States, have not helped matters. Many have engaged in what some refer to as “cultural missionary work” that has predictably alienated large segments of American society by demonstrating that scientists often don’t seem to care about their everyday challenges and are unwilling to listen to their concerns. We critically need to reimagine higher education so that it serves everyone in society.
Universities should be safe environments in which people learn to engage respectfully with each other and with our diverse, often contradictory ideas. In doing so, we learn to manage, debate, and discuss the fluidity of evidence and the uncertainty inherent in science. In other words, we learn to live in and cultivate a civil society that respects difference and diversity. As scientists, we can’t control how others talk about or treat us. However, what we can do is make deliberate choices about how we behave. We therefore need to behave in ways that demonstrate trustworthiness, including the degree to which we care about others’ needs, work hard to protect the integrity of science, and listen to others’ concerns. And it is imperative that we do so now.
Sara K. Yeo
Associate Professor, University of Utah
John C. Besley
Ellis N. Brandt Professor of Public Relations, Michigan State University