While political figures and the general public debate whether artificial intelligence is a threat to societal values, a more practical and urgent reality is unfolding: AI is already being integrated into the fabric of daily life.
The current discourse, exemplified by recent warnings from figures like Senator Bernie Sanders, focuses heavily on the risks of AI, from job displacement to misinformation. However, this fear-based framing risks creating a dangerous paralysis. For critical sectors like public health, the real danger isn’t the technology itself but the decision to “sit it out.”
The Paradox of Adoption and Trust
There is a striking contradiction in how Americans interact with AI. While skepticism is high, usage is widespread:
– Widespread Use: Over half of Americans use AI for research, writing, and professional analysis.
– Low Trust: Only about one in five people report trusting AI-generated information most of the time.
This suggests that we are not rejecting the technology; rather, we are experiencing “adoption with hesitation.” If this hesitation is not managed through active engagement, it risks hardening into total disengagement, ceding the most important decisions to those who do not share public health’s ethical or safety priorities.
The Risk of Passive Inheritance
In the field of public health, caution is a virtue. The stakes involve sensitive data and human lives. However, there is a fine line between being cautious and being avoidant.
While public health professionals debate the abstract ethics of AI, other sectors are already implementing it to drive decision-making and information delivery. If the public health sector waits for absolute certainty before acting, it will lose its ability to shape the technology. Instead of leading, these professionals will be forced to inherit systems they did not design.
AI as a Tool for Extension, Not Replacement
AI is already performing tasks that public health agencies often struggle to scale. It is not a replacement for human expertise, but an extension of it. Current applications include:
– Simplifying Communication: Translating complex medical guidance into plain, accessible language.
– Audience Adaptation: Tailoring public health messages for diverse demographic groups.
– Rapid Response: Generating initial drafts and communications during fast-moving health crises.
– Pattern Recognition: Identifying trends in public feedback that human analysts might miss.
In an industry that is chronically under-resourced, these capabilities offer a way to amplify the impact of existing staff.
Guardrails vs. Walls: A Strategic Distinction
The debate often gets stuck on whether to regulate or reject AI. To move forward, we must distinguish between two different approaches:
– Building Guardrails: Establishing rules for human oversight, data privacy, and scientific integrity. This is what agencies like the CDC are beginning to do: moving from studying AI to responsibly using it.
– Building Walls: Creating barriers that delay engagement entirely.
The goal should be to build guardrails, not walls. Guardrails define how a technology can be used safely; walls simply ensure that by the time you are ready to enter, the rules have already been written by someone else.
Addressing the Human Element: Jobs and Training
Seventy percent of Americans fear that AI will reduce job opportunities. While this concern is legitimate, history shows that new tools tend to reshape work rather than simply eliminate it.
The critical question for leadership is not whether AI will change jobs, but how the workforce is being prepared. Are agencies investing in training? Is there space for experimentation? Or is the institutional culture signaling that it is “safer” to ignore the technology?
The choice for public health is not between acceptance and rejection, but between shaping the technology and being forced to adjust to it later.
Conclusion
The public health profession stands at a crossroads. Rather than allowing fear to dictate a policy of avoidance, leaders must move toward active, responsible engagement. By helping to define the ethical and practical boundaries of AI now, they can ensure that the technology serves the public good rather than dictating its terms.
