When AI Meets the Human Heart: Lessons from the Cath Lab

A Personal Journey Through the Promise and Peril of AI in Healthcare


Part One: The Patient's Perspective

I've spent a significant portion of my career advocating for the ethical adoption of artificial intelligence in professional settings. I've written books, delivered keynote speeches, and counseled healthcare associations on navigating the AI revolution. I've built frameworks, developed policies, and championed the transformative potential of these technologies.

And then, a few weeks ago, I became a data point in this discussion.

Lying on the procedure table as a team prepared to perform pulse field ablation (PFA) to treat my atrial fibrillation, I experienced a profound shift in perspective. The theoretical became intensely personal. The frameworks became flesh. And the ethical questions I'd posed from conference stages suddenly mattered in ways that made my heart—both literally and figuratively—race.

This is the story of what I learned when AI stopped being something I studied and became something that studied me. It's about the extraordinary promise of artificial intelligence in healthcare, the very real pitfalls we must navigate, and why your association—whether you serve physicians, hospitals, or healthcare professionals—needs to think deeply about these issues right now.

The Reality Check

Let me be clear from the start: I'm alive and healthy, and my procedure was extremely successful. AI-enhanced technologies likely played a role in that positive outcome. This isn't a cautionary tale about technology gone wrong. Rather, it's a reflection on what happens when we bring our most powerful computational tools to bear on our most vulnerable human moments—and what that means for the future of healthcare delivery.


Part Two: Understanding the Technology (Without the Jargon)

Before we dive into the ethical deep end, let's establish what we're actually talking about. Pulse field ablation is a relatively new technique for treating irregular heart rhythms. Unlike older methods that use heat or cold to destroy problematic heart tissue, PFA uses precisely controlled electrical pulses to target only the cells that need treatment while largely sparing surrounding structures.

Think of it like the difference between using a flamethrower and a laser pointer. Both can get the job done, but one is considerably more selective.

Here's where AI enters the picture—and this is crucial for understanding both its promise and its limitations:

AI isn't performing the procedure. A highly trained electrophysiologist is. But AI is increasingly serving as a sophisticated assistant, helping in several key ways:

  1. Pre-procedure Planning: Machine learning algorithms analyze my cardiac CT scans, identifying subtle anatomical features that might predict how successful the ablation will be. These aren't obvious to the human eye—they're patterns in data points, vessel geometries, and tissue characteristics that only emerge when you can process thousands of similar cases simultaneously.
  2. Real-time Guidance: During the procedure, AI systems can analyze the electrical signals from inside my heart, helping identify the precise locations that need treatment. One system I learned about, Volta Medical's AI platform, uses machine learning trained on thousands of electrogram patterns to spot problematic tissue in real time (a simplified sketch of the underlying idea follows this list).
  3. Risk Prediction: Before my doctor even scheduled the procedure, predictive models had helped assess my likelihood of success and potential complications. These models consider everything from my age and medical history to the specific shape of my left atrium.
  4. Procedural Safety: AI-assisted imaging helps ensure the catheter is exactly where it needs to be, reducing radiation exposure and potentially catching complications before they become serious.
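
To make the real-time guidance idea concrete, here is a deliberately simplified sketch of scoring electrogram windows with a trained classifier. Everything in it, from the features to the threshold, is invented for illustration; this is not Volta Medical's system or any vendor's actual code, just the general shape of the technique.

```python
# Illustrative sketch only -- not any vendor's actual system or API.
# The idea: score consecutive windows of an intracardiac signal with a trained
# classifier and flag the ones it thinks look like problematic tissue.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(window: np.ndarray) -> np.ndarray:
    """Toy feature vector for one electrogram window (features are invented)."""
    return np.array([
        window.std(),                    # amplitude variability
        np.abs(np.diff(window)).mean(),  # average slope, a crude fractionation proxy
        (np.abs(window) > 0.5).mean(),   # fraction of high-amplitude samples
    ])

def flag_windows(signal: np.ndarray, model, window_size: int = 256,
                 threshold: float = 0.8) -> list[int]:
    """Return start indices of windows the model scores above the threshold."""
    flagged = []
    for start in range(0, len(signal) - window_size, window_size):
        features = extract_features(signal[start:start + window_size])
        probability = model.predict_proba(features.reshape(1, -1))[0, 1]
        if probability >= threshold:
            flagged.append(start)
    return flagged

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for labeled training windows; real systems train on
    # thousands of annotated electrograms reviewed by electrophysiologists.
    X_train = rng.normal(size=(200, 3))
    y_train = (X_train[:, 1] > 0).astype(int)
    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
    print(len(flag_windows(rng.normal(size=5000), model)), "windows flagged for review")
```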

This is sophisticated stuff. And here's the thing that struck me as both exciting and unsettling: I don't fully know which of these AI systems were used in my care, how they influenced clinical decisions, or how they were validated.

And that's a problem.


Part Three: The Promise That Keeps Me Optimistic

Despite my concerns—and we'll get to those—I remain deeply optimistic about AI's role in healthcare. The research is compelling, and the potential benefits are enormous.

Democratizing Expertise

The TAILORED-AF trial demonstrated something remarkable: when AI systems guide ablation procedures, outcomes become more consistent across different operators and medical centers. Think about what this means. The expertise that once resided solely in the hands and minds of a few elite electrophysiologists can now be, to some degree, democratized.

This matters profoundly for healthcare equity. If you live in the shadow of a major academic medical center, you have access to cutting-edge expertise. If you're in a rural community or an underserved urban area, your options may be more limited. AI won't solve this problem entirely, but it can narrow the expertise gap.

Personalizing Treatment

Here's what traditional medicine offered: You have atrial fibrillation. We'll try treatment A. If that doesn't work, we'll try treatment B. If that fails, maybe treatment C.

Here's what AI-enhanced medicine is beginning to offer: Based on your specific anatomy, your unique electrical patterns, and data from thousands of similar patients, we predict treatment A has a 73% success rate for you, while treatment B has only a 45% chance of working. Let's start with A.

The difference is moving from population averages to individual predictions. It's the difference between a weather forecast for "the Midwest" and one for your specific zip code.
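
To show the mechanics of that shift, here is a toy sketch that contrasts a cohort-wide average with a per-patient estimate from a simple model. The features, outcomes, and numbers are all invented; real clinical models are far more sophisticated and far more carefully validated.

```python
# Toy illustration only -- invented features, invented outcomes, invented numbers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical historical cohort: [age, left-atrium diameter (mm), years with AF]
X = np.column_stack([
    rng.normal(65, 10, 1000),
    rng.normal(42, 6, 1000),
    rng.exponential(3, 1000),
])
# Synthetic outcomes: success is more likely with a smaller atrium and a shorter AF history.
logit = 2.0 - 0.15 * (X[:, 1] - 42) - 0.4 * X[:, 2]
success = (rng.random(1000) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, success)

patient = np.array([[58, 39.0, 1.5]])  # one specific (fictional) patient
print(f"Population-average success rate: {success.mean():.0%}")
print(f"Model's estimate for this patient: {model.predict_proba(patient)[0, 1]:.0%}")
```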

Reducing Complications

One of the most compelling aspects of combining AI with PFA is safety. Traditional ablation techniques carried small but real risks of serious complications: damage to the esophagus, injury to the phrenic nerve that helps you breathe, even stroke. PFA's tissue selectivity addresses much of this, but AI adds another layer of protection.

Real-time monitoring systems can detect when something is going wrong—changes in electrical patterns, catheter positioning issues, tissue heating—often before the human operator notices. In the high-stakes environment of cardiac procedures, milliseconds matter.

A recent study showed AI-guided ablation achieved excellent outcomes with zero periprocedural complications. Zero. That's the kind of number that makes this AI evangelist's heart sing—in normal sinus rhythm, naturally.


Part Four: The Pitfalls We Cannot Ignore

Now let's talk about what keeps me up at night—and what should concern every healthcare association leader reading this.

The Black Box Problem

During my pre-procedure consultation, my electrophysiologist explained the risks and benefits thoroughly. But here's what didn't happen: No one explained which AI systems would be involved in my care, how they worked, what data they were trained on, or how much weight they carried in clinical decision-making.

This isn't a criticism of my medical team—they were exceptional. It's a systems problem. We haven't yet established standards for what patients should know about AI involvement in their care.

Consider this scenario: An AI system recommends a particular ablation strategy based on patterns it detected in my imaging. The physician follows this recommendation. The outcome is poor. Who's responsible? The physician who followed the AI's guidance? The company that created the algorithm? The hospital that purchased the system?

These aren't hypothetical questions. They're already emerging in malpractice cases across the country.

The Validation Gap

Here's an uncomfortable truth: Many of the AI systems being used in healthcare today have been validated on relatively small, homogeneous populations. A system trained primarily on data from Asian patients may not perform equally well on European patients. A model developed using data from academic medical centers may not translate well to community hospitals.

In my research for this piece, I found that 61% of published studies on AI in cardiac ablation showed high risk of bias due to lack of external validation. That's a sobering statistic when we're talking about tools that influence life-or-death decisions.

The diversity problem in AI training data is well-documented in other domains—facial recognition systems that perform poorly on darker skin tones, voice recognition that struggles with certain accents. In healthcare, these disparities can translate directly into health inequities.

The Interpretability Challenge

When an AI system recommends a particular course of treatment, can it explain why? Not always. Many advanced machine learning models are essentially black boxes—they produce accurate predictions, but they can't tell you the reasoning behind them.

For physicians trained to explain their clinical reasoning, this creates a real tension. How do you explain to a patient, "The computer says you're high risk for recurrence, but I can't tell you exactly why"?

Some newer "explainable AI" approaches are trying to solve this, but we're not there yet. And in the meantime, we're asking patients to trust systems that even their doctors don't fully understand.
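
To give a flavor of what one of these approaches looks like, here is a minimal sketch of permutation importance: you ask how much a model's accuracy on held-out cases drops when each input is shuffled. The features and data below are invented, and genuine clinical explainability demands far more rigor than this.

```python
# Minimal sketch of one post-hoc explanation technique (permutation importance).
# Feature names and data are invented; this is nowhere near clinical-grade explainability.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
feature_names = ["age", "atrium_diameter", "af_duration_years", "bmi"]
X = rng.normal(size=(800, 4))
# Synthetic outcome driven mostly by two of the four features.
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=800) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# How much does held-out accuracy drop when each feature is shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, drop in sorted(zip(feature_names, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name:>20}: accuracy drop {drop:.3f}")
```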

The Economic Pressures

Let's be frank about the business model here. AI systems in healthcare are expensive. Companies investing millions in development need to recoup those costs. This creates pressure to adopt, to integrate, to scale—sometimes before we've fully worked out the kinks.

Healthcare associations face a particular challenge here: Your members are being pitched AI solutions constantly. How do you help them separate genuine innovation from expensive gadgets? How do you ensure that adoption is driven by evidence rather than fear of being left behind?


Part Five: The Questions Your Association Should Be Asking

If you're leading a healthcare association, here are the critical questions you should be wrestling with right now:

1. Transparency and Disclosure

The Question: What should patients know about AI involvement in their care?

Why It Matters: In my case, I had to do my own research to understand the role AI might play in my procedure. That's backwards. As an informed, engaged patient with technical expertise, I could do this. Most patients can't and shouldn't have to.

Action for Associations: Develop model disclosure policies. Create patient education materials that explain AI in healthcare without requiring a computer science degree. Advocate for regulatory requirements around AI transparency.

2. Validation Standards

The Question: What level of evidence should be required before AI systems are used in clinical care?

Why It Matters: We don't let pharmaceutical companies market drugs without rigorous clinical trials. We shouldn't apply a lower standard to AI systems that influence patient outcomes.

Action for Associations: Work with standard-setting bodies to establish evidence requirements. Create frameworks for evaluating AI systems. Provide your members with tools to assess vendor claims critically.

3. Equity and Access

The Question: How do we ensure AI benefits all patients, not just those at well-resourced institutions?

Why It Matters: If AI-enhanced care becomes the gold standard but is only available at academic medical centers, we've created a two-tiered system. That's antithetical to healthcare's mission.

Action for Associations: Advocate for policies that promote equitable access. Support research on AI performance across diverse populations. Push for open-source alternatives where appropriate.

4. Professional Training

The Question: How do we prepare healthcare professionals to work effectively with AI?

Why It Matters: The physicians coming out of training today will practice in an AI-saturated environment. Are we teaching them to be thoughtful collaborators with these systems, or just consumers of them?

Action for Associations: Integrate AI literacy into continuing education requirements. Create competency frameworks. Develop case studies that explore both successful and unsuccessful AI integration.

5. Liability and Accountability

The Question: When AI-influenced care leads to poor outcomes, who's responsible?

Why It Matters: Uncertainty around liability can either accelerate inappropriate AI adoption (physicians deferring responsibility to algorithms) or slow beneficial adoption (physicians afraid of malpractice liability).

Action for Associations: Engage with malpractice carriers to develop clear guidelines. Advocate for legal frameworks that appropriately allocate responsibility. Create documentation standards for AI-assisted care.


Part Six: A Framework for Ethical AI Adoption

In my book "Ethical AI for Associations: Leading with Integrity in the Digital Age," I introduced the SCALE framework for AI implementation. Let me show you how it applies specifically to healthcare AI:

S - Stakeholder Alignment

In Practice: Before implementing AI in clinical care, bring together physicians, nurses, administrators, patients, and ethicists. Not in separate meetings, but in the same room. What are the goals? What are the concerns? Where do interests align and diverge?

I've seen too many AI implementations fail because administrators chose the technology to cut costs, physicians saw it as a threat to their clinical autonomy, and no one asked patients what they valued.

C - Capability Assessment

In Practice: Be brutally honest about organizational readiness. Do you have:

  • The data infrastructure to support AI systems?
  • The technical expertise to evaluate vendor claims?
  • The change management capacity to shift clinical workflows?
  • The training resources to bring staff along?

One of the most expensive mistakes in AI adoption is buying sophisticated systems your organization isn't ready to use effectively.

A - Agile Implementation

In Practice: Start small. Pilot. Measure. Learn. Adjust. Repeat.

Don't announce that AI will revolutionize your entire cardiology department by next quarter. Pick one specific use case. Implement it carefully. Study what works and what doesn't. Build organizational learning before scaling.

In my research on AI in pulse field ablation, I found that the most successful implementations were incremental: they added AI-assisted imaging first, mastered that, then added predictive analytics, mastered that, then added real-time guidance.

L - Learning Culture

In Practice: Create environments where it's safe to say "the AI got this wrong" or "I don't understand why the system recommended this."

The worst thing that can happen is physicians silently overriding AI recommendations without documenting why, or blindly following them without critical thinking. Both happen when organizations punish questions and doubt.

E - Ethics and Governance

In Practice: Establish AI governance committees that review systems before procurement, during implementation, and continuously during use. Make ethics a living practice, not a checkbox.

Ask questions like:

  • Does this system exacerbate existing health disparities?
  • Are we getting meaningful informed consent?
  • Do we have the ability to audit this system's decisions? (A minimal sketch of what such an audit record might capture follows this list.)
  • What happens when the AI and human judgment conflict?
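
To make the audit question concrete, here is a hypothetical sketch of the minimal fields an AI-assisted decision record might capture. The field names are mine, not a standard; actual documentation requirements should come from your governance committee, legal counsel, and clinical teams.

```python
# Hypothetical sketch of a minimal audit record for an AI-assisted decision.
# Field names are illustrative only, not drawn from any existing standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    patient_id: str                 # link to the chart, not raw PHI in logs
    model_name: str
    model_version: str
    model_recommendation: str
    recommendation_confidence: float
    clinician_decision: str
    clinician_override: bool
    override_rationale: str = ""
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

record = AIDecisionRecord(
    patient_id="chart-12345",
    model_name="ablation-target-advisor",   # hypothetical system name
    model_version="2.3.1",
    model_recommendation="ablate posterior wall sites 4 and 7",
    recommendation_confidence=0.82,
    clinician_decision="ablated site 4 only",
    clinician_override=True,
    override_rationale="site 7 adjacent to esophagus; deferred pending imaging",
)
print(record)
```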

Part Seven: Personal Reflections and Looking Forward

Lying on that procedure table, I had a moment of profound cognitive dissonance. Here I was, an AI evangelist who has spent years encouraging associations to embrace these technologies, suddenly very aware of how little control I had over the AI systems involved in my own care.

That didn't make me anti-AI. If anything, it reinforced my conviction that we need more thoughtful, ethical, transparent AI adoption—not less AI, but better AI implementation.

The promise is real. AI-enhanced pulse field ablation has the potential to:

  • Reduce complications through better patient selection and real-time guidance
  • Improve success rates by personalizing treatment strategies
  • Democratize expertise so that geography matters less in healthcare access
  • Accelerate medical knowledge by learning from every procedure

But the pitfalls are equally real:

  • Black box decision-making that erodes trust and accountability
  • Validation gaps that may perpetuate health inequities
  • Economic pressures that prioritize speed over safety
  • Professional practice changes that outpace our ethical frameworks

The Path Forward

For healthcare associations, this moment demands leadership. Your members are navigating this transition right now. They need:

Education: Not just "what is AI" but "how do we evaluate AI systems," "what are our ethical obligations," and "how do we maintain professional judgment while leveraging computational power."

Standards: Clear, evidence-based guidelines for AI adoption. What level of validation is sufficient? What documentation is required? What disclosure is appropriate?

Advocacy: Someone needs to represent the interests of healthcare professionals and patients in the regulatory discussions happening right now. That someone should be you.

Community: Forums where your members can share experiences, both successes and failures, without fear of judgment. The learning happens in the honest conversations about what didn't work.

What I Tell My Cardiologist

Since my procedure, I've had several follow-up conversations with my electrophysiologist. He's fascinated by my interest in the AI aspects of my care. I told him what I'm telling you:

"I'm grateful for whatever role AI played in my successful outcome. But I also want you to know that as these systems become more sophisticated, your patients—and the broader public—will have increasing expectations around transparency, validation, and ethical use. The associations that represent you will need to lead on these issues."

He nodded thoughtfully and said, "That's fair. We're all learning as we go."

And that's the thing: We are all learning. The technology is evolving faster than our ethical frameworks, faster than our regulatory structures, faster than our professional training programs.

But we're not powerless. We can choose to be intentional, thoughtful, and ethical in how we integrate AI into healthcare. We can insist on transparency. We can demand evidence. We can prioritize equity. We can maintain human judgment at the center of care.

An Invitation

I'm sharing my story because I believe transparency builds trust, and trust is essential for the kind of thoughtful AI adoption healthcare needs. I'm optimistic about where we're headed, but that optimism is conditional on us getting this right.

To my colleagues in healthcare associations: You have a crucial role to play. Your members are counting on you to help them navigate this transformation with integrity.

To the physicians reading this: Your clinical judgment remains irreplaceable. AI is a tool to enhance that judgment, not replace it. Don't abdicate your professional responsibility, and don't be afraid to question the algorithms.

To the AI developers: We need your innovation, but we also need your humility. Complex biological systems have a way of humbling even the most sophisticated models. Build with transparency. Validate rigorously. Partner with clinicians and patients, not just hospital IT departments.

And to patients: You have the right to understand what role AI plays in your care. Don't be afraid to ask. In fact, please do ask. It's the only way we'll build the kind of transparency that makes trustworthy AI possible.


Epilogue: The Human at the Center

Six weeks post-procedure, my heart is beating in normal rhythm. The AI systems that supported my care did their job. My physician did his job brilliantly. The technology and the human expertise worked in concert, which is exactly how it should be.

But here's what I'll remember most: The moment before the procedure when my electrophysiologist sat down, made eye contact, and said, "I know you understand the statistics and the technologies involved. But I want you to know that I'm going to take care of you. The computer is just a tool. You're the patient, and you have my full attention."

In that moment, I wasn't a data point. I was a person. And that's the future of AI in healthcare I'm working toward—one where technology amplifies human expertise and compassion, rather than replacing it.

The algorithms can process millions of data points. They can identify patterns invisible to human perception. They can predict outcomes with remarkable accuracy.

But they can't hold my hand. They can't reassure me that I'm going to be okay. They can't bring wisdom, compassion, and decades of experience to bear on the thousand little judgment calls that make the difference between adequate care and excellent care.

That's why the most important question isn't "Will AI transform healthcare?" It will. The question is: "How do we ensure that transformation serves human flourishing?"

I believe we can get this right. But it requires all of us—patients, physicians, technologists, ethicists, and association leaders—to engage with both the promise and the peril, to demand better, and to keep the human at the center of the equation.

My heart is back in rhythm. Now let's make sure our approach to healthcare AI stays in rhythm too—with our values, our ethics, and our profound responsibility to the people we serve.


Rick Bawcum, CAE, CISSP, AAiP, is CEO and Founder of Cimatri and author of "Ethical AI for Associations: Leading with Integrity in the Digital Age." He recently underwent pulse field ablation for atrial fibrillation and is apparently using his recovery time to write excessively long blog posts about the experience.


Discussion Questions for Your Association Board:

  1. Does our organization have clear policies around AI transparency with our members/patients?
  2. What validation standards should we advocate for in AI healthcare systems?
  3. How are we preparing our members to work effectively with AI technologies?
  4. Are we creating spaces for honest conversations about AI failures and challenges, or just celebrating successes?
  5. What role should our association play in shaping AI regulation and policy in our field?

Resources for Further Reading:

  • TAILORED-AF Trial Results (Nature Medicine, 2025)
  • "Digital Twin Models in Atrial Fibrillation" (Journal of Personalized Medicine, 2025)
  • MANIFEST-17K Safety Study (Nature Medicine, 2024)
  • "Ethical AI for Associations" by Rick Bawcum
