Research Roundtables


Overview

Date: Friday, June 27, 2025
Time: 11:10 AM – 12:30 PM

Research Roundtables are dynamic and collaborative sessions designed to foster meaningful dialogue and generate innovative ideas in a focused, small-group setting. Each roundtable brings together a diverse mix of participants—ranging from researchers, clinicians, and industry professionals to policy experts and community leaders—to discuss critical topics at the intersection of machine learning and health.

Guided by moderators, the sessions encourage open exchange of perspectives, rigorous exploration of challenges, and the co-creation of actionable solutions. The format emphasizes inclusivity and engagement, ensuring every voice is heard, while promoting an environment of respect and intellectual curiosity. The outcomes of these discussions often inform future initiatives, white papers, or collaborative projects.

How to Participate

We will host in-person roundtables on Friday, June 27th. The session will be split into two 30-minute parts, with a 10-minute transition break between them so participants can move to a different roundtable. Participants are encouraged to pick at most two roundtables they would like to join and engage in the discussion. To avoid disrupting conversations, participants should only move between roundtables during the break.

Topics


1. Explainability, Interpretability, and Transparency

Senior roundtable leader: Ashery Christopher Mbilinyi
Junior roundtable leader: Chanwoo Kim

  • Why do you think explainability, interpretability, and transparency are important in healthcare ML? What led you to join this discussion? Do any specific experiences or thoughts come to mind?
  • Who benefits most from explainable AI in healthcare, and how do their needs differ (clinicians, patients, developers, regulatory authorities)? What constitutes a ‘good explanation’ for each stakeholder group?
  • What kinds of explanation methods are most valuable in clinical AI? Beyond SHAP and LIME, what approaches best align with clinical reasoning and decision-making?
  • Application: What unique challenges arise when explaining models that integrate diverse data types (time-series, images, labs, notes)? Any insights on the best way to handle this?
  • Application: How can interpretability methods help identify and address disparities in predictions across demographic groups?
  • Future directions: What critical gaps remain in medical AI explainability, and what research directions show the most promise for addressing them?

2. Explainability/Interpretability in Sequence Models

Senior roundtable leader: Tanveer Syeda-Mahmood
Junior roundtable leaders: Nikita Mehandru, Shashank Yadav

  • How should uncertainty arising from time-dependent features in sequence models be communicated to support clinical decision-making?
  • In what ways might temporal bias or data sparsity in patient medical histories affect the fairness and reliability of sequence model outputs across populations?
  • How can explanations/interpretations separate genuine pathophysiology from treatment effects in multivariate time series?
  • What latency and visual format make an explanation actionable for bedside staff during a rapid-response window?
  • What failure points emerge when interpretability techniques developed for one domain (e.g., NLP, Audio) are applied to time series?

3. Uncertainty, Bias, and Fairness

Senior roundtable leader: Emma Pierson
Junior roundtable leader: Aparajita Kashyap

  • How can we appropriately communicate bias, fairness, and uncertainty considerations with different healthcare stakeholders?
  • What kinds of safeguards should we be implementing for LLMs to mitigate bias?
  • What are promising use cases for applying LLMs and other AI tools to improve equity? 
  • What recommendations can we make regarding health data collection that could translate to improved and fairer models?
  • How can we measure intersectional fairness in AI/ML systems for healthcare?

4. Causality

Senior roundtable leader: Marie-Laure Charpignon
Junior roundtable leader: Nitesh Nagesh

  • How can causal models be embedded into offline RL pipelines for safer and more robust sequential decision-making in clinical settings?
  • What are the most promising approaches for integrating causal reasoning into large language models and clinical AI agents?
  • What methods enable causal discovery from high-dimensional, multimodal health data when ground truth is unavailable?
  • How can we estimate and validate counterfactual outcomes from longitudinal observational data?
  • What sensitivity analysis techniques best quantify the impact of unmeasured confounding in causal inference for health applications?

5. Scalable AI Solutions: Domain Adaptation

Senior roundtable leader: Aishwarya Mandyam
Junior roundtable leader: Simon Lee

  • What criteria must be met before we can expect ML algorithms to scale across hospital and application domains?
  • How can we release datasets (electronic health records, claims data, etc.) in a scalable manner while maintaining HIPAA compliance and privacy?
  • What is the role of industry partnerships in enabling AI/ML systems to be shared across hospital systems?

6. Scalable AI Solutions: Foundation Models

Senior roundtable leader: Akane Sano
Junior roundtable leader: Arvind Pillai

  • What are the key challenges in integrating foundation models into existing clinical workflows?
  • How can foundation models be designed and trained to adapt to changes in data distribution and maintain high performance in dynamic environments and diverse populations?
  • How do we integrate multiple foundation models for multi-modal analysis? How do we develop/generate datasets for this application?
  • What collaborative models and frameworks can be developed to overcome data access limitations and enable broader access to valuable datasets?
  • How can the responsible and secure handling of sensitive biomedical data be ensured when training and deploying foundation models?

7. Scalable AI Solutions: Learning from Small Medical Data

Senior roundtable leader: Emily Alsentzer
Junior roundtable leader: Brighton Nuwagira

  • Are traditional small-data techniques (like transfer learning, self-supervision, or data augmentation) still relevant—or have foundation models made them obsolete in healthcare?
  • Is synthetic data a scalable solution—or an illusion—when it comes to overcoming real-world data scarcity in healthcare?
  • When does incorporating medical domain knowledge enhance performance—and when does it risk introducing bias or reducing generalizability?
  • Is the biggest bottleneck in small data collaboration technical (e.g., lack of methods) or institutional (e.g., policies, incentives, trust)?
  • Should we double down on fixing data-sharing policies—or focus instead on building models that don’t require data sharing at all (e.g., federated, decentralized, synthetic)?
  • How do we evaluate the real-world utility and fairness of models trained on small, institution-specific datasets—especially when results aren’t reproducible elsewhere?
  • Can we meaningfully assess reproducibility and rigor without sharing the underlying data? If not, what should be the minimum bar?

8. Multimodal Methods

Senior roundtable leader: Jason Fries
Junior roundtable leader: Bill Chen

  • Why are multimodal models important in health AI? Are you incorporating multimodal approaches in your own research? What challenges have you encountered in doing so?
  • Can one model truly capture imaging, text, and waveform data—or do we need modality-specific experts? What trade-offs emerge for real-world use?
  • Patient data arrives on different clocks—vitals (minutes), labs (hours to days), and notes (sporadically). How should models align them?
  • How does the integration of multimodal data affect the interpretability and explainability of models?
  • When adding more modalities increases not just predictive power but regulatory, ethical, and technical risk, how should evaluation frameworks adapt to responsibly govern this trade-off?
  • Multimodal models often rely on specific combinations of inputs that aren’t consistently available across datasets. Should we design models that can flexibly incorporate different sets of modalities (e.g., for generalization, external validation, etc.)?
  • Multimodal models can demand orders of magnitude more aligned data—does this favor big health systems over under-resourced clinics and exacerbate inequities?
  • When do the privacy, governance, and consent burdens of fusing imaging, genomics, and text outweigh its predictive gains?

9. Scalable and Translational Healthcare Solutions

Senior roundtable leader: Niharika S. DSouza
Junior roundtable leader: Yixing Jiang

  • What are the primary roadblocks preventing promising AI research from becoming widely adopted, impactful clinical tools, and how can we systematically dismantle them?
  • What collaborative frameworks (involving academia, industry, clinicians, policymakers, and patients) are needed to accelerate the journey from AI innovation to scalable, translational impact in healthcare?
  • Beyond initial validation, what types of evidence (e.g., pragmatic trials, real-world effectiveness studies, health economic outcomes) are most crucial for convincing healthcare systems to invest in and scale AI solutions? How can we generate this evidence more efficiently?
  • What are viable economic and reimbursement models that can support the long-term sustainability and scaling of effective AI healthcare solutions? Who bears the cost, and who reaps the benefits?