Class/Seminar

HAI Seminar with Sheng Wang | Generative AI for Multimodal Biomedicine


This event is over.

Event Details:

After medical foundation models achieved state-of-the-art performance on a variety of biomedical applications, there was a push to build even larger models by training them on more medical data. Despite their encouraging performance on artificial biomedical benchmarks, critical gaps remain that must be closed before these models can be used in real-world clinics.

This talk addresses three of these gaps: unmatched patient information, privacy, and GPU constraints, along with the models that can help close them. First, Sheng will introduce BioTranslator, a multilingual translation framework that projects a variety of biomedical modalities into the text space, enabling comparisons between patients with unmatched profiles. Next, he will discuss BiomedCLIP, a public medical foundation model trained on 15 million public text-image pairs that can serve as a proxy, letting clinicians query large language models in the cloud without exposing private data. Finally, he will introduce LLaVA-Rad, a 7B-parameter model that outperforms Med PaLM M (84B) in radiology by exploiting the trade-off between domain specificity and model size, demonstrating the feasibility of building small models for efficient fine-tuning and inference in clinics. The talk concludes with a vision of “everything everywhere all at once,” in which medical foundation models and generative AI benefit every patient in every clinic, all at once.


Location: