Pre-trained language models have attracted growing attention in recent studies thanks to their outstanding performance in the general natural language domain.

In the general language domain, pre-trained language models fall into two main branches: BERT (and its variants) and GPT (and its variants). The first branch has received the most attention in the biomedical domain; examples include BioBERT and PubMedBERT. The second branch has received far less attention.

In this presentation, I’ll demonstrate how the BioGPT generative language model, with some fine-tuning, can be applied to tasks such as extracting biomedical relationships, answering questions, classifying documents, and generating definitions for biomedical terms.

 

About the speaker

Nafiseh Mollaei

Postdoc at Loyola University Chicago

Dr. Nafiseh Mollaei is a Postdoc Research Associate in the Department of Health Informatics and Data Science in the Center for Health Outcomes and Informatics Research at Loyola University Chicago. Dr. Mollaei received her PhD in Biomedical Engineering from NOVA University of Lisbon (Portugal) in 2022.

Prior to joining Loyola, she was a Research Assistant in the Medical Department at Volkswagen Autoeuropa in Portugal for four years.

Dr. Mollaei’s research lies at the intersection of decision science and artificial intelligence (AI), advancing monitoring and predictive analytics for healthcare systems. In the past few years, Dr. Mollaei has been working on designing AI-based models for critical care.

 

NLP Summit

When

Online Event: October 3-5, 2023

 

Contact

nlpsummit@johnsnowlabs.com

Presented by

John Snow Labs