Utility of General and Specific Word Embeddings for Classifying Translational Stages of Research

Abstract

Conventional text classification models make a bag-of-words assumption, reducing text, fundamentally a sequence of words, to word occurrence counts per document. Recent algorithms such as word2vec and fastText learn semantic meaning and similarity between words in an entirely unsupervised manner using a contextual window, and do so much faster than previous methods. Each word is represented as a vector such that words with similar meanings, such as ‘strong’ and ‘powerful’, lie close together in Euclidean space. Open questions about these embeddings include their usefulness across classification tasks and the optimal set of documents from which to build them. In this work, we demonstrate that embeddings improve the state of the art in classification for our tasks, and that specific word embeddings built in the domain and for the tasks can outperform general word embeddings (learned on news articles, Wikipedia, or PubMed).
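
The training setup described above can be illustrated with a short sketch. The snippet below is not from the paper; it is a minimal example, assuming the gensim library and a small hypothetical in-memory corpus, of training a skip-gram word2vec model with a contextual window and querying the resulting vectors for similarity.

```python
# Minimal sketch (not the paper's code): training word2vec with gensim and
# querying word similarity. The toy corpus and all parameters are
# illustrative assumptions.
from gensim.models import Word2Vec

# A tiny tokenized corpus; in practice this would be a large document set,
# e.g. PubMed abstracts or task-specific documents.
corpus = [
    ["the", "strong", "association", "was", "statistically", "significant"],
    ["a", "powerful", "predictor", "of", "clinical", "outcome"],
    ["word", "embeddings", "capture", "semantic", "similarity"],
]

# Skip-gram model (sg=1) learning from a 5-word contextual window.
model = Word2Vec(
    sentences=corpus,
    vector_size=100,  # dimensionality of each word vector
    window=5,         # contextual window size
    min_count=1,      # keep all words in this toy example
    sg=1,             # 1 = skip-gram, 0 = CBOW
)

# Each word is now a dense vector; semantically similar words should
# score higher under cosine similarity.
print(model.wv["strong"].shape)                   # (100,)
print(model.wv.similarity("strong", "powerful"))  # cosine similarity
```

On a real corpus, nearest-neighbor queries such as `model.wv.most_similar("strong")` would surface semantically related words; on this toy corpus the similarity values are not meaningful.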

Publication
AMIA Annual Symposium 2018 Proceedings
Vincent J. Major
Assistant Professor

Vincent J. Major, PhD is an Assistant Professor at NYU Grossman School of Medicine working on applied machine learning for healthcare using electronic health record (EHR) data.