Health industry payers and providers accrue a wealth of data on patients with conditions such as diabetes and cancer. These patients frequently need to engage with contact centers, yet the specific pieces of data relevant to resolving their concerns are often frustratingly scattered across their various records. The ASAPP Research team is developing new AI methods that will help healthcare organizations put all of that data to work in better serving their patients and members.
Better use of data can drive better patient health
Imagine that a patient with diabetes contacts your payer organization with a concern about prescription coverage for their new second-line medication. The ASAPP platform identifies the health problem under discussion. It then retrieves and organizes the patient’s history of medications, procedures, and lab results pertaining to the management of that condition, and surfaces this information alongside the conversation in the agent desk. At a glance, the agent can understand the patient’s situation in full context.
The platform also proactively suggests actions for the agent to take and things to say, based on both the conversation and the patient’s medical history. For example, while addressing the patient’s prescription concern, the agent might be prompted to say, “You had an appointment with a dietitian over a year ago, but haven’t been since. It could be a good time for another visit. I can send you a list of in-network providers in your area.”
Similarly, a physician’s agent can respond to a refill request with visibility into appointment history and recent lab results related to the patient’s concern. In this case, when considering a refill of the blood thinner warfarin, the agent could quickly check the patient’s most recent INR lab result to determine whether the dosage should be reassessed before refilling.
Building on the ASAPP AI core
This extension of the ASAPP platform will grow out of focused research efforts in managing healthcare data. Our most recent work in this area is the construction and automatic expansion of a healthcare knowledge base, powered by machine learning. The knowledge base stores conceptual relationships between problems, medications, procedures, and lab results at a finer level of detail than previously available resources. Using the existing knowledge base, we train a neural network model to learn patterns in how these concepts do and do not relate to each other. After training, we query the model for suggestions of other medications, procedures, or labs that may be related to a problem, which lets us grow the knowledge base, re-train the model, and repeat in a positive feedback loop.
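The expand-and-retrain feedback loop described above can be sketched roughly as follows. The scoring function here is a toy stand-in for the trained neural network, and all names and thresholds are illustrative assumptions, not details of the actual system:

```python
# Minimal sketch of the knowledge-base feedback loop.
# score_fn stands in for the trained neural network; in a real system,
# each round would re-train the model on the expanded knowledge base.

def expand_knowledge_base(kb, candidates, score_fn, threshold=0.8, max_rounds=5):
    """Grow kb by repeatedly adding candidate pairs the model scores highly."""
    kb = set(kb)
    for _ in range(max_rounds):
        confident = {pair for pair in candidates - kb
                     if score_fn(pair, kb) >= threshold}
        if not confident:
            break  # no new confident suggestions: the loop has converged
        kb |= confident  # accept the suggestions into the knowledge base
        # (re-train the model on the expanded kb here)
    return kb
```

Each pair is a (problem, related concept) edge; the loop stops when the model proposes nothing new above the confidence threshold.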
Key to our approach’s success is its use of longitudinal health records, such as health insurance claims or electronic medical records, to learn embeddings of medical concepts. These embeddings organize the set of medical concepts so that those that appear in similar contexts are near each other. For example, in our embedding set, the concept “Metformin” has as nearest neighbors the concepts for other diabetes medications “Glipizide”, “Sitagliptin”, and “Glimepiride.” The concept “EEG” (electroencephalogram) is close to the concepts for “MRA of the head” (magnetic resonance angiography) and “TCD” (transcranial Doppler), all diagnostic techniques focused on the brain. We learned embeddings for diagnosis codes (ICD-10), procedure codes (CPT/HCPCS), medications (RxNorm), and laboratory tests (LOINC). We also developed a method to harmonize embeddings from separate sources, which is helpful in the common situation when internal data has a unique vocabulary.
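A nearest-neighbor lookup of this kind reduces to comparing embedding vectors, typically by cosine similarity. The sketch below uses hand-made toy vectors (not learned embeddings) just to show the mechanics:

```python
# Toy nearest-neighbor lookup over concept embeddings.
# The vectors are illustrative, not learned from real claims data.
from math import sqrt

EMBEDDINGS = {
    "Metformin":   [0.90, 0.10, 0.00],
    "Glipizide":   [0.85, 0.15, 0.05],
    "Sitagliptin": [0.80, 0.20, 0.00],
    "EEG":         [0.00, 0.10, 0.90],
    "TCD":         [0.05, 0.05, 0.95],
}

def cosine(u, v):
    """Cosine similarity: dot product normalized by vector lengths."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest_neighbors(concept, k=2):
    """Return the k concepts whose embeddings are most similar."""
    query = EMBEDDINGS[concept]
    scored = [(other, cosine(query, EMBEDDINGS[other]))
              for other in EMBEDDINGS if other != concept]
    return [name for name, _ in sorted(scored, key=lambda t: -t[1])[:k]]
```

With vectors like these, the diabetes medications cluster together and the neurodiagnostic concepts cluster together, mirroring the behavior described above.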
The embeddings are a useful starting point for surfacing related concepts to agents, and we tailor them to this use case by using them to initialize the neural network model. Because what makes a procedure relevant may differ from what makes a medication relevant, the model treats these categories distinctly. The model also incorporates data from the local institution, such as how often a given problem and medication are recorded in the same patient encounter, into its suggestions.
Ready for what comes next
This design is extensible: give the model a new type of health problem (e.g. COVID-19), and it can examine its similarities to known problems and determine the proper types of information to link together. We can use this to quickly grow and customize the knowledge base around problems that may be particularly important to your population, and then apply it in conversations to pull up information relevant to those problems.
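The extension step above amounts to borrowing links from a new problem’s most similar known problems. A minimal sketch, assuming precomputed similarities and toy knowledge-base links (the disease and concept names are illustrative only):

```python
# Sketch of linking concepts to a new problem by analogy with its
# nearest known problems. Similarities and links are toy data.

def suggest_links_for_new_problem(similarities, links, k=2):
    """similarities: known problem -> similarity to the new problem.
    links: known problem -> set of associated concepts in the KB."""
    nearest = sorted(similarities, key=similarities.get, reverse=True)[:k]
    suggested = set()
    for problem in nearest:
        suggested |= links.get(problem, set())  # inherit neighbors' links
    return suggested
```

A genuinely novel problem would then have its suggested links refined as the model is re-trained on real encounter data mentioning it.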
David Sontag, PhD is Chief Health Strategist and Principal Scientist at ASAPP, and Associate Professor at MIT in the Computer Science and Artificial Intelligence Laboratory and the Institute for Medical Engineering and Science. Prior to joining MIT, Dr. Sontag was an Assistant Professor in Computer Science and Data Science at New York University’s Courant Institute of Mathematical Sciences. Dr. Sontag received the Sprowls award for outstanding doctoral thesis in Computer Science at MIT in 2010, best paper awards at the conferences Empirical Methods in Natural Language Processing (EMNLP), Uncertainty in Artificial Intelligence (UAI), and Neural Information Processing Systems (NeurIPS), faculty awards from Google, Facebook, and Adobe, and an NSF CAREER Award. Dr. Sontag received a B.A. from the University of California, Berkeley.