AI Research
Wav2vec could be more efficient, so we created our own pre-trained ASR model for better conversational AI.
Wav2vec 2.0 is arguably the most popular approach to self-supervised training in speech, but it could be more efficient. We introduce SEW for better efficiency and performance.

Felix Wu, PhD
Research Scientist at ASAPP
Utilizing Pre-trained Language Model for Speech Sentiment Analysis
On the path to real-time speech sentiment analysis, new ASAPP research achieves training efficiency gains with transfer learning between spoken and written language domains.

Suwon Shon, PhD
Senior Speech Scientist at ASAPP
Multi-mode ASR: Increasing Robustness with Dynamic Future Contexts
Rather than maintaining multiple ASR models that work under varying time constraints or conditions, new ASAPP research introduces a single multi-mode model that can dynamically adjust to different scenarios.

Kwangyoun Kim
Senior Speech Scientist at ASAPP
Introducing CLIP: A Dataset to Improve Continuity of Patient Care with Unsupervised NLP
In pursuit of our mission to enhance human performance and automate the world’s workflows, ASAPP is releasing one of the largest annotated datasets for clinical NLP.

James Mullenbach
Research Engineer at ASAPP
Task-oriented dialogue systems could be better. Here’s a new dataset to help.
Dialogue State Tracking has run its course. That’s why we’re establishing a new Action-Based Conversations Dataset.

Derek Chen
Research Scientist at ASAPP
Addressing instabilities for few-sample BERT fine-tuning
Building on recent advances in natural language processing, new research from Felix Wu identifies ways to significantly stabilize BERT fine-tuning on small datasets.

Felix Wu, PhD
Research Scientist at ASAPP
Filling in the missing pieces for automation
Natural language input can be hard to classify. ASAPP research goes beyond conventional methods, building better systems to inform more accurate automation.

Yoav Artzi
Research Fellow at ASAPP
From network compression to DenseNets
Neural network compression: what is it, and why does it matter? Here’s a look at what led to the development of DenseNets for parameter-efficient networks with significantly more accurate predictions.

Kilian Weinberger, PhD
Principal Scientist and Head of ASAPP Ithaca Research Lab
Why I joined ASAPP: Taking AI to new levels in enterprise solutions
Ryan McDonald, PhD, shares his excitement about joining ASAPP and the opportunity to build revolutionary products that are AI-centric from conception.

Ryan McDonald, PhD
Chief Scientist at ASAPP
Reducing the high cost of training NLP models with SRU++
SRU++ enables the design of highly expressive and efficient neural models that require little attention computation.

Tao Lei, PhD
Research Leader and Scientist at ASAPP