NLP Challenges in Analyzing Human Conversations

Avaya Conversational Intelligence (ACI) is a solution for transcribing, analyzing, and extracting actionable insights from millions of customer calls in real time. ACI employs multiple NLP/SLU (Spoken Language Understanding) algorithms on multilingual human-human conversations. Processing spontaneous conversations poses unique challenges.

Downstream NLP tasks such as intent discovery, named entity recognition, punctuation restoration, truecasing, and key phrase spotting must account not only for speech recognition errors but also for the particularities of conversational speech. The mismatch between NLP models trained on written language, with correct grammar and clear sentence boundaries, and the disfluencies of spontaneous speech only exacerbates the problem.
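As a toy illustration of that mismatch (not ACI's actual pipeline), the sketch below contrasts a written sentence with the same utterance as a raw ASR-style transcript and applies a naive cleanup; the filler list, the name-based truecasing lexicon, and the normalization rules are all hypothetical assumptions for the example.

```python
# Hypothetical illustration only; not part of the ACI pipeline.
# The same utterance in written form and as a raw ASR-style transcript:
written = "Hi, I'm calling about my order from Avaya. Can you check the status?"
asr_style = "hi um i'm calling about uh my order from avaya can you uh check the status"

FILLERS = {"um", "uh", "erm", "hmm"}       # assumed filler-word inventory
KNOWN_NAMES = {"avaya": "Avaya"}           # assumed truecasing lexicon

def naive_normalize(transcript: str) -> str:
    """Rough cleanup of an ASR-style transcript:
    drop filler words, restore casing for known names, capitalize the start."""
    tokens = [t for t in transcript.split() if t not in FILLERS]
    tokens = [KNOWN_NAMES.get(t, t) for t in tokens]
    if tokens:
        tokens[0] = tokens[0].capitalize()
    return " ".join(tokens)

print(written)
print(naive_normalize(asr_style))
# -> "Hi i'm calling about my order from Avaya can you check the status"
# Even after this cleanup, punctuation and sentence boundaries are still
# missing, which is exactly what models trained on written text expect.
```

Even a simple rule-based pass like this leaves the transcript far from the well-formed input that written-text NLP models assume, which is why the downstream tasks above need models adapted to conversational speech.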

During the presentation, we will cover both the technical side of scaling NLP/SLU solutions in a production environment (handling many parallel customer calls with low latency and high throughput) and the scientific challenges posed by spontaneous conversations, along with some details of our solutions.

About the speaker

Adrian Szymczak

Lead Machine Learning Engineer at Avaya

Adrian Szymczak is a Lead Machine Learning Engineer supervising the delivery of numerous machine learning capabilities powering Avaya Conversational Intelligence.

He gained technical experience working at Microsoft in Dublin, Amazon in Gdansk, and the Poznan Supercomputing and Networking Center, and previously as a technical co-founder of expans.io.

He is a co-author of scientific articles, written with researchers from Avaya, JHU, PUT, and WUST, that have been accepted at top-tier journals and conferences. In his free time he reads and listens to material about data and statistics, and enjoys alpine skiing and downhill mountain biking.
