Visual NLP – Combining Computer Vision and Text Mining for Intelligent Document Processing

Many businesses depend on paper documents or documents stored as images, such as receipts, manifests, invoices, medical reports, contracts, waivers, leases, forms, and audit records digitized with scanners.

Until now, extracting data from these images has mainly involved running OCR to obtain the text and applying NLP techniques to it, neglecting the layout and style information that is often vital for document image understanding.

Novel deep learning techniques combine features from computer vision and NLP into unified models, resulting in improved state-of-the-art accuracy for form understanding and visual information extraction.
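As a rough illustration (not the speaker's implementation), layout-aware models such as LayoutLM fuse text and vision features by summing each token's word embedding with 2-D position embeddings derived from its OCR bounding box. A minimal sketch of that fusion in plain Python/NumPy, with toy vocabulary sizes and dimensions chosen purely for demonstration:

```python
import numpy as np

# Hypothetical toy setup: vocabulary ids and embedding tables are
# illustrative, not taken from any real model checkpoint.
rng = np.random.default_rng(0)
VOCAB, DIM, GRID = 100, 16, 1000  # GRID = normalized page coordinates 0..999

tok_emb = rng.normal(size=(VOCAB, DIM))  # word embeddings
x_emb = rng.normal(size=(GRID, DIM))     # x-coordinate embeddings
y_emb = rng.normal(size=(GRID, DIM))     # y-coordinate embeddings

def embed(token_ids, boxes):
    """Fuse text and layout: sum the word embedding with embeddings of
    the OCR bounding-box corners (x0, y0, x1, y1), LayoutLM-style."""
    out = []
    for tid, (x0, y0, x1, y1) in zip(token_ids, boxes):
        vec = (tok_emb[tid]
               + x_emb[x0] + x_emb[x1]
               + y_emb[y0] + y_emb[y1])
        out.append(vec)
    return np.stack(out)

# Two OCR tokens with their normalized page bounding boxes
seq = embed([5, 42], [(10, 20, 110, 40), (120, 20, 200, 40)])
print(seq.shape)  # (2, 16)
```

Because position enters the representation itself, two identical words (e.g. "Total" in a header versus in a line item) get different embeddings, which is what lets these models distinguish form fields that plain OCR-then-NLP pipelines conflate.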

This talk shares real applications of these models to digitize and analyze documents, with the goal of extracting meaningful, readily usable data.

About the speaker
Dia Trambitas

Head Of Product at John Snow Labs

Dia Trambitas is a computer scientist with a rich background in Natural Language Processing.

She has a Ph.D. in Semantic Web from the University of Grenoble, France, where she worked on ways of describing spatial and temporal data using OWL ontologies and reasoning based on semantic annotations.

She then shifted her focus to text processing and data extraction from unstructured documents, a subject she has been working on for the last 10 years.

She has rich experience working with different annotation tools and leading document classification and NER projects in verticals such as Finance, Investment, Banking, and Healthcare.

NLP-Summit

When

Sessions: October 5 – 7
Trainings: October 8 – 9

Contact

nlpsummit@johnsnowlabs.com

Presented by

jhonsnow_logo