How To Train BERT 15x Faster

State-of-the-art NLP models like BERT are extremely powerful, but they require massive computational resources to train.

Access to GPUs is increasingly necessary for modern NLP teams, but that frequently comes with headaches: sharing a GPU cluster is difficult, and porting your code to use distributed training is a hassle.

Consequently, many deep learning teams spend more time on DevOps than they do on deep learning.
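To make that porting hassle concrete: the talk doesn't prescribe a framework, but as a minimal sketch, here is the kind of boilerplate a plain PyTorch training script typically grows when it is moved to multi-GPU data parallelism with DistributedDataParallel (the function name and hyperparameters below are illustrative, not from the talk):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

def train(model, dataset, num_epochs=3):
    # Join the process group; assumes the script was launched with
    # torchrun, which sets RANK, WORLD_SIZE, and LOCAL_RANK for us.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Wrap the model so gradients are averaged across all GPUs.
    model = DDP(model.cuda(local_rank), device_ids=[local_rank])

    # Each process must see a distinct shard of the dataset.
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    for epoch in range(num_epochs):
        sampler.set_epoch(epoch)  # reshuffle the shards each epoch
        for batch in loader:
            ...  # forward, backward, optimizer step as usual

    dist.destroy_process_group()
```

None of this logic has anything to do with the model itself, which is exactly the DevOps-over-deep-learning tax the talk addresses.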

About the speaker

Neil Conway 

CTO at Determined AI

Neil Conway is co-founder and CTO of Determined AI, a startup that builds software to dramatically accelerate deep learning model development. Neil was previously a technical lead at Mesosphere and a major contributor to both Apache Mesos and PostgreSQL.

Neil holds a Ph.D. in Computer Science from UC Berkeley, where he did research on large-scale data management, distributed systems, and programming languages.
