Accelerating NLP Pipelines with Hardware-Software Synergies

We would like to take a broader view of what it takes to accelerate NLP pipelines, beyond just the largest and fastest model architectures. This includes a deeper look at hardware, the cloud and software ecosystem, language- and industry-specific datasets, and MLOps. In this hands-on workshop, we will look under the hood of our Intelligence Processing Unit (IPU) using the PopVision tools to see how the architecture enables the latest AI models to run efficiently on a massively parallel platform, and how the rest of the development ecosystem helps get NLP solutions into real-world applications.

About the speaker

Tim Santos

Director of DevRel at Graphcore

Tim leads Developer Relations at Graphcore, helping the AI & ML community achieve maximum success with IPUs and make the next breakthroughs in machine intelligence. Tim has worn many developer hats in his career, from research engineer and data scientist to leading MLOps teams. Along the way, he has gained experience across all stages of the development lifecycle, taking AI applications from experimentation to deployment. If you’re looking to try out IPUs, learn more about our Poplar SDK and tools, showcase your innovations, connect with the community, request educational resources, or provide feedback on our technology, then Tim is your champion.



Sessions: October 4 – 6
Trainings: October 11 – 14


Presented by