Stream processing is increasingly relevant in today’s world of big data, thanks to the lower latency, higher-value results, and more predictable resource utilization afforded by stream processing engines. At the same time, without a solid understanding of the necessary building blocks, streaming can feel like a complex and subtle beast. It doesn’t have to be that way.
Join the Google Open Source team for a tour of stream processing concepts via a walkthrough of the easiest to use yet most sophisticated stream processing model on the planet, Apache Beam. You’ll explore a series of examples that shed light on the important topics of windowing, watermarks, and triggers; observe firsthand the different shapes of materialized output made possible by the flexibility of the Beam streaming model; experience the portability afforded by Beam as you work through examples using the runner of your choice (Apache Flink, Apache Spark, or Google Cloud Dataflow); and interact with engineers who have years of experience with massive-scale stream processing.
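To give a flavor of the windowing topic covered in the workshop, here is a minimal, Beam-free sketch of the core idea behind fixed (tumbling) event-time windows: each element is assigned to a window based on its event timestamp, and aggregation happens per window. The event data and the 60-second window size below are illustrative assumptions, not part of the workshop materials.

```python
from collections import defaultdict

def window_start(ts, size=60):
    # Start of the fixed event-time window (of `size` seconds) that
    # a timestamp falls into, e.g. ts=75 with size=60 -> window [60, 120).
    return (ts // size) * size

def fixed_window_sums(events, size=60):
    # Group (timestamp, score) events into fixed event-time windows
    # and sum the scores within each window.
    sums = defaultdict(int)
    for ts, score in events:
        sums[window_start(ts, size)] += score
    return dict(sums)

# Hypothetical game-score events: (event_time_seconds, score).
events = [(12, 5), (30, 3), (75, 9), (130, 1)]
print(fixed_window_sums(events))  # {0: 8, 60: 9, 120: 1}
```

In Beam, the same grouping is expressed declaratively with a windowing transform rather than hand-written bucketing, and watermarks and triggers then control when each window's result is emitted.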
10:00 - 11:00 AM: Arrival, networking, environment setup
11:00 - 11:30 AM: Introduction to streaming concepts and Apache Beam
11:30 AM - 1:30 PM: Case study: developing a data processing pipeline for a mobile game
1:30 - 2:00 PM
2:00 - 3:00 PM: Unified batch and stream processing model in Apache Beam
3:00 - 5:00 PM: Exercises: LeaderBoard and GameStats