DevConf.CZ 2020 has ended
Saturday, January 25 • 5:00pm - 5:25pm
Distributed data workflows: PySpark vs Dask


Until very recently, Apache Spark was the de facto framework for batch data processing at scale. For Python (or new) developers, diving into Spark is challenging, as it requires learning about the underlying Java infrastructure and its memory and configuration management. The multiple layers of indirection also make errors harder to debug, especially when working through the PySpark API.

With Dask, a pure-Python framework for parallel computing, Python developers now have an intuitive and idiomatic way of building scalable data pipelines. In this talk, we'll use a data aggregation use case to highlight the important differences between the two frameworks and to clarify the overall benefits of moving from one to the other.
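To make the comparison concrete, here is a minimal sketch of the kind of group-by aggregation the talk's use case involves. The data and column names are made up for illustration. Dask's dataframe API deliberately mirrors pandas, so the pandas code below runs essentially unchanged on a dask.dataframe; the PySpark equivalent, shown in comments, requires its own session and API.

```python
import pandas as pd

# Toy records standing in for a large dataset (hypothetical columns).
df = pd.DataFrame({
    "region": ["eu", "eu", "us", "us", "us"],
    "sales":  [10, 20, 5, 15, 30],
})

# The aggregation itself: total sales per region.
totals = df.groupby("region")["sales"].sum()
print(totals.to_dict())  # {'eu': 30, 'us': 50}

# With Dask, the same pandas-style code runs lazily and in parallel (sketch):
#   import dask.dataframe as dd
#   ddf = dd.from_pandas(df, npartitions=4)
#   totals = ddf.groupby("region")["sales"].sum().compute()
#
# With PySpark, the equivalent goes through a SparkSession and Spark's own API:
#   from pyspark.sql import SparkSession
#   spark = SparkSession.builder.getOrCreate()
#   sdf = spark.createDataFrame(df)
#   sdf.groupBy("region").sum("sales").show()
```

The point of the comparison: the Dask version is the pandas version plus a partitioning call, while the Spark version introduces a separate runtime and vocabulary.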

By the end of the talk, developers, data engineers, and data scientists will have a framework and benchmarks to refer to when making an informed decision about building their production data engineering pipelines.

Speakers

Vaibhav Srivastav

Data Scientist, Deloitte Consulting LLP
Hi! I am a Data Scientist at Deloitte Consulting LLP, where I work with Fortune Technology 10 clients to help them make data-driven (profitable) decisions. In my spare time I serve as a Subject Matter Expert on Google Cloud Platform, helping build scalable, resilient and fault...



Saturday January 25, 2020 5:00pm - 5:25pm CET
D0207 Faculty of Information Technology Brno University of Technology, Božetěchova, Brno-Královo Pole, Czechia