Good Morning Everyone at 7pm? The Apache Spark Webinars project is over now
Why on earth would anyone say “Good Morning Everyone at 7pm”, you may ask?!
Well, if your client is at least 6 hours away, you say “Good Afternoon” as often as “Good Morning” or “Good Evening”, and you may even use all the possible variants to greet the audience (possibly in multiple languages!).
For the past 6 weeks, every Tuesday and Thursday, I hosted a one-hour webinar on in-depth areas of Apache Spark 2.0 for a client in North America. Due to the time difference (I live in Warsaw, Poland), my 7pm was their 1pm at the very least (Boston, MA) or even 10am (Redmond, WA).
That alone made it an experience, not to mention the advanced topics we covered.
The list below shows the topics of the webinars on Apache Spark 2.0 (in the order they were presented):
- DataSource API
- Debugging Spark Applications
- User-Defined Functions (UDFs)
- ML Pipelines In-Depth
- Tungsten and Catalyst Optimizer in Spark SQL
- Windowed Operators in Spark SQL
- Dynamic Allocation of Executors
- Monitoring using SparkListeners
- Spark Thrift JDBC/ODBC Server
- Structured Streaming
- Spark and Cluster Managers
- Spark History Server
You can find the slides in my GitHub repository.
As I’ve been taking notes about Apache Spark every day for over a year now, not only was choosing the topics a challenge, but also deciding what to include to make each webinar worth listening to for one hour.
The more you know, the less certain you are that a given topic is more useful than others, that it fits nicely into the one-hour time slot, and that you are the right speaker to make it pleasant to listen to. After all, it takes a whole hour!
I did think I could deliver a one-hour webinar without much preparation, as I had chosen the topics myself and was quite familiar with them. As it turned out, however, every webinar took me at least two days of “filling the gaps” to be ready to provide content that would otherwise have taken the attendees a great deal of their own time to find.
I therefore spent even more time with my Mastering Apache Spark notes, taking screenshots and notes, watching videos about Apache Spark, going through emails on the Spark users and developers mailing lists, reading books, and hosting Spark-a-thons, all of which landed in a bunch of slides and live demos.
As a nice side effect, the webinars turned out to be a 12-hour mixture of topics for every role in an organization that deals with Apache Spark: developers, administrators, and operators, as well as product owners.
I’m flying to Boston, MA next week for another 3-day workshop on Apache Spark 2.0 and Scala. I have no doubt that the experience of being part of the one-hour webinars has had a huge impact on my own understanding of Apache Spark, which (fingers crossed) will make it more helpful for others to learn from.
Follow me on Twitter to learn Apache Spark from my own experience. Read my Mastering Apache Spark 2.0 notes to delve into the gory details of what sits behind the great Apache Spark project. I’d appreciate your feedback, so send it to me early and often!
Thanks! Grazie! Dziękuję! Danke! Спасибо!