This post provides sample code (Python) for consuming Kafka topics from Azure Databricks (Spark) with Confluent Cloud (Kafka) running on Azure, using Schema Registry and the Avro format.
Reading the topic:
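A minimal sketch of what the read side typically looks like, assuming a Confluent Cloud cluster secured with SASL/PLAIN; the bootstrap server, API key/secret, and topic names are placeholders, not values from the original setup. Databricks ships a shaded Kafka client, hence the `kafkashaded` prefix on the JAAS login module:

```python
# Sketch: reading a Confluent Cloud topic from Databricks with Structured
# Streaming. All concrete values below are placeholders (assumptions).

def jaas_config(api_key: str, api_secret: str) -> str:
    """Build the SASL/PLAIN JAAS config string Confluent Cloud expects.
    Databricks shades the Kafka client, so the class name is prefixed
    with 'kafkashaded'."""
    return (
        "kafkashaded.org.apache.kafka.common.security.plain.PlainLoginModule "
        f'required username="{api_key}" password="{api_secret}";'
    )

def read_confluent_topic(spark, bootstrap_servers, api_key, api_secret, topic):
    """Return a streaming DataFrame of raw Kafka records from the topic.

    'spark' is an active SparkSession on the Databricks cluster."""
    return (spark.readStream
            .format("kafka")
            .option("kafka.bootstrap.servers", bootstrap_servers)
            .option("kafka.security.protocol", "SASL_SSL")
            .option("kafka.sasl.mechanism", "PLAIN")
            .option("kafka.sasl.jaas.config", jaas_config(api_key, api_secret))
            .option("subscribe", topic)
            .option("startingOffsets", "earliest")
            .load())
```

On a cluster this would be called as, for example, `read_confluent_topic(spark, "pkc-xxxxx.westeurope.azure.confluent.cloud:9092", key, secret, "my_topic")`, yielding a DataFrame with the usual Kafka columns (`key`, `value`, `topic`, `partition`, `offset`, `timestamp`).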
Stream Data formatted and stored in a Spark SQL Table (view):
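The decode-and-expose step can be sketched as below, assuming the Confluent wire format (one magic byte plus a 4-byte schema id prefixed to the Avro payload, which must be skipped before decoding) and a hypothetical value schema; in a real pipeline the schema would come from Schema Registry rather than being hard-coded:

```python
import json

# Hypothetical Avro value schema for illustration; the real schema
# lives in the Confluent Schema Registry.
VALUE_SCHEMA = json.dumps({
    "type": "record",
    "name": "Event",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "payload", "type": "string"},
    ],
})

def to_sql_view(raw_df, view_name="events"):
    """Decode Confluent-framed Avro values from a raw Kafka streaming
    DataFrame and register the result as a Spark SQL temp view."""
    from pyspark.sql.functions import col, expr
    from pyspark.sql.avro.functions import from_avro

    parsed = (raw_df
              # Confluent's wire format prefixes every value with 1 magic
              # byte + a 4-byte schema id; skip those 5 bytes (substring
              # is 1-indexed) before handing the bytes to from_avro.
              .select(expr("substring(value, 6, length(value)-5)")
                      .alias("avro_value"))
              .select(from_avro(col("avro_value"), VALUE_SCHEMA)
                      .alias("data"))
              .select("data.*"))
    parsed.createOrReplaceTempView(view_name)
    return parsed
```

After `to_sql_view(raw_df, "events")`, the decoded stream can be queried with plain Spark SQL, e.g. `spark.sql("SELECT * FROM events")` inside a streaming query.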
Special thanks and credit to Gianluca Natali, Henning Kropp, Yatharth Gupta, Bhanu Prakash, Awez Syed, Nick Hill, Robin Davidson, Liping Huang, Chris Munyasya, Sid Rabindran and many more people from the Databricks, Confluent and Microsoft teams who engaged to make this integration work.
Most people know that I love Unix/Linux and open source; I have used Linux and open source software for more than 16 years.
Back in 2007, I created a company in Brazil/Spain to provide real world solutions based on open source.
Later, I worked at Pentaho, an open source big data, business intelligence and data mining company that was sold to Hitachi (probably in part because of Pentaho's open source portfolio of products and solutions).
Going back to the past (2 June, 2005) I found this BBC…