Speaking at BIWA Summit 2017

It’s a new year, a new career, and time to kick off the 2017 presentation circuit in Redwood Shores. This will be just my second time speaking at BIWA Summit, and I’m hoping to make it a regular event. With so many great sessions in data warehousing and big data, as well as the graph and spatial summit, it’s a great conference to start the year. Since my last post, you might remember that I’ve moved out of the Oracle-only space and into the new world of Hadoop, Big Data, and hybrid data stores. But this recent change in direction for my career hasn’t changed my thoughts about Oracle data integration, or my ability to share my knowledge about it. In the coming months, I’ll be speaking at various Oracle-focused conferences about the usual suspects: Oracle Data Integrator, Oracle GoldenGate, etc. As I move further away from using these products on a daily basis and continue my work at Gluent, my focus will naturally change, and you’ll (hopefully) see me at many different types of data-focused events throughout the world.

As for BIWA Summit 2017, I have two sessions that I’m excited to present at this event.

Tuesday, January 31, 1:20 pm — 1:45 pm
Oracle Data Integrator 12c: Getting Started

Wednesday, February 1, 4:30 pm — 5:00 pm
Streaming Transformations Using Oracle Data Integration

The ODI Getting Started session idea began with a series of blog posts. Over the years, I’ve realized that I, and other ODI experts, have always looked for the difficult challenges or the new and exciting product features to share with the world, but nobody was telling the newbies how to begin their journey with Oracle Data Integrator. So, it’s time for a refresher on ODI. I’m hoping to continue the blog post series, time permitting.

My other session focuses on the Oracle Data Integration stack and the latest releases of GoldenGate and ODI, which include streaming integration. To be fair, GoldenGate for Big Data 12.2+ has supported integration with Kafka for just over a year now. But Oracle Data Integrator, new to the streaming game, released version 12.2.1.2.6 this past December and now supports Kafka and Spark Streaming integration. This means you can build out your entire data replication and streaming process using familiar technologies that support metadata management and data quality checks, and that integrate nicely with your current batch and micro-batch ETL processes. In this session, I’ll quickly guide you through the latest Oracle DI product releases to help get you started with real-time data streaming.
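If you haven’t worked with this style of pipeline before, here’s a minimal PySpark Structured Streaming sketch of the Kafka-to-Spark pattern that these tools orchestrate. To be clear, this is illustrative only, not code generated by ODI; the broker address, topic name, and transformation are hypothetical placeholders.

```python
# Minimal Kafka-to-Spark streaming sketch (illustrative, not ODI-generated).
# Assumes the spark-sql-kafka connector package is on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, upper

spark = (SparkSession.builder
         .appName("kafka-streaming-demo")
         .getOrCreate())

# Read a stream of records from a Kafka topic.
orders = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")  # hypothetical broker
          .option("subscribe", "orders")                        # hypothetical topic
          .load())

# A simple transformation: decode the message value and normalize it.
transformed = (orders
               .select(col("value").cast("string").alias("payload"))
               .withColumn("payload", upper(col("payload"))))

# Write the transformed stream to the console for demonstration purposes.
query = (transformed.writeStream
         .outputMode("append")
         .format("console")
         .start())

query.awaitTermination()
```

The value a tool like ODI adds on top of this pattern is generating and managing mappings like this for you, with the metadata management and data quality checks mentioned above, rather than leaving each pipeline as hand-written code.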

As always, I look forward to seeing many friends and former colleagues when in town, as well as meeting some new friends at the conference. If you want to meet up, please drop me a note on Twitter or at michael@mRainey.co. Hope to see you there!