From Mainframe COBOL into the Object Oriented Abyss: A Disturbance In The Force
A short time ago, within the Chicago city limits, an aspiring Python Developer, a Grand Python Master, and the Python Council Leader discussed whether to take on the apprentice and teach him the ways of a Python Knight. The conversation might have gone something like this…
Grand Master: I cannot teach him. He knows not what Object Oriented means.
Council Leader: He will learn Object Oriented.
Grand Master: Hmmm. Much stubbornness in him, like his father.
Council Leader: Was I any different when you taught me?
Grand Master: Hah! He is not ready.
Python Developer: I am ready! I can be a Python Knight. Python Council Leader, tell him I’m ready!
Grand Master: Ready, are you? What know you of ready? For 26 years have I trained Python Knights. My own counsel will I keep on who is to be trained! A Python Knight must have the deepest commitment, the most serious mind. Inefficient, brittle, unreadable code. A Python Knight craves not these things… You are reckless!
Council Leader: So was I, if you’ll remember.
Grand Master: He is too old. Yes, too old to begin the training.
Python Developer: But I’ve learned so much!
Grand Master: *sighs*…Will he finish what he begins?
Python Developer: I won’t fail you — I’m not afraid.
Try not. Do. Or do not. There is no try.
Mastering Python by the end of this mentorship program I will not, but creating a foundation from which I can make good use of Python and advance my skills even further I will. And hopefully you’ll stop hearing Yoda’s voice in your head as you read the rest of this blog. :)
Here I am in the second month of the ChiPy Spring Mentorship program, and not only have I developed a fairly good handle on Python, I have also learned much about best practices. My early scripts would be considered brittle: there was no exception handling, and the main routines were lengthy and unreadable, with little use of defined functions. Now the code I develop is very readable, with processes broken down into smaller functions with longer but more descriptive names (still perfecting this). Any unforeseen error would bring those early modules crashing down, but now, using various exception handling techniques, my code identifies and catches potentially serious errors. When this happens, control is passed back to the OS with an appropriate return code so the system can take defined actions, and the handlers can neatly close out related system tasks.
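A minimal sketch of that pattern, with made-up return codes and a hypothetical `load_stats` helper (the actual codes and file names in my scripts differ): errors are caught, logged, and turned into exit codes the OS can act on.

```python
import logging
import sys

logger = logging.getLogger(__name__)

# Hypothetical return codes the OS or a scheduler can act on
RC_OK = 0
RC_BAD_INPUT = 4
RC_FATAL = 8

def load_stats(path):
    """Read a stats file, letting a missing file raise a clear error."""
    with open(path) as infile:
        return infile.read()

def main(path):
    try:
        data = load_stats(path)
    except FileNotFoundError:
        logger.critical("input file not found: %s", path)
        return RC_BAD_INPUT
    except OSError as err:
        logger.critical("unexpected I/O failure: %s", err)
        return RC_FATAL
    logger.info("read %d bytes from %s", len(data), path)
    return RC_OK

if __name__ == "__main__" and len(sys.argv) > 1:
    # sys.exit hands the return code back to the operating system
    sys.exit(main(sys.argv[1]))
```

The key idea is that `main` always returns a code instead of letting the traceback escape, so whatever invokes the script can branch on the result.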
I have also learned to log informative messages at appropriate levels (info, critical, debug, etc.) as scripts execute, using Python's logging module. These messages tell me what my scripts are doing and which important parameters and data sources are being referenced or created, and they prove very valuable when I need to debug issues. These are just a few examples of how the code I am delivering follows best coding practices, and I am also learning and applying best practices throughout the entire development lifecycle.
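To illustrate the leveled logging, here is a small sketch using the standard logging module; the logger name and `fetch_roster` function are hypothetical stand-ins, not my actual code.

```python
import logging

# One-time setup: timestamped messages at INFO level and above
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logger = logging.getLogger("mlbstats")  # hypothetical script name

def fetch_roster(team):
    logger.info("fetching roster for team %s", team)
    logger.debug("this line only appears when the level is DEBUG")
    roster = ["player1", "player2"]  # placeholder data
    logger.info("retrieved %d players", len(roster))
    return roster

fetch_roster("CHC")
```

Flipping `level=logging.DEBUG` in one place turns on the verbose messages without touching any of the log calls, which is exactly what makes debugging painless.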
But how did I get to this point, and where am I going from here? Read on…
After a few initial conversations to go over my background and goals along with some ideas for a project, Allan put together a “Roadmap” for what I should learn during this program. The following are the roads I had to travel down before completing basic training and becoming a young Python Knight:
Basic Python 101> For loops, If/Elif/Else, navigate a dictionary, navigate a list, work with tuples, work with strings, write to and read from a file.
Advanced Python 101> Command line arguments, making modules (calling context and __main__ kinks), flake8/PEP8, docstrings/PEP257, list comprehensions, the “with” context manager, handling exceptions, classes, more as they occurred to him (this was my personal favorite), parsing XML, regex.
Git 101> Commit and push, commit on a branch and push, merge a branch, resolve conflicts, GitHub pull requests, and let’s not forget .gitignore.
Testing 101> Write tests to validate my script’s output, set up Continuous Integration on GitHub for bonus points. These were special requirements for me from a Quality Engineer.
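As a taste of the testing item above, here is a minimal pytest-style test; the `parse_stat_line` helper and its comma-separated format are invented for illustration, not taken from my project.

```python
def parse_stat_line(line):
    """Split a hypothetical 'name,hits,at_bats' output line into a typed tuple."""
    name, hits, at_bats = line.strip().split(",")
    return name, int(hits), int(at_bats)

def test_parse_stat_line():
    # pytest discovers functions named test_* and runs each assert
    assert parse_stat_line("Rizzo,2,4\n") == ("Rizzo", 2, 4)
```

Running `pytest` in the repository picks this up automatically, and a CI service can run the same command on every push.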
Graduating From Python 101
Not only did I develop a good understanding of the items in the Roadmap, in some cases I demonstrated an understanding beyond Allan’s expectations. At times I discovered alternate methods, put them to use, and then discussed them with Allan. I questioned many things, and I think I challenged Allan almost as much as he challenged me.
So after six long weeks with many late nights and countless hours logged, and without any real fanfare, I earned enough points to graduate from Python 101. This was a huge step for me, especially given I come from the COBOL Period within the Mainframe Era of the Technological Timescale. This dinosaur feels pretty awesome about what he’s learned so far and is ready to put it to good use and learn even more. So I’m taking what I’ve learned and building a system in the cloud. Let’s talk about my project!
2017 Spring Mentorship Project
Do you think Thoreau ever dreamed that people 160 years later would really build castles in the air? Well, we are not building real castles but he was not talking about them in a literal sense either. We all have dreams. I have mine and you have yours, and if we want our dreams to come to fruition we need to get creative and put some real work into them.
Who doesn’t love a good baseball game, especially if you’re a Cubs fan these days? I have chosen to develop a small system centered around retrieving up-to-the-minute baseball statistics, and it will be built in the cloud, which in itself is another great learning opportunity. My mentor Allan plays fantasy baseball, and I have played it myself in the past. Getting up-to-the-minute information for the players on your fantasy team is a big deal, as fantasy baseball team owners want to know how their team is doing at any given moment. This not only sounded useful but quite fun as well.
Major League Baseball has a data server where they store information about all games throughout the year, even those in progress. The directories on this server are well organized, use JSON dictionary files to record data as play happens, and are accessible to anyone. There aren’t any MLB APIs I can use to access the data. Well, actually, there are, but I’m not a preferred partner of MLB (think ESPN), so until my system gets national notoriety I am just an average user of this data. I’ve studied the complex structure of these dictionaries and understand how they point to each other and how the relevant data is stored. I’ll need to create master files so my request scripts can run efficiently, and these will be stored as structured JSON dictionaries.
Turns out the data is updated within roughly 25 seconds of a player batting, so the information available is quite timely. I plan to develop a series of scripts and web/mobile interfaces that will provide a means to request MLB statistics for desired players. The output will show data relevant to fantasy baseball, since points are awarded based on a player’s performance in certain key pitching and batting statistics. But first I need to finish up the scripts I am writing to extract data on a daily and even hourly basis to build and maintain those master dictionary files I mentioned. I have these written already but am tweaking them as I review them with Allan.
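The shape of those master files can be sketched like this. The record layout, field names, and player ids below are all made up for illustration; my real scripts parse the MLB server's own dictionary structure, but the aggregate-then-dump pattern is the same.

```python
import json

def build_master_file(game_records, out_path):
    """Aggregate per-game player records into one master lookup,
    keyed by player id so request scripts can find a player fast."""
    master = {}
    for rec in game_records:  # each rec mimics one parsed game entry
        pid = rec["player_id"]
        entry = master.setdefault(
            pid, {"name": rec["name"], "games": 0, "hits": 0}
        )
        entry["games"] += 1
        entry["hits"] += rec["hits"]
    # Persist the master dictionary as structured JSON
    with open(out_path, "w") as out:
        json.dump(master, out, indent=2)
    return master

# Sample input records (ids and stats are invented)
sample = [
    {"player_id": "100001", "name": "A. Example", "hits": 2},
    {"player_id": "100001", "name": "A. Example", "hits": 1},
]
```

Because the master file is keyed by player id, a request script can load it once and answer lookups with a single dictionary access instead of rescanning every game file.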
And Then This Happened…
Yeah, it really did. Allan and I spent four hours in a tavern last Friday and, among other things, created an AWS account and set up my cloud instance to host my MLB data retrieval system. Not only did we get it completely set up, with all the necessary installations and SSH security configured, we also linked it to my GitHub account and ported over the first few scripts I had already written. We decided to give them a whirl… and guess what, they ran successfully on the very first attempt! “Can’t be!” exclaimed Allan, “This doesn’t happen, not on the first try!” After validating the results we high-fived, defined some next steps, and called it a night.
I mentioned earlier I was excited to have graduated from Python 101 with flying colors. So imagine my excitement in setting up an AWS cloud instance and actually getting a few scripts to run on my first attempt! I am well on my way to delivering the system I described and cannot wait to present statistics to an awaiting fantasy baseball team owner behind a friendly user interface. This is getting more and more fun with each step! There definitely is a disturbance in the Force within my world, as I am learning many new skills centered around Python that I can put to great use, and this will have an impact both on the company I work for and on my own personal satisfaction.
In my final blog next month I’ll go into details of the different pieces I will have built in the cloud. It will cover the input data sources and how they link together, the dictionaries I create to get to the desired data quickly, and some key code behind the services delivering the content to the user. If you want to follow along, you can browse my GitHub account here…
RBecker262 on GitHub (github.com/RBecker262)
A Fresh Perspective: Updated
In my previous blog I gave a perspective on the differences and similarities between the Python / Object Oriented world I have recently entered and the Mainframe COBOL world from which I came. I cited how vastly different the technologies are, which is obvious, but I also talked about how the underlying principles are really the same across the two.
With another month of development and increasing use of best practices under my belt, I still maintain the position that the worlds are technically different yet very similar. Within the past two weeks, as I began developing my scripts for this cloud system, we started following what’s called Trunk Based Development.
“Trunk Based Development (TBD) is where all developers (for a particular deployable unit) commit to one shared branch…” (paulhammant.com)
This concept is really no different from what we call an Enterprise Release in the mainframe world: one or more teams, each owning different parts of the system, work in unity to enhance or add features to the existing system, and come together in an organized fashion to develop, test, and release the code into production. This is controlled by a Release Coordinator, who manages the introduction of any new code into the pipeline (or trunk), ensures teams aren’t stepping on each other’s toes, and manages the installations into the various test and production environments. The technology is different, that is a given, but again the underlying principles are the same.
June is around the corner…and so is my final blog…stay tuned!
Oh and by the way…I am still getting free pizza at every meeting! #toogoodtobetrue #thankyouChiPy