This is the first of several posts on Python meta-programming. This will be a somewhat light introduction to some of the topics I hope to cover in significantly more depth in future posts. Inspired by my ongoing study of the language (as well as daily coding), especially my recent study of meta-classes in Mark Lutz’s voluminous and epic masterpiece Learning Python, I want both to extend and to share my own thinking about the power and beauty of Python, especially its “meta” aspects, which I feel differentiate it from other programming languages such as C++.
The root meanings of meta in Greek include “after” and “beyond”; but what was once merely a significant Greek-derived prefix has become, in contemporary culture, a popular word (adjective) in its own right. For example, it’s common to hear the expression “that is so meta” to refer to something that is about itself, that is, something self-referential. Technology, being a scientific field, uses such nomenclature more soberly, of course. But to my mind, Python’s intrinsic support for meta-programming remains fairly awe-inspiring. For if you, the programmer, can have your thinking shifted, however slightly, however indirectly, to ponder not only the meaning of computation but the meaning of meaning, so to speak, then coding begins to approach quite a profound human endeavor…
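To make the “code about code” idea concrete, here is a minimal sketch of a metaclass — a class whose instances are themselves classes. The names are illustrative, not taken from Lutz’s book:

```python
# A metaclass intercepts class creation itself: here, every class built
# with it is recorded in a registry. (Illustrative sketch only.)
class RegisteringMeta(type):
    registry = []

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        RegisteringMeta.registry.append(name)  # runs at class-definition time
        return cls

class Model(metaclass=RegisteringMeta):
    pass

class LinearModel(Model):  # subclasses inherit the metaclass
    pass
```

After these definitions, `RegisteringMeta.registry` holds `["Model", "LinearModel"]` — the class statements themselves have become data the program can inspect.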
This blog post is a brief reflection on the elegance of the design principles of the Scikit-Learn library. To be clear: this is not meant to be a tutorial in using Scikit-Learn. Scikit-Learn is a powerful, rich, and extensive Python library for implementing machine learning. The library provides tools for modeling (e.g., classification, regression, and clustering algorithms), model selection (e.g., grid search), preprocessing (e.g., feature extraction), and more. I maintain that the success of the library has as much to do with its interface and ease of use as with its powerful and profound functionality.
As is manifest in the Scikit-Learn “algorithm cheat sheet” diagram below, the library is large and tremendously…
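The interface I have in mind is Scikit-Learn’s uniform estimator convention: every model exposes `fit` to learn and `predict` to apply what was learned. A hand-rolled toy estimator (not Scikit-Learn code, just a sketch of the convention) makes the pattern visible:

```python
class MeanRegressor:
    """Toy estimator following the Scikit-Learn convention:
    fit(X, y) learns state; predict(X) uses it. (Illustrative only.)"""

    def fit(self, X, y):
        # Trailing underscore marks an attribute learned during fit,
        # per Scikit-Learn's naming convention.
        self.mean_ = sum(y) / len(y)
        return self  # returning self allows fit(...).predict(...) chaining

    def predict(self, X):
        return [self.mean_ for _ in X]
```

Because every estimator shares this shape, tools like grid search and pipelines can treat wildly different algorithms interchangeably — the source, I think, of much of the library’s ease of use.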
I want to discuss some of the power and limitations of Bayesian statistics as used in statistical learning, though I am limited somewhat by my lack of statistical depth! Nevertheless, I think it’s worth reviewing, even at a high level, some of the key ideas of Bayesian decision making, and discussing some of the implications for data processing. There are also some fascinating philosophical issues to consider, which I hope to touch on briefly.
The well-known Bayes’ theorem is:
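In standard notation (reconstructed here from the description that follows; the original post displayed the equation as an image):

```latex
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
```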
This says that the probability of some event “A”, given “B”, is equal to the probability of “B” given “A”, multiplied by the probability of “A”, divided by the probability of “B”. The elegance of the equation has to do with how conditional probabilities can be rearranged and inverted. If we are looking at the observation of some data, and have some hypothesis or belief, we can ultimately invert the probability of the observed data given our belief to arrive at a probability of our belief given the data — provided we have some other knowledge of the probabilities of the data and our hypothesis. …
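That inversion is easy to work through numerically. The numbers below are hypothetical (a made-up diagnostic-test scenario, not from the post), chosen only to show the mechanics:

```python
# Hypothetical inputs (illustrative only):
p_h = 0.01              # prior: P(hypothesis), e.g. having a condition
p_d_given_h = 0.95      # likelihood: P(data | hypothesis), test sensitivity
p_d_given_not_h = 0.05  # false-positive rate: P(data | not hypothesis)

# Total probability of the data: P(D) = P(D|H)P(H) + P(D|not H)P(not H)
p_d = p_d_given_h * p_h + p_d_given_not_h * (1 - p_h)

# Bayes' theorem inverts the conditional: P(H|D) = P(D|H) P(H) / P(D)
posterior = p_d_given_h * p_h / p_d  # ~0.161
```

Even with a sensitive test, the low prior keeps the posterior modest — a standard illustration of why the “other knowledge” (the prior and the marginal probability of the data) matters so much.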
In the middle of the journey of our life
I found myself astray in a dark wood
where the straight road had been lost sight of
(Dante’s Inferno Canto I, Translated by Seamus Heaney)
This is a reflection on big data architecture from the point of view of a former distributed systems coder and erstwhile financial services product manager who is pivoting back to coding, with a focus on data science. Unlike my previous Medium posts (Python function decorators and Python generator functions), this article does not proffer any concrete coding examples; it is not a tutorial; and it does not explicate specific Python language features, though it is tinged with some nostalgia from my experience with C++. Although I discuss some technologies in passing, I make no pretense of expertise, but rather try to share some ideas about the importance and usefulness of big data architectures in contemporary computing. …
Python’s generator functions provide a powerful mechanism for managing data and computation resources, but for those new to Python, they are not necessarily intuitive. In this piece, I’ll break down the mechanics of generators, while also introducing what I hope is a motivating example: a small class for managing and streaming an S3 file resource.
Given how easy it is to get started with Python and write code that actually does something (e.g., iterate over a list of values, computing and/or printing those values), it may not dawn on the new or casual Python programmer that the language builds in the notion of procrastination, or deferred computation. …
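That notion of procrastination can be seen in a few lines. A minimal sketch (the names here are illustrative, not the S3 streaming class from the post):

```python
def lazy_squares(values):
    """A generator function: calling it runs no code in its body.
    Work is deferred until the caller actually iterates."""
    for v in values:
        yield v * v  # each square is computed only on demand

gen = lazy_squares([1, 2, 3])  # no squaring has happened yet
first = next(gen)              # only now is the first square computed
```

The call to `lazy_squares` merely builds a generator object; each `next` resumes the body just long enough to produce one value. This is exactly the property that makes generators attractive for streaming large resources such as files: nothing is held in memory that hasn’t been asked for.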
Function decorators are a powerful programming tool, and Python supports them natively in the language itself through some clever syntactic sugar.
This article aims to provide a brief tutorial on the mechanics of function decorators, with some examples of their use. As a former C++ coder, I also muse a bit about decorators as one gateway (among others) to the area of meta-programming: you can argue that decorators are a higher-order feature that affords the use of design patterns at the language level.
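A small sketch of that syntactic sugar (the decorator and function names here are illustrative):

```python
import functools

def log_calls(func):
    """A decorator: takes a function, returns a wrapped version of it."""
    @functools.wraps(func)  # preserve func's name and docstring
    def wrapper(*args, **kwargs):
        wrapper.calls += 1  # record each invocation
        return func(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

@log_calls          # sugar for: add = log_calls(add)
def add(a, b):
    return a + b
```

The `@log_calls` line is nothing more than a rebinding of `add` to the wrapper that `log_calls` returns — functions are ordinary values that can be passed around and replaced, which is the higher-order character alluded to above.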
One appeal of decorators for new Python programmers (myself included) is how their use and support in the language reveal many of Python’s functional programming aspects. This functional expressivity of Python is deeply appealing to erstwhile C++ programmers like myself. …