Major Points from PyData’s “So you want to be a Python expert?” — Metaclasses, Generators, and Context Managers

Robert R.F. DeFilippi
21 min read · Sep 18, 2018


Preface

If you’re not familiar with PyData, it is a non-profit organization that provides a forum for developers to share ideas and learn from each other, specifically around Python and data.

They are also a great resource for online tutorials and lectures about everything to do with data and Python, which you can find here.

The video I want to go over is titled “So you want to be a Python expert” by James Powell from his lecture (talk?) at PyData Seattle 2017. I’m not an expert in Python by any means, so I thought it would be a great video to learn a few things.

As someone who uses Python mostly for data science and ad hoc analysis, it was interesting to see an approach to the language from someone much more knowledgeable than me, who presents examples beyond simple line-by-line execution. It also gave me a chance to think about the inner workings of Python, rather than the procedural view I generally have.

My goal is to explain some of those insights in this article.

And, just one thing before we get into the guts of this article, if you’re in the Vancouver area and are interested in Python or data drop by our local chapter of PyData and check out a few of our local speakers.

Python, Python, Python …

Note on the Syntax

Python 3.5 allows for parameter and return type hints, which can be used to annotate functions and classes. However, unlike a strictly typed language like Java, no type checking occurs at runtime. Meaning, in the example below for greeting(name: str) -> int, which specifies an int should be returned, the type will not be enforced at runtime.

# Previously in Python 3
def greeting(name):
    return 'Hello ' + name

# Now with type hints (Python 3.5+)
def greeting(name: str) -> int:
    return 'Hello ' + name

greeting('foo') # Will return a str even with 'int' specified
>>
'Hello foo'

I’ve decided not to include this syntax in the article, but it’s good to note for completeness and clarity when learning about Python.

Patterns to come later on in the article

Library Code and User / Application Code

I’ve had these two terms brought up before, but James really hit home the distinction between them.

Library code is meant to be used and reused in an application, and meant to be flexible enough to be adapted or interfaced with without changes to any of the code itself. It should have some predefined behaviour which you’re trying to invoke within the application. The user does not need to know the implementation details, and only needs to know the interfaces.

User or application code, however, is built for a single environment or a specific task. It is not meant to be reused; when another application is built, that block or section of code will not be used again.

This distinction is important when reviewing metaclasses later on in the article.

The Power of *args and **kwargs

Consider a simple sum() function that can take many different numbers, and provide the sum of all of them. You don’t know if you’re going to have 1 or N numbers, and the function should — in this case — account for every number of parameters passed in. Therefore, you should use *args.

# Create a sum function using *args
def sum(*args):
    sum_of_numbers = 0

    for i in args:
        sum_of_numbers += i

    return sum_of_numbers

Now our function can take as many parameters as we want, and can return the sum.

print(sum(1,2,3,4,5)) # 15
print(sum(1,2)) # 3
print(sum(5)) # 5

Now, by adding ** to a parameter you're able to pass in a dictionary of keyword arguments. E.g. if you want to pass a dict like {'foo': 1, 'bar': 2} into the function, use **kwargs.

def key_values(**kwargs):
    for k, v in kwargs.items():
        print("The key is {}, and the value is {}".format(k, v))

key_values(**{'foo': 1})
# The key is foo, and the value is 1
key_values(**{'bar': 2, 'baz': 3})
# The key is bar, and the value is 2
# The key is baz, and the value is 3

This was probably review for a lot of people, and maybe not for others. But I wanted to start off with something simple.

Let’s combine the two and see what we can come up with.

def combine_params(foo, *bar, baz=10, **qux):
    return foo, bar, baz, qux

print(combine_params(True, *(1,2,3,4,5), **{"key":"value"}))
>>
(True, (1, 2, 3, 4, 5), 10, {'key': 'value'})

Notice how the named parameter baz=10 comes after *bar? That makes it keyword-only: if it had come before *bar, one of the positional arguments would have been assigned to baz and its default overwritten. This is just some syntax to be aware of, but it allows us to specify named parameters within our functions while still using *args and **kwargs.
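
To make this concrete, here is a quick check (reusing combine_params from above) showing that baz can now only be set by keyword:

combine_params(True, 1, 2, 3)            # (True, (1, 2, 3), 10, {})
combine_params(True, 1, 2, 3, baz=99)    # (True, (1, 2, 3), 99, {})
combine_params(True, 1, 2, 3, 99)        # 99 just lands in bar; baz keeps its default of 10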

Looks like we have a good understanding of this process, and can move on.

Using def __repr__() for debugging

Remember this when you’re debugging. It’s going to save a lot of time.

The __repr__ function returns a string containing a representation of an object, and when defined in a class it can help you understand the properties of an instance. Its goal is to be unambiguous, in the sense that you will always know what the object is and what its parameters are.

There is a great post here by moshez who goes into much more detail than I will below. So if you find my explanation lacking, check out his answer.

To echo the final part of his post, “…[repr] is useful to express ‘this is everything you need to know about this instance’ ”. Which can be difficult for Data Scientists who usually stick to Jupyter which has terrible terrible debugging capabilities, and does not allow you to really look under the hood of a lot of the code being used.

Here is an example of __repr__ for our class Foo.

class Foo():
    def __init__(self, foo, bar, *baz):
        self.foo = foo
        self.bar = bar
        self.baz = baz

    def __repr__(self):
        # Make sure you use {!r} when formatting the repr values
        return "Class Foo(foo:{!r}, bar:{!r}, *baz{!r})".format(
            self.foo, self.bar, self.baz)

# Create the object
f1 = Foo("foo_param", "bar_param", 1, 2, 3, 4, 5)

# Test repr
repr(f1)
>>
"Class Foo(foo:'foo_param', bar:'bar_param', *baz(1, 2, 3, 4, 5))"

Simple enough, but this method is critical. It is the first step for peeking into the workings of Python, which we will be doing throughout this article.

Dunders?

What are these __ method __ things? They are called dunder methods or magic methods, but James would call them data model methods. When you want to implement some behaviour, Python calls the corresponding method for you behind the scenes, so you don't have to invoke it directly.

Consider the class above Foo() and when f1 was created. We defined __init__ but we did not define __new__. However, when that instance was created Python called both __new__ and __init__.

This is a pattern James focuses on during the talk, where a developer has some top-level function or top-level syntax, which is then invoked by some corresponding __dunder__ method.

E.g. we want to get the bar attribute from the f1 object created. This is our top-level syntax. So we write f1.bar, and in the background Python will call f1.__getattribute__('bar').

Or consider if we want to create a new Foo. This, again, is our top-level syntax. So we write f2 = Foo(), and in the background Python will call Foo.__new__() (followed by __init__()) to build f2.

Just to reiterate, in Python you have a repeated pattern: some simple top-level syntax for what you want to do, and a corresponding method which does the implementation. The pattern here is important, because if you can work out what the top-level syntax maps to, you'll be able to implement it in Python quite easily.
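
As a rough sketch of this pattern, here are a few common pieces of top-level syntax and the data model calls Python makes behind them (using the f1 instance from above):

# Top-level syntax       ->  corresponding data model call
f1.bar                   #   f1.__getattribute__('bar')
len([1, 2, 3])           #   [1, 2, 3].__len__()
f1 == f1                 #   f1.__eq__(f1)
repr(f1)                 #   f1.__repr__()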

Metaclasses and Metaprogramming

The technical lead at the company I'm currently at would chastise me to no end for even writing about this, but I thought it best to include this section for completeness. The topic is metaprogramming, which should set off alarm bells.

However, let's see if there is a use case for it in Python.

Credit to https://blog.ionelmc.ro for the image

Sometimes, for specific use cases, you can get a greater understanding of the code you're given — this is our library code — by modifying how its classes are created. That is what metaclass programming is for.

Just keep this in mind, and this is right from the Python 3 docs, “It’s worth re-emphasizing that in the vast majority of cases, you don’t need metaclasses.”

However, it will be useful in your Python career to understand what metaclasses are, how they are useful, and what can be achieved if they are used correctly.

Consider this: in Python 3, an object's type and its class are one and the same, and classes themselves are instances of type, as shown below.

class Foo():
    pass

foo = Foo()
type(foo) is foo.__class__ # True
type(foo)  # <class '__main__.Foo'>
type(Foo)  # <class 'type'>
type(type) # <class 'type'> ... this is like Inception

Remember, in Python, everything is an object: classes, integers, strings, etc. And every type in Python is defined by a Class.

Because classes are also objects, they can be modified in some way outside of their construction, by adding or removing fields and/or methods after the class is defined.

Foo.bar = 49
Foo.funct = lambda self: print("Foo funct")
foo = Foo()
foo.bar # 49
foo.funct() # "Foo funct"

We can also use setattr to do the same thing, either on the class itself or on a single instance.

setattr(Foo, "baz", "this is baz")
setattr(foo, "qux", "this is qux")
# Check where the method has been applied
Foo.baz # "this is baz"
foo.baz # "this is baz"
foo.qux # "this is qux"
Foo.qux # AttributeError: type object 'Foo' has no attribute 'qux'

A key point to understand is, first, that the assignment of qux occurs on the instance of the class and not the class itself. And second, that setattr — which is just foo.__setattr__('qux', 'this is qux') — is called just before the actual assignment occurs. Meaning we could hook into and debug the assignment of an attribute to the instance or to the class.

This is important if we’ve been given some library code, and we need to evaluate how it is being executed or we need to protect against any changes that could occur to the code.
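
As a minimal sketch of that idea (hypothetical, not from the talk), overriding __setattr__ lets you log or veto every assignment before it lands on the instance:

class GuardedFoo:
    def __setattr__(self, name, value):
        # Log every assignment so we can see what touches the instance
        print("setting {!r} = {!r}".format(name, value))
        if name == 'qux':
            raise AttributeError("qux may not be set on this class")
        super().__setattr__(name, value)

g = GuardedFoo()
g.bar = 49        # prints: setting 'bar' = 49
# g.qux = 'nope'  # would raise AttributeError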

With Python, we can explicitly code the creation of a class from a metaclass. The arguments for creating a class are, first, the name of the class; second, a tuple of base classes; and third, a dict for the namespace of the class.

class Foo:
    pass

# Really means
Foo = type('Foo', (), {})
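
To see those three arguments doing something, here is a small sketch (the names are made up for illustration) that builds a class entirely with type():

# name, tuple of base classes, and a dict for the class namespace
Base = type('Base', (), {})
Child = type('Child', (Base,), {
    'answer': 42,
    'describe': lambda self: "answer is {}".format(self.answer),
})

c = Child()
c.answer             # 42
c.describe()         # 'answer is 42'
isinstance(c, Base)  # True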

Now let’s build a metaclass so we can start to show how this all fits together.

# Create a metaclass by subclassing type
class MetaClass(type):
    # The methods below will be filled in one at a time

The important data model methods we're going to override, executed in order as the class is created, are:

  1. __prepare__ (the first method called), then;
  2. __new__ (the second method called), then;
  3. __init__ (the final method called).

Why are these important?

When a class is created, __prepare__ is called first. The key insight here is that __prepare__ is called before the class object is created, and must return a dict that will be used as the namespace for the class body. So, if you want to keep track of the class's attributes as they are assigned, they will end up in this dict.

To write this you need to use the @classmethod decorator — I'll go over decorators later — so that __prepare__ receives the metaclass itself as its first argument.

@classmethod
def __prepare__(metacls, name, bases, **kwargs):
    print("__prepare__ called")
    print('metacls: ', metacls)
    print('name : ', name)
    print('bases: ', bases)
    print('kwargs: ', kwargs)
    return {}

Next, __new__ is called after __prepare__, and returns the class being created. You're able to edit the construction of the class at this point: remove or add functions, stop creation entirely, add parameters, etc. The return value of __new__ should be the new class object you're creating.

Also, remember that if __new__ does not return an instance of the metaclass (cls here), then the metaclass's __init__ method will not be invoked and you'll need to call it manually.

I’ve also included a custom method new_method() during this process, so we can check it later when we create our final object.

def __new__(cls, name, bases, namespace, **kwargs):
    print("\n__new__ called")
    print('cls: ', cls)
    print('name : ', name)
    print('bases: ', bases)
    print('namespace: ', namespace)
    print('kwargs: ', kwargs)

    def new_method(self, x):
        '''
        New method added during object construction
        '''
        return x

    newClass = type.__new__(cls, name, bases, dict(namespace))
    setattr(newClass, new_method.__name__, new_method)
    print("new method: ", newClass.new_method)
    newClass.new_param = 'new_param'
    return newClass

Finally, __init__ is called on the class. This is not a creation, but an initialization. Because we're working with metaclasses, note that this __init__ runs on the class being created, not on its instances. It is only called once in the lifetime of the class, and must not return anything.

I’ve also put in the default_param as a parameter we’ll use when creating our class.

def __init__(cls, name, bases, namespace, default_param, **kwargs):
    print("\n__init__ called")
    print('name : ', name)
    print('bases: ', bases)
    print('namespace: ', namespace)
    print('kwargs: ', kwargs)
    cls.default_param = default_param

The point here is, __new__ should be implemented in your metaclass when you want to control how a new class is created, while __init__ should be used if you want to change the initialization of the new class after it has been created.

Let’s put it all together

# foo.py
if __name__ == "__main__":

    class Foo(metaclass=MetaClass, default_param='default_param'):

        two_param = 2

        def bar(self, param):
            pass

        def baz(self):
            return self.two_param

    f1 = Foo()
    print(f1.new_method('foo'), f1.new_param, f1.two_param,
          f1.default_param)
...
$ python3 foo.py
>>
__prepare__ called
metacls: <class '__main__.MetaClass'>
name : Foo
bases: ()
kwargs: {'default_param': 'default_param'}
__new__ called
cls: <class '__main__.MetaClass'>
name : Foo
bases: ()
namespace: {'__module__': '__main__', '__qualname__': 'Foo', 'two_param': 2, 'bar': <function Foo.bar at 0x106b3cc80>, 'baz': <function Foo.baz at 0x106b3cd08>}
kwargs: {'default_param': 'default_param'}
new method: <function MetaClass.__new__.<locals>.new_method at 0x106b3cd90>
__init__ called
name : Foo
bases: ()
namespace: {'__module__': '__main__', '__qualname__': 'Foo', 'two_param': 2, 'bar': <function Foo.bar at 0x106b3cc80>, 'baz': <function Foo.baz at 0x106b3cd08>}
kwargs: {}
foo new_param 2 default_param

We can see in the __new__ output that default_param and new_method are printed to the console, and the final print shows they exist on f1, so we were able to edit the creation of the Foo class.

Why is this important at all? It just seems we made it harder to create a class by creating a metaclass, and adding all these extra functions and class parameters.

Well, think about code you have access to (user or application code) and those you don’t (library or package code). Imagine you have access to an API and it is creating classes or objects within your code and you need to make sure that everything created has specific parameters. Because you don’t control when changes occur within that API or are even notified when changes occur, you need to add defences to your code. This is the point of metaclasses.

E.g. you want to protect against bar not being defined in the class, and you want to make sure two_param is always 2 and default_param is set correctly.

We can easily check this by adding the following code to __new__:

# Add this to __new__
...
# Check for bar in the class namespace
if 'bar' not in namespace:
    raise NameError("bar is not defined")

# Check that two_param is 2
if namespace.get('two_param') != 2:
    raise ValueError("Param 'two_param' is not 2. It is",
                     namespace.get('two_param'))

# Make sure the default is set correctly
if kwargs.get('default_param') != 'default_param':
    raise ValueError("Param 'default_param' is not 'default_param'. It is",
                     kwargs.get('default_param'))

So now if something changes and we're no longer able to get certain information from our class — say two_param is no longer 2 — our class won't even get created, before anything serious occurs at runtime. Meaning we can start getting into the debugging of why things changed.

Finally, you're also able to make sure a class has the attributes you want it to have before you call them, by using hasattr() on the class after creation. This is an additional check you can perform.

assert hasattr(Foo, 'new_method'), "There is no new_method method"
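
For instance, assuming the checks above have been added to MetaClass.__new__ (a hypothetical run, not from the talk), leaving bar out of the class body now fails at definition time, before any instance exists:

class BrokenFoo(metaclass=MetaClass, default_param='default_param'):
    two_param = 2
    # no bar defined here
# -> raises NameError: bar is not defined, so BrokenFoo is never created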

Disassemble with dis()

The dis package allows you to look at Python functions and methods by disassembling them into readable bytecode.

Let’s take a look at the Foo class we just made, and see the methods associated with it.

import dis

dis.dis(Foo)

Disassembly of bar:
 69           0 LOAD_CONST               0 (None)
              2 RETURN_VALUE

Disassembly of baz:
 72           0 LOAD_FAST                0 (self)
              2 LOAD_ATTR                0 (two_param)
              4 RETURN_VALUE

Disassembly of new_method:
 43           0 LOAD_FAST                1 (x)
              2 RETURN_VALUE

I'm not going to go into what all of this means in the interest of length — this is long enough already — but if you're interested in how to debug or optimize your code using dis, check out this article here.

Using Function Decorators aka. Wrapping Functions

In Python everything is defined at runtime, and this idea can be used to wrap functions with some additional behaviour.

Consider a web app which requires a user to login. By adding a session state to the app, you’re able to track when users started using the product and even determine if a user should have access to various parts of the app.

By using decorators — denoted in Python by the @ above a function — you're able to modify existing functions without having to change their code. Remember the @classmethod from above?
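
A minimal sketch of that web-app idea (the session dict and function names here are hypothetical, just to show the shape) might look like this:

from functools import wraps

session = {'logged_in': False}  # hypothetical session state

def login_required(func):
    @wraps(func)  # wraps is covered a little further down
    def wrapper(*args, **kwargs):
        if not session.get('logged_in'):
            return 'Please log in first'
        return func(*args, **kwargs)
    return wrapper

@login_required
def view_dashboard():
    return 'the dashboard'

print(view_dashboard())   # Please log in first
session['logged_in'] = True
print(view_dashboard())   # the dashboard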

There are a few major concepts to understand when thinking about decorators, or considering building one:

  1. Decorators are just functions;
  2. The decorator takes the decorated function as a parameter;
  3. The decorator returns a new function, and;
  4. The returned function takes the same number and type of parameters.

Sounds easy enough, so let’s create a simple function called add and see how we can add a decorator to it.

from inspect import getsource, getfile

# The function we're going to be reviewing
def add(x, y=10):
    return x + y
add(10)
add(10, 30)
>>
20
40
# Take a look around the function
add.__defaults__
>>
(10, )
add.__name__
>>
'add'
add.__module__
>>
'__main__'
add.__code__
>>
<code object add at 0x10a8bfd20, file "<ipython-input-69-a85e5a00abd1>", line 3>
add.__code__.co_argcount
>>
2
add.__code__.co_varnames
>>
('x', 'y')
getsource(add)
getfile(add)
>>
'def add(x, y=10):\n return x + y\n'
'<ipython-input-69-a85e5a00abd1>'
type(add) # This is passing a function to a function
>>
<class 'function'>

What is the problem we're trying to solve with decorators? Given the option between writing a complete, perfect solution or a simple one, write the simple solution. With Python you have more room to make mistakes initially, and when the problem gets more complex you're able to go back and refactor easily, compared to, say, having to go back and refactor in C++.

Let’s see if we can write another function, to return our function.

def wrap_add(add_function, x_value):
    return add_function(x_value)
wrap_add(add, 30)
>>
40
wrap_add(type, 30)
>>
<class 'int'>
wrap_add(type, wrap_add)
>>
<class 'function'>

The point of this is to show functions can be wrapped (nested) inside other functions. If you're familiar with Java, consider inner and outer classes as a parallel example.

This is how wrap_add(add, 30) executes:

  1. wrap_add is called with add and 30 as parameters
  2. add is called inside wrap_add with 30 as the parameter
  3. 30 + 10 = 40 ... and 40 is returned by the inner function
  4. 40 is returned by the outer function

By having our wrap_add return a function, we're able to create our decorator. What about having many — or no — arguments passed into our add? Let's just modify our function quickly.

# Allow for multiple values, and also have a default value
def add(*x_vals, y=10):
    for x in x_vals:
        y += x
    return y

add(1,2,3,4,5)
>>
25
add() # What about with no arguments
>>
10
lst = [1,2,3,4,5]
add(*lst)
>>
25

Now let's create a decorator for our add. As far as Python is concerned, we're just creating another function which returns — but does not invoke — another function. We want to check if an input is a str, and if so, we want to print a message to the console instead.

Here is the basic structure of the decorator check_for_strings.

We also need to make sure the wrapped function still presents itself correctly to the interpreter (its name, docstring, and so on). To do this we use wraps from the functools package inside our check_for_strings function. functools is part of the standard library, so don't worry about installing any new packages.

from functools import wraps

def check_for_strings(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        # Do things here before the decorated function
        if this_thing:
            return func(*args, **kwargs)

        elif that_thing:
            return func(*args, **kwargs)

        else:
            return print('Call this instead of the decorated function')

    return wrapper # Notice how it is returned, not invoked

Let's quickly review this. The nested function wrapper(*args, **kwargs) will call the func passed into our outer function check_for_strings(func), along with any arguments, depending on the if/else blocks.

By adding @check_for_strings above the add function's signature, we're telling Python to use add as the func parameter in check_for_strings.

Let’s see this in action

# dec.py
from functools import wraps

# Create the wrapper function
def check_for_strings(func):

    @wraps(func)
    def wrapper(*args, **kwargs):
        print('Calling wrapper to check for strings')

        str_in_args = 'str' in [type(x).__name__ for x in args]
        str_in_kwargs = 'str' in [type(x).__name__ for x in
                                  kwargs.values()]

        if not str_in_args and not str_in_kwargs:
            print('args and kwargs have no strings')
            return func(*args, **kwargs)

        return 'There are strings in the parameters. Check again'

    return wrapper

Now let’s create a simple application and run it twice.

@check_for_strings
def add(*args, y=10, **kwargs):
    print('Starting add')
    sum = y
    for arg in args:
        sum += arg
    for _, val in kwargs.items():
        sum += val
    return sum

if __name__ == '__main__':
    print('Starting __main__')
    print(add(10, 20))
    print(add(10, 'a'))
$ python3 dec.py
>>
Starting __main__
Calling wrapper to check for strings
Starting add
40
Calling wrapper to check for strings
There are strings in the parameters. Check again

The add(10, 20) ran correctly, and our add(10, 'a') printed a message to the console instead of failing and stopping the entire application. Nice.
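
One quiet benefit of the @wraps(func) line: the decorated add still looks like add from the outside, which matters for debugging and introspection. A quick check, assuming the dec.py definitions above:

print(add.__name__)    # 'add', not 'wrapper', thanks to functools.wraps
print(add.__module__)  # '__main__'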

The logic behind this is abstraction: take a check that was previously scattered and complex, and put it into one place for reusability. That place is our decorator, which is now available throughout the project to wrap other function definitions.

And, we’re able to take messy code that was written in haste and wrap it in a function that can add further complexity without editing existing code.

That’s fairly dominant.

Generators

A generator is a function that behaves like an iterator. But what a generator can really do is take a long computation and break it into smaller parts. Doing this enables greater control over each step of the computation, and greater control over memory.

Let's quickly go over the eager loading vs. lazy loading divide.

With eager loading, everything is loaded into memory all at once before it can be accessed. Which is good if you don't have a lot of data, or if you're able to cache the information for quick access.

However, with lazy loading, information is only loaded into memory when it is requested. This is useful when you're not sure how long it will take to load an entire set, or if the set is so large that loading it all would hamper performance.
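
A rough way to see the difference in memory terms (a sketch, not from the talk) is to compare a list, which is built eagerly, with a generator expression over the same range, which is lazy:

import sys

eager = [x * x for x in range(1000000)]  # every value built up front
lazy = (x * x for x in range(1000000))   # values produced on demand

print(sys.getsizeof(eager))  # several megabytes for the list
print(sys.getsizeof(lazy))   # a couple of hundred bytes, whatever the range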

Consider these two snippets for the Adder class and the add function.

class Adder:
    def __call__(self, x, y):
        return x + y

# How is this different from the following?
def add(x, y):
    return x + y

What about the memory requirements of each of them? Would each give the entire result all at once, or can you process each of the elements one by one?

Do you need all of the values at once? Because if you don't, computing them all is wasteful in both time and memory.

What if you wanted to take one element at a time, evaluate it, and then discard it? This is, in fact, a basic looping pattern. It could be done with an iterator, but we run into the problem brought up earlier: how much data we need to load into memory, and whether we have performance constraints.

The core idea here is that a generator allows for interleaving with other code and enforces sequencing, which means if we have performance constraints from loading data into memory, we can bypass them by only loading part of it, running some functions, and repeating the sequence until complete.

When the function hits a yield, the generator is paused, and local variables and their state are remembered between successive calls. Finally, when the function ends, StopIteration is raised automatically on further calls.

This is quite powerful.

Think of yield as a return statement which does not end the function. The generator will not move on from the first yield to the second yield until next is called again. Meaning we can have actions occur outside the generator, and its state will be remembered until next is called again.

# Sample generator
def sample_generator():
    yield first_thing()
    yield second_thing()
    yield last_thing()

# You can use the generator in a for loop
for item in sample_generator():
    print(item)

# Or you can use next
simple_gen = sample_generator()
next(simple_gen) # first_thing is called
next(simple_gen) # second_thing is called
next(simple_gen) # last_thing is called
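
Here is a small concrete version of the same idea, with hypothetical print calls standing in for first_thing and friends, showing the pause-and-resume behaviour and the StopIteration at the end:

def three_steps():
    print('setting up')
    yield 1
    print('still alive between yields')
    yield 2
    print('tearing down')

gen = three_steps()
print(next(gen))  # prints 'setting up', then 1
print(next(gen))  # prints 'still alive between yields', then 2
# next(gen)       # would print 'tearing down', then raise StopIteration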

And you can do things in an even more Pythonic way by using generator expressions. Here is a trivial example to check for prime numbers. The is_prime function is based off Wilson's Theorem and this post here.

Even though we created a generator without the use of yield in is_prime, iterating over it behaves the same as a for loop.

# gen.py
from math import factorial

def is_prime(x):
    return factorial(x - 1) % x == x - 1

if __name__ == "__main__":
    primes = (i for i in range(2, 10000000000) if is_prime(i))
    print(primes)
    for x in primes:
        print(x)
$ python3 gen.py
<generator object <genexpr> at 0x105c640f8>
2
3
5
7
...
# Will print for a long time ...

By consuming the generator with next, it only produces what and when we want — in this case one prime at a time — which secures us against eager loading and memory problems. (Adding a yield inside is_prime would not help here, by the way; a generator object is always truthy, so the filter would let every number through.)

if __name__ == "__main__":
    primes = (i for i in range(2, 10000000000000000) if is_prime(i))
    print(primes) # Will show a generator object
    print(next(primes))
    print(next(primes))
    print(next(primes))
    print(next(primes))
>>
<generator object <genexpr> at 0x1049d90f8>
2
3
5
7

The Final Example: Context Manager

This has already been a long article, but there is one more thing to go over.

Consider an event or action where a setup occurs (create or initialize) and then a teardown happens after. How do we tie these two actions together so that, even if an error pops up, both actions still occur?

This pattern is implemented with two class methods: __enter__ (the setup) and __exit__ (the teardown); once __enter__ succeeds, __exit__ is called no matter how the block finishes.

Think about opening a file (the setup) and once you’re done with it you need to close it (the teardown). Here are the basic methods for our context manager.

def __init__(self):
    # Not a requirement, but can be useful to throw this in
    # Could be used to check credentials
    ...

def __enter__(self):
    # This is where all the setup code goes
    # Runs as the with block is entered
    ...

def __exit__(self):
    # This is where all the teardown code goes
    # Runs as the with block is left
    # Should be able to handle any failure with exception handling
    # if something does occur
    ...

The __enter__ should always be called before the __exit__, as one would expect.

Is it possible you’ve used or seen a context manager before? What about when you’ve opened a file?

with open('some_text_file.txt') as file:
    for line in file:
        print(line)
# __enter__ is called on the way in, and its return value (the open
# file object) is what gets assigned to file

Now let's make our base class to edit later. We want to connect to some sort of database with our fictional package database.package. To connect, it takes a dictionary of arguments. First we want to capture the arguments during initialization (__init__), then connect to the database and do our thing (__enter__), then close everything down (__exit__).

To connect to a db, you usually need:

  • Host IP (127.0.0.1);
  • Username (foo);
  • Password (bar), and;
  • The database name (testDB).

config = { 'host': '127.0.0.1',
           'user': 'foo',
           'password': 'bar',
           'database': 'testDB' }

Let's start creating our class ConnectToDb to connect to some database.

import database.package

class ConnectToDb():

    def __init__(self, config):
        self.config = config # A dictionary of values

    def __enter__(self):
        '''
        Place the setup code here. It is something you'll need to do
        every time you want to talk to the db.
        '''
        self.conn = database.package.connect(**self.config)
        return self.conn # So 'with ConnectToDb(config) as conn' works

    def __exit__(self, excep_type, excep_value, excep_trace):
        '''
        Place the teardown code here. It is something you'll need to do
        every time you want to shut down the connection.
        '''
        self.conn.close()

Looking fairly standard. We've taken the basic operations of connecting to a database through a package, and wrapped them in a class that performs setup (__enter__) and teardown (__exit__) operations. Everything should run smoothly once __enter__ succeeds; however, if an exception is raised during __enter__ itself, execution will terminate and our __exit__ will not be called.

Not good, and outside of the pattern we described.

However, if something goes wrong inside the with block, three arguments are passed to __exit__ by the interpreter: excep_type, excep_value, and excep_trace. Remember, the teardown code here must execute as per the definition of the context manager, even if exceptions occur. So if something goes wrong, you want the exception to fire only after the code which must fire, fires.

# Create some exceptions to fire
class ConfigError(Exception):
    pass

class InternalDbError(Exception):
    pass

class SomeCustomError(Exception):
    pass

...

# Edit the ConnectToDb class: replace our __exit__ method so it
# re-raises the exceptions after the connection closes
def __exit__(self, excep_type, excep_value, excep_trace):
    self.conn.close()

    if excep_type is database.package.errors.ConfigError:
        raise ConfigError(excep_value)

    elif excep_type is database.package.errors.SomeCustomError:
        raise SomeCustomError(excep_value)

Now our simple context manager is sure to close the connection no matter what happens, as the exceptions are handled after self.conn.close(). Now our ConnectToDb conforms to the pattern laid out at the beginning.
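
For completeness, here is how the finished class might be used (a sketch, assuming database.package exists and that __enter__ returns the connection as above):

config = {'host': '127.0.0.1', 'user': 'foo',
          'password': 'bar', 'database': 'testDB'}

with ConnectToDb(config) as conn:
    # Do our thing with the open connection; the cursor calls below are
    # hypothetical, depending on what database.package actually exposes
    cursor = conn.cursor()
    cursor.execute("SELECT 1")
# __exit__ has run by this point: the connection is closed, even if
# the block above raised an exception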

Final Thoughts

Expert-level code is not code that uses all the features together. It is code with a specific sense of where and when a certain feature or pattern should be used. Code that does not waste the time of the person writing it or that of the person reading it. The language itself should provide the core pieces you need, and it is up to you to understand those core pieces.

Remember what these features are for. The core meaning behind them is what is important.

As always, I hope this was helpful and you’ve learned something new.

Cheers,

