DjangoCon 2017 in Florence

Gerard Puig
Published in Elements blog
Apr 20, 2017

Odeon Cinema, in the middle of beautiful Florence, was the center of DjangoCon Europe 2017, a conference held earlier this month, focused on Python and in particular the Django framework. The first three days were days of speeches and presentations, followed by two days of workshops which they called sprints.

Developers from both our offices got the chance to attend the conference days. A delegation of five boarded planes from Barcelona and Amsterdam, expecting to learn a lot about Python and in particular Django. After all, the conference was called DjangoCon for a reason.

After a short flight we arrived in Florence the evening before the conference started. We all met in a nice restaurant for dinner and some time to catch up. The next morning, registration opened at 09:00. We arrived at the Odeon Cinema only to end up at the back of a long registration line. The organization was a bit messy in the beginning, but we later heard that they had only had the evening before to set everything up.

Once everybody was inside, the welcome talk started with some general announcements, after which the first talk began. This was the beginning of what turned out to be a conference whose talks were not so much focused on technical challenges as on the community.

The talks we found most interesting are discussed below.

Weird and wonderful things to do with the ORM

Expressions, lookups and transforms: Marc Tamlyn started his talk with a collection of non-standard techniques for working with object-relational mappers (ORMs), which I found very interesting. He gave an example of how a custom lookup can provide a cleaner API, such as:

UsedVehicle.objects.filter(
registration_plate__year=2007
)

This requires a custom year lookup for the registration_plate field, which enables you to run a more complex SQL query while keeping the beauty of the Django ORM. Technically, you define a lookup class whose as_sql method produces the custom query string, keeping the logic encapsulated. Afterwards, you only need to register it with the register_lookup method before you can use it. For more detailed information, visit the Django docs.

Descriptors: who knows about descriptors? And who has used them? These were the two questions Marc asked the audience at DjangoCon. Only a few people raised their hands, which drew even more attention to the talk.

He introduced descriptors using a vehicle object and its transaction logic (purchase, credit_check, or any other operation related to a vehicle). The first approach is usually to place the logic inside the vehicle object, using properties, and it will work, yes, but it overloads the model and clutters the code base. A descriptor can give us a better code structure by decoupling that logic from the object.

So what is a descriptor? By definition, a descriptor is an object that implements any of the __get__(), __set__(), and __delete__() methods, but it can also be thought of as a "reusable property". A descriptor can encapsulate complex logic which can be shared across different objects.

This is a very simple example to show how to use a descriptor:

class CustomFieldProxy(str):  # `unicode` in the original Python 2 code
    @property
    def transaction(self):
        return "complex transaction logic"


class CustomFieldDescriptor(object):
    def __init__(self, name, cls):
        self.name = name
        self.cls = cls

    def __get__(self, instance=None, owner=None):
        # Wrap the raw attribute value in the proxy class.
        value = instance.__dict__[self.name]
        return self.cls(value)


class AObject(object):
    custom_field = CustomFieldDescriptor('name', CustomFieldProxy)

    def __init__(self, name):
        self.name = name

Let’s see the results:

>>> obj = AObject('original name')
>>> obj.name
'original name'

>>> obj.custom_field.transaction
'complex transaction logic'

So we have managed to link a property called transaction to an object without having to write the code inside the model itself. We can also attach the same logic to another object, since it only takes instantiating a descriptor field.

Data internationalization with Django

In this talk Raphael Michel compared the different data internationalization options available for Django.

Internationalization of static data is already handled very well by Django itself, but i18n of dynamic/user data needs to be added via third-party libraries. Of all the libraries we saw, these were the best-maintained ones:

  • django-hvad
  • django-modeltranslations
  • django-klingon
  • django-i18nfield
  • django-parler
  • django-nece

On the data layout side there are three different behaviors to highlight between all the libraries studied:

  • modeltranslations: Separate fields on the same table, which means we need to run migrations for each new language we add.
  • parler, hvad, klingon: The translations live in a separate table linked by a foreign key per model, so filtering is harder due to the joins between tables.
  • nece, i18nfield: Translations are stored in compound fields, which means you are either PostgreSQL-specific or lose the ability to filter.

Regarding the model definition, there are again three different styles:

  • parler, hvad, klingon, nece: Defined by inheriting from a custom base class (either via a special field class or a Meta class attribute).
  • i18nfield: By defining a model field with a custom field type.
  • modeltranslations: Registration based; you define an "options class" with the proper descriptor. It feels complex, but allows you to easily integrate third-party apps.

In the final part of the study, there were, again, three separate ways of handling language instantiation, depending on the library used:

  • parler, hvad, nece: Handle one language at a time (via explicit call or context).
  • modeltranslations, i18nfield: Handle all at once (e.g. via lazy evaluation or generated per-language attributes).
  • klingon: Handle only the default language, and every other language explicitly at all times.

At the end, Raphael presented a comparison table in which we can easily spot the library that best suits each use case:

Notable are klingon's and i18nfield's inability to filter well, and the excellent form support in modeltranslations and i18nfield. Admin support is fairly easy to implement in all of these except nece, which seems to lack any documentation on how to display anything other than the raw JSON data. Performance does not vary wildly, but klingon and parler stand out above the others.

The amazing world behind your ORM

This talk was presented by Louise Grandjonc, lead developer at Ulule. The main goal was to show us how important the performance behind the ORM (object-relational mapper) is and how we can analyze our queries. Louise started by introducing the tools we have to analyze Django ORM queries: as usual she mentioned django-debug-toolbar and logging all queries to the runserver output. In the end she focused on database logs, in this case PostgreSQL logs.
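One common way to get that query log is to enable Django's django.db.backends logger in your settings; a sketch (it only emits SQL when DEBUG is True) could look like this:

```python
# settings.py (sketch): with DEBUG = True, this prints every SQL
# statement Django executes to the console.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'loggers': {
        'django.db.backends': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
    },
}
```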

She continued by explaining the difference between a query executed from a template and one executed from a view. It is important to know where a query happens in order to apply caching in the right place.

To avoid an insane amount of queries, she reminded us to use select_related and prefetch_related. select_related performs a join between the tables so that only one query is needed. We will run into performance issues if the tables are big, with a lot of columns and no indexes.

prefetch_related instead performs a second query on the related table, avoiding a big join at the cost of a few extra queries.

Originally published at www.elements.nl on April 20, 2017.
