Playing with face detection in Pharo

Modelling pictures and faces to explore Azure Face API

In this tutorial we build a tiny object-oriented model for highlighting faces within pictures. We also extend the IDE while developing the model.


Pharo is a live, object-oriented programming language and environment, offering a unique combination of concepts. Think pure and deeply reflective objects combined with a moldable IDE available at runtime and a kind of persistent memory. This enables the creation of new development tools within minutes instead of days, opening new ways to approach software development.


Outlook

In this tutorial we build an object-oriented model for an application that uses the Microsoft Azure Face API to detect and highlight faces in a gallery of pictures:

This tutorial requires Pharo 6.1. Connecting to the Face API further requires a subscription key. You can also follow the entire tutorial without getting a subscription key; for that you need to load the code of the tutorial, as it contains the data returned by the Face API. Running the complete application requires Bloc, a new graphical framework for Pharo. More details on how to load the code for the tutorial are on the GitHub page. The short version is:

Metacello new
baseline: 'CognitiveServiceDemo';
repository: 'github://chisandrei/cognitive-service-demo/src';
load.

Starting with a basic model

We start by designing a simple object-oriented model for our application. Given that we want to highlight faces within pictures, our model needs at least two entities, Picture and Face. A picture then has a list of faces, and a face knows the picture it belongs to. We also add a rectangle attribute to a face to indicate the contour of that face within the picture.

We could create our application without this model. However, we can leverage explicit entities to have a structure for our application that is closer to the actual domain. Once we have domain entities as explicit objects we can extend the IDE with custom views through which we can have high-level conversations about our domain.

Implementing the Picture class

To implement this model in Pharo we first create a class modelling a picture. As a side note, we use the prefix CSD for the classes defined in this tutorial. The classes in the reference implementation have the prefix CS.

Object subclass: #CSDPicture
instanceVariableNames: 'faces pictureForm'
classVariableNames: ''
package: 'Cognitive-Services-FaceAPI-Demo'

The picture has a list of faces stored within the faces attribute that we can initialise with an empty list. The picture also has a pictureForm attribute for storing the actual pixels; this will be an instance of the Form class. To interact with Picture objects, for now let us only create accessors for the pictureForm attribute and a getter for the faces attribute.

"initialization"
CSDPicture>>#initialize
super initialize.
faces := OrderedCollection new
"accessing"
CSDPicture>>#pictureForm
^ pictureForm
"accessing"
CSDPicture>>#pictureForm: aForm
pictureForm := aForm
"accessing"
CSDPicture>>#faces
^ faces

Now that we have the first version of our model we can play with it. To do that we need a picture. For this tutorial we’ll use the following picture available on Wikipedia.

Einstein with Habicht and Solovine (wikipedia)

We can load this picture into Pharo using the Zinc HTTP client. We configure the client to do a single request, expect a JPEG image, and create a Form object from the response data. To access the URL we can copy it to the clipboard and then retrieve it directly from Pharo using the Clipboard class.

pictureUrl := Clipboard clipboardText asString.
pictureForm := ZnClient new
beOneShot;
accept: ZnMimeType imageJpeg;
contentReader: [ :entity |
ImageReadWriter formFromStream: entity readStream binary ];
get: pictureUrl.
picture := CSDPicture new
pictureForm: pictureForm.

Rather than just looking at static snippets of code, we’d like to execute them and interact with the resulting objects. In Pharo, the Playground is the tool that allows us to do this. We can execute first the code to download the image and inspect the resulting object. This opens a new pane with that object.

Inspecting a Form object using the Raw view.

In this case the result is a Form object. By default the Raw view is selected, showing us the actual technical implementation of this object. However, this only allows us to reason about how the picture is stored as a bitmap. This can be useful if we are working on implementing the Form class. If we are just consuming its content, seeing how the picture actually looks can provide more value. Luckily, every Form object provides a second view, Morph, that does just this.

Looking at a Form object using the Morph view shows its graphical representation.

Now, we can immediately see that we got the right picture, and can proceed to create and inspect the picture object.

Inspecting a Picture object using the Raw view.

Creating a custom view for Picture objects

Like in the case of the form object, when inspecting the Picture object, the inspector shows us by default the Raw view. This allows us to check the implementation of this object and see if it is what we expect. However, after assessing the implementation, we’d also like to see how the attached picture looks. Unfortunately, this object does not provide us with such a view.

A first solution is to select in the Raw view the attribute pictureForm. This opens the form object in a new pane to the right where we can see the picture.

Navigating from a picture object to the form containing the actual pixels.

This requires a navigation in the inspector, for which we need to have knowledge about the internals of this object. Here we need to know that the picture is stored in the pictureForm attribute. For more complex objects, it might not be as obvious what attribute or set of attributes are directly relevant for understanding a domain-specific aspect. This can also be the case if other developers need to understand our code.

Hence, another solution is to create a new custom view for this object that shows us the graphical representation of the picture.

Given that Pharo has a moldable object inspector we can easily add this extension. We attach an extension to an object by creating a new method in the class of that object and adding a specific annotation to that method (gtInspectorPresentationOrder:).

"inspection"
CSDPicture>>#gtInspectorPictureIn: composite
<gtInspectorPresentationOrder: 25>
composite morph
title: 'Picture';
when: [ self pictureForm notNil ];
display: [ (AlphaImageMorph withForm: self pictureForm)
layout: #scaledAspect ]

This method takes as parameter a builder object that can instantiate various types of views. In this case, we select a view for displaying graphical components (composite morph). Next, we need to configure the view. For this we specify a title (title: 'Picture') and use the method when: to indicate that this view should be available only if the picture has a pictureForm. We use the display: method to indicate what this view should display. We could directly return self pictureForm; however, we can improve the view by wrapping the form in a graphical component that automatically resizes it to fit within the available space. This is not something that we want in general for the Morph view of a Form, as that view needs to work for all Form objects.

Now, with just 7 lines of code we have a new way to interact with our object.

Inspecting a picture object with a view that shows its graphical representation.

Implementing the Face class

Next, let’s create a class for modelling a face within a picture. For now, this can be just a class with two attributes, rectangle and containerPicture, plus the corresponding accessors. We store a reference to the picture containing the face so we can recover the actual graphical representation of the face.

Object subclass: #CSDFace
instanceVariableNames: 'rectangle containerPicture'
classVariableNames: ''
package: 'Cognitive-Services-FaceAPI-Demo'
"accessing"
CSDFace>>#rectangle
^ rectangle
"accessing"
CSDFace>>#rectangle: aRectangle
rectangle := aRectangle
"accessing"
CSDFace>>#containerPicture
^ containerPicture
"accessing"
CSDFace>>#containerPicture: aPicture
containerPicture := aPicture

Now we can instantiate a Face object. Since we do not yet have a client that can automatically detect faces, we will do it manually. For example, the position of Einstein’s face is given by the rectangle (860@320) corner: (960@420).

CSDFace new 
rectangle: ((860@320) corner: (960@420));
containerPicture: picture.

To interact with this Face object let’s add the code to the previous snippet from the Playground and inspect it.

Inspecting a Face object using the Raw view.

Creating a custom view for Face objects

Again, when inspecting a Face object the inspector shows us by default the Raw view. While this view helps us to assess the implementation of the Face object, what would really help is to see the actual graphical representation of the face. This is a piece of information that is not directly visible in the Raw view. However, we have access to the container picture and the rectangle delimiting the face. Hence, we can write a small code snippet that extracts the graphical representation of the face from the picture.

self containerPicture pictureForm copy: self rectangle

We can now use the code editor from the Raw view to execute this snippet of code and inspect the resulting object in a new pane to the right.

Using a code snippet to extract the graphical representation of a Face object.

This gives us the desired result. However, to repeat this action on a new face object we have to remember the code and manually re-execute it. A different approach consists in creating a custom view for this object that directly shows us the graphical representation of the face.

"accessing - dynamic"
CSDFace>>#faceForm
^ self containerPicture pictureForm copy: self rectangle
"inspection"
CSDFace>>#gtInspectorFaceMorphIn: composite
<gtInspectorPresentationOrder: 20>
composite morph
title: 'Face';
display: [ (self faceForm scaledToSize: 256@256) asMorph ];
when: [ self hasFaceForm ]
"testing"
CSDFace>>#hasFaceForm
^ self containerPicture notNil and: [
self containerPicture pictureForm notNil ]

Like in the case of the Picture object, we create a view displaying a graphical component. We label the view as Face and make sure that the view is only available when the Face object has a container picture attached. We place the logic for extracting the form object representing the face in a dedicated method, so we can reuse it. Also, to handle faces that are too small or too large we scale the form to fit in a 256 by 256 rectangle.

With only 10 lines of code we have a new custom view for our Face object.

Inspecting a Face object with a view that shows the graphical representation of the face.

Prototyping a basic client for the Azure Face API

Now that we have a basic model for handling pictures and faces let’s create a client for the Azure Face API and use it to obtain the position of faces within pictures. This part of the tutorial requires access to a subscription key for the Face API. One can be obtained for free. However, if you do not want to do this you can skip this section. If you downloaded the code of the tutorial it already contains the data that would be obtained by calling this API.

Let’s start by creating an HTTP client that can make a request to the Face API URL. At this point you will need to use your own key for the Face API. Once you have obtained a key you can insert it directly in the script or copy it from the clipboard. In our case, to avoid inserting the subscription key directly in the code snippet, we copy it from the clipboard. We further configure the client to use a JSON parser to read the response data.

subscriptionKey := Clipboard clipboardText asString.
client := ZnClient new
url: 'https://westeurope.api.cognitive.microsoft.com/face/v1.0';
headerAt: 'Ocp-Apim-Subscription-Key' put: subscriptionKey;
contentReader: [ :entity | STONJSON fromString: entity contents ].

Next we configure the client to do a detect request according to the API specification. A detect request returns the list of faces found in the given picture. Since for now we do not want to keep track of faces using this service we set returnFaceId to false. However, we want the API to return data about landmark positions in the face as well as several attributes. Since our target picture was downloaded from a URL we can just pass the API the actual URL.

faceAttributes := #(age gender headPose smile glasses emotion).
client
addPath: 'detect';
method: #POST;
queryAt: 'returnFaceId' put: false;
queryAt: 'returnFaceLandmarks' put: true;
queryAt: 'returnFaceAttributes' put: (String streamContents: [:s |
faceAttributes asStringOn: s delimiter: ',']);
contents: (STONJSON toString: {'url' -> pictureUrl} asDictionary);
contentType: ZnMimeType applicationJson.

If we execute the above code we will get a ZnClient object. We can then inspect the client object to see if our request header was properly configured.

Exploring the headers from the request object using a dedicated view.

Since everything looks ok, we can proceed to execute the request.

faceStructures := client execute.

The call to execute makes the request to the Face API, parses the result into an array and returns it. By inspecting the resulting array we can see that the service indeed detected three faces in the picture.

Inspecting the data returned by the Face API.

If we inspect again the client object we can also look at the response header.

Exploring the response headers of a request in the inspector using a dedicated view.

Adding Face objects to pictures

Now that we can detect faces, we need a way to add them to pictures. To implement this we can extend the Picture class with an addFace: method.

"adding"
CSDPicture>>#addFace: aNewFace
self faces add: aNewFace.
aNewFace containerPicture: self

We can also extend the Face object with a method that knows how to extract the rectangle delimiting a face from the data returned by the Face API.

"initialization"
CSDFace>>#initializeFromJson: aFaceStructure
| rectangleData |
rectangleData := aFaceStructure at: 'faceRectangle'.
self rectangle: (Rectangle
origin: (rectangleData at: 'left')@(rectangleData at: 'top')
extent: (rectangleData at: 'width')@(rectangleData at: 'height'))

Next, we can iterate over the list of faces returned by the API and add them to the corresponding picture object.

faceStructures do: [ :aFaceStructure |
picture addFace: (CSDFace new
initializeFromJson: aFaceStructure) ].
picture.

We can add this code to the Playground and inspect the resulting image.

Adding faces to a picture object using the data returned by the Face API.

Alternatively, if you did not use the Face API and loaded the code of the tutorial, you can initialise the faceStructures variable with the data normally returned by the Face API.

faceStructures := CSExamplesData jsonEinsteinHabichtSolovine1280px.
faceStructures do: [ :aFaceStructure |
picture addFace: (CSDFace new
initializeFromJson: aFaceStructure) ].
picture.

Using the cached face data to initialise the list of faces from the picture.

An extension for displaying the list of faces from a picture

When inspecting the picture object created in the previous section, we can see that the faces attribute stores three faces. To access them we could do a navigation in the inspector where we first select the faces attribute and then the desired face.

Navigating in the inspector between panes to view face objects.

Instead of navigating horizontally we could dive into the faces attribute by expanding it in the view, or write a snippet of code that gets us the face.

Using the Raw view to navigate to the list of faces from a picture.
Using a snippet of code for accessing a face attached to a picture.
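Such a snippet can be as short as a single expression evaluated in the code editor of the Raw view; for instance, to get the first detected face (any other way of enumerating self faces works equally well):

```smalltalk
"Evaluated in the inspector, where self is bound to the inspected picture object;
returns the first face attached to the picture."
self faces first
```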

While these solutions work, they still require us to do manual repetitive work to navigate through the list of faces attached to a picture. As before, we can address this by attaching to the picture object a custom view that directly shows the list of faces. In this case we can use a table view. In the first column we can display a small preview of the face and in the second one the rectangle delimiting the face.

"inspection"
CSDPicture>>#gtInspectorFacesIn: composite
<gtInspectorPresentationOrder: 20>
composite fastTable
title: 'Faces';
display: [ self faces ];
column: '' evaluated: [ :aFace | aFace hasFaceForm
ifTrue: [ aFace faceForm scaledIntoFormOfSize: 32@32 ] ] width: 32;
column: 'Location' evaluated: [ :aFace |
aFace rectangle ] width: 200

This view, together with the Face view and the ability of the inspector to display two objects at a time, gives us, directly in the inspector, a browser to visually navigate through the faces attached to a picture object. This transforms the object inspector from a basic tool that focuses on implementation details into a comprehension tool for understanding our objects from the point of view of the domain; in this case faces and pictures. Nonetheless, we can apply this to any other kind of object.

Navigating through the list of faces from a picture using two custom extensions.

Taking a step back, we could build this browser because we explicitly modelled our domain entities as objects, and the inspector allowed us to cheaply customise the way that we see and interact with those objects. This makes the effort of reifying our domain through objects worthwhile.

A more advanced extension for Picture objects

The extensions that we previously added to Picture objects show us the list of faces and the graphical representation of pictures. However, those two extensions are separate. What would be interesting is to actually highlight faces directly in the Picture view. Since we want to do this in the actual application anyway, we can use this chance to prototype and explore a possible design for our user interface directly in the inspector.

To build this extension we can modify the method creating the Picture view (gtInspectorPictureIn:) to also draw a rectangle around each face.

CSDPicture>>#gtInspectorPictureIn: composite
<gtInspectorPresentationOrder: 25>
composite morph
title: 'Picture';
display: [
| newForm |
newForm := self pictureForm deepCopy.
self faces do: [ :aFace |
newForm border: aFace rectangle width: 2 fillColor: Color blue].
(AlphaImageMorph withForm: newForm)
layout: #scaledAspect];
when: [ self pictureForm notNil ]

If we look now at the Picture view we can directly see the three faces.

Highlighting the faces from a picture directly in the Picture view.

This view, however, has a limitation: if we click on a face nothing happens. Ideally we should be able to continue the inspection with the face object that we clicked on. In a moldable inspector, views should not stop our exploration. It should be the user, not the creator of a view, who decides when the exploration stops. The contexts in which users could employ a view can be very diverse, and rather different than those of their authors.

As a first solution we can extend again the gtInspectorPictureIn: method to continue the navigation with the face on which we clicked.

CSDPicture>>#gtInspectorPictureIn: composite
<gtInspectorPresentationOrder: 25>
| morphPresentation |
morphPresentation := composite morph.
morphPresentation

title: 'Picture';
display: [
| newForm displayMorph |
newForm := self pictureForm deepCopy.
self faces do: [ :aFace |
newForm border: aFace rectangle width: 2 fillColor: Color blue].
displayMorph := (AlphaImageMorph withForm: newForm)
layout: #scaledAspect.
displayMorph on: #mouseDown send: #value: to: [ :event |
| initialExtent scaledPoint scaleFactor |
initialExtent := displayMorph form extent.
scaleFactor := initialExtent / displayMorph cachedForm extent.
scaledPoint := (event position - (displayMorph layoutPosition)).
scaledPoint := scaledPoint scaleBy: scaleFactor.
morphPresentation selection: (self faces
detect: [ :face | face rectangle containsPoint: scaledPoint ]
ifNone: [ self ]) ].
displayMorph
];
when: [ self pictureForm notNil ]

While this view does what we want, at this point the method implementing it has become quite complex. Apart from defining the view, it further contains the logic for drawing faces and transforming between the coordinates of the initial image and the ones of the scaled image. Also, the fact that we duplicate the form object holding the image (self pictureForm deepCopy) to avoid modifying the original form is not the best approach. We could create helper methods in the Picture class, but this logic still does not really belong to this class. A better solution is to create a custom morph that draws faces on a given image (CSInspectorPictureMorph). Using it we can then simplify the implementation of the Picture view. (The implementation of this class is given in Appendix A; this class is already present in the tutorial code.)

CSDPicture>>#gtInspectorPictureIn: composite
<gtInspectorPresentationOrder: 25>
| morphPresentation |
morphPresentation := composite morph.
morphPresentation
title: 'Picture';
display: [
| displayMorph |
displayMorph := CSInspectorPictureMorph new
picture: self.

displayMorph on: #mouseDown send: #value: to: [ :event |
morphPresentation selection: (displayMorph
objectAtLocalCoordinates: event position)
].
displayMorph ];
when: [ self pictureForm notNil ]

The logic for drawing the faces and transforming coordinates is now encapsulated by CSInspectorPictureMorph. The code of the view still specifies what should happen when the user clicks on a picture, as this feature is particular to the inspector.

Using this view we can continue navigating by selecting any face.

Navigating to a face object by selecting it in the Picture view.

An extension for locating faces

The extension defined in the previous section makes it easy to move from a picture to a face from that picture. However, if we start the inspection from a face object, finding where that face is located within the container picture still requires a manual investigation. We can improve this by adding to Face objects an extension that highlights the current face within the picture.

To build this extension we can take advantage of the fact that we already have an extension for showing all the faces from a picture. Hence, we can reuse it. Instead of copy-pasting the code of gtInspectorPictureIn: into the Face class, we can put the code of the extension in a utility method.

"inspection"
CSDPicture>>#gtInspectorPictureHighlighting: facesColor in: composite
| morphPresentation |
morphPresentation := composite morph.
morphPresentation
title: 'Picture';
display: [
| displayMorph |
displayMorph := CSInspectorPictureMorph new
picture: self;
facesColor: facesColor asDictionary.
displayMorph on: #mouseDown send: #value: to: [ :event |
morphPresentation selection: (displayMorph
objectAtLocalCoordinates: event position)].
displayMorph ];
when: [ self pictureForm notNil ]

Now we can reimplement the Picture view from the Picture class, as well as the new view from the Face class using this method.

"inspection"
CSDPicture>>#gtInspectorPictureIn: composite
<gtInspectorPresentationOrder: 25>
self
gtInspectorPictureHighlighting: {}
in: composite
"inspection"
CSDFace>>#gtInspectorPictureIn: composite
<gtInspectorPresentationOrder: 25>
self containerPicture ifNotNil: [ :aPicture |
aPicture
gtInspectorPictureHighlighting: { self -> Color red }
in: composite ]

With this new view we can directly identify a face within a picture.

Highlighting the current face when inspecting a face object.

This new view enables also another use case: when inspecting a face object we can navigate to other face objects from the same picture. This way, the inspector does not force us to go back to the picture object if we want to inspect another face object from the same picture. We can just continue the navigation.

Navigating from a face object to another face object from the same picture.

Storing Face details

Apart from the rectangle delimiting the face, the Face API also returns data about face attributes and landmarks. Since we need this data in our application we can store it in the face object.

One possible solution is to store the faceLandmarks and faceAttributes dictionaries from the data returned by the HTTP client directly as attributes inside a Face object. The downside of this solution is that the logic for handling attributes and landmarks then also needs to go in the Face class. The alternative consists in creating dedicated classes for handling the landmark and attribute data. We can extend our model as follows:

Using dedicated objects for storing the landmarks and attributes associated with a face.

To implement this let’s first create the three new classes.

Object subclass: #CSDFaceProperties
instanceVariableNames: 'properties'
classVariableNames: ''
package: 'Cognitive-Services-FaceAPI-Demo'.
CSDFaceProperties subclass: #CSDFaceAttributes
instanceVariableNames: ''
classVariableNames: ''
package: 'Cognitive-Services-FaceAPI-Demo'.

CSDFaceProperties subclass: #CSDFaceLandmarks
instanceVariableNames: ''
classVariableNames: ''
package: 'Cognitive-Services-FaceAPI-Demo'

Next, let’s add accessors for the properties attribute.

"accessing"
CSDFaceProperties>>#properties: aDictionary
properties := aDictionary
"accessing"
CSDFaceProperties>>#properties
^ properties ifNil: [ properties := OrderedDictionary new ]
"accessing"
CSDFaceProperties>>#propertyAt: aName
^ self properties at: aName
"accessing"
CSDFaceProperties>>#propertyAt: aName ifAbsent: aBlock
^ self properties at: aName ifAbsent: aBlock
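As a quick sanity check, these accessors can be exercised in a Playground; the 'age' key and the values below are purely illustrative, not data from the Face API:

```smalltalk
"A properties object with one illustrative entry."
| props |
props := CSDFaceProperties new.
props properties: (OrderedDictionary new
	at: 'age' put: 42;
	yourself).
props propertyAt: 'age'.                    "42"
props propertyAt: 'smile' ifAbsent: [ 0 ].  "0"
```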

We can now add the landmarks and attributes instance variables to the Face class, together with the corresponding accessors.

Object subclass: #CSDFace
instanceVariableNames: 'rectangle attributes landmarks containerPicture'
classVariableNames: ''
package: 'Cognitive-Services-FaceAPI-Demo'
CSDFace>>#attributes: faceAttributes
attributes := faceAttributes
CSDFace>>#attributes
^ attributes ifNil: [ attributes := CSDFaceAttributes new ]
CSDFace>>#landmarks: faceLandmarks
landmarks := faceLandmarks
CSDFace>>#landmarks
^ landmarks ifNil: [ landmarks := CSDFaceLandmarks new ]

Once this is in place we can extend initializeFromJson: to also extract attributes and landmarks from the JSON data returned by the Face API.

"initialization"
CSDFace>>#initializeFromJson: aFaceStructure
| rectangleData |
rectangleData := aFaceStructure at: 'faceRectangle'.
self rectangle: (Rectangle
origin: (rectangleData at: 'left')@(rectangleData at: 'top')
extent: (rectangleData at: 'width')@(rectangleData at: 'height')).
self attributes: (CSDFaceAttributes fromDictionary: (aFaceStructure
at: 'faceAttributes' ifAbsent: [ Dictionary new ] )).
self landmarks: (CSDFaceLandmarks fromDictionary: (aFaceStructure
at: 'faceLandmarks' ifAbsent: [ Dictionary new ]))

In the above implementation we can take advantage of the fact that attributes and landmarks have dedicated classes and delegate the initialisation through those classes. For attributes we can simply store the given dictionary, while for landmarks we can transform them into actual point objects.

"instance creation"
CSDFaceAttributes class>>#fromDictionary: aDictionary
^ self new properties: aDictionary
"instance creation"
CSDFaceLandmarks class>>#fromDictionary: aDictionary
| landmarks |
landmarks := aDictionary associations
inject: Dictionary new
into: [ :currentLandmarks :association |
currentLandmarks
at: association key put: (Point
x: (association value at: 'x')
y: (association value at: 'y'));
yourself ].
^ self new properties: landmarks

We can now recreate the picture object and inspect it.

Adding data about attributes and landmarks to face objects.

Extensions for exploring attributes and landmarks

A consequence of explicitly modelling attributes and landmarks using dedicated objects is that we need to do a longer navigation in the inspector to see those values.

Inspecting face attributes using a dedicated column.
Inspecting face attributes using the Raw view.

As before, we can add a custom extension to attributes and landmarks to avoid this. We can define this extension in the class FaceProperties so that it is inherited by both types of properties.

CSDFaceProperties>>#gtInspectorPropertiesIn: composite
<gtInspectorPresentationOrder: 25>
^ composite table
title: 'Properties';
display: [ self properties associations ];
column: 'Name' evaluated: #key width: 150;
column: 'Value' evaluated: #value;
children: [ :association |
association value isDictionary
ifTrue: [ association value associations ]
ifFalse: [ #() ] ];
send: #value

Now we can directly switch to the Properties view to see the attributes.

Inspecting a FaceAttributes object using a custom view showing the attributes.

Reusing the Properties view in Face objects

While we added a custom view to the FaceAttributes and FaceLandmarks classes, we still need to select the right fields in the Raw view to get to those objects. What we can further do is add two views, Attributes and Landmarks, directly to the Face object. We can build these views by just making a call to the method creating the Properties view.

CSDFace>>#gtInspectorFaceAttributesIn: composite
<gtInspectorPresentationOrder: 25>
(self attributes gtInspectorPropertiesIn: composite)
title: 'Attributes'
CSDFace>>#gtInspectorFaceLandmarksIn: composite
<gtInspectorPresentationOrder: 30>
(self landmarks gtInspectorPropertiesIn: composite)
title: 'Landmarks'

With this addition we can directly see attributes and landmarks when inspecting a Face object.

Using custom views to inspect the attributes and landmarks associated with a face.

Exploring the landmarks of a face using a custom view

If we look at the Landmarks view we get a long list of points. To better understand what these points represent and how they map to a face we can create a custom extension for face objects that overlays those points on a face.

To build it we can follow the same idea as for the Picture view, and make a dedicated morph (CSInspectorFaceMorph) that draws the landmark points on a face. Once we have it we can build an inspector extension that uses it. (Appendix B details the implementation of this class, which is also present in the tutorial code on GitHub.)

CSDFace>>#gtInspectorFaceMorphLandmarksIn: composite
<gtInspectorPresentationOrder: 25>
composite morph
title: 'Face (landmarks)';
display: [ CSInspectorFaceMorph new
initializeForFace: self
withExtent: 256@256 ]

We can now use this extension to better understand the landmarks.

Using a custom view to visualise the landmarks associated with a face.

Due to the implementation of CSInspectorFaceMorph this extension also works if the face has no explicit picture attached.

Viewing the landmarks of a face when no pictureForm is available.
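For instance, if the code of the tutorial is loaded, one way to try this is to build a face from the cached example data without attaching any container picture:

```smalltalk
"A face initialised only from the cached JSON data; it has a
rectangle and landmarks, but no container picture attached."
| face |
face := CSDFace new
	initializeFromJson: CSExamplesData jsonEinsteinHabichtSolovine1280px first;
	yourself.
face inspect
```

In the resulting inspector the Face (landmarks) view still renders the points, while the views that need a pictureForm stay hidden.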

Adding more details to the Faces extension

When we created the Faces extension we just had the rectangle information about faces. Now we have more attributes. This means that we can extend this view to show more details relevant for our application, like the age and gender inferred for a face.

We can add these new details using two more columns. We can also add to the FaceAttributes class methods for returning the age and gender properties.

"inspection"
CSDPicture>>#gtInspectorFacesIn: composite
<gtInspectorPresentationOrder: 20>
composite fastTable
title: 'Faces';
display: [ self faces ];
column: '' evaluated: [ :aFace | aFace hasFaceForm
ifTrue: [ aFace faceForm scaledIntoFormOfSize: 32@32 ] ] width: 32;
column: 'Location' evaluated: [ :aFace |
aFace rectangle ] width: 200;
column: 'Gender' evaluated: [ :aFace |
aFace attributes gender ] width: 100;
column: 'Age' evaluated: [ :aFace | aFace attributes age ]
CSDFaceAttributes>>#age
^ self properties at: 'age' ifAbsent: [ 0 ]
CSDFaceAttributes>>#gender
^ self properties at: 'gender' ifAbsent: [ '-' ]
Adding the age and gender attributes to the Faces view.

Reusing views to create custom browsers

Until now we created different views to help us reason about our domain objects; however, we only used them when interacting with our objects within the inspector. That does not always have to be the case. A custom view captures an interesting domain-specific representation of an object that we can reuse to build other tools. For example, we can create custom data browsers by reusing views.

Let’s say that for our current application we’d like to build a simple browser to explore the list of faces from a given picture. We could build this browser from scratch, or take advantage of the fact that we already have a view for showing the list of faces from a picture and another one for displaying a face. We can leverage these two views to quickly prototype a browser.

browser := GLMTabulator new.
browser
	column: #picture;
	column: [ :column | column row: #faces ].
browser transmit
	to: #picture;
	andShow: [ :composite :picture |
		picture gtInspectorFacesIn: composite ].
browser transmit
	from: #picture;
	to: #faces;
	andShow: [ :composite :face |
		face gtInspectorFaceMorphIn: composite ].

To start this browser we can add the code above to the Playground that we used until now, and initialise the browser with the picture that we are currently using. Then we can inspect the resulting object representing the browser and select the Live view. This view gives us access to the actual live browser, so we can interact with it directly.

browser 
startOn: picture.
Using custom views to prototype a browser for exploring a list of faces.

Alternatively we can open the browser as a standalone tool.

browser 
title: 'Faces explorer';
openOn: picture.

We can also make the browser more elaborate by displaying more information about each face.

browser := GLMTabulator new.
browser
	column: #picture;
	column: [ :column |
		column
			row: #landmarks;
			row: #attributes ].
browser transmit
	to: #picture;
	andShow: [ :composite :picture |
		picture gtInspectorFacesIn: composite ].
browser transmit
	from: #picture;
	to: #landmarks;
	andShow: [ :composite :face |
		face gtInspectorFaceMorphLandmarksIn: composite ].
browser transmit
	from: #picture;
	to: #attributes;
	andShow: [ :composite :face |
		face gtInspectorFaceAttributesIn: composite ].
browser
	title: 'Picture explorer';
	openOn: picture.

Wrapping up

Object inspectors are an essential category of tools, as they allow us to reason about live objects. Nonetheless, understanding an object does not always mean looking at its raw state. An object can be interesting from multiple points of view other than just its raw state. What these points of view are is, however, highly contextual. Hence, object inspectors should focus on allowing us to easily decide how we want to view and interact with our objects, and we should create new views and interactions as we develop our objects.

In this tutorial we looked at what this can look like if we take this idea seriously. We created a rather tiny object model and continuously added custom views to make explicit what is important about those objects, and to make sense of the data we had to work with. This makes it possible to have a conversation with our system at the level of abstraction of the domain, instead of always reasoning about its raw implementation.

Exploring the landmarks associated with a face from a picture object using only the Raw view.
Exploring the landmarks associated with a face from a picture object using custom views.

Real-world applications are indeed a few orders of magnitude more complex. However, this simple exercise of continuously adapting our development tools to improve the way we view and interact with our systems can tame a significant part of that complexity.


Appendix A: CSInspectorPictureMorph

In this Appendix we create the class CSInspectorPictureMorph. In the code of the tutorial that is on github this class is already present.

Since AlphaImageMorph is a class present in Pharo that knows how to draw and scale pictures we can make CSInspectorPictureMorph a subclass of it.

AlphaImageMorph subclass: #CSInspectorPictureMorph
	instanceVariableNames: 'picture facesColor'
	classVariableNames: ''
	package: 'Cognitive-Services-FaceAPI-Demo'

"initialization"
CSInspectorPictureMorph>>#initialize
	super initialize.
	self layout: #scaledAspect

"accessing"
CSInspectorPictureMorph>>#picture: aPicture
	picture := aPicture.
	self form: aPicture pictureForm

CSInspectorPictureMorph>>#facesColor: aDictionary
	facesColor := aDictionary

CSInspectorPictureMorph>>#facesColor
	^ facesColor ifNil: [ facesColor := Dictionary new ]

Since we need to know the picture object and the actual faces, we store the picture object in an instance variable. When setting the picture we also need to update the form in the superclass (form: is a method in AlphaImageMorph). To highlight faces using different colours we use the attribute facesColor, which stores a dictionary mapping faces to colours.
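To try the morph outside the inspector, we can instantiate it directly from a Playground. The snippet below is a minimal sketch; it assumes a `picture` variable holding a CSDPicture with detected faces, as in the earlier parts of the tutorial:

```smalltalk
| morph colours |
colours := Dictionary new.
"highlight the first detected face in red; all others fall back to blue"
picture faces ifNotEmpty: [ :faces |
	colours at: faces first put: Color red ].
morph := CSInspectorPictureMorph new.
morph
	picture: picture;
	facesColor: colours.
morph openInWindow.
```

Because facesColor lazily initialises to an empty dictionary, passing no colours at all is also fine: every face is then framed with the default blue.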

Next we can implement the drawing. For that we need to override the drawOn: method. To draw the image we just need to call the super method. For the actual faces we need to implement a custom drawing that also scales the rectangles denoting faces to the actual size of the morph.

"drawing"
CSDInspectorPictureMorph
>>#drawOn: aCanvas
super drawOn: aCanvas.
self drawFaces: picture faces on: aCanvas.
"drawing"
CSDInspectorPictureMorph
>>#drawFaces: facesList on: aCanvas
facesList do: [ :aFace |
| scaledRectangle |
scaledRectangle := self toLocalRectangle: aFace rectangle.
aCanvas
frameRectangle: scaledRectangle
width: 2
color: (self facesColor at: aFace ifAbsent: [ Color blue ]) ]

toLocalRectangle: is a method that takes a rectangle in the original coordinates of the image and translates it to the coordinates of the scaled image.

"transforming"
CSDInspectorPictureMorph
>>#toLocalRectangle: faceRectangle
| scaleFactor scaledRectangle |
scaleFactor := self cachedForm extent / self form extent.
scaledRectangle := faceRectangle scaleBy: scaleFactor.
^ Rectangle
origin: scaledRectangle origin + self layoutPosition
extent: scaledRectangle extent

Now that we can display the image we should make it possible to continue the navigation when clicking on a face. To support this functionality we can add a method (objectAtLocalCoordinates:) that takes a point in the coordinates of the scaled picture and returns the face object at those coordinates; if no face is found then the picture object is returned.

"accesing"
CSDInspectorPictureMorph
>>#objectAtLocalCoordinates: aPoint
| scaledPoint |
scaledPoint := self toInitialCoordinates: aPoint.
^ self locateElementAt: scaledPoint
"transforming"
CSDInspectorPictureMorph
>>#toInitialCoordinates: aPoint
| scaleFactor |
scaleFactor := self form extent / self cachedForm extent.
^ (aPoint - (self layoutPosition)) scaleBy: scaleFactor
"searching"
CSDInspectorPictureMorph
>>#locateElementAt: scaledPoint
^ picture faces
detect: [ :aFace | aFace rectangle containsPoint: scaledPoint ]
ifNone: [ picture ]

Here, toInitialCoordinates: transforms a point from scaled coordinates to the coordinates of the initial image. Then locateElementAt: finds a face at those coordinates.
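As a sketch of how objectAtLocalCoordinates: could be wired to click-based navigation, the morph could react to mouse clicks and inspect whatever sits under the cursor. The two methods below are hypothetical additions, not part of the tutorial code; they assume that subtracting the morph's top-left corner from the event position yields the local coordinates the method expects:

```smalltalk
"hypothetical click handling; not part of the tutorial code"
CSInspectorPictureMorph>>#handlesMouseDown: anEvent
	^ true

CSInspectorPictureMorph>>#mouseDown: anEvent
	| localPoint |
	"translate from world coordinates to morph-local coordinates"
	localPoint := anEvent position - self bounds topLeft.
	"open an inspector on the clicked face, or on the picture itself"
	(self objectAtLocalCoordinates: localPoint) inspect
```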

Appendix B: CSInspectorFaceMorph

Unlike the CSInspectorPictureMorph, in this case we do not need the morph to scale automatically, as we want to set an explicit size for the face. Hence, this time we can just subclass the Morph class.

Morph subclass: #CSInspectorFaceMorph
instanceVariableNames: 'face scale cachedForm'
classVariableNames: ''
package: 'Cognitive-Services-FaceAPI-Demo'

For this morph we need to know the face for which to draw the landmarks. Since we want to be able to set a custom size for the face, internally we also need two other attributes: one for storing the scale by which to transform the face form, and one for a scaled version of the form.

"initialization"
CSInspectorFaceMorph>>#initialize
super initialize.
self color: Color transparent
"initialization"
CSInspectorFaceMorph
>>#initializeForFace: aFace withExtent: anExtent
self extent: anExtent.
face := aFace.
scale := self computeScale
"transformation"
CSInspectorFaceMorph
>>#computeScale
| scaleX scaleY |
scaleX := (self extent x / face rectangle width).
scaleY := (self extent y / face rectangle height).
^ (scaleX min: scaleY) asPoint
"utils"
CSInspectorFaceMorph>>#ensureCachedForm
^ cachedForm ifNil: [
| faceForm |
faceForm := face hasFaceForm
ifTrue: [ face faceForm ]
ifFalse: [ (Form extent: face rectangle extent)
fillColor: Color white ].
cachedForm := faceForm
magnify: faceForm boundingBox
by: scale smoothing: 2 ]
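As a quick sanity check of the computeScale arithmetic, we can evaluate it in a Playground. The numbers below are made up for illustration: a 256@256 morph and a face rectangle of 128 by 160 pixels:

```smalltalk
| scaleX scaleY |
scaleX := 256 / 128.   "2"
scaleY := 256 / 160.   "8/5"
(scaleX min: scaleY) asPoint
	"both axes use the smaller factor, (8/5)@(8/5),
	 so the scaled face keeps its aspect ratio and
	 fits entirely within the 256@256 extent"
```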

Next we can implement the drawing by overriding the drawOn: method. We can split the drawing of each type of landmark into a separate method.

CSInspectorFaceMorph>>#drawOn: aCanvas
	super drawOn: aCanvas.
	aCanvas paintImage: self ensureCachedForm at: self bounds topLeft.
	self drawEyesDetailsOn: aCanvas.
	self drawNoseDetailsOn: aCanvas.
	self drawMouthDetailsOn: aCanvas

CSInspectorFaceMorph>>#drawEyesDetailsOn: aCanvas
	self drawConnectedLandmarks: self eyeRight on: aCanvas.
	self drawConnectedLandmarks: self eyeLeft on: aCanvas

CSInspectorFaceMorph>>#drawMouthDetailsOn: aCanvas
	self drawConnectedLandmarks: self upperLip on: aCanvas.
	self drawConnectedLandmarks: self underLip on: aCanvas.
	self drawConnectedLandmarks: self mouth on: aCanvas

CSInspectorFaceMorph>>#drawNoseDetailsOn: aCanvas
	self drawConnectedLandmarks: self noseTip on: aCanvas.
	self drawConnectedLandmarks: self noseRoot on: aCanvas.
	self drawConnectedLandmarks: self noseRightAlar on: aCanvas.
	self drawConnectedLandmarks: self noseLeftAlar on: aCanvas

For each of these drawing methods we draw the points that delimit the corresponding landmark. We also add dedicated methods for returning each landmark.

"landmarks"
CSInspectorFaceMorph
>>#eyeLeft
^ { self propertyAt: 'eyeLeftTop'.
self propertyAt: 'eyeLeftInner'.
self propertyAt: 'eyeLeftBottom'.
self propertyAt: 'eyeLeftOuter' }
CSInspectorFaceMorph>>#eyeRight
^ { self propertyAt: 'eyeRightTop'.
self propertyAt: 'eyeRightInner'.
self propertyAt: 'eyeRightBottom'.
self propertyAt: 'eyeRightOuter' }
CSInspectorFaceMorph>>#mouth
^ { self propertyAt: 'mouthLeft'.
self propertyAt: 'mouthRight' }
CSInspectorFaceMorph>>#noseLeftAlar
^ { self propertyAt: 'noseLeftAlarOutTip'.
self propertyAt: 'noseLeftAlarTop' }
CSInspectorFaceMorph>>#noseRightAlar
^ { self propertyAt: 'noseRightAlarOutTip'.
self propertyAt: 'noseRightAlarTop' }
CSInspectorFaceMorph>>#noseRoot
^ { self propertyAt: 'noseRootLeft'.
self propertyAt: 'noseRootRight' }
CSInspectorFaceMorph>>#noseTip
^ { self propertyAt: 'noseTip' }
CSInspectorFaceMorph>>#underLip
^ { self propertyAt: 'underLipBottom'.
self propertyAt: 'underLipTop' }
CSInspectorFaceMorph>>#upperLip
^ { self propertyAt: 'upperLipBottom'.
self propertyAt: 'upperLipTop' }
"accessing"
CSInspectorFaceMorph
>>#propertyAt: aName
^ face landmarks propertyAt: aName ifAbsent: [ 0@0 ]

What we are now missing is the method for actually drawing the points. Apart from visually marking a point we will also draw a line between the points that delimit a landmark.

"drawing"
CSInspectorFaceMorph
>>#drawConnectedLandmarks: points on: aCanvas
| translatedPoints |
translatedPoints := self translateLandmarks: points.
aCanvas
drawPolygon: translatedPoints
fillStyle: Color transparent
borderWidth: 1
borderColor: Color blue.
translatedPoints do: [ :aLandmarkPoint |
aCanvas
fillRectangle: (Rectangle
center: aLandmarkPoint extent: 4 asPoint)
fillStyle: Color blue ]
"transformation"
CSInspectorFaceMorph
>>#translateLandmarks: landmarkPoints
^ landmarkPoints collect: [ :aPoint |
(aPoint - face rectangle topLeft) scaleBy: scale ]