CTHRU: A composition-based taxonomy of information graphics

Arno Klein
Jun 1, 2016

This paper outlines a formalism for deconstructing and classifying images by their composition. The proposed taxonomy of visual composition and corresponding symbolic notation, CTHRU, are meant to provide a bridge between image taxonomies of low-level data and high-level content. Applications of a composition-based taxonomy of graphical images include: automated generation and annotation of information graphics, cross-fertilization of graphical techniques across research and application domains, and “visualization exploration” to explore the space of possible visualizations resulting from the taxonomy.

Categorizing images

Categorizing images is motivated by the profound conceptual and practical utility of Linnaeus’s taxonomical nomenclature for biology and by Mendeleev’s Periodic Table of the Elements. Each classification system, whether it applies to the biological world or to the atomic world, organizes that world in order to better understand it, and to generate and test hypotheses about it. In these examples, organization by structure (genotype, number of protons) is inextricably related to organization by function or behavior (ethology, chemistry). By analogy, structure/function relationships in image design may expose principles of categorization. Structural properties that define a category should be essential to instances of that category. For example, a bar chart could have any color scheme, so color would not be a property that defines a bar chart or the category containing bar charts, whereas relative size of given shapes may be a defining property. Defining a bar chart may seem relatively straightforward, but there are an infinite number of possible images, from text to table to scatter plot to picture to rich mixtures like a geographical map with overlying elements such as networks, icons, and text. Categorization then becomes a question of how to break up the infinite number of possibilities into a manageable set of independent, interchangeable elements, as well as a means of clearly communicating how any one example is a combination of these elements.

At the other end of the spectrum from classification by graphical structure is image classification by content description [1]. For example, a Google Image search (http://images.google.com/) is expected to yield images containing the subject matter of the search. “Human computation” methods have been developed to take advantage of the Internet and network gaming trends to annotate images by content [2,3]. The author built a searchable database of information graphics, drawn from numerous repositories on the Internet, with well over 1,000 manually annotated information visualizations (http://infovis.info/). At present, a visitor can search by content or by the original Internet repository from which an image was obtained, but not by an image’s structure, form, or function. Formal descriptions of these and other images based on a taxonomy will facilitate clear image categorization, navigation, search, and generation [4–7].

Perceptual considerations

In addition to image categorization, from an information design perspective we might also be interested in which elements enhance or detract from the clarity of a representation. When designing more informative static representations of data or graphical user interfaces, the temptation may arise to increase the number of layers of information and their visual complexity. A good designer is concerned about the effects on human understanding and performance, so the search for “a more effective image” can be framed as a question: is there a minimal, optimal, or maximal complexity for an effective image? That is, how sparse or how rich can an image be and still effectively convey information under given task conditions to a given user group without degrading task performance beyond an acceptable degree? A classification scheme, coupled with measures of visual complexity informed by perceptual and cognitive psychology, may be used to empirically determine the relationship between visual complexity, task-related information content, and task performance. One goal of providing a taxonomy of visual images is to take a step closer to establishing a set of evaluative and guiding principles for minimal (parsimonious), optimal (efficient), and maximal (rich) design.

Others have incorporated principles of perception and parsimony in rule-based design [8–10]. For example, Mackinlay [8] used accuracy levels attributed to different graphical elements of a diagram, such as position and color [11], to rank the encoding of information by importance. He approached images as well-defined graphical languages, and suggested that what he termed expressiveness criteria could “determine whether a graphical language can express the desired information,” and indeed whether the language “expresses all the information and only the information.” He also addressed the differences between image types by defining the manner in which different languages (such as “retinal-list,” “map,” and “connection” languages) encode information. For example, the syntactic structure of sentences from connection languages would be m_n(m_l), where the positions of node marks (m_n) constrain the positions of link marks (m_l). Zhang [9] analyzed the relation between representations of displays and structures of tasks in terms of a mapping principle: “The information perceivable from a [relational information diagram] should exactly match the information required for the task.” In his paper he provided useful examples of mismatches between certain graphical elements and the information they’re intended to convey.

Data-type and task-based taxonomies

A variety of image taxonomies have been created [12,13], including taxonomies of data types and tasks [5,6,14–17], and taxonomies of visualization algorithms [18,19]. Of particular relevance to this paper are the taxonomies incorporating perceptual considerations [8–11,20–23]. Tables 1 and 2 contain classification terms extracted from the references [8–10,14–22,24] and organized according to data type (Table 1) or task type (Table 2). Data-type taxonomies attempt to relate the graphical structure of information graphics to their underlying data structures. Task-type taxonomies attempt to match a task or function with an appropriate graphical category. Table 1 and Table 2 are intended to convey the collective variety of organizational principles in the different image taxonomies.

Table 1: Terms from data-type taxonomies


Table 2: Terms from task-based taxonomies


Composition-based taxonomies

An additional category of taxonomies may be termed “composition-based.” This category is distinct from, and complements, taxonomies based on content, task, algorithm, and data: it is not concerned with the semantic content of a picture or the overt functions of an information graphic, and it engages only some of the retinal and encoding aspects of data represented by an image (Table 1). Instead, its primary focus is to establish a general syntactical formalism for de/constructing graphical compositions. It is intended to provide a bridge between low-level data structures and high-level content description. Perhaps the most practical goal for creating a composition-based taxonomy of images is to provide the foundation for an automated or semi-automated means to annotate and generate information graphics [4,5,7]. Another goal is the cross-fertilization of graphical techniques across research and application domains, independent of content and function and attentive purely to form. A related goal is “visualization exploration”: exploring the space of possible visualizations that the taxonomy gives rise to.

The CTHRU Taxonomy

This section details a formalism for classifying images by their visuospatial characteristics in a bottom-up manner. The proposed taxonomy and corresponding symbolic notation are meant to complement top-down taxonomies that classify according to content, function, intent, interpretation, or effect. Although general in scope, its intended application domain is information graphics, images intended to convey information of an objective nature. It is not intended to account for all low-level features of an image, such as the axes of a graph, or high-level aspects of an image, such as realistic or abstract pictorial representations. It is perhaps most closely related to the work of Wilkinson [25], Wickham [26], Engelhardt [27,28] and Zhou [6], all of which are similarly interested in decoding the syntax of graphics and exposing its embedded and hierarchical structures. The derivation of this taxonomy draws from concepts in basic geometry and linear algebra as well as graph theory and computer graphics, and draws inspiration from Gestalt psychology. Since the taxonomy facilitates looking past image content at the structural relationships underlying the image, it is called “CTHRU” (for “Compositional Taxonomy Highlighting Relational Underpinnings”).

We will consider any 2-dimensional image to be a set of graphical objects set in a space and spatially related to each other. We will build up our classification system from a set of graphical elements making up these objects, the transforms that situate them in a space, and relational operators for relating these objects to one another in this space.

Graphical elements

We will consider graphical objects to consist of shapes enclosing surfaces. The graphical element shape includes curves (a one-dimensional continuous set of points such as the outline of a shape) and boundaries (a visual property that implicitly marks the limit of an area or volume). The graphical element surface includes simple visual qualities such as color, density, pattern, and texture. A graphical object such as a triangle could be formed by a simple outline, by an illusory contour, or by visual differences along a surface.


Our graphical transforms consist predominantly of the affine transformations used in geometry and linear algebra, where any deformation to a shape is allowed so long as parallel lines remain parallel. (An affine transformation between two vector spaces consists of a linear transformation followed by a translation. Our transforms will sometimes deviate from affine transforms, for example when objects are scaled linearly with respect to basis vectors different from those of the image, as in radial plots, or when the intent is to imply scale changes between objects, as in a caricature.) Example transformations include translation and changes in scale and direction (rotation, reflection, and shear). Rather than consider the transformations themselves, we will consider the effects they have on graphical objects: position, size, and orientation.
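The “linear transformation followed by a translation” definition is easy to make concrete. Here is a minimal NumPy sketch (an illustration, not code from the article):

```python
import numpy as np

# Affine map x -> A x + t: a linear part A followed by a translation t.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])        # scale x by 2; parallel lines stay parallel
t = np.array([1.0, -1.0])         # then translate

# Apply the map to each vertex of a unit square.
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
transformed = square @ A.T + t
print(transformed)                # a stretched, shifted square
```

Rotation, reflection, and shear are all choices of the matrix `A`; only the translation vector `t` falls outside the linear part.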

Relational operators

Our relational operators are constructed from and act on the above elements and transforms, or on each other, and fall into five general classes that we will refer to as connection, containment, composition, correspondence, and context. We will describe each relational operator below.

A graph (a mathematical construct formally relating different objects, depicted by a set of vertices connected by edges) provides perhaps the simplest way of relating two otherwise dissimilar objects in a diagram, by simply connecting them with an edge. A hypergraph is a generalization of a graph, where instead of each edge connecting two vertices, each hyperedge can connect any number of vertices. The hyperedge is schematically represented in diagrams as an enclosure surrounding a subset of vertices, like a Venn diagram. Since the taxonomy deals with visual images, we will distinguish between the visual analogs of graphs and hypergraphs to derive two different graphical relational operators: connection and containment.
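The generalization from edges to hyperedges is compact enough to state in code. A sketch (the representation is mine, not the article’s):

```python
# A graph edge joins exactly two vertices; a hyperedge joins any number.
graph_edges = [("A", "B"), ("B", "C")]        # pairs of vertices
hyperedges = [{"A", "B", "C"}, {"C", "D"}]    # arbitrary vertex subsets

# Every graph is a special case of a hypergraph whose hyperedges all have size 2:
as_hyperedges = [set(edge) for edge in graph_edges]
print(all(len(h) == 2 for h in as_hyperedges))  # → True
```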

Connection operators are commonly seen in network and relational diagrams, where the edge of a graph can explicitly connect graphical objects, or where edges are replaced with implicit “pointers” from one graphical object to another (markers, text, etc.).

Containment operators indicate the inclusion of subsets of graphical objects by the explicit use of an outline or the implicit use of boundaries. Tables and Venn diagrams are examples of visuals that rely on containment to organize graphical objects.

Composition refers literally to the case when one graphical object is made up of others, such as a texture made up of shapes. Mosaics are canonical examples of composition of graphical objects by other objects.

Correspondence refers instead to the case where a graphical property (element, transform, or relational operator) is a function of another property. An example is an elevation map, where a color contour plot uses color to indicate height along an axis perpendicular to the image plane; in the taxonomical nomenclature, color is a function of position. Functional relationships include both explicitly defined mathematical functions and visually apparent patterns for which no explicit function is given. Correspondence is usually defined in a key or legend.

Context means the space in which graphical objects are situated, and may be defined by a graphical element, a transform, and/or an operator. For example, an element such as color could define a space such as a color wheel, the position transform characterizes the Cartesian coordinate space, and the containment operator characterizes tabular space.

Symbolic Notation

The concepts above are intended to be useful for discussing and more deeply understanding information visualizations. To help discover relationships among visualizations, Table 3 provides a symbolic notation to compactly represent our taxonomic classifications. In addition to providing a succinct representation of images, our symbols for graphical elements, transforms, and relational operators enable functional descriptions (in a mathematical or computer-programming sense), where one can hierarchically embed symbols and perform operations on them. Embedding enables symbols to be used as building blocks, where a single symbol can stand for a group of symbols for the purposes of compact representation and abstraction. Operations on symbols could take the form of functions or morphisms (in category theory), for a more rigorous treatment that could reveal mathematical properties of the taxonomical classification of images.

Table 3 presents our notation for graphical elements, transforms, and relations. For the graphical elements, the notation for shape is o (empty circle or letter “o”) and for surface is # (hash or pound sign). The position, size, and orientation transforms are represented by + (addition), × (multiplication), and † (dagger) symbols, respectively. The relational operators connection, containment, composition, correspondence, and context are represented by |vertical bars|, {curly brackets}, [square brackets], <angle brackets>, and / (slash) symbols, respectively. The bars and brackets (not the slash; see Table 5) enclose symbols to signify an operation on them. A symbol can be “connected by,” “contained by,” “composed of,” “a function of,” and “in the space of” another symbol. Parentheses are reserved for grouping symbols.

Table 3: Graphical elements, transforms, and relations


Graphical elements:

    shape           o
    surface         #

Transforms:

    position        +
    size            ×
    orientation     †

Relational operators:

    connects        ||      A|B|    "A connects B"
    contains        {}      A{B}    "A contains B"
    composed of     []      A[B]    "A is composed of B"
    corresponds to  <>      A<B>    "A corresponds to B" / "A is a function of B"
    contextualizes  /       B/A     "B is in A space" / "A contextualizes B"

The above notation is deliberately coarse-grained at this early stage of development, but it could of course be expanded to include new properties of graphics, such as distinguishing surface properties like color from texture. The notation also need not be restricted to static, two-dimensional images. By preceding any of the symbols with a ∆ (delta) symbol, we can invoke a change (a transition or animation). Table 4 shows this for the graphical elements and transforms.

Table 4: Animating graphical elements and transforms

Animating elements:

    shape           o       morph           ∆o
    surface         #       resurface       ∆#

Animating transforms:

    position        +       translation     ∆+
    size            ×       scale           ∆×
    orientation     †       turn            ∆†
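The notation lends itself to simple string construction. Below is a minimal Python sketch in which each operator becomes a helper function; the symbols follow Table 3 and the ∆ prefix follows Table 4, but the function names are hypothetical:

```python
# Hypothetical helpers that build CTHRU notation strings (symbols per Table 3).
SHAPE, SURFACE = "o", "#"
POSITION, SIZE, ORIENTATION = "+", "×", "†"

def connects(a, b):        return f"{a}|{b}|"    # "A connects B"
def contains(a, b):        return f"{a}{{{b}}}"  # "A contains B"
def composed_of(a, b):     return f"{a}[{b}]"    # "A is composed of B"
def corresponds_to(a, b):  return f"{a}<{b}>"    # "A is a function of B"
def in_space_of(b, a):     return f"{b}/{a}"     # "B is in A space"
def delta(sym):            return f"∆{sym}"      # change (transition/animation)

# Elevation map: surface (color) as a function of position, in a metric space.
print(in_space_of(corresponds_to(SURFACE, POSITION), POSITION))  # → #<+>/+
print(delta(ORIENTATION))                                        # → ∆† ("turn")
```

Because each helper returns a string, the symbols embed hierarchically for free, exactly as the text describes.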


Graphical element and transform symbols may stand alone or precede any symbol, and help to describe subsequent symbols. The context relational operator (see Table 5) is used to describe the space within which graphical objects (represented by preceding symbols) reside. All of the other relational operators enclose symbols (A and B in Table 3), and help to describe the relationship among those symbols. The order in a sequence of graphical element symbols does not matter; they observe commutative and associative properties, as does a sequence of transform symbols. Transform symbols usually precede element symbols, since they “act on” them.

Where symbols need to be clearly grouped together they are enclosed by parentheses; where they function independently they are separated by commas. Use of the comma distinguishes (× # o)/+ from (×, #, o)/+, representing, for example, multivariate vs. univariate markers of a scatter plot. In the former, each marker could vary in one or more of size, color, and shape; in the latter, each marker could vary in just one of size, color, or shape.

In the previous example, the symbols /+ indicate that the markers reside in a metric space (spatial context defined by position). If an element or transform symbol in Table 3 is presented without a spatial context, it may signify a single instance of a dominating element or transform, whereas with a spatial context, the symbol may signify that there are many instances, whose variation conveys meaningful information. For example, an arrow (direction transform: †) could represent the orientation of a single dial on a meter, whereas an arrow in a metric space (†/+) could represent a multitude of varying orientations in a quiver plot. For examples of other spatial contexts, see Table 5.
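The comma and parenthesis conventions can likewise be generated mechanically. A sketch with hypothetical helpers, reproducing the scatter-plot and quiver-plot examples from the text:

```python
# Hypothetical helpers: grouping (joint variation) vs. comma-separated
# independence, plus the metric-space context "/+".
def grouped(*syms):        return "(" + " ".join(syms) + ")"   # vary jointly
def independent(*syms):    return "(" + ", ".join(syms) + ")"  # vary singly
def in_metric_space(expr): return expr + "/+"

multivariate = in_metric_space(grouped("×", "#", "o"))       # (× # o)/+
univariate = in_metric_space(independent("×", "#", "o"))     # (×, #, o)/+
quiver = in_metric_space("†")                                # †/+
print(multivariate, univariate, quiver)
```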

Table 5: Providing context

The context relational operator provides a “space” for an image:

    /o      shape space
            - drawing
    /#      surface space
            - color space
            - texture map
            - gradient
    /+      position/metric space
            - Cartesian coordinate frame
    /×      size/scale space
            - space-filling diagram
              (pie chart, tree diagram)
    /†      orientation space
            - flow field
            - quiver plot
    /||     connector space
            - network diagram
            - node-link graph
    /{}     container space
            - table
    /[]     composite space
            - mosaic
            - tiling
            - fractal
    /<>     correspondence/function space
            - color map
              (color as a function of some other property)

Multiple contexts or constraints defining the space of an image:

    /o/+    shape/metric space
            - latitude-longitude map
              (shape space constrained by a metric space)

Of course the richest context is often provided by text captions. Hereafter, text annotation is assumed to accompany almost every image, and we will only treat text (t for text and n for numbers) as graphical objects in three cases: text as identifiers, pointers, or primary data embedded in a graphic (not as a caption). Text identifiers are text labels identifying objects in a picture or locations on a map (see Table 6). Text pointers are text labels pointing to another place in the image, analogous to a footnote. Primary text data include text entries in table cells, flow charts, etc. We will denote any symbol, including t or n, as an identifier or pointer label by surrounding it with double quotes, and place it in front of, or as a superscript to, symbols requiring identification or pointing.
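The double-quote labeling convention is simple to mechanize. A sketch (the helper name is mine):

```python
# Hypothetical helper: a quoted label symbol placed in front of the symbol
# it identifies or points to, per the double-quote convention above.
def labeled(label_sym, target):
    return f'"{label_sym}"{target}'

print(labeled("t", "o"))   # → "t"o   (a text label identifying a shape)
print(labeled("n", "+"))   # → "n"+   (a number labeling a position)
```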


Our symbols and their relations constitute a sign system in semiotics, a field that studies syntax, semantics, and pragmatics of artificially constructed and natural languages. While we intend for future work on this taxonomy to delve deeper into each of these, our current focus is on laying the foundations of a rigorous formalism and symbolic notation for a bottom-up characterization of images, in as objective a manner as possible. By minimizing the subjectivity that comes with top-down analysis of images or their cultural context, we have a better chance of objectively comparing and computing on images. Otherwise, we would need to encode a tremendous amount of information latent in images. For example, the color blue may be associated with “sky” or “water” or even a mood, an upper right orientation may be associated with “northeast” or “2 o’clock,” and a link connecting two pictures may create the impression of causal relation between the contents of the two. Because of the unlimited potential for applying semiotics to each of these categories, we will disregard the symbolic aspects of images for the purposes of classification. In doing so, the taxonomy currently assigns no meaning to an image beyond that of its symbolic (notational) representation, and the symbols, for the most part, are a direct reflection of graphical features of the image. The primary exception is the correspondence relational operator, where a visible attribute is a function of an unseen attribute. To de/code this relationship in an information graphic requires inferring the intent of the graphic designer.

Table 6: A few example compositions

Simple examples:

  • bar chart: “positioned, scaled objects”
  • scatter plot with scaled icons: “scaled shapes in a metric space”
  • political map of the world with latitude and longitude lines: “different shapes with different surface properties in a metric space”
  • spreadsheet with numbers: “numbers in a container space”
  • flowchart: “directed links connecting different shapes containing text”

More complex example: the New York City subway map:


1. The subway is a network of colored links connecting stops, where connecting links are colored to indicate subway line (“objects with different surface properties connect objects”).

2. Local and express stops are distinguished by filled or empty nodes (“connected objects have different surface properties”).

3. Text labels indicate the name of each stop (“text is used to label connected objects”).

4. Differently shaped boxes with text are used to label the subway lines to indicate the type of service (“different shapes contain text to label connections”).

5. The network is pictured atop an illustration of the map of the city for context (“the image is in a surface-shape space”).

Conclusion and future work

To summarize, the taxonomy has five general classes of relational operators (connection, containment, composition, correspondence, and context) that act on each other or on the following five properties of graphical objects: the elements shape and surface, and the transforms position, size, and orientation. The conciseness and power of this taxonomy become apparent when applying combinations of the above properties to complex images such as information graphics. Distinctive, canonical forms of visual presentation are neatly distinguished from one another by these bottom-up classifications and are succinctly represented by the symbolic notation in Table 3.

There are many ways to extend the taxonomy or tailor it to specific use cases, but the most general next step would be to integrate complementary (data-type and top-down) taxonomies. Future applications include assigning taxonomical classifications to collections of information graphics (such as http://infovis.info/), performing exploratory analyses of information graphics to evaluate and discover new graphical hybrids, and codifying the taxonomy in a programming language [26] or declarative format (like Vega, https://github.com/vega/vega/wiki/Documentation) to perform large-scale, bottom-up computation on images.
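As a small taste of such bottom-up computation, a hand-labeled collection could be queried by relational operator. The notation strings for the entries below are illustrative guesses, not classifications given in the article:

```python
# A tiny hand-labeled collection keyed by CTHRU-style strings (the strings
# are illustrative guesses), searchable by relational operator.
collection = {
    "scatter plot (univariate markers)": "(×, #, o)/+",
    "quiver plot": "†/+",
    "elevation map": "#<+>/+",
    "spreadsheet of numbers": "n/{}",
}

# Find every graphic whose classification uses the correspondence operator.
uses_correspondence = [name for name, code in collection.items() if "<" in code]
print(uses_correspondence)  # → ['elevation map']
```

A real system would parse the notation rather than match substrings, but even this crude query hints at how structural search across a graphics collection could work.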


1. Hanbury A. A survey of methods for image annotation. Journal of Visual Languages & Computing [Internet]. 10/2008 [cited 2016 Jun 27];19(5):617–27. Available from: http://linkinghub.elsevier.com/retrieve/pii/S1045926X08000037

2. von Ahn L, Dabbish L. Labeling images with a computer game. In: Proceedings of the 2004 conference on Human factors in computing systems — CHI ’04 [Internet]. New York, New York, USA: ACM Press; 2004. p. 319–26. Available from: http://portal.acm.org/citation.cfm?doid=985692.985733

3. von Ahn L, Liu R, Blum M. Peekaboom: a game for locating objects in images. In: Proceedings of the SIGCHI conference on Human Factors in computing systems — CHI ’06 [Internet]. New York, New York, USA: ACM Press; 2006. p. 55. Available from: http://portal.acm.org/citation.cfm?doid=1124772.1124782

4. Mittal VO, Carenini G, Moore JD, Roth S. Describing Complex Charts in Natural Language: A Caption Generation System. Comput Linguist [Internet]. 1998 Sep;24(3):431–67. Available from: http://dl.acm.org/citation.cfm?id=972749.972754

5. Roth SF, Mattis J. Data characterization for intelligent graphics presentation. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems [Internet]. ACM; 1990 [cited 2016 Aug 11]. p. 193–200. Available from: http://portal.acm.org/citation.cfm?doid=97243.97273

6. Zhou MX, Chen M, Feng Y. Building a visual database for example-based graphics generation. In: Information Visualization, 2002 INFOVIS 2002 IEEE Symposium on [Internet]. 2002. p. 23–30. Available from: http://dx.doi.org/10.1109/INFVIS.2002.1173143

7. Zhou MX, Chen M. Automated generation of graphic sketches by example. In: Proceedings of the 18th international joint conference on Artificial intelligence [Internet]. Morgan Kaufmann Publishers Inc.; 2003 [cited 2016 Aug 11]. p. 65–71. Available from: http://dl.acm.org/citation.cfm?id=1630659.1630669

8. Mackinlay J. Automating the design of graphical presentations of relational information. ACM Trans Graph [Internet]. 1986 Apr 1 [cited 2016 Aug 11];5(2):110–41. Available from: http://portal.acm.org/citation.cfm?doid=22949.22950

9. Zhang J. A representational analysis of relational information displays. Int J Hum Comput Stud [Internet]. 1996 Jul 1;45(1):59–74. Available from: http://www.sciencedirect.com/science/article/pii/S1071581996900427

10. Rogowitz BE, Treinish LA. An architecture for rule-based visualization. In: Visualization, 1993 Visualization ’93, Proceedings, IEEE Conference on [Internet]. 1993. p. 236–43. Available from: http://dx.doi.org/10.1109/VISUAL.1993.398874

11. Cleveland WS. The elements of graphing data [Internet]. Monterey, Calif: Wadsworth Advanced Books and Software; 1985. 323 p. Available from: https://openlibrary.org/books/OL3030039M.opds

12. Duke DJ, Brodlie KW, Duce DA. Building an Ontology of Visualization. In: Visualization, 2004 IEEE [Internet]. 2004. p. 7p — 7p. Available from: http://dx.doi.org/10.1109/VISUAL.2004.10

13. Blackwell A, Engelhardt Y. A Meta-Taxonomy for Diagram Research. In: Anderson M, Meyer B, Olivier P, editors. Diagrammatic Representation and Reasoning [Internet]. Springer London; 2002 [cited 2016 Aug 11]. p. 47–64. Available from: http://link.springer.com/chapter/10.1007/978-1-4471-0109-3_3

14. Wehrend S, Lewis C. A problem-oriented classification of visualization techniques. In: Visualization, 1990 Visualization ’90, Proceedings of the First IEEE Conference on [Internet]. 1990. p. 139–43, 469. Available from: http://dx.doi.org/10.1109/VISUAL.1990.146375

15. Shneiderman B. The eyes have it: a task by data type taxonomy for information visualizations. In: Visual Languages, 1996 Proceedings, IEEE Symposium on [Internet]. 1996. p. 336–43. Available from: http://dx.doi.org/10.1109/VL.1996.545307

16. Wenzel S, Bernhard J, Jessen U. A taxonomy of visualization techniques for simulation in production and logistics. In: Simulation Conference, 2003 Proceedings of the 2003 Winter [Internet]. 2003. p. 729–36 Vol.1. Available from: http://dx.doi.org/10.1109/WSC.2003.1261489

17. Yu CH, Behrens J. The alignment framework for data visualization: relationships among research goals, data types, and multivariate visualization techniques [Internet]. 1995 [cited 2016 Aug 11]. Available from: http://www.creative-wisdom.com/alignment/alignment.html

18. Tory M, Moller T. Rethinking Visualization: A High-Level Taxonomy. In: Information Visualization, 2004 INFOVIS 2004 IEEE Symposium on [Internet]. 2004. p. 151–8. Available from: http://dx.doi.org/10.1109/INFVIS.2004.59

19. Chi EH. A Taxonomy of Visualization Techniques Using the Data State Reference Model. In: Proceedings of the IEEE Symposium on Information Vizualization 2000 [Internet]. IEEE Computer Society; 2000 [cited 2016 Aug 11]. p. 69. Available from: http://dl.acm.org/citation.cfm?id=857190.857691

20. Bertin J. Semiology of graphics [Internet]. University of Wisconsin Press; 1983. 415 p. Available from: http://books.google.com/books/about/Semiology_of_graphics.html?hl=&id=ruZQAAAAMAAJ

21. Noik EG. A space of presentation emphasis techniques for visualizing graphs. Graphics Interface [Internet]. 1994; Available from: http://citeseerx.ist.psu.edu/viewdoc/download?doi=

22. Carr U. A cognitive classification framework for 3-dimensional information visualization. 1998; Available from: http://pure.ltu.se/portal/files/1774822/LTU-TR-9804-SE.pdf

23. Lohse J, Rueter H, Biolsi K, Walker N. Classifying visual knowledge representations: a foundation for visualization research. In: Visualization, 1990 Visualization ’90, Proceedings of the First IEEE Conference on [Internet]. 1990. p. 131–8. Available from: http://dx.doi.org/10.1109/VISUAL.1990.146374

24. Zhou MX, Feiner SK. Visual task characterization for automated visual discourse synthesis. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems [Internet]. ACM Press/Addison-Wesley Publishing Co.; 1998 [cited 2016 Aug 11]. p. 392–9. Available from: http://portal.acm.org/citation.cfm?doid=274644.274698

25. Wilkinson L. The Grammar of Graphics [Internet]. Springer Science & Business Media; 2005. 691 p. Available from: http://books.google.com/books/about/The_Grammar_of_Graphics.html?hl=&id=_kRX4LoFfGQC

26. Wickham H. A Layered Grammar of Graphics. J Comput Graph Stat [Internet]. 01/2010 [cited 2015 Jul 1];19(1):3–28. Available from: http://www.tandfonline.com/doi/abs/10.1198/jcgs.2009.07098

27. Engelhardt Y. The Language of Graphics: A Framework for the Analysis of Syntax and Meaning in Maps, Charts and Diagrams [Internet]. 2002. 197 p. Available from: http://books.google.com/books/about/The_Language_of_Graphics.html?hl=&id=b8yjh5KUdQYC

28. Engelhardt Y. Objects and Spaces: The Visual Language of Graphics. In: Barker-Plummer D, Cox R, Swoboda N, editors. Diagrammatic Representation and Inference [Internet]. Berlin, Heidelberg: Springer Berlin Heidelberg; 2006. p. 104–8. (Hutchison D, Kanade T, Kittler J, Kleinberg JM, Mattern F, Mitchell JC, et al., editors. Lecture Notes in Computer Science; vol. 4045). Available from: http://link.springer.com/10.1007/11783183_13
