1x3 — Claudio Bezzi: Evaluation, on the pragmatic and hypothetical side

Emiliano Carbone
Design topics — Conversations
6 min read · Mar 26, 2020

On this occasion, I had the opportunity for an exchange with Claudio Bezzi, sociologist and professional evaluator. This conversation aims to highlight "evaluation" as an important issue that is always present beneath design practice. A large part of what happens in design is, indeed, to evaluate between worlds: between "existing" and "non-existing" ones, between "actual" and "potential" products or services, especially in its critical dimension.

Professor Bezzi has long been committed to the evaluation of public policy and administrative action, as well as to evaluative and sociological methodologies. He is a co-founder and former director of the Associazione Italiana di Valutazione, and lead editor of the scientific journal Rassegna Italiana di Valutazione. A prominent researcher published by FrancoAngeli, he is the author of several works: Il disegno della ricerca valutativa; Cos'è la valutazione?; Fare ricerca con i gruppi; Il brainstorming: pratica e teoria. As usual, Professor Bezzi's answers were recorded and synthesized by myself.

Enjoy the talk!

Dear professor, let's start with an overview. What is the current state of evaluation? And how widespread is it as a professional practice?

"To respond to that question, we need to start with a distinction. Evaluation culture differs a lot depending on whether you consider Italy, Europe, or the rest of the world. First of all, paradoxically, in Italy there is an intolerance toward evaluation, especially among those who are supposed to use it. For instance, public-sector executives are averse to receiving technical and reasoned opinions on their actions, and thus they do not thoroughly comprehend the crucial reasoning behind the outcomes achieved. This has fostered a superficial cohort of evaluators who always propose very few evaluation methods, and consequently commonplace outcomes. In Italy, in other words, lack of interest and distrust render evaluation's course of action useless in terms of understanding, rearrangement, and reflection. Unfortunately, this situation has not allowed our evaluative culture to develop as it has in other European contexts, particularly in Anglo-Saxon countries. In the rest of the world, despite the greater success of evaluation, practitioners are trying to distance themselves from its positivist nature. Now, regarding the spread of evaluation, I think the overall level is fairly high because of the laws that demand it. By now almost everything requires evaluation and, unfortunately again, sometimes in a really ephemeral manner. For example, I remember the 'customer satisfaction' trend, which is not evaluation, as well as the 'counterfactual' one. And again, today's ubiquitous demand for 'impact' evaluation. So I think evaluation is widespread, but it tends to follow trends, stereotypes, and misunderstandings whose focal point is the method."

Now, moving to the ethics of evaluation: what are usually the most significant hurdles to master in its design and research? And regarding the latter, what basics would you suggest?

"As in other professional practices, a deontological code exists in evaluation too: for instance, respect for the people with whom you design the evaluation and gather data, as well as clarity and plurality in communication. Concerning the design and research of evaluation, instead, the topic is quite different. Because of the above-mentioned mechanical use of methods, evaluation often resolves itself into operationism. Or rather, the hardest struggle lies in the techniques you pick for implementation. Unfortunately, the idea that a problem can be solved by administering tools and techniques ignores the epistemological and methodological complexity present in the design. This manifests itself especially in the first moments of the study, where you have to shape the so-called evaluative 'mandate'. When you face an evaluative question, you cannot respond to it through operative research alone: there is also the crucial need to understand what you are talking about. Thus, the first part of the evaluation should favour comprehension of the context: the actors, the problems, and the underlying theories that characterize it. The question is: what are the values, especially in semantic terms, that the client assigns to the basics of evaluation? Answering it requires interacting with the stakeholders, in order to understand what they mean by 'efficacy', 'efficiency', 'equity', or 'sustainability'."

Let's remain within evaluation design and research. Is it possible to ascertain definitively what you need to evaluate? And if not, does evaluation allow for hypothetical interpretations?

"The evaluator who, as a rule of thumb, grounds current evaluations on previous ones is a bad one. It is not possible to understand a priori precisely what you need to evaluate. Each evaluation is a new case, with new stakeholders and new issues and challenges to deal with. As I mentioned earlier, this issue should be resolved by moving away from a superficial level of syntactically mendacious outcomes, because numbers per se don't explain anything. If you don't go deeper into the pragmatic levels, your outcomes don't grasp the problems that are the essence of your evaluation. Within the strictly methodological part of the research, I concern myself with understanding and deepening the pragmatics of words. I have personally led several evaluation activities that stopped at the mandate phase, where the semantic and value problems are circumscribed, and which are often misaligned with the true purpose of the client. This is why each evaluation benefits from a new semantic universe. Of course, it is also true that you don't throw away your expertise a priori; an evaluator should indeed be specialised in a certain context. Hence, following a method, you go on to verify the nuance characterising your context of study, especially within the mandate phase. So, hypothetical evaluation is always present. None of us is free from the biases, false myths, and routines that nurture our worldview."

Dear professor, considering the overview of evaluation outcomes: are there precise limitations to the knowledge they construct and produce? In other words, should evaluation outcomes be recognized only as factual knowledge, or also as predictive knowledge?

"Certainly, I would say that knowledge doesn't have definitive boundaries. Evaluation outcomes have, and should have, predictive aims. Here too, methodology has developed many tools, techniques, and activities that aim to formulate hypotheses about how a specific context would react to given solicitations. In this regard there exists a whole branch called Futures Studies, which builds long- or short-term scenarios, striving to evaluate the probable drift of a specific phenomenon. Within evaluation, indeed, after overcoming the first hurdle of understanding why a specific action went well or badly, you often face the problem of its sustainability. For instance, within public policy, the common question you have to evaluate is whether, once the implementation phase of a project is finished, its intrinsic mechanisms will continue to work properly and independently. To answer that, you need to leverage tools and methods that serve to formulate hypotheses for designing strategic scenarios. Thus, the knowledge produced by evaluation actions is not only factual but also has predictive aims."

To conclude, let's consider design practice today, such as the innovation of products and services. Do you think that, for such a purpose, it is possible not to account for an evaluative methodology?

"Look, this issue is complex on the syntactic level. There are many practices with aims similar to evaluation, but in different professional fields; for instance, marketing could be compared to evaluation, or economics, or policy… I think the answer should be sought right at the syntactic level, where the sense of things is created and then put into action. Each practice has its own language: it constructs its meanings and then develops its lexicon. Drawing on long personal experience, communities of different practices and different scientific communities have languages that are odd to one another, and use different languages to say the same thing, or the same terms to say different things."

Emiliano Carbone

Senior Business Designer @ Tangity — NTT DATA Design studio #design #research #complexity (views are my own)