An interview series that highlights the human side of interesting people in tech.
Nava Tintarev interviewed by the FT.
Full Professor in Explainable Artificial Intelligence, at the University of Maastricht. Some people would put Chair somewhere in that title, but I don’t feel like a piece of furniture just yet.
What do you do on a normal day?
The days differ a lot, and it’s probably one of the reasons I like this job. What people maybe don’t realise is that there’s a lot of interacting with people. As professors we teach — education is a big part of our job, but it’s also not as big as people might think. There are things related to the research itself: the generation of ideas, writing them down, and running experiments. Then we sometimes discover that it doesn’t work and we need to replan. There’s the aspect of mentoring and guiding other people. There are a lot of things to do with the larger research community to make sure that we’re talking to each other outside our little silos. This happens at national gatherings of scientific ideas or international conferences.
That sounds like a lot!
There is a norm and glorification of overwork. I think it’s seen to be good, somehow, to work, I don’t know, 60–80 hours a week. Certainly if you want to be excellent and visible internationally, which is not really manageable in a 40-hour workweek. On the other hand, we have tremendous flexibility. The joke goes: “Where do you want your half-hour of sleep, 2 am or 6 am? It’s up to you.”
What’s the most exotic place you’ve been to with this job?
I’d say Singapore. It’s also the furthest. Usually, the conferences are held where there’s a strong research community, so up until recently, it was mainly North America and Europe. Now we see that it’s spreading more globally, but it’s taking some time.
What was the first piece of code that you wrote?
Ooooh lordie. *looks to the ceiling with a gloriously dreamy smile* I think it was BASIC. We had these magazines at home where you could copy over the text and type it into a terminal, and you’d press a button and stuff would happen. But I remember the program that impressed me the most was for making music. There were certain commands you could use to play a specific note, and then you would type out, note by note, what to play and loop a certain strand. It was exciting because then you could go back to the first line and repeat it a certain number of times. I thought it was tremendous! This was at the ripe age of maybe 8 or so, on our family computer — an IBM 286. It was all sooo much simpler back then *laughs* just plain text, no IDE, no text formatting, one file, you actually knew what was happening in your code…
That brings us to explainable AI. What is that, or rather, what’s the unexplainable part?
Isn’t most of it? *laughs* The type of explaining I’ve looked at has been specifically for decision support for filtering and ranking algorithms. So it’s slightly different from the kind of explaining we have for classification tasks. It’s got to do with us humans trying to make decisions together with a computer. For example: “I’m searching for cockatoos and there are a number of results that come up on cockatoos on my screen. Why do I get these particular results in response to the query? Why am I getting cockatoos in France and not in the Netherlands? Or why is there a grey parrot in my results when I asked for cockatoos?”
Why are you asking for cockatoos in the first place? 🤔
This is where interactive explanation systems can really help people improve their own mental model of what they’re looking for. We as humans have a view of the world, and, say, what a good bird is and what it should and shouldn’t do. And the computer has a model of what constitutes a good pet bird in a search result. Additionally, the computer has the advantage of the larger knowledge of the web, but it doesn’t have common sense knowledge. A computer won’t tell you that it’s a bad idea to get a loud bird to live with you in your thin-walled flat (because your neighbours will hate you). For a human, it’d be an important factor in the decision making. We’re trying to find the match between the human’s model of the world and the computer’s model of the world. To a certain degree, we also try to help the computer “be smarter”. For example, we can indicate which information it should use to train models, or feed it “common sense” human knowledge. In this context, it means figuring out what kind of information the computer needs as a source to start generating explanations.
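To make the cockatoo example concrete, here is a minimal sketch of a “why this result?” explanation for a ranked search. The item names, feature dictionaries, and explanation template are all invented for illustration; real explanation systems for filtering and ranking are far richer than this.

```python
def explain_result(query_features, item):
    """Compare the user's query features with an item's features and
    report which ones matched and which differed."""
    matched = [k for k, v in query_features.items()
               if item["features"].get(k) == v]
    mismatched = [(k, v, item["features"].get(k))
                  for k, v in query_features.items()
                  if item["features"].get(k) != v]
    lines = [f"Result: {item['name']}"]
    if matched:
        lines.append("Matched your query on: " + ", ".join(matched))
    for k, wanted, got in mismatched:
        lines.append(f"Differs on {k}: you asked for {wanted}, this has {got}")
    return "\n".join(lines)

# Hypothetical query and search results, echoing the example above.
query = {"species": "cockatoo", "country": "Netherlands"}
results = [
    {"name": "Cockatoo in France",
     "features": {"species": "cockatoo", "country": "France"}},
    {"name": "Grey parrot in the Netherlands",
     "features": {"species": "grey parrot", "country": "Netherlands"}},
]

for item in results:
    print(explain_result(query, item))
    print()
```

The point of such an explanation is exactly the mental-model matching described above: the system exposes which parts of its model agreed with yours, so you can refine the query — or your own idea of what you’re looking for.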
What is the ideal outcome of your research?
A lot of my current work looks at online information, like Tweets and news — so idealistically, awareness of a variety of viewpoints. Tolerance. We’re living in an increasingly polarised world and people have very strong opinions. I think we’d benefit from getting to the position where we all can say “I disagree with you but I will fight for your right to state your opinion”. I’d wish for a continuous questioning and challenging and growing and developing, because the world is not static. Points of view, however ingrained and fixed and part of our personality, are informed by the past. So one thing I’m trying to understand with my graduate students is what we need to know about people when we pick the content for them. What do we need to know when we make the explanations? I don’t think the same explanations work for all people, so you’d probably need to adapt them. We’re just at the beginning stages of thinking about what those dimensions might be. How do we adapt the interventions, the explanations and the nudges in a way that is specific to each user?
What was the last piece of code you wrote?
Now that is terrifying *sighs* That’s what happens when you become a professor — you become a bit of a manager and get more and more disconnected from the work. I wrote some R code to do statistical analysis when I was an assistant professor in Delft. Before that, I made an experiment with visualisations for autonomous systems and logistics. That would have been 2014 or so.
Do you miss it?
I do. But I also don’t miss the frustration of being stuck on a bug, and having a deadline and staying up all night to figure out what’s wrong.
Did you always know that you wanted to be a professor?
No!! It’s something that snuck up on me. I didn’t know I wanted to do research until I did a Ph.D. I was blessed with a fantastic supervisor who supported and pushed me to do better while being encouraging. But then I fell in love with the process of uncovering things that nobody had uncovered and opening up new ground. I love reading, so I’m counting my blessings for getting to read interesting articles as part of my work.
What advice would you give to someone who’s just starting out in engineering?
Figure out what you’re good at and what the actual activities are that you enjoy doing, as opposed to what outcomes you want to achieve. Inherent ability goes a long way, but resilience and grit are what will carry you over the finish line. Patience — the ability to sit with a problem — is a big one to nurture in software engineering. The field will always keep changing, so you’ll always have to keep learning.
When I say Financial Times, you think…?
Actually, and this is going to sound corny: high-quality news. That’s the exposure I’ve had to it. When I worked at Bournemouth University we had an FT subscription as members of staff. One of the few perks of being a university professor 😆 (The other one was a really nice bike repair service). I remember I’d read it in the morning and get news that felt reasonably complete and well written.
Thank you Nava Tintarev.
Interested in getting to know the FT? Click here to find out about our Product & Technology teams: roles.ft.com