Privacy Talk with Hao Ping (Hank) Lee, PhD student, Human-Computer Interaction Institute at Carnegie Mellon University: What is your research at the Human-Computer Interaction Institute at Carnegie Mellon University?

Kohei Kurihara
Published in Privacy Talk
7 min read · Jun 16, 2024

“This interview was recorded on 23rd May 2024 and discusses AI and privacy risk.”

  • Why did you start to research your interest?
  • What is your research at Human-Computer Interaction Institute at Carnegie Mellon University?

Kohei: Hello, everyone. Thank you for joining the Privacy Talk.

I’m so glad to invite Mr. Hank from the US. He’s doing remarkable research in the AI space. I’m pleased to share his current activities and his research field.

So, Mr. Hank, thank you for joining today’s interview.

Hank: Thank you for having me.

Kohei: Thank you. First of all, I’d like to introduce a bit of his profile.

Hank’s research lies at the intersection of human-computer interaction, usable privacy & security, and human-centered AI. His research helps practitioners design and build privacy-preserving AI products.

He also studies end-user privacy needs in consumer AI, such as online behavioral advertising and ChatGPT. He is currently a Ph.D. student in the Human-Computer Interaction Institute at Carnegie Mellon University, advised by Dr. Sauvik Das and Dr. Jodi Forlizzi.

So again, Mr. Hank, thank you for joining today.

Hank: Thank you for the introduction.

Kohei: Yeah. For the first agenda, I’d like to explore your history and the work you do. It’s very impressive, and I’ve studied your research papers. They were very interesting, so could you tell us why you started researching your area of interest?

Hank: Yes, totally. So I guess I could provide two different types of reasoning. One is more at the micro level, and one is more at the macro level, right?

  • Why did you start to research your interest?

So at the micro level, I started my research journey seeing myself as a human-computer interaction researcher; I began doing research as an undergrad. What interested me when I started doing research was what was missing in the relationship between computers and humans.

The work I did before actually had nothing to do with privacy. I studied how people use their mobile devices, specifically how we could create a notification system that could actually be helpful to you.

You could even argue that it’s actually on the opposite side of privacy. When I applied to grad school back in 2020, I got the chance to pivot my research in a different direction, right?

So that’s also the time when I was introduced to the topic of usable privacy and security, where we bring human factors, or concepts of human-centered design, into the space of security and privacy.

So that’s how I got introduced to the space. The micro-level reasoning here is that, you know, at the time I just wanted to try something new, right?

So that’s, in some ways, the main reason why I chose this direction and chose to work with my advisor.

But at the macro level, the reason behind why I chose this path, privacy specifically in AI, is that there was a time when the conversation around responsible AI, meaning how we could make AI technologies more socially responsible, started to get really popular, at least in the Western context, right?

You know, obviously, privacy is something that’s always brought up, but something we found is that a lot of the time, when people talk about privacy in AI or machine learning, they talk about differential privacy or federated learning, and those are super useful tools for sure.

Or people will talk about, you know, how we could better collect user data. But I feel like AI’s changes, or its impact on privacy, could go beyond that, right?

For example, we now see more AI-capability-infused technologies being created, like facial recognition technology and surveillance technology. For sure we have to care about the collection of data, but also how these AI-capability-infused technologies have impacted individuals.

Or society as a whole, which had received less focus, at least when I was starting my PhD journey.

So I feel like the macro-level reason is that I wanted to engage with the conversation about how we could actually build AI systems more responsibly, and I feel like privacy is a really interesting angle for engaging with this conversation meaningfully.

Kohei: Thank you for sharing; that’s very important in the privacy space at this moment. You are now at the Human-Computer Interaction Institute at Carnegie Mellon University doing research in human-computer interaction, which is a very interesting topic for us.

So could you share your research work at the university and the vision you are embracing right now?

  • What is your research at Human-Computer Interaction Institute at Carnegie Mellon University?

Hank: Yeah, sure. As I briefly mentioned, I now focus my research solely on privacy and AI, right? There are two angles to how I approach this. The first one is from the practitioner perspective, meaning the people who actually use AI to build systems, right?

These are the people who introduce AI technology into the real world, where it interacts with lay people or everyday users of those technologies. So a lot of the work in the early stage of my PhD really tried to build the foundation for this work, right?

Essentially, we tried to answer: how does AI change privacy, if at all? And for the practitioners who actually use AI to build systems, are they aware of these AI privacy risks, how capable are they of addressing them, and do they have any kind of support in doing so? We have two insights so far. The first is that AI does indeed change privacy, and we can definitely talk more about that later on. The second insight is that practitioners actually need more AI-specific privacy guidance and support in order to do better work on their products.

So this is the foundation we have laid for this research. What I’m currently working on specifically, building on this foundation, draws on what we learned from actually talking to practitioners and from analyzing what’s happening in the real world through the AI incident database. Of interest is how these AI systems have been deployed in the real world and have actually caused harm, right? Drawing on this knowledge, we’re trying to build tools that could actually help practitioners identify potential privacy harms in the AI products they are building.

This is one aspect of how I study AI privacy, from the practitioner perspective. I have also done some research that studies it more from the end-user perspective, right? Obviously, there are plenty of consumer-facing AI products out there.

One of the studies I did before tried to understand privacy within the context of online behavioral advertising, or OBA for short. This is the fancy term for basically any kind of targeted ad that you see on social media websites or when you’re browsing the internet, right?

(Video: When and Why Do People Want Ad Targeting Explanations? Evidence from a Mixed-Methods Field Study)

Those types of ads in some way know some of your personal information and serve personalized ads to you. So I looked at when and why people want to know more about how these advertisements obtained their information and became so accurately customized, right?

So when do they want to know more, and why do they want to know, right? Another project I did in this space looks into what we call large language model-based conversational agents, for example, ChatGPT.

So we try to understand users’ disclosures: whether they will share personal information when they are using ChatGPT, for example, how they navigate the risks against the benefits they want to gain from ChatGPT, and their mental models of how ChatGPT uses their data.

So, you know, that work is in the space of how end-users interact with those consumer AI products. So yeah, those are the two aspects: one is practitioner-facing and one is end-user-facing, but right now I would say I’m leaning more toward practitioner-facing research.

Kohei: That’s amazing. We can touch on the research part later, and I’m happy to read your papers; they should be very helpful for practitioners. It’s been very good.

So as we move to the next topic, we always wonder how AI will change our concepts of privacy. In your articles, you mentioned some big, interesting players, like Clearview AI, and how this new kind of startup is changing the traditional concept of privacy. So could you tell us: how has AI changed privacy?

To be continued…

Thank you for reading, and please contact me if you would like to join an interview.

Privacy Talk is a global community of diverse experts. Contact me on LinkedIn below if we can work together!
