iConference 2018 trip report: information science, computing education, and HCI converge
When I joined the UW iSchool, I had no idea what iSchools were. I knew nothing about library science, information science, library and information science, information systems, information management, or any other flavor of information research. I thought information retrieval was a topic in CS research. My naive point of view as an HCI and computing researcher was that research about libraries and information was niche, with a narrow focus on books. If you’re a CS researcher, you might think the same thing.
I couldn’t have been more wrong. As I’ve learned over the past decade as faculty in an iSchool, the constellation of academic information disciplines has a rich century-long history, full of powerful ideas about what information is, how people seek it, how systems and technology (from books to bits) facilitate this, and how information changes lives. As a computer scientist, I’ve come to see computing as partially subordinate to the broader field of information, in that most of the value in computing to society is as information technology, and that applied areas of computing such as HCI and software engineering can and should be viewed from an information lens. That computing researchers do not know this history or this field is a travesty of academic history.
As a scholar, I’ve also come to view iSchools as a uniquely rich context for interdisciplinary research about computing. It’s this context that’s helped me develop my perspectives, and also broadened and deepened my scholarship beyond the narrow walls of CS and HCI. I believe iSchools hold huge capacity for reinventing academia, and that universities that have strong iSchools will lead the world in advancing knowledge and human progress.
The iConference is where people pursuing this vision gather. The conference started as a way to bootstrap the nascent collection of information schools that started popping up in the 1990s. It was a place for iSchool deans to exchange knowledge and strengthen each other’s schools, much like the Snowbird conference that helps computer science leadership in academia move the field forward. These days, the iConference has many other things typical of academic conferences, including peer-reviewed conference papers, works in progress, and posters, but I view these as peripheral. The heart of the conference is still about facilitating and growing the iSchool movement.
One way to judge the success of the movement is to look at the 91 iSchools that now exist. Some of the best universities in the world have iSchools, including UC Berkeley, the University of Washington, the University of Michigan, UCLA, Carnegie Mellon University, Georgia Tech, Rutgers, the University of Wisconsin–Madison, and the University of Colorado Boulder. And some of the best work on urgent topics around information, whether algorithmic bias, fake news, social media, or data privacy, is happening in iSchools.
Despite all of this growth, however, the majority of schools in the iSchool community are still very much defined by studying libraries and training librarians. Attending my second iConference here in Sheffield, UK, was a reminder of this, and a reminder that the UW Information School is still unique in its exceptional disciplinary breadth. It was also a reminder that this history and diversity are at once the movement’s greatest strengths and its greatest weaknesses.
Below I discuss some of my experiences at the conference, and my effort to reconcile iSchools’ past with their future.
Papers and Awards
One interesting way to understand what iSchools are up to, and what they value, is to look at the awards the conference gives. Award-winning papers contributed:
- Data mining techniques for extracting information from news stories
- Analyses of cloud service contracts
- Studies of privacy attitudes in fitness tracking services
- Information use in strategic decisions at software companies
- Interactions between police and the public on Twitter
These topics mirrored the broader set of topics at the conference, spanning social media, bias, retrieval, info viz, analytics, data mining, etc.
As a human-computer interaction and computing researcher, I found these topics remarkably similar to those I would see at CHI, CSCW, and other HCI venues, not to mention other computer science venues. HCI and CS researchers would be smart to pay attention to the work happening in information science venues, and vice versa.
Lynn Silipigni Connaway’s keynote: Online Information Seeking
The opening keynote was by Lynn Silipigni Connaway, Director of User Research at OCLC Research, a center that focuses on the challenges facing libraries amid modern information technology. She described a massive longitudinal study of how people seek information in their lives.
Methodologically, this was no different from any other mixed-methods work I would see published at an HCI conference, including interviews, surveys, and some ethnographic methods. And yet, its longitudinal scope was uniquely ambitious, as was its scale of data collection, with more than 150 interviews. Projects of this size are relatively rare in HCI.
Some of the discoveries of the work were interesting, but not that surprising, since the trends mirrored many people’s everyday experiences. Millennials struggle to evaluate online sources, but don’t think that they struggle. Youth assume that polished visual design indicates credibility. They assume that popularity on Google indicates credibility, and that Google is always right. People think they know how Google ranks information, but when pressed, they have no clue. People use Wikipedia to find citations, but they never read those primary sources. Most youth are completely unaware of what libraries do and generally don’t read books. These trends reveal a global citizenry that is information-illiterate in several critical ways, but more resourceful than ever.
The recurring theme I saw in all of these was that the interface is king: lower the bar to accessing information, and that is what people will use, regardless of whether the information is less valid, less accurate, or inherently biased. Perhaps not surprisingly, my view is that the right response to these troubling trends is to change the interface.
But Lynn, with her focus on libraries, and her assumed audience of LIS researchers, focused instead on recommendations about how to change libraries. She talked about LIS being more proactive and less reactive, more entrepreneurial, and more community and relationship based. She argued that there needs to be a renewed focus on teaching information literacy, critical thinking, and that libraries should be part of this effort.
I found the focus on libraries to be misguided. I agreed with the skills she was advocating we teach, but I don’t see how libraries are an effective institution for teaching them. People don’t go to libraries to be told what to learn; libraries are contexts of informal, self-guided learning. If no one realizes they lack information literacy, they’re not going to go to libraries to gain this literacy. If we want the public to learn these skills, we need schools to teach them as part of compulsory education, integrating them into K-12 curricula, and training teachers to teach them.
This is a challenging chicken-and-egg problem: if the public doesn’t know it lacks information literacy skills, why would it support teaching them in schools? Much like the CS for All movement advocating for teaching CS in schools, information science needs its own global effort to teach information literacy. Unfortunately, I don’t see anyone with this vision in agencies like the National Science Foundation with the resources to bootstrap such an effort. Perhaps the best strategy is to join the CS for All effort, integrating information literacy into K-12 CS courses.
Susan Dumais’s keynote: Large-Scale Behavioral Analysis: Potential and Pitfalls
Susan Dumais, Distinguished Scientist at Microsoft Research, also gave a keynote. She’s long worked in information retrieval and HCI, and is one of the few HCI researchers that has a foot in both HCI and information science. I eagerly awaited her keynote, which focused on tradeoffs of using large data sets to understand information seeking.
She started by discussing the massive logs we now have about people’s interactions with web services, and how these logs effectively represent how people seek information online. She then reminded us of the history of the web, and just how little content, activity, and data was available just 20 years ago. Lycos, a popular search engine in the nineties, only had 1,500 queries a day! At the same time, digital library efforts were picking up, including at Stanford, which directly led to Google. Interestingly, web search used to be something that one needed a Ph.D. to do; now everyone uses it for everything in nearly all contexts.
In Susan’s view, behavioral search logs are how search became ubiquitous. They’re what allowed us to continually improve systems using evidence. They’re traces of real-world in situ behavior, and so they’re very strong evidence of what people do, at the scale of billions. But early work on how to leverage these logs was based on assumptions from library and information science. The field of information retrieval quickly found that web search was not at all the same as library search, because they have different interfaces. All of these innovations in how to leverage log data to effectively retrieve relevant results, including both descriptive and experimental methods, have rapidly improved search.
Susan discussed many downsides and limitations to data-driven design:
- Designing purely through data ignores the immense power of design principles and design expertise.
- There’s no way to know why people do what they do through logs, unless interfaces are designed to reveal motive explicitly.
- Logs are limited to the current systems and their designs; they can’t tell you anything about entirely new designs.
- Just analyzing logs ignores bias, privacy, ethics, and other societal issues.
She advocated, like Lynn above, for mixed-methods approaches that address these limitations. She gave several examples of mixed-methods research she’d done about re-finding, relevance, and ranking.
In some sense, Susan’s talk was bringing knowledge about information retrieval research in HCI and computer science to an audience of information science researchers, and so little of this was new to me. I’d love to see a talk by an information scientist at an information retrieval conference, to learn what theories and ideas might reshape computing’s efforts to advance retrieval.
My colleague Joe Janes probed into this link, asking how all of these improvements are changing us as people, if they are at all. Susan found it hard to argue that more information is bad, but clearly didn’t have a deep answer to the question. Perhaps that’s the contribution that information science is best positioned to make.
Luciano Floridi’s keynote: an information view of democracy
Luciano Floridi, Professor of Philosophy and Ethics of Information at the University of Oxford, gave the third keynote of the conference. He warned us that he was giving an explicitly political talk.
His argument was as follows. First, he argued that “digital” has allowed ideas to be glued together and cut apart by eliminating context from information. Some ideas are cut apart. Presence is no longer necessarily tied to location, killing physical banks. Law is no longer tied to territory, complicating enforcement. Agency is no longer tied to intelligence, divorcing the ability to perform a task from the need for intelligence. Other ideas are glued together. Analog and digital. Offline and online. Information and identity. All of these are now fused. The world is moving into a space where we live a hybrid analog and digital existence.
Luciano’s second point was that the real challenge in our digital age is not more innovation, but governance of the digital. Governments and companies don’t realize this yet, but Luciano sees this as the next phase, moving beyond the innovation era. This next phase is creating a crisis in democracy, because our legal frameworks are not built for a hybrid existence. Snowden, for example, got caught in the tension between security and freedom of speech. Apple and the FBI exposed a tension between security and privacy. Our dichotomous existence is exacerbating these tensions, requiring a more flexible democratic policy framework. Simultaneously, debate in this environment, particularly mediated by digital platforms, only seems to create more toxic, extreme conditions. Since democracy requires debate to function, this is a problem.
His third point was that we are entering an age of design, because digital has given us more affordances and fewer constraints. Lower barriers to design mean more innovation, but that is causing even further tension in democracy. Luciano believes design can also be a solution to this problem, allowing us to innovate in democracy itself.
Luciano believes the core design problem is consensus building, which is an information problem. One solution he proposed was more voters through compulsory voting, because of evidence that more people improve collective decisions, especially via participation from youth. Another solution is better voters; get people to Google “what is Brexit” before they vote instead of after. He admitted that as a philosopher and not a designer, he did not have strong design proposals, but he was optimistic that a society that encourages design innovation in democracy would be able to converge toward better systems.
What does all of this have to do with information? Luciano was arguing that democracy is inherently a collective intelligence problem, which is inherently about getting the right information to the right people.
What are iSchools?
What iSchools are is a perennial topic of discussion at the iConference, and I engaged in it with other iSchool deans, several of our former iSchool doctoral students now at other iSchools, and several of my faculty colleagues.
One of the more interesting conversations concerned exercises like having every faculty member explicitly describe the relationship between their expertise and information. I optimistically argued that most faculty in most iSchools could probably do this convincingly, and that making these links explicit would probably result in productive common ground about the scope of expertise in each school.
Others were more pessimistic, and believed iSchools would continue to be defined by their scholars’ misfit, boundary-spanning roles between academic disciplines. There was some discussion of whether existing disciplines would eventually broaden, absorbing the ideas being explored by iSchools’ boundary spanners, hollowing out expertise in iSchools, and leaving only those who can’t succeed in their primary disciplines.
Personally, I don’t think this is that important of a debate. iSchool faculty should do excellent, foundational work in whatever disciplines they want as long as they can relate their work to information. Weak links to information are tolerable if iSchools can uniquely foster innovative scholarship through their intellectual breadth.
Computational thinking in iSchools
I stumbled upon a 90-minute session about computational thinking in iSchools, which connected the global efforts to expand computing education in schools to the kinds of informal learning facilitated by libraries, and often taught in Masters programs in Library and Information Science. Mega Subramaniam from the University of Maryland, Hai Hong from Google Education, a representative from the ALA Policy Office, and others came to spread the CS for All gospel, but also to bridge the CS efforts into information science. The organizers argued that the key intersection was that computing is a 21st century literacy, and libraries are a key institution for promoting all forms of literacy.
It turns out that Google has invested pretty heavily in efforts in libraries over the past year. Some of the work has been integrating computational thinking into MLIS degrees, other efforts have tested activities in libraries with youth and families. The goal is for the work to produce a toolkit that facilitates librarians offering programs that leverage community partners, but also curricular ideas that help reshape curricula in library and information science. These efforts are nowhere near the scope of CS K-12 efforts, but they’re also just getting started.
One of the interesting topics at the workshop was what role iSchools might play in these national and global efforts. We talked about iSchools being great places to hire computing education researchers, because they can bridge CS and education. We discussed the numerous opportunities for deconstructing computational thinking and information literacy to find overlaps. That said, many of the more library-focused attendees wondered whether computational thinking was within the scope of what libraries should do, desiring a more robust notion of how computational thinking is a critical literacy.
Capstones in iSchools
On the Sunday before the conference, I ran a workshop on capstone courses with my colleagues Matt Saxton and Richard Sturman. We had about a dozen leaders from other iSchools there to share their experiences running capstones, but also to help schools considering capstones set theirs up.
The schools that had capstones shared a lot of similarities. Students worked in teams of 3–5; projects spanned anywhere from a quarter to two semesters; they often involved external sponsors or clients from industry, government, or non-profits; and many universities engaged adjunct faculty with a foot outside the university to bring in domain expertise and project management skills. Most of the capstones had a strong focus on designing information systems (really what looked like HCI+design projects to me), or on data science projects. The benefits to students included lots of practical experience working on a team, developing portfolio content, and enriching their personal experiences for job interviews. The benefits to iSchools were also common, including stronger external partnerships, the ability to market the capabilities of their students, and opportunities to engage alumni.
Schools varied in some notable ways, exposing some of the critical decisions one needs to make in structuring experiential learning:
- There was broad variation in how much the experience was scaffolded. Carnegie Mellon provided very little support, expecting students to productively struggle through their project with lightweight faculty and sponsor advising. Other iSchools offered much more structured milestones and resource facilitation.
- Some schools had rigorous assessment of student outcomes, while others were largely pass/fail. Many reported that summative assessment greatly constrained student work, and inhibited the necessary pivoting that comes with real projects. Others reported being required to assess student outcomes, and ended up focusing on project management skills, such as developing and meeting milestones, or justifying changes to goals. Others used peer assessment to evaluate teamwork skills. Another common idea was to assess the ability to communicate project outcomes in various forms (e.g., posters, videos, elevator pitches, web sites). There was clear consensus that the ideal would be to have no fixed learning objectives and use a pass/fail grade.
- Everyone had significant questions about IP. These were most prominent in projects with external clients or sponsors, but also inherent to teamwork generating shared IP. There was broad consensus that students, faculty, and external sponsors are uninformed about these issues, that negotiating all of them upfront is key, and that the best way to do so is a checklist that students work through with sponsors and each other to establish upfront agreements. The benefit is that such upfront negotiations are authentic to industry practice, and that in addition to avoiding problems, they can help motivate students to contribute what they agreed to contribute. One complexity to overcome is avoiding students being forced to give up their IP to graduate; there must be some form of informed consent, and real alternatives so that consent can be given sincerely.
- In addition to IP, data privacy was another issue requiring upfront agreement. Many companies that might share data don’t attach data-use licenses to those data sets. Students should ask for them, so they know how they can and cannot use the data they obtain from a company.
- For students considering using their capstone to bootstrap a company, fair use of existing IP was another concern. Students using copyrighted or patented ideas might be reasonable within the context of school, but if they take what they’ve created outside of school, fair use no longer applies.
- When an employee of a company violates an agreement, you can fire them. You can’t fire a student. No one really knew what to do with a student that broke an agreement.
- There was considerable agreement that the single biggest instructional challenge is helping students manage their project scope. This is hard partly because faculty may not have project management experience, and partly because managing project scope is inherently difficult work. Most schools required students to iteratively develop a project plan, reducing scope after each iteration until everyone felt the project was feasible within the allotted time and the students’ existing skills.
The iConference is not a great academic conference. It doesn’t disseminate the best research in the field, it doesn’t draw all of the researchers in the field, and as an umbrella venue for a collection of schools that are more intellectually diverse than any academic movement in history, it by no means has a coherent intellectual theme.
But it is an important conference. It brings together leadership amongst nearly one hundred schools representing the world’s best hope at broadening and deepening society’s understanding of information, information technology, and computing. We need strong iSchools to educate the world that “tech” (as the public likes to call it) is not a bunch of magic algorithms created by boy geniuses that will save society. It is a complex, consequential, and computational form of information medium that, like all other forms of information technology, requires careful design to harness well. To forget that is not only dangerous, but a missed opportunity to express the best of humanity.