TILT-y Mail #14
The Open Face of Reference
It’s a joke among librarians that we all get asked for directions when we’re in retail stores because we look approachable and like we might know a thing or two. I’ve always called this The Open Face of Reference and wonder how it maps to the online world.
In library school we learn that people with an information need will ask their friends first and often will only go to an expert if their friends don’t know the answer. The trick to being a good librarian in the real world, to my mind, is being someone that people feel is in their friend circle. Easier in a small community.
I encourage librarians who don’t spend much time on Q&A sites to pop in and see what questions people are asking on Stack Overflow or across the Stack Exchange network of subcommunities. In the library world we feel that you can ask us anything, but is the library really the best place to figure out how to tell a clone apart from its original, or whether balrogs have their own language? How do we, or should we, become the open face of reference in online spaces? How do we get people to ask us ALL the questions?
A few reports I’ve read this week.
- E-Rate trends — surprise, the E-Rate application process is cumbersome, and 20% of applicants report getting “timed out” of the application process.
- Digital Divide Index by the Intelligent Community Institute of Mississippi State gives concrete metrics about how digitally divided the US is, broken down at a county level based on a few indicators.
- Not a report but I like this idea of silent reading parties.
The New York Times and YouTube both announced that they are starting to use automated moderation tools and volunteer moderators to help their horribly overworked community staffers. Try seeing if you agree with the NY Times’ moderation decisions (I agreed in four out of five cases but disagreed with their decision not to moderate the comment insulting to women shown below) and see what YouTube’s Hero program offers (basically whuffie in exchange for unpaid work). While I wish them luck in improving their comments, and I applaud their attempts at transparency, I wonder if we’ll hear more as these programs progress.
As I’ve said previously, if we approached the “Our comments suck” problem like any other system failure, maybe we could get some traction instead of just handwaving and saying “The problem is that people are awful and we give up!”
People, some people, have been awful since the beginning of time, and society has somehow managed, drawing lines about what is and is not okay. But engineers, programmers, and techies generally tend to gravitate toward problems that code can solve, and this may not be one of them. If we know that “neutral” code can reflect and amplify society’s biases and prejudices, what is our responsibility to correct for that? Especially if we built the platforms that host that bias and prejudice?