My own Graph Epiphany came back in the early 1980s. I was working for the National Science Foundation in a great group called the “Productivity Improvement Research Section”. The name had been bestowed on us a year or so earlier when our previous incarnation as the “Innovation Processes Research Section” ended as we were absorbed by the Division of Industrial Science and Technological Innovation. I mention all these mind-blowingly tedious Federal government names only to highlight the futility of trying to decide what a given workgroup is actually doing by looking at its name.
We were an interdisciplinary group of some 6–8 social scientists — a political scientist, an economist, a sociologist, an industrial engineer, an organizational psychologist, an educational psychologist — all thrown together under the leadership of a wild and crazy cognitive psychologist. Here we were, jammed into a division of NSF to study technological innovation, with a group of old-line mechanical engineers whose idea of technology was anything that dripped oil on the floor and who couldn’t find the “any” key on the computer when the instructions read “Press any key to continue” (literally true!). They were administering several programs supporting university/industry cooperation — grants to small businesses, universities, and occasionally companies, all aimed at improving linkages between university researchers and the places where new ideas might be put into practice. Nice guys all, and pretty smart, considering where they’d come from. We, on the other hand, were all in our mid-30s, PhD’d within the last five or so years, and pretty convinced that we had Ground Truth by the short hairs. It was a great time; best job I ever had.
Trying to figure out what we could do in our new division led us soon to the idea of evaluation research. All of us had some experience with evaluation — I’d even headed up a planning and evaluation branch in another agency before going back to grad school — and we figured that we could do some good, particularly for the University/Industry Centers program. At the time, they were supporting five such centers at different universities, in fields as diverse as welding engineering and computer graphics. The idea was that the university would set up a center around a specific technology in which its faculty had significant research capability, recruit five to twenty companies with an interest in that technology who would pay between $20,000 and $50,000 per year to become members, collectively formulate a portfolio of research projects to be carried out by the professors and their students, and then share the results of the research on a fair basis. The professors had their research supported, their students got support as well, the companies got advanced technology, and the government got the credit for being a Good Guy. Wins all around in those days of yore!
One of the ideas we thought might be interesting for our evaluation was a then rather new approach: looking at communications within these centers — that is, who was talking to whom about what, how often, and with what effect? You probably don’t believe it, but at that time the idea of a “social network” was largely unknown, even to most researchers. This idea led us fairly quickly to graph theory and to the various measures that had been suggested for assessing the properties of directed graphs like these. I became the point person in the group on this particular part of the evaluation.
In order to think about communication issues, we needed data. So we put together questionnaires for each of the centers to pass out to their professors, the students working with them, and their industry contact people (generally, but not always, one per company). These questionnaires asked the respondents to indicate who they talked to, how often, and about what kinds of issues. Just simple check-the-boxes stuff, but when some centers had close to 100 people involved, it could get pretty complicated. Others were smaller, but we still had a small mountain of paper when the questionnaires started coming back.
All of these paper forms had to be coded into the computer. Remember, this was back in the days of IBM punch cards; online statistical analysis was then still a very rare possibility. Then we had to figure out how to compute the measures we wanted to report. Since there weren’t any standard procedures, I wound up having to write FORTRAN programs to go through these fat decks of cards, create some totals, and calculate the resulting coefficients. It was a fine way to understand just what we were doing to the data, but tedious and complicated. And when we wanted to present the linkages among members of a center in visual form — as maps based on graphs, essentially — we had to draw them by hand.
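For the curious, here is a minimal modern sketch, in Python rather than the original FORTRAN, of the kind of tallying those card-deck programs did. The names and numbers are invented for illustration; the point is just the bookkeeping: count who reports talking to whom, then derive simple per-person contact counts and an overall density coefficient for the directed graph.

```python
# Hypothetical illustration only: the original work was FORTRAN over punch cards.
# Each entry is (respondent, person they reported talking to); names are made up.
reported_contacts = [
    ("prof_a", "student_1"),
    ("prof_a", "industry_x"),
    ("student_1", "prof_a"),
    ("industry_x", "prof_a"),
    ("industry_x", "student_1"),
]

people = sorted({p for pair in reported_contacts for p in pair})

# Out-degree: how many distinct people each respondent says they talk to.
# In-degree: how many respondents name this person as a contact.
out_degree = {p: 0 for p in people}
in_degree = {p: 0 for p in people}
for src, dst in set(reported_contacts):      # de-duplicate repeated reports
    out_degree[src] += 1
    in_degree[dst] += 1

# Density of a directed graph: observed links / possible links, n * (n - 1).
n = len(people)
density = len(set(reported_contacts)) / (n * (n - 1)) if n > 1 else 0.0

for p in people:
    print(f"{p}: talks to {out_degree[p]}, named by {in_degree[p]}")
print(f"density = {density:.2f}")
```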
Our reports wound up being a huge success, both with the program and with the centers. We were able to find out some things about the effects of organization that mattered. In one center, almost all communication, at least about administrative matters, flowed through one person, who turned out to be the departmental secretary. When she abruptly quit a few months later, the center basically fell apart; our map told them why. In most of the centers, contact was heaviest with those industry members who were located right down the road. And the reason why most companies became members of these centers was unexpected: access to negative research findings — that is, things that the researchers had tried that didn’t pan out. You can’t publish non-results, but they were just the thing that could save a company a lot of time and money by steering it away from the same dead ends. By now, this observation has become commonplace; then, it was brand new.
So what’s the point of this historical ramble? Well, it’s basically to note that the use of directed graphs to map social interactions has a history reaching back at least into the early 80s. The tools weren’t well developed, and much of what we did was knocked together. And nobody knew what a social network was, let alone a map of one — trying to explain what these graphs showed, why the nodes were located where they were, how the axes might be interpreted, and what the lines connecting the nodes showed was a major educational process. Once people figured these things out, they began to find the stuff useful. Today, the tools are vastly better, and the idea of social networks has become a meme. Which is all to the good. However, it’s interesting to remember the old pre-paradigm days of network analysis, and to reconsider how it was that we didn’t dream up Facebook 20 years earlier.