REPRISE: Diffusion, Technology Transfer, and Implementation: Thinking and Talking About Change

Jd Eveland
Socio-techtonic Change
Sep 9, 2016 · 29 min read

NOTE: This article is a sort of “blast from the past”. It was originally published in Knowledge: Creation, Diffusion, Utilization (Vol. 8, No. 2, December 1986, pp. 303–322), and it’s had a steady if modest readership ever since. It’s a bit long; it is, after all, an academic article.

I’m reprinting it here (in a moderately edited version, with some new thoughts added) because so many of the issues it describes are still with us in pretty much the same form. Despite the passage of some 30 years and enormous revolutions in information technology, we are no closer to resolving most of the social issues in technology transfer and utilization than we were then. Perhaps this republication may help call attention to some of the problems we thought we’d solved a long time ago but which keep popping back up because we keep forgetting what we did that worked.

Your comments and suggestions are welcome, both with regard to where this article is now outdated and where its conclusions and advice remain valid. The 1980s were fun (at least in part, and for some of us), but there’s no reason we need to revisit that experience completely. Hopefully, knowledge is cumulative, although it won’t be unless we remember what we did before.

There is today an increasing consciousness that our technology has, in enough cases to worry us, outstripped the ability of many organizations and individuals to make productive use of it. In almost any scientific field one cares to mention — from agriculture to robotics, computing to genetic engineering — the refrain of practitioners is the same: “We know so much — why can’t we get people to use it right?” The degree of frustration and uncertainty surrounding the effects of technology on society generally has reached serious proportions for both technology developers and users.

And yet it is also clear that there is a substantial body of knowledge, both theoretical and practical, that bears on precisely this set of issues — namely, how ideas move and become modified in the course of being used by people and groups to accomplish purposes. Why then the frustration with the applications of these ideas in the day-to-day world? Does it matter? And what might be done about it through social action? Are there generic mechanisms whereby knowledge might be moved from place to place more effectively? And would it be a good thing if there were such mechanisms? And given what we have learned in 50 years of systematic analysis, what might we as “knowledge workers” do about the problem?

The problem of making productive use of technology — generically called “implementation” — is essentially a phenomenological issue — that is, one of understanding how people think about technology in relation to their lives and interests, and how thoughts lead to human action (Cochran, 1980). It is, after all, basically fruitless to look at technology outside of the context of human systems. Technology application is a problem only for people — it does not bother a machine at all not to be used, or to be used as a fancy doorstop; it matters only to those who paid for it and do not get a return on their investment.

In this article I will outline first what I see as some of the basic dimensions of the technology transfer issue generally; then look at some of the implications of those dimensions for action. The theme throughout is the centrality of the problem of meaning in technology utilization, and how we can use the phenomenological viewpoint to organize commonly recognized problems in diffusion, technology transfer, and implementation. There is a long and rich tradition of analysis revolving around these social issues, and my purpose is less to add entirely new insights to this tradition than to suggest how certain recurrent themes in the literature — both theoretical and “wisdom” — can be used to scientific and practical advantage. Helping others to think and talk creatively about change requires that we think as creatively ourselves, and find the appropriate organizing vision for our knowledge. It is to these points that my final conclusions are addressed.

Defining “Technology Transfer”

If we are to understand how technology transfer should be conceived and understood, we have to begin with the words themselves. First, technology. The concept of technology has to be used in the broadest possible sense if it is to make any sense at all. Technology is not simply hardware or physical objects; rather, it is knowledge about the physical world and how to manipulate it for human purposes. This point is absolutely critical — technology is essentially information. The physical objects usually regarded as “technology” are important only insofar as they embody and convey this information.

At a minimum, any definition of technology must encompass both the tools (sometimes physical, sometimes procedural) and the uses — the purposes to which that tool is put (Eveland, Rogers & Klepper, 1977). All technology is essentially behavioral. Tools cannot be understood aside from the things they are used to do — the purposes of the individuals and groups that use them. This is the essence of the “sociotechnical system” concept (Taylor, 1975; Cherns, 1976).

Both tools and uses are defined at varying levels of abstraction — “hammers,” “computers,” “hybrid corn,” and “flexible manufacturing systems” all can refer to extremely generic concepts, or to highly specific objects and procedures, or to a vast range in between. Choosing an appropriate degree of specificity is critical to the technology implementation process. Over time, uses define tools and tools define uses, interacting iteratively (Pelz, 1982).

Technology transfer depends critically on facilitation — that is, does this tool help me do something useful? If what you want to transfer does not help, your transfer arrangements won’t work (Bikson et al., 1985). As we note later, uses and goals for technology may be defined in many different ways and from the point of view of many diverse groups and individuals. A consequence valued by one person may be a disaster from the viewpoint of another. Understanding how different ideas about the usefulness of technology interact with implementation is one of the major advantages of a phenomenological approach to technology transfer.

The term transfer is also problematical. Technology is essentially information, and “transfer” is essentially communication of information — both within individuals and groups and between them — and the use of that information in the recipient system (Bikson et al., 1984).[1] Technology transfer is a subset of technological change or innovation (Eveland, 1979). Transfer is essentially a metaphor for physical relocation (Eveland, 1997). But the movement of physical objects from one place to another is meaningless unless the recipient does something with that object and the information it embodies. Thus, “utilization” is both the target and the test of the process (Larsen, 1980). Concentrating on the transaction itself rather than on what happens as a result of the transaction is a notable shortcoming in technology transfer as it is currently practiced (if not in the conceptual literature itself).

Technology transfer uses language to communicate, and understanding how language affects individual, organizational, and social action is essential (McHugh, 1968; Blumer, 1969). People understand new things largely through metaphors — that is, defining new things by how they are both like and unlike already familiar things (Bandler and Grinder, 1975; Lakoff and Johnson, 1980). Each metaphor carries with it a set of affective and substantive associations that for good or ill carry over to the new thing (Meyer, 1982).

Different metaphors create different responses. Consider the personal computer during its introduction to an organization that has not had such tools before. Three commonly used metaphors for such computers are “typewriter,” “calculator,” and “terminal.” Seeing PCs as typewriters implies one-to-one access, usually by secretaries, on desks or in typing pools (“WP Centers”); there is little consultation by system engineers with those who use them, except possibly about aesthetics or ergonomics. The “calculator” metaphor implies that the tools will be used one-on-one, largely by engineers in professional offices, with choices about both equipment and usage left largely to the individuals. Others see PCs as “terminals”, an approach that implies they should be scattered around, spaced roughly equally apart, for open use by anyone who wanders by. None of these metaphors is precisely wrong — but each tends to limit the choices of users in critical ways (Engelbart, 1982).

2016 NOTE: Since the publication of this article, the computer has entirely revolutionized office work. Keyboarding has become an essential work skill, practiced almost universally; from the CEO to the intern, everyone types. These metaphors have faded into the background as familiarity with the device has spread. But new tools are now undergoing the same process of definition. Consider how different organizations are trying to decide what role, if any, Facebook might play in their technology mix. What is it like, and not like, among tools they currently use? Tools change; the process of their adoption and implementation is remarkably stable.

“Myths” are sets of metaphors used for explanation in circumstances where empirical evidence is lacking. They help with sense-making while experience is accumulating. Myths build gradually, as metaphors are continually reshaped, usually to be more specific — for instance, what kind of typewriter, or calculator, or whatever. Unfortunately, once you have decided what something is, it is often difficult to go back and decide it is really something else. This is true for both physical tools and social roles. Eventually objects and practices become their own things, and serve as the basis for subsequent metaphors for new ideas and objects (while, of course, retaining their own metaphorical associations). They become familiar constructs whose meaning is generally assumed to be shared and not generally discussed.[2]

Sharing information among people (and organizations) requires that everyone is operating at the same general level of abstraction, and sharing roughly the same kinds of metaphors. It does not require perfect information, or precise specificity, to be effective. Ambiguity and generality can be very effective, particularly when one does not know just what sorts of metaphors an information recipient is applying. This is a lesson known to all good salesmen, but only recently has it been understood equally well by the research community (Havelock, 1973).

In some critical ways, therefore, the term “technology transfer” is an unfortunate one — almost as unfortunate as “diffusion,” which is also applied to these phenomena. Both terms have the disadvantage of erroneous metaphorical connotations. Speaking of “technology” tends to lead us to focus on the hardware, the physical object involved, which is, as I noted earlier, almost the smallest part of the question. The term draws our attention away from the behavioral dimensions of tools and their interactions with human purposes. “Transfer” emphasizes the movement of physical objects from one place to another, with the implication that the object moved is the same at the beginning and at the end.

“Diffusion” is even worse. It implies some sort of anonymous if inexorable physical process spreading across the landscape, rather like a disease.[3] If we as analysts persist in using terms whose connotations are directly opposite from what we wish to convey, we cannot really blame an audience of practitioners trying to apply the concepts for drawing the wrong conclusions.
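The disease parallel is in fact mathematically exact, as Note 3 below observes. As a sketch in generic logistic notation (the symbols here are illustrative, not drawn from any particular study), the classical “internal-influence” diffusion model and the simplest epidemic model are the same differential equation wearing different labels:

$$\frac{dN}{dt} = b\,N\left(1 - \frac{N}{M}\right) \qquad\qquad \frac{dI}{dt} = \beta\,I\left(1 - \frac{I}{P}\right)$$

where N is the number of adopters to date, M the pool of potential adopters, and b an imitation (contact) rate; I is the number infected, P the total population, and β a transmission rate. Both integrate to the familiar logistic S-curve. The mathematics cannot tell an innovation from an infection, which is precisely the connotation problem.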

Our understanding of technology transfer systems was shaped in this awkward way through a perfectly reasonable and logical chain of events. Like many parts of behavioral science, the “diffusion of innovations” started out as a real-world problem, and only later turned into a field of study (Rogers, 1983). The original problem was simple market research, in this case how to sell hybrid seed corn back in the 1930s. In the course of finding out that what farmers thought about corn really did affect what they decided to do about it, Ryan and Gross (1943) and their followers also formulated a set of categories and models that soon came to be seen as generalizable.

Generalizing came first from individuals to organizations, then to a lot of other situations — first fluoridation and health practices (Becker, 1970), then school programs, public works, and social policies (Feller and Menzel, 1976; Bingham, 1976; Berman and McLaughlin, 1977; Lambright, 1980), and more recently computers and related tools (Johnson et al., 1985). The number of such studies is now incalculable, and there is a well-established “literature” in the field (Doctors and Stubbart, 1979). What is less clear is how deeply the best ideas in this field have penetrated into the applications literature, still less into field practice in transfer.

The practice of innovation diffusion was critically shaped by marketing. It is impossible for anyone to speak ten words about diffusion without two of them being “agricultural extension.” Expectations about what technology transfer systems should and should not do and look like have, for good or ill, been critically shaped by our understanding of that program, its practices and its effects (Rogers et al., 1976; Feller et al., 1985). In many ways, extension is the defining metaphor for all technology transfer efforts.

I will not attempt here to define or describe all its features — only to note that what extension really is is virtually impossible to untangle from all the things people think it is or should be. Untangling extension-as-an-organization from extension-as-a-concept is more readily accomplished in the literature than in the field.

This point is evident when one looks at how agricultural extension served as the basis for a large number of Federal programs in the 1970s aimed at replicating extension’s success in other technical areas (Roessner, 1975; FCCSET, 1977). Agencies such as NASA (Chakrabarti and Rubenstein 1976), the Department of Defense (Hetzner and Rubenstein, 1971), the Office of Education (Hall and Loucks, 1977), and the National Institute of Justice (Blakely et al., 1983), among others, all started “diffusion” programs aimed at industry or governmental users of technology.

There are ebbs and flows in these movements; lately (i.e., the mid-1980s), direct transfer efforts appear to have been overshadowed by an emphasis on “university-industry cooperative relationships” of various kinds (Eveland and Hetzner, 1982). While transfer remains a significant political symbol, it is clear that its content has shifted and will continue to shift considerably.

In summary, each phase of the development of the field of technology transfer — both conceptual and practical — has contributed new insights and complexities that have enriched subsequent developments. But there has been a consistent tendency to focus on the content of the change rather than on the meaning of the change for those who changed. If one’s research is being sponsored by seed companies, it is reasonable to concentrate on the seed as the central focus — but letting the meaning of the seed for users be defined entirely by the meaning as perceived by developer/sellers is excessively limiting. Only by looking at the problem from the point of view of the recipient systems can we take our understanding of technological innovation to its next productive level (Havelock and Eveland, 1985).

Generic Problems with Understanding Technology Transfer

We have gone to great lengths to define what “technology” and “technology transfer” really are. But what can we do with this formulation? In the remainder of this article I sketch some of the things about technology transfer that make the formulation and application of generic models of process rather problematical, and then suggest some principles that might guide us to a new and more effective formulation of the issues involved.

Two main sets of problems/issues complicate understanding of technology transfer — structural problems (largely independent of context and organization) and dynamic problems (those posed by processes evolving over time in particular situations).

Structural problems. The first problem is deciding what the technology really is. A great deal of the conceptual history of diffusion research was focused on the development of lists of “innovation characteristics,” aimed at defining the “adoptability” of different technologies (Tornatzky and Klein, 1982). If we only understood enough things about technologies, it was felt, we could predict efficiently where and by whom they would (or at least should) be used.

By the mid-1970s, we had come to see that this approach was terminally complicated by differences in perceptions, or, in the language used earlier here, by varying metaphors for the new ideas (Downs and Mohr, 1977). This became particularly apparent when the innovations under study were “social technologies” such as educational or social programs (Larsen, 1982), rather than hardware. One way around this was to conceive of innovations as sets of specific “elements,” bundled in various ways, like a car that can be bought in any number of different configurations (Hall and Loucks, 1977). The more specific these elements are, the better chance they stand of being “transferred” in some form recognizable to their original definer (Blakely et al., 1983). While this approach makes the job easier for the analyst, it does little to resolve problems for the user.

As noted earlier, the choice of an appropriate level of aggregation to look at organizational behavior is a key issue both analytically and practically (Roberts et al., 1978). Organizations are at bottom made up of individuals, who are at best “partially included” in the organizational system — that is, they participate in many other systems as well, and must relate what they do in one system to what they do in another to maintain some degree of personal integration. When we speak of “the organization’s behavior” we sometimes forget that such behavior — however useful as an analytical construct — is nothing more than a composite average of the behavior of lots of individuals each acting out of their own context and responding to their own imperatives and interests. Ultimately, technology transfer is a function of what individuals think — because what they do depends on those thoughts, feelings, and interests. Choosing a high level of aggregation to look at transfer phenomena can sometimes obscure this key concern.

The interplays of individual and collective judgments about costs, benefits, and behavior are essentially the province of organizational politics. I use this word here in its relatively strict sense to refer to the interactions of interests among parties in a relationship (Weiss, 1973; Benson, 1975). Commitment to goals in a social group is always relative. People embrace goals with positive consequences for themselves with considerably more fervor than they do goals for which their payoff is more personally tenuous. The problem is complicated by culturally induced embarrassment in talking about values and value conflicts; but such issues do not go away simply because we avoid them.

Any technology, and particularly any technological change, involves an unequal distribution of these costs and benefits in the system. Some people must pay the costs, and others receive the benefits. If all the costs are to be paid by the lower hierarchical levels of the system and all benefits appropriated by the upper levels, “resistance to change” is not merely understandable but positively rational (Mechanic, 1962). The problem is really why some people should make a change — that is, what is in it for them. As I noted earlier, research tends to confirm that functionality is a critical determinant of the acceptance of new technology; people do things for which they are rewarded. Any analysis of technological change that does not explicitly address cost/benefit distribution, or that allows the costs and benefits to be defined according to the perspective of only some of the participants, will be fundamentally misleading.

Since all organizations have a range of purposes, they also have reasons why those purposes have not been reached — the set of things they define as “problems” (Walker, 1974). This agenda is a constantly shifting set, redefined as circumstances change. Innovation, as a part of the general system demand for adaptability, is only one of the system problems to be addressed — others include integration, coordination, and the achievement of output (Parsons, 1965). In fact, most organizational decisions have very little to do with technology as such, but with things like finance, personnel, scheduling, and resource management. That is sometimes hard for a change agent (or even someone researching change) to appreciate. No one else takes your changes as seriously as you do. On the other hand, you do not take the organization’s problems as seriously as it does. Eventually, the interaction balances out.

Culture has recently become a word with many diverse meanings in organizational analysis. Essentially, it points to the fact that shared meanings, remembrances, patterns of activity, and particularly expectations about what other people will do really matter in explaining what takes place in organizations. With culture, we are applying an anthropologist’s view of the relationships rather than a sociologist’s or a psychologist’s.

Technology affects culture dynamically. For example, as we noted earlier, personal computers have a wide variety of potential meanings to those who use them, meanings that change over time. These meanings are part of organizational culture; they are both shaped by it and shape it in turn as they evolve through experience.

Consider a hierarchical, controlled organization introducing PCs — a potentially anarchic, “power-to-the-user” situation. Such organizations often respond with elaborate control systems, sets of passwords, procedures for controlling access to disks, and the like. The results are often circumvention of the rules, frustration on the part of managers, and general failure to achieve the promised benefits of the technology. Sometimes this just produces paralysis; sometimes it can lead to a new culture more adapted to being able to use the technology, as, for example, professionals begin to keyboard their own work and clerical personnel are freed for more valuable and productive tasks. Sometimes there is a “synthesis” in which old patterns are reinterpreted in light of new conditions, such as is evident in the recent trend for data-processing managers to reassert control over stand-alone computing equipment. The point is that lots of different outcomes are possible, but no one outcome is necessary or inevitable.

Over time, cultures and patterns of technology usage both change. New information based on experience is incorporated into the mental sets of the participants in the culture. The process almost always involves friction and costs. The degree to which those costs are worth the positive consequences of the change is a function of the change process itself as well as of the inherent features of the technology and the context. Appreciating the role that culture plays in organizations, and how culture can be dynamically shaped by the organization’s own intelligent sociotechnical choices, can vastly improve the efficiency of innovation utilization (Johnson, 1985).

Dynamic Problems of Process. The second set of issues arises because most technological innovations of any interest are embedded in organizational contexts (Chakrabarti, 1973). Each change has repercussions for the whole system, “ripple effects” across both space and time moderated by the degree of “coupling” of the system but always present to varying degrees. Understanding how different parts of the system are interdependent can help a lot in accounting for unplanned and unanticipated effects, which can be both positive and negative. Often when we fail to understand such interdependencies, we sub-optimize a system, making one part (usually the technical subsystem) work a lot better and other parts (typically the social subsystem) work a lot worse. Satisfaction with these arrangements depends a lot on whether you are talking to the person in charge of the first part of the system, or to the people in charge of the others, or to someone who has to balance the interests of the whole system.

Issues related to the staging and dynamics of implementation have intrigued researchers for a long time. It is self-evident that putting technology into place in an organization is not a matter of a single decision, but rather of a series — usually a long one — of linked decisions and non-decisions. People make these choices, and these choices condition future choices. While the researcher may identify one particular choice as a focal point of “adoption,” he only fools himself if he believes that choice has the same meaning to the user as it does to him. Understanding the leverage exerted by some decisions over other decisions is critical to making intelligent choices about where to intervene creatively in the process (Hall and Nord, 1984).

Researchers have developed the idea of innovation “stages” as a way of categorizing how some decisions of necessity precede and shape those later on. There are many different formulations of such stages; the question is not which one (if any) is “true,” but what the relative utility of a particular formulation might be to you (Tornatzky et al., 1983). One basic difference in frameworks relates to whether you prefer to focus on the content of decisions (such as the technology itself) or on the nature of the action being taken by the system. These different approaches lead to somewhat different ways of categorizing behavior. While the same general phenomena are under discussion in each model, the categories tend to highlight rather different focal issues.

Two Views of Innovation

The action-centered approach essentially considers change as a process of gradually shaping a general idea, which can mean lots of different things to different people, into a specific idea that most people understand to mean more or less the same thing. Five general stages or categories of decisions to be made in sequence can be distinguished (Eveland, Rogers & Klepper, 1977):

  • agenda setting
  • matching
  • redefining
  • structuring
  • interconnecting

The first stage is one of establishing the “agenda of problems and solutions”, a set of ideas known to the system but that do not necessarily provoke the system to action directly. All organizations and units within them have such agendas, although they may not be consciously aware of them without elicitation. When a problem and a solution come together in the mind of a person or persons in the system, a “match” is made and organizational action commences. Rather than a defined “adoption” point, this model emphasizes a more or less gradual “redefinition” in which both the proposed innovation and its potential uses come to be understood in sequentially greater detail.

When both “tool” and “use” are defined clearly enough to be communicated to others, a process of creating the organizational structure to embody the innovation can begin. When the structure is generally understood, it can be interconnected to other parts of the system as its relationships to them become clear. The whole process, in these terms, is one of defining the innovation in successively greater detail, distinguishing both what it is and what it is not.

Regardless of how stages are defined, they cannot be stretched too far out of shape; nor can they be anticipated in great detail before they take place. The principal value of stage models of any sort lies in helping the analyst and the change agent to understand that he or she can encompass or affect only a relatively small part of the process at any given time. Analytical humility is generally to be encouraged.

The structural approach, by contrast, looks at consistent issues that arise across situations: what is the structure of recurrent problems? One of these is the assessment of the effects of technological change — whose criteria are to shape decisions? As I have noted, organizations are made up of multiple people (and aggregates of people), and therefore multiple criteria and evaluations of outcomes based on diverse goals are the rule in complex decision sequences (Mintzberg, Raisinghani, and Theoret, 1976; Nutt, 1984). Multiple criteria can affect even individual decisions aimed at a similar purpose.

Sometimes such complex decision criteria allow “win-win” solutions to be formulated; sometimes situations are truly zero-sum, and someone has to lose (Quinn and Rohrbaugh, 1981). Moreover, criteria can change in salience and applicability over time (Prien, 1966; Kimberly and Miles, 1980). In any event, the problem of multiple criteria of assessment is the dynamic problem posed by the political nature of organizations described above.

Another issue is that of horizons — when do you choose to make your valuation of outcomes, given that there is never any defined end-point to a change process? Sociotechnical analysts refer to this as the problem of “incompletion” — nothing is ever final (Trist, 1964). Short-term and long-term criteria are both appropriate (and used) depending on the perspective of the analyst and his or her interests (Hayes and Abernathy, 1980). Reinterpretation of past results is a constant phenomenon, as new information about decision consequences remote in space and time becomes available.

Again, there is no single answer about what “true” outcomes are, only the need to remember that the issue cannot be unequivocally resolved, either by the participants in the process or by the analyst. This does not mean that perceptions about consequences do not or should not shape decision making, only that such perceptions should not be “reified” beyond their limits.

The Bottom Line

Where does all this leave us in our quest for efficient and effective ways to increase the utility of knowledge transfer research for organizational and social management? In some ways, it is easy to feel that we almost know less than we did 30 years ago; at least, we are probably a good deal less certain of what we do know than we used to be. Agricultural extension is not the last word in technology transfer.

A more realistic assessment is that we are a good deal more conscious now of just what the limitations are on the utility or prescribability of any particular analytical paradigm or organizational model. The more we study the technological innovation processes that underlie technology transfer, the more complex and contingent they seem, and the less clear it is that any model, regardless of its sophistication, can adequately represent more than a small part of the whole range of processes of interest to us. Even agricultural extension has proved singularly inapplicable to most other situations — and perhaps even, today, to agriculture.

What I would like to suggest here is a set of propositions that must underlie any effective approach to understanding technological change, regardless of context or content. Any administrative system that we create to distribute and apply knowledge must take these principles into account or fail.

First, technological change is a process without beginning or end. Individual people and tools and purposes come and go, but the sequence is iterative and evolutionary, and linear patterns are always artificial constructs generated by the analyst (Eveland, 1979). If the working model for organizational research is the novel, a model with a clear starting point, defined characters (variables), a plot (the model), and an ending (dependent variables), the working model of organizational life must be the soap opera, where characters come and go, their roles are constantly changing and being reinterpreted, and what seems good today is bad tomorrow and good again day after tomorrow.

Like the characters, the technology is constantly subject to modification and reinterpretation. “Routinization” of technology takes place only in the sense that one tends after a time to forget that one ever thought of a particular tool as “new,” given all the other new things that have come along in the meantime. As we noted earlier, over time even a technology as unusual and shocking as personal computers becomes accepted and even ignored; the keyboard is today as ubiquitous and unremarkable as the telephone, and this in barely five years.

But technology is never “routine” to the point that it is not subject to change and modification. If we aim our efforts at routinization, we are likely to damn ourselves with success. Organizations that carefully implement state-of-the-art computer systems tend to have a great deal of difficulty taking advantage of changing technology; they have too many “sunk costs” in the old systems (Bikson et al., 1985). It is well to remember that every old, outdated, ossified tool or practice in any organization was once an “innovation” that got “routinized” all too well. We would do well to remember this in our zeal to fasten new things on organizations.

Second, the context of change is vitally important. Because organizations are systems, any action or choice has repercussions across both space and time, and even across the borders of systems we are trying to affect. Members are aware (sometimes) of these; a change agent/sales person must be equally so. The organization’s culture and its connections with the rest of the world provide the context within which all external messages — including those dealing with technological change — get filtered and interpreted. Meaning must of necessity be generated internally by people; only in the most general terms can it be supplied by an external source.

The one thing we have rather conclusively demonstrated in the course of 20 years of public programs intended to promote technological change — in fact, through the long years of agricultural extension as well — is that one cannot pay people enough, long enough, to get them to do things or use tools that do not have intrinsic worth and value to them. “Incentives” that do not institutionalize a clear long-term yield have only short-term effects. While one can through “demonstration programs” or other subsidy mechanisms induce the temporary use of a technique or policy, it will not outlast the subsidy unless it becomes structured as part of the system and interconnected to it in multiple ways, because it provides such value. External sources cannot provide that value; it must be of value to those who practice it. This is one of the hardest lessons all change agents must come to terms with. It implies that change agents must concentrate far more attention on how people think about the change than on what actually changes.

Third, what matters most to organizations, whether they realize it or not, is process, not technological content. From the point of view of a given organization, the key problem is less choosing and implementing the “right” technology than developing and putting into place a procedural set for making technology choices intelligently. Computers are today perhaps the most extreme example of a technological area where no single choice remains valid indefinitely; those organizations that cope well with computer technology are those where the system has the capacity to remain experimental (Johnson et al., 1985). Organizations need to encourage continuous learning about technology and sociotechnical interactions on the part of members, and to maintain and use that learning without being paralyzed by it. Remembering too much, after all, can create so many metaphors that the system can never work through to an understanding of the change itself.

An organization that understands the strategic nature of innovation choices, and can approach the process systematically rather than as a series of individual and discrete decisions, will always have an advantage, according to the Law of Requisite Variety (Ashby’s principle that a regulator must command at least as much variety as the situations it confronts). A technology transfer system that can facilitate change processes rather than sell specific technologies is one that will have long-term success.

Finally, the purpose of innovation/diffusion research is not to prescribe but to raise consciousness. To the extent that research can help organizations understand that they have the power to make good choices, and help them understand the implications of those choices, it will contribute to social goods. To the extent that research creates new and better ways to manipulate individuals and organizations into adopting other people’s views of what is a “good thing,” it will contribute instead to a devolution of social progress. I realize that this may be a difficult point to swallow for those who legitimately believe they have a “good thing” other people really need — a group that includes most of the “true believers” in technological and social innovation.

On balance, however, we are all likely to be better off by encouraging the development of the capacity for effective and purposive internalized self-directed evolution and control than by relying on any “diffusion system” to overcome the shortcomings of organizational and individual change processes. As Peters and Waterman (1983) tell us, one of the key lessons their “excellent companies” have all learned is to appreciate the validity of their customers’ needs and understanding of those needs. Surely public mechanisms for “technology transfer” can do as much.

Notes

  1. “Information” is usually defined as something that reduces uncertainty about the world (Miller, 1965). In fact, technology information not infrequently increases uncertainty about applications as it expands the horizons of the possible. Uncertainty should not be confounded with ignorance.
  2. This may or may not be a good thing. In fact, as we note later, one of the major failings in many technology implementation processes is a tendency to assume that meanings are shared without exploring them. This leads almost inevitably to confusion, frustration, and costs beyond what needs to be incurred.
  3. In fact, classical diffusion models are in practice largely indistinguishable from epidemiological models in terms of parameters and underlying dynamics (Hamblin et al., 1973).

References

Bandler, R. and J. Grinder (1975) The Structure of Magic. Palo Alto, CA: Science and Behavior Books.

Becker, M. H. (1970) “Sociometric location and innovativeness: reformulation and extension of the diffusion model.” Amer. Soc. Rev. 35: 267–282.

Benson, J. K. (1975) “The interorganizational network as a political economy.” Adm. Sci. Q. 20: 229–249.

Berman, P. and M. W. McLaughlin (1977) Federal Programs Supporting Educational Change: Factors Affecting Implementation and Continuation. Santa Monica, CA: RAND Corporation (R-1589/7-HEW).

Bikson, T. K., B. A. Gutek, and D. Mankin (1985) Understanding the Implementation of Office Technology. Santa Monica, CA: RAND Corporation.

Bikson, T. K., B. F. Quint, and L. L. Johnson (1984) Scientific and Technical Information Transfer: Issues and Options. Santa Monica, CA: RAND Corporation, Report to NSF, Grant #N-2131-NSF.

Bingham, R. D. (1976) The Adoption of Innovation by Local Government. Lexington, MA: Lexington Books.

Blakely, C., J. Mayer, R. Gottschalk, D. Roitman, N. Schmitt, and W. Davidson (1983) Salient Process in the Dissemination of Social Technologies. National Science Foundation Grant #ISI-7920576-01.

Blumer, H. (1969) Symbolic Interactionism: Perspective and Method. Englewood Cliffs, NJ: Prentice-Hall.

Chakrabarti, A. K. (1973) “Some concepts of technology transfer: adoption of innovations in organizational context.” R&D Management 3: 111–130.

Chakrabarti, A. K. and A. H. Rubenstein (1976) “Interorganizational transfer of technology: a study of the adoption of NASA innovations.” IEEE Transactions on Engineering Management EM-23: 20–34.

Cherns, A. B. (1976) “The principles of organizational design.” Human Relations 29: 783–792.

Cochran, N. (1980) “Society as emergent and more than rational: an essay on the inappropriateness of program evaluation.” Policy Sciences 12: 113–129.

Doctors, S. I. and C. Stubbart (1979) A Review of the Research Literature on Technology Transfer. Working Paper WP-344. Pittsburgh, PA: Graduate School of Business, University of Pittsburgh.

Downs, G. and L. B. Mohr (1977) Toward a Theory of Innovation. IPPS Discussion Paper No. 92, University of Michigan.

Engelbart, D. C. (1982) “Evolving the organization of the future: a point of view,” pp. 287–307 in R. M. Landau, J. H. Bair, and J. H. Siegman (eds.) Emerging Office Systems. Norwood, NJ: Ablex Publishing Corp.

Eveland, JD (1979) “Issues in using the concept of ‘adoption’ of innovations.” J. of Technology Transfer 4, 1: 1–14.

Eveland, JD (1981) “Implementation: the new focus of technology transfer,” in S. Doctors (ed.) Issues in State and Local Government Technology Transfer. Cambridge, MA: Oelgeschlager, Gunn, and Hain.

Eveland, JD (1997) Glue, lube, and money: alternative metaphors for making sense of organizational information and communication. Working paper, California School of Professional Psychology, Alhambra, CA.

Eveland, JD, EM Rogers, and C. Klepper (1977) The Innovation Process in Public Organizations: Some Elements of a Preliminary Model. Ann Arbor, MI: University of Michigan, Report to the National Science Foundation, Grant No. R75-17952.

Eveland, JD, L. G. Tornatzky, W. A. Hetzner, A. Schwarzkopf, and R. Colton (1983) “University/industry cooperative research centers.” Grants Magazine.

Federal Coordinating Council for Science, Engineering and Technology (1977) Directory of Federal Technology Transfer. Washington, DC: Government Printing Office.

Feller, I., L. Kaltreider, P. Madden, D. Moore, and L. Sims (1985) The Agricultural Technology Delivery System. University Park, PA: Institute for Policy Research and Evaluation, Pennsylvania State University. Report to USDA, Contract 53-32R6-I-55.

Feller, I. and D. C. Menzel (1976) Diffusion of Innovations in Municipal Governments. University Park, PA: Pennsylvania State University, Center for the Study of Science Policy, Report to NSF, Grant No. RDA-44350.

Hall, G. E. and S. F. Loucks (1977) “A developmental model for determining whether the treatment is actually implemented.” Amer. Educational Research J. 14, 3: 263–276.

Hamblin, R. L. et al. (1973) A Mathematical Theory of Social Change. New York: Wiley-Interscience.

Havelock, R. G. (1973) Planning for Innovation through Dissemination and Utilization of Knowledge. Ann Arbor, MI: Center for Research on Utilization of Scientific Knowledge, University of Michigan.

Havelock, R. G. and JD Eveland (1985) “Change agents and the role of the linker in technology transfer.” pp. 35–56 in Proceedings of the Federal Laboratory Consortium for Technology Transfer Fall Meeting. Seattle, WA.

Hayes, R. H. and W. J. Abernathy (1980) “Managing our way to economic decline.” Harvard Business Rev. 58 (July–August): 67–77.

Hetzner, W. A. and A. H. Rubenstein (1971) An Analysis of Factors Influencing the Transfer of Technology from DOD Laboratories to State and Local Agencies. Army Research Office: Program of Research on the Management of Research and Development.

Jervis, P. (1975) “Innovation and technology transfer — the roles and characteristics of individuals.” IEEE Transactions on Engineering Management EM-22: 19–27.

Johnson, B. M. et al. (1985) Innovation in Office Systems Implementation. University of Oklahoma: Report to National Science Foundation, Productivity Improvement Research Section.

Kimberly, J. R. and R. H. Miles (eds.) (1980) The Organizational Life Cycle. San Francisco: Jossey-Bass.

Kraemer, K. L. and J. L. King (1979) “Problems of operations research technology transfer to the urban sector.” Presented to the American Society for Public Administration, Baltimore.

Lakoff, G. and M. Johnson (1980) Metaphors We Live By. Chicago: Univ. of Chicago Press.

Lambright, W. H. (1980) Technology Transfer to Cities. Boulder, CO: Westview.

Larsen, J. K. (1980) “Knowledge diffusion: what is it?” Knowledge 1, 3: 421–422.

Larsen, J. K. (1982) Information Utilization and Non-Utilization. Mental Health Services Development Branch, NIMH Grant #25121. American Institutes for Research in the Behavioral Sciences.

McHugh, P. (1968) Defining the Situation. Indianapolis: Bobbs-Merrill.

Mechanic, D. (1962) “Sources of power of lower participants in complex organizations.” Admin. Sci. Q. 7: 349–364.

Meyer, A. D. (1982) “Mingling decision-making metaphors.” Milwaukee: University of Wisconsin–Milwaukee, School of Business Administration, Working Paper.

Miller, J. G. (1965) “Living systems: the organization.” Behavioral Sci. 10: 193–237.

Mintzberg, H., D. Raisinghani, and A. Theoret (1976) “The structure of ‘unstructured’ decision processes.” Admin. Sci. Q. 21: 246–275.

National Academy of Engineering (1974) Technology Transfer and Utilization: Recommendations for Redirecting the Emphasis and Correcting the Imbalance. Washington, DC: Academy Report No. PB-232 123.

Nutt, P. C. (1984) “Types of organizational decision processes.” Admin. Sci. Q. 29, 3:414–450.

Pelz, D. C. (1982) Use of Information in Innovating Processes by Local Governments. Ann Arbor, MI: University of Michigan, Report to the National Science Foundation, Grant No. ISI-79-20575.

Peters, T. J. and R. H. Waterman (1983) In Search of Excellence. New York: Harper and Row.

Prien, E. P. (1966) “Dynamic character of criteria: organization change.” J. of Applied Psychology 50: 501–504.

Quinn, R. E. and J. Rohrbaugh (1981) “A competing values theory of organizational effectiveness.” Public Productivity Rev. 5, 2:122–140.

Roberts, K. H., C. L. Hulin, and D. M. Rousseau (1978) Developing an Interdisciplinary Science of Organizations. San Francisco: Jossey-Bass.

Roessner, J. D. (1975) “Federal technology transfer: an analysis of current program characteristics and practices.” Washington, DC: Committee on Domestic Technology Transfer, Federal Council for Science, Engineering and Technology.

Rogers, E. M. (1983) Diffusion of Innovations, 3rd ed. New York: Free Press.

Rogers, E. M., JD Eveland, and A. S. Bean (1976) Extending the Agricultural Extension Model. Stanford, CA: Institute for Communication Research, Stanford University.

Ryan, B. and N. C. Gross (1943) “The diffusion of hybrid seed corn in two Iowa communities.” Rural Sociology 8: 15–24.

Taylor, J. C. (1975) “The human side of work: the sociotechnical approach to work system design.” Personnel Rev. 4, 3:17–22.

Tornatzky, L. G. and K. J. Klein (1982) “Innovation characteristics and innovation adoption-implementation: a meta-analysis of findings.” IEEE Transactions on Engineering Management EM-29: 28–45.

Tornatzky, L. G., J. D. Eveland, M. G. Boylan, W. A. Hetzner, E. C. Johnson, D. Roitman, and I. Schneider (1983) The Process of Technological Innovation: Reviewing the Literature. National Science Foundation.

Walker, J. L. (1974) “The diffusion of knowledge and policy change: toward a theory of agenda-setting.” Presented to the American Political Science Association, Chicago.

Weiss, C. H. (1977) “Research for policy’s sake: the enlightenment function of social research.” Policy Analysis 3(4): 531–546.
