In the early days of the Internet, when the prospect of online distance learning began to emerge, many faculty openly regarded it as inferior to face-to-face classroom learning. They were accustomed to using the old sage-on-the-stage approach, and they claimed it was the gold standard by which all other approaches should be judged — which really meant, they didn’t want to change.
Time and social pressure eventually forced them to give online learning an earnest try, which many did, albeit grudgingly. The quick and early lesson of this nascent stage was that online teaching required more time and effort than its classroom counterpart. Research faculty in particular disliked this, because it siphoned time away from the research, publishing, and grant writing that would get them to tenure.
By and by, online learning platforms came along, concepts like re-use and re-purposing of recorded lectures simplified things, and these, along with other newly learned tricks, began to ease the time-and-effort rub. But these advances came at the expense of early adopters, who had to wade through the trials and tribulations of a dual challenge: not only adopting new approaches, but also passing muster in the rigorous process of peer review needed to win widespread acceptance. This was very time consuming at both the individual and institutional levels. It took over a decade to finally gain real traction.
In light of this history, it comes as little surprise that AI and other new technologies might be greeted with skepticism, and that there might be a lag before they are productively employed in education. Early adopters don't have it easy, insofar as it costs them time and money to traverse the required learning curve. First, they have to find ways to realize the technology's promised efficacy. And second, they have to negotiate the peer review process. These two barriers make it tough on them; they can even derail careers if things don't work out.