The Digital Divide: A Critical Review of Tech’s Role in Education
Since the nation’s beginnings, a lot has changed within our education system. Simultaneously, much has not. Those who hold positions of privilege often fail to acknowledge that our schools were racially segregated up until the 1960s: not very long ago. Have inequities vanished, or have they just been reinvented? Being white in America, being born to an upper- or middle-class family, and/or residing in a community that prioritizes funding for education and social services are beneficial identities to hold while navigating the educational system. If having one of these identities is advantageous, having all of them is ideal for getting the most out of what the school system has to offer. It can mean attending schools with experienced teachers, a strong extracurricular program, and the resources necessary to successfully maneuver the college admissions process.
It is impossible to examine the modern world without recognizing the ways in which “success” is determined on the basis of personal capital and social status. In the 21st century, one’s membership in privileged spaces is, in part, dependent on their engagement with contemporary technologies. In educational settings, this translates to the early decision of who will and who will not have the tools they need to elevate social capital. The digital divide is defined as “the gap between demographics and regions that have access to modern information and communications technology and those that don’t, or have restricted access.” The concept underlying the digital divide has been present since long before the digital age: historically, inventions like the telephone were accessible only to the “haves,” while the “have nots” were left behind without the tools to advance. Not much has changed. Today, 46% of households living in poverty don’t have broadband at home. If safe and equitable access to technology posed challenges for low-income communities of color before the pandemic, the challenges they face in a COVID-ridden world are, in many cases, far more formidable.
“Education technology” — a term that has gained relevance in the new millennium — is now both a field and a discipline of its own. While those who work directly with students in a classroom experience education technology as any type of digital tool, device, or technique that facilitates the formal learning process, we must recognize the ways in which this definition continues to expand. Media and technology touch every aspect of the educational experience far beyond the classroom. The impact of tech is immense not only in the processing of academic content, but in the macro-level ways that students interact with the education system in the United States. Experiences in the U.S. education system vary greatly among students, with factors such as race, gender, class, geographic location, and socioeconomic status affecting outcomes. The racially rooted discrepancies that have always impacted education persist in full force. Countless technologies are used in educational settings, and while issues of equity in this space were always present, the COVID-19 pandemic has widened the digital divide in ways that are dangerous to the pursuit of justice.
So the question is this: if we know that the uncontrollable elements of a student’s identity impact what education will mean for them, what happens when we throw access to technology into the mix?
Since its introduction into the world of education, technology has been celebrated as an equalizer: a tool with the potential to finally close the gap between who does and who does not get to be successful in school. It’s been called “inclusive” and something that “levels the playing field.” Does technology have the capability of enhancing the educational experience? Sure. But are the necessary tools made available equally, in a manner that’s free of bias? Absolutely not.
In reflecting on all the technologies that a high schooler might come into contact with, a few might come to mind: Gmail, Blackboard, Naviance, digital applications, and, particularly in the COVID era, Google Classroom and Zoom. There are plenty of reasons to celebrate what tech holds the potential to do for students. For example, using tech allows educators to meet young people where they are. Younger children, in particular, tend to associate devices with fun or excitement, so real, productive learning can happen when tech is introduced to the classroom. Additionally, getting all kinds of learners familiar with tech is essential: it’s not going anywhere. It is already prevalent in most fields, and possibilities for innovation are limitless. This can certainly be positive for curious students. Using technology in the classroom can connect them to opportunities they didn’t know were possible, and students can explore the world with a growing sense of depth and creativity.
However, examining these technologies through a critical lens requires making space for the sentiment that two things can at once be true: technologies may hold the capacity to be helpful and make advancements in human life while simultaneously serving as a barrier, and for some, even a tool of oppression.
In order to ensure all students are served equally by technology, we have to first ensure that all kinds of people are designing technology. Most of us operate under the assumption that technology is free of bias, until we sit with the fact that all tech is made by human beings, and no human being is inherently free of bias. What does this mean for young users? It means their needs are, more often than not, failing to be met by the very devices said to help them.
Ruha Benjamin, in her groundbreaking work Race After Technology, defines “The New Jim Code” as “the employment of new technologies that reflect and reproduce existing inequities but that are promoted and perceived as more objective or progressive than the discriminatory systems of a previous era.” While young learners may benefit from and appreciate education technologies, it is also necessary to continuously challenge designs until the needs of students are equally met.
Consider this: even if all people, regardless of race, class, gender, or socioeconomic status, had access to the technology that is now seemingly required to achieve success, they would still be disproportionately impacted by the policies and ethics surrounding its usage. Each interaction a student has with tech is tied to the next; from when they enter the school building, to when they return home and do their assignments on a computer.
The academic use of digital tools in the classroom is far from the only interaction students will have with technology throughout their time in educational institutions. Perhaps it’s time to expand the definition of education technology to encompass more of these interactions, as they hold vastly different meanings depending on who the student is.
For example, physical access to technology is what predominantly keeps the digital divide in place, preventing all students from reaping the benefits of technology equally. Let’s examine the post-secondary education planning process. It is generally stressful, and has uniquely challenging implications for students in school communities that are under-resourced. Without access to reliable internet or a personal device such as a smartphone or computer, students miss emails with critical information from college admissions offices, job recruitment programs, and more. Emails containing deadlines for scholarship applications and personal outreach from college recruiters often go unseen, and the process itself is naturally unforgiving.
Algorithmic bias also largely impacts the way technologies serve some students while harming others. Naviance, for example, is a college and career readiness platform used primarily by guidance counselors in their direct work with students. It currently reaches 40% of high school students in the United States, but keep in mind just how varied the demographics of America’s teens are. Naviance’s main feature is a scattergram that provides a visual representation of a student’s competencies. The goal is to help students understand their development status in different arenas as a means of preparing them for a successful post-high school life. Some scholars argue that Naviance specifically benefits low-income students of color, as it paints a realistic picture of colleges or other post-secondary paths for which they may not be a strong candidate. For example, it provides transparency so that students may apply only to schools that meet their financial needs. This remains a necessary part of the application process for disadvantaged students. At the same time, using the software in this way raises the question: what is being done at the systems level to make access to a student’s dream college possible? It’s certainly useful to know which schools are a “reach” versus a reasonable target. But what service does this data provide if the software is measuring the capabilities of every high school student in the nation by the same exact standards? Until the college admissions process serves all students equally, a single piece of software assessing readiness in a uniform way seems inherently flawed and allows for the oversimplification of circumstances. Take the example of a child living in extreme poverty. The biological, psychological, social, and environmental elements of their life are extremely complex, and therefore deserving of a personalized assessment of college readiness. How accurate can an algorithm really be in measuring the human experience?
Data collection is another element of tech that often flies beneath the radar, but has harmful implications for many students. Targeted marketing, for example, might open students to new opportunities, but could also function to keep cycles of oppression in place. Colleges and universities themselves utilize customer relationship management systems such as TargetX as a means of marketing to students based on a variety of demographic information. The argument could be made that this type of targeted marketing could benefit students, as it provides information they may not have otherwise accessed about specific colleges. The problem arises when students are marketed experiences that they are not equipped for or cannot afford. These software companies have so much personal information about students (which they are able to purchase) that it ultimately can be used against them. Race After Technology addresses targeted marketing in the following manner: “There is a slippery slope between effective marketing and efficient racism. The same sort of algorithmic filtering that ushers more ethnically tailored representations into my feed can also redirect real estate ads away from people ‘like me.’ This filtering has been used to show higher-paying job ads to men more often than women, to charge more for standardized test prep in areas with a high density of Asian residents, and many other forms of coded inequity.”
Technology can have an additional negative impact in educational settings when it’s used as a means of surveillance. Surveillance technologies are not implemented in the same way across all of America’s schools. Following the tragic series of publicized school shootings in recent years, Jason P. Nance, an associate professor of law at the University of Florida Levin College of Law, conducted a study that unveiled the stark differences between security measures being taken in low-income urban neighborhoods versus middle- and upper-class suburban communities. X-ray machines, metal detectors, and various other technologies contribute to youth involvement in the justice system at higher rates. The psychological effects of exposure to these technologies are significant, as the creation of punitive environments negatively impacts the learning experience. In some communities, the message might be that increased security measures keep students safe from intruders. In others, students are reminded every morning that they themselves are seen as a threat. Of course, as this impacts the rest of their day, it impacts how they interact with every other technology they might come in contact with. The physical presence of these extensive technologies can leave students feeling like they are hardly in a school at all.
COVID-19 and the Digital Divide
The COVID-19 pandemic has rapidly increased the need for tech use in education. For some young learners, access to a safe and reliable internet connection is the biggest hurdle being faced in the world of distance learning. Remote learning presents challenges for everyone, but millions of children in the United States have no broadband internet access at all, placing them on the fast track to essentially lose a year’s worth of education (or more). The math is simple: the internet is not free, yet even before the pandemic, 70% of teachers nationwide assigned homework that required internet access. Where does that leave an entire nation of students using the internet daily, and how are we continuing to hold all students to the same standard under such circumstances? Anyone can get into college if they work hard enough, right? Wrong. And that’s where the greatest ethical dilemma of education technology lies.
Students who live with essential workers might be left alone during the day with no guidance if something goes wrong. “There’s a clear difference between the students who have family members at home to support them and those that don’t,” shares a New York City teacher navigating the challenges of remote schooling. She also points out the disadvantages facing students with learning disabilities, language learners, and those who read below grade level. Where programs like Seesaw allow students to voice-record themselves, software like Google Classroom does not have this feature. For assignments that require hand-written work, students without access to a printer are at a loss. Most students are required to have their camera on at all times, which also poses barriers. Some live in crowded households, or simply feel uncomfortable knowing that their teachers and classmates can see their personal space. All of this occupies headspace that can make learning difficult. Students who receive additional services outside the classroom are responsible for keeping track of their class times, accessing their login links, participating in their sessions, and re-joining the class session when it’s over. This means they regularly miss out on either important class time or mandated services. As a society, we need to take this time to consider the standards to which we are holding our students. If the disparities experienced in distance learning during the pandemic have not shed light on where the “one size fits all” approach fails our young people, it’s unclear what will.
Dozens of large tech companies have stepped up to the plate during the pandemic to donate thousands of tablets and laptops to school districts. This is a necessary gesture and redistribution of resources, but remains a short-term solution to a much larger problem.
Standardized testing is a prime example of how racial coding has infiltrated the education system and perpetuated inequities, as is being highlighted by COVID-19. Since it is true that human beings are not free of bias, it can also be concluded that all things we design reflect these biases in some way or another. Designing a standardized test is a lot like designing software, is it not? Remember that many standardized tests are graded by algorithms. Algorithms are made to appear “neutral,” but we know that neutrality in human creation is unattainable. “The animating force of the New Jim Code is that tech designers encode judgements into technical systems but claim that the racist results of their designs are entirely exterior to the encoding process. Racism thus becomes doubled — magnified and buried under layers of digital denial,” (Benjamin, p. 11).
We are due for an overhaul of our barometer for academic success.
This was true in our pre-COVID world: we even saw some higher education institutions begin to adjust by making admission “test optional.” How many common core teachers have you heard express frustration that creativity in the classroom has been stifled as a result of the necessity to “teach to the test”? We can claim naivety all we want, but standardized tests have never been neutral. How could they be? Not only are standardized tests created without cultural context in mind, they are also expensive both to prepare for and to sit for. If districts across the nation are resourced unequally but students take all of the same tests, what do we really expect the outcome to be?
The digitization of education holds potential for positive learning outcomes, but not without design accountability. Dr. Andre Perry and Dr. Nicole Turner Lee write: “AI is only as good as the information and values of the programmers who design it, and their biases can ultimately lead to both flaws in the technology and amplified biases in the real world.” Standardized tests were created as a means of holding teachers and administrators accountable when students did not meet expectations, but the overuse of these tests sidesteps the systemic inequities that need to be addressed in the first place. In 2019, the College Board proposed the implementation of an “adversity score” to help mitigate these inequities. The plan did not move forward after large-scale criticism. Bias in algorithms will not be resolved with more biased algorithms. The complexity of the human experience cannot be boiled down to an algorithm.
If there were ever a time to be less reliant on standardized testing, wouldn’t the time be now, during a pandemic, when sitting for these exams poses a physical threat? What an excellent opportunity to attempt to level the playing field. Instead, students are being faced with the difficult choice of whether to take the test or not, when levels of risk surrounding COVID-19 infection vary dramatically from person to person. Proper preparation for these exams is even more difficult to achieve than before. Skyrocketing unemployment means less money available to pay for prep courses, and in some cases, to afford the internet at all. Of course, this poses more challenges for students who have already been disproportionately impacted by COVID-19 (Black, Latinx, and low-income communities in particular, as well as students with disabilities). Barriers exist on both ends of the spectrum in a COVID world. Some might feel safer at home, but for others, the sudden shift to a virtual world is a dangerous reality.
Some institutions made the decision to move certain standardized tests to a virtual format, including boards of bar examiners. A recent graduate of the University of Miami’s School of Law shared some of what she saw from this experience:
“Adversity has not been evenly experienced. In general, while everyone has been in the same storm, we have not been in the same boat — some of us have yachts, others have canoes. Those who had access to funds and resources were able to navigate the delays, changing formats, and tech requirements of a remote bar exam much easier than those who did not. The boards of bar examiners seemed to presuppose that each bar examinee would be able to not only study remotely, but take an exam remotely using software that required certain tech specifications, including WiFi speeds. In every single aspect of the exam, those who are economically comfortable had the tools for success. Those without those resources didn’t. The bar exam is supposed to be a test of skills to become an attorney, but this summer it looked more like a test of privilege. And perhaps that’s what it’s always been.”
In a myriad of ways, technology can be used to strengthen the many facets of education. But before we continue to move fast and break things, we need to critically consider how to make the intersection of technology and education a safe, ethical, and equitable space.