Artificial Intelligence, or AI, has captured the human imagination for centuries, from Mary Shelley’s Frankenstein to Star Trek’s Data. With fascination, and at times horror, we have entertained the idea of living alongside mechanized beings. Within fictional spaces, the AI age is often depicted in futuristic terms. In reality, however, these technological structures have been with us for decades.
In fact, Westinghouse introduced one of the first humanoid robots to the world in 1930: a mechanical man built in the likeness of an African American sharecropper, whom journalists referred to by the racist nickname “Rastus.” The physical form of this robot is not random, nor is the coining of the term “robot,” which is etymologically rooted in the Czech word for “forced labor.” The art of technology has long imitated society in this way. Vastly more advanced than the “Rastus Robot” were the first programmed chess players, which emerged in the 1950s on early IBM computers.
Fast-forward to today, and AI is more seamlessly incorporated into our everyday interactions. Most of our day-to-day tasks and habits touch the vast and growing AI digital infrastructure, from the advanced AI in our digital assistants (Alexa, Google Assistant, Siri, etc.) and the facial recognition software on our phones, to the more streamlined autofill and autocorrect text support embedded in our email applications. Although we have accepted these automations as par for the course of existing in this modern world, many of us could not even begin to articulate what AI actually is, let alone how it works. As such, we are often swept up in the dominant narratives built around these innovations and their speculative potential.
AI formerly operated mostly in the shadows of our everyday infrastructure, but it is now in the limelight, and as such it has generated unprecedented reactions from both the AI elite and the general public. Take ChatGPT, for example: hailed as a revolutionary advancement in AI, this tool has engendered broad speculation about potential disruptions in education, from widespread plagiarism to obsolete teachers.
This rhetoric builds on a long-established deficit framing of teachers in relation to technology. Much like the Luddites were typecast as anti-technology (even though their main focus was advocating for better labor practices), teachers are positioned as barriers to integrating more technology into education. Research that explores this phenomenon is typically aimed at resolving teacher resistance in order to bring technology into the classroom more seamlessly. What underpins this narrative is the blanket assumption that integrating technology will make education better, and upholding this claim is the widespread acceptance that technology equals progress, or “the future.” I highlight this not to make a case against technology, but rather to point out how much faith we put in technology without questioning our assumptions about its role, and without discerning the nuances and specifics of each tool.
Dominant narratives about the promise of technology also serve to distort our perceptions of, and distract us from paying attention to, the exploitative practices and oppressive logics incorporated into our technological infrastructure. Moreover, as a society, we continually make adjustments to accommodate or account for the latest gadget or digital innovation released into the wild. So, while a niche group of coders within an even more select set of corporations programs computers to do more and more, those coders are essentially also programming us to behave in ways that are symbiotic with our mechanized environments. It is therefore critical that we pause to question these innovations rather than accept them as inevitable. We should be aiming for future possibilities co-imagined by many of us, as opposed to living within a vision for the future concocted by a privileged few.
Right now, we are surrounded by rhetoric about the existential threat AI poses to humanity. Notably, many of these same speculators once promoted the benefits and brilliance of AI and played a key role in developing the technology. This current panic about AI’s threat to our future brings to mind the word apocalypse, which (as a personal correspondence taught me) is etymologically rooted in a Greek word meaning “to reveal” or “uncover.” The technology we create mirrors what we value and legitimize as a society, but this reflection is often hidden behind a veil of coding. For example, for all the automation these tools promise, AI is built on a vast, racialized, and gendered underclass of labor that is invisibilized and devalued. Perhaps an AI apocalypse primarily threatens to reveal the deeply embedded coded injustices, prejudices, and fallacies, so that we can no longer ignore the widespread suffering in our world.
Many scholars, particularly women and people of color such as Safiya Noble, Cathy O’Neil, Ruha Benjamin, Louis Chude-Sokei, Joy Buolamwini, and Timnit Gebru, have been sounding the alarm for years about the harms caused by AI and its complicity in upholding systems of oppression. Their work shows us that these advanced technologies are not neutral; rather, they exacerbate existing disparities in society, make them more efficient, and serve to further veil them. Discerning truths from myths, nuance from noise, and wisdom from cleverness is a critical skill we will need to meet the challenges of our day, especially in the age of AI. We must grapple with historical context, learn from our past, and co-create our collective, interdependent, and free futures.