Before answering the explicit questions posed in this prompt, I’d like to devote some of my response to addressing the issues Dr Noble raised in her essay and the larger issues implicated in a discussion of technology, the hegemony, and the inscription of gender, racial, and socioeconomic identities.
To begin, while I found Dr Noble's work on this subject brilliant and well-argued, I also found it somewhat devastating. I'd like to think that I knew, that I have always known on some level, that ideology works through all systems and all institutions, and that we are interpellated by every "voice" that speaks through these hegemonic mouthpieces. I'd like to think that I knew that the internet (and technology more generally) is not and has never been a "neutral" space within this. And yet, I also know that on some level I was comfortable operating under the fiction (a known fiction) that, whether in its code or in its hardware, technology might somehow exist outside of the ideologies endemic to these hegemonic systems. Dr Noble's work threw into sharp relief how misguided and even dangerous a belief in this fiction of technological neutrality can be. I, like many of you, not only use Google but actually quite like it: its algorithms have led me to breakthroughs in my own research, and it has helped me discover some incredibly productive databases, tools, archives, etc. And those are only its academic uses, to say nothing of the mundane tasks whose completion is predicated on Google.
All of this in itself induced a sort of existential dread at the realization that (yet again) no space is safe or devoid of the workings of the hegemony (and further, its endemic sexism, racism, classism, and all manner of bigotry). I willingly participate in these systems because they are convenient and because I have been conditioned to rely on their use, i.e. I am wholly dependent on them. A sort of malaise lingered for a few days after Dr Noble's address to the class. I am not mimicking the cynicism the prompt so rightly predicts for rhetorical effect: I felt (feel) all of this and more. So how best to combat this in a "younger, less knowledgeable audience?" I haven't the slightest idea (though I noticed Gabe posted a very comprehensive response to this question, so I shall have to read his post after I complete mine). I do, however, have some theories as to how to "translate" these concepts and concerns into a language that young people / the technologically illiterate might better understand (though is it fair to conflate the two?). The biggest obstacle to this task, as I see it, is how "normalized" this sort of technology has become. Though I will be the first to admit that I am dependent on Google (and other instruments of the technohegemony) for so many day-to-day tasks, I can still remember a time before the internet. This is in fact not the case with the freshmen I taught last semester. I will use them as my fictional "audience" in this post, as they are the only age group with which I have had any professional teaching experience. The internet (and technology more generally) has become so enmeshed in their social, academic, and professional lives that I believe there must be a process of "defamiliarizing" those technologies. Without exposing the inner workings of these platforms to them, attempting to convince them of their function in maintaining and perpetuating a dominant ideology would be akin to arguing that a toaster or a KitchenAid is an instrument of the hegemony (which in itself could be the subject of another blog post). This means, yes, "deconstructing" these devices: explaining that while hardware, software, and firmware coalesce in a single "object" (be it an iPhone, MacBook, or tablet), they not only serve distinct functions but each is bound up with its own implicit ideological machinations. This presupposes some knowledge of how these component parts work. I would advocate for teaching computer science throughout a child's education, or at least providing some exposure to it. In understanding how something "works," even at a most basic level, I believe one is better able to see and understand the "work" that it is doing.
However, were this audience unable to follow the more technical jargon that results from "deconstructing" those devices, disrupting the idea that Google is in some way "neutral" would be a necessary first step. Perhaps likening "code" to our own "genetic coding" would be a helpful metaphor: while our genome is assembled by chance, its "mutations" and "genes" selected for based on their advantageousness or viability, Google's "genome" is designed, i.e. its "code" has been deliberately written. And could they imagine if someone (not an infallible deity and not chance, but a flesh-and-blood person with their own prejudices and assumptions and biases) were to "write their own code"? Would it not stand to reason that what was being "encoded" would be the construct of an imperfect, ideologically driven mind? Would it not stand to reason that the product of such a mind would carry, however subtly, those same prejudices in its very genes? Andrew Feenberg's essay "Democratic Rationalization: Technology, Power, and Freedom" posits this another way: "Technologies are selected… from among many possible configurations. Guiding the selection process are social codes established by the cultural and political struggles that define the horizon under which the technology will fall. Once introduced, technology offers a material validation of the cultural horizon to which it has been preformed. I call this the 'bias' of technology: apparently neutral, functional rationality is enlisted in support of the hegemony" (147). It is of course not "natural selection" to which he refers but a deliberate "selection" guided by "social codes" which participate in and stem from the dominant ideological ordering of the world.
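For students who have seen even a little code, one way to make this metaphor concrete is a toy "search" scoring function in which every constant is a visible human choice. This is a classroom sketch of my own invention: the weights, the signals, and the "trusted sites" list below are entirely hypothetical and bear no relation to Google's actual algorithm.

```python
# A toy "search engine" scoring function, invented for classroom use.
# Every constant below is a human decision, not a law of nature.

TRUSTED_SITES = {"example-encyclopedia.org", "example-news.com"}  # who decided these count as "trusted"?

def score(page, query_terms):
    """Assign a relevance score to a page. Each weight encodes a judgment."""
    s = 0.0
    text = page["text"].lower()
    for term in query_terms:
        s += 2.0 * text.count(term.lower())   # why 2.0 per mention? someone chose it
    s += 0.5 * page["inbound_links"]          # popularity standing in for quality
    if page["domain"] in TRUSTED_SITES:
        s *= 1.5                              # an explicit editorial thumb on the scale
    return s

pages = [
    {"text": "a history of jazz in New Orleans", "inbound_links": 40, "domain": "example-blog.net"},
    {"text": "jazz jazz jazz jazz", "inbound_links": 2, "domain": "example-news.com"},
]
for p in sorted(pages, key=lambda p: score(p, ["jazz"]), reverse=True):
    print(round(score(p, ["jazz"]), 1), p["domain"])
```

The exercise is simply to ask where, exactly, a judgment has been "encoded": who chose the weights, who decided which domains are "trusted," and what kinds of pages those choices quietly privilege.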
I think the danger here is making this all seem so nefarious, so explicit and intentional: while this audience might resist the claim that racial bias in Google's search algorithm is deliberate, they would perhaps be more receptive to thinking of it as something operating outside of, or perhaps below, intentionality. In other words, I believe figuring Google as some deliberately manipulative "Evil Corporation" would be met with resistance, because it is such an essential and productive tool, so ingrained in day-to-day processes, that to insult Google would be to insult people's dependence upon it. Further to this, in discussing the production and reproduction of "ideology," I think most would assume that this is enacted by means of "repressive" instruments, that which "prohibits" or "impedes" an exchange of information or belief, whereas Google is viewed by many as an instrument of "liberation" in this regard. It is too easy to eschew subtlety in favor of making powerful, impactful claims, especially when the process of fleshing out those subtleties can prove so daunting. But framing this as a complex issue, wherein Google at once participates in and reinforces the hegemony (footnote: I fully believe that the "hegemony" is a concept that can be explained to a "teen" audience, as I subjected my nineteen-year-old sister to a discussion of it last Easter break and my Rhetorical Arts class seemed to grasp the concept last semester), yet perhaps does so on a level slightly below the deliberate instantiation of hegemonic values, might better serve the audience we are attempting to address.
And if they are perhaps not so keen on the "genome" metaphor I've constructed here, perhaps it would be best to focus on the assumptions they have about the way Google indexes websites, to dismantle the "vote and rank" supposition that is so prevalent in people's minds (a sketch of one way to do this follows below). Whichever course we take, I think central to all of this is having conviction in the students' ability to grasp this material. Complex issues, whether they are about technology or ideology or the way in which the two intersect, should not be reduced to platitudes or generalizations. Rather, they should be worked through in a language, and with a patience, that intervenes in a reductive "Google = Bad" narrative: not because this claim doesn't possess value or merit, but because attacking Google as a product or as a service might alienate those who are dependent on its use. Further to this, to suppose there are limits to what individuals are "capable" of understanding merely reinforces those same limits. As I said, I don't think I can address the latter half of this prompt. I'm affected by the same cynicism (though I'm not sure if it's cynicism so much as resignation and exhaustion) after having been forced to confront a fact I already knew to be true. If anyone has any ideas about how to help the "audience" we are meant to be working with here (or myself, for that matter), I'd certainly be open to ideas!
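On that "vote and rank" note: one concrete way to dismantle the supposition is to actually run the textbook PageRank calculation with students and point at its constants. The sketch below follows the simplified algorithm Brin and Page published in 1998, not Google's current system, which is proprietary and far more elaborate; but even here, the damping factor, the uniform "teleport" term, and the handling of dangling pages are all designed choices, not votes.

```python
# Simplified PageRank by power iteration, after Brin & Page (1998).
# Even this "democratic" model contains designed choices: the damping
# factor d, the uniform teleport distribution, the iteration count,
# and what to do with pages that link to nothing.

def pagerank(links, d=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - d) / n for p in pages}  # the "teleport" share every page gets
        for p, outs in links.items():
            if outs:
                share = rank[p] / len(outs)      # a page's "vote" is split among its links
                for q in outs:
                    new[q] += d * share
            else:
                for q in pages:                  # dangling page: spread its vote evenly,
                    new[q] += d * rank[p] / n    # itself a design decision
        rank = new
    return rank

# A four-page toy web: everyone links to C, C links back to A, nobody links to D.
web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
for page, r in sorted(pagerank(web).items(), key=lambda kv: -kv[1]):
    print(page, round(r, 3))
```

Even in this toy web, the "ranking" that emerges depends as much on those chosen parameters as on who links to whom, which is precisely the point I would want students to sit with.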
Keywords: pedagogy; programming; ideology; algorithms
(Interestingly, the Library of Congress uses Google to search its own cache of Subject Headings.)