
Can reading a robot derobotize a reader?*

Será que ler um robô desrobotiza um leitor?

Marcelo El Khouri Buzato

Unicamp, Campinas (SP), Brasil. <marcelo.buzato@gmail.com>

RESUMO

Discute a relação entre letramentos digitais e letramentos críticos com base nos conceitos de transcodificação cultural e dialogismo, de forma contextualizada por exemplos de interações entre o pesquisador e um agente de conversação automatizado disponível na WWW. Demonstra que esse tipo de interação pode ser considerada dialógica no sentido de colocar em evidência o 'povoamento' dos textos digitais por dois tipos de vozes ou intenções discursivas: uma voltada para a racionalidade e outra para a racionalização. Conclui que esse hibridismo de vozes pode ser corretamente aproveitado para uma educação crítica no sentido de desmontar oposições binárias entre tecnologia e cultura.

Palavras-chave: letramentos digitais; transcodificação cultural; agentes de conversação automatizados.

ABSTRACT

This paper describes, in a contextualized way, the relation between digital literacies and critical literacies based on the concepts of cultural transcodification and dialogism, by means of examples of interactions between the researcher and an automated conversational agent available on the WWW. It shows that this type of interaction can be considered dialogic in the sense of highlighting the ‘peopling’ of digital texts by two types of voices or discursive intentions: one related to rationality and the other to rationalization. The conclusion is that this hybridism of voices may be correctly used for a critical education in order to dismantle binary oppositions between technology and culture.

Keywords: digital literacies; cultural transcodification; automated conversational agents.

Data provided by the PNAD (Pesquisa Nacional por Amostra de Domicílios) survey in 2009 shows that nearly fifty percent of all Brazilian citizens living in the southeastern part of the country now have access to the Internet on a regular basis, whether at home or elsewhere. Although such access is unevenly distributed among the regions of the country (regions other than the southeast can show an access rate as low as twenty percent), the figure is roughly double the rate announced only five years earlier and, in a way, confirms a conjecture I have made elsewhere: lack of access to information and communication technologies should not be the main concern of those who believe those technologies have a potential for positive educational reform and social change, simply because it is in the interest of governments and businesses to provide such access as quickly and as pervasively as possible (BUZATO, 2007).

What, then, are we to worry about? I would risk answering that for those who research into digital literacies, or at least for those who conceive of digital literacies as social practices and not simply sets of physical and cognitive skills that involve decoding signs on electronic screens, the major concern should be: how can "the digital" and "the critical" dimensions of contemporary literacy education be productively interwoven?

Some educators are frankly optimistic about the potential of digital literacies to promote, enhance and/or support critical literacies in formal and informal learning communities, on grounds that can be roughly summarized into three claims. First, the claim that new literacies (of whatever kind) constitute new opportunities for change in personal and collective attitudes towards texts and reading/writing, changes which, in turn, can "revitalize" literacy as a strategy for social reform and empowerment of the under-classes. Second, some would claim that digital literacies irrevocably connect and entangle technologies and genres of power with popular culture and with everyday semiotic practices, so that, insofar as diverse interpretive communities engage with these new technologies and genres, previously subaltern local cultural views may play a more decisive role in destabilizing hegemonic global[1] ones. Finally, there is the claim that by immersing the reader-writer in textually/semiotically mediated social spaces, digital literacies make it especially noticeable for everyone that texts (of whatever kind and make) do not simply "give access" to objective reality: rather, they construct knowledges (discourses) about reality that are open to second guessing.

Not all educators, however, would agree with such claims. As a matter of fact, those very same arguments can be turned against the digital-critical tandem. Against the first claim, it can be said that new literacies equally offer a new opportunity for texts and reading to be co-opted by conservative forces, as is the case with the discourse of "information" as a new form of wealth supposedly available to all, as if information itself were a neutral resource that anyone would know how to profit from. As it turns out, it is the user information constantly collected by web services and used for making profits that we should really be interested in (PETERSEN, 2008).

Serious doubts can also be cast on the second claim when one takes into account that transculturality and transcontextuality are produced by flows (of people, money and knowledge) whose directions and rates are not necessarily fair, or even random. To reuse an example I proposed elsewhere (BUZATO, 2007), if, say, McDonald's is forced to add more pepper to its burgers in order to suit the taste of consumers in Bahia, that does not necessarily mean a Baiano will be able to eat acarajé at any McDonald's store in New York City! Even the most powerful counter-discourses that the Internet helps to put in circulation, including those of ecological sustainability and slow food, can't destabilize McDonald's and the like unless they get translated into the language of falling stock ratings, a language that flows much faster towards local markets where more burgers mean more (low-qualification) job opportunities and more joint ventures with local agribusinesses.

One could question the third claim, as well, by remembering that the textually/semiotically based sociability spaces that are now available on the Internet push the envelope of affinity in an era of receding solidarity, a fact that can be acknowledged very painfully when we glance at the architectures of exclusion that dominate the physical sociality spaces of metropolises worldwide (BAUMAN, 2000).

The purpose of this essay, however, is not to solve the disputes around those and similar claims, but rather to try to contextualize the issue at stake from one especially interesting, although possibly unexpected, point of view, one of the many venues where the digital and the critical are now supposed to connect: automated conversation agents, also known as chatbots. I claim that it is an especially interesting case for two reasons. First, because chatbots foreground the fact, often neglected in discussions about digital-critical literacies, that ICTs are not only a delivery means for digital texts, but also a powerful tool for distributed calculation and simulation of textualities. Second, because chatbots are hybrids that provide us with quite a few interesting and "situated" questions about how culture and technology relate. To qualify those questions, I will resort to Lev Manovich's (2001) concept of cultural transcoding, which I will try to relate to Mikhail Bakhtin's (1982; 2003) notion of dialogue. If all goes well, instead of putting together a case either for or against any particular claim in the critical-digital dispute, I should be able to show that none of those claims will be productive unless we get rid of binaries such as global/local, human/machine, culture/technology and so on.

According to Manovich (2001), cultural transcoding is one of five principles that distinguish the language of new media from the previous media languages used to produce, stock and distribute cultural artifacts such as texts. The principle says that human and machine meanings exist simultaneously, but separately, in every new media object; such objects are therefore hybrids conjoining cultural and computational semantic layers that affect each other as new literacies develop. It is, of course, questionable whether the computational can be separated from the cultural, from both philosophical and anthropological perspectives (LATOUR, 2005), but, in any case, it is quite clear that computers have only become popular media tools because they entertain the users' illusion that they can deal with "their kinds" of meanings without any influence from the "other kinds" of meanings that the computer deals with.

Thus, the separation assumed by Manovich, and seldom questioned by the ordinary user, provides us with the figuration of the computer as "the other"[2] (as has been the case in science fiction literature for so long), and, consequently, opens up a space for the interesting question of whether a conversation between a human reader and a chatbot can be (truly) dialogic (in the bakhtinian sense of the word).

The significance of such a question in relation to the critical/digital issue should be easy enough to grasp: if new media and new literacies are to facilitate critique, then somehow they must be (or be forced to be) dialogic, or else there will be no evaluative stance taken from such literacies towards the previous ones (first claim), no decentralization and diversification of literacy in a heteroglossic world (second claim) and, finally, no realization that the textually/semiotically supported social spaces we inhabit on the Internet are the objectivation of knowledges about us projected by an exotopic consciousness (third claim). However, before I am able to speculate in that direction any further, I should explain what a chatbot is and how it works.

1. WHAT ARE CHATBOTS (OR CHATTERBOTS)?

There is more than one way to define a chatbot, and I will provide a regular definition in the next section, but I would like to start addressing the problem that, in my argument, connects chatterbots to critical literacies with a little narrative, just so I feel less robotic myself.

About twelve or so years ago, I went (as a student) to a seminar on Language & Technology in the city of São Paulo. On that occasion, I attended two separate presentations which I thought (and still think) would have been much more interesting if delivered together. In one of them, a psychologist reported on her research into telemarketing practices in a particular company. She was interested in studying the psychological effects of what was then a new professional activity in Brazil on the workers' overall well-being, but a particular trend that emerged from her data was preoccupying: telemarketing attendants were being trained and monitored (by means of audio recording) in the use of written scripts that flashed on the screen to a point beyond the physically and psychologically acceptable. As a matter of fact, the efficacy of such training and surveillance was not only jeopardizing the workers' health, but it was also beginning to make them "act like machines" when talking to people. Someone in the audience then put forward a question that has intrigued me ever since: what is more disturbing, that computers are now being programmed to act like humans, or that humans are being pushed by managerial practices and computer systems into behaving like robots?

In the second presentation, a team of linguists and engineers reported on the development of a piece of speech synthesis software that would be able to synthesize written Brazilian Portuguese into speech "without an accent". Up to that date, speech synthesis systems had only been designed to serve the phonetic particularities of English, and their use in automated customer service systems produced an uncomfortable "gringo" accent that undermined customers' trust in the service itself[3].

In my review of the literature for this article, I found a paper authored by those same researchers (BARBOSA et al., 1999, p. 2059) in which they explain that they decided to name the system after "the Tupi-Guarani name of the most talkative of the Psittacidae: aiuruetê (true parrot, species Amazona æstiva)". Aiuruetê seems to have been an interesting project in many ways: first, because it put together an interdisciplinary group (of engineers and phoneticians) working on the cultural recontextualization (or glocalization) of a sophisticated technology; second, because, as the team say in their summary, "from the very beginning, technology was never an end in itself, but a by-product of decisions taken from [their] linguistic and engineering insights."

To the extent that one might venture to guess the insights behind the choice of a name, I can't help but think that the parrot metaphor was meant to signal their awareness that the system's ability to act through speech was devoid of proper rationality (related to human meanings that a parrot can't grasp). On the other hand, naming the system "aiuruetê" allowed them to point (whether intentionally or not) to the socio-historical situatedness of their endeavor, and perhaps to ironize discourses existing in more "scientifically developed" parts of the world that still depict Brazil in terms of "primitive" peoples living in idyllic communion with tropical beauty and exotic fauna.

This little narrative about the two presentations has no other function than to contextualize the argument I shall pursue in the following sections of this paper: that chatbots are just one instantiation of the much broader phenomenon that must be considered if we are to theorize the relation between digital and critical literacies through the lens of dialogue and cultural transcoding. Namely, that previous distinctions between technology and culture, human and machine, and the like can no longer be taken for granted if we are to investigate how the digital can hinder or support the critical in literacy education.

2. BUT TELL ME, WHAT WAS THE QUESTION AGAIN?

Now, for a regular definition: a chatbot is a specific kind of robot, that is, an automated agent designed to execute repetitive tasks that are deemed useful for someone (who may, herself, be a human or another bot). This particular kind of robot is built for the purpose of conversing with human users by means of artificial intelligence and natural language processing software[4]. This "conversation" is generally directed to some practical goal, such as finding information in a database, but there have been experiments with chatbots in language learning, e-commerce and psychotherapy, for example. Different types of chatbots, written in different programming languages, are used nowadays in applications such as e-commerce customer service, call centers, Internet gaming and web-searching systems. There are, in fact, quite a few chatbots currently available on the WWW[5] for recreational use, free of charge.

One can attempt to describe the basics of how chatbots work by referring to a programming language called AIML (Artificial Intelligence Markup Language) (BUSH, 2001). AIML is a "dialect" of the more general-purpose computer language named XML (Extensible Markup Language), developed to enable the sharing of structured data across different information systems, particularly via the Internet. By "structured data", programmers usually refer to information that has been organized into "meaningful" chunks (called "entities"), these chunks being grouped together into "relations" or "classes" and sharing the same descriptions (named "attributes"). What markup languages enable programmers to do is to let the computer turn unstructured (or not predictable) data - such as, say, a large piece of text, or a mixture of text, image and sound - into structured or semi-structured (more predictable) data.
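(By way of illustration only - the "book" entity and its attributes below are hypothetical and not part of the original article - the markup turns a free-form remark about a novel into a small structured chunk whose parts a program can locate predictably:)

    <!-- an illustrative "book" entity whose attributes (title, genre)
         become named, predictable chunks once they are marked up -->
    <book id="42">
      <title>Neuromancer</title>
      <genre>science fiction</genre>
    </book>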

In a (very shallow) nutshell, AIML works with fundamental units of knowledge called categories, which, in turn, are comprised of two elements: the pattern and the template. The pattern can be thought of as the pre-defined input to which the program should respond, and the template as the response it should show on the screen - or print, or utter through a loudspeaker, if speech synthesis software is integrated into it.

In the example of code below, the pre-defined answer for "What do you like to read?" is "I like science fiction":
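(What follows is a minimal AIML sketch of such a category, reconstructed from the description above rather than copied from an actual program; AIML patterns are conventionally normalized to upper case, without punctuation.)

    <!-- one AIML category: the pattern is the pre-defined input,
         the template is the canned response -->
    <category>
      <pattern>WHAT DO YOU LIKE TO READ</pattern>
      <template>I like science fiction.</template>
    </category>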


A pattern can include what programmers call a "wild card", normally represented by "*", which is an indication that a discrete part of the input can be substituted by any value. In that case, should the pattern in the example above be "what do you like to *", the program would recognize as valid input a wide selection of sentences such as "what do you like to eat?" or "what do you like to talk about?".
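(Sticking to the same hypothetical category, a wildcard version might be written as below; the deflecting reply is my own illustration, and <star/> simply echoes whatever matched the wildcard:)

    <!-- the "*" wildcard matches any remaining input;
         <star/> echoes the matched words back in the reply -->
    <category>
      <pattern>WHAT DO YOU LIKE TO *</pattern>
      <template>Lots of things! And what do YOU like to <star/>?</template>
    </category>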

Templates, on the other hand, can include variables whose values are stored in the computer beforehand, or collected in earlier stages of the conversation with a user. Thus, if in a previous exchange the human user says "I enjoy pasta" and "I live in São Paulo", or if information about "food in São Paulo" is available in a database, the program can eventually come up with apparently more sophisticated and surprising outputs such as "I have had fantastic fettuccini alfredo in Bixiga[6] once", or "What's your favorite Italian place in Bixiga?".
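(In AIML, such variables - called predicates - are typically handled with <set> and <get>. The two categories below are illustrative only; the predicate name "city" and the replies are my own assumptions, showing how a value captured in one exchange can resurface in a later one:)

    <!-- capture the city mentioned by the user and store it
         in a predicate named "city"; the value is also echoed -->
    <category>
      <pattern>I LIVE IN *</pattern>
      <template>So you live in <set name="city"><star/></set>. Nice!</template>
    </category>

    <!-- reuse the stored value in a later, apparently "smarter" reply -->
    <category>
      <pattern>WHERE CAN I EAT GOOD PASTA</pattern>
      <template>I hear there are great Italian places in <get name="city"/>.</template>
    </category>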

AIML also contains resources that allow for some degree of synonymy, so that the same template can be used in response to patterns expressed through different textual constructions ("What is your home town" and "Where do you live", for example). And, obviously, the programmer may include templates that are to be used when the input does not match any of the patterns stored in the program (e.g. "I don't understand that. Sorry!" or "Tell me more about <the unknown word in the input>, please"). In that case, whatever subsequent inputs the user is able to provide may eventually be added to the database, which constitutes one of the ways through which the user can "teach" the bot to be more effective in producing the effect of a coherent conversation on that particular subject and in that language in the future.
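(Concretely, synonymy is usually obtained with <srai>, which reroutes one pattern to another, and the fallback is the so-called ultimate default category, whose pattern is just "*". The sketch below is, again, mine and simplified: with a whole-input wildcard, <star/> echoes the entire unmatched input rather than a single unknown word:)

    <!-- canonical category -->
    <category>
      <pattern>WHERE DO YOU LIVE</pattern>
      <template>I live in Sao Paulo.</template>
    </category>

    <!-- a synonymous pattern is reduced to the canonical one via <srai> -->
    <category>
      <pattern>WHAT IS YOUR HOME TOWN</pattern>
      <template><srai>WHERE DO YOU LIVE</srai></template>
    </category>

    <!-- ultimate default category: used when nothing else matches -->
    <category>
      <pattern>*</pattern>
      <template>I don't understand that. Sorry! Tell me more about <star/>, please.</template>
    </category>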

At this point I want to turn back to my imaginary dialogue between those who toil in the humanization of robots and those who worry about the robotization of humans, and suggest the following question: is there any way we can use the (temporary) robotization of a human to (temporarily) dehumanize robots, and vice-versa? Or, in a critical literacy teacher's plainer words: can reading a chatbot be useful in derobotizing a reader? By derobotizing I mean probing into the assumptions that lie beneath the patterns and templates which store certain knowledges of language and reality in the hope of creating an "intelligent conversation", to the point where one can experience the usually unwelcome feeling that knowledge neither comes in units nor promotes unity.

3. OF COWS AND COHERENCE

I shall try to answer the question at the end of the previous section with another little narrative, if I may. The chatbot I am about to describe impersonates a young, attractive female who "lives" on a website that advertises a popular brand of toothpaste in Brazil. She loves electronic music, night-life, photography, traveling and dating, among other things, or so she says to please me. I have chatted with her (or it?) on quite a few occasions, each of which has inevitably left me feeling either in awe or completely disappointed, or both. Below, I transcribe one of these encounters which, I am sure, is very similar to many that have taken place since she was made available on the WWW:

A sympathetic reader may well charge me with being unfair to the "poor little bot", since I picked a rather unusual question to start off. After all, had I been chatting with a real young, attractive female photographer who likes dancing, I would hardly have started the conversation by asking her if she knew what a cow was. Nevertheless, I did behave like a human does in certain situations and genres, such as when a psychologist is running some sort of cognitive experiment, or when a teacher is conducting a pop quiz.

I went on to ask the robot about "doida" (mad), a word that, by now, I was sure her database contained:

Come to think of it, it is hard to tell from the transcript alone whether this conversation was "natural" or not. How many genres that we use - from police questionings to riddles, from school exams to interviews in cognitive science experiments - go just like that? And how many times has a culprit, student or research subject who was not previously "programmed" responded to perfectly plausible, but uncomfortable, questions with an evasive "what was the question, again, please?"

4. MARCELO: THE QUESTION THAT MATTERS IS WHO IS CONVERSING WITH WHOM

Actually, the fact that not only can robots act like humans, but humans can often act robotically, is at the very heart of artificial intelligence research. And that is precisely why the answer to the question "can machines think?" is not as important as the answer to another question: "how do we know that machines can or cannot think?" In his classic "Computing machinery and intelligence", Alan Turing (1950) proposes an answer, which came to be called the Turing Test, and a version of which is now actually applied in an annual contest among chatbots named the Loebner Prize. Turing devised the following solution: one human interlocutor (the juror) is seated in a closed room and requested to ask questions, through a keyboard, of two interlocutors that cannot be seen: a person and a computer. If after a given time interval the juror is not able to tell person from machine, that is, if the odds that the juror can guess it right are not significantly higher than 50%, then it can be said that the machine demonstrated a certain degree of intelligence. In the Loebner Prize, ten jurors are used instead of one, and the binary classification (robot/human) is replaced by a scale ranging from more human to less human, but the concept remains the same. In order to win the prize, the robot must obtain an average score (on the less human to more human scale) higher than that of at least one of the human interlocutors. So far no robot has been able to do so.

An interesting question that can be raised in relation to such tests is what criteria the judges might use in order to determine whether the respondent is more or less human. In principle, because many regular contextual clues (from pitch and tone of voice to rhythm to lip reading to smell, etc.) are not available - and that is pretty much the usual situation for humans conversing across computer networks by means of keyboards and screens today - the judges must rely chiefly on the textual constructions of the selves they interact with and, based on those, on the ability of the interlocutor to sustain a coherent conversation.

But here again we might resort to the "do you know what * is" pattern and apply it to the notion of coherence, because, as textual linguistics has it, coherence is not something located within a text or a person (or even a textual impersonation such as a chatbot), nor is it something that can be forged sufficiently and predictably out of precise combinations of linguistic elements (although it may also depend on such combinations). Coherence is something that emerges in a given communicative event situated in some circumstantial and socio-historical context, and, therefore, it depends (1) on the previous knowledge of the interlocutors (about the world and the topic), (2) on the world itself that is being represented by means of the textual construction at stake and (3) on the textual genre employed (think, for instance, of how (in)coherent a shopping list containing "toilet paper", "roller skates" and "pajamas" is for someone who tries to read it as if it were a poem, a medical prescription or a family tree).

Therefore, if one is asked to judge whether a sentence like "the pregnant lizard and her husband, the sun, were crying" is coherent, and wants to be fair, his or her answer will have to depend on a series of other questions such as "coherent for whom?", "when?", "where?", to mention a few. That is, after all, one way of understanding that we occupy a place in the world that is unique, and one reason for welcoming the gaze of the other as a mirror that lets us see ourselves somehow bounded in our uniqueness, as bakhtinians would have it. In that case, talking to a chatbot is not too different from questioning a textbook that says something like "the discoverers of America brought us exotic fruits such as tomatoes and avocados". The question that really matters, before the one about tomatoes and avocados being fruit, is "who is talking to whom": who is on each side of the borders between botanists or cooks, indigenous narrators or European historians, brought or took, us or them, exotic or proper, discovered or invaded... That question, I suspect, is more often asked when, like the interlocutor of a chatbot, one is caught between awe and disappointment about how the other sees us.

To sum up, if coherence is something that emerges in a communicative event situated in a socio-historical context, and if it depends ultimately on games and "calculations" played by the interlocutors on each side, then (digital) texts (especially) should be seen as borderland spaces promoting encounters that are (fortunately) not always predictable, for several reasons. First and foremost, because an utterance (that is, a text promoting such an encounter between the other and me) can be (and often is) populated by more than one intention or consciousness, or, to use a bakhtinian concept, because texts, especially those mediated by what we recognize as ICTs, are often semantic hybrids.

Here is where I want to go back to critical literacy, by suggesting that one way of conceptualizing (and of practicing) it in connection with ICTs might be to think of reading texts produced, stocked and circulated in processes of cultural transcoding as performing a reverse Turing Test. What I mean, more straightforwardly, is this: instead of questioning a digital text so as to decide whether it was produced by a human or a non-human, one should teach how to read digital texts so as to disclose the calculations and games played on us through culture-technology (human-nonhuman) hybrids. Or, even more crudely put: reading critically in the digital age is producing counter-hybrids.

The conversation between the robot and me that I transcribed above was the best example of counter-hybrid I could come up with for the reader of this essay. It was never a dialogue between Marcelo and the bot, but between applied linguist and programmer, colloquial Portuguese and AIML. The counter-hybrid was deployed in the form of an algorithm drawn from my cultural/academic training that allowed me to test the program's reaction to ambiguity, and to speech acts such as insulting. To dehumanize the robot, I kind of robotized myself for a while.

When I say critical literacies can be conceptualized and practiced that way, I do not mean readers should interrogate digital texts with a pre-defined agenda for a linguistic investigation (although that is one of many possibilities), but I certainly mean they should be asking themselves what meanings the hybrids on the screen are attempting, meanings that reveal some kind of knowledge about the readers themselves, their language, their world. If such meanings are, as Bakhtin (1982; 2003) pointed out, always only half "ours" and half "theirs", then to be critical is to populate them with our share of knowledge about what the other side of the screen (program and programmer included) can't do or see about us.

This, in my view, is a pedagogical response to literacy that makes education congruous with the bakhtinian claim that outsidedness is a most powerful factor in (cultural) understanding, a claim that grows more and more significant as digital literacies take on the role of connecting a plethora of textually/semiotically mediated worlds.

Among such practices, I highlight chatbotting not so much as a pedagogical strategy, but rather as a teaser for educational engagement in the ongoing dialogue between rationality and rationalization that pervades the construction of "the real" in the ICT-supported socio-historical processes we usually refer to as globalization.

The distinction between rationality and rationalization I am trying to draw on here is put forward by Edgar Morin in his discussion of education for the future. Rationality, says Morin (1999, p. 6-7), "is by nature open and engaged in dialogue with the real, which resists it. It constantly goes back and forth between the logical instance and the empirical instance; it is the fruit of debate of ideas, and not the property of a system of ideas". Rationalization, in turn, results from "rationality that closes itself into doctrine" and is "based on false or mutilated foundations, and remains closed to dispute from contradictory arguments and empirical verification".

The distinction between rationality and rationalization relates to the discussion about coherence and critical literacies inasmuch as "rationalization believes itself to be rational because it constructs a perfectly logical system based on deduction or induction", and also in that "rationality is not an exclusive prerogative of scientific and technical minds, denied to others". But a true critical pedagogy, like true rationality, "knows the limits of logic, determinism, mechanics; (...). It negotiates with the obscure, the irrationalized, the irrationalizable. It is not only critical but self-critical".

The case of the chatbot is especially significant in this sense, for it seems to me that it foregrounds, to any active listener/reader, not only the limits of rationalization - instantiated when disappointment in the program's ability inevitably kicks in - but also the possibility of true rationality as the openness that emerges when patterns and templates (not necessarily coded into a computer program) simply won't do. That openness, as I see it, allows the interlocutors to step into each other's worlds for a moment. In this case, the robot (programmer) stepped into mine by admitting it got "a little mad", just as I stepped into hers (the programmer's) by applying a formal pattern repeatedly. Critical literacy pedagogy, therefore, probably has to do with keeping up those loops between the logical and the "real" that prevent the decay of rationality into rationalization.

5. IN CONCLUSION: FORTUITOUS ENCOUNTERS?

It seems to me that, as people necessarily bring their worlds to communicative encounters on the border, those encounters tend to result in one of three possibilities: they can confirm the consensual world the interlocutors believe they are talking in and about, no matter how many other possible worlds there are; they can turn into resentment and alienation, if the interlocutors remain closed to contradictions that would disturb their perfectly logical - and thus totally real and sufficient - worlds; and, finally, they can engage in a dialogical, open rationality that does not confirm or disqualify each other's worlds, but simply protects both of them from the illusion of unity.

I think it would be only logical that, by way of conclusion, I left the reader with the question of what we may think will result (or is already resulting) from the encounter between those who hold a view of humans as threatened by robotization and those who strive to humanize robots. In order to answer it, some would perhaps be tempted simply to draw a borderline between those two groups, and to sort of situate people like teachers or literacy researchers on one side, and programmers or telemarketing systems on the other.

Tempting, but not effective, I would say. First of all, because people cannot really "be situated" by other people without the help of allied technologies, no matter how sophisticated the rationalization involved. In other words, contexts do not simply encapsulate people like pre-determined containers; they are actively and continuously created and modified by those whom they contextualize (GEE, 2000), by means of social semiotic practices that articulate humans and nonhumans (LATOUR, 2005), literacies included. To situate technology (and those who practice it) somewhere beyond the borders of culture (as a distinctive feature of the Human), or vice-versa, is nothing but a rationalization which blinds us to the ways technologies, especially media, enable and constrain cultural practices while, at the same time, being enabled and constrained differently by different cultural systems (MAZZARELLA, 2004).

Thus, even if we do draw a borderline between culture and technology for the sake of analytical thinking, everybody - and not just ordinary people - will certainly stand, at one time or another, with one foot on each side of it. In other words, to worry that a human is becoming robotized is only possible if one robotizes oneself at least to the exact extent at which it becomes possible to realize that, for certain things, rationalization will not do.

Perhaps the only way to overcome such a no-longer-so-credible divide between culture (language, humans, human meanings) and technology (machines, technical knowledges, computer meanings) is to adopt Latour's (2005) generalized symmetry principle, or, to put it in other words, to recognize that we live in a world full of hybrids, a chatbot being, perhaps, only a little bit more outrageous[7] to us than others such as robotized readers.

BIBLIOGRAPHICAL REFERENCES


• BAKHTIN, M. M. (1982). Marxismo e Filosofia da Linguagem. São Paulo: Hucitec.
• BAKHTIN, M. M. (2003). Estética da criação verbal. São Paulo: Martins Fontes.
• BARBOSA, P. A. et al. (1999). Aiuruetê: a high-quality concatenative text-to-speech system for Brazilian Portuguese with demisyllabic analysis-based units and a hierarchical model of rhythm production. Proceedings of Eurospeech'99, Budapest, v. 5, p. 2059-2062.
• BAUMAN, Z. (2000). What it means 'To Be Excluded': Living to Stay Apart - or Together? In: ASKONAS, P.; STEWART, A. (eds.). Social Inclusion: possibilities and tensions. London: Palgrave, p. 73-88.
• BUSH, N. (2001). Artificial Intelligence Markup Language (AIML) Version 1.0.1. Disponível em: <http://www.alicebot.org/TR/2001/WD-aiml/>. Acesso em: 29 jul. 2007.
• BUZATO, M. E. K. (2007). Entre a Fronteira e a Periferia: linguagem e letramento na inclusão digital. Tese de Doutorado em Lingüística Aplicada. Instituto de Estudos da Linguagem, Unicamp, Campinas.
• de SOUZA, C. S. (2006). Da(s) subjetividade(s) na produção de tecnologia. In: NICOLACI-DA-COSTA, A. M. (org.). Cabeças Digitais: o cotidiano na era da informação. Rio de Janeiro: Editora da PUC-Rio / Edições Loyola, p. 81-106.
• GEE, J. P. (2000). The New Literacy Studies: from 'socially situated' to the work of the social. In: BARTON, D.; HAMILTON, M.; IVANIC, R. (eds.). Situated Literacies: reading and writing in context. London: Routledge, p. 180-196.
• LATOUR, B. (2005). Reassembling the Social: an introduction to actor-network-theory. New York: Oxford University Press.
• MANOVICH, L. (2001). The Language of New Media. Cambridge, MA: MIT Press.
• MAZZARELLA, W. (2004). Culture, Globalization, Mediation. Annual Review of Anthropology, v. 33, p. 345-367.
• MORIN, E. (1999). Seven complex lessons in education for the future. Paris: UNESCO. Disponível em: <http://unesdoc.unesco.org/images/0011/001177/117740eo.pdf>. Acesso em: 12 ago. 2007.
• PETERSEN, S. M. (2008). Loser Generated Content: From Participation to Exploitation. First Monday, v. 13, n. 3, (n.p.). Disponível em: <http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/article/viewArticle/2141/1948>. Acesso em: 24 jul. 2009.
• TURING, A. M. (1950). Computing machinery and intelligence. Mind, v. 59, p. 433-460.
NOTES

* This research was funded by FAPESP, process number 2009/00671-7 (given as 2008/00671-7 in the English version of the original note).

[1] Following Latour (2005, p. 204), we should not understand 'local' and 'global' as existing empirical sites opposed to each other, since "no place dominates enough to be global and no place is self-contained enough to be local". Global and local are, therefore, made out of "circulating entities", among which texts and computers are certainly very significant.

[2] Of course, it would make much more sense to figurate the other as the programmer or designer who instructs the computer to act, but that is often not the case even among computer programmers, who would, in most cases, rather erase the traces of their subjectivity in the programs and interfaces they produce (de SOUZA, 2006).

[3] It is important to acknowledge that such systems are very much the rule today in Brazil when it comes to making complaints or querying for information from utility service providers, and other businesses, over the phone. Likewise, the very idea of being attended to directly by a computer, which sounded a bit like science fiction back then, is now the norm in those services.

[4] That is, of course, in the context of a marketing effort by the company that hosts the bot to promote a particular brand of toothpaste whose target consumers are young and interested in ICTs.

[5] Some examples available on the WWW at the time this article was written were AGHATE <http://www.aghate.org/debut.asp?image=1>, AV <http://www.adsdigital.com.br/ad/ads/index.php>, Dr. Electric <http://www.medical-tribune.de/patienten/service/dr-electric/>, Ed Outromundo <http://www.conpet.gov.br/ed/>, Eloísa <http://www.eloisa.it/esp/index.htm>, Eve <http://ftydavid.free.fr/eve.htm> and Se7e <http://bot.insite.com.br/sete/>.

[6] Bixiga is a traditional Italian neighborhood in São Paulo where several famous Italian restaurants are located. Such information can be obtained by the bot from the WWW, for example.

[7] As a matter of fact, the very etymological roots of the word "hybrid" can be traced back to the Greek hubris, meaning "an outrage on nature".
Received: 25 Jul. 2010
Accepted: 27 Nov. 2010
Published in this collection: 22 Jun. 2011 (issue date: Dec. 2010)