Google is the new Mickey Mouse (and Cognitive Science of Religion still isn’t clear about what a god is)


ABSTRACT

In a 2008 paper, Justin Barrett outlined five conditions meant to be jointly sufficient for an agent-concept to elicit faith and religious commitment. In other words, he outlined five requirements for an entity to be a god. His table of criteria was intended as a solution to the so-called Mickey Mouse problem, the problem of explaining why people believe in god(s) but not in other entities, such as Mickey Mouse. Barrett was criticized in a 2010 paper by Gervais & Henrich, who claimed the table yields false positives: some god-concepts meet all five criteria but are not the object of faith and religious commitment, such as Zeus, as well as the god-concepts of any extinct religion. In this paper, I argue along similar lines, but go further. I show that some of the false positives Barrett’s table allows for don’t even stand for gods of any genuine religion, extinct or not.

Keywords:
religious beliefs; cognitive science of religion; MCI hypothesis; Mickey Mouse problem; god concepts


1 Initial considerations

The cognitive science of religion (CSR) is a relatively new approach in the study of religion, born in the final years of the 20th century through the work of scholars such as Thomas Lawson & Robert McCauley (1990), Pascal Boyer (1994a, 1994b), and Barrett & Keil (1996), amongst others. A multidisciplinary field marked by strong analytic tendencies, CSR imported into the study of religion a number of ideas from mainstream contemporary epistemology and philosophy of mind, as well as from cognitive science, in its quest to understand what is peculiar about religious ideas. In this tradition, a religious idea is basically an idea that deploys some god-concept, such as belief in a particular god or acceptance of some god-related doctrine.

In this paper I present a challenge to CSR by discussing further developments of the so-called “Mickey Mouse problem”, a problem that unfolds from CSR’s understanding of deities, or gods, and that remains unsolved despite scholars’ best efforts. Here is the plan. In section 2 I present CSR’s default spelling out of god-concepts and how it gives rise to the Mickey Mouse problem. In section 3 I discuss the many responses to the problem that have been given by CSR scholars so far, including the state-of-the-art (most comprehensive) solution devised by Barrett (2008a), together with additions suggested by Gervais & Henrich (2010). In section 4 I present a counter-example aimed at showing that even the most sophisticated solution formulated within CSR’s rationale so far, the Barrett-Gervais-Henrich solution, is not entirely satisfactory. CSR’s definition of a deity keeps allowing things that are not deities to pass as such. The consequence, if I’m right, is that CSR has not succeeded in distinguishing religious ideas from non-religious ones, which was the primary goal it was designed to achieve. I conclude, in section 5, with a brief discussion of those consequences.

2 The MCI hypothesis and the Mickey Mouse problem

By and large, most of CSR’s work so far has centred on religious belief (its formation, maintenance and transmission), where the notion of religious belief is equated with belief in special entities, certain superhuman agents that are not part of the natural, empirical world (in short, gods)1.

Now, one of the biggest challenges CSR has faced so far is the difficulty of explaining how religious belief differs from other sorts of belief, such as perceptual beliefs and metaphysical beliefs. This involves, of course, explaining how religious concepts (the concepts that make up the content of religious beliefs) differ from other concepts, and how this distinction is reflected in the cognitive success of those concepts, i.e., their spreadability and prevalence across cultures.

CSR’s answer to this question is that god-concepts are a special case of representations of certain agents. These representations combine intuitive elements that are easy to process (they conform to our natural expectations about reality, what Boyer calls our “intuitive ontology”), on the one hand, with a small number of counter-intuitive, attention-grabbing elements (small breaks in those expectations), on the other. This is what is called a minimally counter-intuitive representation (MCI representation, for short) of an agent: a representation of an agent that conforms to most of the constraints of our intuitive ontologies, and fails to conform only to some of them. An MCI representation deploys concepts that violate our intuitive ontology to a moderate degree: not too little, nor too much. This hypothesis has been embraced by authors such as Keil (1989), Boyer (1994a, 2001), and Barrett (2000, 2004), amongst others, and is one of the most popular theories in CSR these days.

One background assumption underpinning the MCI hypothesis is that having one or two counter-intuitive properties helps a concept spread by making it stand out against entirely unexceptional concepts (the ones that don’t break our expectations at all) while, at the same time, making it cognitively more successful than highly counter-intuitive ones (the ones that break our expectations dramatically). A concept is “cognitively successful” when it is adherent, that is, attention-grabbing, memorable, less likely to be ignored and more likely to be handed down to the next generations2.

For instance, consider the following three concepts, borrowed from Barrett (2008a, pp. 151-152):

  (I) an invisible buffalo

  (II) an invisible buffalo that is immortal, made of steel, experiences time backwards, fails to exist on Saturdays, gains nourishment from ideas, and gives birth to kittens

  (III) an ordinary buffalo

According to the MCI hypothesis, (I) is less counter-intuitive than (II), but more counter-intuitive than (III). Therefore, (I) is more adherent than both (II) and (III). That’s because (I) breaks just one expectation of our intuitive ontology, the expectation that animals are visible, whilst (II) breaks many3 and (III) breaks none. A concept such as (I) is thus said to be minimally counter-intuitive with respect to what we expect of buffaloes.
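The buffalo comparison can be given a toy formalization: treat a concept as the set of intuitive-ontology expectations it violates, and count. This is an illustrative sketch only (the variable names and the one-to-two-violation threshold are my assumptions, not a formal claim from the MCI literature):

```python
def counter_intuitiveness(violated_expectations):
    """Degree of counter-intuitiveness = number of broken expectations."""
    return len(violated_expectations)

def is_mci(violated_expectations, max_violations=2):
    """Minimally counter-intuitive: some, but only a few, expectations broken."""
    return 1 <= counter_intuitiveness(violated_expectations) <= max_violations

# Barrett's three buffalo concepts as sets of violated expectations.
buffalo_i = {"invisible"}                                   # (I)
buffalo_ii = {"invisible", "immortal", "made of steel",     # (II)
              "experiences time backwards", "absent on Saturdays",
              "nourished by ideas", "gives birth to kittens"}
buffalo_iii = set()                                         # (III) ordinary buffalo

assert is_mci(buffalo_i)        # one violation: minimally counter-intuitive
assert not is_mci(buffalo_ii)   # too many violations: maximally counter-intuitive
assert not is_mci(buffalo_iii)  # no violations: entirely intuitive
```

On this toy picture, adherence peaks for concepts like (I), which register a non-zero but small violation count.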

CSR purports that (I) is a stronger candidate to make up the content of a religious belief than both (II) and (III). That is, it submits that god-concepts are closer to (I) in structure than to (II) or (III). The MCI hypothesis is expected to explain the widespread prevalence of those concepts across cultures and across human history. Why do we find religion in pretty much all human cultures, and why does religion involve a recurrent pattern of certain representations, namely, god-concepts? Because god-concepts are minimally counter-intuitive in the sense outlined above.

Roughly, the idea of an entity that has most of the predicates of intelligent beings (properties such as creativity, the ability to judge, the ability to know, etc.) but violates a few of our expectations about them (e.g., by being omniscient) is minimally counter-intuitive with respect to our expectations about intelligent beings. It is more counter-intuitive than the concept of an ordinary person, since it breaks the folk-psychology expectation that persons have limited knowledge (i.e., no ordinary person is thought to know everything); but less counter-intuitive than the concept of, for instance, a person who is omniscient, has scaly skin, moves only in zigzag and ceases to exist from Monday to Friday. The former wouldn’t grab our interest, whilst the latter, due to its complexity, would be either too quickly ignored or too quickly forgotten.

Now, this raises a problem. For, if being an MCI representation of an agent is what is distinctive about god-concepts, then what distinguishes them from concepts such as Mickey Mouse? Mickey Mouse combines intuitive with attention-grabbing elements, insofar as a talking mouse breaks the folk-psychology expectation that animals cannot use language (Swan & Halberstadt, 2019). Nevertheless, Mickey Mouse is not a religious representation. It is not believed to be real, nor does it inspire the faith and deep commitments associated with religious representations, even though it is comparably counter-intuitive and cognitively successful4.

This came to be known in the field as “the Mickey Mouse problem”5: how can we tell agent-concepts that are liable to be believed in, and to become candidates for religious devotion, from agent-concepts that are not? Why is it that people across cultures believe in (and worship) religious agents such as gods, but not fictional agents such as Mickey Mouse? Or, to put it more simply, why is Mickey Mouse not a god?

3 Responses to the Mickey Mouse problem and Barrett’s table

Responses to the Mickey Mouse problem have been limited in number and scope. In what follows I present the most promising insights that CSR scholars have come up with so far. One popular suggestion is that the answer to the Mickey Mouse problem lies in motivation: we have certain motivations to believe in religious agents qua religious agents, that is, to believe that gods are worthy of our faith and devotion, and those motivations are absent in the case of a fictional character like Mickey Mouse.

One such motivation is the fact that gods are represented as agents that care about what we believe. This possibility has been explored by Swan & Halberstadt (2019). Gods are thought to care about human affairs; they are interested in what their “followers” do and think, whereas fictional agents such as Mickey Mouse are not. Whilst humans have a well-documented tendency to track agency in the perceptible environment, whether as faces in the clouds or as the cause of noises in the bushes, not all agents are equally worthy of our cognitive commitment in the form of belief. We are especially wired to care about agents that are either anthropomorphic or that care about the same things we care about. Since God, but not Mickey Mouse, is represented as caring about us, we are inclined to believe in the former but not the latter.

Another motivation is that gods are normally thought of as having wide access to socially strategic information and as liable to act on that information. Socially strategic information is information that has consequences for social interaction, survival and morality, a point developed at length in Boyer (2001, ch. 4). Without it, an agent would matter little to our daily life and would not be worth discussing or thinking about, whilst agents with strategic information are seen as useful allies or dangerous enemies. As Sørensen puts it, “the gods know if we cheat on our spouses or steal from our neighbour, even if no one else does. Because of this they become highly relevant social partners and not just aesthetic figures” (Sørensen, 2005, p. 474). This is a motivation for us to believe in and worship them, or even fear them, but not cartoon characters such as Mickey Mouse.

Another point is the following. Aside from our having certain motivations to believe in god but not in Mickey Mouse, belief in god undergoes processes of validation. For an agent to be believed in, its representation has to be validated, and the way to validate it is by representing the agent in question as the cause of certain phenomena in the physical world. For instance, someone’s belief in God is validated when they pray for rain the next day, and it does rain the day after. God, in this situation, is represented as having brought the rain about, or as having answered the person’s prayer. An entity that possesses strategic information but never acts, or only acts in another universe, would fail to excite discussion and devotion (or fear). A plausible answer to the Mickey Mouse problem along those lines can be drawn from Lawson & McCauley (1990) and Sørensen (2000). In short, people believe in gods but not in Mickey Mouse because there are no validating actions in which the latter is represented as interfering with the world.

Combining and refining these considerations, in a paper called “Why Santa Claus is not a God” (2008a), Justin Barrett proposes four additional criteria (besides minimal counter-intuitiveness) that an agent-concept must satisfy in order to elicit commitment and be believed in. In other words, he comes up with five requirements for an agent to be a god. His list comprises the three elements presented above and an extra one. In addition to being MCI representations (this would be requirement zero), god-concepts must represent agents that

  1. are intentional. A religious agent must be represented as an entity that deliberately and purposefully initiates action.

  2. possess strategic information. A religious agent must be represented as having information about people that might be pertinent in terms of their survival and other goals.

  3. have detectable interactions with our world. A religious agent must be represented as being causally connected to certain events in the physical world.

  4. motivate behaviour that reinforces belief. A religious agent must be represented as being the reason for people to engage in rites, rituals and praying, which, in turn, contributes to making the belief more prevalent and stronger.

Barrett analysed the concept Santa Claus against this five-criteria framework and argued that although Santa is a more suitable candidate for belief than many other counter-intuitive agents, such as Mickey Mouse, he ultimately lacks the basic conceptual features necessary to become a god. According to Barrett, whereas it is true that Santa is represented as an intentional agent, it is debatable whether he is represented as possessing strategic information. He knows where you live, and how well you’ve behaved, but that qualifies as strategic only marginally. The information he would actually need to have, in order to satisfy the second additional requirement, is information with future social consequences, such as whether or not you plan to plant a bomb in the city centre, or to invade the Capitol.

Also, Barrett notes, it is debatable whether he satisfies the third and fourth criteria. That’s because he only acts in detectable ways under very limited circumstances, namely, on Christmas day (which happens just once a year), whereas gods are typically portrayed as acting in detectable ways at various times and places relevant to human concerns (Barrett, 2008a, pp. 157-158). Likewise, it is not strictly correct to say that Santa motivates behaviour that reinforces the belief that he is real, since the behaviours that might reinforce that belief (e.g., writing letters to him, hanging stockings and leaving out cookies) are likewise circumscribed to a very specific occasion.

Now, Barrett intended his framework to comprise criteria that are individually necessary and jointly sufficient for an agent-concept to be a god-concept: failing even one criterion disqualifies a concept, and satisfying the whole set guarantees that the concept is a god-concept. Together with Santa and Mickey Mouse, he tested concepts such as the Tooth Fairy and George W. Bush, remarking that the latter two fall only one criterion short, but that failing to satisfy even one criterion is sufficient for disqualification (Barrett, 2008a, p. 159).
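The logic of Barrett’s test can be sketched as a simple conjunction over the five criteria. This is a hypothetical encoding for illustration only: the criterion names and the example profiles are mine, not Barrett’s own data, and whether a given concept satisfies a given criterion is precisely what is debated in the literature.

```python
# Barrett's five criteria, treated as individually necessary and
# jointly sufficient (criterion "mci" is the requirement zero).
CRITERIA = ("mci", "intentional", "strategic_information",
            "detectable_interaction", "motivates_reinforcement")

def is_god_concept(profile):
    """A concept passes only if every criterion holds; failing one disqualifies."""
    return all(profile.get(criterion, False) for criterion in CRITERIA)

# Illustrative profiles: Santa arguably fails the later criteria,
# whereas Zeus (per Gervais & Henrich) satisfies all five.
santa = {"mci": True, "intentional": True,
         "strategic_information": False,
         "detectable_interaction": False,
         "motivates_reinforcement": False}
zeus = {criterion: True for criterion in CRITERIA}

assert not is_god_concept(santa)  # one failed criterion is enough to disqualify
assert is_god_concept(zeus)       # passes all five, hence Gervais & Henrich's false positive
```

The `all(...)` conjunction makes the joint-sufficiency claim explicit: the test has no way to bar a concept whose profile is all-true, which is exactly the opening the Zeus objection exploits.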

Some authors have argued that certain concepts satisfy all the criteria in Barrett’s table but fail to be objects of faith and devotion. The most famous of those concepts is Zeus (Gervais & Henrich, 2010). Zeus fulfils all of Barrett’s requirements, and was once a target for widespread belief, worship and commitment. Nevertheless, no present-day religion has Zeus as its god; he is not the target of religious devotion by a single person. If someone claimed they worshipped Zeus today, we would likely think that they were joking, or perhaps not quite mentally sound. The same holds not only for the concept Zeus, but also for concepts referring to any other agents that were once regarded as gods but are nowadays relegated to the class of “mythological characters”, such as Anubis, Thor, Quetzalcoatl, and so many others. Gervais & Henrich conclude that there must be something wrong with Barrett’s table or, best-case scenario, something missing from it, inasmuch as it clearly yields false positives6.

In fact, they go on to suggest that what is missing from Barrett’s table is reference to context. According to them, it is a combination of content and context - not the former alone - that allows us to tell religious and non-religious concepts apart, so a cognitive science of religious belief needs to take context into account in order to succeed in its broad aims. Gervais & Henrich thus propose that the MCI-based framework provided by Barrett be amended to accommodate contextual features that are important for determining which concepts are genuinely religious. The table should stop yielding false positives if we add a contextual requirement, more or less along the following lines. In addition to being an MCI concept and satisfying all the other criteria listed, in order for an agent-concept to be a god-concept it must

5) have been committed to as a result of cultural mechanisms. In short, there must be religious commitment to the agent in question (belief in it), and that commitment has to have been learnt through input from culture (Gervais & Henrich, 2010, p. 386).

Actually, though not formally included in the original list, an additional criterion alluded to by Barrett at the beginning of the paper (Barrett, 2008a, p. 150) could serve as the contextual requirement suggested by Gervais & Henrich: the distribution criterion. In addition to being an MCI concept and possessing all the additional traits on the list, in order to be a god-concept an agent-concept must be distributed. Concepts that are not shared by multiple individuals do not count as cultural or religious concepts. If the framework accommodates such a requirement, its sieve will cease to allow concepts such as Zeus to pass as religious in contexts where they are not the object of any religious commitment. The concept Zeus fails to satisfy this distribution requirement in today’s context; therefore, it wouldn’t count as a genuine god-concept in this context7.
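The amended test can be sketched the same way: the five content criteria are supplemented by a contextual parameter, so that the verdict is relativised to a context of evaluation. Again, this is an illustrative encoding under my own naming assumptions, not a formalism from Gervais & Henrich’s paper.

```python
# The five content criteria from Barrett's table.
CONTENT_CRITERIA = ("mci", "intentional", "strategic_information",
                    "detectable_interaction", "motivates_reinforcement")

def is_god_concept_in_context(profile, culturally_committed):
    """Amended test: content criteria AND culturally transmitted commitment
    (the Gervais & Henrich contextual requirement / distribution criterion)."""
    content_ok = all(profile.get(c, False) for c in CONTENT_CRITERIA)
    return content_ok and culturally_committed

# Zeus satisfies all content criteria; only the context changes.
zeus = {c: True for c in CONTENT_CRITERIA}

assert is_god_concept_in_context(zeus, culturally_committed=True)      # ancient Greece
assert not is_god_concept_in_context(zeus, culturally_committed=False)  # today
```

Relativising the test to context is what blocks the Zeus-style false positives: the same concept profile passes in one context and fails in another.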

My outlook on the Mickey Mouse problem and on the solution offered by those scholars is a little less optimistic. In what follows, I argue that adding a contextual requirement might keep the table from yielding some false positives: gods from extinct religions will no longer pass the test. But other false positives remain. Specifically, even with the amendment proposed by Gervais & Henrich, concepts that should not pass Barrett’s test (because they are not genuine god-representations, nor have they ever been evoked by genuine religions, extinct or not) are still passing.

4 Concepts that should not pass Barrett’s test are still passing

As mentioned before, Barrett’s framework is intended to comprise criteria that are jointly sufficient for an agent-concept to be a god-concept. Barrett tested four concepts - Santa Claus, Mickey Mouse, the Tooth Fairy and George Bush - to show that none of them passes the test. The test bars all those concepts and, indeed, even though some of the agents represented by those concepts are believed to exist, none of them is believed in, or believed to be a deity.

Now, some concepts represent agents that, just like Santa, Mickey Mouse, the Tooth Fairy and George Bush, the test should bar, since they are not believed in, or believed to be deities, but that are not being barred. One such concept is Google, though I believe others, such as Bing, DuckDuckGo, and even Amazon Alexa and ChatGPT, weird as it might seem, also meet the sufficient conditions for being gods by the tenets of Barrett’s test - even after the amendment suggested by Gervais & Henrich. For ease of exposition, I’ll present the case for Google, but pretty much everything I say in this section can be taken as applying, ceteris paribus, to the other concepts mentioned and, in fact, to many other forms of technology in today’s world8.

The question of what Google is is a complicated one, since most people think of it as simply a search engine, whereas in reality it’s not just that. Google is a limited liability company (LLC), which means it is an entity by all means - an entity that “has” a search engine. It was founded in 1998 by two PhD students from Stanford University. To be sure, Google’s is the world’s largest search engine today, which means Google has access to an enormous amount of information.

The question of whether or not Google is the new god has been around pretty much since the very beginning, once it became clear that Google’s pretensions were somewhat almighty: it aimed to be a god-like presence in the world, to the extent that it actually resembles the Judaeo-Christian god in many respects. For one thing, it is the closest you can get to an omniscient and omnipresent entity, since it has access to more information than any other single entity in the universe and is virtually everywhere. What’s more, Google never forgets anything. A few books have been written on the subject9, and there is even a website dedicated to discussing the remarkable similarities between Google’s relationship with its users and formal religion: “The Reformed Church of Google”10.

How would Google fare in Barrett’s test? Answering this requires a brief refinement of my explanation of what it is to be an MCI concept. The MCI hypothesis, as framed by scholars such as Boyer and Barrett, is built upon the auxiliary hypothesis that human cognitive architecture is domain-specific. As Sørensen remarks (2005, pp. 471-472), this is the hypothesis that our implicit understanding of the world is not homogeneous; that is, we don’t process everything we come across in the same way. Rather, cognition is divided into a number of ontological domains. We have specific expectations for individual objects because we subsume them under broader, more general categories. There are five ontological domains: Spatial Entities (i.e., Places), Solid Objects, Living Things that do not appear to be self-propelled (i.e., Plants), Animals, and Persons (Barrett, 2008b, p. 318).

Those domains function as general conceptual templates for specific concepts (Boyer & Ramble, 2001, p. 537). In this respect, a concept such as angel (person with wings) is much like, say, the concept fireman (person whose job is putting out fires), inasmuch as both are variations on the template Person (Horst, 2013, pp. 379-380). The important difference between angel and fireman - the thing that renders the former, but not the latter, an MCI concept - is that having wings violates expectations pertaining to the ontological category Person, whereas having the job of putting out fires doesn’t. On this way of conceiving of counter-intuitiveness, angel is counter-intuitive with respect to the ontological domain Person. In other words, the MCI hypothesis stipulates that the counter-intuitiveness of a concept is always relative to the ontological domain it belongs to.

Now, when it comes to Google, for it to be an MCI concept (criterion zero) it would have to be a variation of one of the five ontological domains, and it would have to be a variation that breaks some (but not many) of the expectations we have about that domain. If Google had to be subsumed under one of the ontological categories mentioned - because those are all there is - it would certainly not be a Solid Object, nor a Plant, nor an Animal. It could perhaps be thought of as a Place, but it would probably fit into the category Persons less awkwardly than into the category Places. Why is that? Because we treat it like a person, more so than a place.

Wegner & Ward (2013) describe findings from recent experiments showing that Google is beginning to replace a friend or family member as a companion in sharing the daily tasks of remembering. We interact with the information Google possesses in a way that bears a striking resemblance to the way we interact with what is in a friend’s head, since we nowadays turn to Google for information rather than to other people, and we rely on it. Google, in turn, almost never lets us down: it retrieves information in response to our questions just like a super-knowledgeable friend would. What’s more, Google interacts with us in surprisingly human ways, remembering our birthday and even responding to our voice commands (Wegner & Ward, 2013, p. 61). Sometimes, interaction with Google is indistinguishable from interaction with a real person, to the extent that users anthropomorphize it and extend social expectations to it, even when explicitly aware that it is not human (Nass & Moon, 2000). In short: we treat Google like a person, and we feel that Google, in turn, treats us like a person back.

If Google is subsumed under the general category Persons, it surely is not an ordinary exemplar of that category. That’s because, in other ways, it is not like any person we have ever met. Apart from being disembodied (it doesn’t inhabit a human body), Google is always present, is always on, and knows virtually everything. The information you can get from it is vastly greater in scope than what can be stored by any single person or, often, by entire groups. It is always up-to-date and, barring a power blackout, it is not subject to the distortion and forgetfulness that afflict the memories harboured inside our heads (Wegner & Ward, 2013). In short, Google is a variation on the template Person, but endowed with special skills that no single person is expected to possess. Contrary to what happens in the Santa Claus case, the set of special skills here is not on account of something external; it is a property of Google11. Google is, thus, an MCI concept par excellence.

What about the other requirements in Barrett’s table? For instance, is Google intentional? Google certainly doesn’t have a mind of its own, since it is not a real person. Its AI chatbot, Google Bard, for instance, produces captivating interactions that feel very close to human speech because of advancements in architecture, techniques of pattern recognition, and the volume of data it handles, not true wit, candour or intent. But it must be observed that the question is not whether Google is actually intentional, but whether it is represented by us as such. It appears that it is. People think of Google as an entity that acts deliberately, purposefully and autonomously. Examples include, of course, the prosaic initiatives alluded to before (e.g., Google compliments you on your birthday and warns you against potential cyber-threats). But, apart from that, other actions performed by Google are almost creepily purposeful. For instance, it personalises search results in order to show each user content it thinks they are going to like, or to consider more relevant (Krafft et al., 2019). It does that autonomously, that is, without having been asked by you, me or any other user. We understand that this is done by Google LLC with the main purpose of improving the user’s experience. But Google also acts in ways that are purposefully self-interested (e.g., it decreases the visibility of negative narratives about itself) as well as community-interested (e.g., it censors hate speech and other forms of hateful content).

Then there is the question of whether Google is thought of as possessing strategic information. That’s the easiest of all questions. Google is, if anything is, in possession of information that we consider critical in terms of our goals, including survival. If I get lost in a foreign country whose language I don’t speak, I know I can rely on Google to find my way back. If an exotic spider appears in my backyard and I don’t know whether or not it is venomous, all I have to do is take a picture of it, and Google will tell me something that is either going to save me or set my mind at ease. If I’m kidnapped by a terrorist group by mistake, Google can prove I’m not their intended target12. If strategic information has to be understood in terms of information about future affairs that is socially relevant and that might make a difference in defining who is our enemy and who is our potential ally, Google has that too. Google can predict voter turnout, that is, what the voting behaviour of a certain cut of a particular population will be13.

Now, the third requirement: do we really think of Google as acting in the “real world” (as opposed to the virtual world)? Do we represent it as being causally connected to certain events in the physical world? Here we can, again, point to some trivial facts. For instance, Google Assistant, which we can talk to by saying “Hey, Google!”, books appointments for us and makes restaurant reservations. That might count as actual interaction with the real world, albeit minor. What about something major, like provoking a military conflict? Google did that. In 2010, Costa Rica and Nicaragua came to the brink of war, and the spark that nearly lit it was Google. Google Maps placed the disputed border several miles west of where it is nowadays agreed it actually is, leading one of its users - who happened to be a Nicaraguan military commander - to make an incursion into Costa Rican territory with his army. The story might sound comical, mainly because it didn’t actually end in a bloodbath. But, as The New York Times wrote in 2013, the real worry came from Google arbitrarily taking a stand on an active border dispute14. It could have led to war. And had it done so, there would be no question as to whether Google would be seen as causally connected to it.

To those wondering whether this should count - since, as in the Santa Claus case, these are not actions that happen every day - here is another example: blacklisting. Every day, Google quarantines more than 10,000 websites that its engine has flagged as suspicious or infected. When a website is quarantined, it stops being shown in search results. Many of those are small businesses’ websites. When a small business’s website is quarantined, its traffic and sales decrease dramatically (together with its reputation, sometimes unfairly), leading the business to almost instant failure15. People, especially those business owners, have no one to blame for the mischief but Google.

Last, there is the question of whether Google motivates behaviour that reinforces belief. A genuine god must be represented by its believers as the reason they engage in rituals, such as praying; this, in turn, contributes to making the belief more prevalent and stronger. Is Google represented in this way? And are there such rituals? To answer these questions, it makes sense to get clear on what a ritual is.

Rituals are voluntarily performed behaviours and prescribed actions that are purposefully repeated over time, with a degree of formality and seriousness (Bell, 1997). They take an extraordinary array of shapes and forms but, generally, a ritual is composed of a set of actions repeated with the purpose of bringing comfort and a sense of security. How they are carried out matters more than what is actually accomplished (Lewis, 1988). Rituals come in degrees (Grimes, 2013) and are described by agents through a vocabulary similar to the one they use to describe their habits (Turner, 1992).

Kwon (2022) proposes that the way in which digital media and other applications are influencing the meaningful actions people perform in order to feel comfortable and secure can be understood in terms of ritual. People use technological resources to perform starting-the-day rituals, stop-working rituals, connecting rituals, amongst others. One of the subjects who took part in Kwon's experiment had a stop-working ritual with Google. He used Google Home to reach out to his wife every afternoon when he was about to head home from work. "I tell Google 'I'm going home', […] it broadcasts a message out of the speaker and […] it will tell her that I'm coming home. […] I just have this thing that I know it will get to her", he says (Kwon, 2022, p. 35).

Google is also at the centre of other, more widespread, practices designed to bring comfort and security, or at least the illusion of them. One such example is self-diagnosing. In 2013, for instance, more than one third of the U.S. population used Google search to self-diagnose (Kuehn, 2013). Google is the first thing people turn to whenever they develop unexpected symptoms. They search for possible causes of the symptoms and for triage advice online before they even consider asking a knowledgeable person, or going to see a doctor (Semigran et al., 2015).

Now, would these count as reinforcing behaviours of the sort that could make Google meet the fourth criterion in Barrett's table? For one thing, the practices described are not rituals in which people praise Google, or pray to it, and that generate, as a result, reinforcement of their belief in its existence. No such rituals exist, as far as I know. That is simply because nobody who uses Google actually doubts that Google exists. Nevertheless, the sort of behaviour described above does reinforce agents' bond to the technology, their belief that they can rely on Google, or, simply put, their faith in Google. Because virtually every time agents engage with the technology they manage to get the intended result, the positive outcome feeds back into Google's "godliness" and sovereignty (the status of Google as the go-to fix for our problems)16.

If the test designed by Barrett to tell gods from non-gods is accurate, and if what I've been arguing makes sense, Barrett should accept that Google is a god. It meets all the requirements that are jointly sufficient for godhood, including the contextual requirement (we all use Google because our culture encouraged us to, and this is something we share with our fellows). Nevertheless, Google is by no means a genuine god.

Gods are entities people can turn to in times of need. Google arguably fulfils this role to the same degree as traditional gods, but people don't actually believe Google to be a deity, as they do Allah, Jehovah, Shiva, and others. It is not the object of genuine religious commitment. Even the "Reformed Church of Google", mentioned earlier, describes itself as a parody religion. Parody religions are joke religions. In a paper called "Google, A Religion: Expanding Notions of Religion Online" (2017), Joanna Sleigh discusses this concept. A joke religion satirizes genuine religion. It initiates dialogue with adherents of established religions about whether its own followers actually believe in it, and engages its audience in perceptual and thought experiments aimed at triggering reflection about authenticity and fakery, as well as about the oddities of the religions being mocked (Sleigh, 2017, p. 256).

5 Concluding remarks

In this paper I've made the case that Barrett's solution to the Mickey Mouse problem, his table of criteria, is problematic even after incorporating the amendments proposed by others. It doesn't really enable one to tell god-concepts from other agent-concepts. The reason the table appears to work, at first, is that we already know, at a deeper and perhaps non-conceptual level, that entities such as Mickey Mouse and Santa Claus are not gods. The minute one tries to apply it to concepts in the grey area, as Gervais & Henrich showed, or to exotic concepts, as I hope to have shown, it yields false positives. Now, if a device designed to enable you to tell the difference between Xs and Ys only works if you already know that Xs and Ys are different, there is something intrinsically unsatisfactory about it.

It's important to bear in mind that Barrett's table is only an auxiliary tool for the MCI hypothesis; an attempt at better qualifying, or amending, the MCI hypothesis so that it would either cease to produce the Mickey Mouse problem or have a quick fix up its sleeve. The fact that this attempted fix doesn't work as it should may reveal a deeper weakness, a weakness within the MCI hypothesis itself. Varieties of the Mickey Mouse problem persist and are still there to be dealt with by CSR.

Perhaps the takeaway of the discussion carried out here is that one should stop trying to spell out what gods are in terms of necessary and sufficient conditions altogether, and start thinking in terms of prototypical traits instead. If not, then scholars should at least seriously consider making the table more sophisticated. For instance, gods are represented as supernatural beings, but even supernatural beings have a "natural" aspect to them: they are represented as entities that have not been created by us, humans. Gods are also typically represented as possessing virtue, at least within the specific domains they govern - things such as wisdom, and not just knowledge, or data, in those domains. Such additions would certainly help prevent things such as Google from passing the test as the genuine gods they are not.

References

  • ATRAN, S. 2002. In Gods We Trust: The Evolutionary Landscape of Religion, Oxford University Press.
  • BARRETT, J. 2000. Exploring the natural foundations of religion. Trends in Cognitive Science 4: p. 29-34.
  • BARRETT, J. 2004. Why would anyone believe in God? Lanham, MD: AltaMira.
  • BARRETT, J. 2008a. Why Santa Claus is Not a God. Journal of Cognition and Culture, 8(1): p. 149-161.
  • BARRETT, J. 2008b. Coding and quantifying counterintuitiveness in religious concepts: theoretical and methodological reflections. Method Theory Study Relig 20: p. 308-338.
  • BARRETT, J.; KEIL, F. 1996. Anthropomorphism and God concepts: Conceptualizing a non-natural entity. Cognitive Psychology, 31(1): p. 219-247.
  • BELL, C. 1997. Ritual: Perspectives and Dimensions. Oxford University Press on Demand.
  • BOYER, P. 1994a. Cognitive constraints on cultural representations: Natural ontologies and religious ideas. In: L. A. HIRSCHFELD; S. A. GELMAN (Eds.), Mapping the mind: Domain specificity in cognition and culture. New York: Cambridge University Press, p. 391-411.
  • BOYER, P. 1994b. The naturalness of religious ideas. Berkeley: University of California Press.
  • BOYER, P. 2001. Religion explained: The evolutionary origins of religious thought. New York: Basic Books.
  • BOYER, P.; RAMBLE, C. 2001. Cognitive templates for religious concepts: Crosscultural evidence for recall of counter-intuitive representations. Cognitive Science 25(4): p. 535-564.
  • COFNAS, N. 2018. Religious authority and the transmission of abstract god concepts. Philosophical Psychology, 31(4): p. 609-628.
  • EVANS-PRITCHARD, E. 1956. Nuer Religion. Oxford: Clarendon Press.
  • FAZZOLARI, B. 2017. Tracing a Technological God: A Psychoanalytic Study of Google and the Global Ramifications of Its Media Proliferation. [PhD Dissertation] Florida Atlantic University, 215 pages.
  • FIRTH, R. 1959. Problem and assumption in an anthropological study of religion. Journal of the Royal Anthropological Institute, 89(2): p. 129-148.
  • GALLOWAY, S. 2017. The Four: The Hidden DNA of Amazon, Apple, Facebook, and Google. New York: Portfolio.
  • GERVAIS, W.; HENRICH, J. 2010. The Zeus Problem: Why Representational Content Biases Cannot Explain Faith in Gods. Journal of Cognition and Culture, (10): p. 383-389.
  • GERVAIS W.; WILLARD, A.; NORENZAYAN, A.; HENRICH, J. 2011. The Cultural Transmission of Faith: Why innate intuitions are necessary, but insufficient, to explain religious belief. Religion 41(3): p. 389-410.
  • GRIMES, R. 2013. The craft of ritual studies. Oxford University Press.
  • HORST, S. 2013. Notions of Intuition in the Cognitive Science of Religion. The Monist 96(3): p. 377-398.
  • KEIL, F. 1989. Concepts, kinds, and cognitive development. Cambridge: Bradford Book/MIT Press.
  • KRAFFT, T.; GAMER, M.; ZWEIG, K. 2019. What did you see? A study to measure personalization in Google’s search engine. EPJ Data Sci, 8(38).
  • KUEHN, B. 2013. More Than One-Third of US Individuals Use the Internet to Self-diagnose. Journal of the American Medical Association, 309(8): p. 756-757.
  • KWON, H. 2022. Ritual of Everyday Digital Life: Towards Human-Centred Smart Living. Archives of Design Research, 35(2): p. 27-43.
  • LARSEN, T. 2012. E. B. Tylor, religion and anthropology. The British Journal for the History of Science, 46(03): p. 467-485.
  • LAWSON, T.; MCCAULEY, R. 1990. Rethinking Religion: Connecting Cognition and Culture. Cambridge: Cambridge University Press.
  • LAWSON, T. 2000. Towards a Cognitive Science of Religion. Numen, 47(3): p. 338-349.
  • LEWIS, G. 1988. Day of Shining Red: An Essay on Understanding Ritual. Cambridge University Press.
  • MCCAULEY R.; COHEN, E. 2010. Cognitive science and the naturalness of religion. Philosophy Compass 5: p. 779-792.
  • NASS, C.; MOON, Y. 2000. Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1): p. 81-103.
  • SEMIGRAN, H.; LINDER, J.; GIDENGIL, C.; MEHROTRA, A. 2015. Evaluation of Symptom Checkers for Self Diagnosis and Triage: Audit Study. BMJ: British Medical Journal 351: p. 1-9.
  • SLEIGH, J. 2017. Google A Religion: Expanding Notions of Religion Online. In Urte Frömming.
  • SPENCER, H. 1864. First principles. 4th ed. New York: Appleton.
  • SPERBER, D. 1975. Rethinking symbolism. Cambridge, England: Cambridge University Press.
  • SØRENSEN, J. 2005. Religion in Mind: A Review Article of the Cognitive Science of Religion. Numen, 52(4): p. 465-494.
  • SWAN, T.; HALBERSTADT, J. 2019. The Mickey Mouse problem: Distinguishing religious and fictional counterintuitive agents. PLoS ONE, 14(8): p. 1-15.
  • TURNER, J. 1992. Ritual, Habitus, and Hierarchy in Fiji. Ethnology, 31(4): p. 291-302.
  • WATTS, S. 1995. Walt Disney: Art and Politics in the American Century. The Journal of American History 82(1): p. 84-110.
  • WEGNER, D.; WARD, A. 2013. How Google Is Changing Your Brain. Scientific American, 39(6): p. 58-61.
  • WHITE, J. 2012. The Church in an Age of Crisis: 25 New Realities Facing Christianity. Baker Books.
  • 1
    The idea that what defines religion is commitment to the existence of some deity has been put forward by several scholars over the years, e.g. Spencer (1864), Evans-Pritchard (1956) and Firth (1959); and seems to have been inherited by CSR scholars, e.g. Boyer (1994b), Lawson (2000), among others.
  • 2
    As Boyer puts it, “Religious concepts could not be acquired, and more radically could simply not be represented, if their ontological assumptions did not confirm an important background of intuitive principles. At the same time, they would not be the object of any attention if they did not contain some principles that are simply ruled out by intuitive expectations. (…) In order to create religious representations that have some chance of cultural survival, that is, of being acquired, memorized, transmitted, one must strike a balance between the requirements of imagination (attention-demanding potential) and learnability (inferential potential)” (Boyer, 1994b, p. 121-122).
  • 3
    (II) breaks several folk-biology expectations about animals. For instance, that animals are mortal; that animals give birth to (and have themselves been born from) animals of the same species; that animals are made of flesh and require calories in order to stay alive; that animals don't cease to exist for a certain period of time and then go back to existing; and so forth.
  • 4
    Mickey Mouse is incredibly popular. As Watts (1995) remarks, in 1966 tens of millions of people who had never heard of Franklin Roosevelt or Martin Luther King knew who Mickey Mouse was. That goes to show that the concept enjoyed tremendous cognitive success, having spread across cultures on a large scale.
  • 5
    Reference to the problem was first made by Scott Atran, upon attending a conference together with scholars such as Justin Barrett, Pascal Boyer and Thomas Lawson, whose views presented at the time amounted, cumulatively, to a broad account of religion in terms of MCI representations. Atran then posed the question of how, in principle, that account distinguished Mickey Mouse from God, or fantasy from beliefs one is willing to die for. After prompting a long and inconclusive discussion, the question soon came to be known within intellectual circles and field conferences as "the Mickey Mouse problem". See Atran (2002: p. 13-15, p. 32).
  • 6
    Gervais & Henrich's discussion came to be known as "the Zeus problem" - a separate problem that affects the MCI hypothesis as much as the Mickey Mouse problem does. Whereas the latter is the problem of explaining why only certain MCI concepts are believed in, the former is the problem of explaining "why people do not believe in other peoples' gods" (Gervais & Henrich, 2010, p. 388). Although the Zeus problem is a problem of its own, I'm not going to tackle it here; I'll focus instead on the consequences it has for the Mickey Mouse problem.
  • 7
    Gervais & Henrich have not been the only ones to suggest that there is no way to distinguish the content of religious and secular MCI concepts without appeal to context. Other contextual solutions to the Mickey Mouse problem have been offered by McCauley & Cohen (2010), Gervais et al. (2011) and Cofnas (2018). Those other proposals make no difference to the argument put forward here.
  • 8
    A quick caveat: Alexa and ChatGPT do not belong to the same class of things as Google, DuckDuckGo and Bing. The former are varieties of generative artificial intelligence. What that means is that Alexa and ChatGPT don't just search a massive database in order to come up with answers to our questions. They also frame those answers and hand them to us in natural language, within a conversational framework (which search engines like Google by and large don't). In other words, generative AI mimics ordinary human language when interacting with humans, which makes it sound, or feel, more like a real person than a search engine does. That shouldn't impact the argument being advanced here.
  • 9
  • 10
  • 11
    Barrett argues that Santa Claus is not really an MCI concept, insofar as it doesn't really have counterintuitive properties. Popular movies and songs do not consistently represent Santa as a counter-intuitive being in the technical sense. Normally, he is represented as "an especially kind and generous fellow that has surrounded himself with special friends, animals and resources, but otherwise is an ordinary human" (Barrett, 2008a, p. 155-156). According to Barrett, some facts about Santa could be mistaken for counter-intuitive properties, for instance, the fact that he flies on a carriage pulled by reindeer. However, he flies not because he has special abilities, but because his reindeer have been fed magic corn.
  • 12
    This happened to John Martinkus, a man who was seized and held hostage by a group of Iraqi militants in Baghdad. Martinkus was released unharmed after his kidnappers "googled" his name and found out that he wasn't their intended target. The tale is recounted in the 2004 report How Google can save your life, by The Guardian. Available at https://www.theguardian.com/technology/blog/2004/oct/19/howgooglecan
  • 13
    That's what Rochelle Terman, a political science researcher at the University of California, Berkeley, has found. See her article Big Data, Big Risks: Google Search Data and Election Predictions, available at https://townsendcenter.berkeley.edu/blog/big-data-big-risks-google-search-data-and-election-predictions
  • 14
  • 15
    In a 2013 CNN Money report titled Google's dreaded 'blacklist', Parija Kavilans discusses various cases of unfair blacklisting. Available at: https://money.cnn.com/2013/11/04/smallbusiness/google-blacklist/
  • 16
    It is interesting to note that Barrett's fourth criterion is the need for behaviours that reinforce belief in the god in question. He does mention, however, that the question of exactly why a god motivates its followers to undertake such behaviours requires considerably more explanation than is currently available (Barrett, 2008a, p. 154). That is, reinforcing behaviours can be performed for several reasons. They don't have to be performed because there is doubt that the god is real, or because belief in it is flimsy. So the fact that nobody actually doubts that Google exists is not a problem here.

Edited by

  • Names of the editors responsible for the evaluation:
    Inácio Helfer
    Luís Miguel Rechiki Meirelles

Publication Dates

  • Publication in this collection
    13 Jan 2025
  • Date of issue
    2024

History

  • Received
    14 Nov 2023
  • Accepted
    06 June 2024
Universidade do Vale do Rio dos Sinos - UNISINOS, Av. Unisinos, 950, CEP 93022-750, São Leopoldo - RS, Brazil, +55 (51) 3591-1122
E-mail: deniscs@unisinos.br