Artificial Intelligence, ChatGPT and Organizational Studies

Abstract The advancement of the use of Artificial Intelligence in the scientific field, through tools such as Connectedpapers and ChatGPT, has allowed us to reflect on how technological tools have become mediators and participants in the context of education and academia. In the field of organizational theories, despite the different perspectives on the incorporation of AIs into academic practice, we highlight two challenges in our daily academic life. The first challenge refers to confronting the digital colonialism that AIs impose on us, considering that they are constituted through the reproduction of language models programmed in countries of the "global north". The second concerns their unfoldings in the process of automation of academic writing in administration. We consider the need to reflect on how the uses of AIs can reproduce our place in the field of science as one of scientific data extractivism; on the limitation of the teaching of academic writing in administration to the reproduction of an "assisted programming" of hegemonic language models; and on the possibilities of disentangling as a way of counteracting this dynamic of automation of article writing in administration.


Introduction
In the field of organizational studies, debates about the interactions between humans and non-humans, specifically concerning the use of artificial intelligence, are not new (Leavitt et al., 2020; Zuboff, 1988). With the accelerated advancement of computational capacity and data processing, onto-epistemological opportunities and challenges have arisen regarding the production of science with technological artifacts (Lindebaum & Ashraf, 2021), as well as regarding its impacts on the educational process. Debates such as those proposed by Kerlinger (1973) have already approached the discussion from a managerial perspective, asking how artificial intelligence (AI) could be a skillful research tool in one of the required functions of management science: predicting variation.
Beyond the managerial vision of a unilateral organizational advance in the use of AIs, provocative questions arise in the field, such as their impact on formal (remote, hybrid, work-from-anywhere) and informal (platform workers and digital influencers) work dynamics, according to Hafermalz (2021) and Kellogg et al. (2020); new models of organizations, such as digital platforms, according to De Vaujany et al. (2021), Duggan et al. (2019), and Srnicek (2016); the reinvention of control and surveillance models, including the phenomena of algocracy and bossware, as per Bailey (2022), Elmholdt et al. (2021), and Neves et al. (2021); the existence of racism and bias in data processing in people analytics and in credit-scoring processes in fintechs, according to Silva (2022), van den Broek et al. (2021), and O'Neil (2016); and the construction of new interaction spaces, such as the metaverse, according to Egliston and Carter (2022). These are some examples of open debates that are already showing themselves to be relevant research interests.
In the face of the accelerated advance of computational capacity and data processing, critical perspectives have emerged, especially Brazilian ones, as well as onto-epistemological provocations (Faustino & Lippold, 2022) about the production of science with technological artifacts, in addition to discussions about the challenges of using AIs in teaching and learning processes. The question that has been growing in recent years, intensified since the COVID-19 pandemic in 2020, is: How have technological tools become mediators and participants in the context of education and academia? As the appropriation of these tools in scientific production becomes a constant in our daily lives, the "scares" they promote have become equally routine. The growth in the use of websites such as Connectedpapers in the review of academic production and of ChatGPT (Generative Pre-trained Transformer) in the elaboration of academic texts are examples of this phenomenon.
These chatbots (such as OpenAI's ChatGPT or Meta's Galactica) are based on natural language processing (NLP) and interact in a conversational manner. They are increasingly used in the educational process and in knowledge production (formally and informally) to answer questions, correct spelling and translation errors, and create and challenge analytical assumptions. This phenomenon compels us to reflect on the ontological potentialities and limits of their uses and, above all, on how these discussions evidence and meet the demand of mechanisms historically mobilized in the modes of scientific production and reproduction, especially in the field of organizational theories.
These AI tools perform modeling that operationalizes and exposes castrated modes of scientific production. They are primarily concerned with the quantification of production and with the replication of the structural organization of scientific articles as a standardized instruction. The model teaches, as an indisputable premise, that standardization is a fundamental element for evaluating the quality of our work. The fact is, obviously, that reproducing "models" is something that intelligent technological tools know how to do in a very sophisticated way and in a short period of time.
This logic imposes on Brazilian academia the form and content that we must produce scientifically in order to be considered relevant "players" in the international scenario. The game, then, is established through the ipsis litteris reproduction of an "assisted programming" in our academic production, derived from colonial dynamics. Consequently, linguistic models in the writing of scientific articles outside the "mold" are usually rejected in national journals for not conforming to what is expected. As Alberto Guerreiro Ramos (1995) already warned us, these frameworks are imposed by the colonial models of science production and admitted by the colonized as "the correct ones" to be replicated.
The use of these tools in the field of organizational theories can awaken and intensify a certain restlessness toward breaking productivism and the logic of colonial reproduction of articles. Thus, we propose to reflect in this text on two specific challenges of the incorporation of AIs for organizational theories: (a) the confrontation of the digital colonialism that AIs impose on us, considering that they are constituted through the reproduction of programmed language models, and (b) their unfoldings in the process of automation of academic writing in administration.
To reflect on the specifics of ChatGPT uses in the field of organizational theories, we conducted an experiment on February 14, 2023, to understand the mechanisms of AI functioning in the scientific field. We asked the tool to develop a theory for our field that would contribute to the understanding of the phenomenon of teamwork in which humans and machines are colleagues. The results of this use of ChatGPT are discussed in the next sections.

Building a "theory" with ChatGPT: The "Intelligent Partnership" Theory
ChatGPT was released in November 2022 and has become known for its ability to create human-like text, write opinion pieces, and produce music and poetry in the style of famous writers. Unlike automated text-editing tools such as Microsoft Word's text-cohesion suggestions and Grammarly, it is designed to create content, reproduce models, and even propose to build scientific theories. We share excerpts from our experiment of interacting with ChatGPT, in what it defines as the experience of "developing together", in "collaborative construction", the "theory of intelligent partnership".
Researcher: Can you write academic theories?
ChatGPT: [...] As a language model, I can generate texts about organizational theories. There are many different theories and approaches in the field of management and organizations, including classical, behavioral, contingency, systems, human resources, and other theories. I can generate texts that address these theories, their origins, their main concepts, as well as their practical implications for organizational management and leadership. However, it is important to remember that these texts generated by me must be verified and validated by a human expert in the field before being used for academic or practical purposes.
Researcher: I would like you to come up with a new theory.
ChatGPT: Sure, I can try to create a new theory for you. But please know that creating a new theory is a complex process and often requires a great deal of research and analysis in a specific field. That said, if you have a specific topic in mind, I can try to generate a preliminary theory based on the information available to me. Please provide more details about the topic you have in mind so I can get started.
Researcher: Teamwork theory in which humans and machines are colleagues
ChatGPT: One possible theory about human-machine teamwork is the theory of intelligent partnering. This theory suggests that successful teamwork between humans and machines is based on a balanced collaboration where the unique abilities of each are valued and complementary. […] In short, the theory of intelligent partnering suggests that successful collaboration between humans and machines is based on a balanced partnership, where the unique abilities of each party are valued and complementary. For this collaboration to be effective, it is important to have a clear division of tasks and responsibilities, an organizational culture that values diversity, and clear and effective communication between the parties.
Researcher: What are the assumptions in this theory?
ChatGPT: The theory of intelligent partnering, which I described earlier, assumes some important categories that are relevant to collaboration between humans and machines. Some of these categories include: complementary skills [...]; balanced division of tasks [...]; collaborative organizational culture [...].
Researcher: What would be the field for experimentation?
ChatGPT: The field for experimentation in intelligent partnership theory is wide and varied. Collaboration between humans and machines is a rapidly evolving area of research and development, with applications in a variety of sectors including industry, services, healthcare, education, and others. Some specific areas that can be explored for experimentation with the theory include: process automation [...]; healthcare [...]; education [...]; financial services [...]; transportation [...]. These are just a few of the areas where the theory of intelligent partnering can be applied and experimented with. As technology advances and new possibilities emerge, it is likely that even more opportunities will arise for experimentation and application of the theory in different contexts.
Researcher: What sources inspired the theory?
ChatGPT: The theory of intelligent partnering that we developed together in this conversation was a collaborative construct that combines ideas and concepts from various inspirational sources, including the literature of team theory and collaboration [...], artificial intelligence [...], the co-learning model [...], and current trends and practices in automation and human-machine collaboration in different industries, such as manufacturing, services, healthcare, education, transportation, and others. These are just some of the sources that inspired the theory of intelligent partnering, which can be refined and improved based on further research and experimentation in the area of human-machine collaboration.
From our experiment with the use of ChatGPT, it is necessary to consider that the proposed theory presents context, categories, and even suggested research fields for its use, enabling the identification of elements of the framework or model that, in general, is taught in the production of scientific articles. However, issues such as systematizations and theoretical elaborations, the reliability of information sources, the use of references, reflections on temporality, ethics, or the ability to create from contradictions, nuances, or non-linear logics are not yet elements considered by ChatGPT. This leads us to reflect, on the one hand, on how we are teaching what academic writing is in theorizing in organizational theories and, on the other hand, on how AI uses can be configured as a process of digital colonization (Faustino & Lippold, 2022) of our knowledge production.

Digital colonialism in knowledge production
Despite the apparent potential for academic writing, it is important to remember that AI operates based on a formal, calculative rationality that legitimizes results through probabilistic calculations, subject to abstract rules (not free of biases) and "universally" valid assumptions (Lindebaum & Ashraf, 2021). This builds knowledge production from a kind of automated "ontological blindness" (Cunliffe, 2022) that disregards the influence of the researcher's beliefs about the nature of social and organizational realities in the theorizing process.
Indeed, the automation of academic writing with the uses of AI has legitimized an ontological neglect that silences and does not recognize scientific and methodological pluralism, presenting linear, unsophisticated, and superficial answers to social phenomena, as exemplified in the experiment conducted for the construction of this text. This is because AI, by working from modeling, is not (yet) able to understand the contradictions of meaning in human language or to produce new analytical perspectives.
Another point we highlight is "who programs" and "who creates the modeling" of these AIs. Considering that most of the companies that produce these technologies come from very specific locations in the world, such as the United States and Western Europe, we must pay attention to how the process of automation of academic writing is inserted in the debate about the digital colonialism of knowledge production proposed by Faustino and Lippold (2022). For the authors, digital colonialism "is not a new phase, but one of the objective features of the current stage of development of the capitalist mode of production, and represents a large step towards an ever deeper reification of our experience and sense of reality, raising to a new level the objectification and commodification of relations, from the simplest to the most complex" (Faustino & Lippold, 2022, p. 56).
Thus, this process of objectification and commodification also takes place in the scientific field. Not only the instruments but also the processes and products of the flow of knowledge production are constituted in this continuum. This becomes evident in Brazilian administration academia, specifically in the area of organizational theory, when we identify the imposition of language models of academic writing (models imported from the United States and Western Europe), as well as of scientific diffusion. The requirement that master's and doctoral programs "deliver" research results in the form of English-language products, and the notion that a writing model is successful when read emphatically by researchers from the "global north", can help us reflect on the extent to which the search for this legitimization disguises the logic of submission to the colonizing countries. We live in a country in which most undergraduate, master's, and doctoral students are not fluent in English. The imposition of this logic also reveals the thinking of those who institutionally structure these processes of academic training and research.
Following this logic, as highlighted by Faustino and Lippold (2022), we run the risk of collectively occupying the place of late consumers not only of AI technologies but also of scientific production and theorization. This can be observed when we reflect on the ways we have automated academic writing in a colonized fashion. Faustino and Lippold (2022, p. 56) also point out that, in this process, the countries of the so-called "global south", which includes Brazil, end up becoming sites of "extractive mining of informational data or the delayed consumers of technology". In effect, this automation of academic writing reinforces what Alberto Guerreiro Ramos (1995) already pointed out about our production of knowledge being merely a reproduction of hegemonic frameworks and models. In the context of technologies like ChatGPT, we can transpose the author's analysis to the reproduction of language models, since our production is reduced to a mere data source and is not consumed as a possible modeling logic. The challenge posed in reflecting on the use of AIs from this perspective of digital colonialism, then, is to discuss the extent to which academic writing in organizational theory has become a way of belatedly consuming academic technologies of article language models from the "global north", becoming simplistic to the point that it is plausible to imagine that, in terms of form, and perhaps of content, the human dimension could be replaced by a program like ChatGPT.

Unfolding of digital colonialism in the automation of colonized academic writing
We therefore need to pay attention to who programs and builds the language models that guide these AIs. The definition of how and which (sources of) data will be processed is centralized in hegemonic and colonial models, which can reinforce our place as consumers of ontologies from the "global north" (Faustino & Lippold, 2022). This reflection dialogues with what Abdias do Nascimento (2016) called the genocide of Black epistemologies in the context of European colonization and Black enslavement in the American continent. The languages produced in the "global south" are not considered scientific or analytical frameworks amenable to theorization. Consequently, the linguistic modeling used in the automation of academic writing is based on the analytical structure of the "global north", relegating us, the "global south", to the place of late consumers of technologies and reproducers of language models, not of human producers of language models. This can be considered a process of genocide, as our human condition is replaced by the condition of consumers who can be replaced by programmable machines.
Allied to this onto-epistemological neglect and genocide, it is important to consider that there is no impartiality in the content of datafication, since the technologies serve and reproduce the interests, goals, perspectives, and ontologies of the investors of the companies developing the AIs (O'Neil, 2016; Gillespie, 2014). Thus, we open space for the automated reproduction of social biases and prejudices as valid assumptions in data processing, in the production of results, and in academic writing itself.
From the point of view of theorizing, the researcher's role implies, beyond the reproduction of models, reflective exercises about the nuances of the contradictions that constitute social reality. To understand this reality, time and life experiences become important elements in theorizing, as they can broaden the scope of a theory's possible impact on its field, as well as the relevance of theoretical categories for organizational analyses. An example refers to how the themes of gender and race have been amplified in organizational theories: the first was stimulated by the increase of women researchers in the field, and the second by the occupation of universities by non-white people, mainly Black and Indigenous people in the Brazilian case, as a consequence of racial quota policies. As we observed in the experiment, the realities of these populations are not considered constitutive elements of the language-modeling processes.
The use of ChatGPT, therefore, points to a risk of analytical regression in theorizing processes. By emphasizing framing models for theory building, from prompt instructions with data available on the Internet, with the goal of providing answers based on algorithmic logics structured in language models, it ends up reproducing historical mechanisms of silencing of existences not recognized in these so-called "rational" models, such as language models not based on writing or not operating from colonial "framings". From an analytical point of view, the task is to think of organizational theories from the creation of languages, terms, and words, and from ruptures of structures modeled on dynamics of exclusion, contradiction, and silencing.
Since AIs, like ChatGPT, work with available data, theorizations must also consider absences. Without them, we will theorize only with those who have always been "available" to the scientific field, leaving aside those who cannot be here, the unavailable, and neglecting their existence in our language models. Perhaps because of this, machismo, racism, and sexism, for example, are categories so emblematic and yet so little mobilized in our organizational analyses: once they become "available data", they will have to be considered in our language models for theorizations and, consequently, in academic publications.
And the future...

AIs are programmed through emphatically human language models. If, to some extent, their uses have enhanced our understanding of how our ways of working are limited and limiting, we understand that this is not a problem of the development of technologies but of the structural roots that sustain them and the purposes for which they have been used. To a large extent, the relationship that we establish with AIs is the same one that structurally organizes us: the relationship of domination-submission. Would it be possible to work with these technologies in scientific production without operating from this logic? This leads us to also understand that AI tools can help researchers manage their bibliographic references, identify grammatical errors, and even suggest improvements in the structure of the text. Today, AI can already be used to analyze large datasets and to identify (non-contextualized) patterns and systematize information in technically more sophisticated ways than would be done manually. The question, therefore, is the logic of articulating AIs with, and as, our scientific and academic technologies. Research on these forms of methodological incorporation of AIs in our daily work could amplify and sophisticate our understanding of their uses beyond the productivist logic and their constitution as a marketable product.
Another point that we consider controversial and worthy of reflection refers to the ethical dimension. Besides the authorship issue, it is necessary to consider that when we use AIs, the information used to feed them is stored in databases that, to some extent, can be accessed by other people. The same logic is used, for example, in food-delivery and transportation platforms. The companies that develop these platforms collect a set of sensitive data that can turn our intimacy and privacy into marketable products. For these platforms, more important than providing the service is knowing when, how, where, and what we do on a daily basis (Zuboff, 2019; Srnicek, 2016). In the case of AIs like ChatGPT, these ethical risks are also posed, in conjunction with the question of the sources and authorship of the scientific data used. How do we deal with these types of techniques and procedures in our research ethics committees? Above all, it is necessary to reflect on the relations of production, use, and dissemination of AIs with colonialism and its structuring elements, such as racism, sexism, xenophobia, misogyny, and homophobia, among others. Academic writing involves important human skills, such as creativity, judgment, and critical analysis, which cannot be replicated by algorithms, and these are elements that should remain constitutive of scientific production, especially with the increasing use of technologies to perform replicative and operational academic activities. Technological solutions and creations based on free software, for example, would dialogue with this perspective. However, they directly confront the logic of capital accumulation to the extent that they operate on the logic of sharing, which, in theory, should also subsidize our "productive chain" of knowledge and theorization.

Conclusions
In this editorial, we have proposed brief reflections on the uses of AIs and ChatGPT and on their impacts on academic writing and theorizing in organizational studies. We understand the place of this textual production to be more provocative than one of presenting answers to the uses of these technologies. It is also important that we reflect on these dynamics in our process of publishing academic productions.
At this moment, academic journals around the globe are focused on reinforcing the primary responsibility of the researcher in the face of these unprecedented ethical and legal issues, proposing that authors disclose details of their use of NLP (natural language processing) systems such as ChatGPT (see, for example, Yeo-Teh and Tang (2023), Kappel (2023), and Rossoni (2022)) and setting out arguments, sanctions, and ways to prevent non-human authorship. In fact, they are concerned about the originality and legitimacy of the authorship of works, facing a scenario of fake news and the assumed tendency of ChatGPT to invent false content and build untruths, in typical mathematical results of "spurious correlation". These should also be Brazilian concerns, as highlighted by Faustino and Lippold (2022), considering the socioeconomic structure we live in and how knowledge production has been historically structured in our country. We consider it important that Brazilian academic journals also reflect on, and elaborate, protocols, practices, and management processes for their editorial flows that consider AI as a contemporary element of academic writing and knowledge production.
Data colonialism (Faustino & Lippold, 2022) lies in "who decides who decides" (Zuboff, 2019), as we have pointed out throughout this text. That is, there is no impartiality in datafication content, since technologies serve and reproduce the interests of those who invest in, program, and disseminate AI uses (O'Neil, 2016; Gillespie, 2018). Another important point is that, in this datafication process, the knowledge used for the production of these language models is that made publicly available without verification of its veracity. This implies the possibility of disseminating false information, which can produce and reproduce stereotypes and discourses of hate and oppression, as well as rearticulate colonial mechanisms of domination, precisely by collecting, often without any state regulation, the types of data, their modes of storage and use, and their dissemination. Faustino and Lippold (2022) also highlight how datafication for the use of AI can be configured as a process of privatization of public knowledge, to the extent that there is a "capture" of collectively and collaboratively produced knowledge for its systematization in databases belonging to private companies. In effect, collective knowledge ends up becoming a product processed by AI and also becomes a contemporary mechanism for the reproduction of social inequalities, given the costs of access to and use of these technologies. We live in a country where 15.3% of the population does not have access to the Internet at home, and 42.2% of these say they do not know how to use this network, according to data from the Brazilian Institute of Geography and Statistics [IBGE] (2023).
The infamous question "Can machines think?" (Turing, 1950) loses relevance to others: How useful or dangerous are these (so-called) intelligent tools for theorizing in organizational studies? To what extent has the rampant use of AIs only reinforced our place in the digital extractivism of data and as reproducers of language models from the "global north"? We recommend caution before settling on a "single" answer in the face of a discussion so complex in so many dimensions. Provoking these scenarios is not a Luddite position against a technological revolution but a plea for critical consciousness in the academic space of knowledge construction. For now, there are more questions than answers on the table, and we believe that science should continue in constant deconstruction, maintaining spaces for plural reflections, axiologically relevant questioning, and, increasingly, deconstructing.