Abstract:
This editorial aims to share the formative path taken by the editorial board of the journal Ensaio: Research in Science Education on generative artificial intelligence (GenAI). In addition, by highlighting the foundations adopted, the questions raised, and the contradictions observed, we invite the community to an open and critical dialogue about GenAI and scientific practice. We mobilize discourses, positions, and controversies presented nationally and internationally, and, based on critical theories, we seek to expose the contradictions inherent in the production process, appropriation, distribution, and the uses and discourses associated with this technology. We emphasize the risks associated with the use of GenAI, including illegality, non-compliance, illegitimacy, implausibility, and uncriticality. In taking a critical stance towards the use of GenAI for academic production, we acknowledge its complex and intricate nature, as well as the scarcity of research on it in science education. This perspective aims to encourage the creation of communities and educational contexts in the science education field that promote in-depth study, production, and critical analysis of emerging technologies such as GenAI.
Keywords:
science education; scientific ethics; scientific integrity; risk; technology
Resumo:
Este editorial visa compartilhar o percurso formativo trilhado pelo corpo editorial da revista “Ensaio Pesquisa em Educação em Ciências” sobre a Inteligência Artificial Generativa (IAGen). Além disso, ao evidenciar os fundamentos adotados, os questionamentos suscitados e as contradições observadas, convidamos a comunidade para um diálogo aberto e crítico sobre IAGen e o fazer das ciências. Mobilizamos discursos, posicionamentos e controvérsias apresentadas nacional e internacionalmente e, a partir de fundamentos críticos, buscamos expor as contradições inerentes ao processo produtivo, à apropriação, à distribuição e aos usos e discursos associados a essa tecnologia. Propõe-se enfatizar os riscos relacionados ao uso da IAGen, incluindo ilicitude, desconformidade, ilegitimidade, implausibilidade e acriticidade. Ao nos posicionarmos criticamente com relação ao uso da IAGen para produção acadêmica, reconhecemos sua natureza complexa, intrincada e carente de produções na área de educação em ciências. Tal posicionamento aspira estimular a construção de coletivos e contextos formativos no campo da educação em ciências que promovam aprofundamentos, produções e críticas pertinentes.
Palavras-chave:
educação em ciências; ética científica; integridade científica; risco; tecnologia
Resumen:
Este editorial tiene como objetivo compartir el camino formativo recorrido por el cuerpo editorial de la Revista Ensayo sobre la Inteligencia Artificial Generativa (IAGen). Además, al evidenciar los fundamentos adoptados, los cuestionamientos suscitados y las contradicciones observadas, invitamos a la comunidad a un diálogo abierto y crítico sobre IAGen y el hacer de las ciencias. Movilizamos discursos, posicionamientos y controversias presentadas nacional e internacionalmente y, a partir de fundamentos críticos, buscamos exponer las contradicciones inherentes al proceso productivo, a la apropiación, a la distribución y a los usos y discursos asociados a esta tecnología. Se propone enfatizar los riesgos relacionados con el uso de la IAGen, incluyendo ilicitud, disconformidad, ilegitimidad, inverosimilitud y falta de criticidad. Al posicionarnos críticamente con respecto al uso de IAGen para la producción académica, reconocemos su naturaleza compleja, intrincada y carente de producciones en el área de educación en ciencias. Tal posicionamiento aspira a estimular la construcción de contextos formativos que promuevan profundizaciones, producciones y críticas pertinentes.
Palabras clave:
educación científica; ética científica; integridad científica; riesgo; tecnología
Introduction
Generative Artificial Intelligence (GenAI) benefits from an apparent unanimity: it is treated as an accepted reality, an inexorable fact, leaving us only to determine the most appropriate and convenient guidelines for coexistence. However, a distinction must be made: reality is never given, but perpetually in dispute. If we presuppose GenAI to be merely a tool, our protocol will be to develop an ethical framework coherent with our perspectives on the good and the right, to guide and regulate its use. If, on the other hand, we frame the issue as a problem of technology (in the broad sense of studying the human dynamics of historical relations with techniques and practices), we must establish ethical and political benchmarks in line with our conceptions of society, civility, and the future.
In fact, amid the modern plethora of inventions and their implications, we are pressed by an overwhelming wave of novelties and demands for immediate decisions that artificially produce urgency, leaving us no time or dedicated spaces to dwell on the issues and contradictions at the heart of our work. In our case, as science workers in the roles of professors, researchers, and editors, we are alluding in particular to the undeniable presence of GenAI, which is occupying considerable space in intellectual, scientific, and academic work, with consequences in the ethical, labor, economic, political, and environmental spheres.
With the points raised in this editorial, we sought to open space for an honest discussion, grounded in critical theory, supported by recent data, and sustained by a critical stance toward technology. We understand that this is still a preliminary discussion in the field, and we aim to prevent the unrestrained use of GenAI for the creation of synthetic academic content (which disregards copyright and is rife with ethical violations in its production dynamics) from automatically and mindlessly skewing our research and educational practices. From this perspective, this editorial serves a dual purpose: to share with the scientific community the formative path taken by the editorial board of Ensaio, and to highlight the foundations adopted, the questions raised, and the contradictions observed, calling for an open and critical dialogue on GenAI and the practice of science. In this sense, we refer to what Giroux (1997) called the language of possibility, signaling that “transformative intellectuals need to develop a discourse that unites the language of criticism and the language of possibility” (p. 163), so that denunciation and announcement form a consistent critical unity that promotes change.
Definitions, tacit (dis)agreements, and ongoing tensions
Generally referred to by the abstract concept of "Artificial Intelligence"1, GenAI has characteristics that differentiate it considerably from other models, uses, and logics of AI, as it generates textual, image, or audio content based on "learned" patterns (Peñalvo & Ingelmo, 2023). Based on the machine learning approach, AI has been utilized for decades to program machines to perform various operations, including well-established applications in programming, economics, social research, and political science (Yu, 2023; França & Monserrat, 2024). While machine learning is based on algorithms and requires a more modest computational structure (physical and logical), the deep learning technique, the foundation of GenAI, is based on more complex algorithms, such as neural networks, requiring a more robust physical, logical, and conceptual apparatus for its operation.
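To make the distinction concrete, the gap between these two families of techniques can be glimpsed in the number of adjustable parameters each requires. The sketch below is our own illustration (not drawn from the cited literature): it compares a linear classifier with a small fully connected neural network, the kind of architecture that deep learning then scales up by many orders of magnitude.

```python
# Illustrative sketch only: counting learnable parameters in a "classic"
# linear model versus a small deep (multi-layer) neural network.

def linear_model_parameters(n_features: int) -> int:
    """A linear classifier learns one weight per input feature plus a bias."""
    return n_features + 1

def mlp_parameters(n_features: int, hidden_sizes: list[int]) -> int:
    """A fully connected network stacks a weight matrix (plus biases)
    between every pair of consecutive layers."""
    sizes = [n_features] + hidden_sizes + [1]
    return sum(n_in * n_out + n_out for n_in, n_out in zip(sizes, sizes[1:]))

print(linear_model_parameters(1000))     # 1,001 parameters
print(mlp_parameters(1000, [512, 512]))  # 775,681 parameters
# GenAI models extend the same principle to billions of parameters,
# hence the "more robust physical, logical, and conceptual apparatus".
```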
Having been "popularized"2 and massively disseminated as a content-production tool that simulates aspects of human language (Natural Language Processing, NLP), offers a user-friendly interface, and operates without the need for advanced programming knowledge, GenAI has been positioned in a tense field of political and economic forces, as well as of cultural formation for its appropriation, use, production, and direction.
Large language models (LLMs), not by chance dubbed stochastic parrots (Bender et al., 2021), do not understand what they write: they merely repeat language patterns extracted from enormous volumes of text, without any real grasp of their meaning (at least at the time this text was written). This becomes especially problematic in the academic context, where authorship, creativity, theoretical and methodological rigor, and critical positioning are expected. Texts generated by these tools erase the origins of ideas and reinforce the biases, prejudices, and inequalities present in the data on which they were trained. Thus, instead of challenging these distorted representations, which are so costly to the field of education, the use of these models to produce content tends to normalize them (Birhane & Guest, 2021), posing a serious risk to any writing that aims to be transformative.
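The parrot metaphor can be made tangible with a deliberately tiny model. The sketch below is a toy of our own, and in no way how production LLMs are built: it learns only which word follows which in a miniature corpus and then samples from those observed frequencies, producing plausible-looking sequences while manifestly understanding nothing.

```python
import random
from collections import defaultdict

# Toy "stochastic parrot" (illustrative only): a bigram model that samples
# the next word from successors observed in a tiny training corpus.
corpus = (
    "science education research requires critical analysis . "
    "critical analysis requires theoretical rigor . "
    "science education requires theoretical and methodological rigor ."
).split()

successors = defaultdict(list)          # word -> observed next words
for current, nxt in zip(corpus, corpus[1:]):
    successors[current].append(nxt)

def parrot(start: str, length: int = 12) -> str:
    """Generate text by repeatedly sampling an observed successor."""
    word, output = start, [start]
    for _ in range(length):
        if word not in successors:      # dead end: no observed successor
            break
        word = random.choice(successors[word])
        output.append(word)
    return " ".join(output)

print(parrot("science"))
# e.g. "science education requires theoretical rigor . critical analysis ..."
# The sequence is statistically plausible; the "author" has no model of meaning.
```

Real LLMs replace this frequency table with billions of learned parameters, but the epistemic point the text makes stands: fluency is not understanding.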
In the context of the rapid development of LLMs (Thirunavukarasu et al., 2023) and generative chatbots, groups of researchers (e.g., Sampaio et al., 2024), journals (e.g., Flanagin et al., 2023; Leung et al., 2023; Ganjavi et al., 2024), organizations such as UNESCO (Holmes & Miao, 2024), and Brazilian higher education institutions (Schmidt, 2024) have started to publish guidelines on the supposedly responsible use of these technologies. However, this idea of responsibility needs to be problematized, especially given the severity of the environmental (Brevini, 2020, 2023; Hodgkinson et al., 2024), political (Iasulaitis & Silveira, 2025), and social (Dias & Schurig, 2024a, 2024b) implications intertwined with the existence and maintenance of these tools within a predominantly neoliberal productive world3. Inserted into the logics of power concentration, massive data extraction, and intensive energy consumption, these technologies reflect and reinforce structural inequalities (Eubanks, 2018; Cecchini & Ferrari, 2025). Thus, we understand that, more than adhering to normative guidelines, thinking critically about the use of GenAI requires situating it within the disputes that shape the present and, consequently, the possible futures of this technology in terms of academic and scientific production in education.
Even if we disregard the political, economic, ethical, and environmental implications involved, it remains unclear whether the use of GenAI yields substantial benefits for scientific research (see França & Monserrat, 2024), including the field of science education. The risks are multiple and interdependent, ranging from copyright violations, such as plagiarism, to the lack of quality and accuracy of the information generated. Added to this is the growing dependence on GenAI in key stages of research (Yu, 2023), aggravated by the opacity of content generation processes, barriers to reproducibility, threats to the reputation of science (Yu, 2023), the formation of knowledge monocultures (Messeri & Crockett, 2024) and the multiple biases of the data used in training (Chen, 2023). There is also a certain productive paradox: with the increased use of GenAI, the need for scrutiny of the generated texts grows, intensifying the overload in the evaluation and publication processes committed to maintaining scientific quality.
Unfortunately, in our editorial work, we have encountered articles and reviews containing evidence that leaves no doubt about the use of GenAI, and this is not an isolated phenomenon. As Van Noorden and Webb (2024) and Naddaf (2025) point out, similar practices have been identified in multiple scientific publishing contexts, including journals that expressly prohibit the use of GenAI in the preparation of manuscripts and reviews. This scenario is worrying and calls for urgent debate on ethics, authorship, and responsibility in scientific work.
In our professional practice, we have also observed that researchers have turned their attention almost exclusively to these tools as objects of research. This has overshadowed other digital information and communication technologies, such as virtual reality and augmented reality, which are also significant for science education. This shift in focus, driven by enthusiasm for the new “artificial intelligence,” could significantly alter the lines of research in the field, favoring what is currently in vogue and risking the neglect of important research already being conducted with other technologies. By following trends, research may ultimately become more superficial and less diverse, which weakens the long-term advancement of the field. The phenomenon of technological obsolescence thus reinforces a logic in which other objects of research lose perceived relevance.
In the face of both the consensus and the dissent surrounding GenAI, there is an apparent common concern about the risks related to copyright, ethics, and even, to some extent, possible epistemic risks (such as superficiality and the homogenization of ideas) associated with the indiscriminate use of GenAI in academic production. There are also voices recommending more normative and regulatory approaches as a way to integrate such technologies in a “responsible” manner. Some authors, such as Sampaio et al. (2024), suggest that clear guidelines can enable the use of GenAI in the research context, while others, such as Spinak (2023), go so far as to discuss its application in the stages of scientific arbitration, indicating a fertile field for debate in the construction of LLM-mediated evaluation criteria for scientific articles.
GenAI as a mythical figure: enchantment, demonization, and the historical processes in new technologies
The debate on AI, in general, has proven increasingly entangled with the diffuse implications of current ideologies and modes of production, requiring multidisciplinary approaches capable of producing complex and robust analytical systems to tackle it. To situate the debate on GenAI within the scope of a scientific journal in the field of education, we chose to engage with critical perspectives that allow us to establish dialectical relationships between the novelty status of GenAI, the aura of fears and apprehensions historically produced and currently manifested, the ideological nature of these technologies, and some pertinent material contradictions.
The novelty status of GenAI is based on fabricated needs that are essentially a product of modern life. It is a context marked by easy effusiveness, the result of a society saturated with distractions and lacking in references. This lack refers, above all, to the contemporary difficulty of establishing links with historical traditions, which often results in the perpetuation of values and actions that are superficial and immune to critical analysis. The guise of novelty is thus embedded in an ideological matrix that thrives on the lowered critical awareness of a society characterized by bureaucratization and standardization in all aspects of life (Adorno, 1996). This is a semi-formed society, disconnected from its institutional supports and sources of meaning (Feenberg, 1990), whose traditions are no longer capable of linking themselves to the cultural heritage of humanity or of incorporating profound experiences, revealing, as Benjamin (2012b) puts it, a definitive poverty of authentic experience.
In fact, this strategy proves to be quite convenient for propaganda and the dissemination of commercial goods. In a society that needs to differentiate itself through consumption (Fromm, 2024), or for a fraction of the working class that needs to stubbornly deny its historical and concrete belonging, the idea of novelty acts as a charm that leads it to an idyllic fantasy. Paradoxically, while novelty leads to fascination and a fantastic desire to possess, the technological experience also encapsulates uncertainty and fear. Computer technologies, likewise, gave rise to this contradiction.
In the early 1990s, Feenberg (1990) highlighted the ambivalence of computer technologies, questioning whether they were solely predestined to endorse an authoritarian social system or whether they had some democratic potential through the possibilities of controlling their applications and understanding them. More than thirty years ago, the problem of ambivalence, the technical and mechanical advancements in capitalist production, and the ideological basis of automation were already being addressed in critical technology theory, a period when computers and internet access became popular, highlighting the uncertainty of future working conditions.
For Feenberg (1990), artificial intelligence can take on three different meanings: a type of computer program that, despite supporting laboratory analysis and medical diagnosis, would be incapable of progressing toward simulating intellectual functions; the computer conceived as a model of the human mind, based on the rationalist paradigm dominant in our society, which conceives of thought as a kind of machinery; and the ideological slogan, which serves as a support for movements to recontextualize human beings under the model of their automaton. In any case, the myriad metaphors of human-machines or machine-humans lock us into a closed system in which we represent nothing more than a part of the mechanism, creating fertile ground for the reinforcement of ideologies of domination.
Both the concept of automation and artificial intelligence are influenced by the ambiguity of information technology, which simultaneously fosters alignment with authoritarian perspectives and opens up democratic possibilities (Feenberg, 1990).
The place computers are intended to hold in social life and the design of computer systems are intimately connected. A system designed for hierarchical control is congruent with rationalistic assumptions that treat the computer as an automaton, intended to command or replace workers in decision-making roles. A democratically designed system must instead respond to the communicative dimension of the computer through which it facilitates the self-organization of human communities, including those communities of work the control of which founds political power in the modern world. (Feenberg, 1990, p. 730).
Similarly, Vieira Pinto (2005a), when conceptualizing technology, highlights its ambivalent nature, pointing out that it is "both the mainstay and weapon of domination in the hands of the master, and the hope for freedom and the instrument to achieve it in the hands of the slave" (2005a, p. 262)4. The discomfort caused by the uncertainties produced by technologies has accompanied humanity throughout its history. The widely reported Luddite rebellion against the radical changes in labor that accompanied the increase in machinery during the Industrial Revolution (Hobsbawm, 1952), or the estrangement and fear of colonized peoples in the face of enormous ships and their then sophisticated paraphernalia, are examples of experiences revealing that technology has historically been a source of fear and fascination, admiration and rejection, deification and demonization. It is worth noting that the analysis of Luddism has reappeared in debates about GenAI, critically evoking the historical revolt against an automation that threatened not only jobs but ways of life.
Today, with the rise and spread of GenAI, what are we afraid of, after all? To meet the sophistication of the means of production and respond to its crises, technological matrices have undergone substantial changes, rapidly transitioning from analog to digital, from an explicit function and domain to complex networks of relationships and interests, and, above all, confirming the old fears of “humanized machines.” It is not surprising, then, that the main domain of fear in the face of the spread of GenAI is in the workplace, whether in the fear of replacement of established positions in the workforce, unfair changes in its dynamics, domination and apparent dishonest competition with our creative and productive capacities, or the loss of supposed control over production and products (McAfee, 2024).
Since the Industrial Revolution, the fear of the mechanization of production processes has haunted the imagination of workers, who are constantly faced with a concrete threat, even if it is fueled by fantasies: human work will be dominated by machines in their various forms and functions. In fact, history has shown that this is neither an outright replacement nor a decrease in the use of labor power; quite the opposite.
One of the substantive problems of technological domination, with its machinery, robots, and automation, is that it challenges the well-established social division of labor, now extending its reach to intellectual work. Researchers and knowledge producers find themselves constrained precisely in their creative work. The concern becomes more acute as the threat moves beyond manual tasks such as threading, sewing, or farming. Now, machinery intervenes in the sphere of scientific production, deepening the sophistication of domination.
Under the promise of enhancing reproduction and mimicking creation, GenAI contributes to the loss of the aura of production. In this sense, we recall Benjamin (2012a) and his warning about the loss of authenticity in art through the rise of reproduction techniques. For the author, the technical reproducibility of works of art has resulted in the loss of their aura, which is “a strange web of space and time: the unique appearance of a distance, no matter how close it may be” (p. 184). Consequently, the dimensions of distance (opposed to the tendencies of the masses to possess), the power of worship (and cultivation), and the adequate insertion of the work in the field of tradition are lost. Analogous to art, products of human knowledge expressed textually, for example in essays, articles, or poetry, also experience a dizzying decline in authenticity and their auratic forms on the stage of new technological configurations.
Under the pretext of facilitating or streamlining work, reproductive practices in the context of GenAI suffer from the myth of control (Feenberg, 1990) and the illusion of mechanization as an assistant. In fact, the ideological dimension of computer technologies operates functionally to ensure the dynamics of capitalist production and the social organization favorable to its reproduction. In the experience with GenAI, an essential distinction must be made between the dimensions of use/consumption and those of creation/production. Choosing a search topic, defining its focus, or deciding when to use a tool does not, in itself, constitute participation in the conception of the technology, in control of the code that processes input and output data, in the material consequences of production, or in the very dynamics of the work that will be affected by the content produced.
As Marx (2011) pointed out about the increase in machinery in industrial production, technological transformation has not led to an improvement in workers' quality of life or an expansion of their labor skills, nor has it reduced their workload. Thus, the idea of the assistant computer would make sense in a context where the exploitation of labor was not the hallmark of the current production model. Otherwise, it will remain a device to conceal work overload, the loss of creative capacity, and the decline of moments favorable to creation.
"The worker's activity, reduced to a mere abstraction of activity, is determined and regulated on all sides by the movement of the machinery, and not the opposite. The science which compels the inanimate limbs of the machinery, by their construction, to act purposefully, as an automaton, does not exist in the worker's consciousness, but rather acts upon him through the machine as an alien power, as the power of the machine itself. The appropriation of living labour by objectified labour - of the power or activity which creates value by value existing for-itself - which lies in the concept of capital, is posited, in production resting on machinery, as the character of the production process itself, including its material elements and its material motion." (Marx, 2011, p. 581).
Thus, the dialectical relationship between the dynamics of production and the social nature of work is revealed. Contrary to the unyielding enthusiasm surrounding technology, Vieira Pinto (2005b), in analyzing cybernetics, recognizes in it a “conservation of essence under variation of form” (p. 497). Indeed, it should be noted that technological transformations, although disruptive to the social fabric, are historical products of their time, embedded in the logic of current social production and responding to hegemonic interests. It would not be unreasonable to say, therefore, that the exploitative and domineering nature of colonial mining continues to set the tone for data mining, for example, subjugating and threatening the sovereignty and autonomy of peoples.
The historical changes in production dynamics and the sophistication of the apparatus reveal a still quite dramatic scenario: the new alliance between think tanks and Big Tech. The power accumulated by Big Tech raises real concerns. Everyday life, increasingly dependent on these companies, is vulnerable to failures and blackouts that can affect everything from communication to essential services. When everything passes through a few hands, the risks are no longer just technical, but also social.
Based on consolidated social research strategies, trend analysis, and public sentiment, and active in the design of manipulation and persuasion strategies, think tanks in the US now operate on an industrial scale through Big Tech companies. The giants of information and communication technology, beneficiaries of Silicon Valley, owners of gigantic data centers that consume scarce raw materials (exploiting rare earths, for example), large amounts of electricity, and highly polluting production processes, call the shots: they dominate technical codes, extract and commercialize personal data and desires expressed in algorithms, influence wills and trends, and operationalize them as a form of manipulation5, taking the lead in the politics of domination (Zuboff, 2018).
Faced with this scenario, Haraway's (2009) perspective prompts us to consider technology as more than a functional extension of the human body, beyond neutrality. For her, technology is an intrinsic part of networks of power and of the production of subjectivities. Her figure of the cyborg challenges dichotomies such as human-machine, opening space to understand technologies as ambiguous territories: they can be tools of control, but they also carry subversive potential. In the context of GenAI, what is at stake is not only their technical use, but how they can or cannot strain hegemonic structures of knowledge and neoliberal logics of productivity. Perhaps the key to her idea lies in critically engaging with the contradictions of technology, beyond an acceptance-rejection duality, and taking a politically situated stance toward the forms it takes in our practices of both writing and doing science.
Despite other urgent and possible fronts for discussion in the field of AI, such as the environmental impacts of data centers, political control over access to the raw materials that fuel current computer technologies, tensions and urgent needs for regulation, the reconfiguration of work dynamics, and the sophisticated constraining of workers' skills and knowledge, it is worth emphasizing the relevance of serious discussion based on concrete foundations. Added to this is the importance of analyzing social media advertising for GenAI in terms of the persuasive resources used to disseminate products for school, academic, and professional use.
In this dialogue, we acknowledge our reluctance to accept fatalism, which takes the problem as something given and irremediable. We also recognize the risk of a mythical understanding of AI technology, conceived in a historical vacuum and resulting in a sense of dazzlement at the achievement itself. Thus, we align ourselves with Vieira Pinto (2005a, 2005b), who argues for treating the subject from a dialectical perspective: historical analysis and attention to the movements of concrete categories. In this scenario, resistance may, above all, mean refusal. Slowing down and declining to go with the flow may be the most sensible attitude for those who still hold emancipation as their formative horizon.
As workers in science and education, we cannot fail to highlight the proposal of the Brazilian Artificial Intelligence Plan (2024-2028) (MCTI, 2024), which, among its structural actions, includes investments in: AI Infrastructure and Development; AI Dissemination, Education, and Training; AI for Improving Public Services; AI for Business Innovation; and Support for the AI Regulatory and Governance Process. Under the slogan “AI for the good of all,” the ongoing actions with immediate impact seek to address specific problems in several priority areas, including education. In the educational sphere, with an investment of almost R$ 29 million, the AI project is expected to address: monitoring student attendance in basic education; tackling dropout and truancy at the basic and university levels; improving the monitoring of purchases under the National School Feeding Program (PNAE); supporting teachers and school administrators in evaluating student activities for better intervention in literacy, addressing the challenge of increasing the time available to teachers for analytical and pedagogical tasks; and improving student results in mathematics, overall learning levels, and well-being. Some of the many contradictions the program presents, still awaiting qualified critical analysis, include the risks of exposing sensitive data, the mechanization of work, overload, the overexploitation of teaching staff, and technocratic solutions for deep-rooted problems.
Science workers in publishing: formative processes and values
It is necessary to reiterate, with the clarity that the historical moment demands, that scientific editors and authors are, first and foremost, science workers. This is not a rhetorical metaphor, but a political and epistemological recognition of the place we hold in the creation, curation, and circulation of scientific knowledge. In this sense, the academic work we conceive is not limited to the creation of content but involves a historical and social commitment to critical responsibility, rigorous and transparent research methods, and the intellectual integrity that sustains science as a public good.
In this scenario, as we have argued so far, the debate on the use of GenAI in editorial processes and academic production cannot be captured by technocratic trends or uncritical enthusiasm. There are values of science that are non-negotiable: responsible and committed authorship, transparency and traceability of sources, intellectual autonomy, and commitment to informed debate, among others. Therefore, in addressing this topic, we reiterate what we have already shared in previous editorials: our ethical position in this socially lively debate is achieved through the formative work of our collective. This is how we have tackled other complex debates, from peer review to open science, and this is how we also understand the place of the GenAI debate.
The formulation of an editorial policy that takes into account the impacts and challenges brought about by GenAI has been, for our editorial collective, a less linear process than we initially imagined. At first, inspired by emerging guidelines from major publishers (e.g., Springer Nature, Elsevier, Sage Publications, Taylor & Francis6) and by SciELO's guidelines (2023, 2024), we followed what seemed to us a natural path: trying to delimit permitted and non-permitted uses in the process of submitting and publishing manuscripts.
This more prescriptive approach seemed to make sense, since it would be possible, we supposed, to list what is acceptable (for example, using AI to review grammar) and what would be forbidden, such as delegating to GenAI the production of an entire article or of textual fragments of it. However, as we moved forward with this attempt at standardization, we faced dilemmas that were difficult to overcome with the knowledge we have today: where exactly should we draw the line? Is using GenAI to generate titles within the bounds of the acceptable? What about helping to translate texts? Reorganizing bibliographical references? Adding paragraphs? Creating schematic images? We realized that, in attempting to control the technology through an objective list of behaviors, we would inevitably be confronted with an opaque, hectic, and unregulated field. We understand that it is not a question of banning the technology, but of critically situating it in our field as well. This means creating the conditions for publishers, reviewers, authors, and readers to understand the impacts, scope, and limits of these technologies, based on ethical references that do not bend to the logic of immediacy and anxious productivity.
Faced with the impasse of setting artificial limits on the use of these technologies, we began to consider an alternative path: instead of standardizing specific conduct, why not base ourselves on principles? Inspired by our previous experiences with the creation of declaration models for ethics and open science (see Massi & Silva, 2024; Azevedo & Mendonça, 2024; Mendonça et al., 2023; Bizerra & Sá, 2022), we thought that authors could self-declare their knowledge of and compliance with principles considered necessary within the ethical conduct expected of researchers (e.g., Fapesp, COPE, and ANPed's Ethics and Research in Education series7). This model seemed to us less a surveillance instrument and more an ethical commitment tool.
This turnaround was mainly due to the desire to escape from a logic of control which, in addition to being ineffective and unsustainable in the face of the editorial challenges of dealing with already overburdened teams, tends to reproduce inspection models that have little to do with the effective ways of producing knowledge today. More than monitoring, we aim to encourage reflection. In doing so, we recognize that the use of GenAI is not just a technical issue, but also a symbolic and political one: it forces us to revisit the very meaning of academic production in times of growing distrust of its social relevance, as well as raising doubts about the future of scientific practice itself.
Initially, the formative debates held within our editorial collective highlighted the need to systematize possible principles for the use of GenAIs8. Nevertheless, starting from principles comes with a certain fragility, since it disregards all the debate we have brought up here. We could easily fall into the illusion that the researcher, in isolation, could guarantee a responsible or transparent use of these technologies, assuming that these qualities could be ensured by individual decisions or by an abstract adherence to ethical statements that are still little discussed in the field.
As we see it, this approach, which tends to place responsibility on the individuals who use the technology, ignores the fact that GenAI systems themselves are not, in essence, transparent, as we have pointed out. Nor does their technical-scientific structure, anchored in opaque models and centralized in large corporations, allow these attributes to be effectively controlled by those who use them. By taking principles as if they were guarantees or sufficient mechanisms for regulation, we run the risk of naturalizing the material and ideological asymmetries that structure the development and circulation of these technologies.
Furthermore, the very idea of “responsible use” can act as a smokescreen that shifts the debate to a strictly moral plane, distancing it from a broader political critique of the structural conditions that produce these systems. These elements are hardly taken into account by principles that ultimately tend to reaffirm a self-referential logic of accountability, without considering the fundamental contradictions of GenAI use.
Thus, although the systematization of principles can have a heuristic or organizing function in the debate, it cannot be taken as a starting point or as an end in itself. Instead of principles, it seems promising to us to contextualize the relationship between science workers and the new technological configurations by building interdisciplinary alliances and governance models rooted in social justice and technological sovereignty, enabling collective practices of resistance9 capable of confronting the extractivist and privatist logics that currently hegemonize the field.
Thus, considering that our discussion here focuses critically on academic writing in the field of science education, we believe it is more productive to shift the focus of the debate from the normative idea of principles to problematizing the dimensions of risk that run through the use of GenAIs in this context. This shift is not just terminological: it carries a political inflection. While principles tend to suggest stability (which has its role, as is the case with historically consolidated discussions on research ethics), the dimensions of risk allow us to deal with the tensions and contradictions that emerge from the incorporation of these technologies into academic practice; especially in a field that is critically built on the relationships between science, education, and society. We have thus organized the dimensions of risk related to the use of GenAI in academic production, seeking to highlight aspects that can be overlooked when adopting a simplistic or uncritical regulatory perspective.
The risk of illegality (Does the use violate laws, norms, or terms?) refers to non-compliance with legal and/or institutional norms and the violation of individual and collective rights (especially in light of the General Data Protection Law (Brasil, 2018)), as well as practices that sidestep the guidelines of research ethics committees in their different instances. These include the Research Ethics Committees, Plataforma Brasil (the national ethics registration and processing system), international ethics committees, and guidelines established by research funding agencies such as CNPq, Capes, and Fapesp. There is also the risk of disregarding the terms of use, service policies, and privacy policies of the GenAI companies and platforms used.
The risk of non-compliance (Does the use contradict established editorial standards and/or terms of use?) concerns conduct that disregards the ethical commitments of scientific research and of the journal, including disregard for originality, intellectual honesty, and clarity regarding the methods and paths of scientific production.
The risk of illegitimacy (Is the use inconsistent with the scientific assumptions of the field?), in turn, is based on the absence of a careful analysis of the pertinence of the use of GenAI, considering the epistemological specificities of the areas, the consensual practices, and the principles of the scientific community. In this sense, an apparent neutrality is assumed in the responses generated by GenAIs, which can mask disputes over meaning, erase epistemological conflicts, and reinforce uncritical techno-scientific consensus.
Expanding on these considerations, we highlight the risk of implausibility (Is the use scientifically necessary and justified in the context of the research?), related to the absence of a solid and contextualized rationale for the choice to use GenAI. In this case, its application is not justified from a scientific point of view and lacks a critical analysis by the researchers.
Finally, the most alarming risk refers to the weakening of critical analysis, of the problematization of power structures, and of the ability to challenge stabilized discourses, which is precisely what defines, or should define, academic writing in the field of science education: the risk of uncriticality (Does the use consider an expanded perspective on the use of GenAI?). This risk is characterized by the absence of a comprehensive analysis of the multiple impacts (ethical, socio-political, environmental, and technological, among others) resulting from the use of GenAI, as well as by the disregard of its limitations and potential effects, especially those that compromise fundamental values of academic production, such as the avoidance of bias, algorithmic transparency, and the exercise of autonomous thinking.
These risks should not be understood as watertight compartments, but rather as overlapping dimensions that play an essential role in problematizing the use of GenAIs in academic practices. Taken together, they help keep the focus on the formative implications interwoven in academic writing, insofar as they encourage committed and critical writing and foster reflection on scientific practice itself (including its methods, choices, and responsibilities).
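As a thought experiment, the five dimensions can even be read as a reflective screening aid. The sketch below is our own hypothetical illustration, not an instrument adopted by the journal: it encodes the guiding questions above and flags the dimensions that remain open for discussion, with the caveat that overlapping risks invite debate rather than deliver verdicts.

```python
from dataclasses import dataclass

# Hypothetical screening aid (our illustration, not journal policy):
# the five risk dimensions and their guiding questions, as stated above.

@dataclass
class RiskDimension:
    name: str
    guiding_question: str

RISK_DIMENSIONS = [
    RiskDimension("illegality", "Does the use violate laws, norms, or terms?"),
    RiskDimension("non-compliance",
                  "Does the use contradict editorial standards and/or terms of use?"),
    RiskDimension("illegitimacy",
                  "Is the use inconsistent with the scientific assumptions of the field?"),
    RiskDimension("implausibility",
                  "Is the use scientifically necessary and justified in the research context?"),
    RiskDimension("uncriticality",
                  "Does the use consider an expanded perspective on the use of GenAI?"),
]

def flagged(answers: dict[str, bool]) -> list[str]:
    """Return the dimensions still open for discussion.

    True means the risk has not been ruled out; unanswered dimensions
    stay open, since silence is not a justification."""
    return [d.name for d in RISK_DIMENSIONS if answers.get(d.name, True)]

# Example: a manuscript that used GenAI only for grammar review
# (hypothetical self-assessment values).
print(flagged({"illegality": False, "non-compliance": False,
               "illegitimacy": False, "implausibility": False}))
# ['uncriticality'] -> even a modest use still calls for collective reflection.
```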
We illustrate these risks with a recent case involving researchers who conducted an unauthorized experiment in an online community, with millions of users, dedicated to argumentative debate. For four months, without informing participants or obtaining prior consent, the researchers used AI-based language models to post comments, aiming to test the persuasive power of the technology against that of humans. To increase the effectiveness of the arguments, the bots assumed fictitious identities, ranging from emotionally appealing to controversial personas. The experiment generated a strong reaction from the online community, which felt betrayed, culminating in a formal complaint to the university and the removal of the accounts involved from the platform10.
In light of legality: is it possible to consider legitimate an experiment that omits participant consent? How does this type of practice stand up to regulatory frameworks that ensure the right to privacy? Regarding compliance: would it be acceptable, from the standpoint of scientific ethics, for a study to conceal the use of AI and fail to explain its methodology under the standards of integrity required by journals? How can transparency and intellectual honesty be ensured in such cases? From the point of view of legitimacy: would this methodological strategy be appropriate to the specific field in which it was applied? Considering the practices and values of a digital community guided by genuine interactions, where is the line between experimentation and a breach of trust? Plausibility leads us to ask: was there a solid scientific justification for adopting this type of approach? Were there no alternative ways to investigate digital persuasion without resorting to such strategies? Finally, criticality invites reflection: what ethical, social, and political impacts are implied in this type of intervention? How does it affect trust in future experimental studies in public spaces? We understand that cases like this require ongoing and committed debate, marked as they are by the need for transparency and shared responsibility.
Thus, the editorial policy we aim to construct through dialogue does not seek to end the debate; instead, it aims to encourage it. Because if there is one thing that this (not so new) technological context imposes on us, it is precisely the need to re-discuss the foundations, purposes, horizons, and productive model of what we currently conceive as scientific production. It would be illusory to ignore that modes of academic production are being reconfigured by technological transformations that operate with a strong ideological bias and material density. Therefore, it is imperative that the various agents involved in the ecosystem of scientific knowledge production reflect and critically discuss the meanings and consequences of these changes (see Vasconcelos and Marušić, 2025).
In times of fast-paced productive dynamics (Stengers, 2018) and algorithmic misinformation, defending formative action as a principle is also a political choice. It is a way of affirming that, even in the face of neoliberal pressures and (seemingly) easy solutions, the production of scientific knowledge is (and must continue to be) a territory of collective responsibility.
Final considerations
Paulo Freire, in Fear and Boldness: the teacher’s daily life (Freire & Shor, 1986), recalls that if there is fear, it is because there are dreams. In this sense, the author reflects: “Fear exists in you precisely because you have a dream. If your dream were to preserve the status quo, then what would you have to fear?” (p. 70). For approximately half a century, we have sought to advance in terms of tracing emancipatory paths to overcome an unequal, economically dependent society marked by historical injustices. To this end, we have set ourselves the unequivocal challenges of confronting anti-democratic political models, regressive ideologies, exploitative productive structures, and practices that are responsible for maintaining the status quo. Thus, it is imperative not to stop in the face of the paralysis of the apparent consensus produced by hegemony.
Committed to the continuity of the historical dreams of emancipation, autonomy, and sovereignty, this editorial also serves as a manifesto. Given the expansion of the use of content produced by GenAI, we present a discourse of warning, caution, and slowing down, one inevitably imposed on us by the appropriation of new technologies under their status as instruments, tools, or assistants devoid of problematization or historicity. In this sense, an emphatic tone is essential: for a country that is dependent and marginalized in the global productive logic, there is (still) no safe GenAI or use free of contradictions.
That said, we cannot deny that formative and creative processes, as well as scientific practices, are already significantly permeated by the use of GenAI. For this very reason, we feel pressured to accept it as an irreversible inevitability. On the other hand, we cannot fail to mention the groups, nuclei, and intellectuals dedicated to criticizing and resisting these practices. Precisely because of the contradictions inherent in the productive and discursive practices involving GenAI, we aim to mobilize science workers, teachers, publishers, scientists, and readers to engage in a frank, analytical, and critical examination of the current productive configurations and the future of science.
Due to ethical, political, and scientific concerns regarding the new generation of researchers and the future of science, we oppose GenAI, as currently configured, for textual generation and the review process. Authorship, assessment, and editorial work, facing all the obstacles of creative, authentic, and ethical work in precarious conditions, must still be carried out and remain the responsibility of those who, at this historic moment, define the paths of future science.
Thus, in concluding this editorial, we acknowledge the breadth of the problem and leave open the possibilities for further elaboration and discussion, questioning how we relate to these technologies and the meanings we produce for academic writing in this new scenario. Beyond the dimensions of risk and the theoretical discussions, as well as recent cases that support them, we recognize that these issues are not exhausted by conceptual diagnoses but require political positions, collective practices, and institutional experiments capable of challenging the normalization of GenAI use. In this sense, we are interested in reflecting with colleagues and collectives in the field of science education on issues that help us keep the debate and critical action alive in the face of these technologies:
- To what extent does the use of GenAI contribute significantly to the quality and theoretical and methodological rigor of science education research?
- Would it be reasonable to think about a sociotechnical adaptation (Dagnino, 2014) of GenAI with popular and emancipatory meanings for the working class and scientific work?
- What are the possibilities that GenAI technology will favor democratic and sovereignty-strengthening processes when it comes to a product dominated by hegemony?
- Is it possible to define milder and/or more critical uses of GenAI in the process of science production and dissemination?
Due to the complexity, novelty, and scarcity of consolidated references on the subject in the field of education, we emphasize the need to address it in formative spaces and invite the community to develop qualified theories, methods, and meanings of research in education and teaching through new technological arrangements. We believe it is essential to promote these discussions in the various spaces we occupy (classrooms, collectives, research groups, collegiate bodies, assemblies, forums, specialized literature, and other educational contexts) so that this debate can be constructed in a critical, ethical, and contextualized manner.
Acknowledgments
We extend our appreciation to the editorial team of Ensaio: Research in Science Education for the shared discussions and the space for constant critical and dialectical debate, which significantly contributed to the preparation of this text. We are also grateful to the readers of the first drafts, Luiz Gustavo Franco, Luciana Massi, Daniel Guimarães, and Valeria Cernei, whose invaluable feedback was instrumental; in particular, the last three, whose rigorous observations helped deepen arguments and broaden perspectives.
References
- Adorno, T. W. (1996). Teoria da Semicultura. Educação e Sociedade, XVII(56), 388-411.
- Azevedo, N. H., & Mendonça, P. C. C. (2024). Dados abertos na pesquisa em educação em ciências: perspectivas, desafios e possibilidades. Ensaio Pesquisa em Educação em Ciências, 26, e51515. https://doi.org/10.1590/1983-21172022240172
- Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623). https://doi.org/10.1145/3442188.3445922
- Benjamin, W. (2012a). A obra de arte na era de sua reprodutibilidade técnica. In W. Benjamin, Magia e técnica, arte e política: ensaios sobre literatura e história da cultura (8a ed., pp. 179-212). São Paulo, SP: Brasiliense.
- Benjamin, W. (2012b). Experiência e pobreza. In W. Benjamin, Magia e técnica, arte e política: ensaios sobre literatura e história da cultura (8a ed., pp. 123-128). São Paulo, SP: Brasiliense.
- Birhane, A., & Guest, O. (2021). Towards decolonising computational sciences. Women, Gender & Research. https://pure.mpg.de/rest/items/item_3287104_1/component/file_3287105/content
- Bizerra, A. F., & Sá, L. P. (2022). Vamos conversar sobre autoria? Ensaio Pesquisa em Educação em Ciências, 24, e39592. https://doi.org/10.1590/1983-21172022240112
- Brasil. (2018). Lei nº 13.709, de 14 de agosto de 2018. Lei Geral de Proteção de Dados Pessoais (LGPD). Diário Oficial da União: seção 1, Brasília, DF, 156(157), p. 1. (Atualizada pela Lei nº 13.853, de 8 de julho de 2019). https://www.planalto.gov.br/ccivil_03/_ato2015-2018/2018/lei/L13709.htm
- Brevini, B. (2020). Black boxes, not green: Mythologizing artificial intelligence and omitting the environment. Big Data & Society, 7(2). https://journals.sagepub.com/doi/full/10.1177/2053951720935141
- Brevini, B. (2023). Myths, techno solutionism and artificial intelligence: Reclaiming AI materiality and its massive environmental costs. In S. Lindgren (Ed.), Handbook of Critical Studies of Artificial Intelligence. Edward Elgar Publishing.
- Cecchini, V. K., & Ferrari, P. (2025). A tecnodiversidade nos movimentos sociais populares: articulando inovação social na resistência à extração e controle capitalista da terra, do alimento e dos saberes. In S. Iasulaitis & S. Amadeu da Silveira (Orgs.), Estudos sociopolíticos da Inteligência Artificial (pp. 387-413). Campina Grande: EDUEPB.
- Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications, 10(1), 1-12. https://doi.org/10.1057/s41599-023-02079-x
- Dagnino, R. (2014). Tecnologia social: contribuições conceituais e metodológicas. Campina Grande; Florianópolis: EDUEPB; Insular.
- DeepSeek-AI, Guo, D., Yang, D., Zhang, H., Song, J., Zhang, R., Xu, R., ... & He, Y. (2025). DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948. https://arxiv.org/abs/2501.12948
- Dias, T., & Schurig, S. (2024a, 5 de junho). Moderadores subterrâneos: Meta paga centavos por checagem sobre enchentes no RS, violência e política para treinar inteligência artificial. Intercept Brasil. https://www.intercept.com.br/2024/06/05/meta-paga-centavos-por-checagem-sobre-enchentes-no-rs-violencia-e-politica-para-treinar-inteligencia-artificial/ Acesso em: 11 maio 2025.
- Dias, T., & Schurig, S. (2024b, 22 de julho). Proletários de plataforma: Como a indústria de inteligência artificial lucra criando uma nova classe trabalhadora sem direitos no Brasil. Intercept Brasil. https://www.intercept.com.br/2024/07/22/inteligencia-artificial-classe-trabalhadora-sem-direitos-no-brasil/ Acesso em: 11 maio 2025.
- Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press.
- Feenberg, A. (1990). Post-industrial discourses. Theory and Society, 19, 709-737. https://doi.org/10.1007/BF00191895
- Flanagin, A., Kendall-Taylor, J., & Bibbins-Domingo, K. (2023). Guidance for authors, peer reviewers, and editors on use of AI, language models, and chatbots. JAMA, 330(8), 702-703. https://jamanetwork.com/journals/jama/fullarticle/2807956
- França, T. F., & Monserrat, J. M. (2024). The artificial intelligence revolution... in unethical publishing: Will AI worsen our dysfunctional publishing system? Journal of General Physiology, 156(11), e202413654. https://doi.org/10.1085/jgp.202413654
- Freire, P., & Shor, I. (1986). Medo e ousadia: o cotidiano do professor. Rio de Janeiro, RJ: Paz e Terra.
- Fromm, E. (2024). Ter ou ser? São Paulo, SP: Editora Planeta.
- Ganjavi, C., Eppler, M. B., Pekcan, A., Biedermann, B., Abreu, A., Collins, G. S., ... & Cacciamani, G. E. (2024). Publishers' and journals' instructions to authors on use of generative artificial intelligence in academic and scientific publishing: bibliometric analysis. The BMJ, 384. https://doi.org/10.1136/bmj-2023-077192
- Giroux, H. A. (1997). Os professores como intelectuais: rumo a uma pedagogia crítica da aprendizagem. Porto Alegre, RS: Artes Médicas.
- Haraway, D. J. (2009). Manifesto ciborgue: ciência, tecnologia e feminismo-socialista no final do século XX. In T. Tadeu (Org.), Antropologia do ciborgue: as vertigens do pós-humano (2a ed., pp. 33-118). Belo Horizonte, MG: Autêntica Editora.
- Hobsbawm, E. J. (1952). The machine breakers. Past & Present, 1, 57-70.
- Holmes, W., & Miao, F. (2024). Guia para a IA generativa na educação e na pesquisa. UNESCO Publishing. https://unesdoc.unesco.org/ark:/48223/pf0000390241
- Legg, S., & Hutter, M. (2007). A collection of definitions of intelligence. In B. Goertzel & P. Wang (Eds.), Artificial General Intelligence: Concepts, Architectures and Algorithms (pp. 17-24). Amsterdam: IOS Press.
- Leung, T. I., de Azevedo Cardoso, T., Mavragani, A., & Eysenbach, G. (2023). Best practices for using AI tools as an author, peer reviewer, or editor. Journal of Medical Internet Research, 25, e51584. https://www.jmir.org/2023/1/e51584
- Makridakis, S., & Polemitis, A. (2023). Human Intelligence (HI) versus Artificial Intelligence (AI) and Intelligence Augmentation (IA). In Forecasting with Artificial Intelligence: Theory and Applications (pp. 3-29). Cham: Springer Nature Switzerland.
- Marx, K. (2011). Grundrisse: manuscritos econômicos de 1857-1858: esboços da crítica da economia política. São Paulo, SP; Rio de Janeiro, RJ: Boitempo; Ed. UFRJ.
- Massi, L., & Silva, R. L. F. (2024). O papel das revistas científicas na ética em pesquisa. Ensaio Pesquisa em Educação em Ciências, 26, e55294. https://doi.org/10.1590/1983-21172022240198
- McAfee, A. (2024). Generally faster: The economic impact of generative AI. The MIT Initiative on the Digital Economy (IDE). https://ide.mit.edu/wp-content/uploads/2024/04/Davos-Report-Draft-XFN-Copy-01112024-Print-Version.pdf?x76181
- Mendonça, P. C. C., Franco, L. G., Massi, L., & Coelho, G. R. (2023). Experiências da revista Ensaio Pesquisa em Educação em Ciências com avaliação por pares aberta. Ensaio Pesquisa em Educação em Ciências, 25, e42617. https://doi.org/10.1590/1983-21172022240137
- Messeri, L., & Crockett, M. J. (2024). Artificial intelligence and illusions of understanding in scientific research. Nature, 627(8002), 49-58. https://doi.org/10.1038/s41586-024-07146-0
- Ministério da Ciência, Tecnologia e Inovação (MCTI). (2024). IA para o Bem de Todos: Proposta de Plano Brasileiro de Inteligência Artificial 2024-2028. Apresentação na Reunião do Pleno do Conselho Nacional de Ciência e Tecnologia, 29 de julho de 2024. Brasília, DF: MCTI.
- Naddaf, M. (2025). AI is transforming peer review - and many scientists are worried. Nature, 639(8056), 852-854. https://doi.org/10.1038/d41586-025-00894-7
- Peñalvo, F. J. G., & Ingelmo, A. V. (2023). What do we mean by GenAI? A systematic mapping of the evolution, trends, and techniques involved in Generative AI. International Journal of Interactive Multimedia and Artificial Intelligence, 8(4), 7-16.
- Sampaio, R. C., Sabbatini, M., & Limongi, R. (2024). Diretrizes para o uso ético e responsável da Inteligência Artificial Generativa: um guia prático para pesquisadores. São Paulo: Editora Intercom. https://prpg.unicamp.br/wp-content/uploads/sites/10/2025/01/livro-diretrizes-ia-1.pdf
- Schmidt, S. (2024, agosto). Universidades brasileiras discutem regras de uso de inteligência artificial. Revista Pesquisa FAPESP, 342. https://revistapesquisa.fapesp.br/universidades-brasileiras-discutem-regras-de-uso-de-inteligencia-artificial/ Acesso em: 11 maio 2025.
- SciELO. (2023). Guia de uso de ferramentas e recursos de Inteligência Artificial na comunicação de pesquisas na Rede SciELO. https://wp.scielo.org/wp-content/uploads/Guia-de-uso-de-ferramentas-e-recursos-de-IA-20230914.pdf Acesso em: 6 março 2025.
» https://wp.scielo.org/wp-content/uploads/Guia-de-uso-de-ferramentas-e-recursos-de-IA-20230914.pdf -
SciELO. (2024). Critérios, política e procedimentos para a admissão e a permanência de periódicos na Coleção SciELO Brasil Disponível em: Disponível em: https://www.scielo.br/media/files/20240900-Criterios-SciELO-Brasil.pdf Acesso em 06 março 2025.
» https://www.scielo.br/media/files/20240900-Criterios-SciELO-Brasil.pdf -
Spinak, E. (2023). ¿Puede la IA hacer arbitrajes confiables de artículos científicos? . SciELO en Perspectiva Disponível em: Disponível em: https://blog.scielo.org/es/2023/12/06/puede-la-ia-hacer-arbitrajes-confiables-de-articulos-cientificos/ Acesso em23 abril 2025.
» https://blog.scielo.org/es/2023/12/06/puede-la-ia-hacer-arbitrajes-confiables-de-articulos-cientificos/ - Stengers, I. (2018). Another science is possible: a manifesto for slow science Cambridge, UK: Polity Press.
-
Thirunavukarasu, A. J., Ting, D. S. J., Elangovan, K., Gutierrez, L., Tan, T. F., & Ting, D. S. W. (2023). Large language models in medicine. Nature Medicine, 29(8), 1930-1940. https://doi.org/10.1038/s41591-023-02448-8
» https://doi.org/10.1038/s41591-023-02448-8 - Van Noorden, R., & Webb, R. (2023). ChatGPT and science: the AI system was a force in 2023-for good and bad. Nature, 624(7992), 509.
-
Vasconcelos, S., & MARUŠIĆ, A. Research Integrity and Human Agency in Research Intertwined with Generative AI [online]. SciELO in Perspective, 2025. Available from: https://blog.scielo.org/en/2025/05/07/research-integrity-and-human-agency-in-research-gen-ai/
» https://blog.scielo.org/en/2025/05/07/research-integrity-and-human-agency-in-research-gen-ai/ - Vieira Pinto, A. (2005a). O conceito de tecnologia v. 1. Rio de Janeiro: Contraponto.
- Vieira Pinto, A. (2005b). O conceito de tecnologia v. 2. Rio de Janeiro: Contraponto.
-
Yu, H. (2023). Reflection on whether Chat GPT should be banned by academia from the perspective of education and teaching. Frontiers in Psychology, 14: 1181712. https://doi.org/10.3389/fpsyg.2023.1181712
» https://doi.org/10.3389/fpsyg.2023.1181712 - Zuboff, S. (2018). The age of surveillance capitalism: the fight for a human future at the new frontier of power New York: PublicAffairs.
-
1. The concept of intelligence is multifaceted, with more than 70 definitions that vary according to the area of study, including collective, psychological, and technological definitions (Legg and Hutter, 2007). Even so, given the technologies currently available, it would not be correct to regard AI as intelligence in the full sense, endowed with singularity. Although AI simulates intelligent behavior, it does not possess understanding, consciousness, or even intentionality (Makridakis and Polemitis, 2023). It operates on the basis of algorithms and data, without the capacity for deep generalization or the biological cognitive processes found in animal intelligence.
2. Here it is worth noting the polysemy of the concept of "popular." Although in this case we are using it in the sense of widespread or widely recognized, we emphasize that another possible meaning of the concept is something radically appropriated by the people, the antithesis of privilege and hegemonic control.
3. Today there are GenAI tools that rival those of the US, such as DeepSeek, developed in the productive world of socialism with Chinese characteristics. Although these tools are released as open source and reportedly generate less environmental impact (according to the preprint DeepSeek-AI et al., 2025), they continue to carry ethical, sociopolitical, and economic implications that, far from weakening the arguments developed here, only reinforce them.
4. We keep the original word "slave," but emphasize that the term "enslaved" better represents the historical process of subjugation and domination of men, women, and children; in the Brazilian case, this applies particularly to Black people brought from Africa. "Slave," by contrast, conveys an essentialist conception, as if it were a natural and fundamental condition of the human being.
5. We illustrate this with emblematic cases involving the misuse of personal data, such as the Facebook-Cambridge Analytica scandal of 2014/2015, which revealed the manipulation of information from millions of users for political purposes. More recently, other equally unethical cases have been reported, such as that of the province of Salta, Argentina, where a platform was created to predict teenage pregnancy, reinforcing social injustices and rights violations (see https://outraspalavras.net/outrasaude/podera-a-ia-ser-programada-com-etica/).
6. Springer Nature: (i) https://www.springer.com/gp/editorial-policies/artificial-intelligence--ai-/25428500?srsltid=AfmBOoo2NA70PRaRtiQln1W03ZF4iwyJPPmSxwT9xvbcgja05Tgik1ER and (ii) https://group.springernature.com/gp/group/ai/ai-principles; Elsevier: https://www.elsevier.com/about/policies-and-standards/the-use-of-generative-ai-and-ai-assisted-technologies-in-writing-for-elsevier; Taylor & Francis: https://taylorandfrancis.com/our-policies/ai-policy/; Sage Publications: https://us.sagepub.com/en-us/nam/artificial-intelligence-policy
7. FAPESP Code of Ethics and Standards of Conduct: https://fapesp.br/codigodeetica.pdf; COPE (Committee on Publication Ethics): https://publicationethics.org/getting-started/what-publication-ethics; ANPEd Ethics and Research in Education series, volumes 1 (2019), 2 (2021), and 3 (2023): https://anped.org.br/e-books/
8. An initial version of our discussion of these principles was presented in a text submitted to the XV ENPEC, with the aim of communicating the ethical challenges and some of the lessons learned in the areas of open data and GenAI.
9. We mention practices and systems such as cooperatives, incubators, and social movement nuclei. Examples include the EITA cooperative (https://eita.coop.br/quem-somos/) and the MTST Technology Center (https://www.nucleodetecnologia.com.br).
10. More information about the case: (1) https://retractionwatch.com/2025/04/28/experiment-using-ai-generated-posts-on-reddit-draws-fire-for-ethics-concerns/; (2) https://san.com/cc/university-of-zurichs-unauthorized-ai-experiment-on-reddit-sparks-controversy/; (3) https://www.npr.org/2025/05/07/nx-s1-5387701/a-controversial-experiment-on-reddit-reveals-the-persuasive-powers-of-ai
Publication Dates
Publication in this collection: 20 June 2025
Date of issue: 2025
