Scientiae Studia

Print version ISSN 1678-3166

Sci. stud. vol.12 no.spe São Paulo  2014 



Complexity and information technologies: an ethical inquiry into human autonomous action



José Artur Quilici-GonzalezI; Mariana Claudia BroensII; Maria Eunice Quilici-GonzalezIII; Guiou KobayashiIV

ICenter of Mathematics, Computing and Cognition, Federal University of ABC, São Paulo, Brazil.
IIFaculty of Philosophy and Sciences, University of São Paulo State, Marilia, São Paulo, Brazil.
IIIFaculty of Philosophy and Sciences, University of São Paulo State, Marilia, São Paulo, Brazil.
IVCenter of Mathematics, Computing and Cognition, Federal University of ABC, São Paulo, Brazil.




In this article, we discuss, from a complex systems perspective, possible implications of the rising dependency between autonomous human social and individual action, ubiquitous computing, and artificial intelligent systems. We investigate ethical and political issues related to the application of ubiquitous computing resources to autonomous decision-making processes and to the enhancement of human cognition and action. We claim that, without the feedback of fellow humans, which teaches us the consequences of our actions in everyday life, the indiscriminate use of ubiquitous computing in decision-making processes seems to be beyond the reach of any clear ethical control. We argue that the complex systems perspective may help us to foresee possible long-term consequences of our choices in areas where human autonomous action can be directly affected by information technologies.

Keywords: Artificial intelligent systems. Autonomous decision-making. Complex systems. Systemic compatibilism. Human enhancement. Self-organization. Ubiquitous computing.




The rapid development of information technologies and their application in human life raise challenging questions concerning the near future not only of our species, but also of the other animals that surround us. In industrialized societies, decision processes are becoming more dependent every day on artificial intelligent systems, which control airplanes, trains, production lines, electric power stations, water distribution systems, and many other aspects of human life. Furthermore, the production of the complex chips involved in the implementation of these intelligent systems itself depends on the operation of artificial intelligent systems (Russell & Norvig, 2003). In this context, what kind of (not too distant) future is unfolding on the horizon of human autonomous action? Who controls the multitude of ubiquitous computers spread around our environment in the form of invisible cameras, electronic tags, and so on? What are the advantages and disadvantages of the growing application of information technologies to autonomous decision-making processes?

The above questions, discussed from a complex systems perspective, provide the guiding thread of the present paper, which is organized in three sections. In section 1, the topic of ubiquitous computing and human autonomous action is addressed. "Human autonomous action" is characterized here not in terms of the absence of restrictions, but rather as action performed in accordance with consensual agreements established amongst social agents for the benefit of all. In section 2, the concepts of ubiquitous computing and disguisers are investigated from the complex systems perspective. Finally, in section 3, ethical and political implications of applied information technologies are discussed in the context of human autonomous decision-making processes.


1 Ubiquitous computing and human autonomous action

In his paper "The computer for the 21st century", Mark Weiser (1991) envisioned a future in which computers would disappear from our environment and from our sight, yet still be present, interacting seamlessly with us and providing services and resources for our activities, guided by artificial intelligent systems of whose presence we would not be aware. This scenario, now common in most industrialized societies, brings us back to our initial questions concerning the possible future of human autonomous action following the spread of ubiquitous computing, and the control of information available from the myriad invisible cameras, electronic tags, and many other devices.

Considering that self-organizing processes constitute much of the dynamics of ubiquitous computing, it does not seem plausible that the immense amount of information, available for instance on the internet, could be controlled by any specific central command. However, it is possible that outcomes of these self-organizing processes could be hetero-organized by a powerful supervisor - a sort of "big brother". In this case, a central controller could have access to metadata resulting from processes of compacting information available in ubiquitous computing, organized by intelligent search systems. These systems, in turn, by interfering with decision-making processes, could slowly degrade human autonomy on many scales, varying from human health decisions to actions that are morally driven.

As indicated above, autonomy is characterized here not in terms of the absence of restrictions, but rather in accordance with consensual agreements established amongst social agents for the benefit of all. In this sense, a human autonomous action would, ideally, be part of (but not confused with) a self-organized system, in that it would express spontaneous synergy with other moral actions.

Examples of practices that seem to affect decision-making processes are going to be discussed in section 3, which addresses aspects of the human enhancement project and the contemporary enterprises under development involving drones and robot killers, amongst others. In the present section, we introduce the main characteristics of ubiquitous computing and disguisers that could help us to analyze some of these practices from a complex systems perspective.

Ubiquitous computing, as defined by Mark Weiser, is a system composed of two parts: (1) computer and information systems, and (2) the human aspect. Ubiquity is an attribute that arises from the interaction between these parts, and it is characterized by the symbiotic nature of this association. On the one hand, humans benefit from information processing and cognitive capabilities acquired from computer systems, and, on the other hand, the computer systems benefit from the huge amounts of data acquired from the pervasive sensors and cameras that continuously analyze contexts and individual behaviors, which enhances their ubiquitous presence. In the case of ubiquitous systems, humans have to learn how to access and use communication and processing resources provided by computer systems; once learned, access and use become seamless and natural, no longer requiring any conscious effort. Computers in ubiquitous systems need to remain always present, anywhere, anytime, dynamically adapting to different contexts and human needs in order to provide communication and information processing services for individuals. Implementation of these capabilities sometimes requires people to carry smartphones or wearable computers, although the communication and computer resources can also be present in the environment, where they are pervasive and invisible in the streets, on the walls, and in appliances, and are all connected in a single network. Figure 1 illustrates the two sides of a ubiquitous computing system.

For ubiquity to work at its full capacity, users have to relinquish their privacy, at least partially; in order to be prompt and useful, the computer system has to have access to information about the individual's habits, behavior, needs, and even idiosyncrasies. It has to be informed where the person is and where he/she is going to be in order to analyze the context and local resources. In turn, users will have access to many services: constant connection to their virtual or real social groups; instant access to vast amounts of information, knowledge, and entertainment; recommendation services for goods, restaurants, and things to do, matching the present need and local context; selected and pre-filtered news that suits their interests and tastes; and any other future services that might arise from this large computer system.

One interesting ubiquitous service is the digital disguiser. The objective of a disguiser is to change the physical characteristics of a person, their appearance, voice, background scene, sounds, and so on, all in real time. This requires substantial image processing and computer graphics power, as well as specialized software. Some of these services are already available, such as real-time voice changers and background sound generators. Disguisers differ from anonymizers (which enable changes of natural to metallic voices, or the appearance of mosaic pixels over the face), because in the case of disguisers the intention is to hide the change itself, so that the other side is unaware that he/she is communicating with a different person. In the near future, when most social and group interactions will be mediated by ubiquitous computers, through social networking services such as Facebook or Orkut, digital disguisers could provide very popular services with widespread uses. Of course, anyone sufficiently skilled can disguise him/herself, given sufficient characterization and scenario, without the need for a digital disguiser. However, ubiquitous computing can make this ability available to anyone, anytime, hence universalizing its use. Figure 2 illustrates an interaction between a disguiser and a real person.
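To make the idea of a voice disguiser concrete, the following Python sketch (our own deliberately crude illustration, not code for any actual disguiser service) shifts the pitch of a signal by naive resampling with linear interpolation. Real-time disguisers would instead use techniques such as phase vocoders, which change pitch while preserving the duration of speech.

```python
import math

def pitch_shift(samples, factor):
    """Crude pitch shift by resampling with linear interpolation.
    factor > 1 raises the pitch but also shortens the signal; a real
    voice disguiser would use a phase vocoder to preserve duration."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # Linear interpolation between neighboring samples.
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += factor
    return out

# A one-second 440 Hz tone sampled at 8 kHz, shifted up a fifth
# (factor 1.5 -> roughly 660 Hz).
rate = 8000
tone = [math.sin(2 * math.pi * 440 * t / rate) for t in range(rate)]
shifted = pitch_shift(tone, 1.5)
print(len(shifted))  # shorter: duration shrinks by the same factor
```

The simplicity of the sketch also makes the trade-off visible: pitch and duration are coupled in naive resampling, which is precisely why practical disguisers need heavier signal processing.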

In sum, ubiquitous computing and disguisers are important elements of the unfolding informational technological universe, allowing hyper-connectivity to occur person-to-person, person-to-machine, and machine-to-machine, and generating huge amounts of data.

An interesting fact is that hyper-connectivity tends to increase the amount of information available in a system, while the availability of machines and intelligent systems tends to diminish the amount of autonomous decision-making by humans. Consideration of these opposing tendencies can be relevant to the analysis of autonomous human decision processes, especially concerning "who controls what" in democratic collective actions.

Besides the availability of information regarding the object of decisions, democratic processes presuppose the absence of coercive forces in the choice of alternative autonomous actions. The question that remains to be investigated is whether the indiscriminate dissemination of ubiquitous computing and intelligent systems can play the role of coercive forces that deter autonomous action.

By adopting Turing's (1950) hypothesis concerning the algorithmic nature of intelligence, traditional AI characterizes machine intelligence in terms of abstract problem solving capacity in specific domains of competence. In the new scenario of ubiquitous computing with hyper-connectivity, mobile machine intelligence is now conceived as the self-organized capacity to establish connections with the surroundings in order to obtain information necessary for self-adaptation and achievement of goals.

Specificities of the new 21st-century scenario, involving the proliferation of ubiquitous computing and of intelligent systems with increasing degrees of autonomy and hyper-connectivity, seem to require a re-evaluation of the ethical and political implications of collective decision processes.

A fundamental characteristic of the new scenario, where human society is intermingled with ubiquitous computing and autonomous intelligent systems, is the diverse nature of its agents. This diversity gives rise to an invisible complex system that involves interactions amongst living beings and a myriad of artificial devices, diluting the centralized computation that was dominant in the last century.

In the twentieth century, the development of powerful computers and PCs paved the way to virtual realities in which humans could deliberately include themselves. In contrast, nowadays ubiquitous computing is inserted in real life, sometimes covertly restricting a certain degree of human autonomy. Paradoxically, this insertion of ubiquitous computing into everyday life can provide a great amount of useful information for the processes of decision-making. Ethical and political implications of such a transformation are not well known in the context of human autonomous action, and divergent opinions are available in different areas of investigation.

According to the complex systems perspective of human action proposed here, there are ways of analyzing emergent properties of the interactions between agents and their environment that allow the anticipation of possible types of conduct and their probable future consequences. From this perspective, in what concerns human action, autonomy may or may not be threatened, depending on the capacity of the agents to understand the dynamics of the complex systems in which they form a part, and the conceivable ways of altering (when necessary) the processes that govern such dynamics.

In the next section, we discuss the properties of complex systems that seem suitable for the study of autonomous action in the context of ubiquitous computing and disguisers.


2 Ubiquitous computing and disguisers: a view from the complex systems perspective

There seem to be three main approaches to complex systems that are of interest to the present debate on the role of ubiquitous computing and disguisers in human autonomous action. The first, developed predominantly in the context of the social sciences and humanities in general, emphasizes the roles of order, disorder, emergence, and self-organization in the constitution of individual and collective action (Wiener, 1954, 1996; Morin, 2005; Debrun, 2009). The second approach, mainly applied to physics, biology, engineering, and environmental sciences, deals with mathematical formalisms that allow multiple-scale modeling of individual and collective phenomena constitutive of adaptive systems. The third approach, predominant in cybernetics, information sciences, cognitive sciences, and robotics, encompasses both previous ones in the study of the dynamics of social webs, intelligent systems, and the behavior of unstable environments directly affected by information available at multiple scales (Bourgine, 2013; Haken, 1988; Weaver, 2004 [1948]; Wiener, 1996).

Despite the differences in their focus of analysis, the above views share the general hypothesis that:

A 'complex system' is a group or organization which is made up of many interacting parts. (...) In such systems the individual parts - called 'components' or 'agents' - and the interactions between them often lead to large-scale behaviors which are not easily predicted from a knowledge only of the behavior of the individual agents (Mitchell & Newman, 2002, p. 1).

Another common assumption, as stressed by a document from Unesco (2011), is that complex systems are multi-level reconfigurable systems situated in turbulent and changing environments, adapting through internal and external dynamic interactions. Their study involves scientific challenges related to the transversal theoretical questions raised by complex system science in relation to observation, prediction and management of their multi-scale dynamics. The challenges posed by the multi-scale modeling of natural and artificial adaptive complex systems can only be met with new collective strategies for interdisciplinary research and teaching.

The emphasis on multiple-scale dynamics, an inherent characteristic of complex systems, offers a promising lead in the study of ethical and political issues concerning human decision making. It is known that many collaborative and social insects exhibit collective behavior that is much more complex and sophisticated than that of single individuals. How does this "social intelligence" arise in these colonies, and how is individual autonomous action related to it? These are among the questions that might be answered through the study of complex networks. In the case of humans, recent studies of social networks, especially internet-based ones, show that their properties are best understood from the complex networks perspective rather than through traditional sociometric analysis. Newman (2003) presents many models of complex networks, bringing complex system tools and perspectives to the study of social networks.
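As a concrete illustration of the kind of network model Newman surveys (our own minimal sketch, not code from the cited work), the following Python fragment grows a network by preferential attachment: each new node links to existing nodes with probability proportional to their degree. Highly connected hubs, a hallmark of many real social networks, emerge from this purely local growth rule, with no global design.

```python
import random

def preferential_attachment(n, m=2, seed=42):
    """Grow a network: each new node links to m existing nodes chosen
    with probability proportional to their current degree."""
    rng = random.Random(seed)
    # Start from a small complete core of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    # A node's frequency in this list is proportional to its degree,
    # so uniform choice from it implements preferential attachment.
    endpoints = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(endpoints))
        for t in targets:
            edges.append((new, t))
            endpoints.extend([new, t])
    return edges

edges = preferential_attachment(500)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
mean_deg = sum(degree.values()) / len(degree)
# Hubs emerge: the best-connected node far exceeds the mean degree.
print(max(degree.values()), round(mean_deg, 1))
```

The heavy-tailed degree distribution produced by such growth rules is exactly the kind of emergent, system-level property that traditional sociometric analysis of small groups does not capture.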

The three main characteristics of complex systems that are of interest for the present inquiry are:

(a) Self-organization. A process through which new forms of organization emerge solely from the dynamic spontaneous interaction among initially independent elements, without any a priori plan or central controller (cf. Ashby, 2004 [1962]; Foerster, 1960; Haken, 1988; Debrun, 2009; Kohonen, 1984; Kelso, 1994).

Self-organization may apply in different degrees to social, organic, and inorganic processes that change from their initial conditions; the greater the gap between the initial conditions and the current state of development of a system, the greater will be its degree of self-organization (cf. Debrun, 2009). Thus, for example, a radical revolution, instigated by citizens breaking down large parts of well-established habits in a society, will present a higher degree of self-organization than a minor reform promoted by these citizens.

Similarly, a self-organized neural network that adjusts the weights of its connections in accordance with local rules will present a lower degree of self-organization than an autonomous self-organized robot that can learn and evolve, adjusting its behavior in response to certain tasks and initiating a secondary self-organized process. In both situations, self-organization occurs without the predominant influence of a central controller; its development happens mainly as an emergent property of the interactions amongst its components, generating order parameters at the macroscopic level.

(b) Multiple-scale dynamics of order parameters. According to Haken, order parameters (OP) are: "emergent properties of the dynamical interaction amongst elements at the micro-level. When OP's emerge, they enslave the behaviour of individual elements, producing new characteristics at the macroscopic scale" (1988, p. 45).

In human collective actions, order parameters can emerge in the dynamics of individual identity development. Thus, for example, an individual who belongs to an activist group may have his/her identity altered by the influence of the emergent product of his/her interactions with this group, affecting common habits, beliefs, tastes, and ethical options.

(c) Emergence. This is the way that complex systems arise out of a multiplicity of self-organized interactions. It enables consideration of many levels of dependency of interacting elements of a system.

In the context of ubiquitous computing and disguisers, self-organization, order parameters, and emergence change dynamically as multifaceted social interactions happen simultaneously in the virtual environment. These characteristics of complex systems can be used to understand how the clusters of social networks are formed and how individual behavior influences, and is influenced by, collective social behavior.

Ubiquitous computing and disguisers bring new dynamics - anywhere, anytime - and multiple personalities and interactions to today's already complex and far-flung social networks. From a complex systems perspective, a possible answer to the initial question concerning the perspectives for human autonomous action following the spread of ubiquitous computing and disguisers could be that the rising dependency between autonomous human social/individual action and artificial intelligent systems may leave us with more free time to enjoy ourselves, without having to deal with burdensome and tedious tasks. Disguisers could allow people to express themselves freely without fear of collective punishment. Another possibility, not so positive, is that the ubiquitous and indiscriminate use of artificial intelligent systems may slowly erode our autonomy. There is indeed the possibility that these same "facilities" are already enslaving us, reducing our possibilities of choice in many aspects of our autonomous social/individual existence. Furthermore, the interaction with disguisers could trivialize stable moral principles of honesty, transparency, and trust.

Given the existence of many different (and sometimes conflicting) views on this issue, we are going to investigate, from an ethical perspective, advantages and disadvantages of the growing application of ubiquitous information technologies to human decision-making processes, focusing on the topic of autonomy.


3 Ethical and political implications of applied information technologies

The analysis of ethical implications related to the indiscriminate use of applied information technologies and the impact on decision-making processes faces what Beavers calls the hard problem of ethics:

(...) after more than two millennia of moral inquiry, there is still no consensus on how to determine moral right and wrong. Even though most mainstream moral theories agree from a big picture perspective on which behaviors are morally permissible and which are not, there is little agreement on why they are so, that is, what it is precisely about a moral behavior that makes it moral. For simplicity's sake, this question will be here designated as the hard problem of ethics. (Beavers, 2011, p. 2).

Beavers argues that even though there is a minimal general consensus about the nature of meaningful moral conduct related to trust, self-sacrifice, altruism, loyalty, and generosity, theories of moral conduct incorporate several different values and principles considered morally meaningful in relation to human action.

It could be pointed out, for example, that according to deontological conceptions (Kant, 1997 [1775]) agents should act morally, guided by the universal duty of cooperating for the general benefit of society. In contrast, customary ethics (Pascal, 1963) presupposes that moral agents should respect collective habits that have historically been successful. Also, according to utilitarian ethics, the validity of an action should be evaluated in terms of the results it produces in the lives of people affected by it (Bentham, 1907).

Thus, for instance, according to deontological ethics it is not permitted to lie, because human society would collapse if this practice became generalized and was adopted by all citizens. From this perspective, use of disguisers would be wrong, because it would be grounded in a type of lie concerning the real identity of an agent. However, it could be argued that this practice might have positive aspects in a totalitarian State that forced individuals to hide their expression of democratic values; a disguised person could transmit democratic values without being immediately punished or repressed.

It seems that according to deontological ethics, this type of situation would indicate a bottleneck resulting from the universal moral imperative that imposes the moral law independent of specific contexts. In this case, a possible contribution of the complex systems view to elucidate this situation would include other elements in the analysis of the situation: deontological ethics stresses the logical aspects of possible consequences of the generalization of lies, disregarding historical circumstances. By amplifying the scales of contexts in which lies could be accepted, the systemic analysis takes into account the dynamics of contextual circumstances related to the usage of disguisers. By adopting a compatibilist view, this strategy would consider the universal moral imperative as an important guiding principle that could, nevertheless, be adjusted to specific historical contexts without introducing moral relativism.

From the customary ethics perspective, the usage of disguisers may be incompatible with fundamental collective habits related to mutual trust, such as those that have historically prevented society from engaging in deep social quarrels. Due to this incompatibility with fundamental habits, the use of disguisers can hardly generate a morally acceptable customary rule. Possible benefits of disguisers in specific circumstances would have to be considered individually, case by case, with the attendant risk of arbitrariness. We understand that, in the systemic approach, arbitrariness could be investigated by modeling the types of situations in which disguisers could be morally acceptable. In this way, situations in which there were positive uses of disguisers could be detected, preserving social synergy.

Utilitarianism, in turn, considers valuable those actions that show a tendency to increase the happiness of those whose interests are at stake (Bentham, 1907). In this approach, the usage of disguisers should be evaluated in terms of the results it produces in the lives of the people affected by it. However, one of the main criticisms directed at utilitarianism arises from the difficulty of assessing which actions effectively express the tendency to promote the common good in different contexts and timeframes. Here also, the complex systems approach offers the possibility of modeling different scenarios that could assist in evaluating conceivable social consequences of the use of technologies such as disguisers, especially long-term ones. The complex systems view could also help with the analysis of problems concerning the political implications of the indiscriminate use of disguisers and ubiquitous computing, and their impact on collective decision-making processes, especially those related to the so-called Reason of State (as outlined by Korab-Karpowicz, 2013). Reason of State is the process of government decision-making that gives priority to national interests and practical results, without taking ethical principles into consideration, and it is predominant in contemporary societies.

In human relations, moral laws in common and professional ethics offer criteria for differentiating licit from illicit actions. However, in political contexts there is a certain opposition (or independence) between morals and politics (Bobbio, 2000, p. 177), due to the fact that the specificity of the modern State forces it to adopt the logic of Reason of State. Thus, for example, military use of artificial systems such as drones and robot killers1 provides an illustration of the predominance of Reason of State in the collective process of decision-making, even though the activity in question may be in direct opposition to ethical concepts of human rights.

From the reasons presented so far, we understand that classical ethical approaches, such as those outlined above, have to be complemented in order to deal with the consequences of moral actions intermingled with contemporary information technologies.

Accepting, for the time being, that traditional ethics does not provide sufficient conceptual tools to deal with the contemporary ethical and political implications of the generalized use of ubiquitous computing, disguisers, and other information technologies, the remaining question is: what alternative views may help with the understanding of current ethical problems? Our suggestion is that even though the complex systems view may not be able to solve this difficult kind of problem, its conceptual assumptions might help us to start a promising strategy of investigation. The advantage of this approach is that it does not exclude the previous ones, but includes them in a cooperative manner whenever they seem to make sense in specific embodied, embedded situations.

The main difficulty with this suggestion is to avoid undesirable relativism, which would undermine the entire investigation. However, the proposed view expresses a type of systemic compatibilism that could bring about relevant information available in specific situations, amplifying the number of possibilities in the processes of autonomous human decision-making.

To illustrate the possible advantages of adopting systemic compatibilism, let us consider another example of the possible limitations of traditional ethics in dealing with contemporary moral and political actions, namely the human enhancement project known as Transhumanism. According to proponents of Transhumanism (Bostrom, 2003, 2006; Kurzweil, 2005; de Grey & Rae, 2007), the current human condition can be enhanced by means of drugs and information technology tools that might amplify cognitive, emotional, and moral human capabilities.

The hope of the transhumanists is that, in the near future, the human nervous system could be connected to mechanical implants (such as artificial mechanical limbs, nanorobots, and new human-machine interfaces, amongst others), and that these devices would lead to:

(1) promotion of the human common good;

(2) enhancement of emotional and cognitive human capacities, and

(3) development of individual and social habits that could contribute to the overcoming of human conflicts.

Considering points (1)-(3), deontological, utilitarian, and customary ethical approaches might possibly approve the transhumanist project (at least in principle), occasionally allowing minor criticisms aimed at its mode of implementation.

In contrast, complex systems analysis of the human enhancement project would consider several levels of inquiry, including: (i) the long-term ecological scale of evolution of self-organized living beings, and the emergence of their cognitive and emotional abilities; (ii) the multiple-scale dynamics of organization of living beings that varies from molecular structures to the ecological scale of the myriads of existing embodied and embedded organisms.

Careful analysis of (i) and (ii) might lead to the anticipation of difficult problems related to the development of the human enhancement project. For example, the consumption of drugs initially aimed at promoting moral enhancement could lead to the emergence of undesirable individual and collective emotions. Furthermore, the incorporation of nano-robots into molecular structures might interfere in an unpredictable way with delicate neural dynamics at their emergent level of organization.

Despite the positive aspects and the good intentions underlying the transhumanist project, the enhancement of humans in isolation from other living beings and their environment could disrupt the self-organized evolutionary chain, as conceived nowadays, without any clear notion of the effects of this disruption.

As indicated in section 2, new forms of organization may emerge in complex systems from the dynamic interaction among independent elements, without any central controller, by means of training and refined adjustment. If it is assumed that the self-organized dynamics of living beings results from long-lasting co-evolutionary adaptive interactions between organisms and their environment, then human enhancement technologies might transform these self-organized co-evolutionary processes into hetero-organized ones. These hetero-organized processes would be controlled by a small group of scientists and technicians, directing the evolution of the human species in accordance with their own criteria. Such criteria would, in principle, be fallible.

The complex systems perspective presented here also allows a broader understanding of the possible implications of the generalized use of digital technologies in human individual and collective actions. On the one hand, there are immediate advantages of ubiquitous computing: smart meters can be used to optimize energy consumption at home, and encyclopedic information is easily accessible on the internet, liberating individuals from the need to memorize multifarious data. Furthermore, massive open online courses (MOOCs) may help to reduce educational inequalities between the poor and the rich. From this point of view, the individual's decision-making process may become less reactive and more autonomous. On the other hand, the generalized use of smart meters may generate a huge amount of data about the personal interests and habits of individuals, data which are beyond their control and could be used against them in the future. The same could occur with internet search tools that record private information without any control or consent.


Concluding remarks

The invention of combustion engines was possibly one of the major causes of the success of industrial production in human history. Heavy agricultural machines, farm tractors, trains, and trucks, amongst others, have dramatically improved productivity. Aircraft, tanks, ships, and weapons demonstrate the staggering increase in the destructive power of machines. In all these areas, the control of technological processes has, in general, been in the hands of humans. The novelty now is that the new computational technologies are slowly changing the way in which these processes are controlled, and not only in the case of productive processes.

A great technological advance occurred with the creation of intelligent networks able to adapt to environmental changes in order to improve their performance as a function of the available data. New cognitive networks have the capacity to learn how to plan, proactively, in order to face tough future conditions. What matters now in industrialized societies is the increasing degree of autonomy of computational devices in the control of processes of material production. In this context, a question of interest to the present investigation concerns the possible implications of using autonomous artificial systems, such as killer robots, in conflict situations.
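The kind of adaptation "as a function of the available data" mentioned above can be illustrated, purely schematically, by an online estimator whose prediction error shrinks as observations accumulate. The demand-prediction scenario and all numerical values below are hypothetical, chosen only to make the mechanism visible.

```python
import random

# A device repeatedly predicts a quantity (here, a demand level),
# observes a noisy measurement, and updates its estimate from the
# prediction error: performance improves as data accumulate.
random.seed(1)
true_demand = 0.7
estimate = 0.0                    # the device starts knowing nothing
early_error = late_error = 0.0

for t in range(1, 1001):
    observation = true_demand + random.uniform(-0.05, 0.05)
    error = observation - estimate
    estimate += error / t         # running-mean update: learn from data
    if t <= 100:
        early_error += abs(error)
    elif t > 900:
        late_error += abs(error)

# Average errors late in the stream are smaller than at the start.
print(early_error / 100, late_error / 100)
```

Even this minimal loop exhibits the property the text emphasizes: the device's behavior is increasingly shaped by its own history of data, not by moment-to-moment human direction.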

The use of remotely guided drones in the theatre of war raises not only the moral problem of killing at a distance without bearing the local consequences, but also that of unfairly extending the area of conflict. Langdon Winner (1980) argues that technological artifacts have a great impact on the process of settling social affairs, thereby embodying social relations. He also claims that technologies are ways of building order in our world.

Following Winner's path, it is possible to conceive technologies that result from a process of secondary self-organization (as indicated in section 2), in that they learn to adjust their behavior with some degree of autonomy, building a particular type of organization in our world. In this scenario, new weapons may be rendering obsolete the traditional role of military divisions, greatly reducing the autonomy of their human warriors and catapulting the lethal power of destruction of increasingly autonomous systems.

Supporters of autonomous artificial systems promise to solve immediate problems, such as the possibility of remote destruction of the enemy's weapons. However, in the longer term, the increasing autonomy of these systems could lead to a proportional loss of human autonomy. A gradual loss of the capacity for autonomous decision-making (so dear to our species) would occur in situations involving political dilemmas and ethical conflicts.

In the nineteenth century, when machines started to be introduced into industrial production lines, there was a strong reaction from the working classes, motivated by the fear that the installation of machines would cause massive unemployment. After almost two centuries, there is historical evidence to indicate that there were, in fact, higher rates of unemployment in countries that only imported these kinds of machines. However, in countries that produced their own machines to substitute the workers on the production lines, new industries were installed to manufacture innovative types of machines. The initial unemployment therefore decreased, and better qualified workers were employed in new areas of industry.

The supposition that the same logic might apply to the introduction of new electronic devices involved in the creation of ubiquitous computing seems reasonable. Societies that only import ubiquitous computing technologies will experience a high rate of unemployment, but those that produce their own technology will have new demands for more qualified workers.

So, in relation to the impact on the general workforce, ubiquitous computing will probably not bring any significant novelty. However, a different logic might apply to workers in areas that involve decision-making: whenever new intelligent machines are created, fewer individuals will be required to make decisions concerning the execution of their tasks. If this reasoning is correct, then the dissemination of ubiquitous computing will generate a qualitative change in the composition of the workforce in almost all areas of human society.

In industrialized countries, the automation of banking systems has drastically reduced the number of employees, and most people seem to have approved of this change. However, serious problems might arise should this kind of qualitative change occur in other sensitive areas of society, such as the intelligence agencies. If collective information were to be placed under the control of a small, undemocratic group of persons, then the privacy of citizens might be threatened. Given that there is no freedom without safeguards for the privacy of individuals and companies, the possibility exists of the emergence of a totalitarian state.

Finally, in the context of the logic of the Reason of State, some ethical and political consequences can be considered that might derive from the generalized use of contemporary informational technologies and human enhancement. For instance, what can be done with all the data collected by the internet, smart meters, sensors, and so on? At least three possibilities can be conceived:

Possibility 1. In societies governed by non-authoritarian forces, privacy is respected; ubiquitous computing and the techniques of human radical enhancement are applied in accordance with the autonomous person's free will.

Possibility 2. In societies governed by authoritarian forces, control over society is amplified; ubiquitous computing and techniques of human radical enhancement prevent self-organization, otherness, and democratic ways of existing from flourishing.

Possibility 3. A small technocracy of controlling computers reduces humans to a useless burden. Complex chips and intelligent systems are implemented mainly with the help of these same intelligent systems. With the increasing automation of the production of intelligent systems, there is an exponential increase in the capacity of these systems to make decisions.

Given human history, the realization of Possibility 1 seems to require something equivalent to the Kantian moral imperative, applied to privacy, which would function as a regulator of society. This idealized society would produce laws according to its own principles, practicing the notion of collective freedom and expressing individual autonomy.

Possibility 2 represents, to different degrees, societies dominated by financial lobbies and powerful private organizations that use Reason of State logic to impose their economic interests upon society. As the will of the individual is suppressed by the action of these powerful organizations, social processes of self-regulation become seriously affected. Ethical implications of this type of domination are mainly due to political causes rather than technological ones. Long-term systemic analysis indicates that societies characterized by the predominance of Possibility 2 can be transformed into societies where Possibility 3 predominates. In this scenario, self-organized active participation of citizens could be transformed into heteronomous patterns of actions technologically directed. As suggested by Winner, "with the overload of information so monumental, possibilities once crucial to citizenship are neutralized. Active participation is replaced by a haphazard monitoring" (1978, p. 296).

With the increasing informatization of control processes, Possibility 3 models a society in which a technocratic elite is in charge of the decision-making processes. In principle, the failure of such an elite to manipulate society successfully might produce revolutions aspiring to greater democracy in government, individual autonomy, and collective freedom. If such democratic revolutions do not occur, then the ethical implications of scientific and technological progress might be gloomy.

The complex systems strategies presented here to investigate the above issues concerning human autonomy open up a line of reflection whose ultimate direction is uncertain; its definition will depend on the feedback provided by social and environmental factors. Our hope is that individual and collective active engagement in political and ethical debate could bring about new forms of organization to feed and keep alive the human capacity for autonomous action.

Acknowledgments. We would like to thank the Federal University of ABC, the University of São Paulo State (UNESP), FAPESP, and CNPq for supporting this research, and Dr. Andrew George Allen for the English revision of this paper. We are also grateful to the anonymous reviewer for helpful comments.



Ashby, W. R. Principles of the self-organizing system. E:CO, 6, 1-2, p. 102-26, 2004 [1962].

Beavers, A. F. Moral machines and the threat of ethical nihilism. In: Lin, P.; Bekey, G. & Abney, K. (Ed.). Robot ethics: the ethical and social implications of robotics. Cambridge: The MIT Press, 2011. p. 333-44.

Bentham, J. An introduction to the principles of morals and legislation. Library of Economics and Liberty, 1907. Available at: <>. Accessed: 20 Jan. 2013.

Bobbio, N. Teoria geral da política. Rio de Janeiro: Elsevier, 2000.

Bostrom, N. The reversal test: eliminating status quo bias in applied ethics. Ethics, 116, p. 656-70, 2006.

_______. Human genetic enhancement: a transhumanist perspective. Journal of Value Inquiry, 37, p. 493-506, 2003.

Debrun, M. Brazilian national identity and self-organization. Campinas: Editora da Unicamp, 2009. (Coleção CLE)

Foerster, H. von. On self-organizing systems and their environments. 1960. p. 1-19. Available at: <>. Accessed: 28 May 2013.

Grey, A. de & Rae, M. Ending aging: the rejuvenation breakthroughs that could reverse human aging in our lifetime. New York: St. Martin's Press, 2007.

Haken, H. Information and self-organization: a macroscopic approach to complex systems. Berlin: Springer-Verlag, 1988.

Kant, I. Prolegomena to any future metaphysics. Translation G. Hatfield. Cambridge: Cambridge University Press, 1997 [1783].

Kelso, J. A. S. The informational character of self-organized coordination dynamics. Human Movement Science, 13, 3, p. 393-414, 1994.

Kohonen, T. Self-organization and associative memory. Berlin/Heidelberg: Springer, 1984.

Korab-Karpowicz, W. J. Political realism in international relations. In: Zalta, E. N. (Ed.). The Stanford Encyclopedia of Philosophy. Summer 2013 Edition. Available at: <>.

Kurzweil, R. The singularity is near: when humans transcend biology. London: Penguin, 2005.

Lafuma, L. (Ed.). Œuvres complètes de Blaise Pascal. Paris: Éditions du Seuil, 1963.

Lin, P.; Bekey, G. & Abney, K. (Ed.). Robot ethics: the ethical and social implications of robotics. Cambridge: The MIT Press, 2011.

Mitchell, M. & Newman, M. Complex systems theory and evolution. In: Pagel, M. (Ed.). Encyclopedia of evolution. New York: Oxford University Press, 2002. p. 1-5. Available at: <>. Accessed: 23 May 2013.

Morin, E. Ciência com consciência. Translation M. D. Alexandre & M. A. S. Dória. Rio de Janeiro: Bertrand Brasil, 2005.

Newman, M. E. J. The structure and function of complex networks. SIAM Review, 45, 2, p. 167-256, 2003.

Pagel, M. (Ed.). Encyclopedia of evolution. New York: Oxford University Press, 2002.

Pascal, B. Pensées. In: Lafuma, L. (Ed.). Œuvres complètes de Blaise Pascal. Paris: Éditions du Seuil, 1963. p. 493-640.

Russell, S. & Norvig, P. Artificial intelligence: a modern approach. 2 ed. Upper Saddle River: Prentice Hall/Pearson Education, 2003. (Prentice Hall Series in Artificial Intelligence)

Turing, A. Computing machinery and intelligence. Mind, 59, 236, p. 433-60, 1950.

Unesco. Why UniTwin CS and the digital campus. Dec. 2011. Available at: <>. Accessed: 20 May 2013.

Weaver, W. Science and complexity. E:CO, 6, 3, p. 65-74, 2004 [1948].

Wiener, N. Cybernetics: or control and communication in the animal and the machine. 2 ed. Cambridge: The MIT Press, 1996.

_______. The human use of human beings. 2 ed. Boston: Houghton Mifflin/The Riverside Press, 1954.

Weiser, M. The computer for the 21st century. Scientific American, 265, 3, p. 78-89, Sept. 1991.

Winner, L. Do artifacts have politics? 1980. Available at: <>. Accessed: 04 May 2014.

_______. Autonomous technology: technics-out-of-control as a theme in political thought. Cambridge/London: The MIT Press, 1978.



1 A drone is an unmanned combat air vehicle (UCAV), also known as a combat drone, usually armed, often utilized for military goals, and remotely controlled by humans. Killer robots, currently under development, are autonomous artificial systems that might be able to select and engage targets without human intervention.

All the contents of this journal, except where otherwise noted, are licensed under a Creative Commons Attribution License.