Safety management in complex and dangerous systems - theories and practices: an interview with René Amalberti

Abstract

René Amalberti is a Professor, MD, and PhD in Medicine. After a residency in Psychiatry, he joined the French Air Force in 1977, obtained a permanent medical research position in 1982, and became Full Professor of Medicine in 1995. He retired in 2007 with the rank of General and now divides his time between the HAS - Haute Autorité de Santé (the French accreditation agency), as senior advisor for patient safety, and a position as volunteer director of the Foundation for an Industrial Safety Culture (FonCSI). He has published over 150 international papers and authored or co-authored 12 books on human error and system safety. During his career he held several detached positions, working as Head of Human Factors and Flight Safety at the European Joint Aviation Authorities (JAA), 1993-1999, and as Head of the National Research Program on Quality and Safety in Ground Transportation, 2001-2006; he also chaired several national and international scientific boards dealing with environmental risk. In 2016 he came to Brazil to lecture at the 57th Work Accident Forum Meeting. On that occasion he launched the Brazilian translation of his book La sécurité: théorie et pratiques sur les compromis et les arbitrages.

Keywords:
industrial safety; safety management; technological risks; safety culture; ultra-safe systems

Resumo

René Amalberti é médico, mestre e doutor em medicina. Após uma residência em psiquiatria, integrou a Força Aérea francesa em 1977. Foi pesquisador e médico permanente em 1982, tornando-se professor titular de medicina em 1995. Ele se aposentou em 2007 no posto de general e divide seu tempo entre a Haute Autorité de Santé (Agência de Acreditação Francesa), como conselheiro sênior em segurança do paciente, e a Fundação para uma Cultura de Segurança Industrial (FonCSI, na sigla em francês), como diretor voluntário. Publicou mais de 150 artigos internacionais, é autor e coautor de 12 livros sobre erro humano e sistema de segurança. Durante sua carreira, ocupou várias posições de destaque trabalhando como chefe de fatores humanos e segurança na aviação no Joint Aviation Authorities (JAA, entre 1993-1999), chefiou ainda o Programa de Pesquisa Nacional sobre qualidade e segurança no transporte terrestre (2001-2006), também foi coordenador de vários comitês científicos nacionais e internacionais, que tratam de riscos ambientais. Esteve no Brasil em 2016 no 57º Encontro Presencial do Fórum Acidentes do Trabalho (Fórum AT) para ministrar palestras e publicar o livro Gestão da segurança: teorias e práticas sobre as decisões e soluções de compromisso necessárias.

Palavras-chave:
segurança industrial; gestão de segurança; riscos tecnológicos; cultura de segurança; sistemas ultrasseguros

Interviewers (I): Could you give us a short presentation of your professional career and of the main ideas and approaches you have been developing with regard to safety?

Amalberti: First I earned a medical degree. My residency was in psychiatry, and I wanted to research the risks run by people who were not ill but who, because they held very responsible posts, experienced psychological difficulties under those demands. As no professor was interested in the subject at that time, I turned to the army and was offered a job as a researcher. But they had no interest in the matter either, so I moved to aviation, where I could learn about safety, a theme that was not yet discussed in the medical field. In the air force I soon became interested in fighters and their pilots' work, as well as in the work of civil aviation crews. Automated aircraft (the Airbuses) were starting to fly at the time. I was fortunate to work for Airbus for a whole year. There I learned about all the difficulties that crews from different countries and cultures had to face when flying these complex aircraft. After this experience I ended up in a ministry service in charge of civil aviation. My job was to manage flight safety within the European context. At the same time, from 1986 onwards, I was very close to Jens Rasmussen and James Reason - for four years I spent one week a month with them. That was before Reason published his book on human error [1]. It was therefore an extraordinary period of learning, although I was very young. After psychiatry, I took some courses in cognitive psychology at the University of Paris, which was a nice complement for understanding the ordinary person and not only the pathological one. I worked for ten years in aviation. Although I kept away from medical matters, I supervised dissertations in different areas, particularly on fishing and on the nuclear sector. After this period, I left aviation and returned to healthcare in the late 1990s, when, stimulated by the Americans, safety started being discussed within healthcare. I was close to Reason, who kept in touch with them and introduced me to the group. In France the matter was still not being dealt with, but soon the movement expanded there.

At the same time, the French Ministry of Transportation invited me to head the National Research Program on Quality and Safety in Ground Transportation, which I ran for ten years. So these are the different universes that built my knowledge, my view of safety - understanding why people in different sectors deal with safety in such distinct ways. At first I was very focused on individuals. Little by little I became interested in managers and, finally, I was increasingly concerned with the governance of risk systems and their management policies. All these phases are interconnected.

I: Your book Navigating Safety: Necessary Compromises and Trade-offs - Theory and Practice [2], recently translated into Portuguese by Fórum AT as Gestão da segurança: teorias e práticas sobre as decisões e soluções de compromisso necessárias [3], draws on your experience, especially in civil aviation, and presents some concepts we would like to understand better. The first one is individual risk management in the workplace; the second concerns the issue of systemic management; and, finally, the concept of systems management.

Amalberti: Our individual brains are powerful, but they can neither do nor understand everything at once, either in the present moment or continuously over time. They have bounds that limit all human beings. To manage complex situations, humans have developed a state of equilibrium. Such equilibrium must be achieved in order to control the external environment - that is, what is minimally necessary for us to reach our goals - while refraining from exhausting our cognition by spending too many [mental] resources. This adjustment mechanism explains why humans do not try to avoid all errors. People who obsessively avoid errors are doomed to be unproductive. We could never carry out our work if we tried to avoid every error. We would be so mentally engaged, so saturated, so slow - because we must do things slowly to avoid mistakes - that we would be unprofitable, and no factory would hire us, since we could not devote ourselves to our jobs. Such an attitude runs completely against a clever use of the brain.

So, because of this limitation, the best choice is to work taking risks, accepting mistakes, looking forward, and thinking about the next step. Anticipation allows us to control the situation and to know where we can face risks, avoiding major hazard areas. As for errors, we more often try to detect them than to avoid them. Human beings are extraordinary detectors of their own mistakes. Not only do we detect them and hold them in memory - in 9 out of 10 cases - but, in return for the mistakes we have made, we also receive an indicator of the required level of attention. The moment we become aware of our mistakes, we realize we can control the situation. It is a wonderful system that does not prevent errors but makes use of them to work fast, in an automatic mode [e], and which has a highly effective safety capacity. When we accept this, we realize that the quality of human knowledge cannot be measured by the number of errors, but by the number of corrections.

I: What you are saying is included in the concept of "cognitive commitment" [f], which was developed in your book.

Amalberti: Regarding cognitive commitment, I have two sources of constraints [g]. There is the external constraint - for example, when I am driving my car, I must remain on the road while dealing with constraints such as the traffic lights, other vehicles and so on, which I must respect; in this situation I have an external constraint, a world with a task to be fulfilled. But, at the same time, I have a brain with its own limitations, so I have an inner world as another source of constraint. Cognitive commitment is the adjustment between what I could look at on the outside - whether the details of billboards or the other drivers around me - and the question "do I need to do that?", because every time I do so I spend my brain's energy. So the adjustment is observing just what is needed, detecting whether a mistake was made and recovering it, so as to spare my brain and work in an automatic mode. Working this way allows me not to have to think about driving: I can simultaneously think about the meeting I am going to attend. This is the kind of agreement I make - focusing on and prioritizing the constraint that comes from the outside while using my brain well - and that is the cognitive commitment.

I: Could we say that it’s all part of the individual risk management?

Amalberti: Yes, it is within the individual sphere.

I: This is the theme of your first book, La conduite des systèmes à risques [4] [Managing risk systems]. In addition, in the book [3] you promote other ideas related to systemic management and to governance of risk systems. Could you talk a little more about it?

Amalberti: As for systemic management, Reason's model [1] helped disseminate the so-called plates, or Swiss cheese, model, which states the following: all human beings commit errors, and in order to prevent these errors from becoming accidents, we put up barriers. On the other hand, we have managers who, instead of making direct errors, make latent ones. These managers - through the way they organize the system, the way they exert economic pressure, following an economic model that may or may not allocate resources, that may or may not authorize funds - induce a style, a pressure, a risk organization on the system. That is the systemic vision: going back to the managers and understanding how they organize the pressure on the operators who work for them. There are good models for doing this; they are other forms of commitment. But there are also bad models, which get reported because they cause more accidents.

I: In your book you developed not only the idea of cognitive commitment but also other concepts that are not yet well understood in Brazil, such as the "ecological risk management model" and "performance sufficiency," and their relation to cognitive commitment.

Amalberti: It is easy to understand. Consider the example of streets and driving: we can travel along a two-meter-wide street, but it would be very difficult to drive there, because we would have to keep to a speed of 20 kilometers per hour to avoid leaving the route and colliding. We can also turn into a street whose pavement is covered with gravel, which is very dangerous, or choose not to turn into it, because we allow margins to be created. The world creates margins. That is why we build eight-meter-wide streets, with sidewalks of four meters or more on each side, and this allows us to rectify the mistakes we make, because we build a world that is quite tolerant. The same happens at work. We should not ask workers, even if they are capable, to work 16 hours a day. We ask them to work 8 hours. They can even work more than that, but not for long. If we want work to last, we are not interested in demanding too much from people, but in allowing margins. This is sufficiency - the world of work is organized around margins. To be a good worker, or a good manager, we cannot be at the maximum of our performance capacity. There is no need for that: what characterizes a very good worker or manager is the possibility of performing better without using up all their margins. Otherwise they would not sustain their performance for long; that is the system's limit. That is why we need sufficiency - it gives us margins, and margins preserve duration. Duration is necessary: you cannot be good just once, you have to be good every day. The idea of preserving a kind of ecological view of work, with sufficiency margins, is what makes the world so effective. Consider the metaphor of Monday morning, back from the weekend: if we worked until we were exhausted, we would have spent all the company's capacity - not a good solution if we aim to last until Friday. Our interest is to count on a sufficient and cost-effective regime that preserves the worker. And, deep down, we are not very far from Taylor's idea, but it is an idea revisited from the point of view of modernity.

I: Following this line of thought, we find the whole discussion on the concepts of risk prevention, recovery, and mitigation [h]. Could you discuss it a little further and give us examples?

Amalberti: It is quite simple. If we are at the hospital, running a risk of infection - everyone knows these risks - barriers will be created to stop the infection from happening: the preventive barriers. Among other measures, people will be asked to wash their hands, to rub them with alcohol before carrying out any surgical procedure, to take antibiotics. So there are many barriers that prevent risks. But it would be naive to believe we can keep risks from ever materializing, and that is the reason we also use barriers that act when risks do emerge. In the case of hospital infections, these are not barriers that prevent them, but barriers that cure them. People try hard to prevent infections, but when this is not possible, they try at least to heal them. That means detection protocols will be established - for example, checking the patient's temperature every night and morning - and if the temperature rises, a treatment will be provided by adding an appropriate antibiotic. All this is the recovery [of the risk that has materialized]. The patient will spend a few more days in hospital to recover. But for those whose infection cannot be controlled, we must also consider that the infection will worsen, bringing with it the risk of death. In this case, in addition to recovery, we move to the mitigation step: we will try to avoid the patient's death. He will probably be very tired. So the mitigation barrier will be put up - for example, in a [more complex] hospital. I was in a big hospital in Cuiabá (state of Mato Grosso, Brazil), where it was decided that a patient with a serious infection would be transferred to a hospital in São Paulo. That is mitigation: since there is more modern resuscitation equipment in São Paulo, doctors will try to control the risk of death there. This applies to everything. There are always these three barriers and, fundamentally, when we face a risk, we need to go through all of them and never believe that prevention alone is enough. We must also consider failures, the need for recovery, and preparation for extreme events, when mitigation barriers become necessary. Another example, traffic safety: we reduce speed, we place speed traps, we inform drivers - these are preventive measures. In addition, we build wide roads that allow drivers to correct their driving mistakes. Even so, mistakes happen: as long as thousands of people are driving, there will be mistakes. To deal with them we apply an ergonomics that admits errors, and to correct errors we make use of technological devices, such as ABS brakes or other braking systems - this is recovery. Then we accept the idea that occasionally a car will leave the road; that is part of the safety model, and there will be mitigation: we implement seatbelts and airbags to limit the severity of the victims' injuries.

I: Safety culture cannot be ignored when discussing these issues. But there are several ideas and different approaches related to this concept. What can you say about safety culture?

Amalberti: There are two ways of looking at it. The first is related to an expression that became popular in companies and gave rise to a lot of discussion - a Trojan horse. It is like workload: we do not know how to measure it, but it is a good way to start discussions between managers and unions, workers, and ergonomists. It is a tool for entering the company to discuss safety culture, but its meaning varies across industries. The other way is values. There are values that drive behaviors. Values such as exemplarity, tolerance, non-punishment, respect, and commitment to the company make up a system of values that must be shared by everyone - this is safety culture, a system shared by everyone, stimulating behaviors. There are deep values, those that people carry inside them; then there is the system of values that the company truly wants to promote; and then the behavioral values, the attitudes. We know some of these value-system models. The really important point is that there is no company without a safety culture - but it is not necessarily the one we want. There are no companies without values; there are always values, beliefs, human beings who work and who create a somehow shared vision of their company. So we work along two axes: are the ideas that people spontaneously hold the ones we want them to have? To what degree does everybody share these ideas, or not? Are they the ideas we want shared? For managers, the ideal is that only the ideas they want are shared. Of course this is not always the case: there may be huge differences between the values shared by workers and those shared by top management. In that case, the problem is alignment, and safety culture is influenced by the work as a whole. Alignment is not one side placing itself on top of the other; it is met halfway - top management with its expectations and, at the bottom, workers with what they expect from the company. This is the procedure we try to follow when we intervene in a company.

I: Is "managed safety" the articulation between the ideas of "rule-based safety" and "safety in action"?

Amalberti: Surely there is this connection. There are two components that build safety: on the one hand, everything we ask people to follow - limitations, constraints, rules, top management; and, on the other hand, the individual capacity for expertise that people have, which is the "in action" part. Through its managers, the company is responsible for the "rule-based" part, since, if it wishes, this is what will gradually rule the system. Of course expertise will be needed, and the level of standardization must be coherent with this know-how - with this very simple idea: if everything is totally standardized, [the ability of] adaptation is lost. And is adaptation what we want? Why not? Certain companies or professions face more unexpected situations than others. For example, in the medical professions, where the unexpected is prevalent, we cannot standardize too much and we leave plenty of room for workers to adapt; the result is a lower safety level. In aviation, on the other hand, standardization is high: if pilots disagree with what has been foreseen, the airplane will not take off, and their actions are very constrained, with little possibility of adaptation. These are system choices, and I emphasize that this is not only the top management's work, but also that of the ergonomics consultants, who lead the company to make a good adjustment. Of course there may also be periods when adjustment becomes more difficult - for example, when the staff changes and former workers are replaced by newcomers recently graduated from school. On these occasions there is an imbalance in the system, there is a trend towards increasing standardization, and then the problem is the new workers' adaptation to the system.

I: Then, in the so-called "ultra-safe" systems, you state that the relationship between risk and learning is a U-shaped curve, which means that as newcomers learn, they gradually increase safety. Then there is a brief period of stabilization and, subsequently, the curve reverses, so the more experienced workers may pose more risks to the system than the newcomers. So the question is the following: wouldn't it be more appropriate, in this case, to think that experienced workers are more often blamed for risks than newcomers because they are more often confronted with these risks?

Amalberti: This U-shaped curve describes three stages: one in which learners increase their expertise and decrease risk levels; a stage in which they become safe professionals and increase safety as their expertise grows; and, finally, an escape stage, in which some experts become even more skillful. There is always a procedure, even in aviation or the nuclear industry, which are considered ultra-safe models, as they take people into a range of knowledge where everything conforms to a protocol covering both normal and abnormal situations. For instance, in aviation there are procedures for flying airplanes under normal conditions as well as in all types of breakdowns and incidents. This is the safe professionalism stage. There is no improvisation outside those procedures. So pilots are in a familiar environment, since there are procedures even for abnormal situations. In this very broad field, pilots, of course, have considerable knowledge that enables them to solve all kinds of problems. It is the pilots' job to know how to detect incidents, but there are procedures to be followed. If we do not follow them, we escape into areas that are not bounded by procedures. It is a very high degree of expertise that makes people move away from procedures and open new fields of action, doing things that were not checked by the system and which are, therefore, located in areas with fewer rules and procedures. Of course, when we get to this point, we take risks, because it is less organized. Therefore, when leaving this stage we take risks. We can sometimes do extraordinary things, because the protocol-covered area is limited; when getting out of it, we can do exceptional things, but at exceptional risk. For example, there are highways, both in Brazil and in other countries, with speed limits - 80, 100, 110, 120 - and that is what we call a "safe window." If we drive within the speed limit, the risk of an accident is very low, because the system is planned to operate within this window. But if we take Fangio or Ayrton Senna, two very famous racing drivers, they will drive at 200 or 220 kilometers per hour. If we drive at that speed, we may die. The difference, therefore, is that these experts can do things that ordinary people are not able to do. Although they take incredible risks, their skills lead them to succeed. It does not mean they will always be successful; they may also have an accident next time. Another example: I am not able to climb Mount Everest or Annapurna. But people who climb Everest are very experienced climbers - not ordinary climbers, but experts. And the death risk is 1/10 on Everest and 4/10 on Annapurna, which means there is a 40% chance of dying while climbing Annapurna, even for one of the best climbers in the world. Of course they are the greatest experts, but it is not very safe. If our plans are more modest and we just climb La Quebrada de Yucatán, an easier mountain, it will not be that risky.

I: And how can we determine the limit of this “safety window” between little and much competence?

Amalberti: It is the system that fixes this limit. Airplane pilots are forbidden to go outside the area covered by the protocol. If there is no procedure, they do not go; it is forbidden, and the system is absolute. But there are many systems that are not like that. In the medical field, for instance, procedures are also required, but there is a paradox: experts need to acquire the knowledge that will allow them to broaden the safety window - yet not immediately. At first there will be deaths, and only later will we have a better safety window. There are doctors who have tried things that were totally outside the safety window. Of course some patients will die, there will be very successful recoveries, and some time later there will be some exceptional achievements. Now, it is true that there would be no heart surgery or artificial heart if people had been confined to that protocol-covered area. So it is important to have people who come out of that particular area and search for new improvisations. But how many patients died before the current survival rate was achieved? How long does it take for the system to reach this survival rate? That is the whole issue of whether or not to authorize exit from this window. Aviation says: "I don't want to take any risk; I don't care if I don't learn anything, but within that window I'm safe, and that is the service I owe the passengers - I don't want to offer them pilots who improvise."

I: In this respect, the Air France accident on the Rio-Paris route seems to be exactly an example of pilots who would not come out of the safety window, because they did not know how.

Amalberti: They didn’t know how to do it, and they must not come out of it.

I: And that’s why they died?

Amalberti: That is one way of summarizing it - a little excessive, but it covers part of what happened. The pilots faced rare conditions: icing of the probes that gather outside information, so that the information about the aircraft's external condition was not cut off but became totally inaccurate. And they did not understand, because they believed they were carrying out a procedure. Since the system was talking nonsense, the pilots followed its absurdities. Then they did not have the required training to recover the airplane from the stall. If there had been an exceptional pilot, maybe it would have been possible to recover the flight. But aviation does not want that. The safety window ensures 10⁻⁶ accidents - less than one accident per million [flights]. If pilots were allowed to improvise, as they used to do before, we would be at 10⁻⁴: to prevent one accident, we would have 100 accidents. That is why the accident in which the pilot landed on the Hudson River is absolutely not considered something you should teach pilots - that is outside the window. In the stable model, within this window, we have 10⁻⁶; pilots no longer learn, they make no progress, but achieving 10⁻⁶ is exceptional. And it is a service to passengers. Neither the nuclear industry nor aviation is ready to give autonomy back to operators, because in the past, when operators had this autonomy, the system was at 10⁻⁴.
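
To make the orders of magnitude concrete, here is a minimal arithmetic sketch using only the two rates cited above; reading them as accident rates per operation, with N standing for some number of operations, is an illustrative assumption and not a figure from the interview:

$$ \frac{10^{-4}\,N}{10^{-6}\,N} \;=\; \frac{10^{-4}}{10^{-6}} \;=\; 100 $$

In other words, over the same N operations, a system running at 10⁻⁴ would be expected to produce about one hundred times as many accidents as one running at 10⁻⁶ - which is the sense of "to prevent one accident, we would have 100 accidents."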

I: So that kind of accident, like the Rio-Paris Air France flight, would be the price we have to pay?

Amalberti: It is an accident that represented a 10⁻⁷ or 10⁻⁸ risk within a 10⁻⁶ model. It is a tragedy for the people who were on that flight, but the proof [of what I am saying] is that nothing was changed after the accident. Pilots will not be taught how to recover from the stall, because training them [to do so] is forbidden. The only thing we do - and it works very well - is what we call cognitive vaccination. Aeronautics spreads, among all pilots of this airplane model, the details of the accident, especially the approach to the first signs, when it was still fairly easy to react. And it works very well: it does not matter whether you are a pilot in Brazil or in Afghanistan, from the moment you fly this airplane you are immediately made aware of this accident - which is a strength of this industry. Pilots talk about it, and it becomes a celebrated case. It is like a vaccine: they know that if they face a similar situation, they will need a great deal of attention, because a colleague had an accident like that. So instead of running those risks, they avoid them. For example, this airplane took a route through the clouds, whereas nowadays such clouds are much more often flown around. But this is not a radical solution of the kind "let's train pilots to recover the plane once it has entered a stall."

I: Can anticipation change a situation like this, for example, before facing a storm?

Amalberti: Yes. But the anticipation that works best is not the pilot's. That cloud contained very particular ice crystals, not yet well understood but very aggressive. Since then, many international universities have been working to understand how these ice crystals form, so that flight controllers and the international meteorological services now know how to track them. Today the weather briefings given to pilots are better informed about the location of these clouds and, of course, the safety-window procedure is to avoid them. So we have made progress on this issue, but in a way anticipation does not rest with the pilots; it rests with the supervisors who, in aeronautics, have great knowledge of meteorology, of the controls, and so on. As for the pilots, anticipation means avoiding the cloud, because once inside there is no anticipating: within 15 seconds they can be in the middle of the problem, and there is no point in thinking that tomorrow it will be different.

I: This changes somewhat the idea behind industry experience-feedback systems, which typically advocate, at least in the literature, that workers should identify weak signals so that they can be integrated into the management system. What happens, then, in ultra-safe systems such as aviation and the nuclear industry?

Amalberti: In this case, there are still weak-signal systems for pilots. Before this accident, there had been 15 similar incidents around the world. In those 15 incidents the pilots did nothing - they gave no commands - and the airplanes flew through the cloud, because it is a short zone, and proceeded. At that time the feedback from these weak signals was not what the manufacturers expected, since their idea was that pilots should identify the airplane malfunction and follow a basic procedure without touching the controls. In those 15 cases nothing was done, and so manufacturers and authorities concluded that there was no catastrophic risk: if nothing was done, everything turned out fine. But on the Rio-Paris flight, the pilots did something - that is the big difference. All the pilots in the previous incidents did nothing, but this crew acted, maybe because they had less experience with this kind of problem; and as soon as they moved the airplane's controls in an attempt to solve the problem, they lost control of the plane. It is a very short time; there is no way to turn back. That is what they did. Hence, today, thanks to these weak signals, pilots are taught that when they do not understand what is going on, they must not touch anything. It is a way of learning, although it is not very solid.

I: There is also the issue of the "normalization of deviance," which you have been studying for a long time. We would like you to discuss it a little.

Amalberti: All systems are designed first as paper safety. What is that? It is a kind of safety that primarily seeks to demonstrate, through formal risk analyses and a document, that the safety work was carried out correctly. And this paper safety, which often answers to leaders and authorities and shows that we know what we are doing, is always excessive. On paper we have perfect safety, the desirable safety. Carrying out risk analyses, building barriers, describing everything, drawing up schemes - that is what goes into the document we send to the authorities that supervise our performance. But it is so constraining that safety does not go along with performance. So, if there was no intelligence [in its preparation] (and it also happens when there was intelligence), and if the paper document is redundant, the next day the company needs a level of performance that is not compatible with what was described. Thus the system starts deviating in order to reach the economic level the company requires. For example, in hospitals it was established in the eighties that staff had to wash their hands with soap before any contact with a patient. That is all very well on paper, but the time required for handwashing, by the end of the day, doubles the working time - if we measure it, it is twice the time spent with the patient. Of course this does not work. Fortunately, soap has since been replaced by alcohol-based solutions that are much easier and faster to use.

But back in the days of soap, when health workers were hired they were instructed to wash their hands after caring for every patient - and then they had no time for that. So they found another way: they washed their hands after every two patients, or only if they got dirty while dealing with a patient. This is not what was planned by top management, but this quick deviation was more or less admitted by them. Otherwise, a nurse might say: "I can wash my hands every time, but then I will only be able to look after two patients." And management's answer will be: "No, you can take care of twenty patients and wash your hands only occasionally, and we'll turn a blind eye to it." So nurses take care of twenty patients and deviate [from the prescribed behavior]. Perhaps, after management has been tolerant for a while, the nurse will say: "So I was told to do all this, and then they ask me to do otherwise." From that moment on, the system is placed in an "illegal" zone, a zone that is accepted by everyone and that becomes a norm. And it reached the point where some nurses never washed their hands after dealing with patients.

There is always a point beyond that. That is when the BBS methods [i] come in: if everybody accepts the fact that it is not possible to wash your hands after caring for every patient, what is the limit? It is the group that will set this limit. Therefore it is the line manager, with BBS methods - in hospitals we do not use the STOP method [j] itself, but similar ones - who will try to control the group and get them to do it in a way that is acceptable, without taking great risks. That is the problem of migration [from the normal system towards the accident]. It was Jens Rasmussen [5] who built up this concept of migration. All systems migrate; there is no stable system. And I want to highlight that it is the excess of paper regulation at the outset that promotes migration, because there is a mismatch between the theoretical model and what actual performance requires - they do not walk together. And since the paper model is not realistic, it is performance that always wins, never safety.

I: So deviation is normalized by managers' tolerance of workers' disregard for the rules - some of them considered golden rules. Thus there is tolerance first and then integration by management.

Amalberti: Management tolerates the deviation and then integrates it, which is also beneficial for workers. This tolerance is possible because it benefits not only top management, but also the workers.

I: And concerning the deviations that management is not necessarily aware of - would they also be included in the system? Would there be no migration?

Amalberti: Management does not want to know very much. Managers only want to see performance. So, if you talk to top management, they do not know about the deviations and do not want to know. The only ones who know about the deviations are the line managers and, of course, the workers. It is, therefore, a very well-established matter - and a problem for ergonomics. Only after you have been in aeronautics for a long time will you build the confidence that enables you to notice these deviations. People hide them: we think they work one way, but they really work in another. Ergonomists must be aware of how things are really being done. They cannot say - which would probably be a stupid attitude - "it is forbidden to do things this way" when everyone has agreed to do them that way. You have to know what is real and then manage it intelligently, sometimes by authorizing it, sometimes by just saying: "But these rules are nonsense! You have standards that are impossible to follow! Our mission is to adjust the rules, not to punish the workers."

I: As for the sources of error, in your book you mention that in the past work overload was seen as the main source of errors, but today we are also faced with complexity. Do you believe the two work together?

Amalberti: In most systems the answer to complexity is technology. It is a way of trying to move from human control to a technology that will replace it. This is very true in aviation. Of course, we may come up against the fact that a human operator is not able to deal with a problem that the robot or the technology could not solve. It is impossible to regain control after opting to automate a very complex system - that is too complicated for a human being. So human beings are placed in a position where they manage a system through technology, but are unable to run it themselves because it is too complex. That is society's current bet. Of course it will go further, reaching much more complex areas, with greater performance. Is it wise to go in that direction? Technology, being entirely Cartesian, is very safe. But 10⁻⁶ is not zero risk, 10⁻¹⁰ is not zero risk, so there will be accidents. And at that point the human beings on the front line should not be accused, as they cannot simply take the system back. But it is a bet: to progressively transfer competences to high-tech systems, placing a few people in supervisory positions and most workers in the position of intelligent executors. It is a societal bet too; becoming progressively a prisoner of technology, to the point of not being able to stop or control it, is something that society has little by little accepted. Besides, technology is believed to be good for human beings, as it saves time and reduces hard work. Less-tiring work is shorter, allows more rest and longer free time, with roughly equivalent wages for workers - but the feeling of "I can do it, I control it" is gradually being lost. And we have to accept that.

Furthermore, does it cause global unemployment? Does this transfer to technology translate into unemployment? This is what people say, but I am not so sure. Take the advent of printing as an example: in Gutenberg's time, hundreds of copyist monks lost their jobs, since their only profession was copying books - thousands of monks were needed to copy a single book. Gutenberg came up with the printing press, and the copyists' work came to an end; in less than a hundred years their profession was no longer necessary. Yet the printing press created more jobs than copying ever had. Thus, whenever a new technology is developed, jobs are cut, but other opportunities open up. The problem is that they are not the same jobs, and many workers lose theirs. This is very complicated, since the unemployed workers become victims: even if new jobs are created, they will not suit those who have lost their jobs. We ergonomists must manage this transition, when jobs disappear, unemployment is generated, and these workers need retraining. They will not be the winners of the new system; the winners may be young people with new professions, practiced in a new context. This is a pretty hard mission.

Acknowledgments

We thank Flora Vezzá for her careful work on the audio transcription and translation and for reviewing and editing the interview; the Public Labor Prosecution Office of the 15th Region (Campinas, SP) for the financial support that enabled both the publication of the book in Portuguese and Prof. René Amalberti's visit to Brazil; and, last but not least, all the Fórum AT supporters.

References

  • 1. Reason J. Human error. New York: Cambridge University Press; 1990.
  • 2. Amalberti R. Piloter la sécurité. Paris: Springer; 2013.
  • 3. Amalberti R. Gestão da segurança: teorias e práticas sobre as decisões e soluções de compromisso necessárias. [S.l.]: Fórum AT; 2016.
  • 4. Amalberti R. La conduite des systèmes à risques. Paris: PUF; 1996.
  • 5. Rasmussen J. Risk management in a dynamic society: a modeling problem. Saf Sci. 1997;27(2/3):183-213.
  • e. Automatic mode: human beings' way of performing actions that do not require very high levels of attention, as they are frequent and repetitive. For more information, see Amalberti R. Gestão da segurança: teorias e práticas sobre as decisões e soluções de compromisso necessárias, p. 46.
  • f. Cognitive or intellectual commitment is the incessant state through which workers must manage external demands, their own savoir faire, competing tasks and motivations, and their psychological state of fatigue and stress (cf. Amalberti R., op. cit., p. 20). T.N.
  • g. Contraintes in the original (French).
  • h. Atténuation in French.
  • i. Behavior-Based Safety.
  • j. DuPont's Behavior-Based Safety program.
  • The authors report that the content is not based on a thesis or dissertation.
  • This interview was funded by Fapesp (04721-1/20122 Process).

Publication Dates

  • Publication in this collection
    30 Aug 2018
  • Date of issue
    2018

History

  • Received
    02 Apr 2018
  • Accepted
    11 Apr 2018