
Automated risk assessment for material movement in manufacturing


Abstract:

Proximity movements between vehicles transporting materials in manufacturing plants, or “interfaces”, result in occupational injuries and fatalities. Risk assessment for interfaces is currently limited to techniques such as safety audits, originally designed for static environments. A data-driven alternative for dynamic environments is desirable to quantify interface risks and to enable the development of effective countermeasures. We present a method to estimate the Risk Prioritization Number (RPN) for mobile vehicle interfaces in manufacturing environments, based on the Probability-Severity-Detectability (PSD) formulation. The highlight of the method is the estimation of the probability of occurrence (P) of vehicle interfaces using machine learning and computer vision techniques. A PCA-based sparse feature vector characterizes vehicle geometry from a top-down perspective. Supervised classification of the sparse feature vectors using Support Vector Machines (SVMs) is employed to detect vehicles. Computer vision techniques are used for position tracking to identify interfaces and to calculate their probability of occurrence (P). This leads to an automated calculation of RPN based on the PSD formulation. Experimental data were collected in the laboratory using a sample work area layout and scale models of vehicles. Vehicle interfaces and movements were physically simulated to train and test the machine learning model. The performance of the automated system is compared with human annotation to validate the approach.

Keywords:
Risk assessment; FMEA; Machine learning; Work safety


1 Introduction

This study presents a methodology for automated detection and risk assessment of interactions between vehicles engaged in material movement in manufacturing work areas. The manufacturing sector is particularly vulnerable from a safety perspective: it ranked sixth in the US for the number of fatal occupational injuries in 2011 (Bureau of Labor Statistics, 2011). The economic impact of manufacturing safety is also significant. For example, a European Union report estimated that 143 million workdays and over 55 billion euros were lost in 2008 because of workplace accidents (Directorate-General for Employment, Social Affairs and Equal Opportunities, 2009). We focus on material movement because of the prevalent role of vehicles such as forklifts in manufacturing accidents (Saric et al., 2013). The rapid growth of autonomous ground vehicles (AGVs) in manufacturing environments (Kusiak, 2018) further highlights the importance of vehicle risk assessment and risk mitigation in material movement.

We choose quantitative risk assessment methods (Marhavilas et al., 2011) as the basis for risk assessment. The metrics designed for these methods signify the risk level of an interface, thereby enabling safety managers to prioritize and mitigate high-risk interfaces. We specifically select a metric called the Risk Prioritization Number (RPN), which has been used in combination with Failure Mode and Effects Analysis (FMEA) (Liu et al., 2013). The caveat in using RPN and similar metrics is that their fidelity depends on the volume of data and the sampling techniques used for assessment (Marhavilas et al., 2011). The volume of data, in turn, depends on the practical availability of human teams for data collection and analysis. A significant contribution of this study is to eliminate this restriction on data collection for risk assessment using automated techniques. Our approach, hereafter called AutoRisk, automatically identifies the type and location of vehicles moving materials in a work area and estimates the RPN value of an interface between vehicles at intersections in the work area. This approach is operationally feasible for two reasons. First, security cameras and camera networks are prevalent in manufacturing (Hanoun et al., 2016), and this infrastructure can be leveraged to allow manufacturing plants to easily incorporate the automated risk assessment approach. Second, there are several precedents for the use of machine learning algorithms to automatically detect vehicles in video feeds from cameras (e.g., Brilakis et al., 2011; Chernousov & Savchenko, 2014).

AutoRisk employs computer vision and machine learning techniques to achieve its objectives. The highlights of the approach are: 1. Computer vision and machine learning techniques based on Principal Component Analysis (PCA) and Support Vector Machines (SVMs) are developed for detection of vehicles in work areas; 2. RPN calculations are automated to quantify and prioritize interfaces based on risk; 3. The potential for long-term data collection is demonstrated for vehicular traffic intersections in manufacturing areas; and 4. A proof-of-concept setup validates the potential for risk prioritization using FMEA metrics and compares automated risk assessment with risk assessment performed by humans.
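The automated RPN calculation in highlight 2 follows the standard FMEA product form, RPN = P × S × D. A minimal sketch is shown below; the interface names and rating values are hypothetical, chosen only to illustrate how RPN ranks interfaces for prioritization.

```python
# Minimal sketch of the FMEA Risk Prioritization Number used to rank vehicle
# interfaces: RPN = P x S x D, with each factor rated on the conventional
# 1-10 FMEA scale. Interface names and ratings below are hypothetical.

def rpn(probability: int, severity: int, detectability: int) -> int:
    """Return the RPN; each rating must lie in 1 (best) .. 10 (worst)."""
    for rating in (probability, severity, detectability):
        if not 1 <= rating <= 10:
            raise ValueError("FMEA ratings must lie in 1..10")
    return probability * severity * detectability

# Hypothetical interfaces observed at two intersections of a work area.
interfaces = {
    "forklift-truck @ intersection A": rpn(6, 8, 4),     # RPN = 192
    "forklift-utility @ intersection B": rpn(3, 5, 2),   # RPN = 30
}

# Rank interfaces so that safety managers can mitigate the highest risks first.
for name, score in sorted(interfaces.items(), key=lambda kv: -kv[1]):
    print(f"{name}: RPN = {score}")
```

In AutoRisk, the probability factor P is the quantity estimated automatically from camera data, while severity and detectability ratings follow the usual FMEA practice.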

One of the important potential consequences of this study is its impact on the safety of humans in material movement operations in manufacturing work areas. The interaction between material handling equipment and humans is responsible for more than half of all material handling accidents (Saric et al., 2013). Collaboration, or unstructured work area sharing, between humans and material handling equipment is expected to expand in future work areas as the use of AGVs becomes more prevalent (Pradalier et al., 2008). Efforts have largely focused on the mitigation of risk, for example through the design of a natural language interface between a forklift and pedestrians (Walter et al., 2014), rather than on improved assessment of risk. The presented study can extend to the detection of pedestrians in the work area and improve safety for people and vehicles that cohabit high-risk manufacturing environments. In doing so, it will derive inspiration from person and equipment detection efforts in other high-risk, high-clutter workplaces, e.g., the construction industry (Memarzadeh et al., 2013; Mosberger et al., 2015).

The paper is organized as follows. Section 2 motivates the need for research in workplace safety and highlights related studies on automated safety and risk assessment. Section 3 explains PCA-based machine learning for vehicle detection and FMEA. Section 4 details the validation of the approach relative to human annotation. Section 5 provides insights on the use of the approach by safety managers. Section 6 summarizes the results and discusses potential future directions for the study.

2 Background

2.1 Workplace safety

The high incidence rate of work-related accidents, injuries, and fatalities is the primary motivation for this paper. An analysis of occupational fatality trends in the United States from 1992 to 2010 (Marsh & Fosbroke, 2015) shows that a total of 14,625 deaths occurred over this period, at an annual average of 770. Machinery-related (mobile and stationary) injuries were the second leading cause of workplace fatalities in the United States between 1980 and 1989 (Pratt et al., 1996). The Bureau of Labor Statistics (2014) revealed that close to 4,000 employees in the United States were fatally injured at work due to machine-related incidents. Worldwide, over 5,300 people die every day and close to 960,000 people are affected by work-related diseases or accidents (Hämäläinen et al., 2009). The high fatality and injury rates have direct consequences for organizations, including employee turnover, absenteeism, and legal repercussions. This compels organizations to place justifiably high emphasis on workplace safety. De Vries et al. (2016) provide an example of a top-down, management-driven model, called safety-specific transformational leadership (SSTL), to improve workplace safety.

Manufacturing is one of the industry sectors most adversely affected from a safety perspective. Marsh & Fosbroke (2015) found that, between 2003 and 2010 in the U.S., manufacturing (776 deaths, or 14%) and the service industry (725, or 13%) ranked third and fourth, respectively, in occupational deaths, trailing only agriculture/fishing/forestry and construction. Mobile machinery and industrial vehicles accounted for a total of 7% of all occupational injuries in the manufacturing sector between 2003 and 2010. In Finland (2003-2007), 202 (25%) of the total 807 fatal accidents fell under the category of material transfer (Perttula & Salminen, 2012). For the same data set, manufacturing was responsible for 145,816 (27%) of the total 538,159 nonfatal injuries. The Bureau of Labor Statistics (2007) reports that of the total 5,488 occupational deaths in the US in 2007, the highest number of fatalities (1,423) was caused by “transportation and material moving occupations”. Our study therefore focuses on methods that can contribute to the mitigation of risks related to the movement of mobile vehicles and machines in manufacturing work areas.

Our study focuses on three types of vehicles: forklifts, trucks, and utility vehicles. Of these, forklifts have an especially checkered history in material movement. On one hand, forklifts are the single most versatile piece of material handling equipment used in manufacturing and warehousing; on the other, their physical characteristics pose a serious threat to people, with incidents resulting in trauma or death (Collins et al., 1999a). The movement risks from forklifts have serious consequences for humans sharing the work area. In a study conducted at 54 plants operated by a major automobile manufacturer over a period of 3 years, Collins et al. (1999b) found that forklifts caused a total of 913 non-fatal and 3 fatal injuries. The most common incident involved a pedestrian being run over by a forklift or a powered industrial vehicle (PIV) (321 injuries, or 35%); 41% of the injured employees missed work, contributing to 22,730 lost workdays. The California Department of Industrial Relations reported that of the total 3,041 forklift-related injuries in 1980, 31% of the cases involved pedestrians being run over by forklifts, and in another 23% of the cases the worker was run over by, caught in, or caught under or between a forklift and another nearby object (California Department of Industrial Relations, 1982). The German Social Accident Insurance (DGUV) reported that nearly 11,000 accidents in 2015 involved forklifts (vom Stein et al., 2018). Bureau of Labor Statistics (2007) data showed that between 2011 and 2017 there were 614 fatal accidents involving forklifts, of which 18% (113) of the deaths were a direct result of forklift-pedestrian collisions. Marsh & Fosbroke (2015) reported that forklifts were the third-highest machine-related cause of death in the United States from 1992 to 2010, preceded only by tractors and excavators.

These statistics clearly indicate that material movement vehicles pose a significant threat to pedestrians in a manufacturing environment. Though the risks associated with these vehicles are known, risk mitigation techniques have focused mainly on operator training and protection (Marsh & Fosbroke, 2015). Attempts have been made to modify the risk environment to counter the ill effects of operating vehicles in proximity to pedestrians, but these have not been effective enough to significantly reduce workplace accidents. As cited by Bostelman (2009), the Occupational Safety and Health Administration (OSHA) estimates that 70% of these accidents are avoidable with stringent and proactive measures. There is therefore a need to investigate innovative approaches to risk assessment and mitigation.

2.2 Remote camera-based monitoring of vehicles

Computer vision techniques for vehicle detection have been applied to intersection monitoring for pedestrian collision detection or prediction (Gandhi & Trivedi, 2006). Veeraraghavan et al. (2003) combined low-level image-based blob tracking with high-level Kalman filtering to detect pedestrians at an intersection. To support real-time monitoring, motion segmentation was performed using a mixture of Gaussian models, aiding robust tracking in outdoor scenarios. Cortes & Vapnik (1995) presented one of the early applications of a combined computer vision and machine learning approach to the vehicle detection problem. Their study used a visible-light camera to capture the field of view and shape-based approaches to extract characteristic features from the captured images. A trained Support Vector Machine (SVM) classifier was used to separate the foreground (pedestrian) from the background. Papageorgiou & Poggio (2000) showed that pedestrian detection is possible using cameras as sensors and an SVM for classification. Aoude & How (2009) showed that objects can be classified at an intersection using support vector machines and Bayesian filtering (SVM-BF). They tested the SVM-BF classification on 60 different scenarios, achieving a “coverage” (recall) of 100% with a precision of 77%. A modified version of SVM-BF based on a discounted BF further improved the precision to 93%, at the expense of decreasing the coverage to 93%.
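The blob-tracking-plus-Kalman-filtering pattern described above can be sketched for a single image coordinate as follows. This is a generic constant-velocity filter with illustrative noise values, not the configuration used in the cited studies.

```python
# Sketch of the constant-velocity Kalman filter commonly layered on top of
# low-level blob tracking to smooth noisy image-plane positions. State is
# [position, velocity]; only position is measured. Noise values illustrative.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (constant velocity)
H = np.array([[1.0, 0.0]])              # measurement picks out position only
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[0.25]])                  # measurement noise covariance

x = np.array([[0.0], [0.0]])            # initial state estimate
P = np.eye(2)                           # initial state covariance

def kf_step(x, P, z):
    # Predict forward one frame.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new measurement z.
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track a blob moving at ~1 unit/frame with noisy detections.
rng = np.random.default_rng(42)
for t in range(1, 21):
    z = np.array([[t * 1.0 + rng.normal(0.0, 0.5)]])
    x, P = kf_step(x, P, z)

print("estimated position:", float(x[0, 0]))
print("estimated velocity:", float(x[1, 0]))
```

The filter recovers a velocity estimate near the true 1 unit/frame despite the detection noise, which is what makes it useful for predicting proximity between tracked objects at an intersection.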

2.3 Onboard sensing systems for manufacturing

Computer vision and machine learning has been employed in combination with onboard cameras primarily for collision avoidance, but also for navigation. The type of sensors commonly used in these applications include rear view cameras, bird’s eye vision system, stereo cameras, time-of-flight cameras and radar and ultrasonic sensors (Cao et al., 2019Cao, L., Depner, T., Borstell, H., & Richter, K. (2019). Discussions on sensor-based Assistance Systems for Forklifts. InProceedings of the European Conference on Smart Objects, Systems and Technologies(pp. 1-8). USA: IEEE.). Behrje et al. (2018)Behrje, U., Himstedt, M., & Maehle, E. (2018). An autonomous forklift with 3d time-of-flight camera-based localization and navigation. InProceedings of the 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV)(pp. 1739-1746). USA: IEEE. http://dx.doi.org/10.1109/ICARCV.2018.8581085.
http://dx.doi.org/10.1109/ICARCV.2018.85...
used a 3D time-of-flight camera as the primary sensor for localization and navigation of an automated forklift, using Monte Carlo Localization with particle filters. Lai et al. (2018)Lai, Y. K., Ho, C. Y., Huang, Y. H., Huang, C. W., Kuo, Y. X., & Chung, Y. C. (2018). Intelligent vehicle collision-avoidance system with deep learning. InProceedings of the 2018 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS)(pp. 123-126). USA: IEEE. http://dx.doi.org/10.1109/APCCAS.2018.8605622.
http://dx.doi.org/10.1109/APCCAS.2018.86...
use real-time object detection in conjunction with deep learning to motivate a vehicle collision avoidance system. vom Stein et al. (2018)vom Stein, A. M., Dorofeev, A., & Fottner, J. (2018). Visual collision warning displays in industrial trucks. InIEEE International Conference on Vehicular Electronics and Safety (ICVES)(pp. 1-7). USA: IEEE. assessed the impact of four different visual warnings on brake reaction timings and perceived workload in forklifts. The study based on real-time experiments confirmed that peripheral display warning signs triggered the fastest mean reaction times when compared to the others (vom Stein et al, 2018vom Stein, A. M., Dorofeev, A., & Fottner, J. (2018). Visual collision warning displays in industrial trucks. InIEEE International Conference on Vehicular Electronics and Safety (ICVES)(pp. 1-7). USA: IEEE.). Lang & Günthner (2017)Lang, A., & Günthner, W. A. (2017). Evaluation of the usage of support vector machines for people detection for a collision warning system on a forklift. InProceedings of the International Conference on HCI in Business, Government, and Organizations (pp. 322-337). Cham: Springer. http://dx.doi.org/10.1007/978-3-319-58481-2_25.
http://dx.doi.org/10.1007/978-3-319-5848...
developed the “PräVISION” system aimed at warning the forklift drivers of an imminent collision using time of flight cameras to capture input 2D and 3D data and used Diamler and INRIA datasets for SVM classification of pedestrians. Lang (2018)Lang, A. (2018). Evaluation of an intelligent collision warning system for forklift truck drivers in industry. InProceedings of the International Conference on Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management (pp. 610-622). Cham: Springer. http://dx.doi.org/10.1007/978-3-319-91397-1_50.
http://dx.doi.org/10.1007/978-3-319-9139...
applied a version of the PräVISION system in an industrial setting for pedestrian detection and observed that the accuracy for this application fell by 25% compared to collision avoidance systems. These successes and failures highlight the following challenges for our future studies: 1. Transitioning from proof-of-concept experimental tests to real manufacturing environments, and 2. Application of detection and classification techniques to pedestrian-vehicle risk mitigation systems.

Quan et al. (2013) apply onboard sensing to navigate autonomous guided vehicles (AGVs) in manufacturing facilities and to predict the movement of other vehicles in the vicinity. Their vision-based driving algorithm uses two monocular cameras, one forward-facing and one downward-facing, for navigation and path tracking with the help of floor signs. The authors also develop an anti-collision algorithm to prevent mishaps between AGVs sharing a single working space. The noteworthy features of this study are the successful use of low-cost cameras and the application of camera-based technology to workspaces for autonomous vehicles. Our study shares both characteristics, albeit with a change in camera perspective and in the overall goal: risk assessment instead of collision avoidance.

Non-camera sensors are also used in material movement vehicles for localization, tracking, and collision prevention. Commonly researched anti-collision proximity sensing technologies include ultrasound, radar, RFID, UWB, Multipeer Connectivity (Groza & Briceag, 2017), and GPS (Jo et al., 2019). Barral et al. (2019) present a solution based on multiple ultra-wideband (UWB) sensors to locate and track forklifts with highly accurate estimates in indoor scenarios. The accuracy of their results depends on a strong line of sight (LOS) between the tag and the anchor. Jo et al. (2019) develop an anti-collision system for heavy equipment at construction sites that uses UWB-based proximity warning and GPS sensors integrated via machine learning. The limitation of this system, called the Robust Construction Safety System (RCSS), is its decreased reliability under environmental and site-specific changes. Sun & Ma (2017) present a prototype safety system for forklifts in indoor environments that uses UWB sensing technology for tracking and prediction of vehicle movements. They compare UWB with other sensing technologies and highlight its advantages, including greater bandwidth, multi-path resolution, signal penetration, and low cost. The limitations of the UWB system are that the tags are energy-intensive, accuracy is limited to 15 cm, and the imaging simulation is limited to two dimensions.

2.4 Summary and research questions

Research studies and industry statistics underscore the severe risk levels for material movement in manufacturing. However, risk assessment strategies for workplace risks such as FMEA based on the RPN metric tend to focus on location-specific activities, such as working in a specific area or manufacturing cell. This unintentionally excludes the formal assessment and mitigation of risk related to material movement in manufacturing. The research question is to study the use of RPN in risk assessment for material movement in manufacturing.

Methods to mitigate material movement risk in manufacturing focus on onboard sensing for vehicles. This is a well-developed research area, with diversity in the type of sensors and techniques used to identify, track, and safely move forklifts and other vehicles inside manufacturing plants. Onboard sensors allow vehicles to be safe regardless of their location, which is especially useful in large facilities. However, onboard sensing limits the line of sight of the safety system. We identify the use of a fixed mount system such as a ceiling-mounted camera to resolve this limitation. The research question is to assess the effectiveness of this system in detection and tracking of vehicles and to use the data to quantify vehicle interface risks.

3 Research methods

The AutoRisk system is developed using the following methodological steps: 1. Data collection: a camera system is identified and an experimental setup is designed for data collection; 2. Data filtering and sample generation: computer vision techniques are used to isolate areas of the image which contain vehicles; 3. PCA-based shape representation: a novel PCA-based shape descriptor is developed to characterize vehicle appearance and to serve as input to machine learning; 4. Classification: the data collected and filtered in the previous steps is input to several supervised learning algorithms to classify vehicle types, and the best performing algorithm is selected; 5. Interface detection: computer vision heuristics augment the classification output from machine learning to detect close-proximity movements between vehicles; and 6. Risk assessment: FMEA is used in the estimation of RPN to identify the highest priority risks.

3.1 Data collection

A top-down view of the work area is chosen for setting up the camera. The top-down perspective has several practical advantages. It simplifies shape representation, since the most common transformations in appearance of the object are scale and in-plane rotation. Occlusions are rare and typically only manifest because of columns and other structural elements of the work area. These occlusions can be handled in future studies by creating a network of top-down perspective cameras with overlapping fields of view or by relying on tracking continuity methods such as particle filters (Kim & Davis, 2006) or Kalman filters (Li et al., 2010).

3.1.1 Objectives

Data collection methods are designed with the following objectives:

  1. The experimental setup must approximate a manufacturing facility. For reference, the plant layout for a heavy manufacturing facility was used.

  2. The perspective and scale of the camera in the experiment must approximate a high-ceiling perspective typical of a manufacturing facility.

  3. The data must support the proof of concept for vehicle detection, tracking, and risk assessment.

3.1.2 Assumptions

Multiple assumptions about the work area are made to set up the experiment:

  1. The layout of the work area is known and the traffic intersections within the facility are known.

  2. All vehicle types in the work area are known. There may be multiple vehicles of the same type within the field of view of the camera.

  3. More than one vehicle from a single category may occupy the work area at any moment.

  4. Risk assessment is performed offline, that is, the current version of the study is not set up for real-time risk assessment.

3.1.3 Setup

A rectangular boundary was used to identify the region of interest for the experiment. Rectangular regions inside the work area were marked as prohibited areas for vehicular traffic, for example: stationary equipment, control rooms, break areas, and welding and tooling zones. The spaces outside prohibited areas are the material movement routes for vehicles. Vehicles in material movement routes could potentially interface at multiple locations within the work area. These regions in the work area were identified as intersections, a term borrowed from road traffic terminology (Messelodi et al., 2005). Each interface at an intersection is treated, in the language of FMEA, as a “failure mode” or “event”. Figure 1 shows the layout diagram and a real image of the experimental work area, with prohibited areas (filled areas), intersections (rectangular outlines with numbers), and the work area boundary.

Figure 1
Work area layout with prohibited areas, intersections with numbers, and work area boundary marked.

3.1.4 Method

Radio-controlled scale models of vehicles (1:24 scale) were used for simulating the movement of vehicles and for creating interfaces by moving them along traffic lanes. Three categories of vehicles were used in the experiments: 1. “Truck” – an 18-wheeler, 2. “Forklift”, and 3. “Car”. A total of 29 videos of vehicle interfaces were recorded, each approximately one minute in duration. All vehicles were present in the work area in each video. The starting location of the vehicles was modified for each video and the motion of vehicles was controlled using joystick controllers. A total of ninety-one interfaces between vehicles were generated at different intersections.

Images of the vehicles were extracted from video for training and testing the machine learning algorithms for vehicle detection. Two varieties of cars were used to test the robustness of the machine learning algorithm to intra-class variation. The model vehicles used in the experiment are shown in Figure 2.

Figure 2
Radio-controlled scale models used for experimental data generation.

3.2 Data filtering and sample generation

Each video was decompiled into its constituent frames and the video data was filtered to remove noisy frames and those with experimenters present in the work area. After filtering, an estimated 50000 frames of data were available for analysis. Considering 50000 frames with four vehicles each (one truck, two cars, and one forklift), a total of 200000 observations were available over the entire dataset.

The input data for supervised learning were fixed size image regions which contained a specific type of vehicle (positive sample) or an absence of the specific type of vehicle (negative sample). The multi-class classifier is required to distinguish between three classes of vehicles: truck, car, and forklift. The classifier was trained on 25 percent of total data. Therefore, 12000 frames were randomly chosen for data filtering and sample generation for machine learning.

The sample generation process was simplified using a combination of human annotations and automated labeling. The first frame of a video was loaded, and the experimenter was asked to label one vehicle at a time. For each vehicle, the experimenter selected seed points inside the vehicle area. The standard floodfill algorithm (Birchfield, 2016) was used to find similar pixels in the neighborhood of each seed point and generate the vehicle region, or foreground. This labeling approach works in a simplified setup in which all vehicles were painted the same color (see Figure 2) such that they contrast clearly with the background. In practical scenarios, foreground extraction may be accomplished using more sophisticated techniques, for example GrabCut (Rother et al., 2004). Following this, the experimenter was asked to assign an identity to the foreground as shown in the user interface in Figure 3. The foreground has been extracted in this image for the truck in the top right corner.

Figure 3
Human labeling of data for supervised learning.

The next step was to simplify the collection of samples for the remaining frames in the labeling video. This was achieved by automatically identifying “good features to track” (Shi & Tomasi, 1994) in the initialization frame – see the annotation marks on the truck in Figure 3 as an example. The Lucas-Kanade feature tracking algorithm (Lucas & Kanade, 1981) was used to track the movement of features in subsequent frames; as an output, the feature tracker identified the updated locations of features that were tracked successfully. Floodfill was applied again using successfully tracked features as seed points to generate a positive data sample. The data sample was saved as an image which could then be processed by the feature extraction algorithm explained in the next section. Figure 4 shows automatically extracted foreground images for each of the vehicles in one frame of video.

Figure 4
Extracted foreground images for vehicles. Top to bottom: forklift, car, truck.

The labeling implementation allowed experimenters to easily relabel frames in which tracking failed. Typical circumstances under which tracking can fail include: 1. Missing or noisy intermediate frames. This results in discontinuity or jumps in vehicle position, which tracking algorithms are calibrated to ignore as tracking failures, and 2. Tracker error. This results in the tracking algorithm incorrectly updating the position of features to a non-vehicle region of the image. Human supervision was therefore needed to ensure that the data samples were correctly isolated in labeling videos. Despite the need for supervision, the automated labeling approach requires only a fraction of the human interaction needed to manually process individual frames to label vehicle regions.
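The seed-point floodfill step of this labeling pipeline can be sketched in a few lines. The sketch below is a minimal pure-NumPy/BFS version, assuming a grayscale frame and a single intensity tolerance; the study used a standard floodfill implementation (Birchfield, 2016) and propagated seed points between frames with Lucas-Kanade tracking, which is omitted here.

```python
from collections import deque
import numpy as np

def floodfill_mask(image, seed, tol=10):
    """BFS floodfill: mark pixels 4-connected to `seed` whose intensity
    lies within `tol` of the seed pixel's intensity."""
    h, w = image.shape
    seed_val = int(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and abs(int(image[ny, nx]) - seed_val) <= tol:
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

# Usage: extract a "vehicle" foreground from a synthetic grayscale frame.
frame = np.zeros((60, 80), dtype=np.uint8)
frame[20:35, 10:30] = 200          # bright rectangular "vehicle" region
fg = floodfill_mask(frame, (25, 15))
```

The returned boolean mask plays the role of the positive data sample saved for feature extraction.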

3.3 PCA-based shape representation

The next step is the representation of vehicle “features” for the machine learning algorithm. Sparse features were generated using Principal Component Analysis (PCA) to reduce the size of the representation. This reduces the dimensionality of the classification problem and, with it, the size of the training set needed for an accurate machine learning algorithm (Friedman et al., 2001). For example, even a low-resolution image (32x32) of a vehicle results in a 1024-element feature vector, compared to our PCA method which uses a 37-element vector. A reduced feature space in learning has the specific advantage of allowing the detection algorithm to become rapidly operational in a new environment.

PCA is applied to a “region of interest” (ROI), which is the binary vehicle foreground extracted as explained in Section 3.2. Singular Value Decomposition (SVD) is applied to find the principal component of the object. P rays spaced uniformly apart are considered for feature vector generation: if the principal component is at an absolute angle θ_principal, then each subsequent ray is 360/P degrees from the previous ray. Each element V_shape,i in the feature vector V_shape ∈ R^(1×36) is computed as the count of foreground pixels along the i-th ray radiating from the center to the boundary of the object, where i = 1 represents the principal component. This is visualized in Figure 5. The vector is normalized by the largest value in V_shape. The foreground area A_entity is appended to V_shape after normalization by a large number, chosen empirically as 0.01 times the area of the full image A_image. The final feature vector V_entity therefore has P + 1 elements. This feature vector is rotation invariant because of the use of PCA to establish its initial orientation, and scale invariant because of the normalization with respect to foreground count and image area.

Figure 5
Feature vector generation for the object “truck”. The shorter vector indicates the orientation of the principal component.
V_shape = (1 / max(s_1, …, s_P)) · [s_1, …, s_P],  s_i ∈ [0, 1]
V_entity = [V_shape, A_entity / (0.01 · A_image)] (1)
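Equation 1 can be implemented directly from a binary foreground mask. The following NumPy sketch is an illustrative reconstruction of the descriptor (function and variable names are ours, not the authors'): it finds the principal axis via SVD of the centered foreground coordinates, casts P = 36 rays spaced 360/P degrees apart, and appends the normalized area term.

```python
import numpy as np

def pca_shape_feature(mask, P=36):
    """Sparse shape descriptor per Equation 1: P ray counts anchored at the
    PCA principal axis, plus a normalized area term (P + 1 elements)."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    center = pts.mean(axis=0)
    # SVD of the centered foreground coordinates gives the principal axis.
    _, _, vt = np.linalg.svd(pts - center, full_matrices=False)
    theta0 = np.arctan2(vt[0, 1], vt[0, 0])
    h, w = mask.shape
    r_max = int(np.hypot(h, w))
    counts = np.zeros(P)
    for i in range(P):                       # ray i = 0 is the principal axis
        ang = theta0 + 2.0 * np.pi * i / P   # rays spaced 360/P degrees apart
        dx, dy = np.cos(ang), np.sin(ang)
        for r in range(r_max):
            x = int(round(center[0] + r * dx))
            y = int(round(center[1] + r * dy))
            if not (0 <= x < w and 0 <= y < h):
                break
            if mask[y, x]:
                counts[i] += 1
    v_shape = counts / counts.max()              # s_i in [0, 1]
    area_term = mask.sum() / (0.01 * mask.size)  # A_entity / (0.01 * A_image)
    return np.append(v_shape, area_term)

# Usage: a centered disc is rotation-symmetric, so all ray counts are near-equal.
yy, xx = np.mgrid[0:100, 0:100]
disc = (xx - 50) ** 2 + (yy - 50) ** 2 <= 20 ** 2
v = pca_shape_feature(disc)
```

Because the ray ordering is anchored at the principal axis and counts are normalized by their maximum, rotating or rescaling the mask leaves the descriptor essentially unchanged, which is the stated invariance property.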

3.4 Classification

A multi-class supervised classifier was trained to classify the input ROI as car, truck, or forklift. Several classifiers were compared after a training, cross-validation, and testing routine for each technique. The classifier with the lowest test error was then chosen. The candidate classifiers were (Thrun & Pratt, 2012): (1) Complex decision trees, (2) Fine KNN (k-nearest neighbors), (3) SVM with medium Gaussian kernel, and (4) SVM with quadratic kernel. The strategy of this exercise was to test the computationally fastest classifiers first and switch to slower classifiers should the accuracy of the initial candidates be found unsatisfactory.

MATLAB’s Machine Learning Toolbox was used to implement and compare candidate classifiers. The cross-validation level was set to five – that is, data was divided into k = 5 subsets, with each subset used once for validation while the remaining k − 1 subsets were used for training. The advantage of cross-validation is that an increase in the value of k decreases the variance in the training estimate. All the labeled feature vectors from the sample generation phase were formatted into a CSV file in which each row had the input-response format (V_entity, C_entity), where V_entity was computed using Equation 1 and C_entity was the vehicle class. A total of 48063 training samples were available. The approximate distribution of samples across the vehicle classes was: car (50 percent), truck (25 percent), and forklift (25 percent). This is because, of the four vehicles present in the work area, there were two cars and only one specimen each of the forklift and truck.
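The study used MATLAB's toolbox; for illustration only, an equivalent comparison could be set up in scikit-learn (a substitution on our part, with stand-in synthetic data rather than the actual vehicle feature vectors):

```python
from sklearn.datasets import make_blobs
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in data: three separable clusters play the role of the labeled
# (V_entity, C_entity) samples for the car, truck, and forklift classes.
X, y = make_blobs(n_samples=600, centers=3, n_features=5, random_state=0)

candidates = {
    "complex tree": DecisionTreeClassifier(),
    "fine KNN": KNeighborsClassifier(n_neighbors=1),
    "medium Gaussian SVM": SVC(kernel="rbf"),
    "quadratic SVM": SVC(kernel="poly", degree=2, coef0=1),  # Q-SVM analogue
}

# Five-fold cross-validation, mirroring the k = 5 setting in the study.
scores = {name: cross_val_score(clf, X, y, cv=5).mean()
          for name, clf in candidates.items()}
best = max(scores, key=scores.get)
```

The candidate with the highest mean cross-validation score would then be carried forward to holdout testing, as in the paper's selection of Q-SVM.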

All classifiers were highly accurate on the training data, each giving a training and cross-validation accuracy of more than 99%. A comparison may be seen in Table 1. The classifiers were tested on images containing 13550 (Test 1), 7275 (Test 2), and 9982 (Test 3) samples respectively. The complex decision tree was less accurate in its predictions on the test data as compared to KNN and the two SVM classifiers. On average, the SVM classifier with the quadratic kernel (Q-SVM) (Scholkopf & Smola, 2002) was the most accurate on holdout data. Performance of this classifier on training data was similarly impressive.

Table 1
Results of classifier testing.

Therefore, Q-SVM was determined to be the best performing classifier for the data. The confusion matrix for this classifier on the training and test data, seen in Figure 6, is similarly compelling. Q-SVM was therefore selected as the classifier of choice for the interface detection and risk assessment algorithm.

Figure 6
Confusion matrix for SVM with quadratic kernel.

3.5 Interface detection

Prior to this step, we have established the methods for vehicle detection and classification. These methods are applied to experimental video data to detect vehicle interfaces at traffic intersections in the facility.

3.5.1 Vehicle detection

The first step towards interface detection is to identify the type and position of vehicles in the video. Foreground regions in the image are obtained by thresholding based on the L*a*b* (LAB) colorspace. The underlying experimental assumption was that vehicle color is known. LAB was found to give the best response to the chosen vehicle color for our experimental environment; under the low-variance lighting conditions of the work area, it consistently separated the vehicle from the background. All frames of the video were transformed to the LAB colorspace according to the CIE 1976 standard (Robertson, 1977). The foreground threshold filters applied to the image to mark foreground pixels were (Equation 2):

25.177 ≤ L* ≤ 91.135
15.248 ≤ a* ≤ 65.639 (2)
20.394 ≤ b* ≤ 62.801

The foreground areas in the binary image are further processed using basic morphological operations like opening and closing. This is followed by connected component analysis on the image to identify ROIs for classification. Sparse feature vectors are generated for each ROI based on the PCA-based technique. The Q-SVM classifier predicts the vehicle class based on the sparse feature vector.
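As a sketch of this pipeline – assuming the frame has already been converted to LAB coordinates, and using SciPy's morphology and labeling routines in place of the unspecified implementation – the thresholding, cleanup, and connected component steps could look like:

```python
import numpy as np
from scipy import ndimage

# Equation 2 thresholds (L*, a*, b*), as reported for the experimental setup.
LAB_LO = np.array([25.177, 15.248, 20.394])
LAB_HI = np.array([91.135, 65.639, 62.801])

def vehicle_rois(lab_frame):
    """Threshold an H x W x 3 LAB frame, clean the binary image with
    morphological opening/closing, and return bounding-box slices of the
    connected foreground components (candidate vehicle ROIs)."""
    fg = np.all((lab_frame >= LAB_LO) & (lab_frame <= LAB_HI), axis=2)
    fg = ndimage.binary_opening(fg)        # remove speckle noise
    fg = ndimage.binary_closing(fg)        # fill small holes
    labeled, _ = ndimage.label(fg)         # connected component analysis
    return ndimage.find_objects(labeled)

# Usage: one synthetic in-range region should yield one candidate ROI.
frame = np.zeros((40, 60, 3))
frame[10:20, 15:30] = [50.0, 40.0, 40.0]   # values inside the Equation 2 ranges
rois = vehicle_rois(frame)
```

Each returned slice pair would then be cropped, converted to a sparse feature vector, and passed to the Q-SVM classifier.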

3.5.2 Interface detection strategy

Intersections in the experimental setup are identified based on these rules: 1. Two or more traffic paths must meet at an intersection, 2. Traffic flow is higher at intersections compared to other traffic junctions in the facility, 3. Intersections were not located at entry or exit points in the work area. Intersections labeled in the experimental work area are shown in Figure 7. Note that the intersections may not all be the same size. Furthermore, rules in establishing the locations of intersections are deliberately subjective to accommodate the unique preferences of every facility and safety team. For example, a facility in which forklifts move at relatively high speeds may prefer to assign a larger area to an intersection since higher speeds reduce the reaction time for vehicle drivers. The larger area accommodates a situation in which vehicles come temporally close but do not actually cross paths at a traffic junction.

Figure 7
Interface between a forklift and a truck at intersection 4 in the work area.

This study defines an interface as ‘an event during which two or more vehicles were present at an intersection at the same time’. An example interface can be seen in Figure 7, occurring at intersection 4 between a forklift and a truck. In computer vision terms, a vehicle crosses into an intersection when its ROI overlaps the ROI of the intersection, that is (Equation 3):

I_intersection^vehicle = ROI_vehicle ∧ ROI_intersection (3)

If the count of foreground pixels in I_intersection^vehicle is greater than or equal to 1, then the vehicle is inferred to have entered an intersection. Based on the definition of an interface, if two crossovers are observed at an intersection, an interface is identified.

It is possible that more than two vehicles enter an intersection at the same time. In this situation, we assign multiple interfaces to the event by considering interface pairs. In Figure 8, for example, three interfaces are recognized: car-truck, truck-forklift, and forklift-car. The rationale for this is that it simplifies the accounting of interfaces in FMEA. For example, the assignment of severity and detectability in the RPN formulation (Marhavilas et al., 2011) becomes more complicated if multi-vehicle interfaces are considered as independent events compared to pairwise estimations. Resolving this complication is a valid research consideration for future versions of this study.

Figure 8
An interface with more than two vehicles at intersection 5.

3.6 Risk assessment

The identification of interfaces in videos results in an automated database of Vehicle1,Vehicle2,Intersection combinations. Risk assessment techniques help address key questions based on this database: 1. What is the probability of occurrence of an interface? 2. What is the severity and detectability attributed to an interface? 3. Based on the values for probability, severity, and detectability, what is the RPN score estimated for the interface? 4. Based on RPN scores for all observed interface combinations, what is the prioritized risk for vehicle material movement observed for a facility?

The RPN metric is a product of three numbers for each event (Marhavilas et al., 2011):

RPN = P × S × D (4)

where P is the probability of occurrence of an event, S is its severity, and D is its detectability. The latter two numbers are obtained from work area heuristics based on specific facilities, usually by safety managers. We assigned ad-hoc S and D scores for our experimental data. An example detectability heuristic: “Vehicles approaching intersection 6 have poor detectability because of the presence of large equipment and high noise levels”, and an example severity heuristic: “Car and larger vehicle collisions are most severe for occupants because of the size mismatch”.

Probability values were assigned, based on data, to each intersection and vehicle-pair combination (X_i, E_j, E_k), with i = 1, …, L intersections and j, k = 1, …, M entities. There were N = L × M² = 7 × 9 = 63 possible interface types. Of these, 35 combinations were eliminated since it was impossible for them to occur in our setup. For example, a truck-truck interface could never be observed because we used only one truck in the physical simulation. For the remaining 28 interfaces, the probability of occurrence was defined as P = Prob(X_i, E_j, E_k) and calculated for each interface. The observed probability of occurrence for detected interfaces is given in Table 2. The value for P ranged from P_min = 0.0 to P_max = 0.952. This value was scaled between 1 and 10 to give the P score for the interface.
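The probability scaling and the RPN product of Equation 4 can be sketched as follows. The study does not specify the exact scaling function, so a linear min-max mapping onto [1, 10] is assumed here, and the interface tuples with their S and D scores are hypothetical:

```python
def scale_probability(p, p_min=0.0, p_max=0.952):
    """Map an observed occurrence probability onto the 1-10 P scale.
    A linear min-max mapping is assumed; the study does not fix one."""
    return 1.0 + 9.0 * (p - p_min) / (p_max - p_min)

def rpn(p_score, severity, detectability):
    """Equation 4: RPN = P x S x D, with each factor on a 1-10 scale."""
    return p_score * severity * detectability

# Usage: rank hypothetical interfaces by RPN (S and D are ad-hoc scores,
# as they would be assigned by a safety manager).
interfaces = {
    ("car", "forklift", 1):   (0.952, 9, 7),   # (observed prob., S, D)
    ("car", "truck", 4):      (0.30, 8, 5),
    ("truck", "forklift", 5): (0.05, 6, 4),
}
ranked = sorted(interfaces.items(),
                key=lambda kv: rpn(scale_probability(kv[1][0]), *kv[1][1:]),
                reverse=True)
```

The sorted list corresponds to the prioritization in Table 3, with the theoretical maximum RPN of 10 × 10 × 10 = 1000.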

Table 2
Probability of occurrence of detected interfaces.

The RPN value for each interface could be estimated using the above information. The riskiest possible event would be assigned a score of 1000 based on the formulation in Equation 4. Table 3 shows the results of risk assessment; the five highest risk events are highlighted in the table.

Table 3
RPN estimates of risk. Numbers in brackets are probability scores. The shaded cells are the highest risk events.

4 Analysis

4.1 Interface detection accuracy

The AutoRisk system is analyzed by comparing its labeling of interfaces and estimation of RPN values with those labeled and estimated by human supervisors on all videos recorded during the experiments. To begin, the supervisor was asked to validate interfaces identified by AutoRisk in the video. This interaction is outlined in Figure 9. The supervisor rated each AutoRisk annotation using one of three values: 1. Correct: the interface was correctly detected, 2. Machine learning error (ML error): an interface was correctly detected, but one or both of the vehicles were identified incorrectly, and 3. Marking error: the algorithm failed to identify an interface that was clearly perceived by the supervisor.

Figure 9
Human validation of interface detection.

Figure 10 shows examples of ML errors in data. A forklift is incorrectly identified as a truck at Intersection 7 and a truck is incorrectly identified as a forklift at Intersection 4. Furthermore, an interface between a truck and car is missed at Intersection 5.

Figure 10
Examples of machine learning errors in interface detection.

The outcome of supervisor validation for interface labeling is shown in Table 4. Out of 91 interfaces labeled by AutoRisk, 82 were correctly labeled, yielding a success rate of 90.1 percent. ML error accounted for 2 of 91 results, yielding an ML error rate of 2.2 percent. The subjective marking error accounted for 7 of 91 results, yielding an error rate of 7.7 percent. If machine learning accuracy was the only type of error being scrutinized, then of the 91 × 2 = 182 vehicle detections under consideration, 180 were accurate, yielding a machine learning accuracy of 98.9 percent. When not considering missed interfaces, the vehicles and intersection for 82 of 84 interfaces were labeled correctly, yielding an accuracy of 97.6 percent.

Table 4
Interface labeling accuracy for vehicle-pairs. The shaded row shows that the car-forklift interface was most common in the recorded data.

4.2 RPN estimation accuracy

Interface labeling results based on AutoRisk and human estimates are applied to RPN calculations. The severity and detectability scores for each interface are the same for AutoRisk and humans, since these scores are assigned in an ad-hoc manner. RPN prioritization results are compared in Table 5. The priority of risky interfaces identified by AutoRisk matched human labeling, although RPN values were different in some cases because of marking and machine learning errors. These differences are highlighted using shaded cells in Table 5.

Table 5
Comparison of human and AutoRisk RPN estimates.

Figure 11 shows another comparative representation of RPN results, for car-forklift interfaces. The variation in assessed risk for all vehicle pairs by intersection was compared for three types of results: 1. Human-labeled (True) RPN [solid line], 2. Detected RPN [dashed line], and 3. RPN for correct detections, without machine learning error [dotted line]. The vertices of each of these figures represent intersections, the concentric polygons represent RPN scores, and the lines represent the variation in RPN value from one intersection to the next.

Figure 11
Analysis of car-forklift interfaces.

5 Managerial implications

The AutoRisk system supports manufacturing safety managers in assessing vehicle interface risks in three significant ways: 1. Reliable alternative to manual data collection and analysis for FMEA, 2. Automated and simplified visualization of RPN results, and 3. Leveraging existing infrastructure to set up the low-cost system.

The reliability of the AutoRisk system has been demonstrated in Sections 4.1 and 4.2. Therefore, it can be used as an alternative to manual implementation of FMEA for risk assessment of vehicle movements in manufacturing facilities. It provides a workaround to standard FMEA practices, including laborious data collection and annotation, categorization of interfaces, and estimation of the RPN score for each interface category. In doing so, the safety manager saves on labor costs and time associated with the manual safety auditing process.

The visual representation provided by the AutoRisk system will help safety managers prioritize their risk mitigation strategy. An example of the visualization is seen in Figure 11. The safety manager gains some immediate insights about car-forklift interfaces by reviewing this figure: car-forklift interfaces were seen regularly in the data, their severity value was high at each intersection, and this category of interfaces had the highest median RPN value. Moreover, Intersection 1 and Intersection 7 had the highest RPN scores because of the higher P value at these intersections.

Manufacturing facilities typically have existing infrastructure that can be leveraged to install the AutoRisk system at low cost. For example, security cameras are often high-mounted, which is ideal for the camera perspective under which AutoRisk has been tested. Once the camera has been identified, an interactive process must be followed to train the classifier for the environment unique to the facility. The training itself requires only about 20 minutes of video, selected such that all vehicles in the facility are visible. Beyond this, the system does not procedurally require human assistance or intervention.

In summary, AutoRisk provides a safety manager with work area data about the ‘what’ and ‘where’ of risk. Looking ahead, this volume of data can provide clues to the ‘why’ of that risk and assist in its mitigation. The understanding and mitigation of risk in a manufacturing environment is critical to productivity and manufacturing efficiency metrics. The AutoRisk method can become operational by integrating it with a facility’s early warning systems to prevent vehicle interface accidents. If installed at multiple facilities, it can generate datasets on material movement vehicle safety that will provide unprecedented insights on this risk category.

6 Summary and future work

The contribution of risk assessment to manufacturing safety is to identify ‘what’ is risky and ‘where’ the risk occurs. The AutoRisk approach achieves this using a combination of computer vision and machine learning. The former identifies candidate locations for mobile vehicles in the work area and the latter identifies the vehicle category: car, truck, or forklift. Data is collected by physically simulating a scale version of a manufacturing work area and using scale models of the vehicles to simulate interfaces at work area intersections. A PCA-based shape descriptor is generated to create a sparse R^(37×1) feature vector for vehicle classification. SVM with a quadratic kernel (Q-SVM) was found to be the most accurate classifier for the simulated data. Vehicle detection accuracy of over 98 percent and interface detection accuracy of 90.1 percent to 97.6 percent were observed.

Future iterations of the study can potentially address several challenges related to the problem: 1. Can the definition of intersections be standardized to minimize interface detection errors relative to human annotation? 2. What modifications are necessary to the machine learning approach for pedestrian detection and monitoring the risk of pedestrian-vehicle interfaces? 3. How can the fidelity of the physical simulation be quantified and improved? 4. What are the other application domains for this analysis, for example, can it be applied for redesigning facility layouts? 5. How can the study be extended to provide real-time warning for vehicle interfaces? 6. How can data from multiple sensors be integrated into the approach to create an integrated framework for risk assessment and mitigation?

  • Financial support: None.
  • How to cite: Pradhan, N., Balasubramanian, P., Sawhney, R., & Khan, M. H. (2020). Automated risk assessment for material movement in manufacturing. Gestão & Produção, 27(3), e5424. https://doi.org/10.1590/0104-530X5424-20

Publication Dates

  • Publication in this collection
    29 June 2020
  • Date of issue
    2020

History

  • Received
    10 Apr 2019
  • Accepted
    08 Jan 2020
Universidade Federal de São Carlos, Departamento de Engenharia de Produção, Caixa Postal 676, 13.565-905, São Carlos, SP, Brazil. Tel.: +55 16 3351 8471.
E-mail: gp@dep.ufscar.br